

Omni Moderation Latest
Overview:
The omni-moderation-latest model is a next-generation multimodal content moderation model built on GPT-4o. It accepts both text and image inputs and detects harmful content with improved accuracy, particularly in non-English languages, helping developers build more robust moderation systems. The model can assess whether content falls under categories such as hate, violence, and self-harm, offers more nuanced control over moderation decisions, and returns probability scores reflecting how likely content is to match each detected category. It is freely available to all developers, so they can benefit from the latest research and investment in safety systems.
Target Users:
The target audience includes social media platforms, productivity tools, generative AI platforms, and other teams looking to build safer products that protect their users.
Use Cases
Grammarly uses this API to ensure the safety and fairness of its AI communication assistance products.
ElevenLabs utilizes this API to scan content produced by its audio AI products to prevent and flag policy violations.
Other companies leverage this API to build safer products that protect users from harmful content.
Features
Supports multimodal harmful content classification for text and image inputs (see the sketch after this list)
Introduces two new harmful content categories, illicit and illicit/violent (text only)
Enhances detection accuracy for non-English content, supporting 40 languages
Improves detection performance by 70% for low-resource languages (e.g., Khmer or Swahili)
Provides calibrated scores that more accurately reflect the likelihood of content violating relevant policies
Available to all developers free of charge, subject to rate limits that vary by usage tier
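A minimal sketch of a multimodal moderation request, assuming the official openai Python SDK (v1 or later) and an OPENAI_API_KEY environment variable; the example text and image URL are placeholders:

```python
# A minimal sketch of a multimodal moderation request, assuming the official
# openai Python SDK (v1+) and an OPENAI_API_KEY environment variable.
# The text and image URL below are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.moderations.create(
    model="omni-moderation-latest",
    input=[
        {"type": "text", "text": "Example user post to check"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/user-upload.png"},
        },
    ],
)

result = response.results[0]
print("Flagged:", result.flagged)
# category_scores holds calibrated probabilities per category, e.g. violence
print("Violence score:", result.category_scores.violence)
```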
How to Use
Visit the OpenAI Moderation API documentation page
Register and obtain an API key
Read the documentation to understand how to send text or image content for review
Write code according to the API documentation to call the omni-moderation-latest model
Send content to the API and receive moderation results
Analyze the returned moderation results, including categories and probability scores
Take appropriate content moderation actions based on the review results
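Putting the steps above together, here is a minimal end-to-end sketch assuming the official openai Python SDK (v1 or later); the 0.9 score threshold and the remove/queue-for-review actions are hypothetical policy choices rather than part of the API:

```python
# A minimal end-to-end sketch of the steps above, assuming the official openai
# Python SDK (v1+) with an OPENAI_API_KEY environment variable. The 0.9
# threshold and the remove/queue-for-review actions are hypothetical policy
# choices, not part of the API.
from openai import OpenAI

client = OpenAI()

def moderate_text(text: str) -> None:
    # Step 1: send the content to the omni-moderation-latest model.
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    result = response.results[0]

    # Step 2: inspect the moderation result.
    if not result.flagged:
        return  # no category was flagged; nothing to do

    scores = result.category_scores.model_dump()
    flagged = {
        name: scores[name]
        for name, hit in result.categories.model_dump().items()
        if hit
    }

    # Step 3: take a moderation action based on category and score.
    for category, score in flagged.items():
        if score > 0.9:
            print(f"Removing content (category={category}, score={score:.2f})")
        else:
            print(f"Queued for review (category={category}, score={score:.2f})")

moderate_text("Some user-generated text to check")
```

Because the scores are calibrated probabilities, thresholds like the one above can be tuned per category to balance automated action against human review.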