# Moderations
Enable OpenAI’s moderation feature in your LLM applications to automatically detect and filter harmful content in user messages.
## Introduction
By integrating with OpenAI's moderation endpoint, Helicone helps you check whether user messages are potentially harmful.
## Why Moderations
- Identifying harmful requests and taking action, for example by filtering them.
- Ensuring any inappropriate or harmful content in user messages is flagged and prevented from being processed.
- Maintaining the safety of interactions with your application.
## Getting Started
To enable moderation, set the `Helicone-Moderations-Enabled` header to `true`.

The moderation call to OpenAI's moderation endpoint is made with your provided OpenAI API key.
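For example, with the OpenAI Node SDK you can set the header once on the client. This is a minimal sketch that assumes you already proxy requests through Helicone's OpenAI gateway at `oai.helicone.ai` and authenticate with a `Helicone-Auth` header; adjust the base URL, model, and authentication to match your setup.

```typescript
import OpenAI from "openai";

// Route OpenAI traffic through the Helicone gateway and enable moderations.
// With the header set, the user message is checked against OpenAI's
// moderation endpoint before being forwarded to chat completions.
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    "Helicone-Moderations-Enabled": "true",
  },
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello, how are you?" }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```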
## Error Response
If the message is flagged, the response will have a `400` status code. It's crucial to handle this response appropriately.
If the message is not flagged, the proxy forwards it to the chat completion endpoint, and the process continues as normal.
Here's a representative example of the error response when a message is flagged; the exact field names may differ depending on your proxy version:
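```json
{
  "success": false,
  "error": {
    "code": "PROMPT_FLAGGED_FOR_MODERATION",
    "message": "The given prompt was flagged by the OpenAI moderation endpoint.",
    "details": "See your Helicone request page for more info."
  }
}
```

In your application code, treat a `400` from the proxy as a flagged prompt: surface an appropriate message to the user rather than retrying, since the same request will be flagged again.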
## Coming Soon
We’re continually expanding our moderation features. Upcoming updates include:
- Customizable moderation criteria