Moderations

Given input text, outputs whether the model classifies it as violating OpenAI's content policy.

Related guide: Moderations

Create moderation

POST https://api.openai.com/v1/moderations

Classifies whether text violates OpenAI's content policy

Request body


input string or array Required

The input text to classify


model string Optional Defaults to text-moderation-latest

Two content moderation models are available: text-moderation-stable and text-moderation-latest.

By default, text-moderation-latest is automatically upgraded over time. This ensures you are always using our most accurate model. If you use text-moderation-stable, we will provide advance notice before updating the model. The accuracy of text-moderation-stable may be slightly lower than that of text-moderation-latest.
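As a sketch of how these parameters fit together, the request body can be built programmatically. Note that an array-valued input and the text-moderation-stable model choice are illustrative variations, not what the example below sends:

```python
import json

# Build a moderation request body. "input" may be a single string
# or an array of strings; "model" is optional and defaults to
# text-moderation-latest on the server side.
payload = {
    "input": ["I want to kill them.", "Have a nice day."],
    "model": "text-moderation-stable",
}

# Serialize for the POST request to /v1/moderations.
body = json.dumps(payload)
```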

Example request

curl

curl https://api.openai.com/v1/moderations \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
  "input": "I want to kill them."
}'

python

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
openai.Moderation.create(
  input="I want to kill them.",
)

node.js

const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
const response = await openai.createModeration({
  input: "I want to kill them.",
});

Parameters

{
  "input": "I want to kill them."
}

Response

{
  "id": "modr-5MWoLO",
  "model": "text-moderation-001",
  "results": [
    {
      "categories": {
        "hate": false,
        "hate/threatening": true,
        "self-harm": false,
        "sexual": false,
        "sexual/minors": false,
        "violence": true,
        "violence/graphic": false
      },
      "category_scores": {
        "hate": 0.22714105248451233,
        "hate/threatening": 0.4132447838783264,
        "self-harm": 0.005232391878962517,
        "sexual": 0.01407341007143259,
        "sexual/minors": 0.0038522258400917053,
        "violence": 0.9223177433013916,
        "violence/graphic": 0.036865197122097015
      },
      "flagged": true
    }
  ]
}
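A minimal sketch of consuming such a response, assuming it has already been decoded into a Python dict. The field names mirror the example above; the values here are hypothetical:

```python
# A decoded moderation response in the shape shown above
# (hypothetical, abbreviated values).
response = {
    "id": "modr-5MWoLO",
    "model": "text-moderation-001",
    "results": [
        {
            "categories": {"violence": True, "hate/threatening": True},
            "category_scores": {"violence": 0.92, "hate/threatening": 0.41},
            "flagged": True,
        }
    ],
}

# One result object is returned per input. "flagged" is true when
# the input violates the policy in any category; "categories" gives
# the per-category boolean verdicts.
for result in response["results"]:
    if result["flagged"]:
        violated = [name for name, hit in result["categories"].items() if hit]
        print("Flagged categories:", violated)
```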