Image Moderation
POST {{contentSafetyEndpoint}}/contentsafety/image:analyze?api-version=2023-04-30-preview
This endpoint flags inappropriate or offensive content in images across four harm categories: Hate, SelfHarm, Sexual, and Violence.
To analyze a specific image, convert it to a base64 string (with an online tool such as codebeautify.org, or locally as sketched below), then paste the string into the request body.
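If you prefer to do the conversion locally rather than through a website, a minimal Python sketch covers it with the standard library alone. The file name photo.jpg is a placeholder:

```python
import base64

# Read the image bytes and encode them as a base64 string.
# "photo.jpg" is a placeholder -- use the path to your own image.
with open("photo.jpg", "rb") as image_file:
    b64_string = base64.b64encode(image_file.read()).decode("utf-8")

# Paste this string into the "content" field of the request body.
print(b64_string)
```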
To interpret the response, note that each category is assigned a severity of 0, 2, 4, or 6, where 0 is the least severe and 6 the most severe.
Request Params
Key | Datatype | Required | Description |
---|---|---|---|
api-version | string | required | API version to target; this request uses 2023-04-30-preview. |
Request Body
{"image"=>{"content"=>"<base_64_string>"}}
HEADERS
Key | Datatype | Required | Description |
---|---|---|---|
Content-Type | string | required | Set to application/json, matching the JSON request body. |
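Putting the pieces together, here is an end-to-end sketch using the third-party requests library. The endpoint and key values are placeholders, and it assumes key-based authentication via the Ocp-Apim-Subscription-Key header, the standard auth header for Azure Cognitive Services resources:

```python
import base64

import requests  # third-party: pip install requests

# Placeholder values -- substitute your Content Safety resource endpoint and key.
CONTENT_SAFETY_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
CONTENT_SAFETY_KEY = "<your-key>"

# Encode the image as shown earlier; "photo.jpg" is a placeholder path.
with open("photo.jpg", "rb") as image_file:
    b64_string = base64.b64encode(image_file.read()).decode("utf-8")

response = requests.post(
    f"{CONTENT_SAFETY_ENDPOINT}/contentsafety/image:analyze",
    params={"api-version": "2023-04-30-preview"},
    headers={
        "Content-Type": "application/json",
        # Key-based auth header used by Azure Cognitive Services resources.
        "Ocp-Apim-Subscription-Key": CONTENT_SAFETY_KEY,
    },
    json={"image": {"content": b64_string}},
)
response.raise_for_status()
print(response.json())
```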
RESPONSES
status: OK
{"hateResult":{"category":"Hate","severity":0},"selfHarmResult":{"category":"SelfHarm","severity":0},"sexualResult":{"category":"Sexual","severity":0},"violenceResult":{"category":"Violence","severity":2}}