Image Moderation

POST {{contentSafetyEndpoint}}/contentsafety/image:analyze?api-version=2023-04-30-preview

Official documentation

This endpoint flags inappropriate or offensive content within images.

To analyze a specific image, convert it to a base64-encoded string (for example, with a website such as codebeautify.org), then paste the string into the request body.
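Alternatively, you can encode the image locally. A minimal Python sketch, assuming a local file named `photo.jpg` (the path is just a placeholder):

```python
import base64
from pathlib import Path

# Read the image bytes and encode them as a base64 string.
# "photo.jpg" is a placeholder; substitute your own image path.
image_bytes = Path("photo.jpg").read_bytes()
image_b64 = base64.b64encode(image_bytes).decode("ascii")

# Paste the full string into the request body's image.content field.
print(image_b64[:60], "...")
```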

To interpret the response, each category is assigned a severity of 0, 2, 4, or 6, where 0 is the least severe and 6 is the most severe.

Request Params

| Key | Datatype | Required | Description |
| --- | --- | --- | --- |
| api-version | string | required | The API version to call; this request uses `2023-04-30-preview`. |

Request Body

{"image"=>{"content"=>"<base_64_string>"}}

HEADERS

| Key | Datatype | Required | Description |
| --- | --- | --- | --- |
| Content-Type | string | required | Set to `application/json`. |
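Putting the pieces together, here is a hedged Python sketch of the full request using the `requests` library. The environment variable names are assumptions, and the `Ocp-Apim-Subscription-Key` header is the standard Azure Cognitive Services authentication header (not listed in the table above):

```python
import base64
import os
from pathlib import Path

import requests

# Assumed environment variables; adjust to your own setup.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
key = os.environ["CONTENT_SAFETY_KEY"]

# Encode the image to base64 ("photo.jpg" is a placeholder path).
image_b64 = base64.b64encode(Path("photo.jpg").read_bytes()).decode("ascii")

response = requests.post(
    f"{endpoint}/contentsafety/image:analyze",
    params={"api-version": "2023-04-30-preview"},
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": key,  # standard Azure auth header
    },
    json={"image": {"content": image_b64}},
)
response.raise_for_status()
print(response.json())
```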

RESPONSES

status: 200 OK

{
  "hateResult": { "category": "Hate", "severity": 0 },
  "selfHarmResult": { "category": "SelfHarm", "severity": 0 },
  "sexualResult": { "category": "Sexual", "severity": 0 },
  "violenceResult": { "category": "Violence", "severity": 2 }
}
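As a sketch of how the severities might be consumed, the snippet below flags any category at or above a chosen threshold. The threshold of 2 is an illustrative choice, not a value mandated by the API:

```python
# The response body parsed as a dict (shape as shown above).
result = {
    "hateResult": {"category": "Hate", "severity": 0},
    "selfHarmResult": {"category": "SelfHarm", "severity": 0},
    "sexualResult": {"category": "Sexual", "severity": 0},
    "violenceResult": {"category": "Violence", "severity": 2},
}

THRESHOLD = 2  # illustrative cutoff; tune to your moderation policy
flagged = [
    entry["category"] for entry in result.values() if entry["severity"] >= THRESHOLD
]
print(flagged)  # ['Violence']
```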