Text Moderation

POST {{contentSafetyEndpoint}}/contentsafety/text:analyze?api-version=2023-04-30-preview

Official documentation

This endpoint flags inappropriate or offensive content in the text supplied in the request body.

To interpret the response, each category is given a severity of 0, 2, 4, or 6, where 0 is the least severe and 6 the most severe.

Request Params

| Key | Datatype | Required | Description |
| --- | --- | --- | --- |
| api-version | string | | |

Request Body

{"text": "<your_text_here>"}
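As a sketch, the request above can be built with Python's standard library. The endpoint host and key are placeholders, and the `Ocp-Apim-Subscription-Key` header is the usual Azure Cognitive Services authentication header (an assumption here; check your resource's documentation). The actual network call is deliberately omitted.

```python
import json
import urllib.request

# Placeholder values -- substitute your own resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
api_version = "2023-04-30-preview"
url = f"{endpoint}/contentsafety/text:analyze?api-version={api_version}"

body = json.dumps({"text": "<your_text_here>"}).encode("utf-8")
request = urllib.request.Request(
    url,
    data=body,
    headers={
        "Content-Type": "application/json",
        # Assumed auth header for Azure Cognitive Services.
        "Ocp-Apim-Subscription-Key": "<your_key_here>",
    },
    method="POST",
)

# urllib.request.urlopen(request) would send it; omitted so the
# snippet runs without a live endpoint or valid key.
print(request.get_method())
print(request.full_url)
```

Sending the prepared request is a one-liner (`urllib.request.urlopen(request)`); the response body is the JSON shown under RESPONSES below.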

HEADERS

| Key | Datatype | Required | Description |
| --- | --- | --- | --- |
| Content-Type | string | | |

RESPONSES

status: OK

{
  "blocklistsMatchResults": [],
  "hateResult": { "category": "Hate", "severity": 2 },
  "selfHarmResult": { "category": "SelfHarm", "severity": 0 },
  "sexualResult": { "category": "Sexual", "severity": 0 },
  "violenceResult": { "category": "Violence", "severity": 0 }
}
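To act on a response like the one above, a client can collect the per-category severities and compare against a policy threshold. The sketch below uses the sample response verbatim; the threshold of 2 is an arbitrary illustrative choice, not a service default.

```python
import json

# The sample response from the RESPONSES section above.
sample = (
    '{"blocklistsMatchResults":[],'
    '"hateResult":{"category":"Hate","severity":2},'
    '"selfHarmResult":{"category":"SelfHarm","severity":0},'
    '"sexualResult":{"category":"Sexual","severity":0},'
    '"violenceResult":{"category":"Violence","severity":0}}'
)
result = json.loads(sample)

# Gather severity per category from the four *Result objects,
# skipping non-dict fields such as blocklistsMatchResults.
severities = {
    value["category"]: value["severity"]
    for value in result.values()
    if isinstance(value, dict) and "severity" in value
}
max_severity = max(severities.values())

# Example policy: flag if any category reaches severity 2 or above
# (2 is an arbitrary threshold chosen for illustration).
flagged = max_severity >= 2

print(severities)    # {'Hate': 2, 'SelfHarm': 0, 'Sexual': 0, 'Violence': 0}
print(flagged)       # True
```

Severities only take the values 0, 2, 4, and 6 in this API version, so thresholding on the maximum across categories is a simple way to map the response to an allow/flag decision.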