Perspective-Compatible Scoring
Use this if you are migrating from Google's Perspective API and want to keep the same style of scoring flow.
We have a compatibility endpoint:
POST /v0.2/perspective-compat/analyse
It accepts a Perspective-style request and returns a Perspective-style response.
What it returns:
- attributeScores: Google-style scores such as TOXICITY, INSULT, and PROFANITY
- summaryScore: the overall score for each requested attribute
- spanScores: optional span-level scores if you request spanAnnotations
- languages: the language context used for the response
Common Attributes
If you are migrating a typical Perspective integration, these are the attributes you are most likely to use:
- TOXICITY
- SEVERE_TOXICITY
- IDENTITY_ATTACK
- INSULT
- PROFANITY
- THREAT
- SEXUALLY_EXPLICIT
Respectify also accepts a range of aliases and experimental names for migration convenience. For the full list, see Attribute Mapping.
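As a sketch, the common attribute names above can be kept in one place and used to build a requestedAttributes payload. The helper below is illustrative only and is not part of any Respectify SDK:

```python
# Common Perspective-style attribute names listed above.
COMMON_ATTRIBUTES = [
    "TOXICITY",
    "SEVERE_TOXICITY",
    "IDENTITY_ATTACK",
    "INSULT",
    "PROFANITY",
    "THREAT",
    "SEXUALLY_EXPLICIT",
]

def build_requested_attributes(names):
    """Build a Perspective-style requestedAttributes object,
    rejecting names outside the common set."""
    unknown = [n for n in names if n not in COMMON_ATTRIBUTES]
    if unknown:
        raise ValueError(f"Unrecognised attributes: {unknown}")
    # Each attribute maps to an empty config object, as in the
    # request examples later on this page.
    return {name: {} for name in names}

payload = {
    "comment": {"text": "You clearly did not read the article."},
    "requestedAttributes": build_requested_attributes(["TOXICITY", "INSULT"]),
}
```

If you rely on aliases or experimental names instead, validate against the full list in Attribute Mapping rather than this common subset.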
Reference
See the Perspective-Compatible Analyse API Reference for the full request and response.
Response: PerspectiveAnalyzeCommentResponse
This endpoint returns scores only. If you want Respectify to help the writer improve the comment, use our own Comment Scoring instead. That is the API that explains what is wrong, quotes specific examples, and gives feedback aimed at better conversation.
How Scores Work
This endpoint currently supports one score type:
PROBABILITY
That means a 0 to 1 score where higher numbers mean more of the requested attribute. For example, a higher TOXICITY score means the comment is more likely to be toxic.
For a fuller explanation of summaryScore, spanScores, and unsupported score types, see Perspective Score Types.
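In practice a PROBABILITY score is usually turned into a moderation decision with a simple threshold. The sketch below uses an arbitrary cutoff of 0.7 for illustration; tune any real threshold on your own data:

```python
def exceeds_threshold(summary_score: float, threshold: float = 0.7) -> bool:
    """Return True when a PROBABILITY summaryScore crosses the threshold.

    Scores run from 0 to 1; higher means more of the requested attribute.
    The 0.7 default is illustrative only, not a recommended value.
    """
    return summary_score >= threshold

# With the TOXICITY value from the sample response further down this page:
flagged = exceeds_threshold(0.73)
```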
By Example
Basic Usage
- TypeScript
- Python
- PHP
- REST API
const result = await client.perspective.analyzeComment({
comment: {
text: "You clearly did not read the article.",
},
requestedAttributes: {
TOXICITY: {},
INSULT: {},
},
});
console.log(result.attributeScores.TOXICITY.summaryScore.value);
console.log(result.attributeScores.INSULT.summaryScore.value);
- Blocking Client
- Async Client
result = client.perspective.analyze_comment(
{
"comment": {
"text": "You clearly did not read the article.",
},
"requestedAttributes": {
"TOXICITY": {},
"INSULT": {},
},
}
)
print(result.attributeScores["TOXICITY"].summaryScore.value)
print(result.attributeScores["INSULT"].summaryScore.value)
result = await client.perspective.analyze_comment(
{
"comment": {
"text": "You clearly did not read the article.",
},
"requestedAttributes": {
"TOXICITY": {},
"INSULT": {},
},
}
)
print(result.attributeScores["TOXICITY"].summaryScore.value)
print(result.attributeScores["INSULT"].summaryScore.value)
$client->perspective()->analyzeComment([
'comment' => [
'text' => 'You clearly did not read the article.',
],
'requestedAttributes' => [
'TOXICITY' => new stdClass(),
'INSULT' => new stdClass(),
],
])->then(function ($result) {
echo $result->attributeScores['TOXICITY']->summaryScore->value . "\n";
echo $result->attributeScores['INSULT']->summaryScore->value . "\n";
});
$client->run();
curl -X POST https://app.respectify.ai/v0.2/perspective-compat/analyse \
-H "Content-Type: application/json" \
-H "X-User-Email: your-email@example.com" \
-H "X-API-Key: your-api-key" \
-d '{
"comment": {
"text": "You clearly did not read the article."
},
"requestedAttributes": {
"TOXICITY": {},
"INSULT": {}
}
}'
Response:
{
"attributeScores": {
"TOXICITY": {
"summaryScore": {
"value": 0.73,
"type": "PROBABILITY"
}
},
"INSULT": {
"summaryScore": {
"value": 0.65,
"type": "PROBABILITY"
}
}
},
"languages": ["en"]
}
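Once parsed, the response is a plain nested object. A minimal sketch of reading the summary scores out of it, using the field names from the sample response above:

```python
import json

# The sample response body shown above.
response_body = """
{
  "attributeScores": {
    "TOXICITY": {"summaryScore": {"value": 0.73, "type": "PROBABILITY"}},
    "INSULT": {"summaryScore": {"value": 0.65, "type": "PROBABILITY"}}
  },
  "languages": ["en"]
}
"""

response = json.loads(response_body)

# Collect each requested attribute's summary score into a flat dict.
scores = {
    name: attr["summaryScore"]["value"]
    for name, attr in response["attributeScores"].items()
}

# The attribute with the largest score.
highest = max(scores, key=scores.get)
```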
With Span Annotations
If your UI highlights the part of the comment that triggered a score, request span annotations:
- TypeScript
- Python
- PHP
- REST API
const result = await client.perspective.analyzeComment({
comment: {
text: "You are clueless and your whole argument is nonsense.",
},
requestedAttributes: {
TOXICITY: {},
},
spanAnnotations: true,
});
console.log(result.attributeScores.TOXICITY.spanScores);
- Blocking Client
- Async Client
result = client.perspective.analyze_comment(
{
"comment": {
"text": "You are clueless and your whole argument is nonsense.",
},
"requestedAttributes": {
"TOXICITY": {},
},
"spanAnnotations": True,
}
)
print(result.attributeScores["TOXICITY"].spanScores)
result = await client.perspective.analyze_comment(
{
"comment": {
"text": "You are clueless and your whole argument is nonsense.",
},
"requestedAttributes": {
"TOXICITY": {},
},
"spanAnnotations": True,
}
)
print(result.attributeScores["TOXICITY"].spanScores)
$client->perspective()->analyzeComment([
'comment' => [
'text' => 'You are clueless and your whole argument is nonsense.',
],
'requestedAttributes' => [
'TOXICITY' => new stdClass(),
],
'spanAnnotations' => true,
])->then(function ($result) {
print_r($result->attributeScores['TOXICITY']->spanScores);
});
$client->run();
curl -X POST https://app.respectify.ai/v0.2/perspective-compat/analyse \
-H "Content-Type: application/json" \
-H "X-User-Email: your-email@example.com" \
-H "X-API-Key: your-api-key" \
-d '{
"comment": {
"text": "You are clueless and your whole argument is nonsense."
},
"requestedAttributes": {
"TOXICITY": {}
},
"spanAnnotations": true
}'
Notes
- If you send languages, Respectify preserves them.
- If you do not send languages, Respectify does not claim language detection.
- If you use spans in your UI, test with the real kinds of text your users write. UTF-16 offsets, emoji, and escaped text matter here.
- If you send doNotStore, Respectify accepts it. In practice, analyse requests are currently treated as though doNotStore were always true, because analyse traffic is not retained for training.