Perspective-Compatible Scoring

Use this if you are migrating from Google's Perspective API and want to keep the same style of scoring flow.

We have a compatibility endpoint:

  • POST /v0.2/perspective-compat/analyse

It accepts a Perspective-style request and returns a Perspective-style response.

What it returns:

  • attributeScores: Google-style scores such as TOXICITY, INSULT, and PROFANITY
  • summaryScore: the overall score for each requested attribute
  • spanScores: optional span-level scores if you request spanAnnotations
  • languages: the language context used for the response
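The bullets above can be sketched as a response shape. This is an illustrative typing only, based on Perspective's documented format; the field names below follow the bullets, but the SDK's actual type definitions may differ:

```typescript
// Hypothetical sketch of the Perspective-style response shape.
// Offsets in spanScores are UTF-16 code-unit indices, per Perspective's format.
interface SpanScore {
  begin: number; // start offset of the span
  end: number;   // end offset (exclusive)
  score: { value: number; type: string };
}

interface AttributeScore {
  summaryScore: { value: number; type: string };
  spanScores?: SpanScore[]; // present only if spanAnnotations was requested
}

interface PerspectiveAnalyzeCommentResponse {
  attributeScores: Record<string, AttributeScore>;
  languages: string[];
}

// An example value matching the shape:
const example: PerspectiveAnalyzeCommentResponse = {
  attributeScores: {
    TOXICITY: { summaryScore: { value: 0.12, type: "PROBABILITY" } },
  },
  languages: ["en"],
};

console.log(example.attributeScores.TOXICITY.summaryScore.value);
```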

Common Attributes

If you are migrating a typical Perspective integration, these are the attributes you are most likely to use:

  • TOXICITY
  • SEVERE_TOXICITY
  • IDENTITY_ATTACK
  • INSULT
  • PROFANITY
  • THREAT
  • SEXUALLY_EXPLICIT

Respectify also accepts a range of aliases and experimental names for migration convenience. For the full list, see Attribute Mapping.

Reference

See the Perspective-Compatible Analyse API Reference for the full request and response.

Response: PerspectiveAnalyzeCommentResponse

Tip: this is a scoring API. If you want Respectify to help the writer improve the comment, use our own Comment Scoring instead. That is the API that explains what is wrong, quotes examples, and gives feedback aimed at better conversation.

How Scores Work

This endpoint currently supports one score type:

  • PROBABILITY

That means a 0 to 1 score where higher numbers mean more of the requested attribute. For example, a higher TOXICITY score means the comment is more likely to be toxic.
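In practice, most integrations compare the summary score against a threshold. The threshold itself is an application choice, not part of the API; a minimal sketch:

```typescript
// Illustrative only: the 0.8 threshold is a hypothetical application choice,
// not a value the API prescribes. PROBABILITY scores run from 0 to 1.
function shouldFlag(summaryScoreValue: number, threshold = 0.8): boolean {
  return summaryScoreValue >= threshold;
}

console.log(shouldFlag(0.92)); // true
console.log(shouldFlag(0.35)); // false
```

Tune the threshold per attribute against your own moderation data; a cutoff that works for TOXICITY may be too strict or too loose for PROFANITY.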

For a fuller explanation of summaryScore, spanScores, and unsupported score types, see Perspective Score Types.

By Example

Basic Usage

const result = await client.perspective.analyzeComment({
  comment: {
    text: "You clearly did not read the article.",
  },
  requestedAttributes: {
    TOXICITY: {},
    INSULT: {},
  },
});

console.log(result.attributeScores.TOXICITY.summaryScore.value);
console.log(result.attributeScores.INSULT.summaryScore.value);

With Span Annotations

If your UI highlights the part of the comment that triggered a score, request span annotations:

const result = await client.perspective.analyzeComment({
  comment: {
    text: "You are clueless and your whole argument is nonsense.",
  },
  requestedAttributes: {
    TOXICITY: {},
  },
  spanAnnotations: true,
});

console.log(result.attributeScores.TOXICITY.spanScores);
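To highlight the scored text, map each span's offsets back onto the comment. This sketch assumes Perspective-style span entries with UTF-16 begin/end offsets (end exclusive); since JavaScript string indices are also UTF-16 code units, slice can use the offsets directly. The span values below are hypothetical, not real API output:

```typescript
// Assumed Perspective-style span entry: UTF-16 offsets, end exclusive.
interface Span {
  begin: number;
  end: number;
  score: { value: number };
}

// Pull the flagged substrings out of the original comment text.
function extractSpans(text: string, spans: Span[]): string[] {
  return spans.map((s) => text.slice(s.begin, s.end));
}

const text = "You are clueless and your whole argument is nonsense.";
// Hypothetical span covering the word "clueless":
const spans: Span[] = [{ begin: 8, end: 16, score: { value: 0.9 } }];

console.log(extractSpans(text, spans)); // ["clueless"]
```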

Notes

  • If you send languages, Respectify preserves them.
  • If you do not send languages, Respectify does not claim language detection.
  • If you use spans in your UI, test with the real kinds of text your users write. UTF-16 offsets, emoji, and escaped text matter here.
  • If you send doNotStore, Respectify accepts it. In practice, analyse requests are currently treated as though doNotStore were always true, because analyse traffic is not retained for training.
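The UTF-16 caveat above is easy to see with emoji: characters outside the Basic Multilingual Plane occupy two UTF-16 code units, so offsets and lengths count code units, not visible characters. This is plain JavaScript string behavior, not anything specific to the API:

```typescript
const plain = "nice";
const withEmoji = "👍 nice"; // the emoji is a surrogate pair: 2 code units

console.log(plain.length);          // 4
console.log(withEmoji.length);      // 7  (2 for the emoji + space + "nice")
console.log([...withEmoji].length); // 6  (spreading counts code points instead)

// A span beginning after the emoji and space starts at code-unit offset 3:
console.log(withEmoji.slice(3));    // "nice"
```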