Combined Evaluation

Performs multiple evaluations in parallel on a single comment. This endpoint can run any combination of comment scoring, spam detection, relevance checking, and dogwhistle detection in a single call, reducing latency compared with making separate API calls.

To control which evaluations run, set the feature flags in the request body (`run_comment_score`, `run_spam_check`, `run_relevance_check`, `run_dogwhistle_check`). At least one feature flag must be set to true.

The response contains separate sections for each requested evaluation, with the same structure as their individual endpoint responses.
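
For illustration, here is a minimal sketch of a call using Python's requests library. The URL is a hypothetical placeholder (this page does not give the route); the headers and body fields come from the parameter tables below.

```python
import requests

# Hypothetical endpoint URL -- the route is not specified on this page;
# substitute your deployment's actual base URL and path.
URL = "https://api.example.com/combined-evaluation"

HEADERS = {
    "X-User-Email": "user@example.com",  # account email, for authentication
    "X-API-Key": "your-api-key",         # API key for that account
}

# Minimal body: run only the spam check, so no article_context_id is needed.
body = {
    "comment": "Buy cheap watches at my site!!!",
    "run_spam_check": True,
}

resp = requests.post(URL, headers=HEADERS, json=body)
resp.raise_for_status()
print(resp.json().get("spam_check"))
```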

Header Parameters
X-User-Email email REQUIRED

Account email address, for authentication

X-API-Key string REQUIRED

API key associated with the account email, for authentication

Request Body REQUIRED

JSON format strongly recommended: this endpoint takes complex parameter types (arrays and booleans), which are processed more reliably when sent as JSON. Form-urlencoded requests work for simple cases, but arrays and booleans are problematic with form encoding: the banned_topics array requires bracketed-key syntax such as `banned_topics[]=topic1&banned_topics[]=topic2`, which varies across HTTP libraries, and boolean values must be sent as the strings `'true'`/`'false'`. We recommend JSON.
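
A sketch of the difference, using Python's requests (the URL is a hypothetical placeholder, as above; the UUID is a placeholder value):

```python
import requests

URL = "https://api.example.com/combined-evaluation"  # hypothetical
HEADERS = {"X-User-Email": "user@example.com", "X-API-Key": "your-api-key"}

# JSON: arrays and booleans are sent natively, no special handling.
json_body = {
    "comment": "Some comment text",
    "article_context_id": "00000000-0000-0000-0000-000000000000",
    "run_relevance_check": True,
    "banned_topics": ["politics", "religion"],
}
requests.post(URL, headers=HEADERS, json=json_body)

# Form encoding: booleans become strings, and the array needs the
# bracketed-key syntax, spelled out here as explicit key/value pairs.
form_body = [
    ("comment", "Some comment text"),
    ("article_context_id", "00000000-0000-0000-0000-000000000000"),
    ("run_relevance_check", "true"),
    ("banned_topics[]", "politics"),
    ("banned_topics[]", "religion"),
]
requests.post(URL, headers=HEADERS, data=form_body)
```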

article_context_id uuid

UUID that identifies the article context. Required for comment scoring and relevance checking.

comment string REQUIRED

The comment text to evaluate.

reply_to_comment string

Optional context for comment scoring: the comment to which this one is replying.

banned_topics string[]

Optional list of banned topics for the relevance check.

run_comment_score boolean

Whether to perform comment scoring evaluation.

run_spam_check boolean

Whether to perform spam detection.

run_relevance_check boolean

Whether to perform relevance checking.

run_dogwhistle_check boolean

Whether to perform dogwhistle detection.

sensitive_topics string[]

Optional list of sensitive topics to watch for during dogwhistle detection.

dogwhistle_examples string[]

Optional list of specific dogwhistle examples to look for during detection.
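
Putting the fields together, a sketch of a request body that enables all four evaluations (all values are illustrative placeholders):

```python
body = {
    "article_context_id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "comment": "The comment text to evaluate.",
    "reply_to_comment": "The parent comment, if this is a reply.",
    "banned_topics": ["politics"],
    "run_comment_score": True,
    "run_spam_check": True,
    "run_relevance_check": True,
    "run_dogwhistle_check": True,
    "sensitive_topics": ["topic to watch for"],
    "dogwhistle_examples": ["specific phrase to look for"],
}
```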

Responses
200

Successful response - returns JSON with results for each requested evaluation.

Schema OPTIONAL
comment_score object OPTIONAL

Schema for comment scoring results, including logical fallacies, objectionable phrases, and overall assessment.

logical_fallacies object[]

A list of any logical fallacies identified within the comment.

fallacy_name string

Name of the logical fallacy, e.g., 'straw man'

quoted_logical_fallacy_example string

Quoted part of the comment that demonstrates the fallacy

explanation_and_suggestions string OPTIONAL

Explanation of why we think that quote shows a logical fallacy, and suggestions on how to tackle it

suggested_rewrite string OPTIONAL

Rarely, a suggested rewrite. Provided only if the commenter's intent was very clear.

objectionable_phrases object[]

A list of any objectionable or rude phrases identified within the comment.

quoted_objectionable_phrase string

Quoted part of the comment that may be seen as objectionable

explanation string OPTIONAL

Explanation of why we assessed this as potentially objectionable

suggested_rewrite string OPTIONAL

Rarely, a suggested rewrite. Provided only if the commenter's intent was very clear.

negative_tone_phrases object[]

A list of any negative tone phrases identified within the comment.

quoted_negative_tone_phrase string

Quoted part of the comment that may contribute negatively to the conversation

explanation string OPTIONAL

Explanation of why we assessed this as potentially negative for the conversation

suggested_rewrite string OPTIONAL

Rarely, a suggested rewrite. Provided only if the commenter's intent was very clear.

appears_low_effort boolean

True if the comment appears to be low effort, e.g., 'me too' or 'I agree'.

overall_score integer

An approximate quality rating of the comment, from 1 (low) to 5 (high).

toxicity_score number OPTIONAL

Possible values: 0 ≤ value ≤ 1

Score indicating level of toxicity in the comment, ranging from 0.0 (not toxic) to 1.0 (highly toxic).

toxicity_explanation string OPTIONAL

Educational explanation of toxicity issues found in the comment, if any.
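
As a sketch, a caller might walk the comment_score section like this (field names are from the schema above; the helper function itself is hypothetical):

```python
def summarize_comment_score(score: dict) -> None:
    """Print the main findings from a comment_score result."""
    for fallacy in score.get("logical_fallacies", []):
        print("Fallacy:", fallacy["fallacy_name"])
        print("  Quote:", fallacy["quoted_logical_fallacy_example"])
        if "suggested_rewrite" in fallacy:  # rarely present
            print("  Rewrite:", fallacy["suggested_rewrite"])
    for phrase in score.get("objectionable_phrases", []):
        print("Objectionable:", phrase["quoted_objectionable_phrase"])
    for phrase in score.get("negative_tone_phrases", []):
        print("Negative tone:", phrase["quoted_negative_tone_phrase"])
    if score.get("appears_low_effort"):
        print("Comment appears low effort.")
    print("Overall score:", score["overall_score"], "/ 5")
    if score.get("toxicity_score") is not None:  # optional field
        print("Toxicity:", score["toxicity_score"])
```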

spam_check object OPTIONAL

Schema for spam check result, including reasoning and confidence level.

reasoning string

Short explanation of why the comment was or was not considered spam.

confidence number

Confidence score (0-1) for the spam evaluation.

is_spam boolean

True if the comment is probably spam; false otherwise.

relevance_check object OPTIONAL

Schema for comment relevance evaluation, including on-topic assessment and banned topics check.

on_topic object

Holds data about the on-topic (relevance) assessment.

reasoning string

Short explanation of why this comment was or was not regarded as on-topic.

on_topic boolean

Indicates whether the comment is considered on-topic: true if it is relevant, false if not.

confidence number

Confidence score (0-1) for the on-topic evaluation.

banned_topics object

Holds data about the off-topic (banned topics) assessment.

reasoning string

Short explanation of why the comment was or was not assessed as being about banned topics.

banned_topics string[]

List of banned topics detected in the comment.

quantity_on_banned_topics number

A score from 0 to 1 representing the extent of banned-topic content in the comment; 0.5 means roughly half the comment was about a flagged topic.

confidence number

Confidence score (0-1) for the off-topic evaluation.
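
A sketch of acting on the relevance_check section (the threshold values are illustrative, not part of the API):

```python
def passes_relevance(relevance: dict, min_confidence: float = 0.7) -> bool:
    """Hypothetical moderation rule built on a relevance_check result."""
    on_topic = relevance["on_topic"]
    banned = relevance["banned_topics"]
    if not on_topic["on_topic"] and on_topic["confidence"] >= min_confidence:
        return False  # confidently off-topic
    if (banned["quantity_on_banned_topics"] > 0.5
            and banned["confidence"] >= min_confidence):
        return False  # mostly about a banned topic
    return True
```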

dogwhistle_check object OPTIONAL

Schema for dogwhistle detection results, including analysis reasoning and detailed findings.

detection object

Main detection results and analysis.

reasoning string

Explanation of the analysis and reasoning behind the detection.

dogwhistles_detected boolean

Whether dogwhistles were found in the comment.

confidence number

Possible values: 0 ≤ value ≤ 1

Confidence level of the detection (0.0 to 1.0).

details object OPTIONAL

Optional detailed information about detected dogwhistles.

dogwhistle_terms string[] OPTIONAL

Specific terms or phrases detected as potential dogwhistles.

categories string[] OPTIONAL

Categories or types of dogwhistles detected.

subtlety_level number OPTIONAL

Possible values: 0 ≤ value ≤ 1

How subtle the dogwhistles are (0.0 = obvious, 1.0 = very subtle).

harm_potential number OPTIONAL

Possible values: 0 ≤ value ≤ 1

Potential harm level of the detected content (0.0 = low, 1.0 = high).
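
Since each top-level section is present only when its feature flag was set, callers should test for presence before reading. A sketch, continuing the request example above:

```python
result = resp.json()  # resp from the earlier request sketch

if (spam := result.get("spam_check")) is not None:
    verdict = "spam" if spam["is_spam"] else "not spam"
    print(f"Spam check: {verdict} (confidence {spam['confidence']:.2f})")

if (dw := result.get("dogwhistle_check")) is not None:
    detection = dw["detection"]
    if detection["dogwhistles_detected"]:
        details = dw.get("details", {})  # optional subsection
        print("Dogwhistle terms:", details.get("dogwhistle_terms", []))
        print("Reasoning:", detection["reasoning"])
```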

400

Bad Request - Missing or invalid parameters, or no feature flags enabled. To diagnose this, check that at least one feature flag in the request body is set to true, then consult the documentation for each individual evaluation you requested to verify you are sending its required parameters. For example, article_context_id is required for comment scoring and relevance checking, but is marked optional here because not every evaluation this endpoint runs in parallel needs it.

401

Unauthorized - Missing or incorrect authentication.
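
A sketch of handling both documented error statuses (the wrapper function is hypothetical; the error body format is not specified on this page, so it is printed raw):

```python
import requests

def combined_evaluation(url: str, headers: dict, body: dict):
    """Call the endpoint and distinguish the documented error statuses."""
    resp = requests.post(url, headers=headers, json=body)
    if resp.status_code == 400:
        # Likely: no feature flag set to true, or a required parameter
        # (e.g. article_context_id for scoring/relevance) is missing.
        print("Bad request:", resp.text)
        return None
    if resp.status_code == 401:
        print("Unauthorized: check the X-User-Email and X-API-Key headers.")
        return None
    resp.raise_for_status()
    return resp.json()
```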