
API Reference

Method signatures and parameters for the Respectify client libraries. For response field details, see the Schema Reference.

Methods

Method                    Description
Constructor               Create a client instance
init_topic_from_text      Initialize a topic from text content
init_topic_from_url       Initialize a topic from a URL
evaluate_comment          Full comment quality analysis
check_spam                Spam detection
check_relevance           On-topic and banned topics check
check_dogwhistle          Coded language detection
megacall                  Multiple analyses in one call
check_user_credentials    Verify API credentials
run                       Execute event loop (PHP only)

from respectify import RespectifyClient       # blocking client
from respectify import RespectifyAsyncClient  # async client

Network calls take time. The async client lets other coroutines run while a request is in flight; the blocking client is simpler for straightforward code where parallel requests aren't needed. If you're unsure, start with the blocking client.
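As a sketch of where the async client pays off, the helper below fans out several spam checks concurrently. It assumes RespectifyAsyncClient exposes the same methods as the blocking client, but as coroutines.

```python
import asyncio

# Hypothetical sketch: check several comments concurrently with the
# async client. Assumes check_spam is a coroutine mirroring the
# blocking client's signature.
async def check_many(client, article_id, comments):
    # gather() starts every request before awaiting any of them, so the
    # total wait is roughly one round trip rather than one per comment.
    return await asyncio.gather(
        *(client.check_spam(comment, article_id) for comment in comments)
    )
```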

Constructor

RespectifyClient

client = RespectifyClient(
    email: str,
    api_key: str,
    base_url: Optional[str] = None,
    version: Optional[str] = None,
    timeout: float = 30.0,
    website: Optional[str] = None
)
Parameter              Description
email                  Your registered email address
api_key / $apiKey      Your API key from the dashboard
base_url / $baseUrl    Override the API endpoint (optional)
version                API version, defaults to 0.2 (optional)
timeout                Request timeout in seconds (Python only)
website                Your website domain, included in API calls (optional)

Initialize Topic from Text

Initialize a topic from plain text or Markdown content. You can also initialize from a URL instead. See examples of initializing topics.

init_topic_from_text

def init_topic_from_text(
    text: str,
    topic_description: Optional[str] = None
) -> InitTopicResponse

response = client.init_topic_from_text(
    text="Article content here...",
    topic_description="Optional context about the article"
)
article_id = response.article_id
Parameter            Description
text                 The text content to initialize the topic from
topic_description    Additional context about the topic (optional)

Returns: InitTopicResponse containing the article_id UUID.


Initialize Topic from URL

Initialize a topic from a URL pointing to HTML, Markdown, PDF, or plain text. You can also initialize from text instead. See examples of initializing topics.

init_topic_from_url

def init_topic_from_url(
    url: str,
    topic_description: Optional[str] = None
) -> InitTopicResponse

response = client.init_topic_from_url(
    url="https://example.com/article",
    topic_description="Blog post about AI safety"
)
article_id = response.article_id
Parameter            Description
url                  URL to fetch content from (must be publicly accessible)
topic_description    Additional context about the topic (optional)

Returns: InitTopicResponse containing the article_id UUID.

Throws: UnsupportedMediaTypeError / UnsupportedMediaTypeException if the URL points to an unsupported format (images, video, .docx, etc.).
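To avoid a round trip that is certain to fail, you can pre-check the URL's file extension before calling the API. This sketch is hypothetical: the server makes the final decision, and the suffix list is inferred from the supported formats above.

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

# Suffixes inferred from the supported formats above (HTML, Markdown,
# PDF, plain text); an empty suffix usually means an HTML page.
SUPPORTED_SUFFIXES = {"", ".html", ".htm", ".md", ".markdown", ".pdf", ".txt"}

def looks_supported(url: str) -> bool:
    """Cheap client-side guess; the API still decides authoritatively."""
    suffix = PurePosixPath(urlparse(url).path).suffix.lower()
    return suffix in SUPPORTED_SUFFIXES
```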


Evaluate Comment

Evaluates a comment in the context of the article or topic the conversation is about, and optionally the comment it is replying to. This is Respectify's main API and the one you will likely call the most.

It returns:

  • A list of any common logical fallacies that seem present, with the goal of educating the commenter on traps they may be falling into. This includes the fallacy name, the quoted part of their comment that demonstrates it, an explanation, and sometimes a suggested rewrite (only if the commenter's intent was very clear).

  • Any objectionable or rude phrases, including what they are, why we assessed them as such, and very rarely a suggested rewrite.

  • Any negative tone phrases. These aren't objectionable or rude, but are phrases that don't contribute to the conversation. Note that Respectify encourages friendly (genuine, well-intentioned) disagreement and healthy sharing of different opinions and views. Sometimes disagreement can be done in an unhealthy way and this section identifies those.

  • If the comment appears 'low effort' (usually short comments that don't add to the conversation, e.g., "me too").

  • Toxicity score (0-1): measures attacks on people, not ideas. Attacking a policy harshly scores low; attacking the person making it scores high.

  • An overall score (1-5): An approximation of the quality of the comment, i.e., how well it engages and contributes to the conversation.

The idea is to use this not to censor but to educate and encourage better discussion. We hope that over time, as a result of feedback, your users will proactively write better comments without needing to be prompted.
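As a sketch of the educate-not-censor idea, a simple policy can map the two scores to an action. The thresholds and action names here are illustrative, not part of the API.

```python
# Hypothetical moderation policy built on the evaluate_comment fields
# described above (overall_score 1-5, toxicity_score 0-1).
def triage(overall_score: float, toxicity_score: float) -> str:
    if toxicity_score >= 0.8:
        return "hold"      # likely a personal attack; queue for human review
    if overall_score <= 2:
        return "educate"   # publish, but show the feedback to the commenter
    return "approve"
```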

See examples of scoring comments.

evaluate_comment

def evaluate_comment(
    comment: str,
    article_id: UUID
) -> CommentScore

result = client.evaluate_comment(
    comment="This is a thoughtful response...",
    article_id=article_id
)
print(f"Score: {result.overall_score}/5")
print(f"Toxicity: {result.toxicity_score}")
Parameter                         Description
comment                           The comment text to evaluate
article_id / $articleContextId    UUID from topic initialization
$replyToComment                   The parent comment, for context (PHP only, optional)

Returns: CommentScore with quality metrics, fallacies, and tone analysis.


Check Spam

Decides if a comment is spam. The article context is optional; if omitted, the comment is evaluated for spam on its own.

The definition of spam is broad: obvious commercial pitches, probable malware (signs of it in the comment; do not rely on this for security), SEO or link spam, phishing attempts, crypto scams, fake reviews, AI-generated nonsense, and much more.

See examples of spam detection.

check_spam

def check_spam(
    comment: str,
    article_id: UUID
) -> SpamDetectionResult

result = client.check_spam(comment, article_id)
if result.is_spam:
    print(f"Spam detected: {result.reasoning}")
Parameter     Description
comment       The comment text to check
article_id    UUID from topic initialization

Returns: SpamDetectionResult with is_spam boolean and reasoning.


Check Relevance

Evaluates the relevance of a comment to the article it's commenting on, plus checks for topics the site admin does not want discussed.

This is intended to help keep conversations on-topic, and to let site admins disallow discussion of specific topics. (One use case: legitimate-sounding comments built around dog whistles or whataboutism can simply be disallowed.)

Returns two assessments, each with a boolean result and confidence score (0-1):

  • On-topic: Is the comment plausibly related to the article? Even poor-quality comments are marked on-topic if they're about the subject.
  • Banned topics: Does the comment discuss any topics from your banned list? Also includes what proportion of the comment is about banned topics.
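A gate built on those two assessments might look like the sketch below. The field values arrive on the result object (e.g. result.on_topic.is_on_topic in the example that follows); here they are plain arguments so the logic stands alone, and the thresholds are illustrative.

```python
# Hypothetical relevance gate using the boolean + confidence pairs
# described above. Thresholds are assumptions, not API behavior.
def allow_comment(is_on_topic: bool, on_topic_confidence: float,
                  has_banned_topic: bool, banned_confidence: float) -> bool:
    if has_banned_topic and banned_confidence >= 0.7:
        return False
    # Only reject off-topic comments when the model is fairly sure;
    # low-confidence off-topic calls get the benefit of the doubt.
    return is_on_topic or on_topic_confidence < 0.6
```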

See examples of relevance checking.

check_relevance

def check_relevance(
    comment: str,
    article_id: UUID,
    banned_topics: Optional[List[str]] = None
) -> CommentRelevanceResult

result = client.check_relevance(
    comment=comment,
    article_id=article_id,
    banned_topics=["politics", "religion"]
)
print(f"On topic: {result.on_topic.is_on_topic}")
Parameter                        Description
comment                          The comment text to check
article_id / $articleContextId   UUID from topic initialization
banned_topics / $bannedTopics    Topics to flag (optional)

Returns: CommentRelevanceResult with on-topic and banned topics analysis.


Check Dogwhistle

Detects coded language (dogwhistles) that appears innocuous but carries hidden meanings to specific groups.

Dogwhistles can be subtle: numbers with hidden meanings (like "88" or "1488"), coded racial terms ("urban youths", "globalists"), extremist phrases, conspiracy markers ("just asking questions", "do your research"), and more. The detection is context-aware: discussing renewable energy policy is legitimate, but "green agenda pushed by globalists" is a dogwhistle.

Returns:

  • Detection: Whether dogwhistles were found, with reasoning and confidence (0-1).
  • Details (if detected): The specific terms found, their categories, how subtle they are (0-1), and harm potential (0-1).

You can provide sensitive_topics to focus detection on specific areas, and dogwhistle_examples if you know specific coded phrases used in your community.
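Confidence, subtlety, and harm potential can be combined into a review priority. This is a hypothetical triage, not API behavior, and the thresholds are illustrative.

```python
# Hypothetical triage over the detection fields described above:
# confidence (0-1), per-term subtlety (0-1), and harm potential (0-1).
def dogwhistle_priority(confidence: float, subtlety: float, harm: float) -> str:
    if confidence < 0.5:
        return "ignore"
    if harm >= 0.7:
        return "urgent"
    # Subtle terms are easy for human moderators to miss, so surface them.
    return "review" if subtlety >= 0.6 else "normal"
```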

See examples of dogwhistle detection.

check_dogwhistle

def check_dogwhistle(
    comment: str,
    article_id: UUID,
    sensitive_topics: Optional[List[str]] = None,
    dogwhistle_examples: Optional[List[str]] = None
) -> DogwhistleResult

result = client.check_dogwhistle(
    comment=comment,
    article_id=article_id,
    sensitive_topics=["extremism"]
)
if result.detection.contains_dogwhistle:
    print(f"Dogwhistle detected: {result.details.dogwhistle_terms}")
Parameter                                   Description
comment                                     The comment text to analyze
article_id / $articleContextId              UUID from topic initialization
sensitive_topics / $sensitiveTopics         Topics to watch for (optional)
dogwhistle_examples / $dogwhistleExamples   Known examples to detect (optional)

Returns: DogwhistleResult with detection status and details.


Megacall

Run any combination of comment scoring, spam detection, relevance checking, and dogwhistle detection in a single API call.

A single request is faster than multiple requests for each kind of analysis. It's also more cost-effective: you're charged for fewer API calls than the individual features in the megacall.

Each analysis type is optional. Only request what you need to minimize latency and cost. At least one feature flag must be set.

See examples of using megacall and selecting which checks to run.

megacall

def megacall(
    comment: str,
    article_id: UUID,
    include_spam: bool = False,
    include_relevance: bool = False,
    include_comment_score: bool = False,
    include_dogwhistle: bool = False,
    banned_topics: Optional[List[str]] = None,
    sensitive_topics: Optional[List[str]] = None,
    dogwhistle_examples: Optional[List[str]] = None
) -> MegaCallResult

result = client.megacall(
    comment=comment,
    article_id=article_id,
    include_spam=True,
    include_comment_score=True,
    include_relevance=True
)

if result.spam_check:
    print(f"Spam: {result.spam_check.is_spam}")
if result.comment_score:
    print(f"Score: {result.comment_score.overall_score}")
Parameter                                   Description
comment                                     The comment text to analyze
article_id / $articleContextId              UUID from topic initialization
include_* / $services                       Which analyses to run
banned_topics / $bannedTopics               For relevance check (optional)
sensitive_topics / $sensitiveTopics         For dogwhistle check (optional)
dogwhistle_examples / $dogwhistleExamples   For dogwhistle check (optional)

Returns: MegaCallResult containing only the requested analyses (others are null).
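One way to consume a MegaCallResult is to collect flags only from the analyses that were requested, skipping the null ones. The spam_check and comment_score attribute names follow the example above; dogwhistle_check and the thresholds are assumptions for illustration.

```python
# Hypothetical helper over a MegaCallResult-like object, where
# unrequested analyses are None.
def flagged_checks(result) -> list:
    flags = []
    if result.spam_check and result.spam_check.is_spam:
        flags.append("spam")
    if result.comment_score and result.comment_score.overall_score <= 2:
        flags.append("low_quality")
    if result.dogwhistle_check and result.dogwhistle_check.detection.contains_dogwhistle:
        flags.append("dogwhistle")
    return flags
```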


Check User Credentials

Verify your API credentials and get subscription status. See Verifying Credentials.

check_user_credentials

def check_user_credentials() -> UserCheckResponse

result = client.check_user_credentials()
print(f"Active: {result.active}")
print(f"Plan: {result.plan_name}")
print(f"Endpoints: {result.allowed_endpoints}")

Returns: UserCheckResponse with subscription status, plan details, and allowed endpoints.


Run Event Loop (PHP only)

Execute the ReactPHP event loop. You must call this after API methods for promises to resolve.

run

public function run(): void

$client->checkSpam($comment, $articleId)->then(/* ... */);
$client->evaluateComment($articleId, $comment)->then(/* ... */);
$client->run(); // Execute all pending requests

See Also