What it does

The /classify endpoint takes a piece of text and a set of labels you define, then returns a probability score for each label indicating how well the text fits. Scores are softmax-normalised across all labels so they sum to 1.0, and the highest-scoring label is returned as top_label. Each label is scored by running the text against a rubric — a yes/no question you write that describes what the label means. The model judges how strongly the text “answers yes” to each question, then normalises the results into a probability distribution.
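The softmax normalisation described above can be sketched as follows. The raw per-rubric judgements here are placeholder values for illustration, not actual model output:

```python
import math

def softmax(raw_scores: dict[str, float]) -> dict[str, float]:
    """Normalise raw per-label judgements into a probability distribution."""
    # Subtract the max before exponentiating, for numerical stability.
    m = max(raw_scores.values())
    exps = {label: math.exp(s - m) for label, s in raw_scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# Hypothetical raw judgements for three rubrics (illustrative values only).
raw = {"medical": 2.1, "legal": 0.3, "financial": -0.5}
scores = softmax(raw)
top_label = max(scores, key=scores.get)
```

Whatever the raw judgements are, the normalised scores always sum to 1.0, and the label with the strongest "yes" becomes top_label.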

When to use it

  • You need to route incoming content. Support tickets, user messages, or documents need to reach the right handler. Classify lets you define your own categories without training a model: just write rubrics that describe each category.
  • You’re building a moderation layer. A two-label setup (spam / not spam, safe / unsafe) gives you a confidence score you can threshold however your policy requires.
  • You need intent detection. Identify whether an incoming message is a question, complaint, feedback, or cancellation request, and act on it accordingly.
  • You want domain tagging at scale. Tag documents as medical, legal, financial, or any domain relevant to your pipeline, without fine-tuning or labelling training data.

Common use cases

Use case              | How classify helps
----------------------|-------------------------------------------------------------------
Support ticket triage | Route tickets to billing, technical, or account teams automatically
Content moderation    | Score content against a rubric for spam, toxicity, or policy violations
Intent detection      | Identify question / complaint / feedback / cancellation intent
Domain tagging        | Label documents by topic for downstream routing or filtering
Lead qualification    | Score inbound inquiries against criteria like urgency or deal size
Email categorisation  | Sort inbound email into categories without a rules engine

How it fits into your workflow

Classify sits between your input source and your routing or handling logic. You pass in the raw text — a ticket, a message, a document — and use the scores to decide what to do next.
[Incoming text] → [POST /classify] → [Route based on top_label or scores]
The response gives you both a top_label for simple routing and the full scores map if you need finer-grained control (e.g. only act if the top score exceeds a threshold).
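A minimal routing sketch along those lines, assuming a response shaped like the fields described above (a top_label plus a scores map; the field names, route names, and the 0.6 threshold are illustrative assumptions):

```python
def route(response: dict, threshold: float = 0.6) -> str:
    """Route on top_label only when its score clears a confidence threshold."""
    top = response["top_label"]
    if response["scores"][top] >= threshold:
        return top
    return "manual_review"  # fall back to a human when confidence is low

# Hypothetical response shape; values are illustrative, not real API output.
resp = {
    "top_label": "billing",
    "scores": {"billing": 0.72, "technical": 0.18, "account": 0.10},
}
decision = route(resp)
```

Thresholding on the full scores map rather than trusting top_label blindly lets you send low-confidence cases to a fallback handler instead of misrouting them.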

Writing good rubrics

The rubric is the most important part of a classify request. Poor rubrics produce noisy, unreliable scores; well-written rubrics are specific, direct, and independent. Rules of thumb:
  • Be specific. Vague rubrics produce low-confidence scores.
  • Frame as a yes/no question. “Does this text describe X?” outperforms “X content”.
  • Avoid negations. “Is this NOT about finance?” will confuse the model. Use a positive label instead.
  • Keep rubrics independent. Overlapping rubrics (e.g. “Is this medical?” and “Is this about health?”) split probability mass unpredictably.
Examples:
Label     | Poor rubric         | Good rubric
----------|---------------------|--------------------------------------------------------------
medical   | medical content     | Does this text describe a medical condition, symptom, treatment, or health topic?
urgent    | urgent or important | Does this text indicate that the sender needs an immediate response or is describing a time-sensitive situation?
complaint | negative feedback   | Is this text expressing dissatisfaction, frustration, or a formal complaint about a product or service?
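Putting the rules together, a request body with specific, positively framed, independent yes/no rubrics might look like the sketch below. The exact field names ("text", "labels") are assumptions for illustration, based on the inputs described above:

```python
import json

# Hypothetical /classify request body; field names are illustrative.
request_body = {
    "text": "My card was charged twice for last month's invoice.",
    "labels": {
        "billing": "Does this text describe a payment, charge, refund, or invoice issue?",
        "technical": "Does this text describe a bug, error, or malfunction in the product?",
        "account": "Does this text concern login, account settings, or account access?",
    },
}

payload = json.dumps(request_body)
```

Note that each rubric is a direct yes/no question, avoids negation, and covers ground the other two do not, so probability mass is not split across overlapping labels.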