Version 0.1

ScaleDown is a context engineering platform that intelligently compresses AI prompts while preserving semantic integrity and reducing hallucinations. Our research-backed compression algorithms analyze prompt components, from reasoning chains to code contexts, and apply targeted optimization techniques that maintain output quality while dramatically reducing token consumption.

Main Classes

scaledown.compressor.ScaleDownCompressor

The compressor module contains several compressor implementations; ScaleDownCompressor is the default and the main entry point for compressing text. It manages API communication, batch processing, and compression settings.
This class inherits from BaseCompressor and handles both single-string and list-based inputs automatically.
class scaledown.compressor.ScaleDownCompressor(target_model: str = 'gpt-4o',
                                               rate: Union[float, str] = 'auto',
                                               api_key: Optional[str] = None,
                                               temperature: Optional[float] = None,
                                               preserve_keywords: bool = False,
                                               preserve_words: Optional[List[str]] = None)
Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| target_model | str | 'gpt-4o' | The target LLM you plan to use downstream. ScaleDown optimizes specifically for this model's tokenizer and attention biases. Supported: 'gpt-4o', 'gpt-4o-mini', 'gemini-2.5-flash', etc. |
| rate | float or 'auto' | 'auto' | The aggressiveness of compression. 'auto': ScaleDown determines the optimal rate based on redundancy (recommended). A float sets a target retention rate (e.g., 0.4 keeps ~40% of tokens). |
| api_key | str | None | Your ScaleDown API key. If None, the SCALEDOWN_API_KEY environment variable is used. |
| temperature | float | None | Controls compression randomness. Higher values introduce more variation in token selection. |
| preserve_keywords | bool | False | If True, forces the preservation of detected domain-specific keywords. |
| preserve_words | List[str] | None | A list of specific words or phrases that must never be removed during compression. |
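
For example, a minimal construction sketch based on the signature above (the API key and preserve_words values are placeholders, not recommended defaults):

from scaledown.compressor import ScaleDownCompressor

compressor = ScaleDownCompressor(
    target_model="gpt-4o-mini",      # optimize for the downstream model's tokenizer
    rate=0.4,                        # keep roughly 40% of tokens instead of 'auto'
    api_key="your-api-key",          # or rely on the SCALEDOWN_API_KEY environment variable
    preserve_words=["SLA", "RBAC"],  # phrases that must survive compression
)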
Methods

compress

Compresses the given context and prompt.
def compress(context: Union[str, List[str]],
             prompt: Union[str, List[str]],
             max_tokens: int = None,
             **kwargs) -> Union[CompressedPrompt, List[CompressedPrompt]]
| Parameter | Type | Description |
| --- | --- | --- |
| context | str or List[str] | The background information (documents, code, history) to compress. |
| prompt | str or List[str] | The user query or instruction. It is usually not compressed itself but guides the compression of the context. |
| max_tokens | int | Optional strict limit on the output token count. |
| **kwargs | dict | Additional parameters passed directly to the API payload. |
Returns
  • CompressedPrompt: If inputs are strings.
  • List[CompressedPrompt]: If inputs are lists (supports batch processing).
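
As a sketch of both calling modes, assuming the compress() signature above (the context and prompt strings are placeholder data):

from scaledown.compressor import ScaleDownCompressor

compressor = ScaleDownCompressor(api_key="your-api-key")  # placeholder key

# Single-string inputs return one CompressedPrompt
single = compressor.compress(
    context="Long retrieved document text goes here...",
    prompt="Summarize the key risks mentioned above.",
)

# List inputs are processed as a batch and return a list of CompressedPrompt objects
batch = compressor.compress(
    context=["document one ...", "document two ..."],
    prompt=["question one?", "question two?"],
)
print(len(batch))  # 2, one result per context/prompt pair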

Data Structures

scaledown.types.CompressedPrompt

A smart object containing the compressed text and validated metadata. It behaves like a string but carries rich statistics, and it is the return type of Compressor objects.
class scaledown.types.CompressedPrompt(content: str,
                                       metrics: CompressionMetrics)
Attributes

| Attribute | Type | Description |
| --- | --- | --- |
| content | str | The actual compressed text string. |
| metrics | CompressionMetrics | Structured metrics object containing token counts and latency. |
| tokens | Tuple[int, int] | A tuple of (original_count, compressed_count). |
| savings_percent | float | The percentage of tokens removed (e.g., 60.0 for a 60% reduction). |
| compression_ratio | float | The ratio of original size to compressed size (e.g., 2.5x). |
| latency | int | Server-side processing time in milliseconds. |
Methods
  • print_stats()
    Prints a formatted summary of compression performance to stdout.
    Example output:
    ScaleDown Stats:
    • Tokens: 1000 -> 400
    • Savings: 60.0%
    • Ratio: 2.5x
    • Latency: 150ms
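
A short usage sketch based on the attributes and method listed above (the compressor setup and inputs are placeholders):

from scaledown.compressor import ScaleDownCompressor

compressor = ScaleDownCompressor(api_key="your-api-key")  # placeholder key
result = compressor.compress(context="Long context ...", prompt="User question?")

print(result.content)              # the compressed text, usable like a string
original, compressed = result.tokens
print(result.savings_percent)      # e.g., 60.0
print(result.compression_ratio)    # e.g., 2.5
result.print_stats()               # formatted summary as shown above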

scaledown.metrics.CompressionMetrics

A Pydantic model that validates the raw metrics returned by the API.
class scaledown.metrics.CompressionMetrics
| Field | Type | Description |
| --- | --- | --- |
| original_prompt_tokens | int | Token count before compression. Validated to be non-negative. |
| compressed_prompt_tokens | int | Token count after compression. Validated to be non-negative. |
| latency_ms | int | Processing time in milliseconds. |
| timestamp | datetime | Time when the compression request was processed. |
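
For illustration, reading the validated fields from a result's metrics object, assuming the fields documented above (compressor setup and inputs are placeholders):

from scaledown.compressor import ScaleDownCompressor

compressor = ScaleDownCompressor(api_key="your-api-key")  # placeholder key
result = compressor.compress(context="Long context ...", prompt="User question?")
metrics = result.metrics  # CompressionMetrics instance attached to the result

print(metrics.original_prompt_tokens, metrics.compressed_prompt_tokens)
print(f"{metrics.latency_ms} ms at {metrics.timestamp.isoformat()}")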

Configuration & Exceptions

Configuration

ScaleDown uses a global configuration system for API keys and endpoints.
import scaledown

# Set the API key globally
scaledown.set_api_key("your-api-key")

# Get the current key
key = scaledown.get_api_key()
  • Environment Variables:
    • SCALEDOWN_API_KEY: Automatically loaded if not set in code.
    • SCALEDOWN_API_URL: Override the default API endpoint (Default: https://api.scaledown.xyz).
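
A sketch of configuring through environment variables rather than set_api_key() (the key value is a placeholder; the endpoint shown is the documented default):

import os

os.environ["SCALEDOWN_API_KEY"] = "your-api-key"
os.environ["SCALEDOWN_API_URL"] = "https://api.scaledown.xyz"  # optional override; this is the default

from scaledown.compressor import ScaleDownCompressor

compressor = ScaleDownCompressor()  # api_key=None, so the environment variable is picked up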

Exceptions

All custom exceptions inherit from ScaleDownError.
from scaledown.exceptions import (ScaleDownError,
                                  AuthenticationError,
                                  APIError)
| Exception | Description |
| --- | --- |
| ScaleDownError | Base class for all package errors. |
| AuthenticationError | Raised when the API key is missing, invalid, or expired. |
| APIError | Raised when the server returns a non-200 response (e.g., rate limits, server errors). |
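
A sketch of handling these exceptions around a compression call; the fallback behavior shown is illustrative, not something the library prescribes:

from scaledown.compressor import ScaleDownCompressor
from scaledown.exceptions import ScaleDownError, AuthenticationError, APIError

compressor = ScaleDownCompressor(api_key="your-api-key")  # placeholder key

try:
    result = compressor.compress(context="Long context ...", prompt="User question?")
except AuthenticationError:
    raise SystemExit("Check SCALEDOWN_API_KEY: the key is missing, invalid, or expired.")
except APIError as err:
    # Non-200 response (rate limit, server error): fall back to the uncompressed prompt
    print(f"Compression failed ({err}); using the original context instead.")
    result = None
except ScaleDownError as err:
    # Catch-all for any other package-level error
    print(f"Unexpected ScaleDown error: {err}")
    result = None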