Python SDK
Automatic LLM tracing for OpenAI, Anthropic, and LiteLLM with zero code changes.
The carrot-ai Python SDK captures every LLM call as a structured trace — automatically, in the background, with no changes to your application code.
Installation
```
pip install carrot-ai
```

Initialization
```python
import carrot_ai

carrot_ai.init(api_key="sk-...")
```

Or set the `CARROT_API_KEY` environment variable and skip the `init()` call; the SDK initializes automatically on first use.
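For example, in a deployment environment you might set the key once and never call `init()` in application code:

```shell
export CARROT_API_KEY="sk-..."
```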
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | `CARROT_API_KEY` env var | Your Carrot API key |
| `base_url` | `str` | `https://api.carrotlabs.ai` | API endpoint |
| `flush_interval` | `float` | `5.0` | Seconds between background flushes |
| `batch_size` | `int` | `50` | Maximum traces per batch |
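Every parameter other than the API key has the default shown above. A sketch overriding them (the values here are illustrative, not recommendations):

```python
import carrot_ai

carrot_ai.init(
    api_key="sk-...",
    base_url="https://api.carrotlabs.ai",
    flush_interval=2.0,  # flush more often for latency-sensitive pipelines
    batch_size=100,      # larger batches for high-throughput services
)
```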
Wrapping a client
```python
from openai import OpenAI

client = carrot_ai.wrap(OpenAI())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

```python
from anthropic import Anthropic

client = carrot_ai.wrap(Anthropic())

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
```

The wrapped client behaves identically to the original: same methods, same return types, same streaming. Every `create()` call is captured as a trace in the background.
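For instance, streaming through a wrapped OpenAI client is unchanged (a sketch; the model and prompt are illustrative):

```python
import carrot_ai
from openai import OpenAI

client = carrot_ai.wrap(OpenAI())

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku."}],
    stream=True,
)

# Chunks arrive exactly as they would from the unwrapped client.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
```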
Supported providers
| Provider | Client | Sync | Async | Streaming |
|---|---|---|---|---|
| OpenAI | OpenAI / AsyncOpenAI | ✓ | ✓ | ✓ |
| Anthropic | Anthropic / AsyncAnthropic | ✓ | ✓ | ✓ |
| LiteLLM | litellm.completion / litellm.acompletion | ✓ | ✓ | ✓ |
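The async clients follow the same pattern. A sketch with `AsyncOpenAI`, assuming `wrap()` accepts async clients as the table indicates:

```python
import asyncio

import carrot_ai
from openai import AsyncOpenAI

client = carrot_ai.wrap(AsyncOpenAI())

async def main() -> None:
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```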
Flushing traces
Traces are sent in the background automatically. For short-lived scripts, call flush() before exit to ensure all traces are submitted:
```python
carrot_ai.flush()
```

The SDK also flushes automatically on normal interpreter shutdown.
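Putting it together, a cron job or CI step might look like this (a sketch; the model and prompt are illustrative):

```python
import carrot_ai
from openai import OpenAI

carrot_ai.init(api_key="sk-...")
client = carrot_ai.wrap(OpenAI())

client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize today's deploy log."}],
)

# A short-lived process may exit before the 5-second background flush
# interval elapses, so flush explicitly before returning.
carrot_ai.flush()
```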
Next steps
- OpenAI — OpenAI-specific examples
- Anthropic — Anthropic-specific examples
- LiteLLM — Trace 100+ LLM providers through LiteLLM
- SDK Reference — Full API reference