Quickstart
Start capturing traces from your LLM calls in 5 minutes.
Get an API key
Sign in to the Carrot Dashboard and navigate to API Keys. Click Create Key, give it a name, and copy the sk-... key. You'll only see it once.
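Since the key is shown only once, a common pattern is to keep it in an environment variable rather than hard-coding it. A minimal sketch, assuming a variable named CARROT_API_KEY (the variable name is illustrative, not an SDK convention):

```python
import os

# Assumption: the key was saved as the CARROT_API_KEY environment
# variable; fall back to a placeholder if it is unset.
api_key = os.environ.get("CARROT_API_KEY", "sk-...")

# The resulting value can then be passed to carrot_ai.init(api_key=api_key).
```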
Install the SDK
pip install carrot-ai

Start tracing

OpenAI
import carrot_ai
from openai import OpenAI
carrot_ai.init(api_key="sk-...")
client = carrot_ai.wrap(OpenAI())
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

Anthropic

import carrot_ai
from anthropic import Anthropic
carrot_ai.init(api_key="sk-...")
client = carrot_ai.wrap(Anthropic())
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content[0].text)

That's it — every create() call through the wrapped client is now captured as a trace. Your application code doesn't change at all.
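The wrapping behavior described above can be pictured as a generic proxy that delegates every attribute access to the real client and records each method call's input and latency. This is a sketch of the pattern only, not carrot_ai's actual implementation:

```python
import time

class TracingProxy:
    """Illustrative proxy: records call metadata, delegates everything else."""

    def __init__(self, target, traces):
        self._target = target
        self._traces = traces

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            # Nested namespaces (e.g. client.chat.completions) get wrapped too.
            return TracingProxy(attr, self._traces)

        def traced(*args, **kwargs):
            start = time.perf_counter()
            result = attr(*args, **kwargs)
            self._traces.append({
                "method": name,
                "kwargs": kwargs,
                "latency_s": time.perf_counter() - start,
            })
            return result

        return traced

# Usage with a stand-in client (hypothetical, for demonstration only):
class FakeCompletions:
    def create(self, **kwargs):
        return {"ok": True}

class FakeClient:
    def __init__(self):
        self.completions = FakeCompletions()

traces = []
client = TracingProxy(FakeClient(), traces)
client.completions.create(model="gpt-4o")
# traces now holds one entry with the call's kwargs and latency.
```

Because the proxy forwards calls unchanged, the application's code stays identical whether or not tracing is enabled, which is the property the SDK's wrap() relies on.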
View traces in the dashboard
Open the Traces page in the dashboard. You'll see your request with the full input, output, token counts, and latency.
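The fields listed above can be pictured as a single trace record. The shape below is purely illustrative; the field names are assumptions for explanation, not Carrot's actual schema:

```python
# Hypothetical shape of one captured trace (field names are assumptions).
trace = {
    "model": "gpt-4o",
    "input": [{"role": "user", "content": "Hello!"}],
    "output": "Hi there! How can I help?",
    "usage": {"prompt_tokens": 9, "completion_tokens": 8},
    "latency_ms": 412,
}
```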
As traces accumulate, Carrot Labs can use this data to build a custom model optimized for your workload.
What's next?
- Tracing — Learn what gets captured and how to filter traces
- Pipeline Tracing — Capture multi-step workflows (RAG, agents, chains) as nested traces
- Inference — Use your custom model through the Carrot API once it's ready