# LiteLLM

Automatic tracing for LiteLLM across 100+ LLM providers.
LiteLLM provides a unified interface to 100+ LLM providers. One call enables automatic tracing for all of them.
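How can one call cover every provider? Patch-style integrations wrap the client's entry points in a recording shim, so any provider routed through them is traced for free. A minimal sketch of the pattern, with a stub `completion` and hypothetical `traced`/`TRACES` names (not the SDK's actual internals):

```python
import functools

TRACES = []  # hypothetical in-memory trace store, for illustration only

def traced(fn):
    """Wrap fn so every call is recorded; the essence of patch-style tracing."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        TRACES.append({"fn": fn.__name__, "model": kwargs.get("model")})
        return result
    return wrapper

def completion(model, messages):  # stub standing in for litellm.completion
    return {"content": f"reply from {model}"}

completion = traced(completion)  # conceptually what patch_litellm() does
completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hello!"}])
print(TRACES)  # → [{'fn': 'completion', 'model': 'openai/gpt-4o'}]
```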
## Setup
```python
import carrot_ai
import litellm

carrot_ai.init(api_key="sk-...")
carrot_ai.patch_litellm()
```

## Usage
```python
response = litellm.completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Every `litellm.completion()` and `litellm.acompletion()` call is now traced automatically.
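The async path works the same way as the sync one. A self-contained sketch of the idea, using a stub in place of `litellm.acompletion` (the `TRACES` store and wrapper names are illustrative, not the SDK's internals):

```python
import asyncio

TRACES = []  # hypothetical in-memory trace store, for illustration only

async def acompletion(model, messages):  # stub standing in for litellm.acompletion
    await asyncio.sleep(0)  # pretend network I/O
    return {"content": f"reply from {model}"}

_original = acompletion

async def _traced_acompletion(*args, **kwargs):
    # Await the real coroutine, then record the call, mirroring the sync path.
    result = await _original(*args, **kwargs)
    TRACES.append({"fn": "acompletion", "model": kwargs.get("model")})
    return result

acompletion = _traced_acompletion  # conceptually what patching does for async calls

asyncio.run(acompletion(model="openai/gpt-4o", messages=[]))
print(TRACES)  # → [{'fn': 'acompletion', 'model': 'openai/gpt-4o'}]
```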
## Streaming
```python
response = litellm.completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)

for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

## Any provider
Tracing works across all LiteLLM-supported providers:
```python
litellm.completion(model="anthropic/claude-sonnet-4-20250514", messages=[...])
litellm.completion(model="gemini/gemini-2.5-pro", messages=[...])
litellm.completion(model="azure/gpt-4o", messages=[...])
```

## With @trace
LiteLLM calls made inside a function decorated with `@carrot_ai.trace` are grouped under that trace:

```python
@carrot_ai.trace("multi-model-pipeline")
def compare_models(question: str) -> dict:
    gpt = litellm.completion(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    claude = litellm.completion(
        model="anthropic/claude-sonnet-4-20250514",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    return {"gpt": gpt, "claude": claude}
```

## Default tags
Pass tags when patching to attach them to every LiteLLM trace:

```python
carrot_ai.patch_litellm(tags=["production", "litellm"])
```
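Conceptually, default tags are captured at patch time and merged into every recorded trace. A sketch under that assumption, with a stub `completion` and hypothetical `patch_with_tags`/`TRACES` names (not the SDK's internals):

```python
TRACES = []  # hypothetical in-memory trace store, for illustration only

def patch_with_tags(fn, tags=()):
    """Wrap fn so each recorded call carries the given default tags."""
    default_tags = list(tags)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        TRACES.append({"fn": fn.__name__, "tags": default_tags})
        return result
    return wrapper

def completion(model, messages):  # stub standing in for litellm.completion
    return {"content": "ok"}

completion = patch_with_tags(completion, tags=["production", "litellm"])
completion(model="openai/gpt-4o", messages=[])
print(TRACES[0]["tags"])  # → ['production', 'litellm']
```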