
Pipeline Tracing

Capture multi-step LLM workflows as nested traces with the @trace decorator.

For multi-step pipelines — RAG, agents, chains — you can capture the entire workflow as a single trace with individual LLM calls nested inside.

Basic usage

import carrot_ai
from openai import OpenAI

carrot_ai.init(api_key="sk-...")
client = carrot_ai.wrap(OpenAI())


@carrot_ai.trace
def answer_question(question: str) -> str:
    docs = search_knowledge_base(question)  # your retrieval step

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Context:\n{docs}"},
            {"role": "user", "content": question},
        ],
    )

    return response.choices[0].message.content


result = answer_question("How do I reset my password?")

The @trace decorator creates a parent trace for the entire function. Any wrapped LLM calls inside automatically become child traces, giving you a complete view of each step.
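Under the hood, parent/child nesting like this is usually implemented with context propagation: the decorator opens a span, stores it in a context variable, and any instrumented call that starts while that variable is set attaches itself as a child. The sketch below shows the mechanism with Python's contextvars; it is an illustration of the pattern, not the actual carrot_ai internals, and the Span class, trace decorator, and FINISHED list are all hypothetical names.

```python
import contextvars
import functools

# The currently active parent span, if any.
_current_span = contextvars.ContextVar("span", default=None)

FINISHED = []  # root spans, as they would be exported to a backend


class Span:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)


def trace(fn):
    """Open a span around fn; nested traced calls become children."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = Span(fn.__name__, parent=_current_span.get())
        token = _current_span.set(span)
        try:
            return fn(*args, **kwargs)
        finally:
            _current_span.reset(token)
            if span.parent is None:
                FINISHED.append(span)  # only roots are exported directly
    return wrapper


@trace
def fetch_docs(question):
    return f"docs for {question}"


@trace
def answer_question(question):
    # fetch_docs runs while answer_question's span is current,
    # so its span becomes a child rather than a new root.
    return fetch_docs(question)
```

Because the parent is looked up from the context variable rather than passed explicitly, nesting works to any depth without changing the function signatures.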

How it looks in the dashboard

answer_question (parent trace)
├── input: {"question": "How do I reset my password?"}
├── output: "Go to Settings > Security..."
├── latency: 1200ms
└── child: chat.completions.create
    ├── input: {messages: [...]}
    ├── output: {content: "Go to..."}
    └── model: gpt-4o, tokens: 150 in / 42 out

Pipeline traces appear in the dashboard with an expandable view showing the parent and all child steps.

Decorator options

@carrot_ai.trace
def my_function():
    ...

@carrot_ai.trace("my-pipeline")
def my_function():
    ...

@carrot_ai.trace(name="my-pipeline", tags=["production"], metadata={"version": "2"})
def my_function():
    ...

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| name | str | Function name | Trace name shown in the dashboard |
| tags | list[str] | None | Tags for filtering |
| metadata | dict | None | Arbitrary key-value metadata |

Async support

Works with async functions and async clients:

from openai import AsyncOpenAI

client = carrot_ai.wrap(AsyncOpenAI())


@carrot_ai.trace
async def answer_question(question: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
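
A single decorator can serve both sync and async functions by checking whether the wrapped callable is a coroutine function and returning a matching wrapper; context variables propagate across await points, so nesting keeps working in async code. The sketch below illustrates that dispatch pattern only; it is not the carrot_ai implementation, and the trace decorator and CALLS list are hypothetical names.

```python
import asyncio
import functools
import inspect

CALLS = []  # completed trace names, standing in for an exporter


def trace(fn):
    """Wrap sync or async functions with the same decorator."""
    if inspect.iscoroutinefunction(fn):
        @functools.wraps(fn)
        async def async_wrapper(*args, **kwargs):
            try:
                return await fn(*args, **kwargs)
            finally:
                CALLS.append(fn.__name__)  # record even if fn raised
        return async_wrapper

    @functools.wraps(fn)
    def sync_wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        finally:
            CALLS.append(fn.__name__)
    return sync_wrapper


@trace
async def answer_async(question):
    await asyncio.sleep(0)  # stand-in for an awaited LLM call
    return question.upper()


@trace
def answer_sync(question):
    return question.lower()
```

Returning an async wrapper for coroutine functions matters: a plain sync wrapper would hand the caller an unawaited coroutine and close the span before any work ran.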
