Instrument AI Agents

Learn how to manually instrument your code to use Sentry's Agents module.

As a prerequisite to setting up AI Agents, you'll first need to set up tracing. Once tracing is enabled, the JavaScript SDK will automatically instrument AI agents created with supported libraries. If that doesn't fit your use case, you can use the custom instrumentation described below.

The JavaScript SDK supports automatic instrumentation for some AI libraries. We recommend adding their integrations to your Sentry configuration to automatically capture spans for AI agents.
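
For example, enabling tracing and adding an AI integration during SDK initialization might look like the following sketch. The vercelAIIntegration shown here is only an illustrative assumption about your setup; use whichever integration matches the AI library and SDK version you actually have.

import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "___PUBLIC_DSN___",
  // tracing must be enabled for AI Agents spans to be recorded
  tracesSampleRate: 1.0,
  // add the integration for your AI library here, if one exists;
  // vercelAIIntegration() is only an example and may not apply to your setup
  integrations: [Sentry.vercelAIIntegration()],
});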

If you're using a library that Sentry does not automatically instrument, you can manually instrument your code to capture spans. For your AI agent data to show up in Sentry's AI Agents Insights, specific spans must be created with well-defined names and data attributes, as described below.

Invoke Agent Span

This span describes the invocation of an AI agent.

  • The span op MUST be "gen_ai.invoke_agent".
  • The span name SHOULD be "invoke_agent {gen_ai.agent.name}".
  • The gen_ai.operation.name attribute MUST be "invoke_agent".
  • The gen_ai.agent.name attribute SHOULD be set to the agent's name. (e.g. "Weather Agent")
  • All Common Span Attributes SHOULD be set (all required common attributes MUST be set).

Additional attributes on the span:

  • gen_ai.request.available_tools (string, optional): List of objects describing the available tools. [0] Example: "[{\"name\": \"random_number\", \"description\": \"...\"}, {\"name\": \"query_db\", \"description\": \"...\"}]"
  • gen_ai.request.frequency_penalty (float, optional): Model configuration parameter. Example: 0.5
  • gen_ai.request.max_tokens (int, optional): Model configuration parameter. Example: 500
  • gen_ai.request.messages (string, optional): List of objects describing the messages (prompts) sent to the LLM. [0], [1] Example: "[{\"role\": \"system\", \"content\": [{...}]}, {\"role\": \"system\", \"content\": [{...}]}]"
  • gen_ai.request.presence_penalty (float, optional): Model configuration parameter. Example: 0.5
  • gen_ai.request.temperature (float, optional): Model configuration parameter. Example: 0.1
  • gen_ai.request.top_p (float, optional): Model configuration parameter. Example: 0.7
  • gen_ai.response.tool_calls (string, optional): The tool calls in the model's response. [0] Example: "[{\"name\": \"random_number\", \"type\": \"function_call\", \"arguments\": \"...\"}]"
  • gen_ai.response.text (string, optional): The text representation of the model's responses. [0] Example: "[\"The weather in Paris is rainy\", \"The weather in London is sunny\"]"
  • gen_ai.usage.input_tokens.cached (int, optional): The number of cached tokens used in the AI input (prompt). Example: 50
  • gen_ai.usage.input_tokens (int, optional): The number of tokens used in the AI input (prompt). Example: 10
  • gen_ai.usage.output_tokens.reasoning (int, optional): The number of tokens used for reasoning. Example: 30
  • gen_ai.usage.output_tokens (int, optional): The number of tokens used in the AI response. Example: 100
  • gen_ai.usage.total_tokens (int, optional): The total number of tokens used to process the prompt (input and output). Example: 190
  • [0]: Span attributes only allow primitive data types (like int, float, boolean, string). This means you need to use a stringified version of a list of dictionaries. Do NOT set [{"foo": "bar"}] but rather the string "[{\"foo\": \"bar\"}]".
  • [1]: Each message item uses the format {role:"", content:""}. The role can be "user", "assistant", or "system". The content can be either a string or a list of dictionaries.
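
For example, a minimal sketch of setting gen_ai.request.messages inside a span callback (the message contents are placeholders):

// inside a Sentry.startSpan callback that provides `span`
const messages = [
  { role: "system", content: "You are a helpful weather assistant." },
  { role: "user", content: "What's the weather in Paris?" },
];

// span attributes only accept primitive types, so stringify the list
span.setAttribute("gen_ai.request.messages", JSON.stringify(messages));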

Example of an invoke agent span:
// some example agent implementation for demonstration
const myAgent = {
  name: "Weather Agent",
  modelProvider: "openai",
  model: "o3-mini",
  async run() {
    // Agent implementation
    return {
      output: "The weather in Paris is sunny",
      usage: {
        inputTokens: 15,
        outputTokens: 8,
      },
    };
  },
};

Sentry.startSpan(
  {
    op: "gen_ai.invoke_agent",
    name: `invoke_agent ${myAgent.name}`,
    attributes: {
      "gen_ai.operation.name": "invoke_agent",
      "gen_ai.system": myAgent.modelProvider,
      "gen_ai.request.model": myAgent.model,
      "gen_ai.agent.name": myAgent.name,
    },
  },
  async (span) => {
    // run the agent
    const result = await myAgent.run();

    // set agent response
    // we assume result.output is a string
    // type of `gen_ai.response.text` needs to be a string
    span.setAttribute(
      "gen_ai.response.text",
      JSON.stringify([result.output]),
    );

    // set token usage
    // we assume the result includes the tokens used
    span.setAttribute(
      "gen_ai.usage.input_tokens",
      result.usage.inputTokens,
    );
    span.setAttribute(
      "gen_ai.usage.output_tokens",
      result.usage.outputTokens,
    );

    return result;
  },
);

AI Client Span

This span represents a request to an AI model or service that generates a response or requests a tool call based on the input prompt.

  • The span op MUST be "gen_ai.{gen_ai.operation.name}". (e.g. "gen_ai.chat")
  • The span name SHOULD be "{gen_ai.operation.name} {gen_ai.request.model}". (e.g. "chat o3-mini")
  • All Common Span Attributes SHOULD be set (all required common attributes MUST be set).

Additional attributes on the span:

  • gen_ai.request.available_tools (string, optional): List of objects describing the available tools. [0] Example: "[{\"name\": \"random_number\", \"description\": \"...\"}, {\"name\": \"query_db\", \"description\": \"...\"}]"
  • gen_ai.request.frequency_penalty (float, optional): Model configuration parameter. Example: 0.5
  • gen_ai.request.max_tokens (int, optional): Model configuration parameter. Example: 500
  • gen_ai.request.messages (string, optional): List of objects describing the messages (prompts) sent to the LLM. [0], [1] Example: "[{\"role\": \"system\", \"content\": [{...}]}, {\"role\": \"system\", \"content\": [{...}]}]"
  • gen_ai.request.presence_penalty (float, optional): Model configuration parameter. Example: 0.5
  • gen_ai.request.temperature (float, optional): Model configuration parameter. Example: 0.1
  • gen_ai.request.top_p (float, optional): Model configuration parameter. Example: 0.7
  • gen_ai.response.tool_calls (string, optional): The tool calls in the model's response. [0] Example: "[{\"name\": \"random_number\", \"type\": \"function_call\", \"arguments\": \"...\"}]"
  • gen_ai.response.text (string, optional): The text representation of the model's responses. [0] Example: "[\"The weather in Paris is rainy\", \"The weather in London is sunny\"]"
  • gen_ai.usage.input_tokens.cached (int, optional): The number of cached tokens used in the AI input (prompt). Example: 50
  • gen_ai.usage.input_tokens (int, optional): The number of tokens used in the AI input (prompt). Example: 10
  • gen_ai.usage.output_tokens.reasoning (int, optional): The number of tokens used for reasoning. Example: 30
  • gen_ai.usage.output_tokens (int, optional): The number of tokens used in the AI response. Example: 100
  • gen_ai.usage.total_tokens (int, optional): The total number of tokens used to process the prompt (input and output). Example: 190
  • [0]: Span attributes only allow primitive data types. This means you need to use a stringified version of a list of dictionaries. Do NOT set [{"foo": "bar"}] but rather the string "[{\"foo\": \"bar\"}]".
  • [1]: Each message item uses the format {role:"", content:""}. The role can be "user", "assistant", or "system". The content can be either a string or a list of dictionaries.

Example of an AI client span:
// some example implementation for demonstration
const myAi = {
  modelProvider: "openai",
  model: "o3-mini",
  modelConfig: {
    temperature: 0.1,
    presencePenalty: 0.5,
  },
  async createMessage(messages, maxTokens) {
    // AI implementation
    return {
      output:
        "Here's a joke: Why don't scientists trust atoms? Because they make up everything!",
      usage: {
        inputTokens: 12,
        outputTokens: 24,
      },
    };
  },
};

Sentry.startSpan(
  {
    op: "gen_ai.chat",
    name: `chat ${myAi.model}`,
    attributes: {
      "gen_ai.operation.name": "chat",
      "gen_ai.system": myAi.modelProvider,
      "gen_ai.request.model": myAi.model,
    },
  },
  async (span) => {
    // set up messages for LLM
    const maxTokens = 1024;
    const prompt = "Tell me a joke";
    const messages = [{ role: "user", content: prompt }];

    // set chat request data
    span.setAttribute("gen_ai.request.messages", JSON.stringify(messages));
    span.setAttribute("gen_ai.request.max_tokens", maxTokens);
    span.setAttribute(
      "gen_ai.request.temperature",
      myAi.modelConfig.temperature,
    );
    span.setAttribute(
      "gen_ai.request.presence_penalty",
      myAi.modelConfig.presencePenalty,
    );

    // ask the LLM
    const result = await myAi.createMessage(messages, maxTokens);

    // set response
    // we assume result.output is a string
    // type of `gen_ai.response.text` needs to be a string
    span.setAttribute(
      "gen_ai.response.text",
      JSON.stringify([result.output]),
    );

    // set token usage
    // we assume the result includes the tokens used
    span.setAttribute(
      "gen_ai.usage.input_tokens",
      result.usage.inputTokens,
    );
    span.setAttribute(
      "gen_ai.usage.output_tokens",
      result.usage.outputTokens,
    );

    return result;
  },
);

Execute Tool Span

This span describes a tool execution.

  • The span op MUST be "gen_ai.execute_tool".
  • The span name SHOULD be "gen_ai.execute_tool {gen_ai.tool.name}". (e.g. "gen_ai.execute_tool query_database")
  • The gen_ai.tool.name attribute SHOULD be set to the name of the tool. (e.g. "query_database")
  • All Common Span Attributes SHOULD be set (all required common attributes MUST be set).

Additional attributes on the span:

  • gen_ai.tool.description (string, optional): Description of the tool executed. Example: "Tool returning a random number"
  • gen_ai.tool.input (string, optional): Input that was given to the executed tool, as a string. Example: "{\"max\":10}"
  • gen_ai.tool.name (string, optional): Name of the tool executed. Example: "random_number"
  • gen_ai.tool.output (string, optional): The output from the tool. Example: "7"
  • gen_ai.tool.type (string, optional): The type of the tool. Examples: "function"; "extension"; "datastore"

Example of an execute tool span:
// some example implementation for demonstration
const myAi = {
  modelProvider: "openai",
  model: "o3-mini",
  async createMessage(messages, maxTokens) {
    // AI implementation that returns tool calls
    return {
      toolCalls: [
        {
          name: "random_number",
          description: "Generate a random number",
          arguments: { max: 10 },
        },
      ],
    };
  },
};

// Set up the messages and make the AI call first
const messages = [{ role: "user", content: "Generate a random number" }];
const result = await Sentry.startSpan(
  { op: "gen_ai.chat", name: `chat ${myAi.model}` },
  () => myAi.createMessage(messages, 1024),
);

// Check if we should call a tool
if (result.toolCalls && result.toolCalls.length > 0) {
  const tool = result.toolCalls[0];

  await Sentry.startSpan(
    {
      op: "gen_ai.execute_tool",
      name: `gen_ai.execute_tool ${tool.name}`,
      attributes: {
        "gen_ai.system": myAi.modelProvider,
        "gen_ai.request.model": myAi.model,
        "gen_ai.tool.type": "function",
        "gen_ai.tool.name": tool.name,
        "gen_ai.tool.description": tool.description,
        "gen_ai.tool.input": JSON.stringify(tool.arguments),
      },
    },
    async (span) => {
      // run tool (example implementation)
      const toolResult = Math.floor(Math.random() * tool.arguments.max);

      // set tool result
      span.setAttribute("gen_ai.tool.output", String(toolResult));

      return toolResult;
    },
  );
}

Handoff Span

This span describes the handoff from one agent to another agent.

  • The span op MUST be "gen_ai.handoff".
  • The span name SHOULD be "handoff from {from_agent} to {to_agent}".
  • All Common Span Attributes SHOULD be set.

Example of a handoff span:
// some example agent implementation for demonstration
const myAgent = {
  name: "Weather Agent",
  modelProvider: "openai",
  model: "o3-mini",
  async run() {
    // Agent implementation
    return {
      handoffTo: "Travel Agent",
      output:
        "I need to handoff to the travel agent for booking recommendations",
    };
  },
};

const otherAgent = {
  name: "Travel Agent",
  modelProvider: "openai",
  model: "o3-mini",
  async run() {
    // Other agent implementation
    return { output: "Here are some travel recommendations..." };
  },
};

// First agent execution
const result = await Sentry.startSpan(
  { op: "gen_ai.invoke_agent", name: `invoke_agent ${myAgent.name}` },
  () => myAgent.run(),
);

// Check if we should handoff to another agent
if (result.handoffTo) {
  // Create handoff span
  await Sentry.startSpan(
    {
      op: "gen_ai.handoff",
      name: `handoff from ${myAgent.name} to ${otherAgent.name}`,
      attributes: {
        "gen_ai.system": myAgent.modelProvider,
        "gen_ai.request.model": myAgent.model,
      },
    },
    () => {
      // the handoff span just marks the handoff
      // no actual work is done here
    },
  );

  // Execute the other agent
  await Sentry.startSpan(
    { op: "gen_ai.invoke_agent", name: `invoke_agent ${otherAgent.name}` },
    () => otherAgent.run(),
  );
}

Common Span Attributes

Some attributes are common to all AI Agents spans:

  • gen_ai.system (string, required): The Generative AI product as identified by the client or server instrumentation. [0] Example: "openai"
  • gen_ai.request.model (string, required): The name of the AI model a request is being made to. Example: "o3-mini"
  • gen_ai.operation.name (string, optional): The name of the operation being performed. [1] Example: "chat"
  • gen_ai.agent.name (string, optional): The name of the agent this span belongs to. Example: "Weather Agent"

[0] Well-defined values for the data attribute gen_ai.system:

  • "anthropic": Anthropic
  • "aws.bedrock": AWS Bedrock
  • "az.ai.inference": Azure AI Inference
  • "az.ai.openai": Azure OpenAI
  • "cohere": Cohere
  • "deepseek": DeepSeek
  • "gcp.gemini": Gemini
  • "gcp.gen_ai": Any Google generative AI endpoint
  • "gcp.vertex_ai": Vertex AI
  • "groq": Groq
  • "ibm.watsonx.ai": IBM Watsonx AI
  • "mistral_ai": Mistral AI
  • "openai": OpenAI
  • "perplexity": Perplexity
  • "xai": xAI

[1] Well-defined values for the data attribute gen_ai.operation.name:

  • "chat": Chat completion operation such as the OpenAI Chat API
  • "create_agent": Create GenAI agent
  • "embeddings": Embeddings operation such as the OpenAI Create embeddings API
  • "execute_tool": Execute a tool
  • "generate_content": Multimodal content generation operation such as Gemini Generate Content
  • "invoke_agent": Invoke GenAI agent