Tusk Drift MCP is an MCP server that enables AI agents like Claude and Cursor to search, analyze, and debug your application’s live traffic. With Drift MCP, your AI agent can:
  • Search API traffic - Find specific requests by endpoint, status code, duration, or payload contents
  • Analyze performance - Calculate latency percentiles, error rates, and traffic patterns
  • Debug distributed traces - View full request/response traces with PII redacted
New to Tusk Drift? Check out the Drift overview to get started.

Setup

Choose between the hosted remote server (recommended) and running the server locally.

Configuration

Variable              | Required | Description
TUSK_API_KEY          | Yes      | Your Tusk API token
TUSK_DRIFT_API_URL    | No       | API URL (defaults to https://api.usetusk.ai)
TUSK_DRIFT_SERVICE_ID | No       | Default service ID. Auto-discovered from .tusk/config.yaml if not set
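For a local setup, these variables are typically passed through the env block of your MCP client configuration (Claude Desktop and Cursor both use this format). A minimal sketch, assuming the server ships as an npm package; the package name tusk-drift-mcp below is a placeholder, not confirmed by this page:

{
  "mcpServers": {
    "tusk-drift": {
      "command": "npx",
      "args": ["-y", "tusk-drift-mcp"],
      "env": {
        "TUSK_API_KEY": "<your Tusk API token>",
        "TUSK_DRIFT_API_URL": "https://api.usetusk.ai"
      }
    }
  }
}

Only TUSK_API_KEY is required; the other variables can be omitted to fall back to their defaults.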

Available Tools

The MCP server exposes six tools for different observability workflows:

query_spans

Search API traffic with flexible filters including endpoint, status code, duration, and request/response payloads.

get_schema

Understand the structure of captured traffic for different instrumentation types (HTTP, database, gRPC).

list_distinct_values

Discover what endpoints exist, which status codes are returned, and other field values in your traffic.

aggregate_spans

Calculate performance metrics: latency percentiles (p50, p95, p99), error rates, and request counts.

get_trace

View distributed traces as hierarchical trees for end-to-end debugging across services.

get_spans_by_ids

Fetch specific spans by ID with full request/response payloads for detailed inspection.

Usage Examples

Here are common workflows you can perform with AI agents using Drift MCP.

Analyze endpoint performance

Ask your AI agent:
“What are the slowest endpoints in my application? Show me p95 latency and error rates”
The agent will use aggregate_spans to calculate metrics:
{
  "groupBy": ["name"],
  "metrics": ["count", "p95Duration", "errorRate"],
  "orderBy": { "metric": "p95Duration", "direction": "desc" }
}

Debug a failing request

Ask your AI agent:
“Show me the full trace for this failed request to /api/checkout”
The agent will:
  1. Use query_spans to find the failing request
  2. Extract the traceId from the result
  3. Use get_trace to display the full request chain
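The tool inputs for this workflow might look like the following sketch. The filter field names (filters, statusCode) are illustrative assumptions rather than a confirmed schema; use get_schema to see the real structure of your captured traffic.

Step 1 - query_spans, to find the failing checkout requests:
{
  "filters": { "name": "/api/checkout", "statusCode": { "gte": 500 } },
  "limit": 10
}

Step 3 - get_trace, to expand the full request chain:
{
  "traceId": "<traceId extracted from the matching span>"
}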

Find requests with specific payload data

Ask your AI agent:
“Find all requests where the response included an error message containing ‘payment failed’”
The agent will use JSONB filters to search response payloads:
{
  "jsonbFilters": [{
    "column": "outputValue",
    "path": "$.body.error",
    "filter": { "contains": "payment failed" }
  }]
}

Multi-Service Support

If you have multiple services instrumented with Tusk Drift, the MCP server automatically discovers them from .tusk/config.yaml files in your workspace. When querying, specify the service:
“Show me the slowest requests in the payments-service”
The agent will include observableServiceId in the query to target the correct service.
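For example, an aggregate_spans query scoped to a single service might look like the sketch below. The service ID value is a placeholder; the agent resolves the real ID from the discovered .tusk/config.yaml files.

{
  "observableServiceId": "<payments-service ID from .tusk/config.yaml>",
  "groupBy": ["name"],
  "metrics": ["count", "p95Duration"],
  "orderBy": { "metric": "p95Duration", "direction": "desc" }
}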

Support

Need help? Contact us at support@usetusk.ai or open an issue on GitHub.