Documentation

DependWatch Docs

Observability for every API and tool your software depends on—including the ones your AI agents call. Latency, failures, cost, insights, guardrails, dependency map. First insight in under two minutes.

Quickstart

Get your first event and insight into the DependWatch dashboard in under two minutes.

  1. Create an account — Sign in with Google, GitHub, or magic link.
  2. Create a workspace and project — Onboarding guides you; the project gets a default ingest key.
  3. Copy your ingest key — Shown once when you create the project. Store it as DEPENDWATCH_INGEST_KEY in your environment.
  4. Send a test event (optional) — From the empty dashboard, click "Send a test event" to see the ingestion stream and watch the dashboard populate with real metrics. Events usually appear within a few seconds.
  5. Install the SDK — npm install @dependwatch/sdk-node.
  6. Initialize and wrap — Call init() at startup, then wrap your first API call with wrap().
  7. See events and insights — Events are batched and sent automatically; the dashboard shows calls, latency, error rate, projected cost, and auto-generated insights and guardrails.

Installation

Install the DependWatch Node SDK from npm. Use it in Node.js (server-side) only; never expose your ingest key in client-side code.

Bash
npm install @dependwatch/sdk-node

# or
yarn add @dependwatch/sdk-node

# or
pnpm add @dependwatch/sdk-node

Create Project & API Key

In the dashboard, create a workspace (e.g. your company) and a project (e.g. an app or service). When you create a project, we generate a default ingest key. You can create more keys in Project → Settings → Ingest API keys.

Keep keys secret. Use environment variables (e.g. DEPENDWATCH_INGEST_KEY) and never commit them. The full key is shown only once when you create or rotate it.
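
For a typical local setup, the key lives in a .env file or shell profile (the key value below is a placeholder; real keys start with a prefix such as dw_live_):

```shell
# Store the ingest key in your environment, never in source control.
# Placeholder value — paste the real key shown once at creation/rotation.
export DEPENDWATCH_INGEST_KEY="dw_live_xxxxxxxxxxxx"
```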

Send Your First Event

After init(), wrap any async external API call with wrap(). DependWatch measures duration and success and sends an event to the ingest API. Events are batched and flushed on an interval (default 5 seconds) or when the batch is full. ingestKey is required — set DEPENDWATCH_INGEST_KEY in your environment.

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";

init({
  ingestKey: process.env.DEPENDWATCH_INGEST_KEY!,  // required
});

await wrap(
  { provider: "openai", endpoint: "chat.completions" },
  async () => {
    return openai.chat.completions.create({ model: "gpt-4", messages });
  }
);

Refresh your project dashboard (or wait for auto-refresh); you should see the call count, latency, and optional cost.

SDK Overview

The Node SDK (@dependwatch/sdk-node) lets you instrument external API calls in two ways:

  • wrap(options, fn) — Wraps an async function, measures duration and success/failure, and sends one event per call. Recommended for most use cases.
  • track(event) — Sends a single event with pre-measured fields (e.g. when you already have duration from middleware).

Events are queued in memory and sent in batches to the Ingest API (POST /api/ingest). Batching and retries are built in; you only need to call init() once and then wrap() or track().
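
As a sketch of the track() path, an event with pre-measured timing might look like the following (field names follow the Event Schema section; buildEvent is a hypothetical helper, and the track() call itself is left as a comment rather than assumed):

```typescript
// Hypothetical helper: assemble a pre-measured event for track().
// Field names come from the Event Schema section of these docs.
function buildEvent(statusCode: number, durationMs: number) {
  return {
    provider: "stripe",
    endpoint: "paymentIntents.create",
    method: "POST",
    duration_ms: durationMs,
    status_code: statusCode,
    // Mirrors the documented ingest default: success derived from status_code < 400.
    success: statusCode < 400,
  };
}

// After init(), hand the event to the SDK:
// import { track } from "@dependwatch/sdk-node";
// track(buildEvent(200, 340));
```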

Initialize SDK

Call init() once at application startup, before any wrap() or track(). Pass your project's ingest key (and optionally base URL, environment, or batching options).

TypeScript
import { init } from "@dependwatch/sdk-node";

init({
  ingestKey: process.env.DEPENDWATCH_INGEST_KEY!,
  baseUrl: "https://app.dependwatch.app",  // optional; falls back to DEPENDWATCH_INGEST_URL
  environment: "prod",                     // optional: dev | staging | prod | test
  flushIntervalMs: 5000,                   // optional; default 5000
  maxBatchSize: 50,                        // optional; default 50
});

If you use a self-hosted or local app, set baseUrl to your app URL (e.g. http://localhost:3000). The SDK uses DEPENDWATCH_INGEST_URL or NEXT_PUBLIC_APP_URL if baseUrl is not provided.

Wrapping API Calls

Use wrap(options, fn) to wrap any async call. The SDK starts a span, runs your function, records duration and success (or failure with status code and error type/message), and enqueues an event. Options include provider, endpoint, optional method, and optional estimated_cost_usd for cost projection.

TypeScript
import { wrap } from "@dependwatch/sdk-node";

const result = await wrap(
  {
    provider: "openai",
    endpoint: "chat.completions",
    method: "POST",              // optional
    estimated_cost_usd: 0.002,   // optional; for dashboard cost projection
  },
  async () => {
    return openai.chat.completions.create({ model: "gpt-4", messages });
  }
);

If the inner function throws, the error is recorded (status code, error type, message) and rethrown. The event is still sent so the dashboard shows the failure.

Providers

Events are grouped by provider in the dashboard (e.g. openai, stripe, twilio, clerk, resend). You choose the provider string when calling wrap() or track(). Use lowercase; the ingest API normalizes it. There is no fixed list — use any name for custom or third-party APIs. DependWatch supports a broad set of provider categories, whether your calls come from application code or from the tools your AI agents call:

  • AI APIs — OpenAI, Anthropic, Mistral, Google Gemini, Cohere, Replicate, Together AI
  • Payments — Stripe, PayPal, Adyen, Checkout.com
  • Messaging — Twilio, Resend, SendGrid, Mailgun, Vonage
  • Auth & Identity — Clerk, Auth0, Supabase Auth, Firebase Auth, AWS Cognito, Okta
  • Cloud & Infrastructure — AWS, Google Cloud, Azure, Cloudflare, Supabase, Firebase
  • Search & Data — Algolia, Pinecone, Weaviate, Elasticsearch
  • Maps — Google Maps, Mapbox, HERE
  • Dev & Platform — GitHub, GitLab, Vercel, Cloudflare API
  • Generic HTTP / fetch — Any REST API

Known providers may have default cost models in the catalog; others still get full latency and error metrics.

Monitor by provider

The sections below are dedicated guides for key providers: why to monitor, what DependWatch captures, and a quick instrumentation example. The same pattern applies whether you're instrumenting a backend service, a SaaS integration, or the tool calls behind an AI agent workflow. The order follows the sidebar (AI → Auth & Identity → Payments → Messaging → Cloud → Generic).

Monitor OpenAI API

Monitor OpenAI API latency, errors, and cost so you can track usage, catch rate limits and timeouts, and avoid bill spikes. Wrap OpenAI SDK calls (e.g. chat.completions.create, embeddings.create) with wrap() and pass estimated_cost_usd per call for accurate projected spend.

What we capture: Call count, P50/P95/P99 latency, error rate, status codes, and projected monthly cost. Use guardrails to detect cost spikes and error-rate regressions.

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import OpenAI from "openai";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const openai = new OpenAI();

const completion = await wrap(
  {
    provider: "openai",
    endpoint: "chat.completions",
    estimated_cost_usd: 0.002,
  },
  async () => {
    return openai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: "Hello" }],
    });
  }
);

In the dashboard: You’ll see call count, P95 latency, error rate, and projected API spend for the selected time range.

Monitor Anthropic API

Monitor Anthropic API latency and failures so you can track Claude usage, catch rate limits, and control cost. Wrap your Anthropic SDK or HTTP calls with wrap() and pass estimated_cost_usd when you know it for accurate projected spend.

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import Anthropic from "@anthropic-ai/sdk";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const anthropic = new Anthropic();

const message = await wrap(
  {
    provider: "anthropic",
    endpoint: "messages.create",
    estimated_cost_usd: 0.003,
  },
  async () =>
    anthropic.messages.create({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 1024,
      messages: [{ role: "user", content: "Hello" }],
    })
);

In the dashboard: You'll see call count, P95 latency, error rate, and projected API spend. Use guardrails to detect cost spikes and error-rate regressions.

Monitor Mistral API

Monitor Mistral API latency and response times so you can track chat and embedding calls and catch failures early. Wrap Mistral SDK or fetch calls with wrap().

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });

const response = await wrap(
  {
    provider: "mistral",
    endpoint: "chat.completions",
    estimated_cost_usd: 0.0002,
  },
  async () =>
    fetch("https://api.mistral.ai/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + process.env.MISTRAL_API_KEY,
      },
      body: JSON.stringify({
        model: "mistral-small",
        messages: [{ role: "user", content: "Hello" }],
      }),
    }).then((r) => r.json())
);

In the dashboard: Mistral appears in the provider table with latency, error rate, and projected cost when you pass estimated_cost_usd.

Monitor Google Gemini API

Monitor Google Gemini API latency and failures for chat and embedding calls. Wrap the Google AI SDK or REST calls with wrap().

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import { GoogleGenerativeAI } from "@google/generative-ai";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_AI_API_KEY!);

const result = await wrap(
  {
    provider: "google-gemini",
    endpoint: "generateContent",
    estimated_cost_usd: 0.00025,
  },
  async () => {
    const model = genAI.getGenerativeModel({ model: "gemini-pro" });
    return model.generateContent("Hello");
  }
);

In the dashboard: Google Gemini shows up with call volume, P95 latency, error rate, and projected spend.

Monitor Clerk API

Monitor Clerk authentication API latency and failures so you know when sign-in, sign-up, or session checks fail. Auth failures are highly visible to users; tracking them in DependWatch helps you react quickly to outages or rate limits.

What we capture: Call count, latency, error rate, and status codes for every Clerk backend call. Set up error-rate alerts to get notified when auth failures spike.

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import { createClerkClient } from "@clerk/backend";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const clerk = createClerkClient({ secretKey: process.env.CLERK_SECRET_KEY! });

const user = await wrap(
  { provider: "clerk", endpoint: "users.getUser" },
  async () => clerk.users.getUser(userId)
);

In the dashboard: Clerk appears in the provider table with latency and error rate. Set up error-rate alerts so you're notified when auth failures spike.

Monitor Auth0 API

Monitor Auth0 API latency and failures so you can detect outages and rate limits before users are locked out. Wrap Auth0 Management API or Authentication API calls (e.g. token exchange, user lookup) with wrap().

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import { ManagementClient } from "auth0";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const auth0 = new ManagementClient({
  domain: process.env.AUTH0_DOMAIN!,
  clientId: process.env.AUTH0_CLIENT_ID!,
  clientSecret: process.env.AUTH0_CLIENT_SECRET!,
});

const users = await wrap(
  { provider: "auth0", endpoint: "users.list", method: "GET" },
  async () => auth0.users.getAll()
);

In the dashboard: Auth0 shows call volume, P95 latency, and error rate. Use error alerts to get notified when Auth0 error rate exceeds a threshold.

Monitor Supabase API

Monitor Supabase API reliability for database, Auth, and Storage. Auth and database failures directly impact users; tracking them in DependWatch helps you spot outages and latency regressions. Wrap Supabase client calls or REST requests with wrap().

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import { createClient } from "@supabase/supabase-js";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const supabase = createClient(url, key);

// Database
const { data } = await wrap(
  { provider: "supabase", endpoint: "from.select", method: "GET" },
  async () => supabase.from("users").select("*").limit(10)
);

// Auth (e.g. server-side session check)
const { data: session } = await wrap(
  { provider: "supabase", endpoint: "auth.getSession" },
  async () => supabase.auth.getSession()
);

In the dashboard: Supabase appears with call volume, latency, and error rate. Use a consistent endpoint (e.g. auth.getSession, from.select) for operation-level breakdowns on Pro/Scale.

Monitor Stripe API

Monitor Stripe API latency and failures so checkout and subscription flows stay reliable. Wrap Stripe SDK calls with wrap() and use clear endpoint names (e.g. customers.create, paymentIntents.create) for easier filtering and operation-level analytics.

TypeScript
import { wrap } from "@dependwatch/sdk-node";

const customer = await wrap(
  { provider: "stripe", endpoint: "customers.create" },
  async () => stripe.customers.create({ email: "user@example.com" })
);

In the dashboard: Stripe appears in the provider table with call volume, P95 latency, and error rate. Add estimated_cost_usd when you track it for projected spend.

Monitor Twilio API

Monitor Twilio API reliability for SMS, voice, and messaging so you can catch delivery failures and rate limits early. Wrap Twilio SDK or HTTP calls with wrap() and pass estimated_cost_usd per message for cost projection.

TypeScript
import { wrap } from "@dependwatch/sdk-node";

const message = await wrap(
  {
    provider: "twilio",
    endpoint: "messages.create",
    estimated_cost_usd: 0.0079,
  },
  async () => twilioClient.messages.create({ to, from, body })
);

In the dashboard: Twilio shows up with calls, P95, error rate, and projected spend when cost is provided.

Monitor AWS APIs

Monitor AWS API dependencies (S3, DynamoDB, Lambda, Bedrock, etc.) so you can see latency, errors, and cost in one place. Wrap AWS SDK v3 calls with wrap() using a consistent provider name per service (e.g. aws-s3, aws-dynamodb, aws-bedrock) for clearer breakdowns.

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const s3 = new S3Client({ region: "us-east-1" });

const result = await wrap(
  { provider: "aws-s3", endpoint: "GetObject", method: "GET" },
  async () =>
    s3.send(
      new GetObjectCommand({ Bucket: "my-bucket", Key: "path/to/file.json" })
    )
);

In the dashboard: Each provider name (e.g. aws-s3, aws-bedrock) appears as its own row. Use operation-level analytics (Pro/Scale) to see per-endpoint latency and errors.

Generic HTTP / fetch

Monitor any REST API by wrapping fetch() or your HTTP client with wrap(). Specify provider and endpoint (and optionally method) so the dashboard groups and labels correctly.

TypeScript
import { wrap } from "@dependwatch/sdk-node";

const data = await wrap(
  { provider: "resend", endpoint: "emails.send", method: "POST" },
  async () =>
    fetch("https://api.resend.com/emails", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ from, to, subject, body }),
    }).then((r) => r.json())
);

Same pattern for Resend, SendGrid, Supabase, or any REST API. Use a consistent provider name so metrics aggregate in one row.

Dashboard Overview

The project dashboard shows high-level metrics for the selected time range (24h, 7d, 30d):

  • Total calls — Sum of instrumented API calls.
  • Avg latency — Average response time across events with duration_ms.
  • Error rate — Fraction of calls where success is false.
  • Projected monthly cost — Extrapolated from total cost in the period (see Cost Estimation).

Charts include call volume over time (bar) and average latency over time (line). The dashboard auto-refreshes when there is no data yet (e.g. right after onboarding).

Latency Tracking

When you use wrap(), the SDK measures duration automatically. When you use track(), you pass duration_ms yourself. The dashboard computes average, P50, P95, and P99 latency per project and per provider. These appear in the overview KPIs and in the per-provider table.
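
P50/P95/P99 are standard percentiles over duration_ms. As a rough sketch, the nearest-rank method looks like this (a common definition; the dashboard's exact computation is not specified here, and percentile is an illustrative helper, not an SDK function):

```typescript
// Nearest-rank percentile over a sample of durations (ms).
function percentile(durationsMs: number[], p: number): number {
  const sorted = [...durationsMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based nearest rank
  return sorted[Math.max(0, rank - 1)];
}

// percentile(sample, 95) gives the P95 latency of the sample.
```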

Error Tracking

Success or failure is derived from the wrapped function (throw = failure) or from the success field when using track(). Optional status_code, error_type, and error_message are stored and shown in the dashboard. The dashboard lists recent failures and can highlight error spikes (periods where a provider’s error rate was unusually high compared to its baseline).

Cost Estimation

Cost is tracked per event via estimated_cost_usd (in wrap() options or track() payload). The dashboard sums cost for the selected period and projects monthly cost by extrapolating: (total cost in period / days in period) × 30. So for a 7-day range, we take the sum of estimated_cost_usd over those 7 days and multiply by 30/7 to get a projected monthly spend. This appears in the overview KPI, the usage card (“Projected API cost monitored”), and per-provider and per-operation tables. We maintain default cost models for known providers where applicable; you can override per project in provider settings. If you don’t pass cost, the provider row still shows latency and errors; cost appears as — or 0.
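
The projection is simple arithmetic; as a sketch (projectMonthlyCost is illustrative, not an SDK function):

```typescript
// (total cost in period / days in period) × 30, as described above.
function projectMonthlyCost(totalCostUsd: number, daysInPeriod: number): number {
  return (totalCostUsd / daysInPeriod) * 30;
}

// A 7-day range that summed to $14 of estimated_cost_usd projects to $60/month.
```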

Provider Breakdown

The dashboard includes a by-provider table: provider name, call count, P95 latency, error rate, and cost (or projected cost). Providers are detected from the provider field you send. Cost spike detection can highlight providers whose projected spend is up significantly versus the previous period (e.g. +30% or more).
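
Spike highlighting is a period-over-period comparison; a minimal sketch (costChangePct is an illustrative helper):

```typescript
// Percent change of projected spend vs the previous period.
// A result of +30 or more matches the "+30% or more" highlight mentioned above.
function costChangePct(current: number, previous: number): number {
  if (previous === 0) return current > 0 ? Infinity : 0;
  return ((current - previous) / previous) * 100;
}
```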

Operation-Level Analytics

Pro and Scale only. Free shows provider-level totals only. On Pro and Scale, DependWatch tracks operations (provider + endpoint), e.g. openai.chat.completions, stripe.paymentIntents.create, twilio.messages.create. The Operations table shows per-operation metrics: calls, P95 latency, error rate, and projected cost. Click a row to open the operation detail: latency distribution (P50/P95/P99), calls over time, cost trend, and recent failures.

This lets you identify which exact endpoint is slow, failing, or expensive — so you can optimize the right API calls.

Event Stream & Recent Failures

The Event stream shows recent API events (provider, endpoint, latency, success). The dashboard refreshes periodically when data is present. Recent failures lists the latest failed calls with timestamp, provider, endpoint, status code, and error message. Click an event for full details (duration, status, error message, estimated cost).

API Intelligence (Insights)

The dashboard Insights card (API Cost Radar) shows auto-generated findings from your events. Pro and Scale only (Free shows provider-level metrics and projected spend but not cost-driver or cost-spike insights):

  • Cost driver — A provider or operation accounts for a large share of projected spend (e.g. ≥50%).
  • Reliability issue — Error rate for a provider or operation is elevated (e.g. ≥5%).
  • Slow endpoint — P95 latency for an operation exceeds a threshold (e.g. 2s).
  • Cost spike — Current period cost is ≥50% higher than the previous period (same window length).

Insights appear as soon as conditions are met; no configuration required.

Guardrails

Guardrails surface abnormal API behavior. Pro and Scale only (Free does not include guardrails). Each type has a clear trigger:

  • Cost spike — Provider cost in the current period is >2.5× the previous period.
  • Error spike — Error rate for a provider/operation is above a threshold (e.g. 5%) with enough calls.
  • Latency spike — P95 latency for an operation exceeds a threshold (e.g. 2s).
  • Traffic anomaly — Call volume for an operation is >3× the baseline (previous period).

Use guardrails to react to cost explosions, reliability regressions, and unexpected traffic before they impact users or invoices. Pro includes cost, error, and latency spike guardrails; traffic anomaly guardrails are Scale-only.
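
The triggers above can be sketched as simple predicates (thresholds copied from this page; minCalls is an assumption standing in for "enough calls", and the service's actual evaluation logic may differ):

```typescript
// Documented guardrail thresholds, expressed as predicates.
const costSpike = (currentUsd: number, previousUsd: number) =>
  previousUsd > 0 && currentUsd > previousUsd * 2.5; // >2.5× previous period

const errorSpike = (errorRate: number, calls: number, minCalls = 50) =>
  calls >= minCalls && errorRate > 0.05; // >5% with enough calls (minCalls assumed)

const latencySpike = (p95Ms: number) => p95Ms > 2000; // P95 above 2s

const trafficAnomaly = (currentCalls: number, baselineCalls: number) =>
  baselineCalls > 0 && currentCalls > baselineCalls * 3; // >3× baseline
```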

Dependency Map

Pro and Scale only. The Dependency map shows every external provider and operation your project depends on: call volume, reliability score (1 − error rate), P95 latency, and cost contribution. It is a single view of your API dependency graph — no manual setup. Use it to see which providers and endpoints are critical, which are slow or unreliable, and where cost is concentrated. The dashboard table lists providers with these metrics; operation-level detail is in the Operations table.

Reliability & Cost per Provider

In the Dependency map, reliability is computed as 1 − error rate (0–100%). A provider at 99% reliability has a 1% error rate. Cost is the sum of estimated_cost_usd for the selected period. Together with latency (P50, P95), this gives you a quick picture of each dependency’s health and impact. Use it to prioritize fixes and to discuss SLAs with providers.
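
As a quick sketch of that computation (reliabilityPct is an illustrative helper, not part of the SDK):

```typescript
// Reliability = 1 − error rate, shown as a percentage (0–100).
function reliabilityPct(errorRate: number): number {
  return (1 - errorRate) * 100;
}

// A provider with a 1% error rate scores 99% reliability.
```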

Control & Protection (Foundation)

DependWatch today delivers observability (metrics, event stream, failures) and intelligence (insights, guardrails, dependency map). We do not run or enforce retry, fallback, or circuit-breaker logic in your request path. You implement those in your application code; the dashboard and guardrails tell you when a provider is failing or when cost is spiking so you can act. Policy configuration and runtime enforcement are on our roadmap. See Retry & Fallback Patterns for how to implement protection in code today.

Retry & Fallback Patterns

When an external API fails or is slow, you can implement:

  • Retry with backoff — Retry failed calls with exponential backoff (e.g. 1s, 2s, 4s) and a max attempt count. Use wrap() around each attempt so DependWatch records every call; guardrails will surface error spikes if retries explode.
  • Fallback — On failure, call a backup provider or return a cached/default response. Instrument both code paths with wrap() so you see success/failure and cost for primary vs fallback.
  • Circuit breaker — After N failures in a window, stop calling the provider for a cooldown period. Implement in code; use the dashboard to see when a provider is unhealthy so you can tune thresholds.

DependWatch does not enforce these policies at runtime. Implement them in your app and use the dashboard to monitor provider health and alert when guardrails fire.
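
A minimal retry-with-backoff sketch (a generic helper; in your app you would wrap each attempt with wrap() so DependWatch records every call, as noted above, so the SDK call is left as a comment here):

```typescript
// Exponential backoff: delays of baseDelayMs, 2×, 4×, ... up to maxAttempts.
// Wrap each attempt with wrap() from @dependwatch/sdk-node so every call,
// including failed attempts, shows up in the dashboard and guardrails.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  { maxAttempts = 3, baseDelayMs = 1000 } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      // return await wrap({ provider: "openai", endpoint: "chat.completions" }, fn);
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

On final failure the last error is rethrown, so the caller can fall back to a backup provider or a cached response (instrumenting both paths, per the fallback pattern above).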

Latency Alerts

In Project → Settings you can configure alert rules. A latency alert triggers when the observed latency (e.g. P95) exceeds a threshold (in milliseconds). Free: 1 alert rule; delivery is in-app only (no Slack). Pro: up to 10 rules and up to 3 Slack webhooks. Scale: unlimited rules and Slack webhooks. Add your webhook URL in Project → Settings → Alerts; alerts are sent to Slack when thresholds are exceeded. A cooldown (plan-dependent, e.g. 30 min Free, 5 min Pro, 1 min Scale) prevents the same rule from firing repeatedly.
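
Cooldown logic of this kind is typically a timestamp check; a sketch (shouldFire is illustrative, not the service's actual implementation):

```typescript
// A rule fires only if its cooldown has elapsed since it last fired.
function shouldFire(lastFiredAtMs: number | null, nowMs: number, cooldownMs: number): boolean {
  return lastFiredAtMs === null || nowMs - lastFiredAtMs >= cooldownMs;
}

// With a 30-minute cooldown (Free), a rule that fired 10 minutes ago stays quiet.
```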

Error Alerts

An error rate alert triggers when the error rate for the project (or per provider) exceeds a configured percentage. Use it to catch regressions or provider outages. Delivery: Slack only (when webhooks are configured); Free has no webhooks. Same cooldown applies.

Cost Spike Alerts

A budget alert triggers when the projected monthly cost exceeds a configured budget (USD). This helps you avoid invoice surprises. Configure the monthly budget in the alert rule; when projected spend crosses it, you get notified via your configured Slack webhooks (Pro/Scale). Cooldown works the same as for latency and error alerts.

Ingest API

Events are sent to POST /api/ingest. The SDK uses this automatically; you can also send batches from your own code. Authenticate with Authorization: Bearer <ingest_key> or X-DependWatch-Key: <ingest_key>. Rate limit: 300 requests per minute per project. Request body: { "events": [ ... ] } with 1–100 events per request (see Event Schema).

TypeScript
const res = await fetch("https://app.dependwatch.app/api/ingest", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + process.env.DEPENDWATCH_INGEST_KEY,
  },
  body: JSON.stringify({
    events: [
      {
        provider: "openai",
        endpoint: "chat.completions",
        duration_ms: 1200,
        success: true,
        estimated_cost_usd: 0.002,
      },
    ],
  }),
});

Event Schema

Each event in the events array can include:

  • provider (string, required) — e.g. openai, stripe. Max 64 chars; stored lowercase.
  • timestamp (optional) — ISO string or number (ms). Defaults to now.
  • endpoint (optional, max 256) — Operation name (e.g. chat.completions, paymentIntents.create). Used for operation-level analytics and insights.
  • service_name (optional, max 128), method (optional, max 16) — For grouping/labels.
  • environment (optional) — dev | staging | prod | test.
  • duration_ms (optional) — Response time in milliseconds. Used for latency percentiles (P50, P95, P99).
  • status_code, success (optional) — HTTP status code and boolean; when success is omitted, it is derived from status_code < 400.
  • error_type, error_message (optional) — Truncated to 64 / 512 chars. Shown in recent failures and event details.
  • request_count (optional) — Default 1; 1–10000 for batching multiple logical calls.
  • estimated_cost_usd (optional) — For cost projection and cost-driver insights.
  • metadata, region (optional) — Extra context. model and provider_request_id are stored in metadata when provided.

API Keys

Ingest keys identify your project when sending events. They are created per project in the dashboard (on project creation or in Project → Settings → Ingest API keys). Each key has a name and a prefix (e.g. dw_live_); the full key is shown only once. Keys are stored as a hash; verification is done by comparing the hash of the provided key. Keep keys secret and use environment variables; never expose them in client-side code or public repos.

Key Rotation

You can create new ingest keys and revoke existing ones from Project → Settings. Rotate means creating a new key and revoking the old one in one step (e.g. from the dashboard “Rotate key” action). After rotation, apps still using the old key will get 401 from the ingest API; update them to the new key. The dashboard shows the new key once — copy it immediately.

Environment Variables

Use environment variables for all secrets. Recommended: DEPENDWATCH_INGEST_KEY for the ingest key. For self-hosted or local ingest URL, set DEPENDWATCH_INGEST_URL (or the SDK will use NEXT_PUBLIC_APP_URL if set). See Reference → Environment Variables for the full list.

MCP Integration

DependWatch supports the Model Context Protocol (MCP) so Cursor and Claude Code can search docs, list projects, send test events, and read metrics. Below: copy-paste setup and per-provider prompts and code so you can integrate in one go.

Step 1: Get your MCP token

In the app: Project → Connect assistant or Settings. Create an MCP access token; copy it. You’ll paste it into the config below.

Step 2: Cursor — copy-paste config

Create or edit .cursor/mcp.json in your project (or use Settings → Tools & MCP → Add new MCP server). Replace YOUR_MCP_ACCESS_TOKEN with your token, then restart Cursor.

json
{
  "mcpServers": {
    "dependwatch": {
      "type": "streamableHttp",
      "url": "https://app.dependwatch.app/api/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_MCP_ACCESS_TOKEN"
      }
    }
  }
}

Step 3: Claude Code — copy-paste config

In your Claude client’s MCP config (e.g. Claude Desktop config file), add the same server. Replace YOUR_MCP_ACCESS_TOKEN with your token and restart.

json
{
  "mcpServers": {
    "dependwatch": {
      "type": "streamableHttp",
      "url": "https://app.dependwatch.app/api/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_MCP_ACCESS_TOKEN"
      }
    }
  }
}

Prompts you can paste (any provider)

  • Search DependWatch docs for OpenAI integration
  • List my DependWatch projects
  • Send a test event to my DependWatch project
  • Show me the latest provider metrics from DependWatch

Per-provider: prompts + copy-paste code

Use the prompts in Cursor/Claude, then add the SDK snippet to your app so events show up in DependWatch.

OpenAI

Prompts to paste: "Search DependWatch docs for OpenAI setup" · "Send a test event for OpenAI"

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import OpenAI from "openai";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const openai = new OpenAI();

const completion = await wrap(
  { provider: "openai", endpoint: "chat.completions", estimated_cost_usd: 0.002 },
  async () => openai.chat.completions.create({ model: "gpt-4", messages: [{ role: "user", content: "Hello" }] })
);

Full details: Monitor OpenAI API

Google Gemini

Prompts to paste: "Search DependWatch docs for Gemini" · "Send a test event for Gemini"

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import { GoogleGenerativeAI } from "@google/generative-ai";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_AI_API_KEY!);

const result = await wrap(
  { provider: "google-gemini", endpoint: "generateContent", estimated_cost_usd: 0.00025 },
  async () => {
    const model = genAI.getGenerativeModel({ model: "gemini-pro" });
    return model.generateContent("Hello");
  }
);

Full details: Monitor Google Gemini API

Anthropic (Claude)

Prompts to paste: "Search DependWatch docs for Anthropic" · "Send a test event for Claude"

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import Anthropic from "@anthropic-ai/sdk";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const anthropic = new Anthropic();

const message = await wrap(
  { provider: "anthropic", endpoint: "messages.create", estimated_cost_usd: 0.003 },
  async () => anthropic.messages.create({ model: "claude-3-5-sonnet-20241022", max_tokens: 1024, messages: [{ role: "user", content: "Hello" }] })
);

Full details: Monitor Anthropic API

Mistral

Prompts to paste: "Search DependWatch docs for Mistral" · "Send a test event for Mistral"

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });

const response = await wrap(
  { provider: "mistral", endpoint: "chat.completions", estimated_cost_usd: 0.0002 },
  async () =>
    fetch("https://api.mistral.ai/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json", "Authorization": "Bearer " + process.env.MISTRAL_API_KEY },
      body: JSON.stringify({ model: "mistral-small", messages: [{ role: "user", content: "Hello" }] }),
    }).then((r) => r.json())
);

Full details: Monitor Mistral API

xAI (Grok)

Prompts to paste: "Search DependWatch docs for xAI" · "Send a test event for xAI"

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });

const response = await wrap(
  { provider: "xai", endpoint: "chat.completions", estimated_cost_usd: 0.001 },
  async () =>
    fetch("https://api.x.ai/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json", "Authorization": "Bearer " + process.env.XAI_API_KEY },
      body: JSON.stringify({ model: "grok-beta", messages: [{ role: "user", content: "Hello" }] }),
    }).then((r) => r.json())
);

Alibaba (Qwen / DashScope)

Prompts to paste: "Search DependWatch docs for Alibaba Qwen" · "Send a test event for Alibaba"

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });

const response = await wrap(
  { provider: "alibaba", endpoint: "chat.completions" },
  async () =>
    fetch("https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json", "Authorization": "Bearer " + process.env.DASHSCOPE_API_KEY },
      body: JSON.stringify({ model: "qwen-turbo", messages: [{ role: "user", content: "Hello" }] }),
    }).then((r) => r.json())
);

Cohere

Prompts to paste: "Search DependWatch docs for Cohere" · "Send a test event for Cohere"

TypeScript
import { init, wrap } from "@dependwatch/sdk-node";
import { CohereClient } from "cohere-ai";

init({ ingestKey: process.env.DEPENDWATCH_INGEST_KEY! });
const cohere = new CohereClient({ token: process.env.COHERE_API_KEY! });

const response = await wrap(
  { provider: "cohere", endpoint: "chat", estimated_cost_usd: 0.0002 },
  async () => cohere.chat({ model: "command", message: "Hello" })
);

Other providers (Azure OpenAI, Together, Replicate, Meta/Llama): use the same pattern. Set provider to a consistent name (e.g. azure-openai, together, replicate, meta) and wrap your API calls with wrap(). There is no fixed provider list; the dashboard and guardrails work for any provider name you send.

LLM providers reference

Quick reference for provider names and doc links. For copy-paste MCP setup and code, see MCP Integration above.

Using DependWatch in Cursor

After adding the DependWatch MCP server (see MCP Integration), restart Cursor. You can then use prompts like: “Search DependWatch docs for OpenAI integration”, “List my DependWatch projects”, “Send a test event to my project.”

Using DependWatch in Claude Code

After adding the DependWatch MCP server (see MCP Integration), restart your Claude client. Use the same prompts as in Cursor to search docs, list projects, send test events, or show metrics.

Events

An event is a single recorded API call: provider, timing, success/failure, optional cost and metadata. Events are sent in batches to the ingest API, stored per project, and aggregated for the dashboard (latency percentiles, error rate, cost). Retention depends on your plan (e.g. 7, 90, or 365 days).
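The event fields named on this page can be sketched as a TypeScript shape. This is illustrative, assembled from the fields and limits documented here (provider, duration_ms, success, error_type, etc.); the SDK's own ApiCallEvent type is authoritative.

```typescript
// Illustrative shape of a single event, based on fields named in this page.
// The SDK's exported ApiCallEvent type is the authoritative definition.
interface ApiCallEvent {
  provider: string;            // e.g. "openai", "stripe" (max 64 chars)
  endpoint?: string;           // e.g. "chat.completions" (max 256 chars)
  method?: string;             // e.g. "POST" (max 16 chars)
  duration_ms: number;         // measured call duration
  success: boolean;            // false if the call failed or threw
  estimated_cost_usd?: number; // optional per-call cost
  error_type?: string;         // max 64 chars
  error_message?: string;      // max 512 chars
}

const event: ApiCallEvent = {
  provider: "openai",
  endpoint: "chat.completions",
  duration_ms: 412,
  success: true,
  estimated_cost_usd: 0.003,
};

console.log(event.provider, event.duration_ms);
```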

Providers

A provider is the external API or service you’re calling (e.g. openai, stripe, twilio). You set the provider name in each event. The dashboard groups metrics by provider. We maintain a provider catalog with optional default cost models; you can override cost per project in provider settings.

Cost Estimation

Cost is estimated from estimated_cost_usd per event (or from catalog/override rules when applicable). The dashboard sums cost in the selected period and projects monthly cost by extrapolating over 30 days. This gives a “projected API spend” view so you can catch cost spikes before the invoice.
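The projection arithmetic described above can be sketched as a small function: sum the per-event costs in the selected window, then extrapolate to 30 days. The formula is an assumption based on this description, not the exact dashboard implementation.

```typescript
// Sketch of the "projected API spend" arithmetic: sum estimated_cost_usd
// over the selected window, then extrapolate linearly to 30 days.
function projectMonthlyCost(costs: number[], windowDays: number): number {
  const total = costs.reduce((sum, c) => sum + c, 0);
  return (total / windowDays) * 30;
}

// $1.20 of spend over a 7-day window projects to roughly $5.14/month.
const projected = projectMonthlyCost([0.4, 0.5, 0.3], 7);
```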

Project & Workspace Model

Workspaces group projects (e.g. one per company or team). Projects are the scope for ingest keys, events, and dashboard metrics. One project has one or more ingest keys; all keys for that project send events to the same dataset. Billing is at the workspace level (Stripe subscription); plans define limits such as max providers and retention.

SDK API Reference

init(config)config.ingestKey (required), baseUrl, environment, flushIntervalMs, maxBatchSize. Call once at startup. Returns the client instance.

wrap(options, fn)options.provider (required), endpoint, method, estimated_cost_usd, service_name. fn is () => Promise<T>. Returns the result of fn; on throw, records failure and rethrows.
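Conceptually, wrap times the call, records success or failure, and rethrows on error. The following is a self-contained sketch of that behavior, not the SDK's real implementation (wrapSketch and the recorded array are illustrative stand-ins for the SDK's internal queue).

```typescript
// Minimal sketch of wrap's contract: time the call, record one event,
// return fn's result, and rethrow on failure. Illustrative only.
type WrapOptions = { provider: string; endpoint?: string; estimated_cost_usd?: number };

const recorded: Array<{ provider: string; duration_ms: number; success: boolean }> = [];

async function wrapSketch<T>(options: WrapOptions, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    const result = await fn();
    recorded.push({ provider: options.provider, duration_ms: Date.now() - start, success: true });
    return result; // fn's result is returned unchanged
  } catch (err) {
    recorded.push({ provider: options.provider, duration_ms: Date.now() - start, success: false });
    throw err; // failure is recorded, then the original error is rethrown
  }
}
```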

track(event)event is ApiCallEvent (provider, duration_ms, success, etc.). No return. Call after init; events are queued and sent in batches.

getClient() — Returns the current client, or null if init() has not been called.

Span / startSpan(options) — Manual spans: call span.ok(statusCode) or span.fail(statusCode, errorType, errorMessage), then span.end(...).

trackCompleted(event) — Convenience wrapper around track() that does not throw.
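The manual-span lifecycle (start, then ok or fail, then end) can be sketched as follows. Method names follow the reference above; the class internals and returned shape are assumptions, not the SDK's real Span.

```typescript
// Hypothetical sketch of the span lifecycle: start, mark ok/fail, end.
// Method names match the reference; internals are illustrative only.
class SpanSketch {
  private start = Date.now();
  statusCode?: number;
  success = true;
  errorType?: string;
  errorMessage?: string;

  ok(statusCode: number): this {
    this.statusCode = statusCode;
    this.success = true;
    return this;
  }

  fail(statusCode: number, errorType: string, errorMessage: string): this {
    this.statusCode = statusCode;
    this.success = false;
    this.errorType = errorType;
    this.errorMessage = errorMessage;
    return this;
  }

  end(): { duration_ms: number; success: boolean; statusCode?: number } {
    return { duration_ms: Date.now() - this.start, success: this.success, statusCode: this.statusCode };
  }
}

const span = new SpanSketch();
const finished = span.ok(200).end();
```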

Environment Variables

SDK / app:

  • DEPENDWATCH_INGEST_KEY — Project ingest key (required).
  • DEPENDWATCH_INGEST_URL — Base URL for the ingest API (e.g. self-hosted).
  • NEXT_PUBLIC_APP_URL — Fallback base URL if the ingest URL is not set.
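A minimal .env sketch for the SDK variables above (the key value is a placeholder; the actual key format may differ):

```shell
# Required: project ingest key (shown once at creation or rotation)
DEPENDWATCH_INGEST_KEY=dw_live_xxxxxxxxxxxx
# Optional: ingest API base URL, e.g. for a self-hosted deployment
DEPENDWATCH_INGEST_URL=https://dependwatch.internal.example.com
```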

Server (Next.js app):

  • DATABASE_URL
  • NEXTAUTH_SECRET, NEXTAUTH_URL
  • AUTH_GOOGLE_ID / AUTH_GOOGLE_SECRET, AUTH_GITHUB_ID / AUTH_GITHUB_SECRET
  • SENDGRID_API_KEY or SMTP_*, EMAIL_FROM, AUTH_RESEND_KEY
  • STRIPE_*
  • NEXT_PUBLIC_APP_URL

See the README or .env.example for the full list.

Limits

Ingest API: 300 requests per minute per project; 1–100 events per request. Event field limits: provider 64 chars, endpoint 256, method 16, error_type 64, error_message 512, request_count 1–10000.
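Client-side batching has to respect the 1–100 events-per-request limit above. A sketch of the chunking (the helper name and queue handling are assumptions, not SDK internals):

```typescript
// Sketch of splitting a queued event list into request-sized batches,
// respecting the documented 1-100 events-per-request limit.
function chunkEvents<T>(events: T[], maxBatchSize = 100): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < events.length; i += maxBatchSize) {
    batches.push(events.slice(i, i + maxBatchSize));
  }
  return batches;
}

// 250 queued events become three requests: 100 + 100 + 50.
const batches = chunkEvents(Array.from({ length: 250 }, (_, i) => i));
```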

Plans (usage-based):

  • Free — 10,000 events/month, 2 APIs (distinct providers), 7-day event history, 1 alert rule (no Slack). Dashboard shows provider-level metrics and projected spend. No Operations table, no guardrails, no cost-spike detection.
  • Pro ($29/mo) — 100,000 events/month, 10 APIs, 90-day event history, up to 10 alert rules and 3 Slack webhooks. Adds: Operations table (per-endpoint analytics), cost-spike guardrails, cost-driver insights, guardrails (cost/error/latency/traffic), digest delivery via cron. No anomaly detection.
  • Scale ($99/mo) — 1,000,000 events/month, unlimited APIs, 365-day event history, unlimited alert rules and Slack webhooks, and anomaly detection. Includes everything in Pro.

Event history = how long we keep your event data for charts, trends, and debugging. APIs monitored = distinct providers you send events for (e.g. OpenAI, Stripe). Upgrade when you need more APIs, longer history, or Slack alerts. See Pricing for the full comparison.