Request Logs & I/O

Turn every API call into observable, debuggable evidence

From status code to prompt and model output, your team gets complete visibility for faster debugging and higher reliability.

Deep Search

Filter by provider, project, status, model, and timeframe in seconds.

I/O Inspection

View parsed messages and raw JSON to verify behavior and formatting.

Debug Velocity

Identify failures quickly and reduce time-to-resolution for production issues.

Request Explorer Demo

165 total requests
Provider: All · Status: Success · Last 24h
Status  Provider   Model              Latency  View
200     openai     gpt-4o             0.81s    Inspect
200     anthropic  claude-3.5-sonnet  1.29s    Inspect
500     gemini     gemini-1.5-pro     2.12s    Inspect
200     openai     gpt-4o-mini       0.54s    Inspect

Raw JSON Preview

{
  "model": "gpt-4o",
  "messages": [{"role":"user","content":"Say something random"}],
  "response": {
    "finish_reason": "stop",
    "usage": {"prompt_tokens": 81, "completion_tokens": 56}
  }
}

Debugging depth

Compare parsed messages with raw payloads to catch prompt formatting, role ordering, and schema drift issues.
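As a minimal sketch of that kind of check, the snippet below loads a raw payload shaped like the JSON preview above and flags role-ordering and schema-drift issues. The payload shape and the `check_role_order` helper are illustrative assumptions, not the product's API.

```python
import json

# Hypothetical raw payload, shaped like the JSON preview above.
raw = """
{
  "model": "gpt-4o",
  "messages": [{"role": "user", "content": "Say something random"}],
  "response": {
    "finish_reason": "stop",
    "usage": {"prompt_tokens": 81, "completion_tokens": 56}
  }
}
"""

def check_role_order(messages):
    """Flag drift issues: unknown roles, or a system prompt
    appearing anywhere other than first."""
    allowed = {"system", "user", "assistant", "tool"}
    issues = []
    for i, msg in enumerate(messages):
        role = msg.get("role")
        if role not in allowed:
            issues.append(f"message {i}: unknown role {role!r}")
        if role == "system" and i != 0:
            issues.append(f"message {i}: system prompt not first")
    return issues

payload = json.loads(raw)
print(check_role_order(payload["messages"]))  # [] -> no issues found
```

Running the same check against the parsed messages and the raw payload makes silent divergence between the two immediately visible.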

Incident handoffs

Share specific request IDs and I/O snapshots between platform and app teams without copy/paste confusion.

Quality checks

Validate system prompts and final completions when rolling out model, prompt, or routing changes.

Review workflow

  1. Filter by incident window and status codes.
  2. Open the failing request and inspect system/user prompts.
  3. Compare response body and provider error metadata.
  4. Validate fix with subsequent successful calls.
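Step 1 of the workflow above can be sketched as a simple filter over log records. The record fields (`ts`, `status`, `provider`) are assumptions for illustration, not the product's actual log schema.

```python
from datetime import datetime, timedelta, timezone

# Illustrative log records; field names are assumptions, not the product schema.
logs = [
    {"ts": datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc), "status": 500, "provider": "gemini"},
    {"ts": datetime(2024, 5, 1, 12, 5, tzinfo=timezone.utc), "status": 200, "provider": "openai"},
    {"ts": datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc), "status": 500, "provider": "openai"},
]

def incident_slice(records, start, end, statuses):
    """Narrow to the incident window and the failing status codes."""
    return [r for r in records if start <= r["ts"] <= end and r["status"] in statuses]

window_start = datetime(2024, 5, 1, 11, 30, tzinfo=timezone.utc)
window_end = window_start + timedelta(hours=1)
failing = incident_slice(logs, window_start, window_end, {500, 502, 503})
print([r["provider"] for r in failing])  # ['gemini']
```

With the slice narrowed, steps 2–4 happen per request: open it, inspect prompts, and compare provider error metadata.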

What teams learn fast

  • Prompt regressions tied to specific deploy windows.
  • Provider-specific response formatting differences.
  • Latency outliers clustered by model or endpoint.
  • Error bursts connected to one project or key.
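Spotting latency outliers clustered by model, as in the third bullet, amounts to grouping latencies per model and ranking the averages. The sample values below mirror the demo table and are purely illustrative.

```python
from collections import defaultdict
from statistics import mean

# Illustrative (model, latency-in-seconds) samples, echoing the demo table.
samples = [
    ("gpt-4o", 0.81), ("claude-3.5-sonnet", 1.29),
    ("gemini-1.5-pro", 2.12), ("gpt-4o-mini", 0.54),
    ("gpt-4o", 0.92), ("gemini-1.5-pro", 2.40),
]

by_model = defaultdict(list)
for model, latency in samples:
    by_model[model].append(latency)

# Mean latency per model, slowest first: outlier clusters surface at the top.
ranked = sorted(((mean(v), m) for m, v in by_model.items()), reverse=True)
for avg, model in ranked:
    print(f"{model}: {avg:.2f}s")
```

The same grouping works for endpoints, projects, or API keys to surface the other patterns in the list.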

Built for high-volume AI apps

Request logs are your forensic layer. Keep them readable, searchable, and immediately actionable for on-call teams.

Filter · Inspect · Correlate · Resolve

FAQ

What can I inspect in AI Cost Board request logs?

You can inspect provider, model, status code, latency, token usage, parsed messages, and raw request/response payloads for debugging.

Does request logging help with prompt debugging?

Yes. Prompt and response inspection helps identify formatting issues, schema drift, and regressions after model or prompt changes.

Can I filter logs by project and provider?

Yes. Logs can be filtered by provider, project, status, model, and time range to speed up incident investigation.

Is this useful for production incidents?

Yes. Request logs are the forensic layer for latency spikes, failures, and cost anomalies in production AI workflows.