Verial exposes its full platform to AI coding agents (Claude Code, Cursor, ChatGPT Desktop, and any MCP-capable client) so they can compose environments, author benchmarks, run evaluations, and inspect results without leaving the chat. Pick the integration that fits your workflow:

MCP Server

Connect any MCP-capable client to Verial and operate the platform through structured tool calls.
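
As a sketch, MCP-capable clients are typically pointed at a server through their configuration file. The server package name, launch command, and environment variable below are placeholders for illustration, not Verial's published values:

```json
{
  "mcpServers": {
    "verial": {
      "command": "npx",
      "args": ["-y", "verial-mcp-server"],
      "env": { "VERIAL_API_KEY": "<your-org-api-key>" }
    }
  }
}
```

Once registered, the client discovers Verial's tools and their schemas automatically and can invoke them as structured tool calls.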

CLI

Drive benchmarks from an agent’s shell tool. Supports --json for machine-readable output.
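
A minimal TypeScript sketch of how an agent-side script might consume that machine-readable output. The `verial run` command name, the payload shape, and the field names are illustrative assumptions, not the CLI's documented schema:

```typescript
// Hypothetical shape of the CLI's --json output for a benchmark run;
// field names are assumptions, not the documented schema.
interface RunResult {
  runId: string;
  status: "passed" | "failed";
  criteria: { name: string; score: number }[];
}

// A payload as an agent might capture it from stdout, e.g. after
// running `verial run <benchmark-id> --json` (command name assumed).
const stdout = `{
  "runId": "run_123",
  "status": "passed",
  "criteria": [
    { "name": "fhir-log-integrity", "score": 0.92 },
    { "name": "transcript-accuracy", "score": 0.88 }
  ]
}`;

const result: RunResult = JSON.parse(stdout);

// Surface criterion-level signal rather than a bare pass/fail.
const weakest = result.criteria.reduce((a, b) => (a.score <= b.score ? a : b));
console.log(`${result.status}: weakest criterion is ${weakest.name} (${weakest.score})`);
```

Parsing the structured output this way lets the agent act on the weakest criterion instead of just reporting overall status.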

SDK

Use the TypeScript SDK from an agent’s code-execution tool.
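
A sketch of the call pattern an agent's code-execution tool might follow. The client interface, method name, and return shape below are assumptions for illustration (backed here by an in-memory stub so the snippet is self-contained), not the SDK's actual API:

```typescript
// Hypothetical client surface; the real SDK's names and signatures may differ.
interface BenchmarkRun {
  id: string;
  status: "queued" | "running" | "complete";
}

interface VerialClient {
  runBenchmark(benchmarkId: string): Promise<BenchmarkRun>;
}

// In-memory stub standing in for the real client, so the call
// pattern can be shown without network access or credentials.
const client: VerialClient = {
  async runBenchmark(benchmarkId) {
    return { id: `run-for-${benchmarkId}`, status: "queued" };
  },
};

async function main(): Promise<BenchmarkRun> {
  const run = await client.runBenchmark("bench_001");
  console.log(`started ${run.id} (${run.status})`);
  return run;
}

main();
```

The promise-based shape matters for agents: a code-execution tool can kick off a run, await it, and branch on the result in the same script.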

REST API

Language-agnostic HTTP API for custom agent integrations.
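
As one illustration in TypeScript, a custom integration might assemble an authenticated request like this. The base URL, path, and bearer-token header are placeholders; the real endpoints and authentication scheme are in the API reference:

```typescript
// Placeholder base URL and auth scheme; consult the REST API
// reference for the real endpoints and header format.
const BASE_URL = "https://api.example.com/v1";

function buildListBenchmarksRequest(apiKey: string): Request {
  return new Request(`${BASE_URL}/benchmarks`, {
    method: "GET",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      Accept: "application/json",
    },
  });
}

const req = buildListBenchmarksRequest("sk-test-key");
console.log(`${req.method} ${req.url}`);
```

Because every resource is a plain REST object, the same request-building pattern covers listing, introspecting, and editing from any language with an HTTP client.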

Why AI agents on Verial?

Healthcare AI teams increasingly use coding agents to build, tune, and ship their own agents. Verial is built so those coding agents can close the loop on quality:
  • Self-testing. An agent can author a benchmark, run it against its own production configuration, and read back per-criterion evidence in the same conversation.
  • Tight feedback. Criterion-level scores and structured evidence (FHIR logs, voice transcripts, portal events) give coding agents actionable signal, not just a pass/fail.
  • No hidden state. Every resource is a first-class REST object. Agents can list, introspect, and edit without leaning on UI-only flows.
Getting Started

  1. Create a dedicated organization API key for your coding agent and scope it to a non-production environment. See Solver Keys and Authentication.
  2. Connect the agent. MCP is the richest integration (native tool calls with schemas); CLI is the simplest if your agent already has a shell tool.
  3. Start with a published benchmark. Point the agent at Running a Benchmark and let it drive a first rollout end to end.

Next Steps

Guided Onboarding

Walk an agent through connecting to Verial and running its first evaluation.

Agent Skills

Drop-in expertise for coding agents operating on Verial.