# AI Integration
Dokkimi is designed to work with AI assistants out of the box.
## How it works
When you install Dokkimi, it updates your AI assistant's context files so the assistant automatically knows about Dokkimi's definition format, CLI commands, and capabilities. There's nothing to configure — your AI assistant is ready to write and debug Dokkimi tests immediately after installation.
Specifically, Dokkimi adds a reference to `~/.dokkimi/dokkimi-instructions.md` (the complete specification) to your assistant's context configuration. This means tools like Claude Code, Cursor, and similar AI coding assistants can:
- Write test definitions from natural language descriptions
- Add assertions to existing tests
- Create shared fragments and wire up `$ref` references
- Debug failing tests using structured output
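As an illustration, the installed reference amounts to a short pointer in the assistant's context file (for example, a `CLAUDE.md`). The exact wording Dokkimi writes may differ; this sketch only shows the idea:

```markdown
<!-- Added by Dokkimi install (illustrative wording) -->
Read ~/.dokkimi/dokkimi-instructions.md for the Dokkimi test definition
format, CLI commands, and capabilities before writing or debugging tests.
```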
## Writing tests with AI
Just describe what you want to test. Your AI assistant already has the full Dokkimi spec in context:
```
# In your AI assistant:
"Write a Dokkimi test definition that tests the checkout flow.
I have an API gateway (port 3000), an order service (port 3001),
and a Postgres database. Mock Stripe for payment processing."
```

The AI will generate a valid definition file with the correct item types, connection strings, mock configuration, and assertion blocks; there is no need to tell it where to find the docs.
## Debugging with `dokkimi dump`
When a test fails, `dokkimi dump` exports the entire run as structured JSON — captured HTTP traffic, console logs, assertion results, and timing data:
```shell
# Export only failed instances
dokkimi dump --failed -o failures.json
```

Your AI assistant can read this file and diagnose the failure:
```
# In your AI assistant:
"The checkout test failed. Read failures.json and trace through
the captured traffic to tell me what went wrong."
```

The dump contains everything the AI needs to diagnose the issue without access to your cluster: request/response bodies, inter-service calls, service logs, and the full test definition.
## The dump file

The JSON output from `dokkimi dump` includes:
- Definition — the full resolved test definition (items, variables, tests)
- HTTP logs — every inter-service HTTP call with request and response bodies, headers, and timing
- Console logs — stdout/stderr from every service, with timestamps and detected log levels
- Assertion results — which assertions passed, which failed, and the expected vs actual values
- Database logs — queries executed and their results
- Execution logs — step-by-step timeline of the test run
See the CLI Reference for the full output structure.
## Tips for AI-assisted workflows
- **Use `dokkimi validate` after generation.** Run `dokkimi validate` on AI-generated definitions to catch schema errors before deploying.
- **Save dumps from CI.** Add `dokkimi dump --failed -o artifacts/failures.json` to your CI failure handler so dump files are always available for debugging.
- **Include your service code.** When debugging, share the relevant endpoint handler code alongside the dump. The dump shows what happened; your code shows why.
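Putting the first two tips together, the CI wiring might look like this GitHub Actions sketch. The `dokkimi run` command and the `checkout.yaml` filename are placeholders for however you actually execute your tests; only `dokkimi validate` and the `dokkimi dump` flags come from this page:

```yaml
# Sketch only; substitute your real test-execution command.
- name: Validate definitions
  run: dokkimi validate checkout.yaml

- name: Run tests
  run: dokkimi run checkout.yaml

- name: Dump failures for debugging
  if: failure()
  run: dokkimi dump --failed -o artifacts/failures.json

- name: Upload failure dump
  if: failure()
  uses: actions/upload-artifact@v4
  with:
    name: dokkimi-failures
    path: artifacts/failures.json
```

With this in place, every red build carries a `failures.json` artifact you can drop straight into an AI assistant.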