AI Integration

Dokkimi is designed to work with AI assistants out of the box.

How it works

When you install Dokkimi, it updates your AI assistant's context files so the assistant automatically knows about Dokkimi's definition format, CLI commands, and capabilities. There's nothing to configure — your AI assistant is ready to write and debug Dokkimi tests immediately after installation.

Specifically, Dokkimi adds a reference to ~/.dokkimi/dokkimi-instructions.md (the complete specification) in your assistant's context configuration. This means tools like Claude Code, Cursor, and similar AI coding assistants can:

- Write valid Dokkimi test definitions from a plain-language description
- Run and debug tests using the Dokkimi CLI
- Diagnose failures from dokkimi dump output
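As a quick sanity check after installation, you can confirm the spec file is in place. This is a minimal sketch in Python; the path comes from the docs above, and any scripting language would do:

```python
from pathlib import Path

def dokkimi_spec_path() -> Path:
    """Location of the full Dokkimi specification, as installed."""
    return Path.home() / ".dokkimi" / "dokkimi-instructions.md"

spec = dokkimi_spec_path()
if spec.exists():
    print(f"Dokkimi spec found at {spec}")
else:
    print("Spec missing; re-run the Dokkimi installer")
```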

Writing tests with AI

Just describe what you want to test. Your AI assistant already has the full Dokkimi spec in context:

# In your AI assistant:
"Write a Dokkimi test definition that tests the checkout flow.
I have an API gateway (port 3000), an order service (port 3001),
and a Postgres database. Mock Stripe for payment processing."

The AI will generate a valid definition file with the correct item types, connection strings, mock configuration, and assertion blocks — no need to tell it where to find the docs.

Debugging with dokkimi dump

When a test fails, dokkimi dump exports the entire run as structured JSON — captured HTTP traffic, console logs, assertion results, and timing data:

# Export only failed instances
dokkimi dump --failed -o failures.json

Your AI assistant can read this file and diagnose the failure:

# In your AI assistant:
"The checkout test failed. Read failures.json and trace through
the captured traffic to tell me what went wrong."

The dump contains everything the AI needs to diagnose the issue without access to your cluster — request/response bodies, inter-service calls, service logs, and the full test definition.
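Before handing the file to an assistant, you can also skim the failures yourself. The sketch below assumes a hypothetical dump shape (an "instances" array with per-assertion "passed" flags); these field names are illustrative assumptions, and the real output structure is documented in the CLI Reference:

```python
def failed_assertions(dump: dict) -> list[tuple[str, str]]:
    """Collect (instance, assertion) pairs for every failed assertion.

    Field names here ("instances", "assertions", "passed") are
    illustrative assumptions, not Dokkimi's documented schema.
    """
    failures = []
    for inst in dump.get("instances", []):
        for a in inst.get("assertions", []):
            if not a.get("passed", True):
                failures.append((inst["name"], a["description"]))
    return failures

# Example with an inline sample instead of reading failures.json:
sample = {
    "instances": [
        {
            "name": "checkout-flow",
            "assertions": [
                {"description": "returns 200", "passed": True},
                {"description": "order total matches cart", "passed": False},
            ],
        }
    ]
}
print(failed_assertions(sample))
# → [('checkout-flow', 'order total matches cart')]
```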

The dump file

The JSON output from dokkimi dump includes:

- Captured HTTP traffic, including request and response bodies
- Inter-service calls
- Console logs from each service
- Assertion results
- Timing data
- The full test definition

See the CLI Reference for the full output structure.

Tips for AI-assisted workflows