Maitai is a platform for fine-tuning, testing, and monitoring LLM-powered applications.
You can route your LLM requests through Maitai (recommended), or keep your existing client and send Maitai your request/response pairs for indexing. Maitai organizes traffic by Application and Intent, runs Sentinels to detect faults, and gives you Portal workflows to iterate safely with Test Sets and Test Runs.
Start here
- If you want the fastest setup: Quickstart
- If you already use an OpenAI-compatible SDK and want a drop-in integration: Base URL integration (see the sketch after this list)
- If you want a mental model of the resources you’ll see in the Portal: Core Concepts
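For the drop-in path, the integration typically amounts to swapping the base URL and API key on your existing client. Here is a minimal sketch in Python using the OpenAI SDK; the endpoint URL, environment variable name, and model are placeholders, not documented values, so check the Quickstart and Base URL integration pages for the exact ones.

```python
import os
from openai import OpenAI

# Point an existing OpenAI-compatible client at Maitai instead of OpenAI.
# The base_url and env var below are illustrative placeholders.
client = OpenAI(
    base_url="https://api.example-maitai-endpoint.com/v1",  # placeholder endpoint
    api_key=os.environ["MAITAI_API_KEY"],                    # placeholder key name
)

# Requests now flow through Maitai, which indexes them by Application and Intent.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # whichever model your Application is configured to use
    messages=[{"role": "user", "content": "Hello from my existing client"}],
)
print(response.choices[0].message.content)
```

The rest of your request code stays unchanged; that is what makes the base URL approach the recommended, lowest-friction integration.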
What you’ll get out of Maitai
- Observe: sessions/requests, latency + cost, and evaluation outcomes in the Portal
- Test: regression suites (Test Sets) and benchmarks (Test Runs)
- Build: fine-tuning datasets/runs, and Agents for tool-using workflows