Run live experiments, compare variants, and let your agents learn which strategy performs best, in real time, without manual testing.
Most AI agents are stuck with fixed prompts and static configurations. When you change a prompt, you're guessing whether it's better. There's no feedback loop, no learning, no adaptation.
ImproveFast provides the feedback loop your agents need. It runs live experiments, compares variants in real time, and learns which approaches work best in production, adapting automatically as your agent runs.
The ImproveFast server is already running. One command to start optimizing.
claude mcp add --transport http improve-fast https://improve.fast/mcp
For Claude Desktop, Cursor, or other MCP clients, see the getting started guide.
Most experiments converge in 50-150 evaluations. If your agent handles 10 requests/hour, you'll have a winner in 5-15 hours.
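The time-to-winner estimate above is simple arithmetic: evaluations needed divided by request throughput. A quick sketch, using the 50-150 evaluation range and the hypothetical 10 requests/hour figure from the text:

```python
# Back-of-the-envelope time to convergence.
# The 50-150 evaluation range and 10 requests/hour are the figures
# quoted above; your agent's actual throughput will vary.
def hours_to_converge(evaluations: int, requests_per_hour: int) -> float:
    return evaluations / requests_per_hour

print(hours_to_converge(50, 10))   # fastest case: 5.0 hours
print(hours_to_converge(150, 10))  # slowest case: 15.0 hours
```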
Yes. Experiments are isolated by ID and expire after 7 days. Experiment data is retained only for as long as your optimization runs, then deleted automatically.
Thompson Sampling works with any number of variants. More variants need more evaluations (~50 per variant), but the algorithm efficiently focuses on promising candidates.
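The claim that Thompson Sampling concentrates evaluations on promising candidates can be illustrated with a minimal Beta-Bernoulli bandit. This is a generic sketch of the algorithm, not ImproveFast's internals; the variant count, reward rates, and round count are hypothetical:

```python
import random

def thompson_sampling(true_rates, n_rounds, seed=0):
    """Simulate Thompson Sampling over variants with hypothetical
    binary-reward success rates `true_rates`."""
    rng = random.Random(seed)
    k = len(true_rates)
    wins = [1] * k    # Beta(1, 1) uniform prior for each variant
    losses = [1] * k
    for _ in range(n_rounds):
        # Sample a plausible success rate from each variant's posterior
        # and evaluate the variant whose sample is highest.
        samples = [rng.betavariate(wins[i], losses[i]) for i in range(k)]
        choice = max(range(k), key=lambda i: samples[i])
        # Observe a binary reward and update that variant's posterior.
        if rng.random() < true_rates[choice]:
            wins[choice] += 1
        else:
            losses[choice] += 1
    return wins, losses

# Three hypothetical variants with success rates 0.2, 0.5, and 0.8.
wins, losses = thompson_sampling([0.2, 0.5, 0.8], n_rounds=300)
pulls = [wins[i] + losses[i] for i in range(3)]
# After enough rounds, the strongest variant receives most evaluations,
# while clearly weaker variants are sampled only occasionally.
```

Because weak variants stop being sampled once their posteriors look unpromising, the per-variant evaluation cost stays well below a naive even split, which is why adding variants scales roughly linearly rather than exploding.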