Korvo Intelligence Engine
Meet Medha.
मेधा - Sanskrit for intelligence
On-device intelligence that verifies what AI models produce. Consensus synthesis, contradiction detection, and experiment evaluation - all in under 10ms, with zero tokens, completely offline.
Your models generate. Medha verifies. You decide.
Medha ships bundled with Korvo · API & HuggingFace coming soon
The Problem
AI models generate confidently.
Nobody checks their work.
You send a question to GPT-4, Claude, and Gemini. You get three plausible, confident answers. Which claims are actually agreed upon? Where do they contradict? What's uncertain? Today, you either read all three manually - or send them to yet another cloud model for synthesis, paying more tokens and waiting more seconds.
Cloud synthesis is slow and expensive
Sending multiple AI drafts to another LLM for evaluation costs tokens, takes 5-10 seconds per call, and requires internet. At scale, it's unusable.
No structured disagreement analysis
Models give you text. They don't give you "Claim A from GPT-4 contradicts Claim B from Claude with 73% similarity." You're left reading and comparing manually.
Evaluation at scale is a cost wall
Running 100+ experiment iterations overnight? At $0.03-0.10 per evaluation, that's $3-10 per run. Most people just... don't run experiments.
Architecture
Where Medha sits in your workflow.
Medha doesn't replace your AI models. It sits between them and you - verifying, scoring, and synthesizing before anything reaches your decision.
Your AI Models (BYOK) → Medha (on-device · Rust · <10ms · $0) → You
Capabilities
What Medha does - specifically.
Consensus Synthesis
Shipped · When you run Multi-Agent Consensus in Korvo, multiple AI models answer the same question in parallel. Medha synthesizes the results locally - no cloud round-trip, no extra tokens.
Contradiction Detection
Shipped · Medha identifies where AI models disagree - not just surface-level differences, but actual conflicting claims backed by evidence from different sources.
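To make the "73% similarity" idea from the problem statement concrete, here is a minimal sketch of similarity scoring between two claims via cosine similarity of embedding vectors. This is an illustration only - Medha's actual contradiction detector is proprietary, and the toy embeddings below are invented for the example.

```rust
// Toy sketch: cosine similarity between two claim embeddings.
// NOT Medha's real algorithm - just the kind of score the
// "Claim A contradicts Claim B with 73% similarity" text describes.
fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

fn main() {
    // Invented stand-ins for embeddings of "Claim A" and "Claim B".
    let claim_a = [0.9, 0.1, 0.3];
    let claim_b = [0.7, 0.4, 0.2];
    let sim = cosine_similarity(&claim_a, &claim_b);
    println!("similarity: {:.0}%", sim * 100.0);
}
```

A real pipeline would pair a score like this with claim extraction and stance detection, since two claims can be highly similar in topic while contradicting in content.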
Experiment Evaluation
Building · Korvo's AutoPilot runs experiment loops overnight - modify, measure, keep or discard, repeat. Medha makes the "measure" step instant and free.
The Math
The difference between 5 experiments
and 500 experiments.
When Korvo's AutoPilot runs experiment iterations overnight, the evaluation step determines how far you can go.
This is the difference between “I ran 5 experiments because tokens are expensive” and “I ran 500 experiments while I slept.”
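The back-of-the-envelope arithmetic, using the figures cited earlier on this page ($0.03-0.10 and 5-10 seconds per cloud evaluation, versus under 10ms and $0 on-device). A sketch using range midpoints, not a benchmark:

```rust
// Cost of N evaluations at a given per-call price (USD).
fn eval_cost(iterations: u32, usd_per_eval: f64) -> f64 {
    iterations as f64 * usd_per_eval
}

// Wall-clock time for N sequential evaluations (seconds).
fn total_secs(iterations: u32, secs_per_eval: f64) -> f64 {
    iterations as f64 * secs_per_eval
}

fn main() {
    let n = 500;
    // Cloud: midpoints of the $0.03-0.10 and 5-10s ranges above.
    println!(
        "cloud: ${:.2} and {:.1} min",
        eval_cost(n, 0.065),
        total_secs(n, 7.5) / 60.0
    );
    // On-device: $0 per call, under 10ms each.
    println!("medha: $0.00 and {:.1} s", total_secs(n, 0.010));
}
```

At 500 iterations that works out to roughly $32.50 and over an hour of cumulative evaluation time in the cloud, versus about five seconds and nothing on-device.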
Technical Specs
Medha v1 - what ships today.
Native Rust library, compiled per platform, integrated via Dart FFI.
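Dart FFI binds to C-ABI symbols exported from a compiled native library. A minimal sketch of what such an export could look like - the function name, signature, and scoring logic here are hypothetical, not Medha's actual interface:

```rust
// Hypothetical C-ABI export, in the style a Dart FFI binding would
// load from a compiled cdylib. Not libmedha_ffi's real API surface.
#[no_mangle]
pub extern "C" fn medha_score(agree: u32, total: u32) -> f64 {
    // Toy consensus score: fraction of models agreeing on a claim.
    if total == 0 {
        return 0.0;
    }
    agree as f64 / total as f64
}

fn main() {
    // Demo call; in production only the exported symbol is used.
    println!("{}", medha_score(3, 4)); // prints 0.75
}
```

On the Dart side, a symbol like this would be resolved with `DynamicLibrary.open` and `lookupFunction` from `dart:ffi`.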
Coming Soon
Roadmap · Medha beyond Korvo.
Medha ships bundled inside Korvo today. Soon, we're bringing it to developers and researchers as a standalone capability.
Medha API
Call Medha's consensus synthesis, claim evaluation, and retrieval scoring from your own applications. REST API with sub-100ms response times.
POST /v1/eval/claims
POST /v1/eval/retrieval
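A sketch of what a request to the claims endpoint might look like. The API is not yet published, so the payload schema below - field names included - is entirely hypothetical:

```rust
// Build a hypothetical JSON body for POST /v1/eval/claims.
// The real request schema is unpublished; "claims" is illustrative.
fn claims_body(claims: &[&str]) -> String {
    let quoted: Vec<String> =
        claims.iter().map(|c| format!("\"{}\"", c)).collect();
    format!("{{\"claims\": [{}]}}", quoted.join(", "))
}

fn main() {
    let body = claims_body(&[
        "Model A says the tower is 330 m tall",
        "Model B says the tower is 300 m tall",
    ]);
    // An HTTP client of your choice would POST this to the endpoint.
    println!("POST /v1/eval/claims\n{}", body);
}
```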
HuggingFace
We're training and refining Medha's evaluation models and plan to publish weights on HuggingFace - so researchers can run, fine-tune, and benchmark independently.
🤗 korvo/medha-consensus-v1
🤗 korvo/medha-claims-v1
Interested in early API access or research collaboration? Get in touch
Honest Positioning
What Medha is - and isn't - today.
FAQ
Questions we get asked.
Is Medha an LLM?
Today, Medha is a deterministic reasoning engine - claim extraction, consensus synthesis, evaluation scoring. It doesn't generate text or hold conversations. Its capabilities will expand with every release. The core promise stays the same: intelligence that runs on your machine, not someone else's server.
Why not just use GPT or Claude for synthesis?
You can - Korvo lets you toggle between cloud synthesis and Medha synthesis. The difference: cloud synthesis costs tokens, takes 5-10 seconds, and requires internet. Medha does it in under 10ms, offline, for free. When AutoPilot runs 500 experiment iterations overnight, that difference is existential.
What if Medha isn't available on my platform?
Korvo falls back to cloud-based synthesis seamlessly. You won't notice a difference in the UI - just in speed and cost. Medha is an accelerator, not a requirement.
Does Medha send any data anywhere?
No. Medha is a compiled native library that runs entirely on your CPU. It makes zero network calls. Your data never leaves your device. This is an architectural guarantee, not a policy checkbox.
Will Medha be available outside of Korvo?
Yes. We're working on a standalone Medha API and plan to release weights on HuggingFace. Developers and researchers will be able to use Medha's evaluation and synthesis capabilities independently.
Is Medha open source?
Not yet. The core engine (libmedha_ffi) is proprietary and ships bundled with Korvo. We're evaluating open-sourcing components - particularly the eval functions - as Medha matures.
Will Medha ever generate text?
Medha's capabilities will expand. Today it verifies and evaluates. The roadmap includes deeper reasoning capabilities. We'll share more when we're ready.
Your models generate.
Medha verifies.
You decide.
On-device intelligence that never touches the internet. Ships with Korvo. API and HuggingFace coming soon.
Medha is part of Korvo. Download Korvo to start using it today.