Self-Learning Routing: How RoutePlex Gets Smarter Over Time
When you first use RoutePlex, the router knows nothing about you. It selects models based on general performance data — latency, cost, and capability scores that apply equally to everyone.
That's fine for getting started. But over time, your workload has patterns. Maybe Claude consistently produces better analysis for your use case. Maybe GPT-4o Mini is fast enough for your summarization tasks and costs a third of the price. The best router for you isn't the same as the best router for everyone.
Self-Learning Routing is how we solve this.
What Gets Tracked
After each successful request, RoutePlex records a metadata entry:
- The model used and the detected query type
- Response quality signals — length, structure, and token ratios
- Latency and cost
- A one-way hash of your message (for pattern detection)
Nothing else. Your prompts and responses are never stored; only the irreversible hash is kept, for pattern detection. Quality is inferred from response structure, not content.
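A minimal sketch of what one such metadata entry might look like. The field names, the dataclass shape, and the choice of SHA-256 are illustrative assumptions, not RoutePlex's actual schema; the point is that only derived signals and an irreversible fingerprint are kept.

```python
import hashlib
from dataclasses import dataclass

def message_hash(message: str) -> str:
    # One-way fingerprint of the user message; the original text
    # cannot be recovered from the digest. SHA-256 is an assumed choice.
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

@dataclass
class RequestRecord:
    # Illustrative fields, mirroring the signals listed above.
    model: str            # model used for the request
    query_type: str       # detected query type, e.g. "analysis"
    response_tokens: int  # length signal
    token_ratio: float    # output/input token ratio, a structure signal
    latency_ms: float
    cost_usd: float
    message_hash: str     # irreversible hash for pattern detection

record = RequestRecord(
    model="claude-sonnet",
    query_type="analysis",
    response_tokens=812,
    token_ratio=2.4,
    latency_ms=1430.0,
    cost_usd=0.012,
    message_hash=message_hash("Summarize Q3 revenue drivers"),
)
```

Note that the record carries no prompt or response text, only numbers and a hash.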
How the Learning Works
As your profile accumulates data, the router applies a per-model bias to its scoring for each query type. A model that has consistently produced high-quality analysis gets a score boost on analysis requests. A model with mediocre performance on creative tasks gets a modest penalty.
The bias is bounded to ±15 router points — enough to influence selection, not enough to completely override the base scores. The system adjusts incrementally, not dramatically.
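The bounding described above can be sketched as a simple additive clamp. Function and constant names here are illustrative, not RoutePlex internals:

```python
def apply_learning_bias(base_score: float, learned_bias: float) -> float:
    # Clamp the per-model learned bias to +/-15 router points, so learning
    # can influence selection but never completely override the base score.
    MAX_BIAS = 15.0
    bounded = max(-MAX_BIAS, min(MAX_BIAS, learned_bias))
    return base_score + bounded
```

With a base score of 70 and a raw learned bias of 22, the clamped result is 85, not 92: the bias contributes at most 15 points in either direction.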
Confidence Gating
Learning bias only kicks in when there's enough data to trust it:
- < 10 requests per model — no influence yet, cold-start defaults apply
- 10–50 requests — 20% influence, cautious adjustments
- 50–100 requests — 50% influence, moderate confidence
- > 100 requests — 80% influence, strong personalization
Until your account reaches the minimum threshold, RoutePlex falls back to global patterns — aggregated signals from across the platform, anonymized and weighted by similarity to your query type.
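The tiers above amount to a step function from per-model request count to influence weight. The exact behavior at the 50- and 100-request boundaries is an assumption in this sketch, which treats each tier as half-open:

```python
def influence_weight(request_count: int) -> float:
    """Map a per-model request count to how much the learned bias counts.

    Tier boundaries follow the table above; treating each tier as
    half-open (e.g. exactly 50 falls into the 50% tier) is an assumption.
    """
    if request_count < 10:
        return 0.0   # cold start: global defaults only
    if request_count < 50:
        return 0.2   # cautious adjustments
    if request_count < 100:
        return 0.5   # moderate confidence
    return 0.8       # strong personalization
```

The weight never reaches 1.0, so even a heavily trained profile leaves room for the base scores.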
Explicit Feedback
You can accelerate learning by rating individual responses. A star rating from 1–5 is blended with the automatic quality score (60% your rating, 40% automatic) and immediately updates the model bias for that query type.
From the dashboard or via the API:
curl -X POST https://api.routeplex.com/api/v1/insights/feedback \
-H "Authorization: Bearer YOUR_JWT_TOKEN" \
-H "Content-Type: application/json" \
-d '{"request_id": "req_abc123", "score": 5, "is_helpful": true}'
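The 60/40 blend described above might look like the following sketch. Normalizing the 1–5 star rating onto the same 0–1 scale as the automatic quality score is an assumption, as are the function names:

```python
def blended_quality(star_rating: int, auto_score: float) -> float:
    # Normalize 1-5 stars to the 0-1 range, then blend:
    # 60% explicit rating, 40% automatic quality score.
    explicit = (star_rating - 1) / 4.0
    return 0.6 * explicit + 0.4 * auto_score
```

A 5-star rating paired with a mediocre automatic score still pulls the blended quality up sharply, which is why explicit feedback accelerates learning.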
What You Can See
The Insights tab shows the full picture:
- Which models are performing best for each query type
- Whether prompt enhancement is improving quality for your workload
- Cost optimization opportunities (e.g. if you're using a premium model for simple queries)
- Personalized recommendations with confidence scores
Your Data, Your Control
All learning data belongs to you. Delete it at any time from the dashboard settings or via:
DELETE /api/v1/insights/data
Routing immediately reverts to global defaults. No questions asked.
The Result
After a few hundred requests, RoutePlex's routing decisions are tailored to your specific workload. The models that work best for your use case get priority. The ones that underperform get deprioritized. Automatically, in the background, without any configuration.
This is intelligent routing that improves as you use it.