LLM Router Multi-Provider Request Routing Implementation

LLM Router is a middleware layer that routes each request to the optimal provider based on its characteristics: complexity, task type, latency requirements, and cost. Simple classification goes to a small model such as GPT-4o-mini, complex code generation to Claude, real-time chat to Groq's fast inference, and everything else to a general-purpose default. In practice this can reduce costs 3–5x with little to no quality loss.

Router Architecture

from anthropic import Anthropic
from openai import OpenAI
from groq import Groq
from dataclasses import dataclass
from typing import Callable

@dataclass
class RoutingRule:
    name: str
    condition: Callable[[str, dict], bool]
    provider: str
    model: str
    reason: str

class LLMRouter:

    def __init__(self):
        self.openai_client = OpenAI()
        self.anthropic_client = Anthropic()
        self.groq_client = Groq()

        self.rules: list[RoutingRule] = [
            RoutingRule(
                name="simple_chat",
                condition=lambda text, meta: len(text) < 200 and not meta.get("complex"),
                provider="groq",
                model="llama-3.1-8b-instant",
                reason="Short simple query using fast inference",
            ),
            RoutingRule(
                name="code_generation",
                condition=lambda text, meta: self._is_code_task(text),
                provider="anthropic",
                model="claude-sonnet-4-5",
                reason="Code task, Claude performs well on code",
            ),
            RoutingRule(
                name="reasoning",
                condition=lambda text, meta: self._is_reasoning_task(text),
                provider="openai",
                model="o3-mini",
                reason="Reasoning task using o3-mini",
            ),
            RoutingRule(
                name="default",
                condition=lambda text, meta: True,
                provider="openai",
                model="gpt-4o",
                reason="Default routing",
            ),
        ]

    def _is_code_task(self, text: str) -> bool:
        code_keywords = [
            "write code", "implement", "function", "class", "algorithm",
            "python", "javascript", "sql", "refactor", "debug"
        ]
        return any(kw in text.lower() for kw in code_keywords)

    def _is_reasoning_task(self, text: str) -> bool:
        reasoning_keywords = [
            "prove", "calculate", "optimize", "find optimal",
            "mathematically", "logically follows", "minimize"
        ]
        return any(kw in text.lower() for kw in reasoning_keywords)

    def route(self, text: str, meta: dict | None = None) -> RoutingRule:
        meta = meta or {}
        for rule in self.rules:
            if rule.condition(text, meta):
                return rule
        return self.rules[-1]

    def complete(self, messages: list[dict], system: str | None = None, **kwargs) -> str:
        user_message = messages[-1]["content"] if messages else ""
        rule = self.route(user_message, kwargs.pop("meta", None))
        print(f"Router: {rule.name} -> {rule.provider}/{rule.model} ({rule.reason})")

        if rule.provider == "anthropic":
            return self._call_anthropic(messages, rule.model, system, **kwargs)
        elif rule.provider == "openai":
            return self._call_openai(messages, rule.model, system, **kwargs)
        elif rule.provider == "groq":
            return self._call_groq(messages, rule.model, system, **kwargs)
        raise ValueError(f"Unknown provider: {rule.provider}")

    def _call_anthropic(self, messages, model, system, **kwargs):
        # Anthropic takes the system prompt as a separate parameter
        params = dict(model=model, messages=messages,
                      max_tokens=kwargs.pop("max_tokens", 1024), **kwargs)
        if system:
            params["system"] = system
        response = self.anthropic_client.messages.create(**params)
        return response.content[0].text

    def _call_openai(self, messages, model, system, **kwargs):
        # OpenAI expects the system prompt as the first message
        if system:
            messages = [{"role": "system", "content": system}] + messages
        response = self.openai_client.chat.completions.create(
            model=model, messages=messages, **kwargs
        )
        return response.choices[0].message.content

    def _call_groq(self, messages, model, system, **kwargs):
        # Groq exposes an OpenAI-compatible chat completions API
        if system:
            messages = [{"role": "system", "content": system}] + messages
        response = self.groq_client.chat.completions.create(
            model=model, messages=messages, **kwargs
        )
        return response.choices[0].message.content
Practical Case: SaaS Platform

Request profile: 60% simple questions, 25% analytics, 15% code generation.

Before routing: all requests go to GPT-4o, $450/month.

After routing:

  • Simple to Groq Llama 8B: $12/month
  • Analytics to GPT-4o-mini: $45/month
  • Code to Claude Sonnet: $67/month
  • Total: $124/month (-72%)
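The savings figure follows directly from the monthly numbers above:

```python
before = 450.0                       # all traffic on GPT-4o, $/month
after = {
    "groq_llama_8b": 12.0,           # simple questions (60% of traffic)
    "gpt_4o_mini": 45.0,             # analytics (25%)
    "claude_sonnet": 67.0,           # code generation (15%)
}

total_after = sum(after.values())
savings = 1 - total_after / before
print(f"${total_after:.0f}/month, savings {savings:.0%}")  # $124/month, savings 72%
```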

Timeline

  • Rule-based router: 2–3 days
  • Semantic router with ML: 1 week
  • Monitoring and A/B testing: 1 week
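For the "semantic router with ML" step, the keyword conditions above are typically replaced by embedding similarity against example prompts per route. A minimal sketch, using bag-of-words cosine similarity as a stand-in for a real embedding model (the example prompts and route names are hypothetical):

```python
import math
from collections import Counter

# Example prompts per route; a production router would embed these
# with an actual embedding model instead of counting words.
ROUTE_EXAMPLES = {
    "code_generation": ["write a python function", "debug this class", "refactor sql query"],
    "reasoning": ["prove this theorem", "find the optimal schedule", "minimize the cost"],
    "simple_chat": ["hello how are you", "what time is it", "tell me a joke"],
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_route(text: str) -> str:
    # route to whichever class contains the most similar example prompt
    query = vectorize(text)
    scores = {
        route: max(cosine(query, vectorize(ex)) for ex in examples)
        for route, examples in ROUTE_EXAMPLES.items()
    }
    return max(scores, key=scores.get)

print(semantic_route("please debug this python function"))  # code_generation
```

In production, vectorize() would call an embedding model and the per-route example sets would be grown from logged traffic, which is what pushes this step to roughly a week.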