# multillm-openrouter

OpenRouter provider for [multillm](../multillm). OpenRouter provides access to many models through a single API.

## Installation

```bash
pip install multillm-openrouter
```

## Usage

```python
import asyncio

import multillm


async def main():
    client = multillm.Client()

    # Simple query
    answer = await client.single("openrouter/anthropic/claude-3-haiku", "Hello!")
    print(answer)

    # Chat completion
    messages = [{"role": "user", "content": "What is Python?"}]
    response = await client.chat_complete("openrouter/meta-llama/llama-3-70b-instruct", messages)
    print(response.text)

    # Streaming
    async for chunk in client.chat_complete_stream("openrouter/openai/gpt-4o", messages):
        print(chunk, end="")


asyncio.run(main())
```

## Configuration

Choose one of the following methods (priority: direct config > env var > config file):

**Option 1: Config file** (recommended)

Create `~/.config/multillm/providers/openrouter.json`:

```json
{
  "api_key": "sk-or-...",
  "base_url": "https://openrouter.ai/api/v1"
}
```

Set permissions:

```bash
chmod 600 ~/.config/multillm/providers/openrouter.json
```

**Option 2: Environment variable**

```bash
export OPENROUTER_API_KEY=sk-or-...
```

**Option 3: Direct config**

```python
client = multillm.Client({
    "openrouter": {"api_key": "sk-or-..."}
})
```

### Config File Fields

| Field | Required | Description |
|-------|----------|-------------|
| `api_key` | Yes | Your OpenRouter API key |
| `base_url` | No | API base URL (defaults to `https://openrouter.ai/api/v1`) |

## Models

Use OpenRouter model IDs after the provider prefix. Models follow the format `openrouter/<org>/<model>`:

- `openrouter/anthropic/claude-3-haiku`
- `openrouter/openai/gpt-4o`
- `openrouter/meta-llama/llama-3-70b-instruct`
- `openrouter/google/gemini-pro`

See [OpenRouter models](https://openrouter.ai/models) for the full list.
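
## Notes

The configuration precedence described above (direct config > env var > config file) can be sketched as a small standalone helper. This is an illustrative sketch only, not part of the multillm API; the function name `resolve_api_key` is hypothetical:

```python
import json
import os
from pathlib import Path


def resolve_api_key(direct_config=None):
    """Hypothetical sketch of the documented lookup order:
    direct config > OPENROUTER_API_KEY env var > config file."""
    # 1. Direct config passed to the client wins.
    if direct_config and direct_config.get("api_key"):
        return direct_config["api_key"]
    # 2. Fall back to the environment variable.
    env_key = os.environ.get("OPENROUTER_API_KEY")
    if env_key:
        return env_key
    # 3. Finally, try the per-provider config file.
    config_path = Path.home() / ".config/multillm/providers/openrouter.json"
    if config_path.exists():
        return json.loads(config_path.read_text()).get("api_key")
    return None
```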
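
The three-part model ID format can be illustrated with a small helper (illustrative only, not part of the package):

```python
def split_model_id(model_id: str) -> tuple[str, str, str]:
    """Split "openrouter/<org>/<model>" into (provider, org, model).

    The model segment may itself contain slashes-free names like
    "llama-3-70b-instruct", so we split at most twice.
    """
    provider, org, model = model_id.split("/", 2)
    return provider, org, model
```

For example, `split_model_id("openrouter/anthropic/claude-3-haiku")` returns `("openrouter", "anthropic", "claude-3-haiku")`.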