# multillm-openai

OpenAI provider for [multillm](../multillm).

## Installation

```bash
pip install multillm-openai
```

## Usage

```python
import asyncio

import multillm

async def main():
    client = multillm.Client()

    # Simple query
    answer = await client.single("openai/gpt-4o", "Hello!")
    print(answer)

    # Chat completion
    messages = [{"role": "user", "content": "What is Python?"}]
    response = await client.chat_complete("openai/gpt-4o", messages)
    print(response.text)

    # Streaming
    async for chunk in client.chat_complete_stream("openai/gpt-4o", messages):
        print(chunk, end="")

asyncio.run(main())
```

## Configuration

Choose one of the following methods (priority: direct config > env var > config file):

**Option 1: Config file** (recommended)

Create `~/.config/multillm/providers/openai.json`:

```json
{
  "api_key": "sk-...",
  "organization": "org-..."
}
```

Set permissions:

```bash
chmod 600 ~/.config/multillm/providers/openai.json
```

**Option 2: Environment variable**

```bash
export OPENAI_API_KEY=sk-...
```

**Option 3: Direct config**

```python
client = multillm.Client({
    "openai": {"api_key": "sk-...", "organization": "org-..."}
})
```

### Config File Fields

| Field | Required | Description |
|-------|----------|-------------|
| `api_key` | Yes | Your OpenAI API key |
| `organization` | No | Your organization ID (if applicable) |

## Models

Use OpenAI model names after the provider prefix:

- `openai/gpt-4o`
- `openai/gpt-4o-mini`
- `openai/gpt-4-turbo`
- `openai/o1-preview`
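
## How configuration resolution works

The priority order described above (direct config > env var > config file) can be sketched roughly as follows. This is an illustrative helper using only the standard library, not part of the package's actual API; the function name and signature are hypothetical.

```python
import json
import os
from pathlib import Path

def resolve_openai_config(direct_config=None, env=None, config_path=None):
    """Hypothetical sketch of the documented lookup priority:
    direct config > environment variable > config file."""
    env = env if env is not None else os.environ

    # 1. Direct config passed to multillm.Client({...})
    if direct_config and direct_config.get("api_key"):
        return direct_config

    # 2. OPENAI_API_KEY environment variable
    if env.get("OPENAI_API_KEY"):
        return {"api_key": env["OPENAI_API_KEY"]}

    # 3. ~/.config/multillm/providers/openai.json
    path = Path(config_path) if config_path else (
        Path.home() / ".config/multillm/providers/openai.json"
    )
    if path.exists():
        return json.loads(path.read_text())

    raise RuntimeError("No OpenAI credentials found")
```

A source earlier in the chain always wins: for example, a direct `api_key` shadows an `OPENAI_API_KEY` set in the environment, and the config file is only consulted when neither is present.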