# multillm-gemini

Google Gemini provider for [multillm](../multillm).

## Installation

```bash
pip install multillm-gemini
```

## Usage

```python
import asyncio

import multillm


async def main():
    client = multillm.Client()

    # Simple query
    answer = await client.single("gemini/gemini-2.0-flash", "Hello!")
    print(answer)

    # Chat completion
    messages = [{"role": "user", "content": "What is Python?"}]
    response = await client.chat_complete("gemini/gemini-2.0-flash", messages)
    print(response.text)

    # Streaming
    async for chunk in client.chat_complete_stream("gemini/gemini-2.0-flash", messages):
        print(chunk, end="")


asyncio.run(main())
```

## Configuration

Choose one of the following methods. When more than one is set, direct config takes priority over the environment variable, which takes priority over the config file.

**Option 1: Config file** (recommended)

Create `~/.config/multillm/providers/gemini.json`:

```json
{
  "api_key": "..."
}
```

Restrict the file to your user:

```bash
chmod 600 ~/.config/multillm/providers/gemini.json
```

**Option 2: Environment variable**

```bash
export GOOGLE_API_KEY=...
```

**Option 3: Direct config**

```python
client = multillm.Client({
    "gemini": {"api_key": "..."}
})
```

### Config File Fields

| Field | Required | Description |
|-------|----------|-------------|
| `api_key` | Yes | Your Google API key |

## Models

Use Gemini model names after the `gemini/` provider prefix:

- `gemini/gemini-2.0-flash`
- `gemini/gemini-1.5-pro`
- `gemini/gemini-1.5-flash`
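
## Creating the Config File Programmatically

If you prefer to set up the config file from Python rather than by hand, the steps from the configuration section (create the JSON file, then `chmod 600` it) can be sketched as below. This is a convenience helper using only the standard library, not part of the multillm API; the path matches the one shown above, and the `base` parameter exists only so the location can be overridden.

```python
import json
from pathlib import Path


def write_gemini_config(api_key: str, base: Path = Path.home() / ".config") -> Path:
    """Write the Gemini provider config and restrict it to the owner (mode 600)."""
    path = base / "multillm" / "providers" / "gemini.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"api_key": api_key}, indent=2))
    path.chmod(0o600)  # equivalent to: chmod 600 <path>
    return path
```

Calling `write_gemini_config("your-api-key")` creates `~/.config/multillm/providers/gemini.json` with owner-only permissions in one step.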