# multillm-cli
Command-line interface for [multillm](../multillm).
## Installation
```bash
pip install multillm-cli
```
You'll also need at least one provider package for the models you plan to use:
```bash
pip install multillm-openai multillm-anthropic multillm-gemini multillm-openrouter multillm-claude
```
## Usage
```bash
multillm -m <model> -p <prompt>
```
### Examples
**Chat providers:**
```bash
# OpenAI
multillm -m openai/gpt-4o -p "What is 2+2?"
# Anthropic
multillm -m anthropic/claude-sonnet-4-20250514 -p "Explain async/await in Python"
# Gemini
multillm -m gemini/gemini-2.0-flash-exp -p "What are the benefits of Rust?"
```
**Piping input from stdin:**
```bash
# Summarize a file
cat document.txt | multillm -m openai/gpt-4o -p "Summarize this:" --with-stdin
# Analyze code
cat script.py | multillm -m claude/default -p "Review this code:" --with-stdin
# Process command output
ls -la | multillm -m anthropic/claude-sonnet-4-20250514 -p "Explain these files:" --with-stdin
```
**Chat providers with built-in tools:**
```bash
# Calculate
multillm -m openai/gpt-4o -p "What is 15 * 23?" --use-tools calculate
# Get current time
multillm -m openai/gpt-4o -p "What time is it?" --use-tools get_current_time
# Weather (mock data)
multillm -m openai/gpt-4o -p "What's the weather in Tokyo?" --use-tools get_weather
# Multiple tools
multillm -m openai/gpt-4o \
-p "What's 5+5 and what time is it?" \
--use-tools calculate get_current_time
# Verbose mode (shows tool arguments and results)
multillm -m openai/gpt-4o \
-p "Calculate 100 / 4" \
--use-tools calculate \
--verbose
```
**Agent provider with tools:**
```bash
# Simple query
multillm -m claude/default -p "What is 2+2?"
# With tool use
multillm -m claude/default \
-p "What Python version is installed?" \
--allowed-tools Bash \
--permission-mode acceptEdits
# Complex task with verbose output
multillm -m claude/default \
-p "List all .py files in the current directory" \
--allowed-tools Bash Read \
--permission-mode acceptEdits \
--max-turns 10 \
--verbose
```
## Options
| Option | Description |
|--------|-------------|
| `-m`, `--model` | Model to use (format: `provider/model-name`) |
| `-p`, `--prompt` | Prompt to send to the model |
| `--with-stdin` | Append stdin to the prompt after a separator |
| `--use-tools` | Enable built-in tools for chat providers (see below) |
| `--max-turns` | Maximum turns for agent providers |
| `--allowed-tools` | Space-separated list of allowed tools for agent providers |
| `--permission-mode` | Permission mode: `acceptEdits`, `acceptAll`, or `prompt` |
| `-v`, `--verbose` | Show detailed tool execution information |
## Built-in Tools (Chat Providers)
When using chat providers (OpenAI, Anthropic, Gemini, OpenRouter), you can enable built-in tools with `--use-tools`:

| Tool | Description | Example |
|------|-------------|---------|
| `calculate` | Perform mathematical calculations | `--use-tools calculate` |
| `get_current_time` | Get current date and time | `--use-tools get_current_time` |
| `get_weather` | Get weather information (mock data) | `--use-tools get_weather` |
| `ask_user` | Ask the user a question interactively | `--use-tools ask_user` |
### Interactive Tools
The `ask_user` tool lets the model ask you questions during execution and collect your responses. This enables interactive conversations in which the model can clarify requirements, gather preferences, or request additional information.
**Example:**
```bash
multillm -m openai/gpt-4o \
-p "Help me choose a programming language for my project by asking about my requirements" \
--use-tools ask_user
```
When the model calls `ask_user`, you'll see:
```
======================================================================
❓ QUESTION FROM ASSISTANT
======================================================================
What type of project are you building?
Suggested options:
1. Web application
2. Desktop application
3. Data science
4. Command-line tool
You can select a number or provide your own answer.
Your answer: _
```
### Tool Output
When tools are used, inline output shows each tool call and its result:
**Normal mode:**
```
[Using 1 tool(s)]
→ calculate()
✓ Result: 345
The result of 15 * 23 is 345.
```
**Verbose mode (`--verbose`):**
```
[Using 1 tool(s)]
→ calculate({"expression": "15 * 23"})
← {
"expression": "15 * 23",
"result": 345
}
The result of 15 * 23 is 345.
```
### Example Usage
```bash
# Basic calculation
multillm -m openai/gpt-4o -p "What is 123 + 456?" --use-tools calculate
# With verbose output
multillm -m openai/gpt-4o -p "Calculate 50 / 2" --use-tools calculate -v
# Multiple tools
multillm -m openai/gpt-4o \
-p "What's the current time and what's 10 * 10?" \
--use-tools get_current_time calculate
# All tools enabled
multillm -m openai/gpt-4o \
-p "What time is it in Tokyo and what's the weather like?" \
--use-tools get_current_time get_weather \
--verbose
```
## Available Tools (Agent Providers)
When using agent providers (like `claude`), you can specify which tools the agent can use with `--allowed-tools`:
### Core Tools
| Tool | Description |
|------|-------------|
| `Read` | Read files from the filesystem |
| `Write` | Create or overwrite files |
| `Edit` | Edit existing files with find/replace |
| `Bash` | Execute bash commands |
| `Glob` | Find files using glob patterns |
| `Grep` | Search file contents with regex |
### Extended Tools
| Tool | Description |
|------|-------------|
| `Task` | Launch specialized sub-agents for complex tasks |
| `WebFetch` | Fetch and process web content |
| `WebSearch` | Search the web for information |
| `NotebookEdit` | Edit Jupyter notebook cells |
| `AskUserQuestion` | Ask the user questions during execution |
| `KillShell` | Kill running background shells |
| `EnterPlanMode` | Enter planning mode for complex implementations |
| `ExitPlanMode` | Exit planning mode |
### Usage Examples
```bash
# Basic file operations
multillm -m claude/default -p "Read README.md" --allowed-tools Read
# Command execution
multillm -m claude/default -p "What Python version?" --allowed-tools Bash
# Multiple tools
multillm -m claude/default -p "Find all .py files and read them" \
--allowed-tools Glob Read
# Common combination for development tasks
multillm -m claude/default -p "Fix the bug in main.py" \
--allowed-tools Read Write Edit Bash Grep
```
**Note:** The more tools you allow, the more autonomous the agent becomes. Start with minimal tools and add more as needed.
## Configuration
Provider credentials can be set via:
**Option 1: Config files** (recommended)
Create provider configs at `~/.config/multillm/providers/<provider>.json`.
Example for OpenAI (`~/.config/multillm/providers/openai.json`):
```json
{
"api_key": "sk-..."
}
```
Set permissions:
```bash
chmod 600 ~/.config/multillm/providers/*.json
```
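Taken together, the config steps above can be scripted in one pass (the `sk-...` value is a placeholder you must replace with a real key):

```shell
# Create the provider config directory, write the OpenAI config,
# and restrict its permissions so only the owner can read it.
mkdir -p ~/.config/multillm/providers
cat > ~/.config/multillm/providers/openai.json <<'EOF'
{
  "api_key": "sk-..."
}
EOF
chmod 600 ~/.config/multillm/providers/openai.json
```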
See individual provider READMEs for all configuration options:
- [multillm-openai](../multillm-openai)
- [multillm-anthropic](../multillm-anthropic)
- [multillm-gemini](../multillm-gemini)
- [multillm-openrouter](../multillm-openrouter)
- [multillm-claude](../multillm-claude)
**Option 2: Environment variables**
```bash
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=...
export OPENROUTER_API_KEY=sk-or-...
```
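Before invoking the CLI, a quick sanity check (a sketch independent of multillm itself) can report which provider keys are present in your environment:

```shell
# Report which provider API keys are currently set in the environment.
# Uses bash indirect expansion (${!var}) to look up each name dynamically.
for var in OPENAI_API_KEY ANTHROPIC_API_KEY GOOGLE_API_KEY OPENROUTER_API_KEY; do
  if [ -n "${!var}" ]; then
    echo "$var: set"
  else
    echo "$var: not set"
  fi
done
```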
**For Claude agent provider:**
```bash
# OAuth via CLI (recommended)
claude login
# Or API key
export ANTHROPIC_API_KEY=sk-ant-...
# Or OAuth token
export CLAUDE_CODE_OAUTH_TOKEN=your-token
```
## Supported Providers
- **Chat providers**: `openai`, `anthropic`, `gemini`, `openrouter`
- **Agent providers**: `claude`