
multillm-cli

Command-line interface for multillm.

Installation

pip install multillm-cli

You'll also need to install the provider packages you plan to use:

pip install multillm-openai multillm-anthropic multillm-gemini multillm-openrouter multillm-claude

Usage

multillm -m <model> -p <prompt>

Examples

Chat providers:

# OpenAI
multillm -m openai/gpt-4o -p "What is 2+2?"

# Anthropic
multillm -m anthropic/claude-sonnet-4-20250514 -p "Explain async/await in Python"

# Gemini
multillm -m gemini/gemini-2.0-flash-exp -p "What are the benefits of Rust?"

Piping input from stdin:

# Summarize a file
cat document.txt | multillm -m openai/gpt-4o -p "Summarize this:" --with-stdin

# Analyze code
cat script.py | multillm -m claude/default -p "Review this code:" --with-stdin

# Process command output
ls -la | multillm -m anthropic/claude-sonnet-4-20250514 -p "Explain these files:" --with-stdin
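
The effect of --with-stdin can be sketched in plain shell. The stand-in below is hypothetical, not multillm's actual implementation, and the exact separator multillm inserts between prompt and stdin is an implementation detail:

```shell
# Hypothetical stand-in: append stdin to the prompt after a separator,
# mimicking what --with-stdin does before the text is sent to the model.
with_stdin() {
  printf '%s\n---\n%s\n' "$1" "$(cat)"
}

# Prints the prompt, a separator line, then the piped text.
echo "def add(a, b): return a + b" | with_stdin "Review this code:"
```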

Chat providers with built-in tools:

# Calculate
multillm -m openai/gpt-4o -p "What is 15 * 23?" --use-tools calculate

# Get current time
multillm -m openai/gpt-4o -p "What time is it?" --use-tools get_current_time

# Weather (mock data)
multillm -m openai/gpt-4o -p "What's the weather in Tokyo?" --use-tools get_weather

# Multiple tools
multillm -m openai/gpt-4o \
  -p "What's 5+5 and what time is it?" \
  --use-tools calculate get_current_time

# Verbose mode (shows tool arguments and results)
multillm -m openai/gpt-4o \
  -p "Calculate 100 / 4" \
  --use-tools calculate \
  --verbose

Agent provider with tools:

# Simple query
multillm -m claude/default -p "What is 2+2?"

# With tool use
multillm -m claude/default \
  -p "What Python version is installed?" \
  --allowed-tools Bash \
  --permission-mode acceptEdits

# Complex task with verbose output
multillm -m claude/default \
  -p "List all .py files in the current directory" \
  --allowed-tools Bash Read \
  --permission-mode acceptEdits \
  --max-turns 10 \
  --verbose

Options

Option             Description
-m, --model        Model to use (format: provider/model-name)
-p, --prompt       Prompt to send to the model
--with-stdin       Append stdin to the prompt after a separator
--use-tools        Enable built-in tools for chat providers (see below)
--max-turns        Maximum turns for agent providers
--allowed-tools    Space-separated list of allowed tools for agent providers
--permission-mode  Permission mode: acceptEdits, acceptAll, or prompt
-v, --verbose      Show detailed tool execution information

Built-in Tools (Chat Providers)

When using chat providers (OpenAI, Anthropic, Gemini, OpenRouter), you can enable built-in tools with --use-tools:

Tool              Description                            Example
calculate         Perform mathematical calculations      --use-tools calculate
get_current_time  Get current date and time              --use-tools get_current_time
get_weather       Get weather information (mock data)    --use-tools get_weather
ask_user          Ask the user a question interactively  --use-tools ask_user

Interactive Tools

The ask_user tool allows the model to ask you questions during execution and collect your responses. This enables truly interactive conversations where the model can clarify requirements, gather preferences, or get additional information.

Example:

multillm -m openai/gpt-4o \
  -p "Help me choose a programming language for my project by asking about my requirements" \
  --use-tools ask_user

When the model calls ask_user, you'll see:

======================================================================
 QUESTION FROM ASSISTANT
======================================================================

What type of project are you building?

Suggested options:
  1. Web application
  2. Desktop application
  3. Data science
  4. Command-line tool

You can select a number or provide your own answer.

Your answer: _

Tool Output

When tools are used, you'll see inline output showing:

Normal mode:

[Using 1 tool(s)]
  → calculate()
  ✓ Result: 345

The result of 15 * 23 is 345.

Verbose mode (--verbose):

[Using 1 tool(s)]
  → calculate({"expression": "15 * 23"})
  ← {
    "expression": "15 * 23",
    "result": 345
  }

The result of 15 * 23 is 345.

Example Usage

# Basic calculation
multillm -m openai/gpt-4o -p "What is 123 + 456?" --use-tools calculate

# With verbose output
multillm -m openai/gpt-4o -p "Calculate 50 / 2" --use-tools calculate -v

# Multiple tools
multillm -m openai/gpt-4o \
  -p "What's the current time and what's 10 * 10?" \
  --use-tools get_current_time calculate

# All tools enabled
multillm -m openai/gpt-4o \
  -p "What time is it in Tokyo and what's the weather like?" \
  --use-tools get_current_time get_weather \
  --verbose

Available Tools (Agent Providers)

When using agent providers (like claude), you can specify which tools the agent can use with --allowed-tools:

Core Tools

Tool   Description
Read   Read files from the filesystem
Write  Create or overwrite files
Edit   Edit existing files with find/replace
Bash   Execute bash commands
Glob   Find files using glob patterns
Grep   Search file contents with regex

Extended Tools

Tool             Description
Task             Launch specialized sub-agents for complex tasks
WebFetch         Fetch and process web content
WebSearch        Search the web for information
NotebookEdit     Edit Jupyter notebook cells
AskUserQuestion  Ask the user questions during execution
KillShell        Kill running background shells
EnterPlanMode    Enter planning mode for complex implementations
ExitPlanMode     Exit planning mode

Usage Examples

# Basic file operations
multillm -m claude/default -p "Read README.md" --allowed-tools Read

# Command execution
multillm -m claude/default -p "What Python version?" --allowed-tools Bash

# Multiple tools
multillm -m claude/default -p "Find all .py files and read them" \
  --allowed-tools Glob Read

# Common combination for development tasks
multillm -m claude/default -p "Fix the bug in main.py" \
  --allowed-tools Read Write Edit Bash Grep

Note: The more tools you allow, the more autonomous the agent becomes. Start with minimal tools and add more as needed.

Configuration

Provider credentials can be set via:

Option 1: Config files (recommended)

Create provider configs at ~/.config/multillm/providers/<provider>.json.

Example for OpenAI (~/.config/multillm/providers/openai.json):

{
  "api_key": "sk-..."
}

Set permissions:

chmod 600 ~/.config/multillm/providers/*.json
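
Other chat providers follow the same pattern. For example, an Anthropic config could be created as below; the anthropic.json filename and api_key field are assumptions mirroring the OpenAI example, so check the provider README for the exact schema:

```shell
# Create a provider config following the same pattern as openai.json.
# The filename and "api_key" field are assumed from the OpenAI example.
mkdir -p ~/.config/multillm/providers
cat > ~/.config/multillm/providers/anthropic.json <<'EOF'
{
  "api_key": "sk-ant-..."
}
EOF
chmod 600 ~/.config/multillm/providers/anthropic.json
```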

See individual provider READMEs for all configuration options:

  • multillm-openai
  • multillm-anthropic
  • multillm-gemini
  • multillm-openrouter
  • multillm-claude

Option 2: Environment variables

export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=...
export OPENROUTER_API_KEY=sk-or-...
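
To check which of these keys are currently exported in your shell (variable names taken from the list above):

```shell
# Report which provider API keys are set in the current environment.
for var in OPENAI_API_KEY ANTHROPIC_API_KEY GOOGLE_API_KEY OPENROUTER_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    echo "$var is set"
  else
    echo "$var is NOT set"
  fi
done
```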

For Claude agent provider:

# OAuth via CLI (recommended)
claude login

# Or API key
export ANTHROPIC_API_KEY=sk-ant-...

# Or OAuth token
export CLAUDE_CODE_OAUTH_TOKEN=your-token

Supported Providers

  • Chat providers: openai, anthropic, gemini, openrouter
  • Agent providers: claude