# multillm-agentwrap

Agent wrapper provider for multillm - wraps chat providers with agentic capabilities.

## Overview

The `agentwrap` provider lets you use any chat provider (OpenAI, Google, Anthropic, etc.) with agentic capabilities, including:

- **Tool execution loop**: Automatically executes tools and sends results back to the model
- **Conversation history management**: Maintains context across tool calls
- **Multi-turn interactions**: Continues until the task is complete or the max-turn limit is reached

## Installation

```bash
pip install multillm-agentwrap
```

Or with uv in a workspace:

```bash
uv add multillm-agentwrap
```

## Usage

### Basic Usage

```python
import asyncio
import multillm

async def main():
    client = multillm.Client()

    # Wrap any chat model with agentic capabilities
    async for msg in client.run("agentwrap/openai/gpt-4", "Hello!"):
        if msg.type == "text":
            print(msg.content)

asyncio.run(main())
```

### With Tools

```python
import asyncio
import multillm

# Define a custom tool
calculate_tool = multillm.Tool(
    name="calculate",
    description="Perform a calculation",
    parameters={
        "type": "object",
        "properties": {
            "expression": {"type": "string", "description": "Math expression"}
        },
        "required": ["expression"]
    },
    # Demo only: eval() on model-supplied input is unsafe in production
    handler=lambda args: {"result": eval(args["expression"])}
)

async def main():
    client = multillm.Client()

    # Use with tools
    async for msg in client.run(
        "agentwrap/google/gemini-pro",
        "What's 25 * 4?",
        tools=[calculate_tool]
    ):
        if msg.type == "text":
            print(msg.content)
        elif msg.type == "tool_use":
            print(f" → Using tool: {msg.tool_name}")
        elif msg.type == "tool_result":
            print(f" ← Result: {msg.tool_result}")

asyncio.run(main())
```

### With Options

```python
import asyncio
import multillm
from multillm import AgentOptions

async def main():
    client = multillm.Client()

    options = AgentOptions(
        max_turns=5,
        system_prompt="You are a helpful assistant.",
        temperature=0.7
    )

    async for msg in client.run(
        "agentwrap/anthropic/claude-3-5-sonnet-20241022",
        "Explain quantum computing",
        options=options
    ):
        if msg.type == "text":
            print(msg.content)

asyncio.run(main())
```

## Supported Chat Providers

Any chat provider supported by multillm can be wrapped:

- `agentwrap/openai/gpt-4` - OpenAI GPT-4
- `agentwrap/openai/gpt-4-turbo` - OpenAI GPT-4 Turbo
- `agentwrap/openai/gpt-3.5-turbo` - OpenAI GPT-3.5 Turbo
- `agentwrap/google/gemini-pro` - Google Gemini Pro
- `agentwrap/google/gemini-1.5-pro` - Google Gemini 1.5 Pro
- `agentwrap/anthropic/claude-3-5-sonnet-20241022` - Anthropic Claude 3.5 Sonnet
- `agentwrap/openrouter/...` - Any OpenRouter model

## Model Format

The model string follows the format:

```
agentwrap/<chat-provider>/<model-name>
```

Where:
- `agentwrap` - The agent wrapper provider
- `<chat-provider>` - The chat provider to wrap (openai, google, anthropic, openrouter)
- `<model-name>` - The specific model from that provider

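For illustration, the three segments can be recovered by splitting on the first two slashes, which keeps model names that themselves contain slashes (e.g. OpenRouter models) intact. This is a hypothetical helper, not part of the multillm API:

```python
def split_model(model: str) -> tuple[str, str, str]:
    """Split a model string into its three segments.

    "agentwrap/openai/gpt-4" -> ("agentwrap", "openai", "gpt-4")
    maxsplit=2 keeps any further slashes inside the model name.
    """
    wrapper, provider, name = model.split("/", 2)
    return wrapper, provider, name
```
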
## How It Works

1. **Receives prompt**: User sends the initial message
2. **Calls chat API**: Uses the wrapped chat provider via `chat_complete()`
3. **Returns response**: If there are no tool calls, returns the text and stops
4. **Executes tools**: If tool calls are present, executes them with the provided handlers
5. **Continues loop**: Sends tool results back and gets the next response
6. **Repeats**: Steps 3-5 until no more tool calls remain or the max-turn limit is reached (sketched below)

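In outline, that loop looks roughly like the following. This is a minimal sketch of the steps above, not the actual implementation: only `chat_complete()` is named in this README, so the response attributes, message shapes, and function signature here are assumptions.

```python
async def agent_loop(chat, messages, tools, max_turns):
    """Illustrative tool-execution loop; shapes are assumed, not the real API."""
    for _ in range(max_turns):
        # Step 2: call the wrapped chat provider
        response = await chat.chat_complete(messages, tools=tools)
        # Step 3: no tool calls means the task is done
        if not response.tool_calls:
            return response.text
        # Step 4: execute each requested tool with its handler
        for call in response.tool_calls:
            tool = next(t for t in tools if t.name == call.name)
            result = tool.handler(call.arguments)
            # Step 5: append the result so the next turn sees it
            messages.append({"role": "tool", "name": call.name, "content": result})
    return None  # Step 6: max turns reached without a final answer
```
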
## Configuration

Configure the wrapped provider via multillm config:

```python
config = {
    "openai": {"api_key": "sk-..."},
    "google": {"api_key": "..."},
    "agentwrap": {
        "max_turns": 10  # Default max turns if not specified in options
    }
}

client = multillm.Client(config)
```

## Agent Options

All `AgentOptions` are supported:

```python
from multillm import AgentOptions

options = AgentOptions(
    system_prompt="Custom system prompt",
    max_turns=15,     # Max tool execution iterations
    temperature=0.8,  # Sampling temperature
    max_tokens=2000,  # Max tokens to generate
)
```

## Message Types

The agent yields different message types during execution:

### System Message
```python
AgentMessage(
    type="system",
    content="Agentic session started",
)
```

### Text Message
```python
AgentMessage(
    type="text",
    content="The answer is 42",
    raw=<original response object>
)
```

### Tool Use Message
```python
AgentMessage(
    type="tool_use",
    tool_name="calculate",
    tool_input={"expression": "6*7"},
    raw=<tool call object>
)
```

### Tool Result Message
```python
AgentMessage(
    type="tool_result",
    tool_name="calculate",
    tool_result="42",
    raw=<result dict>
)
```

### Result Message
```python
AgentMessage(
    type="result",
    content="Final answer",
)
```

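A consumer can dispatch on all of these types in one loop. The field names follow the examples above; the wrapper function and model string are just for illustration:

```python
async def handle_all(client, prompt, tools):
    # Print something sensible for every message type the agent yields
    async for msg in client.run("agentwrap/openai/gpt-4", prompt, tools=tools):
        if msg.type == "system":
            print(f"[session] {msg.content}")
        elif msg.type == "text":
            print(msg.content)
        elif msg.type == "tool_use":
            print(f" → {msg.tool_name}({msg.tool_input})")
        elif msg.type == "tool_result":
            print(f" ← {msg.tool_name}: {msg.tool_result}")
        elif msg.type == "result":
            print(f"[done] {msg.content}")
```
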
## Comparison with Native Agent Providers

### AgentWrap (This Provider)
- ✅ Works with any chat provider
- ✅ Simple tool execution loop
- ✅ Full control over chat API settings
- ❌ No built-in tools (must provide custom tools)
- ❌ No file system access
- ❌ More basic agentic capabilities

### Native Agent Providers (e.g., Claude)
- ✅ Advanced agentic capabilities
- ✅ Built-in tools (Bash, Read, Write, etc.)
- ✅ File system access
- ✅ Plan mode, interactive sessions
- ❌ Limited to specific providers

## Use Cases

### When to Use AgentWrap

- **Different models**: Want agentic behavior with OpenAI, Google, or other chat models
- **Custom tools**: Need specific tool implementations
- **Simple workflows**: Basic tool calling without file system access
- **Cost optimization**: Use cheaper chat models with agentic capabilities

### When to Use Native Agents

- **File operations**: Need to read/write files, run commands
- **Complex workflows**: Multi-step tasks requiring planning
- **Built-in tools**: Want Bash, Read, Write, Grep, etc.
- **Claude-specific**: Need Claude's advanced agentic features

## Limitations

1. **No built-in tools**: You must provide all tools yourself (unlike the Claude agent, which ships Bash, Read, Write, etc.)
2. **No file system access**: Can't read or write files unless you implement those tools yourself (see the sketch below)
3. **No interactive mode**: Single-shot sessions only (no `run_interactive`)
4. **Tool handlers required**: Every tool must have a Python handler function

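As limitation 2 suggests, file access can be added by supplying your own tool via the `Tool` API shown above. A minimal sketch - the name, schema, and handler here are hypothetical, and a real handler should restrict which paths it will open:

```python
import multillm

read_file = multillm.Tool(
    name="read_file",
    description="Read a UTF-8 text file and return its contents",
    parameters={
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Path to the file"}
        },
        "required": ["path"]
    },
    # Demo handler: no sandboxing or path validation
    handler=lambda args: {"content": open(args["path"], encoding="utf-8").read()}
)
```
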
## Examples

### Calculator Agent

```python
import asyncio
import multillm

calculate = multillm.Tool(
    name="calculate",
    description="Evaluate a mathematical expression",
    parameters={
        "type": "object",
        "properties": {
            "expression": {"type": "string"}
        },
        "required": ["expression"]
    },
    # Demo only: eval() on model-supplied input is unsafe in production
    handler=lambda args: {"result": eval(args["expression"])}
)

async def main():
    client = multillm.Client()

    async for msg in client.run(
        "agentwrap/openai/gpt-4",
        "What's (125 + 75) * 3?",
        tools=[calculate]
    ):
        if msg.type == "text":
            print(msg.content)

asyncio.run(main())
```

### Multi-Tool Agent

```python
import asyncio
import multillm
from datetime import datetime

get_time = multillm.Tool(
    name="get_current_time",
    description="Get the current time",
    parameters={"type": "object", "properties": {}},
    handler=lambda args: {"time": datetime.now().isoformat()}
)

get_weather = multillm.Tool(
    name="get_weather",
    description="Get weather for a location",
    parameters={
        "type": "object",
        "properties": {
            "location": {"type": "string"}
        },
        "required": ["location"]
    },
    # Stubbed response for the demo; a real handler would query a weather API
    handler=lambda args: {"temp": 72, "condition": "sunny"}
)

async def main():
    client = multillm.Client()

    async for msg in client.run(
        "agentwrap/google/gemini-pro",
        "What time is it and what's the weather in Tokyo?",
        tools=[get_time, get_weather]
    ):
        if msg.type == "text":
            print(msg.content)

asyncio.run(main())
```

## License

MIT

## Contributing

Contributions welcome! Please see the main multillm repository for guidelines.

## See Also

- [multillm](https://github.com/yourusername/multillm) - Main library
- [multillm-claude](https://github.com/yourusername/multillm-claude) - Claude agent provider