A Python library and command line client for https://www.mcp.run. It lets you browse and run mcp.run tools directly, or make them available to AI models inside chat conversations.
- List Tools: Browse available tools and their capabilities
- Direct Tool Execution: Run tools with specific inputs without LLM interaction
- Tool Integration: Use tools seamlessly within AI chat conversations
- Ollama: https://ollama.com/
- Claude: https://www.anthropic.com/api
- OpenAI: https://openai.com/api/
- Gemini: https://ai.google.dev/
- Llamafile: https://github.com/Mozilla-Ocho/llamafile
- Real-time chat interface with AI models
- Tool suggestion and execution within conversations
- Support for both local and cloud-based AI providers
- uv
- npm
- ollama (optional)
You will need to get an mcp.run session ID:
$ npx --yes -p @dylibso/mcpx gen-session
Login successful!
Session: kabA7w6qH58H7kKOQ5su4v3bX_CeFn4k.Y4l/s/9dQwkjv9r8t/xZFjsn2fkLzf+tkve89P1vKhQ
Then set the MCP_RUN_SESSION_ID environment variable:
$ export MCP_RUN_SESSION_ID=kabA7w6qH58H7kKOQ5su4v3bX_CeFn4k.Y4l/s/9dQwkjv9r8t/xZFjsn2fkLzf+tkve89P1vKhQ
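The Python client shown later reads this variable itself. For scripts that want a clearer failure mode when it is missing, a minimal sketch (where `get_session_id` is a hypothetical helper, not part of mcpx):

```python
import os

def get_session_id() -> str:
    """Hypothetical helper: fetch the mcp.run session ID or fail loudly."""
    session_id = os.environ.get("MCP_RUN_SESSION_ID")
    if not session_id:
        raise RuntimeError(
            "MCP_RUN_SESSION_ID is not set; generate one with "
            "`npx --yes -p @dylibso/mcpx gen-session`"
        )
    return session_id
```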
Using uv:
$ uv add git+https://github.com/dylibso/mcpx-py
Or pip:
$ pip install git+https://github.com/dylibso/mcpx-py
from mcpx import Client  # Import the mcp.run client

client = Client()  # Create the client; this reads the
                   # `MCP_RUN_SESSION_ID` environment variable

# Call a tool with the given input
results = client.call("eval-js", {"code": "'Hello, world!'"})

# Iterate over the results
for content in results.content:
    print(content.text)
More examples can be found in the examples/ directory.
uv tool install git+https://github.com/dylibso/mcpx-py
Or from the root of the repo:
uv tool install .
mcpx-client can also be executed without being installed using uvx:
uvx --from git+https://github.com/dylibso/mcpx-py mcpx-client
mcpx-client --help
mcpx-client chat
mcpx-client list
mcpx-client tool eval-js '{"code": "2+2"}'
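The tool subcommand takes its input as a JSON string; the same payload can be built in Python before handing it to `Client.call`. A sketch, where `parse_tool_input` is a hypothetical helper (not part of mcpx):

```python
import json

def parse_tool_input(raw: str) -> dict:
    """Hypothetical helper mirroring the CLI: tool input must be a JSON object."""
    value = json.loads(raw)
    if not isinstance(value, dict):
        raise ValueError("tool input must be a JSON object")
    return value

# The CLI argument '{"code": "2+2"}' becomes the dict passed to Client.call:
tool_input = parse_tool_input('{"code": "2+2"}')
# client.call("eval-js", tool_input)  # requires a valid session
```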
- Sign up for an Anthropic API account at https://console.anthropic.com
- Get your API key from the console
- Set the environment variable:
ANTHROPIC_API_KEY=your_key_here
- Create an OpenAI account at https://platform.openai.com
- Generate an API key in your account settings
- Set the environment variable:
OPENAI_API_KEY=your_key_here
- Create a Gemini account at https://aistudio.google.com
- Generate an API key in your account settings
- Set the environment variable:
GEMINI_API_KEY=your_key_here
- Install Ollama from https://ollama.ai
- Pull your desired model:
ollama pull llama3.2
- No API key needed - runs locally
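Ollama listens on localhost:11434 by default. A stdlib-only sketch that builds (but does not send) a request against Ollama's documented /api/generate route, useful for checking the endpoint shape; the payload fields are illustrative:

```python
import json
import urllib.request

# Build a request to the local Ollama server; nothing is sent here.
payload = {"model": "llama3.2", "prompt": "Hello!", "stream": False}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send it once Ollama is running
```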
- Download a Llamafile model from https://github.com/Mozilla-Ocho/llamafile/releases
- Make the file executable:
chmod +x your-model.llamafile
- Run in JSON API mode:
./your-model.llamafile --json-api --host 127.0.0.1 --port 8080
- Use with the OpenAI provider pointing to http://localhost:8080
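Llamafile's server exposes an OpenAI-compatible chat endpoint, which is why the OpenAI provider can point at it. A stdlib-only sketch that builds (but does not send) such a request; the model name and message are illustrative, and the endpoint path assumes OpenAI-style routing:

```python
import json
import urllib.request

# Build an OpenAI-style chat request against the local llamafile server.
# Nothing is sent here; urlopen would require the server to be running.
payload = {
    "model": "local",  # assumption: llamafile accepts an arbitrary model name
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
```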