MCP Server

Connect your AI assistant to 200+ language models through the Model Context Protocol. Works with Claude Code, Cursor, and any MCP-compatible client.

200+ Models

Access models from OpenAI, Anthropic, Google & more

Image Generation

Generate images directly from your AI assistant

Unified API

One endpoint for all providers and capabilities

Easy Setup

Works with Claude Code, Cursor, and any MCP client

Available Tools

chat

Params: model, messages, temperature, +1 more

Send messages to any LLM and get responses. Supports 200+ models from OpenAI, Anthropic, Google, and more.

{
  "model": "gpt-4o",
  "messages": [{ "role": "user", "content": "Hello!" }]
}
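
The messages array follows the familiar chat-completions shape, so multi-turn conversations can carry system and assistant turns as well. A minimal Python sketch of the same arguments object (the role names are the standard chat roles; the conversation content is illustrative):

```python
# Arguments for the "chat" tool, mirroring the JSON example above but
# with a multi-turn conversation. Later turns append to the same list.
chat_args = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "What is MCP?"},
        {"role": "assistant", "content": "The Model Context Protocol."},
        {"role": "user", "content": "Expand on that."},
    ],
}

# Roles alternate after the optional leading system message.
roles = [m["role"] for m in chat_args["messages"]]
```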

generate-image

Params: prompt, model, size, +1 more

Generate images from text prompts using AI image models like Qwen Image. Returns images directly in the response.

{
  "prompt": "A serene mountain landscape at sunset",
  "model": "qwen-image-plus",
  "size": "1024x1024"
}
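
Images come back as base64-encoded inline data; in MCP, image content blocks have the shape {"type": "image", "data": <base64>, "mimeType": ...}, though the exact result envelope can vary by client. A minimal sketch of decoding such a block to disk, using a stand-in payload rather than a real response:

```python
import base64
import os
import tempfile

def save_image_block(block: dict, path: str) -> int:
    """Decode an MCP-style image content block and write its bytes to
    `path`. Returns the number of bytes written."""
    if block.get("type") != "image":
        raise ValueError("not an image content block")
    raw = base64.b64decode(block["data"])
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)

# Stand-in payload; a real response would carry PNG or JPEG bytes.
demo = {
    "type": "image",
    "data": base64.b64encode(b"\x89PNG demo bytes").decode(),
    "mimeType": "image/png",
}
out_path = os.path.join(tempfile.gettempdir(), "demo.png")
save_image_block(demo, out_path)
```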

generate-nano-banana

Params: prompt, filename, aspect_ratio

Generate images with Gemini 3 Pro Image Preview. Returns inline image data. Set UPLOAD_DIR on the server to also save images to disk.

{
  "prompt": "A pixel-art cat sitting on a rainbow",
  "filename": "hero-image.png",
  "aspect_ratio": "16:9"
}

list-models

Params: family, limit, include_deactivated

Discover available models with their capabilities, pricing, and provider information.

{
  "family": "openai",
  "limit": 10
}

list-image-models

Params: none

Get a list of all available image generation models with pricing and usage examples.

// No parameters required
// Returns: qwen-image-plus, qwen-image-max, etc.

Quick Start

Claude Code
claude mcp add --transport http --scope user llmgateway https://api.llmgateway.io/mcp \
  --header "Authorization: Bearer YOUR_API_KEY"

Run this command in your terminal to add the MCP server at user scope, making it available across all your projects.

Alternative: Manual JSON configuration
Add to ~/.claude/claude_desktop_config.json
{
  "mcpServers": {
    "llmgateway": {
      "url": "https://api.llmgateway.io/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
Codex
codex mcp add llmgateway --url https://api.llmgateway.io/mcp \
  --bearer-token-env-var LLM_GATEWAY_API_KEY

Set LLM_GATEWAY_API_KEY env var first, then run this command.

Alternative: Manual TOML configuration
Add to ~/.codex/config.toml
[mcp_servers.llmgateway]
url = "https://api.llmgateway.io/mcp"
bearer_token_env_var = "LLM_GATEWAY_API_KEY"
Cursor
Add to ~/.cursor/mcp.json
{
  "mcpServers": {
    "llmgateway": {
      "url": "https://api.llmgateway.io/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

Direct API Call
curl -X POST https://api.llmgateway.io/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "chat",
      "arguments": {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello!"}]
      }
    }
  }'
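
The same JSON-RPC 2.0 envelope can be built programmatically. A minimal Python sketch using only the endpoint and headers shown in the curl example above (the helper names build_tool_call and call_tool are ours, not part of the API):

```python
import json
import urllib.request

API_URL = "https://api.llmgateway.io/mcp"

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Wrap tool arguments in the JSON-RPC 2.0 envelope the MCP
    endpoint expects, matching the curl example above."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

def call_tool(api_key: str, name: str, arguments: dict) -> dict:
    """POST a tools/call request and return the parsed JSON-RPC response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_tool_call(name, arguments)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_tool_call(
    "chat",
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]},
)
```

Calling `call_tool("YOUR_API_KEY", "chat", {...})` performs the same request as the curl command.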

Ready to get started?

Get your API key and start using LLM Gateway with your favorite AI assistant.