Version: v1.4.1

Model Context Protocol

Connect AI agents directly to the Xplainable platform. The xplainable MCP server exposes training, prediction, deployment, and monitoring as tool calls that any MCP-compatible client can invoke.

What is MCP?

The Model Context Protocol (MCP) is an open standard that lets AI assistants call external tools in a structured, safe way. An MCP server advertises a set of tools with typed parameters. Any MCP-compatible client -- Claude Desktop, Cursor, Windsurf, or your own agent -- can discover those tools and call them on behalf of the user.
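To make "tools with typed parameters" concrete, here is a minimal sketch of what a single tool advertisement looks like. MCP declares tool inputs as JSON Schema; the tool name `predict` and its fields are illustrative, not the actual Xplainable schema.

```python
import json

# Illustrative sketch of one tool as an MCP server might advertise it.
# Tool name and parameters are hypothetical; the shape (name, description,
# inputSchema as JSON Schema) follows the MCP tool definition format.
tool = {
    "name": "predict",
    "description": "Score a single record against a deployed model.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "model_id": {"type": "string"},
            "record": {"type": "object"},
        },
        "required": ["model_id", "record"],
    },
}

# Clients receive definitions like this as JSON during tool discovery,
# so the assistant knows which parameters each call requires.
advertised = json.loads(json.dumps(tool))
```

Because the parameter types are declared up front, the client can validate a call before it ever reaches the server.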

Connect in seconds

The fastest way to get started is with the hosted MCP endpoint — no installation required. The server handles authentication via OAuth (Auth0), so you'll be prompted to log in with your Xplainable account on first use.

Hosted MCP Endpoint: zero install, OAuth login, always up to date
https://mcp.xplainable.io/mcp

Add to Claude Desktop

Add the hosted server to your Claude Desktop config. Authentication is handled automatically via OAuth — you'll be redirected to log in with your Xplainable account on first connection.

claude_desktop_config.json
{
  "mcpServers": {
    "xplainable": {
      "type": "url",
      "url": "https://mcp.xplainable.io/mcp"
    }
  }
}

Add to Claude Code

claude mcp add xplainable --transport http https://mcp.xplainable.io/mcp
No API key needed

The hosted endpoint uses OAuth 2.0 (Auth0) — your client will open a browser for you to log in with your Xplainable account. No API key or bearer token is required. Each user gets their own isolated session.

Server Capabilities

Agentic Auto-Train

End-to-end model training from a single prompt: dataset summarization, feature engineering, training, and deployment.

Inference & Prediction

Score single records or stream batch predictions against deployed models.

Model Management

List, inspect, and manage models, versions, features, and evaluation metrics.

Cloud Operations

Deploy models, manage API keys, configure monitors, and track usage.

Run locally (alternative)

If you prefer to run the server on your own machine:

pip install xplainable-mcp

Configuration

The server is configured through environment variables.

XPLAINABLE_API_KEY (str, required)
  API key from platform.xplainable.io
XPLAINABLE_HOST (str, default: https://platform.xplainable.io)
  API hostname
XPLAINABLE_ORG_ID (str, optional)
  Organization ID
XPLAINABLE_TEAM_ID (str, optional)
  Team ID
ENABLE_WRITE_TOOLS (bool, default: true)
  Enable create, update, and delete tools
RATE_LIMIT_ENABLED (bool, default: true)
  Enable rate limiting

Local server config

If running locally, pick your client and copy the config. The server registers itself, picks up your API key from env, and exposes its tools to the assistant.

~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "xplainable": {
      "command": "uvx",
      "args": ["xplainable-mcp"],
      "env": {
        "XPLAINABLE_API_KEY": "your-api-key"
      }
    }
  }
}

How It Works

Each MCP server exposes a set of tools (callable functions with typed schemas) and resources (read-only data the assistant can reference). The host (Claude, Cursor, …) negotiates capabilities at startup, then routes natural-language requests to tool calls via JSON-RPC.
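Under the hood, those routed requests are JSON-RPC 2.0 messages. The sketch below builds a `tools/call` request by hand (the method name comes from the MCP specification; the tool name and arguments are illustrative) to show what actually crosses the stdio or HTTP transport.

```python
import json

# An MCP tool invocation as a JSON-RPC 2.0 request. "tools/call" is the
# MCP-standard method; the tool name "predict" and its arguments are
# hypothetical examples, not the actual Xplainable schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "predict",
        "arguments": {"model_id": "churn-v2", "record": {"tenure": 14}},
    },
}

wire = json.dumps(request)   # the serialized message on the transport
decoded = json.loads(wire)   # what the server parses on arrival
```

Tool discovery works the same way, with `method` set to `tools/list` and no tool-specific params.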

[Diagram] MCP Client (Claude · Cursor · Cline) ⇄ JSON-RPC ⇄ Xplainable MCP (local stdio server) ⇄ HTTPS ⇄ Xplainable Cloud (api.xplainable.io): predict · train · explain
  1. The AI agent discovers the available tools by calling list_tools or get_workflows.
  2. The user issues a natural-language prompt (e.g. "Train a churn model on the telco dataset").
  3. The agent maps the request to one or more MCP tool calls with the correct parameters.
  4. The MCP server authenticates with the Xplainable Cloud API and executes each operation.
  5. Results flow back to the agent, which summarizes them for the user.
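Step 5 in miniature: an MCP tool result arrives as a list of content blocks, and the agent pulls out the text parts to summarize. The payload below is mocked, not real platform output, but the `content`/`isError` shape follows the MCP tool-result format.

```python
# Mocked tool result in the MCP shape: a list of typed content blocks
# plus an error flag. The prediction text itself is invented for the demo.
result = {
    "content": [
        {"type": "text", "text": "Prediction: churn probability 0.72"},
    ],
    "isError": False,
}

# The agent keeps only the text blocks and joins them into a summary.
texts = [b["text"] for b in result["content"] if b["type"] == "text"]
summary = " ".join(texts)
```

Non-text block types (images, embedded resources) can appear in the same list, which is why the filter on `type` matters.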

Next Steps

See the Tool Reference for a complete list of every tool, its parameters, and descriptions.