Model Context Protocol
Connect AI agents directly to the Xplainable platform. The xplainable MCP server exposes training, prediction, deployment, and monitoring as tool calls that any MCP-compatible client can invoke.
The Model Context Protocol (MCP) is an open standard that lets AI assistants call external tools in a structured, safe way. An MCP server advertises a set of tools with typed parameters. Any MCP-compatible client -- Claude Desktop, Cursor, Windsurf, or your own agent -- can discover those tools and call them on behalf of the user.
Connect in seconds
The fastest way to get started is with the hosted MCP endpoint — no installation required. The server handles authentication via OAuth (Auth0), so you'll be prompted to log in with your Xplainable account on first use.
https://mcp.xplainable.io/mcp
Add to Claude Desktop
Add the hosted server to your Claude Desktop config. Authentication is handled automatically via OAuth — you'll be redirected to log in with your Xplainable account on first connection.
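As a sketch, one common way to register a remote endpoint in claude_desktop_config.json is through the mcp-remote bridge package. The exact schema, and whether your Claude Desktop version supports remote servers natively instead, are assumptions to verify against Anthropic's documentation:

```json
{
  "mcpServers": {
    "xplainable": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.xplainable.io/mcp"]
    }
  }
}
```

Restart Claude Desktop after editing the config; the OAuth login prompt appears on first connection.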
Add to Claude Code
The hosted endpoint uses OAuth 2.0 (Auth0) — your client will open a browser for you to log in with your Xplainable account. No API key or bearer token is required. Each user gets their own isolated session.
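A sketch of the one-line setup, assuming the claude mcp add subcommand available in current Claude Code CLI versions (run claude mcp add --help to confirm the flags for your version):

```shell
claude mcp add --transport http xplainable https://mcp.xplainable.io/mcp
```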
Server Capabilities
- End-to-end model training from a single prompt: dataset summarization, feature engineering, training, and deployment.
- Score single records or stream batch predictions against deployed models.
- List, inspect, and manage models, versions, features, and evaluation metrics.
- Deploy models, manage API keys, configure monitors, and track usage.
Run locally (alternative)
If you prefer to run the server on your own machine:
Configuration
The server is configured through environment variables.
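The exact variable names are not listed on this page; as a sketch, a typical setup exports an API key and the platform base URL (hypothetical names -- check the server's README for the real keys):

```shell
# Hypothetical variable names, for illustration only.
export XPLAINABLE_API_KEY="your-api-key"
export XPLAINABLE_HOST="https://platform.xplainable.io"
```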
https://platform.xplainable.io
Local server config
If running locally, pick your client and copy the config. The server registers itself, picks up your API key from env, and exposes its tools to the assistant.
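A minimal sketch of a local entry, assuming the server ships as an installable command named xplainable-mcp (hypothetical; substitute the actual command from the project's README):

```json
{
  "mcpServers": {
    "xplainable": {
      "command": "xplainable-mcp",
      "env": {
        "XPLAINABLE_API_KEY": "your-api-key"
      }
    }
  }
}
```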
~/Library/Application Support/Claude/claude_desktop_config.json
How It Works
Each MCP server exposes a set of tools (callable functions with typed schemas) and resources (read-only data the assistant can reference). The host (Claude, Cursor, …) negotiates capabilities at startup, then routes natural-language requests to tool calls via JSON-RPC.
- The AI agent discovers the available tools by calling list_tools or get_workflows.
- The user issues a natural-language prompt (e.g. "Train a churn model on the telco dataset").
- The agent maps the request to one or more MCP tool calls with the correct parameters.
- The MCP server authenticates with the Xplainable Cloud API and executes each operation.
- Results flow back to the agent, which summarises them for the user.
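On the wire, the tool-call step above is a JSON-RPC tools/call request. The method name and envelope come from the MCP specification; the tool name and arguments here are hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "train_model",
    "arguments": { "dataset": "telco", "target": "churn" }
  }
}
```

The server replies with a result containing content blocks, which the agent then summarises for the user.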
Next Steps
See the Tool Reference for a complete list of every tool, its parameters, and descriptions.