Leveraging MCP
Chat with your ZenML server
The ZenML server supports a chat interface that allows you to interact with the server using natural language through the Model Context Protocol (MCP). This feature enables you to query your ML pipelines, analyze performance metrics, and generate reports using conversational language instead of traditional CLI commands or dashboard interfaces.
The Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to Large Language Models (LLMs). Think of it as a "USB-C port for AI applications" - providing a standardized way to connect AI models to different data sources and tools.
MCP follows a client-server architecture where:
- MCP Clients: Programs like Claude Desktop or IDEs (Cursor, Windsurf, etc.) that want to access data through MCP
- MCP Servers: Lightweight programs that expose specific capabilities through the standardized protocol. Our implementation is an MCP server that connects to your ZenML server.
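Concretely, clients and servers exchange JSON-RPC 2.0 messages. For example, a client discovers what a server can do by sending a `tools/list` request; the tool shown in the reply below is illustrative, not the ZenML MCP server's actual tool list:

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "list_pipeline_runs",
        "description": "List pipeline runs on the ZenML server",
        "inputSchema": {"type": "object", "properties": {"status": {"type": "string"}}}
      }
    ]
  }
}
```

The client then invokes individual tools with `tools/call` requests, and the server returns the results as message content.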
The ZenML MCP Server offers several advantages for developers and teams:
- Natural Language Interaction: Query your ZenML metadata, code, and logs using conversational language instead of memorizing CLI commands or navigating dashboard interfaces.
- Contextual Development: Get insights about failing pipelines or performance metrics without switching away from your development environment.
- Accessible Analytics: Generate custom reports and visualizations about your pipelines directly through conversation.
- Streamlined Workflows: Trigger pipeline runs via natural language requests when you're ready to execute.
The ZenML MCP server provides access to core read functionality from your ZenML server, allowing you to get live information about:
- Users
- Stacks
- Pipelines
- Pipeline runs
- Pipeline steps
- Services
- Stack components
- Flavors
- Pipeline run templates
- Schedules
- Artifacts (metadata about data artifacts, not the data itself)
- Service Connectors
- Step code
- Step logs (if the step was run on a cloud-based stack)
It also allows you to trigger new pipeline runs through existing run templates.
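Under the hood, each of these capabilities is exposed to the MCP client as a named tool that the server dispatches on request. The following is a minimal, self-contained sketch of that dispatch pattern; the tool name, handler, and canned run data are hypothetical stand-ins, not the actual ZenML MCP server implementation, which queries a live ZenML server:

```python
from typing import Any, Callable, Dict, Optional

# Hypothetical tool registry: maps an MCP-style tool name to its handler.
TOOLS: Dict[str, Callable[..., Any]] = {}

def tool(name: str):
    """Register a function as a tool under the given name."""
    def decorator(fn):
        TOOLS[name] = fn
        return fn
    return decorator

@tool("list_pipeline_runs")
def list_pipeline_runs(status: Optional[str] = None):
    # The real server would query the ZenML API; here we use canned data.
    runs = [
        {"name": "training_pipeline-2024_01_01", "status": "completed"},
        {"name": "training_pipeline-2024_01_02", "status": "failed"},
    ]
    if status:
        runs = [r for r in runs if r["status"] == status]
    return runs

def call_tool(name: str, **arguments):
    """Dispatch a tool invocation, as an MCP server does for `tools/call`."""
    return TOOLS[name](**arguments)

failed = call_tool("list_pipeline_runs", status="failed")
print(failed)
```

A "trigger pipeline run" tool fits the same shape: the handler would locate an existing run template by name and submit it, rather than returning read-only data.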
The setup process for the ZenML MCP Server is straightforward. You will need:

- Access to a ZenML Cloud server
- A local clone of the repository

Then:

1. Create an MCP config file with your ZenML server details
2. Configure your preferred MCP client (Claude Desktop or Cursor)
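For Claude Desktop, the MCP config file is a JSON file that tells the client how to launch the server and how it should reach ZenML. A sketch of what such an entry might look like is below; the script path, server URL, and API key are placeholders, and you should check the repository's README for the exact entry it expects:

```json
{
  "mcpServers": {
    "zenml": {
      "command": "uv",
      "args": ["run", "/path/to/zenml_mcp_server.py"],
      "env": {
        "ZENML_STORE_URL": "https://your-zenml-server.example.com",
        "ZENML_STORE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

`ZENML_STORE_URL` and `ZENML_STORE_API_KEY` are the standard environment variables ZenML uses to connect to a server with a service account API key.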
Once set up, you can interact with your ZenML infrastructure through natural language. Here are some example prompts you can try:
- Pipeline Analysis Report
- Comparative Pipeline Analysis
- Stack Component Analysis
For the most up-to-date setup instructions and code, please refer to the repository. We recommend installing the uv package manager locally and using it to install the dependencies, since it offers the most reliable and fastest setup experience.

We invite you to try the ZenML MCP server and share your experiences with us. We're particularly interested in:

- Whether you need additional write actions (creating stacks, registering components, etc.)
- Examples of how you're using the server in your workflows
- Suggestions for additional features or improvements

Contributions and pull requests are always welcome!