Chat with your Kubernetes Cluster using AI tools and IDEs like Claude and Cursor!
A Model Context Protocol (MCP) server for Kubernetes that enables AI assistants like Claude, Cursor, and others to interact with Kubernetes clusters through natural language.
[Demo: kubectl-mcp-tool in action with Claude, Cursor, and Windsurf]

The Kubectl MCP Tool implements the Model Context Protocol (MCP), enabling AI assistants to interact with Kubernetes clusters through a standardized interface. The architecture consists of:
The tool operates in two modes:
For detailed installation instructions, please see the Installation Guide.
You can install kubectl-mcp-tool directly from PyPI:
pip install kubectl-mcp-tool
For a specific version:
pip install kubectl-mcp-tool==1.1.1
The package is available on PyPI: https://pypi.org/project/kubectl-mcp-tool/1.1.1/
# Install latest version from PyPI
pip install kubectl-mcp-tool
# Or install development version from GitHub
pip install git+https://github.com/rohitg00/kubectl-mcp-server.git
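If you want to keep the tool and its dependencies isolated from your system Python packages, you can install it into a virtual environment first. This is a minimal sketch; the environment name .venv is arbitrary:

# Create and activate a virtual environment (optional)
python3 -m venv .venv
source .venv/bin/activate

# Install the tool inside the environment
pip install kubectl-mcp-tool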
# Clone the repository
git clone https://github.com/rohitg00/kubectl-mcp-server.git
cd kubectl-mcp-server
# Install in development mode
pip install -e .
After installation, verify the tool is working correctly:
kubectl-mcp --help
Note: This tool is designed to work as an MCP server that AI assistants connect to, not as a direct kubectl replacement. The primary command available is kubectl-mcp serve, which starts the MCP server.
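A quick way to confirm the server itself starts is to run it in the foreground and stop it with Ctrl+C once it is up:

# Start the MCP server in the foreground (stop with Ctrl+C)
kubectl-mcp serve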
If you prefer using Docker, a pre-built image is available on Docker Hub:
# Pull the latest image
docker pull rohitghumare64/kubectl-mcp-server:latest
The server inside the container listens on port 8000. Bind any free host port to 8000 and mount your kubeconfig:
# Replace 8081 with any free port on your host
# Mount your local ~/.kube directory for cluster credentials
docker run -p 8081:8000 \
-v $HOME/.kube:/root/.kube \
rohitghumare64/kubectl-mcp-server:latest
-p 8081:8000 maps host port 8081 → container port 8000.
-v $HOME/.kube:/root/.kube mounts your kubeconfig so the server can reach the cluster.

If you want to build and push a multi-arch image (so it runs on both x86_64 and Apple Silicon), use Docker Buildx:
# Ensure Buildx and QEMU are installed once per machine
# docker buildx create --name multiarch --use
# docker buildx inspect --bootstrap
# Build and push for linux/amd64 and linux/arm64
# (replace <your_username> if you're publishing to your own registry)
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t rohitghumare64/kubectl-mcp-server:latest \
--push .
The published image will contain a manifest list with both architectures, and Docker will automatically pull the correct variant on each machine.
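To confirm that both architectures made it into the published manifest list, you can inspect the tag with Buildx (shown here against the image name used above):

# List the platforms included in the pushed manifest
docker buildx imagetools inspect rohitghumare64/kubectl-mcp-server:latest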
The MCP server is allowed to access these paths to read your Kubernetes configuration:
run:
  volumes:
    - '{{kubectl-mcp-server.kubeconfig}}:/root/.kube'
config:
  description: The MCP server is allowed to access this path
  parameters:
    type: object
    properties:
      kubeconfig:
        type: string
        default: $HOME/.kube
    required:
      - kubeconfig
This configuration allows users to add their kubeconfig directory to the container, enabling the MCP server to authenticate with their Kubernetes cluster.
The MCP Server (kubectl_mcp_tool.mcp_server) is a robust implementation built on the FastMCP SDK that provides enhanced compatibility across different AI assistants:

Note: If you encounter any errors with the MCP Server implementation, you can fall back to the minimal wrapper by replacing kubectl_mcp_tool.mcp_server with kubectl_mcp_tool.minimal_wrapper in your configuration. The minimal wrapper provides basic capabilities with a simpler implementation.
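Because the assistant configurations below launch the server as a Python module, switching to the fallback only changes the module name in the args. You can also run the fallback by hand as a quick smoke test, assuming your kubeconfig is already set up:

# Run the fallback minimal wrapper directly, the same way an assistant would launch it
export KUBECONFIG=$HOME/.kube/config
python -m kubectl_mcp_tool.minimal_wrapper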
Direct Configuration
{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.mcp_server"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config",
        "PATH": "/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin",
        "MCP_LOG_FILE": "/path/to/logs/debug.log",
        "MCP_DEBUG": "1"
      }
    }
  }
}
Key Environment Variables

MCP_LOG_FILE: Path to log file (recommended to avoid stdout pollution)
MCP_DEBUG: Set to "1" for verbose logging
MCP_TEST_MOCK_MODE: Set to "1" to use mock data instead of a real cluster
KUBECONFIG: Path to your Kubernetes config file
KUBECTL_MCP_LOG_LEVEL: Set to "DEBUG", "INFO", "WARNING", or "ERROR"

Testing the MCP Server
You can test if the server is working correctly with:
python -m kubectl_mcp_tool.simple_ping
This will attempt to connect to the server and execute a ping command.
Alternatively, you can directly run the server with:
python -m kubectl_mcp_tool
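The environment variables above can be combined with a direct run. For example, mock mode lets you exercise the server without touching a real cluster; this is a sketch and the log path is only illustrative:

# Run the server against mock data with verbose logging
export MCP_TEST_MOCK_MODE=1
export MCP_DEBUG=1
export MCP_LOG_FILE=/tmp/kubectl-mcp-debug.log
python -m kubectl_mcp_tool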
Add the following to your Claude Desktop configuration at ~/.config/claude/mcp.json (Windows: %APPDATA%\Claude\mcp.json):
{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.mcp_server"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config"
      }
    }
  }
}
Add the following to your Cursor AI settings under MCP by adding a new global MCP server:
{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.mcp_server"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config",
        "PATH": "/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/homebrew/bin"
      }
    }
  }
}
Save this configuration to ~/.cursor/mcp.json for global settings.

Note: Replace /path/to/your/.kube/config with the actual path to your kubeconfig file. On most systems, this is ~/.kube/config.
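If you are not sure which kubeconfig is in use, you can check from a terminal before filling in the path (kubectl falls back to ~/.kube/config when KUBECONFIG is unset):

# Show the kubeconfig path kubectl will use
echo "${KUBECONFIG:-$HOME/.kube/config}"

# Confirm it points at the expected cluster
kubectl config current-context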
Add the following to your Windsurf configuration at ~/.config/windsurf/mcp.json (Windows: %APPDATA%\WindSurf\mcp.json):
{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.mcp_server"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config"
      }
    }
  }
}
For automatic configuration of all supported AI assistants, run the provided installation script:
bash install.sh