Chat with your Kubernetes Cluster
Hey there! Ever wished you could just chat with your Kubernetes cluster instead of wrestling with endless commands? Well, guess what? Now you can, thanks to a bit of magic from OpenAI's GPT.
In this post, I'm gonna show you a super simple Python script that lets you do just that. Whether you're a coding newbie or a seasoned pro, you'll see how easy it is to make your Kubernetes cluster understand plain English.
So, grab your favorite snack, and let's get this coding party started. It's gonna be fun, I promise!
Design
Here is the workflow of the chatbot system, covering the interactions between the User, the Chatbot, the OpenAI API, and the Kubernetes cluster:
- User to Chatbot: The user inputs a command or query.
- Chatbot to OpenAI API: The chatbot sends the query to the OpenAI API.
- Conditional Interaction with Kubernetes:
  - If the OpenAI API requests the execution of a Kubernetes command, the chatbot executes it against the Kubernetes cluster and returns the output back to the OpenAI API.
  - If no Kubernetes command is needed, the OpenAI API generates a response directly.
- Final Response to User: The OpenAI API sends the final response to the chatbot, which then displays it to the user.
Environment Setup
Before we dive into the tutorial, ensure you have Python 3 installed and accessible in your terminal. We'll begin by creating a virtual environment and installing two essential libraries: openai and colorama. This setup keeps the project's dependencies isolated from the rest of your system.
mkdir k8s-chat
cd k8s-chat
# set up env
python3 -m venv .venv
source .venv/bin/activate
# install libraries
pip install openai
pip install colorama
# create the python file
# for our code
touch chat_with_k8s.py
Code
Let's start by writing a simple Python function that can execute arbitrary kubectl commands:
Executing kubectl commands:
import subprocess

def execute_kubectl_cmd(cmd):
    # add the kubectl prefix if it's not already there
    if not cmd.startswith("kubectl"):
        cmd = f"kubectl {cmd}"
    # guard rail: refuse destructive delete operations
    if cmd.startswith("kubectl delete"):
        return "I'm sorry, deleting resources is disabled."
    try:
        output = subprocess.check_output(cmd, shell=True)
        return output.decode("utf-8")
    except subprocess.CalledProcessError as e:
        return f"Error executing kubectl command: {e}"
This function is a utility for safely executing Kubernetes commands from a Python script. It ensures that commands are correctly formatted, prevents potentially destructive delete operations, and provides feedback on the success or failure of command execution.
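To see it in action on its own, here's a quick, hedged sanity check. It assumes kubectl is on your PATH and can reach a cluster; the output will obviously vary by cluster:

# Quick sanity check for execute_kubectl_cmd (assumes a reachable cluster)
print(execute_kubectl_cmd("get nodes"))
print(execute_kubectl_cmd("delete pod my-pod"))  # -> refused by the guard rail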
Tool Definition:
Let's then define the tool, following the OpenAI function calling documentation:
import json
import subprocess

from colorama import Back, Fore, init
from openai import OpenAI

client = OpenAI()
model_name = "gpt-3.5-turbo-0125"

tools = [
    {
        "type": "function",
        "function": {
            "name": "execute_kubectl_cmd",
            "description": "execute the kubectl command against the current kubernetes cluster",
            "parameters": {
                "type": "object",
                "properties": {
                    "cmd": {
                        "type": "string",
                        "description": "the kubectl command to execute",
                    },
                },
                "required": ["cmd"],
            },
        },
    }
]
The client object initializes the OpenAI API, and model_name is simply a variable holding the model we want to use. A list named tools is defined with a single tool (execute_kubectl_cmd), detailing its purpose and required parameters. This tool lets the model run Kubernetes commands on our behalf.
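To make the wiring concrete, here's a hypothetical example of the kind of tool call the model might produce for a prompt like "list the pods in kube-system". The exact command the model chooses will vary; the point is that the arguments arrive as a JSON string, which is why the code in the next section parses them with json.loads:

# Hypothetical tool call the model might emit (illustrative only):
#   function name: "execute_kubectl_cmd"
#   arguments:     '{"cmd": "get pods -n kube-system"}'
import json

raw_arguments = '{"cmd": "get pods -n kube-system"}'
args = json.loads(raw_arguments)
print(args["cmd"])  # -> get pods -n kube-system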
Processing Chat Input:
This function handles the chat completion and the interaction with the OpenAI API.
def chat_completion(user_input):
    messages = [{"role": "user", "content": user_input}]
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls
    if tool_calls:
        available_functions = {
            "execute_kubectl_cmd": execute_kubectl_cmd,
        }
        messages.append(response_message)
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(
                cmd=function_args.get("cmd"),
            )
            # feed the command output back to the model
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )
        second_response = client.chat.completions.create(
            model=model_name,
            messages=messages,
        )
        return second_response.choices[0].message.content
    # no tool call was requested; return the model's direct answer
    return response_message.content
Below are the steps the function takes:
- The chat_completion function processes user input through the AI model. It prepares the input message, sends it to the OpenAI client, and retrieves a response.
- If the response includes a call to the execute_kubectl_cmd tool, it executes the command and appends both the AI's response and the command's output to the conversation.
- It then sends this extended conversation back to the model for further processing and returns the final message content to be displayed to the user.
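Here's a quick way to try the function on its own (this assumes OPENAI_API_KEY is set and kubectl can reach a cluster; the model's exact wording will vary):

# Standalone try-out of chat_completion (answer text will vary)
print(chat_completion("How many pods are running in the kube-system namespace?"))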
Running the Chatbot:
The run_conversation function initiates an interactive chat session. It welcomes the user and enters a loop to accept user input.
def run_conversation():
    print("Welcome to the Kubernetes chatbot!")
    print("You can ask me anything about your Kubernetes cluster.")
    while True:
        print(Back.CYAN + Fore.BLACK + " You: ", end="")
        print(" > ", end="")
        user_input = input()
        if user_input.lower() == "exit" or user_input.lower() == "q":
            print("Goodbye!")
            break
        else:
            resp = chat_completion(user_input)
            print("")
            print(Back.GREEN + Fore.BLACK + " Assistant: ", end="")
            print(" > ", end="")
            print(f"{resp}")
            print("")
If the user types "exit" or "q", the chatbot ends the session. Otherwise, it processes the user's input through the chat_completion function.
Main Execution
Finally, the script initializes Colorama's auto-reset feature (to prevent color codes from leaking into unrelated terminal output) and starts the conversation loop.
def main():
    init(autoreset=True)
    run_conversation()

if __name__ == "__main__":
    main()
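If you're curious what autoreset=True actually buys us, here is a tiny standalone sketch: with it enabled, colorama restores the default terminal style after every print, so we never have to emit a manual reset.

# Without autoreset=True, the cyan background would bleed into the next line
from colorama import Back, Fore, init

init(autoreset=True)
print(Back.CYAN + Fore.BLACK + " You: ")  # printed with colors
print("back to the default style")        # no Style.RESET_ALL needed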
To run the code, just make sure the OPENAI_API_KEY environment variable is set and that you are running from the venv created at the beginning of the post.
export OPENAI_API_KEY="xxxxxxxxxx"
python chat_with_k8s.py
Demo
We will use Kind to run a local Kubernetes cluster and chat with it.
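If you have kind installed, something like the following should get a throwaway local cluster up (the cluster name is arbitrary):

# create a local cluster with kind, then verify kubectl can reach it
kind create cluster --name k8s-chat
kubectl get nodes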
Conclusion
Developing a Python-based chatbot for Kubernetes operations is both feasible and straightforward. Our script leverages essential libraries such as json for JSON data manipulation, subprocess for running shell commands, and colorama for enhancing terminal output with color.
Utilizing the openai library, we seamlessly interact with the GPT model, particularly through function calling, which streamlines the process of wiring the model up to our own functions and APIs. This demonstrates the practicality of integrating AI into operational scripts.
For the full code, you can check the git repository.