Introduction to the Kubectl OpenAI Plugin
The Kubectl OpenAI Plugin is a kubectl plugin that enhances the Kubernetes experience by using OpenAI's GPT models to generate and apply Kubernetes manifests. Its primary goal is to remove the hassle of searching for and assembling ad-hoc manifests, especially during development and testing.
Features and Demonstration
A demonstration of the tool is available in the project's repository. The demo highlights the plugin's integration with Kubernetes and its ability to streamline manifest creation.
Installation Methods
Homebrew
To install with Homebrew, tap the repository and install the formula:
brew tap sozercan/kubectl-ai https://github.com/sozercan/kubectl-ai
brew install kubectl-ai
Krew
For installation via Krew, add the custom plugin index and install:
kubectl krew index add kubectl-ai https://github.com/sozercan/kubectl-ai
kubectl krew install kubectl-ai/kubectl-ai
GitHub Release
Download the binary directly from the GitHub releases page. To use it as a kubectl plugin, ensure the kubectl-ai binary is in your PATH. Alternatively, the binary can be run in standalone mode.
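As a rough sketch of the manual install (the install directory is an assumption, and the actual download step from the releases page is elided), placing the binary on PATH might look like:

```shell
# Assumed install location; any directory already on PATH also works.
install_dir="$HOME/.local/bin"
mkdir -p "$install_dir"

# After downloading kubectl-ai from the releases page:
# mv ./kubectl-ai "$install_dir/" && chmod +x "$install_dir/kubectl-ai"

# Append the directory to PATH only if it is not already present.
case ":$PATH:" in
  *":$install_dir:"*) ;;
  *) export PATH="$PATH:$install_dir" ;;
esac
```

Once the binary is on PATH, kubectl discovers it automatically and it becomes available as kubectl ai.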
Prerequisites and Usage
Requirements
To use kubectl-ai, ensure you have a valid Kubernetes configuration and one of the following:
- An OpenAI API key, available through the OpenAI platform.
- An Azure OpenAI Service API key and endpoint.
- An OpenAI API-compatible endpoint, configured with options like AIKit or LocalAI.
Environment Setup
Configure your environment with the variables your provider requires (the endpoint and deployment name apply when using Azure OpenAI Service or an OpenAI API-compatible endpoint):
export OPENAI_API_KEY=<your_openai_key>
export OPENAI_DEPLOYMENT_NAME=<deployment/model_name>
export OPENAI_ENDPOINT=<endpoint_url>
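As an illustrative sketch (the helper function is ours, not part of the plugin, and it assumes all three variables from above are in use), a preflight check before invoking the plugin could look like:

```shell
# Illustrative preflight check (not part of kubectl-ai, bash syntax):
# verify the environment variables the plugin reads are set.
check_kubectl_ai_env() {
  local var missing=0
  for var in OPENAI_API_KEY OPENAI_DEPLOYMENT_NAME OPENAI_ENDPOINT; do
    if [ -z "${!var}" ]; then
      echo "missing: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Running check_kubectl_ai_env before kubectl ai surfaces a missing variable immediately instead of failing mid-request.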
Local Endpoint Setup
For those without OpenAI API access, a local AI endpoint can be established using AIKit:
docker run -d --rm -p 8080:8080 ghcr.io/sozercan/llama3.1:8b
export OPENAI_ENDPOINT="http://localhost:8080/v1"
export OPENAI_DEPLOYMENT_NAME="llama-3.1-8b-instruct"
export OPENAI_API_KEY="n/a"
Functional Configuration
Customize the plugin's operations with flags and environment variables. For example:
- Set --require-confirmation to prompt for confirmation before a manifest is applied.
- Adjust --temperature for more creative or more deterministic output.
- Enable --use-k8s-api for precise completions using the Kubernetes OpenAPI specification.
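These flags can be combined on a single invocation. As a hypothetical helper (the function name is illustrative, not part of the plugin), composing such a command without running it shows how the flags fit together:

```shell
# Illustrative helper: compose a kubectl ai invocation from the flags
# described above. It echoes the command rather than executing it, so
# no cluster or API key is needed to inspect the result.
build_kubectl_ai_cmd() {
  local prompt="$1" temperature="${2:-0}"
  printf 'kubectl ai "%s" --temperature %s --require-confirmation=false --use-k8s-api\n' \
    "$prompt" "$temperature"
}
```

For example, build_kubectl_ai_cmd "create an nginx deployment" 0.2 prints the fully assembled command, which can then be reviewed or executed.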
Practical Application
Command Examples
Creating a Deployment:
kubectl ai "create an nginx deployment with 3 replicas"
Refining a Manifest (at the interactive prompt):
Reprompt: update to 5 replicas and port 8080
Handling Multiple Objects:
kubectl ai "create a foo namespace then create nginx pod in that namespace"
Automatic Application:
kubectl ai "create a service with type LoadBalancer with selector as 'app:nginx'" --require-confirmation=false
Using kubectl-ai streamlines the generation and application of Kubernetes manifests, leveraging AI to improve developer productivity and accuracy.