Introduction to tlm: Local CLI Copilot Powered by CodeLLaMa
Overview
tlm is a command-line interface (CLI) tool designed to be a local copilot for developers. It runs entirely on your workstation, with no external services or internet connection required. At its core, tlm harnesses CodeLLaMa, a large language model optimized for coding tasks, making it an efficient companion for generating command-line suggestions, explanations, and more.
Features
- No API Key Required: Unlike tools such as ChatGPT or GitHub Copilot, tlm needs no API key or subscription.
- Offline Capabilities: Operates fully without an internet connection, ensuring privacy and accessibility.
- Cross-Platform Compatibility: Supports macOS, Linux, and Windows, making it versatile and suitable for various environments.
- Automatic Shell Detection: tlm automatically identifies and adapts to different shell environments, including PowerShell, Bash, and Zsh.
- Efficient Command Generation: One-liner generation and command explanation streamline your workflow (see the example below).
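As a quick taste of those last two features, here is roughly what a session looks like. This is a sketch: the suggest and explain subcommands match tlm's documented features, but check tlm help for the exact syntax of your version, and expect the output to vary with your model and shell.
tlm suggest "find all log files older than 7 days and delete them"
tlm explain "docker system prune -af"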
Installation
tlm can be installed in either of two ways:
1. Installation Script (Recommended)
This script automatically detects your operating system and architecture, then executes the appropriate installation commands.
- For Linux and macOS:
curl -fsSL https://raw.githubusercontent.com/yusufcanb/tlm/1.1/install.sh | sudo -E bash
- For Windows (PowerShell 5.1 or higher):
Invoke-RestMethod -Uri https://raw.githubusercontent.com/yusufcanb/tlm/1.1/install.ps1 | Invoke-Expression
2. Go Install
If you have Go version 1.21 or higher, you can install tlm using this method:
go install github.com/yusufcanb/tlm@latest
After installing, deploy the model files with:
tlm deploy
Verify the installation by running:
tlm help
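Because tlm deploy pulls its model files through Ollama, you can also confirm they arrived by listing Ollama's local models. The exact model names depend on the tlm version, so this simply shows what is present:
ollama list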
Prerequisites
Before installing tlm, ensure Ollama is installed; tlm relies on it to download and serve the required models.
- For macOS and Windows users: follow the download instructions on Ollama's official page.
- For Linux users: run the install script:
curl -fsSL https://ollama.com/install.sh | sh
- Using Docker images:
  - CPU only:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  - With GPU:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
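Whichever option you pick, Ollama serves on port 11434 by default. Assuming the default port, a quick reachability check before installing tlm is:
curl http://localhost:11434
A healthy instance replies with a short "Ollama is running" message.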
Uninstallation
To remove tlm from your system:
- On Linux and macOS:
rm /usr/local/bin/tlm
- On Windows:
Remove-Item -Recurse -Force "C:\Users\$env:USERNAME\AppData\Local\Programs\tlm"
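The commands above remove only the tlm binary. The model files that tlm deploy pulled live in Ollama, so to reclaim that disk space you can delete them there. The model name below is a placeholder; run ollama list first and look for the tlm-related entries:
ollama list
ollama rm <tlm-model-name>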
Contributors
tlm has been developed and maintained by a dedicated team of contributors. You can view the complete list of contributors on GitHub.
In short, tlm is a self-contained CLI copilot that generates and explains shell commands without an internet connection or external services, keeping your development workflow local and private.