Weco CLI: The command-line interface for LLM-driven code optimization. Automate iterative improvements based on performance metrics to achieve superior results. Ideal for GPU kernels, ML model development, prompt engineering, and other performance-critical code.
Weco systematically optimizes your code, guided directly by your evaluation metrics.
Example applications include:
- GPU kernel optimization: latency, throughput, or memory_bandwidth
- ML model development: validation_accuracy, AUC, or Sharpe Ratio
- Prompt engineering: win_rate, relevance, or format_adherence
The weco CLI leverages a tree search approach guided by LLMs to iteratively explore and refine your code. It automatically applies changes, runs your evaluation script, parses the results, and proposes further improvements based on the specified goal.
Install the Package:
pip install weco
Set Up LLM API Keys (Required):
weco requires API keys for the LLMs it uses internally. You must provide these keys via environment variables:
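For example, you can export the key for whichever provider you plan to use before running weco (placeholder values shown; you only need the keys for the providers you actually use):

# Set the API key(s) for the provider(s) you want Weco to use
export OPENAI_API_KEY="your_openai_key"        # OpenAI models (e.g., o4-mini)
export ANTHROPIC_API_KEY="your_anthropic_key"  # Anthropic models (e.g., claude-sonnet-4-0)
export GEMINI_API_KEY="your_gemini_key"        # Google models (e.g., gemini-2.5-pro)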
The easiest way to get started with Weco is to use the interactive copilot. Simply navigate to your project directory and run:
weco
Or specify a project path:
weco /path/to/your/project
This launches Weco’s interactive copilot, which analyzes your codebase and guides you through setting up an optimization run.
weco directly modifies the file specified by --source during the optimization process. It is strongly recommended to use version control (like Git) to track changes and revert if needed. Alternatively, ensure you have a backup of your original file before running the command. Upon completion, the file will contain the best-performing version of the code found during the run.
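If your project is not yet under version control, one simple way to snapshot it before a run is the following (an illustrative sequence, not required by Weco):

# Snapshot the project before letting Weco modify files
git init        # skip if the project is already a git repository
git add -A
git commit -m "Snapshot before running Weco"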
Configure optimization parameters yourself - if you need precise control over the optimization parameters, you can use the weco run command directly:
Example: Optimizing Simple PyTorch Operations
# Navigate to the example directory
cd examples/hello-kernel-world
# Install dependencies
pip install torch
# Run Weco with manual configuration
weco run --source optimize.py \
--eval-command "python evaluate.py --solution-path optimize.py --device cpu" \
--metric speedup \
--goal maximize \
--steps 15 \
--additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
Note: If you have an NVIDIA GPU, change the device in the --eval-command to cuda. If you are running this on Apple Silicon, set it to mps.
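For example, only the --device value in the evaluation command above changes:

# NVIDIA GPU
--eval-command "python evaluate.py --solution-path optimize.py --device cuda"
# Apple Silicon
--eval-command "python evaluate.py --solution-path optimize.py --device mps"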
For more advanced examples, including Triton, CUDA kernel optimization, ML model optimization, and prompt engineering for math problems, please see the README.md files within the corresponding subdirectories under the examples/ folder.
weco run
Required:
Argument | Description | Example |
---|---|---|
-s, --source | Path to the source code file that will be optimized. | -s model.py |
-c, --eval-command | Command to run for evaluating the code in --source. This command should print the target --metric and its value to the terminal (stdout/stderr). See note below. | -c "python eval.py" |
-m, --metric | The name of the metric you want to optimize (e.g., 'accuracy', 'speedup', 'loss'). This metric name does not need to match what's printed by your --eval-command exactly (e.g., it's okay to use "speedup" instead of "Speedup:"). | -m speedup |
-g, --goal | maximize/max to maximize the --metric or minimize/min to minimize it. | -g maximize |
Optional:
Argument | Description | Default | Example |
---|---|---|---|
-n, --steps | Number of optimization steps (LLM iterations) to run. | 100 | -n 50 |
-M, --model | Model identifier for the LLM to use (e.g., o4-mini, claude-sonnet-4-0). | o4-mini when OPENAI_API_KEY is set; claude-sonnet-4-0 when ANTHROPIC_API_KEY is set; gemini-2.5-pro when GEMINI_API_KEY is set. | -M o4-mini |
-i, --additional-instructions | Natural language description of specific instructions or path to a file containing detailed instructions to guide the LLM. | None | -i instructions.md or -i "Optimize the model for faster inference" |
-l, --log-dir | Path to the directory to log intermediate steps and final optimization result. | .runs/ | -l ./logs/ |
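Combining the example values from the required and optional tables above, a full invocation might look like the following (a sketch; the file names are the illustrative ones from the tables):

weco run -s model.py \
    -c "python eval.py" \
    -m speedup \
    -g maximize \
    -n 50 \
    -M o4-mini \
    -i "Optimize the model for faster inference" \
    -l ./logs/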
Weco offers both anonymous and authenticated usage:
You can use Weco without creating an account by providing LLM API keys via environment variables. This is perfect for trying out Weco or for users who prefer not to create accounts.
To save your optimization runs and view them on the Weco dashboard, you can log in using Weco’s secure device authentication flow:
When you run weco for the first time, you’ll be prompted to log in or skip. To re-authenticate later (for example after switching accounts), run weco logout to clear your credentials, then run weco again.

Benefits of authenticated usage include having your optimization runs saved and viewable on the Weco dashboard.
Command | Description | When to Use |
---|---|---|
weco | Launch interactive onboarding | Recommended for beginners - analyzes your codebase and guides you through setup |
weco /path/to/project | Launch onboarding for a specific project | When working with a project in a different directory |
weco run [options] | Direct optimization execution | For advanced users - when you know exactly what to optimize and how |
weco logout | Clear authentication credentials | To switch accounts or troubleshoot authentication issues |
You can specify which LLM model to use with the -M or --model flag:
# Use with onboarding
weco --model gpt-4o
# Use with direct execution
weco run --model claude-3.5-sonnet --source optimize.py [other options...]
Available models:
- gpt-4o, o4-mini (requires OPENAI_API_KEY)
- claude-3.5-sonnet, claude-sonnet-4-20250514 (requires ANTHROPIC_API_KEY)
- gemini-2.5-pro (requires GEMINI_API_KEY)

If no model is specified, Weco automatically selects the best available model based on your API keys.
Weco, powered by the AIDE algorithm, optimizes code iteratively based on your evaluation results. Achieving significant improvements, especially on complex research-level tasks, often requires substantial exploration time.
The following plot from the independent Research Engineering Benchmark (RE-Bench) report shows the performance of AIDE (the algorithm behind Weco) on challenging ML research engineering tasks over different time budgets.
As shown, AIDE demonstrates strong performance gains over time, surpassing lower human expert percentiles within hours and continuing to improve. This highlights the potential of evaluation-driven optimization but also indicates that reaching high levels of performance comparable to human experts on difficult benchmarks can take considerable time (tens of hours in this specific benchmark, corresponding to many --steps in the Weco CLI). Factor this into your planning when setting the number of --steps for your optimization runs.
The command specified by --eval-command is crucial. It’s responsible for executing the potentially modified code from --source and assessing its performance. This command MUST print the metric you specified with --metric along with its numerical value to the terminal (standard output or standard error). Weco reads this output to understand how well each code version performs and guide the optimization process.
For example, if you set --metric speedup, your evaluation script (eval.py in the examples) should output a line like:
speedup: 1.5
or
Final speedup value = 1.5
Weco will parse this output to extract the numerical value (1.5 in this case) associated with the metric name (‘speedup’).
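A quick way to verify this before starting a run is to execute your evaluation command on its own and confirm that the metric line appears in its output (using the command from the earlier example):

# Run the evaluation command by itself first
python evaluate.py --solution-path optimize.py --device cpu
# The output should contain a line Weco can parse, e.g.:
# speedup: 1.5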
We welcome your contributions! To get started:
Fork & Clone the Repository:
git clone https://github.com/WecoAI/weco-cli.git
cd weco-cli
Install Dependencies:
pip install -e ".[dev]"
Create a Feature Branch:
git checkout -b feature/your-feature-name
Make Changes: Ensure your code adheres to our style guidelines and includes relevant tests.
Commit, Push & Open a PR: Commit your changes, and open a pull request with a clear description of your enhancements.