🐉 Automate Browser-based workflows using LLMs and Computer Vision 🐉
Skyvern automates browser-based workflows using LLMs and computer vision. It provides a simple API endpoint to fully automate manual workflows on a large number of websites, replacing brittle or unreliable automation solutions.
Traditional approaches to browser automation required writing custom scripts for each website, often relying on DOM parsing and XPath-based interactions that would break whenever the website layout changed.
Instead of relying solely on code-defined XPath interactions, Skyvern uses vision LLMs to interact with websites.
Want to see examples of Skyvern in action? Jump to #real-world-examples-of-skyvern
Skyvern Cloud is a managed cloud version of Skyvern that allows you to run Skyvern without worrying about the infrastructure. It allows you to run multiple Skyvern instances in parallel and comes bundled with anti-bot detection mechanisms, proxy network, and CAPTCHA solvers.
If you’d like to try it out, navigate to app.skyvern.com and create an account.
⚠️ Supported Python Versions: Python 3.11, 3.12, 3.13 ⚠️
pip install skyvern
skyvern quickstart
from skyvern import Skyvern
skyvern = Skyvern()
task = await skyvern.run_task(prompt="Find the top post on hackernews today")
print(task)
Skyvern starts running the task in a browser that pops up, and closes the browser when the task is done. You can review the task at http://localhost:8080/history
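The snippets in this README use await at the top level, which works in a notebook or other async context; in a plain Python script you need an event loop. A minimal sketch of the pattern, using a stand-in coroutine in place of skyvern.run_task so it runs without the skyvern package installed:

```python
import asyncio

# Stand-in for skyvern.run_task(...), which is awaitable; this is an
# illustrative assumption, not the real Skyvern client.
async def run_task(prompt: str) -> dict:
    return {"prompt": prompt, "status": "completed"}

async def main() -> dict:
    # In real usage: return await skyvern.run_task(prompt=...)
    return await run_task("Find the top post on hackernews today")

task = asyncio.run(main())
print(task["status"])  # prints "completed"
```

With the real client, only the body of main() changes; asyncio.run remains the entry point for scripts.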
You can also run a task on Skyvern Cloud:
from skyvern import Skyvern
skyvern = Skyvern(api_key="SKYVERN API KEY")
task = await skyvern.run_task(prompt="Find the top post on hackernews today")
print(task)
Or your local Skyvern service from step 2:
from skyvern import Skyvern

# Find your API KEY in .env
skyvern = Skyvern(base_url="http://localhost:8000", api_key="LOCAL SKYVERN API KEY")
task = await skyvern.run_task(prompt="Find the top post on hackernews today")
print(task)
Check out more Skyvern task features in our official docs. Here are a couple of interesting examples:
⚠️ WARNING: Since Chrome 136, Chrome refuses any CDP connection to a browser using the default user_data_dir. In order to use your browser data, Skyvern copies your default user_data_dir to ./tmp/user_data_dir the first time it connects to your local browser. ⚠️
from skyvern import Skyvern
# The path to your Chrome browser. This example path is for Mac.
browser_path = "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
skyvern = Skyvern(
    base_url="http://localhost:8000",
    api_key="YOUR_API_KEY",
    browser_path=browser_path,
)
task = await skyvern.run_task(
    prompt="Find the top post on hackernews today",
)
Add two variables to your .env file:
# The path to your Chrome browser. This example path is for Mac.
CHROME_EXECUTABLE_PATH="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
BROWSER_TYPE=cdp-connect
Restart the Skyvern service with skyvern run all, then run the task through the UI or code:
from skyvern import Skyvern
skyvern = Skyvern(
    base_url="http://localhost:8000",
    api_key="YOUR_API_KEY",
)
task = await skyvern.run_task(
    prompt="Find the top post on hackernews today",
)
Grab the CDP connection URL and pass it to Skyvern:
from skyvern import Skyvern
skyvern = Skyvern(cdp_url="your cdp connection url")
task = await skyvern.run_task(
    prompt="Find the top post on hackernews today",
)
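If you launch Chrome yourself with --remote-debugging-port=9222, it serves CDP metadata at http://localhost:9222/json/version, whose webSocketDebuggerUrl field is the connection URL to pass in. This sketch parses a sample payload rather than contacting a live browser (the values are illustrative):

```python
import json

# Sample of the JSON Chrome serves at http://localhost:9222/json/version
# when launched with --remote-debugging-port=9222 (values illustrative).
sample = """{
  "Browser": "Chrome/136.0.0.0",
  "webSocketDebuggerUrl": "ws://localhost:9222/devtools/browser/abc123"
}"""

# Extract the CDP connection URL; pass it as Skyvern(cdp_url=...).
cdp_url = json.loads(sample)["webSocketDebuggerUrl"]
print(cdp_url)
```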
You can extract structured data from a task by adding the data_extraction_schema parameter:
from skyvern import Skyvern
skyvern = Skyvern()
task = await skyvern.run_task(
    prompt="Find the top post on hackernews today",
    data_extraction_schema={
        "type": "object",
        "properties": {
            "title": {
                "type": "string",
                "description": "The title of the top post"
            },
            "url": {
                "type": "string",
                "description": "The URL of the top post"
            },
            "points": {
                "type": "integer",
                "description": "Number of points the post has received"
            }
        }
    }
)
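The schema above is standard JSON Schema. As a minimal sketch (not Skyvern's validator), here is a type check of a hypothetical extraction result against it:

```python
# The schema from the snippet above, trimmed to types only.
schema = {
    "title": {"type": "string"},
    "url": {"type": "string"},
    "points": {"type": "integer"},
}

# Hypothetical result shaped like the task's extracted output.
result = {"title": "Example post", "url": "https://example.com/item", "points": 120}

# Map JSON Schema type names to Python types and check each field.
type_map = {"string": str, "integer": int}
ok = all(isinstance(result[k], type_map[v["type"]]) for k, v in schema.items())
print(ok)  # True: every field matches its declared type
```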
Launch the Skyvern Server Separately
skyvern run server
Launch the Skyvern UI
skyvern run ui
Check status of the Skyvern service
skyvern status
Stop the Skyvern service
skyvern stop all
Stop the Skyvern UI
skyvern stop ui
Stop the Skyvern Server Separately
skyvern stop server
Check that Docker is running (docker ps). Run skyvern init llm to generate a .env file; this will be copied into the Docker image. Run docker compose up -d, then navigate to http://localhost:8080 in your browser to start using the UI.

Important: Only one Postgres container can run on port 5432 at a time. If you switch from the CLI-managed Postgres to Docker Compose, you must first remove the original container:
docker rm -f postgresql-container
If you encounter any database-related errors while using Docker to run Skyvern, check which Postgres container is running with docker ps.
Skyvern was inspired by the Task-Driven autonomous agent design popularized by BabyAGI and AutoGPT – with one major bonus: we give Skyvern the ability to interact with websites using browser automation libraries like Playwright.
Skyvern uses a swarm of agents to comprehend a website, and plan and execute its actions:
This approach has a few advantages:
Demo: https://github.com/user-attachments/assets/5cab4668-e8e2-4982-8551-aab05ff73a7f
Tasks are the fundamental building block inside Skyvern. Each task is a single request to Skyvern, instructing it to navigate through a website and accomplish a specific goal.
Tasks require you to specify a url and prompt, and can optionally include a data schema (if you want the output to conform to a specific schema) and error codes (if you want Skyvern to stop running in specific situations).
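The task fields above can be sketched as a plain payload. The field names follow the snippets in this README, but treat the exact shape, especially the error-codes format, which is hypothetical here, as illustrative rather than the authoritative API:

```python
# Illustrative task payload; "url" and "prompt" are required,
# the other fields are optional.
task_request = {
    "url": "https://news.ycombinator.com",
    "prompt": "Find the top post on hackernews today",
    # Optional: constrain the output to a schema.
    "data_extraction_schema": {
        "type": "object",
        "properties": {"title": {"type": "string"}},
    },
    # Optional, hypothetical shape: situations where Skyvern should stop.
    "error_codes": {"login_blocked": "Stop if the site requires a login"},
}

required = {"url", "prompt"}
print(required <= task_request.keys())  # True: both required fields present
```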
Workflows are a way to chain multiple tasks together to form a cohesive unit of work.
For example, if you wanted to download all invoices newer than January 1st, you could create a workflow that first navigated to the invoices page, then filtered down to only show invoices newer than January 1st, extracted a list of all eligible invoices, and iterated through each invoice to download it.
Another example is if you wanted to automate purchasing products from an e-commerce store, you could create a workflow that first navigated to the desired product, then added it to a cart. Second, it would navigate to the cart and validate the cart state. Finally, it would go through the checkout process to purchase the items.
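The invoice-downloading workflow above can be sketched as an ordered list of task-like steps (an illustrative data structure, not Skyvern's actual workflow format):

```python
# Each step feeds the next, so order matters; the "loop" step would
# iterate over the list produced by the "extract" step.
invoice_workflow = [
    {"step": "navigate", "goal": "Go to the invoices page"},
    {"step": "filter", "goal": "Show only invoices newer than January 1st"},
    {"step": "extract", "goal": "Extract the list of eligible invoices"},
    {"step": "loop", "goal": "Download each invoice in the extracted list"},
]

print(" -> ".join(s["step"] for s in invoice_workflow))
# navigate -> filter -> extract -> loop
```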
Supported workflow features include:
Skyvern allows you to livestream the viewport of the browser to your local machine so that you can see exactly what Skyvern is doing on the web. This is useful for debugging and understanding how Skyvern is interacting with a website, and for intervening when necessary.
Skyvern is natively capable of filling out form inputs on websites. Passing in information via the navigation_goal will allow Skyvern to comprehend the information and fill out the form accordingly.
Skyvern is also capable of extracting data from a website.
You can also specify a data_extraction_schema directly within the main prompt to tell Skyvern exactly what data you'd like to extract from the website, in JSONC format. Skyvern's output will be structured in accordance with the supplied schema.
Skyvern is also capable of downloading files from a website. All downloaded files are automatically uploaded to block storage (if configured), and you can access them via the UI.
Skyvern supports a number of different authentication methods to make it easier to automate tasks behind a login. If you’d like to try it out, please reach out to us via email or discord.
Skyvern supports a number of different 2FA methods to allow you to automate workflows that require 2FA.
Examples include:
🔐 Learn more about 2FA support here.
Skyvern currently supports the following password manager integrations:
Skyvern supports the Model Context Protocol (MCP) to allow you to use any LLM that supports MCP.
See the MCP documentation here
Skyvern supports Zapier, Make.com, and N8N to allow you to connect your Skyvern workflows to other apps.
We love to see how Skyvern is being used in the wild. Here are some examples of how Skyvern is being used to automate workflows in the real world. Please open PRs to add your own examples!
For a complete local environment, install the CLI from source:
pip install -e .
The following command sets up your development environment to use pre-commit (our commit hook handler):
skyvern quickstart contributors
Navigate to http://localhost:8080 in your browser to start using the UI.

More extensive documentation can be found on our 📕 docs page. Please let us know if something is unclear or missing by opening an issue or reaching out to us via email or discord.
Provider | Supported Models |
---|---|
OpenAI | gpt-4-turbo, gpt-4o, gpt-4o-mini |
Anthropic | Claude 3 (Haiku, Sonnet, Opus), Claude 3.5 (Sonnet) |
Azure OpenAI | Any GPT models. Better performance with a multimodal LLM (azure/gpt4-o) |
AWS Bedrock | Anthropic Claude 3 (Haiku, Sonnet, Opus), Claude 3.5 (Sonnet) |
Ollama | Run any locally hosted model via Ollama |
OpenRouter | Access models through OpenRouter |
Gemini | Coming soon (contributions welcome) |
Llama 3.2 | Coming soon (contributions welcome) |
Novita AI | Llama 3.1 (8B, 70B), Llama 3.2 (1B, 3B, 11B Vision) |
OpenAI-compatible | Any custom API endpoint that follows OpenAI’s API format (via liteLLM) |
Variable | Description | Type | Sample Value |
---|---|---|---|
ENABLE_OPENAI | Register OpenAI models | Boolean | true, false |
OPENAI_API_KEY | OpenAI API Key | String | sk-1234567890 |
OPENAI_API_BASE | OpenAI API Base, optional | String | https://openai.api.base |
OPENAI_ORGANIZATION | OpenAI Organization ID, optional | String | your-org-id |
Supported LLM Keys: OPENAI_GPT4_TURBO, OPENAI_GPT4V, OPENAI_GPT4O, OPENAI_GPT4O_MINI
Variable | Description | Type | Sample Value |
---|---|---|---|
ENABLE_ANTHROPIC | Register Anthropic models | Boolean | true, false |
ANTHROPIC_API_KEY | Anthropic API key | String | sk-1234567890 |
Supported LLM Keys: ANTHROPIC_CLAUDE3, ANTHROPIC_CLAUDE3_OPUS, ANTHROPIC_CLAUDE3_SONNET, ANTHROPIC_CLAUDE3_HAIKU, ANTHROPIC_CLAUDE3.5_SONNET
Variable | Description | Type | Sample Value |
---|---|---|---|
ENABLE_AZURE | Register Azure OpenAI models | Boolean | true, false |
AZURE_API_KEY | Azure deployment API key | String | sk-1234567890 |
AZURE_DEPLOYMENT | Azure OpenAI Deployment Name | String | skyvern-deployment |
AZURE_API_BASE | Azure deployment API base URL | String | https://skyvern-deployment.openai.azure.com/ |
AZURE_API_VERSION | Azure API Version | String | 2024-02-01 |
Supported LLM Key: AZURE_OPENAI
Variable | Description | Type | Sample Value |
---|---|---|---|
ENABLE_BEDROCK | Register AWS Bedrock models. To use AWS Bedrock, make sure your AWS configuration is set up correctly first. | Boolean | true, false |
Supported LLM Keys: BEDROCK_ANTHROPIC_CLAUDE3_OPUS, BEDROCK_ANTHROPIC_CLAUDE3_SONNET, BEDROCK_ANTHROPIC_CLAUDE3_HAIKU, BEDROCK_ANTHROPIC_CLAUDE3.5_SONNET, BEDROCK_AMAZON_NOVA_PRO, BEDROCK_AMAZON_NOVA_LITE
Variable | Description | Type | Sample Value |
---|---|---|---|
ENABLE_GEMINI | Register Gemini models | Boolean | true, false |
GEMINI_API_KEY | Gemini API Key | String | your_google_gemini_api_key |
Supported LLM Keys: GEMINI_PRO, GEMINI_FLASH
Variable | Description | Type | Sample Value |
---|---|---|---|
ENABLE_NOVITA | Register Novita AI models | Boolean | true, false |
NOVITA_API_KEY | Novita AI API Key | String | your_novita_api_key |
Supported LLM Keys: NOVITA_DEEPSEEK_R1, NOVITA_DEEPSEEK_V3, NOVITA_LLAMA_3_3_70B, NOVITA_LLAMA_3_2_1B, NOVITA_LLAMA_3_2_3B, NOVITA_LLAMA_3_2_11B_VISION, NOVITA_LLAMA_3_1_8B, NOVITA_LLAMA_3_1_70B, NOVITA_LLAMA_3_1_405B, NOVITA_LLAMA_3_8B, NOVITA_LLAMA_3_70B
Variable | Description | Type | Sample Value |
---|---|---|---|
ENABLE_OLLAMA | Register local models via Ollama | Boolean | true, false |
OLLAMA_SERVER_URL | URL for your Ollama server | String | http://host.docker.internal:11434 |
OLLAMA_MODEL | Ollama model name to load | String | qwen2.5:7b-instruct |
Supported LLM Key: OLLAMA
Variable | Description | Type | Sample Value |
---|---|---|---|
ENABLE_OPENROUTER | Register OpenRouter models | Boolean | true, false |
OPENROUTER_API_KEY | OpenRouter API key | String | sk-1234567890 |
OPENROUTER_MODEL | OpenRouter model name | String | mistralai/mistral-small-3.1-24b-instruct |
OPENROUTER_API_BASE | OpenRouter API base URL | String | https://api.openrouter.ai/v1 |
Supported LLM Key: OPENROUTER
Variable | Description | Type | Sample Value |
---|---|---|---|
ENABLE_OPENAI_COMPATIBLE | Register a custom OpenAI-compatible API endpoint | Boolean | true, false |
OPENAI_COMPATIBLE_MODEL_NAME | Model name for OpenAI-compatible endpoint | String | yi-34b, gpt-3.5-turbo, mistral-large, etc. |
OPENAI_COMPATIBLE_API_KEY | API key for OpenAI-compatible endpoint | String | sk-1234567890 |
OPENAI_COMPATIBLE_API_BASE | Base URL for OpenAI-compatible endpoint | String | https://api.together.xyz/v1, http://localhost:8000/v1, etc. |
OPENAI_COMPATIBLE_API_VERSION | API version for OpenAI-compatible endpoint, optional | String | 2023-05-15 |
OPENAI_COMPATIBLE_MAX_TOKENS | Maximum tokens for completion, optional | Integer | 4096, 8192, etc. |
OPENAI_COMPATIBLE_TEMPERATURE | Temperature setting, optional | Float | 0.0, 0.5, 0.7, etc. |
OPENAI_COMPATIBLE_SUPPORTS_VISION | Whether the model supports vision, optional | Boolean | true, false |
Supported LLM Key: OPENAI_COMPATIBLE
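Putting the table together, a hypothetical .env fragment for a local OpenAI-compatible server (all values are placeholders):

```shell
# Register a custom OpenAI-compatible endpoint and select it via LLM_KEY.
ENABLE_OPENAI_COMPATIBLE=true
OPENAI_COMPATIBLE_MODEL_NAME=yi-34b
OPENAI_COMPATIBLE_API_KEY=sk-1234567890
OPENAI_COMPATIBLE_API_BASE=http://localhost:8000/v1
OPENAI_COMPATIBLE_SUPPORTS_VISION=false
LLM_KEY=OPENAI_COMPATIBLE
```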
Variable | Description | Type | Sample Value |
---|---|---|---|
LLM_KEY | The name of the model you want to use | String | See supported LLM keys above |
SECONDARY_LLM_KEY | The model for the mini agents Skyvern runs with | String | See supported LLM keys above |
LLM_CONFIG_MAX_TOKENS | Override the max tokens used by the LLM | Integer | 128000 |
This is our planned roadmap for the next few months. If you have any suggestions or would like to see a feature added, please don’t hesitate to reach out to us via email or discord.
We welcome PRs and suggestions! Don’t hesitate to open a PR/issue or to reach out to us via email or discord.
Please have a look at our contribution guide and
“Help Wanted” issues to get started!
If you want to chat with the Skyvern repository to get a high-level overview of how it is structured, how to build off it, and how to resolve usage questions, check out Code Sage.
By default, Skyvern collects basic usage statistics to help us understand how Skyvern is being used. If you would like to opt out of telemetry, please set the SKYVERN_TELEMETRY environment variable to false.
Skyvern’s open source repository is supported via a managed cloud. All of the core logic powering Skyvern is available in this open source repository licensed under the AGPL-3.0 License, with the exception of anti-bot measures available in our managed cloud offering.
If you have any questions or concerns around licensing, please contact us and we would be happy to help.