OpenAI API management & redistribution system, supporting Azure, Anthropic Claude, Google PaLM 2 & Gemini, Zhipu ChatGLM, Baidu ERNIE Bot, iFlytek Spark, Alibaba Tongyi Qianwen, 360 Zhinao, and Tencent Hunyuan. It can be used to manage and redistribute keys, ships as a single executable, is packaged as a Docker image for one-click deployment, and works out of the box. OpenAI key management & redistribution system, using a single API for all LLMs, with an English UI.
✨ Access all LLM through the standard OpenAI API format, easy to deploy & use ✨
Deployment Tutorial · Usage · Feedback · Screenshots · Live Demo · FAQ · Related Projects · Donate
Warning: This README is translated by ChatGPT. Please feel free to submit a PR if you find any translation errors.
Warning: The Docker image for the English version is justsong/one-api-en.
Note: The latest image pulled from Docker Hub may be an alpha release. Specify the version manually if you require stability.
Deployment command: docker run --name one-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api-en
Update command: docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR
The first 3000 in -p 3000:3000 is the port on the host, which can be modified as needed.
Data will be saved in the /home/ubuntu/data/one-api directory on the host. Ensure that the directory exists and is writable, or change it to a suitable directory.
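Equivalently, the same deployment can be expressed as a Docker Compose file. This is a sketch assuming the same image, port mapping, timezone, and data directory used in the command above:

```yaml
version: "3"
services:
  one-api:
    image: justsong/one-api-en
    container_name: one-api
    restart: always
    ports:
      - "3000:3000"   # host port : container port
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /home/ubuntu/data/one-api:/data
```

Start it with `docker compose up -d` from the directory containing the file.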
Nginx reference configuration:
server {
    server_name openai.justsong.cn;  # Modify your domain name accordingly

    location / {
        client_max_body_size 64m;
        proxy_http_version 1.1;
        proxy_pass http://localhost:3000;  # Modify your port accordingly
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Accept-Encoding gzip;
    }
}
Next, configure HTTPS with Let’s Encrypt certbot:
# Install certbot on Ubuntu:
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
# Generate certificates & modify Nginx configuration
sudo certbot --nginx
# Follow the prompts
# Restart Nginx
sudo service nginx restart
The initial account username is root and the password is 123456.
git clone https://github.com/songquanpeng/one-api.git
# Build the frontend
cd one-api/web/default
npm install
npm run build
# Build the backend
cd ../..
go mod download
go build -ldflags "-s -w" -o one-api
chmod u+x one-api
./one-api --port 3000 --log-dir ./logs
The initial account username is root and the password is 123456.

For more detailed deployment tutorials, please refer to this page.
* Set the same SESSION_SECRET for all servers.
* Set SQL_DSN and use MySQL instead of SQLite. All servers should connect to the same database.
* Set NODE_TYPE to slave for all non-master nodes.
* Set SYNC_FREQUENCY so that servers periodically sync configurations from the database.
* Non-master nodes can optionally set FRONTEND_BASE_URL to redirect page requests to the master server.
* Install Redis separately for slave nodes, and set REDIS_CONN_STRING so that the database can be accessed with zero latency while the cache has not expired.
* If the master node also needs to use Redis, SYNC_FREQUENCY must be set so that it periodically syncs configurations from the database.

Please refer to the environment variables section for details on using environment variables.
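For illustration, the environment of a non-master (slave) node in such a cluster might look like the following fragment. All values are placeholders; the database host, credentials, and Redis address are assumptions, not defaults:

```
SESSION_SECRET=the_same_random_string_on_every_node
SQL_DSN=oneapi:password@tcp(db-host:3306)/oneapi
NODE_TYPE=slave
SYNC_FREQUENCY=60
FRONTEND_BASE_URL=https://openai.justsong.cn
REDIS_CONN_STRING=redis://localhost:6379
```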
Refer to #175 for detailed instructions.
If you encounter a blank page after deployment, refer to #97 for possible solutions.
Zeabur’s servers are located overseas, automatically solving network issues, and the free quota is sufficient for personal usage.
Run ``create database `one-api` `` to create the database. Add an environment variable PORT with a value of 3000, and then add SQL_DSN with a value of <username>:<password>@tcp(<addr>:<port>)/one-api. Save the changes. Please note that if SQL_DSN is not set, data will not be persisted and will be lost after redeployment.

The system is ready to use out of the box.
You can configure it by setting environment variables or command line parameters.
After the system starts, log in as the root user to further configure the system.
Add your API Key on the Channels page, and then add an access token on the Tokens page.
You can then use your access token to access One API. The usage is consistent with the OpenAI API.
In places where the OpenAI API is used, remember to set the API Base to your One API deployment address, for example: https://openai.justsong.cn. The API Key should be a token generated in One API. Note that the specific API Base format depends on the client you are using.
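As an example, a chat completion request in the standard OpenAI format can be sent to a One API deployment like this. This is a sketch using only the Python standard library; the base URL, token, and model name are placeholders you should replace with your own:

```python
import json
import urllib.request

ONE_API_BASE = "https://openai.justsong.cn"  # your One API deployment address (placeholder)
ONE_API_KEY = "sk-xxxx"                      # a token generated on the Tokens page (placeholder)

# Standard OpenAI chat completion payload
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    url=f"{ONE_API_BASE}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {ONE_API_KEY}",
    },
    method="POST",
)

# response = urllib.request.urlopen(req)  # uncomment to actually send the request
```

Any OpenAI-compatible SDK should work the same way once its base URL points at your deployment.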
graph LR
A(User)
A --->|Request| B(One API)
B -->|Relay Request| C(OpenAI)
B -->|Relay Request| D(Azure)
B -->|Relay Request| E(Other downstream channels)
To specify which channel to use for the current request, you can add the channel ID after the token, for example: Authorization: Bearer ONE_API_KEY-CHANNEL_ID.
Note that the token needs to be created by an administrator to specify the channel ID.
If the channel ID is not provided, load balancing will be used to distribute the requests to multiple channels.
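Constructing such a channel-pinned Authorization header can be sketched in Python; the token and channel ID below are placeholders:

```python
ONE_API_KEY = "sk-xxxx"  # an admin-created token (placeholder)
CHANNEL_ID = 2           # the channel to pin this request to (placeholder)

# Append the channel ID after the token to force requests through that channel;
# omitting the suffix lets One API load-balance across channels instead.
auth_header = f"Bearer {ONE_API_KEY}-{CHANNEL_ID}"
```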
+ REDIS_CONN_STRING: When set, Redis will be used as the storage for request rate limiting instead of memory.
  + Example: REDIS_CONN_STRING=redis://default:redispw@localhost:49153
+ SESSION_SECRET: When set, a fixed session key will be used so that cookies of logged-in users remain valid after the system restarts.
  + Example: SESSION_SECRET=random_string
+ SQL_DSN: When set, the specified database will be used instead of SQLite. Please use MySQL version 8.0.
  + Example: SQL_DSN=root:123456@tcp(localhost:3306)/oneapi
+ LOG_SQL_DSN: When set, a separate database will be used for the logs table; please use MySQL or PostgreSQL.
  + Example: LOG_SQL_DSN=root:123456@tcp(localhost:3306)/oneapi-logs
+ FRONTEND_BASE_URL: When set, the specified frontend address will be used instead of the backend address.
  + Example: FRONTEND_BASE_URL=https://openai.justsong.cn
+ SYNC_FREQUENCY: When set, the system will periodically sync configurations from the database, in seconds. If not set, no sync will happen.
  + Example: SYNC_FREQUENCY=60
+ NODE_TYPE: When set, specifies the node type. Valid values are master and slave. Defaults to master if not set.
  + Example: NODE_TYPE=slave
+ CHANNEL_UPDATE_FREQUENCY: When set, channel balances will be updated periodically, in minutes. If not set, no update will happen.
  + Example: CHANNEL_UPDATE_FREQUENCY=1440
+ CHANNEL_TEST_FREQUENCY: When set, channels will be tested periodically, in minutes. If not set, no test will happen.
  + Example: CHANNEL_TEST_FREQUENCY=1440
+ POLLING_INTERVAL: The time interval (in seconds) between requests when updating channel balances and testing channel availability. Defaults to no interval.
  + Example: POLLING_INTERVAL=5
+ BATCH_UPDATE_ENABLED: Enables batched aggregation of database updates, which can introduce some delay in updating user quotas. Valid values are true and false; defaults to false if not set.
  + Example: BATCH_UPDATE_ENABLED=true
+ BATCH_UPDATE_INTERVAL: The time interval for batched update aggregation, in seconds. Defaults to 5.
  + Example: BATCH_UPDATE_INTERVAL=5
+ GLOBAL_API_RATE_LIMIT: Global API rate limit (excluding relay requests), the maximum number of requests within three minutes per IP. Defaults to 180.
+ GLOBAL_WEB_RATE_LIMIT: Global web rate limit, the maximum number of requests within three minutes per IP. Defaults to 60.
+ TIKTOKEN_CACHE_DIR: By default, the program downloads the token encodings of some common models, such as gpt-3.5-turbo, at startup. In unstable network environments or offline setups this may cause startup problems. This directory can be configured to cache the data, and the cache can be migrated to an offline environment.
+ DATA_GYM_CACHE_DIR: Currently has the same function as TIKTOKEN_CACHE_DIR, but with lower priority.
+ RELAY_TIMEOUT: Relay timeout, in seconds. No default timeout is set.
+ RELAY_PROXY: When set, this proxy is used to request upstream APIs.
+ USER_CONTENT_REQUEST_TIMEOUT: The timeout for uploading and downloading user content, in seconds.
+ USER_CONTENT_REQUEST_PROXY: When set, this proxy is used to request content uploaded by users, such as images.
+ SQLITE_BUSY_TIMEOUT: SQLite lock wait timeout, in milliseconds. Defaults to 3000.
+ GEMINI_SAFETY_SETTING: Gemini safety setting. Defaults to BLOCK_NONE.
+ GEMINI_VERSION: The Gemini API version used by One API. Defaults to v1.
+ THEME: The system theme. Defaults to default; see [here](./web/README.md) for available values.
+ ENABLE_METRIC: Whether to disable channels based on request success rate. Disabled by default; valid values are true and false.
+ METRIC_QUEUE_SIZE: Queue size for request success rate statistics. Defaults to 10.
+ METRIC_SUCCESS_RATE_THRESHOLD: Request success rate threshold. Defaults to 0.8.
+ INITIAL_ROOT_TOKEN: If set, a root user token with the value of this environment variable will be automatically created when the system starts for the first time.
+ INITIAL_ROOT_ACCESS_TOKEN: If set, a system management token with the value of this environment variable will be automatically created for the root user when the system starts for the first time.

+ --port <port_number>: Specifies the port number on which the server listens. Defaults to 3000.
  + Example: --port 3000
+ --log-dir <log_dir>: Specifies the log directory. If not set, the logs will not be saved.
  + Example: --log-dir ./logs
+ --version: Prints the system version number and exits.
+ --help: Displays command usage help and parameter descriptions.
BASE_URL during deployment.

This project is an open-source project. Please use it in compliance with OpenAI’s Terms of Use and applicable laws and regulations. It must not be used for illegal purposes.
This project is released under the MIT license. Based on this, attribution and a link to this project must be included at the bottom of the page.
The same applies to derivative projects based on this project.
If you do not wish to include attribution, prior authorization must be obtained.
Under the MIT license, users bear the risks and responsibility of using this project, and the developers of this open-source project assume no liability for it.