An LLM semantic caching system aiming to enhance user experience by reducing response time via cached query-result pairs.
Codefuse-ModelCache is a semantic cache for large language models (LLMs). By caching pre-generated model results, it reduces response time for similar requests and improves user experience.
This project aims to optimize services by introducing a caching mechanism. It helps businesses and research institutions reduce the cost of inference deployment, improve model performance and efficiency, and provide scalable services for large models. By open-sourcing it, we hope to share and exchange technologies related to semantic caching for LLMs.
You can find the start scripts in flask4modelcache.py and flask4modelcache_demo.py:

- flask4modelcache_demo.py: a quick-start demo service that embeds SQLite and FAISS. No database configuration is required.
- flask4modelcache.py: the standard service that requires MySQL and Milvus configuration.

Dependencies: Python 3.8 or above.
Install the required packages:

```shell
pip install -r requirements.txt
```
To start the demo service:

1. Download the embedding model bin file from Hugging Face and put it in the model/text2vec-base-chinese folder (a hedged download sketch follows below).
2. Start the backend service:

```shell
cd CodeFuse-ModelCache
python flask4modelcache_demo.py
```
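If it helps, here is one way to fetch the model files. The Hugging Face repo id shibing624/text2vec-base-chinese and the use of the huggingface_hub package are assumptions, not something this README prescribes; any method that places the bin file in model/text2vec-base-chinese works.

```python
# Hedged sketch: download the text2vec-base-chinese files into the expected folder.
# Assumes the model lives at the Hugging Face repo "shibing624/text2vec-base-chinese"
# and that huggingface_hub is installed (pip install huggingface_hub).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="shibing624/text2vec-base-chinese",
    local_dir="model/text2vec-base-chinese",
)
```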
You can also start the service with Docker Compose:

1. Download the embedding model bin file from Hugging Face and put it in the model/text2vec-base-chinese folder.
2. Build and start the containers:

```shell
cd CodeFuse-ModelCache
docker network create modelcache
# When the modelcache image does not exist locally (first run), or when the Dockerfile has changed
docker-compose up --build
# When this is not the first run and the Dockerfile has not changed
docker-compose up
```
Before you start the standard service, complete these steps:

1. Install MySQL and import the SQL file from reference_doc/create_table.sql.
2. Install the vector database Milvus.
3. Configure database access in:
   - modelcache/config/milvus_config.ini
   - modelcache/config/mysql_config.ini
   (A quick sanity-check sketch follows after these steps.)
4. Download the embedding model bin file from Hugging Face and put it in the model/text2vec-base-chinese folder.
5. Start the backend service:

```shell
python flask4modelcache.py
```
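If you want to confirm that the two INI files are present and parseable before starting the service, a minimal check like the one below works. The exact sections and keys are not documented here, so it only lists whatever the files actually define.

```python
# Minimal sanity check for the ModelCache config files (illustrative only).
import configparser

for path in ("modelcache/config/mysql_config.ini", "modelcache/config/milvus_config.ini"):
    cfg = configparser.ConfigParser()
    read_ok = cfg.read(path)  # returns the list of files successfully read
    if not read_ok:
        print(f"missing or unreadable: {path}")
        continue
    # Print section and key names only, not values (which include passwords).
    print(path, {section: list(cfg[section].keys()) for section in cfg.sections()})
```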
The service provides three core RESTful API functionalities: Cache-Writing, Cache-Querying, and Cache-Clearing.
Cache-Writing:

```python
import json
import requests

url = 'http://127.0.0.1:5000/modelcache'
type = 'insert'
scope = {"model": "CODEGPT-1008"}
chat_info = [{"query": [{"role": "system", "content": "You are an AI code assistant and you must provide neutral and harmless answers to help users solve code-related problems."},
                        {"role": "user", "content": "Who are you?"}],
              "answer": "Hello, I am an intelligent assistant. How can I assist you?"}]
data = {'type': type, 'scope': scope, 'chat_info': chat_info}
headers = {"Content-Type": "application/json"}
res = requests.post(url, headers=headers, json=json.dumps(data))
```
Cache-Querying:

```python
import json
import requests

url = 'http://127.0.0.1:5000/modelcache'
type = 'query'
scope = {"model": "CODEGPT-1008"}
query = [{"role": "system", "content": "You are an AI code assistant and you must provide neutral and harmless answers to help users solve code-related problems."},
         {"role": "user", "content": "Who are you?"}]
data = {'type': type, 'scope': scope, 'query': query}
headers = {"Content-Type": "application/json"}
res = requests.post(url, headers=headers, json=json.dumps(data))
```
Cache-Clearing:

```python
import json
import requests

url = 'http://127.0.0.1:5000/modelcache'
type = 'remove'
scope = {"model": "CODEGPT-1008"}
remove_type = 'truncate_by_model'
data = {'type': type, 'scope': scope, 'remove_type': remove_type}
headers = {"Content-Type": "application/json"}
res = requests.post(url, headers=headers, json=json.dumps(data))
```
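For any of the three calls above, you can inspect the raw response to confirm the request reached the service; the response schema itself is not documented in this README, so this only prints what comes back.

```python
# Print the HTTP status and raw body returned by the service.
print(res.status_code)
print(res.text)
```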
We’ve implemented several key updates to the repository. We resolved network issues with Hugging Face and improved inference speed by introducing local embedding. Because of limitations in SQLAlchemy, we redesigned the relational database interaction module for more flexible operations. We added multi-tenancy support to ModelCache, recognizing that LLM products need to serve multiple users and multiple models. Finally, we made initial adjustments for better compatibility with system commands and multi-turn dialogues.
| Module | Function | ModelCache | GPTCache |
|---|---|---|---|
| Basic Interface | Data query interface | ☑ | ☑ |
| | Data writing interface | ☑ | ☑ |
| Embedding | Embedding model configuration | ☑ | ☑ |
| | Large model embedding layer | ☑ | |
| | BERT model long text processing | ☑ | |
| Large model invocation | Decoupling from large models | ☑ | |
| | Local loading of embedding model | ☑ | |
| Data isolation | Model data isolation | ☑ | ☑ |
| | Hyperparameter isolation | | |
| Databases | MySQL | ☑ | ☑ |
| | Milvus | ☑ | ☑ |
| | OceanBase | ☑ | |
| Session management | Single-turn dialogue | ☑ | ☑ |
| | System commands | ☑ | |
| | Multi-turn dialogue | ☑ | |
| Data management | Data persistence | ☑ | ☑ |
| | One-click cache clearance | ☑ | |
| Tenant management | Support for multi-tenancy | ☑ | |
| | Milvus multi-collection capability | ☑ | |
| Other | Long-short dialogue distinction | ☑ | |
In ModelCache, we incorporated the core principles of GPTCache. ModelCache has four modules: adapter, embedding, similarity, and data_manager.
To make ModelCache more suitable for industrial use, we made several improvements to its architecture and functionality, including local embedding, a redesigned relational database interaction module, multi-tenancy support, and better handling of system commands and multi-turn dialogues. The sketch below illustrates how the modules cooperate on a single cache lookup.
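As a rough illustration of how the adapter, embedding, similarity, and data_manager modules fit together, the sketch below walks through one lookup: embed the query, search for a similar cached query, and either return the stored answer or call the model and write the new pair back. The function and parameter names are simplified assumptions for illustration, not the actual ModelCache APIs.

```python
# Hedged sketch of a semantic cache lookup; embed(), vector_search(), etc.
# are illustrative placeholders, not ModelCache interfaces.
from typing import Callable, List, Optional, Tuple

def cached_chat(
    query: str,
    embed: Callable[[str], List[float]],                      # embedding module: text -> vector
    vector_search: Callable[[List[float]], Optional[Tuple[str, float]]],  # similarity: vector -> (cache_id, score) or None
    fetch_answer: Callable[[str], str],                       # data_manager: cache_id -> stored answer
    write_cache: Callable[[str, List[float], str], None],     # data_manager: persist (query, vector, answer)
    call_llm: Callable[[str], str],                           # the underlying large model
    threshold: float = 0.9,
) -> str:
    vector = embed(query)
    hit = vector_search(vector)
    if hit is not None:
        cache_id, score = hit
        if score >= threshold:        # similar enough: serve from cache
            return fetch_answer(cache_id)
    answer = call_llm(query)          # cache miss: ask the model
    write_cache(query, vector, answer)  # store the new pair for next time
    return answer
```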
This project has referenced the following open-source projects. We would like to express our gratitude to the projects and their developers for their contributions and research.
GPTCache
ModelCache is a fascinating and valuable project. Whether you are an experienced developer or a novice just starting out, your contributions are warmly welcomed. Raising issues, offering suggestions, writing code, or adding documentation and examples all improve the project’s quality and make a meaningful contribution to the open-source community.