DeepGemini 🌟

A Flexible Multi-Model Orchestration API with OpenAI Compatibility

FastAPI | Python 3.11 | OpenAI Compatible | MIT License
中文 | English

✨ Features

  • Multi-Model Orchestration: Seamlessly combine multiple AI models in customizable workflows
  • Role Management: Create AI roles with different personalities and skills
  • Discussion Groups: Combine multiple roles to form discussion groups
  • Multiple Discussion Modes:
    • General Discussion
    • Brainstorming
    • Debate
    • Role-playing
    • SWOT Analysis
    • Six Thinking Hats
  • Provider Flexibility: Support for multiple AI providers:
    • DeepSeek
    • Claude
    • Gemini
    • Grok3
    • OpenAI
    • OneAPI
    • OpenRouter
    • Siliconflow
  • OpenAI Compatible: Drop-in replacement for OpenAI's API in existing applications
  • Stream Support: Real-time streaming responses for better user experience
  • Advanced Configuration: Fine-grained control over model parameters and system prompts
  • Database Integration: SQLite-based configuration storage with Alembic migrations
  • Web Management UI: Built-in interface for managing models and configurations
  • Multi-language Support: English and Chinese interface


🚀 Quick Start

1. Installation

git clone https://github.com/sligter/DeepGemini.git
cd DeepGemini
uv sync

2. Configuration

cp .env.example .env

Required environment variables:

  • ALLOW_API_KEY: Your API access key
  • ALLOW_ORIGINS: Allowed CORS origins (comma-separated or "*")
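
A minimal .env might look like the following (the values are placeholders; substitute your own key and allowed origins):

ALLOW_API_KEY=your-secret-key
ALLOW_ORIGINS=*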

3. Run the Application

uv run uvicorn app.main:app --host 0.0.0.0 --port 8000

Visit http://localhost:8000/dashboard to access the web management interface.
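
Because the API is OpenAI compatible, any OpenAI client library can talk to DeepGemini. The sketch below uses the official openai Python package and streams a response; the base URL assumes the standard /v1 path, the model name deepgemini-chain is a placeholder for whatever model or relay chain you have configured in the dashboard, and the API key must match ALLOW_API_KEY from your .env:

from openai import OpenAI

# Point the client at the local DeepGemini instance instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-secret-key",  # must match ALLOW_API_KEY
)

# Stream a chat completion; "deepgemini-chain" is a placeholder model name.
stream = client.chat.completions.create(
    model="deepgemini-chain",
    messages=[{"role": "user", "content": "Summarize the pros and cons of SQLite in two sentences."}],
    stream=True,
)

# Print tokens as they arrive.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)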

🐳 Docker Deployment

Using Docker Compose (Recommended)

  1. Create and configure your .env file, and create an empty database file:
cp .env.example .env
touch deepgemini.db        # Linux/Mac
echo "" > deepgemini.db    # Windows PowerShell
  2. Build and start the container:
docker-compose up -d
  3. Access the web interface at http://localhost:8000/dashboard

Using Docker Directly

  1. Pull the image:
docker pull bradleylzh/deepgemini:latest
  2. Create necessary files:

For Linux/Mac:

# Create .env file and empty database file
cp .env.example .env
touch deepgemini.db

For Windows PowerShell:

# Create .env file and empty database file
cp .env.example .env
echo "" > deepgemini.db
  3. Run the container:

For Linux/Mac:

docker run -d \
-p 8000:8000 \
-v $(pwd)/.env:/app/.env \
-v $(pwd)/deepgemini.db:/app/deepgemini.db \
--name deepgemini \
bradleylzh/deepgemini:latest

For Windows PowerShell:

docker run -d -p 8000:8000 `
-v ${PWD}\.env:/app/.env `
-v ${PWD}\deepgemini.db:/app/deepgemini.db `
--name deepgemini `
bradleylzh/deepgemini:latest

🔧 Model Configuration

DeepGemini supports various AI providers:

  • DeepSeek: Advanced reasoning capabilities
  • Claude: Refined text generation and thinking
  • Gemini: Google's AI model
  • Grok3: xAI's Grok model
  • Custom: Add your own provider integration

Each model can be configured with:

  • API credentials
  • Model parameters (temperature, top_p, tool, etc.)
  • System prompts
  • Usage type (reasoning/execution/both)
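
Conceptually, each model entry pairs a provider and its credentials with default parameters. The snippet below is a purely illustrative sketch (the field names are assumptions, not the actual schema); in practice these settings are entered through the web management UI:

model_config = {
    "name": "deepseek-reasoner",
    "provider": "deepseek",
    "api_key": "sk-xxxx",                  # API credentials
    "temperature": 0.7,                    # model parameters
    "top_p": 0.9,
    "system_prompt": "Think step by step before answering.",
    "usage_type": "reasoning",             # reasoning / execution / both
}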

🔄 Relay Chain Configuration

Create custom relay chains by combining models:

  1. Reasoning Step: Initial analysis and planning
  2. Execution Step: Final response generation
  3. Custom Steps: Add multiple steps as needed
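
As an illustrative sketch only (the names and structure are assumptions; chains are assembled through the dashboard rather than written by hand), a two-step chain might pair a reasoning model with an execution model:

relay_chain = {
    "name": "deepgemini-chain",
    "steps": [
        {"model": "deepseek-reasoner", "step_type": "reasoning"},  # initial analysis and planning
        {"model": "claude-sonnet", "step_type": "execution"},      # final response generation
    ],
}

A client then calls the chain by name through the OpenAI-compatible endpoint, as in the Quick Start example above.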

👥 Multi-Role Discussion

  • Role Management: Create AI roles with different personalities and skills

  • Discussion Groups: Combine multiple roles to form discussion groups
  • Multiple Discussion Modes:
    • General Discussion
    • Brainstorming
    • Debate
    • Role-playing
    • SWOT Analysis
    • Six Thinking Hats

🛠 Tech Stack

  • FastAPI (Python 3.11)
  • SQLite with Alembic migrations
  • Uvicorn ASGI server

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📬 Contact

For questions and support, please open an issue on GitHub.