superduper

Superduper: Build end-to-end AI applications and agent workflows on your existing data infrastructure and preferred tools - without migrating your data.

Build end-to-end AI-data workflows and applications with your favourite tools

What is Superduper?

Superduper is a Python-based framework for building end-to-end AI-data workflows and applications on your own data, integrating with major databases. It supports the latest technologies and techniques, including LLMs, vector search, RAG, and multimodality, as well as classical AI and ML paradigms.

Developers can leverage Superduper by building compositional, declarative objects that outsource the details of deployment, orchestration, versioning, and more to the Superduper engine. This allows developers to avoid implementing MLOps, ETL pipelines, model deployment, and data migration and synchronization themselves.

Using Superduper is simply “CAPE”: Connect to your data, apply arbitrary AI to that data, package and reuse the application on arbitrary data, and execute AI-database queries and predictions on the resulting AI outputs and data.

  • Connect
  • Apply
  • Package
  • Execute

Connect

db = superduper('mongodb|postgres|mysql|sqlite|duckdb|snowflake://<your-db-uri>')

Apply

listener = MyLLM('self_hosted_llm', architecture='llama-3.2', postprocess=my_postprocess).to_listener('documents', key='txt')
db.apply(listener)

Package

application = Application('my-analysis-app', components=[listener, vector_index])
template = Template('my-analysis', component=app, substitutions={'documents': 'table'})
template.export('my-analysis')

Execute

query = db['documents'].like({'txt': 'Tell me about Superduper'}, vector_index='my-index').select()
query.execute()

Superduper may be run anywhere; you can also contact us to learn more about the enterprise platform for bringing your Superduper workflows to production at scale.

What does Superduper support?

Superduper is flexible enough to support a huge range of AI techniques and paradigms. We have a range of pre-built functionality in the plugins and templates directories. Superduper particularly excels when AI and data need to interact in a continuous and tightly integrated fashion. Illustrative examples are available to try out from our templates.

We’re looking to connect with enthusiastic developers to contribute to the repertoire of amazing pre-built templates and workflows available in Superduper open-source. Please join the discussion by contributing issues and pull requests!

Core features

  • Create a Superduper data-AI connection/ datalayer consisting of your own
    • databackend (database / data lake / data warehouse)
    • metadata store (same as or separate from the databackend)
    • artifact store (to store large objects)
    • compute implementation
  • Build complex units of functionality (Component) using a declarative programming model, which integrate closely with data in your databackend, using a simple set of primitives and base classes.
  • Build larger units of functionality by wrapping several interrelated Component instances into an AI-data Application
  • Reuse battle-tested Component, Model and Application instances using Template, giving developers an easy point to start with difficult AI implementations
  • A transparent, human-readable, web-friendly and highly portable serialization protocol, the “Superduper protocol”, to communicate results of experimentation, make Application lineage and versioning easy to follow, and create an elegant segue from the AI world to the world of databases and typed data.
  • Execute queries using a combination of outputs of Model instances as well as primary databackend data, to enable the latest generation of AI-data applications, including all flavours of vector-search, RAG, and much, much more.
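The last point can be sketched in plain Python. The following is a toy illustration, not Superduper's implementation: a vector-search query ranks primary-table rows by similarity between a query embedding and stored model ("listener") outputs. The vocabulary, embedding function, and table are invented for the example.

```python
# Toy illustration (not Superduper's implementation): combining model
# outputs (embeddings) with primary data for a vector-search query.
import math

# Primary "databackend" rows.
documents = [
    {"id": 1, "txt": "Superduper integrates AI with databases"},
    {"id": 2, "txt": "Bananas are rich in potassium"},
]

# Stand-in "model": a trivial bag-of-words embedding over a fixed vocabulary.
VOCAB = ["superduper", "ai", "databases", "bananas", "potassium"]

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Listener" outputs: embeddings stored alongside the primary data.
outputs = {doc["id"]: embed(doc["txt"]) for doc in documents}

def like(query_txt: str, limit: int = 1):
    # Rank primary rows by similarity of their stored outputs to the query.
    q = embed(query_txt)
    ranked = sorted(documents, key=lambda d: cosine(q, outputs[d["id"]]), reverse=True)
    return ranked[:limit]

print(like("Tell me about Superduper")[0]["id"])  # → 1
```

In Superduper, the equivalent ranking is expressed declaratively with a `like` query against a VectorIndex, and the embeddings are kept up to date automatically.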

Key benefits

Massive flexibility

Combine any Python-based AI model or API from the ecosystem with the most established, battle-tested databases and warehouses; Snowflake, MongoDB, Postgres, MySQL, SQL Server, SQLite, BigQuery, and ClickHouse are all supported.

Seamless integration avoiding MLOps

Remove the need to implement MLOps by using the declarative and compositional Superduper components, which specify the end state that the models and data should reach.
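The idea can be sketched as a toy reconciler (this is illustrative, not Superduper's implementation): a component declares its desired end state, and `apply` compares that declaration against what is currently deployed, so no imperative pipeline code is needed.

```python
# Toy sketch of declarative apply (not Superduper's API): the component
# declares an end state; apply() reconciles current state against it.
from dataclasses import dataclass, field

@dataclass
class Component:
    identifier: str
    config: dict = field(default_factory=dict)

class Datalayer:
    def __init__(self):
        self._deployed = {}  # current state, keyed by identifier

    def apply(self, component: Component) -> str:
        current = self._deployed.get(component.identifier)
        if current == component.config:
            return "unchanged"  # already at the declared end state
        self._deployed[component.identifier] = dict(component.config)
        return "created" if current is None else "updated"

db = Datalayer()
c = Component("my-model", {"architecture": "llama-3.2"})
print(db.apply(c))  # → created
print(db.apply(c))  # → unchanged
```

Applying the same declaration twice is a no-op; changing the declaration triggers an update, which is the property that makes hand-written MLOps glue unnecessary.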

Promote code reusability and portability

Package components as templates, exposing the key parameters required to reuse and communicate AI applications in your community and organization.

Cost savings

Implement vector search and embedding generation without requiring a dedicated vector database. Effortlessly toggle between self-hosted models and API-hosted models without major code changes.
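The toggling works because both kinds of model sit behind the same interface. As a sketch (class names here are illustrative, not Superduper's API), application code depends only on a shared `predict` method, so swapping a self-hosted backend for an API-hosted one changes nothing downstream:

```python
# Illustrative sketch (not Superduper's API): self-hosted and API-hosted
# models expose the same interface, so backends swap without code changes.
class SelfHostedModel:
    def predict(self, prompt: str) -> str:
        return f"[local] {prompt}"  # placeholder for local inference

class APIHostedModel:
    def predict(self, prompt: str) -> str:
        return f"[api] {prompt}"    # placeholder for a remote API call

def answer(model, prompt: str) -> str:
    # Application code depends only on the shared interface.
    return model.predict(prompt)

print(answer(SelfHostedModel(), "hi"))  # → [local] hi
print(answer(APIHostedModel(), "hi"))   # → [api] hi
```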

Move to production without any additional effort

Superduper’s REST API allows installed models to be served without additional development work. For enterprise-grade scalability, failsafes, security, and logging, applications and workflows created with Superduper may be deployed in one click on Superduper Enterprise.

What’s new in the main branch?

We are working on the upcoming 0.4.0 release, in which we have:

Revamped how Component triggers initial and data-dependent computations using @trigger

This will enable a large diversity of Component types in addition to the well-established Model, Listener, and VectorIndex.

Created a general CDC (change-data-capture) base class

This will allow developers to create a range of functionality which reacts to incoming data changes.
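To illustrate the pattern (a toy sketch, not the actual CDC base class): handlers registered on a CDC object are invoked for each incoming row, which is the shape of functionality that "reacts to data changes".

```python
# Toy change-data-capture sketch (illustrative, not Superduper's CDC class):
# registered handlers react to each incoming row.
class CDC:
    def __init__(self):
        self._handlers = []

    def on_change(self, fn):
        # Register a handler to run on every data change.
        self._handlers.append(fn)
        return fn

    def insert(self, row):
        # In a real system this would be driven by the database's change stream.
        for fn in self._handlers:
            fn(row)

cdc = CDC()
seen = []

@cdc.on_change
def index_row(row):
    # Example reaction: "index" the new text.
    seen.append(row["txt"].upper())

cdc.insert({"txt": "new document"})
print(seen)  # → ['NEW DOCUMENT']
```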

Developed the concept of Template to enable re-usable units of complete functionality

Components saved as Template instances will allow users to easily redeploy their already deployed and tested Component and Application implementations on alternative data sources, with key parameters toggled to cater to operational requirements.

Added concrete Template implementations to the project

These Template instances may be applied with Superduper using a single command:

superduper apply <template> '{"variable_1": "value_1",  "variable_2": ...}'

or:

from superduper import templates

app = template(variable_1='value_1', variable_2='value_2', ...)

db.apply(app)

Added a user interface and new REST implementation

Now you may view your Component, Application and Template instances in the user-interface, and execute queries using QueryTemplate instances, directly against the REST server.

superduper start

Getting started

Installation:

pip install superduper-framework

View available pre-built templates:

superduper ls

Connect and apply a pre-built template:

(Note: the pre-built templates are currently supported only on Python 3.10; all other features may be used on Python 3.11+.)

# e.g. 'mongodb://localhost:27017/test_db'
SUPERDUPER_DATA_BACKEND=<your-db-uri> superduper apply simple_rag

Execute a query or prediction on the results:

from superduper import superduper
db = superduper('<your-db-uri>')  # e.g. 'mongodb://localhost:27017/test_db'
db['rag'].predict('Tell me about superduper')

View and monitor everything in the Superduper interface. From the command line:

superduper start

After doing this you are ready to build your own components, applications and templates!

Get started by copying an existing template to your own development environment:

superduper bootstrap <template_name> --destination templates/my-template

Edit the build.ipynb notebook to build your own functionality.

Currently supported datastores

MongoDB, Snowflake, PostgreSQL, MySQL, SQL Server, SQLite, DuckDB, BigQuery, and ClickHouse.

Community & getting help

If you have any problems, questions, comments, or ideas:

Contributing

There are many ways to contribute, and they are not limited to writing code. We welcome all contributions such as:

Please see our Contributing Guide for details.

Contributors

Thanks goes to these wonderful people:

License

Superduper is open-source and intended to be a community effort, and it wouldn’t be possible without your support and enthusiasm.
It is distributed under the terms of the Apache 2.0 license. Any contribution made to this project will be subject to the same provisions.

Join Us

We are looking for nice people who are invested in the problem we are trying to solve to join us full-time. Find roles that we are trying to fill here!