Deepchecks: Tests for Continuous Validation of ML Models & Data.
Deepchecks is a holistic open-source solution for all of your AI & ML validation needs,
enabling you to thoroughly test your data and models from research to production.
Join Slack | Documentation | Blog | Twitter
Deepchecks includes:
- Deepchecks Testing (this repo)
- Deepchecks CI & Testing Management
- Deepchecks Monitoring
This is our main repo, as all components use the deepchecks checks at their core. See the Getting Started section for more information about installation and quickstarts for each of the components.
If you want to see deepchecks monitoring's code, you can check out the
deepchecks/monitoring repo.
To install deepchecks testing, run:

pip install deepchecks -U --user
For installing the nlp / vision submodules or with conda:
- For the NLP submodule, replace deepchecks with "deepchecks[nlp]", and optionally also install deepchecks[nlp-properties].
- For the vision submodule, replace deepchecks with "deepchecks[vision]".
- With conda: conda install -c conda-forge deepchecks

Check out the full installation instructions for deepchecks testing here.
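For example, following the pattern above, the resulting install commands would look roughly like this (quote the package spec in shells such as zsh):

pip install "deepchecks[nlp]" -U --user
pip install "deepchecks[nlp-properties]" -U --user  # optional, adds NLP properties support
pip install "deepchecks[vision]" -U --user
conda install -c conda-forge deepchecks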
To use deepchecks for production monitoring, you can either use our SaaS service, or deploy a local instance in one line on Linux/MacOS (Windows is WIP!) with Docker.
Create a new directory for the installation files, open a terminal within that directory and run the following:
pip install deepchecks-installer
deepchecks-installer install-monitoring
This will automatically download the necessary dependencies, run the installation process
and then start the application locally.
The installation will take a few minutes. Then you can open the deployment URL (default is http://localhost)
and start the system onboarding. Check out the full monitoring open source installation & quickstart.
Note that the open source product is built such that each deployment supports monitoring of
a single model.
Jump right into the respective quickstart docs (Tabular, NLP, or Vision)
to have deepchecks up and running on your data.
Inside the quickstarts, you'll see how to create the relevant deepchecks object for holding your data and metadata
(Dataset, TextData or VisionData, corresponding to the data type), and run a Suite or Check.
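For instance, a minimal sketch of creating the tabular object (assuming pandas DataFrames train_df / test_df with a 'target' label column; the column names here are placeholders):

from deepchecks.tabular import Dataset

# Wrap the raw dataframes together with their metadata (label column, categorical features)
train_dataset = Dataset(train_df, label='target', cat_features=['category_col'])
test_dataset = Dataset(test_df, label='target', cat_features=['category_col'])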
The code snippet for running it will look something like the following, depending on the chosen Suite or Check.
from deepchecks.tabular.suites import model_evaluation
suite = model_evaluation()
suite_result = suite.run(train_dataset=train_dataset, test_dataset=test_dataset, model=model)
suite_result.save_as_html() # replace this with suite_result.show() or suite_result.show_in_window() to see results inline or in window
# or suite_result.results[0].value with the relevant check index to process the check result's values in python
The output will be a report that enables you to inspect the status and results of the chosen checks:
Jump right into the
open source monitoring quickstart docs
to have it up and running on your data.
You'll then be able to see the check results over time, set alerts, and interact
with the dynamic deepchecks UI that looks like this:
Deepchecks' managed CI & Testing Management is currently in closed preview.
Book a demo for more information about the offering.
For building and maintaining your own CI process while utilizing Deepchecks Testing for it,
check out our docs for Using Deepchecks in CI/CD.
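As an illustration, a CI gate can be built on a suite run, a minimal sketch of which follows (reusing the suite and datasets from the testing quickstart above; the passed() helper on the suite result and the report filename are assumptions, so treat this as a sketch rather than the documented CI recipe):

import sys
from deepchecks.tabular.suites import model_evaluation

# Run the suite as part of the CI job
suite_result = model_evaluation().run(train_dataset=train_dataset, test_dataset=test_dataset, model=model)
suite_result.save_as_html('ci_report.html')  # hypothetical filename; attach as a CI artifact

# Fail the pipeline if the suite's conditions did not pass
# (passed() is assumed here; alternatively, iterate suite_result.results and inspect conditions)
if not suite_result.passed():
    sys.exit(1)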
At its core, deepchecks includes a wide variety of built-in Checks
for testing all types of data- and model-related issues.
These checks are implemented for various models and data types (Tabular, NLP, Vision),
and can easily be customized and expanded.
The check results can be used to automatically make informed decisions
about your model's production-readiness, and for monitoring it over time in production.
The check results can be examined with visual reports, by saving them to an HTML file (result.save_as_html('output_report_name.html')) or viewing them in Jupyter (result.show());
processed with code, using their pythonic / JSON output (e.g. the result's .value attribute);
and inspected and collaborated on with Deepchecks' dynamic UI
(for examining test results and for production monitoring).
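For example, a minimal sketch of the code path (using suite_result from the testing snippet above; the output filename is a placeholder):

suite_result.save_as_html('output_report_name.html')  # visual report for sharing or inspection
for res in suite_result.results:
    # successful checks expose their raw computed output via the .value attribute
    if hasattr(res, 'value'):
        print(res.value)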
Deepchecks' projects (deepchecks/deepchecks & deepchecks/monitoring) are open source and are released under AGPL 3.0.
The only exception is the Deepchecks Monitoring components (in the deepchecks/monitoring repo)
that reside under the backend/deepchecks_monitoring/ee
directory, which are subject to a commercial license (see the license here).
That directory isn't used by default, and is packaged as part of the deepchecks monitoring repository simply to
support upgrading to the commercial edition without downtime.
Enabling premium features (contained in the backend/deepchecks_monitoring/ee
directory) with a self-hosted instance requires a Deepchecks license.
To learn more, book a demo or see our pricing page.
Looking for a 100% open-source solution for deepchecks monitoring?
Check out the Monitoring OSS repository, which is purged of all proprietary code and features.
Deepchecks is an open source solution.
We are committed to a transparent development process and highly appreciate any contributions.
Whether you are helping us fix bugs, propose new features, improve our documentation or spread the word,
we would love to have you as part of our community.
Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions,
get help for package usage or contributions, or engage in discussions about ML testing!
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors
specification. Contributions of any kind are welcome!