MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
MNN is a highly efficient and lightweight deep learning framework. It supports both inference and training of deep learning models and delivers industry-leading on-device performance. At present, MNN is integrated into more than 30 apps of Alibaba Inc., such as Taobao, Tmall, Youku, DingTalk, and Xianyu, covering more than 70 usage scenarios, including live broadcast, short video capture, search recommendation, product search by image, interactive marketing, equity distribution, and security risk control. MNN is also used on embedded devices, such as IoT.
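For a quick feel of the C++ inference API, here is a minimal sketch that loads a converted model and runs it on the CPU. The model path and the single-input/single-output assumption are placeholders; see the documentation for the full Session API.

```cpp
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>
#include <memory>

int main() {
    // Load a converted .mnn model (path is a placeholder).
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("model.mnn"));

    // Create an inference session on the CPU backend.
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CPU;
    auto session = net->createSession(config);

    // Fill the (single) input tensor through a host-side copy.
    auto input = net->getSessionInput(session, nullptr);
    std::unique_ptr<MNN::Tensor> inputHost(
        MNN::Tensor::createHostTensorFromDevice(input, false));
    // ... write your data into inputHost->host<float>() here ...
    input->copyFromHostTensor(inputHost.get());

    // Run the graph and read the (single) output back to the host.
    net->runSession(session);
    auto output = net->getSessionOutput(session, nullptr);
    std::unique_ptr<MNN::Tensor> outputHost(
        MNN::Tensor::createHostTensorFromDevice(output, true));
    float first = outputHost->host<float>()[0];
    (void)first;
    return 0;
}
```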
MNN-LLM is a large language model runtime solution built on the MNN engine. The mission of this project is to deploy LLMs locally on everyone’s platforms (mobile phone / PC / IoT). It supports popular large language models such as Qianwen, Baichuan, Zhipu, LLaMA, and others. See the MNN-LLM User guide.
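The snippet below is a minimal sketch in the spirit of the MNN-LLM demos; the header path, namespace, and method signatures are assumptions taken from the demo code and may differ between versions, so treat the User guide as authoritative.

```cpp
// Sketch only: header path and signatures follow the MNN-LLM demos
// and may change between releases; consult the MNN-LLM User guide.
#include "llm/llm.hpp"
#include <memory>

int main() {
    using MNN::Transformer::Llm;
    // config.json points at an exported LLM directory (placeholder path).
    std::unique_ptr<Llm> llm(Llm::createLLM("qwen/config.json"));
    llm->load();                       // load weights and build the runtime
    llm->response("Hello, MNN-LLM!");  // generate a reply for the prompt
    return 0;
}
```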
MNN-Diffusion is a Stable Diffusion model runtime solution built on the MNN engine. The mission of this project is to deploy Stable Diffusion models locally on everyone’s platforms. See the MNN-Diffusion User guide.
Inside Alibaba, MNN serves as the basic module of the compute container in the Walle System, the first end-to-end, general-purpose, and large-scale production system for device-cloud collaborative machine learning, which was published at the top system conference OSDI’22. The key design principles of MNN and the extensive benchmark results (vs. TensorFlow, TensorFlow Lite, PyTorch, PyTorch Mobile, and TVM) can be found in the OSDI paper. The scripts and instructions for benchmarking are located under /benchmark. If MNN or the design of Walle helps your research or production use, please cite our OSDI paper as follows:
@inproceedings{proc:osdi22:walle,
author = {Chengfei Lv and Chaoyue Niu and Renjie Gu and Xiaotang Jiang and Zhaode Wang and Bin Liu and Ziqi Wu and Qiulin Yao and Congyu Huang and Panos Huang and Tao Huang and Hui Shu and Jinde Song and Bin Zou and Peng Lan and Guohuan Xu and Fei Wu and Shaojie Tang and Fan Wu and Guihai Chen},
title = {Walle: An {End-to-End}, {General-Purpose}, and {Large-Scale} Production System for {Device-Cloud} Collaborative Machine Learning},
booktitle = {16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22)},
year = {2022},
isbn = {978-1-939133-28-1},
address = {Carlsbad, CA},
pages = {249--265},
url = {https://www.usenix.org/conference/osdi22/presentation/lv},
publisher = {USENIX Association},
month = jul,
}
MNN’s documentation is available on Read the Docs. You can also build the HTML docs locally by following docs/README.
MNN Workbench can be downloaded from MNN’s homepage; it provides pretrained models, visualized training tools, and one-click deployment of models to devices.
MNN supports model formats from Tensorflow, Caffe, ONNX, and Torchscripts, and common neural networks such as CNN, RNN, GAN, and Transformer. It covers 178 Tensorflow OPs, 52 Caffe OPs, 163 Torchscripts OPs, and 158 ONNX OPs. The architectures and precisions MNN supports are shown below (a sample conversion command follows the table):
Architecture / Precision | | Normal | FP16 | BF16 | Int8 |
---|---|---|---|---|---|
CPU | Native | B | C | B | B |
 | x86/x64-SSE4.1 | A | B | B | A |
 | x86/x64-AVX2 | S | B | B | A |
 | x86/x64-AVX512 | S | B | B | S |
 | ARMv7a | S | S (ARMv8.2) | S | S |
 | ARMv8 | S | S (ARMv8.2) | S (ARMv8.6) | S |
GPU | OpenCL | A | S | C | C |
 | Vulkan | A | A | C | C |
 | Metal | A | S | C | C |
 | CUDA | A | S | C | C |
NPU | CoreML | B | B | C | C |
 | HIAI | B | C | C | B |
 | NNAPI | B | B | C | C |

Here S means supported and strongly optimized, A supported and working well, B supported but not optimized (or with known issues), and C not supported.
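Models are brought into this runtime with the MNNConvert tool before inference. For example, an ONNX model can be converted with `./MNNConvert -f ONNX --modelFile model.onnx --MNNModel model.mnn --bizCode biz` (the file names and bizCode value here are placeholders; see the converter documentation for per-framework options).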
Based on MNN (the tensor compute engine), we provide a series of tools for inference, training, and general computation.
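As one illustration of general computation, the Express API evaluates tensor expressions eagerly. Below is a minimal sketch; the shapes and constants are arbitrary examples.

```cpp
#include <MNN/expr/Expr.hpp>
#include <MNN/expr/ExprCreator.hpp>
#include <cstdio>

int main() {
    using namespace MNN::Express;
    // Build a small computation: y = x * x + 1, on 2x2 constant tensors.
    auto x   = _Const(3.0f, {2, 2}, NCHW);
    auto one = _Const(1.0f, {2, 2}, NCHW);
    auto y   = _Add(_Multiply(x, x), one);
    // readMap evaluates the expression and exposes the result on the host.
    const float* result = y->readMap<float>();
    printf("y[0] = %f\n", result[0]); // prints 10.0
    return 0;
}
```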
The group discussions are predominantly in Chinese, but English speakers are welcome and will get help.
Dingtalk discussion groups:
Group #1 (Full): 23329087
Group #2 (Full): 23350225
Group #3: QR code
The preliminary version of MNN, as a mobile inference engine focused on manual optimization, was published at MLSys 2020. Please cite that paper if MNN previously helped your research:
@inproceedings{alibaba2020mnn,
author = {Jiang, Xiaotang and Wang, Huan and Chen, Yiliu and Wu, Ziqi and Wang, Lichuan and Zou, Bin and Yang, Yafeng and Cui, Zongyang and Cai, Yu and Yu, Tianhang and Lv, Chengfei and Wu, Zhihua},
title = {MNN: A Universal and Efficient Inference Engine},
booktitle = {MLSys},
year = {2020}
}
MNN is released under the Apache 2.0 license.
MNN participants: Taobao Technology Department, Search Engineering Team, DAMO Team, Youku, and other Alibaba Group employees.
MNN refers to the following projects: