# Paddle-Lite

**Repository Path**: paddlepaddle/paddle-lite

## Basic Information

- **Project Name**: Paddle-Lite
- **Description**: Paddle Lite is the upgraded version of Paddle-Mobile. It targets lightweight, high-efficiency inference in a wider range of scenarios, including mobile devices, supports a broader set of hardware and platforms, and is a high-performance, lightweight deep learning inference engine.
- **Primary Language**: C++
- **License**: Apache-2.0
- **Default Branch**: develop
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 169
- **Forks**: 54
- **Created**: 2019-08-23
- **Last Updated**: 2025-09-04

## Categories & Tags

**Categories**: machine-learning

**Tags**: None

## README

# Paddle Lite

[简体中文](README.md) | English

[Build Status](https://travis-ci.org/PaddlePaddle/Paddle-Lite) [Documentation](https://www.paddlepaddle.org.cn/lite) [Release](https://github.com/PaddlePaddle/Paddle-Lite/releases) [License](LICENSE)

Paddle Lite is an updated version of Paddle-Mobile, an open-source deep learning framework designed to make it easy to perform inference on mobile, embedded, and IoT devices. It is compatible with PaddlePaddle and with pre-trained models from other sources.

For tutorials, please see the [Paddle Lite Documentation](https://www.paddlepaddle.org.cn/lite).

## Key Features

- Multiple platform support, covering Android and iOS devices, embedded Linux, and Windows, macOS, and Linux computers.
- Diverse language support, including Java, C++, and Python (a minimal C++ usage sketch appears at the end of this README).
- High performance and light weight: optimized for on-device machine learning, with reduced model and binary size, efficient inference, and low memory usage.

## Architecture

Paddle Lite is designed to support a wide range of hardware and devices. It enables mixed execution of a single model on multiple devices, optimization at various phases, and lightweight deployment on devices.

*Architecture diagram: the analysis phase and the execution phase of Paddle Lite.*

As shown in the figure above, the analysis phase includes the machine IR module and enables optimizations such as op fusion and redundant computation pruning. The execution phase involves only kernel execution, so it can be deployed on its own to keep deployments as lightweight as possible.

## Key Info about the Update

The earlier Paddle-Mobile was designed to be compatible with PaddlePaddle and multiple kinds of hardware, including ARM CPU, Mali GPU, Adreno GPU, FPGA, ARM-Linux, and Apple GPUs via Metal. Within Baidu, Inc., many product lines have been using Paddle-Mobile. As an update of Paddle-Mobile, Paddle Lite has incorporated many of its capabilities into the [new architecture](https://github.com/PaddlePaddle/Paddle-Lite/tree/develop/lite).

## Special Thanks

Paddle Lite has referenced the following open-source projects:

- [ARM compute library](https://github.com/ARM-software/ComputeLibrary)
- [Anakin](https://github.com/PaddlePaddle/Anakin). The optimizations in Anakin have been incorporated into Paddle Lite, so there will not be any further updates of Anakin. As another high-performance inference project under PaddlePaddle, Anakin was forward-looking and helpful to the making of Paddle Lite.

## Feedback and Community Support

- Questions, reports, and suggestions are welcome through GitHub Issues!
- Forum: opinions and questions are welcome at the [PaddlePaddle Forum](https://ai.baidu.com/forum/topic/list/168)!
- WeChat Official Account: PaddlePaddle
- QQ Group Chat: 696965088
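As referenced in the Key Features section above, here is a minimal sketch of the typical C++ inference flow using the public `paddle::lite_api` headers. It assumes a model that has already been converted to Paddle Lite's optimized `.nb` format with the `opt` tool; the file name `mobilenet_v1.nb` and the 1x3x224x224 input shape are illustrative placeholders, and exact method names may differ slightly between library versions.

```cpp
#include <iostream>
#include <memory>
#include <vector>

#include "paddle_api.h"  // shipped with the Paddle Lite C++ distribution

using namespace paddle::lite_api;  // NOLINT

int main() {
  // Load a model previously optimized with Paddle Lite's opt tool.
  // "mobilenet_v1.nb" is a placeholder file name for this sketch.
  MobileConfig config;
  config.set_model_from_file("mobilenet_v1.nb");

  // Create the predictor from the mobile config.
  std::shared_ptr<PaddlePredictor> predictor =
      CreatePaddlePredictor<MobileConfig>(config);

  // Fill the first input tensor; the 1x3x224x224 shape is an assumption
  // for an ImageNet-style classification model.
  std::unique_ptr<Tensor> input = predictor->GetInput(0);
  input->Resize({1, 3, 224, 224});
  float* data = input->mutable_data<float>();
  for (int i = 0; i < 1 * 3 * 224 * 224; ++i) data[i] = 1.f;

  // Run inference and read back the first output tensor.
  predictor->Run();
  std::unique_ptr<const Tensor> output = predictor->GetOutput(0);
  const float* out_data = output->data<float>();
  std::cout << "first output value: " << out_data[0] << std::endl;
  return 0;
}
```

The Java and Python bindings follow the same config/predictor/tensor pattern; see the Paddle Lite documentation linked above for the version-specific APIs and build instructions.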