Product overview
Key Benefits
NN and ML model optimization
Import your own neural network models, select optimization options, and generate the optimized C code.
NN and ML model profiling
Generates a report that details the memory requirements and the inference time, both for the complete network and for each layer.
Part of the ST Edge AI Suite
A collection of free online tools, case studies, and resources to support engineers at every stage of their edge AI development.
Description
X-CUBE-AI is an STM32Cube Expansion Package, part of the STM32Cube.AI ecosystem. It extends STM32CubeMX capabilities with automatic conversion of pretrained artificial intelligence algorithms, including neural network and classical machine learning models, and integrates the generated optimized library into the user's project.
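For illustration, the sketch below shows how application code typically calls the library generated for a model named "network". The ai_network_* functions and macros follow the pattern of the generated C API, but the exact identifiers depend on the model name and the X-CUBE-AI version, so treat them as assumptions rather than a definitive reference.

    /* Illustrative application glue for an X-CUBE-AI generated model named
       "network"; exact identifiers depend on the model name and tool version. */
    #include "network.h"        /* header generated by X-CUBE-AI */
    #include "network_data.h"   /* generated weights and buffer-size macros */

    /* Scratch buffer for intermediate activations, sized by a generated macro */
    static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

    static ai_handle network = AI_HANDLE_NULL;
    static ai_buffer *ai_input;
    static ai_buffer *ai_output;

    /* Create and initialize the network instance once at startup */
    int ai_bootstrap(void)
    {
      const ai_handle acts[] = { activations };

      ai_error err = ai_network_create_and_init(&network, acts, NULL);
      if (err.type != AI_ERROR_NONE) {
        return -1;
      }

      /* Retrieve the input/output buffer descriptors created by the tool */
      ai_input  = ai_network_inputs_get(network, NULL);
      ai_output = ai_network_outputs_get(network, NULL);
      return 0;
    }

    /* Run one inference: point the descriptors at user data and call the runtime */
    int ai_infer(void *in_data, void *out_data)
    {
      ai_input[0].data  = AI_HANDLE_PTR(in_data);
      ai_output[0].data = AI_HANDLE_PTR(out_data);

      ai_i32 n_batches = ai_network_run(network, ai_input, ai_output);
      return (n_batches == 1) ? 0 : -1;
    }

In practice, the package typically generates application templates with equivalent bootstrap and inference helpers, so glue code of this kind rarely needs to be written from scratch.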
The easiest way to use X-CUBE-AI is to download it from within the STM32CubeMX tool (version 5.4.0 or newer), as described in the user manual "Getting started with X-CUBE-AI Expansion Package for artificial intelligence (AI)" (UM2526).
The X-CUBE-AI Expansion Package also offers several ways to validate artificial intelligence algorithms, both on a desktop PC and on an STM32. With X-CUBE-AI, it is also possible to measure performance on STM32 devices without writing any device-specific C code by hand.
All features
- Generation of an STM32-optimized library from pretrained neural network and classical machine learning models
- Native support for various deep learning frameworks such as Keras and TensorFlow™ Lite, and support for all frameworks that can export to the ONNX standard format such as PyTorch™, MATLAB®, and more
- Support for various built-in scikit-learn models such as isolation forest, support vector machine (SVM), K-means, and more
- Support for 8-bit quantized neural network format (TensorFlow™ Lite and ONNX Tensor-oriented QDQ)
- Support for deeply quantized neural networks (down to 1-bit) from QKeras and Larq
- Relocatable option enabling standalone model updates during the product lifecycle by generating the model binary code separately from the application code
- Possibility to use larger networks by storing weights in external flash memory and activation buffers in external RAM (see the placement sketch after this list)
- Easy portability across different STM32 microcontroller series through STM32Cube integration
- For TensorFlow™ Lite neural networks, code generation using either the STM32Cube.AI runtime or the TensorFlow™ Lite for Microcontrollers runtime
- Free-of-charge, user-friendly license terms
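To illustrate the external-memory option mentioned above, here is a minimal sketch of one common way to place read-only weights and the activation buffer in memory-mapped external memories with a GCC-based toolchain. The section names (.ext_flash, .ext_ram), size macros, and symbol names are illustrative assumptions; the actual placement is driven by the options selected in X-CUBE-AI, the generated data files, and the project's linker script.

    #include <stdint.h>

    /* Illustrative sizes; the real values come from the generated code and report */
    #define NETWORK_WEIGHTS_SIZE      (512U * 1024U)
    #define NETWORK_ACTIVATIONS_SIZE  (256U * 1024U)

    /* Read-only weights placed in a memory-mapped external flash region.
       ".ext_flash" must exist in the linker script and map to the external
       device (for example, QSPI/OSPI flash in memory-mapped mode). */
    __attribute__((section(".ext_flash")))
    static const uint8_t network_weights[NETWORK_WEIGHTS_SIZE] = { 0U };

    /* Activation (scratch) buffer placed in external RAM (for example, SDRAM
       on the FMC or octo-SPI RAM) through a matching ".ext_ram" section. */
    __attribute__((section(".ext_ram")))
    static uint8_t network_activations[NETWORK_ACTIVATIONS_SIZE];

In a real project, the weight data itself is emitted by the tool (for example in the generated network_data files), so typically only the buffer placement and the linker memory regions need to be adapted to the board's external memories.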