Construction is among the industries where accidents occur most frequently. It is crucial to ensure that workers consistently wear their personal protective equipment to keep them safe, reduce accidents, and comply with regulatory standards.
Several solutions can be considered:

  • Manual inspections: While they allow direct verification, their effectiveness is limited by human capacity and time. They are also prone to errors and are not scalable.
  • CCTV surveillance: This covers larger areas, providing a broader overview. However, it requires manual review of footage, which is time-consuming and raises privacy concerns as individuals may feel constantly monitored.
  • Wearable sensors: These offer real-time data, enhancing responsiveness to incidents. However, they are often expensive and may face resistance from workers who are reluctant to wear sensors continuously.


This is where edge AI comes in. Compared to the solutions above, it offers real-time detection, cost efficiency, privacy preservation, and scalability. Quantitative evidence, such as reduced accident rates and cost savings, supports the business case, while qualitative feedback from workers points to an improved perception of safety.

Approach

This use case is based on a public dataset containing images of workers wearing personal protective equipment (helmets, safety vests, shoes, and gloves). The goal is to detect what personal protective equipment a person is wearing and draw a bounding box around each piece of equipment.

For this project, we used the STM32 model zoo. It provides pre-trained models optimized for STM32 and scripts to retrain and customize your models.

First, we cloned the GitHub repository and set up our project environment. We mainly used the resources from the "Object detection" folder since our project involved detecting personal protective equipment. You might use resources from another folder depending on your use case. Several applications are available in the STM32 model zoo.
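For reference, the initial setup can look like the sketch below. The repository URL is the public stm32ai-modelzoo GitHub project; the last step assumes the Python dependencies are listed in a requirements.txt at the root of the repository, so adapt it to the instructions in the repository's README:

    git clone https://github.com/STMicroelectronics/stm32ai-modelzoo.git
    cd stm32ai-modelzoo
    pip install -r requirements.txt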

After downloading the dataset, we imported the files into the STM32 model zoo in the designated folder (stm32ai-modelzoo/object_detection/datasets). Each sample in the dataset consists of an image and a .txt annotation file giving the position of each piece of equipment in the image.
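For illustration, and assuming the YOLO Darknet .txt convention commonly used for such datasets, each line of an annotation file describes one object as a class index followed by the normalized center coordinates, width, and height of its bounding box. The values below are made up:

    0 0.512 0.430 0.210 0.780
    1 0.515 0.120 0.080 0.095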

We then used the configuration file (stm32ai-modelzoo/object_detection/src/user_config.yaml) to set up the training parameters for our model. This file allowed us to do the following (a simplified sketch of the configuration is shown after the list):

  • Select one of the pre-trained models available in the STM32 model zoo: SSD MobileNet V1 in our case.
  • Define the mode of operation: here chain_tqeb, to train, quantize, evaluate, and benchmark the model.*
  • Define the classes to detect: Person, Helmet, Protection Jacket, Protection Shoes, Gloves.
  • Enter the paths to the training, validation, test, and quantization folders (for quantization, it is recommended to use a portion of the training set).
  • Rescale the images to the model input size: 224x224x3.
  • The other parameters can be left at their default values. However, you can modify them to improve the training of your model.
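Below is a simplified sketch of what these settings could look like in user_config.yaml. The key names, values, and paths are illustrative and may differ between model zoo releases, so refer to the template provided in the repository:

    operation_mode: chain_tqeb        # train, quantize, evaluate, and benchmark

    general:
      model_type: ssd_mobilenet_v1    # pre-trained model selected from the model zoo

    dataset:
      class_names: ["Person", "Helmet", "Protection Jacket", "Protection Shoes", "Gloves"]
      training_path: ../datasets/ppe/train      # hypothetical dataset paths
      validation_path: ../datasets/ppe/val
      test_path: ../datasets/ppe/test
      quantization_split: 0.3         # portion of the training set used for quantization

    preprocessing:
      color_mode: rgb
      resizing:
        aspect_ratio: fit             # rescale images to the model input size
        interpolation: bilinear

    training:
      model:
        input_shape: (224, 224, 3)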


Then, we just had to execute the Python script with the command python stm32ai_main.py to start the training (make sure you are in the stm32ai-modelzoo/object_detection/src folder).

Once the training was completed, we retrieved the results from the experiment_outputs folder (stm32ai-modelzoo/object_detection/src/experiment_outputs). In this folder, we can find the trained .h5 model and its quantized .tflite version, the execution time and memory footprint of each model, and the confusion matrix for each object to be detected.

You will find below a video in which we show all these steps.

* The quantization and benchmark steps are performed through the ST Edge AI Developer Cloud. You will need an ST account to use it.

Data

Dataset: Personal protective equipment detection
Data format: RGB images
Classes (5): Person, Helmet, Protection Jacket, Protection Shoes, Gloves
Split: 4.2K training images, 584 validation images, 292 test images

Results

Model: SSD MobileNet V1 quantized neural network
Input size: images rescaled to 224x224x3
Memory footprint:
  • 188.6 Kbytes Flash for weights
  • 131.42 Kbytes RAM for activations
Accuracy: 78.96% mean average precision (mAP) after quantization

  • Helmet class AP = 79.85%
  • Gloves class AP = 50.61%
  • Person class AP = 91.32%
  • Shoes class AP = 82.73%
  • Jacket class AP = 90.29%


Performance on STM32H747I-DISCO (High-perf) @ 400 MHz
Inference time: 74.98 ms
STM32Cube.AI 8.1.0

Resources

Model repository ST Edge AI Model Zoo

A collection of reference AI models optimized to run on ST devices with associated deployment scripts. The model zoo is a valuable resource to add edge AI capabilities to embedded applications.

Optimized with STM32Cube.AI

A free STM32Cube expansion package, X-CUBE-AI allows developers to convert pretrained AI algorithms automatically, such as neural network and machine learning models, into optimized C code for STM32.

Compatible with STM32

The STM32 family of 32-bit microcontrollers based on the Arm Cortex®-M processor is designed to offer new degrees of freedom to MCU users. It offers products combining very high performance, real-time capabilities, digital signal processing, low-power / low-voltage operation, and connectivity, while maintaining full integration and ease of development.

You might also be interested in

Entertainment | Image recognition | Vision | STM32Cube.AI | Demo | Tutorial | GitHub | Video

Smart mirrors for fitness: pose estimation and multi-person tracking

Track and analyze users' body movements to provide feedback on exercise with STM32N6 at 28 FPS.

Predictive maintenance | Accelerometer | NanoEdge AI Studio | Video | Partner | Industrial

Anomaly detection with on-device learning with Rtone

Anomaly detection solution on industrial equipment, running on STM32 MCU.

Smart city | Smart home | Smart office | Context awareness | Vision | AI for Linux | Idea | Video

People detection and counting solution

Optimized computer vision using an MPU running at 8 FPS.