Wearables | Entertainment | NanoEdge AI Studio | Human activity | Gyroscope

Human activity recognition using a smartphone

Using a smartphone to recognize human activities.

Maintaining an active lifestyle is essential for our overall well-being, but understanding and tracking our activities accurately can be a challenge. Artificial Intelligence (AI) revolutionizes the precision of human activity recognition, and edge AI enables these algorithms to be embedded everywhere. They can run locally, without disclosing any personal information, in devices such as smartwatches, smart wristbands, and smart shoes, and above all in a device that everyone has in their pocket: a smartphone.

With an optimized AI model embedded, a smartphone becomes a personalized activity recognition system capable of understanding and categorizing your own movements. From walking, running, and cycling to more complex activities such as yoga and weightlifting, it can monitor your progress and help you make informed decisions about your fitness routine.

Approach

This use case is based on the "Human Activity Recognition with Smartphones" dataset from Kaggle, created by recording 30 participants carrying a smartphone at the waist. The goal was to determine whether a person is walking, walking upstairs, walking downstairs, sitting, or standing.

The smartphone's accelerometer and gyroscope capture linear acceleration and angular velocity data at a rate of 50 Hz. The training and test data are then sorted into several files, one for each of the five activities, and the 'Activity' (label) column is deleted from each file. Finally, NanoEdge AI Studio is used to train an N-class classification model on these inputs.
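As an illustration, this file preparation step can be scripted before importing the data into NanoEdge AI Studio. The sketch below is a minimal example assuming the Kaggle CSV layout (train.csv and test.csv with an 'Activity' label column); the file names, label strings, and output naming are illustrative assumptions rather than part of the original workflow.

```python
# Minimal data preparation sketch, assuming the Kaggle "Human Activity
# Recognition with Smartphones" files (train.csv, test.csv) with an
# 'Activity' label column. File names and label strings are assumptions.
import pandas as pd

ACTIVITIES = [
    "WALKING",
    "WALKING_UPSTAIRS",
    "WALKING_DOWNSTAIRS",
    "SITTING",
    "STANDING",
]

for split in ("train", "test"):
    df = pd.read_csv(f"{split}.csv")
    for activity in ACTIVITIES:
        # Keep only the rows of one activity, then drop the label column so
        # that each output file contains unlabeled signals, one per line.
        subset = df[df["Activity"] == activity].drop(columns=["Activity"])
        subset.to_csv(f"{split}_{activity.lower()}.csv", index=False, header=False)
```

Each resulting file (for example train_walking.csv) can then be imported as one class of an N-class classification project in the Studio.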

Sensor

Accelerometer and gyroscope.

Data

5 classes of activities: Walking, walking upstairs, walking downstairs, sitting, and standing.
Signal length: 562 (multi-sensor)
Data rate: 50 Hz
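To make the signal length figure concrete, each line of a per-class file is one fixed-length signal that is treated as a single input. A quick sanity check before import could look like the hypothetical sketch below; the expected length and the file naming simply follow the figures above and the preparation sketch shown earlier.

```python
# Hypothetical sanity check: every per-class file should contain signals of
# the same fixed length (562 values per line here) before import.
import csv
import glob

EXPECTED_LENGTH = 562  # signal length quoted above

for path in glob.glob("train_*.csv") + glob.glob("test_*.csv"):
    with open(path, newline="") as f:
        lengths = {len(row) for row in csv.reader(f)}
    assert lengths == {EXPECTED_LENGTH}, f"{path}: unexpected lengths {lengths}"
    print(f"{path}: OK, {EXPECTED_LENGTH} values per signal")
```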

Results

N-class classification:
96.35% accuracy, 6.7 Kbytes of RAM, 15.2 Kbytes of Flash memory
Green points represent correctly classified signals and red points represent misclassified signals. The classes are on the abscissa and the prediction confidence is on the ordinate.

Resources

Model created with NanoEdge AI Studio

A free AutoML software for adding AI to embedded projects, guiding users step by step to easily find the optimal AI model for their requirements.

Compatible with Any STM32 MCU

The STM32 family of 32-bit microcontrollers based on the Arm Cortex®-M processor is designed to offer new degrees of freedom to MCU users. It offers products combining very high performance, real-time capabilities, digital signal processing, low-power / low-voltage operation, and connectivity, while maintaining full integration and ease of development.

You might also be interested in

Wearables | Entertainment

Yoga pose recognition on wearable devices

Pose recognition and classification on a sensor.

Wearables | Entertainment

Gym activity recognition on wearable devices

Activity recognition and classification on a sensor.

Entertainment

Run a Rock-Paper-Scissors game on Arduino using NanoEdge AI Studio 

Gesture classification on Arduino using a ToF sensor.