AI Trainer: Autoencoder-Based Approach for Squat Analysis and Correction

This project uses deep learning and computer vision to evaluate squat performance and provide corrective feedback. We designed a bidirectional GRU (Bi-GRU) model with attention that classifies repetitions into 7 squat types, reaching 94% accuracy. Data was collected from 40 participants with varied motion patterns using a custom multi-camera setup. My role involved designing the data collection apparatus, building the feature extraction pipeline, and training and evaluating the neural model.
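The model code itself is not shown here; the following is a minimal PyTorch sketch of the kind of architecture described above: a bidirectional GRU over a keypoint sequence with additive attention pooling, followed by a 7-way classification head. The input dimensionality (17 joints x 3 coordinates), hidden width, and sequence length are illustrative assumptions, not the project's actual configuration.

```python
import torch
import torch.nn as nn

class BiGRUAttentionClassifier(nn.Module):
    """Bi-GRU encoder with additive attention pooling over a keypoint sequence."""

    def __init__(self, n_features=51, hidden=128, n_classes=7):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, n_features)
        h, _ = self.gru(x)                      # h: (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)                # weighted temporal pooling
        return self.head(ctx)                   # class logits

# Example: a batch of 8 repetitions, 120 frames, 17 joints x 3 coordinates = 51 features
model = BiGRUAttentionClassifier()
logits = model(torch.randn(8, 120, 51))
print(logits.shape)  # torch.Size([8, 7])
```

The attention layer lets the classifier weight the frames of a repetition unevenly, which suits squat analysis since errors tend to concentrate around the bottom of the movement.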

Image gallery:
1. Eccentric phase of the squat
2. Bottom of the squat
3. Concentric phase of the squat
4. Intel RealSense D435 camera used in the setup
5. The stereo camera setup
6. Keypoints visualized on the human body in real time
7. 3D keypoints with color coding for each body segment
8. 3D skeleton model