This repository provides an open-source solution for real-time sitting posture detection using YOLOv5, a state-of-the-art object detection algorithm. The program is designed to analyze a user’s sitting posture and offer feedback on whether it aligns with ergonomic best practices, aiming to promote healthier sitting habits.
- YOLOv5: The program leverages YOLOv5 to accurately detect the user's sitting posture from a webcam feed.
- Real-time Posture Detection: The program provides real-time feedback on the user's sitting posture, making it suitable for applications in office ergonomics, fitness, and health monitoring.
- Good vs. Bad Posture Classification: The program uses a pre-trained model to classify the detected posture as good or bad, enabling users to improve their posture and prevent potential health issues associated with poor sitting habits.
- Open-source: Released under an open-source license, allowing users to access, modify, and contribute to the project.
We are pleased to announce that this project has been published in an IEEE conference paper, which provides a comprehensive overview of our methodology, technical approach, and results in applying YOLOv5 for lateral sitting posture detection. This paper, titled "Lateral Sitting Posture Detection using YOLOv5," was presented at the 2024 IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob). For more in-depth information, please refer to the full paper available at:
Read the IEEE Publication on Xplore
- Python 3.9.x
If you have an NVIDIA graphics processor, you can activate GPU acceleration by installing the GPU requirements. Note that without GPU acceleration, the inference will run on the CPU, which can be very slow.
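To verify that PyTorch can actually see your GPU after installing the GPU requirements, a quick sanity check (this assumes PyTorch is installed via one of the requirements files; it is not part of the application itself):

```python
import torch

# CUDA is available only when an NVIDIA GPU and a CUDA-enabled
# PyTorch build are both present.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
else:
    print("Falling back to CPU inference (expect much slower frame rates).")
```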
git clone https://github.com/itakurah/SittingPostureDetection.git
python -m venv venv
.\venv\Scripts\activate.bat
pip install -r ./requirements_windows.txt
OR pip install -r ./requirements_windows_gpu.txt
git clone https://github.com/itakurah/SittingPostureDetection.git
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements_linux.txt
OR pip3 install -r requirements_linux_gpu.txt
python application.py <optional: model_file.pt>
OR python3 application.py <optional: model_file.pt>
The default model is loaded if no model file is specified.
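The optional-argument behavior can be sketched as follows. This is a minimal illustration of the documented command-line interface, not the repository's actual code; the default filename `model.pt` is a hypothetical placeholder:

```python
# Hypothetical default filename; the actual file shipped with the
# repository may be named differently.
DEFAULT_MODEL = "model.pt"

def resolve_model_path(argv: list[str]) -> str:
    """Return the model file given on the command line, or the default."""
    if len(argv) > 1 and argv[1].endswith(".pt"):
        return argv[1]
    return DEFAULT_MODEL

print(resolve_model_path(["application.py"]))               # default model
print(resolve_model_path(["application.py", "custom.pt"]))  # user-supplied model
```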
This project uses a custom-trained YOLOv5s model fine-tuned on 160 images per class over 146 epochs. It categorizes postures into two classes:
sitting_good
sitting_bad
The model uses the standard YOLOv5s architecture:
Fig. 1: YOLOv5s network architecture (based on Liu et al.). The CBS module consists of a Convolutional layer, a Batch Normalization layer, and a Sigmoid Linear Unit (SiLU) activation function. The C3 module consists of three CBS modules and one bottleneck block. The SPPF module consists of two CBS modules and three Max Pooling layers.
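The CBS building block described above can be sketched in PyTorch as a Conv → BatchNorm → SiLU sequence. This is an illustrative reimplementation following the figure caption, not code taken from YOLOv5 or from this repository:

```python
import torch
import torch.nn as nn

class CBS(nn.Module):
    """Conv -> BatchNorm -> SiLU, the CBS module from Fig. 1."""
    def __init__(self, c_in: int, c_out: int, k: int = 3, s: int = 1):
        super().__init__()
        # "same" padding so a stride-1 CBS preserves spatial size
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

x = torch.randn(1, 3, 64, 64)
print(CBS(3, 16)(x).shape)  # channels change, spatial size is preserved
```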
The validation set contains 80 images (40 sitting_good, 40 sitting_bad). The results are as follows:
| Class | Images | Instances | Precision | Recall | mAP50 | mAP50-95 |
|---|---|---|---|---|---|---|
| all | 80 | 80 | 0.87 | 0.939 | 0.931 | 0.734 |
| sitting_good | 40 | 40 | 0.884 | 0.954 | 0.908 | 0.744 |
| sitting_bad | 40 | 40 | 0.855 | 0.925 | 0.953 | 0.724 |
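For reference, the F1 score (the harmonic mean of precision and recall) can be derived directly from the precision and recall values reported above:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall values from the validation table above.
for name, p, r in [("all", 0.87, 0.939),
                   ("sitting_good", 0.884, 0.954),
                   ("sitting_bad", 0.855, 0.925)]:
    print(f"{name}: F1 = {f1(p, r):.3f}")
```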
F1, Precision, Recall, and Precision-Recall plots:
This project was developed by Niklas Hoefflin, Tim Spulak, Pascal Gerber & Jan Bösch. It was supervised by André Jeworutzki and Jan Schwarzer as part of the Train Like A Machine module.
- Jocher, G. (2020). YOLOv5 by Ultralytics (Version 7.0). https://doi.org/10.5281/zenodo.3908559
- Fig. 1: H. Liu, F. Sun, J. Gu, and L. Deng, “Sf-yolov5: A lightweight small object detection algorithm based on improved feature fusion mode,” Sensors (Basel, Switzerland), vol. 22, no. 15, pp. 1–14, 2022. https://doi.org/10.3390/s22155817
This project is licensed under the MIT License. See the LICENSE file for details.