Human Psychophysiological Activity Estimation Based on Smartphone Camera and Wearable Electronics
Figure 1. Heart rate changes during Chi and Kundalini meditation techniques [4].
Figure 2. Breathing rate during meditation (pre-, meditative, and post-baseline periods) [8].
Figure 3. Blood pressure changes during meditation.
Figure 4. Reference model of the psychophysiological activity detection system.
Figure 5. Heart rate and acceleration for the complete calm pattern, estimated for five people.
Figure 6. Heart rate and acceleration for the emotional pattern.
Figure 7. Heart rate and acceleration for the gesture pattern.
Figure 8. Heart rate and acceleration for the movements pattern.
Figure 9. Dataset example: (a) professional in lotus pose; (b) beginner in lotus pose.
Figure 10. Dataset example: beginner in sitting position.
Figure 11. Dataset example: meditation feedback from one of the meditators.
Figure 12. Neural network-based meditator sitting pose detection.
Figure 13. Convolutional neural network for meditation pose estimation.
Figure 14. Converting video data to 180 × 180 px pictures.
Figure 15. Image normalization before training the neural network.
Figure 16. First results of the neural network: (a) accuracy of the model prediction; (b) value of the loss function.
Figure 17. Peak accuracy of neural network predictions (a) and loss function (b).
Figure 18. Examples of meditation pose recognition based on the developed model.
Figure 19. Human movement estimation based on skeleton detection.
Figure 20. Human movement estimation based on skeleton detection.
Figure 21. Reference model for competence management of conscious exercise coaches.
Figure 22. Gamification elements based on a user's motivational model and the results of their meditation practice.
Figure 23. Gamification element selection based on wearable electronics data.
Figure 24. Database structure for meditation practices.
Figure 25. Example of data in the "Session" table.
Figure 26. Example of data in the "Sensor_Data" table.
Figure 27. Example of a gamification element in the application.
Abstract
1. Introduction
2. Related Work
2.1. Relationships between Heart Rate, Breath and Psychophysiological Activity
2.2. Image Recognition Techniques for Human Activity Detection
2.3. Competence Management and Motivational Strategy
2.4. Related Work Results and Task Definition
- Meditation evaluation support based on video monitoring of the person together with heart rate and breathing rate measurements.
- Intelligent search support for finding a coach based on the coach's competencies as well as the user's preferences.
- Automatic, proactive audio guide proposals based on the user's experience.
- Dynamic motivation based on the user's preferences, including gamification elements.
3. Reference Model
4. Meditation Estimation Approach
4.1. Patterns of Human Behavior Based on Wearable Electronics
4.2. Meditation Video Dataset
4.3. Meditation Estimation Based on Neural Networks
- TensorFlow machine learning library 2.1;
- Keras 2.3 as a high-level neural network API on top of TensorFlow and other machine learning libraries;
- OpenCV computer vision library 3.4;
- Python 3.7, chosen for its large community and strong TensorFlow support (a preprocessing and model sketch follows this list).
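For concreteness, here is a minimal sketch of how this stack could be wired together: sampling video frames and resizing them to 180 × 180 px (cf. Figure 14), then a small convolutional classifier trained on the scaled images (cf. Figure 15). The sampling step, layer sizes, class count, and the helper names `video_to_frames` and `build_pose_model` are illustrative assumptions, not the exact pipeline reported in this paper.

```python
import cv2  # OpenCV
from tensorflow.keras import layers, models


def video_to_frames(path, size=(180, 180), step=30):
    """Sample every `step`-th frame from a video, resize it to
    180x180 px, and scale pixel values to [0, 1]. The sampling step
    is an assumption for illustration."""
    cap = cv2.VideoCapture(path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(cv2.resize(frame, size) / 255.0)
        i += 1
    cap.release()
    return frames


def build_pose_model(num_classes=2):
    """Hypothetical CNN for 180x180 RGB frames; the depth, filter
    counts, and class count are illustrative, not the architecture
    from this paper. Inputs are expected to be scaled to [0, 1]."""
    model = models.Sequential([
        layers.Input(shape=(180, 180, 3)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```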
4.4. Meditation Estimation Based on Skeleton Detection
- Oscillations caused by breathing are noticeable in most graphs that track vertical movement, but their amplitude and frequency are best reflected in the graph of the thorax key point moving along the Y axis;
- Head movements are well reflected in the nose graph, but that graph also captures stooping and other movements of the upper body. To isolate the head movement alone, the shoulder movement graph has to be subtracted (see the sketch after this list);
- Stooping is clearly visible in the shoulder graphs.
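A minimal sketch of this signal separation, assuming per-frame Y coordinates for OpenPose-style key points have already been extracted; the key-point names and the helper `head_and_breath_signals` are illustrative assumptions, not code from the paper:

```python
import numpy as np


def head_and_breath_signals(y):
    """Separate breath, head, and stoop signals from key-point tracks.

    `y` maps key-point names to NumPy arrays of per-frame Y
    coordinates; the names used here are assumptions.
    """
    # Breathing: vertical oscillation of the thorax key point; its
    # amplitude and frequency reflect breath depth and rate.
    breath = y["thorax"] - y["thorax"].mean()

    # Head-only movement: subtract the shoulder trajectory from the
    # nose trajectory to remove stoop and other upper-body motion.
    shoulders = (y["left_shoulder"] + y["right_shoulder"]) / 2.0
    head = y["nose"] - shoulders

    # Stoop: slow drift of the shoulder midline itself.
    stoop = shoulders - shoulders[0]
    return breath, head, stoop
```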
4.5. Competence-Based Model for Meditation Coach Search for Practice Estimation
- Reviews of the coach by students with similar tastes and preferences.
- Accuracy of the coach's assessments: how much the coach's meditation grades differ from those given by other coaches.
- Quality of the audio guides recorded by the coach: how much these guides helped people with similar preferences meditate better, as determined by the users' ratings of the guides.
- Ratings given by users who practiced this meditation.
- Influence on meditation quality: how much the audio guides improve a person's overall meditation rating.
- Goals of meditation: reduce stress, improve productivity, etc.
- Characteristics of the user: age, gender, etc.
- Quality of the user's meditations: beginner, advanced, etc. (the coach-related criteria above are combined in the ranking sketch below).
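As a sketch of how such criteria could be combined into a single ranking value, consider a weighted sum; the weights, the [0, 1] normalization, and the `coach_score` helper are assumptions for illustration, not values from the paper:

```python
def coach_score(reviews, assessment_accuracy, guide_quality,
                w_reviews=0.4, w_accuracy=0.3, w_guides=0.3):
    """Hypothetical weighted score for ranking meditation coaches.

    Each input is assumed to be pre-normalized to [0, 1]: `reviews`
    from student ratings, `assessment_accuracy` from agreement with
    other coaches' grades, and `guide_quality` from audio guide
    ratings by users with similar preferences.
    """
    return (w_reviews * reviews
            + w_accuracy * assessment_accuracy
            + w_guides * guide_quality)


# Example: a coach with strong reviews but average audio guides.
print(coach_score(reviews=0.9, assessment_accuracy=0.8, guide_quality=0.5))
```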
5. User Motivation Model for Psychophysiological Activity
6. Evaluation
6.1. Meditation Estimation Evaluation
- Introduction phase: the first 2 min of the whole meditation time;
- Conclusive phase: the last 1–2 min of the whole meditation time;
- Main phase: the remaining time between the two (a phase-splitting sketch follows this list).
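A small sketch of this segmentation, assuming a fixed 2 min conclusive phase within the stated 1–2 min range; the helper name `split_phases` is illustrative:

```python
def split_phases(duration_min):
    """Split a session into (introduction, main, conclusive) phases,
    each returned as a (start, end) tuple in minutes."""
    intro_end = min(2.0, duration_min)
    concl_start = max(intro_end, duration_min - 2.0)
    return ((0.0, intro_end),             # introduction: first 2 min
            (intro_end, concl_start),     # main: the remaining middle
            (concl_start, duration_min))  # conclusive: last ~2 min


# For a 22 min session: (0, 2), (2, 20), (20, 22).
print(split_phases(22))
```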
6.2. Motivational Model Evaluation
7. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. Wang, C.; Li, K.; Gaylord, S. Prevalence, patterns, and predictors of meditation use among U.S. children: Results from the national health interview survey. Complement. Ther. Med. 2019, 43, 271–276.
2. Wielgosz, J.; Goldberg, S.B.; Kral, T.R.A.; Dunne, J.D.; Davidson, R.J. Mindfulness meditation and psychopathology. Annu. Rev. Clin. Psychol. 2019, 15, 285–316.
3. Conklin, Q.A.; Crosswell, A.D.; Saron, C.D.; Epel, E.S. Meditation, stress processes, and telomere biology. Curr. Opin. Psychol. 2019, 28, 92–101.
4. Ricard, M.; Lutz, A.; Davidson, R.J. Neuroscience reveals the secrets of meditation's benefits. Scientific American 2014, 311. Available online: https://www.scientificamerican.com/article/neuroscience-reveals-the-secrets-of-meditation-s-benefits/ (accessed on 30 June 2020).
5. Harvard News. Available online: https://news.harvard.edu/gazette/story/2011/01/eight-weeks-to-a-better-brain/ (accessed on 30 June 2020).
6. Kashevnik, A.; Kruglov, M.; Saveliev, N.; Parfenov, V.; Maiatin, A. Motivational and personalization strategies for human activities in everyday life. In Proceedings of the 26th Conference of Open Innovations Association (FRUCT), Yaroslavl, Russia, 23–24 April 2020; pp. 136–142.
7. Peng, C.; Mietus, J.; Liu, Y.; Khalsa, G.; Douglas, P.; Benson, H.; Goldberger, A. Exaggerated heart rate oscillations during two meditation techniques. Int. J. Cardiol. 1999, 70, 101–107.
8. Arambula, P.; Peper, E.; Kawakami, M.; Gibney, K. The physiological correlates of kundalini yoga meditation: A study of a yoga master. Appl. Psychophysiol. Biofeedback 2001, 26, 147–153.
9. Wallace, R.; Benson, H. The physiology of meditation. Sci. Am. 1972, 226, 84–90.
10. Nesvold, A.; Fagerland, M.; Davanger, S.; Ellingsen, O.; Solberg, E.; Holen, A.; Sevre, K.; Atar, D. Increased heart rate variability during nondirective meditation. Eur. J. Prev. Cardiol. 2011, 19, 773–780.
11. Woolfolk, R. Psychophysiological correlates of meditation. Arch. Gen. Psychiatry 1975, 32, 1326–1333.
12. Zollars, I.; Poirier, T.; Pailden, J. Effects of mindfulness meditation on mindfulness, mental well-being, and perceived stress. Curr. Pharm. Teach. Learn. 2019, 11, 1022–1028.
13. Brook, R.D.; Appel, L.J.; et al. Beyond medications and diet: Alternative approaches to lowering blood pressure: A scientific statement from the American Heart Association. Hypertension 2013, 61, 1360–1383.
14. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.E.; Sheikh, Y. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. arXiv 2018, arXiv:1812.08008. Available online: https://arxiv.org/abs/1812.08008 (accessed on 30 June 2020).
15. Medium. Available online: https://medium.com/@amarchenkova/convolutional-neural-network-for-classifying-yoga-poses-bf9c10686f31 (accessed on 30 June 2020).
16. Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 1725–1732.
17. Varol, G.; Laptev, I.; Schmid, C. Long-term temporal convolutions for action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1510–1517.
18. Donahue, J.; Hendricks, L.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Darrell, T.; Saenko, K. Long-term recurrent convolutional networks for visual recognition and description. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 677–691.
19. Simonyan, K.; Zisserman, A. Two-stream convolutional networks for action recognition in videos. Adv. Neural Inf. Process. Syst. 2014, 1, 568–576.
20. Li, Q.; Lin, W.; Li, J. Human activity recognition using dynamic representation and matching of skeleton feature sequences from RGB-D images. Signal Process. Image Commun. 2018, 68, 265–272.
21. Lillo, I.; Niebles, J.C.; Soto, A. Sparse composition of body poses and atomic actions for human activity recognition in RGB-D videos. Image Vis. Comput. 2016, 59, 63–75.
22. Prakash, A.; Boochoon, S.; Brophy, M.; Acuna, D.; Cameracci, E.; State, G.; Shapira, O.; Birchfield, S. Structured domain randomization: Bridging the reality gap by context-aware synthetic data. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019.
23. Lei, Q.; Du, J.-X.; Zhang, H.-B.; Ye, S.; Chen, D.-S. A survey of vision-based human action evaluation methods. Sensors 2019, 19, 4129.
24. Guler, R.; Neverova, N.; Kokkinos, I. DensePose: Dense human pose estimation in the wild. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018.
25. Franco, A.; Magnani, A.; Maio, D. A multimodal approach for human activity recognition based on skeleton and RGB data. Pattern Recognit. Lett. 2020, 131, 293–299.
26. Find a Certified Meditation & Mindfulness Teacher Near You. Available online: https://mcleanmeditation.com/find-a-meditation-teacher/ (accessed on 30 June 2020).
27. Your Partner in Fitness Every Step of the Way. Available online: https://findyourcoach.com/ (accessed on 30 June 2020).
28. Find the Right Tutor for You. Available online: https://www.tutorhunt.com/ (accessed on 30 June 2020).
29. Heylighen, F. Self-organization in communicating groups: The emergence of coordination, shared references and collective intelligence. In Complexity Perspectives on Language, Communication and Society; Springer: Berlin/Heidelberg, Germany, 2013; pp. 117–149.
30. Fang, H.; Zhai, C. Probabilistic models for expert finding. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4425, pp. 418–430.
31. Cothern, C. Value judgments in verifying and validating risk assessment models. In Handbook for Environmental Risk Decision Making: Values, Perception and Ethics; CRC Lewis Publishers: Boca Raton, FL, USA, 1996; pp. 291–309.
32. Montibeller, G.; von Winterfeldt, D. Cognitive and motivational biases in decision and risk analysis. Risk Anal. 2015, 35, 1230–1251.
33. Burgman, M.; McBride, M.; Ashton, R.; Speirs-Bridge, A.; Flander, L.; Wintle, B.; Fidler, F.; Rumpff, L.; Twardy, C. Expert status and performance. PLoS ONE 2011, 6, e22998.
34. Sailer, M.; Hense, J.U.; Mayr, S.K.; Mandl, H. How gamification motivates: An experimental study of the effects of specific game design elements on psychological need satisfaction. Comput. Hum. Behav. 2016, 69, 371–380.
35. He, K.; Girshick, R.; Dollár, P. Rethinking ImageNet pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 4917–4926.
36. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397.
37. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8693, pp. 740–755.
38. Sun, X.; Li, C.; Lin, S. An integral pose regression system for the ECCV2018 PoseTrack challenge. arXiv 2018, arXiv:1809.06079. Available online: https://arxiv.org/abs/1809.06079 (accessed on 30 June 2020).
39. Ionescu, C.; Papava, D.; Olaru, V.; Sminchisescu, C. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1325–1339.
40. Liu, P.; Lyu, M.; King, I.; Xu, J. SelFlow: Self-supervised learning of optical flow. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–21 June 2019; pp. 4566–4575.
41. Butler, D.J.; Wulff, J.; Stanley, G.B.; Black, M.J. A naturalistic open source movie for optical flow evaluation. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7577, pp. 611–625.
42. Farnebäck, G. Two-frame motion estimation based on polynomial expansion. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2749, pp. 363–370.
| Variable | Rest | Meditation | Difference | p-Value |
|---|---|---|---|---|
| Respiration rate (breaths/min) | 16.9 ± 1.9 | 16.3 ± 1.8 | 0.52 (−0.05 to 1.9) | 0.072 |
| Heart rate (beats/min) | 72.6 ± 10.9 | 71.6 ± 10.6 | 0.92 (−0.15 to 1.98) | 0.088 |
| Parameter | Introduction Phase | Main Phase | Conclusive Phase |
|---|---|---|---|
| Pulse: first decrease of more than 10% | +10 | 0 | 0 |
| Pulse: increase of 10% from the previous measure | −5 | −15 | −10 |
| Pulse: staying lower than 90% of the first measure | +5 | +15 | +10 |
| Breath: first decrease below 10 breaths per min | +20 | 0 | 0 |
| Breath: increase above 10 breaths per min | −5 | −15 | −10 |
| Breath: staying lower than 10 breaths per min | +10 | +10 | +10 |
| Squaring shoulders | −3 | −5 | 0 |
| Straightening the back | −3 | −5 | 0 |
| Head movements | −3 | −5 | 0 |
| Changing the position of the lower body | −3 | −10 | −5 |
| Changing the position of the hands | −3 | −5 | 0 |
| Opening/closing eyes | −2 | −5 | 0 |
| Preserving the body position | +10 | +20 | +15 |
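To illustrate how these points could be accumulated over a session, here is a hedged sketch; only a few table rows are encoded, and the event names and the `session_score` helper are assumptions for illustration, not the paper's implementation:

```python
# Phase-dependent points from the table above, indexed as
# (introduction, main, conclusive); only a subset of rows is encoded,
# and extending it to all rows is mechanical.
SCORES = {
    "pulse_increase_10pct": (-5, -15, -10),
    "breath_above_10_per_min": (-5, -15, -10),
    "head_movement": (-3, -5, 0),
    "pose_preserved": (10, 20, 15),
}


def session_score(events):
    """Sum the points for observed (event_name, phase) pairs, where
    phase is 0, 1, or 2 for introduction, main, or conclusive."""
    return sum(SCORES[name][phase] for name, phase in events)


# Example: one head movement in the main phase, pose preserved in the
# conclusive phase -> -5 + 15 = 10.
print(session_score([("head_movement", 1), ("pose_preserved", 2)]))
```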
| Basic Aspect | Value | Comment |
|---|---|---|
| Meditation process duration | 22 min | |
| Number of pose changes | 0 | No interruptions |
| Number of interruptions | 0 | |
| Number of eye openings | 0 | |
| Breathing rate | 6–7 breaths/min | Deep breathing |
| Breath type (thoracic/abdominal/other/not clear) | Thoracic | No changes |
| Basic Aspect | Value | Comment |
|---|---|---|
| Duration | 15 min | |
| Number of pose changes | Undefined | Continuous movement from 0:00 to 02:40; see the movement log below |
| Number of interruptions | 0 | |
| Number of eye openings | 0 | |
| Breathing rate | 5 breaths/min | |
| Breath type | Thoracic | No changes |

Movement log. From 0:00 to 02:40: (1) shaking and head up-and-down movements; (2) sliding hands over knees; (3) impulsive swaying; (4) lowered hands further and raised head; (5) lowered head. After 02:40:
- 03:00 min: raised and lowered head;
- 03:30 min: straightened shoulders, straightened up, raised hands, raised head;
- 04:18 min: moved hands;
- 05:31 min: lowered hands, raised head;
- 05:48 min: raised head;
- 08:10 min: took a deep breath and straightened up;
- 08:30 min: raised arms, straightened up, straightened shoulders;
- 09:45 min: lowered hands, straightened up, straightened shoulders, raised head;
- 09:54 min: lower lip twitched;
- 10:12 min: "chewed" lips;
- 11:24 min: lowered the head strongly;
- 12:54 min: leveled the head;
- 13:20 min: raised hands, straightened up, straightened shoulders, raised head;
- 14:58 min: moved hands, straightened up, straightened shoulders, raised head, took a deep breath;
- at the end: active head up/down movements and lip biting.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).