DOI: 10.1145/2678025.2701405

Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study

Published: 18 March 2015

Abstract

Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but it often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in the wild, where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device for an average of 5 hours during one day while performing normal everyday activities. Our system identified meal eating with an F-score of 79.8% in a person-dependent evaluation and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.
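
The abstract does not detail the classification pipeline, but the contrast between the two reported evaluation schemes can be illustrated with a short sketch. The synthetic frame-level features, the random-forest classifier, and the 5-fold split below are assumptions for illustration only, not the authors' method; the sketch only mirrors the person-dependent versus person-independent (leave-one-participant-out) distinction described above.

```python
# Hedged sketch: person-dependent vs. person-independent evaluation of an
# eating-vs-other audio-frame classifier. All data is synthetic, and the
# feature set and classifier are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# 20 participants x 200 audio frames, 13 features per frame,
# binary label: 1 = meal eating, 0 = other ambient sound.
n_participants, frames_per_person, n_features = 20, 200, 13
X = rng.normal(size=(n_participants * frames_per_person, n_features))
y = rng.integers(0, 2, size=X.shape[0])
groups = np.repeat(np.arange(n_participants), frames_per_person)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Person-dependent: folds may mix frames from the same participant,
# so the model has seen every wearer during training.
pd_f1 = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5), scoring="f1")

# Person-independent: each fold holds out all frames of one participant.
pi_f1 = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut(), scoring="f1")

print(f"person-dependent   mean F1: {pd_f1.mean():.3f}")
print(f"person-independent mean F1: {pi_f1.mean():.3f}")
```

In the person-independent split, no frames from the held-out wearer appear in training, which is typically the harder setting for wearable audio sensing.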





    Published In

    IUI '15: Proceedings of the 20th International Conference on Intelligent User Interfaces
    March 2015
    480 pages
    ISBN: 9781450333061
    DOI: 10.1145/2678025


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. acoustic sensor
    2. activity recognition
    3. ambient sound
    4. automated dietary assessment
    5. dietary intake
    6. food journaling
    7. machine learning
    8. sound classification

    Qualifiers

    • Research-article

    Funding Sources

    • NIH
    • Intel Science and Technology Center for Pervasive Computing (ISTC-PC)

    Conference

    IUI'15

    Acceptance Rates

    IUI '15 Paper Acceptance Rate: 47 of 205 submissions, 23%
    Overall Acceptance Rate: 746 of 2,811 submissions, 27%



    Article Metrics

    • Downloads (last 12 months): 34
    • Downloads (last 6 weeks): 0
    Reflects downloads up to 13 Nov 2024


    Cited By

    • (2024) Collecting Self-reported Physical Activity and Posture Data Using Audio-based Ecological Momentary Assessment. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3), 1-35. DOI: 10.1145/3678584. Online publication date: 9-Sep-2024
    • (2024) MunchSonic: Tracking Fine-grained Dietary Actions through Active Acoustic Sensing on Eyeglasses. Proceedings of the 2024 ACM International Symposium on Wearable Computers, 96-103. DOI: 10.1145/3675095.3676619. Online publication date: 5-Oct-2024
    • (2024) EchoGuide: Active Acoustic Guidance for LLM-Based Eating Event Analysis from Egocentric Videos. Proceedings of the 2024 ACM International Symposium on Wearable Computers, 40-47. DOI: 10.1145/3675095.3676611. Online publication date: 5-Oct-2024
    • (2024) Integrated image and sensor-based food intake detection in free-living. Scientific Reports 14(1). DOI: 10.1038/s41598-024-51687-3. Online publication date: 18-Jan-2024
    • (2023) An End-to-End Energy-Efficient Approach for Intake Detection With Low Inference Time Using Wrist-Worn Sensor. IEEE Journal of Biomedical and Health Informatics 27(8), 3878-3888. DOI: 10.1109/JBHI.2023.3276629. Online publication date: Aug-2023
    • (2023) A Dataset for Foreground Speech Analysis With Smartwatches In Everyday Home Environments. 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 1-5. DOI: 10.1109/ICASSPW59220.2023.10192949. Online publication date: 4-Jun-2023
    • (2023) Understanding behaviours in context using mobile sensing. Nature Reviews Psychology 2(12), 767-779. DOI: 10.1038/s44159-023-00235-3. Online publication date: 23-Oct-2023
    • (2023) Passive Sensors for Detection of Food Intake. Encyclopedia of Sensors and Biosensors, 218-234. DOI: 10.1016/B978-0-12-822548-6.00086-8. Online publication date: 2023
    • (2022) AudioIMU: Enhancing Inertial Sensing-Based Activity Recognition with Acoustic Models. Proceedings of the 2022 ACM International Symposium on Wearable Computers, 44-48. DOI: 10.1145/3544794.3558471. Online publication date: 11-Sep-2022
    • (2021) Digital Tools to Promote Healthy Eating for Working-Age Individuals: A Scoping Review. Proceedings of the Ninth International Symposium of Chinese CHI, 1-8. DOI: 10.1145/3490355.3490356. Online publication date: 16-Oct-2021
