Search Results (12)

Search Parameters:
Keywords = ear canal sensor

12 pages, 2072 KiB  
Article
A Continuously Worn Dual Temperature Sensor System for Accurate Monitoring of Core Body Temperature from the Ear Canal
by Kyle D. Olson, Parker O’Brien, Andy S. Lin, David A. Fabry, Steve Hanke and Mark J. Schroeder
Sensors 2023, 23(17), 7323; https://doi.org/10.3390/s23177323 - 22 Aug 2023
Cited by 1 | Viewed by 3008
Abstract
The objective of this work was to develop a temperature sensor system that accurately measures core body temperature from an ear-worn device. Two digital temperature sensors were embedded in a hearing aid shell along the thermal gradient of the ear canal to form a linear heat balance relationship. This relationship was used to determine best-fit parameters for estimating body temperature. The predicted body temperatures resulted in intersubject limits of agreement (LOA) of ±0.49 °C over a range of physiologic and ambient temperatures without calibration. The newly developed hearing aid-based temperature sensor system can estimate core body temperature at an accuracy level equal to or better than many devices currently on the market. An accurate, continuously worn temperature monitoring and tracking device may help provide early detection of illnesses, which could prove especially beneficial during pandemics and in the elderly demographic of hearing aid wearers.
(This article belongs to the Special Issue Wearable and Unobtrusive Technologies for Healthcare Monitoring)
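The heat balance model is described only at a high level in the abstract. As a rough illustration, a dual-sensor estimate of this kind is often written as T_core ≈ T_inner + k·(T_inner − T_outer), with k and an offset fitted against a reference thermometer. The sketch below (Python/NumPy) shows that sort of least-squares fit; the functional form, names, and numbers are illustrative assumptions, not the authors' published model.

```python
import numpy as np

def fit_heat_balance(t_inner, t_outer, t_reference):
    """Fit T_core ~= T_inner + k*(T_inner - T_outer) + b by least squares.

    t_inner, t_outer: temperatures (deg C) from the two in-canal sensors.
    t_reference: reference core body temperature (deg C).
    Returns (k, b), the best-fit gradient gain and offset (assumed model form).
    """
    gradient = t_inner - t_outer
    # Design matrix [gradient, 1]: the residual T_ref - T_inner is modelled
    # as k*gradient + b.
    A = np.column_stack([gradient, np.ones_like(gradient)])
    k, b = np.linalg.lstsq(A, t_reference - t_inner, rcond=None)[0]
    return k, b

def estimate_core_temp(t_inner, t_outer, k, b):
    """Apply the fitted heat-balance model to new sensor readings."""
    return t_inner + k * (t_inner - t_outer) + b

# Made-up readings (deg C) for illustration only:
t_in = np.array([36.2, 36.4, 36.1, 36.5])
t_out = np.array([34.0, 34.5, 33.8, 34.7])
t_ref = np.array([36.9, 37.0, 36.8, 37.1])
k, b = fit_heat_balance(t_in, t_out, t_ref)
print(estimate_core_temp(t_in, t_out, k, b))
```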
Figures:
Figure 1: Location of temperature readings for the heat balance model.
Figure 2: Block diagram of the temperature sensors in the hearing aid.
Figure 3: Illustration of a CIC hearing aid shell with two embedded temperature sensors.
Figure 4: Dual temperature sensor heat balance relationship and identity line indicating near-unity linearity.
Figure 5: Bland–Altman graph of core body temperature estimates based on the dual sensor heat balance model. Mean error equals zero and the limits of agreement are ±0.49 °C.
Figure 6: (a) Example of reference and estimated core body temperatures (left and right ears) over time for one subject during a simulated febrile experiment. (b) Error of the estimated temperature over time for the left and right sensors.
20 pages, 887 KiB  
Review
Non-Invasive Methods of Quantifying Heat Stress Response in Farm Animals with Special Reference to Dairy Cattle
by Veerasamy Sejian, Chikamagalore Gopalakrishna Shashank, Mullakkalparambil Velayudhan Silpa, Aradotlu Parameshwarappa Madhusoodan, Chinnasamy Devaraj and Sven Koenig
Atmosphere 2022, 13(10), 1642; https://doi.org/10.3390/atmos13101642 - 9 Oct 2022
Cited by 18 | Viewed by 5648
Abstract
Non-invasive methods of detecting the magnitude of heat stress in livestock are gaining momentum in the context of global climate change. Therefore, the objective of this review is to synthesize information pertaining to recent efforts to develop heat stress detection systems for livestock based on multiple behavioral and physiological responses. There are a number of approaches to quantify the heat stress response of farm animals, and from an animal welfare point of view, these can be categorized as invasive and non-invasive approaches. The concept of a non-invasive approach to assessing heat stress primarily looks at behavioral and physiological responses which can be monitored without any human interference or additional stress on the animal. Bioclimatic thermal indices can be considered the least invasive approach to assess and/or predict the level of heat stress in livestock. The quantification and identification of the fecal microbiome in heat-stressed farm animals is one of the emerging techniques which could be effectively correlated with animal adaptive responses. Further, tremendous progress has been made in the last decade in quantifying the classical heat stress endocrine marker, cortisol, non-invasively in the feces, urine, hair, saliva and milk of farm animals. In addition, advanced technologies applied to the real-time analysis of cardinal signs, such as sounds recorded through microphones, behavioral images and videos from cameras, and data tracking body weight and measurements, might provide deeper insights towards improving biological metrics in livestock exposed to heat stress. Infrared thermography (IRT) can be considered another non-invasive modern tool to assess the stress response, production, health, and welfare status of farm animals. Various remote sensing technologies such as ear canal sensors, rumen boluses, rectal and vaginal probes, IRT, and implantable microchips can be employed in grazing animals to assess the magnitude of heat stress. Behavioral responses and activity alterations due to heat stress in farm animals can be monitored using accelerometers, Bluetooth technology, global positioning systems (GPSs) and global navigation satellite systems (GNSSs). Finally, machine learning offers a scalable solution for determining the heat stress response in farm animals by utilizing data from different sources such as hardware sensors, e.g., pressure sensors, thermistors, IRT sensors, facial recognition machine vision sensors, radio frequency identification, accelerometers, and microphones. Thus, recent advancements in recording behavior and physiological responses offer new scope to quantify farm animals' heat stress response non-invasively. These approaches could have greater applications not only in determining climate resilience in farm animals but also in providing valuable information for defining suitable and accurate amelioration strategies to sustain their production.
(This article belongs to the Section Biometeorology and Bioclimatology)
Figures:
Figure 1: Different non-invasive methods to quantify heat stress response in farm animals.
17 pages, 16639 KiB  
Article
Earable Ω (OMEGA): A Novel Clenching Interface Using Ear Canal Sensing for Human Metacarpophalangeal Joint Control by Functional Electrical Stimulation
by Kazuhiro Matsui, Yuya Suzuki, Keita Atsuumi, Miwa Nagai, Shotaro Ohno, Hiroaki Hirai, Atsushi Nishikawa and Kazuhiro Taniguchi
Sensors 2022, 22(19), 7412; https://doi.org/10.3390/s22197412 - 29 Sep 2022
Cited by 3 | Viewed by 1756
Abstract
(1) Background: A mouth-free interface is required for functional electrical stimulation (FES) in people with spinal cord injuries. We developed a novel system for controlling the human metacarpophalangeal (MP) joint by clenching, using an earphone-type ear canal movement sensor. Experiments to control joint angle and joint stiffness were performed using the developed system. (2) Methods: The proposed FES used an equilibrium point control signal and a stiffness control signal: the electrical agonist–antagonist ratio and the electrical agonist–antagonist sum. An angle sensor was used to acquire the joint angle, and system identification with the external force of a robot arm was used to measure joint stiffness. The two experiments included six and five subjects, respectively. (3) Results: While the joint angle could be controlled well by clenching, with some hysteresis and delay, in three subjects, it could not be controlled well after hyperextension in the other subjects, which revealed a calibration problem and a change in the characteristics of the human MP joint caused by hyperextension. Joint stiffness increased with the clenching amplitude in five subjects. In addition, the results indicated that viscosity can be controlled. (4) Conclusions: The developed system can control joint angle and stiffness. In future research, we will develop a method to show that this system can control the equilibrium point and stiffness simultaneously.
(This article belongs to the Section Intelligent Sensors)
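The abstract names two control signals, the electrical agonist–antagonist ratio and sum, without giving their mapping to stimulation levels. The sketch below assumes the common interpretation that the ratio sets the equilibrium point between flexor and extensor channels while the sum scales co-contraction (stiffness); the function and variable names are illustrative, not the authors' implementation.

```python
def fes_intensities(ratio, total, max_intensity=1.0):
    """Map an agonist-antagonist ratio and sum to two stimulation levels.

    ratio: 0..1, position of the equilibrium point between flexor and extensor.
    total: 0..1, co-contraction level (joint stiffness command).
    Returns (flexor_level, extensor_level), each within [0, max_intensity].
    This split is an assumed interpretation of the ratio/sum signals.
    """
    ratio = min(max(ratio, 0.0), 1.0)
    total = min(max(total, 0.0), 1.0)
    flexor = total * ratio * max_intensity
    extensor = total * (1.0 - ratio) * max_intensity
    return flexor, extensor

# Clenching harder (a larger ear-canal signal) could raise `total`,
# stiffening the joint, while `ratio` shifts the MP joint angle set point.
print(fes_intensities(0.7, 0.5))   # flexor-biased command, moderate stiffness
```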
Figures:
Figure 1: Earable Ω.
Figure 2: Signal processing image.
Figure 3: The upper figure shows the system configuration diagram; the lower figure shows an experimental setup for controlling the joint angle using earable Ω as an interface (top view). The stimulation at the surface electrode passes through an emergency stop switch for the safety of the subjects.
Figure 4: A 1-link model of the MP joint.
Figure 5: The upper figure shows the system configuration diagram; the lower figure shows an experimental setup for controlling joint stiffness using earable Ω as an interface (top view). The stimulation at the surface electrode passes through an emergency stop switch for the safety of the subjects.
Figure 6: Experimental scene.
Figure 7: An example of maintaining C for Subject E.
Figure 8: The results of the six subjects for controlling the joint angle using earable Ω as an interface.
Figure 9: The results of ω_n (left), K_p (center), and ζω_n (right). The upper figures are averages and SDs; the lower figures are individual values.
Figure 10: The results of the NRMSE.
16 pages, 3401 KiB  
Article
Validation of Earphone-Type Sensors for Non-Invasive and Objective Swallowing Function Assessment
by Takuto Yoshimoto, Kazuhiro Taniguchi, Satoshi Kurose and Yutaka Kimura
Sensors 2022, 22(14), 5176; https://doi.org/10.3390/s22145176 - 11 Jul 2022
Cited by 2 | Viewed by 1766
Abstract
Standard methods for swallowing function evaluation are videofluoroscopy (VF) and videoendoscopy, which are invasive and have test limitations. We examined the use of an earphone-type sensor to noninvasively evaluate soft palate movement in comparison with VF. Six healthy adults wore earphone sensors and swallowed barium water while being filmed by VF. A light-emitting diode at the sensor tip irradiated infrared light into the ear canal, and a phototransistor received the reflected light to detect changes in ear canal movement, including that of the eardrum. Given that the soft palate movement corresponded to the sensor waveform, a Bland–Altman analysis was performed on the difference in time recorded by each measurement method. The average difference between the methods in the time taken for the soft palate to move from its most downward retracted position before swallowing to its most upward position during swallowing was −0.01 ± 0.14 s. The Bland–Altman analysis showed no fixed or proportional error. The minimal detectable change was 0.28 s. This is the first noninvasive swallowing function evaluation through the ear canal. The earphone-type sensor enabled us to measure the time from the most retracted to the most raised soft palate position during swallowing and validated this method for clinical application.
(This article belongs to the Section Biomedical Sensors)
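For readers unfamiliar with the agreement analysis used here, the sketch below shows the standard Bland–Altman computation (mean difference and ±1.96 SD limits of agreement) in Python/NumPy; the timing values are made up for illustration and are not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Return the mean difference (bias) and 95% limits of agreement
    for paired measurements from two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    loa = 1.96 * sd_diff
    return mean_diff, (mean_diff - loa, mean_diff + loa)

# Illustrative timing pairs in seconds (sensor vs. videofluoroscopy):
sensor_t = [0.52, 0.61, 0.48, 0.55, 0.60]
vf_t     = [0.50, 0.63, 0.47, 0.58, 0.59]
bias, (lo, hi) = bland_altman(sensor_t, vf_t)
print(f"bias = {bias:.3f} s, limits of agreement = ({lo:.3f}, {hi:.3f}) s")
```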
Figures:
Figure 1: Earphone-type sensor. (A) Shape of the sensor used in this study. Elongation of the elastic material at the tip improved the sensor and allowed it to reach the eardrum. Because the elastic region molds itself to the shape of the ear canal, the sensor can accommodate individual differences in ear canal shape. (B) Mechanism and structure of the earphone-type sensor. A small optical sensor incorporating a light-emitting diode (LED) and a phototransistor is mounted on the tip. By irradiating the ear canal with infrared light from the LED (wavelength 960 nm) and moving the irradiated area, changes in the reflected light can be detected; when the phototransistor captures the reflected light, it detects the movement of the ear canal, including the eardrum. The detected light is converted and amplified into an electrical signal, which is digitized by an analog-digital (AD) conversion circuit (10 bits, 250 Hz). The measurements are recorded on a remote tablet terminal via Bluetooth. (C) Configuration of the measurement device. (D) Electronic circuits surrounding the lightwave distance sensor.
Figure 2: Correspondence between soft palate movement and the sensor waveform.
Figure 3: Data correction method. (A) Data from five swallowing trials were continuously recorded. As the start time of recording differed between the VF and the sensor, there was a time lag in the recorded data. (B) The time lag was determined on the basis of the audio recorded at the same time, and the corrected times of emergence of SA to SC were used for the analysis. Numbers in the figure show the number of swallowing trials. VF: videofluoroscopy; SA: time the waveform was at its lowest point; SC: time the waveform decreased.
Figure 4: An example of a waveform excluded from the analysis. (A) Poor sensor mounting (subject No. 2). (B) Indiscernible data (subject No. 4).
Figure 5: Sum of errors in sensor measurements vs. VF measurements. The sum of sensor error (DA + DB + DC) versus VF for each subject is graphed from the results in Table 1. The smallest errors between the VF and sensor readings were observed for subject No. 3 (DA: 0.09 ± 0.07 s, DB: 0.13 ± 0.07 s, DC: 0.12 ± 0.04 s). The largest errors were seen for subject No. 2 (DA: 0.50 ± 0.15 s, DB: 0.54 ± 0.12 s, DC: 0.83 ± 0.30 s). DA: difference in the time of emergence between VA and SA; DB: difference in the time of emergence between VB and SB; DC: difference in the time of emergence between VC and SC; VF: videofluoroscopy.
Figure 6: Bland–Altman plot. The X-axis shows the average value (seconds) of each measurement pair and the Y-axis shows the difference (seconds) between the methods. The blue line shows the average difference between the two measurement methods, and the red lines show the limits of agreement. (A) VI and SI: TI had a mean difference of −0.01 ± 0.14 s; neither fixed nor proportional error was observed. (B) VII and SII: TII had a mean difference of −0.33 ± 0.23 s; a fixed error, but no proportional error, was observed. (C) VIII and SIII: TIII had a mean difference of −0.34 ± 0.31 s; a fixed error, but no proportional error, was observed. VI: time from VA to VB; VII: time from VB to VC; VIII: time from VA to VC; SI: time from SA to SB; SII: time from SB to SC; SIII: time from SA to SC; TI: difference between VI and SI; TII: difference between VII and SII; TIII: difference between VIII and SIII.
Figure 7: Example waveforms for each subject. (A) Waveform of subject No. 3, which is simpler and smoother than those of No. 6 and 7; subject No. 3 had smaller errors than the other subjects on all items from DA to DC. (B) Waveform of subject No. 6, showing small tremors, suggesting that the carotid pulse was also captured by the recording. (C) Waveform of subject No. 1, which also shows small tremors as in (B). However, it is possible that the subject was only aware of swallowing for a short time, which caused the waveform to be stretched horizontally, thus clarifying the position of the plot. DA: difference in the time of emergence between VA and SA; DC: difference in the time of emergence between VC and SC.
17 pages, 831 KiB  
Article
Augmenting Ear Accessories for Facial Gesture Input Using Infrared Distance Sensor Array
by Kyosuke Futami, Kohei Oyama and Kazuya Murao
Electronics 2022, 11(9), 1480; https://doi.org/10.3390/electronics11091480 - 5 May 2022
Cited by 8 | Viewed by 1950
Abstract
Simple hands-free input methods using ear accessories have been proposed to broaden the range of scenarios in which information devices can be operated without hands. Although many previous studies use canal-type earphones, few have focused on the following two points: (1) a method applicable to ear accessories other than canal-type earphones, and (2) a method enabling various ear accessories with different styles to offer the same hands-free input function. To realize these two points, this study proposes a method to recognize the user's facial gestures using an infrared distance sensor attached to the ear accessory. The proposed method detects skin movement around the ear and face, which differs for each facial expression gesture. We created a prototype system for three ear accessories worn at the root of the ear, the earlobe, and the tragus. The evaluation results for nine gestures and 10 subjects showed that the F-value of each device was 0.95 or more, and the F-value of patterns combining multiple devices was 0.99 or more, which demonstrates the feasibility of the proposed method. Although most ear accessories cannot interact with information devices, our findings enable various ear accessories with different styles to provide eye-free and hands-free input based on facial gestures.
(This article belongs to the Special Issue Design, Development and Testing of Wearable Devices)
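The evaluation reports per-device F-values, including a leave-one-user-out test (Evaluation 3 in the figures below). The sketch shows how such an evaluation is commonly set up with scikit-learn; the classifier, features, and data are placeholders, since the paper's actual feature extraction and model are not given in this listing.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_user_out_f1(X, y, users):
    """Average macro F-value when each user is held out in turn."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=users):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        scores.append(f1_score(y[test_idx], pred, average="macro"))
    return float(np.mean(scores))

# Placeholder data: X holds per-sample IR distance features, y gesture labels,
# and `users` the subject IDs used for the leave-one-user-out split.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # 4 illustrative sensor channels
y = rng.integers(0, 3, size=200)       # 3 illustrative gesture classes
users = np.repeat(np.arange(10), 20)   # 10 subjects, 20 samples each
print(leave_one_user_out_f1(X, y, users))
```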
Figures:
Figure 1: Flowchart of the proposed method. Reprinted/adapted with permission from Ref. [11], 2021, ACM.
Figure 2: Ear-root-mounted device. Reprinted/adapted with permission from Ref. [11], 2021, ACM.
Figure 3: Earlobe-mounted device [11].
Figure 4: Tragus-mounted device. Reprinted/adapted with permission from Ref. [11], 2021, ACM.
Figure 5: Prototype system configuration.
Figure 6: Example of wearing the sensor devices. Reprinted/adapted with permission from Ref. [11], 2021, ACM.
Figure 7: Characteristics of the infrared distance sensor. The horizontal axis shows the distance between the sensor and the skin, and the vertical axis shows the normalized sensor output value.
Figure 8: Types of gestures. Reprinted/adapted with permission from Ref. [11], 2021, ACM.
Figure 9: Results of Evaluation 1 using nine gestures: the average F-value for each device pattern. Device 1 is the tragus-mounted device, Device 2 is the ear-root-mounted device, and Device 3 is the earlobe-mounted device. Device combinations are classified into seven types (i.e., individual use and each combination of devices).
Figure 10: Results for each gesture in Evaluation 1 using nine gestures: the average F-value for each device pattern.
Figure 11: Results of Evaluation 2 using six gestures: the average F-value for each device pattern.
Figure 12: Results for each gesture in Evaluation 2 using six gestures: the average F-value for each device pattern.
Figure 13: Results of Evaluation 3 (leave-one-user-out testing): the average F-value for each device pattern.
Figure 14: Results for each gesture in Evaluation 3 (leave-one-user-out testing): the average F-value for each device pattern.
16 pages, 2339 KiB  
Article
Simultaneous Measurement of Ear Canal Movement, Electromyography of the Masseter Muscle and Occlusal Force for Earphone-Type Occlusal Force Estimation Device Development
by Mami Kurosawa, Kazuhiro Taniguchi, Hideya Momose, Masao Sakaguchi, Masayoshi Kamijo and Atsushi Nishikawa
Sensors 2019, 19(15), 3441; https://doi.org/10.3390/s19153441 - 6 Aug 2019
Cited by 9 | Viewed by 4656
Abstract
We intend to develop earphone-type wearable devices that measure occlusal force by measuring ear canal movement using an ear sensor that we developed. The proposed device can measure occlusal force during eating. In this work, we simultaneously measured the ear canal movement (ear sensor value), the surface electromyography (EMG) of the masseter muscle and the occlusal force six times for each of five subjects as a basic study toward occlusal force meter development. Using the results, we investigated the correlation coefficient between the ear sensor value and the occlusal force, and the partial correlation coefficient between ear sensor values. Additionally, we investigated the average of the partial correlation coefficient and the absolute value of the average for each subject. The absolute value results indicated strong correlation, with correlation coefficients exceeding 0.9514 for all subjects. The subjects showed a lowest partial correlation coefficient of 0.6161 and a highest value of 0.8286, which was also indicative of correlation. We then estimated the occlusal force via a single regression analysis for each subject. Evaluation of the proposed method via cross-validation indicated that the root-mean-square error when comparing actual values with estimates for the five subjects ranged from 0.0338 to 0.0969.
(This article belongs to the Special Issue Wearable Sensors and Devices for Healthcare Applications)
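The abstract describes a per-subject single regression from the ear sensor value to occlusal force, evaluated by cross-validation RMSE. The sketch below shows a leave-one-out version of that procedure in Python/NumPy; the numbers are illustrative and the authors' exact cross-validation scheme may differ.

```python
import numpy as np

def loocv_rmse_simple_regression(x, y):
    """Leave-one-out RMSE of a single-variable linear regression y ~ a*x + b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        a, b = np.polyfit(x[mask], y[mask], 1)   # fit on all but one sample
        errors.append(y[i] - (a * x[i] + b))     # predict the held-out sample
    return float(np.sqrt(np.mean(np.square(errors))))

# Illustrative normalized ear-sensor values and occlusal forces (not study data):
sensor = [0.10, 0.22, 0.35, 0.41, 0.55, 0.63]
force  = [0.12, 0.25, 0.33, 0.45, 0.52, 0.66]
print(loocv_rmse_simple_regression(sensor, force))
```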
Figures:
Figure 1: Experimental system. Analog signals ranging from 0 V to 3.3 V, measured by the occlusal force meter, the electromyography unit, and the ear sensor (which measures the movement of the ear canal), are converted into digital signals by the analog-to-digital converter at a sampling frequency of 100 Hz with 12-bit resolution; the digital signals are then recorded together with timestamps in a storage device.
Figure 2: Principle of ear canal movement measurement using the ear sensor. Occlusion is performed by the temporalis and masticatory muscles, including the masseter muscle, and the temporomandibular joint, and it causes a change in the ear canal shape near these structures. The ear sensor measures this shape change in the ear canal during occlusion optically and noninvasively. A small photosensor is attached to the ear sensor; it houses a light-emitting diode (LED) with an emission wavelength of 940 nm and a phototransistor, as illustrated in Figure 1. The ear sensor irradiates the skin of the ear canal with infrared light, and the reflected light is received by the phototransistor to measure the change in ear canal shape.
Figure 3: Appearance of the GM-10 occlusal force meter. This occlusal force meter consists of an intraoral insertion part and a gripping part; 88 mm of the total length is the intraoral insertion part (left side of the figure) and the remaining 101 mm is the gripping part (right side of the figure). During measurements, a disposable resin cover is placed on the intraoral insertion part in advance. The subject then holds the gripping part with one hand, and the sensor measures the occlusal force when the subject chews the tip (i.e., the sensing area) of the intraoral insertion part.
Figure 4: Measured results for subject A. The graph shows the run for which the correlation coefficient between the ear sensor value and the occlusal force was the highest among the six runs of subject A.
Figure 5: Measurement results for the ear sensor and occlusal force over the first through sixth runs for subject A. The horizontal axis represents the ear sensor value, while the vertical axis represents the measured occlusal force.
8 pages, 2399 KiB  
Article
Measurement of Core Body Temperature Using Graphene-Inked Infrared Thermopile Sensor
by Jorge S. Chaglla E., Numan Celik and Wamadeva Balachandran
Sensors 2018, 18(10), 3315; https://doi.org/10.3390/s18103315 - 3 Oct 2018
Cited by 37 | Viewed by 7198
Abstract
Continuous and reliable measurements of core body temperature (CBT) are vital for studies on human thermoregulation. Because the tympanic membrane directly reflects the temperature of the carotid artery, it offers an accurate and non-invasive way to record CBT. However, commercial tympanic thermometers lack portability and continuous measurement. In this study, graphene inks were utilized to increase the accuracy of temperature measurements from the ear by coating graphene platelets on the lens of an infrared thermopile sensor. The proposed ear-based device was designed by investigating ear canal geometry and developed with 3D printing technology using the computer-aided design (CAD) software SolidWorks 2016. It employs an Arduino Pro Mini and a Bluetooth module. The system runs on a 3.7 V, 850 mAh rechargeable lithium-polymer battery that allows long-term, continuous monitoring, and raw data are continuously and wirelessly plotted in a mobile phone app. The test was performed on 10 subjects at rest and while exercising over a total period of 25 min. The results were compared with the commercially available Braun Thermoscan, the original (uncoated) thermopile, and the Cosinuss One ear thermometer. Such a system is also expected to be useful in personalized medicine as a wearable in-ear device with wireless connectivity.
(This article belongs to the Special Issue Nanostructured Surfaces in Sensing Systems)
Figures:
Figure 1: Coating process to obtain the graphene-inked thermopile. Adapted from [15].
Figure 2: Amplified image of the graphene-inked MLX90614 thermopile.
Figure 3: (a) Assembled device; (b) monitoring of CBT on the smartphone.
Figure 4: Raw temperature data acquired from the tympanic membrane.
Figure 5: Bland–Altman plots between (a) CBT acquired with the original thermopile and the reference thermometer, and (b) CBT measured with the graphene-inked thermopile and the reference thermometer.
Figure 6: Temperature data during physical activity.
16 pages, 2020 KiB  
Article
Earable POCER: Development of a Point-of-Care Ear Sensor for Respiratory Rate Measurement
by Kazuhiro Taniguchi and Atsushi Nishikawa
Sensors 2018, 18(9), 3020; https://doi.org/10.3390/s18093020 - 10 Sep 2018
Cited by 11 | Viewed by 5579
Abstract
We have carried out research and development on an earphone-type respiratory rate measuring device, the earable POCER. The name combines “earable”, a word coined from “wearable” and “ear”, with “POCER”, an acronym for “point-of-care ear sensor for respiratory rate measurement”. The earable POCER calculates the respiratory frequency from one minute of measurements obtained by simply attaching an ear sensor to one ear of the subject, and displays the result on a tablet terminal. It irradiates the epidermis within the ear canal with infrared light from a light-emitting diode (LED) mounted on the ear sensor and, by receiving the reflected light with a phototransistor, measures the movement of the ear canal associated with respiration. In an evaluation experiment, eight healthy subjects breathed through the nose 12 times per minute, then 16 times per minute, and finally 20 times per minute, in accordance with the flashing of a timing instruction LED. The results showed that the accuracy of the respiratory frequency was 100% for nose breathing at 12 times per minute, 93.8% at 16 times, and 93.8% at 20 times.
(This article belongs to the Special Issue Point of Care Sensors)
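The figure captions below spell out the signal chain (median filter, 189–504 mHz band-pass, thinning to 1/16, 128-point FFT, peak of the power spectrum). The sketch reproduces that chain with NumPy/SciPy on a synthetic breathing signal; the filter orders, kernel size, and other implementation details are assumptions, not the authors' exact parameters.

```python
import numpy as np
from scipy.signal import medfilt, butter, filtfilt, decimate

def respiratory_rate_bpm(signal, fs=34.13):
    """Estimate breaths per minute from roughly one minute of ear-canal data.

    Rough reconstruction of the processing described for Figures 4-6:
    median filter, 189-504 mHz band-pass, decimation by 16, 128-point FFT,
    then the frequency with the largest power. Filter order is assumed.
    """
    x = medfilt(np.asarray(signal, float), kernel_size=5)
    b, a = butter(2, [0.189, 0.504], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x)
    x = decimate(x, 16)                    # new sampling rate ~2.13 Hz
    x = x[:128] - np.mean(x[:128])         # one-minute window, zero mean
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=16 / fs)
    peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return 60.0 * peak_hz

# Synthetic test: 0.2 Hz breathing (12 breaths/min) plus noise.
t = np.arange(0, 62, 1 / 34.13)
sim = np.sin(2 * np.pi * 0.2 * t) + 0.3 * np.random.randn(t.size)
print(respiratory_rate_bpm(sim))
```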
Figures:
Figure 1: External diagram of the earable point-of-care ear sensor for respiratory rate measurement (POCER). The earable POCER uses an ear sensor attached to one ear. It comprises one ear sensor for measuring the respiratory rate of the subject, a display and processing device (DPD) that calculates and displays the respiratory frequency from the values measured by the ear sensor, and one clip for fixing the cable running from the ear sensor to the DPD on the subject's clothes. A photosensor is attached to the ear sensor. The surface of the ear tip is covered with silicone rubber and the inside is made of low-rebound urethane; this combination lets the ear sensor fit snugly in the ear hole.
Figure 2: Block diagram of the earable POCER. The photosensor is equipped with one light-emitting diode (LED) with a wavelength of 940 nm and one phototransistor. The DPD comprises an AD converter, start/stop button, timing instruction LED, signal processor, memory, and display. The analog signal measured with the ear sensor is converted into a digital signal with a sampling frequency of 34.13 Hz using the AD converter, and the digitized measurements are recorded in memory. Based on the recorded information, the signal processor calculates the respiratory frequency, and the respiratory frequency and frequency waveform are shown on the display. The timing instruction LED has a duty ratio of 50% and frequencies of 200 mHz, 267 mHz, and 333 mHz. The start/stop button starts and stops the earable POCER.
Figure 3: Measurement principles of the ear sensor. The ear sensor has a compact photosensor attached. The photosensor irradiates the epidermis of the ear canal with infrared light from an LED and, by receiving the reflected light with a phototransistor, measures shape changes in the ear canal. The ear canal and the nasal cavity are connected by the eustachian tube, and they are also linked through the levator veli palatini and tensor veli palatini muscles, which are anatomically close to each other; consequently, when one breathes, the shape of the ear canal changes. Additionally, when one breathes through the mouth, the opening and closing of the mouth results from extension and contraction of the temporalis muscle, which changes the shape of the adjacent ear canal. The ear sensor enables optical and non-invasive measurement of these breathing-related changes in the shape of the ear canal.
Figure 4: Experimental results of EX200mHz12rpm, EX267mHz16rpm, and EX333mHz20rpm for subject A. The plotted value is the analog-digital (AD) converted value v_i measured using the sensor. AD conversion occurred with a sampling frequency of 34.13 Hz and a resolution of 12 bits, the number of data items was 2129, and the measurement time was approximately 62.37 s. (a) First experiment at 200 mHz (EX200mHz12rpm); (b) first experiment at 267 mHz (EX267mHz16rpm); (c) first experiment at 333 mHz (EX333mHz20rpm).
Figure 5: FFT input data obtained from the experimental results of EX200mHz12rpm, EX267mHz16rpm, and EX333mHz20rpm for subject A. Median filter processing, band-pass filter processing with a passband from 189 mHz to 504 mHz, and thinning to 1/16 of the 2048 items were performed on v_i in Figure 4, and the data were processed so that the mean value of the 128 data items was 0, creating the data for one minute. (a) First experiment at 200 mHz; (b) first experiment at 267 mHz; (c) first experiment at 333 mHz.
Figure 6: FFT processing results for EX200mHz12rpm, EX267mHz16rpm, and EX333mHz20rpm for subject A. These are the results of performing an FFT with a data length of 128 on the data in Figure 5, and they show that the true values (timing instruction LED frequencies) and the earable POCER calculated values (frequencies at which the power spectrum is maximum) match. (a) First experiment at 200 mHz; (b) first experiment at 267 mHz; (c) first experiment at 333 mHz.
11 pages, 3342 KiB  
Article
Earable TEMPO: A Novel, Hands-Free Input Device that Uses the Movement of the Tongue Measured with a Wearable Ear Sensor
by Kazuhiro Taniguchi, Hisashi Kondo, Mami Kurosawa and Atsushi Nishikawa
Sensors 2018, 18(3), 733; https://doi.org/10.3390/s18030733 - 1 Mar 2018
Cited by 26 | Viewed by 6246
Abstract
In this study, an earphone-type interface named “earable TEMPO” was developed for hands-free operation, wherein the user can control the device by simply pushing the tongue against the roof of the mouth for about one second. This interface can be used to start and stop the music from a portable audio player. The earable TEMPO uses an earphone-type sensor equipped with a light emitting diode (LED) and a phototransistor to optically measure shape variations that occur in the external auditory meatus when the tongue is pressed against the roof of the mouth. To evaluate the operation of the earable TEMPO, experiments were performed on five subjects (men and women aged 22–58) while resting, chewing gum (representing mastication), and walking. The average accuracy was 100% while resting and chewing and 99% while walking. The precision was 100% under all conditions. The average recall value of the five subjects was 92%, 90%, and 48% while resting, masticating, and walking, respectively. All subjects were reliably able to perform the action of pressing the tongue against the roof of the mouth. The measured shape variations in the ear canal were highly reproducible, indicating that this method is suitable for various applications such as controlling a portable audio player.
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Japan 2017)
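The ground-truth figures below describe a normalized median template built from repeated TEMPO actions. The sketch illustrates one plausible way such a template could be used for detection, sliding it over normalized sensor data and flagging windows whose correlation exceeds a threshold; the matching rule, normalization, and threshold are assumptions, not the classifier actually used in the paper.

```python
import numpy as np

def build_template(trials):
    """Median of several normalized repetitions of the TEMPO action.
    Assumes all trials are equal-length recordings."""
    norm = [(t - np.min(t)) / (np.ptp(t) + 1e-9) for t in map(np.asarray, trials)]
    return np.median(np.stack(norm), axis=0)

def detect_tempo(signal, template, threshold=0.9):
    """Return sample indices where the windowed correlation with the
    template exceeds the (assumed) threshold."""
    signal = np.asarray(signal, float)
    w = len(template)
    hits = []
    for i in range(len(signal) - w + 1):
        window = signal[i:i + w]
        window = (window - window.min()) / (np.ptp(window) + 1e-9)
        r = np.corrcoef(window, template)[0, 1]
        if r > threshold:
            hits.append(i)
    return hits
```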
Figures:
Figure 1: Anatomy of the tongue and external acoustic meatus. When the user pushes the tongue against the roof of the mouth, the suprahyoid muscles, including the stylohyoid muscle, expand and contract. The shape of the adjacent external acoustic meatus changes as a result of this expansion and contraction.
Figure 2: Construction of the electronic circuit of the earable TEMPO and appearance of the earphone-type sensor. The earable TEMPO optically measures the shape variation of the auditory canal via an earphone-type sensor inserted into one ear. The switch manipulates the PAP through the controller when a classifier judges that the user has performed TEMPO. The offset voltage of the output of the earphone-type sensor can be adjusted with the variable resistor VR_1. A pulse wave generator is attached to the LED of the earphone-type sensor to control its light emission. The microprocessor comprises five components: an AD converter, a pulse wave generator, a timing-teaching LED, memory, and a classifier.
Figure 3: Earphone-type sensor for the right ear. The earphone-type sensor is equipped with a small optical sensor comprising a QRE 1113 sensor (Fairchild Semiconductor International Inc., San Jose, CA, USA), which contains an infrared LED and a phototransistor. The size of the part inserted into the ear canal was based on a medium-sized commercially available earphone. Speakers were not mounted on the sensor in this study because the objective here was only to evaluate the operating performance of a PAP using a prototype earphone-type sensor; speakers can be added in future applications.
Figure 4: Measurement principle for changes in the shape of the ear canal. The earphone-type sensor receives the light emitted from the optical distance sensor that is reflected back by the eardrum and ear canal. During movement of the tongue, the shape of the ear canal changes, which alters the distance between the optical distance sensor and the eardrum and ear canal. The amount of light received changes over time in association with this change in distance.
Figure 5: Ground truth obtained from subject A: the subject repeated TEMPO ten times while the movement of the external acoustic meatus was measured by the earphone-type sensor. The obtained data were normalized, and the median values of the normalized data at each time point were extracted and used as the ground truth, as described in Section 3.2.
Figure 6: Ground truth of subjects A-E. Ground truth was also obtained for subjects B-E in the same manner as in Figure 5, and the results are superimposed. The measured values were obtained from the experiment discussed in Section 3.2.
Figure 7: Measured values of subject A while walking, obtained by normalizing the measurement values of the earphone-type sensor when subject A performed TEMPO 10 times while walking. These measurement values were obtained from the experiment discussed in Section 3.3.
4534 KiB  
Article
A Novel Earphone Type Sensor for Measuring Mealtime: Consideration of the Method to Distinguish between Running and Meals
by Kazuhiro Taniguchi, Hikaru Chiaki, Mami Kurosawa and Atsushi Nishikawa
Sensors 2017, 17(2), 252; https://doi.org/10.3390/s17020252 - 27 Jan 2017
Cited by 17 | Viewed by 6108
Abstract
In this study, we describe a technique for estimating meal times using an earphone-type wearable sensor. A small optical sensor composed of a light-emitting diode and a phototransistor is inserted into the ear hole of a user and estimates the user's meal times from the time variations in the amount of light received. This is achieved by emitting light toward the inside of the ear canal and receiving light reflected back from the ear canal. The proposed technique allowed “meals” to be differentiated from having conversations, sneezing, walking, ascending and descending stairs, operating a computer, and using a smartphone. Conventional head-worn devices that measure food intake can vibrate during running, as the body is jolted more violently than during walking; this can result in these devices misidentifying running as eating. To solve this problem, we used two of our sensors simultaneously: one in the left ear and one in the right ear. This was based on our finding that measurements from the left and right ear canals are strongly correlated during running but not during eating. Running and eating can therefore be distinguished based on correlation coefficients, which reduces misidentification. Moreover, by using an optical sensor composed of a semiconductor, a small and lightweight device can be created. This measurement technique can also measure body motion associated with running, and the data obtained from the optical sensor inserted into the ear can be used to support a healthy lifestyle regarding both eating and exercise.
(This article belongs to the Special Issue Wearable Biomedical Sensors)
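The key discriminator described in the abstract is that left and right ear-canal signals correlate strongly during running but not during eating. A minimal sketch of that correlation test is shown below; the threshold and synthetic signals are illustrative, not values from the paper.

```python
import numpy as np

def classify_activity(left, right, threshold=0.7):
    """Label a window as 'running' when the left/right ear-canal signals are
    strongly correlated, otherwise 'eating-like'. Threshold is an assumption."""
    r = np.corrcoef(np.asarray(left, float), np.asarray(right, float))[0, 1]
    return ("running", r) if r > threshold else ("eating-like", r)

# Synthetic example: running jolts both ears together, chewing does not.
t = np.linspace(0, 10, 1000)
running_l = np.sin(2 * np.pi * 2.5 * t) + 0.1 * np.random.randn(t.size)
running_r = np.sin(2 * np.pi * 2.5 * t) + 0.1 * np.random.randn(t.size)
chew_l = np.random.randn(t.size)
chew_r = np.random.randn(t.size)
print(classify_activity(running_l, running_r))   # high correlation -> running
print(classify_activity(chew_l, chew_r))         # low correlation -> eating-like
```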
Figures:
Figure 1: Anatomical positional relationship between the ear canal, temporal muscle (temporalis), and temporomandibular joint (TMJ). Chewing occurs by moving the TMJ and the temporal muscle. The ear canal is anatomically close to the TMJ and temporal muscle and is susceptible to the mechanical force they exert.
Figure 2: Measurement principle for changes in the shape of the ear canal. The earphone-type sensor receives the light emitted from the optical distance sensor that is reflected back by the eardrum and ear canal. During chewing, the shape of the ear canal changes, which alters the distance between the optical distance sensor and the eardrum and ear canal. The amount of light received changes over time in association with this change in distance.
Figure 3: Sensor prototype. The earphone-type sensor has the same shape as a conventional inner-ear-type earphone and is worn in both ears.
Figure 4: Electronic circuit around the sensor. The reflected light changes with the distance d between the phototransistor and the vicinity of the eardrum, which causes the collector current I_C of the phototransistor to change in response to fluctuations in the shape of the ear canal associated with chewing. The change in the collector current I_C is converted into a change in the voltage across the resistor R_L; this voltage change is taken as the output voltage of the sensor.
Figure 5: Ear-canal movement during conversation: (a) measured values for movement of the ear canal during conversation; (b) amount of change in these measured values. The amount of change was obtained by subtracting the immediately preceding measured value (100 ms earlier, because measurements are performed at 10 Hz) from the current measured value.
Figure 6: Ear-canal movement during yawning: (a) movement of the ear canal during yawning; (b) corresponding amount of change.
Figure 7: Ear-canal movement during chewing: (a) movement of the ear canal during gum chewing; (b) corresponding amount of change.
Figure 8: Meal quality feature of subject A, calculated using the meal time estimation algorithm. The vertical axis shows the Meal Quality Feature (MQF) in volts (V), while the horizontal axis shows the time (11:00 to 13:00) in hours and minutes (h:m). The time during which the MQF is above the threshold is TIME1, while the time during which it is below the threshold is TIME2; TIME1 is the estimated meal time.
Figure 9: Right and left ear canal movement during running (subject H).
Figure 10: Right and left ear canal movement during chewing (subject H).
1177 KiB  
Article
Wearable Sensing of In-Ear Pressure for Heart Rate Monitoring with a Piezoelectric Sensor
by Jang-Ho Park, Dae-Geun Jang, Jung Wook Park and Se-Kyoung Youm
Sensors 2015, 15(9), 23402-23417; https://doi.org/10.3390/s150923402 - 16 Sep 2015
Cited by 80 | Viewed by 16729
Abstract
In this study, we developed a novel heart rate (HR) monitoring approach in which we measure the pressure variation at the surface of the ear canal. A scissor-shaped apparatus equipped with a piezoelectric film sensor and a hardware circuit module was designed for high wearability and stable measurement. In the proposed device, the film sensor converts in-ear pulse waves (EPW) into electrical current, and the circuit module enhances the EPW and suppresses noise. A real-time algorithm embedded in the circuit module performs morphological conversions to make the EPW more distinct, and knowledge-based rules are used to detect EPW peaks. In a clinical experiment conducted with a reference electrocardiogram (ECG) device, EPW and ECG were concurrently recorded from 58 healthy subjects, and the EPW intervals between successive peaks were compared with their corresponding ECG intervals. Promising results were obtained: a sensitivity of 97.25%, a positive predictive value of 97.17%, and a mean absolute difference of 0.62. Thus, highly accurate HR was obtained from in-ear pressure variation. Consequently, we believe that the proposed approach could be used to monitor vital signs and could be utilized in diverse applications in the near future.
(This article belongs to the Special Issue Noninvasive Biomedical Sensors)
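The abstract mentions morphological conversion (the converted trace is labeled SSF in Figure 6 below) and knowledge-based peak detection rules. The sketch shows a generic slope-sum-function transform and a simple thresholded, refractory-period peak picker; the window length, threshold rule, and refractory time are assumptions rather than the authors' exact rules.

```python
import numpy as np

def slope_sum(x, window=16):
    """Slope sum function: sum of positive first differences over a sliding window."""
    dx = np.diff(np.asarray(x, float), prepend=x[0])
    dx[dx < 0] = 0.0
    return np.convolve(dx, np.ones(window), mode="same")

def detect_peaks(ssf, fs, min_interval_s=0.35):
    """Pick SSF peaks above an adaptive threshold, enforcing a refractory period."""
    threshold = 0.5 * np.mean(ssf) + 0.25 * np.max(ssf)   # assumed threshold rule
    refractory = int(min_interval_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(ssf) - 1):
        is_local_max = ssf[i] >= ssf[i - 1] and ssf[i] > ssf[i + 1]
        if ssf[i] > threshold and is_local_max and i - last >= refractory:
            peaks.append(i)
            last = i
    return np.array(peaks)

# Heart rate would then follow from peak-to-peak intervals, e.g.:
# hr_bpm = 60.0 * fs / np.diff(peaks)
```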
Figures:
Figure 1: Anatomical view of the vessels around the ear.
Figure 2: Structure of the in-ear pressure sensing device: (a) stick covered with the piezoelectric film sensor; (b) spring to push both sticks in different directions; (c) button to control the position of both sticks; (d) hardware circuit module; (e) coin-cell battery.
Figure 3: (a) In-ear pressure sensing device prototype; (b) wearing example (lateral view); (c) wearing example (frontal view).
Figure 4: Circuit module and signal processing flow.
Figure 5: Circuit board (15 × 17 mm): (a) top and bottom views of the circuit board, (1) analog part, (2) MCU and digital part, (3) power regulator; (b) circuit board size compared to a 25-cent coin.
Figure 6: In-ear pulse wave recorded for 6.5 s: (a) raw EPW; (b) smoothed EPW; (c) converted (SSF) EPW.
Figure 7: Decision flow of EPW peak detection.
Figure 8: SsfPw peak and raw EPW peak detected via the decision rules: (a) SsfPw with its peaks and offsets; (b) raw EPW with its peaks.
Figure 9: Concurrently recorded ECG and EPW signals and their peak-to-peak intervals for comparative analysis.
Figure 10: Sensitivity and PPV curves of the in-ear device.
Figure 11: Agreement analysis results: (a) plot of identity; (b) Bland–Altman plot.
Figure 12: Examples of EPW waveforms when the user is (a) walking, (b) running, (c) chewing gum.
2281 KiB  
Article
Bio-Inspired Micro-Fluidic Angular-Rate Sensor for Vestibular Prostheses
by Charalambos M. Andreou, Yiannis Pahitas and Julius Georgiou
Sensors 2014, 14(7), 13173-13185; https://doi.org/10.3390/s140713173 - 22 Jul 2014
Cited by 23 | Viewed by 8835
Abstract
This paper presents an alternative approach for angular-rate sensing based on the way that the natural vestibular semicircular canals operate, whereby the inertial mass of a fluid is used to deform a sensing structure upon rotation. The presented gyro has been fabricated in a commercially available MEMS process, which allows microfluidic channels to be implemented in etched glass layers that sandwich a bulk-micromachined silicon substrate containing the sensing structures. Measured results obtained from a proof-of-concept device indicate an angular-rate sensitivity of less than 1 °/s, which is similar to that of the natural vestibular system. By avoiding the use of a continually excited vibrating mass, as is practiced in today's state-of-the-art gyroscopes, an ultra-low power consumption of 300 μW is obtained, making the device suitable for implantation.
(This article belongs to the Special Issue Implantable Sensors)
Figures:
Graphical abstract.
Illustration of the human vestibular system and its components [10].
Illustration of the principle behind the sensor, which integrates four cantilevers (MEMS hair cells) at each of the layer transition points, i.e., in the bulk-micromachined silicon layer, positioned at the locations shown by the black arrows.
Actual physical layout of the structure, consisting of: (1) top-glass canal (green quadrants in the upper-left and bottom-right), (2) bottom-glass canal (lilac quadrants in the bottom-left and upper-right), (3) bulk-micromachined silicon layer connecting the upper and lower quadrants, intercepted by four flaps (light blue Π-shaped cuts in a 3.1 μm silicon membrane) that are oriented in the same direction in all cases and contain a piezo-resistor centered on each membrane's uncut side, and (4) various conductors for powering the Wheatstone bridge (VDD, GND) and the differential outputs (V+, V−).
Wheatstone bridge.
Micro-bubbles that can appear with incorrect filling methods in the MEMS-microfluidic sensors; the micro-bubbles are indicated with red arrows (photographs).
Schematic design of the PCB test board for testing the proposed sensor.
Simulated frequency response of the sensing elements of the proposed sensor.
Test setup for characterization of the proposed angular-rate sensor: (a) the PCB with the sensor mounted on top of the rate table; (b) the PCB with the sensor placed in an aluminum box and mounted on top of the rate table. The aluminum box provides electromagnetic shielding of the device under test so as to avoid picking up electrical noise emitted by the rate-table motor.
Measured output voltage vs. angular rate: (a) angular rates between 0 °/s and a maximum of ±1.5 °/s (sinusoidally modulated); (b) angular rates up to ±30 °/s (sinusoidally modulated).