1. Introduction
Over the last few decades, life expectancy has increased considerably due to better living standards and constant progress in medicine, among other causes [1]. The resulting increase in population has a significant social impact in terms of healthcare requirements. In this regard, healthcare cost containment is a crucial factor, implying strict control of expenditure on medicines, medical equipment, and hospital care [2]. There is, thus, a need to develop and implement groundbreaking devices able to provide medical services at an affordable cost. This is particularly true for populations with limited access to basic healthcare services.
Cardiovascular diseases (CVDs) are the leading cause of death in the world [3]. In 2016, over 17.9 million people died of CVDs, representing 31% of all global deaths by disease. Of these deaths, 85% were due to heart attacks and strokes, and about three quarters took place in countries with low to middle average incomes [4].
To address these troubling statistics, a possible approach consists of thoroughly studying the functioning of patients’ hearts using portable devices. These devices can gather an enormous amount of data through recurrent heart tests carried out at home, while patients sleep or go about their daily activities. Monitoring patients with portable sensors is thus a way to improve quality of life, especially for elderly individuals and people with mobility problems, or to extend access to a health protection system. This type of monitoring provides an efficient and cost-effective alternative to on-site clinical monitoring [5]. Systems equipped with low-cost portable sensors are useful and feasible diagnostic tools for the real-time monitoring of important physiological indicators in such domestic/external settings [6]. In the same context, wearable sensors are gaining popularity and attracting the attention of many researchers and technology companies [7]. Consequently, leading corporations such as Apple and Samsung are investing heavily in wearable devices focused on heart fitness. The set of features integrated in a single wrist-worn device will provide relevant information about the individual in almost real time. The main drawback of this approach is its low accuracy in ECG data acquisition. Interestingly, our methodology does not require ECG data calibration to provide good results.
The problem of data quality in ECG sensors has also been tackled in [8], where the authors developed a single-chip wearable ECG hardware system for monitoring patients that delivers a high-quality ECG signal by applying three types of filters. Nevertheless, they did not address heart disease classification. The authors of Reference [9] proposed a multi-sensor data fusion scenario in order to improve the quality of heart disease detection. They used a kernel random forest ensemble with several time- and frequency-domain features for their classification method. Even though they attained 98% accuracy working on a continuous stream of data, their approach incurs network bandwidth and battery consumption costs. In contrast, our design is single-device oriented, uses only RR intervals (the time between two consecutive heartbeats) as input, and requires no network configuration.
Parallel to the development of these portable sensors for health monitoring, a large set of algorithms has been implemented to understand, detect, and classify different diseases. For instance, during the last decade, the design of different algorithms for the analysis of electrocardiogram signals has attracted the interest of many researchers. The detection and classification of different heart arrhythmias, in particular atrial fibrillation (AF), has played a central role since AF is a leading preventable cause of recurrent stroke for which early detection and treatment are critical.
Some relevant algorithms for the detection of AF are those based on the analysis of the P-wave [10,11], and those that feed a large set of features obtained from ECGs into an artificial neural network [12,13,14] or a deep learning approach [15,16]. Furthermore, it is worth mentioning that several recent clinical trials on large populations focusing on AF detection have been carried out [17,18,19]. In those clinical studies, ECG data were obtained with portable sensors, achieving outstanding results.
However, some limitations of these algorithms are: the high dependence on the robustness of the training data, the requirement for a filter preprocessing phase, and the fact that they are built upon a large set of features that are usually computationally complex to analyze, among others.
In order to overcome some of these limitations, several algorithms for the automatic detection of AF, based only on RR intervals, have been introduced [20,21,22].
Our work is aimed at developing a generic application, and its associated methodology, able to apply a predictive algorithm to the RR intervals acquired from any low-cost biomedical sensor. In principle, this is not easy to carry out due to the expected low precision of most portable sensors on the market. Portable sensors or wearables suited to this purpose, such as the Apple Watch Series 4 [23], MySignals [24] or Qardio [25], provide electrocardiogram data in PDF format, which can be used as the input to our application. Given the low accuracy of these devices, it would normally be necessary to devise a calibration procedure. In order to rapidly create a fully functional prototype, and taking low cost and publicly available data into consideration, we use a commercially available portable electrocardiograph sensor called Kardia Mobile from AliveCor [26]. Admittedly, Kardia Mobile has some recently published limitations [27,28] in comparison to the data accuracy of a 12-lead gold standard ECG recording. The MAC800 Healthcare [29] is employed as the 12-lead gold standard ground truth ECG equipment. For reproducibility purposes, the source code of the developed application, as well as its corresponding updates, can be found in Reference [30]. Furthermore, a possible MATLAB functional diagram is shown in Appendix A.
In addition, in this paper we use our recently proposed predictive algorithm to detect atrial fibrillation, which is based on Symbolic Recurrence Quantification Analysis and can be found in Reference [31]. To keep the paper self-contained, its most important features are also summarized here. Symbolic Recurrence Quantification Analysis (SRQA) [32] provides a framework to analyze dynamic changes in a time series. In our case, this time series is extracted from electrocardiogram (ECG) signals; more concretely, RR intervals are the measurements processed by our algorithm. Using RR intervals as the sole input reduces the complexity of the approach and allows us to create a robust, simple, and reliable solution for short- and long-term ECG monitoring. Specifically, we constructed a logistic regression algorithm that employs SRQA over RR interval time series to detect AF. The resulting approach can differentiate between normal sinus individuals and patients with AF, and the fitted logistic regression model provides outstanding results in terms of predictive accuracy.
Our main contributions are: (i) the integration of our computationally efficient algorithm into an application; (ii) the fact that our algorithm only employs RR intervals as input, so it can operate with virtually any low-cost sensor; and (iii) predictive accuracy that is unaffected by the data calibration procedure.
The rest of this paper is organized as follows. Section 2 outlines the symbolic dynamics approach as well as the logistic model for the classification algorithm. Section 3 elaborates the methodology for applying the algorithm to generic low-cost devices; this section also presents the digitization phases, the determination of RR intervals, the optional calibration procedure, and the devices employed for this purpose. Section 4 describes the data employed for training the calibration method and the data used for the validation of the predictive algorithm to detect AF. Finally, Section 5 concludes.
2. Normal Sinus and Atrial Fibrillation Classification Scheme
Recently, we applied symbolic recurrence quantification analysis (SRQA) to the detection of atrial fibrillation [31]. SRQA provides a powerful framework for studying the dynamic behavior of a system. In particular, we developed a novel algorithm based on a logistic model, which we called ReAD-AF (Recurrence Analysis to Detect Atrial Fibrillation). The algorithm determines whether a patient is in a normal state (normal sinus rhythm, NS) or has AF. The solution uses the RR intervals of an ECG signal as its unique input, requiring minimal pre-processing computational cost. Specifically, we constructed a logistic regression algorithm that employs SRQA over RR interval time series. Here, we summarize its operation and main features. Additionally, the pseudocode implementing the algorithm is included in Appendix B.

The proposed approach uses a symbolization procedure based on neighboring permutations. That is, given an RR interval time series {x_t}, its phase-space representation can be built by means of Takens’ time-delay method [33], so that the space vector is x_t^m = (x_t, x_{t+1}, ..., x_{t+m-1}) for an embedding dimension m. The symmetric group S_m of order m! is defined as the group containing all permutations of length m; in turn, each permutation is defined as a symbol pi. Each vector x_t^m of the vectorial time series is called an m-history. Finally, a symbolization map transforms each m-history of the vectorial time series into a symbol representing an ordinal pattern of length m.
As an example, consider the transformation of an RR interval sample set (Equation (2)) into a sequence of symbols (Equation (3)). If m = 3, the symmetric group is formed of 3! = 6 symbols, and each m-history of length 3 corresponds to a symbol (and an assigned color) of the symmetric group S_3. For instance, an m-history (x_t, x_{t+1}, x_{t+2}) with x_t < x_{t+2} < x_{t+1} corresponds to a symbol of type (0, 2, 1); similarly, an m-history with x_{t+1} < x_t < x_{t+2} corresponds to a symbol of type (1, 0, 2).
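As an illustrative sketch (the paper's prototype is implemented in MATLAB; this Python re-implementation and its sample RR values are hypothetical), the symbolization of m-histories into ordinal patterns can be written as:

```python
import numpy as np

def ordinal_symbols(rr, m=3):
    """Map each m-history of an RR interval series to its ordinal pattern,
    i.e., the permutation of {0, ..., m-1} that sorts the window ascendingly
    (one common convention; the paper's convention may differ)."""
    rr = np.asarray(rr, dtype=float)
    symbols = []
    for t in range(len(rr) - m + 1):
        window = rr[t:t + m]  # the m-history x_t^m
        symbols.append(tuple(int(i) for i in np.argsort(window, kind="stable")))
    return symbols

# Hypothetical RR intervals (seconds); with m = 3 there are 3! = 6 possible symbols
print(ordinal_symbols([0.80, 0.92, 0.85, 0.78, 0.95, 0.88], m=3))
# → [(0, 2, 1), (2, 1, 0), (1, 0, 2), (0, 2, 1)]
```

Under this convention, the window (0.80, 0.92, 0.85) yields (0, 2, 1) because its smallest value sits at position 0, the middle value at position 2, and the largest at position 1.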
Now, two symbols pi_t and pi_s at different instants in time (t and s) are recurrent if and only if pi_t = pi_s. So, an indicator function is defined as R(t, s) = 1 if pi_t = pi_s, and R(t, s) = 0 otherwise.
Therefore, the Symbolic Recurrence Plot (SRP) is defined as a matrix containing the recurrences of all the symbols provided by the indicator function. The SRP is a powerful tool for analyzing the dynamic behavior of the time series and can be represented in a graph whose axes are the time indexes of the time series. Each colored dot at coordinates (t, s) means that the m-histories x_t^m and x_s^m are recurrent for the corresponding symbol; white dots mean that the m-histories are not recurrent. This displays the portion of the phase space being visited by the system. If a dynamic change occurs, the colored distribution of the SRP will also change.
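The indicator function above directly yields the SRP matrix. A minimal Python sketch (illustrative; the function name and example symbols are ours, not from the paper's code base):

```python
import numpy as np

def symbolic_recurrence_plot(symbols):
    """Build the SRP matrix: entry (t, s) is 1 iff the ordinal symbols at
    times t and s coincide (the indicator function R(t, s) defined above)."""
    n = len(symbols)
    srp = np.zeros((n, n), dtype=int)
    for t in range(n):
        for s in range(n):
            srp[t, s] = int(symbols[t] == symbols[s])
    return srp

# A periodic symbol sequence yields a structured plot; the overall
# recurrence rate is simply the mean of the matrix.
srp = symbolic_recurrence_plot([(0, 1), (1, 0), (0, 1), (1, 0)])
print(srp.mean())  # → 0.5
```

In practice one matrix per symbol (colored dot) would be kept; the binary matrix here aggregates all symbols for brevity.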
Figure 1 illustrates two SRPs plotted from 102-length RR interval time series, one from a patient with normal sinus rhythm and one with atrial fibrillation. The colored distribution differs between the two graphics, which means that the dynamic behavior of the two time series differs. The normal sinus rhythm SRP presents a much more structured distribution, with a remarkably predominant color (black) of symbolic recurrence. In contrast, the atrial fibrillation SRP does not exhibit a structured pattern: the symbols appear randomly, without following any particular distribution. These random patterns characterize the atrial fibrillation patient due to the non-periodic state of the RR intervals, whereas the normal sinus rhythm patient presents a characteristic set of symbols that repeats periodically.
To estimate the probability of a patient being categorized as NS or AF, a logistic model is created. Estimating the model requires defining a set of covariates, which are divided into two groups. The first group contains covariates based on the diagonal and vertical lines of the SRP, as well as the recurrence rate of each symbol. The second group is based on the distribution of the RR intervals and comprises the mean, the median, the Pearson coefficient of variation, and the coefficient of variation of the median. These covariates allow the discrimination between normal sinus rhythm and atrial fibrillation patients. The logistic model is evaluated through a receiver operating characteristic (ROC) curve analysis in order to compute a probability threshold, so that a patient with an estimated probability above this threshold is classified as having AF.
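A sketch of the second covariate group, under the assumption that the "coefficient of variation of the median" refers to a median-based dispersion ratio (the paper's exact definition may differ):

```python
import numpy as np

def rr_covariates(rr):
    """Second covariate group: statistics of the RR interval distribution.
    'cv_median' here is a median-based analogue of the Pearson coefficient
    of variation (an assumption, not the paper's stated formula)."""
    rr = np.asarray(rr, dtype=float)
    mean, median = rr.mean(), np.median(rr)
    return {
        "mean": mean,
        "median": median,
        "cv_pearson": rr.std(ddof=1) / mean,   # sample std over mean
        "cv_median": np.median(np.abs(rr - median)) / median,
    }

# Hypothetical RR samples (seconds)
print(rr_covariates([0.8, 0.9, 1.0]))
```

The first covariate group (diagonal/vertical line statistics of the SRP) would be computed from the recurrence matrix and is omitted here for brevity.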
In order to build this curve, three metrics are assessed: sensitivity, specificity, and accuracy. To this end, the numbers of true positives (TP), true negatives (TN), false negatives (FN), and false positives (FP) have to be defined. Sensitivity (Se), or the true positive rate, is given by Se = TP / (TP + FN), and the false positive rate by FPR = FP / (FP + TN). In addition, specificity (Sp) is defined as Sp = TN / (TN + FP) = 1 - FPR. Finally, accuracy (ACC) is determined as ACC = (TP + TN) / (TP + TN + FP + FN). Taking all these metrics into account, the ROC curve is constructed using the (FPR, Se) points, and the optimal threshold parameter is the one that minimizes the distance between the point (FPR, Se) and (0, 1), that is, the one minimizing sqrt(FPR^2 + (1 - Se)^2).
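The threshold selection just described can be sketched as follows (illustrative Python; the probabilities and labels below are hypothetical model outputs, not the paper's data):

```python
import numpy as np

def optimal_threshold(probs, labels):
    """Scan candidate cut-offs and return the one minimizing the distance
    sqrt(FPR^2 + (1 - Se)^2) between the ROC point (FPR, Se) and (0, 1)."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_t, best_d = None, np.inf
    for t in np.unique(probs):
        pred = (probs >= t).astype(int)
        tp = int(np.sum((pred == 1) & (labels == 1)))
        fn = int(np.sum((pred == 0) & (labels == 1)))
        fp = int(np.sum((pred == 1) & (labels == 0)))
        tn = int(np.sum((pred == 0) & (labels == 0)))
        se = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        d = np.hypot(fpr, 1.0 - se)
        if d < best_d:
            best_t, best_d = float(t), d
    return best_t

# Hypothetical estimated probabilities and true AF labels (1 = AF)
print(optimal_threshold([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # → 0.8
```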
In the original paper [31] it was shown that sensitivity, specificity, and accuracy increased with the selected window size. Sensitivity always reached high values, with specificity slightly lower for the smallest window sizes, and the accuracy achieved indicated good predictive power.
To validate the algorithm, a k-fold cross-validation procedure was performed: the logistic model was fitted on a training set and then evaluated on a test set. The entire data set is publicly available, provided by PhysioBank [34].
The ReAD-AF algorithm was shown to be robust and capable of discriminating AF patients with high precision. Indeed, the ReAD-AF algorithm developed can work with only the RR interval feature as input, which implies a reduction of the computational cost of the model, enabling portability to mobile platforms such as smartphones.
In addition to this classification algorithm based only on RR intervals as input, we want to further contribute with a methodology to help practitioners develop an application based on algorithms that can be applied to the growing low-cost biomedical sensor field. Thus, the next section illustrates a framework for such purpose.
3. Prototype Application
In this section we look deeper into the application modules developed. A general block diagram of the application is depicted in Figure 2; a detailed description of the main modules and how they work is given below.
The first module takes as input an ECG signal (acquired from a low-cost device) in PDF or JPEG format and digitizes it. To satisfy the well-known sampling theorem, the module's main parameter is twice the bandwidth of the device under consideration, which is enough to retain all the signal information. The background color of the electrocardiogram and the duration of the ECG are also important parameters to take into account. Since direct access to the data captured by a particular sensor usually requires proprietary protocols, this digitization procedure enables the implementation of a generic application, regardless of the low-cost commercial device finally selected to provide the input ECG.
The next module is responsible for detecting the inter-beat (RR) intervals in an ECG, whose variation over successive heartbeats is known as heart rate variability (HRV). This time variation (usually expressed in milliseconds or seconds) is controlled by a primitive part of the nervous system called the autonomic nervous system (ANS), and RR intervals are one of the indicators of a person's state of health, fitness, and recovery. There are many algorithms available in the open literature for detecting HRV. For instance, the well-known Pan and Tompkins algorithm [35] detects QRS waves, or QRS complexes, based on the slope, amplitude, and width of the ECG signal. Our algorithm is divided into two stages: pre-processing and decision making. In the pre-processing stage, the signal is prepared by removing noise, smoothing the signal, and amplifying the QRS slope; in the decision stage, only signal peaks are retained. It is also possible to apply a set of well-proven tools developed by MIT [36,37] for the detection of QRS complexes and RR intervals. In our prototype, the RR interval method is implemented with a native MATLAB function (findpeaks).
Once the RR intervals have been acquired, our application can pass them to a third module that calibrates the data obtained by the low-cost portable device against professional 12-lead gold standard electrocardiograph equipment, in order to achieve more precise data and, later, diagnosis. This calibration is usually based on a linear regression model aimed at improving the accuracy of the data acquired by the low-cost device, allowing the ECG signal to be analyzed with the highest degree of accuracy. Interestingly, we prove that the measures obtained from SRQA are invariant under monotonic transformations (see Appendix C), which means that the symbolic recurrence measures employed as covariates in our algorithm are not affected when the calibration process is based on linear regression. That is, raw uncalibrated data can feed our algorithm directly. However, for practitioners who might use different approaches, a possible calibration method is also introduced below.
The last module, implementing the ReAD-AF scheme, is in charge of the AF classification. Its classification power is addressed in Section 4.2.
All of these modules are intended to illustrate a general methodology since, as commented above, commercial sensor applications are mainly based on proprietary protocols. That is, the data acquired is not directly accessible, and only an ECG representation, commonly in PDF or JPEG format, is available.
3.1. Module 1: ECG Digitization
To analyze the ECG signals comprehensively, we first require a technique for digitizing a graph in PDF or JPEG format, the most common output formats of commercially available sensors. This makes the methodology independent of the data acquisition platform and simple to generalize. For this purpose, we have customized a procedure implementing this functionality [38]. The sequence of actions in the digitization process of the ECG signal in PDF or JPEG format is shown in Figure 3, together with a detailed description of each step.
The first phase of our proposal is the scanning of the ECG signal. To do this, the MATLAB image-reading function (imread) is applied at a resolution of 600 dpi, taking the file containing the ECG as an input parameter. Before scanning, a portion of the PDF document must first be selected: the complete document contains additional information, such as the patient's name or date of birth, so it is necessary to prune the image down to the signal to be digitized.
In the image binarization phase, RGB color images are converted into binary images. This process reduces the number of colors at the binary level, resulting in a clear reduction of the memory needed to store the signal. This function also simplifies the processing of the binarized image in comparison to the original image. However, ECG charts are always represented graphically in a grid, which interferes with the binarization process of the image. One way to eliminate or erase this grid in the background of the image is to set an RGB threshold. In detail, all interfering colors are enclosed in the threshold and transformed into white before starting the binarization mechanism. The end result is a white background with the ECG signal in a selectable (e.g., blue) color.
Some noise might appear, characterized by isolated black and white pixels in the binarized image, which may impair the subsequent analysis of the data; therefore, a filtering function has also been implemented to eliminate these error pixels. To extract the ECG signal graph from the two-dimensional binarized image, horizontal scanning is usually employed. However, our module implements vertical scanning to identify the ECG signal pixels, which achieves faster and more accurate digitization. While with horizontal scanning pixel location identification requires an iterative process, with vertical scanning each pixel directly represents an (x, y) position coordinate of the graph, which results in a one-dimensional vector.
Figure 4, below, illustrates both types of scanning.
In order to offer greater data accuracy, the vectors double the number of columns of pixels analyzed, because some peaks are plotted as successive vertical black lines in low-resolution images. Each component of these vectors is a complex number. The two components of the same column have an identical real part, which is the column index of the pixel matrix, while the two consecutive imaginary parts (sharing that real part) quantify the upper and lower limits of the vertical black line, that is, of the ECG signal under study. For example, the 2D-vector components 12 + 27i and 12 + 28i mean that the vertical black line spans from row 27 to row 28 in column 12.
The digitization algorithm scans the data from the first pixel (bottom left) to the last one (top right). Once the original axes have been deleted from the ECG graph, the bottom row and first column positions are used as the reference for the ECG scanning.
To convert the above-mentioned 2D vector into a one-dimensional vector, the algorithm computes the modal distance between the imaginary parts of two successive components, that is, the most frequent number of vertical pixels comprising the signal in a column. If the difference between the imaginary components of the two coordinates is within this modal limit, then only the imaginary part of the second element is included in the 1D version of the vector; this value is taken as the reference for R-peak identification in the next column analysis. If the difference is greater than the modal limit, the modulus of the difference between the reference and each of the two components is calculated, and the greater value is stored as the 1D value. One can often find portions of the graph where the drawing is no longer continuous; in order to provide a consistent 1D vector, the components in these positions are estimated through linear interpolation. Finally, the digital signal is plotted taking into account this final one-dimensional vector and the sampling frequency.
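A simplified Python sketch of the vertical scanning idea (illustrative only: here each column's black run is reduced to its midpoint rather than by the modal-distance rule above, and gaps where the drawing is discontinuous are filled by linear interpolation):

```python
import numpy as np

def trace_from_binary(img):
    """Vertical scan of a binarized ECG image (True = signal pixel): each
    column's run of signal pixels is reduced to its midpoint, rows are
    flipped so larger values mean larger amplitude, and empty columns
    (discontinuous drawing) are filled by linear interpolation."""
    h, w = img.shape
    y = np.full(w, np.nan)
    for col in range(w):
        rows = np.flatnonzero(img[:, col])
        if rows.size:
            y[col] = (h - 1) - rows.mean()  # image rows grow downward
    idx = np.arange(w)
    good = ~np.isnan(y)
    y[~good] = np.interp(idx[~good], idx[good], y[good])
    return y

# Toy 5x4 binarized image with pixels only in columns 0 and 2
img = np.zeros((5, 4), dtype=bool)
img[2, 0] = True   # column 0: pixel at row 2
img[1, 2] = True   # column 2: pixel at row 1; columns 1 and 3 are gaps
print(trace_from_binary(img).tolist())  # → [2.0, 2.5, 3.0, 3.0]
```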
For a better understanding of the digitization process, the pseudocode of the employed methods is shown below in Algorithm 1, together with an explanation of the sequence of actions. The input is an ECG in PDF or JPEG format. Step 2 reads the input file, and step 3 applies an interactive image pruning operation to the image previously read. After this, the original image is transformed into a binary one in step 4, during which a binary mask and a composite image are generated; the composite image shows the original RGB pixel values under the mask. In step 5, portions with fewer than P pixels (in our case, 1260 pixels) are removed from the binary image, resulting in a clearer signal where all pixels except the ECG signal curve are eliminated. A shrink function is then applied in step 6 for a better union between points. Finally, vertical scanning is implemented from step 7 onwards to convert the binarized image into a 1D vector, and the resulting output signal is plotted in MATLAB format (.fig).
Algorithm 1 PDF to MATLAB
1: Input: PDF file (.pdf)
2: read(PDF)
3: prune(r)
4: createBinaryMask(p)
5: bwAreaOpen()
6: bwMorph()
7: pixelToVector(m)
8: Output: MATLAB signal (.fig)
Figure 5 shows the digitization procedure step by step. At each step, a transformation is applied to the image in order to obtain the final digitized signal.
3.2. Module 2: Signal Processing and Filtering
Once the digital output signal is obtained, the processing phase to obtain the RR intervals starts. First, in order to smooth the data obtained from Module 1 (Section 3.1), we apply the MATLAB function filter(b,a,x), which implements the rational transfer function H(z) = (b(1) + b(2)z^-1 + ... + b(nb+1)z^-nb) / (a(1) + a(2)z^-1 + ... + a(na+1)z^-na); the coefficient vectors b and a are chosen to provide a fourth-order filter. As an example, the functionality of the filter is illustrated in Figure 6.
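MATLAB's filter(b,a,x) applies the difference equation of that transfer function. A minimal Python equivalent is sketched below, with a hypothetical 5-tap moving average standing in for the prototype's fourth-order coefficients, which are not listed here:

```python
def apply_filter(b, a, x):
    """Direct-form implementation of the difference equation behind
    MATLAB's filter(b, a, x):
        a[0]*y[n] = sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k]."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc / a[0])
    return y

# Hypothetical smoothing coefficients: a 5-tap moving average (FIR, a = [1])
b, a = [0.2] * 5, [1.0]
print(apply_filter(b, a, [1, 1, 1, 1, 1, 1]))
```

The output ramps up over the first four samples and then settles at 1.0, showing the smoothing (step-response) behavior of the filter.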
The next stage of the analysis of the ECG signal is peak detection. To do this, the findpeaks function of MATLAB is employed, adjusting some of its parameters according to the type of signal under consideration. This function returns a vector with the local maxima (peaks) of the input signal vector, together with the indices at which the peaks are located. The resulting vector allows the computation of the RR intervals of the ECG signal (see Figure 6, red signal), which, as previously mentioned, are the essential input to our classification algorithm.
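A minimal stand-in for this step (illustrative Python; the thresholds below are hypothetical and do not reproduce the prototype's findpeaks settings):

```python
import numpy as np

def rr_intervals(x, fs, min_height, refractory=0.25):
    """Detect local maxima above `min_height` separated by at least
    `refractory` seconds (a crude analogue of findpeaks' MinPeakHeight and
    MinPeakDistance options), then return successive peak-time differences,
    i.e., the RR intervals in seconds."""
    x = np.asarray(x, dtype=float)
    peaks, last_t = [], -np.inf
    for i in range(1, len(x) - 1):
        if x[i] > min_height and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if i / fs - last_t >= refractory:
                peaks.append(i)
                last_t = i / fs
    return np.diff(peaks) / fs

# Synthetic example: three R peaks at samples 50, 150 and 260 (fs = 100 Hz)
ecg = np.zeros(300)
ecg[50] = ecg[150] = ecg[260] = 1.0
print(rr_intervals(ecg, fs=100, min_height=0.5).tolist())  # → [1.0, 1.1]
```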
3.3. Module 3: Calibration Procedure
In order to exemplify the calibration procedure, we employ a low-cost, portable, commercial device called Kardia Mobile (AliveCor Inc., San Francisco, CA, USA). It enables one-lead ECG recording. Note that any other low-cost alternative device could be used, provided that it offers an output ECG in PDF or JPEG format, as is commonly the case. Kardia by AliveCor is one of a family of mobile, clinical-quality electrocardiogram (ECG) recorders.
Furthermore, we employ professional 12-lead gold standard ECG ground truth equipment of the kind generally used in hospitals to monitor patients; in particular, the MAC800 by General Electric Healthcare. The MAC800 system provides the ability to store, analyze, and display ECGs in a wide variety of formats with utmost accuracy. Both devices are shown in Figure 7.
Kardia Mobile and MAC800 are capable of monitoring patients and representing their electrocardiograms. However, the 12-lead MAC800 is a much more accurate device than the single lead Kardia Mobile. Therefore, to be comparable, data gathered by Kardia Mobile have to be calibrated, taking as a reference the data acquired by the MAC800 gold standard in order to attain the same quality results.
To achieve comparable results, we have to adjust the data acquired by the low-cost device with respect to the data measured by the more powerful ECG equipment (gold standard). The calibration procedure is carried out by comparing the morphological signals of both devices, that is, the signal contained in Lead I of each ECG. The lead placement configuration of both devices is shown in Figure 8. Because Kardia Mobile only has a Lead I configuration, which goes from the RA (right arm) to the LA (left arm), the lead placement of the MAC800 needs to include a Lead I as well. In our case, the Mason-Likar placement [39] is employed in the MAC800 because this configuration contains the Lead I signal (other leads are not used), which is necessary for comparison with the Lead I signal of the Kardia Mobile. The wires and adhesive electrodes of the MAC800 are much less vulnerable to artifacts and variability, providing a much cleaner and more stable ECG recording than the Kardia Mobile ECG (see Figure 8). In fact, we observed in our study that the PQ and QT intervals of the Kardia Mobile had different lengths compared to the same intervals of the MAC800 gold standard (see Figure 9). This means that the morphology of the Kardia Mobile ECG waveform differs slightly from the MAC800 signal. Consequently, this deviation in the waveform can be expected to affect the precision of Kardia Mobile's RR interval measurements and, therefore, the atrial fibrillation detection procedure.
For a deeper insight into the accuracy of the Kardia Mobile in comparison to the MAC800 equipment, a set of ECGs is taken for the same patient using both devices (Kardia Mobile and MAC800) simultaneously.
Although the absolute error of the Kardia Mobile acquired data is relatively low compared to the MAC800 recordings, it is not negligible, and the corresponding deviation should be taken into consideration.
Figure 9 shows four examples of each Lead I ECG, representing the difference between the Kardia Mobile signal (blue) and the MAC800 Healthcare signal (red). It should be noted that the y-axis scale differs notably between the two devices, due to the higher gain of the MAC800 with respect to the Kardia Mobile. Note, however, that this particularly significant gap does not affect the calibration procedure, since the RR interval is the metric of interest.
In order to provide a better estimation in the calibration procedure for the Kardia Mobile measurements of RR intervals, denoted RR_K, we propose a common modification of RR_K that reduces the deviation from the measurements given by the MAC800, denoted RR_M, by means of a linear regression of the form RR_M = beta_0 + beta_1 RR_K + error. The values estimated for the linear regression parameters in Equation (7) yield a coefficient of determination indicating an excellent goodness of fit. According to these estimates, to calibrate the RR interval samples given by Kardia Mobile, each measurement RR_K has to be increased by 6.79%, after which a global correction (the estimated intercept) is applied to all the samples.
Figure 10 illustrates how the proposed adjustment reduces the deviation of the Kardia Mobile RR intervals with respect to the RR intervals of the MAC800. Although the model has been trained with data sets from 20 patients (400 RR interval data samples), the calibration is intended to be applied to each patient individually. Thus, the RR intervals of an unseen patient measured with the Kardia Mobile are transformed into more precise RR intervals after applying the calibration procedure.
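The linear calibration can be sketched as an ordinary least-squares fit (illustrative Python; the synthetic paired samples below are generated from a hypothetical relation and do not reproduce the paper's estimated coefficients):

```python
import numpy as np

def fit_calibration(rr_kardia, rr_mac):
    """Ordinary least squares for RR_mac ≈ intercept + slope * RR_kardia,
    the linear calibration form described above."""
    slope, intercept = np.polyfit(np.asarray(rr_kardia, dtype=float),
                                  np.asarray(rr_mac, dtype=float), 1)
    return intercept, slope

def calibrate(rr_kardia, intercept, slope):
    """Apply the fitted correction to new Kardia Mobile RR samples."""
    return intercept + slope * np.asarray(rr_kardia, dtype=float)

# Synthetic paired RR samples (seconds), generated from a made-up relation
rr_k = [0.78, 0.85, 0.92, 1.00, 1.08]
rr_m = [0.01 + 1.05 * v for v in rr_k]
b0, b1 = fit_calibration(rr_k, rr_m)
print(round(b0, 4), round(b1, 4))  # → 0.01 1.05
```

Fitting on paired recordings from both devices and then applying `calibrate` to new Kardia Mobile samples mirrors the per-patient workflow described above.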
It is important to emphasize that we have described the calibration procedure (and its validation below) for illustrative purposes for future practitioners. The procedure is intended to improve the accuracy of the RR intervals extracted from any low-cost biomedical sensor. Nevertheless, our specific mathematical framework (ReAD-AF) is only slightly affected by linear transformations, and the difference before and after the calibration can be calculated (see Appendix C). This difference affects the predictive power (accuracy) of the algorithm in, at most, the fourth decimal place, as can be seen in Table 1.
5. Conclusions
In this paper, we have described the methodology for designing a prototype, yet fully functional, application that meets the following requirements: (i) it is robust against low-quality data, thereby enabling the use of multiple low-cost wrist-worn sensors, which are generally characterized by low accuracy in comparison to professional electrocardiographs; and (ii) it is effective for analyzing dynamic changes based on a time series of RR intervals.
Symbolic recurrence quantification analysis (SRQA) is applied to implement a predictive scheme for the detection of AF using, in principle, any commercially available sensor. Our approach only employs the RR intervals taken from an ECG signal in PDF or JPEG format, which further reduces the complexity of the pre-processing phase, enabling an efficient algorithm. Despite its good predictive performance for AF detection, it does not take into account the medical history of the patient and is restricted to the detection of AF. As a future research line, we plan to extend the logistic model to a multinomial scenario in which several types of arrhythmia (the most prevalent ones) will be classified and explained with new covariates extracted from the medical history of the patient.
In addition, it should be mentioned that our first preliminary tests show the computational efficiency of the entire application introduced here, which is a fundamental step towards its portability to mobile devices and, thus, its wide potential for provision in communities without sufficient resources to access medical care.
Finally, our study is intended to offer practitioners a foundation for further development of apps in the emerging digital health field based on low-cost sensors.