1 Introduction

The increased prevalence and burden of chronic lung disease on patients require greater attention towards cost-effective tools to facilitate and support patient care (Amdie and Woo 2020; Morrison et al. 2016; Soriano et al. 2020). Over the last few years, chronic disease management programs have incorporated elements of telehealth to maximize access to healthcare services and reduce costs (Wu et al. 2020). Telehealth is defined as a tool to facilitate virtual care, which may include mobile health applications (mHealth apps), web-based tools, telecommunication services, wearable devices and social media (Donevant et al. 2018; Fan and Zhao 2022). MHealth apps have features to help users understand and manage their disease by providing monitoring and feedback, education, medication reminders and rehabilitation support (Fan and Zhao 2022; Wu et al. 2020). Recent studies have explored the effectiveness and feasibility of incorporating mHealth apps into people's self-care to support behavior change (Hamine et al. 2015; Iribarren et al. 2021; Wang et al. 2021a). With the COVID-19 pandemic, interest in using mHealth apps increased significantly, as they were viewed as simple and accessible tools to safely promote virtual care (Dixit and Nandakumar 2021).

People living with chronic lung diseases present with several chronic pulmonary and extrapulmonary symptoms that limit their daily activities and mental well-being (Xie et al. 2020), impacting their quality of life (Song et al. 2022; Soriano et al. 2020). Managing these consequences and delaying disease progression are imperative. Effective disease management requires changes to patients' behaviors (Iribarren et al. 2021), encompassing elements of education, symptom control, and physical activity (Cornelison and Pascual 2019; Kelly et al. 2022; Song et al. 2022). Practice guidelines advocate for patient-centered approaches between patients and healthcare teams to adopt effective self-management behaviors, but their implementation is often poor (Hamine et al. 2015; Khusial et al. 2020; Roberts et al. 2013). This implementation gap may stem from patients' complex social and emotional needs and from healthcare providers' limited time and resources (Kelly et al. 2022; Khusial et al. 2020). MHealth apps, by contrast, are widely available and may help overcome these barriers (Blakey et al. 2018; Cornelison and Pascual 2019; Morrison et al. 2016; Roberts et al. 2013; Song et al. 2022) by empowering patients to adhere to their self-care regimen over long periods of time (Amdie and Woo 2020; Beerthuizen et al. 2020; Hamine et al. 2015). Patients have expressed interest in using mHealth interventions to learn and develop skills to manage their disease (Debon et al. 2019; Donevant et al. 2018; Roberts et al. 2013; Wang et al. 2021b). Reported benefits of mHealth apps include decreased hospitalization, improved symptom control and quality of life (Farzandipour et al. 2017; Khusial et al. 2020). However, some systematic reviews have reported no significant improvements in patient outcomes, possibly due to the heterogeneity of mHealth apps (Shaw et al. 2020; Yang et al. 2018). The reported designs and contents of mHealth apps in previous studies are inconsistent (Agarwal et al. 2021). Therefore, an assessment of mHealth apps for chronic lung disease is required to characterize their reported designs, qualities and integration into participants' self-management.

2 Materials and methods

2.1 Objective

The primary objective of this systematic review is to summarize the characteristics and features of mHealth apps for self-management in people with chronic lung diseases described in randomized controlled trials (RCTs).

2.2 Methods

A protocol was developed and registered with PROSPERO (CRD42021260205) on July 10, 2021. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline (Page et al. 2021) guided the conduct and reporting of this review (Supplementary Material Table 1).

2.2.1 Data sources and search

A structured search strategy was developed to identify relevant citations across five online databases: CINAHL (EBSCOHost), Medline, Embase, Scopus and the Cochrane Library. Medical subject headings (MeSH) and key terms related to (1) mHealth apps, (2) chronic lung disease, and (3) self-management were combined with Boolean operators. The MeSH terms and keywords were adapted for each database (Supplementary Material Table 2). Each database was searched from inception to June 2021, and the search was updated in May 2022. Reference lists of eligible studies were screened, and where an app's name was reported, it was used as a keyword to search for related studies. If full-text citations were unavailable, the authors were contacted for further information.
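To illustrate this structure only (the registered, database-specific strings are in Supplementary Material Table 2), a search of this shape can be assembled by OR-ing synonyms within each concept block and AND-ing the blocks together. The terms below are representative examples, not the actual registered strategy.

```python
# Illustrative sketch: representative terms only, not the registered strategy
# (see Supplementary Material Table 2 for the database-specific strings).
concept_blocks = {
    "mhealth_app": ["mobile application*", "mhealth", "smartphone app*"],
    "chronic_lung_disease": ["asthma", "COPD", "bronchiectasis", "cystic fibrosis"],
    "self_management": ["self-management", "self care", "patient education"],
}

def build_query(blocks):
    """OR synonyms within each concept block, then AND the blocks together."""
    ored = [" OR ".join(f'"{term}"' for term in terms) for terms in blocks.values()]
    return " AND ".join(f"({block})" for block in ored)

print(build_query(concept_blocks))
# ("mobile application*" OR "mhealth" OR "smartphone app*") AND ("asthma" OR ...) AND (...)
```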

2.2.2 Study selection

The search results were compiled and uploaded onto Clarivate EndNote X9.1 (Philadelphia, Pennsylvania) to remove duplicates. Titles and abstracts were screened by three reviewers (SQ, WM, AM) on Research Screener (Chai et al. 2021), a machine learning tool designed to increase screening efficiency. Research Screener is a validated web-based application that semi-automates abstract screening using an algorithm developed with machine learning methods (Chai et al. 2021). Research Screener access and details have been previously published (Chai et al. 2021). Full-text screening to identify eligible articles was completed by two reviewers (SQ, WM) on Covidence (Veritas Health Innovation, Australia). Disagreements were resolved by a fourth reviewer (AO). For the updated search in May 2022, Covidence was used by two reviewers (SQ, WM) to screen abstracts and review full-text articles, and any disagreements were resolved by AO.
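Research Screener's actual algorithm is described by Chai et al. (2021). Purely as a hedged illustration of the general idea behind such semi-automated tools, namely ranking unscreened abstracts by predicted relevance so that likely-eligible citations surface first, a minimal sketch with generic text classification (scikit-learn assumed; the toy abstracts and labels are invented) could look like this:

```python
# Generic relevance-ranking sketch, NOT Research Screener's implementation
# (see Chai et al. 2021). The toy data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A few abstracts already labelled by reviewers (1 = relevant, 0 = not).
seed_abstracts = [
    "Smartphone app for COPD self-management: a randomized controlled trial",
    "Asthma action plan delivered via a mobile application improves adherence",
    "Surgical outcomes of lobectomy in early-stage lung cancer",
    "Dietary intervention for type 2 diabetes: a cohort study",
]
seed_labels = [1, 1, 0, 0]

unscreened = [
    "Mobile health application supporting inhaler technique in asthma",
    "Genetic markers of idiopathic pulmonary fibrosis progression",
]

vectorizer = TfidfVectorizer(stop_words="english")
model = LogisticRegression().fit(vectorizer.fit_transform(seed_abstracts), seed_labels)

# Rank unscreened abstracts so the most probably relevant are read first;
# in practice the model would be refit as reviewers label more records.
scores = model.predict_proba(vectorizer.transform(unscreened))[:, 1]
for score, abstract in sorted(zip(scores, unscreened), reverse=True):
    print(f"{score:.2f}  {abstract}")
```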

Studies were included if they were: (1) RCTs, (2) investigating mHealth apps for disease self-management, and (3) in adult participants (≥ 18 years) with chronic lung disease. Chronic lung diseases included but were not limited to asthma, bronchiectasis, chronic obstructive pulmonary disease (COPD), cystic fibrosis, interstitial lung disease, idiopathic pulmonary fibrosis, lung cancer, pulmonary hypertension, sarcoidosis, asbestosis, and asthma-COPD overlap syndrome. MHealth apps for self-management were defined as mobile apps accessible on mobile devices (i.e., phones or tablets), excluding web-based platforms (Hamine et al. 2015), with features to help patients engage in activities to manage their condition (Kelly et al. 2022; Lagan et al. 2020). In addition, publications had to be in English, French or Portuguese, in alignment with the research team's language capabilities. Articles were excluded if the mHealth apps lacked interactive components (e.g., communication or monitoring only), were trialed in the pediatric population, or were published in languages other than English, Portuguese or French.

2.2.3 Data extraction and assessment criteria

Data were extracted by one reviewer (AB) and verified by a second reviewer (SQ or WM). Extracted data included: authors, publication dates, study design, participants' characteristics, clinical outcomes, and mHealth app descriptions (i.e., designs and implementation), listed in Supplementary Material. App characteristics and features were extracted using the mHealth Index and Navigation Database (MIND) evaluation framework (Harvard Medical School Teaching Hospital 2020; Lagan et al. 2021), described by Lagan et al. (2020). This framework, informed by 79 different app evaluation models, comprises 107 objective questions across 5 domains and has excellent interrater reliability (kappa ≥ 0.75) (Lagan et al. 2021).
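For illustration, the extraction record for each app can be pictured as a structure mirroring the five MIND domains reported in Sects. 3.2-3.6. The field names below are our hypothetical shorthand, not the framework's actual 107 questions.

```python
# Hypothetical shorthand for a per-app extraction record mirroring the five
# MIND domains (Sects. 3.2-3.6); the real framework comprises 107 questions
# (Lagan et al. 2021), so these fields are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MindRecord:
    app_name: str
    # Domain 1: background and access characteristics
    on_app_store: Optional[bool] = None       # None = not reported in the study
    accessibility_features: Optional[bool] = None
    # Domain 2: data safety and privacy
    privacy_policy_available: Optional[bool] = None
    privacy_policy_fkgl: Optional[float] = None
    # Domain 3: effectiveness and clinical foundation
    supporting_publications: List[str] = field(default_factory=list)
    # Domain 4: user experience and engagement
    features: List[str] = field(default_factory=list)
    # Domain 5: data integration and therapeutic alliance
    exports_to_emr: Optional[bool] = None

# Example with invented values, showing how unreported items remain None.
record = MindRecord(
    app_name="ExampleApp",  # hypothetical app
    on_app_store=True,
    privacy_policy_available=True,
    privacy_policy_fkgl=9.4,
    features=["symptom diary", "medication reminders"],
)
print(record)
```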

Supplemental files and other referenced publications (where applicable) were retrieved to facilitate data extraction. For example, one MIND question required reviewers to use a readability calculator to assess the readability of apps' privacy policies (Automatic Readability Checker 2022; Lagan et al. 2020). Where accessible, apps' privacy policies were retrieved and entered into the recommended readability calculator to determine their Flesch-Kincaid grade level (FKGL) (Lagan et al. 2020). The FKGL estimates the reading level of a text as a U.S. education grade level (e.g., FKGL scores of 8.0-8.9 indicate that completion of grade 8 is required to read the text), which is useful for identifying suitable resources for patients (Jindal and MacDermid 2017).
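The FKGL itself is a simple function of average sentence length and average syllables per word: 0.39 × (words/sentences) + 11.8 × (syllables/words) - 15.59. A minimal sketch follows, using a crude vowel-group syllable heuristic that dedicated calculators, such as the one used in this review, refine; the sample policy text is invented.

```python
# Minimal FKGL sketch. The syllable counter is a crude vowel-group heuristic;
# dedicated readability calculators are more refined, so treat this as
# illustrative only. The sample policy text is invented.
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (minimum 1 per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(text):
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

policy = ("We collect usage data to improve the application. "
          "You may request deletion of your personal information at any time.")
print(f"FKGL = {fkgl(policy):.1f}")  # approximate grade level of this excerpt
```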

2.2.4 Risk of bias assessment

Since this systematic review did not assess or report on the effectiveness and outcomes of mHealth apps, and given the use of the MIND framework to evaluate the mHealth interventions, a risk of bias assessment was not deemed relevant. Risk of bias assessments are meant to identify potential study design and outcome biases (Drucker et al. 2016), whereas the MIND framework was chosen specifically to evaluate the details of the mHealth app interventions, which was more suitable for this review.

3 Results

A total of 95,516 papers were retrieved; 86,033 citations remained after duplicates were removed, and 12,905 (15%) articles were screened. The updated search retrieved 7386 new citations. After applying the eligibility criteria, 16 studies were included (Fig. 1). During data extraction, one RCT (North et al. 2020) reported using a previously created mHealth app, myCOPD (Crooks et al. 2020). Therefore, the MIND assessment for myCOPD was completed using data reported by both Crooks et al. (2020) and North et al. (2020). The complete MIND evaluations are available in Supplementary Material.

Fig. 1 PRISMA flow diagram for study selection. Original searches were completed in June 2021 and updated searches in May 2022. The updated n reflects search results between June 2021 and May 2022

3.1 Study details and participants

A total of 16 studies were included: 11 (69%) reported clinical trial registrations (Beerthuizen et al. 2020; Bentley et al. 2020; Boer et al. 2019; Crooks et al. 2020; Farmer et al. 2017; Kwon et al. 2018; Lin et al. 2022; Mahmoud et al. 2022; Mosnaim et al. 2021; North et al. 2020; Zairina et al. 2016). Across the 16 studies, 15 distinct mHealth apps were evaluated: 7 for asthma self-management (47%) (Beerthuizen et al. 2020; Cingi et al. 2015; Kim et al. 2016; Lin et al. 2022; Mahmoud et al. 2022; Mosnaim et al. 2021; Zairina et al. 2016) and 8 for COPD self-management (53%) (Bentley et al. 2020; Boer et al. 2019; Crooks et al. 2020; Farmer et al. 2017; Kwon et al. 2018; North et al. 2020; Park et al. 2020; Vorrink et al. 2016). Nine (63%) studies reported their app names (Beerthuizen et al. 2020; Bentley et al. 2020; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Mahmoud et al. 2022; Mosnaim et al. 2021; North et al. 2020; Zairina et al. 2016), and four of these (44%) were findable on an app store (Android™ or Apple™) (Beerthuizen et al. 2020; Mahmoud et al. 2022; Mosnaim et al. 2021; North et al. 2020). However, none of these apps was downloadable, as access was restricted to study participants. Nine apps (60%) (Bentley et al. 2020; Boer et al. 2019; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Lin et al. 2022; North et al. 2020; Park et al. 2020; Zairina et al. 2016) had designs informed by multiple resources, including experts in the field, previous clinical studies and international guidelines; the remaining 6 apps (40%) did not explicitly provide information about their design (Beerthuizen et al. 2020; Cingi et al. 2015; Mahmoud et al. 2022; Mosnaim et al. 2021; Vorrink et al. 2016; Wang et al. 2021b). Apps were commonly created to support self-management, improve medication adherence, provide action plans, control symptoms, facilitate behavioral changes and enable monitoring by clinicians. Studies were conducted in 8 countries (Netherlands, Turkey, Korea, China, Egypt, Australia, United States of America, United Kingdom), and intervention length varied from 8 to 52 weeks. Frequency of app use also varied: participants were instructed to use their apps ad libitum (4, 27%) (Beerthuizen et al. 2020; Cingi et al. 2015; Crooks et al. 2020; Vorrink et al. 2016), daily (7, 47%) (Bentley et al. 2020; Boer et al. 2019; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Mosnaim et al. 2021; North et al. 2020), weekly (2, 13%) (Park et al. 2020; Zairina et al. 2016) or for specific circumstances (1, 7%) (Mahmoud et al. 2022); one study did not specify usage (7%) (Lin et al. 2022). The studies reported a spectrum of patient-relevant outcome measures, including medication adherence, quality of life, spirometry, exercise tolerance, exacerbations, and hospital admissions. Study details are summarized in Table 1.

Table 1 Names, sources, descriptions and study characteristics for asthma and COPD mHealth apps. A total of 15 apps are reported; 1 app was used and trialed in 2 studies.

Studies evaluating apps for patients with asthma had sample sizes ranging from 22 to 461 in the interventional arm and 11 to 462 in the control arm. Only one study (7%) reported a 12-month follow-up period (Beerthuizen et al. 2020). Retention rates at follow-up ranged from 67 to 97% in the treatment arms and 43 to 97% in the control arms. Participants' mean age ranged from 31 to 49 years in the treatment groups and 32 to 51 years in the control groups. Only one study provided details about participants' comorbidities (Zairina et al. 2016).

For COPD-specific apps, sample sizes ranged from 19 to 110 in the interventional arm and 11 to 81 in the control arm. Two studies reported follow-up periods, of 2 and 12 months (Bentley et al. 2020; Boer et al. 2019). Retention rates at follow-up ranged from 53 to 93% in the treatment groups and 55 to 97% in the control groups. Mean age ranged from 62 to 70 years in the treatment groups and 63 to 70 years in the control groups, and five studies reported their participants' comorbidities (Boer et al. 2019; Farmer et al. 2017; Kwon et al. 2018; Park et al. 2020; Wang et al. 2021b). Additional participant characteristics are in Table 2.

Table 2 Participants’ baseline characteristics for each study (n = 16)

3.2 Background and access characteristics

Nine studies (60%) provided participants with a mobile device containing the app (Bentley et al. 2020; Boer et al. 2019; Farmer et al. 2017; Lin et al. 2022; Mahmoud et al. 2022; Park et al. 2020; Vorrink et al. 2016; Wang et al. 2021b; Zairina et al. 2016), and six (40%) provided access to the app via an invitation or registration code (Beerthuizen et al. 2020; Cingi et al. 2015; Kim et al. 2016; Kwon et al. 2018; Mosnaim et al. 2021; North et al. 2020). It is unclear when the apps were created, released and updated, since many apps (11, 73%) were not available on the app marketplace. Two studies (13%) mentioned their apps had accessibility features: one (7%) allowed participants to adjust text size (Park et al. 2020), and another (7%) provided participants with larger tablets to increase text size for comfortable viewing (Velardo et al. 2017).

3.3 Data safety and privacy

Privacy policies were available for four (27%) of the 15 apps and were accessible through their app store page or website (i.e., PatientCoach (Beerthuizen et al. 2020), Propeller Health (Mosnaim et al. 2021), Clip-tone buddy (Mahmoud et al. 2022) and myCOPD (Farmer et al. 2017)). These privacy policies had FKGL scores ranging from 8 to 15.

Regarding data usage and privacy, eight apps (53%) declared their data use to participants (Beerthuizen et al. 2020; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Mahmoud et al. 2022; Mosnaim et al. 2021; North et al. 2020; Park et al. 2020) and seven (47%) declared use of participants' personal information (Beerthuizen et al. 2020; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Mahmoud et al. 2022; Mosnaim et al. 2021; North et al. 2020). Three apps (20%) mentioned users could opt out of data collection (Beerthuizen et al. 2020; Farmer et al. 2017; Mosnaim et al. 2021), and four (27%) allowed users to delete their own data (Beerthuizen et al. 2020; Mahmoud et al. 2022; Mosnaim et al. 2021; North et al. 2020). Twelve apps (80%) appeared to store their data on their own servers (Beerthuizen et al. 2020; Bentley et al. 2020; Boer et al. 2019; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Lin et al. 2022; Mahmoud et al. 2022; Mosnaim et al. 2021; Vorrink et al. 2016; Wang et al. 2021b; Zairina et al. 2016), seven (47%) described their security systems (Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Mahmoud et al. 2022; Mosnaim et al. 2021; North et al. 2020; Zairina et al. 2016), and three (20%) mentioned data sharing with third parties (Mahmoud et al. 2022; Mosnaim et al. 2021; North et al. 2020). Data safety and privacy details are summarized in Table 3.

Table 3 Data safety and privacy features and details of the 15 trialed apps

3.4 App effectiveness and clinical foundation

This domain assesses apps for their clinical foundation and effectiveness in the intended population (Lagan et al. 2020). Five apps (33%) had additional peer-reviewed publications describing their effectiveness or feasibility (Boer et al. 2019; Farmer et al. 2017; Kwon et al. 2018; Mosnaim et al. 2021; North et al. 2020). App effectiveness data are outlined in Table 4.

Table 4 Clinical data to support apps' effectiveness and clinical foundation (n = 15)

3.5 User experience and engagement

The apps varied in their input requirements, output data, engagement styles and features. None of the apps reported whether access to participants' contact lists, cameras, or microphones was required for use. Common input requirements were questionnaires (8, 53%) (Beerthuizen et al. 2020; Cingi et al. 2015; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Lin et al. 2022; North et al. 2020; Park et al. 2020), journaling (7, 47%) (Beerthuizen et al. 2020; Cingi et al. 2015; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; North et al. 2020; Park et al. 2020), step tracking (6, 40%) (Beerthuizen et al. 2020; Bentley et al. 2020; Farmer et al. 2017; North et al. 2020; Park et al. 2020; Vorrink et al. 2016), and data from external hardware (9, 60%) (Beerthuizen et al. 2020; Bentley et al. 2020; Boer et al. 2019; Kim et al. 2016; Kwon et al. 2018; Mosnaim et al. 2021; North et al. 2020; Park et al. 2020; Zairina et al. 2016). Ten apps (67%) provided participants with information and resources for educational purposes (Beerthuizen et al. 2020; Bentley et al. 2020; Cingi et al. 2015; Farmer et al. 2017; Kim et al. 2016; Lin et al. 2022; North et al. 2020; Park et al. 2020; Vorrink et al. 2016; Wang et al. 2021b), seven (47%) had push notifications (Beerthuizen et al. 2020; Cingi et al. 2015; Kim et al. 2016; Kwon et al. 2018; Mosnaim et al. 2021; North et al. 2020; Park et al. 2020), and three (20%) had reminders (Beerthuizen et al. 2020; Kwon et al. 2018; Mosnaim et al. 2021). Five apps (33%) reported graphical visualizations (Bentley et al. 2020; Kwon et al. 2018; North et al. 2020; Park et al. 2020; Vorrink et al. 2016), and four (27%) provided text summaries (Bentley et al. 2020; Farmer et al. 2017; Mosnaim et al. 2021; Vorrink et al. 2016). Two apps (13%) allowed data sharing to users' social media accounts (Cingi et al. 2015; Kim et al. 2016). Five apps (33%) had features allowing participants to connect with healthcare providers remotely, through messaging (Cingi et al. 2015; Mosnaim et al. 2021; Vorrink et al. 2017) and phone calls (Farmer et al. 2017; Park et al. 2020).

Eleven apps (73%) had features to support collaboration between participants and healthcare professionals (Beerthuizen et al. 2020; Bentley et al. 2020; Cingi et al. 2015; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Lin et al. 2022; Mosnaim et al. 2021; North et al. 2020; Park et al. 2020; Vorrink et al. 2016). Six apps (40%) delivered content to participants in video format for educational or motivational purposes (Farmer et al. 2017; Lin et al. 2022; Mahmoud et al. 2022; North et al. 2020; Park et al. 2020; Wang et al. 2021b). Participants could use the apps to send messages to peers or healthcare professionals (4, 27%) (Cingi et al. 2015; Park et al. 2020; Vorrink et al. 2016; Wang et al. 2021b) or to network with peers (2, 13%) (Park et al. 2020; Wang et al. 2021b). Other features supported participants in setting goals (9, 60%) (Beerthuizen et al. 2020; Bentley et al. 2020; Cingi et al. 2015; Farmer et al. 2017; Kim et al. 2016; North et al. 2020; Park et al. 2020; Vorrink et al. 2016; Wang et al. 2021b) and in tracking medications (7, 47%) (Cingi et al. 2015; Kwon et al. 2018; Lin et al. 2022; Mahmoud et al. 2022; Mosnaim et al. 2021; North et al. 2020; Zairina et al. 2016), exercise (7, 47%) (Beerthuizen et al. 2020; Bentley et al. 2020; Kwon et al. 2018; North et al. 2020; Park et al. 2020; Vorrink et al. 2016; Wang et al. 2021b), mood (7, 47%) (Boer et al. 2019; Cingi et al. 2015; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; North et al. 2020; Park et al. 2020) or mindfulness (2, 13%) (Farmer et al. 2017; North et al. 2020). Details of each app's user engagement and style are reported in Table 5.

Table 5 Input and output data for each app (n = 15), and their engagement styles and features. For a full list, see Supplementary Material

3.6 Data integration and therapeutic alliance

Seven studies (47%) stated that their apps had to be used in conjunction with the healthcare or research team (Beerthuizen et al. 2020; Bentley et al. 2020; Farmer et al. 2017; Kim et al. 2016; Kwon et al. 2018; Mosnaim et al. 2021; North et al. 2020), or else access to the app was not permitted. None of the studies clearly indicated whether participants owned their data. Two apps (13%) mentioned that participants could export their data (Kwon et al. 2018; North et al. 2020), and four apps (27%) could send data to users' electronic medical records (Boer et al. 2019; Farmer et al. 2017; Kwon et al. 2018; Mosnaim et al. 2021). Details are reported in Table 6.

Table 6 Usage and data interoperability of the apps (n = 15); all apps were designed as patient-facing self-management tools

4 Discussion

In this review, 15 mHealth apps were trialed in 16 RCTs, with inconsistent reporting of designs and characteristics. Intervention lengths, follow-up periods and frequency of use varied considerably among the studies, and app designs were informed by multiple sources. Most studies did not provide sufficient information to complete the majority of MIND framework domains. Information regarding engagement and features for clinical use was frequently reported, with common features designed for education, symptom tracking, medication reminders and clinical support. There was a lack of information on apps' background and characteristics, and since direct access to the apps was not available, it was unclear whether they were still being evaluated or how often they were updated. In addition, there were minimal details about their privacy and security functions, as well as scant discussion of the apps' clinical foundation.

Data privacy was a difficult domain to assess across studies, although it is an important determinant of acceptability and clinical use (Dixit and Nandakumar 2021; Fan and Zhao 2022; Wu et al. 2020). The available privacy policies had FKGL scores ranging from 8 to 15, indicating that understanding them required reading levels ranging from grade 8 to college level. It is important for users to understand how their personal information and data are handled prior to using mHealth apps (Agarwal et al. 2016; Lagan et al. 2020), and it is recommended that this information be written at a grade 8 level or lower to accommodate users (Jindal and MacDermid 2017). This is imperative, as these resources must be readable and understandable to facilitate users' self-management (Jindal and MacDermid 2017).

The RCTs included in this review investigated mHealth apps for asthma and COPD specifically. Unlike past systematic reviews, this review systematically examined the reported app designs and characteristics, which have rarely been evaluated and reported before. From our MIND assessment, we were able to identify in-depth differences in the interventions' foundational designs, features, engagement styles and intended use. Three apps (20%) for asthma self-management did not provide educational features but instead facilitated medication tracking in conjunction with external hardware (i.e., puffers with sensors) (Mahmoud et al. 2022; Mosnaim et al. 2021; Zairina et al. 2016). The remaining four asthma apps (27%) appear to primarily provide didactic education and symptom monitoring, along with additional features for journaling, notifications and collaboration with clinicians (Beerthuizen et al. 2020; Cingi et al. 2015; Kim et al. 2016; Lin et al. 2022). In the context of COPD, one study mentioned that its app, ACCESS, was created explicitly to detect COPD exacerbations and guide patients through them (Boer et al. 2019), whereas the other COPD apps appear to provide combinations of didactic education, symptom tracking, exercise encouragement and collaboration with clinicians (Bentley et al. 2020; Crooks et al. 2020; Farmer et al. 2017; Kwon et al. 2018; North et al. 2020; Park et al. 2020; Vorrink et al. 2016, 2017; Wang et al. 2021b). Two key components appear frequently across these apps: interactive feedback and the ability to collaborate with healthcare teams, features well suited to optimizing acceptability and implementation among patients (Blackstock and Roberts 2021; Morrison et al. 2016; Wang et al. 2021a). However, the apps in these RCTs may have additional designs or features not discussed here, as this study was limited to synthesizing information that was inconsistently reported across studies.

The lack of information and knowledge growth on these app interventions after their RCTs deserves emphasis. Of the 15 apps identified in this review, four (27%) were searchable on the app marketplace, but only two (13%) had public websites (Propeller Health and myCOPD) (Crooks et al. 2020; Mosnaim et al. 2021; North et al. 2020). These two mHealth app development teams have continued to assess their apps' effectiveness in different subgroups, with clear outlines of their ongoing research, publications and presentations available to the public (Propeller Health, https://propellerhealth.com/clinical-research/published-research/; and myCOPD, https://mymhealth.com/studies). This open communication is ideal, as it provides clarity to target users and can support future collaborations with academic centers to strengthen understanding of whether these apps are well designed and suitable for self-management in chronic lung disease. Unfortunately, the remaining apps did not identify additional clinical evidence to support their use, and some studies did not report their app names (see Table 4) (Lin et al. 2022; Park et al. 2020; Vorrink et al. 2016; Wang et al. 2021b). Given these factors, it is unclear whether these apps are still in use, under trial or updated regularly, warranting caution in generalizing these findings to clinical practice. The mHealth space currently needs consistency in its app interventions, which may be achieved through transparency and continued efforts to build on the evidence base of established apps, as exemplified by Propeller Health and myCOPD.

There are uncertainties with using mHealth apps to facilitate chronic respiratory care, though mHealth apps may have the potential to promote self-management and improve physical activity and quality of life (Kiani et al. 2022; Shaw et al. 2020; Yang et al. 2018). From our MIND assessment across the 15 distinct apps trialed in RCTs, it is apparent that app designs and features varied considerably and were underreported, likely preventing their results from being reproduced or generalized to other apps and populations. This may explain the variable effectiveness shown in past systematic reviews of mHealth apps for people with COPD or asthma, which reported high heterogeneity across the included studies (Debon et al. 2019; Farzandipour et al. 2017; Iribarren et al. 2021; Shaw et al. 2020; Yang et al. 2018). Therefore, standardizing these interventions is necessary to ensure their quality, including their methods for implementation, monitoring and outcome assessment (Kiani et al. 2022; Shaw et al. 2020). Ensuring consistency in the quality of these interventions continues to be a challenge, as there are significant variations in design elements and in the quality assessment tools for mHealth apps in the chronic disease space (Agarwal et al. 2021). Future research should use a standardized approach to ensure interventions are created to a consistent quality (Agarwal et al. 2021; Shaw et al. 2020). Among the available app evaluation tools, the MIND framework has demonstrated, in this review and in the mental health space (Spadaro et al. 2022), that its comprehensiveness can likely ensure all potential domains of app quality are accounted for (Lagan et al. 2020, 2021). Because the MIND framework was informed by a compilation of many existing mHealth app evaluation models, it could serve as a checklist for the quality control of mHealth app interventions created and reported in future studies, specifically regarding their foundational design, features and interactive components (Lagan et al. 2020, 2021).

Our review has several strengths. To our knowledge, this is the first systematic review to use an established framework to describe the reported characteristics of mHealth apps for chronic lung diseases in RCTs. The MIND framework is comprehensive (Lagan et al. 2020), guiding the assessment of essential app designs and characteristics. Furthermore, we sought additional resources to ensure we thoroughly completed the MIND assessment for each mHealth app. Another strength is the extensive search strategy we implemented and updated to ensure all possible studies were screened for inclusion. With Research Screener (Chai et al. 2021), we efficiently screened a large volume of citations. Research Screener's sensitivity threshold ranges between 4 and 32%, and past systematic reviews reported that all relevant articles were found after 15% of imported records were screened, similar to our screening total (Chai et al. 2021).

This study has a few limitations. Although our search strategy returned a large volume of results, broad key terms were necessary to capture all possible articles, as the taxonomy for this type of technological intervention is inconsistent. To facilitate the process, Research Screener was used (Chai et al. 2021). Another limitation is the lack of access to the apps themselves; some were found on the app marketplace but required special access, while others were simply described in their reports with visual screenshots.

5 Conclusion

Using a comprehensive framework, this review described the designs, qualities and characteristics of mHealth interventions reported in RCTs. The findings demonstrated the differences between mHealth apps across trials and the potential challenges healthcare providers may face in identifying the most suitable app to integrate into clinical care plans. This review emphasized the need for intervention consistency and reporting, and the benefits of using the MIND framework to guide future app development and reporting. Advocating for the use of the MIND framework will minimize intervention heterogeneity in future studies, strengthening their quality and evidence to facilitate our understanding of their effectiveness in self-management for chronic lung disease.