Added: Shawon Frantz - Date: 29.09.2021 13:17
Detailed inter-rater reliability was analyzed with the weighted kappa, intraclass correlation, and Krippendorff alpha for each MARS dimension. General consumers can now easily access drug information and quickly check for potential drug-drug interactions (PDDIs) through mobile health (mHealth) apps. With an aging population in Canada, more people have chronic diseases and comorbidities, leading to increasing numbers of medications.
The use of mHealth apps for checking PDDIs can be helpful in ensuring patient safety and empowerment. The aim of this study was to review the characteristics and quality of publicly available mHealth apps that check for PDDIs. The mean MARS score was 3. The information dimension was associated with the highest score 3. The total number of features per app, average rating, and price were significantly associated with the total MARS score.
Some apps provided accurate and comprehensive information about potential adverse drug effects from PDDIs. Given the potentially severe consequences of incorrect drug information, there is a need for oversight to eliminate low-quality and potentially harmful apps. Because managing PDDIs is complex in the absence of complete information, secondary features such as medication reminders, refill reminders, medication history tracking, and pill identification could help enhance the effectiveness of PDDI apps. Potential drug-drug interactions (PDDIs) have been a prevalent source of preventable problems that can occur in any age group and increase costs to health care systems [ 1 ].
A PDDI occurs when an individual is prescribed two drugs that are known to interact. An occurrence of a drug-drug interaction (DDI) is defined as a clinical alteration of the exposure or response to a drug as a result of coadministration. DDIs can be clinically relevant when the result of the interaction warrants the attention of health care professionals (HCPs).
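At its core, a pair-wise PDDI check of the kind the reviewed apps perform is a lookup over all drug pairs in a medication list. The sketch below illustrates the idea; the knowledge base, drug names, and function name are illustrative assumptions, not taken from any reviewed app:

```python
from itertools import combinations

# Hypothetical interaction knowledge base: each key is an unordered pair of
# generic drug names, each value a short description of the interaction.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "increased myopathy risk",
}

def check_pddis(medication_list):
    """Return all potentially interacting pairs found in a medication list."""
    hits = []
    for a, b in combinations(medication_list, 2):
        pair = frozenset({a, b})
        if pair in KNOWN_INTERACTIONS:
            hits.append((a, b, KNOWN_INTERACTIONS[pair]))
    return hits

# Example: a three-drug list yields one flagged pair
flagged = check_pddis(["warfarin", "aspirin", "metformin"])
```

Real apps back this lookup with a curated drug-interaction database; the combinational check scales quadratically with the number of medications, which matters for polypharmacy.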
Most PDDIs are preventable, but they remain a significant problem for patients and the health care system [ 3 , 6 ]. It has been observed that physicians are not always aware of clinically significant drug interactions [ 7 , 8 ] and may underestimate the effects of PDDIs [ 9 ].
Other factors, such as high workload in pharmacies, could also lead to a higher risk of PDDIs for patients [ 10 , 11 ]. DDIs have also been identified as a significant portion of the overall adverse drug reactions (ADRs) resulting in hospitalization among older adults [ 12 ]. One possible solution that has been proposed is to use a decision support system to detect and avoid PDDIs [ 7 , 9 ].
With the rise of smartphones and mobile apps, decision support systems for PDDIs are now within the reach of consumers and patients and no longer exclusive to HCPs. This is an opportunity to engage and empower patients by providing the tools necessary to detect, avoid, and report ADR events stemming from DDIs [ 13 - 17 ]. The potential benefit for older adults with polypharmacy (the use of multiple medications) is deemed greater because of the multiple prescribing providers involved in their care, a substantial risk factor for medication errors and ADR events [ 18 ].
Mobile health (mHealth) apps with PDDI decision support are not subject to Food and Drug Administration regulation [ 19 ], and this may pose a substantial threat to the safety of consumers and patients. To our knowledge, the quantity, features, characteristics, and efficacy of the available PDDI mHealth apps on the market have never been systematically assessed. Therefore, understanding the characteristics of these mHealth apps is important in planning future interventions or policies aimed at patient-centered care and patient safety.
This systematic review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol [ 21 ] as closely as possible, but it deviated in a few instances because of the characteristics of mHealth app databases, which differ from scholarly reference databases for published articles. To ensure the review process is transparent and replicable, detailed descriptions of each step are provided below. Our review aimed to identify apps that were publicly available to Canadians in English. This study developed a keyword search procedure to identify potentially eligible apps (Textbox 1).
First, the searcher was instructed to log out of Google in the browser to prevent any personalization of search results. The search terms were deliberately entered in all lower-case letters and in quotation marks for consistent and comprehensive search results. As operating systems and apps are updated routinely, searches on both stores were conducted on the same day in December. Additionally, the searches were performed on a designated set of devices on the same network to obtain consistent search results and avoid deviations caused by personalized search [ 23 ].
Search results were extracted and saved in a spreadsheet for the next stage of app selection. Following the search of the two databases, all the apps identified for each search term were screened in two stages. This study included apps that claimed to check for PDDIs in their description, were published in English, and were last updated in or later. Apps were excluded if they targeted nongeneral consumers, passively informed users of PDDIs (ie, did not allow a pair-wise or combinational interaction check), checked for drug interactions for pets and animals, or were specific to a particular disease or drug class. After screening the results for each search term, the selected app names were aggregated.
If an app was listed in both stores, this study considered the two versions separately and examined both to capture potentially varying features and user reviews. Second, the authors downloaded and installed the remaining apps from the first step to verify their eligibility one more time. Apps that failed to launch after three attempts on the test devices were excluded.
A set of general information about the apps was extracted following previous app review studies [ 24 , 25 ]. General app information provides context such as availability, affordability, and user satisfaction. A set of secondary features that can further empower end users beyond the PDDI check feature was identified from a literature review [ 24 , 26 - 28 ]. In summary, the two extracted sets of information were as follows: (1) general information about the apps: last updated date, price, and user rating; and (2) other relevant secondary features that the apps offered:
Medication management-related features: reminders to take medication, reminders to refill medication, medication history tracking, pill identification, searching for medications by generic or brand name, and access to a medication database. Security- and privacy-related features: password protection for user data and multiple user support. Multimedia Appendix 1 presents the secondary features extracted and examined for each app. The Mobile App Rating Scale (MARS), an expert-based rating scale designed to assess the quality of mHealth apps, was used to critically and systematically evaluate the quality of the apps [ 20 ] (see Multimedia Appendix 2 for the detailed MARS scores of all included apps).
This expert scale consists of multiple dimensions that assess different quality aspects of apps, including end-user engagement, features, aesthetics, content quality, and subjective quality [ 20 ]. This rating scale has been increasingly adopted in recent years for evaluating mHealth apps in areas such as mindfulness [ 29 ], weight loss [ 25 , 30 ], smoking cessation [ 30 ], self-care [ 31 ], online well-being [ 32 ], and medication adherence [ 24 ].
A study has shown high internal consistency in the total score and subscales, as well as strong interrater reliability (IRR) [ 20 ]. Moreover, the use of a standardized assessment scale such as MARS for evaluating mHealth apps has been recommended by various researchers [ 33 - 35 ]. The popularity of MARS has led to the further development of an Italian version [ 36 ] and an end-user version for nonresearchers [ 37 ].
The last dimension of MARS is app subjective quality, which captures the subjective opinions of the reviewers. To keep the quality assessment process as consistent and objective as possible, the subjective quality dimension was omitted from this review. A previous study that employed MARS as an objective assessment method also excluded the subjective quality dimension [ 25 ]. Instead, relevant information was captured from the app databases, including the price and app ratings.
Before rating the apps, each rater read and familiarized themselves with the MARS protocol. A group discussion was then held to achieve consensus on the rating criteria, and the first app was rated as a group. The need for an objective set of example PDDIs arose for MARS questions 15 and 16, which assess the comprehensiveness and accuracy of the content and information. On the basis of a careful review of the literature [ 38 , 39 ], we developed a list of PDDIs with 20 true positive and six false positive examples (Multimedia Appendix 3).
The percentage of correctly identified and described PDDIs was scaled to a range from 1 to 5 for questions 15 and 16. No studies have reported on the details of how the accuracy and comprehensiveness of app content were assessed. Two raters assessed each app individually.
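One plausible way to map a percentage of correctly handled PDDI examples onto the 1-5 item scale is a linear transformation. The exact formula is not reported, so the mapping below (0% maps to 1, 100% maps to 5) and the function name are assumptions:

```python
def scale_to_item_score(percent_correct):
    """Linearly map an accuracy percentage (0-100) onto a 1-5 rating scale.

    Assumed mapping (not specified in the source): 0% -> 1, 100% -> 5.
    """
    return 1 + 4 * (percent_correct / 100)

# With the 26-example test list (20 true positives, 6 false positives),
# an app that handles 13 of 26 examples correctly would score 3.0:
score = scale_to_item_score(13 / 26 * 100)
```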
The kappa value was computed with quadratic weights applied to differing ratings. The ICC coefficient was calculated with a two-way random-effects model for agreement. The weighted kappa, Krippendorff alpha, and ICC were calculated per dimension and across all apps.
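The quadratic-weighted kappa penalizes larger rating disagreements more heavily than near-misses. A minimal pure-Python sketch, assuming integer ratings on a 1..k scale (the analysis itself was done in R):

```python
def quadratic_weighted_kappa(r1, r2, categories=5):
    """Cohen's kappa with quadratic weights for two raters on a 1..k scale."""
    n = len(r1)
    k = categories
    # Joint distribution of the two raters' scores
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a - 1][b - 1] += 1 / n
    p1 = [sum(row) for row in obs]                       # marginal of rater 1
    p2 = [sum(row[j] for row in obs) for j in range(k)]  # marginal of rater 2
    # Quadratic disagreement weight: 0 for agreement, 1 for maximal disagreement
    w = lambda i, j: (i - j) ** 2 / (k - 1) ** 2
    observed = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - observed / expected
```

Perfect agreement yields 1, chance-level agreement yields 0, and systematic disagreement yields negative values.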
Each dimension in MARS was analyzed using the mean value, as recommended by the developers [ 20 ]. The difference in app quality between the two app stores was analyzed with t tests. A significance level of . All analyses were performed in R version 3. The app store search was conducted in December. After removing duplicates in each database, the authors reviewed the descriptions of the apps against the inclusion and exclusion criteria (Figure 1).
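The between-store comparison above uses a two-sample t test. The sketch below computes Welch's t statistic and its approximate degrees of freedom; the original analysis was done in R, and the sample data here are hypothetical:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical mean MARS scores per app in each store
android_scores = [3.1, 3.4, 2.9, 3.8, 3.2]
ios_scores = [3.0, 3.6, 3.3, 2.8]
t_stat, df = welch_t(android_scores, ios_scores)
```

The p-value would then be obtained from the t distribution with `df` degrees of freedom (eg, via `scipy.stats.t.sf`).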
Reviews were initiated for 25 apps, but the authors excluded two additional apps identified as duplicate multi-language versions, leaving a total of 23 apps for this study (Figure 1). Seven apps were listed in both stores.
Table 1 summarizes the general information of the reviewed apps and the mean MARS scores. NA: not available. The last updated dates for the Android apps ranged from April to December, whereas those of the iOS apps ranged from July to December. The average rating for the apps from the Google Play Store was 3.
Secondary features (ie, features other than the PDDI check) were extracted and examined for each app. On average, the apps had 3. The overall number of apps per secondary feature is shown in Figure 2. The mean MARS score of the 23 apps was 3. The IRR between the two raters as assessed by the weighted kappa was. The mean scores of the four dimensions of MARS were examined to investigate the magnitude of the differences in quality in each dimension.
The functionality dimension had the highest mean score 3. The functionality dimension also had the most variability (Figure 3). Each point represents the score for an individual app. The box plot shows the median, the first and third quartiles, and the minimum and maximum scores. The general and functional characteristics of the 23 apps were examined for correlations with the MARS score (Table 2). The general and functional characteristics, including the average user rating and the total number of features, were statistically significantly associated with the total MARS score (Table 2).
Statistically significant associations were observed among the general and functional characteristics, including the total number of features, price, and average user rating (Table 2). Within the MARS dimensions, all were statistically significantly correlated with each other, except for the information dimension (Table 2).
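Correlations like those in Table 2 can be computed with a rank correlation, which suits ordinal ratings and skewed variables such as price. The source does not name the coefficient used, so the Spearman implementation below is an assumption:

```python
def rank(xs):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))
```

For example, a negative `spearman(prices, mars_totals)` would correspond to the inverse price-quality relationship reported below.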
In this app review study, a systematic search strategy was used to find PDDI apps. To our knowledge, this is the first systematic review of apps that offer decision support for PDDI checking. The 23 included apps were analyzed to extract general and functional characteristics, and their quality was assessed using MARS. App price had statistically significant negative correlations with three of the four MARS dimensions and with the number of features.
This demonstrates that app quality is not always reflected in the selling price. A plausible explanation for this counterintuitive, inverse relationship is that free apps may have been developed by companies and organizations with sufficient resources to hire expert developers, with the apps built to expand consumer reach. In contrast, individual developers with limited resources may rely on generating revenue from app sales, while the quality of their apps may not match that of apps developed by well-resourced companies and organizations.
Further research should investigate the relationship between the price of consumer mHealth apps and their quality, as well as its impact on consumer perception. The primary features of the examined mHealth apps were providing drug information to users and checking for PDDIs. Despite this aim, a low average score in the information dimension was found based on MARS. In particular, MARS questions 15 and 16, which assessed accuracy and comprehensiveness, scored on average 2. To worsen the problem, less than half of the correctly identified PDDIs 2.
The inability to detect PDDIs and the provision of incomplete or incorrect information are significant threats to patient safety. This polarized quality of information found in mHealth apps further raises the question of what tools are available for consumers to evaluate and select high-quality apps. The average user rating was significantly correlated with the information dimension, indicating that the average user rating can potentially be an important tool for selecting mHealth apps.
Other resources are also available, such as app clearinghouses that recommend mHealth apps to consumers based on the results of systematically evaluating the usability, quality, accuracy, or evidence base of the app and its content [ 40 ]. These app clearinghouses hold promise for enhancing consumer safety, but they have not been investigated against MARS or other validated tools that assess the quality of mHealth apps.
The low average MARS score for the engagement dimension can be partially explained by the primary purpose of the included apps. The investigated apps work as a reference to check for PDDIs, and these apps do not rely on user engagement to elicit behaviour change. On the other hand, other mHealth apps that focus on behaviour change tend to score higher in the engagement dimension, as the success of behaviour change may heavily depend on how successfully they engage the user [ 24 ].
Most MARS dimensions were correlated with each other, except for the information dimension.