Why expert quality ratings are important for the evaluation of digital apps that were found to be effective in randomized controlled trials

When I first thought about potential uses of the Enlight suite of measures [1], my intention was to enable researchers and developers to evaluate the potential of digital apps regardless of empirical testing. At the outset, I believed that such an evaluation would help identify the most promising products and, among them, those also “worthy” of evaluation in randomized controlled trials (RCTs).

However, while RCTs are considered by many to be the gold standard of intervention evaluation – including for digital apps – this does not mean that a digital app found to be effective in an RCT will also be effective when used in real-world settings.

I believe this point is crucial when discussing the potential of digital apps to help their users. Here are two reasons why RCTs cannot be treated as the only metric of effective treatment within the eHealth interventions domain:

  1. Biased samples in (too) many RCTs. Researchers adopt various strategies to reach potential participants, including social media campaigns and personal outreach to potential interest groups – all in order to increase the recruitment pool. The larger the recruitment pool is relative to the number of people who eventually enroll in the study, the stronger the bias toward people who are interested in trying, and likely to adhere to, mental health technologies [2]: those who ultimately agree to take part in the study are the ones who were interested in what the technology had to offer. Ultimately, any study that uses such recruitment strategies limits our ability to generalize its results, since the participants do not resemble most of the people the intervention is meant to address.
  2. The difference between RCT and real-world settings. Processes embedded in randomized controlled trials involve tunneling of participants, high research attention, and a requirement that participants commit to the study, which may carry over into users’ commitment to adhere to the intervention [3]. Part of this “research attention” is the human support embedded within the trial itself (e.g., research coordinators contacting users to collect measures), which may also affect user behavior. Since such attention toward the user is not present in real-world use, it is difficult to estimate the effect of these interventions outside the RCT setting.

Stakeholders and end users should be aware of the pitfalls of RCTs within the eHealth interventions realm, and especially wary of articles in which RCTs are referred to as the only standard for a program’s potential (regardless of the setting in which the RCT was deployed). This does not mean that RCTs are unimportant, only that they should be part of an overall evaluation of an intervention.

In the Enlight suite of quality ratings, we examine the evidence backing specific eHealth interventions – including RCTs – under the subscale “Evidence-Based Program.” This subscale is kept separate from our assessment of product design in order to avoid biases such as those described above, and to examine under what conditions, and in what combination, evidence deriving from RCTs and expert quality ratings helps identify a product’s potential to succeed in real-world use. To that end, the star ratings presented for programs reviewed on MindTools.io are based on a formula that combines the scores of six different constructs (see our scientific approach), providing an independent evaluation of user experience that does not rely on clinical trials. With this formula, we aim to rate the quality of the product from the perspective of real-world usage.

References

[1] Baumel, A., Faber, K., Mathur, N., Kane, J. M., & Muench, F. (2017). Enlight: A comprehensive quality and therapeutic potential evaluation tool for mobile and web-based eHealth interventions. Journal of Medical Internet Research, 19(3), e82.

[2] Mohr, D. C., Weingardt, K. R., Reddy, M., & Schueller, S. M. (2017). Three problems with current digital mental health research… and three things we can do about them. Psychiatric Services, 68(5), 427-429.

[3] Ebert, D. D., & Baumeister, H. (2017). Internet-based self-help interventions for depression in routine care. JAMA Psychiatry, 74(8), 852-853.
