By MATTHEW HOLT
In recent weeks, much of the conversation has centered on the fight over data access: who can get at it, how it can be used, and what AI tools and analytics let us do with it. The ultimate goal is to use that data to improve patient care.
From well-known figures like John Halamka at the Mayo Clinic to smaller players tinkering in their garages, there is a shared belief that AI can improve patient outcomes at lower cost. But when we look at the impact on patient care of the digital health companies founded over the past decade and a half, the results are less clear.
Companies like Oak, Iora, One Medical, Livongo, Vida, Virta, and others that set out to reinvent primary care or revolutionize diabetes management are now being evaluated as they accumulate substantial user bases. Various organizations have emerged to assess these interventions. While the companies often present their own studies demonstrating positive outcomes, entities like the Validation Institute, ICER, RAND, and more recently the Peterson Health Technology Institute have begun conducting their own studies or meta-analyses to provide more impartial evaluations.
The consensus from these assessments is that digital health solutions may not be as effective as claimed. The US healthcare system has historically relied on clinical trials to determine whether new technologies work, while largely overlooking cost-effectiveness. As a result, technologies that prove both ineffective and costly remain widely used and reimbursed.
Despite widespread inefficiencies elsewhere in healthcare technology, digital health solutions seem to have drawn particular scrutiny. ICER, for example, declared certain digital therapeutics for opioid use disorder ineffective, leading health plans to decline coverage. The Peterson Health Technology Institute, following ICER’s framework, has similarly critiqued diabetes solutions and is expanding its evaluations to musculoskeletal care.
Critics in this arena, such as Al Lewis and Brian Dolan, come at it from different angles, highlighting flawed study methodologies and questioning the extrapolations made by assessment bodies like Peterson. Thin real-world data and potential conflicts of interest on advisory boards have raised concerns about the credibility of these assessments.
While it is true that digital health companies have not consistently produced high-quality studies, their commercial success often precedes academic validation. Livongo’s rapid growth without extensive studies, for example, illustrates that some companies have thrived based on market demand rather than scientific evidence.
The debate over the efficacy of digital health solutions persists, with conflicting viewpoints and a clear need for more rigorous evaluation methods. The push to improve healthcare outcomes through innovation requires a careful balance between commercial interests and scientific rigor.