More than 40 million lives per year would be saved globally if human behavior changed in just a few key areas—if no one smoked, drank heavily, or used other drugs, and if everyone ate a balanced diet and engaged in regular physical activity. It sounds simple, right?
Of course, behavior change is far from simple. For more than 40 years, scientists have developed behavioral interventions with tremendous potential to prevent and treat disease and to enhance health and well-being. Yet the success of these interventions has been limited, and their effectiveness has not improved much over the last several decades.
In short, the field of behavioral intervention science has been stuck, while automobiles, telephones, computers, appliances, and software have steadily improved. So what’s the reason for this difference, and what can we learn from it?
As colleagues and I discuss in a special issue of Translational Behavioral Medicine, one explanation lies in the standard approach to intervention development and evaluation. Historically, the model has been the pharmaceutical trial, in which a medication (say, a pill) is developed and then evaluated by comparing outcomes for those who take the pill with those who don't.
This is a sensible approach for evaluating a pill or other single-component treatment. However, behavioral interventions typically are made up of many components. For example, a smoking cessation intervention may include nicotine replacement therapy to mitigate withdrawal; a support group; counseling focused on maintaining abstinence; daily encouraging text messages; and more.
Testing a multi-component intervention using the same approach as a single treatment fails to provide information about the performance of the individual components making up the intervention. This information is necessary for improving the intervention going forward and, indeed, for building a coherent base of scientific knowledge about which strategies work, for whom, and under what circumstances.
The developers of automobiles, software and the like measure whether the presence, absence, or level of individual components has an impact on the performance of others. Then the poorly performing components can be weeded out, the well-performing components can be identified, and it immediately becomes evident where there’s room for improvement.
We suggest that the evaluation of behavioral interventions should be approached in much the same way, using a new research framework called the multiphase optimization strategy (MOST). An investigator using MOST examines the performance of individual intervention components, and eliminates underperforming ones to deliver the best outcome.
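To make the idea concrete: examining components individually is typically done with a factorial experiment, in which every on/off combination of components is tested and each component's main effect is estimated. Here is a minimal sketch of that screening logic, with entirely invented components, quit rates, and selection threshold chosen for illustration:

```python
from itertools import product

# Hypothetical components of a smoking-cessation intervention
components = ["nrt", "support_group", "text_messages"]

# Full 2^3 factorial design: every on/off combination of the three components
design = [dict(zip(components, levels)) for levels in product([0, 1], repeat=3)]

# Invented quit rates for the 8 experimental conditions,
# keyed by (nrt, support_group, text_messages)
quit_rate = {
    (0, 0, 0): 0.08, (0, 0, 1): 0.09,
    (0, 1, 0): 0.13, (0, 1, 1): 0.14,
    (1, 0, 0): 0.18, (1, 0, 1): 0.19,
    (1, 1, 0): 0.23, (1, 1, 1): 0.25,
}

def main_effect(component: str) -> float:
    """Mean outcome with the component ON minus mean outcome with it OFF."""
    on = [quit_rate[tuple(c[k] for k in components)]
          for c in design if c[component] == 1]
    off = [quit_rate[tuple(c[k] for k in components)]
           for c in design if c[component] == 0]
    return sum(on) / len(on) - sum(off) / len(off)

# Screen: keep only components whose main effect clears a (hypothetical) threshold
THRESHOLD = 0.03
effects = {c: main_effect(c) for c in components}
selected = [c for c, e in effects.items() if e >= THRESHOLD]
```

In this toy data, nicotine replacement and the support group clear the threshold while the text messages do not, so the optimized intervention would drop the weak component. A real optimization trial would, of course, use statistical models and confidence intervals rather than a raw cutoff.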
MOST represents a very different way of thinking about behavioral interventions; in fact, it can be argued that it's a paradigm shift. There has been a high degree of openness to MOST: to date, the National Institutes of Health has funded more than 100 studies using it.
Imagine a future in which interventions are more efficient and less wasteful and burdensome; inert and counterproductive components have been eliminated; and interventions are immediately scalable because they are optimized to be affordable, practical, and implementable. There's an expectation of ongoing, continual improvement: substituting stronger components for weaker ones and keeping interventions fresh.
In much the same way that new and improved versions of automobiles, phones, and software are issued periodically, MOST allows new and improved versions of interventions to be issued as well. As a result, interventions steadily increase in effectiveness and public health impact, and gradually fulfill their potential to enhance and save lives.
GPH’s new Intervention Optimization Initiative aims to promote and support MOST, establish a community of scholars, and provide training in current intervention optimization methods. As its director, I encourage you to watch for IOI activities starting in 2022, and hope you become interested in intervention optimization!
Linda Collins, PhD
Professor of Social and Behavioral Sciences