Policy interventions grounded in behavioral economics findings have proliferated across governments over the past two decades. The UK's Behavioural Insights Team is one of the best-known of the "nudge units" that have driven this kind of research and intervention. From using social norms to increase tax and fine payment rates to using lotteries to raise voter turnout, these results have one thing in common: the extensive use of a statistical experimental method that changed science for good, the randomized controlled trial. This article explores the methodology, its usefulness and its limitations, along with real-life examples, to explain why it is of essential importance within the behavioral economics field.
A randomized controlled trial (RCT) is a type of scientific experiment that aims to reduce certain sources of bias when testing the effectiveness of new treatments. It consists of randomly assigning the population under study to two or more groups: an experimental group, which receives the intervention being assessed, and a control group, which receives no intervention (or an alternative treatment). The groups are then monitored under equal conditions. Because assignment is random, the only systematic difference between the groups is the treatment itself, so any difference in the outcome variable can be attributed to the intervention.
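To make these mechanics concrete, here is a minimal sketch in Python of how such a comparison works. All numbers (sample size, baseline rate, treatment effect) are illustrative assumptions, not data from any trial discussed in this article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative assumptions: 1,000 participants, a binary outcome
# (e.g. "registered" vs. "did not register"), a 10% baseline rate
# and a hypothetical +5 percentage-point treatment effect.
n = 1000
treated = rng.permutation(n) < n // 2  # random assignment to two equal arms

baseline_rate, treatment_effect = 0.10, 0.05
outcome = (rng.random(n) < baseline_rate + treatment_effect * treated).astype(int)

# Because assignment was random, the arms are comparable in expectation,
# so the difference in mean outcomes estimates the effect of the intervention.
effect_estimate = outcome[treated].mean() - outcome[~treated].mean()
t_stat, p_value = stats.ttest_ind(outcome[treated], outcome[~treated])
print(f"estimated effect: {effect_estimate:.3f}  (p = {p_value:.3f})")
```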
Pros
RCTs are useful tools for answering medical research questions, being one of the most efficient ways to study the safety and effectiveness of new treatments, and they are required by governmental regulatory bodies as the basis for approval decisions. The design provides a very powerful answer to questions of causality, as it allows program implementers to be confident that the outcomes obtained are, in fact, a result of the variable under study, with all other variables balanced across groups in expectation. It therefore reduces the biases that can undermine an experiment, improving the credibility of the results. RCTs are widely regarded as the most reliable method for establishing cause-effect relationships, and they are often used to rigorously evaluate nudge interventions.
Cons
Nevertheless, RCTs are not always the answer, and they should only be used to evaluate nudge interventions when appropriate. The truth is, sometimes they are simply not feasible. Consider a school intervention: randomization is often impossible within a classroom where every student is exposed to the intervention. RCTs can also be unethical when the intervention group clearly benefits from the treatment. For instance, if a school exposes only half of its students to a program that helps them build a CV, the other half is disadvantaged purely because of the random selection process. Additionally, RCTs may not guarantee equivalence between groups when the sample is small, which can result in an underpowered test that is unable to detect the real effect of the variable being studied. Lastly, if participants in different experimental conditions are in close proximity, as may happen in experiments within a company where participants share a workplace, they may communicate with one another, contaminating the conditions and undermining the validity of the results.
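The small-sample concern can be checked before a trial is launched with a standard power calculation. The sketch below, using the statsmodels library, asks how many participants per arm would be needed to detect a hypothetical five-percentage-point improvement; the baseline rate, effect size and thresholds are illustrative assumptions.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative assumption: the nudge is hoped to raise a 10% baseline
# rate to 15% (a 5 percentage-point effect).
effect_size = proportion_effectsize(0.15, 0.10)

analysis = NormalIndPower()

# Participants needed per arm to detect that effect with 80% power
# at the conventional 5% significance level.
n_per_arm = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
print(f"required participants per arm: {n_per_arm:.0f}")

# Power actually achieved if only 100 participants per arm are available.
achieved = analysis.power(effect_size=effect_size, nobs1=100, alpha=0.05, ratio=1.0)
print(f"power with 100 per arm: {achieved:.2f}")
```

In practice, nudge effects are often small, so realistic power calculations frequently call for samples far larger than a single classroom or office can provide.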
The use of RCTs by Behavioral Economists
The application of psychology to public policy has gained tremendous importance in the past decade, as governments have realized that the success of their policies may depend on social, emotional, cognitive and many other factors usually disregarded by economists. Behavioral economics is the discipline that embraces the psychological side of policy, and RCTs are the most important tool it resorts to. Building on well-documented cognitive and perceptual biases in our decision-making, behavioral economics is being applied to public policy in diverse settings, from promoting sustainable energy use at home to encouraging the timely and honest payment of taxes. These policies have usually originated in centralized labs or innovation hubs set up within government departments specifically to trial interventions rigorously and disseminate the findings.
As introduced above, the most well-known example of RCTs applied to behavioral economics is the United Kingdom's Behavioural Insights Team (BIT). Advising the UK government since 2010, the BIT has worked across multiple domestic policy areas with the intent of improving public policy through its "Test, Learn, Adapt" methodology. This approach was created to dismantle the idea that RCTs are expensive and difficult to implement, by showing the government their advantages in offering quick feedback for policy making. The BIT's numerous interventions over the past years have saved the UK government tens of millions of pounds, by combining rigorous evaluation by the highly regarded psychologists, economists and policymakers who make up the team with new insights and approaches, and by using RCTs as the essential tool for testing the effectiveness of these evidence-based policies. The BIT's remarkable success has led many other countries to follow a similar path and create some form of government-led initiative informed by behavioral economics.
Consider a real-life example from Canada, where behavioral economics plays an increasingly important role in policy development and several agencies rigorously test prospective behavioral interventions through RCTs. One such intervention aimed to promote organ donation registration rates in Ontario. The experiment was developed by the government's Behavioural Insights Unit (BIU), which intervened to simplify the registration process. The trial contributed to an increase in organ donation registration rates of 143%. The RCT allowed the BIU to compare registration rates during the different treatment periods with rates before and after the intervention, and to determine which treatment contributed most to increasing registrations.
Although incorporating behavioral economics into public policy is not the solution to every political concern, it has proved to be a very important tool, as the Canadian intervention illustrates.
However, RCTs are not used only in the public sphere. Many private entities also resort to this type of experiment to test the effectiveness of behavioral interventions.
Let's consider a real-life example of how an RCT can help the private sector. The Rector of the University of Virginia organized an RCT to assess the relevance of three nudge interventions the university was considering, with the goal of increasing college attendance and graduation rates. The university randomly selected three groups of students, each of which received one of three messages: one about the financial benefits of completing the Free Application for Federal Student Aid (FAFSA), one reminding them of their motivation for applying to college, and one with instructional guidance on how to complete the FAFSA. The study found that none of the nudge interventions had a statistically significant effect on students' persistence into their second year of college; the estimated effects were close to zero. This was nonetheless a well-conducted RCT, since it produced valid findings and allowed the university to reject the interventions it had been considering.
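As a rough illustration of how such a null result is assessed, the sketch below compares persistence rates between one message arm and a control arm with a two-proportion z-test; the counts are hypothetical and do not come from the Virginia study.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts, purely for illustration (not the Virginia data):
# students persisting into a second year, out of those assigned to one
# message arm and to the control arm.
persisted = [412, 405]      # successes in [message arm, control arm]
assigned = [1000, 1000]     # students assigned to each arm

z_stat, p_value = proportions_ztest(count=persisted, nobs=assigned)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A large p-value, as here, mirrors the study's conclusion: no detectable
# effect of the message on persistence.
```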
In essence, RCTs are one of the most valuable tools used by behavioral economists, not only as a powerful aid in policy making but also for the many private entities that want to test hypothetical nudges before putting them into practice.