
This article is part of Nudging in Public Policy: Application, Opportunities and Challenges

Over the past seven years, the European Commission has been applying behavioural insights to its policy-making. This activity has been growing at a steady pace and is now supported by a dedicated team at the Joint Research Centre, the European Commission’s in-house science and knowledge management service. This team is part of the EU Policy Lab,1 a multidisciplinary space for openly exploring and re-examining policy issues, engaging with stakeholders and co-creating more user-centred solutions.

Working on the application of behavioural insights to policy at a supranational regulatory body has its opportunities and challenges. Above all, however, it requires its own approach. As in any novel field, there are a number of competing interpretations of what behavioural insights are and how they should be applied to the policy-making process. In this article, we seek to present our view based on the experience of informing policies and instruments (often regulatory) at the EU level. In some respects, it is complementary to the interpretation of many “nudge units” established in governments around the world, but in others, quite different.

First up is the issue of definitions. Do we speak of behavioural economics, insights, science or sciences? These distinctions are largely academic and, from a policy perspective, do not concern us much. “Behavioural economics” is a branch of economics that challenges traditional assumptions of rationality in economic behaviour. “Behavioural insights” are pieces of knowledge (not opinions) based on empirical findings (not intuition) about behaviour. “Behavioural science” is the systematic study of human behaviour, and “behavioural sciences” are all those disciplines which undertake this study, from anthropology to neuroscience. For us at the EU Policy Lab, using behavioural economics, insights, science or sciences is simply about applying a more nuanced and evidence-based understanding of human behaviour to inform the policy-making process. To remain neutral with regard to academic discipline, we tend to use “behavioural science(s)” or “behavioural insights” when referring to this general definition.

Second is the question of what this implies. What does it mean to apply behavioural insights to policy-making? How does this work in practice? Here, it is useful to take the assumption of rationality in economic behaviour as a starting point. Much of the analytical thinking in policy-making is either implicitly or explicitly influenced by economics. Allocating limited resources to competing ends in order to increase welfare, for example, is inherently an economic issue. The first way of introducing behavioural insights into policy-making is to challenge the assumption that consumers and citizens behave rationally. Rather, we need to acknowledge the presence of systematic violations of rationality (anomalies, biases or heuristics) in human thought. Behavioural economics is built on the study of these systematic violations, which have been extensively documented.

However, the list of anomalies, biases and heuristics is long. How do we know which apply to a particular policy issue? Dual process theory helps provide a simplified account. In a nutshell, there are two ways of thinking: a fast, automatic, effortless way (System 1) and a slow, reflective, effortful way (System 2).2 The anomalies, biases and heuristics displayed by human thought belong to System 1, whereas rational thought corresponds to System 2. The classic bat and ball example illustrates this point well. A bat and a ball together cost €1.10. The bat costs €1 more than the ball. How much does the ball cost? System 1 will suggest the quick intuitive answer of ten cents, whereas System 2 will yield a more carefully thought-out (and correct) answer: five cents.
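
Spelling out the arithmetic: if the ball costs x euros, the bat costs x + 1, so x + (x + 1) = 1.10, hence 2x = 0.10 and x = 0.05. The ball costs five cents and the bat €1.05, preserving the €1 difference; the intuitive answer of ten cents would put the total at €1.20.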

The usefulness of the System 1 vs System 2 distinction for policy-making is that many policies are designed with the implicit or explicit assumption that people use their System 2 thinking, whereas in fact they may use their System 1 thinking. Food labelling, privacy notices or calorie postings, for example, assume people will take the time to read and process the information given to them. This is in line with the assumption of rationality: people use information, a scarce resource, to make better decisions; ergo, the more information they have, the better the decision they will make. Unfortunately, as we know, this is not always the case. Hence, unless we acknowledge the existence of System 1 and all its peculiarities, we run the risk of designing policies based on the assumption of System 2 thinking. Such policies may therefore turn out to be ineffective.

If the assumption of rationality does not hold, what does? What other overarching framework can be used to explain, and possibly predict, human behaviour? Unfortunately, there is no alternative Grand Theory of Behaviour. We need to rely continuously on empirical observations to generate behavioural insights. In this sense, applying behavioural sciences to policy-making is more of an inductive than a deductive process. Yes, some insights will be transferable from one context to another, but others will not. Ultimately, observations of behaviour will always be required, either to come up with new insights or to confirm that existing ones are transferable to other contexts. This emphasis on the empirical is characteristic of the behavioural sciences and is perhaps one of their greatest contributions to the policy-making process.

How are behavioural insights applied to EU policy-making?

The way in which behavioural insights are applied will vary according to the phase of the policy-making process at which they are introduced (see Figure 1). At the initial stages of policy preparation, they can help identify and better understand the issue or problem. At the implementation stage, they can be embedded into EU policy instruments. And at the final stage, application, they can be used to nudge behaviour directly. While the first two stages apply to EU processes, the third one requires cooperation with other authorities and actors.

Figure 1: Behavioural insights throughout the EU policy cycle (Source: Authors’ illustration)

Behavioural insights are relevant at the policy preparation stage because a better understanding of behaviour leads to better-designed policies. This holds true regardless of the policy instrument which is introduced to tackle the policy problem (i.e. even if it does not incorporate behavioural insights in its design).

Take the example of farmer behaviour. Say the policy aim is to foster the uptake of technology on farms, which would lead to higher productivity and a more efficient use of scarce resources. Applying behavioural insights can help explain why farmers are not using enough technology, even in cases where technology is shown to increase overall yields. Farmers are concerned about potential losses. They would be reluctant to adopt a technology that doubles agricultural yield on average, but which leads to lower yields than before during droughts (which may happen once every ten years). This situation can be explained by the concept of loss aversion, whereby potential losses loom larger than potential gains. By better understanding the possible underlying causes of behaviour, a better policy can be designed, one that specifically addresses farmers’ fears of having a bad year.
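
A purely illustrative calculation (all numbers here are hypothetical) shows how this works. Suppose the technology adds 5 units of yield in nine years out of ten but reduces yield by 30 units in the one drought year. In expectation it pays off: 0.9 × 5 − 0.1 × 30 = 1.5 > 0. A loss-averse farmer who weighs losses at roughly 2.25 times equivalent gains, a common estimate in the loss aversion literature, evaluates the same prospect negatively: 0.9 × 5 − 2.25 × 0.1 × 30 = −2.25, and so sticks with the status quo.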

Once the policy problem has been analysed, the policy debate moves on to discuss what can be done about it. Here, behavioural insights can be incorporated in the design of a policy instrument to increase its effectiveness. These insights will be relevant even if the policy aim is not to change behaviour. For example, public policy relies on behavioural insights in regulation to protect consumers from ethically questionable nudges by the private sector (which has been using behavioural insights for decades to influence consumer behaviour).

For example, in Chapter IV, Article 22 of the EU Consumer Rights Directive, policy-makers recognised the power of default options in influencing consumer behaviour.3 This referred to practices such as pre-checked boxes on e-commerce sites that require consumers to untick them if they do not want an additional service. To protect the consumer, the EU stipulated that

If the trader has not obtained the consumer’s express consent but has inferred it by using default options which the consumer is required to reject in order to avoid the additional payment, the consumer shall be entitled to reimbursement of this payment.4

Finally, behavioural insights can be applied more directly to influence behaviour. This is the realm of nudging,5 which is largely responsible for the popularity of behavioural insights in policy-making today. According to this approach, people can be nudged into desirable behaviours, i.e. behaviours that make them better off, as judged by themselves. This is achieved by “changing people’s behaviour without changing their minds”,6 i.e. by appealing to their System 1 instead of their System 2.

The principle of subsidiarity limits the possibilities for direct EU action here. To nudge people directly, you need to access their choice architecture, the context in which they make decisions. Usually, control over these contexts will fall to the national, regional or local levels of governance. For the EU to take part in nudging, it needs to coordinate with other authorities, fulfilling a role that is complementary, and not overlapping, to theirs. For example, it can help coordinate joint field trials at the EU level. And of course, it can also contribute to the policy debate about nudging, mapping its use by regional and national authorities in the EU and discussing issues such as the ethics of nudging and its long-term effects.7

Finding behavioural insights

As noted earlier, behavioural science provides empirical findings in lieu of a unifying theory of behaviour. These may be replicated and found to hold up across a number of settings, gradually building up a solid evidence base. Sometimes, this evidence base will be enough for improving regulations (for example, we do not need another experiment to confirm loss aversion). However, at other times, primary, context-specific evidence will be required.

While in principle all methodological options are available for conducting behavioural studies in support of EU policy,8 there is a challenge of diversity that needs to be overcome. Any behavioural study intended to support policies that affect 500 million people in 28 countries should provide generalisable results. A randomised controlled trial (RCT) conducted in only one country is not good enough, because its results are specific to a given context. An intervention that works in a German city might not work in a Greek village. Moreover, comparable RCTs conducted in several countries simultaneously, precisely to address these context specificities, are generally unfeasible due to the cost and complexity of such an operation. On the other hand, qualitative methodology can provide an extremely rich description of behaviour embedded in its social context and can be conducted simultaneously in several countries, but due to small sample sizes (and to the nature of the approach), it cannot produce quantitative, generalisable findings.

As a result, the European Commission tends to rely on experiments (either online or in a laboratory) as a method of choice. Because they observe behaviour in a controlled environment in which only the relevant variables are kept and all the noise is eliminated, experiments can isolate and test the underlying psychological mechanisms of a decision or behaviour.9 These mechanisms, in turn, can apply across different contexts. To test whether this is the case, the same experiment can be run in several EU countries with different cultures and histories, checking for country differences.

One of the great benefits of experiments (which they share with RCTs) is that they can establish causality between two variables. If the only difference between a treatment group and a control group is the presence of a treatment, the only possible explanation for a difference in the behavioural outcome between the two groups is that treatment. This is extremely useful for policy-makers and possibly refreshing, too, after hearing repeatedly that the results of correlational studies do not imply causality.
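
By way of illustration, the minimal sketch below shows the basic logic of such a comparison on simulated data; the group sizes, outcome scale and effect size are invented for the example and do not depict any actual Commission study. With random assignment, the difference in mean outcomes between the two groups estimates the causal effect of the treatment.

```python
# Minimal sketch of a two-group experiment with simulated (hypothetical) data.
# Random assignment means the difference in group means estimates the causal
# effect of the treatment on the behavioural outcome.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 500                                              # participants per group (hypothetical)
control = rng.normal(loc=50, scale=10, size=n)       # outcome without the treatment
treated = rng.normal(loc=53, scale=10, size=n)       # outcome with a true effect of +3

estimated_effect = treated.mean() - control.mean()   # simple difference in means
t_stat, p_value = stats.ttest_ind(treated, control)  # two-sample t-test

print(f"Estimated treatment effect: {estimated_effect:.2f} (p = {p_value:.4f})")
```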

Close to 30 large-scale experimental studies have been undertaken at the European Commission over the past six years, either by the Joint Research Centre or outsourced through public procurement. These have covered a wide range of policy areas, such as tobacco labelling, online marketing to children and the circular economy. These studies have fed into the policy-making process, often at the impact assessment stage. They have produced the evidence on which proposed policy initiatives were based.

For example, the energy labels of household appliances needed an overhaul. Advances in technology being what they are, at some point products were offering an A+++ rating. A study on energy labelling was conducted to test the best way of providing customers with information about the energy efficiency of household appliances.10 The results showed that alphabetic scales worked better than numeric scales, that an “A to G” scale worked better than the “A+++ to D” scale, and that label designs were more important when energy efficiency was not of key importance to consumers. This evidence was subsequently used in the impact assessment for the EU regulation on energy efficiency labelling.11

Another study, on online gambling, conducted a survey and experiments (both in the lab and online) to test possible remedies to problematic online gambling behaviour.12 Treatments applied before participants started gambling were not effective, but some of those applied while they were gambling were. By interrupting or altering the human-machine interaction, pictorial and textual warnings made consumers reduce the speed of their bets, though not the amounts wagered; for that, fixed monetary limits are required. This evidence was used to underpin the impact assessment for the European Commission’s recommendation on principles for online gambling.13

The case for a reflexive and transparent approach

While experiments have proven to be a very useful source of behavioural insights for EU policy-making, it would be wrong not to reflexively examine some of their limitations. For one thing, there is the problem that many of the classic results found in psychology have not been replicated. A recent large-scale study found that while 97% of original experimental studies in psychology had significant results, only 36% of replication studies did, and the effect sizes were about half those of the originals.14 This is worrying and undermines trust in the results of a single experiment. Replication is normally the answer to this concern, but the timing of the policy cycle is such that experiments commissioned to inform EU policy tend to be run only once, making replication unfeasible.

The issue of ecological validity (i.e. the degree to which an experiment manages to simulate a real-world environment) is also important. No study will be given credibility by policy-makers if it is too detached from real life, no matter how well designed it is. Sometimes this issue can be overcome, especially if the experiment seeks to test capabilities. For example, in a study on retail investment services, participants faced a decision in a very simplified environment and made mistakes.15 In a real-life environment, which is more complex, these mistakes would presumably increase. So the severity of the mistake was actually underestimated in a laboratory setting. On the other hand, a proposed experiment sought to create a situation in which participants had to decide between repairing an electronic good and replacing it with a new one. Here, capturing the complexities involved in such a decision (e.g. desire to purchase new, expected lifetime of a product, status symbol of new purchases, etc.) would have been much more complicated.

A third issue to take into account is publication bias and the fear of a null result. The fact is that an experiment where none of the treatments had any significant effect is unlikely to get much attention. In academia, this means that it is unlikely to be published. As a result, the scientific literature offers a distorted picture of reality. In policy-making, a null result will raise eyebrows, especially if a substantial amount of public money has been invested in a study. Too many null results and people will soon start questioning the usefulness of experimental methodology. There is a risk, therefore, of p-hacking (conducting ever-more refined analyses until a significant effect can be found somewhere in the data), which could – in the long run – undermine the whole integrity of experimental methodology.
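
A small simulation can illustrate the scale of this risk; the numbers below (20 exploratory comparisons, 100 observations per group) are arbitrary and chosen purely for illustration. With no true effect in the data, the probability that at least one of 20 independent comparisons comes out “significant” at the 5% level is roughly 1 − 0.95^20, i.e. about 64%.

```python
# Minimal sketch of why p-hacking inflates false positives: running many
# exploratory comparisons on data with NO true effect makes it likely that
# at least one comes out "significant" at the 5% level by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_tests, n = 2000, 20, 100   # hypothetical numbers, for illustration
studies_with_false_positive = 0

for _ in range(n_studies):
    found_significant = False
    for _ in range(n_tests):            # 20 exploratory subgroup/outcome tests
        a = rng.normal(size=n)          # control group, no true effect
        b = rng.normal(size=n)          # "treatment" group, no true effect
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            found_significant = True
    studies_with_false_positive += found_significant

# Expected share is roughly 1 - 0.95**20, i.e. about 0.64
print(f"Share of studies reporting a spurious effect: "
      f"{studies_with_false_positive / n_studies:.2f}")
```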

All of these concerns can be addressed – but for that to happen, they must first be raised. In truth, all methods of research will have shortcomings. The problem perhaps does not lie in relying on experimental methodology for policy support, but in relying on experimental methodology alone. A mixed-method approach, whereby the same research question is addressed by several complementary methodologies, can go a long way in ensuring the evidence is robust. For example, experiments are sometimes combined with qualitative methods in behavioural studies for EU policy, and we expect this trend to grow.16 For RCTs, the challenge ahead is to work together with Member States on joint trials, coordinated by the European Commission, to produce robust results that can be generalised beyond a single context.

Whatever method is chosen for a behavioural study, whether alone or in combination with another, the issue of transparency will remain at the core. As stated earlier, one of the greatest contributions of the behavioural turn in policy-making is the emphasis on the empirical. The design and implementation of policies should not be based on theoretical assumptions about behaviour, but rather on what we know about behaviour from empirical observation. And this process needs to be transparent, which is achieved by publishing results openly and making the relevant data sets available, allowing for critical reflection on the advances being made and the challenges ahead. This article is a contribution to that effort.

Conclusion

The introduction of behavioural insights into policy-making is welcome because they challenge traditional assumptions that are largely inspired by neoclassical economic thinking. In line with good evidence-based policy-making, they make us question and test how people behave instead of assuming we already know the answer.

In the European Commission, the benefits of applying behavioural insights to policy-making are increasingly recognised, and these insights have now been embedded within the institution’s “better regulation” toolbox.17 The Joint Research Centre, in supporting this process, is developing ties with other practitioners in the public sector and in academia, contributing to an open environment of mutual learning.

However, for all their promise and potential, it would be a mistake to raise expectations too high and see behavioural insights as a silver bullet that will do away with tough policy problems at a lower cost. Behavioural sciences certainly enrich the variety of insights that inform our understanding of these problems. In this sense, they complement, but do not replace, the more traditional tools (e.g. incentives, regulation or information disclosure) available to policy-makers for addressing them.

  • 1 For further information, see European Commission: Blogs of the European Commission, available at http://blogs.ec.europa.eu/eupolicylab/.
  • 2 D. Kahneman: Thinking, Fast and Slow, New York 2011, Farrar, Straus and Giroux.
  • 3 Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights.
  • 4 Ibid., p. 81.
  • 5 R. Thaler, C. Sunstein: Nudge: Improving Decisions About Health, Wealth, and Happiness, London 2008, Penguin.
  • 6 P. Dolan, M. Hallsworth, D. Halpern, D. King, R. Metcalfe, I. Vlaev: Influencing behaviour: The mindspace way, in: Journal of Economic Psychology, Vol. 33, No. 1, 2012, pp. 264-277.
  • 7 J. Sousa Lourenço, E. Ciriolo, S. Rafael Almeida, X. Troussard: Behavioural Insights Applied to Policy: European Report 2016, Luxembourg 2016, Publications Office of the European Union.
  • 8 R. van Bavel, B. Hermann, G. Esposito, A. Proestakis: Applying Behavioural Sciences to EU Policy-making, JRC Scientific and Policy Report, EUR 26033, Luxembourg 2013, Publications Office of the European Union.
  • 9 P.D. Lunn, Á. Ní Choisdealbha: The case for laboratory experiments in behavioural public policy, in: Behavioural Public Policy, 8 January 2018.
  • 10 London Economics: Study on the impact of the energy label – and potential changes to it – on consumer understanding and on purchase decisions, European Commission, ENER/C3/2013-428 Final Report, 2014.
  • 11 Regulation (EU) 2017/1369 of the European Parliament and of the Council of 4 July 2017 setting a framework for energy labelling, in: Official Journal of the European Union, Vol. 60, 28 July 2017, pp. 1-23.
  • 12 C. Codagnone, F. Bogliacino, A. Ivchenko, G. Veltri, G. Gaskell: Study on online gambling and adequate measures for the protection of consumers of gambling services, European Commission, Final Report, 2014.
  • 13 Recommendation 2014/478/EU of the European Commission on principles for the protection of consumers and players of online gambling services and for the prevention of minors from gambling online, in: Official Journal of the European Union, Vol. 57, 19 July 2014, pp. 38-46.
  • 14 Open Science Collaboration: Estimating the reproducibility of psychological science, in: Science, Vol. 349, No. 6251, 2015.
  • 15 N. Chater, R. Inderst, S. Huck: Consumer Decision-Making in Retail Investment Services: A Behavioural Economics Perspective, European Commission, Final Report, 2010.
  • 16 R. van Bavel, F.J. Dessart: The case for qualitative methodology in behavioural studies for EU policy-making, JRC Scientific and Technical Reports, forthcoming.
  • 17 See European Commission: Better Regulation Toolbox, available at http://ec.europa.eu/smart-regulation/guidelines/docs/br_toolbox_en.pdf.


DOI: 10.1007/s10272-018-0711-1
