
Digital behavioral policy: challenges and implementation hurdles

Behavioral policies have been in widespread international use for more than 20 years.1 They aim to change citizens' behavior by drawing on behavioral economics, psychology, the neurosciences and other behavioral sciences. The spectrum is very broad, ranging from simplified administrative communication ("simplification") to training in dealing with risks ("boosts") and the regulation of manipulative marketing strategies ("budges"). The much-discussed "nudging" is a further sub-strategy of behavioral policy: changes in behavior are to be triggered without coercion or financial incentives by deliberately exploiting selective perceptions and cognitive distortions ("biases") and designing decision-making architectures ("choice architectures") accordingly. A classic example is placing healthy products prominently in a cafeteria in order to steer purchasing behavior in the desired direction.

None of this is new. What is comparatively new, however, is the use of behavioral policies in the context of digitization. Behavioral research also makes the dark side of digitization visible:2 a recent study by the Norwegian Consumer Council, for example, shows how Facebook's overly complex decision-making architecture makes it difficult to handle personal data in a self-determined manner, especially where facial recognition is at stake.3 The commercial use of nudges and other behavior-based strategies is already well advanced: excessively long terms and conditions create information overload and thus a lack of transparency; simulated warning notices are intended to win users' trust; information deficits are deliberately exploited through "phishing"; in online gambling, multiplying game accounts ("online wallets") for one and the same player creates confusion; personalized pricing systems make price comparisons difficult and create potentially discriminatory market situations. Some behavioral economists now fear market failure caused by behavioral distortions ("behavioral market failure"). Proponents of behavioral policy therefore argue for digital "counter-nudging" on both consumer policy and market policy grounds.

In fact, the core argument in the debate about digital behavioral policy rests on the observation that commercial actors have long been shaping digital decision-making architectures in their own favor through the design of websites and the increasingly algorithmic design of online markets. So why should consumer policy actors not themselves act as digital choice architects? Intelligent, individualized presets ("smart defaults") can limit the time spent online or on the phone; certain plug-ins offer digital self-restriction against addiction-like and self-damaging consumption on selected websites or at set times of day ("shopper stopper"); text messages remind users to cancel online subscriptions; automated information and summaries of terms and conditions make the consequences of online purchase decisions clear ("smart disclosures").
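
To illustrate how such a self-restriction tool could work in principle, the following is a minimal sketch in Python, assuming a hypothetical list of shopping sites and user-configured blocking hours; it is not a description of any existing "shopper stopper" plug-in.

```python
from datetime import datetime, time
from typing import Optional

# Hypothetical user configuration: sites the user wants to restrict and the
# daily time window during which they should be blocked (both are assumptions).
BLOCKED_SITES = {"shop.example.com", "auction.example.org"}
BLOCKED_WINDOW = (time(20, 0), time(23, 59))  # e.g. evenings only


def is_blocked(hostname: str, now: Optional[datetime] = None) -> bool:
    """Return True if the requested site falls under the self-imposed restriction."""
    now = now or datetime.now()
    start, end = BLOCKED_WINDOW
    in_window = start <= now.time() <= end
    return hostname in BLOCKED_SITES and in_window


if __name__ == "__main__":
    # A real plug-in would intercept navigation events in the browser;
    # here a single request is simulated.
    site = "shop.example.com"
    print(f"{site} blocked: {is_blocked(site)}")
```

The point of such a design is that the restriction is chosen by the users themselves in advance: it is a self-binding default rather than an externally imposed prohibition.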

When (digital) nudges fail

However, current research on behavioral policy has also identified a number of reasons for nudge failure, as well as a variety of implementation barriers and unintended effects. A 2013 study4 showed that environmental policy nudging can indeed lead to more economical water consumption; at the same time, however, a significant proportion of those affected tended to consume more energy. One explanation for this unintended effect lies in the phenomenon of "moral licensing": economizing in one area makes wasteful behavior in another area appear justified.

In the meantime, multiple factors have been identified which, especially under complex and accelerated conditions such as digital decision-making contexts, can lead to unexpected side effects and instrument failure.5 Studies show that nudging does not work, or does not work as expected, where consumers are confronted with a large number of possibly contradictory behavioral incentives. This can lead to extremely short-sighted and spontaneous decisions ("cognitive myopia"), favor herd effects or even trigger defensive rejection ("reactance") of consumer policy interventions among those addressed. Last but not least, it is often a lack of infrastructure, a lack of resources or psychological and socio-economic dependencies that block behavior change.

Research on implementation problems in behavioral policy currently receives too little attention. One reason for this is the considerable importance of randomized controlled trials (RCTs) in behavioral policy. RCTs are used to rule out possible intervening factors in the ex ante evaluation of behavioral interventions and to investigate causal relationships; this is precisely what the randomized allocation of participants to intervention and control groups is intended to achieve. For many years, RCTs have been regarded as the "gold standard" of evaluation research and therefore rank at the top of the hierarchy of evidence. This methodological dominance has provoked critical reactions.6 On the one hand, the hierarchization of evidence creates an epistemic monoculture in which practical knowledge is devalued in favor of a "gold standard"; on the other hand, other sources of knowledge about social, economic and cultural contexts, about follow-up problems and unintended side effects remain hidden. Because RCTs can be implemented particularly efficiently online and via mobile phones, they also play a central role in digital behavioral policy.
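
To make the logic of randomized allocation described above more tangible, the following is a minimal sketch with simulated data; the group sizes, the outcome measure and the effect size are purely hypothetical.

```python
import random
import statistics

random.seed(42)  # makes the illustration reproducible

# Hypothetical pool of 200 participants in a digital behavioral intervention.
participants = list(range(200))

# Randomized allocation to intervention and control groups: chance, not
# self-selection, decides who is treated, which is what allows a causal reading.
random.shuffle(participants)
intervention, control = participants[:100], participants[100:]


def simulated_outcome(treated: bool) -> float:
    """Purely illustrative outcome, e.g. minutes of daily screen time."""
    baseline = random.gauss(180, 30)
    return baseline - (15 if treated else 0)  # assumed average effect of -15 minutes


treated_outcomes = [simulated_outcome(True) for _ in intervention]
control_outcomes = [simulated_outcome(False) for _ in control]

# The estimated treatment effect is the difference in group means.
effect = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated effect of the intervention: {effect:.1f} minutes per day")
```

What such a sketch cannot show is precisely what the critics cited above point to: the social, economic and cultural context, follow-up problems and unintended side effects that lie outside the measured outcome.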

Who nudges the nudgers?

Behavioral policy relies heavily on coordination between experts and political decision-makers, who, as choice architects, are supposed to correct the perceptual distortions of citizens and consumers. Most recently, the World Bank and the United Nations have raised the question of who actually corrects the distortions and short circuits of the decision-making architects themselves. At the nexus of politics and science, multiple cognitive and social bottlenecks can be identified:7

  • Actors from politics and administration make risky and momentous decisions depending on the way in which problems and challenges are presented to them (“framing effect”).
  • Regardless of their actual importance, certain problems and issues are more visible in the media or in politics than others; this can lead political decision-makers to act for the sake of visible action and to neglect less visible but possibly equally important issues ("attention and salience").
  • Political actors and experts tend to perceive and interpret evidence in such a way that it is in line with previous decisions (“confirmation bias”).
  • Members of groups and teams often align with the (perceived) majority opinion, thereby sidelining divergent positions held by members of their own group or by other groups ("group reinforcement" and "inter-group opposition").
  • Political actors and experts alike often overestimate the future success of certain policies and the possibility of controlling the effects of their decisions (“optimism bias” and “illusion of control”).

These well-known and well-documented problems in decision-making and advisory processes are amplified in the "nexus" between policy fields.8 Wherever success depends on coordination between ministries, authorities, scientific, economic and civil society actors because multiple policy fields overlap, the aforementioned distortions and silo mentalities are intensified. Smart energy grids, for example, depend on intelligent concepts of mobility and consumer policy, and vice versa; the virtual networking of people and objects reinforces this interdependence. Digital behavioral policy is therefore particularly dependent on the challenges and implementation hurdles outlined here being systematically recognized and made the subject of a self-critical behavioral policy that is capable of learning.


Possibilities and limits of digital behavioral policy

For the reasons mentioned, digital behavioral policy has to reckon with a wide range of implementation problems and side effects, as well as with the bounded rationality of decision-makers and experts. Ultimately, the question of the possibilities and limits of digital behavioral policy points to a variety of overarching factors, of which nudging itself is only one aspect. This has at least three consequences:

  1. It is advisable to use the entire spectrum of digital behavioral policies rather than reducing it to nudging. Behavioral findings point to the dark side of digitization and show the cognitive and psychological limits of the concept of the "responsible consumer". In this way, they often provide justifications for stronger regulation of online markets, for transparency rules and for instruments to protect particularly vulnerable online consumers.
  2. Digital behavioral policy can only be evidence-based if knowledge from different sources and based on multiple methods feeds into decisions. Methodological monism creates blind spots and obscures insights into the social, cultural and socio-economic mechanisms of behavior in digital spaces.
  3. Choice architects and experts are themselves not free of distorted patterns of action. Authorities can act as boundary organizations, provide translation services between science and politics and unsettle established belief systems. This applies in particular under the conditions of a nexus between policy fields that poses new challenges for the regulation of the digital sphere.
  • 1 See H. Straßheim, S. Beck (Eds.): Handbook of Behavioral Change and Public Policy, Cheltenham (UK), Northampton (MA) 2019.
  • 2 See Behavioral Insights Team: The behavioral science of online harm and manipulation, and what to do about it, London 2019, https://www.bi.team/wp-content/uploads/2019/04/BIT_The-behavioural-science-of-online-harm-and-manipulation-and-what-to-do-about-it_Single.pdf (29.1.2020).
  • 3 Forbrukerradet: Deceived by Design, Oslo 2018, https://fil.forbrukerradet.no/wp-content/uploads/2018/06/2018-06-27-deceived-by-design-final.pdf (29.1.2020).
  • 4 V. Tiefenbeck, T. Staake, K. Roth, O. Sachs: For better or for worse? Empirical evidence of moral licensing in a behavioral energy conservation campaign, in: Energy Policy, Vol. 57 (2013), pp. 160-171.
  • 5 See H. Straßheim: Behavioral mechanisms and public policy design: preventing failures in behavioral public policy, in: Public Policy and Administration, April 2019.
  • 6 See A. Deaton, N. Cartwright: Understanding and misunderstanding randomized controlled trials, NBER Working Paper, No. 22595, Princeton 2016.
  • 7 See M. Hallsworth, M. Egan, J. Rutter, J. McCrae: Behavioral Government. Using behavioral science to improve how governments make decisions, London 2018, https://www.bi.team/publications/behavioural-government/ (29.1.2020).
  • 8 See J. Haus, R.-L. Korinek, H. Straßheim: Expertise in the nexus. From usage to networking research, in: N. Lüdtke, A. Henkel (Eds.): The knowledge of sustainability. Challenges between research and advice, Munich 2018, pp. 63-88.