US Aid Cuts Jeopardize Global HIV Prevention Efforts

Reading time: 3 minutes

In early 2025, the Trump administration implemented significant cuts to U.S. funding for HIV prevention programs, both domestically and internationally. These reductions have raised alarms among global health experts, who warn of potential setbacks in the fight against HIV/AIDS.

Impact on Global HIV Prevention

The U.S. President’s Emergency Plan for AIDS Relief (PEPFAR) has been a cornerstone in the global response to HIV/AIDS, providing two-thirds of international financing for HIV prevention in low- and middle-income countries. Since its inception in 2003, PEPFAR has saved over 26 million lives by investing in critical HIV prevention, treatment, care, and support programs across 55 countries.

However, a 90-day pause in U.S. foreign development assistance, initiated on January 20, 2025, disrupted these efforts. Although a waiver was issued to allow the continuation of life-saving humanitarian assistance, including HIV treatment, the pause created confusion and disrupted services at the community level. In Ethiopia, for instance, 5,000 public health worker contracts and 10,000 data clerk positions, crucial for HIV program implementation, were terminated.

The Global HIV Prevention Coalition warns that if U.S. funding is not restored, there could be an additional 8.7 million new HIV infections among adults, 350,000 among children, 6.3 million AIDS-related deaths, and 3.4 million additional AIDS orphans by the end of 2029.

Domestic Consequences

Domestically, the Centers for Disease Control and Prevention (CDC) has faced significant budget cuts, particularly in its Division of HIV Prevention. An analysis by amfAR indicates that increased funding to this division was associated with a nearly 20% reduction in new HIV infections across the U.S. between 2010 and 2022.

The proposed cuts threaten to reverse this progress. The CDC’s HIV prevention funding, which totaled about $1 billion in FY2024, supports state and local jurisdictions in conducting health surveillance and targeting communities effectively. Reductions in this funding could lead to increased HIV incidence, with negative implications for individual well-being, public health, and healthcare costs.

Organizational Restructuring and Layoffs

The administration’s broader restructuring efforts have also impacted HIV prevention. The CDC is undergoing a major reorganization, with several divisions, including those focused on HIV, set to become part of a new entity, the Administration for a Healthy America (AHA). This move follows significant downsizing, with the CDC workforce reduced by 3,500 to 4,000 through early retirements and layoffs.

Additionally, the Presidential Advisory Council on HIV/AIDS (PACHA) is being overhauled, with all members removed and no timeline provided for appointing new ones. These changes have raised concerns about the continuity and effectiveness of U.S. HIV policy.

Global Health Community’s Response

The global health community has expressed deep concern over these developments. UNAIDS Deputy Executive Director Christine Stegling emphasized that while treatment continuation is vital, prevention efforts are equally crucial to controlling the epidemic. She highlighted that the funding pause has led to the closure of many drop-in health centers and the termination of outreach workers’ contracts, depriving vulnerable groups of support.

The World Health Organization (WHO) also warned that prolonged funding cuts could reverse decades of progress, potentially taking the world back to the 1980s and 1990s when millions died of HIV each year globally.

Conclusion

The U.S. has played a pivotal role in global HIV prevention efforts. The recent funding cuts and organizational changes threaten to undermine years of progress, both domestically and internationally. Restoring and maintaining robust support for HIV prevention is essential to prevent a resurgence of the epidemic and to continue the global fight against HIV/AIDS.

Sources

https://www.reuters.com/business/healthcare-pharmaceuticals/trump-administration-plans-remove-all-members-hiv-advisory-council-2025-04-09

https://www.them.us/story/pepfar-hiv-aids-africa-marco-rubio-donald-trump

https://apnews.com/article/cdc-hiv-administration-for-a-healthy-america-8309109b91e6e4025878f335ea15dc96

https://www.ungeneva.org/en/news-media/news/2025/01/102724/unaids-welcomes-us-decision-keep-funding-life-saving-hiv-treatment

Afonso Freitas

Research Editor & Writer

The Economics of Mindfulness: Why Wellbeing Is a Business Case

Reading Time: 5 minutes

Reframing Wellbeing in the Modern Workplace 

As the nature of work becomes increasingly complex, digital, and fast-paced, employee wellbeing has emerged as a critical driver of organizational success. Far from being a peripheral HR topic, psychological wellbeing directly impacts core business outcomes – from productivity and innovation to turnover and engagement. The notion that investing in wellbeing is costly or optional is increasingly contradicted by empirical evidence showing that it is, in fact, a smart economic decision. 

Workplaces where employees report higher levels of subjective wellbeing – particularly job satisfaction – demonstrate significantly better performance outcomes, including labor productivity, output quality, and profitability. These relationships persist even when controlling for other HR policies, highlighting wellbeing as a distinct and measurable source of competitive advantage. 

Moving Beyond Perks: Systemic Approaches to Wellbeing 

Workplace wellness initiatives often focus on individual-level solutions like meditation apps, fitness memberships, or lunchtime yoga. While these efforts may reduce short-term stress, they fail to address the structural conditions that give rise to chronic strain, disengagement, and mental health risks. 

Interventions are more effective at the organizational or group level. Changes to work schedules, job roles, or team dynamics – especially those that increase employees’ control and participation – have demonstrated a broader and more sustainable impact on wellbeing. Employees who have autonomy in their tasks and a voice in how work is structured consistently report higher levels of job satisfaction, lower stress, and improved work–life balance. These outcomes are amplified in environments that support open communication and shared decision-making. 

Such systemic approaches suggest that wellbeing is not the result of individual resilience, but of healthy, empowering work environments that are intentionally designed. 

Technology and the New Frontier of Workplace Wellbeing 

In response to hybrid and remote work environments, organizations are increasingly turning to digital tools to support mental health and wellbeing. From immersive virtual reality (VR) environments that simulate calming nature scenes to AI-based tools that monitor emotional states via facial expressions, biometric data, or tone of voice, technology now plays a growing role in the design of workplace wellbeing strategies. 

Virtual reality programs have shown promising results in reducing stress and promoting relaxation in various workplace settings. Even short VR interventions with nature-based visuals or guided breathing exercises have been associated with measurable improvements in employee wellbeing. These technologies can serve as accessible and time-efficient micro-breaks, particularly in demanding or high-pressure environments. 

At the same time, the use of emotional AI raises critical ethical concerns. While emotion-recognition systems promise to enhance management decisions and detect early signs of burnout, they also risk turning the workplace into a zone of surveillance. Monitoring affective states without transparent consent or context can undermine psychological safety rather than support it. If technologies are used to control rather than empower employees, they may backfire – reducing trust and increasing stress. 

The key lies in intentional design and ethical implementation. When used responsibly and transparently, digital wellbeing tools can extend access to support and complement systemic approaches to workplace culture. However, technology must remain a tool – not a substitute – for genuine human connection, autonomy, and care. 

Wellbeing as a Catalyst for Innovation 

Wellbeing not only prevents burnout – it enables innovation. Employees who perceive their work as meaningful and values-aligned are more likely to engage in creative thinking, share new ideas, and take initiative. When employees experience purpose and psychological safety, their engagement spills over into behaviors that benefit the organization as a whole. 

Studies indicate that this effect is strengthened when organizational values align with employees’ own spiritual or ethical beliefs. A sense of authenticity and shared purpose in the workplace fosters emotional connection, which in turn drives proactive contributions and innovative work behavior. 

Resilience as a Buffer to Emotional Strain 

In emotionally intense or high-stakes sectors, such as healthcare, workplace resilience plays a critical role in protecting psychological wellbeing. Employees working under high stress, such as nurses in mental health services, report substantially better wellbeing when they experience resilience-supportive conditions like strong team relationships, opportunities for growth, and autonomy in clinical decisions. Higher resilience levels are associated with lower levels of anxiety, depression, and mental distress – even when job demands remain high. 

These findings affirm multidimensional models of wellbeing, which emphasize not just happiness or the absence of illness, but the capacity to grow, feel connected, and exercise agency in the face of adversity. 

From Support Programs to Cultural Shift 

Employee Assistance Programs (EAPs) remain widely used and often valued as accessible tools for short-term counselling and support. However, their long-term effectiveness depends on integration with broader workplace strategies. EAPs that operate in isolation, without addressing organizational culture or workload issues, may offer limited benefits. When combined with systemic measures – such as leadership development, trauma-informed management, or inclusive policy changes – EAPs can serve as effective pillars within a comprehensive wellbeing strategy. 

Designing for Sustainable Human Performance 

The research is clear: organizations that invest in structural wellbeing – not just individual coping – unlock higher engagement, greater innovation, and stronger business outcomes. Mindfulness, autonomy, psychological safety, and meaningful work are not luxury goods; they are essential design principles for the future of work. 

The economics of mindfulness lies in creating environments where people can thrive – not just survive. In doing so, companies don’t just promote wellbeing – they build better, more adaptive organizations for the long term. 

Sources

Bryson, A., Forth, J., & Stokes, L. (2017). Does employees’ subjective well-being affect workplace performance? Human Relations, 70(8), 1017–1037. 

Delgado, C., Roche, M., Fethney, J., & Foster, K. (2021). Mental health nurses’ psychological well-being, mental distress, and workplace resilience. International Journal of Mental Health Nursing, 30, 1234–1247. 

Fox, K. E., Johnson, S. T., Berkman, L. F., Sianoja, M., Soh, Y., Kubzansky, L. D., & Kelly, E. L. (2022). Organisational- and group-level workplace interventions and their effect on multiple domains of worker well-being: A systematic review. Work & Stress, 36(1), 30–59. 

Kirk, A. K., & Brown, D. F. (2003). Employee assistance programs: A review of the management of stress and wellbeing through workplace counselling and consulting. Australian Psychologist, 38(2), 138–143. 

Riches, S., Taylor, L., Jeyarajaguru, P., Veling, W., & Valmaggia, L. (2024). Virtual reality and immersive technologies to promote workplace wellbeing: A systematic review. Journal of Mental Health, 33(2), 253–273. https://doi.org/10.1080/09638237.2023.2182428 

Mantello, P., & Ho, M. T. (2024). Emotional AI and the future of wellbeing in the post-pandemic workplace. AI & Society, 39, 1883–1889. https://doi.org/10.1007/s00146-023-01639-8 

Salem, N. H., Ishaq, M. I., Yaqoob, S., Raza, A., & Zia, H. (2022). Employee engagement, innovative work behaviour, and employee wellbeing: Do workplace spirituality and individual spirituality matter? Business Ethics, Environment & Responsibility, 32(3), 657–669.

Mara Blanz

Research Editor & Writer

Friendship and Social Capital

Reading Time: 5 minutes

Human Features as Capital? A brief history 

In 1776, Adam Smith wrote, in An Inquiry into the Nature and Causes of the Wealth of Nations, that “The acquisition of talents during education, study, or apprenticeship, costs a real expense, which is capital in a person. Those talents are part of his fortune and likewise that of society”. This idea might seem quite intuitive to an inhabitant of the 21st century, and even the greenest economist would associate these words with the foundation of human capital theory.

However, until the last century, this concept was actually quite unpopular. As Theodore Schultz points out in Investment in Human Capital (1961), investment in humans is not devoid of moral and philosophical issues. His words, “It seems to reduce man once again to a mere material component, to something akin to property”, are especially evocative, considering that the 13th Amendment to the U.S. Constitution, which abolished slavery, had entered into force less than a century before.

With the advent of statistics and nation-level measurement around World War II, researchers began to observe that increases in national output could not be fully explained by increases in physical capital. It became possible to link the accumulation of skills, capabilities, and knowledge by humans with these unexplained variations in growth.

Today, the World Bank defines human capital as “The knowledge, skills, and health that people invest in and accumulate throughout their lives, enabling them to realize their potential as productive members of society.”

But what aspects of the multifaceted human being are included in this definition? Reading further, the World Bank specifies: “Investing in people through nutrition, health care, quality education, jobs and skills helps develop human capital, and this is key to ending extreme poverty and creating more inclusive societies.”

Human Relationships as Capital

After accepting the “capitalization” of individuals’ traits, a more recent step has been recognizing that humans are social animals and, therefore, that their relationships are a fortune too. The concept of social capital generally refers to social relationships between people that have productive outcomes. In a nutshell, Portes (1998) explains it as “whereas economic capital is in people’s bank accounts and human capital is inside their heads, social capital inheres in the structure of their relationships”.

Social capital is certainly a peculiar form of capital: it does not reside in any individual entity but is embedded in society; it lies in relationships and is rooted in networks. However, just like any other type of capital, it requires investment and maintenance to yield returns.

If defining the determinants of human capital is not an obvious task, defining those of social capital is even more challenging. The Institute for Social Capital indicates a range of dimensions, including trust, togetherness, volunteerism, generalized norms, everyday sociability, and neighborhood connections.

In essence, applying this theory, helping an old lady cross the street, having a neighbor who looks after your child when you’re sick, or trusting the police—all of these actions contribute to a web of reciprocity that will eventually benefit either the individual or the broader society.

Just as human capital was initially controversial, social capital was—and perhaps still is—a contested concept. Its reputation certainly grew after the publication of Making Democracy Work: Civic Traditions in Modern Italy by the political scientist Robert Putnam. Focusing on Italian regional governments, he found that government performance was closely linked to traditions of civic engagement.

Friendship as Social Capital

Within this broader framework, friendship emerges as a particularly powerful and personal expression of social capital — one that not only supports emotional well-being but also shapes long-term economic and social outcomes.

If friendship is part of social capital, then its impacts somehow extend beyond the individual to society at large. And it might be more powerful than one would think. In a 2022 study on networks and friendships, Raj Chetty and colleagues found that racial segregation, education, and family structure were not as important as cross-class connections in determining upward social mobility. In fact, among all observed components of social capital, friendship across socioeconomic lines was the only one driving mobility. Social cohesion and civic engagement, by contrast, did not seem to play a role.

The relationships between friends also shape the social tissue of communities. In Bowling Alone, Putnam describes, through the whimsical metaphor of bowling, how American society, once strong and civically engaged, is now degrading. The image of many lonely bowlers, with its nostalgic comparison to bowling leagues playing together, reflects the individualist drift of society. From this point of view, friendship and social interactions are a sort of public good, not just a private comfort. The weakening of these relationships has consequences that go beyond individual loneliness and undermine the health of society.

Relationships and Policy Makers

Friendships, connections, networks, and trust — they all contribute to both individual well-being and a healthier, more cohesive society. Yet we rarely hear politicians or policymakers address these topics directly. In an era in which society is becoming increasingly individualistic, the case for investing in social capital becomes even more urgent. Urban planning, for instance, can be intentionally designed to promote cross-class connections and neighborhood friendships. Creating public and recreational spaces that facilitate meeting and incentivize people to socialize, and promoting sports and other community-strengthening activities, can be impactful policies for boosting social capital. Reinvesting in friendships, and in the ways of making them happen, is not just about enhancing well-being; it is a necessary step toward rebuilding a strong social fabric, which can sometimes be as important as an educated or healthy community.

Sources:

Chetty, R., et al. (2022). Social capital I: Measurement and associations with economic mobility. Nature. https://www.nature.com/articles/s41586-022-04996-4

Goldin, C. (2016). Human Capital. https://scholar.harvard.edu/files/goldin/files/goldin_human_capital.pdf

Institute for Social Capital. https://www.socialcapitalresearch.com/literature/evolution/

Putnam, R. D., Leonardi, R., & Nanetti, R. Y. (1993). Making Democracy Work: Civic Traditions in Modern Italy. Princeton University Press. https://doi.org/10.2307/j.ctt7s8r7

Putnam, R. D. (1995). Bowling Alone: America’s Declining Social Capital. Journal of Democracy, January 1995, 65–78.

Schultz, T. W. (1961). Investment in Human Capital. The American Economic Review, 51(1), 1–17. http://www.jstor.org/stable/1818907

World Bank: The Human Capital Project. https://www.worldbank.org/en/publication/human-capital

Veronica Guerra

Research Editor & Writer

Artificial Intelligence and Ethics: A Necessary Debate

Time to read: 6 minutes

Artificial Intelligence (AI) is no longer a futuristic concept but an integral part of modern society. It shapes decisions in finance, healthcare, law enforcement, and social media, influencing how people interact with technology and each other. The rapid integration of AI, however, brings with it a host of ethical concerns. Questions about fairness, accountability, and transparency challenge the assumption that technological progress is inherently beneficial. AI does not exist in a vacuum—it reflects the values and biases of those who create and deploy it. While ethical AI has become a widely discussed concept, turning principles into action remains a significant challenge.

Between Innovation and Responsibility

The potential benefits of AI are vast. Automated systems can improve efficiency, analyze massive datasets, and assist in complex decision-making processes. In industries such as healthcare, AI-driven models can detect diseases early, optimize treatment plans, and personalize medical recommendations. In business, predictive analytics can enhance supply chain management and customer experiences. Despite these promising applications, the ethical risks of AI cannot be ignored.

A key issue lies in the tension between innovation and responsibility. Companies and developers race to ship new solutions, often prioritizing speed and market dominance over careful ethical consideration. AI ethics frameworks have been introduced to address this, but they frequently lack enforceability, leaving ethical concerns in the hands of the very entities that stand to profit from AI’s widespread adoption.

Challenges of Ethical Implementation

Ethical AI is easier to discuss than to implement. One of the greatest barriers is the lack of transparency of AI systems. Many machine learning models operate as “black boxes,” meaning their decision-making processes are difficult to interpret, even by their creators. This lack of transparency complicates accountability, making it unclear who should be held responsible when AI systems make biased or harmful decisions.

Another persistent challenge is bias in AI models. AI systems are trained on historical data, which often contains existing biases related to race, gender, and socioeconomic status. Rather than eliminating human prejudice, AI has the potential to reinforce and amplify systemic inequalities. Addressing these biases requires a combination of diverse training datasets, algorithmic audits, and ongoing oversight—none of which are currently standard practices across industries.

Additionally, economic incentives often clash with ethical considerations. The AI industry is dominated by tech giants that compete for market share, patents, and financial gains. Ethical concerns, such as privacy and fairness, are often secondary to profit-driven objectives. Without clear regulatory frameworks, companies can claim adherence to ethical principles while continuing practices that favor commercial success over social responsibility.

Bridging the Gap Between Theory and Practice

For AI ethics to move beyond discussion and into action, structural changes are necessary. Regulatory enforcement is one crucial step. Governments and international organizations must establish clear legal guidelines that define ethical AI development and deployment. Without binding regulations, AI ethics remains largely voluntary, dependent on corporate goodwill rather than enforceable standards.

Another important approach is enhancing AI explainability. Researchers and developers need to prioritize the creation of AI systems that are interpretable and understandable. This includes designing models with built-in transparency measures, providing clear documentation on decision-making processes, and ensuring that AI-driven recommendations can be challenged when necessary.

Additionally, inclusive AI development is crucial. Many AI development teams lack diversity not only in gender and ethnicity but also in socioeconomic background, cultural perspective, and disciplinary expertise, which limits their ability to recognize and mitigate biases in their models. Bringing a broader range of perspectives into AI research and implementation, through collaboration between technologists, ethicists, policymakers, and affected communities, helps ensure that AI serves a wider spectrum of societal needs.

Case Study: IBM’s Ethical AI Approach

IBM (International Business Machines Corporation) has positioned itself as a leader in ethical AI by actively addressing issues of fairness, transparency, and accountability. Unlike many companies that focus solely on AI innovation, IBM has taken significant steps to integrate ethics into AI development through its AI Ethics Board, which oversees responsible AI deployment.

A key contribution is the company’s focus on fairness and transparency. IBM has developed the AI Fairness 360 toolkit, an open-source library designed to help developers detect and mitigate biases in machine learning models. By making these tools publicly available, the company encourages greater transparency and accountability across the AI industry.

The company has also taken a strong stance on regulatory engagement, advocating for clear legal frameworks to govern AI systems. Unlike some competitors that resist regulation, it supports AI governance standards that ensure responsible development and deployment.

A notable example of the firm’s commitment to ethical AI is its decision to exit the facial recognition market in 2020. Concerns over racial bias and mass surveillance led IBM to discontinue its facial recognition services, citing the technology’s potential for misuse in law enforcement and violations of civil rights. This decision demonstrated that companies can prioritize ethics over profitability, setting a precedent for responsible AI business practices.

IBM’s approach to ethical AI implementation offers several key lessons. The company has demonstrated the importance of proactive governance by establishing an internal AI Ethics Board, ensuring that ethical considerations are embedded throughout the AI development process. To enhance transparency and mitigate bias, it has developed open-source tools such as AI Fairness 360, which help detect and reduce discriminatory patterns in machine learning models. Additionally, the corporation has been a strong advocate for regulatory frameworks, collaborating with policymakers to create enforceable standards that promote responsible AI governance. While these initiatives are not without challenges, they provide a blueprint for other organizations seeking to balance AI innovation with ethical responsibility.
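The kind of group-fairness check that toolkits like AI Fairness 360 automate can be illustrated in a few lines. The sketch below computes the disparate-impact ratio (the favorable-outcome rate of an unprivileged group divided by that of a privileged one) on entirely hypothetical hiring data; it is a standalone illustration of the metric, not the toolkit’s own API.

```python
# Standalone sketch of a group-fairness metric of the kind computed by
# toolkits such as AI Fairness 360. All data below are hypothetical.

def disparate_impact(outcomes, groups, privileged):
    """Favorable-outcome rate of the unprivileged group divided by
    that of the privileged group; values near 1.0 suggest parity."""
    counts = {True: [0, 0], False: [0, 0]}  # is_privileged -> [favorable, total]
    for y, g in zip(outcomes, groups):
        bucket = counts[g == privileged]
        bucket[0] += y   # count favorable outcomes (y == 1)
        bucket[1] += 1   # count group members
    priv_rate = counts[True][0] / counts[True][1]
    unpriv_rate = counts[False][0] / counts[False][1]
    return unpriv_rate / priv_rate

# Hypothetical hiring decisions (1 = hired) for applicants from two groups.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(outcomes, groups, privileged="A")
print(f"disparate impact: {ratio:.2f}")
```

Ratios well below 1.0, commonly below the “80% rule” threshold of 0.8, are the sort of signal an algorithmic audit would flag for human review.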

A Call for Collective Responsibility

The ethical challenges posed by AI are not solely the responsibility of developers or policymakers—society as a whole must engage in shaping the future of AI. Consumers should be informed about how AI affects their lives, researchers must prioritize ethical considerations in innovation, and governments must create legal structures that uphold fairness, transparency, and accountability.

The debate around AI ethics is not simply about mitigating harm; it is about ensuring that technological progress aligns with human values. AI should not be left to develop unchecked under the assumption that efficiency outweighs ethical concerns. A proactive approach—one that prioritizes responsible AI practices over damage control—will be essential in defining how AI serves humanity in the years to come.

Sources

• Arbelaez Ossa, L., Lorenzini, G., Milford, S. R., Shaw, D., Elger, B. S., & Rost, M. (2024). Integrating ethics in AI development: A qualitative study. BMC Medical Ethics, 25(10). https://doi.org/10.1186/s12910-023-01000-0
• IBM. (2020). IBM CEO’s letter to Congress on facial recognition and responsible AI policy. IBM Newsroom. https://newsroom.ibm.com/2020-06-08-IBM-CEO-Arvind-Krishna-Issues-Letter-to-Congress-on-Racial-Justice-Reforms

Mara Blanz

Research Editor & Writer

Survivorship bias: the omnipresent skewer of decisions

Reading time: 7 minutes

Survivorship bias is a cognitive shortcut that takes place when the successful or surviving part of a group is mistaken for the whole group, due to the invisibility of the group’s failures. A common example of survivorship bias is the assumption that older buildings and architecture were much more durable and stable, as suggested by the common saying “they don’t make them like they used to”. This assumption fails to acknowledge that only the sturdier buildings have survived into the present, while the rest, the majority, have been destroyed or replaced. A contributing factor to this erroneous belief is that we haven’t yet experienced the durability or, in this case, the survivability of modern buildings: presently, we are surrounded by both good- and bad-quality construction; the former will be preserved in time while the latter might not, just as with the older ones.

“When we miss what we’re missing” is how author David McRaney describes survivorship bias. Indeed, if failures are invisible, successes are in the spotlight, and we not only fail to acknowledge that the failures might have held useful information, but fail to acknowledge their existence altogether.

This bias is harmful because of how common it is and how easily it affects decision making. Furthermore, it affects a myriad of sectors, ranging from business and finance to science and even medicine. During the Covid-19 pandemic, for instance, healthcare systems struggled to keep up with testing, which might have skewed survival and death rates. Medical studies are also more often performed on stronger and younger patients who survive initial diagnoses, as weaker patients are less likely to survive long enough to participate in them, leading to overestimations of successful outcomes.

When thinking about research, it is obvious how error-inducing this bias is. Indeed, to be effective, research must be thorough and take into account as many variables as possible. More specifically, statistics relies on surveys and analyses of populations and, to be accurate, these have to assemble groups that fully represent them. Survivorship bias skews researchers into looking at only a subset of those populations, leading to incomplete research. Similarly, when making decisions without analyzing all the available data, individuals will not be making the best choices for themselves.

          Background 

          A study that took place during WW2 has become the prototype of survivorship bias.  

          For context, Abraham Wald was born in 1902 in what was then the city of Klausenburg in the Austro-Hungarian Empire, today’s Cluj-Napoca in Romania. He developed an interest and talent for mathematics and went on to study the subject at the University of Vienna. He later moved to the United States to work at the Austrian Institute for Economic Research. Then, during World War II, Wald joined a classified program that assembled statisticians to focus on military research and strategy to help in the war, the Statistical Research Group (SRG). 

          At the time, the military came to the SRG with data on the placement of enemy bullet holes on planes that had come back from battle, represented by the red dots in the image below. The first conclusion reached was to install more armor in the areas where the planes were getting hit the most. However, Wald pushed the group to do the opposite: since the planes being analyzed were the ones that had come back from battle, the areas in more need of protection were the ones without apparent bullet holes, the ones where the planes that did crash must have been shot. Thus, the missing bullet holes were on the missing planes. This is where the notion of survivorship bias was first coined. In fact, the decision to reinforce the areas of the planes ridden with bullets failed to consider that the planes being looked at were the ones that made it back safely, the ones that survived. While the others’ perceptions had been distorted by the survivorship bias, Wald overlooked it and was instrumental in the reinforcement of the aircraft.  

          Had it not been for him, the group would have made a major mistake despite the stakes being so high, which illustrates how much bias affects decision making. 

          Diagram used to represent the bullet holes on the aircrafts that came back from battle 
          Abraham Wald
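          Wald’s insight can be illustrated with a small simulation. The setup below is purely illustrative (four hit areas, with only engine hits assumed fatal), not historical data:

```python
import random

random.seed(42)

# Hypothetical illustration: each plane takes one hit, uniformly at random,
# in one of four areas, but only engine hits are fatal. We then count the
# bullet holes visible on the planes that made it back.
AREAS = ["engine", "fuselage", "wings", "tail"]
FATAL = {"engine"}

returned_hits = {area: 0 for area in AREAS}
for _ in range(10_000):
    hit = random.choice(AREAS)
    if hit not in FATAL:          # the plane survives and is inspected
        returned_hits[hit] += 1   # fatal hits never show up in the sample

print(returned_hits)  # the engine count is always 0
```

The surviving sample shows holes everywhere except the engine, precisely because engine hits removed those planes from the data: the missing bullet holes are on the missing planes.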

          Survivorship bias in the business world 

          Survivorship bias has also crept into the business and finance environment and is apparent in various situations.  

          The first instance is the glorification of successful businesses and people. Every now and then, we hear an inspiring story about how some college dropout became a millionaire. Concrete examples are Steve Jobs, Mark Zuckerberg, and Bill Gates, all of whom quit university and went on to become some of the richest people on the planet. Their fame has made them into inspirations and examples to follow. However, the chances of becoming a millionaire after dropping out of college are slim. In fact, according to Ramsey Solutions’ National Study of (American) Millionaires in 2024, 88% of millionaires graduated from college. Furthermore, the success of the examples above tends to be attributed solely to hard work, when in reality, for every successful college dropout, there are thousands who are not as lucky despite equivalent ambition. Variables such as luck, timing, networks and socioeconomic background also play a significant part in the path to success. 
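          The statistic above is easy to misread, because it conditions on success: it tells us the share of graduates among millionaires, not the chance that a dropout becomes a millionaire. A quick Bayes’ rule calculation makes the difference concrete (the population shares below are hypothetical assumptions, chosen only for illustration):

```python
# The survey statistic conditions on success: P(graduate | millionaire) = 88%.
# What the dropout-to-riches story needs is P(millionaire | dropout).
# Population shares below are hypothetical, for illustration only.
p_millionaire = 0.02          # assumed share of millionaires in the population
p_grad_given_mill = 0.88      # from the Ramsey study cited above
p_dropout = 0.60              # assumed share of non-graduates overall

# Bayes' rule: P(millionaire | dropout)
p_dropout_given_mill = 1 - p_grad_given_mill
p_mill_given_dropout = p_dropout_given_mill * p_millionaire / p_dropout
print(f"{p_mill_given_dropout:.3%}")  # 0.400% under these assumptions
```

Even with generous assumptions, the probability of the “dropout millionaire” path is a fraction of a percent, which the survivorship-biased success stories never show.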

          A similar example involves what are called “unicorn start-ups”. This term, coined by venture capitalist Aileen Lee in 2013, refers to a private startup company valued at over one billion dollars. Examples of unicorn start-ups are Uber Technologies Inc, Airbnb and SpaceX. People venturing into the business world often strive to one day found or join such start-ups, since unicorns like the ones above are viewed as the archetypes of success and entrepreneurship. At the same time, according to Forbes, 8 in 10 startups will fail within the first year of operation, and unicorn start-ups got their name precisely because of their statistical rarity. 

          Looking up to and trying to emulate success stories is an example of survivorship bias and its consequences. Firstly, it drastically limits the knowledge and awareness needed to have a chance of actually succeeding, by leaving out important voices: the voices of those who failed, which are vital to understanding success. To quote author David McRaney again, “The advice business is a monopoly run by survivors”; only their advice and stories are deemed relevant. Secondly, it leads to overly high degrees of optimism, which can encourage risk-prone decisions. Finally, it suggests causation from correlation by creating the illusion of patterns: dropping out of college does not put you on the path to becoming a millionaire, even though a few millionaires did so. 

          Studies on mutual funds are perhaps the most famous example of survivorship bias in the business world. A mutual fund is an investment fund that pools money from investors to purchase stocks, bonds, and other assets and securities. When looking at mutual funds, studies tend to include only the funds that currently exist and omit data on funds that no longer do. Funds cease to exist through mergers and acquisitions, but also through restructuring and poor performance. This failure to count lost funds leads to misleading, positively biased results that do not depict the returns realized by all mutual funds, since the negative returns of funds that closed are never counted. 
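          A minimal numerical sketch shows how large the distortion can be. The funds and returns below are entirely made up for illustration:

```python
# Hypothetical annual returns (%) for five funds; two closed after poor
# performance and would be missing from a survivors-only study.
funds = {
    "Fund A": (8.0, True),     # (return, still operating?)
    "Fund B": (6.5, True),
    "Fund C": (-12.0, False),  # closed: invisible to survivors-only studies
    "Fund D": (7.2, True),
    "Fund E": (-9.5, False),   # closed
}

survivors_avg = sum(r for r, alive in funds.values() if alive) / sum(
    1 for _, alive in funds.values() if alive)
all_avg = sum(r for r, _ in funds.values()) / len(funds)

print(f"Survivors only: {survivors_avg:.1f}%")  # the flattering picture
print(f"All funds:      {all_avg:.1f}%")        # the honest picture
```

In this toy sample, the survivors-only average is about 7.2% while the true average across all five funds is essentially zero: the same mechanism inflates real-world fund performance studies.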

          Finally, marketing campaigns can also transmit biased information. Many rely on attractive figures about client satisfaction and product durability: “90% of people loved the product!”. These figures are not necessarily biased or false, but it is important to look at their sources and the factors they consider, including the sample size and composition and how long the product was used. For instance, the study might have been set up for success by only using the testimonies of regular and loyal customers.  

          How to avoid the bias 

          Knowing about its existence and understanding how it can influence our judgement is already a huge step toward avoiding the bias. Being selective about data sources, always striving to see the bigger picture and practicing critical thinking are other ways of fighting against it. Since it is present in so many different situations, awareness of the bias can already lead to better and more informed decisions, from financial investments and ventures to medical and scientific conclusions, but also common opinions and values.  

          Conclusion 

          Survivorship bias is omnipresent in our everyday life, impacting our decisions and opinions. 

          However, it is not the only bias and many others like the anchoring, availability and confirmation biases also guide our conduct every day. Although it is impossible to be immune to them altogether as they are unavoidable cognitive occurrences, being aware of them and their significance is enough for a more informed point of view in a variety of subjects and, in particular, decisions as an economic agent. 


          Marta Nascimento


          Sources: 

          “Survivorship Bias – the Decision Lab.” n.d. The Decision Lab. https://thedecisionlab.com/biases/survivorship-bias

          Corporate Finance Institute. 2024. “Survivorship Bias.” May 24, 2024. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/survivorship-bias/

          Penguin Press. 2018. “Abraham Wald and the Missing Bullet Holes – Penguin Press – Medium.” Medium, June 17, 2018. https://medium.com/@penguinpress/an-excerpt-from-how-not-to-be-wrong-by-jordan-ellenberg-664e708cfc3d

          Ramsey Solutions. 2024. “The National Study of Millionaires.” October 3, 2024. https://www.ramseysolutions.com/retirement/the-national-study-of-millionaires-research

          TEDx Talks. 2015. “Missing What’s Missing: How Survivorship Bias Skews Our Perception | David McRaney | TEDxJackson.” https://www.youtube.com/watch?v=NtUCxKsK4xg. 

          Gratton, Peter. 2024. “Survivor Bias Risk: What It Is, How It Works.” Investopedia. September 18, 2024. https://www.investopedia.com/terms/s/survivorship-bias-risk.asp

          Peachman, Rachel Rabkin. 2024. “America’s Best Startup Employers 2024 Methodology.” Forbes, March 8, 2024. https://www.forbes.com/sites/rachelpeachman/2024/02/21/americas-best-startup-employers-2024-methodology/#:~:text=Anyone%20who%20has%20worked%20at,fail%20in%20the%20long%20run

          Bad Behaviour: a Behavioral Economics take on Corruption

          Reading time: 7 minutes

          What is corruption? Is it taking a bribe? Smuggling millions to a tax haven? Or skipping the line for a public service because the office clerk is your neighbor’s nephew’s kid? “Corruption is what those dirty bankers, politicians, and the referee who officiated the last match my club lost are guilty of, that’s what it is!” – any of us will say, enraged. “I” – we confidently add – “would never do it”.

          But is this really so straightforward? The prevalence of corruption in a given society can be hard to measure, both due to its secretive nature and differences in how it is defined. We rely essentially on what law enforcement records (a biased source if the authorities happen to be corrupt themselves) and on perception surveys, both of the general public and of experts on the matter (which may also be skewed if people have different ideas about what constitutes a corrupt act). But statistical issues aside, the truth is that corruption appears to be a worldwide phenomenon, and a relatively stable one at that. According to Transparency International, the Corruption Perception Index (measured on a 0-100 scale, 100 being the least corrupt) was, in 2021, lower than 50 for two-thirds of the countries assessed. 131 countries “made no significant progress against corruption over the last decade”. Portugal ranked 32nd least corrupt out of 180 countries, at 62 points.

          Corruption Perception Index, 2021

          So, it seems we don’t really think of ourselves as corrupt, but we perceive corruption around us. Is it external factors and mechanisms that influence a person’s choice to engage in the kind of behavior that we call corruption? We know what an economist’s point of view would be on the matter: each choice is dependent on incentives and preferences (of the agent making that choice), and on a rational cost-benefit analysis of the situation. And, like with any decision process, Behavioral Economics also has something to add on the subject: the agent’s choice is conditioned by cognitive biases and bounded rationality. This means that people could be guided (or should we say, nudged?) towards a different behavior pattern. Let us now explore these ideas.

          Why are we corrupt?

          If there’s one thing we should remember when dealing with corruption, it is that corruption is harmful, undoubtedly undermining the potential for human and economic development. Corruption can be like a disease, spreading all over and destroying a system from within. Corruption, in fact, corrupts.

          Perception of Corruption by Institution, 2017

          At its core, an act of corruption is a break of trust. An agent is trusted with some power or task and is expected to act according to the best interest of those who placed that trust in him/her. We can think of it as a contract between society and the agent: the agent is trusted by society as a whole to act in society’s best interest. It is easy to see where the problem starts. Two things, together, provide an incentive for the contract to be breached: a conflict of interest between the agent and society, and asymmetry of information. In other words, there is a risk of corrupt behavior if the agent stands to gain something from breaking the contract and can do it without being caught. 

          Given this, the economic reasoning for acts of corruption is simple enough – an agent will rationally assess the costs and benefits of breaking ethical rules and do it if the benefit exceeds the cost. So, a public official who is offered a 5-million-euro bribe will simply perform a cost-benefit analysis (5 million in my pocket vs some time in jail if caught) and decide accordingly.
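          The official’s cost-benefit analysis can be sketched as a simple expected-value calculation. The penalty figure and detection probabilities below are illustrative assumptions, not empirical estimates:

```python
# A minimal sketch of the rational cost-benefit framing described above.
# All numbers are illustrative assumptions, not empirical estimates.
def expected_net_gain(bribe, p_caught, penalty):
    """Expected payoff of accepting: the gain minus the expected punishment cost."""
    return bribe - p_caught * penalty

bribe = 5_000_000       # the 5-million-euro bribe from the example
penalty = 20_000_000    # assumed monetised cost of jail time and disgrace
for p in (0.1, 0.3, 0.5):
    gain = expected_net_gain(bribe, p, penalty)
    print(f"p(caught)={p}: expected gain = {gain:,.0f} -> "
          f"{'accept' if gain > 0 else 'refuse'}")
```

Note how raising the probability of being caught flips the decision from “accept” to “refuse”, which is exactly why the policies discussed next focus on transparency and accountability.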

          Following this line of reasoning, anticorruption policies should focus on increasing transparency and accountability, decreasing asymmetry of information (making it harder to act without our actions becoming public knowledge), and better aligning the agent’s and society’s interests, so that not breaking the contract becomes in the agent’s best interest.

          In a simple, perfectly rational world, this would be all. For better or for worse, the world we live in is hardly that simple.  

          A Behavioral Economics Approach

          A person’s actions are hardly ever determined solely based on costs and benefits. Any agent is affected by mental shortcuts, reciprocity, context, fast-thinking and social norms. People rarely go about their lives carefully deliberating every choice. Indeed, many decisions are automatic. For example, a public official may hire his friend’s nephew for his office without necessarily thinking about ethical rules or the public good. Nonetheless, this is a textbook case of nepotism.

          Another important mechanism is reciprocity – the “you scratch my back, I’ll scratch yours” mentality. This could be seen either in a large-scale favor exchange between two powerful people or in something as small as a driver bribing a police officer to avoid getting a speeding ticket.

          Bribery Payers Index, 2011

          But it is not all about automaticity in decision-making and ethical blind spots. Although no one likes to see themselves as the bad guy, even when agents are aware of the dubious nature of their actions they may still choose to engage in corrupt acts. Why?

          The moral “weight” of corruption is lighter when the agent feels somehow distanced from the action. Experimental evidence shows that having an intermediary as a third party who arranges the bribe (someone to “do the dirty work”, so to speak) significantly increases the percentage of people willing to offer and accept bribes! With someone else handling the exchange, bribery starts to feel like an ordinary transaction.

          Another problem is our tendency to consider only obvious and immediate results (fast thinking). Corruption presents an obvious, palpable gain, and is often thought of as a “victimless crime”. It is easier to break a rule if no one seems to be worse off for it. However, according to the United Nations, corruption, bribery, theft, and tax evasion cost developing countries at least US$1.26 trillion each year, money that could have been invested in much-needed social and economic policies.

          Finally, let us not forget that as human beings we tend to abide by the perceived behavior of the majority. As a matter of fact, we are easily influenced by our peers, with the underlying mentality of “If everyone is doing it, what’s the big deal if I do it too?” being heavily present in many of the choices we make, corruption decisions not excluded.

          What can we do about it?

          So, does this mean that nothing can be done about corruption, that it should be accepted as a feature of humanity, and that we may as well have to learn to live with it? Far from that! Truth is, by identifying the cognitive biases and mental shortcuts that steer people towards corruption, we are also learning which buttons to push to pull them away from it.

          A simple way to overcome the ethical blind spot problem in our decision-making is to reiterate the ethical principles a person is already trying to live by. An experiment was conducted where participants were asked to solve a math test while being given incentives to cheat. However, some were first asked to write down as many of the Ten Commandments as they could remember. Those participants cheated much less than the control group, having been reminded beforehand of the existence of a moral code (not even necessarily their own). As it turns out, awareness matters.

          In turn, this opens the door for new anticorruption initiatives. Businessmen could, for example, be asked to sign a document stating their awareness of the organization’s ethical code. Politicians may be required to publicly state all their possible conflicts of interest before taking office. 

          5th Pillar, an Indian NGO, created a Zero Rupee note with a pledge against corruption, to be given to officials who ask for a bribe

          Moreover, nudges that communicate the ethical standards people have for each other may be helpful, again, as a reminder of the trust society puts in each individual, which may work in itself as an incentive for citizens to live up to that trust.

          We know these small nudges are hardly the definitive solution to end corruption once and for all – transparency and accountability measures are still the ones most likely to have a noteworthy impact. However, the nudges we discussed may be just what is needed to curb the small corrupt tendencies in a society in which more sizable schemes are tolerated or even go unnoticed. We may never live in a fully honest world, but awareness of what makes it dishonest is crucial to make sure it never becomes fully, and irreversibly, corrupt.


          Sources: Muramatsu, Roberta; Bianchi, Ana Maria. (2021). “The big picture of corruption: Five lessons from Behavioral Economics”. In Journal of Behavioral Economics for Policy. Vol. 5, Special Issue 3: Roots and Branches, pp. 55-62., Muramatsu, Roberta; Bianchi, Ana Maria. (2021). “Behavioral Economics of Corruption and Its Implications”. In Brazilian Journal of Political Economy. Vol. 41 (1)., Ma, Qingguo; Yan Min. (2018). “Psychological, Behavioral, and Economic Perspectives on Corruption”. In International Journal of Psychology and Psychoanalysis., Statista, Our World in Data, India Times.

          Mariana Gomes

          Leonor Cunha

          Joana Brás

          Let’s play: Behavioral Game Theory

          Reading time: 8 minutes

          Picture this scenario: you’ve been locked in an interrogation room for hours, and the police have finally laid their cards on the table. They know you’re guilty and have your partner in crime in the other room. The police need a confession, and the one to provide it can walk out freely, leaving the other to serve a long sentence in jail. If you both confess, you both go to jail, but for a shorter time. However, if neither does, both go to prison but receive an even shorter sentence. It is up to you to decide. What do you do? 

          Would you confess or keep quiet?

          If you’ve ever been formally introduced to game theory, you know that its answer is that both of you should run to the officers and tell them the truth. This is the economic prediction as it’s the rational thing to do. But would you do it? Most people’s answer is the economists’ favorite response to (almost) every question: it depends. Who is the other person? How much do you trust them? Is it your friend or your 3rd-floor neighbor? Is it your mother? And how long are those sentences? You may hold your ground facing 6 months, but what if you’re looking at 25 years? Are you even truly guilty? Are you willing to trade away your integrity for a shorter sentence?

          Conventional game theory looks at the game’s elements (what are the actions and the payoffs?) for the answer. Behavioral game theory tries to answer all the other questions.

          Game Theory vs Behavioral Game Theory

          But let’s back away for a second: what even is game theory, and how exactly is behavioral game theory different?

          Well, game theory’s main objective is to predict behavior through a systematic, mathematical approach. No need to close the article, this is only the “scary” version of the definition. Thankfully, we are not in a Microeconomics class, so we can use a much more pleasant definition: game theory analyses games. Of course, by “games”, we do not mean football or basketball (although that would be fun) but are instead referring to any interaction between people (the players), where their behavior (actions) determines what they get out of the game (their payoffs). Any economic, political, or social interaction can be rewritten as a game, and thus seen through the lens of game theory. The interrogation room situation we started with is a classic example. It is frequently used as an introduction to the subject. The idea is that each player will look at their possible strategies (if my partner confesses, I can confess/not confess…) and where those strategies will land them (if we both confess, we go to jail…), deciding then what the optimal course of action will be. It’s like playing chess – if, for example, your opponent moves the bishop to B7, and you take your knight to C4, you’re doomed; if instead you move your queen to D6, checkmate! The optimal course of action: queen to D6. 

          Game Theory tries to find the optimal course of action
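          The interrogation-room game can be written down and solved in a few lines. The sentences below are illustrative numbers, chosen so that the game has the classic prisoner’s dilemma structure:

```python
# A sketch of the interrogation-room game. Payoffs are years in prison
# (illustrative numbers), lower is better; each entry is (player 1, player 2).
payoffs = {
    ("confess", "confess"): (5, 5),
    ("confess", "silent"):  (0, 10),
    ("silent",  "confess"): (10, 0),
    ("silent",  "silent"):  (1, 1),
}
actions = ["confess", "silent"]

def best_reply(opponent_action):
    # Player 1's action minimising their own sentence, given the opponent's move.
    return min(actions, key=lambda a: payoffs[(a, opponent_action)][0])

# Confessing is the best reply whatever the partner does: a dominant strategy.
print({opp: best_reply(opp) for opp in actions})
# -> {'confess': 'confess', 'silent': 'confess'}
```

Since the game is symmetric, both players confess: that mutual confession is the rational prediction mentioned above, even though both staying silent would leave each of them better off.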

          Game theory does a wonderful job predicting the outcome of such games, but the jump to real life can be tricky. Notice that the optimal course of action was left undefined. “Optimal” depends on what each person wants out of the game, on their preferences. The usual assumption is that players are self-interested to the extreme and completely rational, caring only about getting the best possible outcome for themselves, regardless of what happens to the other player.

          This is exactly where behavioral game theory steps into the picture. It takes a practical approach to games rather than a theoretical one. Game theory uses logic and mathematics to find out what a rational and self-interested player would do in a game, and then asserts that this is what people will choose in such a situation. Behavioral game theory takes the opposite path. It asks actual people to play the game and observes their behavior. Such experiments make it possible to see how different preferences affect human behavior (how things like altruism or fairness influence people’s decisions) and how that differs from the theoretical predictions. Why would this matter? Well, these preferences can be incorporated into the models, making them closer to the reality we know, and therefore allowing for better predictions of behavior.

          Playing the Game

          Ultimatum Game 

          The Ultimatum game is an early example of behavioral game theory’s experiments. In this game, one player is given a certain amount of money (say, 10€) and asked to split it with the other player in whatever way they want. The second player then decides whether to accept the offer or to reject it, in which case neither player gets anything.

          Conventional game theory’s prediction is that the first player should offer as little as possible (1 cent out of 10€) and pocket the rest, since the second one would have no reason to reject it – after all, 1 cent is better than nothing, right?

          The Ultimatum Game consists of proposing a split that the other player accepts or not

          Now, picture yourself playing the game. Do you accept such a low offer? Can you think of anyone who would? Do you think this is the right prediction? If not, congratulations! You are a wonderful forecaster of human behavior. In fact, when this experiment was first conducted, the average offer was equivalent to 3.5 €, and offers below 5€ were more likely to be rejected the further down they went. The experiment has been replicated over the years, with high and low amounts to be split, with consistent results.

          The preference uncovered here is known as negative reciprocity – being willing to “pay a price” (give up some amount) to punish unfair or inappropriate behavior in others. Upon seeing what they considered as an unfair split (a much too low offer from the first player), most players decided they would rather gain nothing than allow the other person to, in their eyes, treat them unjustly.
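          This taste for punishing unfairness can be sketched in the spirit of inequity-aversion models (such as Fehr and Schmidt’s): the responder values money but also dislikes falling behind the proposer. The envy weight below is an illustrative assumption, not an estimated parameter:

```python
# A sketch of negative reciprocity in the spirit of inequity-aversion models:
# the responder values money but dislikes unequal splits in their disfavour.
# The envy weight is an illustrative assumption, not an estimated value.
def responder_accepts(offer, total=10.0, envy=0.8):
    """Accept iff the offer's utility beats the zero utility of rejecting."""
    disadvantage = max((total - offer) - offer, 0)  # how far behind they fall
    utility = offer - envy * disadvantage
    return utility > 0

for offer in (0.5, 2.0, 4.0, 5.0):
    print(f"offer {offer}: {'accept' if responder_accepts(offer) else 'reject'}")
```

With these numbers, offers of 0.5€ and 2€ are rejected while 4€ and 5€ are accepted, matching the experimental pattern of low offers being turned down despite being better than nothing.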

          Dictator Game

          The dictator game is a slight modification of the Ultimatum game: here, the second player has no power to reject the offer. The first player (the dictator) proposes a split of the initial sum, and that is exactly what each one takes home, even if it means that the second player gets nothing at all. Of course, the theoretical prediction is exactly that – the “dictator” will choose to keep the entire 10€ to themselves and won’t offer anything to the second player.

          But when the experiment was conducted this was not what happened at all! In fact, around two thirds of people chose to offer the equivalent of 1€ to 5€, keeping the rest.

          Those unlikely nice “dictators” were displaying what is known as an altruistic preference. Someone with altruistic preferences is more content with an outcome if the well-being of others increases. That means they play not only with their own outcomes in mind, but also those of the others involved in the game, and prefer situations where other people also benefit. This behavior can be found in everyday interactions too: when people donate to charities or help someone in need, they are manifesting altruistic preferences.

          Altruistic behavior is found in everyday interactions

          Gift Exchange Game

          Now, for a break from ultimatums and offers, let’s look at the gift-exchange game. This is simply a game made to mimic the interaction between an employer and an employee. First, the “employer” offers the “employee” some amount of money (a “wage”). Then, the employee must perform a task to earn it. Now, what the task is in particular is not so important (it can be anything at all, as long as it is not completely effortless), what is really at stake is how much effort the “employee” puts into completing it.

          Game theory’s prediction here is that, whatever amount is offered, the “employee” will work as little as possible (self-interested as they are). But do you think that happens? Chances are, you’ve had to do some job in your life and put some amount of effort into it. Do you always do the least possible? As it turns out, most people don’t. In the experiments, they responded to more generous “wages” by working harder. They were, in fact, displaying negative reciprocity’s nicer counterpart: positive reciprocity, the willingness to reward generous actions. People with this preference respond positively to actions that benefit them: they go the extra mile when they feel that someone has acted in their best interest.

          People often choose to work harder if they feel they were offered a generous wage
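          The contrast between the self-interested prediction and the observed behavior can be sketched as two stylised effort rules. Both functions are illustrative assumptions, not a model fitted to the experiments:

```python
# Two stylised effort rules for the gift-exchange game (illustrative only).
def selfish_effort(wage):
    """The theoretical prediction: minimum effort, whatever the wage."""
    return 1

def reciprocal_effort(wage, fair_wage=5):
    """Positive reciprocity: effort rises with how generous the wage feels."""
    return 1 + max(wage - fair_wage, 0)

for wage in (5, 8, 12):
    print(f"wage {wage}: selfish effort {selfish_effort(wage)}, "
          f"reciprocal effort {reciprocal_effort(wage)}")
```

The selfish rule is flat at the minimum, while the reciprocal rule rewards generous wages with more effort, the pattern the experiments actually found.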

          Notice that neither of these experiments challenges the validity of standard game theory. Its systematic, logical process is still sound – games do have optimal courses of action. It is the cornerstone assumptions that fail: behavioral game theory shows that people in general are not merely self-interested, so what is “optimal” varies according to their preferences.  

          Behavioral game theory is a great example of what behavioral economics can do for economics as a science. It is not a replacement for traditional game theory, but a way to expand on and improve it: behavior models can arise from practice, not just from theory. After all, there’s no better way to figure out someone’s behavior than to observe it.


          Sources: American Economic Association, BehavioralEconomics.com, Blackwell Handbook of Judgment and Decision Making, Behavioural Economics: Introduction to Behavioral Game Theory and Game Experiments.

          Constança Almeida

          Mariana Gomes

          Leonor Cunha

          Towards a better tomorrow: The role of behavioral economics in mental health

          Reading time: 6 minutes

          Part II: A small nudge for man, a giant leap for mental health

          Where do nudges come in?

          We’ve talked about how widespread mental health struggles are and how important it is to pay them proper attention. It is particularly necessary to encourage people to take better care of their mental health, as well as to end the stigma around mental health issues. We will now examine how behavioral economics can be used for these purposes, namely through nudges.

          Nudges influence choices and behavior patterns, which can be critical to mental health

          But what, exactly, is a nudge? According to Thaler and Sunstein, the creators of the concept, “a nudge is any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives”. 

          Many of those struggling with their mental health do not seek the treatment they need. This may be related to the effects these issues have on decision making, which help-seeking models and interventions often fail to account for.

          This is exactly where behavioral economics comes in. It can help reduce engagement in behavior patterns that represent risk factors for the development of mental health problems, and provide adequate frameworks for encouraging help-seeking by the individuals affected by them. Nudges can, therefore, be a complementary and cost-effective strategy for preventing suicide and mental health issues, both by tackling risk factors and by effectively encouraging individuals to seek help.

          But what would such interventions look like? Well, it is necessary to understand that both the field of behavioral economics and our current generalized concern with mental health issues are still very recent. Therefore, they are still in the earliest stages of development, and studies can be costly and hard to implement, so no widespread intervention has taken place so far. Still, there is no lack of initiatives out there!

          There are some initiatives to support mental health

          Dodging the risks

          Ashleigh Woodend, Vera Schölmerich, and Semiha Denktaş, in their article “Nudges to Prevent Behavioral Risk Factors Associated With Major Depressive Disorder”, look at what they call risk factors: behavior patterns that can increase the odds of developing mental health problems (or worsen existing ones). They then propose a series of nudges that could be effective in tackling them. These negative behaviors include low physical activity, since exercising has a strong positive impact on mental well-being (it releases dopamine and improves self-esteem); inappropriate stress-coping mechanisms, as stress can be a powerful trigger of mental health problems; and inadequate maintenance of social ties, as healthy social interaction promotes psychological wellbeing (if the pandemic has taught us anything, it is that social isolation and mental health are no friends!).

          Nudges play on our cognitive biases. One such bias is our tendency to prioritize immediate reward over gains in the distant future. For example, how many times have you allowed yourself “just five more minutes” of sleep in the morning, even though you knew it would make you late for class? What if we could use this effect to make us want to get up and increase our physical activity? This can be done through a nudge called temptation bundling, combining an unpleasant activity with a pleasant one.

          For example, regarding the improvement of physical activity habits, you can try to fuse that unappealing morning run with something you will enjoy, like hanging out with friends or listening to some music. Or, if a trip to the gym seems like a punishment, throw in an episode of your favorite show. These are easy ways to nudge yourself into building an active lifestyle pattern.

          Listening to your favorite songs while running can be a way to nudge yourself into improving your physical activity habits

          Another great way to nudge behavior is by fiddling with the phrasing of the message we are trying to convey. It has been shown that positively framed messages are more effective than negatively framed ones in promoting prevention behavior. So, putting up a sign with a phrase like “If you meditate, you reduce your risk of mental health problems” is more likely to get people some quiet time with their own thoughts than one saying “If you don’t meditate, your risk of depression will increase”.

          When you go to the beach, do you wear sunscreen? And what if, on your way there, you notice a sign pointing out that without protection, you are likely to get sunburnt within half an hour of direct sunlight? Will you wear it now? Most people will. This is called a salience nudge, where some characteristic of a choice is brought to your immediate attention, “put under the spotlight”, in order to influence that same choice. And this can be done to decrease the risk of mental issues too. By simply highlighting, in the workplace, for example, that a significant number of people have engaged in stress management training, it is possible that more people will try it as well, learning appropriate coping mechanisms to keep their minds as healthy as possible through the difficulties that may come their way.  

          You are probably familiar with the feeling of meaning to do something but never actually getting to do it. Sure, you will read that book you’ve been wanting to read, catch up on your microeconomics study before the midterm gets too close, or finally try to chat a bit with your coworkers and get to know them better. Eventually. The thing is, we are not very good at moving away from the status quo, i.e., we tend to stick to the current state of things and struggle to find the drive to make changes in our lives and environment. So, if we want to stimulate social interaction, it is better, for example, to create an environment where that is the default option rather than something towards which people have to make an effort. 

          An open office can increase employees’ wellbeing

          It may be possible, for example, to change the typical workspace from a place with little to no person-to-person engagement into one of sociability, by adopting an open office model. We would be nudging individuals towards building connections by creating an opt-out system of personal interaction instead of an opt-in – a system requiring extra effort to dodge interaction instead of one that requires extra effort to engage in interaction.

          So, there are ways to nudge people away from behavior patterns that can be detrimental to mental health. But what can nudges do to help those already in mental distress?

          Seeking Help

          In these situations, the best thing to do is to guide the person towards proper help. Instagram, for example, takes preventive measures when users search for the #depression hashtag: a screen pops up redirecting them to help (this also works for other mental health-related searches, such as self-harm). Something similar happens when someone googles depression or suicide. These are small nudges towards the right path, tailored to those who need them most.

          If you search for depression on Instagram, a suggestion of help shows up
          If you google “suicide”, a message appears pointing to a helpline: “Help Available — talk to someone today. SOS Friendly Voice 213 544 545”

          The last mechanism we want to outline is a particularly clever and effective one (as obvious as it may seem): social norms (1). Humans have a strong tendency to follow the norm, tied to a desire for others’ approval, and social norms have been shown to predict behavior patterns. Hence, it is plausible that greater awareness of others resorting to mental health treatment, and overall acceptance of those who do, can increase help-seeking behavior. Normalizing the problem can help solve it.


          (1) If you want to know more about this topic, check out our article on the influence of peer pressure.


          Authors’ Note:

          We are writing this article more as an exploration of the power of behavioral economics in preventing mental health issues and in easing the burden of those already struggling with them than as a simple awareness-raiser. Still, awareness of these issues can never be too much, both in society in general and in the academic community, so we lay out some of the signs to watch out for and urge you to reach out to your friends and loved ones if you notice these signs in them, and to seek help if you feel them yourself.


          Sources: Mental Health Foundation, World Population Review, World Health Organization, WebMD, Yale University, PubMed Central, Medium, SAGE Journals, IZA World of Labor, Recovery Ways.

          Leonor Cunha

          Mariana Gomes

          Constança Almeida

          Towards a better tomorrow: The role of behavioral economics in mental health

          Reading time: 7 minutes

          Part I: The silent pandemic

          If we start being honest about our pain, our anger, and our shortcomings instead of pretending they don’t exist, then maybe we’ll leave the world a better place than we found it.

          – Russell Wilson

          Did you know that 12% of the world’s population (a little more than 1 in every 10 people) lives with a mental health disorder? And that Portugal has the second-highest prevalence of mental health problems among European countries? Sadly, there’s no way of knowing by how much the real numbers surpass these (and there is no question that they do, particularly in less developed countries).

          Mental health is as important as physical health. It comprises all dimensions of our well-being besides the physical one, namely the emotional, psychological, and social ones. Remember that our body and mind are more than two sides of the same coin. They’re like cogwheels on a machine, making each other spin. And if one of the wheels isn’t turning, you can’t expect the engine to keep running. Your body is a complex system, and if one element is damaged, the others will inevitably suffer too.

          A mental health problem is a health problem. Just as a heart condition can limit your ability to exert physical effort, a mental health problem affects how you think, feel, and act. It can interfere with how you cope with stressful situations, with your relationships, and (most significantly for Behavioural Economics) with your decision-making process.

          It should be noted that, contrary to general belief, not all mental health problems are situational, i.e., not all stem from traumatic events or abusive pasts. There are many factors that can make someone predisposed to these kinds of issues, including genetics, brain chemistry, or personality. At the end of the day, mental illness does not choose age, gender, or class. No one is immune, but there are some precautions everyone can take. Learn how to deal with the stress in your life. Take time for yourself. Do something you love (like reading the latest NAC article). Your mental health deserves to be taken good care of.

          It is important to take time for yourself

          We should also remember that we are all different, and so are our struggles. Sometimes, the same condition will manifest itself in wildly different ways between individuals. And sometimes, a behaviour that is a cause for concern in one person can be perfectly healthy and normal conduct for another.

          Two of the most common mental illnesses affecting people are Major Depressive Disorder (colloquially referred to as depression) and Generalized Anxiety Disorder. Both disorders can be identified by a professional and are treatable. Depression causes feelings of sadness, hopelessness, reduced energy, and sometimes agitation and restlessness. Anxiety causes nervousness, worry, or dread. It should be noted that all these feelings are expected to occur from time to time. However, they are not expected to overwhelm you. When these negative feelings start to show up too often and take their toll on your life, that’s when there could be a cause for concern.

          Economic Impact

          We know mentally ill people are not at their best (they are ill, after all). Someone struggling with mental issues is therefore less productive than they would otherwise be. This affects the labour force – if mental health problems are at least as frequent as current data shows (and we’ve already argued that they may be even more so), then a relatively large share of the working-age population is not producing as much as it could. Needless to say, this hurts the economy. Depression and anxiety are estimated to cost the global economy $1 trillion per year in lost productivity (WHO). Besides, the current solutions for these problems (therapy, drugs, among others) are costly and lengthy to apply, with patients often requiring follow-up. This understandably puts a heavy strain on healthcare systems, resources, and individuals.

          Mental illness hurts job performance

          Depression and Suicide

          Depression is particularly prevalent. Around 280 million people worldwide suffer from it, the majority of them women. Moreover, the large number of cases that go unreported is mostly due to a general feeling of shame or a lack of awareness of mental health conditions. Data usually show higher numbers of mental illness in developed countries. This does not, however, necessarily mean that such problems are more common in these countries, but rather that they are more readily diagnosed and reported, as developing nations often lack the resources required to properly address these illnesses.

          Depression Rates by country 2022

          The COVID-19 pandemic hasn’t helped: in 2020, the global prevalence of anxiety and depression increased by 25% (WHO). Young people have been particularly affected – they are disproportionately at risk of self-harming and suicidal behaviour. Worst of all, this boom in the prevalence of mental health issues was paired with severe disruption of mental health services. Although the situation had somewhat improved by the end of 2021, many of those who desperately need care are still unable to get it. Professional psychological support is not cheap or particularly easy to access in most healthcare systems, and there is still a lot of catching up to do after the major gap in services that the pandemic represented. And as if the difficulty in obtaining help were not enough, there are still people who can’t bring themselves to ask for it. It is still too common to believe that it is wrong, shameful, or pointless to acknowledge our struggles and search for outside support.

          The WHO currently estimates that, by 2030, depression will be the most common disease in the world. If we are not careful, the next pandemic we face may be one of mental illness.

          In some cases, depression may even lead to suicide. An individual suffering from depression has a risk of suicide around 20 times higher than one without it. The statistics are beyond troubling; they are outright alarming. Over 700,000 people end their own lives every year, the equivalent of roughly one suicide every 40 seconds, making suicide one of the biggest killers in the world and the fourth leading cause of death among 15–29-year-olds.

           A mental health problem gets in the way of your thought process

          How does this affect how you think?

          Depression doesn’t just get in the way of being happy. It causes chemical changes in your brain that can seriously impact your thought process. The condition can disrupt or reduce levels of neurotransmitters (chemical “messengers” in the brain) such as serotonin, dopamine, and norepinephrine. These changes may either be what is causing the depression or be another result of whatever triggered it.

          Depression can impair your attention and memory, altering your ability to absorb new information and make decisions.  

          Decision Making

          Depressed people tend to have more trouble making decisions, even trivial ones. Try putting yourself in such a person’s shoes. Imagine you’re going out to dinner with friends. You have to choose the restaurant, but which one? And there are so many tables there, where will you sit? And what will you order when the menu goes on for so many pages! All those light decisions can weigh so heavily when anxiety keeps telling you that every choice is the wrong one.

          Depression frequently brings along hopelessness. People are unwilling to waste their time on plans they believe will fail. Besides, they experience considerable anxiety when faced with the need to make a call, even for the smallest decisions. This results in high levels of what economists refer to as risk aversion (a reluctance to take risks), leading to less information collection, idea production, and option consideration.
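          For the economically curious, risk aversion is usually modelled with a concave utility function. Below is a minimal sketch (a square-root utility and made-up numbers, purely for illustration, not a claim about any specific study) of why a risk-averse agent turns down even a perfectly fair gamble:

```python
import math

def u(wealth):
    """Concave (square-root) utility: a standard textbook model of risk aversion."""
    return math.sqrt(wealth)

wealth = 1000.0   # illustrative starting wealth
stake = 100.0     # a fair 50/50 gamble: win or lose the stake

# Both options have the same expected wealth (1000),
# but the concave utility makes the sure thing more attractive.
eu_gamble = 0.5 * u(wealth + stake) + 0.5 * u(wealth - stake)
u_certain = u(wealth)

print(f"Utility of certain wealth:  {u_certain:.3f}")
print(f"Expected utility of gamble: {eu_gamble:.3f}")
assert eu_gamble < u_certain  # the risk-averse agent declines the fair gamble
```

The more sharply curved the utility function, the larger the gap, which is one way to picture why someone in distress, facing amplified anxiety about losses, gathers less information and considers fewer options.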

          Fortunately, studies have shown that specific techniques such as cognitive behavioural therapy can help depressed people make better decisions, leading to better long-term outcomes. Moreover, problem-solving treatment can train people to improve their problem-solving skills and correct distorted thinking patterns.

          Indecisiveness can be a symptom of depression and anxiety

          Executive Function

          Depression may also impair your executive function, which affects your ability to process information. Executive function is often called the CEO of the brain (you are walking around with your own version of a tiny Warren Buffett in your head!) because it is in charge of getting things done. Simple tasks, such as paying bills, cleaning your room, or getting out of the house, can be compromised if that CEO takes an unplanned vacation. Fortunately, executive function can be improved with educational strategies and behavioural approaches. If you’re experiencing issues with executive function, try breaking large tasks down into smaller chunks, creating to-do lists, and reviewing them frequently.

          As we have seen, depression affects everyone differently, changing habits and the way people live. This translates into changes in behaviour, consumer patterns and decision-making.

          There’s a reason why people say depression runs deep. It affects so much more than just your mood. Fortunately, this is a preventable evil – and Behavioural Economics (nudges, particularly) may be a part of the solution.


          Sources: Mental Health Foundation, World Population Review, World Health Organization, WebMD, Yale University, PubMed Central, Medium, SAGE Journals, IZA World of Labor, Recovery Ways.

          Leonor Cunha

          Mariana Gomes

          Constança Almeida

          Do not bet against these biases

          Reading time: 6 minutes

          How many of you bet on Placard, Betclic, or any other similar gambling website? How many of you have gone to the casino, or want to go and play the slot machines? All these games generate feelings of excitement, that rush of not knowing what the outcome might be and whether we are going to win or not. But before you next decide to bet on your favourite sports team or go to the casino, there are some things you should take into consideration, such as the fallacies and biases we sometimes unconsciously fall for. In this article, we will explain some of the most common ones, how they work, and how they are present in our daily lives.

          Cognitive biases influence us more than we are aware of

          The Conjunction Fallacy

          The difference between plausibility and probability is the source of this fallacy. Normally, we are more inclined to believe an occurrence is probable the more detailed it is, when in fact, as our statistics and probability lessons teach, it is exactly the contrary: the more exact and detailed an event is, the less likely it is to occur. For example, the probability of a person being a teacher is higher than the probability of a person being a teacher and a woman (even though a description with more detail sounds more plausible, which leads us to make the mistake of thinking it is more probable).

          This is called the conjunction fallacy because the probability of the conjunction, P(A&B), can never exceed the probability of either of its constituents, P(A) or P(B), since “the extension (or the possibility set) of the conjunction is included in the extension of its constituents.”

          We can relate this fallacy to sports: the probability of an event decreases as we add more restrictions. So, for example, let’s assume that:

          • P(A) is the probability that team A scores a goal (making the result 1-0);
          • P(B) is the probability that team B scores a goal (making the result 0-1).

          Therefore, P(A&B), the probability that both team A and team B score (making the result 1-1), is never higher than P(A) (or P(B)), so we can conclude that a 1-1 draw is less likely than a single goal by either team.
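          Under the simplifying assumption that the two scoring events are independent, the inequality is easy to check with a quick simulation (the probabilities below are purely illustrative):

```python
import random

random.seed(42)

N = 100_000
p_a = 0.30  # assumed probability that team A scores (illustrative number)
p_b = 0.25  # assumed probability that team B scores (illustrative number)

count_a = count_b = count_both = 0
for _ in range(N):
    a_scores = random.random() < p_a  # independent draw for team A
    b_scores = random.random() < p_b  # independent draw for team B
    count_a += a_scores
    count_b += b_scores
    count_both += a_scores and b_scores

freq_a = count_a / N
freq_b = count_b / N
freq_both = count_both / N

# The conjunction can never be more frequent than either constituent:
# every match counted for A&B was also counted for A (and for B).
assert freq_both <= freq_a and freq_both <= freq_b
print(f"P(A)≈{freq_a:.3f}  P(B)≈{freq_b:.3f}  P(A&B)≈{freq_both:.3f}")
```

Whatever numbers you plug in, the conjunction’s frequency cannot beat either constituent’s, which is exactly why the more specific bet pays more.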

          Picture showing that P(A&B) ≤ P(A) and P(B)

          Applying this to real life: when you bet on Placard, the more you specify the occurrence of an event, the higher the reward if the outcome is positive, because the probability of it occurring is lower. For example, betting on Benfica beating Sporting, or on Sporting not scoring any goal, pays less, if true, than betting that Benfica will win and Sporting will not score any goal.

          Hot Hand Fallacy

          Another quite common mistake many bettors make is the Hot Hand Fallacy. As the term suggests, this is the bias of believing that something that has been successful in the past is likely to continue that way in the future. Because if, for instance, a specific horse keeps winning race after race, surely next time will bring the same outcome, right? Well, not necessarily…

          In fact, we are only considering a small number of random events instead of analysing the bigger picture. Although a goalkeeper may be on a “hot streak”, saving every shot that comes his way, his long-run save percentage can be quite different. People wrongly assume that a small number of occurrences allows safe predictions of what is going to happen. In other words, people infer future outcomes prematurely, based on small and recent samples of evidence, disregarding sources that have been recording performance data for years.

          This fallacious reasoning comes from humans’ propensity to find patterns and trends to make sense of the environment around them. It usually leads us to have a hard time assessing randomness, viewing events as dependent when they are actually independent of each other.

          So, how can we overcome this? Although it is difficult, people should think twice before jumping to conclusions, no matter how well things seem to be going for them. They need to check how things usually play out in the long run in order to make better and more rational decisions.
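          To see how easily pure randomness produces “hot” runs, here is a small sketch with made-up numbers (a goalkeeper with a fixed 70% save rate facing 200 shots): even when every shot is an independent coin flip, impressive streaks show up almost every time.

```python
import random

random.seed(0)

SAVE_PROB = 0.70   # assumed long-run save rate (illustrative)
N_SHOTS = 200      # shots faced over a stretch of games
STREAK = 6         # a "hot" run of consecutive saves

def has_streak(outcomes, k):
    """Return True if the sequence contains k consecutive successes."""
    run = 0
    for saved in outcomes:
        run = run + 1 if saved else 0
        if run >= k:
            return True
    return False

# Simulate many independent "seasons" and count how often a hot streak appears
trials = 10_000
hot = sum(
    has_streak([random.random() < SAVE_PROB for _ in range(N_SHOTS)], STREAK)
    for _ in range(trials)
)
print(f"Share of purely random seasons with a {STREAK}-save streak: {hot / trials:.1%}")
```

The streaks are real, but they carry no information about the next shot: each save is still an independent 70% draw, which is precisely what the hot hand fallacy gets wrong.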

          The End-of-Day Effect

          Moreover, have you ever noticed that you tend to take higher risks when something is about to end, be it an event or a round at the casino, regardless of whether your previous results were losses or gains?

          Once again, let’s use the casino example: you are preparing to leave, and this is going to be your last bet. If your previous results were gains, you will tend to take higher risks, as explained by the hot hand fallacy. On the other hand, if you have accumulated losses, you also tend to take higher risks, in this case in an attempt to recover them. The end-of-day effect refers to gamblers’ greater willingness to take risks at the end of a session, either to make up for losses or because of the hot hand fallacy. It occurs because perceived endings make participants more concerned with gains than with losses, and the increase in risk-taking in the final round is mediated by a motivational incentive. This motivational stimulus, triggered by time perception, affects the processing of information in the frontal lobe, interfering with decision-making and impulse control.

          People tend to incur higher risks when it is the end of something

          The Recall Biases

          Lastly, the Recall Bias: the tendency to remember and overestimate wins while forgetting or underestimating losses. Back to our old casino example: you have played roulette and lost twice in a row. However, when you play a third time, you end up winning. You will tend to overvalue that win and forget the earlier losses, and so decide to play one more time, confident that you will win, even though your net result may still be negative. Another implication of this bias is that losses will not act as an incentive to stop gambling, since individuals believe they will eventually win.
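          A rough simulation makes the point. On a simple even-money colour bet on a European wheel, the green zero means the true win probability is 18/37, not 1/2, so however many wins a player remembers fondly, the house edge dominates in the long run (the stake and spin count below are illustrative):

```python
import random

random.seed(1)

# European roulette: a bet on red pays 1:1, but the green zero
# makes the true win probability 18/37 rather than 1/2.
WIN_PROB = 18 / 37
N_SPINS = 100_000
STAKE = 1.0

net = 0.0
wins = 0
for _ in range(N_SPINS):
    if random.random() < WIN_PROB:
        net += STAKE   # the wins the recall bias fixates on
        wins += 1
    else:
        net -= STAKE   # the losses it conveniently forgets

print(f"Wins along the way: {wins}")
print(f"Net result over {N_SPINS} spins: {net:+.0f} units")
print(f"Average per spin: {net / N_SPINS:+.4f} (theory: {2 * WIN_PROB - 1:+.4f})")
```

Nearly half of the spins are wins, which gives the memory plenty of highlights to replay, yet the net result drifts steadily downward at about 1/37 of a unit per spin.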

          Conclusion

          All things considered, any kind of gambling can be addictive and is frequently dangerous behaviour. There is no end of examples of individuals who became addicted to gambling, ruined their bank accounts, and damaged their personal lives, and the biases and mechanisms presented throughout this article are certainly part of the cause in many of those cases. So, next time you consider responsibly placing a bet, we recommend being mindful and trying not to fall into these behavioural traps.


          Sources: The Economics of Sports, The Decision Lab, Frontiers in Psychology, ESPN, American Psychology Association, Springer Link, Forbes, Peel Research Partners, VSin.

          Benedita Elias

          Mariana Gomes