The Federalist Papers: Short overview and considerations about the future of fiscal federalism in the EU

“After full experience of the insufficiency of the existing federal government, you are invited to deliberate upon a New Constitution for the United States of America. The subject speaks its own importance; comprehending in its consequences, nothing less than the existence of the UNION, the safety and welfare of the parts of which it is composed, the fate of an empire, in many
respects, the most interesting in the world.”

— Alexander Hamilton as Publius, Federalist No. 1

Following the American Revolutionary War and the drafting and ratification of the Articles of Confederation by the 13 states, it soon became obvious that the young confederal government was severely hindered in its functioning by an overall lack of power. Without an executive or judicial branch, the new government lacked, for example, the authority to tax. Since it could only request money from the states, with no ability to enforce these requests, both the government and the U.S. Army were severely underfunded.

It was, therefore, to evaluate and possibly amend the Articles of Confederation and improve the situation that delegates from the 13 states gathered in Philadelphia in 1787, in what came to be called the Philadelphia (or Constitutional) Convention. Even though a new constitution was drafted and signed at this convention, that was not the goal with which the delegates had assembled. However, since many were convinced of the inadequacy of the existing system, the convention soon evolved into an effort to redesign and rebuild the union's whole political structure, from a loose confederacy into a more solidly cemented federal union.


However, the drafting and signing of the new constitution at the convention was only the first step. Next, and most critically, the new Constitution needed to be ratified by 9 of the 13 states to enter into force. It was to lobby votes in favor of ratification that Alexander Hamilton, a convention delegate from New York and the future 1st Secretary of the Treasury, wrote a series of essays together with James Madison, one of the most central figures in the drafting of the new Constitution and the Bill of Rights and a future U.S. president, and John Jay, the future 1st Chief Justice of the Supreme Court. The collection of these essays is referred to as The Federalist Papers.

Alexander Hamilton

The Federalist Papers

James Madison

These essays, 85 in total (1), were published as serial installments in newspapers. They discussed topics ranging from the benefits of a federal union under the Constitution in matters of war and taxation, to the principles of separation of powers, how the Constitution upholds them, and how the system of checks and balances between the three branches of government works, all the while attempting to refute many of the anti-ratification arguments of the time.

Although their effect in promoting the ratification of the Constitution is unverifiable, they are certainly a window into the political and historical framing of the federalist vs. anti-federalist debates of the time, and they can prove useful in understanding some of the debates and arguments employed with regard to federalism in the European Union.


In Federalist No. 11, Hamilton discusses the advantages of a common commerce policy achievable through federalization, for example the ban on inter-state tariffs, echoing many of the free-trade ideas that helped create and develop today's European Union common market.

In Federalist No. 30, Hamilton describes the poor state of the government's revenues under the Articles of Confederation and argues that the public debt of such a government would be extremely precarious. Indeed, speaking of the government's future creditors, he says:

“to depend upon a government, that must itself depend upon thirteen other governments, for the means of fulfilling its contracts, (…) would require a degree of credulity, not often to be met with in the pecuniary transactions of mankind”

— Alexander Hamilton as Publius, Federalist No. 30

The solution to such a problem, he argued, lay in giving the new Congress the general power to tax and levy tariffs.

Even so, federal revenues depended mainly on tariffs until the beginning of the 20th century and the creation of the income tax (2). As this new tax was levied and grew in size, federal fiscal policy also grew in scope, for example with the creation of the New Deal during the Great Depression.


Both Hamilton's arguments at the time for a more energetic government, empowered by the power to tax, and the expansion of the scope of federal fiscal policy after the Great Depression, which coincided with the creation of the income tax, provide insights into the current discussions on expanding centralized fiscal responses by the European Union.

Indeed, for the central institutions of the European Union to provide a more timely and powerful response to a crisis such as the present one, they must also be able to access larger sources of revenue.

If we want more powerful central institutions in the EU their budgets must also increase.

In 2017, EU budget expenditures were about €137 billion. These paled in comparison to the U.S. federal government's almost $4 trillion in outlays. In Europe, where national governments are already very fiscally active, it is hard to imagine a scenario in which an increase of the central EU budget to levels more comparable to those of the U.S. federal government would not come at the cost of shrinking national budgets.
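To put the two cited figures side by side, here is a quick back-of-the-envelope comparison. The euro-dollar exchange rate used is an illustrative assumption of my own, not a figure from the text:

```python
# Rough comparison of 2017 central-budget scale, using the figures cited
# above (EU expenditures ~EUR 137 bn; US federal outlays ~USD 4,000 bn).
eu_budget_eur_bn = 137
us_outlays_usd_bn = 4_000
eur_usd = 1.13  # assumed average EUR/USD rate for 2017 (illustrative only)

eu_budget_usd_bn = eu_budget_eur_bn * eur_usd
ratio = us_outlays_usd_bn / eu_budget_usd_bn
print(f"EU budget in USD bn: {eu_budget_usd_bn:.0f}")
print(f"US federal outlays are roughly {ratio:.0f}x the EU budget")
```

Under this assumed exchange rate, the US federal budget comes out at more than twenty times the size of the central EU budget, which is the order-of-magnitude gap the paragraph above refers to.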

Whether a more centralized response by the EU would then be net-beneficial is not something I am arguing for or against. Indeed, the question I wish to pose is whether such a response, at the expense of member states' fiscal power, is politically achievable. That question is impossible to answer definitively. On the one hand, emergency situations, like the Great Depression in the U.S., seem to be breeding grounds for centralization; on the other, the shifting political landscape in Europe, namely the rise of Euroskeptic parties, may portend a grimmer fate for European federalism.


(1) You can find The Federalist Papers at: https://www.congress.gov/resources/display/content/The+Federalist+Papers or listen to public domain recordings of it by LibriVox at: https://librivox.org/the-federalist-papers-by-alexander-hamilton-john-jay-and-james-madison/

(2) Even though Clause 1 of Section 8 of Article 1 of the U.S. Constitution gave Congress the ability to levy taxes, it was only with the ratification of the 16th Amendment that Congress was able to levy country-wide income taxes.

Africa’s Endless War

The Sahel is a narrow semi-desert region south of the Sahara Desert, stretching from the Atlantic coast to the Red Sea. The region comprises parts of Mauritania, Mali, Burkina Faso, Niger, Nigeria, Chad, Sudan, and Eritrea. In broad terms, we can think of it as consisting of authoritarian states that struggle to assert their authority inside their own borders; in some cases, they are simply failed states.

Although all these countries suffer to varying degrees from terrorism and related problems, our piece will focus on the key geopolitical security threat faced by the more western countries. We will also explain how and why the USA and some European countries have been involved there.

Map of the Sahel region


Conditions for violence

The entire region offers the same conditions suited to the spread of terror. Being one of the poorest in the world, the countries located there are impoverished and underdeveloped. Furthermore, the region is subject to severe food shortages and the effects of climate change, which deepen these problems.

Although these countries are, theoretically, democracies, mistrust in the political classes is widespread, and rightly so. As is frequent in many African countries, corruption is common and institutions are generally frail. Governance is poor, agriculture faces persistent problems, and security forces and foreign militaries are as feared as they are welcomed. The states are ill-prepared to meet the challenges their populations face.

All governments have failed to maintain a meaningful presence in these zones, which lie far from their capitals. Islam being the dominant faith, Islamist radicals have no difficulty spreading their violent message, coupled with solutions to some basic problems such as water supply and food distribution. The region's chronic poverty and poor education system help these groups gain new recruits. Terrorists and radical groups exploit every local problem and conflict in order to expand their reach. The same logic applies to the expansion of terrorist groups in other zones, like Somalia or Mozambique.


Examples of terror

The countries in this part of the Sahel have been the stage for various forms of violence in the past decades, described as a “fireball of conflict” involving multiple armed groups, military campaigns by national armies and international partners, as well as local militias. Conflicts have been constant, arising for many different reasons. The recent peak in violence has drawn the attention of both al-Qaeda and ISIS, among several local groups that fight among themselves as well as against local governments. There is a constant stream of news and reports of military operations and attacks, and 2019 was the deadliest year so far, with over 4,000 deaths.

We will focus on the most recent events, starting with the most important Islamist terrorist group, Boko Haram. It is the strongest and deadliest, but by no means the sole actor in the conflict.

Boko Haram's roots can be traced back to the early 2000s, but it started gaining attention in 2009, with a series of attacks in Nigeria. At the same time, the Arab Spring in the northern African countries and the violence that ensued further destabilized the area. Later, in 2014, the group pledged allegiance to ISIS and proclaimed a caliphate in the region. This led to the intervention of a regional military coalition in 2015 (Benin, Nigeria, Cameroon, Chad and Niger, backed by the US, UK, and France), which regained the Nigerian territory previously controlled by the terrorists.

Following this, Boko Haram's new core presence was the Lake Chad region, one of the poorest areas of Africa and an ungoverned territory on the frontiers of Chad, Cameroon, Nigeria and Niger, where it still operates and from which it has been able to extend its reach into other anarchic frontier regions.

Following Boko Haram's example, jihadists in northern Mali also proclaimed their own state. A quick military intervention led by France, authorized by the United Nations and supported by several non-African countries, regained the territory they controlled. France is the region's former colonial power and, even though there is a pervasive anti-French sentiment, it has long been involved.

In 2013, the French government expected to conduct only a short intervention in Mali. Seven years later, it remains there. The United Nations, the African Union and the European Union have also intervened, engaging many countries, with western military operations expected to grow in number and size in the coming years. This will likely happen even though the Trump administration, which last month nominated a special envoy to the region, seems keen to reduce the US presence there, in contrast to its European allies.


UN forces in Mali


European and American involvement

João Gomes Cravinho, the Portuguese Defense Minister, said last January:

“It is absolutely fundamental to be present in Sahel. We cannot let the deterioration of the situation in Sahel continue because the result will have an impact on Europe […] It would be irresponsible to turn our backs.”

— João Gomes Cravinho

The support is indeed needed, because the militaries of these West African countries lack resources, material, training, and education. They could not win the conflict on their own, and stability in the region is the main goal for Europe. Endemic violence and absent state control would increase the flow of drugs, arms and human trafficking, illegal migration and refugees, and terrorist threats against the continent. European countries would pay a high price for not intervening.

The western countries have the resources to militarily destroy much of these groups, but as recent interventions in the Middle East and Afghanistan proved, strength is insufficient. An all-out war would, in the medium term, fail to fill the power vacuum in the Sahel, and other Islamist groups would likely arise. There is a political and diplomatic front in this war as well, and the European Union is starting to become aware of that, with Commissioner Borrell repeatedly asking for greater diplomatic and military involvement in the Sahel.

There is also a broader political mission, which constitutes the hardest challenge: stabilizing communities through a basic step that has seldom been undertaken. Broad, local dialogues among community groups, police forces and officials can prevent radicalization. Local governments and institutions, civic groups and foreign actors should all step up to this task. At the same time, poverty has to be mitigated and economic development aided.

However, the prospects are not good. European presence is vital for the security of European countries and can mitigate various threats to the continent. Nevertheless, there are no easy ways to counter the underlying challenges that bolster terrorism and violence in the Sahel. As The Economist put it: “unless local governance improves, [the military interventions] will not eliminate the jihadist threat”. Poverty and anarchy seem to be there to stay, and where they are, terrorist groups will be too.

Sources: ABC news, Al Jazeera, BBC, Financial Times, Guardian, Institute for Security Studies, jornal I, New York Times, Observador, Politico, Reuters, The Economist, The Telegraph, United States Institute of Peace, Vox.

The Cloud Wars: AWS Vs Azure for the Control of Your Internet

Late last year, on October 25th, the United States Department of Defence announced that it would award a contract worth $10 billion – the Joint Enterprise Defense Infrastructure project, henceforth referred to as JEDI – to Microsoft's cloud computing business, Microsoft Azure.

In the larger picture, the JEDI contract is but a small drop in the ocean of public contracts. According to the United States government itself, the federal government spends roughly $500 billion on contracts every year. At face value, it was just another story of a tech-related government contract being awarded to the company with the most competitive bid.

Figure 2 – AWS Logo, Source: Amazon

But Amazon contested the decision soon after, claiming errors in the process and political interference by President Trump. Like Microsoft, Amazon Web Services (AWS) – a subsidiary that provides cloud computing services – was in the running for the JEDI contract, and was even considered the frontrunner. By February of this year, Microsoft's work had been halted and the Pentagon was reconsidering the decision, as shown by court documents.

Though this story has taken an acrimonious turn, pitting Amazon against the Executive Branch of the United States with allegations of political interference, the competition between Microsoft and Amazon is not unlike that of any other two companies fighting for market dominance. The reason President Trump allegedly interfered with the process is that cloud computing is a nascent, rapidly growing field that both companies have deemed strategically crucial. So, he hit Amazon and its founder, with whom he has had public spats in the past, where it hurt.

Ultimately, this raises the question:

What is cloud computing, and why are the stakes so high?

Cloud computing is the delivery of computing services such as servers, databases, storage, software and analytics through the internet (the cloud = the internet).

Cloud computing services offer several benefits to clients:

  • No capital expenditures – You don’t have to buy your own physical assets, you rent them via the “cloud” instead. Like retail chains that started to lease and rent property instead of buying it.

  • Scale & Flexibility – The cost of the rent is proportional to the size of the business and traffic and it is very easy to increase your capacity. In other words, as your computing power needs increase with business and traffic, you are unconstrained by current equipment.

  • Speed & Performance – Your only limitation is your internet connection. Cloud services run on the latest high-tech hardware, so you are not limited by outdated hardware and you don’t have to constantly update your devices.

  • Security and Reliability – Cloud computing services come with automatic backups and disaster recovery, as your data is in many places instead of a single server. Cloud service providers also come with the latest network security methodologies, which would be too expensive for a single business to implement on its own.

These services can be split into three types:

  1. Infrastructure as a service (IaaS) – the most basic version where you rent IT infrastructure such as servers for storage.

  2. Platform as a service (PaaS) – Services that supply an on-demand environment for developing and testing software applications, such as mobile apps.

  3. Software as a service (SaaS) – delivery of software over the internet, on demand, often requiring only a terminal (no need for installation).
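To make the “no capital expenditures” and “scale & flexibility” points above concrete, here is a toy cost model comparing buying servers for peak load with renting capacity by the hour. All prices are made-up round numbers for illustration, not real AWS or Azure rates:

```python
# Toy model: on-premises capex (buy for peak load) vs. cloud pay-as-you-go.
# All prices below are hypothetical illustrations, not real provider rates.
def on_prem_cost(peak_servers, server_price=10_000, upkeep_per_year=1_000, years=3):
    """Buy enough servers for peak load up front, then pay yearly upkeep."""
    return peak_servers * (server_price + upkeep_per_year * years)

def cloud_cost(server_hours, price_per_hour=0.5):
    """Pay only for the server-hours actually consumed."""
    return server_hours * price_per_hour

# A bursty workload: 100 servers needed at peak, but only 20 in use on average.
years, hours_per_year, avg_servers = 3, 8_760, 20
print(on_prem_cost(peak_servers=100))                    # provisioned for peak
print(cloud_cost(avg_servers * hours_per_year * years))  # billed for actual use
```

With these assumed numbers, the cloud option costs a fraction of the on-premises one precisely because the workload rarely runs at peak; the more bursty the traffic, the bigger that gap becomes, which is the economic logic behind the benefits listed above.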

Cloud computing has provided a unique benefit to society in general – it makes it much easier to launch a tech start-up, as the start-up costs are almost non-existent when compared to the 90s. Cloud computing was a major enabler in the tech boom of the last decade.


Amazon was the first of the two to launch a cloud computing business via AWS, back in 2006. The story of AWS is interesting: initially, what would become AWS was a private cloud system within Amazon, supporting data collection and server management across the entire company. Amazon used and developed this internal system for years before offering AWS to the market. AWS was not planned; it was born of Amazon's culture of innovation and experimentation.

Microsoft would follow suit in 2010, at the time launching “Windows Azure”.

Today, these two companies hold about 70% of a market valued at $227 billion in 2019, with Amazon the market leader at roughly 40%.

And seamlessly, without us ever noticing, they power many of the platforms and companies that we use in our day-to-day.


Figure 2 – Netflix Logo, Source: Wikipedia

Consider Netflix, one of AWS' high-profile clients. In 2009, Netflix opted to migrate from its physical data centre to the cloud, moving thousands of terabytes of data onto Amazon-owned servers – data that has to be accessed tens of thousands of times per second by 160+ million subscribers in all parts of the world.

Figure 2 – HP Logo, Source: Wikipedia

Alternatively, consider HP, one of Microsoft Azure's high-profile clients. According to HP, it handles more than 600 million technical support contacts each year. The accumulated data points from these contacts were used to build an AI assistant via one of the solutions provided by Microsoft Azure.

These examples illustrate different services provided under the same umbrella term of “cloud computing”. Both platforms ultimately aim to help businesses develop and meet their organizational goals: they offer many tools and frameworks to build an “on your own terms” platform.


But how would a manager choose between them? If a company already works with a platform, why consider getting a service from a competitor?

Part of the answer lies in the many different services they supply, as well as in the current data infrastructure of the company in question. Microsoft is ubiquitous in companies around the world, but AWS seems to be more advanced in the cloud computing game as of right now (hence its frontrunner status in the JEDI contract). Regardless, no two cloud suppliers offer the same service in the exact same way, or with the same value proposition. For a manager, choosing between AWS and Azure might be a balancing act, and they might end up using both.

Ultimately, the cloud permeates our day-to-day lives. From the political contrivances seen in the JEDI contract to the paradigm shift that directly affects decision-makers both high and low in a company, articles like this one are but a sign of a brave new world to come.


Sources: US Department of Defence, US Datalab, NY Times, Gartner, Microsoft Azure, AWS, Wikipedia


João Vaz Guedes

Maria Mendes

Daniel André

The Impact of Globalization on Inequality

Since the European discoveries, several waves of globalization have shaped the way we live today. The most recent one started around the 1980s and 1990s and was pushed by several circumstances. First of all, the economic reforms implemented in China by Deng Xiaoping, who ruled the country as paramount leader* between 1978 and 1992, and the fall of the USSR in 1991 brought economic development and openness to vast territories, changing their interaction with the rest of the world. In addition to these two events, improvements in communication and transportation technologies were key enablers of the whole process, boosting global trade and the movement of capital between countries. For instance, according to the World Bank, exports of goods and services grew from US$4.1 trillion in 1980 to US$23 trillion in 2015, at constant 2010 prices.

From the beginning of the process until now, globalization is said to have lifted a lot of people out of poverty thanks to those infusions of foreign capital and technology into less privileged areas of the globe, bringing them economic development and spreading prosperity. Still according to the World Bank, the share of the global population living on less than US$1.90 per day decreased from 36% in 1990 to 10% in 2015. The two countries that most contributed to this outcome were China, where this indicator fell from around 66% to 1% over the same time frame, and India, where poverty affected almost 49% of the population in 1987 and had shrunk to 21.2% by 2011. Undoubtedly, this is a major advance towards the United Nations' sustainable development goal of eradicating poverty.

However, even though poverty has shrunk at a global level, the fact is that the increasing wealth that is created and the benefits of globalization are said not to be distributed fairly.

Global real income growth (1988-2008)

Source: Equitymaster

Source: Equitymaster

This chart was elaborated by Branko Milanovic, an economist recognized for his research on inequality and income distribution, and depicts the variation in real income for each percentile of the global income distribution between 1988 and 2008. It can be clearly seen that, during this time frame, the ones who saw their income increase the most were the populations of emerging countries and the richest citizens of the world. On the other hand, the middle classes of developed countries and the extremely poor virtually stood still, with some even getting worse off.

When looking for answers that may explain why this happened, the novelties brought by this recent wave of globalization should be taken into consideration. Reductions in transportation costs and trade barriers created incentives for capital owners to move the production segment of the supply chain from developed countries to others with better cost advantages, mainly regarding labor, in order to remain competitive. These new opportunities have therefore benefited the global elite, as well as workers in the places where these jobs were created. On the other hand, developed countries experienced major job losses, and their working classes saw their real wages stagnate over time, or even decrease.

Even though globalization may have contributed to more inclusiveness and less poverty at a global level, smoothing differences between richer and poorer countries, the fact is that, when considering the internal situation of each nation, it may be a different story.

 

Distribution of pre-tax national income in the United States

Distribution of pre-tax national income in China

Source: World Inequality Database

These graphs clearly show that inequality increased in both the United States and China. In both countries, independently of whether real incomes increased, the share of national income received by the bottom 50 percent of the population fell, while the top 10 percent saw their share increase. This being said, it is quite clear that inequality should be a priority for national governments.


*Paramount leader: informal term for the most prominent political leader in the People’s Republic of China, not necessarily involving an official position.

 

Sources: Forbes, The World Bank Data, Equitymaster, World Inequality Database


Oil War 2020

Oil: a three-letter word that embodies the most important source of energy since the 1950s, the lifeblood of modern societies. As the main energy supply, this commodity has reshaped not only the power industry but also how we live, meeting 40% of the world's energy demand. Oil continues to survive the constant attempts to shift energy consumption towards more sustainable, renewable alternatives, remaining the most-traded non-financial commodity worldwide. The fact that one cannot imagine today's world without crude oil, and that its absence would bring modern societies to a screeching halt, increases its value and underlines its prominence in the global economy.

The United States, Russia and Saudi Arabia are crude oil's largest producers; in 2019 they jointly produced approximately 33 million barrels per day, 54% of total world production. According to IBISWorld, a leading business intelligence company, the oil and gas sector's revenues amounted to approximately $3.3 trillion last year; with 2019 global GDP at around $87 trillion, the oil and gas sector by itself represents around 3.8% of the world economy. It is evident that the 3 main players in this complex industry compete for dominance of one of the most profitable markets, but one cannot enter this game without caution, since oil's biggest sharks will be ready to counter-attack.
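The share quoted above follows directly from the two cited figures; a quick sanity check:

```python
# Sanity check of the figures cited above: sector revenue of ~$3.3 trillion
# against ~$87 trillion of 2019 global GDP.
sector_revenue_tn = 3.3
global_gdp_tn = 87
share = sector_revenue_tn / global_gdp_tn
print(f"Oil & gas revenue as a share of world GDP: {share:.1%}")  # ~3.8%
```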

With world markets slowing down due to the most recent crisis caused by the coronavirus pandemic, demand for crude oil has decreased drastically as isolation measures have tightened around the world.

The members of the Organization of the Petroleum Exporting Countries (OPEC) and the invited country, Russia, gathered to discuss market demand in the industry after the outbreak of the virus. Russia has been allied with Saudi Arabia and the organization since 2016, with the aim of coordinating production levels with other countries and keeping prices relatively stable.

In this meeting, Saudi Arabia suggested cutting production levels in order to hedge against price decreases during these times. However, Russia was against the proposed measure, believing it was too early to cut production, and the organization failed to reach an agreement between the parties involved, effectively ending the partnership. Some insist that the country is taking advantage of Asian demand to increase its market share; others argue that it wants to keep prices low to fight the American shale oil industry, which has been growing in the past years. One thing is certain: Russia is worried about its market share and believes that, for the moment, it is better to oppose the Saudis than to cooperate. Saudi Arabia counter-attacked, announcing it would increase production to its highest level, almost 13 million barrels per day, as well as price discounts in Europe and the United States.

On the 8th of March, prices started to tumble, and the next day they plummeted more than 30%, the worst loss since 1991, while the Russian currency depreciated to its lowest level since 2016. For now, the Saudi-Russian alliance is on hold, and the war has begun. The Saudis believe they will be able to sustain profits, as their production costs are very low, while Russia claims the ability to sustain prices between $25 and $30 for several years thanks to its National Wealth Fund. Nevertheless, both countries will now operate on thinner margins, despite gaining some revenues and customers. From a game theory perspective, we are witnessing a prisoner's dilemma, where either party may end up hurt by this move, since market share will not necessarily be gained and cooperation would leave both better positioned.
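The prisoner's-dilemma reading can be made explicit with a stylized payoff matrix. The payoffs (in arbitrary "profit" units) are purely illustrative assumptions, chosen only so that flooding the market dominates individually even though mutual cuts pay more jointly:

```python
# Stylized prisoner's dilemma for the Saudi-Russian standoff.
# Payoffs are hypothetical illustration, not estimates of actual revenues.
# Tuple order: (Saudi payoff, Russian payoff); rows = Saudi move, cols = Russian.
payoffs = {
    ("cut", "cut"):   (3, 3),  # cooperate on cuts: higher prices for both
    ("cut", "pump"):  (0, 4),  # the defector grabs market share
    ("pump", "cut"):  (4, 0),
    ("pump", "pump"): (1, 1),  # price war: both worse off than mutual cuts
}

def best_response(opponent_move, player):
    """Best move for player 0 (Saudi) or 1 (Russia) given the other's move."""
    moves = ["cut", "pump"]
    if player == 0:
        return max(moves, key=lambda m: payoffs[(m, opponent_move)][0])
    return max(moves, key=lambda m: payoffs[(opponent_move, m)][1])

# Whatever the other side does, "pump" is each player's best response,
# so (pump, pump) is the Nash equilibrium despite (cut, cut) paying more.
print(best_response("cut", 0), best_response("pump", 0))  # pump pump
```

This is exactly the structure described above: defection (pumping) dominates for each country individually, yet both would earn more under coordinated cuts.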

On the 2nd of April, Donald Trump claimed that a deal was expected to be reached soon and that production cuts were already in order, after affirming that he had talked with the leaders of both countries. This prompted a one-day rally of 10% in oil futures, but neither country committed to supply cuts, and the Russian government denied the claims made by the US President.

On Sunday, April 5th, Saudi Arabia, Russia and other giant oil producers from OPEC made progress towards a deal to stem the fall in oil prices, despite the difficulty in arranging a meeting and the continuous exchange of accusations between the two leaders. This deal would also involve the US, which has become the world's biggest oil producer since the shale revolution and has a great impact on this industry. Together, the members would be asked to cut their oil production by 10%, but Donald Trump has shown little willingness to do so. The US has even threatened sanctions and tariffs to push the two countries to solve the conflict.

“If the Americans don’t take part, the problem which existed before for the Russians and Saudis will remain — that they cut output while the U.S ramps it up, and that makes the whole thing impossible”

— Fyodor Lukyanov, head of the Council on Foreign and Defence Policy


Source: Trading View

Source: Trading View

Energy companies are suffering the most from the oil war, and this may have a damaging effect on credit markets as well, since these companies have been very active in the bond market over the past decade and investors were always keen to lend more. Much of this borrowing was done through junk-rated bonds; notably, these companies account for 11 per cent of the US high-yield market. Being rated BB or lower, these issuers are at higher risk of default, and the current oil war, combined with decreasing global demand, may cause further downgrades and raise borrowing costs. With such heavy representation in the junk bond market, this shock may not stop at low-rated debt and could even impact “safer” debt.

Investors reacted as they have been adjusting to the coronavirus outbreak, shifting funds towards the usual safe havens. On the 9th of March, the sharp oil price drop prompted a 7% decrease in the S&P 500 in the first minutes of trading. This sell-off was accompanied by a rise in the price of gold and related ETFs and increased purchases of US Treasury bonds, reflected in a decrease in US Treasury yields.
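Why does heavier buying of Treasuries show up as falling yields? For a fixed coupon, a higher price mechanically implies a lower yield to maturity. The sketch below illustrates this on a generic annual-coupon bond with made-up parameters, not an actual Treasury issue:

```python
# Price up -> yield down: solve for yield to maturity by bisection on a
# generic 10-year, 2%-coupon bond (illustrative parameters, not a real issue).
def ytm(price, face=100.0, coupon_rate=0.02, years=10, tol=1e-8):
    """Yield to maturity of an annual-coupon bond, found by bisection."""
    def pv(y):
        coupon = face * coupon_rate
        return sum(coupon / (1 + y) ** t for t in range(1, years + 1)) \
               + face / (1 + y) ** years
    lo, hi = -0.5, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if pv(mid) > price:   # present value too high -> yield must be higher
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(ytm(price=100.0), 4))  # at par, yield equals the 2% coupon
print(round(ytm(price=105.0), 4))  # bid up in a flight to safety -> lower yield
```

Bidding the price from par up to 105 pushes the yield below the coupon rate, which is exactly the fall in Treasury yields described above.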


Source: Bloomberg


At this stage it is difficult to predict any short-term agreement between the two parties; what is clear, however, is that current prices are bad for producers and welcome for consumers. In theory, oil-importing countries will benefit from this price decrease, since one of their main raw materials has become drastically cheaper. That benefit will obviously not be maximized, given all the constraints imposed by governments worldwide during the COVID-19 pandemic, which ultimately makes the price drop only a small tool in responding to the economic impact of the disease. Furthermore, not everything is bright for importing countries. Take Portugal as an example: one of its largest companies is Galp, which refines Brent oil imported from Brazil and Angola before selling the final product to retailers. Galp had set an average break-even price of €25 and is still able to sell at current prices; however, as seen in the last few weeks, the price has hovered right around that value, even reaching the €21 mark. Portugal is neither a producer nor an exporter of the raw material, yet, through companies in the manufacturing chain like Galp, it is still exposed to a war that influences prices worldwide and can lead to lay-offs and even shutdowns.

On the other hand, petroleum-exporting countries are being harmed not only by the influx of supply but also by the decrease in demand. Some are being forced to lower prices while increasing production, so as not to lose market share, while others are considering stepping back from production until prices begin to rise. One example is the US, which mass-produces shale oil, which is more expensive to extract. Meanwhile, countries that cannot step back due to an already unstable financial situation, such as Venezuela, Ecuador and other developing economies, are being dragged into a fight they simply cannot handle; unlike Russia, they do not hold meaningful foreign exchange reserves to absorb these abnormal losses, so debt defaults are beginning to look like a reality.

Markets and investors are not happy having to deal with an oil war and a virus outbreak at the same time, and, if Russia and Saudi Arabia take too long to resolve this conflict, governments and central banks may not be able to save the world economy.

Sources: CNBC, Investopedia, TIME, Bloomberg, Financial Times, Vox, ABC news

State of emergency: What now?

On the 18th of March of this year, Portugal's President Marcelo Rebelo de Sousa declared a state of emergency, with immediate effect across all Portuguese territory, following other European countries that had also opted to declare one, such as Spain, France, Italy and Germany (to name a few of the 25 countries that had already announced it worldwide).

The state of emergency had not been declared in Portugal since November 1975, in the aftermath of a revolutionary attempt by communist forces to install a far-left dictatorship. 45 years later, the COVID-19 pandemic forced Portugal to declare it once again, in order to restrict the spread of the virus.


What measures can Portugal take to face national catastrophes?

There are four mechanisms, enshrined in Portuguese law, for dealing with national catastrophes. From the least to the most severe: the state of alert, last used in the summer of 2019 during the strike of hazardous-materials truck drivers, which simply means that national civil protection and national security forces stand ready to attend to any request from the government; the state of public calamity, announced two weeks ago by the municipality of Ovar, which implies a reduction of economic activity, limits on the number of people in public places and the establishment of a safety perimeter; and, lastly, the two most severe mechanisms, the state of emergency and the state of siege.

After all, what does the state of emergency imply? What is its constitutional interpretation? What are the boundaries that define it and distinguish it from the state of siege?

What is the state of emergency?

The state of emergency allows the government to suspend certain rights, freedoms and guarantees in order to deal with an exceptional situation. In Portugal, the state of emergency is declared by the President, first requiring permission from Parliament and then approval from the Council of Ministers. According to the Constitution, it cannot last more than 15 days (although it can be renewed) and it cannot suspend certain rights, such as the right to life or the right to defend oneself in court.


In this particular emergency – an epidemic – there are two particular rights whose suspension could be useful: The right to free movement and the right to private initiative. Suspending the right to free movement allows the government to impose quarantine and curfews, to forbid people from leaving their houses for non-essential trips (or to forbid elderly people from leaving their houses for any reason), and to limit entry and exit in Portugal, by cancelling flights to and from critical countries and controlling the border. Suspending the right to private initiative allows the government, among other things, to forbid non-essential commercial establishments from opening, to force essential ones (such as pharmacies, supermarkets or medical supplies factories) to stay open, and to take control of private companies (for example, to temporarily integrate private hospitals in the public healthcare system). The state of emergency declared in Portugal also suspends the freedom of assembly, allowing the government to forbid large public gatherings such as protests, concerts or religious ceremonies.


Criticism

Some have opposed the declaration of the state of emergency, fearing that the President is setting a dangerous precedent for the suspension of rights and freedoms. These worries are not unwarranted: historically, there are many instances, across several countries, of the state of emergency being abused. In Germany between the two world wars, for example, the state of emergency was declared quite often, usually by governments that lacked a majority in Parliament and used it to legislate without democratic control. This culminated when, after a fire destroyed the German Parliament, Adolf Hitler blamed communist rebels and used the fire as an excuse to declare the state of emergency, imposing the dictatorial regime that lasted until the end of WWII. This is only one of many historical examples of the state of emergency marking the start of a dictatorship. While it is difficult to argue that Portugal currently faces any risk of that nature, these historical examples are the reason why many people are very cautious about supporting the declaration of the state of emergency.

State of siege

On the opposite end of the spectrum, some have claimed that, in Portugal’s current situation, a state of emergency is not enough, and a state of siege should be declared. The state of siege is one degree of severity above the state of emergency. According to Portuguese Law, the state of emergency can be declared due to any public calamity or threat of public calamity, while the state of siege can only be declared in the event of acts of force (such as military invasions) or rebellions. In a state of emergency, rights and freedoms can only be partially suspended, while in a state of siege they can be completely suspended – for example, the current state of emergency suspends the right to strike only for workers in healthcare and vital sectors of the economy; in a state of siege, all strikes could be forbidden. In a state of emergency, the powers of civil authorities can be reinforced, and the armed forces can be tasked with supporting those authorities; in a state of siege, all police forces are put under the authority of the Chief of the General Staff of the Armed Forces, and all civil administrations must provide the armed forces any information they request.


Portugal is facing one of the moments of greatest uncertainty in its modern history.

Fighting an unknown enemy poses difficult challenges and raises important questions. Only in the end will the country be able to scrutinise the choices made and to discuss how to approach a similar crisis in the future. Until then, we shall stand as one.


Sources: Observador, Jornal Sol, ECO

Portuguese Law (in Portuguese):

Constituição da República Portuguesa (Portuguese Constitution), namely articles 19 and 138 

Regime do Estado de Sítio e do Estado de Emergência – Lei n.º 44/86, de 30 de setembro 

Decreto do Presidente da República n.º 14-A/2020 



Afonso Botelho, Manuel Barbosa, Nuno Sampayo

Europe’s man on the moon moment: the Green Deal

The creation

First presented in December 2019, the European Green Deal is the EU's most ambitious project to date or, as European Commission President Ursula von der Leyen called it, “Europe's man on the moon moment”. Aimed at making Europe the first climate-neutral continent by 2050, the Green Deal is, simply put, the EU's new growth strategy: a roadmap designed to transform Europe into a modern, resource-efficient and competitive economy and to make the EU's economy sustainable.

[Read more about the policies that are being adopted here]. 

But what’s so different from all the previous attempts to tackle climate change?

  1. The EU is finally transforming all its climate promises into legal obligations for member states (European Climate Law)

  2. It was finally understood that, rather than being mitigated by focusing only on certain industries and a few isolated measures, climate change requires an articulated and integrated system that considers a product’s life cycle, strengthens competitiveness while protecting the environment, gives new rights to consumers and tries to ensure that the resources used are kept in the EU economy for as long as possible (Circular Economy Action Plan)

  3. Because it's imperative that all European countries take part in this initiative, the Green Deal provides for financial assistance to countries that face greater obstacles in the green transition (European Green Deal Investment Plan and the Just Transition Mechanism), in an attempt to ensure that everything runs smoothly and that all member states are on board.


The transition

Despite its idealistic goals, the Green Deal has been suffering severe criticism and has been presented with various obstacles to its achievement. 

What was initially designed as an incentive to make industrial and coal-intensive countries (e.g. Poland, Romania, Germany) sign up to the EU's 2050 climate targets has now turned into one of the most defining features of the European Green Deal: the Just Transition Mechanism, designed to help coal-dependent workers and regions transition to new areas of economic activity. Still, Poland has repeatedly refused to sign the climate targets and take part in this environmental initiative, thus jeopardizing its eligibility for the financial aid provided through the Mechanism.

A list of 100 regions eligible for this financial aid has already been published, with Germany topping the list, followed by Poland. These regions meet criteria based on carbon-intensive jobs, fossil-fuel industrial activity and GDP per capita. But another potential threat arises: how can the EU guarantee that funds aren't misused by national governments in projects that serve their own interests, putting these regions' access to the funding at risk?

Even though the Green Deal is a priority for the EU, it certainly isn’t for some relatively poorer member states, such as Romania, whose priorities concern infrastructure, education and health systems. Other countries might feel the same way, leading to tensions and conflicts of interest inside the Union, with the possibility of not being able to cope with targets (and most likely sanctions) imposed by the EU later on. 

Unfortunately, this doesn’t stop here. Between extractive, energy-intensive and automobile industries, 11 million jobs will be directly impacted by this Deal, even if they don’t necessarily disappear. Clear directions and measures are needed (which don’t yet exist), especially if most of these jobs are concentrated in Eastern Europe, which could cause a migration wave in the next decades, possibly deepening the existing gap along the East-West divide, increasing social and economic discrepancies and escalating tensions inside Europe. On the other hand, the reality is that markets are changing – and fast. A new green industry revolution is coming and the likelihood of coal mining in Europe being economically viable by 2050 is indeed very small. 


The funding

No matter how badly the European Commission wants to achieve carbon neutrality while leaving no country behind, it's not possible to do so without funding. Essentially, this financing would come from five different sources:

a) EU budget: the Commission will allocate 25% (compared to the previous 20%) of its budget to climate and environmental expenditures, aiming to raise approximately €503 billion in this decade. 

b) National co-financing: the Commission hopes that this money mobilization will trigger national governments to unleash an additional €114 billion on environmental projects over the next ten years. 

c) InvestEU: It’s estimated to raise around €279 billion, throughout this decade, through private and public investments, thanks to an EU Budget guarantee to the European Investment Bank and other national promotional banks when they invest in “green” projects. 

d) EU Emissions Trading System funds: the Union proposes to devote 20% of the revenues from the auctioning of EU Emissions Trading System (ETS) to the EU budget, for an estimated value of €25 billion over the next 10 years.

e) Just Transition Mechanism: it’s expected to generate around €145 billion until 2030, plus there’s €7.5 billion of “fresh money” already available. 
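As a quick arithmetic check, the five sources above can be totted up. This is only the sum of the figures as quoted in this section (in billions of euros, over roughly a decade), not an official Commission total; counting the €7.5 billion of “fresh money” on top of the Mechanism's €145 billion follows the phrasing above.

```python
# Sum of the five funding sources quoted above (EUR billions, over ~a decade).
# These are the figures as cited in this section, not an official total.
sources = {
    "EU budget": 503,
    "National co-financing": 114,
    "InvestEU": 279,
    "EU ETS funds": 25,
    "Just Transition Mechanism": 145 + 7.5,  # incl. EUR 7.5bn of "fresh money"
}

total = sum(sources.values())
print(f"Total: ~EUR {total:.1f}bn")  # ~EUR 1073.5bn, on the order of EUR 1tn
```

The rounded total, on the order of one trillion euros, is the scale against which the criticism in the next paragraphs should be read.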

Even though the European Commission is moving in the right direction, it may have been too unrealistic in presenting these numbers: many specialists agree that €500 billion is a bit of a stretch and an amount that could not possibly close the investment gap. Some even believe this can hardly be called “investment”, since much of the funding will be spent on traditional policies (such as farm subsidies), with little oversight and almost no room for innovation in the budget. On top of that, the current methodology for accounting expenditures as contributing to climate targets is deeply flawed and needs to be reviewed.

InvestEU – the Green Deal's main funding source – has some hitches worth highlighting: given that the EIB has already committed to increasing its climate-related financing from 25% to 50%, it's worth considering the opportunity cost of allocating further funds to it, as those funds could be better used by other EU programmes. Let's also not forget that there is always the possibility that carbon financiers game the system, meaning EU cash fails to trigger truly green investment from the private sector.

Finally, as for national co-funding, there might not be many incentives for countries to increase their financing towards green projects. If countries are finding it hard to stay on track with the 2050 climate neutrality objective, how will they be able to divert funds to co-fund the Green Deal? 

Overall, by itself, the plan will most probably not be sufficient to deliver the investments needed for the European Green Deal. Nevertheless, the Green Deal is still a work in progress and in a very early stage of implementation; only the next months will tell how the Commission will further develop, adapt and implement this initiative. Until then, stay tuned, stay informed, stay aware.  

Sources: Euractiv, Bruegel, The Guardian, Financial Times, European Commission

Moore’s Law

In 1965, Gordon Moore was asked to write for a special edition of Electronics magazine about the future of silicon components over the next decade. Integrated circuits (we explain what these are below), known today as computer chips, had been invented in the late 1950s and had only just begun being deployed. Analysing the achievements of his company and others in the previous years, Moore observed that the number of transistors per microprocessor, as well as of other electronic components, had doubled each year, and he projected that this rate of growth would continue into the future. This prediction became known as Moore's Law, which in its revised form states that the number of transistors in an integrated circuit doubles about every two years.


Moore’s main goal was to convey the idea that integrated circuits would lower the cost of technology: the larger the number of components, the lower the cost per component, thereby decreasing the price of computers and other electronic devices. According to the law, which became an industry goal and consequently a self-fulfilling prophecy, processor speeds would increase exponentially, because transistors would scale down so that more units could be packed onto a computer chip. The more transistors, the shorter the distances electrons have to travel between them, increasing a computer’s efficiency and speed. So while the number of transistors on an integrated circuit has doubled every 24 months, computing power has doubled about every 18 months.
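The two doubling rates just described compound very differently over a decade; a minimal sketch (plain exponential growth, nothing more) makes the gap concrete:

```python
# Plain exponential growth under a fixed doubling period, as described above:
# transistor counts double every 24 months, computing power every ~18 months.
def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplicative growth after `years` at the given doubling period."""
    return 2 ** (years / doubling_period_years)

transistors_10y = growth_factor(10, 2.0)  # 2^5    = 32x in a decade
power_10y = growth_factor(10, 1.5)        # 2^6.67 ≈ 102x in a decade

print(f"Transistors after 10 years: ~{transistors_10y:.0f}x")
print(f"Computing power after 10 years: ~{power_10y:.0f}x")
```

A mere six-month difference in the doubling period roughly triples the growth factor over ten years, which is why the 18-month figure for computing power matters so much.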

Moore’s Law was regarded as a “rule of thumb” rather than a law: the technology industry aimed to keep up with its growth rate and so set a roadmap based on the continuous innovation of transistors and chips in line with Moore’s Law. Because of the ever-increasing demand for devices, manufacturers strive to innovate and create next-generation chips, lest they become obsolete in the face of innovating competition.


Many devices that we use nowadays owe their existence to the evolution of integrated circuits; Moore himself stated that “integrated circuits will lead to such wonders as […] personal communications equipment”, currently known as mobile phones. Our laptops and electronic wristwatches, medical imaging and digital processing technologies were made possible because of Moore’s Law. In fact, it has been argued that Moore’s Law is one of the main drivers of the economic growth seen in the last 50 years, as it has led to tremendous gains in productivity.

However, over the past decade the pace of innovation has slowed down, with Moore himself predicting the end of his law by 2025. To explain why, here is a small primer on transistors:

A transistor, while a simple invention in concept, is one of the foundations of our modern technological society. Without it, you would not be reading this article. Your phone probably has more than 1 billion transistors, which are microscopic and manufactured with incredible precision on a thin wafer of silicon. A transistor is like a switch, or gate, that either blocks or lets a small current of electrons pass through it. It’s the billions of combinations of these gates opening and closing that permit a computer to perform all its tasks, from basic arithmetic calculations to displaying this text on your screen.

Transistors today measure around 10 to 20 nanometres – only around 100 times larger than an oxygen molecule. At sizes this small, engineers have run into a problem they cannot economically fix: quantum tunnelling. The laws of quantum mechanics take hold and electrons start to obey different rules, sometimes simply crossing a closed transistor and corrupting data in the process. This problem has no economical solution at present, and so we are approaching the death of Moore’s Law.

The end of Moore’s Law has long been considered an inevitability. Gordon Moore himself set a conservative timeframe in his original 1965 paper, estimating that his observed rule would remain “constant for at least ten years”. And its death has been proclaimed many times since. Only the ingenuity of the industry has kept it alive for so long.

But this endgame does not signify an end for the advancement of computational power. Several alternative possibilities lie on the horizon, and these range from the simple and intuitive to the fantastical possibilities brought by the advent of quantum computing.

On the intuitive side, we could focus on creating specialized chips for certain tasks. The processor that powers your device (smartphone or computer) is a “beefy” unit, capable of undertaking a wide variety of tasks, albeit at the cost of efficiency; for very specific tasks, specialized chips may be created. Other solutions may lie anywhere from finding other materials to improving software so that it takes full advantage of current architectures – did you know that Excel does not use the extra cores in your computer to handle those heavier, 20.000-row tasks? Although these types of improvements can still take us a long way – at least a forty-fold increase in computing power, around 5 years of innovation – they can only take us so far. As for transitioning to other materials, graphene has been touted as a possible replacement, but the research is still in its early stages.
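As a rough sanity check on figures like the forty-fold increase above, an N-fold gain can be converted into doublings and then into years; note how heavily the answer depends on the doubling period assumed:

```python
import math

# Rough sanity check: convert an N-fold gain in computing power into
# doublings, then into years at a given doubling period.
def years_for_growth(factor: float, doubling_period_years: float) -> float:
    """Years needed to grow by `factor` at a fixed doubling period."""
    return math.log2(factor) * doubling_period_years

doublings = math.log2(40)                      # ~5.3 doublings
at_yearly_pace = years_for_growth(40, 1.0)     # ~5.3 years (yearly doubling)
at_18_month_pace = years_for_growth(40, 1.5)   # ~8 years

print(f"40x = {doublings:.1f} doublings")
print(f"~{at_yearly_pace:.1f} years at a 1-year doubling period")
print(f"~{at_18_month_pace:.1f} years at an 18-month doubling period")
```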

On the more futuristic side, quantum computing could usher in a new age of technology to rival that of the integrated circuit. Last year, Google published a paper in Nature claiming to have solved, in under 4 minutes, a task that would have taken a modern supercomputer – the processing equivalent of “around 100.000 desktop computers” – over 10.000 years. There are also studies looking into conceptually abstract possibilities, such as using DNA to perform arithmetic and logic operations, as well as storage.

Although we will probably never have personal quantum or “DNA” computers, because their upkeep costs and upfront investment are prohibitively high, a world in which a handful of companies offer processing solutions to anyone via cloud computing, much like we see today, sounds plausible.

What impact could the end of Moore’s Law have on the economy? How can we bring about the age of AI if we do not have the hardware to support software innovation? We can only wait and see what happens.

Sources: Intel, Washington Post, Nature, 311 Institute, MIT Technological Review, NY Times, Wikipedia

From flies in urinals to higher savings rates: How nudging influences our decisions

“A choice architect has the responsibility for organizing the context in which people make decisions.”

— Richard H. Thaler, Nudge: Improving Decisions About Health, Wealth, and Happiness

Nudging has been the starting point of many economic studies, particularly in behavioral economics, as it explores the way people's decisions can change significantly, and predictably, when the context of those decisions is modified in a very subtle way. The concept was first introduced by the Nobel Prize winner Richard Thaler and Professor Cass Sunstein. Thaler, however, stressed that nudging should be used for good, with the goal of improving society's welfare – but has his wish become a reality?

It all started at Amsterdam's Schiphol Airport, where Thaler's ideas were first put into practice in a simple experiment aimed at solving a very real problem: the cleaning manager wanted to reduce the “spillage” around urinals in order to cut cleaning expenses in the airport's bathrooms.

Where does Thaler's nudging theory come into play? Well, to the surprise of many, the suggested solution was to “paint” a small fly near the drains of the urinals. Many may (and did) find it ridiculous and childish, but it was actually a very credible and simple solution that followed the reasoning of human (particularly men's) behaviour. The goal was to “guide” men into reducing spillage (without them even noticing), and the results were an impressive 80% reduction in spillage and a consequent 8% reduction in cleaning costs. The raw truth is that men can't help themselves and start aiming at the fly.

Fly painted in Urinal


This is one of the most famous examples of nudge theory, as it respects the most important principles that guide its use. Firstly, all nudging should be transparent and never intended to mislead (a practice to be guarded against – just as with advertising – since nudging can in fact be used to manipulate human behaviour). Secondly, it should be really easy to opt out of the nudge (it is not mandatory for anyone). And thirdly, the behaviour encouraged should aim at improving the welfare of those being nudged (once more, no harm is intended).

Relevant not only for small problems but also in various other areas, this theory has impacted and been adopted by governments all over the world. Over the last decade, many countries have tested the effects of nudging – aiming at lower costs and better exploitation of long-lasting benefits – and the results have been astonishing. Whether the goal is to promote a healthier lifestyle by displaying healthy food at eye level in supermarkets – the impact of ordering and context framing – or to increase the population's savings rate by automatically enrolling people in a savings plan, the results have been mostly positive.


Another example that illustrates the impact of a nudge lies in organ donation: in countries where enrollment is required of those who want to donate organs after death, the percentage of people who go through with the process is very low. On the other hand, in countries where organ donation is an opt-out choice (that is, people are automatically enrolled), consent rates are extremely high.

Effective consent rates, by country. Explicit consent (opt-in, gold) and presumed consent (opt-out, blue)



This enormous difference can be explained by what is called the default effect. Socially speaking, people tend to be change-averse and to avoid anything that requires extra effort, and so they accept the proposed default option even though they can reject it at any time.

This being said, one can probably anticipate that Thaler's wish to use the nudge for good isn't entirely respected, as private entities are also exploring human psychology and using the nudge effect for profit, enticing consumers either to buy their products or to use them more often.

Think about two cases in particular: Spotify and Netflix. If you don't have a premium Spotify subscription, you may be tired of hearing something like “Don't have premium? Try a 1-month free trial now!”. As you may know, this very tempting offer requires the allured clients to hand over their credit card details, to be kept for the period after the free trial. Sure, after the free month the consumer can give up the premium account and pay nothing, but statistics show that many stay immersed in the inertia of the default effect and simply don't have the energy to quit, leading to a monthly payment they wouldn't have incurred if Spotify hadn't given them the nudge. In Netflix's case, the nudge is even more camouflaged: by simply having an automatic countdown to the next episode, they encourage people to follow the path of least resistance and keep watching.

Netflix automatic queue


It is important to highlight that in both cases consumers weren't forced to do anything; they were simply guided towards an option. Notwithstanding, this suggested (consumerist) behaviour falls a bit short of doing good and has been a matter of concern to the authors of the theory.

“Whenever I’m asked to autograph a copy of Nudge, the book I wrote with Cass Sunstein, the Harvard law professor, I sign it, Nudge for good. Unfortunately, that is meant as a plea, not an expectation”

— Richard Thaler, confessing to The New York Times

Whether we notice it or not, nudging is present in our daily lives, and its influence can lead us to make choices we wouldn't otherwise make. A nudge can be something as simple as a buzz from a phone, a “ding” from a microwave or even a jingle played by the washing machine, alerting us and prompting us to act. The effects, however, go way beyond mere sounds, and as people start realising they might be manipulated without even noticing, they begin to distrust even the things that lead them to better choices, harming the true purpose of the theory. Regardless, it will continue to be part of our lives, and it is our role as citizens to be aware of both its potential and its limitations, so that we can make the most of such a powerful tool.


Sources:

  • Financial Times

  • NY Times

  • Washington Post

  • Book “Nudge” by Richard Thaler and Cass Sunstein

Scientific Revision: Ana Clara Malta, Behavioral Economics Team Leader

Political Polarization in the U.S.

Aftermath of Antifa protests that led to the cancellation of right-wing Milo Yiannopoulos’s talk at UC Berkeley on the 1st February, 2017.

“(…) each individual among the many has a share of virtue and prudence, and when they meet together, they become in a manner one man, who has many feet, and hands, and senses (…) Hence the many are better judges than a single man of music and poetry; for some understand one part, and some another, and among them they understand the whole.”

— Aristotle, Politics

In his work Politics, Aristotle, while discussing whether the supreme power in the state should belong to the multitude or to the few, argues that the principle of predominance of the many, as opposed to an oligarchy, is, even with all its flaws, grounded in an idea which often presents itself almost self-evidentially to us: that good-faith deliberation of many people is worthwhile since individuals can share knowledge and incorporate the best arguments of every side and, thereby, reach a conclusion/judgement which is more in accordance with reality.

Is it reasonable to expect that all deliberation will have this constructive, moderating effect on what people believe? In a group of people in which participants are exposed to a plurality of views and opinions, the necessary weighing of different arguments can occur.  However, groups can be homogeneous in opinions; what will be the outcome of deliberation then?


An interesting study by the University of Chicago Law School on group polarization had several groups of individuals from two counties in Colorado (one majority-conservative and one majority-liberal) deliberate on certain issues (such as affirmative action and global warming) and recorded their pre- and post-deliberation opinions. It found a consistent tendency for individuals to move toward their group's pre-deliberation tendency, i.e. liberals became more liberal and conservatives more conservative.

The researchers argue, for instance, that informational influences are one of the factors that can explain this behavior. These come about because, in any group with an initial predisposition, the number of arguments presented in favor of the initial tendency will be larger than the number in the opposite direction; owing to this biased argument pool, individuals are more likely to polarize and, with that, become more confident in their views. In addition, corroboration by like-minded people further increases individuals' assurance in their world-view. Factors like reputational concerns are likely also at play: usually, people care about being perceived favorably by others and may adjust their beliefs, even if only slightly, to better fit in with the group (see the Asch conformity experiments).

What are the implications of these results?

As it turns out, the geographical political segregation seen in the U.S. suggests that polarization will occur as a matter of course. With Democrats concentrated in urban centers and Republicans dominating less densely populated areas, the dynamics described above should produce a growing opinion divide between liberals and conservatives. The data bears this out:

The Pew Research Center publishes many polls and reports on U.S. public opinion, political polarization and partisan divides. In addition to its 2017 report, which details many metrics of the growing divide between parties and people, it published an interactive chart that clearly corroborates our expectations.

Source: Pew Research Center.


While geographical political segregation is undoubtedly a powerful amplifier of these tendencies, and certainly worrying because of the vicious cycle it creates, there is another, more recent factor worth mentioning: the Internet. At first glance, one might think that, by freeing people's interactions from the shackles of distance, the arrival of the world wide web would work against polarization. However, this effect will be lessened, and perhaps completely nullified, if people choose to isolate themselves along partisan lines online.

The question arises:

Are human beings’ homophilic tendencies also observed online?

We should first note that the spread of the Internet brought access to inordinate amounts of information, making its selection all the more vital. Even before the web you could, of course, choose which newspaper to read, but the digital age amplified information personalization enormously. It is therefore possible for individuals to cocoon themselves in informational and ideological bubbles where polarization can occur, just as was observed in Colorado. Let’s look at a real example:

A study of political polarization on Twitter gathered data on many users’ political interactions and analyzed the resulting retweet and mention networks. It found two separate communities in the retweet network with a high degree of partisan division:

A 2019 poll from Berkeley IGS shows that conservatives living in California (a strongly Democratic-leaning state) are much more likely than liberals to have considered leaving the state; one of the most cited reasons is the state’s political culture. As conservatives leave, the state grows still more uniformly liberal, making it more likely that other conservatives also move out.

On the other hand, the mention network did not reveal the obvious political division seen in the retweet network; instead, there was a higher degree of heterogeneity. However, the researchers contend that, even though ideologically opposed individuals interact with each other through mentions, this should not be read as a cure for the issue of Twitter polarization. Since political discourse on the platform is already highly partisan and disconnected from normal, face-to-face interactions, they argue that “these interactions might actually serve to exacerbate the problem of polarization by reinforcing pre-existing political biases”.
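The study’s approach to the retweet network can be sketched in miniature. Below is a minimal seeded label-propagation pass over a toy retweet network; all account names, edges and seed labels are invented, and the original analysis ran a far more sophisticated pipeline over millions of real tweets:

```python
from collections import Counter

# Toy retweet network: an edge means the two accounts retweeted each
# other at least once. All names and edges are invented for illustration.
edges = [
    ("L0", "L1"), ("L0", "L2"), ("L1", "L2"), ("L1", "L3"), ("L2", "L3"),
    ("R0", "R1"), ("R0", "R2"), ("R1", "R2"), ("R1", "R3"), ("R2", "R3"),
    ("L3", "R3"),  # a lone cross-cluster retweet
]

neighbors = {}
for a, b in edges:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)

# Seed two accounts with known leanings (the study seeded its propagation
# with a manually annotated set), then spread labels to everyone else.
labels = {"L0": "left", "R0": "right"}

changed = True
while changed:
    changed = False
    for node in sorted(neighbors):
        if node in ("L0", "R0"):        # seeds never change
            continue
        votes = Counter(labels[n] for n in neighbors[node] if n in labels)
        if not votes:
            continue
        top = votes.most_common()
        if len(top) > 1 and top[0][1] == top[1][1]:
            continue                    # skip exact ties: keeps this deterministic
        if labels.get(node) != top[0][0]:
            labels[node] = top[0][0]
            changed = True

communities = {lab: sorted(n for n in labels if labels[n] == lab)
               for lab in ("left", "right")}
print(communities)
# {'left': ['L0', 'L1', 'L2', 'L3'], 'right': ['R0', 'R1', 'R2', 'R3']}
```

Despite the single cross-cluster edge, the propagation recovers two cleanly separated communities, mirroring in miniature the partisan split the study found in the real retweet network.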

The potential consequences of a growing ideological divide between members of a society warrant concern. For example, animosity between Republicans and Democrats in the U.S. has been increasing, as shown by a recent Pew Research Center report.



This, combined with the occurrence of events such as the Charlottesville protests or the UC Berkeley protests, hints at a weakening social fabric as a symptom of the widening chasm.

Considering what we’ve seen so far, polarization seems fated to continue its course, especially since it is not clear what can and should be done at an institutional level to confront the problem. One thing, however, is certain: fostering a culture in which individuals understand the benefits of learning from one another, and therefore value meaningful, mutually advantageous discourse, can go a long way toward countering the trend described above. Furthermore, crisply distinguishing between political disagreements and ordinary social interactions is of the utmost importance if we want to maintain cohesion in a society afflicted by a deep ideological split.


Sources:

  • CNN

  • ABC News

  • Pew Research Center

  • FiveThirtyEight

  • What Happened on Deliberation Day, University of Chicago Law School Chicago Unbound, Journal Articles

  • Political Polarization on Twitter, M. D. Conover, J. Ratkiewicz, M. Francisco, B. Gonçalves, A. Flammini, F. Menczer

  • Leaving California: Half of State’s Voters Have Been Considering This, Berkeley IGS Poll