Machine Learning (Part I)

“Machine Learning is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…”

— Dan Ariely

Machine Learning (ML) and Artificial Intelligence (AI) are buzzwords often used interchangeably in the casual and intellectual discourse of today. Many ideas often spring to mind when either is mentioned: data science, self-driving technology, big data and, on the more ridiculous side, robots hellbent on humanity’s destruction. The truth, however, is that Machine Learning is part of our increasingly data-driven world. It makes our lives better, despite several shortcomings, and is likely to be relevant to you even when not working directly with it.


Picture1.png

Let us take a quick moment to make the distinction between ML and AI. Consider the picture above: Machine Learning, a subset of AI, is a field dedicated to generating predictions based on the hidden patterns machines pick up within data. In practice, it is an AI technique where the machine writes its own rules. This means that a machine is fed with inputs, such as housing data (in tabular form) or photos of dogs and cats, and it learns to perform a specific task without humans telling it how to do so.

In this article, we hope to explore some interesting case studies, such as how Tinder uses these learners to match you with your next date, or how Amazon attempted to use an algorithm to analyse CVs (revealing a bias against women instead). With Tinder, for example, a machine takes our explicit preferences (e.g. age range) and implicit ones (e.g. our photo was taken in a forest) to pair us with people likely to swipe right back. This is a task performed by several algorithms (or learners/machines), each one trained specifically for its task.

How does my swiping allow a Machine to learn?

Tinder uses an Elo-like system, attributing a score to every user. Based on this score, it estimates the likelihood of two individuals swiping right on each other, resulting in a match. The score is determined by multiple factors, such as the photos, bio and other profile settings, as well as swiping activity. Users with similar Elo scores, who have been identified as sharing similar interests, will be shown to each other.
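Tinder has never published its exact formula, but the classic Elo rating from chess gives a feel for how such a score could behave. The sketch below uses the standard chess formulas; applying them to swiping is purely a hypothetical illustration, not Tinder's real system:

```python
def expected_match(score_a, score_b):
    """Standard Elo expected-score formula: the probability that the
    first user 'wins' the pairing (read here, loosely, as the chance
    of a favourable swipe outcome)."""
    return 1 / (1 + 10 ** ((score_b - score_a) / 400))

def update(score, actual, expected, k=32):
    """Nudge a user's score towards their observed outcomes:
    actual is 1 for a right-swipe received, 0 otherwise."""
    return score + k * (actual - expected)
```

Two users with equal scores get an expected value of exactly 0.5, and each surprising outcome (a right-swipe the model did not expect) moves the score more than an unsurprising one.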

Let us refer to the diagram below.

Picture2.png

Firstly, the algorithm starts by analysing the user's profile, collecting information from the photos they posted and the personal information they wrote in their bio. From the photos, the algorithm can pick up on interests or cues, such as liking dogs or nature. Through the bio, the machine will profile you based on the words and expressions used (see picture below). From a technical perspective, these are distinct tasks likely to be performed by different learners – identifying words and sentiments is fundamentally different from recognizing dogs in pictures.

Picture3.png

At this point, Tinder still does not know much about one's preferences and will therefore show your profile to other users at random. It records the swiping activity and the characteristics of the people swiping right or left. Additionally, it identifies more features or interests from the user and attempts to present the profile to others in a way that increases the likelihood of someone swiping right. As it collects more data, it becomes better at matching you.

The ‘Smart Photos’ option, a feature that places your ‘best’ or ‘most popular’ photo first, is another instance where Tinder uses Machine Learning. Through a random process in which a profile and its pictures are shown to different people in different orders, it will eventually create a ranking of your photos.

In Smart Photos, the main goal is for you to be matched, which works best when the most relevant picture is placed first. This means the most ‘popular’ photo – the one that performed best overall – might not always be the right choice: think of someone who likes animals. For these people, the photo of you holding a dog is likely to be shown first! By ranking photos against each viewer's preferences, the algorithm can find a match on the strength of a single photo.
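The "show at random, then rank by results" idea can be sketched in a few lines. Everything below – the class, the photo names, the uniform-random exploration – is a made-up toy, not Tinder's actual implementation; it only illustrates the mechanism described above:

```python
import random
from collections import defaultdict

class SmartPhotos:
    """Toy Smart-Photos-style ranker: show photos in random order,
    record which ones earn right-swipes, then rank by the observed
    right-swipe rate."""

    def __init__(self, photos):
        self.photos = list(photos)
        self.shown = defaultdict(int)   # times each photo was displayed
        self.liked = defaultdict(int)   # times it earned a right-swipe

    def next_photo(self):
        # Exploration phase: pick a photo uniformly at random.
        photo = random.choice(self.photos)
        self.shown[photo] += 1
        return photo

    def record_swipe(self, photo, right):
        if right:
            self.liked[photo] += 1

    def ranking(self):
        # Empirical right-swipe rate, best-performing photo first.
        rate = lambda p: self.liked[p] / max(self.shown[p], 1)
        return sorted(self.photos, key=rate, reverse=True)
```

After enough impressions, the photo with the highest observed swipe-right rate floats to the top, exactly the "eventually create a ranking" behaviour described above.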

By and large, the techniques that match you with other people as described above are part of a school of techniques in Machine Learning called ‘Supervised Learning’. In other words, the algorithm that learns to identify dogs and nature has been trained with similar pictures of dogs and nature. These stand in contrast with other schools, such as ‘Semi-supervised Learning’ and ‘Unsupervised Learning’.
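To make the "supervised" idea concrete, here is a toy nearest-neighbour learner: it is given inputs *and* the desired outputs (labels), and predicts by copying the label of the closest known example. The two features and the training points are entirely invented for illustration; a real photo classifier learns from millions of pixels, not two numbers:

```python
def nearest_neighbour(train, query):
    """Toy supervised learner: given labelled examples (inputs AND
    desired outputs), predict the label of the closest known point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Hypothetical two-feature examples: (fur_length, ear_floppiness)
train = [((0.9, 0.8), "dog"), ((0.8, 0.9), "dog"),
         ((0.3, 0.1), "cat"), ((0.2, 0.2), "cat")]
```

The crucial point is the labels: without the "dog"/"cat" answers attached to the training data, this approach cannot work – which is precisely what separates supervised from unsupervised learning.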

The Perils of our (Human) Supervisors

In 2014, a group of Amazon engineers were tasked with developing a learner that could help the company filter the best candidates out of the thousands of applications. The algorithm would be given data with past applicants’ CVs, as well as the knowledge of whether said applicants were hired by their human evaluators – a supervised learning task. Considering the tens of thousands of CVs that Amazon receives, automating this process could save thousands of hours.

The resulting learner, however, had one major flaw: it was biased against women, a trait it picked up from the predominantly male decision-makers responsible for hiring. It started penalizing CVs that mentioned the female gender, as in a CV listing “Women’s chess club”.

To make matters worse, when the engineers adjusted the learner to ignore explicit mentions of gender, it started picking up on implicit references: it detected non-gendered words that were more likely to be used by women. These challenges, plus the negative press, led to the project being abandoned.

Problems such as these, arising from imperfect data, are linked to an increasingly important concept in Machine Learning called Data Auditing. If Amazon wanted to produce a learner that was unbiased against women, it would have had to use a dataset with a balanced number of female CVs, as well as unbiased hiring decisions.

The Unsupervised Techniques of Machine Learning

The focus up until now has been on supervised ML. But what other types are there?

In Unsupervised Learning, algorithms are given a degree of freedom that the Tinder and Amazon ones do not have: unsupervised algorithms are only given the inputs, i.e. the dataset, and not the outputs (or a desired result). They divide into two main techniques: Clustering and Dimensionality Reduction.

Remember when, in kindergarten, you had to sort different shades of red or green into their respective colour groups? Clustering works in a similar way: by exploring and analysing the features of each datapoint, the algorithm finds different subgroups that structure the data. The number of groups can be chosen either by the person behind the algorithm or by the machine itself. If left alone, the machine will start at a random number and iterate until it finds the number of clusters (groups) that interprets the data most accurately, based on the variance.
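The most common clustering algorithm, k-means, can be sketched in plain Python. This minimal version fixes the number of clusters k in advance; choosing k automatically, as described above, would wrap this in a loop over candidate values and compare the resulting variance:

```python
import random

def kmeans(points, k, iters=50):
    """Minimal k-means sketch for 2-D points: repeatedly assign each
    point to its nearest centre, then move each centre to the mean
    of the points assigned to it."""
    centres = random.sample(points, k)  # start from k random points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centres[i][0]) ** 2
                                      + (p[1] - centres[i][1]) ** 2)
            clusters[nearest].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centre if a cluster emptied out
                centres[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centres, clusters
```

Fed two well-separated blobs of points, the centres settle on the blob means within a few iterations – the "subgroups" emerge from the data alone, with no labels involved.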


Picture4.png

There are many real-world applications for this technique. Think about marketing research for a second: when a large company wants to group its customers for marketing purposes, it starts with segmentation – grouping customers into similar groups. Clustering is the perfect technique for such a task; not only is it likely to do a better job than a human – detecting hidden patterns that would go unnoticed by us – but it can also reveal new insights about the customers. Even fields as distinct as biology and astronomy have great use for this technique, making it a powerful tool!

In brief, Machine Learning is a vast and profound topic with many implications for our everyday lives. If you’re interested in learning more, be sure to check out the second part of this article!


Sources: Geeks for Geeks, Medium, Reuters, The App Solutions, Towards Data Science.

Daniel André
Laura Osório


André Rodrigues

The (Next) Generation of Energy

“Climate change is, quite simply, an existential threat for most life on the planet – including, and especially, the life of humankind.”

— António Guterres (2018)

For the past decade, the world has witnessed the incremental transformation of the mobility industry, largely through the phenomenon of electric cars. Nowadays, beyond Tesla’s Gigafactory walls, traditional carmakers are taking the necessary steps to phase out the combustion engine. Several prominent carmakers expect the majority of their income to come from all-electric car sales by 2030 and to have a fully electric line-up – counting hybrids and other mixed options as well. In the short to medium term, expect as many as 500 electric car models to be available by 2022.

Concurrently, the energy sector is slowly weaning off its thirst for fossil fuels through renewable energies. As countries rethink their energy concerns – a matter of debate that goes well beyond the environment – and implement change through public policy, the automotive industry is being remade with battery powered electric vehicles.

This transformation, slow as it may be when contextualized with the environmental crisis we are in, is crucial, but not without its own set of problems. Energy from renewables is largely dependent on factors beyond our control: how feasible would it be to have a fully renewable electric grid, and what good is it to power car batteries if the electricity comes from fossil fuels? And on the topic of car batteries, what about their environmental cost, how reliable are they, and what alternatives could we consider?

This semester, the team behind the Technology articles at Nova Awareness Club has been tasked with two other topics – Health and Environment. This is no small task in 2020, a time when the world has been ravaged by a global pandemic and there is a sense of urgency about taking the necessary steps to prevent irreversible damage to the environment.

Although the choice of topic for this article was somewhat fortuitous – the main idea was to showcase what readers of The Awareness News should come to expect this term – the underlying message has never been truer. Within our scope, the team pledges to bring awareness to several environment-related questions, and to do so using language that conveys the environmental crisis we are in while reducing unnecessary technical jargon to a minimum.


Renewable Energies, the Powergrid and the duck-related neologism

Duck Curve. The practical effect of renewable energies on power grids, as seen in California (the curves vaguely trace the outline of a duck). Source: Vox

As previously stated, renewable energies are often dependent on factors beyond human control, namely the weather. Solar farms only produce energy when the sun shines (usually not at night), and a drought – an extreme weather event – can cripple hydroelectric output.

Learning how to juggle these power outputs is key to one day achieving a fully renewable electric grid – a concept that has yet to materialize in real life. For this next part, consider the graph pictured above.

The Duck Curve is a recently coined term describing the discrepancy between peak power production and peak power needs. This graph shows a very specific example, likely to occur in places with an elevated solar output. Different electricity profiles – in other words, the different mixes of energy sources powering an electric grid – dictate how pronounced the effect is.
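The duck shape is just arithmetic: the "net load" that conventional plants must supply is total demand minus solar output, hour by hour. The numbers below are invented for illustration (real Californian figures differ), but they reproduce the midday "belly" and the steep evening "neck" of the curve:

```python
# Hypothetical hourly profiles in GW (illustrative numbers, not real data):
demand = [20, 19, 19, 20, 22, 25, 27, 28, 28, 27, 26, 26,
          26, 26, 27, 28, 30, 33, 35, 34, 31, 27, 24, 21]
solar  = [0, 0, 0, 0, 0, 1, 3, 6, 9, 11, 12, 13,
          13, 12, 11, 9, 6, 3, 1, 0, 0, 0, 0, 0]

# Net load is what every other power source must still supply.
net_load = [d - s for d, s in zip(demand, solar)]

midday_dip   = min(net_load[10:16])   # the duck's "belly"
evening_peak = max(net_load[17:21])   # the top of its "neck"
ramp = evening_peak - midday_dip      # steep ramp the grid must handle
```

The problem the duck curve names is exactly that `ramp`: as the sun sets, conventional plants must climb from the midday dip to the evening peak in just a few hours.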

Keep the following takeaways in mind before we delve into a practical question:

  1. As of yet, there is no such thing as a power grid fully supplied by renewable energy sources.

  2. Consequently, the electric cars you see run partly on electricity generated by fossil fuels. The degree likely depends on the electricity profile of the place you live in.

The question of whether an electric car is better from an emissions standpoint is often finicky; a 2017 article by The Guardian states that an electric car’s lifetime emissions are just 20% lower than a traditional combustion car’s. There is, however, a reduction in day-to-day use, and it relates directly to your electricity profile.


The burning questions behind the ‘B’ word… for Batteries

For a consumer, the limitations of an electric car largely revolve around its battery. From an environmental standpoint, making batteries also poses an acute environmental cost.

The deeper we go into the topic, the more fraught it is. The production of lithium-ion cells is energy intensive and demands rare metals – in other words, assessing battery production as the greener, environment-friendly alternative to traditional cars starts with an opportunity-cost analysis between the two. Fortunately, as energy is increasingly sourced from renewables, this becomes less of a question.

Taking into account the pollution from old batteries – which has not been overlooked in the research for this article – the real environmental cost of batteries seems to be hidden behind several layers of externalities. As such, the next part of this article is dedicated to the often-overlooked sibling of battery-powered vehicles: hydrogen and hydrogen-powered cars. The process behind it is simple enough: combining hydrogen with oxygen generates energy and clean water vapour.

From a functional perspective, hydrogen cars work similarly to battery-powered electric cars, but with the added benefit of discarding the battery. Albeit with their own set of challenges, hydrogen-powered vehicles offer conventional and green mobility. As such, the technology has gained traction despite scant media coverage.

The challenges, however, should not be overlooked. Although hydrogen is the most common substance in the universe, it is rare to find it in its pure form. Producing it amounts to a net-negative energy exchange: we must spend X amount of energy to obtain the equivalent of X − Y in hydrogen fuel. It is also very difficult to contain. There are two alternatives here: either pressurise a container, or turn the hydrogen into a liquid by reducing its temperature. Both options are costly. With everything tallied up, the average price per kilometre at the time of writing is estimated at $0.17, versus $0.02 for electricity.
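Those per-kilometre figures make the running-cost gap easy to quantify. A back-of-the-envelope sketch (the annual mileage below is a made-up assumption, and real prices vary by market and over time):

```python
# Per-kilometre running costs quoted above, in USD (assumed constant).
COST_PER_KM = {"hydrogen": 0.17, "electricity": 0.02}

def trip_cost(km, fuel):
    """Fuel cost of driving `km` kilometres on the given energy source."""
    return km * COST_PER_KM[fuel]

yearly_km = 15_000  # hypothetical annual mileage
extra = trip_cost(yearly_km, "hydrogen") - trip_cost(yearly_km, "electricity")
```

At these rates, hydrogen costs 8.5 times more per kilometre than electricity – a gap of over two thousand dollars a year for an average driver.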

Ultimately, hydrogen can be seen as a tradeoff between efficiency (creating it is a net negative in energy) and storage capacity, in which hydrogen wins in spades against current lithium-ion battery technology. This, in turn, opens up possibilities that seemed impossible or too far off with batteries – namely, replacing fossil fuels in commercial air travel.

Some governments are ready to invest in and incentivize the transition to hydrogen. Japan was the first country to develop a Basic Hydrogen Strategy, in 2017 – the most promising initiative for establishing hydrogen as a main energy source beyond mobility alone. So far, Japan has succeeded in extracting hydrogen from sources such as manure and waste plastic, and with a decent-sized hydrogen car fleet, it is an important proof of concept for hydrogen’s efficiency and sustainability. It is the most successful case of a country committing to hydrogen for its energy needs.

Germany is another adherent to hydrogen, already producing 20 billion standard cubic metres, although 95% comes from fossil fuels such as coal and natural gas. The Bundesregierung – the German federal government – adopted a national hydrogen strategy in June of this year, ensuring support for hydrogen innovation and technology for both German and European companies on the international stage.


Sources: The Guardian, Vox, New York Times, BBC, YouTube videos, UN News, BloombergNEF, California Energy Commission, NREL, European Environmental Agency.

Lab Grown Food: Opportunity for sustainability or dystopian nightmare?

In 2014, the United Nations issued a report claiming that at current rates of soil degradation “all of the world’s topsoil could be gone within 60 years”.

The production of livestock is responsible for 14 to 18% of our greenhouse emissions and takes up to 70% of all agricultural land. Most of the world’s crops are used to sustain livestock, and it is a major cause of deforestation and water contamination – problems which further aggravate climate change and the health of our ecosystems.

While the world population has doubled over the last 50 years, the amount of meat produced has more than quadrupled – in fact, if the world ate as much meat as the top 20 meat-eating countries, the whole surface of habitable land would have to be used to feed people, and even packing animals together still wouldn’t be enough. Moreover, factory-farmed animals are fed antibiotics: in the US, more than 70% of all antibiotics sold each year now go to farm animals, which has led many to speculate that the industry is fueling the risk of a deadly antibiotic-resistant pathogen – a so-called superbug – with consequences greater than the current Covid-19 pandemic.

By changing the way we produce meat, or our meat consumption habits, this outlook would improve significantly. But what if we could grow all our food in a lab? Meat might not be the only lab-grown product of the future.


Lab-grown meat, as the name suggests, consists of creating a piece of meat through cell culture. Initially, a small segment of cell tissue is taken from an animal and subsequently added to a growth medium – like a soup that provides proteins, vitamins, sugars, and hormones. Along with a temperature-controlled environment, the cells are tricked into thinking they are still inside their owner, hence growing and replicating. This process takes two to six weeks, and the final product is a doughy chunk of meat, close to minced meat, which will then give rise to our everyday meat products.

Artificial lab grown meat in Petri dish (YouTube)

Since the ’80s, many products have been used as attempts to substitute meat, such as soybeans and wheat gluten; however, they could never replace the taste of a hamburger. The objective of cultured meat is quite complex: to create something that tastes just like regular meat, with as many vitamins, proteins, and nutrients, that can be cooked and is just as affordable. The biggest challenge, however, is to reproduce its consistency, which heavily influences its flavour and juiciness.

Scientists are looking for ways to reproduce a steak’s structure: they need to supply nutrients to cells in the centre of a piece of meat, like vessels do in a body. This has been a huge challenge, and the latest progress was made by a Harvard team using a type of gelatine. They came up with a product that, although not having as much muscle fibre as natural meat, had a similar texture. But even if muscle cells, fat and connective tissue can be made from scratch, the biggest challenge still remains: how can we combine them to build an actual steak, with its texture, aroma, flavour, appearance and functionality?

And is lab-grown meat even affordable? The first lab-grown burger, produced in 2013 by Mosa Meat, cost $1.2 million per pound (more or less 2.45 million euros per kilogram), due to the large costs of the cell-production process. However, Future Meat, another lab-grown meat start-up, is planning to launch a new line of in-vitro meat that will cost less than $10 per pound (more or less 20 euros per kilogram). This steep decrease in price over just a few years is expected to continue.
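The unit conversion behind those figures can be checked in a couple of lines. The exchange rate used here (1 USD ≈ 0.92 EUR) is an assumption for illustration; rates fluctuate:

```python
# Convert a price in US dollars per pound to euros per kilogram.
LB_PER_KG = 2.20462     # pounds in one kilogram
EUR_PER_USD = 0.92      # assumed exchange rate (illustrative)

def usd_per_lb_to_eur_per_kg(price_usd_per_lb):
    return price_usd_per_lb * LB_PER_KG * EUR_PER_USD

first_burger = usd_per_lb_to_eur_per_kg(1_200_000)  # 2013 Mosa Meat burger
future_meat = usd_per_lb_to_eur_per_kg(10)          # Future Meat target
```

Running the numbers, $1.2 million per pound works out to roughly 2.4 million euros per kilogram, and $10 per pound to roughly 20 euros per kilogram.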

When it comes to the environmental impacts, they are still being discussed. While Mosa Meat, a clean-meat producer, states that its cultured-meat process requires 99% less land and 96% less water than livestock agriculture, some say producing all our meat in labs would only cut greenhouse emissions from beef by 7%, because of the energy the process requires.

There is also a moral dilemma which still needs to be tackled: the growth medium. Currently, scientists use Fetal Bovine Serum – in other words, blood harvested from the fetuses of slaughtered pregnant cows, a process that kills the fetus. While firms are looking for a plant-based substitute, the current methods do little to reduce cruelty, although they would drastically reduce the number of slaughtered animals.

Finally, one must also consider the yuck factor: many people are still resistant to the idea of eating something created in a lab, which can be explained by the Uncanny Valley effect. In robotics, this is the discomfort with humanoids that are close to being human, but not quite there yet. The same happens with meat: when faced with a very close imitation, your brain expects it to be exactly the same, and if it is not, you will most probably be put off. In February 2019, the Animal Advocacy Research Fund funded a survey which revealed that 29.8% of U.S. consumers, 59.3% of Chinese consumers, and 48.7% of Indian consumers would be very or extremely willing to regularly purchase cell-based meat.


Meat is not the only food that scientists and entrepreneurs are trying to grow in a lab. Solar Foods grew flour, the main source of calories in the West, inside a metal tank. The process, described as a froth-like soup of bacteria fueled by hydrogen, allowed for the creation of artificial flour, with the use of hydrogen touted as ten times as efficient as photosynthesis. In these labs, where food is grown in giant vats, Solar Foods estimates the land efficiency to be 20,000 times greater. Under these circumstances, food could be grown virtually anywhere in the world, on a fraction of the land.

Soylent Green – a 1973 movie featuring Charlton Heston, based on a story by Harry Harrison – describes a world in the year 2022 where exponential population growth has led to natural disasters and chronic food and water shortages. As the actual year 2022 looms, this seems a good time to discuss our options to ensure this fiction does not become reality.

World Population Projections (ResearchGate)

Maria Mendes

João Guedes

Daniel André

Dubai: The Pearl of the Middle East

This city needs no introduction. As the main attraction and destination of the Middle East, Dubai is an exotic and trendy city full of luxury and amazement. Skyscrapers everywhere, including the tallest in the world, with new ones going up nonstop. The most amazing and extravagant hotels, such as the Burj Al Arab: a seven-star hotel with 200 rooms, each two stories high. A coast with artificial capes and islands, full of extraordinary mansions. Its very own indoor ski resort, Ski Dubai, while outside temperatures exceed 40º Celsius. One of the most spectacular cities in the world, and one that in many ways makes no sense at all. How can a city so successful in so many ways be built in a hostile desert, by a previously little-known people, in a region so riven by political tensions and wars?


The Past

Dubai is one of the seven monarchies that make up the United Arab Emirates (UAE), located on the coast of the Persian Gulf.

At the beginning of the twentieth century, Dubai was just a small, insignificant trading port. The city survived through special diplomatic relations with the United Kingdom, which offered stability, and by selling its finest trade resource: high-quality pearls. The only special thing about the city was its strategic position, close to the Strait of Hormuz.

In the 1930s, the creation of high-quality fake pearls and the Great Depression devastated the economy. Dubai, by then an official protectorate of the British Empire, experienced great emigration and economic losses. It was in this period that its people realised the disadvantages of depending on a single trade resource, and the advantages of regional stability, provided by the British. These would be the two factors that came to define Dubai.

Old Dubai in 1950 (source: Wikipedia)

Throughout the twentieth century, more and more oil was found in the Emirates, but not much in the Emirate of Dubai. When the UAE became independent in 1971, the country grew increasingly dependent on its oil exports. But Dubai learned from its past: it focused on diversifying its sources of income. It invested its share of the oil revenues in infrastructure such as ports, roads and airports. From there, it attracted foreign investment, granting special economic zones to anyone interested. All of this was only possible thanks to near-perfect stability in the country. As the years passed, Dubai became a great competitor in maritime trade, banking, finance, energy, scientific innovation, aviation and, of course, real estate. It was in the 1990s that the city exploded with its famous skyscrapers, while wars were being fought all over the Middle East.


The Present

Nowadays, it is a global city like no other. Over two million people live in Dubai, more than three quarters of them immigrants. While more than 80% of the UAE’s GDP depends on oil-related revenues, less than 5% of Dubai’s does. Because it sits right between Asia, Europe and Africa, and is so safe and diversified in its services, it serves as a bridge for business and diplomacy between the continents. It is, in many ways, the Switzerland of the Middle East.


Present Dubai

The main types of people Dubai attracts are entrepreneurs, who establish their companies in the city and make it increasingly competitive; qualified workers, who staff those companies; and tourists – 13 million per year, from the extremely rich to the average Westerner – as tourism is a field in which Dubai excels.

I was able to interview an entrepreneur and a tourist so that they could share their experiences in Dubai:

Our entrepreneur is the owner and CEO of a marketing company in Portugal. He chose to do business in Dubai to take advantage of the bridge between societies. Not only is it easy to set up a business in the city, but there is also easy access to other markets in Asia and Africa. There are companies from all over the world in Dubai, making the competition fierce; it is extremely difficult to survive in such a market, with all the big international players present. Still, he is steadily surviving.

Our tourist is a student from Nova SBE who travelled to Dubai during the summer holidays. She found many similarities between Dubai and the big cities of the USA: big skyscrapers, big shopping malls, great suburban areas, gigantic highways and the automobile as the main form of transportation. Everything similar, only more extravagant. She particularly liked the desert landscape, the extravagant shopping malls and the culture. Old Dubai is often forgotten, but the old part of the city is where you can truly find the people’s roots – for example, in the Souks, covered traditional markets, each dedicated to a different type of product, from clothes to gold.


Dubai’s traditional covered markets, the Souks

Her trip dispels the myth of Dubai being only for the super-rich. You can still have a great holiday in Dubai without spending that much.

These testimonies only confirm what was already stated. Dubai is a safe and exciting place to visit and work. But the city is far from perfect: it has serious problems.

The city grew exponentially in only 30 years. It was not planned to grow that fast, so there are very serious logistical problems: big highways separate entire neighbourhoods, and many streets are completely disconnected from each other on foot.

Dubai is seen as having very relaxed laws relative to neighbouring countries, and that is true for the most part. Women do not have to cover their hair, other religions can be practiced freely, and even alcohol is legal. But there are still harsh laws. You can’t drink in the street, you can’t show intimacy in public (like hugging, holding hands and kissing), and you can’t speak or report badly about the government, neither in public nor on social media. There is no freedom of speech. One shocking case was that of a British PhD student in Dubai to study, who was arrested on mere suspicion of spying, tried and sentenced to life imprisonment with no lawyer present. He was later released, but not before five months in solitary confinement.

And then there is the rule of law itself. Many laws are ignored when it becomes convenient: there are reports of tourists showing intimacy and drinking in public with no repercussions. Some labor laws are also ignored.


Living conditions of forced labor workers

And that leads to problems with human rights. Many less educated people come to Dubai to work. The more desperate are cheated out of their salaries when recruited for various jobs, mostly in construction. They are kept in conditions considered less than humane, forced to work without pay – no different from slavery. It is possible that those amazing skyscrapers were built by these people.


The Future

Dubai will certainly outlast oil, thanks to its diversification and its eccentric identity, attracting business and attention worldwide. It has serious problems, but they should be overcome with increasing influence from the west.

Meanwhile, increasingly bizarre construction projects are underway, like the Dubai Creek Harbour. This will be an urban complex full of luxury apartments, green parks and the Dubai Creek Tower. The latter will cost one billion US dollars and will be the tallest structure ever made by mankind, standing 1.3 kilometres high. Construction was expected to finish in 2021, but that will probably be postponed due to the Covid-19 pandemic. Nevertheless, when it does finish, it will keep Dubai in the spotlight it currently enjoys.


The Dubai Creek Tower (source: EMAAR Properties)

The Cloud Wars: AWS Vs Azure for the Control of Your Internet

Late last year, on October 25th, the United States Department of Defence announced that it would award a contract worth $10 billion – the Joint Enterprise Defence Infrastructure project, henceforth referred to as JEDI – to Microsoft’s cloud computing business, Microsoft Azure.

In the larger picture, the JEDI contract is but a small drop in the ocean of public contracts; according to the US government itself, the federal government spends roughly $500 billion on contracts every year. At face value, it was just another story of a tech-related government contract being awarded to the company with the most competitive bid.

Figure 2 – AWS Logo. Source: Amazon

But Amazon contested the decision soon after, claiming errors in the process and political interference by President Trump. Like Microsoft, Amazon Web Services (AWS) – a subsidiary that provides cloud computing services – was in the running for the JEDI contract, and was even considered the frontrunner. By February of this year, Microsoft’s work had been halted and the Pentagon was reconsidering the decision, as shown by court documents.

Though this story has taken an acrimonious turn, pitting Amazon against the Executive Branch of the United States with allegations of political interference, the competition between Microsoft and Amazon is not unlike that of any other two companies fighting for market dominance. The reason why President Trump allegedly interfered with the process is because cloud computing is a nascent, rapidly growing field that both companies – Microsoft and Amazon – have deemed crucial in strategic terms. So, he hit Amazon and its founder, with whom he has had public spats in the past, where it hurt.

Ultimately, this raises the question:

What is cloud computing, and why are the stakes so high?

Cloud computing is the delivery of computing services such as servers, databases, storage, software and analytics through the internet (the cloud = the internet).

Cloud computing services offer several benefits to clients:

  • No capital expenditures – You don’t have to buy your own physical assets, you rent them via the “cloud” instead. Like retail chains that started to lease and rent property instead of buying it.

  • Scale & Flexibility – The cost of the rent is proportional to the size of the business and traffic and it is very easy to increase your capacity. In other words, as your computing power needs increase with business and traffic, you are unconstrained by current equipment.

  • Speed & Performance – Your only limitation is your internet connection. Cloud services run on the latest high-tech hardware, so you are not limited by outdated hardware and you don’t have to constantly update your devices.

  • Security and Reliability – Cloud computing services come with automatic backups and disaster recovery, as your data is in many places instead of a single server. Cloud service providers also come with the latest network security methodologies, which would be too expensive for a single business to implement on its own.

These services can be split into three types:

  1. Infrastructure as a service (IaaS) – the most basic version where you rent IT infrastructure such as servers for storage.

  2. Platform as a service (PaaS) – services that supply an on-demand environment for developing and testing software applications, such as mobile apps.

  3. Software as a service (SaaS) – delivery of software over the internet, on demand, often requiring only a terminal (no need for installation).

Cloud computing has provided a unique benefit to society in general – it makes it much easier to launch a tech start-up, as the start-up costs are almost non-existent when compared to the 90s. Cloud computing was a major enabler in the tech boom of the last decade.

Cloud-computing-000088211771_Medium-1024x682.jpg

Amazon was the first of the two to launch a cloud computing business, with AWS debuting in 2006. The story of AWS is interesting: before 2006, what would become AWS was a private cloud system within Amazon, supporting data collection and server management across the entire company. Amazon used and developed it internally for four years before offering AWS to the market. AWS was not planned; it was born of Amazon’s culture of innovation and experimentation.

Microsoft would follow suit in 2010, at the time launching “Windows Azure”.

Today, these two companies hold around 70% of a market valued at $227 billion in 2019, with Amazon the market leader at 40%.

And seamlessly, without us ever noticing, they power many of the platforms and companies that we use in our day-to-day.


Figure 2 – Netflix Logo. Source: Wikipedia


Consider Netflix, one of AWS’ high-profile clients. In 2009, it opted to migrate from its physical data centre to the cloud, moving thousands of terabytes of data onto Amazon-owned servers – data that has to be accessed tens of thousands of times per second by its 160+ million subscribers in all parts of the world.

Figure 3 – HP Logo. Source: Wikipedia


Alternatively, consider HP, one of Microsoft Azure’s high-profile clients. According to HP, it handles more than 600 million technical support contacts each year. The accumulated data points from these contacts were used to build an AI assistant via one of the solutions provided by Microsoft Azure.

Each of these examples illustrates a different service provided under the same umbrella term of “cloud computing”. Both platforms ultimately aim to help businesses develop and meet their organizational goals: they offer many tools and frameworks to build an “on your own terms” platform.


But how would a manager choose between them? If a company already works with a platform, why consider getting a service from a competitor?

Part of the answer lies in the many different services they supply, as well as the current data infrastructure of the company in question. Microsoft products are ubiquitous in companies around the world, but AWS seems to be more advanced in the cloud computing game as of right now (hence its frontrunner status in the JEDI contract). Regardless, no two cloud suppliers offer the same service in the exact same way, or with the same value proposition. For a manager, choosing between AWS and Azure might be a balancing act, and they might end up using both.

Ultimately, the Cloud pervades our day-to-day lives. From the political contrivances seen in the JEDI contract to a paradigm shift that directly affects decision-makers, both high and low in a company, articles like this one are but a sign of a braver new world to come.


Sources: US Department of Defence, US Datalab, NY Times, Gartner, Microsoft Azure, AWS, Wikipedia


João Vaz Guedes

Maria Mendes

Daniel André

Moore’s Law

In 1975, Gordon Moore was asked to write for a special edition of Electronics magazine about the future of silicon components over the next decade. Integrated circuits (we explain what these are below), known today as computer chips, had been invented in the late 1950s and had just begun being tested. Analysing the achievements made by his company, Intel, and others in the previous years, Moore observed that the number of transistors per microprocessor, as well as other electronic components, had doubled each year, and he projected that this rate of growth would continue into the future. This prediction became known as Moore’s Law, which states that the number of transistors in an integrated circuit doubles about every 2 years.

1.png

Moore’s main goal was to convey the idea that integrated circuits would lower the costs of technology: the larger the number of components, the lower the cost per component, therefore decreasing the price of computers and other electronic devices. According to the law, which became an industry goal and consequently a self-fulfilling prophecy, processor speeds would increase exponentially, because transistors would scale down so that more units could be packed onto a computer chip. The more transistors, the more easily and quickly electrons could move between them, increasing a computer’s efficiency and speed. So while the number of transistors on an integrated circuit has doubled every 24 months, computing power has doubled about every 18 months.
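To see how quickly these two doubling rules compound, a few lines of Python are enough. The starting figure below is merely illustrative (it is on the order of Intel’s first microprocessor, the 4004), not a data series:

```python
# Rough sketch of Moore's Law: transistor counts double roughly every
# 24 months, while computing power doubles roughly every 18 months.
# The starting figure is illustrative, not historical data.

def doublings(years, months_per_doubling):
    """Number of doublings that fit in the given span of years."""
    return (years * 12) / months_per_doubling

start_transistors = 2_300   # order of magnitude of the Intel 4004 (1971)
years = 40

transistors = start_transistors * 2 ** doublings(years, 24)
speedup = 2 ** doublings(years, 18)

print(f"Transistors after {years} years: {transistors:,.0f}")
print(f"Relative computing power: ~{speedup:,.0f}x")
```

Forty years of 24-month doublings is 20 doublings, taking a 2,300-transistor chip to the billions of transistors found in a modern phone, which is exactly the exponential compounding Moore described.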

Moore’s Law was regarded as a «rule of thumb» rather than a law: the technological industry intended to keep up with its growth rate and so set a road map based on the continuous innovation of transistors and chips in line with Moore’s Law. Because of the increasing demand for devices, manufacturers and producers strive to innovate and create next-generation chips, lest they become obsolete in the face of innovating competition.

2.png

Many devices that we use nowadays owe their existence to the evolution of integrated circuits; Moore himself stated that «Integrated circuits will lead to such wonders as […] personal communications equipment», currently known as mobile phones. Our laptops and electronic wristwatches, medical imaging and digital processing technologies were made possible because of Moore’s Law. In fact, it has been argued that Moore’s Law is one of the main drivers of the economic growth seen in the last 50 years, as it has led to tremendous gains in productivity.

However, over the past decade the pace of innovation has slowed down, with Moore himself predicting the end of his law by 2025. To explain why, here is a small primer on transistors:

A transistor, while a simple invention in concept, is one of the foundations of our modern technological society. Without it, you would not be reading this article. Your phone probably has more than 1 billion transistors, each microscopic and manufactured with incredible precision on a thin wafer of silicon. A transistor is like a switch, or gate, that either blocks or lets a small current of electrons pass through it. It is the billions of combinations of these gates opening and closing that permits a computer to perform all its tasks, from basic arithmetic calculations to displaying this text on your screen.

Transistors today are around 10 to 20 nanometres across – only around 100 times larger than an oxygen molecule. At sizes this small, engineers have run into a problem they cannot economically fix: quantum tunnelling. The laws of quantum mechanics take hold and electrons start to obey different rules, sometimes simply crossing a closed transistor and corrupting the data in the process. This problem has no economical solution at present, and so we are approaching the death of Moore’s Law.

The end of Moore’s Law has long been considered an inevitability. Gordon Moore himself set a conservative timeframe in his original 1965 paper, estimating that his observed rule would remain “constant for at least ten years”. And its death has been proclaimed many times since. Only the ingenuity of the industry has kept it alive for so long.

But this endgame does not signify an end for the advancement of computational power. Several alternative possibilities lie on the horizon, and these range from the simple and intuitive to the fantastical possibilities brought by the advent of quantum computing.

On the intuitive side, we could focus on creating specialized chips for certain tasks. The processor that powers your device (smartphone or computer) is a “beefy” unit, capable of undertaking a wide variety of tasks, albeit at the cost of efficiency. For very specific tasks, specialized chips may be created. Other solutions may lie anywhere from finding other materials to improving software to take better advantage of current architectures – did you know that Excel does not take advantage of the extra cores in your computer to handle those heavier, 20,000-row tasks? Although these types of improvements can still take us a long way – at least a forty-fold increase in computing power, or around 5 years of innovation – they can only take us so far. When it comes to transitioning to other materials, graphene has been touted as a possible replacement, but the research is still in its early stages.

On the more futuristic side, quantum computing could usher in a new age of technology to rival that of the integrated circuit. Last year, Google published a paper in Nature claiming to have solved, in under 4 minutes, a task that would have taken a modern supercomputer – the processing equivalent of “around 100,000 desktop computers” – over 10,000 years. There are also studies looking into conceptually abstract possibilities like using DNA to perform arithmetic and logic operations, as well as storage.

Although we will probably never have personal quantum or “DNA” computers, because their upkeep costs and upfront investment are prohibitively high, a world in which a handful of companies offer processing solutions to anyone via cloud computing, much like we see today, sounds plausible.

What impact could the end of Moore’s Law have on the economy? How can we bring about the age of AI if we do not have the hardware to support software innovation? We can only wait and see what happens.

Sources: Intel, Washington Post, Nature, 311 Institute, MIT Technological Review, NY Times, Wikipedia

The Renewable Energy Sources Act: From words to actions

Karl Marx once said that, until now, philosophers have limited themselves to interpreting the world; the point, however, is to change it. And change is necessary – human beings reached their current state of evolution due to their capacity to adapt and overcome.

Nowadays, people (or at least the vast majority of them) are concerned with climate change and all its associated consequences, such as the melting of ice caps, rising sea levels, the extinction of species, or the increasing frequency of natural disasters. We’ve made estimates and we’ve searched for solutions – once again, we looked to innovation for a way out. Now, it’s our job to act rather than to react.

The fight against climate change must include a shift towards renewable energies. The possibility of substituting fossil fuels with energy harnessed from wind, sun, earth and water creates lots of expectations but also lots of opportunities. The problem remains, however, of how to make these energies accessible, cheap and efficient – and this is why the German example is worth highlighting.

In 2000, Germany launched the Renewable Energy Sources Act, or EEG (Erneuerbare-Energien-Gesetz), a set of laws built around a feed-in tariff (a transfer made to households and businesses that use renewable energies to generate their own electricity) in order to ‘enable the energy supply to develop in a sustainable manner in particular in the interest of mitigating climate change and protecting the environment, to reduce the costs to the economy and not least by including long-term external effects, to conserve fossil energy resources and to promote the further development of technologies to generate electricity from renewable energy sources’ (Renewable Energy Sources Act, 2014). This scheme replaced the Electricity Feed-in Act (1991), the first green electricity feed-in tariff in the world, which was contested before the European Court of Justice as an illegal threat to competition (article 87 EC Treaty).

Consequently, in 1999, Hermann Scheer and Hans-Josef Fell developed the EEG legislation. This law imposed on grid operators the obligation to prioritise the purchase of electricity generated from hydropower, wind, solar radiation or geothermal energy over nuclear power, gas or coal. Besides this, grid operators had to compensate producers based on the technology used and the quantity of energy purchased, giving producers a feed-in tariff with a duration of 20 years over which they could guarantee the return on their investment. The trick, in order to avoid the same scrutiny by the European Court of Justice, was that, in contrast with the 1991 Electricity Feed-in Act, those payments were not considered public subsidies: they derived not from taxation but from a surcharge on consumers, who shared the expenses – so there was no charge on Germany’s public finances. The EEG also foresaw a regular decrease in the feed-in tariffs (known as ‘degression’) as technologies became more cost-efficient.
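The mechanics of a feed-in tariff with degression can be sketched in a few lines of Python. The rates and degression percentage below are purely illustrative assumptions, not the actual EEG values:

```python
# Illustrative sketch of a feed-in tariff with annual degression.
# A producer connecting to the grid in a given year locks in that
# year's rate for the 20-year support period; the entry rate itself
# falls each year as the technology gets cheaper. All numbers are
# hypothetical, chosen only to show the mechanism.

BASE_RATE = 0.50      # EUR per kWh for installations connected in year 0
DEGRESSION = 0.05     # entry rate falls 5% per later connection year
SUPPORT_YEARS = 20

def entry_rate(connection_year):
    """Guaranteed rate for a plant connected in the given year."""
    return BASE_RATE * (1 - DEGRESSION) ** connection_year

def total_support(connection_year, annual_kwh):
    """Total payments over the 20-year guarantee period."""
    return entry_rate(connection_year) * annual_kwh * SUPPORT_YEARS

# A plant connected five years later locks in a noticeably lower rate:
print(entry_rate(0), entry_rate(5))
```

The point of the degression term is visible immediately: producers who invest early secure the highest guaranteed rate, while later entrants, benefiting from cheaper technology, receive proportionally less.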

The EEG legislation has been reviewed over the years and suffered some changes in 2004, 2009, 2012, 2014 and 2017.


IMPACT

Since the EEG legislation came into force in 2000, the cost of photovoltaic systems has decreased by 50% in 5 years. As for the coverage of renewable energy, Germany’s initial target was for 12.5% of its electricity production to derive from renewable sources by 2020. By 2007 it already covered 14.7%; in 2014 it covered 27.4%, and in 2018 this value stood at 37.8%. Currently, the target for 2050 is at least 80%. Data from the period between 1990 and 2015 show that wind was the renewable source that contributed most towards Germany’s green transition in terms of gross generation of electricity.


Gross generation of electricity by source in Germany

Besides this, thousands of long-lasting jobs have been created by these clean sources of energy – wind was the source that employed the most people, more than doubling the number of jobs created between 2004 and 2013, followed by biomass and solar. Usually, an abrupt transition to renewable energy raises fears of job losses, which has a strong impact on public opinion. However, the data show that the transition to renewable energies has huge potential to create more jobs than it destroys.


Publicly funded research administration

Nowadays, in Germany, renewable energy can compete with fossil fuels, even when taking into account the cost of transporting such energy and the costs associated with building the infrastructure required for its production. In the case of renewable energy, the cost per kilowatt-hour depends on many natural factors, such as the amount of wind and hours of sunlight, but they are all, on average, below $0.11/kWh, with onshore wind and geothermal being the cheapest (both $0.03/kWh), whereas biomass and offshore wind are the most expensive ($0.09/kWh and $0.11/kWh, respectively). Coal represents a cost of about $0.13/kWh and nuclear energy around $0.09/kWh.
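Plugging the per-kWh figures quoted above into a short script makes the ranking explicit (the values are simply the ones cited in the text):

```python
# Per-kWh costs quoted in the text (USD/kWh).
costs = {
    "onshore wind": 0.03,
    "geothermal": 0.03,
    "biomass": 0.09,
    "offshore wind": 0.11,
    "nuclear": 0.09,
    "coal": 0.13,
}

renewables = {"onshore wind", "geothermal", "biomass", "offshore wind"}

cheapest = min(costs, key=costs.get)
print(f"Cheapest source: {cheapest} at ${costs[cheapest]:.2f}/kWh")

# Every renewable in the list undercuts coal:
for source in renewables:
    assert costs[source] < costs["coal"]
```

Even the most expensive renewable in the list, offshore wind, comes in below coal, which is the core of the competitiveness claim.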

This bet on renewable energy turned out to be very profitable for Germany. The cheaper production of energy allowed the country to be much more competitive in terms of electricity prices, until recently enforcing its position as a net exporter of energy. France, Austria and the Netherlands are the most common destinations of German energy.


ACCEPTANCE

The EEG legislation could be considered a social and economic success – it has increased the use of renewable energy while raising awareness about pollution, created thousands of jobs, and allowed Germany to become profitable in this sector. This success is further demonstrated by the attempt of other countries (as, for example, Brazil) to copy the feed-in tariff in order to accelerate their transition to renewable energies.

On the 8th of May 2016, there was a point during the day at which renewable sources were supplying 87% of the energy being consumed by the entire country. Production was so high that producers were obliged to offer electricity to consumers for free in order to drain the surplus.

However, the EEG is far from perfect and has been criticized many times to this day. The biggest grievance against the law was the high level of feed-in tariff support. This position gained the support of the European Commission in 2014 (even though, to this day, the EC maintains that ‘well-adapted feed-in tariff regimes are generally the most efficient and effective support schemes for promoting renewable electricity’) and led to some modifications of the legislation. In 2014, what is known as EEG 2.0 was adopted, under which compensation rates ceased to be defined by the government and came to be set through auctions.

This auction system was criticized too. In 2012, estimates pointed out that almost half of the renewable energy capacity in Germany was owned by citizens through energy cooperatives and private installations. According to the critics, the auction system would harm these kinds of producers, threatening all the development allowed by the original EEG legislation.

Today, Germany wants to obtain between 80% and 100% of the electricity consumed within its territory from renewable sources by the end of 2050. This path won’t be easy in a country where big coal plants are still the main source of energy, even after all these transition efforts. In July 2019, Germany became, for the first time in almost two decades, a net importer of energy.

Once again, capacity to adapt and overcome is required.


Harmony OS

“Hongmeng” is a name that can be traced back to Chinese mythology, representing the primordial chaos of the world before creation. That is the idea behind the new open-source, microkernel-based distributed operating system (OS) being developed by Huawei, known in China as Hongmeng OS and in Europe as Harmony OS.

Harmony OS has been in development since 2012. However, in May of 2019, Huawei stepped up its efforts on the operating system in response to the export restrictions imposed by the United States. It was born from the conflict between the US and Huawei, which began with strong suspicions by American intelligence agencies that Huawei was linked with the Chinese military and able to supply the Chinese government with access to data through a backdoor. For the US, it was a question of national security. Consequently, the US started imposing export restrictions on Huawei. Companies such as Google and Microsoft were prohibited from doing business with the Chinese company. This created a problem for Huawei, given that the operating system used by its devices was Android, owned by Google. In this new context arose the necessity to innovate and create a solution to the restrictions on commerce. Necessity is the mother of invention, and so Harmony OS was born, announced to the world without warning on the 9th of August 2019 as a plan B to the use of Android.

Huawei’s senior vice president, Catherine Chen, said that Harmony OS was designed for any Internet of Things (IoT) hardware. This means devices such as smartphones, televisions, smart speakers, cars, computers and other connected devices will share the same OS. The creation of an ecosystem that removes barriers will make communication between devices much easier – a convenience that may be pivotal in attracting consumers. Google doesn’t have such an advantage, and probably never will. It will also be an overall smoother OS, making it more appealing to app developers.


iot.png

Huawei’s Harmony was presented as being transparent, smooth, safe and unified, while Google’s Android was described as “unstable” and “fragmented”.

Although we can only trust Huawei’s words up to a point, it is important not to forget that this is a more contemporary OS than even the most modern versions of the now ageing Android.

Ready to take the market?

We have already established that this new operating system is, to be as euphemistic as one can be, bearable. But is that enough to compete with the leading OSs in the market? Android absolutely dominates the market, being the OS of current Huawei smartphones, and iOS has Apple – a company with a powerful brand and the merit of having introduced smartphones to the world – as its backbone. Various OSs supported by big companies have failed in the past. Windows Phone was very convenient across devices and had the fortune of Bill Gates’ support, but lacked practicality, becoming unattractive to consumers, which in turn left app developers uninterested in developing Microsoft-supported apps. Samsung had Tizen, which is still one of the smoothest OSs ever developed. However, Google answered this threat by limiting Tizen users’ access to the Google app store, making it rather difficult to download apps. Tizen is now restricted to Samsung’s smartwatches and smart TVs. In both these cases, Microsoft and Samsung, two gigantic players in the software industry, were strangled by this Google-dominated market.

Huawei knows this, which may be the main reason it will not use Harmony OS for smartphones, at least not immediately. Huawei is playing it safe, continuing to use Android for as long as it can. However, if US sanctions continue, Huawei will have no choice but to change to Harmony.

huawaei.png

Will it work?

It will be quite the challenge for Huawei, but it certainly can make it happen.

The transition will be more difficult in Western markets. It is true that Huawei is everywhere nowadays, but we are all very used to Android and see no good reason to change. Being an overall better OS is not good enough. Most Western apps will take time to be converted, even though they will run more smoothly than on Android when they are. Even with only the past sanctions on Huawei, there was a considerable drop in sales in Europe, but the company remained the second most bought brand.

In China, however, it’s a whole new world for Huawei. With all the tension between China and the US, and after seeing how such conflicts affect the smartphone industry, Huawei is considered a patriotic symbol against America. In a way, the US government helped create the perfect environment for the company’s success at home. In China, not only Google but other American apps like Facebook and Uber were banned. Years have passed and there are now homegrown alternatives. The dependence on Android has weakened, and Huawei might finish it for good. There are over 200 million Huawei users in China. If Huawei decided to launch Harmony OS for smartphones, and all these users converted, it would finally be free of any American dependence. App developers would have no problem converting their apps to be Harmony-supported, as it is overall better and has a minimum of 200 million users as potential customers. And from there, as more developers work to be supported by Harmony, it will become more appealing to the Western world, spreading to the European and American markets as Huawei keeps up its success as a brand.

For now, Harmony OS isn’t ready to take on the world smartphone market, but you can already find it on Huawei’s new smart TVs. If these times of uncertainty continue, who knows how far this operating system could go. Huawei has already said that it will not wait much longer – it will bring Harmony to smartphones between May and August of 2020 if the sanctions do not end. Android is a dinosaur compared to this brand-new OS, and it may very well become obsolete. Conquering China seems easily within reach for Harmony OS, and thereafter, there will be a world for the taking.


Sources:

  • CNN Business;

  • Technode;

  • Publico;

  • Business Insider.

Article Written By:


João Mário Caetano

João Rodrigues

The Impact of 5G

Over the years, new inventions have impacted the world economy as well as the business environment, contributing to so-called creative destruction. In this day and age, a new technology is about to revolutionize the way we interact with the digital world: the new generation of cellular network, also known as 5G. While it is true that this disruption has raised controversies regarding health issues, privacy and external political interference, the expected economic returns from the availability of mobile high-speed, low-latency connections are highly relevant, and should therefore be considered and assessed.


One of the most important and relevant impacts of this new technology is the increase in productivity derived from cost reductions or efficiency improvements, which are predicted to occur in virtually all economic sectors. For example, in industries that are directly linked with vehicles or machines of any kind that require human intervention, the possibility for remote control and monitoring of these assets allows for operational tasks to be performed by someone who is far away, thus reducing time spent and transportation costs.

Even in public administration, benefits are expected, with high-speed connections allowing for faster responses by emergency services. Another case is the healthcare industry, in which the possibility of remote procedures, more effective monitoring of medical equipment and the collection of data from wearable devices will reduce costs and improve the quality of healthcare provided. Accordingly, this contributes to a healthier and more productive workforce, thus generating multiplier effects on the rest of the economy.

Overall, this expected increase in productivity is extremely important, since it will enable higher economic growth in the long run.

Another positive impact foreseen from 5G is the ability to unlock the full potential of the Internet of Things. With the explosive growth in the number of connected devices and gadgets, existing networks are struggling to keep pace. Therefore, 5G’s high capacity is expected to allow seamless connections, with this product connectivity allowing firms to create and launch completely new business models, thus driving entrepreneurship and growth.

Furthermore, in remote regions where the installation of wired infrastructure is not economically viable due to low population density, 5G emerges as an alternative, offering these areas more reliable and higher-quality internet connections than they currently have. This allows for the settling of businesses and job creation, with all the benefits associated with them. However, urban areas are predicted to be the first to receive the technology, meaning that this may only occur later on.


Picture11.png

At such an early stage, forecasting the economic impacts of this novelty is a rather complex task. All in all, however, 5G is expected to boost economic growth and prosperity for the countries that pursue the disruption, leaving behind, in relative terms, the nations that forego the opportunity. It may therefore become a major source of inequality between the richer countries, which are able to implement the technology, and the poorer nations that still have to devote a significant share of their scarce resources to satisfying more crucial and basic needs – thus contributing even further to the digital divide.


Sources

  • Cnet

  • Medium

  • McKinsey & Company

Fantastic Unicorns and Where to Find Them

Unicorns: The Origin

More than a mythological creature, a unicorn has gained a new application in this century – to designate privately-held startups with a value of over $1 billion. The majority of these companies are in the vanguard of their industry, building the path for a generation of new technologies, while being considered as a bellwether of future economic development. This unique expression was created by Aileen Lee, founder of venture capital fund CowboyVC, in her article “Welcome to the Unicorn Club”, where she proceeded to study software startups founded between 2003-2010, coming to the conclusion that only a mere 0.07% of these privately-held companies would ever reach the billion-dollar valuation, realizing that finding a firm with such characteristics was as difficult as finding a mythical unicorn.


Looking back at the first decade of this century, which was analyzed in the article that created the “unicorn” concept, billion-dollar startups were generated at a rate of four per year, which amounted to 39 unicorns in total. Furthermore, San Francisco was identified as the headquarters to the vast majority of unicorns (15), followed by New York (3) and, tied in third place, Seattle and Austin (2). Lastly, the most fertile industries for unicorns were e-commerce, consumer audience, software-as-a-service (SaaS) and enterprise software.


In only nine years, this landscape has changed immensely, and China is largely responsible. In 2018, a study conducted by Hurun revealed that Chinese unicorns were being created at a rate of one every 3.8 days, with the country home to 41.7% of the world’s unicorns – 206 in total – shortly followed by the USA, with 203. The top three ecosystems were Beijing (82), San Francisco (55) and Shanghai (47), and the aggregate value of the world’s unicorns sums to a staggering $1.7 trillion. Lastly, the industries containing the largest percentage of unicorns are e-commerce, fintech, cloud and AI.

Aileen Lee mentioned disruptive technology as a conductor for the creation of unicorns. This inference still holds, as we evolve into a technology-dependent society that requires the expansion and automation of fields ranging from health to blockchain, creating a need for constant technological breakthroughs and, consequently, fertile soil for innovative startups to blossom. Moreover, capital injected into these initiatives has grown exponentially over the years through venture capital, which enables startups to raise private equity and ensure long-term growth, while allowing investors to potentially earn disproportionately high returns.



Why China?

As seen, the unicorn phenomenon was initially most prominent in the US, but China has recently overtaken it as the leading country. Fast economic growth and the rapid modernization of the economy play a major role in the development of newly founded companies.

Privately held companies valued at $1 billion or more – 2012

Privately held companies valued at $1 billion or more – 2018

Source: Wall Street Journal


This massive growth was only made possible by the increased pace of technological innovation, as mastering technology became the biggest source of competitive advantage for startups. In the particular case of China, around 40% of unicorns are technology-driven, and most of them consider big data the biggest source of innovation and differentiation. Big data is especially relevant because it allows companies to track consumer habits and study behaviors, thus improving their overall marketing strategies. This contributes to higher competitiveness and lower operating costs.

China's capital market performance is also essential for the promotion of unicorns. The country's capital usually arises from two forces: government policy and extremely diversified financing channels. China, in particular, is living a golden era of opportunity for startups looking for investors, as the CSRC (China Securities Regulatory Commission) is currently easing investment regulations, attracting more unicorns based overseas to pursue their interests in the Chinese market. Apart from the Chinese government, the main global unicorn investors include the likes of Alibaba, Tencent and the venture capital giant Sequoia, which has invested in 92 unicorns.


Some argue that the valuations of Chinese unicorns may be inflated compared to those in the US, but investment growth in these firms has largely surpassed that registered in American startups, which can be explained by the differences between the two countries' capital markets. The Chinese government injected a big dose of patriotism into its market, directing resources specifically to make its firms more competitive, while the US opted for a more passive approach to access to credit. Furthermore, the American market is more heavily regulated than the Chinese one, which makes investing in China comparatively easier.


The following companies have been dominating not only the Chinese market but also the global one, earning the title of China's (and the world's) most valuable unicorns, with a combined value close to $280 billion (around 16% of the value of all unicorns combined):

Ant Financial: This fintech company, founded just five years ago, is currently the highest-valued unicorn in the world, at $150 billion. Spun out of the Alibaba Group, the startup counts among its subsidiaries the world's largest online payment platform (Alipay) and the largest money-market fund (Yu'e Bao). The company views itself as "dedicated to bringing to the world equal opportunities" while answering society's financial needs through the use and development of innovative technologies.

ByteDance: Created in 2012 by Zhang Yiming, the internet technology enterprise gained prominence by developing AI platforms that inform and enhance interactions between people all around the globe. Combined, ByteDance's apps and platforms had about 1.5 billion monthly active users (more than Instagram) as of July. Providing trustworthy information was what Zhang's team had in mind when they created their most successful project, "Toutiao", which not only works as a news platform but, thanks to its complex algorithms, also provides a feed tailored to each individual user.

Didi Chuxing: Rounding out the top three, the ride-sharing company serves approximately 550 million users in over 400 cities. The seven-year-old enterprise triumphed from the moment it launched, capturing 55% of the smartphone-based taxi-hailing market just one year after its creation. Founder Cheng Wei did not settle for the Chinese market, investing in Uber and Lyft while expanding to Japan, Brazil and Australia. Didi has also been included twice in Fortune's "Change the World" list.


The downturn

Despite the rapid growth in its number of unicorns, China's ability to incubate billion-dollar startups plummeted in the first half of this year. In the first six months of 2019, only 36 unicorns were fostered in China, a 30% drop compared to the same period in 2018.

The development of the Chinese tech industry has put the country in the crossfire of the trade war, with the Trump administration accusing China of intellectual property theft. In May 2019, the US blocked Huawei Technologies Co. from importing American materials and is considering doing the same to a wide variety of startups. The trade war has affected valuable startups such as smartphone maker Xiaomi Corp. and delivery company Meituan Dianping, which saw their stocks plunge after going public, reinforcing investors' suspicion that private-market valuations are disproportionate to real value.

The deceleration of economic growth has created uncertainty in the funding market, drying up venture capital. The second quarter of this year was marked by a 77% tumble in venture investment in China, down to $9.4 billion from the $41.3 billion of the second quarter of 2018, a peak for Chinese venture deals. Meanwhile, venture deals in the US rose about 15% and investment in Europe climbed 32%, according to Preqin.
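As a quick sanity check, the percentage drops reported above can be recomputed from the figures in the text (a sketch; the numbers are those cited from Hurun and Preqin, nothing more):

```python
# Sanity-checking the year-on-year drops cited in the text.

# Unicorn creation: 36 unicorns in H1 2019, reported as a ~30% drop vs. H1 2018.
h1_unicorns_2019 = 36
reported_drop = 0.30
implied_h1_2018 = h1_unicorns_2019 / (1 - reported_drop)  # implied H1 2018 count

# Venture funding: $9.4B in Q2 2019 vs. the $41.3B peak of Q2 2018 (Preqin).
q2_funding_2019 = 9.4   # $ billions
q2_funding_2018 = 41.3  # $ billions
funding_drop = 1 - q2_funding_2019 / q2_funding_2018

print(f"Implied H1 2018 unicorn count: ~{implied_h1_2018:.0f}")  # ~51
print(f"Q2 funding drop: {funding_drop:.1%}")  # 77.2%, matching the reported ~77%
```

Both reported percentages are internally consistent with the underlying figures.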

Source: Preqin (via Bloomberg)

The truth is that China had entered a tech bubble, with the median tech enterprise valued at about 31 times its EBITDA. Years of steep growth in tech investments resulted in enormous profits. Now, concerns over pre-IPO valuations recall those of the dotcom bubble: in 2017 and 2018, around 62% of venture-backed Chinese internet and software companies that filed for a public offering lost more than 30% of their value within the first 12 months after listing.

However, valuations have not yet declined in China. The country's startups have been resistant to down rounds (when a private company sells additional shares at a lower price per share than in the previous financing round). Meanwhile, venture firms are pivoting funding toward business models that require less capital, such as enterprise software.
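To make the down-round definition concrete, here is a minimal numeric sketch (all prices and share counts are hypothetical, not drawn from any company in this article):

```python
# Hypothetical down round: the new financing round is priced
# below the per-share price of the previous one.
series_a_price = 10.0          # $ per share, previous round
series_b_price = 7.0           # $ per share, new round
shares_outstanding = 50_000_000

is_down_round = series_b_price < series_a_price

# The implied valuation falls with the share price:
prior_valuation = series_a_price * shares_outstanding  # $500M
new_valuation = series_b_price * shares_outstanding    # $350M

print(is_down_round)  # True
print(f"Valuation: ${prior_valuation/1e6:.0f}M -> ${new_valuation/1e6:.0f}M")
```

The drop in implied valuation is why existing investors and founders resist down rounds: it dilutes earlier shareholders and signals weakness to the market.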

This may simply be a time when Chinese venture capitalists are being more cautious, given the volatile negotiations between Donald Trump and Xi Jinping, whose consequences for the tech industry remain unpredictable. All in all, whether the current downturn is just a setback or the burst of a financing bubble depends on how venture investors, entrepreneurs and market regulators behave through this economic tension.


Sources:

  • Bloomberg

  • Preqin

  • Financial Times

  • Statista

  • Hurun

  • PwC

  • “Welcome To The Unicorn Club: Learning From Billion-Dollar Startups” by Aileen Lee

Article Written By:

Diogo Alves and Lourenço Paramés