Smart cities: are they a smart idea?

Reading time: 7 minutes

Article written in partnership with Nova Tech Club.

From the European Union’s point of view, a smart city goes beyond the use of digital technologies for better resource use and lower emissions. It means smarter urban transport networks, upgraded water supply and waste disposal facilities, and more efficient ways to light and heat buildings. It also means a more interactive and responsive city administration, safer public spaces, and striving to meet the needs of an aging population.

SDG 11 of the UN’s 2030 Agenda looks at these solutions as ways to achieve more inclusive, safe, resilient, and sustainable human settlements. Its 2022 report highlights, for example, that 99% of the world’s urban population breathes polluted air, and that municipal solid waste has collection and management problems that need to be tackled immediately (only 82% of this waste is collected and only 55% is managed in controlled facilities). In line with this, smart cities emerge as a possible approach to dealing with these issues.

In short, smart cities are designed to achieve a more sustainable organization of society. The concept incites discussion among urban planners, city councils and even technology giants on how to enhance the population’s lives. Nonetheless, opposing views bring forth a sense of distrust about how smart an actual implementation of the concept would really be.

Portugal’s Smart Cities initiatives

Unbeknownst to many, Portugal has a vast number of initiatives aimed at creating or developing smart cities by easing collaboration between municipalities.
SMART PORTUGAL has been promoting the ‘smartification’ of Portuguese cities through various events, namely Smart Cities Tour and the “Cimeira dos Autarcas”, in an effort to increase national and international collaboration, but crucially, to let the public in on the innovations already in development. In collaboration with “Associação Nacional de Municípios Portugueses”, NOVA Cidade – Urban Analytics Lab, the organization behind SMART PORTUGAL, has implemented an annual activity plan to accelerate smart city innovation in the country, having also created simple and clear guidelines and standards, paving the way for Portugal to become a global leader in the field.

Portugal Smart City, under the SMART PORTUGAL program, seeks to bring cities and companies together, connecting innovators with implementers. Gatherings and fairs, such as the SMARTCITY Expo World Congress, allow businesses, academics, and legislators to come together and find partners for pioneering projects. Some initiatives have already broken ground and are producing palpable results. RENER Living Lab, or RPCI, formed in 2009, is a national smart city network that now counts more than 120 municipalities with certified smart projects, distinguishing their quality and workability and increasing their international projection.

SMARTCITY Expo World Congress

In more practical terms, a number of Portuguese cities have been recognized as being at the forefront of positive change. 2020 saw Lisbon crowned European Green Capital, following efforts to use residual waters to feed the city’s parks and an affordable public transport pass that lets citizens travel cheaply between the city and the 18 surrounding municipalities. Valongo was distinguished with the European Green Leaf award in 2022 for its efforts to increase the city’s energy efficiency and create urban farms. Guimarães has also been classified as one of the European Commission’s “100 Smart Cities” through its efforts to improve river shore quality with the construction of “Ecovias”, as well as its bet on a circular economy with the RRRCICLO programme, among others.

The European Approach to Smart Cities

On a European scale, the past few years have brought many examples of city-level implementations and EU initiatives. Take Copenhagen as a great urban design example: approximately 43% of all commutes there are made by bike. Vienna’s Citizens’ Solar Power Plant project must also be highlighted, since it was very successful in engaging citizens and energy companies to promote solar power. In Barcelona, the REC (Real Economy Currency) was introduced as a local social currency, allowing transactions within a community between the individuals, institutions and businesses that accept it. This project fosters small businesses that are struggling to survive in digital times and in big cities.

The European Union has been very avant-garde when it comes to respecting the historical roots of cities while advocating for their sustainable future. EU Missions are a new way to bring concrete solutions to some of our greatest challenges. They have ambitious goals and hope to deliver tangible results, with the Climate-Neutral and Smart Cities initiative being one of the most ambitious: it aims to deliver 100 climate-neutral and smart cities by 2030, ensuring that these cities act as experimentation and innovation hubs that enable all European cities to follow suit by 2050. Funding will cover a wide range of subjects, such as urban planning and design for climate-neutral cities, sustainable urban mobility, and positive, clean-energy districts, with many projects already being implemented. Furthermore, personal data protection is also pertinent to the EU’s concerns. The DECODE project, for instance, provides tools that put individuals in control of whether they choose to keep their personal information private or share it for the public good.

Smart Cities across the World

While smart city projects exist and thrive worldwide, some cities have gone above and beyond in creating a smart ecosystem for their residents, improving sustainability and efficiency. Masdar, in the UAE, is perhaps the most significant green project in the Arab world. This pilot project aimed to house 50,000 people in an urban landscape with no automobiles, making sole use of renewable energy. While the initiative has seen its fair share of success, it is important to point out that it remains significantly smaller than first planned, with some critics also arguing that the focus should be on greening existing cities rather than creating new ones.

Songdo, South Korea, followed a similar path to Masdar, though at a more significant scale. Its innovative urban waste collection system transports trash through a network of pipes, eliminating the need for trucks. With the concept of the Ubiquitous City – where citizens can access services anywhere, anytime, from home banking and teleconferencing to intelligent transport systems and remote sensing – becoming an area of intense focus, Songdo has also incorporated innovations in line with it. CCTV and sensors, for example, have become essential for the Korean city to control traffic flows and to respond and adapt quickly when accidents occur, informing locals of exact public transport timetables and occupancy.

Songdo control centre (CCTV and sensor control)

Nevertheless, this city concept has its flaws. Korean residents have complained that, perhaps due to its intent as an international city, Songdo doesn’t feel quite authentic. Foreigners, in contrast, get a sense of déjà vu, as the replicas of the boulevards of Paris, the pocket parks of Charleston, Central Park in New York City, or the canals of Venice make the city seem like a patchwork of other urban areas. Another big area of complaint has been what many thought would be the city’s main selling point: technology. Constant surveillance and monitoring have left many with a feeling of unease, with concerns over privacy and intellectual property growing. Similar smart projects have been halted entirely after encountering significant pushback from citizens unwilling to share so much information. Alphabet’s Sidewalk Labs Toronto project, with “(…) mass timber housing, heated and illuminated sidewalks, public Wi-Fi, and, of course, a host of cameras and other sensors to monitor traffic and street life (…)”, was one such project, facing heavy criticism from the get-go.

Conclusion

Smart cities are now more popular than ever. Aligned with the UN’s SDG 11, the concept has gained momentum even in small countries such as Portugal, with EU legislation adapting to ease the rise of smarter and more sustainable practices. Worldwide examples, from the Middle East to South East Asia, offer a glimpse of more radical initiatives, along with their benefits and shortcomings. While services may improve, privacy is an ever-growing concern, and if legislators, investors, and urban planners want to go ahead with these new forms of design and construction, safeguarding private information needs to be at the top of everyone’s mind.


Sources: Energy Cities Hubs, DECODE project, European Commission, The Global Goals, NOVA Cidade Urban Analytics Lab, Forum das Cidades, Jornal de Negócios, The Verge, Bloomberg

Manuel Rocha

Manuel Lourenço

(guest writer NTC)

Telecommunications’ Forgotten Link

Deep under the ocean lies a wide-reaching entity spanning over a million kilometers, with the power to influence and shape our very own society. This mysterious object is known to attract sharks’ voracious appetites, yet it is not an animal of any kind. It’s not Cthulhu[1]; it’s fiber optic submarine cables. And believe it or not, these structures are the foundations of the telecommunication systems we are so reliant on. But why, and by whom, are cables placed at the bottom of the ocean?

For this month’s article, the teams behind Development Economics and Technology, Health and Environment have challenged each other to bring their own spin on cables running through the ocean’s floor.


From the telegraph to the internet

The concept behind wired communications isn’t exactly new. Curiously enough, neither is the idea of running them along the ocean floor to connect regions far apart. In fact, the first transatlantic cable was installed all the way back in the 1850s. This venture – an attempt to shorten communications between the UK and its American allies from a weeks-long process to a single day – took the form of a telegraph line. Composed of just seven innocuous copper wires, it lasted all of three weeks before it broke. Nevertheless, it laid the groundwork for the communications architecture of the future.

Fast-forward to today and there is a 1.2-million-kilometer-long fiber-optic submarine cable infrastructure connecting the world together. Whereas the first cables were flimsy and incapable of sustaining even a slightly higher voltage, today’s optical fibers are protected by several layers of plastic and metal, shielding them from the hazards of the environment (at least the non-shark-related ones). They weigh over 1.4 tonnes per kilometer and come to about 10 cm in diameter.

It’s fair to say that a lot of resources go into these cables, and one cannot help but wonder how they are laid down or even how they are monetized.



The answer to the second question is fairly intuitive: the telecoms footing the bill split the bandwidth – the rate at which data is transferred – between themselves. With this allocated bandwidth, they provide communications services (such as loading up this webpage) to their respective markets.

As for laying down the cables, it is mostly a matter of preparation and hard work. First, the installers must survey the ocean-floor path they wish to install the cables on, something done through different scanning methods, which is both arduous and expensive. After choosing the desired path, a special type of cable-laying ship must be employed to bury the cables beneath the ocean floor, in order to prevent breakage.

Be it the need for each of these companies to operate several such vessels, or the fact that the cables are expensive to make in their own right, it’s fair to call this a capital-intensive industry. New players are inhibited by steep entry barriers, even though these cables have a relatively short lifespan – each should last about 25 years – and don’t seem to keep up with the demands of our increasingly tech-hungry societies. Competition is nevertheless fierce for a relatively limited number of contracts.

But who are these companies: are they privately or publicly owned? Currently, submarine fiber optic cables are owned by private companies with a stake in communications, including the likes of Fujitsu, Alcatel and Huawei. However, they often receive funding for these ventures and cooperate closely with public entities – after all, communications are essential to any functional nation-state, even beyond consumer markets.

But here’s a question you might have asked yourself already: why not satellites? Couldn’t this essential service be provided by satellites, while also ensuring a wider reach? After all, satellites are all over the place, and next-generation connectivity tends to take wireless form – be it my earbuds or even charging my phone.

For starters, there is latency. Fiber optics have lower latency, meaning that the travel time for internet signals to reach their destination and come back is shorter. It might not seem that relevant – unless you play videogames – but even a difference as small as 20 milliseconds can severely disrupt your internet activity. Second, wired connections have larger bandwidths which, as mentioned before, are the aggregate maximum capacity of an internet connection. This is especially true at larger scales: while our beloved cables operate at capacities measured in terabits per second (10¹²), satellites only reach the gigabit-per-second (10⁹) region. From 5G to autonomous cars, the main driver behind these technologies is the submarine cable.
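To put the latency argument into numbers, here is a back-of-the-envelope sketch (our own illustration, with rounded textbook figures rather than values from the sources above): light travels through optical fiber at roughly two thirds of its vacuum speed, while a signal bounced off a geostationary satellite must cover the ~36,000 km up and down twice for a round trip.

```python
# Rough propagation delays only; real latency adds routing and processing.
FIBER_SPEED_KM_S = 200_000   # light in optical fiber: ~2/3 of c
C_KM_S = 300_000             # radio waves in a vacuum
GEO_ALTITUDE_KM = 35_786     # geostationary orbit altitude

# Round trip over a ~5,600 km transatlantic cable (illustrative distance)
cable_rtt_ms = 2 * 5_600 / FIBER_SPEED_KM_S * 1000
print(f"Subsea cable round trip: ~{cable_rtt_ms:.0f} ms")   # ~56 ms

# Round trip via a geostationary satellite: up and down, twice
geo_rtt_ms = 4 * GEO_ALTITUDE_KM / C_KM_S * 1000
print(f"GEO satellite round trip: ~{geo_rtt_ms:.0f} ms")    # ~477 ms
```

The order-of-magnitude gap – tens versus hundreds of milliseconds – is why latency-sensitive traffic overwhelmingly travels by cable.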


Big tech and the scramble for the African market


Locations of the world’s data centers


Undersea cables are not only a better solution than satellites, but also good investments for companies that have to think big. Whereas in the past the main investors were telecoms, the new wave of investors across Europe and America is big tech. But what about Africa – does this apply to that continent as well? The answer is yes.

Companies like Google and Facebook, among others, have made considerable investments in Africa, since it represents a market with largely untapped potential. Moreover, network traffic on the continent has increased sharply, alongside demand for the Internet, making such investments both viable and vital.

Cloud computing is also on the rise. As companies migrate their IT infrastructure to the cloud, the big tech companies that already provide the bulk of cloud computing and services are building more data centers – and those data centers consume far more bandwidth.

Therefore, gaining a stake in fiber optic cables is an opportunity for both vertical and horizontal integration. Google is already providing phone plans across the world. But as with any continent or country, Africa comes with a number of challenges. For starters, it is limited in its number of English speakers as well as in literacy levels; the bulk of the internet is made up of English-language websites, effectively making their contents inaccessible to most. Reliable access to power, in addition to economic barriers, is a significant obstacle as well. Nevertheless, the implementation of communication technology is of extreme importance in narrowing the gap between developed and developing countries, as well as in empowering their populations.

In Africa, there are still many countries without any cable connection. The countries with the highest numbers of subsea cable landing stations are Egypt, with 15, followed by South Africa and Djibouti, with 11 each.


The number of connections between Africa and the world.

The predicted investments in Africa will imply a greater need for data centers. At the moment, these are located mainly in South Africa and Nigeria. These locations correspond more or less to the countries with the most cable landing stations – in other words, the countries with more and better infrastructure. Either because they have a bigger need for services, or because they are more stable, those are the places where companies will want to put their new data centers.


Partly because the country with the largest number of data centers in Africa still has 20 times fewer than Europe, Africa could become the next big battleground for companies vying for a stake on the world stage.



[1] “(…) fictional cosmic entity created by writer H. P. Lovecraft and first introduced in the short story “The Call of Cthulhu” (…) Lovecraft depicts it as a gigantic entity worshipped by cultists, in shape like an octopus, a dragon, and a caricature of human form.” (Wikipedia)

André Rodrigues, Daniel André, Laura Osório

Machine Learning (Part II)

To all our readers, this is the second part of an article still brought to you by humans. We encourage all to go read Part I here in case you missed it.

Why all the buzz around machine learning now? Just how many kinds of learners are there? What are ‘Neural Networks’ (otherwise known as deep learning) and why do they threaten to take our jobs? And finally, how likely is it that my robot vacuum cleaner wrote this entire article? (Tip: more likely now than ever before.)



Although similarities nowadays are sparse, Artificial Neural Networks got their name from being modelled after our own biological human neurons.

Broaching a topic as diverse as Artificial Intelligence raises more questions than it answers. This is especially true when writing an introductory article on the topic. As a result, the Tech team is dedicating a second story to further develop the ideas brought to the table in the first part of our article.

We will go from listing literally all the questions Machine Learning can answer – from an abstract perspective, at the very least – to explaining some of the factors behind the notable rise of Neural Networks. In keeping with the tone of the previous article, we’ll also explore some of the nuance behind Recommendation Systems (such as the ones used by Amazon and Netflix) and the way these systems (traditional vs. new) complement each other.

The 5 most useful questions ever answered by Machines

When breaking down Tinder’s diverse processes, we saw how learners could be utilized to perform several distinct tasks (image recognition vs. matching) and how one system built on top of another (new data powered other learners). The result of this systematic and iterative approach shows how data can be used to extrapolate powerful predictions. It is but one of many successful examples of how these powerful algorithms constantly shape our lives.

Our first part also provided a notion of the breadth and versatility of learners. Much as it’s said that all plots in media are variations of just seven basic story archetypes, it’s said that Machine Learning can only provide answers to five basic questions. When looking at a user’s Tinder profile in order to assign a trait or personality, we looked at what is called a Classification task – “Is this A or B?” Assigning a score to a user to predict a match with another user is what is called a Regression task – “How much, or how many?” – something not so (mathematically) different from trying to predict house prices. Towards the end of that story, we also brought up Clustering with regard to its potential uses in segmentation – in other words, “How is this organized?”

The two other questions, despite playing a very minor part, were also mentioned in some shape or form. They are: “Is this weird?”, useful in anomaly detection (also known as the reason why you shouldn’t use a credit card for one-dollar purchases), and “What should I do now?”, a question a machine is likely to ask itself whether being taught how to drive or when considering an insurrection against its human overlords.



Yes, there is a model called Logistic Regression. Yes, it is ironically cruel (especially if you’re hearing about all this for the first time). While objectively a Regression model (as in, it uses regression), it is used as a Classifier, i.e. for Classification tasks (e.g. based on the regression output, it will classify an object as A if below a 0.5 threshold or B if above).
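For the curious, here is a minimal sketch of that idea in Python (the weights and input are invented for illustration; a real model would learn them from data): the “regression” half produces a raw score, the sigmoid squashes it into a probability, and the 0.5 threshold does the classifying.

```python
import numpy as np

def sigmoid(z):
    """Squash a raw regression score into a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical learned weights and a single two-feature input
weights, bias = np.array([1.5, -2.0]), 0.3
x = np.array([0.8, 0.4])

score = weights @ x + bias                 # the regression part
probability = sigmoid(score)               # ~0.67 for these numbers
label = "B" if probability > 0.5 else "A"  # the classification part
print(f"score={score:.2f}, p={probability:.2f}, class={label}")
```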

While reducing all types of Machine Learning to five simpler questions might help you understand their nature, it likely puts you no closer to figuring out which one allows the GPT-3 model to produce human-like text. It might surprise the reader to learn that, of all the models in the diagram above, only one directly relates to Neural Networks – and that it does not explain the human-like text capabilities of GPT-3.

Much as Machine Learning is a field of techniques within Artificial Intelligence, Deep Learning is an entire field within ML. Many of its techniques have been around for decades – from before a time when computational power allowed for the efficient use of ML – often in the form of scientific papers that could never go beyond conceptual form. Neural Networks, like many techniques in ML, grew in use and popularity as processing power made them viable.

In this sense, Neural Networks are the latest – and perhaps greatest – of the ideas taken out of the Machine Learning icebox. From ‘Supervised’ to ‘Unsupervised’, this school of ML is capable of answering and solving any of these tasks. Going beyond versatility, it has also proven itself highly successful at performing tasks that traditional techniques could not.

What Machine wrote my news?

Pretend for a moment that machines are capable of human-like thought (they aren’t, despite their increasingly impressive cognition). Would GPT-3, while outputting text, ask itself “How many?” or “Is this A or B?”

Furthermore, could a non-Neural-Network learner have produced such an outcome? Can we say for certain that Neural Networks are inherently better than conventional techniques? For either question, the answer has to do with quirks in the data. Neural Networks, more specifically Convolutional Neural Networks (CNNs), excel at the many challenges brought up by image recognition (namely high dimensionality). Measured against traditional techniques, Neural Networks will not perform inherently better, outside of one notable exception – data size.

Past a certain (big) size, Neural Networks are practically guaranteed to be the better choice due to scalability: the bigger the data, the better they work when measured against other models. Work in Machine Learning has a lot to do with measuring and evaluating performance and, in keeping with that, it has more to do with picking the better model than with writing thousands of lines of code.

Additionally, we will often find a mixture of both (Neural vs. Traditional) powering our increasingly complex systems. Consider Amazon and Netflix; both boast powerful Recommendation Systems, a million-dollar idea (see the Netflix Prize) that nudges you towards the next movie or item.

A traditional Recommendation System is a matter of matrix factorization. In simpler terms, it is one of the easier algorithms you can write by hand (with just one or two courses of Calculus). The point is that Recommendation Systems pair you with something likely to be relevant – either due to similarities with other users or with other items – in essence, a Regression or Classification task.
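To back up the “write it by hand” claim, here is a bare-bones matrix factorization recommender in Python (ratings, dimensions and learning rate are all invented for illustration): two small factor matrices are fitted to the observed ratings by gradient descent, and their product fills in the missing ones.

```python
import numpy as np

R = np.array([[5, 3, 0, 1],        # user x item ratings, 0 = unknown
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

n_users, n_items, k = R.shape[0], R.shape[1], 2
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors

lr = 0.01
for _ in range(5000):
    for u in range(n_users):
        for i in range(n_items):
            if R[u, i] > 0:                    # fit observed ratings only
                err = R[u, i] - P[u] @ Q[i]
                pu = P[u].copy()
                P[u] += lr * err * Q[i]        # nudge both factor vectors
                Q[i] += lr * err * pu
print(np.round(P @ Q.T, 1))  # predictions now fill the unknown (0) cells
```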

At surface level, much remains the same when migrating to Neural Networks: data goes into the model, and a prediction (whether regression or classification) comes out. The interesting part is how the network’s layers can transform the data before it reaches the final prediction. Layered on top of each other, learners can perform multiple tasks (answering more than one question) before reaching our desired output.

To return to our initial question, the secret to what GPT-3 might “think” before a prediction is likely “How much/How many?” – it is described as an autoregressive model, after all. But the secret to its success might lie in answering multiple questions in succession.
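As a toy illustration of what “autoregressive” means (a deliberately tiny sketch, nothing like GPT-3’s actual architecture): the model repeatedly answers a question about the next word given the words so far, appends the answer, and asks again.

```python
import random

transitions = {                    # a made-up bigram "model"
    "the": ["machine", "article"],
    "machine": ["wrote", "learned"],
    "wrote": ["the"],
    "learned": ["the"],
    "article": ["."],
}

random.seed(42)
text = ["the"]
while text[-1] != "." and len(text) < 10:
    # each step is a prediction conditioned on what was generated so far
    text.append(random.choice(transitions[text[-1]]))
print(" ".join(text))
```

GPT-3 does conceptually the same thing, only with a context of thousands of tokens and billions of parameters instead of a five-word lookup table.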


Sources: Netflix Prize, The Ascent, The Awareness News, The Guardian, Towards Data Science.

Coulter, D., Gilley, S., Sharkey, K., 2019. Data Science for Beginners video 1: The 5 questions data science answers. 22 March

Pant, R., Singhal, A., Sinha, P., 2017. Use of Deep Learning in Modern Recommendation System: A Summary of Recent Works. 7 Dec

Machine Learning (Part I)

“Machine Learning is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…”

Machine Learning (ML) and Artificial Intelligence (AI) are buzzwords often used interchangeably in the casual and intellectual discourse of today. Many ideas often spring to mind when either is mentioned: data science, self-driving technology, big data and, on the more ridiculous side, robots hellbent on humanity’s destruction. The truth, however, is that Machine Learning is part of our increasingly data-driven world. It makes our lives better, despite several shortcomings, and is likely to be relevant to you even when not working directly with it.



Let us take a quick moment to make the distinction between ML and AI. Machine Learning, a subset of AI, is a field dedicated to generating predictions based on the hidden patterns machines pick up within data. In practice, it is an AI technique where the machine writes its own rules. This means that a machine is fed with inputs – housing data in tabular form, say, or photos of dogs and cats – and learns to perform a specific task without humans telling it how to do so.

In this article, we hope to explore some interesting case studies, such as how Tinder uses these learners to match you with your next date, or how Amazon attempted to use an algorithm to analyse CVs (revealing a bias against women instead). With Tinder, for example, a machine takes our explicit (e.g. age range) and implicit (e.g. our photo was taken in a forest) preferences to pair us with people likely to be a match. This task is performed by several algorithms (or learners/machines), each one trained specifically for its own task.

How does my swiping allow a Machine to learn?

Tinder uses an Elo system, attributing a score to every user. Based on this score, it determines the likelihood of two individuals swiping right on each other, resulting in a match. The score is determined by multiple factors, such as the photos, bio and other profile settings, as well as swiping activity. Users with similar Elo scores, who have been identified as sharing similar interests, will be shown to each other.
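Tinder has never published its exact formula (and has reportedly moved beyond a pure Elo score since), but the classic Elo update from chess gives a feel for the mechanics – treat a right swipe as a “win”:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A 'wins' (here: receives the right swipe)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating: float, expected: float, actual: float, k: float = 32) -> float:
    """Move the rating towards the observed outcome."""
    return rating + k * (actual - expected)

a, b = 1400, 1600
e = expected_score(a, b)       # ~0.24: A is the 'underdog' profile
a = update(a, e, actual=1.0)   # ...but A got the right swipe anyway
print(round(e, 2), round(a))   # 0.24 1424 -> unexpected wins move scores a lot
```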

Let us walk through the process step by step.


First, the algorithm analyses the user’s profile, collecting information from the photos they posted and the personal information written in their bio. From the photos, the algorithm can pick up on interests or cues, such as liking dogs or nature. From the bio, the machine will profile you based on the words and expressions used. From a technical perspective, these are distinct tasks likely to be performed by different learners – identifying words and sentiments is fundamentally different from recognizing dogs in pictures.


At this point, Tinder still does not have much knowledge about one’s preferences and will therefore show your profile to other users at random. It records the swiping activity and the characteristics of the people swiping right or left. Additionally, it identifies more features or interests from the user and attempts to present the profile to others in a way that increases the likelihood of someone swiping right. As it collects more data, it becomes better at matching you.

The ‘Smart Photos’ option, a feature that places your ‘best’ or ‘most popular’ photo first, is another instance where Tinder uses Machine Learning. Through a random process in which a profile and its pictures are shown to different people in different orders, it eventually creates a ranking of your photos.

In Smart Photos, the main goal is for you to be matched, and this works best when the most relevant picture is placed first. That could mean the most ‘popular’ photo – the one that performed best overall – is not always the right choice; think of someone who likes animals. For those people, the photo of you holding a dog is likely to be shown first! Through this work of creating and ranking preferences and choices, a match can be found based solely on the insights extracted from a photo.
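A naive version of this idea fits in a few lines (the numbers below are invented, and Tinder’s real system is considerably more sophisticated, also personalizing the ordering per viewer): show photos in random order, record right-swipe rates, then rank by observed performance.

```python
impressions = {"dog_photo": 120, "beach_photo": 95, "suit_photo": 110}
right_swipes = {"dog_photo": 54, "beach_photo": 19, "suit_photo": 33}

# Rank photos by their observed right-swipe rate
ranking = sorted(impressions,
                 key=lambda p: right_swipes[p] / impressions[p],
                 reverse=True)
print(ranking)   # ['dog_photo', 'suit_photo', 'beach_photo']
```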

By and large, the techniques that match you with other people as described above belong to a school of Machine Learning called ‘Supervised Learning’. In other words, the algorithm that learns to identify dogs and nature has been trained with labelled pictures of dogs and nature. This stands in contrast with other schools, such as ‘Semi-supervised Learning’ and ‘Unsupervised Learning’.
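In code, supervised learning boils down to “here are inputs with their labels, infer the rule”. A minimal sketch with scikit-learn (the toy features – weight and ear length – are invented for illustration):

```python
from sklearn.tree import DecisionTreeClassifier

X = [[30, 12], [25, 10], [35, 13],   # dogs: [weight_kg, ear_length_cm]
     [4, 6], [5, 7], [3, 5]]         # cats
y = ["dog", "dog", "dog", "cat", "cat", "cat"]

model = DecisionTreeClassifier().fit(X, y)   # learn from labelled examples
print(model.predict([[28, 11], [4, 6]]))     # -> ['dog' 'cat']
```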

The Perils of our (Human) Supervisors

In 2014, a group of Amazon engineers was tasked with developing a learner that could help the company filter the best candidates out of thousands of applications. The algorithm would be given past applicants’ CVs, as well as the knowledge of whether those applicants were hired by their human evaluators – a supervised learning task. Considering the tens of thousands of CVs Amazon receives, automating this process could save thousands of hours.

The resulting learner, however, had one major flaw: it was biased against women, a trait it picked up from the predominantly male decision-makers responsible for hiring. It started penalizing CVs that mentioned the female gender, as would be the case in a CV where “Women’s chess club” was written.

To make matters worse, when the engineers adjusted the learner to ignore explicit mentions of gender, it started picking up on implicit references: it detected non-gendered words that were more likely to be used by women. These challenges, plus the negative press, would see the project abandoned.

Problems such as these, arising from imperfect data, are linked to an increasingly important concept in Machine Learning called Data Auditing. If Amazon wanted to produce a learner that was unbiased against women, a dataset with a balanced number of CVs from women, as well as unbiased hiring decisions, would have to have been used.

The Unsupervised Techniques of Machine Learning

The focus up until now has been on supervised ML. But what other types are there?

In Unsupervised Learning, algorithms are given a degree of freedom that the Tinder and Amazon ones do not have: they are only given the inputs, i.e. the dataset, and not the outputs (or a desired result). These divide into two main techniques: Clustering and Dimensionality Reduction.

Remember when, in kindergarten, you had to sort different shades of red or green into their respective colours? Clustering works in a similar way: by exploring and analysing the features of each datapoint, the algorithm finds subgroups that structure the data. Choosing the number of groups is a task that can be handled either by the person behind the algorithm or by the machine itself. If left alone, it will start at a random number and iterate until it finds an optimal number of clusters (groups) to interpret the data accurately, based on the variance.
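Here is what that looks like in practice, with scikit-learn’s k-means on synthetic two-dimensional “customer” data (ages and yearly spend invented for illustration); note that no labels are ever given to the algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
group_a = rng.normal(loc=[20, 1_000], scale=[2, 100], size=(50, 2))
group_b = rng.normal(loc=[45, 4_000], scale=[3, 300], size=(50, 2))
customers = np.vstack([group_a, group_b])   # columns: [age, yearly_spend]

# Ask for two clusters; the algorithm finds the groups on its own
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print(np.bincount(labels))   # -> two clusters of ~50 customers each
```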



There are many real-world applications for this technique. Think about marketing research for a second: when a large company wants to group its customers for marketing purposes, it starts with segmentation – grouping customers into similar groups. Clustering is the perfect technique for such a task; not only is it likely to do a better job than a human – detecting hidden patterns that would go unnoticed by us – but it can also reveal new insights about those customers. Even fields as distinct as biology and astronomy have great use for this technique, making it a powerful tool!

However brief this overview, Machine Learning is a vast and profound topic with many implications for our real lives. If you’re interested in learning more, be sure to check out the second part of this article!


Sources: Geeks for Geeks, Medium, Reuters, The App Solutions, Towards Data Science.

Daniel André, Laura Osório


André Rodrigues

The (Next) Generation of Energy

“Climate change is, quite simply, an existential threat for most life on the planet – including, and especially, the life of humankind.”

— António Guterres (2018)

For the past decade, the world has witnessed the incremental transformation of the mobility industry, largely through the phenomenon of electric cars. Nowadays, beyond Tesla’s Gigafactory walls, traditional carmakers are taking the necessary steps to phase out the combustion engine. Several prominent and prolific carmakers expect the majority of their income to come from all-electric car sales by 2030 and to have a fully electrified offer – counting hybrids and other mixed possibilities as well. In the short to medium term, expect as many as 500 electric car models to be available by 2022.

Concurrently, the energy sector is slowly weaning itself off fossil fuels in favour of renewable energies. As countries rethink their energy concerns – a matter of debate that goes well beyond the environment – and implement change through public policy, the automotive industry is being remade with battery-powered electric vehicles.

This transformation, slow as it may be when set against the environmental crisis we are in, is crucial, but not without its own set of problems. Energy from renewables is largely dependent on factors beyond our control: how feasible would a fully renewable electric grid be, and what good is it to power car batteries if the electricity comes from fossil fuels? And on the topic of car batteries: what about their environmental cost, how reliable are they, and what alternatives could we consider?

This semester, the team behind the Technology articles at Nova Awareness Club has been tasked with two other topics – Health and Environment. This is no small task in 2020, a year in which the world has been ravaged by a global pandemic and in which there is a sense of urgency about taking the necessary steps to prevent irreversible damage to the environment.

Although the choice of this article has been somewhat fortuitous – the main idea was to showcase what the readers of The Awareness News should come to expect this term – the underlying message has never been truer. Within our scope, the team pledges to bring awareness to several environment-related questions, and to do so using language that conveys the environmental crisis we are in while reducing unnecessary technical jargon to a minimum.


Renewable Energies, the Power Grid and the Duck-Related Neologism


Duck Curve. The practical effect of renewable energies on power grids, as seen in California (The curves vaguely shape the outline of a duck). Source: Vox

As previously stated, renewable energies are often dependent on factors beyond human control – namely, the weather. Solar farms only produce energy when the sun shines (usually not at night), and a drought, to take a hydroelectric example, is typically the product of extreme weather conditions.

Learning how to juggle these power outputs is key to one day achieving a fully renewable electric grid – a concept that has yet to materialize in real life. For this next part, consider the graph pictured above.

The Duck Curve is a recently coined term that shows the discrepancy between peak power production and peak power needs. This graph shows a very specific example, likely to occur in places with an elevated solar output. Different electricity profiles – in other words, the different mixes of energy types powering an electric grid – dictate the circumstances.

Keep the following takeaways in mind before we delve into a practical question:

  1. As of yet, there is no such thing as a power grid fully supplied by renewable energy sources.

  2. Consequently, the electric cars you see partly run on electricity generated from fossil fuels. The degree to which they do depends on the electricity profile of the place you live in.

The question of whether an electric car is better from an emissions standpoint is often finicky; a 2017 article by The Guardian states that an electric car’s lifetime emissions are just 20% lower than a traditional combustion car’s. There is, however, a reduction in day-to-day use, and it relates directly to your electricity profile.


The burning questions behind the ‘B’ word… for Batteries

For a consumer, the limitations of an electric car largely revolve around its battery. From an environmental standpoint, making batteries also poses an acute environmental cost.

The deeper we go into the topic, the more fraught it becomes. The production of lithium-ion cells is energy-intensive and demands rare metals – in other words, analysing battery production as a greener alternative to traditional cars starts off as an opportunity-cost analysis between the two. Fortunately, as energy is increasingly sourced from renewables, it becomes less of a question.

Taking into account the pollution from old batteries – which has not been overlooked in the research for this article – the real environmental cost of batteries seems to be hidden behind several layers of externalities. As such, the next part of this article is dedicated to the often-overlooked sibling of battery-powered vehicles: hydrogen and hydrogen-powered cars. The process behind it is simple enough: combining hydrogen with oxygen generates energy and clean water vapour.

From a functional perspective, hydrogen cars work similarly to battery-powered electric cars, but with the added benefit of discarding the battery. Albeit with its own set of challenges, hydrogen-powered vehicles offer conventional and green mobility. As such, the technology has gained traction in spite of its scarce media coverage.

The challenges, however, should not be overlooked. Although hydrogen is the most common substance in the universe, it is rarely found in its pure form. In what amounts to a net-negative energy exchange, we must spend X amount of energy to obtain the equivalent of X − Y in hydrogen fuel. It is also very difficult to contain: the two alternatives are to pressurise a container or to turn it into a liquid by lowering the temperature, and both options are costly. With everything tallied up, the average price per kilometre at the time of writing is estimated at $0.17, versus $0.02 for electricity.
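Using those per-kilometre figures, a quick back-of-the-envelope comparison (the yearly mileage is our own assumption, chosen only to illustrate the gap):

```python
HYDROGEN_PER_KM, ELECTRIC_PER_KM = 0.17, 0.02   # $ per km, as quoted above
yearly_km = 15_000                              # assumed average driver

h2_cost = HYDROGEN_PER_KM * yearly_km           # $2,550 per year
ev_cost = ELECTRIC_PER_KM * yearly_km           # $300 per year
print(f"hydrogen: ${h2_cost:,.0f}/yr vs electric: ${ev_cost:,.0f}/yr")
```

At today’s prices the running-cost gap is nearly an order of magnitude, which is exactly why hydrogen’s pitch rests on storage rather than efficiency, as discussed next.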

Ultimately, hydrogen can be seen as a trade-off between efficiency (creating it is a net energy loss) and storage capacity, in which hydrogen wins in spades against current lithium-ion battery technology. This, in turn, opens up possibilities that seemed impossible or too far off with batteries – namely, replacing fossil fuels in commercial air travel.

Some governments are ready to invest in and incentivize the transition to hydrogen. Japan was the first country to develop a Basic Hydrogen Strategy, in 2017 – the most promising initiative for establishing hydrogen as a main energy source beyond mobility alone. So far, Japan has succeeded in extracting hydrogen from sources such as manure and waste plastic, and with a decent-sized hydrogen car fleet, it is an important proof of concept for hydrogen’s efficiency and sustainability. It is the most successful case of a country committing to hydrogen for its energy needs.

Germany is another adherent of hydrogen, already producing 20 billion standard cubic metres, although 95% of it comes from fossil fuels such as coal and natural gas. The Bundesregierung – the German federal government – adopted a national hydrogen strategy in June of this year, which will ensure support for hydrogen innovation and technology for both German and European companies on the international stage.


Sources: The Guardian, Vox, New York Times, BBC, YouTube videos, UN News, BloombergNEF, California Energy Commission, NREL, European Environment Agency.

Lab Grown Food: Opportunity for sustainability or dystopian nightmare?

In 2014, the United Nations issued a report claiming that at current rates of soil degradation “all of the world’s topsoil could be gone within 60 years”.

The production of livestock is responsible for 14 to 18% of our greenhouse gas emissions and takes up to 70% of all agricultural land. Most of the world’s crops are used to sustain livestock, and it is a major cause of deforestation and water contamination – problems which further aggravate climate change and the health of our ecosystems.

While the world population has doubled over the last 50 years, the amount of meat produced has more than quadrupled – in fact, if the world ate as much meat as the top 20 meat-eating countries, the whole surface of habitable land would have to be used to feed people, and even packing animals together still wouldn’t be enough. Moreover, factory-farmed animals are fed antibiotics: in the US, more than 70% of all antibiotics sold each year now go to farm animals, which has led many to speculate that the industry is fuelling the rise of a deadly antibiotic-resistant pathogen, a so-called superbug, with consequences potentially greater than the current Covid-19 pandemic.

By changing the way we produce meat, or our meat-consumption habits, this picture would improve significantly. But what if we could grow all our food in a lab? Meat might not be the only lab-grown product of the future.


Lab-grown meat, as the name suggests, consists of creating a piece of meat through cell culture. Initially, a small sample of cell tissue is taken from an animal and subsequently added to a growth medium – like a soup that provides proteins, vitamins, sugars, and hormones. Along with a temperature-controlled environment, the cells are tricked into thinking they are still inside their owner, and so they grow and replicate. This process takes between two and six weeks, and the final product is a doughy chunk of meat, close to minced meat, which will then give rise to our everyday meat products.

 

 

Artificial lab-grown meat in a Petri dish (YouTube)

Since the ’80s, many products have been used as attempts to substitute meat, such as soybeans and wheat gluten; however, they could never replace the taste of a hamburger. The objective of cultured meat is quite complex: to create something that tastes just like regular meat, with as many vitamins, proteins, and nutrients, that can be cooked and is just as affordable. The biggest challenge, however, is to reproduce its consistency, which strongly influences its flavour and juiciness.

Scientists are looking for ways to reproduce the structure of a steak: they need to supply nutrients to cells in the centre of a piece of meat, like vessels do in a body. This has been a huge challenge, and the latest progress was made by a Harvard team using a type of gelatine. They came up with a product that, while not having as much muscle fibre as natural meat, had a similar texture. But even if muscle cells, fat and connective tissue can be made from scratch, the biggest challenge still remains: how can we combine them to build an actual steak, with its texture, aroma, flavour, appearance and functionality?

And is lab-grown meat even affordable? The first lab-grown burger, produced in 2013 by Mosa Meat, cost $1.2 million per pound (more or less 2.45 million euros per kilogram), due to the large costs of the cell-production process. However, Future Meat, another lab-grown meat start-up, is planning to launch a new line of in-vitro meat that will cost less than $10 per pound (more or less 20 euros per kilogram). This steep decrease in price over just a few years is expected to continue.
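For readers who want to check the unit conversion, a quick sketch (the exchange rate here is a flat 0.9 EUR/USD, assumed purely for illustration; actual rates vary by year):

```python
LB_PER_KG, EUR_PER_USD = 2.2046, 0.9   # pounds per kilogram, assumed FX rate

for usd_per_lb in (1_200_000, 10):     # 2013 burger vs Future Meat target
    eur_per_kg = usd_per_lb * LB_PER_KG * EUR_PER_USD
    print(f"${usd_per_lb:,}/lb  ~  EUR {eur_per_kg:,.0f}/kg")
# -> roughly EUR 2.4 million/kg for the 2013 burger, ~EUR 20/kg for the new line
```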

When it comes to the environmental impacts, they are still being debated. While Mosa Meat, a clean-meat producer, states that its cultured-meat process requires 99% less land and 96% less water than livestock agriculture, some say producing all our meat in labs would only cut greenhouse gas emissions from beef by 7%, because of the energy the process requires.

There is also a moral dilemma which still needs to be tackled: the growth medium. Currently, scientists are using Fetal Bovine Serum – in other words, blood harvested from the fetuses of slaughtered pregnant cows, killing the fetuses in the process. While firms are looking for a plant-based substitute, these methods do not do much to reduce the cruelty of current practices, although they would drastically reduce the number of slaughtered animals.

Finally, one must also consider the yuck factor: many people are still resistant to the idea of eating something created in a lab, which can be explained by the Uncanny Valley effect. In robotics, this is the discomfort caused by humanoids that are close to being human but not quite there. The same happens with meat: faced with a very close imitation, your brain expects it to be exactly the same, and if it isn’t, you will most probably be put off. In February 2019, the Animal Advocacy Research Fund funded a survey which revealed that 29.8% of U.S. consumers, 59.3% of Chinese consumers, and 48.7% of Indian consumers would be very or extremely willing to regularly purchase cell-based meat.


Meat is not the only food that scientists and entrepreneurs are trying to grow in a lab. Solar Foods grew flour – the main source of calories in the West – inside a metal tank. The process, described as a froth-like soup of bacteria fuelled by hydrogen, allowed for the creation of artificial flour, and the use of hydrogen was touted as being 10 times as efficient as photosynthesis. In these labs, where food is grown in giant vats, Solar Foods estimates land efficiency to be 20,000 times greater. Under these circumstances, food could virtually be grown anywhere in the world, on a fraction of the area.

Soylent Green – a 1973 movie featuring Charlton Heston and based on a story by Harry Harrison – describes a world in the year 2022 where exponential population growth has led to natural disasters and chronic food and water shortages. As the actual year 2022 looms, this seems a good time to discuss our options for ensuring this fiction does not become reality.

 

 

World Population Projections (ResearchGate)

Maria Mendes

João Guedes

Daniel André

Dubai: The Pearl of the Middle East

This city needs no introduction. As the main attraction and destination of the Middle East, Dubai is an exotic and trendy city full of luxury and amazement. Skyscrapers everywhere, including the tallest in the world, with new ones going up nonstop. The most amazing and extravagant hotels, such as the Burj Al Arab: a seven-star hotel with 200 rooms, each two stories high. A coast with artificial capes and islands, full of extraordinary mansions. Its very own indoor ski resort, Ski Dubai, while outside temperatures exceed 40 degrees Celsius. One of the most spectacular cities in the world, and one that in many ways makes no sense at all. How can a city so successful in so many ways be built in a hostile desert, by a previously little-known people, in a region so marked by political tensions and wars?


The Past

Dubai is one of the seven monarchies that make up the United Arab Emirates (UAE), located on the coast of the Persian Gulf.

At the beginning of the twentieth century, Dubai was just a small, insignificant trading port. The city survived by having special diplomatic relations with the United Kingdom, which offered stability, and by selling its finest trade resource: high-quality pearls. The only special thing about the city was its strategic position, close to the Strait of Hormuz.

In the 1930s, the creation of high-quality fake pearls and the Great Depression devastated the economy. Dubai, by then an official protectorate of the British Empire, experienced mass emigration and heavy economic losses. It was in this period that its people realised the disadvantages of depending on a single trade resource, and the advantages of the stability in the region provided by the British. These would be the two factors that came to define Dubai.


Old Dubai in 1950 (source: wikipedia)

Throughout the twentieth century, more and more oil was found in the Emirates, though not much in the Emirate of Dubai. When the UAE became independent in 1971, the country was increasingly dependent on its oil exports. But Dubai learned from its past: it focused on diversifying its sources of income. As such, it invested its share of the oil revenues in infrastructure like ports, roads and airports. From there, it attracted foreign investment, granting special economic zones to any interested party. All of this was only possible thanks to the near-perfect stability of the country. As the years passed, Dubai became a great competitor in maritime trade, banking, finance, energy, scientific innovation, aviation and, of course, real estate. It was in the 1990s that the city exploded with its famous skyscrapers, while wars were being fought all over the Middle East.


The Present

Nowadays, it is a global city like no other. Over two million people live in Dubai, more than three quarters of whom are immigrants. While more than 80% of the UAE’s GDP depends on oil-related revenues, less than 5% of Dubai’s does. Because it sits right between Asia, Europe and Africa, and is so safe and diversified in its services, it serves as a bridge for business and diplomacy between the continents. It is, in many ways, the Switzerland of the Middle East.



Present Dubai

The main types of people Dubai attracts are entrepreneurs, who establish their companies in the city and make it increasingly competitive; qualified workers, to staff those companies; and tourists – 13 million per year, from the extremely rich to the average Westerner – as tourism is a field in which Dubai excels.

I was able to interview an entrepreneur and a tourist so that they could share their experiences in Dubai:

Our entrepreneur is the owner and CEO of a marketing company in Portugal. He chose to do business in Dubai to take advantage of the bridge between societies. Not only is it easy to set up a business in the city, but there is also easy access to other markets in Asia and Africa. There are companies from all over the world in Dubai, making the competition fierce. It is extremely difficult to survive in such a market, with all the big international players present. Still, he is steadily holding his own.

Our tourist is a student from Nova SBE who travelled to Dubai during the summer holidays. She found many similarities between Dubai and the big cities of the USA: big skyscrapers, big shopping malls, sprawling suburban areas, gigantic highways and the automobile as the main form of transportation. Everything alike, only more extravagant. She particularly liked the desert landscape, the extravagant shopping malls and the culture. Old Dubai is often forgotten, but that is where you can truly find the roots of the people – in the old part of the city, in places like the souks, covered traditional markets with a different one for each type of product, from clothes to gold.



Dubai’s traditional covered markets, the Souks

Her trip dispels the myth that Dubai is only for the super-rich. You can still have a great holiday in Dubai without spending that much.

These testimonies only confirm what was already stated. Dubai is a safe and exciting place to visit and work. But the city is far from perfect: it has serious problems.

The city grew exponentially in only 30 years. It was never planned to grow that much, so there are very serious logistical problems: big highways separate entire neighbourhoods, and many streets are completely disconnected from each other on foot.

Dubai is seen as having very relaxed laws relative to neighbouring countries, and that is true for the most part. Women do not have to cover their hair, other religions can be practiced freely, and even alcohol is legal. But there are still harsh laws. You can’t drink in the street, you can’t show intimacy in public (like hugging, holding hands and kissing), and you can’t speak or report badly about the government, neither in public nor on social media. There is no freedom of speech. One shocking case was that of a British PhD student who was in Dubai to study. He was arrested on mere suspicion of spying, tried, and sentenced to life imprisonment with no lawyer present. He was later released, but not before five months in solitary confinement.

And then there is the rule of law itself. Many laws are ignored when convenient: there are reports of tourists showing intimacy and drinking in public with no repercussions. Some labour laws are also ignored.



Living conditions of forced labor workers

And that leads to problems with human rights. Many less-educated people come to Dubai to work. The more desperate are cheated out of their salaries when recruited for various jobs, mostly in construction. They are kept in conditions considered less than humane, forced to work without pay. This is no different from slavery. It is possible that those amazing skyscrapers were built by these people.


The Future

Dubai will certainly outlast oil, thanks to its diversification and its eccentric identity, which attract business and attention worldwide. It has serious problems, but they may yet be overcome with increasing influence from the West.

Meanwhile, increasingly bizarre construction projects are underway, like the Dubai Creek Harbour. This will be an urban complex full of luxury apartments, green parks and the Dubai Creek Tower. The latter will cost one billion US dollars and will be the tallest structure ever made by mankind, standing 1.3 kilometres high. Construction was expected to finish in 2021, but that will probably be postponed due to the Covid-19 pandemic. Nevertheless, when it does finish, it will keep Dubai in the spotlight it currently occupies.



The Dubai Creek Tower (source: EMAAR Properties)

The Cloud Wars: AWS Vs Azure for the Control of Your Internet

Late last year, on October 25th, the United States Department of Defence announced that it would award a contract worth $10 billion – the Joint Enterprise Defence Infrastructure project, henceforth referred to as JEDI – to Microsoft’s cloud computing business, Microsoft Azure.

In the larger picture, the JEDI contract is but a small drop in the ocean of public contracts: according to the United States government itself, the federal government spends roughly $500 billion on contracts every year. At face value, it was just another story of a tech-related government contract being awarded to the company with the most competitive bid.


Figure 2 – AWS Logo, Source: Amazon

But Amazon would contest the decision soon after, claiming errors in the process and political interference by President Trump. Like Microsoft, Amazon Web Services (AWS) – a subsidiary that provides cloud computing services – was in the running for the JEDI contract, and was even considered the frontrunner. By February of this year, Microsoft’s work had been halted and the Pentagon was reconsidering the decision, as shown by court documents.

Though this story has taken an acrimonious turn, pitting Amazon against the Executive Branch of the United States with allegations of political interference, the competition between Microsoft and Amazon is not unlike that of any other two companies fighting for market dominance. President Trump allegedly interfered with the process because cloud computing is a nascent, rapidly growing field that both companies have deemed strategically crucial. So, he hit Amazon and its founder, with whom he has had public spats in the past, where it hurt.

Ultimately, this raises the question:

What is cloud computing, and why are the stakes so high?

Cloud computing is the delivery of computing services such as servers, databases, storage, software and analytics through the internet (the cloud = the internet).

Cloud computing services offer several benefits to clients:

  • No capital expenditures – You don’t have to buy your own physical assets; you rent them via the “cloud” instead, much like retail chains that lease property instead of buying it.

  • Scale & Flexibility – The cost of the rent is proportional to the size of your business and traffic, and it is very easy to increase capacity. In other words, as your computing needs grow with business and traffic, you are not constrained by your current equipment (a toy cost comparison follows this list).

  • Speed & Performance – Your only limitation is your internet connection. Cloud services run on the latest high-tech hardware, so you are not held back by outdated equipment and you don’t have to constantly upgrade your devices.

  • Security & Reliability – Cloud computing services come with automatic backups and disaster recovery, as your data is replicated in many places instead of sitting on a single server. Cloud providers also employ the latest network security practices, which would be too expensive for a single business to implement on its own.
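
To make the capital-expenditure and scalability points concrete, here is a toy cost comparison in Python. Every figure in it (server price, hourly rate, usage pattern) is a made-up assumption for illustration, not a real price list.

```python
# Toy comparison: buying hardware upfront vs. renting capacity on demand.
# All figures below are hypothetical assumptions, not real cloud prices.

SERVER_PURCHASE_COST = 8_000   # assumed upfront cost of one server (USD)
CLOUD_RATE_PER_HOUR = 0.10     # assumed on-demand rate per server-hour (USD)
HOURS_PER_MONTH = 730

def on_premises_cost(peak_servers: int) -> float:
    """Buying hardware: you pay for peak capacity on day one."""
    return peak_servers * SERVER_PURCHASE_COST

def cloud_cost(avg_servers_in_use: float, months: int) -> float:
    """Renting via the cloud: cost tracks actual usage, not peak capacity."""
    return avg_servers_in_use * CLOUD_RATE_PER_HOUR * HOURS_PER_MONTH * months

# A business that needs 10 servers at peak but averages 3 in use:
print(f"On-premises, 10 servers upfront: ${on_premises_cost(10):,.0f}")
print(f"Cloud, 12 months at avg. 3 servers: ${cloud_cost(3, 12):,.0f}")
```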

These services can be split into three types:

  1. Infrastructure as a Service (IaaS) – the most basic version, where you rent IT infrastructure such as servers and storage (see the storage sketch after this list).

  2. Platform as a Service (PaaS) – services that supply an on-demand environment for developing and testing software applications, such as mobile apps.

  3. Software as a Service (SaaS) – the delivery of software over the internet, on demand, often requiring only a terminal (no local installation needed).
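
As a taste of what the IaaS layer looks like in practice, here is a minimal sketch using boto3, the real Python SDK for AWS. The bucket name and file paths are hypothetical, and running it requires an AWS account with credentials configured locally.

```python
# Storage as a rented service: no disks to buy, just API calls.
# Bucket name and file paths below are hypothetical examples.
import boto3

s3 = boto3.client("s3")

# Push a local file into rented cloud storage...
s3.upload_file("report.csv", "my-hypothetical-bucket", "reports/report.csv")

# ...and pull it back from anywhere with an internet connection.
s3.download_file("my-hypothetical-bucket", "reports/report.csv", "report_copy.csv")
```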

Cloud computing has provided a unique benefit to society in general – it makes it much easier to launch a tech start-up, as start-up costs are almost non-existent compared with the 90s. Cloud computing was a major enabler of the tech boom of the last decade.


Amazon was the first of the two to launch a cloud computing business via AWS, back in 2006. The story of AWS is interesting: before 2006, what would become AWS was a private cloud system within Amazon, supporting data collection and server management across the entire company. Amazon used and developed it internally for roughly four years before offering it to the market. AWS was not planned; it was born of Amazon’s culture of innovation and experimentation.

Microsoft would follow suit in 2010, launching what was then called “Windows Azure”.

Today, these two companies together hold about 70% of a market valued at $227 billion in 2019, with Amazon being the market leader at roughly 40%.

And seamlessly, without us ever noticing, they power many of the platforms and companies that we use in our day-to-day.


Figure 2 – Netflix logo (source: Wikipedia)

Consider Netflix, one of AWS’ highest-profile clients. In 2009, it opted to migrate from its physical data centre to the cloud, moving thousands of terabytes of data into Amazon-owned servers – data that has to be accessed tens of thousands of times per second by 160+ million subscribers in all parts of the world.

Figure 3 – HP logo (source: Wikipedia)

Alternatively, consider HP, one of Microsoft Azure’s high-profile clients. According to HP, it handles more than 600 million technical support contacts each year. The data points accumulated from these contacts were used to build an AI assistant via one of the solutions provided by Microsoft Azure.

Each of these examples illustrates a different service provided under the same umbrella term of “cloud computing”. Both platforms ultimately aim to help businesses develop and meet their organizational goals: they offer many tools and frameworks to build an «on your own terms» platform.


But how would a manager choose between them? If a company already works with a platform, why consider getting a service from a competitor?

Part of the answer lies in the many different services they supply, as well as in the current data infrastructure of the company in question. Microsoft’s products are ubiquitous in companies around the world, but AWS seems to be further ahead in the cloud computing game as of right now (hence its frontrunner status in the JEDI contract). Regardless, no two cloud suppliers offer the same service in the exact same way, or with the same value proposition. For a manager, choosing between AWS and Azure may be a balancing act, and many end up using both.

Ultimately, the Cloud permeates our day-to-day lives. From the political contrivances seen in the JEDI contract to a paradigm shift that directly affects decision-makers both high and low in a company, articles much like this one are but a warning sign of a braver new world to come.


Sources: US Department of Defense, US Datalab, NY Times, Gartner, Microsoft Azure, AWS, Wikipedia


João Vaz Guedes, Maria Mendes, Daniel André

Moore’s Law

In 1965, Gordon Moore was asked to write for a special edition of Electronics magazine about the future of silicon components over the next decade. Integrated circuits (we explain what these are below), known today as computer chips, had been invented in the late 1950s and were just beginning to be commercialized. Analysing the achievements of his own company and others in the previous years, Moore observed that the number of components per integrated circuit had doubled each year, and he projected that this rate of growth would continue into the future. This prediction became known as Moore’s Law, which, in its revised 1975 form, states that the number of transistors in an integrated circuit doubles about every two years.


Moore’s main goal was to convey the idea that integrated circuits would lower the cost of technology: the larger the number of components, the lower the cost per component, therefore decreasing the price of computers and other electronic devices. According to the law, which became an industry goal and consequently a self-fulfilling prophecy, processor speeds would increase exponentially, because transistors would keep shrinking so that more units could be packed onto a chip. The smaller the transistors, the shorter the distances electrons have to travel, therefore increasing a computer’s efficiency and speed. So while the number of transistors on an integrated circuit has doubled every 24 months, computing power has doubled about every 18 months.
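
As a back-of-the-envelope illustration of what “doubling every two years” compounds to, here is a small Python sketch. The starting point, Intel’s 4004 from 1971 with roughly 2,300 transistors, is a commonly cited figure; the projection itself is purely illustrative, not a claim about any specific chip.

```python
# Back-of-the-envelope Moore's Law projection: transistor counts doubling
# roughly every two years, starting from Intel's 4004 (1971, ~2,300
# transistors). Purely illustrative.

START_YEAR, START_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1991, 2011, 2020):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```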

Moore’s Law was regarded as a «rule of thumb» rather than a law: the technology industry intended to keep up with its growth rate and so set a road map based on the continuous innovation of transistors and chips in line with Moore’s Law. Because of the ever-increasing demand for devices, manufacturers strive to create next-generation chips, lest they become obsolete in the face of innovating competition.


Many devices that we use nowadays owe their existence to the evolution of integrated circuits; Moore himself stated that «Integrated circuits will lead to such wonders as […] personal communications equipment», currently known as mobile phones. Our laptops and electronic wristwatches, medical imaging and digital processing technologies were made possible because of Moore’s Law. In fact, it has been argued that Moore’s Law is one of the main drivers of the economic growth seen in the last 50 years, as it has led to tremendous gains in productivity.

However, over the past decade the pace of innovation has slowed down, with Moore himself predicting the end of his law by 2025. To explain why, here is a small primer on transistors:

A transistor, while a simple invention in concept, is one of the foundations of our modern technological society. Without it, you would not be reading this article. Your phone probably has more than 1 billion transistors, which are microscopic and manufactured with incredible precision on a thin wafer of silicon. A transistor is like a switch, or gate, that either blocks or lets a small current of electrons pass through it. It is the billions of combinations of these gates opening and closing that permit a computer to perform all its tasks, from basic arithmetic calculations to displaying this text on your screen.
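
To see how on/off switches add up to computation, here is a toy Python model (an illustration of the logic, not how chips are actually built or programmed): starting from a single NAND gate, a handful of compositions already performs binary addition.

```python
# Toy model: composing switch-like gates into arithmetic.

def NAND(a: int, b: int) -> int:
    """The primitive 'gate': outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate can be built from NAND alone:
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a: int, b: int):
    """Add two bits: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
```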

Transistors today are around 10 to 20 nanometres – only around 100 times larger than an oxygen molecule. At sizes this small, engineers have run into a problem they cannot economically fix: quantum tunnelling. The laws of quantum mechanics take hold and electrons start to obey different rules, sometimes simply crossing a closed transistor and corrupting data in the process. This problem has no economical solution for now, and so we are reaching the death of Moore’s Law.

The end of Moore’s Law has long been considered an inevitability. Gordon Moore himself set a conservative timeframe in his original 1965 paper, estimating that his observed rule would remain “constant for at least ten years”. And its death has been proclaimed many times since. Only the ingenuity of the industry has kept it alive for so long.

But this endgame does not signify an end for the advancement of computational power. Several alternative possibilities lie on the horizon, and these range from the simple and intuitive to the fantastical possibilities brought by the advent of quantum computing.

On the intuitive side, we could focus on creating specialized chips for certain tasks. The processor that powers your device (smartphone or computer) is a “beefy” unit, capable of undertaking a wide variety of tasks, albeit at the cost of efficiency. For very specific tasks, specialized chips may be created. Other solutions may lie anywhere from finding new materials to improving software so that it takes full advantage of current architectures – did you know that Excel does not use the extra cores in your computer to handle those heavier, 20.000-row tasks? Although these types of improvements can still take us a long way – perhaps a forty-fold increase in computing power, around 5 years of innovation at the historical rate – they can only take us so far. As for new materials, graphene has been touted as a possible replacement, but the research is still in its early stages.
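
As a minimal sketch of those software-side gains, the following Python snippet fans an embarrassingly parallel workload out across all available CPU cores instead of running it on one; the workload and its size are made up for illustration.

```python
# Spreading a heavy, embarrassingly parallel job across CPU cores.
import math
from multiprocessing import Pool

def heavy_task(n: int) -> float:
    # Stand-in for one chunk of expensive, independent work.
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8

    # Serial: one core does everything, one chunk after another.
    serial = [heavy_task(n) for n in jobs]

    # Parallel: the same chunks fanned out to all available cores.
    with Pool() as pool:
        parallel = pool.map(heavy_task, jobs)

    assert serial == parallel  # same results, a fraction of the wall time
```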

On the more futuristic side, quantum computing could usher in a new age of technology to rival that of the integrated circuit. Last year, Google published a paper in Nature claiming to have solved, in under 4 minutes, a task that would have taken a modern supercomputer – the processing equivalent of “around 100.000 desktop computers” – over 10.000 years. There are also studies looking into conceptually abstract possibilities like using DNA to perform arithmetic and logic operations, as well as storage.

Although we will probably never have personal quantum or “DNA” computers, because their upkeep costs and upfront investment are prohibitively high, a world in which a handful of companies offer processing solutions to anyone via cloud computing, much like we see today, sounds plausible.

What impact could the end of Moore’s Law have on the economy? How can we bring about the age of AI if we do not have the hardware to support software innovation? We can only wait and see what happens.

Sources: Intel, Washington Post, Nature, 311 Institute, MIT Technological Review, NY Times, Wikipedia

The Renewable Energy Sources Act: From words to actions

Karl Marx once wrote that philosophers had hitherto only interpreted the world, when the point is to change it. And change is necessary – human beings reached their current state of evolution thanks to their capacity to adapt and overcome.

Nowadays, people (or at least the vast majority) are concerned with climate change and all its associated consequences, such as the melting of ice caps, rising sea levels, the extinction of species and the increasing frequency of natural disasters. We have made estimates and we have searched for solutions – once again, we have looked to innovation for a way out. Now, it is our job to act rather than react.

The fight against climate change must include a shift towards renewable energies. The possibility of substituting fossil fuels with energy harnessed from wind, sun, earth and water raises great expectations but also creates great opportunities. The problem remains, however, of how to make these energies accessible, cheap and efficient – and this is why the German example is worth highlighting.

In 2000, Germany launched the Renewable Energy Sources Act, or EEG (Erneuerbare-Energien-Gesetz), a set of laws built around a feed-in tariff (a transfer made to households and businesses that use renewable energies to generate their own electricity) in order to ‘enable the energy supply to develop in a sustainable manner in particular in the interest of mitigating climate change and protecting the environment, to reduce the costs to the economy and not least by including long-term external effects, to conserve fossil energy resources and to promote the further development of technologies to generate electricity from renewable energy sources’ (Renewable Energy Sources Act, 2014). This scheme replaced the Electricity Feed-in Act (1991), the first green electricity feed-in tariff in the world, which was challenged before the European Court of Justice as an illegal threat to competition (Article 87 of the EC Treaty).

Consequently, in 1999, Hermann Scheer and Hans-Josef Fell developed the EEG legislation. This law imposed on grid operators the obligation to prioritise the purchase of electricity generated from hydropower, wind, solar radiation or geothermal energy over nuclear power, gas or coal. Grid operators also had to pay producers compensation based on the technology used and the quantity of energy purchased, giving producers a feed-in tariff guaranteed for 20 years, over which they could recover their investment. The trick, in order to avoid the scrutiny that befell the 1991 Electricity Feed-in Act, was that those payments were not considered public subsidies: they derived not from taxation but from a surcharge shared among consumers, so there was no burden on Germany’s public finances. The EEG also foresaw a regular decrease in the feed-in tariffs (known as ‘degression’) as technologies became more cost-efficient; a stylized model of this mechanism follows below.
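
To make the mechanism concrete, here is a stylized Python model of a feed-in tariff with degression. The tariff level and degression rate are hypothetical round numbers chosen for illustration, not the actual EEG rates.

```python
# Stylized EEG-type scheme: a plant locks in the tariff of its connection
# year for 20 years, while tariffs for *new* plants fall each year.
# Rates below are hypothetical, not the actual EEG figures.

INITIAL_TARIFF = 0.50   # assumed EUR/kWh for a plant connected in year 0
DEGRESSION = 0.05       # assumed 5% annual cut for newly connected plants

def tariff_for_new_plant(connection_year: int) -> float:
    """Tariff locked in by a plant connected in the given year."""
    return INITIAL_TARIFF * (1 - DEGRESSION) ** connection_year

def lifetime_revenue(connection_year: int, annual_kwh: float) -> float:
    """Guaranteed revenue: the locked-in rate, paid for 20 years."""
    return tariff_for_new_plant(connection_year) * annual_kwh * 20

for year in (0, 5, 10):
    print(f"Connected in year {year}: "
          f"{tariff_for_new_plant(year):.3f} EUR/kWh locked in, "
          f"~{lifetime_revenue(year, 10_000):,.0f} EUR over 20 years")
```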

The EEG legislation has been reviewed over the years and underwent changes in 2004, 2009, 2012, 2014 and 2017.


IMPACT

Since the EEG came into force in 2000, the cost of photovoltaic systems decreased by 50% within 5 years. As for the share of renewable energy, Germany’s initial target was for 12.5% of its electricity production to derive from renewable sources by 2020. By 2007 it had already reached 14.7%; in 2014 it stood at 27.4% and in 2018 at 37.8%. Currently, the target for 2050 is at least 80%. Data for the period between 1990 and 2015 shows that wind was the renewable source that contributed most to Germany’s green transition in terms of gross electricity generation.


Gross generation of electricity by source in Germany

Besides this, thousands of long-lasting jobs have been created around these clean sources of energy – wind employed the most people, more than doubling its job numbers between 2004 and 2013, followed by biomass and solar. An abrupt transition to renewable energy usually stokes fears of job losses, which weigh heavily on public opinion. The data, however, shows that the transition to renewable energies has huge potential to create more jobs than it destroys.


Publicly funded research administration

Nowadays, in Germany, renewable energy can compete with fossil fuels, even taking into account the cost of transporting that energy and of building the infrastructure required to produce it. For renewables, the cost per kilowatt-hour depends on many natural factors, such as the amount of wind and hours of sunlight, but it is, on average, below $0.11/kWh, with onshore wind and geothermal the cheapest (both $0.03/kWh) and biomass and offshore wind the most expensive ($0.09/kWh and $0.11/kWh, respectively). Coal costs about $0.13/kWh and nuclear energy around $0.09/kWh.

This bet on renewable energy turned out to be very profitable for Germany. Cheaper energy production made the country much more competitive in terms of electricity prices, reinforcing its position as a net exporter of energy until recently. France, Austria and the Netherlands are the most common destinations of German energy.


ACCEPTANCE

The EEG legislation can be considered a social and economic success – it has increased the use of renewable energy while raising awareness about pollution, created thousands of jobs, and allowed Germany to become profitable in this sector. This success is further demonstrated by the attempts of other countries (Brazil, for example) to copy the feed-in tariff in order to accelerate their own transition to renewable energies.

On the 8th of May 2016, there was a point during the day at which renewable sources were supplying 87% of the energy being consumed by the entire country. Production was so high that producers were obliged to offer consumers free energy in order to absorb the surplus electricity.

However, the EEG is far from perfect and has been criticized many times to this day. The biggest grievance against the law was the high level of feed-in tariff support. This position gained the backing of the European Commission in 2014 (even though, to this day, the EC maintains that ‘well-adapted feed-in tariff regimes are generally the most efficient and effective support schemes for promoting renewable electricity’) and led to some modifications of the legislation. In 2014, what became known as EEG 2.0 was adopted, under which compensation rates ceased to be set by the government and came to be determined through auctions.

This auction system was criticized too. In 2012, estimates indicated that almost half of the renewable energy capacity in Germany was owned by citizens through energy cooperatives and private installations. According to critics, the auction system would harm these kinds of producers, threatening much of the development enabled by the original EEG legislation.

Today, Germany aims to obtain between 80% and 100% of the electricity consumed within its territory from renewable sources by 2050. The path will not be easy in a country where big coal plants remain a major source of energy, even after all these transition efforts. In July 2019, Germany became, for the first time in almost two decades, a net importer of energy.

Once again, capacity to adapt and overcome is required.
