We live in an era of extraordinary abundance. At any moment, people are exposed to far more alternatives than previous generations ever were, across nearly every domain of life. The world has never offered so much choice, yet many individuals feel increasingly overwhelmed by it.
Psychological research suggests that, while choice is essential for autonomy and well-being, too many options can have the opposite effect on decision-making quality and satisfaction. This phenomenon challenges the assumption of classical economics that more alternatives lead to better outcomes.
The psychology of choice overload
When confronted with a large number of alternatives, individuals often experience difficulty in making decisions, a tendency known in the literature as choice overload or overchoice.
One of the earliest and most cited demonstrations of this effect was the so-called “jam experiment” conducted by psychologists Sheena Iyengar and Mark Lepper. In their study, shoppers at a local market were presented with either 24 varieties of jam or just 6, and while more customers stopped at the larger display, far fewer made a purchase compared to those who saw fewer options.
This counterintuitive result highlights a central paradox: an abundance of choice can reduce the likelihood of a decision being made at all. The cognitive load associated with evaluating too many alternatives can lead to what psychologists identify as decision paralysis, where individuals delay or avoid making any choice due to overwhelming complexity.
In this context, research points to additional consequences of choice overload, including increased stress, regret for forgone options, and lower confidence in the choices that are made.
Figure 1: Illustration of the “jam experiment” showing how larger assortments attract more shoppers but lead to lower purchase rates compared to smaller assortments. Source: Your Marketing Rules
The cognitive cost of choice overload
From a neuroscientific perspective, decision-making consumes cognitive resources. In particular, the prefrontal cortex, often described as the brain’s executive center for planning and evaluation, plays a significant role in choosing among alternatives. As the complexity of options increases, so does the mental effort required to process information and make judgments, a burden known as cognitive load. When faced with an excessive number of alternatives, this increased load can exceed working memory capacity, leading to mental fatigue and suboptimal choices.
In extreme cases, prolonged decision-making under such conditions can trigger what psychologists term decision fatigue, a decline in decision quality that arises after repeated cognitive exertion during choice tasks. Importantly, decision fatigue often results in a shift toward simpler heuristics or impulsive reactions based on biases, rather than thoughtful deliberation.
How the digital era multiplies our choices
In the digital era, choice overload permeates everyday life: a typical online marketplace now offers thousands of products, each often presenting multiple ratings, features, and reviews. Streaming services aggregate tens of thousands of titles, and users often report spending more time choosing what to watch than actually watching.
Figure 2: Number of TV programs produced in the U.S. from 1950 to 2022, showing accelerated growth in the digital age. Source: IMDB
Even outside market-based decision environments, people face an ever-expanding range of alternatives in careers, travel destinations, social interactions, and financial decisions. Behavioral economists and psychologists note that this proliferation of options can paradoxically diminish overall satisfaction and confidence in one’s choices. This trend also shapes broader macroeconomic dynamics. When choices become overwhelming, people participate less actively in markets, often stepping back from decisions altogether. Evidence from e-commerce illustrates that when faced with an excess of product options, many consumers simply postpone or abandon their purchases.
Figure 3: Proportion of subjects who bought any pens as a function of the number of choices available. Source: Ness Labs
The human cost of abundance
Although choice is often associated with autonomy and freedom, an excess of options may lead to psychological downsides. One well-studied distinction in the literature differentiates between “maximizers”, individuals who seek the best possible option, and “satisficers”, those who settle for good enough. When faced with abundant choices, maximizers tend to experience higher levels of regret, lower satisfaction, and greater decision anxiety than satisficers.
Further research suggests that an abundance of choice can even undermine self-control and promote impulsive behavior, particularly after making repeated decisions. This effect has been documented in studies showing that frequent decision-making can deplete mental resources, leading to cognitive and emotional fatigue.
Beyond individual psychology, widespread choice overload may contribute to broader societal patterns of stress and dissatisfaction. Rather than eliciting joy, the freedom to choose can inflate expectations and intensify regret, particularly when people believe a better choice was possible.
Toward smarter choices
Despite its potential drawbacks, choice is still a fundamental part of our lives and need not be feared. A growing body of research indicates that individuals can navigate abundant options more effectively through strategic decision frameworks and environmental design. For example, consciously limiting the number of alternatives under consideration, a practice known as pre-filtering, has been shown to streamline decision-making and reduce cognitive strain. Other helpful approaches include setting clear criteria before engaging in selection, focusing on satisficing rather than maximizing when faced with many options, and using structured heuristics that prioritize key attributes over exhaustive comparison.
Behavioral economists refer to these techniques collectively as part of choice architecture, which aims to structure decision environments in ways that support better outcomes without eliminating freedom of choice.
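To make the contrast between maximizing and satisficing concrete, here is a minimal sketch in Python using a hypothetical catalog, scoring function, and threshold: the maximizer scores every alternative exhaustively, while the satisficer accepts the first option that clears a criterion set in advance.

```python
# Illustrative sketch (hypothetical catalog and scoring): maximizing vs. satisficing.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Option:
    name: str
    price: float   # lower is better
    rating: float  # 0-5, higher is better

def score(opt: Option) -> float:
    """Weighted score that prioritizes a few key attributes instead of comparing everything."""
    return opt.rating * 2.0 - opt.price * 0.1

def maximize(options: List[Option]) -> Option:
    """Maximizer: exhaustively evaluate every alternative and keep the single best one."""
    return max(options, key=score)

def satisfice(options: List[Option], threshold: float) -> Optional[Option]:
    """Satisficer: accept the first alternative that clears a criterion chosen in advance."""
    for opt in options:              # pre-filtering could shrink this list before the loop
        if score(opt) >= threshold:
            return opt
    return None                      # nothing met the pre-set criteria

catalog = [
    Option("A", price=30, rating=3.9),
    Option("B", price=25, rating=4.4),
    Option("C", price=60, rating=4.8),
]

best = maximize(catalog)                         # compares all options -> "B"
good_enough = satisfice(catalog, threshold=4.5)  # stops at the first acceptable option -> "A"
print(best.name, good_enough.name if good_enough else "no acceptable option")
```

The point is not the particular numbers but the structure of the rule: the satisficer bounds both the search and the cognitive load it carries, which is exactly what pre-filtering and clear upfront criteria aim to do.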
Conclusion
The paradox of choice illustrates a key tension in modern life: while freedom and autonomy are deeply valued, an excess of options can undermine the satisfaction and confidence individuals seek. Across consumer behavior, digital decisions, and everyday life, too many alternatives can lead to fatigue, regret, and disengagement.
Understanding the psychological and neural mechanisms behind choice overload does not require rejecting freedom; rather, it invites a more intentional relationship with our decisions.
Sources: When Choice is Demotivating: Can One Desire Too Much of a Good Thing? by Iyengar & Lepper (Journal of Personality and Social Psychology); The Paradox of Choice: Why More Is Less by Schwartz; Why Do We Have a Harder Time Choosing When We Have More Options? by The Decision Lab; On the Advantages and Disadvantages of Choice: Future Research Directions in Choice Overload and Its Moderators by Misuraca, Nixon, Miceli, Di Stefano, Scaffidi, Abbate (Frontiers in Psychology); Choice Overload: A Conceptual Review and Meta-Analysis by Chernev, Böckenholt, Goodman (Journal of Consumer Psychology); Decision Fatigue in E-Commerce: How Many Product Options Are Too Many? by Winsome Writing Team (Winsome); The Paradox of Choice: How Too Many Options Affect Consumer Decision-Making by Winsome Writing Team (Winsome).
Artificial Intelligence (AI) is no longer a futuristic concept but an integral part of modern society. It shapes decisions in finance, healthcare, law enforcement, and social media, influencing how people interact with technology and each other. The rapid integration of AI, however, brings with it a host of ethical concerns. Questions about fairness, accountability, and transparency challenge the assumption that technological progress is inherently beneficial. AI does not exist in a vacuum—it reflects the values and biases of those who create and deploy it. While ethical AI has become a widely discussed concept, turning principles into action remains a significant challenge.
Between Innovation and Responsibility
The potential benefits of AI are vast. Automated systems can improve efficiency, analyze massive datasets, and assist in complex decision-making processes. In industries such as healthcare, AI-driven models can detect diseases early, optimize treatment plans, and personalize medical recommendations. In business, predictive analytics can enhance supply chain management and customer experiences. Despite these promising applications, the ethical risks of AI cannot be ignored.
A key issue lies in the tension between innovation and responsibility. Companies and developers race to bring new solutions to market, often prioritizing speed and market dominance over careful ethical consideration. AI ethics frameworks have been introduced to address this, but they frequently lack enforceability, leaving ethical concerns in the hands of the very entities that stand to profit from AI’s widespread adoption.
Challenges of Ethical Implementation
Ethical AI is easier to discuss than to implement. One of the greatest barriers is the lack of transparency of AI systems. Many machine learning models operate as “black boxes,” meaning their decision-making processes are difficult to interpret, even by their creators. This lack of transparency complicates accountability, making it unclear who should be held responsible when AI systems make biased or harmful decisions.
Another persistent challenge is bias in AI models. AI systems are trained on historical data, which often contains existing biases related to race, gender, and socioeconomic status. Rather than eliminating human prejudice, AI has the potential to reinforce and amplify systemic inequalities. Addressing these biases requires a combination of diverse training datasets, algorithmic audits, and ongoing oversight—none of which are currently standard practices across industries.
Additionally, economic incentives often clash with ethical considerations. The AI industry is dominated by tech giants that compete for market share, patents, and financial gains. Ethical concerns, such as privacy and fairness, are often secondary to profit-driven objectives. Without clear regulatory frameworks, companies can claim adherence to ethical principles while continuing practices that favor commercial success over social responsibility.
Bridging the Gap Between Theory and Practice
For AI ethics to move beyond discussion and into action, structural changes are necessary. Regulatory enforcement is one crucial step. Governments and international organizations must establish clear legal guidelines that define ethical AI development and deployment. Without binding regulations, AI ethics remains largely voluntary, dependent on corporate goodwill rather than enforceable standards.
Another important approach is enhancing AI explainability. Researchers and developers need to prioritize the creation of AI systems that are interpretable and understandable. This includes designing models with built-in transparency measures, providing clear documentation on decision-making processes, and ensuring that AI-driven recommendations can be challenged when necessary.
Additionally, inclusive AI development is crucial. Many AI development teams lack diversity, not only in gender and ethnicity but also in socioeconomic background, cultural perspective, and disciplinary expertise, which limits their ability to recognize and mitigate biases in their models. Bringing a broader range of perspectives into AI research and implementation, and fostering collaboration between technologists, ethicists, policymakers, and affected communities, helps ensure that AI serves a wider spectrum of societal needs.
Case Study: IBM’s Ethical AI Approach
IBM (International Business Machines Corporation) has positioned itself as a leader in ethical AI by actively addressing issues of fairness, transparency, and accountability. Unlike many companies that focus solely on AI innovation, IBM has taken significant steps to integrate ethics into AI development through its AI Ethics Board, which oversees responsible AI deployment.
A key contribution to ethical AI is IBM’s focus on fairness and explainability. The company has developed the AI Fairness 360 toolkit, an open-source library designed to help developers detect and mitigate biases in machine learning models. By making these tools publicly available, IBM encourages greater transparency and accountability across the AI industry.
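As a rough sketch of the kind of check such fairness toolkits automate, the snippet below computes one classic group-fairness metric, the disparate impact ratio, by hand on hypothetical hiring data; it illustrates the concept rather than the AI Fairness 360 API itself.

```python
# Illustrative sketch (hypothetical data): the disparate impact ratio, one of the
# classic group-fairness metrics that toolkits such as AI Fairness 360 automate.
# This is a hand-rolled illustration of the concept, not the AIF360 API.
import pandas as pd

# Toy hiring outcomes: 'group' is a protected attribute, 'hired' the model's decision.
df = pd.DataFrame({
    "group": ["privileged"] * 10 + ["unprivileged"] * 10,
    "hired": [1, 1, 1, 0, 1, 1, 0, 1, 1, 0,   # privileged: 7/10 favorable outcomes
              1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # unprivileged: 3/10 favorable outcomes
})

rates = df.groupby("group")["hired"].mean()
disparate_impact = rates["unprivileged"] / rates["privileged"]

print(rates)
print(f"disparate impact ratio: {disparate_impact:.2f}")

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8 as
# potential adverse impact that warrants a closer audit of the model and its data.
if disparate_impact < 0.8:
    print("Warning: possible adverse impact against the unprivileged group.")
```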
The company has also taken a strong stance on regulatory engagement, advocating for clear legal frameworks to govern AI systems. Unlike some competitors that resist regulation, the company supports AI governance standards that ensure responsible development and deployment.
A notable example of the firm’s commitment to ethical AI is its decision to exit the facial recognition market in 2020. Concerns over racial bias and mass surveillance led IBM to discontinue its facial recognition services, citing the technology’s potential for misuse in law enforcement and violations of civil rights. This decision demonstrated that companies could prioritize ethics over profitability, setting a precedent for responsible AI business practices.
IBM’s approach to ethical AI implementation offers several key lessons. The company has demonstrated the importance of proactive governance by establishing an internal AI Ethics Board, ensuring that ethical considerations are embedded throughout the AI development process. To enhance transparency and mitigate bias, it has developed open-source tools such as AI Fairness 360, which help detect and reduce discriminatory patterns in machine learning models. Additionally, the corporation has been a strong advocate for regulatory frameworks, collaborating with policymakers to create enforceable standards that promote responsible AI governance. While the initiatives are not without challenges, they provide a blueprint for other organizations seeking to balance AI innovation with ethical responsibility.
A Call for Collective Responsibility
The ethical challenges posed by AI are not solely the responsibility of developers or policymakers—society as a whole must engage in shaping the future of AI. Consumers should be informed about how AI affects their lives, researchers must prioritize ethical considerations in innovation, and governments must create legal structures that uphold fairness, transparency, and accountability.
The debate around AI ethics is not simply about mitigating harm; it is about ensuring that technological progress aligns with human values. AI should not be left to develop unchecked under the assumption that efficiency outweighs ethical concerns. A proactive approach—one that prioritizes responsible AI practices over damage control—will be essential in defining how AI serves humanity in the years to come.
Sources
Arbelaez Ossa, L., Lorenzini, G., Milford, S. R., Shaw, D., Elger, B. S., & Rost, M. (2024). Integrating ethics in AI development: A qualitative study. BMC Medical Ethics, 25(10). https://doi.org/10.1186/s12910-023-01000-0
IBM. (2020). IBM CEO’s letter to Congress on facial recognition and responsible AI policy. IBM Newsroom. https://newsroom.ibm.com/2020-06-08-IBM-CEO-Arvind-Krishna-Issues-Letter-to-Congress-on-Racial-Justice-Reforms
Powers, T. M., & Ganascia, J.-G. (2020). The ethics of the ethics of AI. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford Handbook of Ethics of AI (pp. 1–26). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.2
The landscape of work is undergoing a significant transformation, driven by technological advancements, evolving societal attitudes, and global events such as the COVID-19 pandemic. The pandemic acted as a catalyst, accelerating the adoption of remote work and prompting companies to reassess traditional work structures. Central to this evolution are the concepts of remote work, hybrid models, and the reimagining of traditional office spaces. These developments are not only altering the physical location where work occurs but are also reshaping the dynamics of the global workforce.
Remote Work: A Lasting Change?
The adoption of remote work has seen a substantial increase, particularly during the pandemic. Gallup reports that U.S. workers averaged 3.8 remote workdays per month in 2023, a rise from 2.4 days before the pandemic. This shift has led to enhanced productivity for many, as employees experience fewer office-related distractions and a better work-life balance. Additionally, companies can now access a broader talent pool without geographical constraints.
However, remote work is not without its challenges. Feelings of isolation and loneliness are common among remote workers, stemming from reduced face-to-face interactions. This can lead to a weakened sense of team cohesion and connection to the company’s culture. Moreover, the blurring of boundaries between personal and professional life can result in difficulties disconnecting from work, potentially leading to burnout. A survey by PwC in 2022 highlighted that 39% of employees were concerned about not receiving adequate training in digital and technology skills from their employers, underscoring the need for ongoing support in a remote setting.
Hybrid Models: The Emerging Standard
To balance the advantages and drawbacks of remote work, many organizations are adopting hybrid work models, which combine in-office and remote working. A PwC survey found that 46% of companies anticipated implementing a hybrid model by the end of 2022. This approach allows employees to engage in collaborative activities in the office while performing focused tasks remotely.
For hybrid models to be effective, a reevaluation of office design is essential. According to workplace strategy experts, companies are shifting away from the traditional cubicle-based layout in favor of open, flexible spaces that encourage teamwork and innovation, for example by reducing the number of assigned desks in favor of shared collaborative areas. Leading companies like Google and Facebook are at the forefront of redesigning their offices to support flexible layouts and incorporate technology that facilitates seamless collaboration between in-office and remote employees.
The Office’s Evolution: From Workspace to Collaboration Hub
The traditional office is being redefined from a place solely for individual work to a hub for collaboration and creativity. In this new model, the office complements remote work by providing spaces designed for team interactions and innovative endeavors. According to a report by JLL (Jones Lang LaSalle), while global office occupancy rates have declined, there is an increased demand for spaces that support collaborative work.
This shift has significant implications for the commercial real estate sector. As companies reduce their physical office spaces, property owners are compelled to offer more flexible leasing options and rethink office configurations to accommodate a more mobile workforce. For instance, some landlords are transforming traditional office buildings into co-working hubs, while others are integrating wellness-oriented designs that include outdoor workspaces, improved ventilation, and enhanced communal areas to foster employee engagement. The focus is now on creating environments that enhance employee experience, promote well-being, and support a variety of work styles.
Conclusion
The future of work is characterized by flexibility and adaptability. Remote work and hybrid models are becoming integral components of organizational strategies, necessitating a reimagining of the traditional office. As businesses navigate this evolving landscape, they must address the challenges associated with these new work arrangements, such as maintaining company culture, ensuring employee well-being, and providing adequate support and training. By embracing these changes thoughtfully, organizations can create a dynamic work environment that meets the needs of their employees and positions them for success in a rapidly changing world.
The Intellectual and Environmental Ethics of Artificial Intelligence
In recent years, artificial intelligence (AI) has had a pervasive impact on our lives, from assembling cars to determining which ads we are exposed to on social media. However, generative AI, a new category of technological resource, has taken the world by storm, with OpenAI’s ChatGPT alone reaching 300 million weekly active users in December 2024 (Singh, 2025), and it carries major implications not only for the environment but also for the uniquely human ability to envision and create. According to Gartner, AI-driven data analysis is set to account for more than 50% of all business analytics by 2025, while Forbes reports that AI-powered advertising tools can increase ROI by up to 30% compared to traditional methods.
In fact, as you read this sentence, generative AI programs may already be drafting emails, debugging code, and even suggesting tonight’s dinner recipe, all at the same time.
With the rise of AI usage reshaping the way we work and interact, and with the emergence of DeepSeek, which some project will surpass ChatGPT’s performance (Wiggers, 2025), the benefits seem clear: studies predict productivity improvements of up to 40% (MIT Sloan, 2023). Nevertheless, AI’s groundbreaking promise to improve performance has lately been tempered by growing concerns that these intricate and mystifying systems may do more societal harm than economic good, namely regarding creative work and academic integrity (UNESCO, n.d.).
As people progressively feel the immense rush of having more and more automated activities in their lives while companies hurry to improve efficiency, one should stop to think and ask:
What are the trade-offs for such benefits?
Intellectual Property
“And your novel?” “Oh, I put in my hand and rummage in the bran pie.” “That’s so wonderful. And it’s all different.” “Yes, I’m 20 people.”
– Virginia Woolf and Lytton Strachey
Retrieved from In the Margins: On the Pleasures of Reading and Writing
Creation is a complex and often unappreciated process, in which the creative must give shape to wild, wandering, unstructured ideas – many times, rummaging in the bran pie to see what comes out – to form a cohesive, original piece. The realization that this type of work must be protected, so as to justify its high stakes, gave birth to the concept of intellectual property.
According to the World Intellectual Property Organization (WIPO), intellectual property (IP) refers to “creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce”. IP is protected by law through intellectual property rights (IPR), which encompass creators’ rights to be credited for their own work, to uphold its integrity, and to prevent others from using it without permission. Generative AI challenges those pre-established rules.
By generating previously unseen imagery from prompts, creating adapted screenplays set in the worlds of your favorite novels, and even composing catchy songs about the dean of your school – always surprisingly fast – AI is increasingly taking its place at the creative’s desk. But there is a catch: GenAI does not produce entirely original material. Rather, these tools are trained on massive amounts of data, from which they learn patterns that enable them to respond to a prompt (MIT Sloan 2021).
This becomes problematic when one starts to ask who owns the content used to train generative AI. The matter has already been brought up in the courtroom: in Andersen v. Stability AI et al. (2022), several artists filed a class-action copyright infringement lawsuit against AI organizations, claiming unauthorized use of their work for AI training (Harvard Business Review 2023). Ultimately, the courts’ decisions will shape the interpretation of the fair use doctrine.
Artists around the world are also starting to take the matter into their own hands. One of the most impactful examples is the Writers Guild of America strike that marked 2023. The event culminated in an agreement which, among other things, laid the groundwork for rules on the use of artificial intelligence. Although writers may use AI tools in their work, companies are prohibited from forcing them to do so – which would likely result in lower-paying contracts. More importantly, “the WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law” (Vox 2023).
AI’s Role in Academic Integrity
“One has to be honest in one’s work, acknowledge others’ work properly, and give credit where one has used other people’s ideas or data.”
– Campbell & Waddington, 2024
Academic integrity is a critical component of education and research within today’s rapidly evolving academic landscape, as it reflects the value of the qualifications offered by an institution as well as the ethical conduct of students. It refers to the shared commitment of students and teachers to respect intellectual property and uphold moral and ethical standards in academic work. According to the European Network for Academic Integrity (ENAI), the concept includes “compliance with ethical and professional principles, standards, practices and consistent system of values that serves as guidance for making decisions and taking action in education, research, and scholarship”.
With the growing presence of generative AI, students and academic researchers are supported in various aspects of their work, including data analysis, decision-making and writing. AI has, in this sense, revolutionized the academic world, offering unmatched assistance. Nevertheless, its rapid integration into the sector, as well as its inability to understand and produce authentic scholarly work, raises concerns about students’ critical thinking capacities, plagiarism and overall academic integrity.
In fact, a study conducted with a sample of 5,894 students across Swedish universities highlights a growing dependency on AI tools, with over 50% of respondents viewing the use of chatbots positively, and over a third of students reporting regular reliance on large language models (LLMs) such as ChatGPT in their education (Malmström et al. 2023). As AI tools become progressively more user-friendly, barriers to their wide adoption are significantly reduced. Notably, ChatGPT and similar AI applications can serve as self-learning tools, assisting students in acquiring information, answering questions and resolving problems instantaneously, thereby enriching learning experiences and offering personalized support.
However, despite its potential to enhance academic work, people’s perceptions of its misuse for academic shortcuts remain mixed (Schei et al. 2024). The debate further extends to ethical territory, as AI-facilitated plagiarism and academic misconduct become increasingly prevalent, possibly encouraging a culture of intellectual laziness and plagiarism practices such as mosaic plagiarism, which involves taking phrases from a source without crediting it, or copying another person’s ideas and rewording them with synonymous phrasing without proper attribution (Farazouli et al. 2023).
Data sets used by LLMs often rely on information collected by scraping third-party websites and published work. While this practice is not necessarily considered misconduct, the data may be obtained without explicit consent from the sources, meaning that one’s AI-generated work or writing material may contain uncredited phrases and ideas. One example is the lawsuit filed against OpenAI by The New York Times over copyright infringement and the unauthorized use of published content to train AI models (The New York Times 2023). Furthermore, critics also point out generative AI’s technical limitations and the biases inherited from its training data, as it may produce incorrect or outdated information, raising broader reliability concerns. As AI becomes more deeply integrated into academia, its misuse and over-reliance, without proper education, are a prominent cause for concern.
Environmental Impact and Water Consumption
Another factor to account for when addressing AI usage and reliance is its environmental impact, which is not often considered by end-users.
As worldwide corporate AI investment has grown exponentially in recent years, from $12.75B in 2015 to $91.9B in 2022 (Statista 2024), so has its impact on water consumption, since AI models (especially GPT-4) require significant energy and water resources to function.
Global total corporate AI investment from 2015 to 2022 – Statista
When assessing water consumption in data centers, one should account for both the direct, on-site use of water to cool servers and the indirect use of water in generating the electricity they consume (OECD.AI n.d.).
Data centers require fresh water for cooling, via cooling towers, liquid cooling, or air conditioning, while the power plants supplying their electricity also need large amounts of water. Thus, training and running AI models can consume millions of liters of water, and even small AI queries use meaningful amounts, since data centers consume roughly 1.8 to 12 liters of water per kWh of energy.
AI’s water usage is therefore a growing concern: its water demands are outpacing efficiency gains and are projected to reach up to 6.6 billion cubic meters (approximately six times Denmark’s annual water withdrawal) (Li et al. 2025).
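To give a sense of scale, the back-of-the-envelope sketch below combines the 1.8–12 litres-per-kWh range cited above with an assumed per-query energy cost; the per-query figure is a hypothetical round number for illustration only.

```python
# Back-of-the-envelope sketch of AI water use. The 1.8-12 litres-per-kWh range
# comes from the text above; the energy per query is an assumed illustrative
# value, not a measured one.

WATER_PER_KWH_LOW = 1.8    # litres of water per kWh (low end of the cited range)
WATER_PER_KWH_HIGH = 12.0  # litres of water per kWh (high end of the cited range)

ENERGY_PER_QUERY_KWH = 0.003  # assumption: roughly 3 Wh per generative-AI query
QUERIES = 1_000_000           # one million queries

for label, litres_per_kwh in (("low", WATER_PER_KWH_LOW), ("high", WATER_PER_KWH_HIGH)):
    water_litres = QUERIES * ENERGY_PER_QUERY_KWH * litres_per_kwh
    print(f"{label:>4} estimate: {water_litres:,.0f} litres for {QUERIES:,} queries")

# low estimate: 5,400 litres; high estimate: 36,000 litres -- small per-query
# figures add up quickly at the scale of hundreds of millions of weekly users.
```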
The hazard that AI imposes on the environment goes far beyond the hydrological issue discussed.
In a study carried out by Strubell et al. (2020), the carbon dioxide emissions associated with training a single common natural language processing (NLP) model were shown to greatly surpass those of familiar everyday activities. Training an AI model under such conditions yields approximately 600,000 lb of carbon dioxide emissions, whereas driving a car for its entire lifetime produces only about one fifth of that amount.
There is also concern over the amount of energy used by artificial intelligence facilities. In this regard, Alex de Vries (2023) estimated that, by 2027, the AI industry could be consuming between 85 and 134 terawatt-hours (TWh) annually, comparable to the energy used by a small country such as the Netherlands. Additionally, GenAI tools may use nearly 33 times more energy to carry out a task than task-specific software would (World Economic Forum 2024). What is more, the extraction of the natural resources that go into AI hardware is another source of worry. In an interview, Yale associate professor Yuan Yao explains that the supply chain for these components involves activities such as mining and metal production, which can lead to soil erosion and pollution.
Notably, Wang et al. (2024) suggest that the e-waste (discarded electrical or electronic devices) generated could total 1.2–5.0 million tons by 2030, depending on the pace of the industry’s growth. According to the World Health Organization, if e-waste is improperly recycled, it can release up to a thousand different chemical substances, including known neurotoxicants such as lead.
As we become aware of the ethical concerns that come with AI development and use, we can start to address them: both by reflecting on policies that can mitigate the harm of such a groundbreaking technology and by aiming to make more considerate and sustainable use of GenAI.
Madalena Martinho do Rosário
External VP
Mª Francisca Pereira
President
Sources:
Ferrante, Elena. 2022. In the Margins: On the Pleasures of Reading and Writing. UK: Europa Editions.
Strubell, Emma, Ananya Ganesh, and Andrew McCallum. 2020. Energy and Policy Considerations for Deep Learning in NLP. Annual Meeting of the Association for Computational Linguistics. https://doi.org/10.48550/arXiv.1906.02243
Malmström, H., Stöhr, C., & Ou, A. W. (2023). Chatbots and other AI for learning: A survey of use and views among university students in Sweden. (Chalmers Studies in Communication and Learning in Higher Education 2023:1) https://doi.org/10.17196/cls.csclhe/2023/01
Schei et al. 2024. “Perceptions and Use of AI Chatbots Among Students in Higher Education: A Scoping Review of Empirical Studies.” Education Sciences 14 (8): 922. https://doi.org/10.3390/educsci14080922.
Farazouli, Alexandra, Teresa Cerratto-Pargman, Klara Bolander-Laksov, and Cormac McGrath. 2023. “Hello GPT! Goodbye Home Examination? An Exploratory Study of AI Chatbots Impact on University Teachers’ Assessment Practices.” Assessment & Evaluation in Higher Education 49 (3): 363–75. https://doi.org/10.1080/02602938.2023.2241676.
Li, Yang, Islam, and Ren. 2025. “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models.” https://arxiv.org/pdf/2304.03271
Artificial intelligence (AI) has emerged as one of the most transformative forces of the 21st century. The proliferation of AI technologies across industries is reshaping the way we work, live, and interact with the world. From revolutionizing healthcare and finance to automating everyday tasks, AI’s potential is immense. However, as AI continues to advance, it raises profound ethical questions that need urgent attention. In this article, we will explore both the opportunities AI offers and the ethical dilemmas it presents.
The Evolution of Artificial Intelligence: From Science Fiction to Reality
AI is no longer a concept confined to science fiction. Over the last few decades, it has evolved from simple automation tools to highly sophisticated systems capable of learning and adapting to complex scenarios. AI systems, powered by machine learning algorithms, can now analyze vast amounts of data, detect patterns, and make decisions faster and more accurately than humans. This progress has made AI an indispensable tool in various sectors, and its influence continues to grow.
The first significant leap for AI occurred with the development of machine learning (ML), a subfield of AI that allows computers to “learn” from data without being explicitly programmed. By feeding AI systems with large datasets, they can improve their accuracy over time, making predictions and automating tasks with increasing efficiency. In recent years, deep learning, a subset of ML, has emerged as a powerful method of training neural networks that simulate the human brain’s structure. This has propelled the development of AI applications that seem almost sentient — capable of recognizing images, understanding natural language, and even driving autonomous vehicles.
AI in Key Sectors: Transforming Industries
The practical applications of AI are vast and rapidly expanding. In healthcare, AI is making significant strides in diagnostic accuracy. AI-powered tools like IBM Watson Health and Google Health are helping doctors analyze medical images and diagnose diseases like cancer with remarkable precision. In fact, a study in the journal Nature found that an AI system was able to detect breast cancer in mammograms with an accuracy rate surpassing human radiologists. AI’s ability to analyze enormous datasets and find patterns hidden from the human eye is also revolutionizing personalized medicine. By analyzing patient records, AI can identify the best treatment options for individuals based on their unique genetic makeup.
In finance, AI is transforming investment strategies, risk assessment, and fraud detection. AI algorithms are now capable of analyzing market trends and making trades at speeds far beyond human capacity. Robo-advisors like Betterment and Wealthfront use AI to create personalized investment portfolios, making wealth management more accessible to the average consumer. Similarly, AI-powered fraud detection systems are becoming integral in the financial sector, using sophisticated algorithms to monitor transactions for signs of fraudulent activity.
In the manufacturing sector, AI is enhancing efficiency through automation. The concept of Industry 4.0, which integrates AI with the Internet of Things (IoT) and data analytics, is reshaping factories. AI systems can monitor production lines, predict maintenance needs, and even adjust operations in real-time to maximize efficiency. As a result, businesses can achieve higher levels of productivity and reduce operational costs.
AI is also making waves in transportation. Autonomous vehicles, powered by AI, are set to revolutionize the way people and goods move. Companies like Tesla and Waymo are pioneering self-driving car technology, which promises to reduce traffic accidents, lower transportation costs, and improve mobility for people who are unable to drive.
The Ethical Implications of AI: Privacy, Bias, and Job Displacement
While the potential benefits of AI are staggering, its rapid development also raises serious ethical concerns. As AI systems become more integrated into our lives, questions about privacy, bias, and job displacement loom large.
Privacy: The Dilemma of Data Collection
One of the most pressing ethical issues with AI is privacy. Many AI systems rely on vast amounts of data to function, often drawing from personal information, such as browsing habits, location data, or even medical records. This raises concerns about how this data is collected, stored, and used. In recent years, data breaches and misuse have highlighted the risks of collecting such sensitive information.
A notable example is the Facebook-Cambridge Analytica scandal, where data harvested from millions of users was misused to influence political elections. AI-powered systems that track online behavior, such as targeted advertising, rely on analyzing this data to deliver personalized content. While this can improve user experience, it also exposes individuals to the risk of manipulation and exploitation. This issue has led to calls for more robust privacy regulations, such as the General Data Protection Regulation (GDPR) in Europe, which aims to give individuals more control over their personal data.
Bias: The Risk of Reinforcing Inequality
Another significant concern with AI is bias. AI systems are only as good as the data they are trained on. If the data reflects societal inequalities or biases, the AI system can perpetuate and even amplify these biases. For instance, an AI algorithm trained on historical hiring data may inherit biases against women or minority groups, leading to discriminatory hiring practices.
In one high-profile example, a recruitment tool developed by Amazon was found to favor male candidates over female candidates due to bias in the training data. Similarly, in the criminal justice system, AI algorithms used for risk assessment have been shown to disproportionately target people of color, exacerbating existing racial inequalities.
The risks of AI bias have prompted calls for greater transparency in the development of AI systems, as well as initiatives to ensure diversity in the teams designing and implementing these technologies. Experts are urging developers to adopt ethical frameworks that prioritize fairness and accountability.
Job Displacement: The Impact on Employment
AI’s potential to automate jobs raises concerns about mass job displacement. As AI technologies continue to improve, there is a growing fear that machines will replace human workers in a wide range of industries. According to a 2021 McKinsey report, automation could displace 375 million workers globally by 2030, particularly in sectors like manufacturing, retail, and transportation.
While AI will undoubtedly create new job opportunities, particularly in fields related to data science, machine learning, and AI development, it is unlikely that displaced workers will easily transition into these roles. The World Economic Forum predicts that 85 million jobs could be displaced by AI by 2025, but it also forecasts the creation of 97 million new jobs that require skills in emerging technologies. The challenge lies in reskilling the workforce to meet these demands and mitigate the social and economic impact of job losses.
Balancing Innovation with Ethics: The Path Forward
As AI continues to evolve, it is critical that society finds a balance between harnessing its transformative power and addressing the ethical challenges it presents. There are several ways to navigate this delicate balance.
One key strategy is regulation. Governments and international organizations must implement clear and comprehensive regulations that govern AI development and use. In the European Union, the AI Act, which was proposed in April 2021, aims to regulate high-risk AI systems and ensure that AI is used in a manner that is transparent, accountable, and respectful of fundamental rights. These regulations are designed to ensure that AI systems are tested for fairness and accuracy before being deployed.
Transparency is also crucial. Developers need to make AI systems more understandable and explainable, especially when they are making high-stakes decisions, such as in healthcare or criminal justice. This can help build trust in AI systems and ensure that they are used responsibly.
Another important approach is to prioritize ethical AI design. As AI technologies become more integrated into our daily lives, it is essential that they are developed with fairness, accountability, and inclusivity in mind. This includes addressing issues like bias in training data and ensuring that AI systems are accessible to everyone, regardless of background or socio-economic status.
The Future of AI: Optimism with Caution
AI has the potential to improve our lives in ways we are only beginning to understand. From curing diseases to solving global challenges like climate change, AI offers immense opportunities. However, these benefits will only be realized if we are vigilant in addressing the ethical concerns that come with its rise.
By establishing strong ethical guidelines, promoting transparency, and ensuring that AI development benefits all of society, we can navigate the challenges and opportunities that lie ahead. As we continue to develop AI, it is crucial to remember that technology should serve humanity, not the other way around.
As AI continues to shape the future, it’s up to us to ensure that it evolves in a way that is ethical, inclusive, and ultimately beneficial for all.
Artificial Intelligence (AI) has changed many aspects of our lives over the past few decades, with global economic and social repercussions. Even though companies have always sought efficiency, innovation, and competitive advantage, nowadays AI tools play a fundamental role in their strategies, from automating repetitive tasks to enabling data-driven decision-making.
Job Market and Management Approaches
Inevitably, evolution brings opportunities and challenges, and the need to redefine the nature of work arises. New technologies create both winners and losers in the job market, changing current occupational demand and requiring adaptability and a re-evaluation of workers’ essential skills.
Estimated net job creation due to AI, 2017-2037 (Source: PwC)
As expected, there is a rising need for experts in a variety of sectors, especially those dedicated to AI development, data science, cybersecurity, and e-commerce, with AI and machine learning specialists being the fastest-growing job fields. The same is true of jobs that are difficult to replace with AI, namely within the health sector. However, in other cases—particularly in physical labour and services—technology is replacing labour rather than enhancing it. According to a McKinsey Global Institute study, by 2030 at least 14% of jobs in OECD countries will be easily automatable, forcing workers to pursue new careers, while 32% could face substantial changes.
The Positive Symbiotic Relationship between AI and Company Functionality
Despite the threat that AI poses to jobs, businesses also have the potential to integrate it into their operations successfully, creating a cooperative relationship between technology and human expertise: machines excel at handling repetitive tasks and data analysis, while humans bring creativity, emotional intelligence, and complex problem-solving skills to the table.
As a matter of fact, AI reduces human error and boosts operational efficiency by automating repetitive operations and processes, completing them faster and more accurately, thus allowing workers to concentrate on more strategic and value-added work. Indeed, the European Parliament estimated an increase of 11-37% in labour productivity related to AI by 2035.
In the realm of logistics and supply chain management, AI optimizes operations by predicting demand, helping companies improve their supply chains and reduce costs. For example, in transportation, autonomous systems in driverless transport are reshaping logistics. Also, AI simplifies data analysis and customer service, improving productivity and cost-effectiveness levels. Through machine learning algorithms companies promptly assess trends and anticipate patterns in consumer behaviour to create more accurate strategic planning and enhance competitiveness. AI technologies further enable businesses to offer personalized experiences to their customers and instant support through chatbots, improving customer interactions and satisfaction.
Additionally, AI is progressively making a pronounced appearance in the decision-making processes of managers and CEOs, from whom these figures should hire or promote to the establishment of job evaluation standards.
On another note, as companies deal with more complex cyber threats, AI has become a crucial asset in fortifying cybersecurity defenses. Algorithms recognize patterns, spot security lapses, and react instantly to online attacks. By taking a proactive stance when it comes to cybersecurity, businesses can better protect their data and maintain stakeholders’ trust.
Machine learning, with its situation-specific adaptive capabilities, is unlocking new possibilities for controlling processes and predicting issues in production and utilization of resources. Thus, through advanced algorithms and data analysis, organizations can optimize resource usage, reduce waste, and implement eco-friendly practices.
Historically, context-dependent learning processes posed significant challenges to automation due to a reliance on implicit knowledge and tasks lacking explicit rules of action. However, a paradigm shift is taking place: individuals are no longer just intelligent learning beings but are also assisted by AI, forming a symbiotic relationship that enhances human potential.
AI’s Duality: a Boost to the Economy or a Destabilizer of the Job Market?
The field of artificial intelligence is subject to cycles of intense interest – AI summers – and periods of skepticism and disappointment with its development – AI winters. The prolonged summer we are currently experiencing is characterized by significant funding and widespread adoption within the business world, with companies such as Google testing virtual try-ons powered by generative AI, exemplifying how AI is reshaping operational structures, driving innovation, and providing new services.
With this being the most pronounced AI summer yet, what does it mean for the economy?
Many have grand expectations: studies carried out by Goldman Sachs point to the fact that “widespread AI adoption could eventually drive a 7% or almost $7trn increase in annual global GDP over a ten-year period”, and to a three-percentage-point rise in annual labour-productivity growth in firms that adopt the technology, representing a huge uplift in income compounded over the years. In addition, a study published in 2021 by Tom Davidson of Open Philanthropy suggests a more than 10% chance of “explosive growth”, meaning an increase of at least 30% in global output, sometime this century.
Nevertheless, alongside this optimistic outlook on what AI may bring to the economy, concerns about job displacement and the future of many career paths persist, giving rise to an overall sense of uncertainty. In a recent publication, Tyna Eloundou of OpenAI stated that “around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of LLMs [Large Language Models]”.
Furthermore, The Economist’s coverage of the unpreparedness of employers for AI and Timnit Gebru’s advocacy for responsible AI development highlight the ethical considerations that should underpin the development of AI technologies. All in all, defining ethical boundaries in AI should involve transparency policies and accountability for decision-making within companies that inevitably adopt the tool, so as to ensure that AI is developed and used responsibly without perpetuating bias or harm.
The future of the job market therefore requires a delicate balance between innovation and ethical considerations to foster a work environment that prioritizes both technological progress and human well-being. The most prevalent concern is that rapid AI adoption could destroy jobs at a pace surpassing their creation, while barriers to entry, particularly those related to owning and generating vast amounts of data, could stifle competition and innovation. The dual nature of AI, as both a source of optimism and anxiety, thereby underscores the need for thoughtful consideration and strategic planning as businesses navigate this transformative technological landscape.
Conclusion: the Future of the Job Market
As we traverse this dynamic landscape, one central theme emerges: the future of employment relations hinges on achieving a harmonious coexistence between human ingenuity and the technological prowess of AI. The synergy between human skills and AI capabilities stands as the cornerstone for unlocking the full potential of this transformative partnership.
It is also worth noting that, despite the ongoing debate, academic evidence on whether AI and industrial robots harm employment remains inconclusive. This uncertainty underscores the importance of continued research and vigilance in monitoring the impact of AI on the job market.
It can be established that the transformative wave of AI in employment relations has the potential to be both an opportunity and a challenge. The pivotal point is to enhance transparency in AI development, coupled with accountability for its ethics. Striking this balance is crucial for fostering a job market that not only benefits businesses but also safeguards the well-being of employees.
Sources: The Wall Street Journal; World Economic Forum; European Parliament; The Economist; Vogue Business; BBC; The New York Times; Nexford University; Griffiths, Paul & Nowshade, Mitt. 2019. “European Conference on the Impact of Artificial Intelligence and Robotics.” EM-Normandie Business School, Oxford, UK; White House; European Commission. 2022. “The Impact of Artificial Intelligence on the Future of Workforces in the European Union and the United States of America.”
Like any other structure of society, education is an ever-evolving affair that flows according to shifts in paradigms around politics, philosophy, and, of course, technology. The idea that learning could take place in an online setting started to become more popular in the 2010s, with the rise in demand for online courses. Later, the COVID-19 pandemic emerged as a prominent landmark that established a new vision for what education could be, by creating an overnight necessity for the widespread implementation of largely untried, online-based learning techniques.
The procedures that governments and faculties are now designing and implementing for education, specifically concerning the role of innovation, represent a crucial point, as they will determine whether efficient online educational infrastructures are established.
In this article, we aim to discuss how innovation is shaping the ongoing discourse on education, exploring the profound economic and educational impacts of its recent online development and shedding light on its benefits and potential challenges.
An Ed-Tech Tragedy (UNESCO)
The transformative impact of online education
The economic implications of this shift are evident: it reduces the need for physical infrastructure and commuting, along with other associated costs. This, in turn, results in cost savings for both institutions and students. Additionally, online courses often allow for a broader reach, attracting learners from all over the world and diversifying revenue streams for educational institutions.
The educational impact of online learning is equally significant. Online education and remote work environments offer unparalleled flexibility. Students can choose when and where to study, enabling them to balance education with other responsibilities such as work or family. Students now have the flexibility to learn at their own pace as educational resources are accessible 24/7.
Online education presents a distinctive opportunity to provide personalised learning experiences thanks to its adaptability and flexibility. It fosters a more inclusive learning environment that accommodates individuals with diverse needs and learning styles, allowing pupils to adapt the pace and style of learning to their preferences. This is driven by innovations such as interactive multimedia, virtual classrooms, and AI-driven personalized learning, which have made education more engaging and increasingly customisable. Moreover, online education has the potential to seamlessly combine skill development and degree attainment, aligning with the specific requirements of both students and the labour market. It can also revolutionise career planning and coaching services and deliver a unique and engaging learning environment. Ultimately, this adaptability caters to the diverse needs of individuals pursuing online education, whether they are seeking a career change, community building, or connections with people from various backgrounds and regions.
Evolution of number of learners and enrolments in online learning
Innovations in online education have undoubtedly reshaped our economic and educational landscapes. They have opened new opportunities for cost savings, global collaboration, and individualised learning and work experiences. However, these innovations are not without their challenges and concerns, which must be addressed to ensure equitable access and a balanced work-life dynamic.
Some considerations when addressing e-learning
The recent announcement by Sweden’s Minister of Education and Research, Lotta Edholm, encouraging a shift back to more traditional didactic tools in contrast with the country’s current hyper-digitalised approach, has drawn media attention to how innovation in education should be approached.
In that specific case, the government proposes a rollback of the complete digitalization of learning, rather than an absolute abandonment of digital practices. With this measure, there is an active attempt to recover children’s engagement with books and handwriting. This decision is representative of many that have been arising lately, contributing to a much larger discussion on the extent to which society wants education to hinge on the digital and the online.
Online learning is often correlated with social isolation, which becomes an aggravated problem for the development of children that depend on school in order to enhance interpersonal relationships. Oftentimes, social isolation is a key contributing factor to mental health disorders such as depression and anxiety. E-learning also requires skills such as self-motivation and time-management, which many students struggle with, especially harming students with conditions such as autism or ADHD. It is difficult to replicate the learning environment of a classroom, particularly when considering that the experience of learning online is heavily affected by the environment and means that the students have at home. There is also the problem of technical difficulties, such as internet connection or even issues with devices, that hinder the learning experience as they create additional barriers and challenges.
An Ed-Tech Tragedy (UNESCO)
Furthermore, for many low-income students, online courses are a poor substitute for in-person learning. In that sense, online learning can end up accentuating inequalities in access to appropriate devices, to stable internet connections, and to proper maintenance. This is supported by a UNESCO report – “An Ed-Tech Tragedy” – which documents how disparities worsened as education became largely reliant on technology during the COVID-19 pandemic.
Interesting Facts and Data-Driven Perspective on the Evolution of Online Education
These facts shed light on the remarkable evolution of online education and its growing prominence in recent years. Data from the World Economic Forum underscores the exceptional growth trajectory of Coursera – a leading online course platform – whose user base expanded significantly from 2016 through 2022, culminating in an impressive 92 million users. Notably, the platform managed to sustain its upward momentum through 2021. Enrolments have grown just as dramatically, skyrocketing from a relatively modest 26 million in 2016 to 189 million by 2021. This substantial growth underscores the profound impact and increasing acceptance of online education on a global scale.
Data from the National Center for Education Statistics reveal a seismic shift in the landscape of higher education. In 2019, a mere 2.4 million students exclusively pursued online education, but the onset of the COVID-19 pandemic triggered a nearly 200% surge, with 7 million students embracing online learning.
Furthermore, a forward-looking perspective from Statista highlights the United States’ dominance in the global online learning market, with projected revenue of $74.8 billion in 2023. These figures align with research indicating that a substantial share of American graduate students – two out of every five – perceive online education as offering a better overall educational experience than traditional classroom instruction. Taken together, these facts underscore the transformative impact and growing acceptance of online education in contemporary society.
Evolution of number of learners and enrolments in online learning
Conclusion
Many educational institutions are exploring hybrid models that combine the best of both online and in-person experiences. This approach allows for increased flexibility while maintaining valuable face-to-face interactions. Hybrid models can bridge some of the gaps associated with remote work and online education, catering to various learning and working styles.
The future of education is likely to continue evolving in response to technological advancements. As we navigate this ever-changing landscape, it is essential to strike a balance between harnessing the benefits of innovation and addressing the associated challenges, with a commitment to creating inclusive, accessible, and sustainable educational environments for all.
References
The Guardian, World Economic Forum, McKinsey, Statista, Financial Times, UNESCO
Article written in partnership with Nova Tech Club.
From the European Union’s point of view, a smart city goes beyond the use of digital technologies for better resource use and fewer emissions. It means smarter urban transport networks, upgraded water supply and waste disposal facilities, as well as more efficient ways to light and heat buildings. It also means a more interactive and responsive city administration, safer public spaces and striving to meet the needs of an aging population.
SDG 11 of the UN 2030 Goals looks at these solutions as ways to achieve more inclusive, safe, resilient, and sustainable human settlements. For example, their 2022 report highlights that 99% of the world’s urban population breathes polluted air, and municipal solid waste has collection and management problems that need to be tackled immediately (only 82% of this waste is collected and only 55% is managed in controlled facilities). In line with this, smart cities emerge as a possible approach to deal with these issues.
In short, smart cities are designed to help society organise itself sustainably. The concept incites discussions between urban planners, city councils and even technology giants on how to enhance people’s lives. Nonetheless, opposing views bring forth a sense of distrust about how smart actual implementations of the concept really are.
Portugal’s Smart Cities initiatives
Unbeknownst to many, Portugal has a vast number of initiatives with the aim of creating or developing smart cities by easing collaborations between municipalities. SMART PORTUGAL has been promoting the ‘smartification’ of Portuguese cities through various events, namely Smart Cities Tour and the “Cimeira dos Autarcas”, in an effort to increase national and international collaboration, but crucially, to let the public in on the innovations already in development. In collaboration with “Associação Nacional de Municípios Portugueses”, NOVA Cidade – Urban Analytics Lab, the organization behind SMART PORTUGAL, has implemented an annual activity plan to accelerate smart city innovation in the country, having also created simple and clear guidelines and standards, paving the way for Portugal to become a global leader in the field.
Portugal Smart City, under the SMART PORTUGAL program, tries to bring cities and companies together to connect innovators and implementors. Gatherings and fairs, such as the SMARTCITY Expo World Congress, allow businesses, academics, and legislators to come together and find partners for pioneering projects. Some initiatives have already broken ground and are producing palpable results. Rener Living Lab, or RPCI, formed in 2009, is a national smart city network that now accounts for more than 120 municipalities with certified smart projects, distinguishing their quality and workability, and increasing their international projection.
SMARTCITY Expo World Congress
In more practical terms, a number of Portuguese cities have been recognized as being at the forefront of positive change. 2020 saw Lisbon crowned European Green Capital following efforts to use residual waters to feed the city’s parks and an affordable public transport pass that allows citizens to travel cheaply between the city and its 18 surrounding boroughs. Valongo was distinguished with the European Green Leaf award in 2022 as a result of efforts to increase the city’s energy efficiency and create urban farms. Guimarães has also been classified as one of the European Commission’s “100 Smart Cities” for its efforts to improve river-shore quality through the construction of “Ecovias”, as well as its bet on a circular economy with the RRRCICLO programme, among others.
The European Approach to Smart Cities
On a European scale, the past few years have brought many examples of city-level implementations and EU initiatives. Take Copenhagen as a great urban design example: approximately 43% of all commutes are made by bike. Vienna’s Citizens’ Solar Power Plant project must also be highlighted, since it was very successful in engaging citizens and energy companies to promote solar power. In Barcelona, the REC (Real Economy Currency) was introduced as a local social currency, allowing transactions within a community between the individuals, institutions and businesses that accept it. This project supports small businesses that are struggling to survive in digital times and in big cities.
The European Union has been very avant-garde when it comes to respecting the historical roots of cities while advocating for their sustainable future. EU Missions are a new way to bring concrete solutions to some of our greatest challenges, with ambitious goals and the hope of delivering tangible results. The Climate-Neutral and Smart Cities initiative is one of the most ambitious of these missions: it aims to deliver 100 climate-neutral and smart cities by 2030, ensuring that these cities act as experimentation and innovation hubs so that all European cities can follow suit by 2050. Funding will cover a wide range of subjects, such as urban planning and design for climate-neutral cities, sustainable urban mobility, and positive and clean energy districts, with many projects already being implemented. Furthermore, personal data protection is also a pertinent subject among the EU’s concerns. The DECODE project, for instance, provides tools that put individuals in control of whether they choose to keep their personal information private or share it for the public good.
Smart Cities across the World
While smart city projects exist and thrive worldwide, some cities have gone above and beyond in creating a smart ecosystem for their residents, improving sustainability and efficiency. Masdar, in the UAE, is perhaps the most significant green project in the Arab world. This pilot project aimed to house 50 thousand people in an urban landscape with no automobiles, relying solely on renewable energy. While the initiative has seen its fair share of success, it is important to point out that it remains significantly smaller than first planned, with some critics also arguing that the focus should be on greening existing cities rather than creating new ones.
Songdo, South Korea, followed a similar path to Masdar, though on a more significant scale. With an innovative urban waste collection system, trash is transported through a network of pipes, eliminating the need for trucks. With the concept of the Ubiquitous City – where citizens can access services anywhere, anytime, from home banking and teleconferencing to intelligent transport systems and remote sensing – becoming an area of intense focus, Songdo has also incorporated innovations in line with it. CCTV and sensors, for example, have become essential for the Korean city to control traffic flows and to respond and adapt quickly when accidents occur, informing locals of exact public transport timetables and occupancy.
Songdo Control centre (CCTV and sensors control)
Nevertheless, this city concept has its flaws. Korean residents have complained that, perhaps due to its intent as an international city, Songdo doesn’t feel quite authentic. Foreigners, in contrast, get a sense of déjà vu, as the replicas of the boulevards of Paris, the pocket parks of Charleston, Central Park in New York City, or the canals of Venice make the city seem like a patchwork of other urban areas. Another big area of complaint has been what many thought would be the city’s main selling point: technology. Constant surveillance and monitoring have left many with a feeling of unease, with concerns over privacy and intellectual property growing. Similar smart projects have been halted entirely after encountering significant pushback from citizens unwilling to share so much information. Alphabet’s Sidewalk Labs Toronto project, with “(…) mass timber housing, heated and illuminated sidewalks, public Wi-Fi, and, of course, a host of cameras and other sensors to monitor traffic and street life (…)”, was one such project, facing heavy criticism from the get-go.
Conclusion
Smart Cities are now more popular than ever. In line with the UN’s SDG 11, the concept has gained momentum even in small countries such as Portugal, with EU legislation adapting to ease the rise of smarter and more sustainable practices. Worldwide examples, from the Middle East to South East Asia, offer a glimpse of more radical initiatives, along with their benefits and shortcomings. While services may improve, privacy is an ever-growing concern, and if legislators, investors, and urban planners want to go ahead with these new forms of design and construction, the safeguarding of private information needs to be at the top of everyone’s mind.
Sources: Energy Cities Hubs, DECODE project, European Commission, The Global Goals, NOVA Cidade Urban Analytics Lab, Forum das Cidades, Jornal de Negócios, The Verge, Bloomberg
Deep under the ocean lies a wide-reaching entity spanning over a million kilometers, with the power to influence and shape our very own society. This mysterious object is known to attract sharks’ voracious appetite, yet it is not an animal of any kind. It’s not Cthulhu[1]; it’s fiber optic submarine cables. And believe it or not, these structures are the foundations of the telecommunication systems we are so reliant on. But why, and by whom, are cables placed at the bottom of the ocean?
For this month’s article, the teams behind Development Economics and Technology, Health and Environment have challenged each other to bring their own spin on cables running through the ocean’s floor.
From the telegraph to the internet
The concept behind wired communications isn’t exactly new. Curiously enough, neither is the idea of running them along the ocean floor to connect regions far apart. In fact, the first transatlantic cable was installed all the way back in the 1850s. This venture – an attempt to shorten communications between the UK and its American allies from a week-long process to a single day – took the form of a telegraph line. Composed of just seven innocuous copper wires, it lasted all of three weeks before it broke. Nevertheless, it laid the groundwork for the communications architecture of the future.
Fast-forward to today and there is a 1.2-million-kilometer-long fiber-optic submarine cable infrastructure connecting the world together. Whereas the first cables were dingy and incapable of sustaining even a slightly higher voltage, today’s optical fibres are protected by several layers of plastic and metal, shielding them from the hazardous threats of the environment (at least the non-shark-related ones). They weigh over 1.4 tonnes per kilometer and come to about 10 cm in diameter.
It’s fair to say that a lot of resources go into these cables, and one cannot help but wonder how they are made, or even how they are monetized.
The answer to both should be intuitive. On the monetization side, the telecoms footing the bill split the bandwidth – the rate at which data is transferred – among themselves. With this allocated bandwidth, they provide communications services (such as loading up this webpage) to their respective markets.
As for laying down the cables, it is mostly a matter of preparation and work. First, the installers must survey the ocean-floor path along which they wish to install the cables, something done through different scanning methods, which is both arduous and expensive. After choosing the desired path, a special type of cargo ship, such as the one pictured, is employed to lay the cables beneath the ocean floor in order to prevent their breakage.
Be it because each of these companies needs to operate several such vessels to lay the cables, or because the cables themselves are expensive to make, it is fair to call this a capital-intensive industry. New players are inhibited by steep entry barriers, even though these cables have a relatively short lifespan – each should last about 25 years – and supply doesn’t seem to keep up with the demand of our increasingly tech-hungry societies. Competition is nevertheless fierce for a relatively limited number of contracts.
But who are these companies: are they privately or publicly owned? Currently, the submarine fibre optic cables are owned by private companies with a stake in communications. This also includes companies such as Fujitsu, Alcatel, Huawei and so on. However, they often receive funding for these ventures and cooperate closely with public entities – after all, communications are essential to any functional nation-state, even beyond consumer markets.
But here’s a question you might have asked yourself already: why not satellites? Couldn’t this essential service be provided by satellites, whilst also ensuring a wider reach? After all, satellites are all over the place, and next-generation connectivity tends to take wireless form – be it my earbuds or even my phone charging.
For starters, there is latency. Fiber optics have lower latency, meaning that the travel time for internet signals to reach their destination and come back is shorter. It might not appear all that relevant – unless you play videogames – but even a difference as small as 20 milliseconds can severely disrupt your internet activity. Second, wired connections have larger bandwidths which, as mentioned before, represent the aggregate maximum capacity of an internet connection. This is especially true at larger scales: whilst our beloved cables operate at capacities measured in terabits per second (10¹²), satellites only provide speeds in the gigabit-per-second (10⁹) range. From 5G to autonomous cars, the main enabler of these technologies is the submarine cable network.
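To put rough numbers on the latency gap, here is a minimal back-of-the-envelope sketch in Python. The distances and speeds are illustrative assumptions (a geostationary satellite at roughly 36,000 km and a transatlantic fibre route of roughly 6,000 km, with light travelling at about two thirds of its vacuum speed inside glass), not measurements of any particular link:

```python
# Back-of-the-envelope round-trip latency: geostationary satellite vs. submarine fibre.
# All distances and speeds below are rough, illustrative assumptions.

C_VACUUM_KM_S = 300_000      # speed of light in vacuum, ~3e5 km/s
C_FIBRE_KM_S = 200_000       # light in glass travels at roughly 2/3 of that

GEO_ALTITUDE_KM = 36_000     # approximate geostationary orbit altitude
TRANSATLANTIC_KM = 6_000     # rough length of a transatlantic cable route

# A ping over a geostationary link climbs to the satellite and back down,
# twice (request + reply), so the signal covers ~4x the altitude.
satellite_rtt_ms = (4 * GEO_ALTITUDE_KM / C_VACUUM_KM_S) * 1000

# A ping over fibre crosses the cable twice (request + reply).
fibre_rtt_ms = (2 * TRANSATLANTIC_KM / C_FIBRE_KM_S) * 1000

print(f"Geostationary satellite round trip: ~{satellite_rtt_ms:.0f} ms")  # ~480 ms
print(f"Transatlantic fibre round trip:     ~{fibre_rtt_ms:.0f} ms")      # ~60 ms
```

Low-earth-orbit constellations sit far lower and narrow this gap considerably, but the bandwidth argument above still favours fibre.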
Big tech and the scramble for the African market
Locations of the world’s data centers
Undersea cables are not only a better solution than satellites; they are also good investments for companies that have to think big. Whereas in the past the main investors were telecoms, the new wave of investors across Europe and America is big tech. But what about Africa – does the same apply there? The answer is yes.
Companies like Google and Facebook, among others, have made considerable investments in Africa, since it represents a market with largely untapped potential. Moreover, network traffic on the continent, along with demand for the Internet, has increased sharply, making such investments both viable and vital.
Cloud computing is also on the rise. As companies migrate their IT infrastructure to the cloud, the big tech companies that already provide the bulk of cloud computing and services start building more data centers – and those data centers consume far more bandwidth.
Therefore, gaining a stake in fiber optic cables is an opportunity for both vertical and horizontal integration. Google is already providing phone plans across the world. But as with any continent or country, Africa comes with a number of challenges. For starters, it is limited in the number of English speakers as well as in levels of literacy; the bulk of the internet is made up of English-language websites, effectively making their contents inaccessible for most. Unreliable access to power, along with economic barriers, is also a significant obstacle. Nevertheless, the implementation of communication technology is extremely important in narrowing the gap between developed and developing countries, as well as in empowering their populations.
In Africa, there are still many countries without any cable connection. The countries with the highest number of subsea cable landing stations are Egypt, with 15, followed by South Africa and Djibouti, with 11 cables each.
The number of connections between Africa and the world.
The predicted investments in Africa will imply a greater need for data centers. At the moment, these are located in South Africa and Nigeria, which corresponds more or less to the countries with more cable landing stations – in other words, the countries with more and better infrastructure. Either because they have a bigger need for services, or because they are more stable, those are the places where companies will want to put their new data centers.
Partly because the country with the largest number of data centers in Africa still has 20 times fewer of them than Europe, Africa could become the next big battleground for companies vying for a stake on the world stage.
To all our readers, this is the second part of an article still brought to you by humans. We encourage all to go read Part I here in case you missed it.
Why all the buzz around machine learning now? Just how many of them are there? What are ‘Neural Networks’ (otherwise known as deep learning) and why do they threaten to take our jobs? And finally, how likely is it that my robot vacuum cleaner wrote this entire article? (Tip: More likely now than ever before)
Although similarities nowadays are sparse, Artificial Neural Networks got their name from being modelled after our own biological human neurons.
To broach a topic as diverse as Artificial Intelligence only raises more questions than it answers. This is especially true when writing an introductory article to the topic. As a result, the Tech team is dedicating a second story to further develop ideas brought to the table during the first part of our article.
From deciphering literally all the questions that Machine Learning can answer – from an abstract perspective, at the very least – to explaining some of the factors behind the notable rise of Neural Networks. In keeping with the tone of the previous article, we’ll further explain some of the nuance behind Recommendation Systems (such as the ones used by Amazon and Netflix) and the way these systems (traditional vs. new) complement each other.
The 5 most useful questions ever answered by Machines
When breaking down Tinder’s diverse processes, we saw how Learners could be utilized to perform several distinct tasks (image recognition vs. matching) and how one system was built on top of another (new data powered other learners). The result of this systematic and iterative approach to Machine Learning shows how data can be used to extrapolate powerful predictions. It is but one of many successful examples of how these powerful algorithms constantly shape our lives.
Our first part also provides a notion of the breadth and versatility of Learners. Much like how it’s said that all plots in media are variations of just seven basic story archetypes, it’s said that Machine Learning can only provide answers to 5 basic questions. When looking at a user’s Tinder profile in order to assign a trait or personality, we looked at what is called a Classification task – “Is this A or B.” Assigning a score to a user to predict a match with another user is what is called a Regression task – “How much or how many” – something not so (mathematically) different from trying to predict house prices. More towards the end of that story, we also brought up Clustering in regard to its potential uses in segmentation – in other words, “How is this organized”.
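To make the distinction between these questions concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn; the synthetic data and model choices are assumptions for demonstration, not what Tinder or any real platform uses:

```python
# Illustrative only: the same synthetic data framed as three of the five questions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 200 made-up "profiles", 3 features

# "Is this A or B?" -> Classification
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = KNeighborsClassifier().fit(X, y_class)

# "How much / how many?" -> Regression (think house prices, or a match score)
y_reg = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.1, size=200)
reg = LinearRegression().fit(X, y_reg)

# "How is this organized?" -> Clustering (no labels at all)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print(clf.predict(X[:5]), reg.predict(X[:5]).round(2), clusters[:5])
```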
The two other questions, despite playing a very minor part, were also mentioned in some shape or form. They are “Is this weird?”, useful in anomaly detection (also known as the reason why you shouldn’t use a credit card for one-dollar purchases), and “What should I do now?” – a question a machine is likely to ask itself when being taught how to drive, or when considering an insurrection against its human overlords.
Yes, there is a model called Logistic Regression. Yes, the name is ironically cruel (especially if you’re hearing about all this for the first time). While objectively a regression model (as in, it uses regression), it is used as a classifier, for Classification tasks: based on the regression output, it will classify an object as A if below a 0.5 threshold, or as B if above it.
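A minimal sketch of that idea, assuming a toy one-feature dataset (the numbers are made up purely for illustration):

```python
# Logistic Regression: a regression under the hood, a classifier in practice.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])   # toy feature
y = np.array([0, 0, 0, 1, 1, 1])                           # toy labels: A = 0, B = 1

model = LogisticRegression().fit(X, y)

prob_b = model.predict_proba([[3.4]])[0, 1]   # the "regression" output: P(object is B)
label = "B" if prob_b >= 0.5 else "A"         # thresholding turns it into A or B

print(f"P(B) = {prob_b:.2f} -> class {label}")
```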
While reducing all types of Machine Learning to 5 simpler questions might help you understand the nature of them, it likely puts you no closer to figuring out which one allows the GPT-3 model to produce human-like text. It might surprise the reader to learn that of all models in the diagram above, only one directly relates to Neural Networks – and that it does not explain the human-like text capabilities of GPT-3.
Much as Machine Learning is a field of techniques within Artificial Intelligence, Deep Learning is an entire field within ML. Many of its techniques have been around for decades – since before computational power allowed for the efficient use of ML – oftentimes in the form of scientific papers that could never go beyond conceptual form. Neural Networks, much like a lot of techniques in ML, grew in use and popularity as processing power made many of these techniques viable.
In this sense, Neural Networks are the latest – and perhaps greatest – of the ideas taken out of the Machine Learning icebox. From ‘Supervised’ to ‘Unsupervised’, this school of ML is capable of answering any of these questions and solving any of these tasks. Going beyond versatility, it has also proven itself highly successful at tasks that traditional techniques could not perform.
What Machine wrote my news?
Pretend for a moment that a Machine is capable of human-like thoughts (they aren’t, despite their increasingly impressive cognition). Would GPT-3, while outputting text, ask itself “How many?” or “Is this A or B?”
Furthermore, could a non-Neural-Network learner have produced such an outcome? Can we say for certain that Neural Networks are inherently better than conventional techniques? The answer to both questions has to do with quirks in the data. Neural Networks, and more specifically Convolutional Neural Networks (CNNs), excel at the many challenges that come with image recognition (namely high dimensionality). Measured against traditional techniques, however, Neural Networks do not perform inherently better outside of one notable exception – data size.
Past a certain (big) size, Neural Networks are practically guaranteed to be the better choice due to scalability: the larger the data, the better they perform when measured against other models. Work in Machine Learning has a lot to do with measuring and evaluating performance and, in keeping with that, it has more to do with picking the better model than with writing thousands of lines of code.
Additionally, we will often find a mixture of both (neural and traditional) powering our increasingly complex systems. Consider Amazon and Netflix: both boast powerful Recommendation Systems, a million-dollar idea (see the Netflix Prize) that nudges you towards the next movie or item.
A traditional Recommendation System is, at heart, a matter of matrix factorization. In simpler terms, it is one of the easier algorithms you can write by hand (with just one or two courses of Calculus). Crucially, Recommendation Systems pair you with something likely to be relevant – either due to similarities with other users or with other items – which is, in essence, a Regression or Classification task.
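As a rough illustration of the idea, here is a tiny, hand-rolled factorization on a made-up ratings matrix; it is a sketch of the textbook technique, not how Amazon or Netflix actually implement their systems:

```python
# Tiny matrix-factorization recommender: approximate ratings R with U @ V.T,
# learning the factors by gradient descent on the observed entries only.
import numpy as np

R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 1],
              [1, 1, 0, 5, 4],
              [0, 1, 5, 4, 0]], dtype=float)   # made-up 4 users x 5 items, 0 = unrated
mask = R > 0                                    # learn only from observed ratings

k, lr, reg = 2, 0.01, 0.02                      # latent factors, learning rate, L2 penalty
rng = np.random.default_rng(42)
U = rng.normal(scale=0.1, size=(R.shape[0], k)) # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k)) # item factors

for _ in range(5000):
    err = mask * (R - U @ V.T)                  # error on observed entries only
    U += lr * (err @ V - reg * U)               # gradient step for user factors
    V += lr * (err.T @ U - reg * V)             # gradient step for item factors

print(np.round(U @ V.T, 1))                     # the filled-in zeros are the predictions
```

The predicted values for the unrated cells are precisely the “How much?” answers a recommender needs.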
On the surface, much remains the same when migrating to Neural Networks: data goes into the model, and a prediction (whether regression or classification) comes out. The interesting part is how, within a Recommendation System, learners can be used to transform the data before it goes into the model. Layered on top of each other, learners can perform multiple tasks (answering more than one question) before reaching our desired output.
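One way to picture that layering – as a loose analogy built from classical tools, not an actual neural recommender – is a two-stage learner in which the first stage produces a compressed representation of the input and the second stage answers the final question on top of it:

```python
# Illustrative two-stage "layered" learner: stage 1 transforms the data,
# stage 2 makes the prediction on the transformed representation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 50))                 # made-up high-dimensional features
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # toy "will they like it?" target

model = make_pipeline(
    TruncatedSVD(n_components=10),             # stage 1: compress to 10 latent dimensions
    LogisticRegression(),                      # stage 2: classify on the compressed data
)
model.fit(X, y)
print(model.predict(X[:5]))
```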
To return to our initial question, the secret to what GPT-3 might think before a prediction is likely to be “How much/How many” – it is described as an autoregressive model after all. But the secret to its success might be in answering multiple questions in succession.
Sources: Netflix Prize, The Ascent, The Awareness News, The Guardian, Towards Data Science.
Coulter, D., Gilley, S., Sharkey, K., 2019. Data Science for Beginners video 1: The 5 questions data science answers. 22 March
Pant, R., Singhal, A., Sinha, P., 2017. Use of Deep Learning in Modern Recommendation System: A Summary of Recent Works. 7 Dec