Artificial Intelligence and Ethics: A Necessary Debate 

Time to read: 6 minutes

Artificial Intelligence (AI) is no longer a futuristic concept but an integral part of modern society. It shapes decisions in finance, healthcare, law enforcement, and social media, influencing how people interact with technology and each other. The rapid integration of AI, however, brings with it a host of ethical concerns. Questions about fairness, accountability, and transparency challenge the assumption that technological progress is inherently beneficial. AI does not exist in a vacuum—it reflects the values and biases of those who create and deploy it. While ethical AI has become a widely discussed concept, turning principles into action remains a significant challenge. 

Between Innovation and Responsibility 

The potential benefits of AI are vast. Automated systems can improve efficiency, analyze massive datasets, and assist in complex decision-making processes. In industries such as healthcare, AI-driven models can detect diseases early, optimize treatment plans, and personalize medical recommendations. In business, predictive analytics can enhance supply chain management and customer experiences. Despite these promising applications, the ethical risks of AI cannot be ignored. 

A key issue lies in the tension between innovation and responsibility. Companies and developers race to bring new AI solutions to market, often prioritizing speed and market dominance over careful ethical consideration. AI ethics frameworks have been introduced to address this, but they frequently lack enforceability, leaving ethical concerns in the hands of the very entities that stand to profit from AI’s widespread adoption.

Challenges of Ethical Implementation 

Ethical AI is easier to discuss than to implement. One of the greatest barriers is the lack of transparency in AI systems. Many machine learning models operate as “black boxes,” meaning their decision-making processes are difficult to interpret, even for their creators. This opacity complicates accountability, making it unclear who should be held responsible when AI systems make biased or harmful decisions.

Another persistent challenge is bias in AI models. AI systems are trained on historical data, which often contains existing biases related to race, gender, and socioeconomic status. Rather than eliminating human prejudice, AI has the potential to reinforce and amplify systemic inequalities. Addressing these biases requires a combination of diverse training datasets, algorithmic audits, and ongoing oversight—none of which are currently standard practices across industries. 
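To make the idea of an algorithmic audit concrete, here is a minimal sketch of one of its simplest checks: comparing a model’s favorable-outcome rates across demographic groups. The data, group labels, and the 0.8 threshold (the common “80% rule” of thumb) are illustrative assumptions, not a complete audit procedure.

```python
import pandas as pd

# Hypothetical model decisions (1 = favorable outcome), broken down
# by a protected attribute. All values here are invented.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Selection rate per group: the share of favorable outcomes.
rates = decisions.groupby("group")["selected"].mean()

# Disparate-impact ratio: the disadvantaged group's rate over the
# advantaged group's rate; the "80% rule" flags ratios below 0.8.
ratio = rates["B"] / rates["A"]
print(f"Selection rates:\n{rates}\nDisparate impact: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: flag for human review.")
```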

Additionally, economic incentives often clash with ethical considerations. The AI industry is dominated by tech giants that compete for market share, patents, and financial gains. Ethical concerns, such as privacy and fairness, are often secondary to profit-driven objectives. Without clear regulatory frameworks, companies can claim adherence to ethical principles while continuing practices that favor commercial success over social responsibility. 

Bridging the Gap Between Theory and Practice 

For AI ethics to move beyond discussion and into action, structural changes are necessary. Regulatory enforcement is one crucial step. Governments and international organizations must establish clear legal guidelines that define ethical AI development and deployment. Without binding regulations, AI ethics remains largely voluntary, dependent on corporate goodwill rather than enforceable standards. 

Another important approach is enhancing AI explainability. Researchers and developers need to prioritize the creation of AI systems that are interpretable and understandable. This includes designing models with built-in transparency measures, providing clear documentation on decision-making processes, and ensuring that AI-driven recommendations can be challenged when necessary. 
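As a small illustration of what such a transparency measure can look like in practice, the sketch below uses permutation importance, a model-agnostic technique available in scikit-learn, to surface which input features a model’s decisions actually depend on. The synthetic dataset and model choice are assumptions made purely for demonstration, not a complete explainability solution.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model's decisions rely on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Surfacing which features drive a decision is only a first step, but it gives affected users and auditors something specific to challenge.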

Additionally, inclusive AI development is crucial. Many AI development teams lack diversity not only in gender and ethnicity, but also in socioeconomic background, cultural perspective, and disciplinary expertise, which limits their ability to recognize and mitigate biases in their models. Ethical AI requires collaboration between technologists, ethicists, policymakers, and affected communities to ensure that AI serves a wider spectrum of societal needs.

Case Study: IBM’s Ethical AI Approach 

IBM (International Business Machines Corporation) has positioned itself as a leader in ethical AI by actively addressing issues of fairness, transparency, and accountability. Unlike many companies that focus solely on AI innovation, IBM has taken significant steps to integrate ethics into AI development through its AI Ethics Board, which oversees responsible AI deployment. 

A key contribution is the company’s focus on transparency and fairness tooling. IBM has developed the AI Fairness 360 toolkit, an open-source library designed to help developers detect and mitigate biases in machine learning models. By making these tools publicly available, IBM encourages greater transparency and accountability across the AI industry.
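For a flavor of how the toolkit is used, the sketch below follows AI Fairness 360’s typical workflow: wrap a labeled dataset, measure a fairness metric, then apply a mitigation algorithm such as reweighing. The toy data and group definitions are invented for illustration, and details of the aif360 API may vary between versions.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the favorable outcome. All values are invented.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.5, 0.2],
    "label": [1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0],
})

dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Statistical parity difference: the gap in favorable-outcome rates
# between groups (0 means parity).
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Before mitigation:", metric.statistical_parity_difference())

# Reweighing adjusts instance weights so outcomes balance across groups.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
reweighed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(reweighed,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print("After mitigation:", metric_after.statistical_parity_difference())
```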

The company has also taken a strong stance on regulatory engagement, advocating for clear legal frameworks to govern AI systems. Unlike some competitors that resist regulation, the company supports AI governance standards that ensure responsible development and deployment. 

A notable example of the firm’s commitment to ethical AI is its decision to exit the facial recognition market in 2020. Concerns over racial bias and mass surveillance led IBM to discontinue its facial recognition services, citing the technology’s potential for misuse in law enforcement and violations of civil rights. This decision demonstrated that companies could prioritize ethics over profitability, setting a precedent for responsible AI business practices. 

IBM’s approach to ethical AI implementation offers several key lessons. The company has demonstrated the importance of proactive governance by establishing an internal AI Ethics Board, ensuring that ethical considerations are embedded throughout the AI development process. To enhance transparency and mitigate bias, it has developed open-source tools such as AI Fairness 360, which help detect and reduce discriminatory patterns in machine learning models. Additionally, the corporation has been a strong advocate for regulatory frameworks, collaborating with policymakers to create enforceable standards that promote responsible AI governance. While these initiatives are not without challenges, they provide a blueprint for other organizations seeking to balance AI innovation with ethical responsibility.

A Call for Collective Responsibility 

The ethical challenges posed by AI are not solely the responsibility of developers or policymakers—society as a whole must engage in shaping the future of AI. Consumers should be informed about how AI affects their lives, researchers must prioritize ethical considerations in innovation, and governments must create legal structures that uphold fairness, transparency, and accountability. 

The debate around AI ethics is not simply about mitigating harm; it is about ensuring that technological progress aligns with human values. AI should not be left to develop unchecked under the assumption that efficiency outweighs ethical concerns. A proactive approach—one that prioritizes responsible AI practices over damage control—will be essential in defining how AI serves humanity in the years to come. 

Sources

  • Arbelaez Ossa, L., Lorenzini, G., Milford, S. R., Shaw, D., Elger, B. S., & Rost, M. (2024). Integrating ethics in AI development: A qualitative study. BMC Medical Ethics, 25(10). https://doi.org/10.1186/s12910-023-01000-0 
  • IBM. (2020). IBM CEO’s letter to Congress on facial recognition and responsible AI policy. IBM Newsroom. https://newsroom.ibm.com/2020-06-08-IBM-CEO-Arvind-Krishna-Issues-Letter-to-Congress-on-Racial-Justice-Reforms

Mara Blanz

Research Editor & Writer

AI: The Good, the Bad and the Ugly

Reading time: 12 minutes

The Intellectual and Environmental Ethics of Artificial Intelligence 

In recent years, artificial intelligence (AI) has had a pervasive impact on our lives: from assembling cars to determining which ads one is exposed to on social media. The emergence of generative AI as a new category of technological resources, however, has taken the world by storm, with OpenAI’s ChatGPT alone reaching 300 million weekly active users in December 2024 (Singh, 2025), and it has major implications not only for the environment but also for the unique human ability to envision and create. According to Gartner, AI-driven data analysis is set to account for more than 50% of all business analytics by 2025, while Forbes reports that AI-powered advertising tools can increase ROI by up to 30% compared to traditional methods.

In fact, as you read this sentence, generative AI programs may already be drafting emails, debugging your code, and even generating your dinner recipe, all simultaneously.

With the rise of AI usage reshaping the way one works and interacts, as well as the possible ascent of DeepSeek, which some project will surpass ChatGPT’s performance (Wiggers, 2025), clear benefits emerge: studies predict productivity improvements of up to 40% (MIT Sloan, 2023). Nevertheless, its groundbreaking promise to improve performance has lately been tempered by growing concerns that these intricate and mystifying systems may do more societal harm than economic good, namely regarding creative work and academic integrity (UNESCO, n.d.).

As people increasingly feel the rush of having more and more of their activities automated, while companies hurry to improve efficiency, one should stop to think and ask:

What are the trade-offs for such benefits?

Intellectual Property

“And your novel?”
“Oh, I put in my hand and rummage in the bran pie.” 
“That’s so wonderful. And it’s all different.” 
“Yes, I’m 20 people.” 

– Virginia Woolf and Lytton Strachey

 Retrieved from In the Margins: On the Pleasures of Reading and Writing 

Creation is a complex and often unappreciated process, in which the creative must give shape to wild, wandering, unstructured ideas – many times rummaging in the bran pie to see what comes out – to form a cohesive, original piece. The realization that this kind of work must be protected, so as to justify its high stakes, gave birth to the concept of intellectual property.

According to the World Intellectual Property Organization (WIPO), intellectual property (IP) refers to “creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce”. IP is protected by law through intellectual property rights (IPR), which encompass, among others, creators’ rights to be credited for their work, to uphold its integrity, and to prevent others from using it without permission. Generative AI challenges those pre-established rules.

By conjuring unseen imagery from prompts, creating adapted screenplays set in the world of your favorite novels, and even composing catchy songs about the dean of your school – always surprisingly fast – AI is increasingly taking its place at the creative’s desk. But there is a catch: GenAI does not produce genuinely original material. Rather, these tools are trained on massive amounts of data, from which they learn patterns that enable them to respond to prompts (MIT Sloan 2021).

This becomes problematic when one asks who owns the content used to train generative AI. The matter has already been brought up in the courtrooms: in Andersen v. Stability AI et al. (2022), various artists filed a class-action copyright infringement lawsuit against several AI organizations, claiming unauthorized use of their work for AI training (Harvard Business Review 2023). Ultimately, the courts’ decisions will shape the interpretation of the fair use doctrine.

Artists around the world are also starting to take matters into their own hands. One of the most impactful cases traces back to the Writers Guild of America strike that marked 2023. The event culminated in an agreement which, among other things, laid the groundwork for rules on artificial intelligence use. Although writers may use AI tools in their work, companies are prohibited from forcing them to do so – which would likely result in lower-paying contracts. More importantly, now “the WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law” (Vox 2023).

AI’s Role in Academic Integrity 

“One has to be honest in one’s work, acknowledge others’ work properly, and give credit where one has used other people’s ideas or data.”

– Campbell & Waddington, 2024 

Academic integrity is a critical component of education and research in today’s rapidly evolving academic landscape, as it reflects the value of the qualifications offered by an institution as well as the ethical conduct of students. It concerns the collective effort of students and teachers to respect intellectual property and uphold moral and ethical standards in academic work. According to the European Network for Academic Integrity (ENAI), the concept includes “compliance with ethical and professional principles, standards, practices and consistent system of values that serves as guidance for making decisions and taking action in education, research, and scholarship”.

With the growing presence of generative AI, students and academic researchers are supported in various aspects of their work, including data analysis, decision-making, and writing. AI has, in this sense, revolutionized the academic world, offering unmatched assistance. Nevertheless, its rapid integration into the sector, together with its inability to understand and produce authentic scholarly work, raises concerns about students’ critical thinking capacities, plagiarism, and overall academic integrity.

In fact, a study of 5,894 students across Swedish universities highlights a growing dependency on AI tools, with over 50% responding positively to the use of chatbots, and over a third of students reporting regular reliance on large language models (LLMs) such as ChatGPT in their education (Malmström et al. 2023). As AI tools become progressively more user-friendly, barriers to their wide adoption fall significantly. Notably, ChatGPT and similar AI applications can serve as self-learning tools, assisting students in acquiring information, answering questions, and resolving problems instantaneously, thereby enriching learning experiences and offering personalized support.

However, despite its potential to enhance academic work, perceptions of its misuse for academic shortcuts remain mixed (Schei et al. 2024). The debate extends into ethical territory, as AI-facilitated plagiarism and academic misconduct become increasingly prevalent, possibly encouraging a culture of intellectual laziness and plagiarism practices such as mosaic plagiarism: taking phrases from a source without crediting them, or copying another person’s ideas and rephrasing them with synonymous wording without proper credit (Farazouli et al. 2023).

Data sets used by LLMs often rely on information collected through data scraping from third-party websites and published work. While this practice is not necessarily considered misconduct, the data may be obtained without explicit consent from its sources, meaning that one’s AI-generated work or writing material may contain uncredited phrases and ideas. One example is the lawsuit filed against OpenAI by The New York Times over copyright infringement and the unauthorized use of published content to train AI models (The New York Times 2023). Furthermore, critics point out generative AI’s technical limitations and the biases inherited from its training data, as it may produce incorrect or outdated information, raising broader reliability concerns.
As AI becomes more deeply integrated into academia, its misuse and the risk of over-reliance, absent proper education, are prominent causes for concern.

Environmental Impact and Water Consumption  

Another factor to account for when addressing AI usage and reliance is its environmental impact, which is not often considered by end-users.  

As worldwide corporate AI investment has grown exponentially in recent years, from $12.75B in 2015 to $91.9B in 2022 (Statista 2024), so has its impact on water consumption, since AI models (especially large ones such as GPT-4) require significant energy and water resources to function.

Figure: Global total corporate AI investment from 2015 to 2022 (Statista)

When assessing water consumption in data centers, one should account both for the direct “onsite” use of water to cool servers and for the water used indirectly to generate the electricity they consume (OECD.AI n.d.).

Furthermore, data centers require fresh water for cooling, whether through cooling towers, liquid cooling, or air conditioning, while the power plants supplying their electricity also need large amounts of water. Thus, training and running AI models can consume millions of liters, and even small AI queries use meaningful amounts: combined, data centers are estimated to consume 1.8 to 12 liters of water per kWh of energy used.
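To get a feel for the scale these figures imply, the short calculation below applies the 1.8–12 liters-per-kWh range cited above to a few workloads; the energy values are invented placeholders for illustration, not measurements of any real model.

```python
# Water intensity of data-center energy use, per the
# 1.8-12 L/kWh range cited above.
WATER_L_PER_KWH = (1.8, 12.0)

def water_range_liters(energy_kwh: float) -> tuple[float, float]:
    """Low/high estimate of water consumed by a given workload."""
    low, high = WATER_L_PER_KWH
    return energy_kwh * low, energy_kwh * high

# Hypothetical workloads; the kWh figures are illustrative guesses.
for name, kwh in [("one chatbot query", 0.003),
                  ("a day of inference at scale", 50_000),
                  ("a large training run", 1_000_000)]:
    low, high = water_range_liters(kwh)
    print(f"{name}: {low:,.2f} to {high:,.2f} liters")
```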

AI’s water usage is thereby a growing concern, with its water demand outpacing gains in energy efficiency and projected to reach up to 6.6 billion cubic meters by 2027 – approximately six times Denmark’s annual water withdrawal (Li et al. 2025).

The hazards that AI poses to the environment go far beyond the hydrological issues discussed above.

A study by Strubell et al. (2020) demonstrated that the carbon dioxide emissions associated with training a single common type of natural language processing (NLP) model greatly surpass those of familiar, everyday activities: training an AI model under such conditions yields approximately 600,000 lb of carbon dioxide emissions, whereas using a car for its entire lifetime produces roughly one fifth of that amount.

Of course, there is also concern about the amount of energy used by artificial intelligence facilities. In this regard, Alex De Vries (2023) estimated that, by 2027, the AI industry could be consuming between 85 and 134 terawatt-hours (TWh) annually, comparable to the energy used by a small country such as the Netherlands. Additionally, GenAI tools may use nearly 33 times more energy to carry out a task than task-specific software would (World Economic Forum 2024). What is more, the extraction of the natural resources that go into AI hardware components is a further source of worry: in an interview, Yale Associate Professor Yuan Yao explains that the supply chain for these parts involves activities such as mining and metal production, which can lead to soil erosion and pollution.

Wang et al. (2024) further suggest that the amount of e-waste (discarded electrical or electronic devices) generated could total 1.2–5.0 million tons by 2030, depending on the pace of the industry’s growth. According to the World Health Organization, improperly recycled e-waste can release up to a thousand different chemical substances, including known neurotoxicants such as lead.

As one becomes aware of the ethical concerns that come with AI development and use, we can begin to address these issues: both by reflecting on policies that can mitigate the harms of such groundbreaking technology and by aiming to make more considerate, sustainable use of GenAI.


Madalena Martinho do Rosário

External VP

Mª Francisca Pereira

President

Sources:

Ferrante, Elena. 2022. In the Margins: On the Pleasures of Reading and Writing. UK: Europa Editions. 

Appel, Gila, Juliana Neelbauer, and David A. Schweidel. 2023. “Generative AI Has an Intellectual Property Problem”. Harvard Business Review. Accessed January 30, 2025. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem  

Brown, Sara. 2021. “Machine learning, explained”. MIT Sloan. Accessed January 31, 2025. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained  

Wilkinson, Alissa, and Emily Stewart. 2023. “The Hollywood writers’ strike is over — and they won big”. Vox. Accessed January 30, 2025. https://www.vox.com/culture/2023/9/24/23888673/wga-strike-end-sag-aftra-contract

Kanungo, Alokya. 2023. “The Green Dilemma: Can AI Fulfil Its Potential Without Harming the Environment?”. Earth.org. Accessed January 31, 2025. https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/

Strubell, Emma, Ananya Ganesh, and Andrew McCallum. 2020. Energy and Policy Considerations for Deep Learning in NLP. Annual Meeting of the Association for Computational Linguistics. https://doi.org/10.48550/arXiv.1906.02243 

Kemene, Eleni, Bart Valkhof, and Thapelo Tladi. 2024. “AI and energy: Will AI help reduce emissions or increase demand? Here’s what to know”. World Economic Forum. Accessed February 1, 2025. https://www.weforum.org/stories/2024/07/generative-ai-energy-emissions/  

De Vries, Alex. 2023. “The growing energy footprint of artificial intelligence.” Joule 7(10): 2191-2194. https://doi.org/10.1016/j.joule.2023.09.004 

YSE News. 2024. “Can We Mitigate AI’s Environmental Impacts?”. Accessed January 30, 2025. https://environment.yale.edu/news/article/can-we-mitigate-ais-environmental-impacts  

Wang, Peng, Ling-Yu Zhang, Asaf Tzachor, and Wei-Qiang Chen. 2024. “E-waste challenges of generative artificial intelligence”. Nature Computational Science 4: 818–823. https://doi.org/10.1038/s43588-024-00712-6

World Health Organization. 2024. “Electronic waste (e-waste)”. WHO. Accessed February 1, 2025. https://www.who.int/news-room/fact-sheets/detail/electronic-waste-(e-waste)  

Singh, Shubham. 2025. “Number of ChatGPT Users (February 2025).” DemandSage. January 31, 2025. https://www.demandsage.com/chatgpt-statistics/

Wiggers, Kyle. 2025. “DeepSeek: Everything You Need to Know About the AI Chatbot App.” TechCrunch, January 31, 2025. https://techcrunch.com/2025/01/28/deepseek-everything-you-need-to-know-about-the-ai-chatbot-app/

“How Generative AI Can Boost Highly Skilled Workers’ Productivity | MIT Sloan.” 2023. MIT Sloan. October 19, 2023. https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-can-boost-highly-skilled-workers-productivity

UNESCO. n.d. “Recommendation on the Ethics of Artificial Intelligence”. UNESCO. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

Campbell, Caroline, and Lorna Waddington. 2024. “Academic Integrity Strategies: Student Insights.” Journal of Academic Ethics 22 (1): 33–50. https://doi.org/10.1007/s10805-024-09510-1

European Network for Academic Integrity (ENAI). n.d. “Glossary.” https://www.academicintegrity.eu/wp/glossary/

Malmström, H., Stöhr, C., & Ou, A. W. (2023). Chatbots and other AI for learning: A survey of use and views among university students in Sweden. (Chalmers Studies in Communication and Learning in Higher Education 2023:1) https://doi.org/10.17196/cls.csclhe/2023/01 

Schei, Odin Monrad, Anja Møgelvang, and Kristine Ludvigsen. 2024. “Perceptions and Use of AI Chatbots Among Students in Higher Education: A Scoping Review of Empirical Studies.” Education Sciences 14 (8): 922. https://doi.org/10.3390/educsci14080922

Farazouli, Alexandra, Teresa Cerratto-Pargman, Klara Bolander-Laksov, and Cormac McGrath. 2023. “Hello GPT! Goodbye Home Examination? An Exploratory Study of AI Chatbots Impact on University Teachers’ Assessment Practices.” Assessment & Evaluation in Higher Education 49 (3): 363–75. https://doi.org/10.1080/02602938.2023.2241676

The New York Times. 2023. “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work.” December 27, 2023. https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html

“Total Global AI Investment 2015-2022 | Statista.” 2024. Statista. August 12, 2024. https://www.statista.com/statistics/941137/ai-investment-and-funding-worldwide/. 

“How Much Water Does AI Consume? The Public Deserves to Know – OECD.AI.” n.d. https://oecd.ai/en/wonk/how-much-water-does-ai-consume. 

Li, Pengfei, Jianyi Yang, Mohammad A. Islam, and Shaolei Ren. 2025. “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models.” https://arxiv.org/pdf/2304.03271

Estoril Conferences 24th of October – Afternoon Sessions 

Reading time: 9 minutes

Policy Talks

What if businesses can be part of the solution? 

Introduced by Nova SBE’s own Miguel Ferreira, London Business School professor Alex Edmans delivered a speech exploring the bewitching idea that companies that build both purpose and profit into their objectives perform better in the long run. The allure of this keynote goes beyond our desire for the premise to be true, as Professor Edmans’s academic work focuses largely on this relationship between profit and purpose. In this sense, companies may now have the chance not only to achieve financial success but also to contribute directly to improving society. The debate is particularly relevant in the context of our university, which every year graduates thousands of students who will go on to take part in the next generation of successful businesses.

Artificial Intelligence and Technology Talks

Soon after, the audience had the delight of following the launch of the new Digital Data and Design Institute, founded by Nova SBE in partnership with Nova Medical School and the Harvard Digital Data Design Institute (D^3). The presentation included an amusing theatrical sketch in which Nova SBE’s dean, Pedro Oliveira, participated, as well as a video message from Karim Lakhani, Chair and Co-Founder of the Digital, Data, and Design Institute at Harvard.

The institute aims to help companies navigate an environment in which new technologies are constantly emerging and to integrate them into their business practices, combining academic research with practical applications.

And in case you are wondering where the institute’s facilities will be located: they will encompass the former television and sofa area near Pingo Doce, plus the space right above it, in the KPMG galleries.

So, what can we expect of this brand-new AI-driven world?

To open this discussion, we were treated to the pop-star entrance of Derek Ali, a mixing engineer who has worked with Kendrick Lamar, Childish Gambino, Cardi B, SZA, and Brockhampton, among many others. Ali discussed with Jen Stave the inevitable question of what AI’s place in the landscape of creative work will be. Should creatives be worried about losing their jobs? Should parents insist that children learn the craft before using AI? How will the industry change in five years?

To showcase the key role AI may come to play in music production, Derek Ali created fully AI-generated music demos from prompts. The audience got to enjoy a fado song about Portugal’s President Marcelo Rebelo de Sousa, “Canto ao Nosso Presidente”, and a catchy pop song about Nova SBE’s dean, Pedro Oliveira. AI might unveil new realms of creation by allowing artists to access inspiration more easily.

The late afternoon’s panel discussion, “AI and the Future of Talent”, moderated by Nikolaj Malchow-Moller, focused on the implications of artificial intelligence for the future of talent, labor markets, and organizational structures in industry and society. Panelists included Francisco Veloso, Rembrand M. Koning, and Matthew Prince. A key issue was the displacement and possible unemployment of experienced workers. The panel noted that, historically, technological advancements have tended to create jobs rather than eliminate them. Contrary to fears, for example, the introduction of ATMs did not reduce employment: the number of bank workers actually increased, as they transitioned to higher-level, more specialized roles. This suggests that although AI, like past emerging technologies, may make certain jobs obsolete, it will also create new, more specialized, and perhaps higher-paying ones. Matthew Prince further sought to deconstruct the fear surrounding AI, highlighting how it is often fabricated by the very people developing and implementing the technology, in the hope of dissuading new entrants into the industry and perhaps encouraging regulatory barriers, so as to protect their competitive advantage.

Gender differences in AI adoption were also explored, as women, in general, are less likely to engage with AI. Some argued that this might allow women to continue developing valuable skills like rational and logical thinking, while others worried that men may gain a competitive edge from their greater familiarity with AI. Furthermore, AI can assist the education sector by facilitating tasks traditionally handled by teaching assistants (TAs), such as preparing class notes and assignments. This could reshape the structure of academic institutions, particularly the responsibilities of faculty and support staff. Rather than replacing human workers entirely, AI may free them to focus on other areas, such as developing soft skills. Indeed, AI should be viewed as a tool that complements human labor, not as a replacement for critical thinking or decision-making. While AI can streamline technical analysis, it cannot substitute for judgment. Its growing presence will put more pressure on managers to develop skills in critical thinking, judgment, and interpretation: capabilities that cannot yet be automated. Business schools and organizations need to cultivate these skillsets to ensure that workers can effectively navigate AI-integrated environments.

The debate also touched on other topics: Will AI accelerate inequality? How is global competition in AI development being handled? The panel emphasized that allowing the USA or China to create AI monopolies might erode diverse cultural perspectives, producing a more homogenized global landscape in which the American or Chinese way of thinking prevails. It further called for the development of a European AI embodying a “European sensitivity”, so as to adapt these tools to the European reality and way of thinking.

The next guest on the theme of AI was Robert Seamans, professor at NYU, who discussed the expected transformative impact of artificial intelligence, emphasizing how technological advancements drive economic growth. He drew parallels between these emerging innovations and earlier technologies such as railways, steel production, telephones, and motor vehicles, all of which played a significant role in economic expansion. However, Seamans was careful to note that new technologies are not simply “plug and play”: they require time to become productive and to achieve widespread adoption. He highlighted complementary assets, the additional resources and capabilities needed to fully leverage new technologies, as a critical factor in this process.

Seamans provided an example from Cleveland, Ohio, where robots are used to manufacture and sell metal parts to larger firms. He pointed out that the effectiveness of these robots depends heavily on specific complementary assets. For instance, the grip at the end of a robotic arm must be precisely designed for the task at hand. Seamans explained that while a robotic arm might cost $30,000, the necessary complementary assets, like specialized grips, can require an additional investment of $60,000. Furthermore, achieving maximum productivity involves trial and error, as it takes time to determine the best combination of complementary assets. He argued that this investment of time and resources applies equally to artificial intelligence technologies. In this sense, human capital plays a particularly crucial role in AI adoption: workers who understand both their industry and the technical aspects of AI are best positioned to leverage it effectively. Seamans referenced research on AI occupational exposure scores, which measure how different jobs are affected by AI; these scores show clear correlations with demographic factors such as salary, education, and creativity, with higher salaries, education levels, and creativity all associated with greater exposure to generative AI. He therefore strongly encouraged firms to invest in their workers: the long-term success of AI and other major innovations will depend not just on the technologies themselves, but on the people who understand how to apply them.

Next, Michael Sheldrick, author of From Ideas to Impact, delivered a thought-provoking speech on how to foster active citizenship and drive social change in the age of AI. He began by discussing ways to build engaged global citizenship, highlighting initiatives he has taken part in, such as the Global Citizen app, which encourages users to take meaningful actions to support communities and has fostered projects in Africa as well as a notable 2021 initiative in inland Brazil. Sheldrick drew an interesting comparison between social media’s role in the past and the influence of generative AI today: just as social media transformed communication and engagement, AI is expected to revolutionize industries and reshape societal structures.

Shifting his focus to the music industry, Sheldrick noted its rapid growth, particularly in Africa, where a flourishing music scene is creating numerous job opportunities. One practical example is Kendrick Lamar’s possible first tour in Africa, which underlines the broader cultural and economic impact of such events. He then put forth the belief that everyone can do something, that everyone has a role to play, and that we need to work together to achieve meaningful results. He calls this approach “policy entrepreneurship”: leaders across sectors collaborating to create policies that harness AI’s potential while addressing its challenges.

Pedro Gardete, President of the Scientific Council of Nova SBE, closed the AI section of the event by expressing gratitude to everyone involved. He shared a story about a strategic vision exercise he conducted with students in a focus group, in which he asked them what they wanted most from university. Many replied that they wanted to learn what companies would want from them, that is, a real-world application of the academic knowledge they acquire at university. Gardete highlighted how discussing and introducing AI in the academic environment sets Nova apart as a leading pioneer in applying new technology. He then proposed an exercise to the audience, who were asked to share with their neighbors a story about a time they felt supported, and to ask themselves whether AI could have replaced that same support, sparking reflection on the irreplaceable nature of empathy and human connection, even in an increasingly automated world.

Closing session

In the closing session of the event, renowned football player Pepe took center stage to discuss his life and career. While originally planned as a traditional panel session with Laurinda Alves, Executive Director of the Estoril Conferences, the format shifted into something more spontaneous: Pepe and the moderator invited the children attending the conference onto the stage, giving them the opportunity to ask their idol questions directly.

The children’s questions covered key moments in Pepe’s career and personal life, asking about his early years, including his arrival in Portugal, and his experiences playing for top clubs, such as his triumph in the Champions League, which the athlete recalled with pride and joy. Throughout the unconventional interview, Pepe also opened up about his mistakes and the lessons learned over his career, sharing with the children that success is not just about winning, but also about resilience, personal growth, and learning from failure. On a final note, the star was asked what he plans for his future, now that he is retiring from the playing field. While he did not give specific details, Pepe spoke about his desire to stay connected to football in some capacity, whether through coaching, mentoring young players, or other endeavors. His message to the young audience was clear: no matter what comes next, it’s important to stay passionate, keep learning, and remain humble in the pursuit of one’s goals.




Mª Francisca Pereira

Mafalda Carvalho