

Existential Threats and Risks to All


MARCH ARTICLES
Why the Rules-Based International Order is Indispensable

Thomas Reuter, EXTRA Chair

The International Rules-Based Order (RBIO) is a vast system of formal laws, rules, agreements, and procedures, as well as informal norms, conventions, and tacit understandings that together regulate how the nearly 200 sovereign nations on this small and shrinking planet interact with one another. These interactions have become ever more substantial, frequent, and faster in the wake of an enduring trade- and technology-driven globalisation process.

The RBIO has become all the more indispensable not only for negotiating agreements in the domains of politics, trade, and conventional security, but also for all joint efforts to protect humanity from existential risks that do not recognize national boundaries. Notwithstanding the present sense of rupture, as described by Carney, I therefore argue that, while the RBIO has always been subject to challenges and will need to change continuously to accommodate new developments, such as the rise of AI or of new powers, it will be with us forever in one form or another.

The RBIO is often associated with the post-WW2 effort to build a more stable and peaceful world, and with the rupture of the post-war world order. This is a narrow and misleading perspective. In fact, an RBIO of some kind or another has been around for centuries. Global integration has a very long history indeed; it has undergone development and expansion, experiencing many ups and downs.

Nevertheless, the gradual internationalisation of legal frameworks is an undeniable trend. Today, in the 21st century, global integration has become so profound that it is unstoppable, placing it squarely outside the realm of ideology-driven choices. Stopping this trend is only imaginable in the context of a cataclysmic global conflagration that (I presume) nobody wants. Meanwhile, wherever there is growth in the scope of human interaction, rules and laws necessarily and inevitably follow to optimise the benefits of cooperation in emerging social fields.

Isolationism is, therefore, an illusory cloud-castle project that has no place in the real world. Even a relatively isolated country such as North Korea finds it can exist only in interdependence with other nations, unless it were willing to embrace a prehistoric lifestyle, in which case it would quickly be absorbed by its neighbours.

The political choice for modern nation-states is much more restricted. A nation may either embrace and enhance global integration or seek to limit it. All nations seek to game the rules, to the extent possible, to further their national interests, and some may cheat or threaten. But this does not negate the game of cooperative engagement; it is a regular part of playing it. What isolationists propose at a practical level, therefore, is rather a more limited engagement with other nations, or an engagement on radically self-serving terms.

Were this attitude to spread, as may indeed be happening, the RBIO would be disrupted, becoming a more adversarial system than it has been, and in extremis giving way to coercion or open violence. This is a suboptimal and dangerous state of affairs. No human society has ever managed to persist in the absence of law for any length of time. We are a species whose evolutionary success was built on the more highly sophisticated forms of cooperation at our disposal, compared to other species.

The increasing scale and scope of social spheres of interdependence and cooperation have been the driving force in the civilizing process, as Norbert Elias and countless other social historians have shown in great detail. Human language is perhaps the most fundamental expression of human cooperativeness, it has been argued, while the law is the most codified and prescriptive, and also the most subject to contestation.

There have been many periods of relative lawlessness and chaos that bear testimony to these contestations. Most often, lawlessness arose from major social grievances and associated attempts to challenge an order no longer seen as fair, one that violated accepted norms of social interaction. Conversely, the motivation for defending or challenging the existing legal code can also be a perceived loss of privilege; in short, a desire to keep or make the law (more) unjust.

At all times, of course, there is a minority that seeks to profit from breaking the social contract and the law, engaging in organised crime and adopting a gangster approach to life. US Defence Secretary Hegseth, in his introduction to the NDS, turns on its head the reality so thoroughly explored and described by social science. He chastises all past administrations, Republican and Democratic alike, for upholding “such cloud-castle abstractions as the rules-based international order.”

The RBIO, however, far from being a fantasy, is an essential utility for the world and indeed a practical asset even to the US military, as one might expect a person in Hegseth’s position to know. Along with delivering many civilian goods, it helps reduce military threats by limiting great-power rivalry, regulates interactions among nations’ armed forces, imposes restrictions on weapons of mass destruction, helps avoid accidents and unintended escalation, protects civilians in war, and provides a legal framework for managing disputes.

Consider just one example. For some 80 years now, the US has supported arms control, especially of nuclear weapons, to limit the threats to America and its allies. Without such agreements, the US military would have to deal with a far more difficult threat environment, with unacceptably high risk levels that are now ever more obvious, even to the average observer of world affairs. The multilateral Nuclear Non-Proliferation Treaty (NPT) has been the cornerstone, along with other agreements restricting chemical and biological weapons, missile development, and arms transfers. All these agreements are part of the RBIO.

Carney’s assessment of RBIO rupture specifically targets the NDS policy statement and the spate of subsequent economic and military threats and acts of aggression by the world’s primary hegemon, tearing up the legal framework that the US itself, under President Roosevelt, chiefly designed. This order is imperfect and has been criticised for its injustices and for shoring up Western privileges, often with the help of double standards, as Carney also notes. It is an order that nevertheless did manage to avert another global war and to enable cooperation to a degree sufficient to produce a time of unprecedented affluence and rising living standards in many, if not all, parts of the world.

The justifying narrative of grievance put forward by the US political movement that forms the popular base supporting these aggressive policies is one of American decline, of privileges lost, and a determination to retain or regain such privileges by any means, fair or foul. Behind this populist grievance narrative hides another agenda, backed by a plutocratic establishment that benefits from US exceptionalism and fears that the rise of other major powers, such as China, and the joint actions of alliances, such as BRICS, to create a more balanced RBIO, could reduce their profitable system of dominance of the global economy. Ironically, to rising powers and alliances, the rupture presents a golden opportunity to achieve precisely that.

One strategic calculus that has been debated for many years is the option to seize a shrinking window of opportunity to utilise a still-superior capacity for forward power projection, based on an unparalleled network of global military bases. Until recently, while certainly pausing to consider aggressive options such as an attack on Iran, no US president has thought this kind of gamble worthwhile, instead relying on Western alliance networks to contain the perceived trend toward a more balanced system of global power-sharing.

As renowned political analysts Jeffrey Sachs and John Mearsheimer have argued, the decision to attack Iran seems to reflect a general breakdown of governance and due process across all domains within the US, leading to dysfunctions such as the sidelining of sage advice, even from the government’s own institutions, creating an atmosphere of unchecked hubris and fostering the echo-chamber-driven, fictitious belief that violence will solve the nation’s problems.

The evidence so far suggests that the war with Iran is not going as its instigators were hoping. Forward projection, it seems, while still full of lethal consequences and disruptive power, is not sustainable, even against a middle power that has been crippled by sanctions for half a century yet remains determined to resist.

Faith in the delusion that ‘brutality pays’ is subject to contagion, unfortunately, by way of imitation among adversaries or coerced compliance among vassals. It subliminally erodes trust in the law worldwide, thereby corroding social cohesion and our ability to cooperate. From an existential-risk perspective, this is an extremely dangerous attack on the corpus juris gentium, ‘the entirety of the law of nations’. It calls for a global alliance against gangsterism and in urgent defence of the rule of law. The law needs to be upheld because it keeps us alive on so many levels.

The Global Peace Offensive: A common framework for roll-out

Donato Kiniger Passigli and Charlotte Ørnemark, Global Peace Offensive Center

“I am not only a pacifist but a militant pacifist. I am willing to fight for peace.” (Albert Einstein, interview with G. S. Viereck, January 1931)

This article elaborates on the Global Peace Offensive (GPO), a proactive peacebuilding initiative designed to address rising global conflict. Spearheaded by the World Academy of Art and Science (WAAS), the European Academy of Sciences and Arts (EASA), and Alma Mater Europaea (AMEU), with support from the Club of Rome, the GPO aims to empower academic and civil society actors to build peace through de-escalation, inclusive engagement, and the application of local knowledge. The establishment of the first Global Peace Offensive Centre (GPOC) in Maribor, Slovenia, marks a key step. The article outlines the GPO’s framework and implementation strategy, drawing on insights from previous Cadmus Journal publications by Kiniger Passigli (2024, 2025) that highlight the potential of dialogue and symbolic gestures to de-escalate conflicts.

Core Principles and Strategic Imperatives

At the heart of the Global Peace Offensive lie three fundamental principles that guide its strategic imperatives:

  1. Localized Initiatives Leading to De-escalation: The GPO prioritizes focused, incremental progress to address specific problems through tailored solutions. This approach emphasizes tangible outcomes at the community level, fostering greater buy-in and ownership compared to broad, top-down agreements that often lack local anchoring. This approach is rooted in the belief that lasting peace is built from the ground up.
  2. Partnership Development and Deepened Trust-Building: Recognizing that peacebuilding requires a collaborative ecosystem, the GPO emphasizes developing a vibrant network of institutional partners and pioneering individuals. This network fosters a sense of shared purpose. It facilitates learning at all levels, uniting academics, grassroots organizations, civil society groups, local government entities, and other socio-economic actors in a collective effort to build momentum for change. Effective communication and inter-communal outreach are critical components.
  3. Iterative Processes of Dialogue Facilitation: The GPO champions iterative dialogue processes that leverage cultural, scientific, and educational diplomacy alongside traditional diplomatic channels to achieve lasting peace. This commitment to continuous engagement and learning ensures that the peacebuilding process remains adaptive and responsive to evolving needs.

The Need for a Common Diagnostic Framework

The authors make a compelling case for a shared diagnostic framework among those involved in the GPO. While embracing diverse tools and interpretations, a common framework offers several crucial advantages:

  • Facilitates Broader Participation: By providing a standardized yet adaptable methodology, the framework lowers barriers to engagement, enabling a wider range of individuals and organizations to contribute meaningfully without reinventing the wheel.
  • Ensures Adherence to Shared Fundamental Principles: The framework serves as a safeguard, ensuring that all GPO activities adhere to core values of integrity, independence, and impartiality, maintaining a cohesive and ethical vision.
  • Enhances Comparability and Peer Learning: A shared framework enables meaningful comparison of lessons learned across different contexts, facilitating peer learning and promoting continuous improvement within the GPO network. This aligns with the goals of platforms like the OECD’s Effective Institutions Platform (OECD/EIP, 2023).

Key Components of the Diagnostic Approach

The diagnostic approach is structured around four key components:

  1. Understanding the ‘Here and Now’ of Conflict: This component uses systems thinking to analyse the current conflict landscape, identifying interconnected components, dynamics, and patterns that perpetuate it. It involves gathering factual information, mapping narratives, and pinpointing potential areas for tension reduction.
  2. Focusing on Positive Pivot Points for Localized Action: This involves identifying promising opportunities to build trust and foster de-escalation through new relational connections. The GPO seeks to catalyse citizen diplomacy and grassroots peacebuilding initiatives.
  3. Encouraging Wide Uptake, Use, and Interpretation of Findings: This component emphasizes transparency and accessibility. By making information publicly available, the GPO encourages diverse perspectives, validation, and the utilization of findings across various sectors.
  4. Engaging Academia, Civil Society, and Policymakers: By bridging the gap between research and practice, the GPO seeks to elevate the impact of civic action and translate lessons learned into effective policies.

Appreciative Inquiry to Harness Positive Micro Solutions

The GPO employs Appreciative Inquiry, a methodology that focuses on strengths, successes, values, and hopes to identify positive pivot points and build momentum for constructive peacebuilding efforts. Appreciative Inquiry, as described by Whitney and Trosten-Bloom (2003), operates on the assumption that “questions and dialogue about strengths, successes, values, hopes, and dreams are themselves transformational” (p. 1).

Breaking Cycles and Prioritizing Human Security

The article stresses the importance of breaking free from cycles of military escalation and prioritizing civil peace over liberal peace in conflict resolution. Civil peace focuses on stabilizing society and preventing violence, thereby providing a foundation for broader political and economic reforms. The authors advocate a shift towards a comprehensive understanding of human security that addresses the underlying causes of conflict, such as economic despair, social injustice, and political oppression, echoing Boutros Boutros-Ghali’s call for an integrated approach to human security (1992).

Conclusion: A Call to Action

The Global Peace Offensive is a crucial and timely initiative to foster sustainable peace in a world facing increasing conflict and division. The GPO aims to build a more peaceful and just world. The authors emphasize the urgent need to translate these principles into tangible and measurable peacebuilding outcomes.

References

  • Boutros-Ghali, B. (1992). An Agenda for Peace. United Nations.
  • Kiniger Passigli, D. (2024). Time for a Peace Offensive. Cadmus Journal.
  • Kiniger Passigli, D. (2025). The Peace Offensive: A New Strategic Framework for Conflict Resolution. Cadmus Journal.
  • OECD/EIP (2023). How to Guide on Peer-to-Peer Learning. EIP, OECD.
  • Whitney, D., & Trosten-Bloom, A. (2003). The Power of Appreciative Inquiry. Berrett-Koehler.

FEBRUARY ARTICLES
Digital Extractivism: How AI Centers Strain Water Resources

Ana Maria Paraschiv, EXTRA Communication and Networking Specialist

Extractivism today goes beyond mining to include the overuse of water for AI data centers. Intensive water consumption for cooling reduces availability for agriculture and human use, creating local vulnerabilities. This article explores the hidden social and ecological costs.

We live in times when the world we once knew no longer exists, or is on the verge of disappearing. Uncertainty and volatility are the new norms. But we, as humans, must be acutely aware of costs, and anticipate them, before launching technologies. We cannot plunge in headfirst and see what happens. Yet this is occurring with data centers. Because of the race for AI dominance, major powers are not weighing the consequences or the resources at stake. With consciousness and foresight, we could anticipate impacts and incorporate circular solutions from the beginning.

Today’s digital revolution has promised unprecedented opportunities. Yet it comes with invisible costs. Among these, perhaps the most overlooked is water: the hidden water sustaining servers, data centers, and AI models. Every prompt typed and every model trained draws upon this resource. Large data centers consume billions of liters yearly; projections suggest global demand could reach 1.2 trillion liters by 2030. These are withdrawals from systems upon which human communities depend, often in regions facing scarcity, drought, or social tension.

Key figures: 1.2 trillion liters by 2030 • 420+ water-related conflicts in 2024 • 125 million liters saved per Microsoft zero-water facility
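
As a rough sanity check on the figures quoted above, a back-of-envelope calculation can relate them. This uses only the article's own numbers; it is illustrative arithmetic, not a hydrological model.

```python
# Figures taken from the article (illustrative only).
PROJECTED_DEMAND_L = 1.2e12   # projected global data-center water demand by 2030, liters/year
SAVED_PER_FACILITY_L = 125e6  # annual savings of one Microsoft zero-water facility, liters

# How many zero-water-facility-equivalents of savings would offset
# the entire projected demand?
facilities_needed = PROJECTED_DEMAND_L / SAVED_PER_FACILITY_L
print(f"{facilities_needed:,.0f} facility-equivalents")  # prints "9,600 facility-equivalents"
```

The point of the estimate: roughly 9,600 facility-scale interventions would be needed to neutralize projected 2030 demand, which suggests why the article argues for designing water circularity in from the start rather than retrofitting.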

Geopolitics and the Race Without Reflection

Water scarcity is increasingly a matter of social and geopolitical security. In Gaza, the West Bank, Ukraine, Africa, and Spain, water infrastructure attacks and disputes have erupted. Globally, more than 420 water-related conflicts were reported in 2024. The invisible demands of digital infrastructure collide with the visible realities of human survival.

The current AI arms race exemplifies a dangerous pattern: prioritizing competitive advantage over comprehensive assessment. This is a structural inevitability of competition. When the imperative is to build faster, considerations of water scarcity become externalities: problems to address later, if at all. Yet imagine if there were no race: we could anticipate impacts before they materialize, design data centers with circular water systems from the outset, conduct regional water stress assessments before breaking ground, and establish international standards that make sustainability a shared baseline rather than a competitive disadvantage.

“If there were no race, we could incorporate circular solutions from the beginning, rather than depleting resources and scrambling for fixes when scarcity becomes undeniable.”

Practical Solutions: From Crisis to Coexistence

Solutions are practical and already underway. Circularity must become standard: hardware and water must be reused, blowdown treated and returned, rainwater captured, and treated wastewater leveraged. The principle of “One Water” treats water as circular rather than disposable: a moral imperative about justice and equity, not just efficiency.

Implemented Solutions:

  • Zero-water cooling systems: Microsoft facilities in Arizona and Wisconsin eliminate freshwater dependency entirely, saving 125 million liters/year per center
  • Blowdown water reuse: Genesis Water Tech achieves 70-95% reuse through advanced treatment systems (IDE Tech, Genesis Water Tech)
  • Treated wastewater: Utilizing treated municipal wastewater as alternative source for cooling, reducing pressure on freshwater
  • Rainwater harvesting: Integration of precipitation collection systems into data center design
  • AI for optimization: Using artificial intelligence for real-time monitoring and reduction of water consumption
  • Strategic planning: UK Government integrates water strategies into digital infrastructure planning, assessing regional water stress before approval

  • Circular economy: Microsoft implements hardware reuse and refurbishment practices to reduce total footprint
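
To make the blowdown-reuse figures concrete, here is a minimal sketch of the arithmetic. The 70-95% reuse range comes from the list above; the gross annual cooling-water figure of 500 million liters is a hypothetical placeholder for illustration.

```python
# Net freshwater withdrawal for a cooling loop under partial blowdown reuse.
GROSS_USE_L = 500e6  # hypothetical annual cooling-water use, liters

def net_freshwater(gross_liters: float, reuse_fraction: float) -> float:
    """Freshwater actually withdrawn when a fraction of blowdown is reused."""
    if not 0.0 <= reuse_fraction <= 1.0:
        raise ValueError("reuse_fraction must be in [0, 1]")
    return gross_liters * (1.0 - reuse_fraction)

# The article's 70-95% reuse range brackets the net draw:
for reuse in (0.70, 0.95):
    print(f"{reuse:.0%} reuse -> {net_freshwater(GROSS_USE_L, reuse) / 1e6:.0f}M liters/year")
```

Under these assumed numbers, moving from 70% to 95% reuse cuts the net freshwater draw from 150 million to 25 million liters a year, a six-fold reduction, which is why treatment-side improvements rank alongside zero-water designs in the list above.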

These advances must be embedded in a broader ethic of responsibility recognizing the interconnectedness of technological progress, environmental sustainability, and human well-being. Science provides a model: truth emerges through evidence, debate, and collective verification. Likewise, water policy and technological governance must be transparent, accountable, and participatory, with public oversight ensuring benefits don’t come at the expense of communities or ecosystems.

Anticipatory Design for an Uncertain Future

We live in an era defined by transformation and turbulence. The world we once knew, predictable and stable, is disappearing. In this reality where uncertainty and volatility are the new norms, humanity’s survival relies on reason, deliberation, and social cooperation. As the thirst of the digital world meets the limits of the natural world, we must resist reactive problem-solving and commit to anticipatory design: building systems that account for impacts from inception, not as damage control after depletion.

This demands institutional courage: governments regulating preemptively, corporations prioritizing long-term resilience, civil society demanding accountability. Our challenge is clear: to reconcile AI ambitions with the imperatives of water, justice, and sustainability. Progress is not measured in computational power or market dominance, but in the health of our communities, ecosystems, and shared future. The age of technology will define the centuries ahead, but only if guided by wisdom, equity, and a profound respect for the finite resources upon which all life depends.

References

  • Microsoft Circular Centers
  • Genesis Water Tech (Efficiency)
  • Genesis Water Tech (Wastewater)
  • IDE Tech (Blowdown Reuse)
  • UK Government Report
  • AWA (Australian Water Association)
  • Pacific Institute (Water Conflicts)

Extractivism and the Polycrisis: What Lies Beneath? A Commonist Response

S. A. Hamed Hosseini; Senior Lecturer in Social Sciences, University of Newcastle, Australia; Fellow, World Academy of Art and Science

Climate breakdown, biodiversity collapse, widening inequality, democratic erosion, pandemics, mass displacement: we may call this the “polycrisis.” The term helpfully signals that these are not separate problems but interconnected, mutually reinforcing crises. Yet our responses remain stubbornly fragmented. Climate policy here, social policy there, economic policy elsewhere; each locked in its own silo, often working at cross-purposes.

This fragmentation is not a separate problem from the polycrisis. It is the crisis itself, perhaps its deepest layer: our collective inability to respond coherently to interconnected crises. The usual prescriptions, like better coordination, more interdisciplinary research, and holistic frameworks, assume that fragmentation is simply an intellectual or administrative failure. But if that were true, decades of sustainability science and systems thinking should have made more headway by now.

Both the converging crises and our fragmenting responses share a common structural root: the architecture of modern capitalist societies.

From Decommonization to Compartmentality

In societies organized around commoning (and many such formations existed before capitalism and persist today), what we now separate into “economic,” “social,” “ecological,” and “political” domains was woven together. Production, care, spirituality, governance, and relationship with land formed an organic whole. This is not to romanticize pre-capitalist societies, many of which had their own hierarchies. But the specific separation we take for granted today (where “the economy” operates as an autonomous sphere, disembedded from nature and society) is historically recent. It emerged with capitalism and was essential to its functioning.

The rise of capitalism required breaking apart these organic connections. Commons (such as shared lands, waters, forests, knowledge systems, care networks, and collective decision-making practices) had to be enclosed, privatized, destroyed, or relegated to a subordinate “third sector.” This process, which we may call decommonization, did more than transfer resources from collective to private hands. It restructured how different dimensions of life relate to each other.

In a commoning way of life, these dimensions are deeply intertwined, like organs in a living body. But capital cannot operate on such an organic whole. It needs to extract, measure, and accumulate, and you cannot extract from a living relationship without first severing it. Capital must separate these dimensions into distinct modules that can be independently managed and exploited: labor abstracted from its social meaning, nature reduced to raw materials, care commodified or made invisible, political imagination constrained to managing what already exists.

The result is compartmentality: a world divided into isolated domains, each governed by its own logic, each blind to the others. This is not just a way of thinking; it is the structural outcome of decommonization.

Why Compartmentalized Responses Fail

This explains why our responses to the polycrisis consistently fail. Carbon trading treats emissions as an isolated variable to optimize through markets, while ignoring the social relations that produce emissions and the communities harmed by extraction. “Sustainable development” promises to balance growth with environmental protection, but the very grammar of “balancing” assumes these are separate things to trade off.

Even critical movements often reproduce compartmentality. Environmental movements that overlook labor. Labor movements that take ecology too lightly. Identity politics severed from political economy. Each fights on its own terrain, unable to see how the terrains are connected, because capitalism has structurally disconnected them.

The Commonist Impulse

Yet compartmentality is not total. In my research with nearly one hundred Australian grassroots organizations (cooperatives, transition towns, commons-oriented projects), I found both the persistence of compartmentality and something else: what I call the commonist impulse.

This is a countertendency: an orientation toward seeing and acting on connections that compartmentality obscures. Many organizations showed significant compartmentalization: an eco-village focused on its carbon footprint but disconnected from broader struggles, a worker cooperative ensuring internal democracy while ignoring its environmental supply chain. Valuable work, but isolated.

However, organizations embracing commons-based principles, food sovereignty, de-growth, and economic democracy showed substantially higher commonist orientations. Those prioritizing decolonization demonstrated the most balanced attention to both ecological and social dimensions. Certain ways of thinking actively cultivate the capacity to see beyond compartmentality.

From Commoning to Re/Commonization

If compartmentality results from decommonization, the response must be re/commonization: actively rebuilding the organic connections that capital has severed.

But we need an important distinction. Commoning refers to practices through which existing commons sustain themselves: stewardship, reciprocity, and collective decision-making. Think of Indigenous land management or a community garden where neighbors share tools and harvests. These are precious.

Re/commonization, by contrast, is the transformative project of creating or restoring commons from conditions of decommonization. It does not just maintain what exists but actively challenges the compartmentalized architecture of capitalist society. It is explicitly political.

This matters because commoning alone is perpetually vulnerable. A food cooperative may operate beautifully internally but find itself forced to compete on capitalist terms externally. Worse, capital actively mimics commons to neutralize their challenge; what I call pseudo-commoning. On social media, billions do commoning work daily: sharing knowledge, building relationships, supporting each other. But this takes place on infrastructure designed to harvest that activity for corporate profit. Users build community; platforms extract value.

The Dual Strategy

Re/commonization requires a dual strategy: building alternatives from below while simultaneously contesting the political and legal structures that enforce compartmentalization. Neither alone is sufficient.

Grassroots commoning without political struggle remains marginal, vulnerable to absorption. Political movements without rooted alternatives become empty; they may win elections but lack institutional forms to reorganize society. The two must develop together.

Imagine a municipality that declares water a legal commons: a switch that changes the default logic governing an essential commons. But such switches do not accumulate automatically into systemic change. Without political movements defending them, progressive institutions can be defunded and cooperatives outcompeted. The threshold where re/commonization becomes irreversible must be fought for, through movements that challenge power structures, contest elections, and push for constitutional change.

Beyond Extractivism

Is extractivism a prime cause of the polycrisis? Yes, but extractivism is one face of a deeper structure. Extraction of minerals, forests, water, labor, and data all depends on prior decommonization that severed these from organic connections and made them available as isolated “resources.”

The polycrisis cannot be solved by better coordination across existing domains. What is required is the transformation of those domains themselves; their reintegration into organically interconnected forms of life.

This is not utopian dreaming. The commonist impulse already exists in movements that refuse compartmentalized thinking, that insist on connections between ecological health and social justice, between economic democracy and decolonization. The task is to strengthen that impulse, build it into durable institutions, and fight politically for the conditions in which they can flourish.

We move beyond compartmentality not by thinking our way out, but by building and fighting our way through.

Multi-layered hazards and harms of oil extractivism

Anja Nygren, Prof. of Global Development, University of Helsinki, Finland; Founding member, EXALT

We continue to witness a global trend of ever-increasing demand for oil, gas, and minerals. Large-scale resource exploitation, or ‘extractivism’, is delving ever deeper below the Earth’s surface, and the deep seabed is anticipated to be the next resource-extraction frontier in the quest for more hydrocarbons, minerals, and metals. These developments have huge environmental, political, social, and economic implications that need closer scrutiny.

The oil industry is a “mega-business”, acting as a cornerstone of the global economy. According to Global Energy Monitor datasets, there are currently 6,160 active oil and gas extraction sites in the world, 205 under development, and 476 oil or gas discoveries. The economic power of the oil industry is evident in the fact that eight of the world’s thirty biggest companies in 2024, ranked by revenue, were oil and gas companies.

The extraction of oil and gas and their processing into value-added products do not, however, come without consequences. Fossil fuel production and consumption are key to understanding the risk of climate collapse. A study published in Nature in 2025 showed that the world’s fourteen largest fossil fuel and cement producers contributed 28 percent of overall anthropogenic climate change in 2010-2019. In addition, oil extraction has multifaceted links to water pollution, land degradation, exposure to health risks, and livelihood dispossession.

Regarding the hazards and harms associated with oil extractivism, three aspects are crucial to consider, as I pointed out in a recent publication:

First, it is important to pay attention to layered resources. A horizontal, land-centered perspective provides limited insight into the environmental and social effects of oil extraction. We need to explore the multifaceted interconnections across subterranean, terrestrial, atmospheric, and hydrological layers to understand the socio-spatial effects and the possible poly-crisis associated with fossil fuel production and consumption. Even when considering only the subterranean domain of oil and gas operations, they are conducted on multiple layers with complex effects. Yet the subterranean is not a world apart; oil and gas extraction has complex impacts on the air, soil, subsoil, surface water, and groundwater.

Second, we need to consider layered politics. This includes issues related to land tenure, resource access, territorial control, legislation, and governance. Key aspects here are property relations and the rules of access to, and control of, resources. In most countries, landowners own just the surface land, while the state owns subterranean resources. Thus, the state has the authority to grant concessions for oil drilling, which means that companies can gain access to subterranean resources while landowners must grant them the right of temporary occupation of the land to build oil wells and transfer the extracted products. This often leads to controversies. Although the USA, with its split-estate arrangements, is an exception, because split-estate law grants the mineral estate dominance over the surface estate, conflicts with farmers over oil and gas extraction are common there as well.

Farmers usually continue to cultivate their land at the extractive frontlines, although oil extraction restricts their land use. For example, cultivating perennial crops and trees near oil wells and pipelines is prohibited for safety reasons. The harms of soil degradation and water pollution also remain with landowners. This often leads to indirect dispossession, as smallholders need to search for livelihood options other than their poorly producing fields. In many oilfields, pipelines crisscross the fields and villages, with a high risk of oil spills or gas explosions.

Third, we need to consider multi-temporality. When it comes to the hazards and harms of oil extraction, multiple timescales intersect. Hydrocarbons are non-renewable resources that were formed over millions of years. However, the harms caused by oil extraction are calculated from a very short-term perspective. In a recent study at the frontlines of oil extraction in Mexico, Viviana Rabelo Avalos and I showed that smallholders are often compensated in the form of single payments for lost crops, dead animals, and corroded fences, with limited recognition of wider-scale soil degradation or water pollution. As many impacts of oil extraction are temporally latent, it is exceedingly difficult for local smallholders, and even for scientists with access to advanced technology and state-of-the-art knowledge, to provide timely evidence of the slowly accumulating impacts of oil extraction on human health and ecological integrity.

Vast, multimillennial timescales also need to be considered when evaluating the impacts of fossil fuels on climate change. In his new book (2026), Timothy Mitchell, a political theorist and historian at Columbia University, calls the prevailing policies and practices of fossil fuel production and consumption procedures for ‘stealing the future’. Given these multi-millennial consequences, it is dangerous to rely on temporalities that reward short-term thinking. Instead, we need to make decisions that fully consider long-term responsibility for more sustainable and just futures, and that encourage us to act accordingly.

JANUARY ARTICLES
A Moment for Truth: Current Geopolitical Destabilisation has Major Flow-On Effects on other X-Risks

Thomas Reuter, EXTRA Chair

Today’s geopolitical order was established at the end of World War II in recognition of the devastating fact that 15 million soldiers and at least 45 million civilians had lost their lives. The multilateral system created at this time of great soul-searching was not perfect. Still, it has served to prevent another and potentially much more lethal world war for some eight decades, it has promoted the emergence of a more globalised and integrated world economy, and it has brought unprecedented affluence to middle and working classes in many parts of the planet, something often referred to as ‘the peace dividend’. This order is being wilfully destroyed before our very eyes, at a rapid and still accelerating pace.

The multilateral system that has promoted democracy, human rights, a system of global aid, scientific cooperation, global efforts to prevent pandemics, nuclear arms controls, the Sustainable Development Goals (SDGs) agreement, the Paris Agreement on climate change, a global Convention on Biodiversity, and so much more, is being dismantled in the name of the very nation that was its principal architect and supporter. That nation is itself floundering under a leadership that threatens to destroy the very values that informed these agreements, not just internationally but also domestically. On both fronts, the window for timely resistance to growing lawlessness, violence, and corruption is shrinking. In the meantime, every informed, clear-thinking, and well-meaning person on the planet is finding it difficult to sleep at night as we face this moment of truth.

This crisis of governance could not come at a worse time for humanity because we have arrived at a moment in history when human survival is threatened from multiple directions. Our life support system is faltering under the impact of anthropogenic threats that are becoming risks and manifest realities. We cannot live in denial. This is also a moment for truth.

The current situation calls on us all to evolve as human beings, towards acceptance of life under a more unified global system of governance. I do not mean a global despotism but a system of consensual agreements between autonomous nations, informing binding laws designed to ensure that all nations mitigate existential risks and contribute to positive solutions, in a way that is fair, context-sensitive, proportionate, predictable, and, on the whole, sufficient to avert catastrophe. All such cooperative efforts need to be guided by human-centred moral principles, such as equality and justice, and by rational decision-making firmly grounded in scientific evidence and a value-driven imagining of desirable and achievable futures. Without justice, rationality, and imagination, we cannot expect all nations and peoples to buy into a shared agenda for the future of the global village.

This common-sense assertion may require a backup argument these days, given that so many people now find it fashionable to flirt with the opposite idea that “strong” or even ruthless leadership is needed in a crisis. Indeed, strong, courageous, and enlightened leadership is in high demand. Regressing into tribal ideology and establishing dictatorships that serve the vested interests and survival of a privileged few, however, is not a strength. It is a sign of general panic and weakness, exploited by populist manipulators to serve their own ends. Violence and oppression are not strength, at least not when the aim is the welfare of the many, of society as a whole.

Strong leadership is very much possible within democratic systems, and is not possible without them. No leader can dictate a truth to us all, as the social theorist Jürgen Habermas most cogently and exhaustively argued in his monumental work, The Theory of Communicative Action. Truth is a social process; it is the performative product of intersubjective engagement and communication in the absence of coercion. Coercion kills the truth, while free debate and deliberation facilitate truth. The truth is not something some people know, and others do not; it is what is known collectively through human interaction. That is why democracy (or another procedure for open and rational decision-making traveling under a different name) is not a luxury: Free deliberation is a necessity for producing truth, and truth is a necessity for survival. When thugs take over the village council, we can rightly fear for our lives. There is no security in such a set-up. Even the thugs themselves are at risk, for they are ready to fall upon one another at any moment – such is their nature. A genuinely strong leader articulates and promotes a truth we have arrived at together through deliberation.

Science is a model for understanding how truth-building works. Truth claims in science are publicly made and supported by hard evidence. Such claims are there to be tested, and not shored up by authority. Counterclaims are made if there is any weakness detected in the initial claim. Insights, ideas, and discoveries are thus shared within a community that operates on principles of equal participation and merit, evidence-based and rational analysis and debate, and the absence of coercion.

Being at arm’s length from power, ideally, science is tasked with speaking truth to power. The world is admittedly not ideal, nor is science. Science can be colonised from the outside by vested interests or silenced by power, as we observe in nations where democracy is now in decline. Science can also be compromised by its own internal power structures or by the inertia of established theory. But by and large, science still deserves and enjoys a relatively high measure of public trust because scientists can be counted on to keep one another honest. We have strong leaders in science; we are sometimes blinded for a while by powerful epistemes, but we also have regular revolutions in our thinking when the evidence calls for it.

This, however, is not enough to keep science in order. Science relies on the existence of a rational, democratic political process, whereby the public sets goals and assesses the practical and moral value of the products of research and technological innovation, to ensure that their impact is constructive and desirable. Science does not tell people what to do; it leaves that to politics. Furthermore, science needs democratic politics to protect it from potential enslavement, and governments to impose sensible regulations to prevent the flagrant abuse of technology as a tool of domination and exploitation by a few. Scientists can ask for regulations on technologies such as AI, but cannot legislate and enforce them.

If sound sleep evades us, let us not waste time lying around, tossing and turning. It is better to get up and address the problems we face. Our eyes need to be open to all core risks, as well as to new, risk-amplifying political, economic, and cultural trends, for the punch that strikes home is the one we do not see coming. Humanity has always lived in changing landscapes of natural and self-imposed risk, has managed to thrive regardless, and can continue to do so. Watchfulness is vital, but let us also remember that the recipe for human survival is not the use of tooth and fang, but rather a combination of rational thought, free communication, and voluntary social cooperation. That has been, and still is, the formula for the greater success of the human species compared to most others. Today, we need to employ this magic formula more than ever.

DECEMBER ARTICLES
Independent Risk Amplifiers: Towards a more integrated way of understanding X-Risks

Thomas Reuter, EXTRA Chair

A recent Roosevelt Institute report for Oxfam (featured in our reviews section) confirms that inequality is not just an economic but a political issue, giving rise to oligarchic structures, industrial-scale misinformation, political polarisation, and eventually a push for overtly autocratic forms of government. Similar conclusions are drawn in another featured report on the decline of democracy in the US and globally.

The evidence shows that equality promotes productivity and growth, and a more egalitarian society results in better policy and resilience. Legislative and policy changes in the wake of oligarchic political manipulation, let alone autocracy, are strongly associated with outcomes detrimental to environmental protection, climate change mitigation, and human security.

Inequality thus indirectly amplifies existential threats by weakening our ability to muster a rational and responsible response to nature-based challenges, such as climate change. Aside from that, inequality has important implications for climate justice, insofar as the consumption of the wealthiest 1% accounts for a vast share of total CO2 emissions. At the same time, the poorest are most vulnerable to the impacts (for now). Similar arguments could be made about pandemics and other x-risks.

The inequality crisis did not happen overnight, nor did it go unnoticed. Joseph Stiglitz, a Nobel Prize-winning economist and former chief economist at the World Bank, dissected the system by which a powerful US elite has managed to enrich itself at everyone’s expense in his 2013 book. In the US, he argued, politics has been hijacked by a privileged few, leading to a decline in workers’ real wages and making a mockery of the fabled “trickle-down effect” that neoliberalism has long used as its fig leaf. Senator Elizabeth Warren’s earlier work had already shown the decline of the middle class.

Similarly, Stiglitz notes that people are forced to work ever more hours to maintain a lifestyle they once managed on a single income. He documents how employee incomes fell and inequality rose in the wake of deregulation, regressive taxation, union busting, and dismantled social safety nets, leading to an inequality crisis of unprecedented proportions. This was not a linear process, of course, and there were a few winners outside the billionaire class among the host of losers, but these finer points do not concern me here.

Twelve years later, the situation has worsened dramatically. In the last year alone, as the Unequal report argues, 10 US oligarchs gained USD 698 billion in wealth. At the same time, the livelihoods of working families came under attack from the policy decisions of an increasingly autocratic government.

Furthermore, as politics spills over into what should be the domain of natural science, climate change has been declared a hoax, and support for the Paris Agreement was withdrawn by this administration, to the satisfaction of fossil fuel campaign donors and lobbyists. Similarly, the risk of a pandemic increased with the dismantling of USAID programs for infectious disease containment in frontline regions and with a general hostility towards vaccination and other evidence-based public health approaches.

Amid a flood of bad news about rising inequality and democratic backsliding, there are also signs of growing awareness of the risk amplification associated with them, notably among the G20, which recently produced its first-ever inequality report, led by Joseph Stiglitz, commissioned by the president of South Africa, and endorsed by the South Africa G20 meeting, with support from Europe and Africa and in the absence of the US.

The report lays out a comprehensive redesign of global economic governance that could turn the tide, acknowledging that inequality is a policy choice, one that may have driven policy choices in recent decades but is also reversible. The report suddenly disappeared from the G20 website in recent days, however, perhaps permanently: the US has taken over the site as host nation of the 2026 G20 meeting, from which South Africa has been uninvited. The report can still be found elsewhere.

Also, autocratic systems may not be hostile to existential risk mitigation in every single case and across the board, as other historical factors, different ideologies, and power structures come into play. Much depends on the structural capacity of the political elite to withstand pressure from vested interests, as well as on the specific nature of those interests. For example, the vast investment in the renewable energy transition in China creates vested industrial interests that pull in a somewhat different direction than those of industrial elites in primary fossil fuel-producing states such as the US.

And even in the US, not all vested interests align with current policy, and the resulting elite tensions could lead to a political reversal. And let us not forget the simple fact that the truth has a habit of forcibly impressing itself on those who ignore it. Irrational policy settings cannot endure for very long, though it may well be long enough to prove fatal for time-sensitive risk mitigation processes.

Overall, whenever vested interests determine risk mitigation policy and weaken social equity, cohesion, and civil liberties, silencing the majority of stakeholders, we can expect irrational decision-making to increase significantly. That is, if by ‘irrational’ we mean non-alignment with a holistic and universal ideal of long-term human security and prosperity, and alignment instead with the short-sighted, reductionistic, and sometimes outright sociopathic self-interests of a few.

We can also expect a sharp decline in resilience among the disadvantaged majority, especially the poorest, within affected populations. The recent decline in global cooperation on climate change mitigation vividly illustrates this risk-amplification process. A very few stand to become even richer, while many will be at ever greater risk from wildfires, hurricanes, and floods, and left with ever fewer resources in their hands as they fight for survival.

Editorial note: Just as our newsletter went to press, a new global inequality report was released, showing that inequality is still increasing rapidly worldwide and triggering calls for urgent action. With the world’s richest 10% responsible for 77% of all carbon emissions, this is not just a social and political but also a planetary crisis.

AI Update 2025: Recent Articles on the AI Industry and Impacts

Michael Marien, EXTRA Director of Research

The AI Industry Bubble

Central to the development of AI are Nvidia’s computer chips, “the most essential and expensive component in almost every AI scheme.” (1) As a result, Nvidia has reached a record $5 trillion value and has become “a driving force behind the US economy,” with spending on data centers filled with the company’s chips accounting for 92% of US GDP growth in the first half of 2025. “Without it, the economy would have grown 0.1%.” The Economist notes that, “On October 29, Nvidia became the world’s first $5 trillion company,” and, working with President Trump, the AI industry led by Nvidia “has plans to reindustrialize America.” (2)

Nvidia controls about 90% of the market for chips used in AI projects, and “its financial performance has become a bellwether for the tech industry, which is investing trillions of dollars in big data centers all over the world.” Nvidia is expecting $500 billion in sales through the end of 2026, “which would more than double what it made over the previous two years.” (3)

AI is still “an unproven and expensive technology that could take years to develop fully. How much companies will ultimately get back in return…is unclear.” Nevertheless, four of the industry’s wealthiest companies (Google, Microsoft, Meta, and Amazon) are raising their spending by billions, “increasingly feeding concerns that the tech industry is heading toward a dangerous bubble.” If AI underwhelms, or if the systems ultimately require far less computing, “there could be growing risk.” (4)

This risk is explicitly stated by two economists at the Stanford Institute for Economic Policy Research, who recall the dot-com recession of 2001 and the housing crisis of 2008, when investors poured so much money into the markets that two speculative bubbles inflated.

A third bubble of our century, the AI Bubble, is now more likely than not. If lackluster AI performance or sluggish adoption causes investors to doubt lofty profit expectations, “this probably-a-bubble will pop. And a lot of people, not just wealthy investors, will get hurt.” (5)

A similar, but more worried, view is provided by Harvard professor Gita Gopinath, a former chief economist of the IMF, who warns that the US stock market is near an all-time high, fueled by enthusiasm for AI, drawing comparisons to the dotcom crash. There are good reasons to worry about another market correction, but the consequences “could be far more severe and global in scope than those felt a quarter of a century ago.”

A market correction of the same magnitude as the dotcom crash “could wipe out over $30 trillion in wealth for American households,” and “foreign investors could face losses of more than $15 trillion.” (6) Compounding the situation, and adding to the overall risk, is the escalation of the tariff wars.

The AI bubble may burst, The Economist warns in its November 15 cover feature. “If America’s stock market crashes, it will be one of the most predicted financial implosions in history. Everyone from bank bosses to the IMF has warned about the stratospheric valuations of America’s tech companies.” (7)

Global Shield notes that “the political economy of AI is becoming the key driver of any underlying global catastrophic risk from AI development.” Regardless of whether there is an AI bubble, “the large capital investment in AI and AI-related infrastructure could have implications for global catastrophic risk.” (8)

Conversely, a professor at the University of Pennsylvania’s Wharton School offers an optimistic view, admitting that a bursting bubble could be painful in the short term. “But what if we’re in a ‘rational bubble’ that, unlike other big speculative manias in history, takes our economy to a fundamentally better place?”

It is “rational to risk losing on several bets, if just a few can deliver a thousandfold return, which some AI investments almost certainly will…AI is a general-purpose technology that will most likely fundamentally alter a wide range of economic activities…Its transformative potential could be on par with electricity.” With such big payoffs, “incentives to invest in AI are enormous…Investors believe that falling behind is far more damaging than over-investing.” (9)

The “AI Time Bomb”

Aside from the potential AI Bubble in the stock market due to underperformance, Stephen Witt, author of a history of Nvidia, The Thinking Machine, has argued for a broader risk of malperformance. His lengthy two-page article in The New York Times, “The AI Time Bomb Is Ticking,” argues that the debate over AI risk has been “mired in theoreticals.” Still, there is now “a large body of evidence…as scary as anything in the doomerist imagination.”

The big five labs (OpenAI, Anthropic, xAI, Google, and Meta) are engaged in intense competition, and “no one can afford to slow down.” A dominant position in AI might be “the biggest prize in the history of capitalism.” AI is highly capable, and “its capabilities are accelerating.” The risks are real, and “the biological life on this planet is, in fact, vulnerable to these systems.” “We have passed the threshold that nuclear fission passed in 1939.”

AI could wipe us out, given a pathogen research lab or the wrong safety guidelines. But the US national security apparatus “is terrified of losing ground to the Chinese effort, and has lobbied hard against legislation that would inhibit progress of the technology…Protecting humanity from AI thus falls to overwhelmed nonprofits.” (10)

The October International AI Safety Report: First Key Update on Capabilities and Risk Implications provides a more measured assessment, issued because “the field of AI is moving too quickly for a single-year publication to keep pace.” The report, chaired by Yoshua Bengio, finds that “new training techniques have driven significant improvements in AI capabilities…though reliability challenges persist…raising potential oversight challenges.”

These improvements “have implications across multiple risk areas…as AI systems are increasingly able to act with some degree of autonomy.” Increased capabilities “are uplifting both biological and cyber threats while also strengthening defenses.” AI companions “are increasingly prevalent, and may pose both risks and benefits to users.” (11)

Data Centers Go Global

Stephen Witt writes that “Data centers for AI are the new American factory. Packed with computing equipment, they absorb information and emit AI. Since the launch of ChatGPT in 2022, they have begun to multiply at an astonishing rate.” The arrival of Nvidia’s GPUs and the onset of large-scale AI training transformed the data center business, which began in the 1990s. Witt describes a visit to a data center outside Las Vegas, noting that the Trump Administration “has made construction of data centers a national priority.”

This construction is projected to represent 2-3% of US GDP in the coming years, based on the premise that “stuffing more Nvidia chips into the sheds will result in better AI. So far this has proved true…but there is now talk of a data shortage, with high-quality text harder to find.” The next frontier is “world model” data to develop autonomous robots. (12)

Arguably more important, “Power-Thirsty AI Frenzy Incites Fury Across Globe as Data Centers Deplete Resources” reports that “nearly 50% of the 1,244 largest data centers in the world were outside the US,” according to Synergy Research Group, which studies the industry. “And more are coming, with at least 575 data center projects in development globally.” According to UBS Investment Bank, “companies are expected to spend $375 billion on data centers globally this year, and $500 billion in 2026.” (13)

But data centers “need vast amounts of power for computing and water to cool the computers,” contributing to or exacerbating disruptions in Mexico (with 110 data centers) and at least a dozen other countries. The issues are compounded by a lack of transparency from Google, Amazon, Microsoft, and others, which often work through subsidiaries.

Many governments are eager for an AI foothold, providing cheap land, tax breaks, and access to resources, and taking a hands-off approach to regulation and disclosures. Tech companies claim they generate their own energy and recycle water. (14)

Recent AI Impacts

AI is having a variety of impacts on society, politics, medicine, business, and education. Each impact warrants a different story, ranging from deeply concerning to constructive with caveats. A brief overview of some impacts follows.

  • Chatbots and “Brain Rot.” “The tech industry tells us that chatbots and new AI search tools will supercharge the way we learn and thrive,” and that anyone who ignores this will be left behind. But studies published so far on AI’s effects on the brain “found that people who rely heavily on chatbots and AI search tools for tasks like writing essays and research are generally performing worse than people who don’t use them.” The slang term “brain rot,” which describes a deteriorated mental state resulting from engaging with low-quality internet content, was named Oxford Word of the Year for 2024. (15)
  • Chatbots and Ideology. “Users increasingly seem to accept chatbots as authoritative sources, despite repeated warnings of their propensity to make mistakes at times and even make things up.” Since appearing a few years ago, AI-powered chatbots like ChatGPT and Google’s Gemini have been pitched as dispassionate sources. They remain the most popular by far, but a suite of new ones is popping up, e.g., Gab’s Arya, which claims to be a better source of facts and reflects the partisan debate found in much of mainstream and social media. (16)
  • Sora App for Fake Videos. Perhaps most concerning is the October release of the Sora app by OpenAI, the maker of the popular ChatGPT chatbot. Sora, a free app on iPhones, lets users instantly generate realistic-looking videos. Many early adopters have posted videos for fun, like a cat floating to heaven. But “the arrival of Sora, along with similar AI-powered generators released by Meta and Google this year, has major implications.” This could represent “the end of visual fact as we know it”: video serving as an objective record of reality. Society as a whole will have to treat videos with the same skepticism people already apply to words. Any video seen on an app that involves scrolling through short videos “now has a high likelihood of being fake.” (17)
  • AI Chatbots for Mental Health. More than 1.5 million people have used Woebot, a start-up created in 2017 at Stanford to allow discussion of problems and moods, but it was discontinued in 2025. Ash, developed by Slingshot in NYC as a therapy tool, is now being promoted for mental health (but it “sometimes does the unexpected or makes mistakes”). Dartmouth has published results of a trial of Therabot to reduce symptoms of anxiety, depression, and eating disorders, but it is “not yet ready for widespread use.” Still, generative AI technologies hold promise for mental health crises, especially for people in rural areas where care is unavailable. (18)
  • Chatbots for Medical Advice. A June 2024 poll found that a substantial share of adults regularly ask AI chatbots for health information and advice. “Recent studies have shown that ChatGPT can pass medical licensing exams and solve clinical cases more accurately than humans can. But AI chatbots are also notorious for making things up, and their faulty medical advice seems to have caused real harm.” These risks underscore the need for caution and critical thinking. “No chatbot is ready to replace your physician.” In sum, “treat AI as an educational resource rather than as a decision-maker.” (19)
  • Chatbots for Physicians. Many people are more confident in AI diagnoses than in those made by professionals. “In the US alone, misdiagnosis disables hundreds of thousands of people each year; autopsy studies suggest that it contributes to perhaps one in every ten deaths.” But, in a recent survey, “about a fifth of Americans said that they’ve taken medical advice from AI that later proved to be incorrect.” Chatbots also create serious privacy concerns. “It seems inevitable that the future of medicine will involve AI, and medical schools are already encouraging students to do so.” A recent study with OpenAI reported that “clinicians who used AI Consult made 16% fewer diagnostic errors and 13% fewer treatment errors.” Many medical questions do not have a correct answer. Both patients and doctors “could think of AI not as a way to solve mysteries, but as a way to gather clues.” (20)
  • AI-Driven Job Loss. Recent layoffs by large companies have prompted suggestions that the economy is entering an AI-driven restructuring. “But the AI apocalypse is probably not here yet…Experts say that the transition is likely to be more gradual, in many cases occurring as new companies built to exploit AI take market share from more established companies that are slower to embrace it.” (21)

Conclusion

This update is similar to an incomplete jigsaw puzzle, with various pieces placed together to form a rough view of the whole. Unlike such a puzzle, the picture of AI is rapidly evolving, so additional Updates will likely be warranted in the months ahead.

As for existential threats and risks, the focus of the EXTRA newsletter, AI is now an “elephant in the room,” along with nuclear weapons and climate change. Unlike these two major threats, whose impacts are negative and generally known, AI has and will have a variety of impacts, both positive and negative, large and small: probably not “existential” for all or parts of humanity, but not to be dismissed.

This update identifies concerns about the “AI Bubble” in the US economy and a possible stock market crash; the rapid growth of resource-intensive data centers worldwide; the negative impacts of chatbots; the generally positive potential for mental and physical health; and, so far, the slow pace of job displacement. None of the articles cited here mentions the looming prospect of AI superintelligence, which could be the greatest threat of all AI developments, especially in military affairs.

Readers are encouraged to send recent evidence-based articles on the pros and cons of the expanding AI industry, especially from non-US sources, and to freely re-publish this newsletter, or a link to it, in other newsletters.

REFERENCES

  1. "Nvidia Reaches a Record $5 Trillion Value as Power Consolidates in AI," New York Times, 31 Oct 2025, B1. The company "added $1 trillion in market value in just the past four months."
  2. The Economist, 1 Nov 2025, p.23.
  3. "NVIDIA Reports Profit Increase of Eye-Popping 65%," New York Times, 21 Nov 2025, B5. NVIDIA's sales are growing "even as it remains blocked from selling to China."
  4. "Tech Titans Accelerate Investment Into AI," New York Times, 1 Nov 2025, B1.
  5. "Warning: Our Stock Market Is Looking Like a Bubble," Jared Bernstein and Ryan Cummings, New York Times, 16 Oct 2025, A20. However, there is "a good chance the damage won't be nearly as bad as the last bubble."
  6. Gita Gopinath, "By Invitation", The Economist, 18 Oct 2025, p.16. "A crash could torch $35 trillion of wealth."
  7. "How Markets Could Topple the Global Economy," The Economist cover feature, 15 Nov 2025, p.11 (Leader). Investors are betting that "vast spending on AI will pay off ... (but) lofty expectations are often disappointed, at first, by new technologies."
  8. "Navigating the Political Economy of AI Investment," Global Shield Briefing, 24 Nov 2025. Global Shield Newsletter (San Francisco) "is dedicated to reducing global catastrophic risk of all hazards."
  9. "AI Is a Bubble. Maybe That's OK," Mohamed A. El-Erian (Wharton School), The New York Times, 24 Nov 2025, A21.
  10. "The AI Time Bomb" by Stephen Witt, New York Times, Sunday, 12 Oct 2025, SR 6-7. An unusual two-page article by the author of The Thinking Machine: A History of the AI Giant NVIDIA.
  11. International AI Safety Report. First Key Update: Capabilities and Risk Implications, Oct 2025, 36p. First published in Jan 2025 (298p), it is chaired by Yoshua Bengio (Univ of Montreal). Also see interview with Bengio (Nature, 12 Nov 2025) on "Machine Learning Power as AI's Threat to Humanity."
  12. "Information Overload: Inside the Data Centers That Train AI and Drain the Electrical Grid" by Stephen Witt, The New Yorker, 3 Nov 2025, 20-25.
  13. "Power-Thirsty AI Frenzy Incites Fury Across Globe: From Mexico to Ireland, Activists Cry Foul as Data Centers Deplete Resources," New York Times, 21 Oct 2025, p.1. Ireland has become "one of the clearest examples of the transnational backlash against data centers."
  14. "How Turmoil in Chile Embodies AI's No-Win Politics," New York Times, 21 Oct 2025, B1. "Neighborhoods affected by AI data centers are deeply unsatisfied." In the US, data centers are a significant issue in rural Georgia, where "at least 26 are under construction within 60 miles of Atlanta, and another 53 are planned." "Escalating Energy Bills Help Drive a Political Shift: Voters Objecting to Spread of Data Centers," New York Times, 1 Dec 2025, A13.
  15. "Examining How AI and Social Media Play a Role in 'Brain Rot'," New York Times, 10 Nov 2024, B1. ALSO SEE "The Dawn of the Stupid Age" by Sophie McBain in The Guardian, adapted version in The Week, 21 Nov 2025, 36-37.
  16. "New Chatbots, Forged in Their Creator's Bias, Tangle What Is Fact or Fiction," New York Times, 10 Nov 2025, p.1.
  17. "AI Videos Are So Good You Can No Longer Trust Your Eyes," New York Times, 15 Oct 2025, B1.
  18. "Experts Explore AI Chatbots for Therapy," New York Times, 14 Nov 2025, B1.
  19. "Be Prepared Before Seeking AI Medical Advice," New York Times, 4 Nov 2025, D7.
  20. "Prompt Diagnosis: AI is Already Helping Physicians and Patients. But There are Side Effects," Dhruv Khullar, The New Yorker, 29 Sept 2025, 20-25. The author is an Associate Professor at Weill Cornell Medical College, NYC.
  21. "AI Transition Will Most Likely be Gradual," New York Times, 8 Nov 2025, B1, on job displacement by AI.

NOVEMBER ARTICLES
Polycrisis and Systemic Risk: Assessment, Governance, and Communication

Huan Liu, School of Public Administration, South China University of Technology & Ortwin Renn, Research Institute for Sustainability 鈥 Helmholtz Centre Potsdam (RIFS)

In recent years, the focus of integrated disaster and risk research has shifted from topical analysis, such as natural hazards, technological accidents, or environmental crises, toward a comprehensive understanding of interconnected and mutually interactive risk sources and crises. This evolving perspective has often been described through the concept of "polycrisis", which emphasizes how crises in one domain can amplify or cascade into others. At the same time, the literature on systemic risk has explored how multiple, interacting risks threaten the functionality and survivability of entire systems, including climate stability, cybersecurity, and energy production.

Recently we published a review paper in the International Journal of Disaster Risk Science (Liu and Renn 2025), which summarizes the literature on both concepts, explicates their commonalities and differences, and develops a risk and crisis concept that bridges the two research traditions. Our review delineated the implications of a joint understanding of polycrisis and systemic risk for the practice of risk assessment, risk and crisis governance, and effective communication to different audiences. This article summarizes the findings of this analysis for a general readership.

Key Outcomes of the Review Paper

  • A clarification of the concepts of “polycrisis” and “systemic risk,” outlining their theoretical underpinnings, historical trajectories, and practical relevance.
  • A detailed comparison of the two concepts, identifying both commonalities and critical distinctions.
  • A discussion of the implications for risk assessment, including novel methods and techniques suited to cope with the special features of cascading crises and interacting risks.
  • An exploration of governance strategies, management frameworks, and communication approaches, including the role of public participation and stakeholder engagement, resulting in improved methods for dealing with conflicting values and assigning prudent and ethically acceptable trade-offs.
  • A synthesis of the key literature across disciplinary boundaries.
  • Practical recommendations and implications for future research and policy development.

Definitions and Concepts of Polycrisis and Systemic Risk

"Polycrisis" and "systemic risk" (as illustrated in Figure 2) both deal with the complexity and interconnectivity of modern risks, but they differ in emphasis. Systemic risk focuses on potential harms within a system, while polycrisis emphasizes interconnected crises across multiple systems, often involving coincidental but interacting impacts. This distinction lays the theoretical groundwork for constructing an integrated risk analysis framework applicable to risk assessment, governance, and communication.

Fig. 2 Overview of the evolving debate and number of publications on systemic risk and polycrisis. Source: Adapted from Sillmann et al. (2022), refined by the authors.
Assessment, Governance, and Communication of Systemic Risk in Polycrisis

Under the polycrisis paradigm, traditional risk assessment methodologies face significant challenges. Until recently, such assessments often focused narrowly on a single risk, neglecting the interdependence and cascading amplification effects between different threats, and frequently underestimated the importance of systemic context and structural factors. Data scarcity and insufficient structural modeling capacity have further limited the effectiveness of risk assessments. To address this, we propose a theoretical framework drawing on complexity science, resilience theory, and network theory, advocating the use of methods such as agent-based modeling, system dynamics, network models, stress testing, statistical and machine learning, and artificial intelligence to adapt to risks characterized by complexity, uncertainty, ambiguity, highly non-linear dynamics, and interconnectedness (Renn 2024).
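
To make the idea of cascading amplification concrete, here is a minimal, purely illustrative network-model sketch in Python. The systems, coupling weights, and threshold are hypothetical, not taken from the review; the point is only to show how a shock in one system can raise stress in coupled systems until several cross a crisis threshold.

```python
# Illustrative threshold-cascade sketch. Hypothetical systems and coupling
# weights (NOT from Liu and Renn 2025): each failed system transmits a
# weighted share of its stress to the systems that depend on it.
COUPLING = {
    "energy":  {"food": 0.6, "finance": 0.4},
    "food":    {"health": 0.9},
    "finance": {},
    "health":  {},
}
THRESHOLD = 0.5  # stress level at which a system is considered "in crisis"

def cascade(initial_shock):
    """Propagate stress until no new system crosses the threshold."""
    stress = {s: 0.0 for s in COUPLING}
    stress.update(initial_shock)
    failed = set()
    changed = True
    while changed:
        changed = False
        for system, level in list(stress.items()):
            if level >= THRESHOLD and system not in failed:
                failed.add(system)
                changed = True
                for neighbour, weight in COUPLING[system].items():
                    stress[neighbour] += weight * level
    return failed

# A single energy shock cascades into food and health crises.
print(sorted(cascade({"energy": 1.0})))  # prints ['energy', 'food', 'health']
```

Even this toy model shows why single-risk assessment underestimates total harm: the health crisis here is caused entirely by second-order transmission, not by any direct shock to the health system.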

Current systemic risk governance still suffers from structural inadequacies, including the difficulty of quantifying cause-effect relationships, the inability to gauge "black swan" risks, the plurality of knowledge claims and assessments, counter-intuitive implications, the silo structure of risk management institutions, and an over-reliance on "trial-and-error" models. To enhance governance effectiveness, we recommend promoting trans-sectoral impact assessments, collaboration between diverse institutions, building inclusive governance structures, and integrating resilience management principles into institutional design and policy responses. The goal is to strengthen a system's capacity for absorption, recovery, and adaptation to shocks, thereby achieving comprehensive risk governance that is cooperative, integrative, and inclusive (Renn, 2021, 2024).

Existing risk communication practices are often characterized by the unidirectional transmission of expert knowledge, neglecting the social context of risk perception and diverse public values, and thereby failing to effectively address public concerns. Furthermore, even though dialogic and participatory communication models are theoretically appealing, their practical implementation in systemic risk contexts faces significant challenges. The review paper therefore proposes a communication strategy centered on the core functions of Enlightenment, Trust-building, Dialogue, and Co-determination (Renn 2020). This strategy emphasizes building open, credible, and deliberative risk communication platforms to enhance public comprehension, engagement, and capacity for action.

Conclusion and Future Research Needs

Understanding and managing polycrisis and systemic risks require innovative approaches in assessment, governance, and communication. By embracing complexity and fostering inclusive participation to better understand that complexity, risk management can be better equipped to address the dynamic challenges of our interconnected world. Future research is essential to refine existing theories, validate proposed frameworks, and translate academic insights into practical tools for risk management and governance.

References

Liu, H., & Renn, O. (2025). Polycrisis and Systemic Risk: Assessment, Governance, and Communication. International Journal of Disaster Risk Science, 16, 526–549. https://doi.org/10.1007/s13753-025-00636-3

Renn, O. 2020. Risk communication: Insights and requirements for designing successful communication programs on health and environmental hazards. In Handbook of risk and crisis communication, 2nd edn., ed. R.L. Heath, and H.D. O'Hair, 80–89. New York: Routledge.
Renn, O. 2021. New challenges for risk analysis: Systemic risks. Journal of Risk Research 24: 127–133.
Renn, O. 2024. Systemic risks and polycrisis: The need for an integrative approach. In An interdisciplinary discourse on regulation. Biotic self-regulation: Model for man-made systems?, ed. W. Haber, M. Grambow, and P. Wilderer, 25–29. Munich: TUM-IAS.
Sillmann, J., I. Christensen, S. Hochrainer-Stigler, J. Huang-Lachmann, S. Juhola, K. Kornhuber, M. Mahecha, R. Mechler, et al. 2022. ISC-UNDRR-RISK KAN briefing note on systemic risk. Paris: International Science Council. https://doi.org/10.24948/2022.01. Accessed 28 Apr 2025.

Polycrisis, systemic risk and resilience: Novel analytical insights and evidence needed to turn crisis into opportunity

Reinhard Mechler, IIASA

The concept of polycrisis, a system of interconnected and compounding crises, is receiving increasing attention. In our dynamic and complex world, multiple crises often interact, potentially leading to global tipping points and local adaptation limits. While not an entirely new concept, it has risen to prominence as the global landscape of the twenty-first century is increasingly defined by the mutual amplification of nested and intertwined systemic risks. This reality requires new approaches to analysis, assessment, and governance. This article asks how interconnected risks can be addressed to yield multiple, overlapping resilience benefits.

Polycrisis and resilience

A study by a team of IIASA researchers (Reinhard Mechler, Piotr Żebrowski, Romain Clercq-Roques, Patik Patil, Stefan Hochrainer-Stigler) on "Polycrises and Positive Externalities" highlights that polycrises are at least partially driven by negative externalities, such as climate change, public health issues, and income inequality. However, there has been very limited analysis of whether and how positive externalities may contribute to fostering resilience systemically and help precipitate positive transformations. Positive externalities, societal benefits arising directly or indirectly from targeted interventions, may help "unlock" effective and acceptable pathways to sustainable development. For example, investments in universal health coverage not only improve health outcomes for a wide range of diseases but also enhance resilience to increasingly climate-driven health risks (such as heat-related illnesses), reduce healthcare costs, and boost the productivity of a healthier workforce. Research shows that such gains can strengthen health systems overall and create multiplier effects by reducing health burdens, improving productivity, and freeing resources to reinvest in risk reduction and other resilience measures.

Disaster risk reduction and climate adaptation analysts have emphasized the need to orient risk management investments towards interventions that generate so-called multiple or triple resilience dividends. This means extending the focus in decision making from avoiding and reducing impacts and risks to also considering development (co-)benefits that arise irrespective of disaster event occurrence. In this context, the "triple dividend of resilience" (TDR) concept suggests that, in addition to risk reduction benefits (dividend 1), benefits also arise from unlocked development (dividend 2) as well as from co-benefits (dividend 3), for example, from investments in disaster-safe and energy-efficient housing. Yet, despite the increasing burdens imposed by systemic disaster and climate risks, widespread recognition of this concept for over a decade, and solid evidence regarding the benefits of reducing risk, it has remained difficult to motivate sustained investment across scales in disaster and climate risk reduction.

IIASA researchers argue that this gap is due in part to conceptual ambiguity around the notion of "unlocking dividends," a lack of consistent reporting, insufficient awareness of positive externalities, and a limited understanding of how dividends evolve across time and space. Their review reveals that there are indeed significant (co-)benefits and positive externalities in both implemented and planned risk management and adaptation projects, as well as in model-based simulations used to support policy design across scales. Advancing research on systemic risk and resilience can help to surface these benefits, improve decision-making, and strengthen governance, which is crucial for building resilience to escalating disaster and climate risks in a polycrisis context.

Evidence: limited, but strong

Overall, the evidence for the TDR appears powerful, although limited. It has been understood that dividends 2 and 3 may create substantial additional value beyond the benefits of avoided losses (dividend 1) (Mechler and Hochrainer-Stigler 2019; Heubaum 2022). For example, Mechler and Hochrainer-Stigler (2019) examined 65 cost-benefit studies of DRR across the world as to whether they considered dividends 2 and 3. The majority of studies are pre-event appraisals rather than post-event evaluations, and only 15 assessments can be said to have covered dividends beyond the first. Yet, for this non-representative sample, the research found an average benefit-cost ratio of 6.7, compared to an average of around 5.1 for standard dividend 1 studies. The World Resources Institute (WRI 2025) assessed the full value of 320 climate adaptation projects across the agriculture, health, water, and infrastructure sectors from 2014 to 2024 in 12 lower- and middle-income countries. The analysis revealed substantial and wide-ranging benefits, showing that every USD 1 invested in adaptation can generate more than USD 10.5 in returns over a 10-year observation period, with average annual financial returns estimated at 20% to 27%, even though not all benefits were monetized. As the authors suggest, these findings provide compelling evidence to expand adaptation financing, enhance data and evaluation approaches, and better integrate climate adaptation with mitigation efforts to maximize overall impact. The authors find that the second and third dividends often exceed the first, and that the second and third dividends combined can amount to double the value of the first.
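
As a back-of-envelope check of how the WRI figures fit together (an illustrative calculation, not part of the WRI 2025 analysis itself), the compound annual return implied by a USD 10.5 total return on USD 1 over ten years can be computed directly:

```python
# Illustrative arithmetic check: the compound annual growth rate (CAGR)
# implied by a total return multiple over a number of years.
def annualized_return(total_return_multiple, years):
    """CAGR implied by growing 1 unit into `total_return_multiple` units."""
    return total_return_multiple ** (1 / years) - 1

# USD 1 -> USD 10.5 over 10 years, as reported by WRI (2025).
cagr = annualized_return(10.5, 10)
print(f"{cagr:.1%}")  # prints 26.5%
```

The implied rate of roughly 26.5% per year sits at the upper end of the 20% to 27% range reported above, so the two figures are mutually consistent.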

Next steps for polycrisis and systemic risk research

Future work may focus on deepening the theoretical and evidence base around polycrisis and resilience dynamics and providing actionable guidance for policy and practice. This may also include advancing systems-based modelling techniques and methods for diagnosing and simulating the interdependencies and positive as well as negative feedback loops and cascades characteristic of polycrisis dynamics. Tools from system dynamics and integrated assessment modelling are particularly well-positioned to illuminate emergent risks and leverage points.

In the context of the polycrisis, we also suggest that policymakers need to focus more on addressing the underlying stresses that set a system up for collapse. Analysts have suggested that policymakers ought to overcome "trigger fixation," that is, a tendency to focus on the most immediate shock trigger events alone (Lawrence et al. 2024). Trigger fixation has been described as "normalcy bias" in disaster risk reduction work (Mileti et al. 1999), where it has remained difficult to motivate sustained pre-disaster investment in resilience-generating DRR and CCA policies at the project level as well as at aggregate scale, while major yet insufficient post-disaster spending goes to rehabilitation and recovery. What seems necessary is thus to focus on interactions among shock and risk drivers, not just to examine isolated crises and shocks. Early action to reduce risk and potential impacts becomes even more important given the possibility of adaptation limits and negative social tipping points (Juhola et al. 2022). As some suggest, decision makers are charged with addressing system architecture and high-leverage intervention points through an integrated assessment of the interlinked crises, to ensure that addressing one crisis will not exacerbate others. Resilience thus becomes an equally important consideration alongside efficiency in policy evaluation and decision making (Wernli et al. 2023; Lawrence et al. 2024).

Addressing risk in the context of the global polycrisis, characterized by more enduring and human-induced changes in the environment than those of the past, thus requires governance approaches that embrace innovative analytical methods and bring together a diverse set of stakeholders through interdisciplinary and transdisciplinary approaches. Overall, the governance and management of systemic risk in a polycrisis context represents a departure from traditional risk governance, which typically focuses on specific sectors and responses (Florin and Nursimulu 2018; Renn et al. 2020; Sillmann et al. 2022). The aim is to discern which interventions, reforms, or behavioural changes hold the greatest potential for reducing vulnerability while creating multiple resilience dividends with positive and potentially transformational socioeconomic development outcomes.

Ethics, Governance, and Systemic Resilience in an Age of Polycrisis

Pia-Johanna Schweizer

The contemporary world is increasingly characterized by cascading crises, such as climate change, pandemics, energy insecurity, and geopolitical instability, that cannot be understood in isolation. This condition has been described by scholars as one of systemic risk, where the interdependence of vital systems produces vulnerabilities that traditional, compartmentalized modes of governance can no longer address (Renn et al., 2022; Schweizer, 2021). It has led to interacting crises through tightly coupled economic, social, and ecological systems, giving rise to the concept of polycrisis (Lawrence et al., 2024; Liu & Renn, 2025). The recognition that society's critical infrastructures and ecological supports are deeply interconnected has led to calls for new approaches to both resilience and governance that are adaptive, participatory, and ethically grounded. My recent work with Sirkku Juhola (Schweizer & Juhola, 2024) and a paper by Benjamin Hofbauer et al. (2025) advance this intellectual project by reimagining how systemic risks can be governed and how resilience can be ethically framed. Building on this work, in this article I propose that the challenge of the Anthropocene is not only how to make systems endure, but how to make them just and equitable.

At the core of our argument lies the claim that systemic risks are qualitatively different from conventional risks. They arise not from isolated hazards but from the dynamic interplay of interconnected systems on which societies depend, including energy supply, financial networks, public health systems, and ecosystems. Because these systems are highly complex and adaptive, disturbances in one can trigger cascading effects in others. The COVID-19 pandemic and the accelerating impacts of climate change illustrate this entanglement vividly: a virus disrupting supply chains and economies, and rising temperatures destabilizing energy systems and food production. These examples underscore what we (Schweizer and Juhola, 2024) describe as the three defining properties of systemic risk, namely complexity, uncertainty, and ambiguity. Complexity stems from the non-linear interdependencies that make outcomes unpredictable. Uncertainty reflects the limits of scientific foresight in such dynamic conditions. Ambiguity, finally, arises from competing interpretations and normative disagreements over how to value and respond to risks and crises.

Established risk management systems, built on quantification, prediction, and control, have reached their limits. Governance must therefore be reoriented toward learning rather than mastery. A framework for the governance of and for systemic risks needs to replace hierarchical command with reflexive, iterative, and inclusive processes. Reflection and iteration acknowledge that knowledge is provisional and that policies must adapt over time. Inclusion and transparency ensure that decisions incorporate diverse perspectives and remain publicly accountable. This concept of governance is less about stabilizing a system than about cultivating its capacity to learn from disruption (Kuhlicke, 2013).

This reconfiguration of governance is inherently political. It disperses authority across multiple levels and actors, such as states, scientific institutions, communities, and civil society, recognizing that no single actor can control systemic processes. In doing so, it also redefines the relationship between knowledge and power. Governance becomes a space of negotiation among different forms of expertise and experience, rather than the application of technocratic solutions. This move away from "technocratic solutionism" opens the door to ethical engagement with resilience and adaptation (Hofbauer et al., 2025).

Hofbauer and colleagues take up this task by arguing that resilience, often invoked as a neutral goal of climate adaptation, is in fact a deeply normative concept. To make a system resilient always implies a decision about what should persist, what may transform, and who benefits from stability or change. These are not merely technical questions but moral and political ones. Drawing on theories of justice, we distinguish between distributive, procedural, recognitional, and ontological dimensions of justice, each of which illuminates a different aspect of resilience. Distributive justice concerns who gains or loses from resilience measures; procedural justice addresses who participates in decision-making; recognitional justice asks whose identities and experiences are acknowledged; and ontological justice challenges the very assumptions about what counts as a system and what it means for it to flourish.

By situating resilience within this ethical framework, Hofbauer et al. (2025) challenge the tendency to treat adaptation as a matter of optimization or efficiency. Their case study of the Rhine-Erft catchment in western Germany, which was severely affected by flooding in 2021, illustrates how resilience building involves contested moral choices. The region's inter-municipal flood protection initiatives represent attempts at adaptive governance, yet they also raise questions about which values guide such cooperation. Should policy prioritize economic recovery, ecological restoration, or social equity? How should competing temporalities, notably the urgency of short-term protection versus the need for long-term transformation, be reconciled? Confronting such questions requires systemic resilience to be ethically informed, which leads to a mode of governance that integrates moral reflection into systems thinking and embraces the ambiguity of evolving values (Hofbauer et al., 2025).

What unites both analyses is the recognition that governance and ethics are themselves systemic phenomena. Just as social and ecological systems are complex and adaptive, so too are the values, institutions, and practices through which societies govern them. The governance of systemic risks therefore cannot be separated from the cultivation of ethical reflexivity. While the emphasis on inclusion and deliberation (Schweizer and Juhola, 2024) provides the procedural architecture for this reflexivity, the focus on justice by Hofbauer et al. (2025) provides its moral substance. Together, they articulate a vision of resilience not as the mere capacity to bounce back from shocks but as a collective process of negotiating how societies can govern uncertainty and ambiguity.

This synthesis also signals a broader philosophical shift. It rejects the illusion of control that underpinned modern risk governance. The aspiration to predict and manage complex systems through technical expertise alone is complemented by a continuous practice of participation and learning. Systemic resilience thus becomes a form of ethical pragmatism, balancing competing goods in the face of incomplete knowledge. Rather than seeking to eliminate uncertainty, it asks how societies can remain responsive within it and cultivates adaptability and solidarity. In this sense, resilience and justice are not opposing aims but mutually constitutive.

The task that emerges from these arguments is both practical and philosophical. Practically, governance systems must design institutions capable of experimentation, iteration, and multi-level cooperation. Philosophically, societies must learn to see governance itself as a moral practice. The ethics of resilience thus lies in acknowledging the plural and historically situated nature of justice (Hofbauer et al., 2025). The governance of systemic risk lies in building collective capacities to learn and deliberate in conditions of complexity (Schweizer & Juhola, 2024). Taken together, these works point toward a conception of governance that is neither purely procedural nor purely moral, but integrative: a mode of action that unites systems thinking with ethical reflection. Thus, in the age of polycrisis, the question is no longer only how to make systems resilient but rather what kind of systems (and the functions they uphold) deserve resilience in the first place. The answer suggested here lies in a commitment to just resilience by intertwining adaptive governance and moral responsibility.

References

Hofbauer, B., Einhäupl, P., Hochrainer-Stigler, S., Löhrlein, J., Bittner, D., & Schweizer, P.-J. (2025). Just Systems or Justice in Systems? Exploring the Ethical Implications of Systemic Resilience in Local Climate Adaptation. International Journal of Disaster Risk Science. https://doi.org/10.1007/s13753-025-00653-2

Kuhlicke, C. (2013). Resilience: A capacity and a myth: Findings from an in-depth case study in disaster management research. Natural Hazards, 67(1), 61–76. https://doi.org/10.1007/s11069-010-9646-y

Lawrence, M., Homer-Dixon, T., Janzwood, S., Rockström, J., Renn, O., & Donges, J. F. (2024). Global polycrisis: The causal mechanisms of crisis entanglement. Global Sustainability, 7, e6. https://doi.org/10.1017/sus.2024.1

Liu, H., & Renn, O. (2025). Polycrisis and Systemic Risk: Assessment, Governance, and Communication. International Journal of Disaster Risk Science. https://doi.org/10.1007/s13753-025-00636-3

Renn, O., Laubichler, M., Lucas, K., Kröger, W., Schanze, J., Scholz, R. W., & Schweizer, P. (2022). Systemic Risks from Different Perspectives. Risk Analysis, 42(9), 1902–1920. https://doi.org/10.1111/risa.13657

Schweizer, P.-J. (2021). Systemic risks – concepts and challenges for risk governance. Journal of Risk Research, 24(1), 78–93. https://doi.org/10.1080/13669877.2019.1687574

Schweizer, P.-J., & Juhola, S. (2024). Navigating systemic risks: Governance of and for systemic risks. Global Sustainability, 7, e38. https://doi.org/10.1017/sus.2024.30

OCTOBER ARTICLES
Balancing between benefits and risks: The role of AI in education

Dr Polonca Serrano, Assist. Prof., Alma Mater Europaea University

Introduction

Artificial intelligence (AI) has emerged as one of the most transformative technologies in recent decades, reshaping industries ranging from healthcare to finance. Education is no exception. The integration of AI into classrooms, learning management systems (LMS), and academic research has sparked intense debates among educators, policymakers, and researchers. Intelligent tutoring systems, AI-driven chatbots, and learning analytics tools can adapt to individual learning styles, provide real-time feedback, and support data-driven decision-making, improving student outcomes and institutional efficiency. However, AI also introduces significant risks, including superficial learning, diminished emotional resilience, deepening inequalities, threats to academic integrity, and dependence on proprietary platforms. It cannot replace the human judgment, critical thinking, and interpersonal guidance provided by educators.

This paper examines both the benefits and risks of integrating AI into educational settings, emphasizing the importance of implementing it in an ethical, inclusive, and strategically guided manner.

Benefits of AI in Education

  • Personalization and Adaptive Learning

One of the most significant contributions of AI to education is its ability to personalize learning experiences. Intelligent tutoring systems and AI-driven chatbots can provide tailored support, adapting to students' pace, knowledge gaps, and learning styles. For example, ChatGPT and other AI tools can generate personalized feedback on essays or problem-solving tasks, helping students refine their skills in real time. Villegas-Ch et al. (2025) reported that their intelligent tutoring system for STEM subjects provided personalized feedback and dynamic learning-path adjustments in real time. In their study, the experimental group achieved significantly better results in task-solving accuracy than the control group, and students were very positive about how adaptive feedback helped them identify and correct their errors.

  • Efficiency in Administrative and Teaching Tasks

AI also contributes to reducing administrative burdens on educators. Automated grading systems, chatbots for student inquiries, and AI-generated lesson plans can save teachers a considerable amount of time, allowing them to focus more on meaningful interactions with students and less on repetitive tasks. However, Zawacki-Richter et al. (2019), in a systematic review of almost 150 studies, concluded that automated grading is reliable mainly for standardized formats (MCQs, short answers), while grading open-ended tasks (essays, reflections) still requires human review. Similar findings emphasizing the importance of the teacher's role are reported by Langove and Khan (2024), Weegar and Idestam-Almquist (2024), and Meylani (2024).

  • Early Detection of Student Struggles

AI systems embedded in learning management platforms can track subtle indicators – such as late submissions, irregular log-ins, or declining engagement – to flag students at risk of dropping out (Akçapınar et al., 2019; Mahesh Kumar et al., 2025; Tamada et al., 2022). This predictive function could allow universities to intervene earlier with targeted support.
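As a rough illustration of how such an early-warning rule might combine these indicators, consider the sketch below. The thresholds, weights, and field names are hypothetical, not drawn from the cited studies:

```python
# Hypothetical sketch of an LMS early-warning rule combining the weak signals
# described above (late submissions, log-in gaps, declining engagement).
# All thresholds and weights are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class StudentActivity:
    late_submissions: int      # count this term
    days_since_login: int      # gap since last LMS log-in
    engagement_trend: float    # week-over-week change in activity (-1..1)

def risk_score(a: StudentActivity) -> float:
    """Combine weak signals into a single 0..1 risk score."""
    score = 0.0
    score += min(a.late_submissions, 5) / 5 * 0.4   # cap so one metric can't dominate
    score += min(a.days_since_login, 14) / 14 * 0.3
    score += max(-a.engagement_trend, 0.0) * 0.3    # only a declining trend adds risk
    return score

def flag_at_risk(a: StudentActivity, threshold: float = 0.5) -> bool:
    return risk_score(a) >= threshold

# A student with many late submissions, a two-week absence, and falling
# engagement would be flagged for early intervention:
print(flag_at_risk(StudentActivity(4, 14, -0.5)))  # True
```

In practice, the studies cited above train such models on historical dropout data rather than hand-picked weights; the point here is only that several weak signals are fused into one actionable flag.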

  • Boosting Research and Innovation

AI facilitates interdisciplinary and data-intensive research, enabling scholars to process vast amounts of information quickly. Pisica et al. (2023) highlight how AI can support collaborative research by improving data collection, analysis, and communication among academics across disciplines. This accelerates the discovery of new insights and fosters innovation.

Risks of AI in Education

  • Lack of Deep Understanding

While AI can generate plausible responses, it often lacks genuine comprehension of context and meaning. Language models like ChatGPT recognize linguistic patterns but may struggle to grasp complex theoretical concepts, which can lead to superficial learning if students rely too heavily on AI outputs without critical evaluation (Farrokhnia et al., 2024). An experiment by Pitts et al. (2025) demonstrated that students sometimes overestimate AI's capabilities, accepting incorrect yet convincing suggestions, highlighting how reliance on AI can create an appearance of knowledge without deep understanding. Chen et al. (2025) explored the notion of "superficial knowledge" in model alignment, finding that much of the alignment to human preferences relies on surface-level patterns rather than genuine causal understanding, and that this superficiality dominates across tasks such as safety, detoxification, and behavior. A comparison of human and LLM (GPT-3.5, GPT-4, Gemini) assessments across the cognitive domains of Bloom's taxonomy found that LLMs are fairly reliable for lower-order tasks such as remembering and understanding, but perform less accurately on higher-order domains such as analysis and evaluation, indicating reduced reliability for complex, critical thinking (Teckwani et al., 2024).

  • Emotional Deskilling

AI tutors and feedback systems can reduce opportunities for students to practice emotional regulation and resilience, for example, coping with harsh feedback from a professor or negotiating with peers. Some studies (Lehmann et al., 2024; Wang & Han, 2022) found that overreliance on LLMs or AI tools can impair learning when students simply ask the AI to do a task for them rather than investing the cognitive effort themselves; over time, they come to treat feedback as a technical fix instead of engaging with the emotions that criticism or uncertainty provoke. If a student avoids challenges, this may also stunt the development of strategies for coping with frustration. Many automated feedback systems improve academic performance, yet surprisingly little evidence suggests that automated feedback itself builds emotional tolerance or teaches students to manage emotional responses such as disappointment and frustration (Cavalcanti et al., 2021). Yin et al. (2024) also found that feedback including metacognitive elements (e.g., reflection on how students arrived at an answer, not just what is right or wrong) reduces negative emotions and increases motivation. This implies that richer feedback forms can support emotional regulation; conversely, in their absence, the opportunity to develop these strategies diminishes.

  • Inequality

AI in higher education also poses risks that deepen existing inequalities. Research shows that wealthier institutions and students have greater access to advanced AI tools, while underfunded universities and learners from disadvantaged backgrounds face barriers due to limited infrastructure and digital skills (OECD, 2024). This uneven adoption risks widening achievement gaps rather than narrowing them. Holmes et al. (2022) state that AI systems can also replicate algorithmic biases: predictive analytics tools can sometimes unfairly flag certain groups of students as at risk, potentially reinforcing discrimination in assessment and support. Unless carefully monitored, these systems could consolidate inequality rather than address it.

Nevertheless, recent pilots show AI can reduce inequalities if implemented with care. Rangel-de Lázaro and Duart (2023) found that adaptive tutoring and translation tools, when made widely available, can support low-performing students and those with diverse linguistic backgrounds. Thus, whether AI reduces or amplifies inequality depends on governance, funding, and intentional design.

  • Data Colonialism

Universities are becoming dependent on external vendors and proprietary platforms. By outsourcing critical functions, institutions risk losing sovereignty over student data and becoming locked into costly, non-interoperable ecosystems (Couldry & Mejias, 2018). This consolidation favors large tech firms and elite institutions, leaving others at a disadvantage.

Conclusion

AI has already begun to reshape education, offering unprecedented opportunities for personalization, accessibility, efficiency, and innovation. Its ability to provide real-time feedback, streamline administrative processes, and support complex learning tasks makes it a powerful tool for modern education. However, limitations remain, such as shallow understanding, risks to academic integrity, ethical concerns, and unequal access, which underscore the need for cautious integration.

The challenge for educators and policymakers lies in striking a balance between AI's benefits and its risks. Rather than replacing human educators, AI should complement their roles, enhancing rather than diminishing critical thinking, creativity, and social interaction. A thoughtful, ethical, and inclusive approach, including comprehensive training for all stakeholders in education, will be essential to ensure that AI contributes positively to the future of education.

References

Akçapınar, G., Altun, A., & Aşkar, P. (2019). Using learning analytics to develop early-warning system for at-risk students. International Journal of Educational Technology in Higher Education, 16(1), 40. https://doi.org/10.1186/s41239-019-0172-z

Barrett, A., & Pack, A. (2023). Not quite eye to A.I.: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00427-0

Cavalcanti, A. P., Barbosa, A., Carvalho, R., Freitas, F., Tsai, Y.-S., Gašević, D., & Mello, R. F. (2021). Automatic feedback in online learning environments: A systematic literature review. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100027

Chen, R., Perin, G. J., Chen, X., Chen, X., Han, Y., Hirata, N. S. T., Hong, J., & Kailkhura, B. (2025). Extracting and understanding the superficial knowledge in alignment. arXiv:2502.04602.

Couldry, N., & Mejias, U. (2018). Data colonialism: Rethinking big data's relation to the contemporary subject. Television & New Media, 20. https://doi.org/10.1177/1527476418796632

Farrokhnia, M., Banihashem, S. K., & Noroozi, O. (2024). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 61(3), 460–474. https://doi.org/10.1080/14703297.2023.2195846

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–526. https://doi.org/10.1007/s40593-021-00239-1

Langove, S. A., & Khan, A. (2024). Automated grading and feedback systems: Reducing teacher workload and improving student performance. Journal of Asian Development Studies, 13(4), 202–212. https://doi.org/10.62345/jads.2024.13.4.16

Lehmann, M., Cornelius, P. B., & Sting, F. J. (2024). AI meets the classroom: When do large language models harm learning? arXiv preprint arXiv:2409.09047. http://arxiv.org/abs/2409.09047

Mahesh Kumar, N. B., Chithrakumar, T., Thangarasan, T., Dhanasekar, J., & Logamurthy, P. (2025). AI-powered early detection and prevention system for student dropout risk. International Journal of Computational and Experimental Science and Engineering, 11(1). https://doi.org/10.22399/ijcesen.839

Meylani, R. (2024). Artificial intelligence in the education of teachers: A qualitative synthesis of the cutting-edge research literature. Journal of Computer and Education Research, 12, 600–637. https://doi.org/10.18009/jcer.1477709

OECD. (2024). The potential impact of artificial intelligence on education: Opportunities and challenges. OECD Artificial Intelligence Papers, No. 23. https://doi.org/10.1007/978-3-031-53963-3_39

Pisica, A. I., Edu, T., Zaharia, R. M., & Zaharia, R. (2023). Implementing artificial intelligence in higher education: Pros and cons from the perspectives of academics. Societies, 13(5), 118. https://doi.org/10.3390/soc13050118

Pitts, G., Rani, N., Mildort, W., & Cook, E.-M. (2025). Students' reliance on AI in higher education: Identifying contributing factors. arXiv:2506.13845.

Rangel-de Lázaro, G., & Duart, J. M. (2023). You can handle, you can teach it: Systematic review on the use of extended reality and artificial intelligence technologies for online higher education. Sustainability, 15(4), 3507. https://doi.org/10.3390/su15043507

Tamada, M. M., Giusti, R., & Netto, J. F. (2022). Predicting students at risk of dropout in technical course using LMS logs. Electronics, 11(3), 468. https://doi.org/10.3390/electronics11030468

Teckwani, S. H., Wong, A. H.-P., Luke, N. V., & Low, I. C. C. (2024). Accuracy and reliability of large language models in assessing learning outcomes achievement across cognitive domains. Advances in Physiology Education, 48(4), 904–914. https://doi.org/10.1152/advan.00137.2024

Villegas-Ch, W., Buenano-Fernandez, D., Navarro, A. M., & Mera-Navarrete, A. (2025). Adaptive intelligent tutoring systems for STEM education: Analysis of the learning impact and effectiveness of personalized feedback. Smart Learning Environments, 12, 41. https://doi.org/10.1186/s40561-025-00389-y

Wang, Z., & Han, F. (2022). The effects of teacher feedback and automated feedback on cognitive and psychological aspects of foreign language writing: A mixed-methods research. Frontiers in Psychology, 13, 909802. https://doi.org/10.3389/fpsyg.2022.909802

Weegar, R., & Idestam-Almquist, P. (2024). Reducing workload in short answer grading using machine learning. International Journal of Artificial Intelligence in Education, 34(2), 247–273. https://doi.org/10.1007/s40593-022-00322-1

Yin, J., Goh, T.-T., & Hu, Y. (2024). Interactions with educational chatbots: The impact of induced emotions and students' learning motivation. International Journal of Educational Technology in Higher Education, 21, 47. https://doi.org/10.1186/s41239-024-00480-3

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0

Beyond Efficiency: AI for Rhythm-Aware, Compassionate Healthcare

Kiriti Prasad Choudhury, Manager, Beximco Pharmaceuticals

AI is transforming how we live, work, and heal; however, technology alone cannot meet the rising challenges of aging populations, chronic diseases, and overstressed healthcare systems. The question is no longer what AI can do, but how wisely we guide it. Modern healthcare excels in precision but often misses empathy and context. To truly evolve with AI, healthcare must integrate the strengths of four timeless forces – Medicine, Nature, Mind, and Rhythms – into one humane, adaptive model that restores balance between data and life. Ancient systems such as Ayurveda and Chinese Medicine have long recognized that timing (circadian and lunar cycles) profoundly influences well-being. Modern research echoes this insight, confirming links between rhythms, metabolism, and emotional states.

AI already supports diagnosis, drug discovery, and clinical decisions – from IBM Watson's analytics to AlphaFold's protein modeling. Wearables and neurofeedback tools track health signals in real time, while emerging "digital twins" simulate therapies safely. Yet the healthcare landscape remains fragmented, with data silos, limited attention to the mind and environment, and poor integration of timing intelligence. The challenge ahead is not to invent more tools, but to connect what already exists – science, nature, and empathy – into one rhythm-aware framework. In this article, I aim to outline a vision for such a framework based on humane use of AI tools.

The Vision: Humane Intelligence in Healthcare

The next step is not replacing human judgment with algorithms, but harmonizing them. A Humane Intelligence system learns from both science and life, merging biological precision with emotional and environmental awareness.

By synchronizing medical data, natural elements, mental feedback, and human rhythms, AI can evolve into a compassionate partner in care, guiding decisions that are effective, empathetic, and attuned to time and context.

Four Pillars of Humane Care

Medicine – The Core of Precision: Structured medical and physiological data, including lab results, genetics, and disease history, integrate with lifestyle and medication profiles to provide a comprehensive view of individual health. This forms a precise, context-aware baseline for care.

Nature – Environment & Nutrition (The Quiet Healer): Environmental and nutritional factors – light, air quality, water/hydration, and diet – are analyzed using sensors, logs, and geolocation. The system personalizes meal composition and chrononutrition (meal timing) to stabilize energy and metabolism, supports microbiome health, and respects cultural food patterns. Examples include timing sunlight to optimize vitamin D, matching meals to individual metabolic needs, and utilizing forest or sea-air exposure to aid recovery.

Mind – The Adaptive Feedback Loop: Wearables and neurofeedback tools monitor stress, mood, and brain activity, providing valuable insights into the mind. Through fuzzy logic and reinforcement learning, the model identifies factors that improve well-being and resilience.

Rhythms – The Synchronization Layer: Circadian, lunar, and seasonal cues guide the timing of medicines, meals, and therapies, aligning interventions with the body's most receptive hours. Traditional Indian, Chinese, or Japanese healing systems have long recognized these lunar and daily cycles; modern chronobiology is now exploring similar influences on hormonal and metabolic patterns.

A dynamic mathematical matrix model then integrates data from these four pillars to find the most balanced outcome. This evolving algorithm recalculates as new inputs arrive, ensuring every recommendation remains personal, adaptive, and humane.
Digital twins simulate outcomes safely before clinical use, bridging precision with empathy.

How It Could Work in Practice

Integrated data flow: The AI engine gathers medical records, lab results, wearable device readings (e.g., heart rate, sleep, activity), environmental data (e.g., light, air quality, humidity, nutrition), and reflections on mood or stress. All signals are standardized into a unified feature matrix.

Computational engine: Using mathematical optimization, the system builds a Personalized Treatment Score Grid, ranking each therapy by six factors: Effectiveness, Safety, Circadian Fit, Adherence, Mental Support, and Learning Gain.

Adaptive feedback: Through fuzzy logic, Markov modeling, and reinforcement learning, the model continuously adjusts. For example, if meditation improves heart-rate variability, its weight in future plans increases.
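The scoring and feedback steps above can be sketched as a small program. This is an illustrative sketch under stated assumptions, not the author's actual model: the factor values, uniform initial weights, and the simple reward-driven weight update are all invented for demonstration.

```python
# Illustrative sketch of the "Personalized Treatment Score Grid" idea: each
# therapy is scored on the six named factors, combined with weights that are
# nudged when positive feedback (e.g., improved heart-rate variability)
# arrives. All numbers and the learning rate are assumptions.

FACTORS = ["effectiveness", "safety", "circadian_fit",
           "adherence", "mental_support", "learning_gain"]

def grid_score(therapy: dict, weights: dict) -> float:
    """Weighted sum of the six factor scores (each in 0..1)."""
    return sum(weights[f] * therapy[f] for f in FACTORS)

def update_weight(weights: dict, factor: str, reward: float, lr: float = 0.1) -> dict:
    """Reinforcement-style nudge: positive feedback raises a factor's
    weight, then all weights are renormalized to sum to 1."""
    w = dict(weights)
    w[factor] = max(w[factor] + lr * reward, 0.0)
    total = sum(w.values())
    return {f: v / total for f, v in w.items()}

weights = {f: 1 / len(FACTORS) for f in FACTORS}  # start uniform
meditation = {"effectiveness": 0.6, "safety": 0.95, "circadian_fit": 0.8,
              "adherence": 0.7, "mental_support": 0.9, "learning_gain": 0.5}

before = grid_score(meditation, weights)
# Wearable data shows meditation improved heart-rate variability:
weights = update_weight(weights, "mental_support", reward=1.0)
after = grid_score(meditation, weights)
print(after > before)  # True: meditation scores high on mental support
```

A production system would learn such weights from outcome data per patient; the sketch only shows how feedback can shift a therapy's ranking over time.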

Rhythm-aware scheduling: AI times every intervention to the body's most receptive state – for example, medication at 7:30 AM, meditation at 6:00 AM, sun exposure at 11:00 AM for vitamin D, meals aligned with metabolism, and sleep guided by lunar cycles.

Digital twin simulation: A virtual patient tests these schedules before they are applied in the real world, reducing risk and improving precision. The final interface delivers clear daily routines for patients and intuitive dashboards for clinicians.

Figure 1: High-level system architecture of the proposed system, showing AI integration across medical, environmental, mental, and rhythmic inputs.

From Vision to Reality

Phase I – Simulation and Review: The model should first be tested using synthetic and de-identified health data to verify its stability, accuracy, and bias. Panels of physicians, chronobiologists, and AI ethicists can review outputs to ensure clinical soundness and transparency.

Phase II – Pilot Implementation: Controlled studies in integrative clinics can compare model-guided care with standard treatment. Results should measure adherence, circadian alignment, and patient-reported outcomes, following standards such as the CONSORT-AI and SPIRIT-AI guidelines.

Phase III – Federated Multi-Center Trials: Larger trials across diverse institutions will refine parameters while protecting privacy through Federated Learning. Studies should align with standards such as ISO 14155 for the evaluation of medical technology.

Ethics and Governance: Every stage requires clinician oversight, transparency tools (e.g., SHAP, LIME), and continuous fairness checks. Governance must follow frameworks such as the FDA's Good Machine Learning Practice and the WHO's AI Ethics Framework.

Challenges Ahead: Real-world validation, data quality, privacy, interpretability, and patient acceptance remain key challenges that must be addressed. Cloud-based deployment and user education can widen access.

Global Impact: If these challenges are addressed, rhythm-aware AI has the potential to transform healthcare worldwide, particularly in high-population regions, helping systems shift from reactive treatment to preventive, personalized well-being guided by science, ethics, and compassion.

Looking Ahead – A Humane Future for Healthcare

The next chapter of AI for medicine will not be written by algorithms alone, but by how wisely we let them serve humanity. Rhythm-aware AI offers a chance to reconnect care with life鈥檚 natural order, where treatment follows the pulse of time, nature, and mind.

If guided with empathy and ethics, technology can help restore balance to systems under strain and dignity to care. The goal is simple: healthcare that heals intelligently, listens deeply, and moves in harmony with the rhythm of being human. Given the demographic burden on health systems in many countries, and the rising costs and shortages of medical staff, AI has a vital role to play, ironically, in deepening and re-humanizing medical care.

Europe’s Moral Compass for AI: From Regulation to Realisation
Why the EU’s Guardrails Took Time – But Are Still Utterly Necessary

Samraj Matharu, Founder, The AI Lyceum

The new era of Intelligence

Artificial Intelligence has truly arrived, transforming the world across every sector, from content creation in marketing to discovering new ways to treat disease. For the first time, we have created a creator, not just a tool. The difference between AI and a car is that a car cannot recreate itself, but AI can. We have built an intelligent species that is non-biological (for now), one that will continue to evolve, transform our society and reveal things we never thought possible. Generative AI brings both extraordinary opportunities and challenges equal to the scale of its innovation, and regulation has now become a matter of urgency.

In this article, I look at the European Artificial Intelligence Act, first proposed in April 2021 and formally adopted in 2024. The Act was created to ensure that AI systems are safe, ethical, and trustworthy for users, while promoting innovation and legislative clarity in Europe. The framework establishes a set of principles and clear rules for categorising AI systems based on their potential risk level, from minimal to unacceptable. I shall analyse this regulatory framework against the ethical principle expressed in the Latin maxim, utilem pete finem – 'Seek a useful end!'

Is it necessary? Is it good? Is it truthful?

These three questions are known as the triple filter test, attributed to the Greek philosopher Socrates. I shall apply this test to AI. Before we make, deploy, or use these systems, we must ask ourselves: Will it be useful now or in the future? Is it good for us and for humanity? And is it truthful? Does it provide us with accurate information, and will we benefit from it?

Generative AI, in its current form, is based upon a particular type of architecture called a Generative Pre-Trained Transformer (GPT), which is used in tools such as OpenAI's ChatGPT. Google invented the transformer architecture, detailed in the paper 'Attention Is All You Need' by Vaswani et al., 2017. In this paper, they state: "We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train." OpenAI then created the GPT model on top of this transformer 'brain', enabling us for the first time to converse with a machine in a natural, conversational manner.

Generative models can create new content, such as text, images, or code, rather than just analysing data, as seen in ChatGPT generating an answer to your question. Pre-trained models learn from vast datasets before being fine-tuned for specific tasks, much like completing a degree before specialising as an engineer. The transformer architecture utilises neural networks inspired by the human brain, observing data, learning context, and using logic to understand relationships between words, for example, recognising that “cat,” “sat,” and “mat” are connected. Language modelling predicts the next word in a sequence using tokens, effectively mirroring how we write and think. Once trained, these models can be guided or prompted to perform across topics, from answering questions and summarising research to designing or creating entirely new content.
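The next-word prediction described above can be illustrated with a toy example: a model assigns scores (logits) to candidate tokens, and a softmax turns them into probabilities. The candidate words and their scores below are invented for the 'cat sat on the mat' example; a real GPT computes logits with its transformer layers.

```python
# Toy illustration of next-token prediction: made-up logits for candidate
# continuations of "the cat sat on the ...", converted to probabilities
# with a numerically stable softmax, then the most likely token is chosen.

import math

def softmax(logits: dict) -> dict:
    m = max(logits.values())                              # subtract max for stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for the next token:
logits = {"mat": 4.0, "roof": 2.5, "keyboard": 1.0, "moon": -1.0}
probs = softmax(logits)

next_token = max(probs, key=probs.get)
print(next_token)  # prints "mat"
```

A real model repeats this step token by token, feeding each chosen token back in to extend the sequence, and may sample from the distribution rather than always taking the maximum.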

We have thus created a technology that mimics how the human brain thinks and operates. With innovations this radical, there are significant considerations across society, from environmental impact to new laws, ethics, and safety. Transparency, bias, and accountability are essential to ensure our decisions are sound and AI delivers tangible benefits. As technology evolves faster than policy, regulations often lag. The EU is now working to close that gap by building guardrails that enable safe and responsible AI innovation.

EU’s Moral Compass – The AI Act

A key reason the EU AI Act was created was to protect the fundamental rights of EU citizens, including safety, privacy, and non-discrimination, mitigating harm from AI systems. The approach behind the Act is illustrated in the following flowchart:

Specific examples of banned uses of AI, regulated by the act, include:

  1. Cognitive behavioural manipulation that could harm people. This is also known as ‘mind crime’.
  2. Social scoring by public authorities, which evaluates and classifies people based on their social behaviour.
  3. Certain biometric identification uses, notably real-time remote facial recognition in public spaces by law enforcement, with narrow exceptions for serious crimes.
  4. Predictive policing that perpetuates biases through profiling.

Before the Act, the opacity of AI systems had created public concern. The Act classifies AI by risk: unacceptable risk, such as social scoring, is banned, while high-risk systems are tightly regulated. Chatbots are generally considered limited risk, although this depends on the data used. Under the GDPR, processing personal data carries its own obligations, meaning data protection and AI compliance must align.
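As a loose illustration of this tiered logic, the sketch below maps hypothetical use cases to risk tiers and obligations. The mappings are simplifications for demonstration only, not legal guidance on the Act:

```python
# Illustrative sketch of the AI Act's risk-tier idea: a use case maps to a
# tier, and the tier determines obligations. The lookup table and obligation
# strings are simplified assumptions, not the Act's actual legal categories.

RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

# Hypothetical lookup table condensing the examples in the text:
USE_CASE_TIER = {
    "spam_filter": "minimal",
    "customer_chatbot": "limited",        # transparency obligations apply
    "exam_grading": "high",               # tightly regulated
    "social_scoring": "unacceptable",     # banned outright
}

def obligations(use_case: str) -> str:
    tier = USE_CASE_TIER.get(use_case, "high")  # unknown cases treated cautiously
    if tier == "unacceptable":
        return "prohibited"
    if tier == "high":
        return "conformity assessment, oversight, documentation"
    if tier == "limited":
        return "transparency duties (disclose AI use)"
    return "no specific obligations"

print(obligations("social_scoring"))  # prints "prohibited"
```

The point of the tier-first design is that obligations scale with potential harm rather than with the underlying technology, so the same model can face different duties in different deployments.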

Most obligations fall on providers, but I believe developers, deployers and users all share responsibility. Users must also understand how these systems work to make informed choices. General-purpose AI models must provide documentation, adhere to copyright law, and report incidents to ensure cybersecurity and accountability.

Seven Principles of the Act

The act aims to build trust with us by guaranteeing that AI will operate safely and transparently through a set of seven principles:

  1. Human agency and oversight
    AI should augment, not replace, our decision-making. We must remain under ultimate control through mechanisms such as human-in-command, in-the-loop, and on-the-loop to ensure accountability and intervention as needed. We should only have a human out of the loop when we are confident that the results are replicable.

  2. Technical Robustness and Safety
    AI systems require reliability, security, and resilience against misuse, errors, or adversaries. Their consistency, as I mentioned in point one, should be evident across different conditions, and they should be designed to prevent harm.
  3. Privacy and Data Governance
    The data used by AI must adhere to standards such as the General Data Protection Regulation (GDPR) and the Data Protection Act (DPA) – ensuring it is lawful, high-quality, and well-protected. Confidentiality and privacy should be guarded at every stage to align with the ethical stances outlined within the regulations I mentioned.
  4. Transparency
    Decisions made by AI tools must be traceable and explainable. Take ChatGPT, for example. The example below illustrates Chain of Thought (CoT) and references/sources, ensuring I am aware of how the answer was derived, as well as the supporting information.

I asked ChatGPT’s Agent to answer a specific question. The question was, ‘Find me useful EU laws around AI.’ It spent 5 minutes researching for me – this is the power of AI. We no longer need to spend time researching, but we do need to check and guide these tools to return the information to us as intended, with the output we require. I thus requested a Chain of Thought (CoT), as shown below, which outlines the steps it went through and the reasoning behind them. These types of features in AI tools are critical. We, as users, should ask these tools why they referenced specific sources, to maximise transparency and understanding of how these tools work, for maximum gain and oversight. ‘A bad workman always blames his tools,’ they say. We must therefore learn how to maximise the outputs of these tools, which is part of ‘prompt engineering’.

  5. Diversity, Non-Discrimination and Fairness
    AI must be inclusive and unbiased to ensure that we promote equal access and prevent discrimination across society. We need to guarantee fair outcomes for various demographic groups and upskill those who may be less technologically literate. The elderly population needs strong AI training, particularly to ensure they are well-equipped to live with this rapidly evolving technology. AI also needs guardrails to ensure that algorithmic echo chambers are detected and mitigated, as these can reinforce narrow content consumption patterns.
  6. Societal and Environmental Well-Being
    This is about ensuring AI benefits both people and the planet, contributing to sustainable development, reducing environmental impact and promoting societal welfare. The systems we use must do the same. From my perspective, this means iterative prompt efficiency, making our chats as token-efficient as possible and building models sustainably, from hardware to the AI supply chain. It also means equal access and awareness. I believe there should be official EU training to upskill every citizen to a common mandatory standard.
  7. Accountability
    There must always be a clear understanding of responsibility for outcomes, which includes mechanisms for auditing, oversight, and ensuring that the creators of these tools are accountable for the systems they create. As I mentioned, I believe accountability extends across those who develop the tools, those who deploy them, and those who use them. We are all in this together, and we all contribute to the development of AI through its usage. An EU-wide survey gathering public feedback would be highly beneficial, allowing the public, private, and academic sectors to work together to address those questions.

The EU’s new AI strategy

The European Union has now evolved from creating rules through the AI Act to active enablement, ensuring a forward-looking approach to AI. This chart illustrates the overall strategy:


The EU recently introduced a new policy initiative to help deploy and operate AI safely across the public and private sectors, especially amongst small and medium-sized enterprises, with three key aims in mind:

  • €1 Billion to Drive Real AI Deployment
    The EU will channel funding through Horizon Europe and the Digital Europe Programme, utilising an initial €1 billion in public investment to unlock further co-financing from Member States and private partners – a scale-up fund aimed at transforming research into practical AI solutions.
  • Support for Businesses of All Sizes
    Enterprises, ranging from SMEs to large firms, will gain access to testing facilities, expert advisory services, and refocused Digital Innovation Hubs (DIHs), now branded as "Experience Centres for AI." These hubs will accelerate companies' testing, adoption, and integration of AI.
  • Sector-Specific Rollout and Compliance Infrastructure
    The first wave of projects targets industries such as healthcare, manufacturing, mobility, climate, and pharmaceuticals, with implementation milestones extending up to 2027. The EU has also launched an AI Act Service Desk – a central compliance platform offering a Checker, Explorer, and expert Q&A to help organisations meet new regulatory obligations and accelerate adoption.

The challenge here is to ensure that guardrails inspire trust while enabling creativity and human agency through the responsible use of AI as a tool. The new strategy nevertheless shows promise that Europe shall seek to localise development across the region.

The Agentic Future

To continue innovating, we must act with speed and focus. As a keen technologist, I see the internet dissolving and evolving into a new format: the agentic web. The word 'agentic' comes from the word agency, which itself stems from the Latin 'agere', meaning 'to act' or 'to do'. AI is now being given the ability to perform tasks on our behalf.

As we move into a new internet age, we need to continue establishing new ways to innovate safely, guided by the maxim 'to seek a useful end'. Therein lies a contradiction, given that agency by definition assumes a capacity to act independently and therefore unpredictably. Regulation thus needs to consider the extent to which agency can be shaped, as it is routinely shaped in children, to prevent future abuse of the freedom that agency entails.

SEPTEMBER ARTICLES
What Is Needed Most in Risk Reporting?

Michael Marien, WAAS/EXTRA Working Group

A noteworthy comment was made near the end of the remarkable six-part Netflix documentary series, "Life on Our Planet." A mere five minutes or so of the final episode was devoted to current global trends in population, urbanization, climate, pollution, and related issues. Morgan Freeman, the narrator, admitted that these trends may lead to human extinction, but noted that ours is the first species to know what is happening and could do something about it. This is profoundly true. And profoundly misleading, because there are dozens of possible causes for the demise of all or a significant part of humanity, and hundreds of organizations offering explanations for single causes, occasional overviews, and numerous proposals to prevent or mitigate disaster. Can a new UN report stand above the others and make a difference?

INTRODUCTION

The title of the first EXTRA webinar on September 18 was "What is the Significance of the New UN Risk Report?" The United Nations Global Risk Report 2024 (July 2025, 28p) appears at first glance to offer significant insight into what is happening and what to do about it, underscored by the opening statement in the Preface by Secretary-General Guterres that "We are at a defining moment for humanity… in a year marked by converging global crises."

The UN report does offer much to consider, but is this first edition significant? "Significance" has many meanings to many people, and will be explored below. A second, and more important, question is: what changes could make the next edition, promised for late 2026, more significant? This, too, will be considered. A still more important question, What is Needed Most in Risk Reporting?, considers how to overcome our fragmented understandings.

THE UN RISK REPORT AND ITS COMPETITOR

The significance of the UN report depends on the definition of "significance" and how readers respond to it. Have busy readers/users seen all or part of it? What features add to one's understanding? What policies and actions are new to the reader? What actions have been taken as a result of the report? These questions can be informed by a questionnaire sent to the 1,100 contributors and others on the UN mailing list.

Getting a better grip on "significance" is one problem that can be addressed. Another is that there is a worthy competitor, The Global Risks Report (Jan 2025, 102p), published annually for the past 20 years by the World Economic Forum in Davos. Although the Forum is seen primarily as a meeting place for business elites, the WEF report focuses on risks that should concern all leaders and citizens. It gives no hint of a pro-business orientation.

The UN "Global Risk Report" and the WEF "Global Risks Report" have similar titles and methodologies. The UN report surveys "more than 1,100 stakeholders in 136 countries, with representatives of government, industry, civil society, and academia." The WEF report surveys "more than 900 global leaders across academia, business, government, international organizations, and civil society," as well as 100 thematic experts, including risk specialists.

The basic results are also quite similar. The UN ranks 28 risks (the top three are climate change inaction, large-scale pollution, and misinformation). The WEF report ranks 32 risks in the next two years (the top three are mis- and disinformation, extreme weather, and state-backed armed conflict).

Looking into the near future, the UN ranks risks for the next 1-7 years (the top three are AI and frontier technology, a pandemic, and cybersecurity breakdown), and for the next 8-15 years (the top three are a pandemic, a geoengineering disaster, and a supply chain collapse). The WEF ranks risks over the next 10 years (the top three are extreme weather, biodiversity/ecosystem collapse, and critical change to the Earth system). The WEF also has extensive analysis of 2025 risks (titled "A World of Growing Divisions") and of 2035 risks (titled "The Point of No Return").

The UN ranks the most critical risks by respondent location (top 10 risks in 7 geographic areas). The WEF ranks risks by six age groups and five stakeholder groups. Both the UN and WEF offer "spaghetti charts" of risk interactions. See the UN's "Network Map of Global Risks" (Fig. 5) and compare with the WEF's "Global Risks Landscape" (p.9), which uses straight-line connections (uncooked spaghetti) instead of curved lines.

There are, of course, some differences. The UN report analyses the top 10 risks by connection strength, the risks we are least prepared for (space-based event, cybersecurity breakdown, mis- and disinformation), the most critical global vulnerabilities (Fig 8 map), and the most effective actions to reduce risks (multi-government action, government and civil society), along with an excellent list of 13 barriers to better global risk management (weak governance, lack of political consensus and trust, poor risk priorities, poor information, strong resistance, etc.).

The UN report concludes with four future scenarios (breakdown, status quo, progress, breakthrough) based on a continuum of cooperation, "The Path Forward" (strengthen the UN to address risks and respond to complex global shocks), and three Annexes (risk definitions, survey methodology, and scenario methodology).

In addition to an extensive analysis of individual risks, the WEF report has a useful Appendix D (pp.93-96) on Risk Governance, showing how top risks can be addressed by R&D, national and local entities, development assistance, financial instruments, corporate strategies (e.g., labor shortages, supply chains), and multi-stakeholder engagement. An essential addition to this list, on page 33, shows the top risks that treaties and agreements can address.

WHAT CHANGES CAN MAKE THE UN REPORT MORE SIGNIFICANT?

The above-mentioned similarities between the UN and the WEF reports cannot be dismissed. I will politely assume that the similarities are coincidental. Others may think otherwise. It is thus imperative for the authors of the next UN report at least to mention the WEF report, which has been published for 20 years and, in my opinion, is the better choice, at least for now, for anyone who wants an extensive and thoughtful risk overview. Better still, some form of cooperation between the UN and the WEF might be feasible and productive.

As for changes to the next UN report, the questionnaire to readers/users mentioned above should yield many good suggestions for additions, deletions, and formatting changes. My personal suggestion is to delete the four overly simplified scenarios; there are too many near-term possibilities for both good and bad developments, and this should be stressed.

Another suggestion is to add reviews, or at least some mention, of relevant reports from other UN agencies such as the UN Environment Programme, the UN Development Programme, the UN Office for Disaster Risk Reduction, and the UN Economic and Social Council, and perhaps even from NGOs with much to say about threats, risks, and governance (see the EXTRA Info Hub's "Top 20 Recent Reports" and the EXTRA DIRECTORY of some 200 recent reports, books, and articles).

WHAT IS NEEDED MOST IN RISK REPORTING?

Broad global surveys of major groups that rank risks are indeed valuable. But it is essential to note that the EXTRA InfoHub is quite different from these surveys. It surveys, selects, and annotates recent literature: mostly reports (which get far too little attention), but also some books and journal articles. These items are grouped into five major categories (Overviews, such as the UN and WEF risk reports; Planet; People; Security; and Sustainability) and by 33 keywords under these categories, e.g., climate, biodiversity, oceans, migration, AI, finance, etc.

The most obvious lesson from these reports, books, and articles is the fragmentation among organizations and many hundreds of individuals concerned with threats and risks. Four significant gaps can be identified:

  • Between descriptive generalists concerned with risk reports and other overviews, and general normative schemes such as the UN's 17 SDGs and WAAS's HS4All.
  • Between these generalists and those concerned with single areas such as nuclear weapons, climate, public health and pandemics, and AI/AGI, which are quite different from each other, demanding different actions.
  • Between cosmopolitan/progressive/evidence-based thinkers, actors, and politicians, and the growing ideologically-based right-wing nationalist/populist groups such as the AfD and, notably, the Trump 2.0 regime in the US, based on the 900+ page Project 2025 of the Heritage Foundation. Overcoming this growing political polarization won't be easy.
  • Between the world of the UN and related thinking on growing risks, and major media such as The New York Times, The Economist, The Guardian, Time, etc. In the major media and talk shows in the US (and probably elsewhere), individual issues such as climate change and AI are mentioned, but there is not the slightest hint of the overall "polycrisis" and the long list of growing risks. I am deeply puzzled by this gap, and by why The New York Times, for example, makes no mention of sustainability, the SDGs, etc.

Ideally, all of these gaps between insular "silos" should be bridged. Still, priority should be given to continuously getting the message out to the major media in each country, in various ways. Some criteria:

  • The key message is not simply that there are existential threats and risks, and a polycrisis, but that the polycrisis is widening and the risk list is growing, while components such as climate are worsening.
  • The message needs to be delivered in various ways by at least a dozen groups or individuals, ideally with several champions in each major country or region; perhaps they can be aligned in a "2028 Superwoke Project" or some such branding.
  • A one-off op-ed or article can help for a start, but the message must be repeated in various ways and updated as new events unfold and new evidence-based reports are published.
  • An alliance of several dozen groups should be formed to petition The New York Times (and similar publications) for a weekly "Sustainability" section, and to solicit advertisers to make such a section profitable (it will never compete with the styles or sports sections, but, arguably, we won't have styles or sports if many trends worsen).
  • Positive reports about the cost-effective actions needed to reduce the anticipated costs of climate change, species loss, pollution, and wars should accompany the negative message about growing risks.

No single book or report can cover all the needed actions. Still, some reports can cover several concerns in a readable and authoritative fashion, notably The New Global Possible: Rebuilding Optimism in the Age of Climate Crisis (Disruption Books, Sept 2025, 332p), by Ani Dasgupta, President and CEO of the World Resources Institute, with chapters on how countries can collaborate, how technology can innovate for good, the limits of voluntary action by business, land rights as the foundation for justice, cities as laboratories for change, a new growth story for the economy, and orchestrating change for good.

AUGUST ARTICLES
Technology and The Crisis of Containment

Prof. Thomas Reuter, WAAS/EXTRA Working Group

In a recent discussion in the WAAS Working Group on Existential Threats to Human Security, David Harries, former chair of Pugwash Canada and Associate Executive Director of Foresight Canada, raised the concern that the conventional approach to threat containment, based to a large extent on early warning, is becoming obsolete.

In the wake of technological innovations such as AI, but also as a consequence of the increasing proliferation and speed of creation of new threats such as biological weapons, "state actors, state agents, public and private organizations, and individuals are now more than equipped to escape 'containment' and defeat 'early warning'," he noted. This article argues that containment may be regained and maintained only by applying principles of human security.

The current crisis of technology containment affects all aspects of contemporary life. It is testimony to a process of technological innovation that now appears to be increasingly out of control. The lack of containment in technology is not the result of an oversight or an accident, nor can it be reduced to innovation in pursuit of profit, though that obviously plays a role. Instead, I contend, it is driven by a relentless race for ever more extreme tech capabilities in the service of military supremacy.

This race can be traced back to the dawn of history, but today it has reached unprecedented extremes, driven primarily by escalating geopolitical struggles for dominance among competing major powers. Once a slow and meandering trickle, the race for technical and general supremacy is now a raging torrent advancing at exponential speed, and recently has been accelerated to yet another level of recklessness by the use of super-human machine intelligence for technological development and deployment.

Tech innovation is a war machine. Technology, more generally, is about power and control over nature or other people. It intrinsically lends itself to hubris: perhaps not necessarily, but so it has proven.

The political realists tell us: stop military tech development, and you will be destroyed! So, of course, nobody is going to stop, even if they know a particular piece of tech innovation could kill all of humanity or all life on Earth. Au contraire: all the more reason to push ahead with it relentlessly! Everyone wants the most murderous form of AI, the deadliest biotech, or other weaponizable technology under their control first, before their opponents beat them to it.

There is no scope for regulations in such a perpetual war, nor is there time to apply a precautionary principle. And given the ubiquitousness of so-called ‘dual use’, civilian technology is often directly and always indirectly implicated in this race. After all, contenders for global power require an economic surplus (or debt) to be able to pay the steep price for owning the fastest technological war machine. Money is thus also weaponised.

This situation has escalated since the industrial revolution and, more recently, the digital revolution. We have progressed from the industrial warfare of the world wars, to the nuclear stand-off of the Cold War, to the hybrid and drone warfare of today's major conflicts. The associated escalation of risk to human survival is now so acute that it calls for a fundamental rethink of how security is to be achieved: a shift away from a military paradigm to a human security paradigm. But what does that mean?

Only voluntary restraint, or 'inner containment', will save us from ourselves. Human security comes from within us individually, and from within our social systems of mutuality. External, physical containment, based on out-innovating or pre-empting your opponents, is what is driving the game, and it is not going to end. More technology is the problem, not the solution. The solution either lies within us, or there is none.

Inner containment is an ethic that does not assume or require intrinsic benevolence. It assumes merely the insight and genuine conviction that life requires containment, and that it is wise to exercise moderation in dealing with other people and their interests. It is a commitment to law, and to the maintenance of effective mechanisms for correcting the few who prove incorrigible: law enforcement.

A functional and durable system of international law and law enforcement cannot be imposed. Law is based on agreement to exercise restraint in the pursuit of self-interest and self-preservation. For a law to be loved and jointly upheld, not just feared and obeyed under duress, it must be built on voluntary commitment. That can happen only if laws are rational and just, and hence acceptable and even desirable to all, with all actors well aware that security and even survival are not achievable long term in a world bristling with killer technology and devoid of commitment to lawful behaviour. This is the stance I refer to as inner containment. Such inner containment is the only way to de-escalate. It is the human quality that enables human security, not from others but with them.

This is not some far-fetched proposition, but already the majority position. As it is, most nations would be quite content to live safely under a just international law, just as most individuals are happy to live under a just national law, or would be if they had the opportunity. There are some national actors, however, who think themselves exceptional or entitled to dominate in the name of their security, and others who feel a need to avenge past wrongs, or wish to indulge their lust for more power, all in the name of their nation.

These national actors cannot be policed at present. Their operations typically have the character of organised crime: ruthless, secretive, profitable, and hence very well funded, which is what gives them impunity. I say 'actors' rather than 'people' because the majority of people, even in these nations, do not want war, or at least not unless they face desperately difficult living conditions or are whipped into a frenzy by incendiary propaganda authored by well-organised criminal actors. Active warmongering and war profiteering are crimes against humanity, at home and abroad.

We have never had a comprehensive global security concord based on the insight that humanity can no longer afford to live without inner containment. But we do have a rudimentary international legal structure that is imperfect and sometimes unjust, yet continues to evolve. Recent events in Ukraine and Gaza are instructive as to the limited effectiveness of taking matters to the International Court of Justice or a war crimes tribunal after the event. This is not a precautionary approach.

On that front, however, there are some interesting precedents. Nuclear weapons treaties, despite their failures and rapidly increasing fragility, are an instructive case because, until now at least, 80 years after the obliteration of Hiroshima and Nagasaki, they have prevented a repeat of such actions and a potential global nuclear Armageddon. We now need to respond much more broadly to the fact that physical containment is a self-defeating process and is quickly becoming untenable, not just in the context of nuclear weapons, but in general.

We do not yet seem to be approaching such a concord, except perhaps by the painful and dubitable route of calamity. History shows that innovative frameworks for regulating human relations tend to emerge and find broad support in the period after a major conflagration. But there is no genuine precedent for the current and emerging state of war technology. A global conflagration today could take multiple forms and be initiated by numerous state and non-state actors.

It may be very hard, or impossible, to come back from such a disaster. Waiting out the cycles of history may seem acceptable if one chooses to adopt a detached long-term perspective, but we cannot count on such cyclical developments anymore. History may end with the next downward turn, and not at all in the manner that Francis Fukuyama predicted.

The duty of scientists and all rational thinkers and champions of peace is to lay out pathways toward de-escalation, and that means deceleration of the frantic technological supremacy race. We are not doing this, or not systematically and publicly enough. The vast majority of people and nations would welcome genuine rule of law, and we should be emboldening them to step up to the challenge by making the rational case for this more human approach to security.

EXTRA, the World Academy of Art and Science's Working Group on Existential Threats and Risks to All, is therefore planning to organise a foresight session on how the war technology machine may be stopped, and how the usurpation of lawful government by (suit-wearing) organised crime can be ended. Various UN reform options need to be discussed in such a forum, drawing on lessons learnt from recent experiments such as the push for a universal nuclear weapons ban by non-nuclear-armed nations.

The world may just be ready to embrace inner containment now, seeing that it has become a matter of human survival. It certainly needs to be tried.

First version published as: T.A. Reuter 2024. 'The Crisis of Containment – Time for a New Approach?' Cadmus: Journal of the World Academy of Art and Science 5(3, Part 2): 44-46.
