Recent Reports and Articles on the AI Race, Impacts, and Needed Guardrails

EXTRA Director of Research, Michael Marien
mdmarien@outlook.com

For better and worse, Artificial Intelligence (AI) is already widespread and still evolving, perhaps to AGI and superintelligence in the next few years. This overview seeks to identify many of the headlines and bottom lines of recent reports and articles, as well as three books, all published in 2025 with two exceptions. It is divided into three major parts:

I) The AI Race between the US and China, and among a handful of massively spending US technology companies;
II) Impacts of AI: both current and expected; and
III) Creating Guardrails for this emerging and influential technology.

THE AI RACE

Two recent articles provide essential background to what’s happening. The first (1) describes Meta, Microsoft, Amazon, and Google planning to spend $320 billion on AI infrastructure in 2025, mostly on data centers. Critics argue that there is no guarantee this bet will live up to its potential, but many executives believe that “the bigger risk is not spending enough to keep up with rivals.” The second (2) reports that Meta has revamped its AI strategy to include a new team dedicated to superintelligent AI that will improve “nearly every aspect of what we do.”

This investment is given a political push by a third article (3), which reports that the US president has signed three executive orders and outlined an “AI Action Plan” to “remove red tape and onerous regulation.” America started the AI race, Trump said, and “America is going to win it.” Arguably, the EU could stop Trump (4).

The investment in AI, requiring data centers, chip factories, and power supply, is pumping up the US stock market. “Companies will spend $375 billion globally in 2025 on AI infrastructure, projected to rise to $500 billion next year.” The big tech companies “are the largest financiers of the frenzy, but private equity firms have been pouring in capital, too.” A major asset management firm estimates that “AI infrastructure will sop up $7 trillion over the next ten years” (5). “If there were no data centers to build, dollars would flow into other types of investment” (6).

A survey of the visions of AI firms, “from the very plausible to the fantastical,” notes that tech companies “are taking a leap of faith,” fueled by FOMO, or fear of missing out, which does not come cheap (7).

In Empire of AI, a former investigative journalist with the Wall Street Journal describes a new age of empire in which a small handful of globally scaled companies dominate the field, led by OpenAI and its ChatGPT. This massively disruptive sector requires vast resources to create large language models, “arguably the most fateful tech arms race in history” (8).

IMPACTS

A July cover feature of The Economist (9) argues that “Many fear a hellscape, in which AI-enabled terrorists build bioweapons that kill billions, or a misaligned AI that slips its leash and outwits humanity.” But this fear crowds out thinking about “the immediate, probable, predictable—and equally astonishing—effects of a non-apocalyptic AI.” Hence the possibility of an explosion of economic growth, with wild swings between stocks “as it becomes clear which companies were winning and losing winner-take-all contests.”

But these two poles of “hellscape” vs. a new economic era of global abundance are countered by recent doubts about AGI success, as well as by the negative impacts of AI already visible and possible in the near future.

Gary Marcus, a founder of two AI companies and author of six AI books, argued in September that GPT-5 is nowhere near the revolution many had expected, and that “the chances of AGI’s arrival by 2027 now seem remote” (10). Servaas Storm asserted in October that the US has reached “Peak Gen AI” for current large language models, and that further scaling of chips and data centers will not deliver AGI (11).

Some negative impacts are already visible. A “flood of fake photos and images” has amplified social and partisan divisions and bolstered antigovernment sentiment (12). OpenAI’s new “Sora” smartphone app enables the creation of videos entirely from AI, making disinformation easier and endless (13). “Be Wary of Chatbots Offering Guidance” (New York Times, 30 Sept 2025, D6) warns of “falling prey to AI’s flattery” and AI companions that lead to social deskilling.

Cyberattacks are escalating in speed, volume, and sophistication, with GenAI chatbots serving as a “force multiplier” for the global hacker toolbox (14). A report from RAND Europe warns of uncontrollable AI incidents and calls for mandatory security audits and independent oversight (15). Even scarier, an 80-page April report from Forethought warns of AI-enabled coups to seize power, reinforced by a 40-page July report from RAND that presents an “AGI Coup” as one of several scenarios (16).

An excellent overview is provided by an Elon University survey of 301 experts (17), in which positive changes are foreseen in curiosity and the capacity to learn, decision-making and problem-solving, and innovative thinking, with negative changes in social and emotional intelligence, capacity to think deeply, mental well-being, and sense of identity and purpose. Fundamental change in human capacities was expected by 23% of the experts, considerable change by 38%, moderate change by 31%, and little or no change by 8%. Overall, 16% saw AI as changing things mostly for the better for most people worldwide, 50% saw equal changes for better and for worse, 23% saw changes mostly for the worse, 6% saw little or no change, and 5% were unsure.

Finally, if all of this seems too much to handle, a provocative new book by Eliezer Yudkowsky and Nate Soares, leaders of the Machine Intelligence Research Institute in Berkeley, argues that, as its title puts it, “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All,” and that the stakes are existential (18).

GUARDRAILS

In August, the UN General Assembly established an Independent Scientific Panel on AI and an annual Global Dialogue on AI Governance, recognizing the need to regulate AI “before it becomes a threat to world stability and peace.” However, permanent members of the outdated Security Council can block initiatives to rein in the technology; such members include China and the US, the leading competitors in AI development. The Director of the White House Office of Science and Technology Policy, Michael Kratsios, said on Sept 24 that “We totally reject all efforts by international bodies to assert centralized control in global governance of AI” (19).

Three other UN bodies have advised on developing guardrails:

  • UN Economic and Social Council (Jan 2024) stressed the urgent need for governance as AI advances to AGI and “potential existential risks” (20).
  • UN High-Level Advisory Body on AI (July 2024) warned that AI governance is crucial to address challenges and risks, and that common ground and coordination are needed (21).
  • UN Development Programme (March 2025) explored the transformative potential of AI for better and worse, including potential “existential risks,” calling for balanced governance that supports innovation while safeguarding inclusion and equity (22).

Other organizations have also urged various guardrails:

  • Centre for International Governance Innovation (June 2024) sponsors a Global AI Risks Initiative concerned with loss of control of advanced AI systems and AI weaponization; proposes a Framework Convention to codify shared objectives for AI cooperation (23).
  • AI Action Summit (Feb 2025): initiated by France, UNEP, and the ITU with more than 100 partners in 11 countries, the Coalition supports the SDGs and seeks an AI respectful of Planetary Boundaries, warning of “a risk of fragmented and redundant initiatives” (24).
  • Center for AI Safety (March 2025) warns that superintelligent AI would be a matter of national security, and advocates precautionary regulatory frameworks and alignment with human values. Describes a Mutual Assured AI Malfunction (MAIM) deterrence framework somewhat similar to the MAD policy for nuclear deterrence (25).
  • MIT AI Risk Initiative (March 2025) synthesizes 831 risk mitigations from 13 frameworks, categorizing them into Governance & Oversight, Technical & Security, Operational Process, and Transparency & Accountability Controls (26).
  • Singapore Conference & Infocomm Media Development Authority (May 2025) synthesizes expert agreement on defense-in-depth measures for safe GPAI systems to reduce collective harm and guide global safety R&D; proposes shared risk standards, rapid incident response, and adaptive structures to manage cascading AI impacts (27).
  • International Panel on the Information Environment (Sept 2025) analyzes AI’s role in peacebuilding, as well as misinformation and ethical risks. Proposes ongoing risk assessment, cross-sector risk management, and inclusive and transparent solutions with local input (28).
  • Millennium Project (Sept 2025) published a 205-page book by CEO Jerome Clayton Glenn on global governance of the transition to AGI, with chapters on what might happen if AGI is not governed, managing international cooperation, flexible response to new issues, disruptions that could complicate enforcement, reducing and preventing crime and terrorism, continuous AGI audits, and more (29).
  • Humanity AI (Oct 2025 press release): a philanthropic coalition to back organizations shaping AI for people and communities, with funding priorities in defending democracy, education in the best interests of all students, humanities and culture, enhancing how people work, and security in deploying AI to protect people (30).

In sum, there are plenty of thoughtful organizations to advise the UN’s Independent Scientific Panel on AI, and to participate in the annual Global Dialogue on AI Governance.

CONCLUSION

A May 2025 article in Scientific American asked, “Could AI Really Kill Off Humans?” (31). It concludes that “no scenario can be described where AI is conclusively an extinction threat to humanity,” noting that it is very hard to kill all of us, whether through nuclear war, biological pathogens, or climate change. But it is potentially easy to kill off many billions of us through any number of AI-related catastrophes. Effective global governance of AI is obviously needed, the sooner the better, but it will probably be too little, too late. Hopefully, this forecast will be wrong.

On a somewhat more positive note, the first “TIME100 AI” list of “The 100 Most Influential People in Artificial Intelligence” provides brief descriptions of “Leaders, Innovators, Shapers, and Thinkers,” both supporters and critics (32).

REFERENCES
  1. New York Times, 30 June 2025, B1. Describes tech companies “accelerating their spending, pumping hundreds of billions of dollars into their frantic effort to create systems that can mimic or even exceed the abilities of the human brain.”
  2. New York Times, 9 Aug 2025, B3. Mark Zuckerberg is the CEO of Meta.
  3. New York Times, 24 July 2025, p.1. This “AI Action Plan” embraces the tech industry’s view that it must work with few guardrails, “a forceful repudiation of other governments, including the EC, that have approved regulations to govern development of AI.”
  4. New York Times, 20 Aug 2025, A22. An expert on the EU argues that the EU has put in place a number of regulations over the past decade to balance AI innovation, transparency, and accountability. To operate in international markets, US companies must follow the rules of those markets; thus the EU, an enormous market committed to regulating AI and establishing guardrails against possible risks, could well thwart Trump’s techno-optimist vision. ALSO SEE “California Governor Signs AI Safety Law,” New York Times, 1 Oct 2025, B4, on “The Transparency in Frontier AI Act” requiring advanced AI companies to report safety protocols. The same article also notes that “38 states passed or enacted about 100 AI regulations” in 2025.
  5. New York Times, 28 Aug 2025, p.1. “Optimism around the windfall that AI may generate … (is lifting) the entire domestic economy.”
  6. New York Times, 7 Oct 2025, A21. A Yale economist notes that, despite tariff rates not seen in a century, the stock market has risen to new highs 30 times in 2025, in spite of Trump’s policies, not because of them. “The coat of AI gloss is giving the administration runway to double down on bad ideas.” The AI revolution “is masking real problems.”
  7. New York Times, 29 Sept 2025, B1 & B6.
  8. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Karen Hao. Penguin, May 2025, 496p, $32. Published in the UK by Allen Lane as Empire of AI: Inside the Reckless Race for Total Domination. Somewhat in contrast, ALSO SEE “A Different Kind of Superpower: AI in India,” The Economist, 20 Sept 2025, 11-12. Notes that India is now the second-largest market for OpenAI, which sells in India for a fifth of the price of its cheapest American plan; some 92% of Indian office workers regularly use AI tools, compared with 64% in the US. Frugal products infused with AI could be an Indian export across the developing world, a path different from the US or China, but “no less consequential.”
  9. The Economist (Cover Feature), 26 July 2025, p.7.
  10. Gary Marcus, New York Times, 8 Sept 2025, A22. ALSO SEE “The Cost of the AI Delusion: By Chasing Superintelligence, America Is Falling Behind in the Real AI Race,” Foreign Affairs, 26 Sept 2025. [Not seen]
  11. Servaas Storm. Policy Commons, 2 Oct 2025, 13p. Also cautions that “fixation on AGI may crowd out more practical applications of existing AI.”
  12. New York Times, 29 June 2025, p.1. Free and easy to use, AI tools undermine faith in electoral integrity.
  13. New York Times, 4 Oct 2025, B1 & B4. A companion article on B4 is entitled “App Makes Dissemination of Disinformation Easier, Convincing, and Endless,” despite commenting that OpenAI “had made an effort to include guardrails” for Sora. ALSO SEE New York Times, 5 Oct 2025, SR3, on the Sora app and the creation of Tilly Norwood, a brunette actress created by AI and threatening a world run by fakes.
  14. CrowdStrike (Austin, TX), April 2025, 53p.
  15. Elika Somani et al. RAND Europe, Aug 2025, 61p.
  16. Tom Davidson et al. Forethought, 15 April 2025, 80p (including c.135 references). Warns that leaders could fully replace personnel with AI systems that are singularly loyal to them. ALSO SEE Barry Pavel et al. RAND Corp, July 2025, 40p. “AGI Coup” is one of several scenarios.
  17. Elon University, Imagining the Digital Future Center (Elon, NC), April 2025, 286p.
  18. If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Eliezer Yudkowsky and Nate Soares. Little, Brown, Sept 2025, 272p. A full-page profile of the book and its authors is given in “A.I. Prophet Wants It All Shut Down,” New York Times, 15 Sept 2025, B1.
  19. New York Times, 26 Sept 2025, A7. A subsequent article on 28 Sept (p.10), “UN Seeks Global Guardrails on AI, to the Trump Administration’s Dismay,” announces the formation of “a 40-member panel of scientific experts to synthesize and analyze the research on AI risks and opportunities,” which could result in an independent AI watchdog similar to the IAEA on atomic energy. Also mentions that a group of more than 200 leaders “called last week for global AI guardrails.”
  20. UN Economic and Social Council, 29 Jan 2024, 16p.
  21. UN High-Level Advisory Body on AI, 7 July 2024, 95p.
  22. Human Development Report 2025. UN Development Programme, March 2025, 324p.
  23. Centre for International Governance Innovation (Waterloo, Ontario, Canada), June 2024, 34p. Describes CIGI’s Global AI Risks Initiative.
  24. AI Action Summit, Feb 2025, 5p. Launched at the Paris AI Action Summit, initiated by France, UNEP, and the ITU.
  25. Center for AI Safety (Dan Hendrycks, Director), Eric Schmidt and Alexander Wang. March 2025, 40p.
  26. MIT AI Risk Initiative, March 2025, 24p.
  27. Singapore Conference & Infocomm Media Development Authority, May 2025, 33p.
  28. International Panel on the Information Environment, Sept 2025, 60p.
  29. Jerome Clayton Glenn (CEO, Millennium Project). De Gruyter Brill, Sept 2025, 205p, $147. Amazon Kindle edition $104. Derived from a Millennium Project report, April 2024, 46p.
  30. Humanity AI, 14 Oct 2025 press release by a coalition of 10 philanthropic leaders, including the AI Opportunity program of the MacArthur Foundation. Grants to begin in 2026.
  31. “Could AI Really Kill Off Humans?” Michael J.D. Vermeer (RAND), Scientific American, 6 May 2025.
  32. Time, 8 Sept 2025, pp. 37-53. This is followed by an AI Special Report with four articles (pp. 55-71): “Beyond Human Control: The Race for Artificial General Intelligence Poses New Risks to an Unstable World”; rising electricity bills due to “energy-guzzling data centers built to train and run AI models”; how AI will reshape politics globally through scientific advances and job displacement; and “the agentic age” as a new frontier for AI and humans, where machines do cognitive work once performed by humans. Time has been identifying the 100 Most Influential People for 20 years, and is now specializing in a few areas, such as AI.
REPORTS COLLECTION

Human Development Report 2025
UN Development Programme
March 2025, 324p. Explores the profound, dual-edged impact of artificial intelligence (AI) on human development, noting breakthroughs in creativity and productivity alongside risks of bias, inequality, and ethical dilemmas. Finds that “AI is increasingly enabling cross-border collaboration in research and innovation, fostering new networks of knowledge production across regions” but warns of existential risks, recommending balanced governance to promote inclusion, equity, and resilient systems. Key recommendations: Foster inclusive, trustworthy AI development; address IP challenges; implement equity-focused regulatory frameworks.

MIT AI Risk Initiative
March 2025, 24p. Synthesizes 831 risk mitigations from 13 frameworks, categorizing them into “Governance & Oversight, Technical & Security, Operational Process, and Transparency & Accountability Controls.” Stresses operational safeguards, continuous monitoring, and that “AI risk management is an emerging concept,” serving as a foundational resource for global decision-makers. Key recommendations: Prioritize ongoing monitoring and robust risk processes; encourage community and stakeholder feedback on actionable strategies.

Center for AI Safety (Dan Hendrycks, Director), Eric Schmidt, and Alexander Wang
March 2025, 41p. Frames superintelligent AI as a strategic and security challenge. Details the Mutual Assured AI Malfunction (MAIM) deterrence framework, highlights chip vulnerability and supply chain risks, and stresses the need for legal, multipolar governance frameworks. “Outcomes hinge on what we do next” underlines the urgent importance of coordinated deterrence. Key recommendations: Strengthen supply chains, legal frameworks, and international deterrence to prevent the uncontrolled escalation of superintelligence.


Sarah Kreps et al., RAND Corp
Sept 2025, 72p. Assesses AGI’s impact on international stability, focusing on U.S.–China competition and the transition phase before AGI’s maturity. Notes the inadequacy of traditional arms control for AGI’s dual-use nature and proposes innovative international governance approaches, including an “AI cartel”. Warns that “risks arise not only from AGI’s eventual power, but also critically from the ambiguous and volatile period preceding its arrival.” Key recommendations: Develop tailored governance for dual-use technology; maintain strategic communication and flexibility during AGI’s formative phase.


Barry Pavel et al., RAND Corp
July 2025, 40p. Explores scenarios for AGI’s influence on global power, including the “New Renaissance” through cooperative innovation versus “Governance Failure” or an “AGI Coup” driven by misaligned, centralized superintelligence. Identifies existential risks, including economic, security, and authoritarian shifts. Stresses the importance of public-private partnerships and robust alliance structures for alignment and safety. Key recommendations: Promote balanced oversight, resilient alliances, and proactive safety and ethics protocols.


Elika Somani et al., RAND Europe
Aug 2025, 61p. Calls for multi-layered strategies to prevent uncontrollable AI incidents, recommending mandatory reporting, security audits, independent oversight, and a safety-first development culture. Notes preparedness must precede deployment, addressing open-source risks and emphasizing information sharing. Key recommendations: Institute mandatory reporting, routine audits, and joint risk management across stakeholder groups.


Singapore Conference & Infocomm Media Development Authority
May 2025, 33p. Synthesizes expert agreement on defense-in-depth measures for safe GPAI systems, spanning risk assessment, alignment, robustness, and real-time control and intervention mechanisms. Identifies cooperation in risk thresholds, incident response, and dynamic benchmarking as ways to reduce collective harm and guide global AI safety R&D. Key recommendations: Establish shared risk standards, rapid incident response, and adaptive institutional structures to manage cascading AI impacts.

International Panel on the Information Environment
Sept 2025, 60p. Analyzes AI’s dual role in peacebuilding, enhancing conflict analysis and citizen engagement while highlighting bias, misinformation, and ethical risks. Advocates rights-based, conflict-sensitive AI design, local participation, and ongoing risk assessment, emphasizing human oversight and contextual adaptation. Key recommendations: Design inclusive and transparent AI solutions with local input; foster international cooperation and cross-sectoral risk management for fragile states.



31 May 2024, 19p summary. Solutions and innovations to support progress across the SDGs, with a focus on Goals 1, 2, 13, 16, and 17; more than 300 scientists submitted briefs, with 99 passing peer review. Emphasis on AI-driven precision farming to increase yields up to 70% by 2050 and AI’s potential in health care, but notes that AI data centers consume 1% of global electricity and use large amounts of freshwater.

Summit of the Future Outcome Documents: Pact for the Future, Global Digital Compact, and Declaration on Future Generations

Sept 2024, 66p. Final version of the Pact, listing 56 Actions on sustainable development and financing (“we will take bold, ambitious, accelerated, just and transformative actions to implement the 2030 Agenda”), international peace and security (“we will redouble our efforts to build and sustain peaceful, inclusive and just societies and address the root causes of conflicts”), science and technology, youth and future generations, and global governance. The Global Digital Compact (pp40-55) seeks to “close all digital divides” and enhance AI governance. The Declaration seeks stronger youth participation.

CrowdStrike (Austin, TX)
April 2025, 53p. “Cyberattacks are escalating in speed, volume, and sophistication.” Identifies a shift in 2024 toward streamlined, scalable attacks driven by a business-like approach. “Don’t underestimate today’s enterprising adversaries,” with a “force multiplier” impact of off-the-shelf chatbots making genAI “a popular addition to the global hacker toolbox.” In 2024, China-nexus activity surged 150% across all sectors. Voice-phishing (vishing) attacks skyrocketed, with e-crime breakout time averaging 48 minutes. Most detections were malware-free. Access broker ads increased, and 26 new adversaries tracked by CrowdStrike raised the total to 257. North America had 53% of interactive intrusions, followed by 14% in Russia, 11% in Europe, and 7% in India. Emphasizes the need for proactive defense strategies and adaptive cyber resilience as threat actors grow more agile and commercially motivated.

AI Action Summit
Feb 2025, 5p. Launched at the Paris AI Action Summit, the Coalition, initiated by France, UNEP, and the ITU, with >100 partners in 11 countries, unites global stakeholders to advance AI’s alignment with environmental and climate goals while addressing its environmental impact. It provides a “Platform of Engagement” to connect stakeholders and an “Initiatives Hub” to enhance collaboration, visibility, and avoid duplication. The Coalition supports the UN’s SDGs and coordinates with existing initiatives, “continuing work for an AI respectful of Planetary Boundaries.” The transformative potential of AI in tackling the climate and environmental crisis is already unfolding. Still, the environmental footprint of AI is also growing, and there is “a risk of fragmented and redundant initiatives that dilute impact.”


Imagining the Digital Future Center (Elon University, Elon, NC)
April 2025, 286p. A survey of 301 experts asked to predict AI impact by 2035 on 12 essential human traits and capabilities. Change is likely to be primarily positive in curiosity and capacity to learn, decision-making and problem-solving, and innovative thinking. Change is likely to be mostly negative in social and emotional intelligence, capacity to think deeply, trust in widely shared values and norms, mental well-being, and sense of identity and purpose, among others. Dramatic and fundamental change in human capacities as advanced AI is broadly adopted was expected by 23% of the experts; considerable change by 38%, moderate but noticeable change by 31%, minor and barely perceptible change by 5%, and no noticeable change by 3%. Overall, 16% view AI as mostly beneficial for most people worldwide, 50% see fairly equal changes for better and worse, 23% believe changes will mostly be for the worse for most people, 6% expect little to no change overall, and 5% are unsure.

Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI
Karen Hao (former Wall Street Journal writer)
Penguin, May 2025, 496p, $32. (Published in the UK by Allen Lane as “Empire of AI: Inside the Reckless Race for Total Domination.”) An AI expert and investigative journalist describes a new and ominous age of empire, where a small handful of globally scaled companies are at the forefront, led by ChatGPT and OpenAI. The vision of success for this massively disruptive sector requires vast resources to create massive large-language models, “arguably the most fateful tech arms race in history.”

UN High-Level Advisory Body on AI
7 July 2024, 95p. AI governance is “crucial” to address challenges and risks, and ensure its “tremendous potential” is realized. Despite much discussion, “the patchwork of norms and institutions is still nascent and full of gaps.” On common ground and benefits, coordination and implementation gaps, a UN AI Office, etc.


Centre for International Governance Innovation (Waterloo, ON, Canada)
June 2024, 34p. CIGI’s Global AI Risks Initiative is concerned with loss of control of advanced AI systems and AI weaponization (misuse of AI systems to cause harm). A Framework Convention should codify the most important shared objectives for AI cooperation in addressing the most urgent issues posed by “accelerating development of AI.”

UN Economic and Social Council
29 Jan 2024, 16p. On AI’s potential to accelerate progress in poverty reduction, education, and other areas, as well as its associated risks, including job displacement and data bias. Stresses the urgent need for governance, as AI rapidly advances to AGI and potential existential risks.


Millennium Project (Jerome Glenn, CEO)
April 2024, 46p. AGI is an advanced AI capable of autonomous learning across domains. Most experts project the emergence of AGI within 3-5 years, followed by Artificial Superintelligence (ASI). Many of the 229 respondents warn of existential risks from unregulated AGI.


Yoshua Bengio, G. Hinton, and 23 others
20 May 2024, 5p. AI is progressing rapidly, as companies shift focus to generalist AI systems that act autonomously, but with risks including large-scale social harms, malicious uses, and loss of human control. “AI safety research is lagging.” Governance measures must prepare us for sudden AI breakthroughs with an automatic trigger when AI hits certain milestones.



Aug 2024, 79p. Classifies 777 risks into 7 AI risk domains and 23 sub-domains.

The Digitalist Papers: Artificial Intelligence and Democracy in America

24 Sept 2024, 240p. $36.72; $28.97pb from Amazon. Just as the Federalist Papers of the 18th century analyzed the great challenges of the day, these 12 essays by 19 “thought leaders” consider new challenges to democracy and participatory practices, AI’s potential to transform government operations and public service, the complex challenges of AI regulation, and the need for participatory frameworks and ethical considerations.


Rachel Adams and 20 others
Feb 2024, 29p. Four AI development scenarios, ranging from global cooperation to fragmented policies.

The AI Revolution: What the New Age of Artificial Intelligence Means for Humanity
NewScientist Essential Guide No. 23
July 2024, $15. Why has AI suddenly leapt forward? Describes how the technology works, its capabilities, and “future horizons from utopia to annihilation.”

The Age of AI: And Our Human Future
Henry A. Kissinger, Eric Schmidt (former Google CEO & Chair), and Daniel Huttenlocher, Little, Brown and Company
Oct 2021, $30. Discusses the emerging human-machine partnership, the evolution of AI, the dream of AGI, global network platforms and disinformation, security and world order, conflict in the digital age, AI and the international order, managing AI, human identity and AI, and the essential need for an AI ethic.


Ketan Patel
Jan 2024, 122p. Warns of the polycrisis, “a cascade of successive global disruptions diverting leaders’ attention and resources away from longer term systemic priorities.” Shows that 19 core technologies, now existing, can enable necessary transitions and advance the SDGs.
