Emerging artificial intelligence (AI) tools have spurred reactions in the climate world ranging from despair over their water and energy use to optimism about their potential contributions to resilience efforts. But which of these reactions amount to doomerism, and which to misguided techno-optimism? Do the potential benefits of AI outweigh the costs and unintended consequences? And what do these tradeoffs mean for security? The answers to these questions depend on which type of ‘AI’ is in question and how it is applied. Regardless, it is crucial that civil society, researchers, and policymakers guide responsible AI policy and incentive structures in support of a more climate-resilient future.
These are some of the questions and key takeaways from a recent webinar hosted by the Center for Climate and Security (CCS), which brought together AI, climate, peace, and security experts for a discussion available to watch here. Director Erin Sikorsky gave an overview of CCS’s work on climate security and kicked off a new series of AI webinars across the Council on Strategic Risks’ institutes that will take place throughout 2026.
A discussion among leading experts then explored the current landscape and path forward on this nexus.
- Dr. Costa Samaras, Director of Carnegie Mellon University’s Scott Institute for Energy Innovation and former Chief Advisor for the Clean Energy Transition at the White House Office of Science and Technology Policy (OSTP), noted that AI policy will dictate the future of electricity, which he called the security infrastructure of this century. AI could offer real benefits in scaling renewable energy and building greater resilience to extreme weather. However, these benefits will not be unlocked without action and guidance from researchers, civil society, and policymakers.
- Dr. Kyungmee Kim, Associate Senior Lecturer of Political Science at the Swedish Defence University and an Associate Researcher at the Stockholm International Peace Research Institute (SIPRI), explained how AI shapes the causes of climate-related insecurity. Drawing on her experience working in fragile and conflict-affected settings, she noted potential positive use cases for AI, such as identifying migratory hotspots in Bangladesh and helping mitigate conflict over upstream waterways between China and Vietnam. Yet there are significant risks in the narrative that AI is a “silver bullet,” and ongoing concerns about the lack of co-development of these systems with the Global South.
This conversation and Q&A offered a pragmatic look at the potential benefits and pitfalls of proliferating AI systems across digital platforms and what they might mean for climate security going forward. Stay tuned for future webinars from the Converging Risks Lab on AI and human cognition, and from the Nolan Center on AI and biological risks.
