Is AI saving the world or breaking it? As the era-defining technology leapfrogs from what-if to what-next, it can be hard for us humans to know what to make of it all. You might be hopeful and excited, or existentially concerned, or both.
AI can track Antarctic icebergs 10,000 times faster than humans and optimize renewable energy grids in real time – capabilities that could help us fight climate change. But it also consumes incredible amounts of energy, and ever more of it, creating a whole new level of climate pollution that threatens to undermine those benefits.
All that dizzying transformation isn’t just the stuff of news headlines. It’s playing out in daily conversations for many of us.
“Have I told you what Chatty and I came up with yesterday?” My dad and I talk every Sunday. “It’s an environmental detective show – you’ll star in it, of course.”
He’s mostly retired and spends a lot of time at home while my stepmom is at work, so he’s happy to have found an exciting new hobby: storytelling sessions with his AI pals (the above-referenced ChatGPT, as well as Claude and “Gemmy,” aka Gemini). This is a good thing, I think. He should be having some fun in his sunset years.
But then the conversation turned to a much less fun AI story: I told my dad my sixth grader said he’d felt pressured to dumb down an essay at school because a classmate got heat for using AI. What made the teacher suspect the kid? She flagged it for college-level vocabulary. “Well, that just ain’t right,” said my dad. Agreed.
Grim laughter was my brother-in-law’s reaction to the subject of my son’s essay. Once a rock star graphic designer (literally for rock bands), he said AI has killed creative career prospects for all our kids. But who knows, he said, maybe it will solve climate change – or maybe it will only make it worse.
That tension is what brought me here. The more I read and heard, the more I saw that he and I are not alone in struggling with this topic. To help make sense of the complexity, I asked Ann Bostrom, the chair of the National Academies of Sciences’ Roundtable on Artificial Intelligence and Climate Change, what she thought of my brother-in-law’s comment. In a nutshell: Is AI good or bad for the climate? The answer is decidedly not straightforward.
“Right now, there is serious uncertainty about what can or might happen with AI,” she said. “But that’s partially because it’s a new tool we’re developing – AI is a tool. So what it does, or what it can do, is a function of what we do with it.”
Understanding the vast toolset that is AI (It’s not all ChatGPT)
AI isn’t a single technology but a vast toolbox containing many specialized tools, each with different purposes and environmental footprints. While dinner table conversations often focus on ChatGPT and similar systems, these represent just one part of a rapidly evolving landscape that’s difficult to neatly categorize.
The broader toolset includes everything from systems that analyze medical scans, predict weather patterns, and monitor coral reef health to those that generate text, optimize supply chains, and power autonomous vehicles. Large language models like ChatGPT and Claude represent just one branch of this diverse ecosystem, and they’re frequently updated with new versions, making it challenging to track their evolving capabilities and impacts. This constant iteration reflects a broader pattern across AI development – systems are continuously refined, retrained, and reimagined.
But here’s the thing about any AI tool: Despite their differences, they all share an insatiable appetite for energy – lots of it. And as they scale up, their hunger only grows. Early machine learning systems ran comfortably on desktop computers with minimal power consumption. Some of today’s most prominent AI systems use 100,000 GPUs (the specialized chips that crunch AI calculations), drawing as much electricity as a small city and filling server farms that span several football fields. For perspective, Meta’s flagship AI system relied on about 16,000 of these chips, a setup that would fit in a single, much smaller facility. And clusters of more than 300,000 GPUs are already on the drawing board.
Today, there are upward of 8,000 data centers worldwide – a number projected to double by 2026. The scale is getting so massive that in an extreme scenario, U.S. data centers could consume 12% of U.S. electricity, with one study estimating the extra energy demand will equal that of entire countries the size of Sweden or Argentina.
This surge in power consumption carries profound implications for our climate goals.
The climate reality: What we know and what we don’t
Every step of AI computing comes with a carbon cost. According to new analysis from MIT Technology Review, AI data centers now consume 4.4% of all U.S. electricity, with projections showing AI alone could use as much electricity as 22% of U.S. households by 2028. These centers typically use electricity that’s 48% more carbon-intensive than the U.S. average.
The training process – where AI systems learn by digesting huge datasets – requires astronomical amounts of energy. Training GPT-4, for instance, burned through enough energy to power San Francisco for three days, at a cost of over $100 million.
And training accounts for just 10-20% of AI’s energy use. The real energy hog is inference – what happens every time someone asks a question, generates an image, or gets an AI recommendation. The MIT Technology Review study found that a simple text query uses about as much energy as riding six feet on an e-bike, while generating a five-second video burns the equivalent of a 38-mile ride.
The catch: These numbers represent snapshots based on highly specific parameters – particular models, data centers, energy grids, and time frames – making them tough to apply across the fast-shifting tech landscape.
“There’s a lot of discussion about how hard it is to get data,” Bostrom says. “And there’s not a common method of disclosing data.”
In other words, outside observers are working with fragments of a puzzle that companies often keep scattered. And most firms don’t track emissions at the granular level it would take to assess the relative impacts of different uses of AI, like search or ChatGPT queries.
What’s more, the available data also typically lacks key context, like where and when emissions were produced. For example, training a model on renewable energy in Sweden leaves a very different footprint than doing the same work on a coal-powered grid in West Virginia, but many reports treat these scenarios as equal. Competitive corporate secrecy only compounds the problem.
Spotty, unreliable, and missing data make it incredibly hard to accurately assess AI’s true climate impact and energy needs, let alone figure out what to do about it.
Existing regulatory frameworks have yet to catch up. Current accounting standards are patchy and still evolving. While recent rules like the European Union’s AI Act and the SEC’s climate disclosure requirements show progress, neither mandates detailed AI emissions reporting. Companies still get to decide what gets disclosed, often leading to selective reporting or none at all.
Political headwinds aren’t helping. The Trump administration has aimed to block AI regulation, and California’s SB 1047, a bill that would have required large AI developers to provide basic documentation, was vetoed following heavy industry opposition.
AI has cascading environmental impacts that go beyond carbon
Here’s where the data gaps become yet more problematic: AI’s environmental impact extends far beyond carbon emissions, creating a web of consequences that’s even harder to track.
Take water, for instance. By some estimates, just 15 ChatGPT queries guzzle half a liter of the clean water needed to cool those massive data centers, and two-thirds of new facilities are being built in water-scarce regions. Then there’s embodied carbon – the emissions required to manufacture all that hardware, mining rare earth minerals for GPUs, and shipping components around the globe. And because AI development moves at breakneck speed, perfectly good equipment becomes obsolete fast, creating a growing mountain of electronic waste.
Meanwhile, AI infrastructure pumps pollutants into vulnerable communities – by 2030, U.S. data centers could cause 1,300 premature deaths and 600,000 asthma cases. Elon Musk’s Colossus AI supercomputer in Memphis operates 35 unlicensed methane gas turbines in neighborhoods already struggling with poor air quality. These harms fall disproportionately on communities already bearing climate change’s heaviest burdens, deepening climate injustice through AI’s expansion.
What’s more, AI systems risk undermining climate action by generating convincing but scientifically inaccurate climate information, potentially spreading misinformation that delays urgent policy responses. For example, Grok, the chatbot created by xAI, has reportedly been promoting climate denial talking points.
When climate benefits may be worth the impact
Given AI’s complex environmental footprint, it’s easy to focus only on the costs. But there’s another side to this story: AI can also be a tool for tackling climate change itself.
Some of these climate-focused tools are already making headway on multiple fronts, from optimizing power grids to predicting disasters before they strike.
“There’s a lot of work on using AI to improve predictions of extreme weather,” Bostrom says. “Given those are severe impacts of climate change that people are already worried about, improving predictions can definitely help protect people.”
The most compelling cases share a common trait: They deliver outsize climate benefits relative to their computational demands. Here are applications that could clear that bar:
- Energy that pays for itself: AI-driven systems that simulate wind patterns and building performance, improve electric grid efficiency, and enable better renewable integration can potentially save far more energy than they consume.
- Cleaner, safer communities: AI processes massive datasets to forecast extreme weather, predict droughts, model wildfires, track global carbon emissions, and monitor deforestation and pollution in real time – giving leaders and emergency responders the insights they need to act quickly.
- Efficient health breakthroughs: Medical AI trained on specialized datasets like MRI scans achieves impressive accuracy with relatively low computing power, while emerging tools may also help track and treat respiratory illness linked to poor air quality.
- Smarter farming: Agricultural precision technologies that optimize water and fertilizer use, boost yields, cut farming emissions, and support sustainable food systems.
- Climate-smart adaptation: AI can also help people adjust to the realities of a warming world – pinpointing the best locations for cooling centers during heat waves, identifying neighborhoods most at risk of flooding to prioritize drainage upgrades, and optimizing urban tree planting to reduce dangerous heat in vulnerable areas.
Yes, the climate potential is legit. But that doesn’t make it simple.
Why the promise gets complicated
“I think we’re at an inflection point,” Bostrom says. “Right now, it’s really hard to distinguish the hype from the realistic expectations.”
As it turns out, the same rapid development that creates new opportunities also introduces problems that can undermine the benefits.
For starters, unreliable outputs create dangerous inefficiencies. AI hallucinations – when systems generate false but confident-sounding information – can threaten any number of climate applications. Wrong information about weather predictions could lead to poor disaster preparedness. Faulty energy optimization recommendations could increase rather than reduce emissions.
Security vulnerabilities also threaten critical infrastructure. As AI becomes more integrated into climate-critical systems like power grids and weather monitoring, it’s proving a high-value target for cyberthreats like data poisoning – attacks that corrupt training data to make systems less reliable. And the more we rely on AI for climate solutions, the more these security risks multiply.
Perhaps most fundamentally, a scale-at-all-costs mindset compounds every problem. The AI development culture treats scale as an end in itself. As Bostrom points out, many AI tools are now incorporated into everyday platforms by default – you have to opt out rather than opt in. Case in point: Google now automatically provides AI search results as the standard.
“It’s similar to organ donation,” she said. “You get way more participation if people have to opt out than if they opt in.”
This design choice means climate costs accumulate from widespread AI usage that users never actively chose.
These opt-out settings aren’t accidental design decisions.
“Systems-level decisions are being made to benefit commercial interests, and often at the expense of potential public good,” Bostrom says.
Companies compete on model size and capability rather than efficiency, with each new generation growing exponentially larger and demanding exponentially more energy – often for only marginal improvements in usefulness.
This obsession with scale creates a vicious cycle. Bigger models require more data and processing. More powerful models enable more applications, driving more usage. More usage creates demand for even more powerful models – and that translates into physical expansion: more chips, more data centers, more electricity use.
A runaway growth pattern is creating its own problems. As the climate costs become more visible, Bostrom sees a concerning trend toward polarization: “There’s stigmatization of AI going on – people are like, ‘AI is evil, it uses a lot of energy and is killing the planet.’”
But shutting down dialogue would prevent the nuanced thinking needed to harness AI’s genuine climate potential while addressing its real costs.
Our role in AI’s climate story
The good news? We’re not passive observers in this story. AI isn’t some unstoppable force of nature. It’s a tool that people are actively building right now – which means we still have the power to steer how it develops.
“We need to be thinking about solutions, or ways of keeping a system that would be fair for people and benefit society more broadly – a public good system,” she said. “That’s not the way it is right now.”
So where does that leave us? Is AI good or bad for the climate? The honest answer: It depends.
“It’s situationally specific – the context matters,” Bostrom said. She draws a parallel to electric vehicles: “If you have an EV on the West Coast where there’s a lot more hydropower, that’s very different from an EV running on a fossil fuel-heavy grid elsewhere.”
The same principle applies to AI – what matters are the specific applications, energy sources, and whether the outcomes justify the environmental costs.
For me, I’d say my dad’s joy in his storytelling sessions with “Chatty” isn’t the problem; it represents the kind of meaningful use that could warrant AI’s energy costs. If a model helps accelerate lifesaving research or reduces the need for resource-intensive travel, the climate trade-off may be worth it. But spinning up massive models for mundane tasks is starting to look like a high-emissions, low-reward shortcut for things we once handled with far less energy.
Ultimately, the problem isn’t individual users making thoughtful choices – it’s an industry that treats scale as success, training ever-larger models for increasingly trivial purposes while communities face water shortages and polluted air. My brother-in-law’s grim laughter captures where we are: caught between promise and peril, unsure whether AI will help solve climate change or make it worse.
But that uncertainty also means the path forward isn’t fixed. What happens next depends on the choices we make today – and whether we can steer this technology toward its best climate potential rather than its worst.
13 ways to push for climate-friendlier AI
Individual actions that add up
- Use the “right jobs” principle: Ask whether your AI use justifies the energy it consumes. Prioritize meaningful uses over throwaway experiments.
- Focus on quality over quantity: Write one thoughtful AI-assisted story instead of generating 100 iterations you’ll delete. Use shorter, more intentional prompts. Think of it like digital water use: No need to ration every drop, but avoid leaving the tap running.
- Time it strategically: Bostrom suggests considering when you use AI-intensive tools, similar to how we think about running energy-heavy appliances like washing machines during off-peak hours when there’s less demand on the electrical grid.
- Push for transparency at every level: From local organizations to tech platforms, advocate for disclosure about AI use and its impacts. Press for labeling requirements and encourage local efforts to adopt more cautious, systems-level approaches to AI deployment.
- Ask better questions: In forums, feedback forms, or classrooms, raise energy efficiency and emissions as part of the conversation. The culture around AI is still forming, and these early signals help shape what’s prioritized.
- Stay informed with quality sources: The AI and climate conversation is moving fast. Follow specialized outlets like the Pulitzer Center’s AI Spotlight Series and organizations like AI Now to cut through hype and get grounded information.
Corporate responsibility
- Full impact reporting: Companies must move beyond cherry-picked carbon metrics to report water use, energy source, equity impacts, and societal benefits. That includes more precise emissions accounting that reflects where and when energy is used, not just global averages.
- Open-source transparency: Details like model size, training locations, power consumption, and update frequency are often kept secret, making accurate emissions estimates impossible. Companies should share this information.
- Strategic deployment: AI training doesn’t have to happen anywhere, anytime. Companies can lower emissions by shifting workloads to greener locations or off-peak times. Some providers use “carbon-aware computing” to schedule tasks when renewable energy is more available – an approach that could become standard as the industry matures.
- Think beyond carbon: Consider the broader human impact of AI deployment. Companies should think about how their AI use affects people and prioritize the public good, not just technical metrics.
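The “carbon-aware computing” idea mentioned above is simple enough to sketch in a few lines of Python. Everything here is illustrative, not any provider’s real system: the hourly carbon-intensity numbers are made up (real schedulers would pull forecasts from a grid-data service), and `greenest_window` is a hypothetical helper that just finds the cleanest contiguous stretch of hours long enough to run a job.

```python
# A minimal sketch of carbon-aware scheduling, under assumed inputs:
# given a hypothetical hourly forecast of grid carbon intensity
# (grams of CO2 per kWh), pick the cleanest window for a batch job.

def greenest_window(forecast, hours_needed):
    """Return (start_hour, average_intensity) for the contiguous
    window of `hours_needed` hours with the lowest average gCO2/kWh."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - hours_needed + 1):
        window = forecast[start:start + hours_needed]
        avg = sum(window) / hours_needed
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# Illustrative 24-hour forecast: midday hours are cleaner because
# of solar generation; evenings lean on fossil-fuel plants.
forecast = [520, 510, 500, 480, 450, 400, 320, 250,
            180, 140, 120, 110, 115, 130, 170, 240,
            330, 420, 480, 510, 530, 540, 535, 525]

start, avg = greenest_window(forecast, hours_needed=4)
print(f"Run the job starting at hour {start} "
      f"(average {avg:.0f} gCO2/kWh)")
```

With these made-up numbers, the scheduler lands on the late-morning solar peak. The real-world versions layer on forecasts, deadlines, and the option of moving work to entirely different regions, but the core trade – same computation, cleaner electrons – is the one shown here.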
Policy solutions
- Progress is uneven, but possible: The European Union’s AI Act will require high-risk systems to report energy consumption, and over 100 European data centers have pledged to go climate-neutral by 2030. In contrast, U.S. regulations remain limited – but elected officials on both sides of the aisle have worked to secure states’ rights to implement AI regulations.
- From voluntary to binding: Sustainability claims must be backed by enforceable standards. Expanding frameworks like the Greenhouse Gas Protocol to include AI workloads could create consistent, accountable benchmarks – and help keep “climate-friendly AI” from becoming another greenwashed slogan.
- Build flexible frameworks for the public good: While comprehensive AI governance remains challenging, meaningful policy work continues at multiple levels. The key is creating policies that embody public values while remaining adaptable as the technology evolves.