Imagery faked with AI’s help only added to the awfulness of Hurricane Melissa



Titanic Hurricane Melissa left a trail of human suffering, devastated buildings and infrastructure, and smashed meteorological records across Jamaica, Cuba, and the Bahamas before the National Hurricane Center declared the storm post-tropical at 11 a.m. EDT Friday, Oct. 31. But when it comes to seeking clicks and clout, even the unprecedented qualities of Melissa didn’t go quite far enough for some “news”-mongers on social media, who spread disaster images faked with the help of AI.

Hurricane Melissa was already bad in real life

As discussed by Jeff Masters on Friday, it will take days or weeks more to get a fuller sense of the extensive destruction left by Melissa, which struck Jamaica with a central pressure as low as any landfalling Atlantic hurricane on record (892 millibars, tying the 1935 Labor Day Hurricane in the Florida Keys).

But already, the catalog of jaw-dropping benchmarks set by Melissa (see the section “An astonishing slew of records,” compiled last week by Jeff) speaks for itself. So does the influence of human-caused warming in the Caribbean in stoking Melissa’s powerful winds, as already documented in two rapid-response studies.

You’d think there’d be no need to exaggerate the destruction caused by the storm, but social media platforms have long had a problem with users who share photos and videos from past weather events without credit or wildly out of context. One perennial example is plunking the dramatic shelf cloud of a Great Plains supercell into a beachfront photo and palming it off as a hurricane-landfall photo.

Speaking with Yale Climate Connections’ Samantha Harrington in 2024 about misleading images that proliferated in the wake of Hurricane Helene, “Disasterology” author Samantha Montano warned that such fakery was “a signal of where we’re headed.”

“This could be much, much, much worse if we do not take immediate action to try and make some kind of structural changes to prevent it,” Montano said.

Making the problem far worse is the explosion of generative AI models, especially over the past year. These tools allow virtually anyone to quickly generate an image that’s not a real photo but that looks just “truthy” enough to fool many viewers. Shortly after the catastrophic flash floods that hit the Texas Hill Country in July, Facebook users encountered vivid AI-faked photos of college football celebrities ostensibly performing in-person rescues.

A phony Melissa image that’s for the birds

Rich Grumm, a retired meteorologist who served as science and operations officer at the National Weather Service office in State College, Pennsylvania, became suspicious last week when he saw a Facebook-shared image — supposedly taken from above — of Melissa’s eye. One person on X (formerly Twitter) who shared the same faked image added this poetic but completely made-up commentary: “Birds were trapped, circling endlessly inside the calm, unable to escape the violent winds of the eyewall. A calm sky above — a cage below.”

Sure enough, the image shows birds flying above the clear eye itself, as well as above the adjacent eyewall.

Hurricane Hunters on multiple occasions have noted the presence of birds in the eye or eyewall of a hurricane, as was reported from one flight into Melissa on Oct. 27. But standard storm-sampling reconnaissance flights do not fly at altitudes high enough to look down at an entire hurricane eye, as shown in the widely shared photo; instead, they have to punch through the often violently turbulent eyewall into the eye itself. (One of the most dramatic videos from Melissa was captured from a reconnaissance flight that did exactly this.)

Moreover, as Grumm pointed out, “Based on the scale of the eye, these birds would be larger than football fields.”
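Grumm’s point holds up even under generous assumptions. Here is a minimal back-of-envelope sketch in Python; the 25-kilometer eye diameter and the guess that a “bird” spans about 1% of the eye’s width are illustrative assumptions, not measurements taken from Melissa or from the fake image:

```python
# Rough scale check of the "birds larger than football fields" point.
# Both inputs below are illustrative assumptions, not measurements.

EYE_DIAMETER_M = 25_000       # assumed eye diameter (hurricane eyes are typically tens of km wide)
BIRD_FRACTION_OF_EYE = 0.01   # assume a "bird" spans ~1% of the eye's width in the image
FOOTBALL_FIELD_M = 91.44      # length of an American football field (100 yards)

bird_span_m = EYE_DIAMETER_M * BIRD_FRACTION_OF_EYE
print(f"Implied wingspan: {bird_span_m:.0f} m "
      f"(about {bird_span_m / FOOTBALL_FIELD_M:.1f} football fields)")
# -> Implied wingspan: 250 m (about 2.7 football fields)
```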

Lee Grenci, a retired senior lecturer in meteorology at Pennsylvania State University, chimed in with another major tell: to be soaring above the eyewall, “these birds would have to have been flying at altitudes well above the summit of Mount Everest … the air temperature and air density are way too low for birds to fly.”
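Grenci’s thin-air argument can also be put into rough numbers with the International Standard Atmosphere, a textbook idealization of how pressure and density fall with height. In the sketch below, 8,849 meters is the elevation of Mount Everest’s summit; the 15-kilometer figure is only an assumed ballpark for the cloud tops of a strong eyewall, not a measured height from Melissa:

```python
import math

# International Standard Atmosphere, simplified to two layers:
# the troposphere (0-11 km, temperature falling 6.5 K per km) and the
# lower stratosphere (11-20 km, isothermal at 216.65 K).
G = 9.80665                  # gravity, m/s^2
R = 287.053                  # specific gas constant of dry air, J/(kg*K)
T0, P0 = 288.15, 101_325.0   # sea-level temperature (K) and pressure (Pa)
L = 0.0065                   # tropospheric lapse rate, K/m

def air_density(h_m: float) -> float:
    """Dry-air density (kg/m^3) at altitude h_m, valid up to ~20 km."""
    if h_m <= 11_000:
        T = T0 - L * h_m
        p = P0 * (T / T0) ** (G / (R * L))
    else:
        T = 216.65
        p_11km = P0 * (T / T0) ** (G / (R * L))
        p = p_11km * math.exp(-G * (h_m - 11_000) / (R * T))
    return p / (R * T)

for h in (0, 8_849, 15_000):  # sea level, Everest's summit, assumed eyewall tops
    print(f"{h:>6} m: {air_density(h):.2f} kg/m^3")
# -> roughly 1.23, 0.47, and 0.19 kg/m^3: near the eyewall tops, a bird's
#    wings would have about one-sixth the air density they work with at sea level.
```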

Another misleading image was passed off as portraying the storm-ravaged Black River Hospital. The community of Black River did experience the full force of Melissa’s Category 5 eyewall at landfall, and the hospital was severely damaged. But the photo definitely wasn’t from the Black River Hospital.

Discussing the photo on Bluesky, climate scientist and catastrophe modeler Kelly Hereid of Liberty Mutual Insurance noted that after a strike from a Cat 5 eyewall, “those palm trees should look way worse.” She added:

Also a lot of flat roofs in background for Caribbean — not that there aren’t any, but they have very low wind resistance so tend to go down. Flat roofs surviving when the hip roof in front is gone? Weird. But like … this is subtle! I’ve looked at a lot of damage pics. Easy for non-experts to miss.

How to tell real photos from AI-altered images

There’s no single entity charged with evaluating whether a photo making the rounds is real, fake, or simply out of context.

But if you’re not sure whether a photo is legitimate, try checking in with Full Fact, a UK-based nonprofit. Full Fact has published four takedowns of Melissa imagery, including two that touch on the examples discussed above.

On the Melissa-from-above video, Full Fact explains:

The earliest example of the fake footage we found was posted on TikTok on 26 October, which has a caption saying it is AI-generated. It says: “This is not real, it is a simulation made with ai for a ‘what if’ scenario” and the account’s bio says “AI disaster curiosity”. The account posts many similar videos of AI-generated aerial views of storms and cloud formations.

As for the purported Black River Hospital image, Full Fact ran a Google reverse-image search and found that the image had been tagged as “made with Google AI.” They added:

A Google spokesperson confirmed that the image contained a SynthID watermark, which means Google AI has been used to process, edit or create the image. SynthID is a digital watermark that is undetectable with the human eye, but is embedded into content made with several Google AI products. The watermark remains detectable despite any changes made to the quality or size of the picture … While it’s true that Black River Hospital was damaged as a result of Hurricane Melissa, satellite images of the real hospital show that its layout and the surrounding geography differs from the image being shared online.
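Curious readers can approximate part of this workflow themselves. Reverse-image search rests partly on perceptual hashing, a way of scoring how visually similar two images are. Here is a minimal sketch using Python’s third-party Pillow and imagehash packages; the filenames are placeholders, and this is only a rough similarity check, not the matching system Google uses, nor a detector for SynthID watermarks:

```python
# Compare a suspect image against a candidate original using perceptual
# hashing (pip install Pillow imagehash). This approximates one ingredient
# of reverse-image search; it does NOT detect AI watermarks like SynthID.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_photo.jpg"))         # placeholder
candidate = imagehash.phash(Image.open("candidate_original.jpg"))  # placeholder

# Hamming distance between the two 64-bit hashes: 0 means near-identical;
# small values usually mean one image is a resized or recompressed copy
# of the other.
distance = suspect - candidate
print(f"Hash distance: {distance}")
if distance < 10:
    print("Likely the same underlying image (resized, cropped, or recompressed).")
else:
    print("Probably different images.")
```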

From an information-consumer perspective, Full Fact’s primer “How to spot deepfake videos and AI audio” provides many tools for approaching the ever-increasing flood of fake visual and audio content. One tipoff: body parts (ears in particular) that don’t quite match the person supposedly depicted.
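One more quick, imperfect test you can run yourself is to look for camera metadata. Genuine photos straight off a camera usually carry EXIF tags (camera model, timestamp, sometimes GPS coordinates), while AI-generated images typically carry none. Keep in mind that most social platforms strip metadata on upload, so a missing tag proves nothing by itself. A minimal sketch using the Pillow package, with a placeholder filename:

```python
# Dump whatever EXIF metadata an image file carries (pip install Pillow).
# Absence of EXIF is NOT proof of AI generation: social platforms routinely
# strip metadata on upload. Plausible, consistent camera data is merely one
# weak point in favor of authenticity.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspect_photo.jpg")  # placeholder filename
exif = img.getexif()

if not exif:
    print("No EXIF metadata found (stripped, screenshotted, or generated).")
else:
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```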

Another tried-and-true strategy: Hunt for the source. It’s certainly good hygiene on social media not to share content unless you can link to its origin (ideally a reputable one), or at the very least identify that source.

To paraphrase Smokey Bear’s venerable slogan: “Only you can prevent social-media wildfires.”

Samantha Harrington and Jeff Masters contributed to this post.

