So Far, A.I.-Generated Images of Current Events Seem Rare in News Stories

Created on November 12, 2023 at 11:48 am

Has the ready availability of images generated by A.I. programs duped even mainstream news outlets into unwittingly using them in coverage of current events?

That is the impression you might get from a report by Cam Wilson, at Crikey, about generated photorealistic graphics available from Adobe’s stock image service that purportedly depict real-life events. Wilson pointed to images which appear to depict the war in Gaza despite being created by a computer. When I linked to the story earlier this week, I found similar imagery ostensibly showing Russia’s war on Ukraine, the terrorist attacks of September 11, 2001, and World War II.

This story has now been widely covered but, aside from how offensive it seems for Adobe to be providing these kinds of images — more on that later — none of these outlets seem to have worked very hard to understand how these images actually get used. Some publications which referenced the Crikey story, like Insider and the Register, implied these images were being used in news stories without the publishers knowing or acknowledging they were generative products. This impression seemed to be based, in part, on a screenshot in the Crikey report of one generated image. But when I looked at the actual pages where that image was being used, I found a more complicated story: there were a couple of sketchy blog posts, sure, but a few of them were referencing an article which used the image to show how realistic generated images could look.1

This is just one image and a small set of examples. There are thousands more A.I.-generated photorealistic images that apparently depict real tragedies, ongoing wars, and current events. So, to see if Adobe’s A.I. stock library is actually tricking newsrooms, I spent a few nights this week looking into this in the interest of constructive technology criticism.

Here is my methodology: on the Adobe Stock website, I searched for terms like “Russia Ukraine war”, “Israel Palestine”, and “September 11”. I filtered the results to only show images marked as A.I.-generated, then sorted the results by the number of downloads. Then, I used Google’s reverse image search with popular Adobe images that looked to me like photographs. This is admittedly not perfect and certainly not comprehensive, but it is a light survey of how these kinds of images are being used.

Then, I would contact people and organizations which had used these images and ask them whether they were aware the image was marked as A.I.-generated, and if they had any thoughts about using A.I. images.
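If you would like to replicate this kind of light survey, the first steps can be scripted. Below is a minimal Python sketch which builds the relevant search URLs. The parameter names here (“filters[gentech]” for Adobe Stock’s generative A.I. filter and “image_url” for Google’s reverse image lookup) are assumptions inferred from watching the URLs each site produces in a browser; neither is a documented, stable API, so verify them yourself before relying on this.

```python
from urllib.parse import urlencode

# Assumed base URLs and parameter names, inferred from the search pages
# themselves; neither Adobe Stock nor Google documents these as stable APIs.
STOCK_SEARCH = "https://stock.adobe.com/search"
REVERSE_SEARCH = "https://www.google.com/searchbyimage"

def stock_search_url(query: str, ai_only: bool = True) -> str:
    """Build an Adobe Stock search URL, optionally restricted to
    images labelled as generative A.I."""
    params = {"k": query}
    if ai_only:
        # "filters[gentech]=only" appears to be the parameter the site
        # uses for its generative A.I. filter (assumption).
        params["filters[gentech]"] = "only"
    return f"{STOCK_SEARCH}?{urlencode(params)}"

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google reverse image search URL for a stock image
    that looks like a photograph."""
    return f"{REVERSE_SEARCH}?{urlencode({'image_url': image_url})}"

if __name__ == "__main__":
    for term in ("Russia Ukraine war", "Israel Palestine", "September 11"):
        print(stock_search_url(term))
```

Sorting by download count is left out of the sketch because I could not confirm that parameter’s name; in practice, it is easiest to apply that sort in the browser.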

I found few instances where a generated image was being used by a legitimate news organization in an editorial context — that is, an A.I.-generated image being passed off as a photo of an event described by a news article. I found no instances of this being done by high-profile publishers. This is not entirely surprising to me because none of these generated images are visible on Adobe Stock when images are filtered to Editorial Use only; and, also, because Adobe is not a major player in editorial photography to the same extent as, say, AP Photos or Getty Images.

I also found many instances of fake local news sites — similar to these — using these images, and examples from all over the web used in the same way as commercial stock photography.

This is not to suggest some misleading uses are okay, only to note a difference in gravity between egregious A.I. use and that which is a question of taste. It would be extremely deceptive for a publisher to use a generated image in coverage of a specific current event, as though the image truly represents what is happening. It seems somewhat less severe for that kind of image to be used by a non-journalistic organization to illustrate a message of emotional support, to use a real example I found. And it seems less severe still for a generated image of a historic event to be used by a non-journalistic organization as a kind of stock photo in commemoration.

But these are distinctions of severity; it is never okay for media to mislead audiences into believing something is a photo related to the story when it is neither. For example, here are relevant guidelines from the Associated Press:

We avoid the use of generic photos or video that could be mistaken for imagery photographed for the specific story at hand, or that could unfairly link people in the images to illicit activity. No element should be digitally altered except as described below. […] [Photo-based graphics] must not misrepresent the facts and must not result in an image that looks like a photograph – it must clearly be a graphic.

From the BBC:

Any digital manipulation, including the use of CGI or other production techniques (such as Photoshop) to create or enhance scenes or characters, should not distort the meaning of events, alter the impact of genuine material or otherwise seriously mislead our audiences. Care should be taken to ensure that images of a real event reflect the event accurately.

From the New York Times:

Images in our pages, in the paper or on the Web, that purport to depict reality must be genuine in every way. No people or objects may be added, rearranged, reversed, distorted or removed from a scene (except for the recognized practice of cropping to omit extraneous outer portions). […] Altered or contrived photographs are a device that should not be overused. Taking photographs of unidentified real people as illustrations of a generic type or a generic situation (like using an editor or another model in a dejected pose to represent executives being laid off) usually turns out to be a bad idea.

And from NPR:

When packages call for studio shots (of actors, for example; or prepared foods) it will be obvious to the viewer and if necessary it will be made perfectly clear in the accompanying caption information. Likewise, when we choose for artistic or other reasons to create fictional images that include photos it will be clear to the viewer (and explained in the caption information) that what they’re seeing is an illustration, not an actual event.

I have quoted generously so you can see a range of explanations of this kind of policy. In general, news organizations say that anything which looks like a photograph should be immediately relevant to the story, anything which is edited for creative reasons should be obviously differentiated both visually and in a caption, and that generic illustrative images ought to be avoided.

I started with searches for “Israel Palestine war” and “Russia Ukraine war”, and stumbled across an article from Now Habersham, a small news site based in Georgia, USA, which originally contained this image illustrating an opinion story. After I asked the paper’s publisher, Joy Purcell, about it, they told me they “overlooked the notation that it was A.I.-generated” and said they “will never intentionally publish A.I.-generated images”. The article was updated with a real photograph. I found two additional uses of images like this one by reputable if small news outlets — one also in the U.S., and one in Japan — and neither responded to requests for comment.

I next tried some recent events, like wildfires in British Columbia and Hawaii, an “Omega Block” causing flooding in Greece and Spain, and aggressive typhoons this summer in East Asia. I found images marked as A.I.-generated on Adobe Stock used to represent those events, but not indicated as such in use — in an article in the Sheffield Telegraph; on Futura, a French science site; on a news site for the debt servicing industry; and on a page of the U.K.’s National Centre for Atmospheric Science. Claire Lewis, editor of the Telegraph’s sister publication the Sheffield Star, told me they “believe that any image which is AI generated should say that in the caption” and would “arrange for its removal”. Requests for comment from the other three organizations went unanswered.

Next, I searched “September 11”. I found plenty of small businesses using generated images of first responders among destroyed towers and a firefighter in New York in commemorative posts. And seeing those posts changed my mind about the use of these kinds of images. When I first wrote about this Crikey story, I suggested Adobe ought to prohibit photorealistic images which claim to depict real events. But I can also see an argument that an image representative of a tragedy used in commemoration could sometimes be more ethical than a real photograph. It is possible the people in a photo do not want to be associated with a catastrophe, or that its circulation could be traumatizing.

It is Remembrance Day this weekend in Canada — and Veterans Day in the United States — so I reverse-searched a few of those images and spotted one on the second page of a recent U.S. Department of Veterans Affairs newsletter (PDF). Again, in this circumstance, it serves only as an illustration in the same way a stock photo would, but one could make a good argument that it should portray real veterans.

Requests for comment made to the small businesses which posted the September 11 images, and to Veterans Affairs, went unanswered.

As a replacement for stock photos, A.I.-generated images are perhaps acceptable. There are plenty of photos of firefighters and veterans posed by models, so it seems to make little difference if that sort of image is generated by a computer. But in a news media context, these images seem like, at best, an unnecessary source of confusion, even if they are clearly labelled. Their use only perpetuates the impression that A.I. is everywhere and nothing can be verified.

It is offensive to me that any stock photo site would knowingly accept A.I.-generated graphics of current events. Adobe told PetaPixel that its stock site “is a marketplace that requires all generative AI content to be labeled as such when submitted for licensing”, but it is unclear to me how reliable that labelling is. I found a few of these images for sale on other stock photo sites without any disclaimers. That means either these images were erroneously marked as A.I.-generated on Adobe Stock, or other providers are less stringent — and people have been using generated images without any way of knowing what they were. Neither option is great for public trust.

I do think there is more Adobe could do to reduce the likelihood of A.I.-generated images being used in news coverage. As I noted earlier, these images do not appear when the “Editorial” filter is selected. However, there is no way to configure an Adobe account to search this selection by default.2 Adobe could permit users to save a default set of search filters — to only show editorial photos, for example, or to exclude generative A.I. entirely. Until that becomes possible from within Adobe Stock itself, I made a bookmark-friendly empty search which shows only editorial photographs. I hope it is helpful.
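For those who would rather build their own bookmark, here is a sketch in the same vein as the earlier one. The “content_type:editorial” filter name is, again, an assumption inferred from the search page’s URL rather than a documented parameter, so check it against what your own browser shows before bookmarking the result.

```python
from urllib.parse import urlencode

# Assumed filter name for Adobe Stock's editorial-only view; the empty
# "k" keyword makes this the "empty search" described above (assumption).
params = {"k": "", "filters[content_type:editorial]": "1"}
print(f"https://stock.adobe.com/search?{urlencode(params)}")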

Update: On November 11, I updated the description of where one article appeared. It was in the Sheffield Telegraph, not its sister publication, the Sheffield Star.
