Prediction
The press adopts a new level of transparency around images
Name
Ståle Grut
Excerpt
“The press has often been light on contextual information and details about the images they use.”
Prediction ID
5374c3a56c65-24
 

AI images became top of mind this year thanks to the growing availability of new tools, all capable of creating images in a wide variety of styles modeled on existing images.

Services like DALL·E 3, Midjourney, and Stable Diffusion have received the most attention for their ability to create images that resemble photographs. Midjourney v5, released earlier this year, drew particular notice after Bellingcat founder Eliot Higgins used it to create images depicting Donald Trump's arrest, before his actual arrest this summer. The images spread like wildfire across social and traditional media.

Journalists have taken note. Fred Ritchin, the former photo editor at The New York Times, wrote in February about how AI images threaten the credibility of photography, and that “in the coming decade or so, it will be as if we managed to kill the goose that laid the golden egg: the credible witness.”

By April, the news agency AFP’s global news director said about disinformation, “It’s not good enough being an eye-witness; you need something more substantial.” The photo editor of the largest Norwegian newspaper offered a final nail in the coffin just last month, stating that “the image as a medium has lost its documentary value.”

The Norwegian press has almost unanimously rejected all use of AI-generated images, arguing that they can “distort the truth” and that “nothing could be trusted anymore,” according to prominent media outlets like NRK, TV 2, NTB, and Aftenposten. Only Nettavisen has chosen a more liberal approach, employing Midjourney to illustrate op-eds and other stories.

Photorealism is mainstream now.

But the photorealistic style is not exclusive to AI images. In the late ’60s, photorealism emerged as a kind of antithesis to abstract art. Here, artists like Chuck Close, Richard Estes, and Audrey Flack primarily used photographs as a template for their detailed and “hyper-believable” paintings. No longer a marginal or peripheral phenomenon, photorealism is mainstream now. It is arguably the core concept in today’s visual culture: film, games, special effects, digital experiences, 3D models, and so on.

A striking example is Toy Story 4, which in 2019 boasted near-perfect photorealism despite being fully animated. At the time, Pixar’s director of photography offered a fascinating summary of where the technology stood: “My virtual camera is mathematically correct in relation to a physical camera. It has an aperture, lens distortion, depth of field, and I can mimic camera movements with cranes or dollies as if it were on a physical set.” It demonstrates how the camera is now recreated digitally in software and offers an example of the democratization of photorealism.

Photojournalist Jonas Bendiksen dramatized this development with his project The Book of Veles, in which he blended objects made in the 3D app Blender with his photojournalism. At this year’s Blender Conference, one presentation was titled “The Photorealistic Mindset.” It suggested focusing on the limitations of the physical camera in order to recreate it better digitally.

But should tools like Blender be banned by the media because they can create photorealistic images? Outright bans on such tools in journalism will likely be hard to sustain, and the current developments should instead encourage the press to evaluate its relationship with images writ large.

A new relationship with images in the press

The press has often been light on contextual information and details about the images they use. Typically, publications only provide the reader with a tiny gray caption, perhaps with a name and maybe some context related to its use or production method or where it was found, such as “illustration,” “archival photo,” “photo,” or “social media.” Journalism’s — or perhaps content management systems’ — current limitations were on display when The Guardian referred to AI-generated images by Amnesty International as “photographs.”

A newfound level of transparency around images could be vital in educating the press and the public about images and their credibility. At a technical level, news outlets could adopt ideas from photo-sharing sites like Flickr that display images’ (meta)data in abundance. And, in line with this, they could build broader support for information about images in their CMS instead of removing and hiding information, as most social media platforms and news outlets do.

It would also harmonize with technical solutions, like the ones from Project Origin and the Content Authenticity Initiative — collaborations between technology and media companies that work on open industry standards for metadata related to the authenticity and the origin of images.

By not treating images like an afterthought — by actively labeling, writing precise captions, crediting who made the image and how, annotating and providing relevant context for images they publish — the press can build a bridge across which any image technology can be conveyed safely.

By helping educate the public about the different forms of images that exist (and how photorealism is but one technique), journalists could encourage healthy skepticism toward digital information and strengthen public awareness of natively digital objects, such as AI images in a photorealistic style.

Ståle Grut is a doctoral research fellow on the Photofake project at the University of Oslo.
