4 Ways to Recognise AI-Generated Images

A picture of Pope Francis in a stylish white Balenciaga puffer jacket went viral just last month, convincing millions that the pontiff was dressed for a night out. Days earlier, a different set of images showing former president Donald Trump being arrested by New York City police officers in riot gear spread across social media.

AI-Generated Photos

Both viral moments were created by artificial intelligence systems that turn a user’s written prompts into images. Although AI image generators such as Midjourney, DALL-E, and Stable Diffusion have been around for a while, only recently have the visuals they produce become convincing enough to fool less experienced observers. And some experts believe we are moving quickly towards a time when it won’t be possible to tell the difference between real photographs and AI-generated fakes.


According to James O’Brien, a professor of computer science at the University of California, Berkeley, “the systems are already very good. Even specialists can’t always spot some glaring, clear sign that reveals an image is fake without using specialized tools to analyze the photographs. And they will only improve.”

O’Brien advises against relying solely on visual cues to judge whether an image is genuine. Still, there are a few indicators that keen observers can watch for right now, along with other measures that can help flag potential visual misinformation before you are duped.
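
For readers comfortable with a little code, here is a minimal sketch, not something the article itself describes, of one such extra measure: checking whether an image carries the camera EXIF metadata that genuine photographs usually include and AI-generated files often lack. It assumes the Pillow library and a hypothetical file named suspect_image.jpg, and a missing-metadata result is only a weak hint, since metadata is easily stripped or forged.

```python
# Hedged sketch: inspect an image's EXIF metadata as one weak signal.
# Camera photos usually record the camera make, model, and exposure data;
# many AI-generated images carry no such tags. Absence proves nothing on
# its own -- social platforms routinely strip metadata -- so treat this as
# one clue among several, not a verdict.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return a readable dict of any EXIF tags found in the image."""
    with Image.open(path) as img:
        exif = img.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = summarize_exif("suspect_image.jpg")  # hypothetical file name
    if not tags:
        print("No EXIF metadata found - worth a closer look, but not proof of a fake.")
    else:
        for name in ("Make", "Model", "DateTime", "Software"):
            if name in tags:
                print(f"{name}: {tags[name]}")
```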


How Are Fake AI-Created Images Recognisable?

Fake photographs aren’t exactly a recent phenomenon. Images have been staged and altered for nearly as long as photography has existed. A well-known portrait of Abraham Lincoln, the 16th president of the United States, was actually a composite that fused an etching of the ironically pro-slavery politician John C. Calhoun with an image of Lincoln’s head. AI image generators, however, require far less effort: they can produce realistic-looking images in moments from a brief written prompt.

Modern AI systems vary in how convincingly they can render certain details. Human hands, in particular, have given them trouble, often resulting in mangled extremities with too many (or too few) fingers. According to Claire Wardle, co-director of the Information Futures Lab at Brown University School of Public Health, looking for these strange fingers may be one way to weed out the fakes.

“It’s kind of like a fun exercise, like when you were a kid and would look at those puzzles of what’s wrong with an image,” she says. “It won’t be long until these visuals are so good that we won’t be saved by things like the fingers. For the time being, though, there are still a few things to watch out for.”

1. Watch for Wonky Fingers and Teeth

The technology frequently fails to produce convincing human hands, partly because the data sets used to train AI systems often contain only fragments of hands. The result can be images with bloated hands, stretched wrists, spindly fingers, or the wrong number of digits, telltale signs that an image is AI-generated. In the widely shared photo, Pope Francis’ right hand and the coffee cup it clutches appear squished and distorted. Teeth can cause similar problems.


“There’s a structure to your hands,” O’Brien observes. “You have five fingers, not four or six or some other number. The [AI] models struggle with that kind of structure, though the more recent ones are getting better at it.”

In fact, as the technology develops, some AI programs, such as Midjourney V5, have begun to crack the code, at least in some cases.

2. Be Wary of Textures That Are Too Smooth

Some AI image generators produce textures that are overly smooth, or skin with a glossy, plastic-like sheen. According to O’Brien, this means a jacket (say, the Pope’s swaggering coat) can end up looking a little too flawless.

“It might turn out a little too perfect, as opposed to looking like a material that has some wrinkles in it,” he continues.

3. Look For Details That Don’t Fit

Perhaps the most important things to look out for are inconsistencies in the logic of the image itself. O’Brien and Wardle both point to a recent batch of AI-generated pictures, which went viral on social media, of a great white shark washed up on a beach.

Looking closely at the shark reveals that the images are fake, according to O’Brien: the pattern around the eye differs from one image to the next.

O’Brien adds further examples of inconsistency, such as clothing fabric that blends together across different subjects, or background patterns that repeat exactly. But if we want to believe the reality a phony image portrays, we may be more willing to overlook such details. In a 2021 study co-authored by O’Brien, participants were more inclined to accept an image as legitimate if it matched their pre-existing beliefs.


4. Do Your Research

If you’re unsure, don’t hesitate to compare what you’re seeing against other reliable sources.

“The things that we see now, we just have to immediately Google to find other information about it,” says Wardle. There are a few subtle hints in the image itself, in other words, but beyond them, think it through and do your homework.
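
As a rough programmatic analogue to that advice, and not a method mentioned in the article, the sketch below compares a suspect copy of an image against a version obtained from a trusted outlet using a perceptual hash. It assumes the third-party Pillow and imagehash libraries and hypothetical file names.

```python
# Hedged sketch: perceptual-hash comparison between a viral copy of an
# image and a copy downloaded from a reliable source. Small hash distances
# suggest the two files show essentially the same picture; large distances
# suggest the viral copy was altered or is a different image entirely.
# Assumes `pip install pillow imagehash`; file names are hypothetical.
from PIL import Image
import imagehash

def hash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the perceptual hashes of two images."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # ImageHash objects subtract to a bit-difference count

if __name__ == "__main__":
    distance = hash_distance("viral_copy.jpg", "trusted_source_copy.jpg")
    if distance <= 5:  # threshold is a rough rule of thumb, not a standard
        print(f"Distance {distance}: likely the same underlying image.")
    else:
        print(f"Distance {distance}: images differ noticeably - investigate further.")
```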

The Future of AI-Created Images

O’Brien does not believe we have yet reached the point where photographs taken with a camera and images created with artificial intelligence are indistinguishable. But he thinks we could cross that line in the coming years.

“That moment might not be as far off as some people might like to think,” O’Brien says, “because these systems’ rates of progress are accelerating. […] I believe that as a society, we need to become better at accepting the possibility that the imagery we encounter may not accurately represent reality.”

If nothing else, the future of AI image generation promises to be a bold one. According to a preprint posted on bioRxiv late last year, scientists around the world have already begun using AI systems to reconstruct, from people’s brain scans, images those people have seen.

“There are amazing things coming out of this new technology,” Wardle adds. It’s just that, as with a faster car, we need to drive a little more carefully.

Sunil Kumar writes about smartphones and laptops for Gadgets360TechNews, out of Delhi. He is the Deputy Editor (Reviews) at Gadgets360TechNews. He has frequently written about the smartphone and PC industry and also has an interest in photography.
