With the Delhi Police under fire for their strong-arm tactics while detaining the protesting wrestlers, Tokyo Olympic bronze medallist Bajrang Punia on Sunday shared a morphed photograph of the detained wrestlers, claiming the “IT cell is spreading this”.
The allegedly morphed photograph shows Asian and Commonwealth Games medallist Vinesh Phogat, a prominent leader of the protesting wrestlers, and her sister Sangeeta Phogat smiling after being detained and taken away by the police.
“The IT Cell people are spreading this morphed photograph. We would like to make it clear that a complaint will be lodged against all those posting this fake photograph,” Bajrang said in a tweet on Sunday. The post was retweeted by Vinesh and other protesting wrestlers.
“The IT Cell people are spreading this false picture. We make it clear that a complaint will be filed against whoever posts this fake picture. #WrestlersProtest” pic.twitter.com/a0MngT1kUa — Bajrang Punia (@BajrangPunia) May 28, 2023
Bajrang and Vinesh, along with fellow Olympian Sakshi Malik, had been protesting at Jantar Mantar since April, seeking the arrest of Wrestling Federation of India (WFI) chief Brij Bhushan Sharan Singh, whom they accuse of sexually harassing female wrestlers, including a minor.
The Delhi Police swung into action on Sunday as the wrestlers prepared to hold a “Mahila Samman Mahapanchayat” and intensify their agitation, detaining the star wrestlers, dismantling the tents at the protest site and removing their belongings from Jantar Mantar. The wrestlers were detained at different police stations around Delhi, and the police have vowed not to allow them to return to Jantar Mantar.
Image Morphed by AI?
An Indian Express report quoted experts as saying that the image in question was fake and had been manipulated using an Artificial Intelligence tool to add smiles to the wrestlers’ faces. Over the past few months, social media platforms have experienced a surge in such images, which are either generated entirely by AI or edited with the assistance of AI tools.
Let’s take a look at simple ways to identify a morphed image
An image morphed by AI is often also referred to as a deepfake. There are several ways to identify such an image; however, as artificial intelligence tools progress, these methods may become outdated.
According to a Digitbin report, these are the ways one can identify a morphed image:
Utilizing Online Tools
One advantage of the internet is that common tools for file conversion, image downloading and image analysis are readily available online. This eliminates the need to download specialized software to determine whether an image has been edited or photoshopped.
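One technique that many of these online analysers apply is error level analysis (ELA): a JPEG is re-saved and compared with the original, and regions that recompress differently stand out. The sketch below shows the basic idea in Python using Pillow; the file names and quality setting are placeholders rather than values from any particular tool, and a bright patch in the output is only a prompt for closer inspection, not proof of editing.

```python
# Minimal error level analysis (ELA) sketch with Pillow.
# "suspect.jpg" and the quality setting are placeholders, not real values.
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Re-save the JPEG and diff it against the original.
    Regions edited after the last save often recompress differently,
    so they show up as brighter patches in the difference map."""
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # The raw differences are faint; scale them so they are visible.
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return diff.point(lambda value: value * 255.0 / max_diff)

if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("ela_map.png")
```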
Relying on Visual Inspection
There are certain images that, despite being edited, can be easily identified by careful observation alone. By examining the image closely, you can look for indications of manipulation.
Nothing is flawless. However, to achieve a supposed level of perfection, images are often manipulated by distorting or resizing specific body parts or elements.
Therefore, it is important to search for signs of distortion. Examine the image thoroughly, paying attention to straight lines, and check whether the surrounding objects obey the laws of physics. For example, if someone has shared an image showcasing a slim waist, the area around the waist may exhibit an unnaturally warped appearance, a strong indication that it has been altered.
Identifying Repetitive Objects
This technique works best when analyzing images that contain numerous elements, such as a football stadium filled with a crowd, a packed concert, or a garden with identical flowers—you get the idea.
In such cases, the method of “cloning” is employed, where the same elements are duplicated within the image to create a fuller appearance. However, with careful examination, it becomes relatively easy to detect alterations. Look closely at areas that appear densely populated and search for recognizable patterns.
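The same check can also be tried programmatically. The sketch below is a rough copy-move detector built on OpenCV: it matches an image’s keypoints against themselves and flags near-identical patches that sit far apart. The file name and both thresholds are assumptions for illustration; dedicated forensic tools use far more robust variants of this approach.

```python
# Rough copy-move ("cloning") check with OpenCV: match the image's ORB
# keypoints against themselves and flag near-identical patches that sit
# far apart. "crowd_photo.jpg" and both thresholds are illustrative.
import cv2
import numpy as np

def find_cloned_regions(path, min_pixel_distance=40, max_descriptor_gap=10):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=5000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # k=2 so the trivial self-match (distance 0) can be skipped.
    matches = matcher.knnMatch(descriptors, descriptors, k=2)
    suspects = []
    for pair in matches:
        if len(pair) < 2:
            continue
        _, second = pair  # second-best match is the nearest *other* patch
        pt_a = np.array(keypoints[second.queryIdx].pt)
        pt_b = np.array(keypoints[second.trainIdx].pt)
        # Almost identical descriptors at clearly separate locations
        # are a hint that one patch was pasted over the other.
        if second.distance < max_descriptor_gap and np.linalg.norm(pt_a - pt_b) > min_pixel_distance:
            suspects.append((tuple(pt_a), tuple(pt_b)))
    return suspects

print(find_cloned_regions("crowd_photo.jpg")[:10])
```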
Observing Shadows
One of the most common mistakes made by beginners or novices in Photoshop or image editing is the absence of regular, realistic shadows. Every object should cast a shadow, and if one doesn’t, the image has likely been manipulated.
Moreover, if an image does include shadows, you should examine them for irregularities. For instance, an object like a rock or a box should cast a well-defined shadow.
Other Indications to Watch For
In addition to the aforementioned aspects, you should also be vigilant for pixelation, sharpness, unnatural color saturation, or distortion in an image. An edited image may exhibit distortion due to imperfect coloring and the application of multiple effects. Additionally, pay attention to the fine edges around the elements of the image. If the image has been edited, you may notice irregular edges surrounding the elements.
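Of these cues, sharpness is the easiest to estimate with a quick heuristic. The sketch below, again assuming OpenCV, splits an image into blocks and flags any block whose sharpness is wildly out of line with the rest of the frame, since pasted or heavily retouched elements often look noticeably softer or crisper than their surroundings. The block size and threshold are arbitrary starting points, not established forensic values.

```python
# Block-wise sharpness check with OpenCV: spliced or AI-retouched regions
# often carry noise and sharpness that do not match the rest of the frame.
# The block size and the outlier threshold are arbitrary starting points.
import cv2
import numpy as np

def sharpness_outliers(path, block=64, z_threshold=3.0):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    height, width = gray.shape
    positions, scores = [], []
    for y in range(0, height - block + 1, block):
        for x in range(0, width - block + 1, block):
            patch = gray[y:y + block, x:x + block]
            # Variance of the Laplacian is a common sharpness proxy.
            scores.append(cv2.Laplacian(patch, cv2.CV_64F).var())
            positions.append((x, y))
    scores = np.array(scores)
    z_scores = (scores - scores.mean()) / (scores.std() + 1e-9)
    # Blocks far from the image-wide average deserve a closer manual look.
    return [pos for pos, z in zip(positions, z_scores) if abs(z) > z_threshold]

print(sharpness_outliers("suspect.jpg"))
```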
Recent Pope Deepfake
Just recently, deepfake images of Pope Francis in a puffer jacket went viral. As people were perplexed by how real the image looked, Time explained in a report how to spot whether an image is real or ‘deepfaked.’
If your AI dystopia doesn’t include images of the Pope in a Balenciaga puffer, I don’t want it. pic.twitter.com/7rWHyj35nZ— Franklin Leonard (@franklinleonard) March 26, 2023
“If you look closely at the image of the Balenciaga Pope, you can see a few clear indicators of its AI origins,” the report explains. For example – the cross on his chest is held bizarrely upright, with just a white puffer jacket replacing the other half of the chain. His right hand looks to be holding a hazy coffee cup, but his fingers are wrapped around thin air rather than the cup itself. His eyelid blends into his spectacles, which flow into their own shadow, it says.
Sounds pretty similar to spotting a fake Photoshopped image, right?
The Time report explains that AI picture generators are simply pattern-replicators: they have learned what the Pope looks like, as well as what a Balenciaga puffer jacket might look like, and they are able to squeeze the two together seamlessly. They don’t (yet) understand physics, it says, adding that they have no idea why a crucifix shouldn’t be able to float in midair without a chain, or why eyeglasses and the shadow behind them aren’t the same thing. In these often peripheral regions of an image, humans can instinctively detect errors that the AI cannot.
However, the report warns that these methods will quickly become outdated as AI continues to improve. The only option that then remains is media literacy.