5 ways to identify fake photos
You can easily see that wedding photos, portraits, and model shots are retouched, but the more "sensitive" documentary images take a professional eye to identify.
Quite a few elements help distinguish a real image from a fake one; the basics include light, focus, eye gaze, and the technical characteristics of the image.
Light
A photo composited from several different images will rarely have homogeneous light (light intensity, direction of light, and so on).
For example, a sphere like this is brightest on the part of its surface facing the direct rays of the sun (the direction of the yellow arrow) and darkest on the opposite side; the areas in between are lit to varying degrees depending on their position. Light reflected into the scene or onto surrounding objects behaves correspondingly.
To identify the direction of the light source, you need to know the orientation of the surface at each point. This is hard to determine across the whole object, but along the object's outline the surface orientation can be read directly from the contour. By measuring the brightness and the surface orientation at several points along that outline, algorithms can estimate the direction of the light source.
For example, the picture above is a composite: the direction of the light source on the police officers does not match that on the ducks (see the arrows).
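The contour-based estimate described above can be sketched as a least-squares fit. This is an illustrative simplification, not the exact published algorithm: it assumes a Lambertian surface (brightness proportional to the dot product of the surface normal and the light direction, plus an ambient term) and, for simplicity, that every sampled contour point is directly lit. All names are hypothetical.

```python
import numpy as np

def estimate_light_direction(normals, brightness):
    """Least-squares fit of a 2-D light vector L plus an ambient term,
    so that brightness[i] ~= normals[i] . L + ambient."""
    A = np.hstack([normals, np.ones((len(normals), 1))])  # columns: nx, ny, 1
    sol, *_ = np.linalg.lstsq(A, brightness, rcond=None)
    L = sol[:2]
    return L / np.linalg.norm(L)  # unit light direction in the image plane

# Synthetic check: light from the upper-left, direction (-1, 1) normalised.
true_L = np.array([-1.0, 1.0]) / np.sqrt(2)
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # contour normals
brightness = normals @ true_L + 0.1  # Lambertian shading + ambient (no shadows)
est = estimate_light_direction(normals, brightness)
print(est)  # recovers roughly (-0.707, 0.707)
```

Comparing the fitted directions for two objects in the same photo (the officers and the ducks, say) is what exposes the mismatch.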
Eye shape and position
Because the eyes have a fixed shape, they are useful for analyzing whether a photo has been edited. The iris is circular, but we see it as an ellipse when the eye turns to the side or up and down (a).
One can work out how an eye appears in a photo by tracing rays of light from the eye to a point called the camera center (b). The image forms where these rays pass through the image plane (blue). The camera's principal point, where the optical axis meets the image plane, normally lies close to the center of the image.
A team of experts used the shapes of the two irises in an image to infer how the eyes were oriented relative to the camera, and from that estimated the camera's principal point (c).
When this principal point lies far from the center of the image, or when different people in the photo yield inconsistent principal points, that is evidence the image has been tampered with (d).
The algorithm also works with other objects whose shape is known, such as the two wheels of a car.
However, this technique is limited because it depends on accurate measurements of the irises.
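The geometric fact the method relies on can be shown with a tiny calculation. A circular iris viewed off-axis projects, to a good approximation, to an ellipse whose minor-to-major axis ratio equals the cosine of the angle between the eye's direction and the line of sight. This is only the first step of the full technique, shown here as an assumed simplification:

```python
import math

def viewing_angle_deg(minor_axis, major_axis):
    """Angle (degrees) between the eye's direction and the line of
    sight, inferred from the iris's projected ellipse axes."""
    return math.degrees(math.acos(minor_axis / major_axis))

# An iris measured at 8 px (minor) by 10 px (major) is turned away
# from the camera by roughly 37 degrees.
print(round(viewing_angle_deg(8, 10), 1))  # 36.9
```

Repeating this for both irises, and intersecting the implied viewing directions, is what lets the full algorithm locate the principal point.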
Bright spots in the eyes
Surrounding light reflected in the eyes forms small bright spots, and from their shape, color, and position one can determine the lighting.
For example, in 2006 a photo of the American Idol stars that was about to be published showed highlights in their eyes that were quite different (see the close-ups).
The position of the bright spot in the eye indicates the position of the light source (above, left). As the direction of the light source (yellow arrow) moves from left to right, the bright spot in the eye moves with it.
The bright spots in the American Idol photo are inconsistent, so it can be concluded that this is a composite. In many cases, however, mathematical analysis is required, taking into account factors such as the shape of the eye and the relative positions of the eyes, the camera, and the light.
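A first-pass version of this check is easy to automate: find the brightest pixel inside each eye region and compare where it sits relative to the eye. This is a hedged sketch of the idea only; the full analysis described above also models eye geometry, which this does not. All names and the 0.3 threshold are illustrative.

```python
import numpy as np

def highlight_offset(eye_patch):
    """Position of the brightest pixel relative to the patch centre,
    in units of the patch size (-0.5 .. 0.5 on each axis)."""
    r, c = np.unravel_index(np.argmax(eye_patch), eye_patch.shape)
    h, w = eye_patch.shape
    return np.array([(r - h / 2) / h, (c - w / 2) / w])

# Two synthetic eye patches with highlights in very different places.
rng = np.random.default_rng(0)
eye_a = rng.uniform(0, 0.3, (20, 20)); eye_a[5, 14] = 1.0   # upper-right
eye_b = rng.uniform(0, 0.3, (20, 20)); eye_b[14, 4] = 1.0   # lower-left
diff = np.linalg.norm(highlight_offset(eye_a) - highlight_offset(eye_b))
print(diff > 0.3)  # True: the highlights disagree, a sign of compositing
```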
Some parts of the image are "cloned"
Photoshop's "Clone" tool is commonly used to create additional objects; it essentially copies part of the image and pastes it onto another part of the same image. The image above is taken from a TV advertisement for George W. Bush's campaign in late 2004.
Experts found the cloned areas by searching for duplicated pixel regions (in 6x6-pixel blocks) and identified three edited areas, marked in red, green, and blue.
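The block-matching search described above can be sketched as follows. This simplified version hashes every overlapping 6x6 block and flags identical blocks appearing at more than one location; real detectors compare robust features so they survive recompression, whereas this illustrative version only catches byte-exact clones.

```python
import numpy as np
from collections import defaultdict

def find_clones(img, block=6):
    """Return groups of top-left coordinates whose block-by-block pixel
    contents are identical (candidate clone-tool regions)."""
    seen = defaultdict(list)
    h, w = img.shape
    for r in range(h - block + 1):
        for c in range(w - block + 1):
            key = img[r:r + block, c:c + block].tobytes()
            seen[key].append((r, c))
    return [locs for locs in seen.values() if len(locs) > 1]

img = np.arange(400).reshape(20, 20)   # every pixel value distinct
img[10:16, 10:16] = img[0:6, 0:6]      # paste a clone, as the tool would
matches = find_clones(img)
print(matches[0])  # [(0, 0), (10, 10)]: the block and its clone
```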
Parameters from the camera
A digital camera's sensor is a rectangular grid of pixels, but each pixel senses light intensity in only a narrow range of wavelengths, thanks to the color filter array (CFA).
Red, green, and blue filters are arranged as shown above, so each pixel in the raw data carries only one of these three colors. The missing values are filled in by the camera's processor or by the software that converts the raw data. The simplest method is to copy the value of the nearest pixel.
Therefore, if a region of the image does not show this regular interpolation pattern, it has been interfered with in some way, and the image is not authentic there.
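The idea can be demonstrated with a toy simulation. The sketch below (illustrative only, not a production detector) mosaics the green channel on an assumed RGGB Bayer layout, fills the missing samples with the nearest (left) neighbour as the article describes, and then checks whether that interpolation relationship still holds; retouching a pixel breaks it.

```python
import numpy as np

def green_mask(h, w):
    """Green samples sit where (row + col) is odd in an RGGB layout."""
    r, c = np.mgrid[0:h, 0:w]
    return (r + c) % 2 == 1

def demosaic_green_nn(green):
    """Fill non-green sites with their left neighbour (nearest sample);
    edges wrap around, which is fine for this toy example."""
    out = green.copy()
    mask = green_mask(*green.shape)
    out[~mask] = np.roll(green, 1, axis=1)[~mask]
    return out

def interp_residual(img):
    """Difference between each non-green pixel and its left neighbour;
    zero everywhere means the interpolation pattern is intact."""
    m = green_mask(*img.shape)
    return np.abs(img - np.roll(img, 1, axis=1))[~m]

rng = np.random.default_rng(1)
scene = rng.integers(0, 256, (8, 8)).astype(float)
raw = np.where(green_mask(8, 8), scene, 0)  # camera records green only here
img = demosaic_green_nn(raw)

print(interp_residual(img).max())         # 0.0: genuine demosaiced image
img[3, 4] += 50                           # simulate local retouching
print(interp_residual(img).max() > 0)     # True: pattern broken
```

Real cameras use better interpolation than nearest-neighbour, but the same principle applies: demosaicing leaves periodic correlations, and their local absence flags tampering.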
In many cases the edited image also differs visibly from the original exposure, as when a Reuters contributor darkened and thickened the smoke column in a real photo (right).