Deep Fusion is a new technology Apple introduced with the cameras of the iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max. According to Apple, Deep Fusion can completely change the way we take photos. So how does Deep Fusion work? Let's find out in the article below.
Deep Fusion relies on the Neural Engine built into the Apple A13 chip found in Apple's new iPhones. Instead of relying solely on the image signal processor (ISP), Deep Fusion uses the Neural Engine, a neural processing unit (NPU), to capture multiple shots and then merge and fine-tune them to produce the best quality image in terms of color, white balance, noise reduction and detail.
Deep Fusion is most effective when taking photos in low to medium lighting conditions.
Accordingly, Deep Fusion takes a total of 9 photos:
- 4 main photos (short exposure) + 4 extra photos: the iPhone anticipates the shot and starts capturing these automatically before you press the shutter button.
- 1 long-exposure photo: taken at the moment the user presses the shutter button.
Immediately afterwards, these nine images are analyzed by the Neural Engine inside the A13 chip. The best details from each of the nine images are selected and combined to create a final image with low noise and remarkable detail. The whole process takes only about 1 second.
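The idea of picking the best details from each frame can be illustrated with a toy sketch. The code below is not Apple's actual Deep Fusion algorithm (which runs on the Neural Engine and is not public); it is a simplified, assumption-laden illustration of detail-selective fusion: for each image patch, it keeps the pixels from whichever frame has the highest local detail (gradient energy).

```python
import numpy as np

def fuse_frames(frames, patch=8):
    """Toy multi-frame fusion: for each patch, keep the pixels from the
    frame with the highest local detail (gradient energy).
    Illustration only -- NOT Apple's actual Deep Fusion algorithm."""
    frames = np.stack(frames)                # shape (n_frames, H, W)
    n, h, w = frames.shape
    # Local detail score per frame: squared gradient magnitude
    gy, gx = np.gradient(frames, axis=(1, 2))
    detail = gx**2 + gy**2
    out = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            # Sum detail scores inside this patch for every frame
            scores = detail[:, y:y+patch, x:x+patch].sum(axis=(1, 2))
            best = int(np.argmax(scores))     # sharpest frame wins
            out[y:y+patch, x:x+patch] = frames[best, y:y+patch, x:x+patch]
    return out

# Nine synthetic "exposures": the same scene with different noise
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
frames = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(9)]
fused = fuse_frames(frames)
print(fused.shape)
```

A real pipeline would also align the frames first (hand shake shifts each exposure slightly) and blend patches smoothly rather than copying them outright, but the core principle, choosing the most detailed source per region across nine captures, is the same.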
This is the first time that image processing on an iPhone is handled primarily by a neural processing unit rather than by the user or the traditional ISP hardware.