People never awaited the iPhone SE 2020 for cutting-edge technology, a huge screen, or flashy add-ons. It was always meant to be a small, compact device, and it never played the numbers game. In fact, all the new phone does is swap the iPhone 8’s chipset for the A13 used in the iPhone 11 series, plus a few very minor tweaks. In keeping with that minimal philosophy, the phone wasn’t meant to have numerous camera sensors on the back either.
The iPhone SE 2020 has only one camera, likely the same sensor from the iPhone 8 launched years ago. It, however, uses something called “Single Image Monocular Depth Estimation.” This technology lets the smartphone generate a portrait depth effect from a regular 2D image, something we haven’t really seen before. Portrait shots usually require a depth sensor to get right. So how does the iPhone SE do it?
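Apple has not published its model, but the general technique is well documented in open-source research. A minimal sketch of single-image depth estimation, using the openly available MiDaS model as a stand-in for Apple’s proprietary network, might look like this:

```python
import cv2
import torch

# Load MiDaS, an open-source monocular depth model, via torch.hub.
# This stands in for Apple's proprietary network; the principle --
# predicting per-pixel depth from a single RGB frame -- is the same.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    pred = midas(transform(img))              # coarse relative depth map
    depth = torch.nn.functional.interpolate(  # upsample to image size
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()
# `depth` now holds a relative depth value for every pixel of a
# flat 2D photo -- no depth sensor and no second lens involved.
```

The key point is that the network infers depth from visual cues alone, which is why a single, aging sensor is enough.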
How is it different from the iPhone XR?
To understand how the iPhone SE 2020 takes portraits without a depth sensor, we need to compare it to Apple’s last budget-oriented device. The iPhone XR also had a single camera and took portraits. However, as per a report by PetaPixel, that device still got depth information from its hardware: it would tap into the sensor’s focus pixels and generate a rough depth estimate. It could do so because its sensor was newer.
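Apple’s exact pipeline is private, but the focus-pixel idea is essentially a tiny stereo problem: each focus pixel sees the scene from two slightly offset viewpoints, and the shift between those views hints at depth. A hypothetical block-matching sketch of that principle:

```python
import numpy as np

def rough_depth_from_focus_pixels(left, right, patch=8, max_shift=4):
    """Coarse disparity map from the two focus-pixel sub-images.
    `left`/`right` are float grayscale arrays; the horizontal shift
    that best aligns each patch is a proxy for depth. An illustrative
    sketch of the principle, not Apple's actual pipeline."""
    h, w = left.shape
    disp = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            ref = left[i:i + patch, j:j + patch]
            best_err, best_s = np.inf, 0
            for s in range(-max_shift, max_shift + 1):
                if j + s < 0 or j + s + patch > w:
                    continue
                cand = right[i:i + patch, j + s:j + s + patch]
                err = np.sum((ref - cand) ** 2)
                if err < best_err:
                    best_err, best_s = err, s
            disp[i // patch, j // patch] = best_s
    return disp
```

This only yields a rough estimate, which is why even the XR paired it with machine learning to refine the result.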
On the new iPhone SE, there are no such focus pixels because the sensor is older. Instead, the phone relies entirely on software and machine learning to estimate the depth in a picture. This allows it to generate multi-level depth data from any flat image taken by the camera.
Why is this iPhone SE feature useful?
If you take a picture of a picture, even modern camera sensors may detect just two levels of depth at most, since the image is essentially flat: there is the subject, and then there is the background. That’s it. The iPhone SE 2020, however, can generate tapered depth maps even on flat pictures of pictures, since the work is done entirely in software with machine learning.
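To see why a multi-level, tapered depth map matters, consider how a portrait blur is rendered from it. With only two levels you get one flat blur behind the subject; with a graded map, blur strength can increase with distance. A simple illustrative renderer (my own sketch under those assumptions, not Apple’s renderer) could look like this:

```python
import cv2
import numpy as np

def tapered_portrait(img, depth, levels=5):
    """Blur each depth band progressively harder, producing a tapered
    bokeh instead of a flat two-level subject/background split.
    `depth` is a float map where higher values mean farther away.
    A hypothetical renderer for illustration only."""
    rng = depth.max() - depth.min()
    d = (depth - depth.min()) / (rng + 1e-8)   # normalise to [0, 1]
    out = img.copy()
    for k in range(1, levels):                 # band 0 (the subject) stays sharp
        lo, hi = k / levels, (k + 1) / levels
        mask = (d >= lo) & (d < hi) if k < levels - 1 else (d >= lo)
        ksize = 4 * k + 1                      # odd kernel, grows with distance
        blurred = cv2.GaussianBlur(img, (ksize, ksize), 0)
        out[mask] = blurred[mask]
    return out
```

Feeding it the depth map from the earlier sketch would blur a flat photo of a photo with the same graded falloff as a real scene, which a binary subject/background mask cannot do.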
What this means is that whether you shoot a real scene or a photograph taken years ago, you get the same level of depth, and that is a neat trick to have. In real-life scenarios it doesn’t beat multi-sensor devices like the iPhone 11 Pro. For single-sensor cameras, however, this is a terrific achievement.