Google has just announced the Google Pixel 2 smartphones, a follow-up to the original Google Pixel. And, as expected, they appear to feature the best smartphone cameras yet.
DxOMark rated the Pixel 2 and Pixel 2 XL's cameras as best in class. With an overall score of 98, the phones scored a whopping four points above the recently launched Apple iPhone 8 Plus and Samsung Galaxy Note 8. DxOMark is one of the few imaging benchmark sites that rates cameras, smartphone cameras included, using scientific tools and rigorous testing methodologies.
However, at a time when dual cameras seem to be the only way forward, with every major smartphone maker switching to a dual-camera setup for its highest-end phones, it's surprising that Google has stuck with a single camera on both devices. The Pixel 2 and Pixel 2 XL each use a single 12.2-megapixel camera on the back instead of two. On the front is an 8-megapixel camera.
So why has Google stepped back from the dual-camera trend?
Dual-camera arrays give phones the ability to produce eye-catching portraits that mimic the bokeh effect DSLR cameras create so easily. Bokeh is an effect in which the subject stays in sharp focus while the background (and foreground) is blurred. Producing it requires a depth map of the scene, and the usual way to build one is with two separate camera sensors.
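To make the idea concrete, here is a minimal sketch (not Google's actual pipeline) of how a depth map turns into a bokeh-style portrait: pixels whose depth is near the focal plane are kept sharp, and everything else is swapped for a blurred copy of the image. The function names, the box blur, and the toy two-plane scene are all illustrative assumptions.

```python
import numpy as np

def box_blur(image, radius=1):
    """Naive box blur: average each pixel over a (2*radius+1)^2 neighborhood."""
    padded = np.pad(image, radius, mode="edge")
    h, w = image.shape
    out = np.zeros_like(image)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy : radius + dy + h,
                          radius + dx : radius + dx + w]
    return out / (2 * radius + 1) ** 2

def synthetic_bokeh(image, depth, focus_depth, tolerance=0.1):
    """Keep pixels whose depth is near focus_depth sharp; blur the rest."""
    blurred = box_blur(image)
    # 1.0 where the pixel sits on (or near) the focal plane, 0.0 elsewhere.
    in_focus = (np.abs(depth - focus_depth) < tolerance).astype(float)
    return in_focus * image + (1.0 - in_focus) * blurred

# Toy scene: a bright subject at depth 0.2 on a background plane at 0.9.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0               # the "subject"
depth = np.full((8, 8), 0.9)      # background plane
depth[2:6, 2:6] = 0.2             # subject plane
out = synthetic_bokeh(img, depth, focus_depth=0.2)
```

After this, the subject's pixels are untouched while background pixels near the subject take on softened, averaged values, which is the whole trick: once you have a depth map, the blur itself is cheap.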
Google, however, manages to create that bokeh effect with a single camera, no second sensor required. But how?
Google created an algorithm that first identifies a face and then works outward from there, connecting the face to a body, hair, anything that's part of the person. That determines what the Pixel 2 phones should keep in focus and what should fade into the blurry background. Google said it trained the algorithm on "millions" of faces so it blurs correctly around hair.
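Google's actual segmentation is a trained neural network, but the "start from a face and work outward" idea can be illustrated with a classic region-growing sketch: seed at a detected face pixel and flood outward to connected pixels that look similar, producing a foreground mask. Everything here (the intensity-tolerance criterion, the toy scene) is a simplifying assumption, not Google's method.

```python
from collections import deque

def grow_from_seed(image, seed, tolerance):
    """Region-growing segmentation: starting from a seed pixel (e.g. a
    detected face), flood to 4-connected neighbours whose intensity is
    within `tolerance` of the seed. Returns a boolean foreground mask."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    seed_val = image[sy][sx]
    mask = [[False] * w for _ in range(h)]
    mask[sy][sx] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(image[ny][nx] - seed_val) <= tolerance):
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask

# Toy scene: a bright person-shaped blob (9) on a dark background (1).
scene = [
    [1, 1, 1, 1, 1],
    [1, 9, 9, 1, 1],
    [1, 9, 9, 1, 1],
    [1, 9, 1, 1, 1],
    [1, 1, 1, 1, 1],
]
person = grow_from_seed(scene, seed=(1, 1), tolerance=2)
```

The mask covers the whole connected blob grown from the face seed, including the "body" pixel below it, while the background stays out, which is exactly the foreground/background split a portrait mode needs.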
The Pixel 2 phones use a dual-pixel sensor on the back, which means every single pixel is split in two. Each pixel detects the scene from two perspectives: a left and a right photodiode capture a left and a right image. So instead of building a depth map from images spaced half an inch or so apart, the phone uses images spaced less than a micron apart.
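The depth cue in those left/right views is horizontal disparity: how far a patch of the scene shifts between the two images. A rough, hedged sketch of the idea is classic block matching, which finds the horizontal shift that best aligns a small patch between the views (Google's real pipeline is far more sophisticated; the patch size, cost function, and toy scene below are assumptions for illustration).

```python
import numpy as np

def disparity_1d(left, right, patch=3, max_shift=4):
    """Estimate per-pixel horizontal disparity by block matching: for
    each pixel, find the horizontal shift of a small patch that
    minimises the sum of absolute differences between the two views.
    Dual-pixel baselines are tiny, so max_shift stays small."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(-max_shift, max_shift + 1):
                xs = x + d
                if xs - half < 0 or xs + half >= w:
                    continue
                cand = right[y - half:y + half + 1, xs - half:xs + half + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Toy stereo pair: the "right" view is the left view shifted by 1 pixel.
left = np.zeros((7, 12))
left[2:5, 4:7] = 1.0                 # a textured object
right = np.roll(left, 1, axis=1)     # whole scene shifted right by 1
d = disparity_1d(left, right)
```

On the textured object the recovered disparity is exactly the 1-pixel shift; larger disparity means a closer object, which is how left/right images, even micron-spaced ones, yield a depth map.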
“Because of the dual pixel sensor system and HDR+, even though they’re only microns apart, we’re able to get multiple right and left images from one camera with each shot, so when combined with a whole bunch of crazy maths we can get a depth map of the scene,” said Brian Rakowski, Google’s vice president of product management.
Portrait mode even works on the front camera, allowing for beautiful shallow depth-of-field selfies, even though that camera doesn’t have split pixels.
“We can do it with one camera, which means we can do it on the main rear camera, but we can also do it on the selfie camera too, on both phones. Most phones need two cameras to do that, but we can do it with one and do it really well,” Google’s spokesperson said.