How Apple Designs iPhone Cameras

iPhone Product Manager Francesca Sweet and Apple Vice President of Software Engineering Jon McCormack spoke in an interview with PetaPixel about how Apple approaches camera design and development.

According to the executives, Apple treats the technology in the iPhone camera as a holistic union of hardware and software.

In the interview they say that Apple's main objective for smartphone photography is to let users “stay in the moment, take a great photo, and get on with what they were doing” without being distracted by the technology behind it.

McCormack explains that although professional photographers put their images through a whole retouching process, Apple tries to make photos come out finished the moment the shutter is pressed.

“We tried to replicate as much as possible what a photographer would do in post-production,” says McCormack. “There are two sides to taking a photograph: the exposure, and how you retouch it afterward. We use a lot of computational photography in the exposure, but increasingly in post-production too, and it is all done automatically for the user. The goal is to produce photographs that look more realistic, replicating what it would have been like to actually be there.”

He goes on to describe how Apple uses machine learning to divide scenes into natural parts that are processed independently with computational imaging.

“The background, the foreground, the eyes, lips, hair, skin, clothes, sky… we process all of those parts independently, as you would in a darkroom with lots of local adjustments,” says McCormack. “We adjust everything: exposure, contrast, and saturation, and blend it all together… We understand what food should look like, and we can optimize color and saturation accordingly to make it much more realistic.

“The skies are especially difficult to get right, and Smart HDR 3 allows us to isolate the sky, treat it independently, and then re-merge it to more accurately recreate the sensation of seeing it in the moment.”
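The segment-adjust-remerge idea McCormack describes can be sketched in a few lines. The Python fragment below is a minimal illustration, not Apple's pipeline: it assumes soft region masks already produced by some segmentation model, and the region names and adjustment values are hypothetical.

```python
import numpy as np

# Hypothetical per-region "recipes" (exposure gain, saturation scale).
# The names and numbers are illustrative; Apple's tuning is not public.
REGION_RECIPES = {
    "sky":  {"exposure": 0.9,  "saturation": 1.15},
    "skin": {"exposure": 1.05, "saturation": 0.95},
}

def adjust(image, exposure, saturation):
    """Apply a simple exposure gain and saturation scale to a float RGB image."""
    out = image * exposure                        # exposure: uniform gain
    gray = out.mean(axis=-1, keepdims=True)       # rough per-pixel luminance
    out = gray + (out - gray) * saturation        # saturation: scale chroma
    return np.clip(out, 0.0, 1.0)

def local_edit(image, masks):
    """Edit each segmented region independently, then re-merge.

    `image` is float RGB in [0, 1]; `masks` maps a region name to a soft
    mask in [0, 1], assumed to come from a segmentation model.
    """
    result = image.copy()
    for name, mask in masks.items():
        recipe = REGION_RECIPES.get(name)
        if recipe is None:
            continue
        edited = adjust(image, **recipe)
        m = mask[..., None]                       # broadcast over RGB channels
        result = result * (1.0 - m) + edited * m  # feathered re-merge
    return result
```

Using soft masks for the re-merge is what keeps the region boundaries from showing: each pixel blends the original and edited versions in proportion to the mask, much like a feathered local adjustment in darkroom-style editing.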

McCormack also explains Apple’s reasons for choosing to include the ability to record video in Dolby Vision HDR in the iPhone 12 range.

“Apple wants to untangle the complex industry that is HDR, and the way to do that is with great content-creation tools. HDR video production used to be niche and complicated, because it needed huge, expensive cameras and a suite of video tools; now my fifteen-year-old daughter can create HDR video in Dolby Vision. So soon there will be much more Dolby Vision content available. It is in the industry's own interest that tools and compatibility improve now.”

The executives also discuss the camera hardware improvements in the iPhone 12 and iPhone 12 Pro: “The new wide-angle camera and improved image-fusion algorithms result in reduced noise and increased detail.” The specific advances in the iPhone 12 Pro Max camera also get a special mention:

“With the Pro Max we can go even further, because the larger sensor allows us to capture more light in less time, which makes it possible to freeze motion better in night scenes.”
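As a rough back-of-the-envelope illustration of that claim: if the light gathered scales with sensor area (a simplification that ignores aperture, pixel size, and quantum efficiency), the roughly 47% larger sensor Apple cites for the iPhone 12 Pro Max collects the same light in a correspondingly shorter exposure. The frame time below is hypothetical.

```python
# Light gathered ~ sensor area x exposure time (simplified model).
# A 1.47x larger sensor gathers the same light in 1/1.47 the time,
# and a shorter exposure is what freezes motion at night.

def equivalent_exposure(base_exposure_s: float, area_gain: float) -> float:
    """Exposure time needed on a larger sensor to gather the same light."""
    return base_exposure_s / area_gain

base = 1 / 15  # hypothetical frame time on the smaller sensor, in seconds
print(equivalent_exposure(base, 1.47))  # ~0.045 s, i.e. roughly 1/22 s
```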

Asked why Apple decided to increase the sensor size only now, on the iPhone 12 Pro Max, McCormack explains Apple's perspective:

“It is no longer relevant for any of us to talk about a specific sensor speed or shutter, or one camera system,” he says. “While creating the camera system we think about all of that, and we think about everything that can be done in software… You can, of course, go for a larger sensor, which raises space problems, or you can look at it from the point of view of the complete system and ask whether there are other ways to achieve the same result. So we define what the goal is, and it turns out the goal is not to have a larger sensor to brag about. The goal is to make photos more beautiful in the widest range of situations people find themselves in. That is the path that led us to Deep Fusion, Night mode, and temporal image signal processing.”
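The temporal image signal processing McCormack mentions rests on a simple principle: merging several captures of the same scene averages out sensor noise. The sketch below shows only that baseline effect; it is not Apple's Deep Fusion, which also aligns frames and weights them per pixel to avoid ghosting.

```python
import numpy as np

def temporal_merge(frames: np.ndarray) -> np.ndarray:
    """Average a burst of already-aligned frames to reduce noise.

    `frames` has shape (N, H, W, C). Averaging N frames of independent
    noise lowers its standard deviation by roughly sqrt(N). Real pipelines
    also align frames and weight them per pixel; that logic is omitted.
    """
    return frames.mean(axis=0)

rng = np.random.default_rng(0)
clean = rng.random((64, 64, 3))                     # stand-in "true" scene
burst = clean + rng.normal(0, 0.1, (9, 64, 64, 3))  # nine noisy captures
merged = temporal_merge(burst)
print((burst[0] - clean).std(), (merged - clean).std())
# noise falls by roughly a factor of 3 (the square root of 9 frames)
```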

There is much more in the full interview with Sweet and McCormack at PetaPixel.