How modern smartphone cameras work

Maybe questioning my own intelligence isn’t the best way to start this article, but if I’m going to write a column called “The Smarter Image,” I have to be honest. In many ways my cameras are smarter than I am, and while that may sound threatening to many photographers, it can be incredibly liberating.

Computational photography is changing the photography profession in a significant way (see my previous column, “The Next Age of Photography is Already Here”). As a result, the technology is often framed as an adversary of traditional photography. If the camera “does all the work,” are we just button-pushers? If the camera makes so many decisions about exposure and color, and blends shots into composite images for greater dynamic range, is there no art left in photography?

Smartphone cameras keep getting better at capturing images in low-light conditions. But how? Through multi-image fusion. Getty Images

I’m exaggerating on purpose because we, as a species, tend to jump to extremes (see also: the world). But extremes also make the differences easier to analyze.

Part of this is our romanticized notion of photography. We cling to the idea that the camera simply captures the world around us as it is, mirroring the environment and burning those images onto a chemical emulsion or photosensitive sensor. In the beginning, the photographer had to choose the exposure, aperture, focus, and film; none of these were automated. That involvement in every aspect made the process more demanding, requiring more skill and experience from the photographer.

Now, especially with smartphones, a good photo can be taken by simply pointing the lens and pressing a button. And in many cases, this photo will have better focus, more accurate colors, and an exposure that balances highlights and shadows, even in difficult lighting conditions.

Note that these examples, both traditional and modern, solve technical problems, not artistic ones. It’s still our job to find the right light, craft the composition, and capture the emotion of the subject. When the camera can take care of the technique, we gain more room to work on the artistic aspects.

Consider some examples of this in action. Right now you’ll find far more computational photography features in smartphones, but AI is also creeping into DSLR and mirrorless systems.

Multi-image fusion

Let’s take a closer look at the many stages in Apple’s multi-frame processing pipeline for the iPhone 13 and 13 Pro. Apple

This one feels the most “AI,” in the sense of being both artificial and intelligent, and yet the results are often quite good. When you take a photo, many smartphones actually capture multiple frames at different exposure levels and merge them into a single composite image. This is ideal for balancing difficult lighting and for producing sharp images where a long exposure would blur the subject.

Google’s Night Sight feature and Apple’s Night mode and Deep Fusion bring light out of the dark by capturing a series of images at different ISO and exposure settings, then denoising the frames and merging the results. That’s how you can get usable photos in low light, even when shooting handheld.
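Curious what that merging step looks like in practice? Here’s a rough sketch in Python using OpenCV’s Mertens exposure fusion, with made-up filenames for three bracketed frames. A phone’s real pipeline layers denoising and machine-learned weighting on top of this, so treat it as the core idea, not any vendor’s actual implementation:

```python
import cv2
import numpy as np

# Three hypothetical bracketed frames: underexposed, normal, overexposed
frames = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]

# Align the handheld frames first so merging doesn't produce ghosting
align = cv2.createAlignMTB()
align.process(frames, frames)

# Mertens fusion weights each pixel by contrast, saturation, and
# how well-exposed it is, then blends the frames accordingly
merge = cv2.createMergeMertens()
fused = merge.process(frames)  # float32 image with values in [0, 1]

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```

Part of the appeal of Mertens-style fusion is that it never builds a true HDR radiance map; it simply blends the best-exposed parts of each frame, which is exactly the balancing act described above.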

What you don’t get as a photographer is transparency into how the fusion occurs; you can’t reverse-engineer the components. Even Apple’s ProRAW format, which combines the benefits of shooting in RAW (greater dynamic range, more data available for editing) with multi-frame merging, produces a demosaiced DNG file. It certainly contains more information than an ordinary processed HEIC or JPEG, but the data is not as malleable as in a typical RAW file.

Scene recognition

The Google Pixel 6 and Pixel 6 Pro cameras use “Real Tone” technology, which promises better skin tone accuracy. Triyansh Gill / Unsplash

A big part of computational photography is the camera understanding what’s in the frame. An obvious example is when a camera detects a person in front of the lens, allowing it to focus on the subject’s face or eyes.

Now more elements of a shot are actively recognized. A smartphone can pick out a blue sky and increase its saturation while keeping a group of trees their natural green instead of letting them drift toward blue. It can recognize snow scenes, sunsets, city skylines, and so on, making adjustments to those areas of the scene as it writes the image to memory.
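To see why recognition matters for color, here’s a deliberately crude sketch: it boosts saturation only in a region that a simple hue mask guesses is sky. A real camera relies on learned semantic segmentation rather than a hue threshold, and photo.jpg is a hypothetical input:

```python
import cv2
import numpy as np

bgr = cv2.imread("photo.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)

# Crude stand-in for scene recognition: blue hues in the top half
# of the frame are assumed to be sky (OpenCV hue runs 0-179)
hue = hsv[..., 0]
mask = (hue > 90) & (hue < 130)
mask[bgr.shape[0] // 2:, :] = False  # ignore the lower half

# Boost saturation only where the mask says "sky," leaving the
# greens of trees and grass untouched
hsv[..., 1] = np.where(mask, np.clip(hsv[..., 1] * 1.3, 0, 255), hsv[..., 1])

out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
cv2.imwrite("photo_sky_boost.jpg", out)
```

The point isn’t the mask itself, it’s the selectivity: once the camera knows which pixels are sky, it can treat them differently from the rest of the frame.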

Another example is the ability not only to recognize people in a photo, but to preserve their skin tones while other areas are manipulated. The Photographic Styles feature on the iPhone 13 and iPhone 13 Pro lets you choose from a range of looks, such as Vibrant, but it’s smart enough not to make people look as if they’re standing under heat lamps.

Or take a look at Google’s Real Tone technology, a long-awaited effort to render darker skin tones more accurately. For decades, color film was calibrated using reference images of white skin, which led to inaccurate representations of darker skin tones. (I highly recommend the “Shirley Cards” episode of the podcast 99% Invisible for more background.) Google claims that Real Tone reproduces the gamut of skin tones more accurately.

Identifying objects after capture

Modern smartphone cameras can easily identify subjects, like Belvedere, the dog of PopPhoto’s Dan Bracaglia. Tap the ‘Look Up – Dog’ button and Siri presents images and information on similar breeds. Dan Bracaglia

Here’s where software helps me paper over a gap in my knowledge: I’m terrible at identifying the trees, flowers, and so much else that I photograph. Too often I’ve written captions like “A bright yellow flower in a field of other flowers of many colors.”

Clearly, I’m not the only one, because image software is stepping in to help. When I take a photo of almost any type of foliage with my iPhone 13 Pro, the Photos app uses machine learning to flag that a type of plant is present. I can then tap to display possible matches.

This kind of awareness extends to notable geographic locations, dog breeds, bird species, and more. In that sense, the camera (or, more precisely, the online database the software consults) is smarter than I am, which makes me feel better informed, or at least passably knowledgeable.
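Under the hood, this is ordinary image classification. As a stand-in for whatever proprietary model Photos actually runs, here’s a minimal sketch using a pretrained torchvision classifier on a hypothetical dog.jpg:

```python
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

# A pretrained ImageNet classifier stands in for the on-device model
weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# The weights ship with their matching resize/crop/normalize pipeline
preprocess = weights.transforms()
batch = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Print the top matches, much like Photos' list of possible subjects
top = probs.topk(3)
for p, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{weights.meta['categories'][idx]}: {p:.1%}")
```

ImageNet happens to include roughly 120 dog breeds, which is why even a generic classifier like this can make a respectable guess at Belvedere’s lineage.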

Smart doesn’t have to mean smarter

Worn out by all the automation in smartphone photography? Maybe it’s time to go fully manual and try your hand at film. Getty Images

I want to reiterate that these features mostly address the technical aspects of photography. When I shoot with my smartphone, I don’t feel like I’ve given anything up. On the contrary, the camera often makes corrections I would otherwise have to make myself. I think more about what’s in the frame than about how to render it.

It’s also important to note that alternatives are close at hand: apps that allow manual control, RAW capture for more editing latitude later, and more. And hey, you can still pick up a used SLR and a few rolls of film and go completely manual.

