Dec 19, 2020 | General, iPhone

What is Computational Photography?

Ralph


With the release of the iPhone 12 Pro and Pro Max last month, computational photography was once again front and center at Apple's event.  But what exactly is computational photography?  Read on to learn more!

Computational photography defined


Computational photography is a term coined by Stanford professor Marc Levoy.  Since Levoy coined the term, his definition is the one I will use.  You can see Levoy's Stanford web page here.

“Computational imaging techniques that enhance or extend the capabilities of digital photography in which the output is an ordinary photograph, but one that could not have been taken by a traditional camera.”

Marc Levoy

How was computational photography developed?


As mentioned above, the person most responsible for computational photography developing the way it has is Stanford professor Marc Levoy.  Levoy holds a PhD in computer science and taught digital photography and computer graphics at Stanford.


In 2011, Levoy went to work for Google, where he led a team focused on cameras and photography.  His work there included burst-mode photography for HDR imaging, which was later applied to mobile photography as the HDR+ mode on the Google Pixel phone and carried through several generations of that phone.


The original Google Pixel's camera earned the highest rating DxO had ever given a camera phone at the time.  The emergence of the Pixel camera's superior imaging in the mobile space led other manufacturers to follow suit and up the capabilities of their own offerings.  Thanks to Levoy and Google's work on computational photography, we now have amazing imaging on mobile devices that fit easily in a pocket.

Why do I need all of that fancy computational photography stuff anyway?


Mobile devices, and I am speaking of cell phones here, have cameras that are necessarily smaller than those found in more traditional, big-body cameras.  For example, full-frame sensors in a Nikon Z7 or a Canon R5 are about twenty times the size of the iPhone 12 Pro Max wide-angle sensor.
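For the curious, the arithmetic behind that "about twenty times" figure is easy to check.  The sketch below assumes the commonly reported 1/1.7-inch sensor size, roughly 7.6 x 5.7 mm, for the iPhone 12 Pro Max wide camera; Apple does not publish exact dimensions, so treat it as an approximation.

```python
# Rough sensor-area comparison behind the "about twenty times" figure.
# Full frame is 36 x 24 mm by definition; the iPhone 12 Pro Max wide camera
# is commonly reported as a 1/1.7-inch type sensor, roughly 7.6 x 5.7 mm
# (an assumption, not an official Apple spec).

full_frame_mm2 = 36.0 * 24.0    # ~864 mm^2 (Nikon Z7, Canon R5)
iphone_wide_mm2 = 7.6 * 5.7     # ~43 mm^2

print(f"Full frame:  {full_frame_mm2:.0f} mm^2")
print(f"iPhone wide: {iphone_wide_mm2:.0f} mm^2")
print(f"Ratio:       {full_frame_mm2 / iphone_wide_mm2:.1f}x")  # roughly 20x
```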


With such a small sensor, mobile devices have had to lean on software to make up for its shortcomings.  In many ways, the software written to enhance the images captured by these sensors produces results that are hard to differentiate from their full-sized, full-frame brethren.  It's not perfect, but for many people the differences between mobile device imaging and big-body camera imaging just aren't noticeable (or not noticeable enough) to warrant carrying around a dedicated camera.


Thanks to all of the software and hardware that crunches through every image taken on most mobile devices, there is not much editing needed to make images look good in most cases.  Very simply, photos are presented to the user in a state that is largely good enough to share to social media immediately after capture, though a bit of editing afterward can often improve them further.

How does computational photography work on my iPhone?


The computational photography implemented on the Apple iPhone is some of the best in the business.  The software is quite advanced, as are the hardware it runs on and the cameras that feed it information.  It is a system consisting of the advanced cameras in the iPhone itself (there are four), the software that analyzes the images, and the A14 Bionic processor that executes the demands of that software.  There is obviously much more to it than that simplified version, but that is the basic, 30,000-foot view of the system.


Before we go further with how computational photography works on your iPhone, let us get an idea of what computational photography used to be, before it was actually defined and refined by Levoy.

High Dynamic Range (HDR) photography


In the old days, well before smartphones became computers that happened to make phone calls, there was plain old HDR photography.  Let us talk about what it is and what problem it was meant to address.


There are scenes and landscapes and ideas that we might want to capture with our cameras that have widely varying light throughout the scene.  There might be areas that are well-lit, areas in shadow and maybe even areas in direct sunlight – all in the same composition in your viewfinder!  

Dynamic Range


Camera sensors only have so much ability to “see” into shadows and light at the same time.  This is called the dynamic range of the camera’s image sensor.  Camera sensors have definitely improved over the years, but they cannot see all of the things that a human eye can see.  So we have to help them out sometimes (cheat a little) to get realistic looking images.


What to do when the dynamic range of the scene exceeds your image sensor


The compromise with normal photography would be to expose for the shadows in the scene and let the highlights “blow out” and lose detail.  Once the detail is lost, those highlights turn into pure white and no amount of post work will bring them back.  


The other option would be to expose for the highlights and lose the shadows.  Once the shadow detail is lost, those areas turn to pure black, and no amount of post-processing will bring them back either.
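To make "blown out" and "crushed" concrete, here is a toy sketch of a sensor clipping values that fall outside its range.  It is purely illustrative and not how any real camera pipeline is implemented.

```python
import numpy as np

# Toy illustration of clipping: scene brightness values that land outside the
# sensor's range are lost entirely, not just made darker or brighter.
scene = np.array([0.001, 0.05, 0.4, 1.5, 8.0])  # relative scene luminance

def capture(scene, exposure):
    """Simulate a sensor: scale by exposure, then clip to the [0, 1] range."""
    return np.clip(scene * exposure, 0.0, 1.0)

print(capture(scene, exposure=1.0))  # bright values clip to 1.0 (pure white)
print(capture(scene, exposure=0.1))  # highlights survive, shadows crush toward 0
```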


HDR was a method devised to help solve this problem.  In its simplest form, HDR takes a series of images at varying exposures and combines them later.  For example, in our situation above we have highlights and shadows that will be lost if we try to capture the scene in a single image.  The dynamic range of the scene exceeds that of the camera.


The “standard” solution with HDR would be to capture the scene at what the camera determines is the proper exposure (0), then at a lower exposure (-2) to capture the detail in the highlights, and one final higher exposure (+2) to capture the detail in the shadows.


Finally, all three exposures would be “stacked” on top of each other in software.  The best parts of the -2 exposure would be used to show the detail in the highlights, and the best parts of the +2 exposure would be used to show the detail in the shadows.  That would be added to the normal exposure to make the image resemble what a viewer would see if they were there: properly exposed in all areas of the photo, bright or dark.
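As a rough illustration of that blending step, bracketed frames can be weighted by how well-exposed each pixel is.  This is a single-scale simplification in the spirit of exposure fusion, not the exact algorithm any particular camera uses, and the loader named in the usage comment is hypothetical.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend aligned bracketed frames (pixel values in [0, 1]) by favoring
    well-exposed pixels: the weight peaks at mid-gray (0.5) and falls off
    toward pure black or pure white."""
    frames = np.stack(frames).astype(np.float64)            # (n, H, W) or (n, H, W, C)
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12   # normalize per pixel
    return (weights * frames).sum(axis=0)

# Usage sketch: under, normal, over would be the -2 / 0 / +2 frames,
# already aligned (load_aligned_brackets is a hypothetical helper).
# under, normal, over = load_aligned_brackets(...)
# fused = fuse_exposures([under, normal, over])
```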


Looking at the photos below: in the normal exposure, there is too much dynamic range to capture in a single shot.  The +2 exposure on the left reveals the details in the shadows, while the -2 exposure on the right holds the highlights in the clouds.  When combined with the normal exposure, we arrive at the bottom photo.

Smart HDR in the iPhone


While Apple isn't the only game in town for computational photography, the Apple iPhone 12 series is the device I am writing about because it is what I have direct, personal experience with.  Almost all of this information applies to the iPhone 11 series as well, since it also uses Smart HDR.  HDR by itself isn't considered computational photography per se, but combined with the machine learning applied when images are run through the iPhone 12's A14 Bionic processor, I think it does rise to that level.


The Smart HDR found in the iPhone 12 series is known as Smart HDR 3, as the process is in its third iteration.  It was created specifically to address the problem of large dynamic range in a photo composition.  Much like standard HDR, Smart HDR will take multiple exposures of the scene and blend them together for increased dynamic range in the image.


Smart HDR 3 on the iPhone 12 series takes a series of images at various exposures and combines them to produce the finished product.  While the images produced by Smart HDR 3 can be quite good, it does have limitations.


Moving subjects will not be captured as well with Smart HDR 3 since they are, well, moving.  A moving subject will be in slightly different places across the nine exposures taken.  That means the phone will typically pick one image to use for the subject and then have to remove the subject from the others.


A moving subject could also appear blurry in the final image.  Depending on the shutter speed used for the main exposure, it might not be fast enough to freeze the subject's motion.
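To illustrate the general idea of handling a moving subject, a burst can be blended only where the frames agree with a chosen reference frame, falling back to that reference where they disagree.  This is only a sketch of the concept, not Apple's actual implementation, and it assumes grayscale frames that are already aligned.

```python
import numpy as np

def deghost_stack(frames, ref_index=0, threshold=0.1):
    """Average a burst of aligned grayscale frames (values in [0, 1]), but
    wherever another frame differs too much from the reference (a moving
    subject), fall back to the reference frame's pixels instead of blending
    a ghost into the result."""
    frames = np.stack(frames).astype(np.float64)   # shape (n, H, W)
    ref = frames[ref_index]
    diff = np.abs(frames - ref)
    use = diff < threshold                         # True where a frame agrees with ref
    count = use.sum(axis=0)                        # ref always agrees, so count >= 1
    return (frames * use).sum(axis=0) / np.maximum(count, 1)
```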


It is a great system to have, but it will not work in every scenario.  Now you have an idea of when not to use Smart HDR.

Smart HDR image from the iPhone 12 Pro Max Super Wide Angle Lens, straight from camera with no editing.

Deep Fusion


Deep Fusion is Apple's foray into real, honest-to-goodness, unabashed computational photography that started with the iPhone 11 series in 2019.  Apple is tight-lipped about the actual specs and details of what Deep Fusion does.  What information Apple has given us can lead one to infer some things about Deep Fusion.


Image stacking is a technique that has been around a while.  Simply put, image stacking is a method of taking multiple, usually identically exposed, images and then “stacking” them on top of one another in software.  The images are then processed together to remove noise and increase sharpness.  I've used up to 20 images before to get a cleaner, crisper image than I could get from a single exposure.
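A minimal sketch of plain image stacking, assuming the frames are already aligned, looks like the following; averaging a burst is why the result comes out cleaner than any single exposure, since random noise drops by roughly the square root of the number of frames.

```python
import numpy as np

def stack_frames(frames):
    """Average a list of aligned, identically exposed frames.
    Averaging n frames cuts random noise by roughly sqrt(n)."""
    return np.mean(np.stack(frames).astype(np.float64), axis=0)

# Toy demo: 20 noisy copies of the same synthetic 'scene' average out
# to something much closer to the clean original.
rng = np.random.default_rng(0)
scene = np.linspace(0.0, 1.0, 100).reshape(10, 10)
noisy = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(20)]
print(np.abs(noisy[0] - scene).mean())              # error in a single frame
print(np.abs(stack_frames(noisy) - scene).mean())   # much lower after stacking
```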

A look at Deep Fusion from Apple’s event in 2019.


Deep Fusion combines image stacking and Smart HDR to produce an image that is not only properly exposed in the highlights and shadows, but also very detailed.  It is also worth noting that Deep Fusion processes different sections of a photo differently.  In areas with detail, Deep Fusion applies the extra detail from image stacking.  In areas that don't need extra detail, like a person's face, it doesn't apply it.


In this way, Deep Fusion selectively applies detail enhancement where it is needed and doesn’t apply it where it might muddle or distort the image.  From experience, I can tell you that kind of work can be tedious and quite time consuming to do manually in Photoshop.  This will happen on your phone in under a second.
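As an illustration of that selective idea, sharpening can be gated by a simple detail map such as local variance.  This is only a sketch of the concept under my own assumptions (a grayscale image with values in [0, 1], a threshold I picked arbitrarily), not what Deep Fusion actually does internally.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def selective_sharpen(image, detail_threshold=0.002, amount=1.5):
    """Sharpen only where the image already has fine detail.
    Smooth regions (skies, skin) are left untouched so noise and texture
    are not amplified where they would distort the image.
    Assumes a 2D grayscale float image with values in [0, 1]."""
    blurred = gaussian_filter(image, sigma=1.0)
    sharpened = image + amount * (image - blurred)          # unsharp mask
    local_mean = uniform_filter(image, size=7)
    local_var = uniform_filter(image ** 2, size=7) - local_mean ** 2
    detail_mask = (local_var > detail_threshold).astype(np.float64)
    return detail_mask * sharpened + (1.0 - detail_mask) * image
```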

Night mode


Night Sight is a photographic mode that began on the Google Pixel line of phones.  It combines long exposure times with HDR and computational algorithms to produce excellent results in conditions that used to be too dark for normal photography.


Shortly after the success Google had with Night Sight on the Pixel line, every other major manufacturer went to work on their own variation.  Apple’s implementation is known as Night Mode.


Essentially, Apple’s Night Mode is a variation on Smart HDR, just tweaked for use in dimly lit scenarios.
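As a rough sketch of why many frames help in the dark, several short, aligned exposures can be averaged and then brightened: each short frame keeps motion blur low, while combining them keeps noise down.  This is illustrative only and not Apple's actual pipeline; the gain value here is an arbitrary assumption.

```python
import numpy as np

def night_merge(short_frames, gain=4.0):
    """Illustrative only: average several short, aligned exposures (values in
    [0, 1]) to suppress noise, then apply digital gain to brighten the dim
    scene, clipping back into the valid range."""
    avg = np.mean(np.stack(short_frames).astype(np.float64), axis=0)
    return np.clip(avg * gain, 0.0, 1.0)
```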

Conclusion


Computational photography has given cell phones an image-capture capability that is currently unmatched in the realm of standard photography.  Yes, big-body DSLRs and mirrorless cameras can take photos in automated HDR, but their implementations are less sophisticated, as are the results.


Working with DSLRs and mirrorless cameras takes more effort on the part of the photographer.  To do proper HDR on a DSLR, the photos must be taken, transferred to a computer, loaded into HDR software, and then merged.  Your iPhone will do all of that in about a second.


The importance of computational photography cannot be overstated, in my opinion.  The same software and processing power in a larger camera with better optics should be impressive indeed.  I eagerly await the day when Canon, Nikon, and Sony empower their cameras with the same technology.  If you wish to read more about big-bodied cameras, mobile photography, and what I see in their future, please click here to read more.


If you enjoyed this article, please leave a comment below.  If you’d like to contact me and leave a suggestion for future articles, click on the suggestion link above.  Until next time, go take some pictures!

Ralph


Ralph is an avid photographer in his spare time. He spends a lot of his photography time shooting sports photos of his daughter, who plays softball and swims. He also has a keen interest in mobile photography.
