This is a great time for photography. Cameras from Sony, Nikon, and Canon have never been better. Mobile photography, likewise, is better than ever. So, where is photography headed from here? Read my predictions below.
The short answer is: photography as a whole isn’t going anywhere. Certain segments of photography will be more profitable than others, determined largely by the skills and experience needed to pull off the job. Other factors, such as demand and availability of acceptable substitutes, also determine the viability of any particular niche within photography as a whole.
I will not be able to touch on every niche within the universe of photography, but I will speak to a few of the more common subsets of the business and how I see them impacted: mainly portraiture, wedding, mobile, and real estate or architectural photography, since I have some experience with those.
I also will not speak to some of the emerging genres of photography, such as 360-degree and virtual reality imaging, as they are largely unique and have limited demand.
Mobile photography is running away with innovation
Cell phones are in everyone’s hands these days; most people would feel naked without them. Yet these tiny devices, which we depend on for communication every day, are responsible for more of the photos you see each day than any of the big DSLR or mirrorless cameras.
Since we have cell phones with us each day, we take the lion’s share of photos with our cell phones. Rarely do I see normal folks walking around with large cameras anymore, unless they are pros. But things were not always thus.
The first camera phones came out around the year 2000. The images they captured were truly horrible, but that wasn’t what was important. The importance of camera phones was that they were convenient, and consumers latched on to that convenience. Image quality competition began, and we saw all manner of designs incorporating Carl-Zeiss lenses or Sony components.
From those humble beginnings of barely passable photos to today’s mobile photography heavy hitters has been a span of fewer than two decades. In under twenty years, mobile phones have gone from barely being able to take a photo worthy of the contact image that pops up when your friend calls to images that are hard to distinguish from those of professional-level photography gear.
That’s pretty impressive compared to DSLR camera gear. The Canon 30D that I bought back in 2006 takes 8-megapixel images using lenses that have largely gone unchanged for decades. Sure, camera sensors have increased in quality and megapixel count, and there have been some lens innovations, but nothing game-changing. A photo taken today with my 2006 Canon 30D compared to my 2016 Canon 80D will look remarkably similar.
Mobile photography has had much innovation in the same time period.
Size still matters, just not as much
Ten years ago, the size of the sensor in your camera determined a lot of what you could do with it. Large sensors, like a full-frame sensor, meant you could take professional-looking photos with your camera. Large sensors, fast lenses, and long focal lengths were what was needed to get the blurred background that was a hallmark of professional photography.
Without these large sensors and fast lenses, the rest of us could take the same photo but the whole image would be in focus. It lacked that blurred background that separated the subject from the rest of the photo and drew the viewer to the subject. It was simply a matter of physics that required larger sensors. No more.
Thanks to computational photography, smartphones with tiny sensors can blur the background of a photo giving the same, or much the same, results as a much larger professional camera. This “portrait mode” setting on smartphones has improved much in the past few years to the point where it has become difficult to tell if a portrait was captured on a cell phone or with pro gear.
There is still work to be done with camera phone imaging, but as I’ll discuss below, software and processing is making up for a lot of the shortcomings of these small sensors.
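To make the idea concrete, here is a toy sketch of what “portrait mode” does at its core: estimate which pixels belong to the subject, blur everything else, and blend. This is my own illustrative Python (the function names `box_blur` and `fake_portrait_mode` are invented for this sketch), not any vendor’s actual pipeline, which involves machine-learned depth estimation and far more sophisticated, depth-aware bokeh.

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur: average each pixel with its neighbors.

    Real portrait modes use depth-aware, lens-shaped blurs; a box
    blur is just enough to illustrate the idea on a 2-D grayscale
    array.
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def fake_portrait_mode(img, depth_mask, radius=2):
    """Blend a sharp foreground with a blurred background.

    depth_mask is 1.0 where the subject is and 0.0 for background --
    the kind of per-pixel segmentation a phone estimates before
    applying synthetic background blur.
    """
    blurred = box_blur(img, radius)
    return depth_mask * img + (1.0 - depth_mask) * blurred
```

Subject pixels pass through untouched while background detail is smeared away, which is exactly the separation effect that used to require a large sensor and fast glass.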
So, what is computational photography, you might ask? I’ve read and heard many definitions, but the one I like best comes from the man who coined the term himself.
“Computational imaging techniques enhance or extend the capabilities of digital photography in which the output is an ordinary photograph, but one that could not have been taken by a traditional camera.” – Marc Levoy
Marc Levoy was a Stanford professor who taught computer graphics and digital photography. He was the driving force behind the Google Pixel phone and its amazing camera capabilities. In 2017, the Google Pixel 2 was among the first mobile devices with a portrait mode, and the first to pull it off with a single rear camera. Thanks to Levoy’s work on the Google Pixel, every other major manufacturer scrambled to add the same computational photography capabilities to their own devices.
Whether you use a Google Pixel phone for your own photographs or not is irrelevant. Levoy’s work at Google pushed the envelope of computational photography. He pushed it so far and so fast that we have seen astounding innovation in mobile imaging. The current iPhone 12 series takes very good photos, with portrait mode often being indistinguishable from a pro camera without pixel peeping. Looking at portrait photos I have taken with an iPhone 11 Pro versus my 12 Pro Max, I can see subtle improvements in the portrait mode results as well as in the Smart HDR rendering.
Each year the software algorithms to process these photos get a little bit better, while the hardware processors that make all these computations in real-time get faster to the point that the user never notices all the processing happening in the background.
For more information on computational photography, see this article.
Small sensors are seeing diminishing returns
The Google Pixel 5 is an interesting phone with a great camera system. One of the more interesting bits about that particular device is that Google has used essentially the same image sensor since the Pixel 2. You read that right. The Pixel 5, released in October of 2020, still uses a Sony IMX363 12.2-megapixel primary sensor that is essentially unchanged since the Pixel 2 in 2017.
It is the software that Google has been able to leverage that has made the year-over-year improvements possible. The Google Pixel 5 is no slouch when it comes to photography; it is a heavyweight in the mobile photography world and shares the field with other heavyweights such as the Apple iPhone and the Samsung Galaxy Note.
I personally think Google will, at some point, be forced into a larger image sensor. Apple has already begun moving to a larger main sensor with its iPhone 12 Pro Max and Samsung has been using larger sensors for some time. Google has probably just about wrung all they are going to be able to out of the smaller sensor they are using now. While the flagship photography phones are forced into larger sensors to continue improving image quality, the mid-tier and lower-tier phones will still rely on the small sensors due to space and cost constraints.
Big-body camera manufacturers are going to have to step up or get left behind
What I term the big-bodied camera manufacturers, Sony, Nikon, Canon, Fuji, Panasonic, etc., will at some point have to address the workflow of taking photos. It is simply easier to take a photo with a mobile device (such as my iPhone), post-process it on the phone while in the field, and then post it to whatever social media I choose or send it straight to the print house for printing.
I can do all of the above in a matter of minutes. I can take the photo, make edits and send it out from anywhere in a few minutes using nothing but my phone. Can you do that with a big body DSLR? No, you cannot.
The workflow of using a DSLR or a mirrorless camera currently involves a laptop or a desktop at some point for most users. Most pros will shoot their images on their big body, then transfer the images to a desktop or laptop for post-processing. Once they are finished with post-processing, the images are then uploaded to a photography website as a session or posted to social media.
I don’t generally take my laptop to a shoot with me. I will take the images at a shoot with my DSLR and then I will take them home to work on the post-processing. I will first backup all of the images I am going to use, then on to the post-work. Once I am done, the images get uploaded to a gallery and a link to the gallery is sent to the client.
With my iPhone (currently a 12 Pro Max) I can shoot images, touch them up, and send them out to either social media or an image gallery from my phone. I can send the link to the image gallery to the client while I am still on site. If something needs to be addressed such as a missed shot or the client would like an additional image that wasn’t discussed beforehand, it can happen right then and there.
The big body manufacturers will have to find a way to allow the same kind of “connected” experience that current mobile imagery does. I think they have begun working on this by allowing the newest crop of DSLRs and mirrorless cameras to transfer images to a mobile device as they are captured. The images can then be edited and uploaded by the mobile device. This is one way to make it work and solve the workflow issue, but the large cameras have no computational photography aspects. All of that work must be done manually.
Ease of use
The iPhone 12 series of phones have cameras that perform a lot of work behind the scenes. As mentioned above, computational photography has so expanded the capabilities of small mobile sensors that they are now playing on the same field as their big-bodied brethren with regard to image quality. There are some things that mobile devices still need improvement on but, overall, the gap has very much narrowed.
Being able to capture details in both the highlights and shadows of an image with a mobile device is impressive. The technique used is called HDR (High Dynamic Range): blending multiple images at varying exposures, taking the best parts of each, to create one composite image with a greater range of contrast than any single exposure.
The very definition of computational photography is capturing an image that could not be taken by a traditional camera, at least not in a single exposure. The iPhone’s Smart HDR does great work here, as does the Google Pixel’s Live HDR+, and they do it all with one press of the shutter.
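The core of that blend can be sketched in a few lines. This is a bare-bones, Mertens-style exposure fusion of my own (the name `exposure_fusion` and the mid-gray weighting are my illustrative choices): each pixel is weighted by how well exposed it is, and the frames are averaged with those weights. Smart HDR and HDR+ pipelines are far more elaborate (frame alignment, tone mapping, noise modeling), but the basic blend looks like this.

```python
import numpy as np

def exposure_fusion(exposures, sigma=0.2):
    """Merge bracketed exposures by favoring well-exposed pixels.

    Each pixel in each frame gets a weight based on how close its
    value is to mid-gray (0.5); the result is the per-pixel
    weighted average across the stack. Inputs are 2-D arrays with
    values in [0, 1].
    """
    stack = np.stack(exposures)  # shape: (n_frames, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)
```

A pixel blown out in the bright frame is rescued from the dark frame, and vice versa, which is how shadow and highlight detail end up in the same photo.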
A DSLR or mirrorless camera needs to take multiple exposures and blend them together: say, three shots, each with its own shutter activation, which are then either blended manually in software later or composited by the camera itself. The problem is that there is no artificial intelligence driving the DSLR image, so the camera-generated HDR composite can often be lackluster.
The other option is to load the images on a laptop and process them in software, make manual adjustments, and export once the adjustments are done. Again, we are back to offloading the images to a laptop for editing.
I think the mobile device way is better, faster, and an easier workflow. Time for the big boys to catch up and get on board with AI and computational photography. It is here and it isn’t going away. If the big camera manufacturers ignore it long enough, they will be left behind.
Compositing photos is an area where the lines blur a bit and the workflow becomes more equal between a mobile device and a DSLR. What I mean by compositing is taking elements from different exposures and manually blending them together to arrive at a photograph that would not otherwise be possible to create in camera. Sounds a lot like computational photography, doesn’t it? Well, it is and it isn’t.
When I shot real estate photos, I most often composited several images together so that I could have a proper exposure on the inside of the building or residence as well as the view outside through the windows. Add to that some flash exposures to ensure proper interior lighting and color balance, and you arrive at a composite involving two exposures on the low end and numerous images, sometimes ten or more, composited together on the high end.
Taking particular elements from that many exposures is generally not something that works well with computational photography algorithms. Given machine learning, AI, and how smart the people working on these algorithms are, I’m not saying it can’t be done, simply that it hasn’t been done yet. There is a lot of artistic interpretation that goes into those types of composites.
There are also composites where the foreground, subject, and background are all separate images, composited together to form some really impressive and creative images. I won’t speak too much on this type of artistry except to say that the photographers who engage in this work are very creative, able to imagine the image and then make it happen. I don’t see AI getting there anytime soon, at least not until AI becomes imaginative on its own.
Composite photography is still firmly in the realm of human interaction.
Optics are the one area of photography where the big-body camera still holds a definitive advantage. There is currently no better way to “reach out and touch someone” than with big glass on a big body. Current mobile phone offerings are largely ill-suited to taking sports images, especially of games played on a large field.
In the realm of magnification, there is currently no substitute for optical magnification. Digital zoom doesn’t make the grade, as most implementations rely on cropping into an image, with a resulting loss of image quality. Optical zoom avoids that problem with very little image distortion.
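The reason digital zoom degrades the image is easy to demonstrate. Here is a small sketch of my own (the function name `digital_zoom` is invented for this illustration): crop the center of the frame, then upscale it back to full size with nearest-neighbor duplication, the crudest form of what phones do. Cropping throws away pixels, and upscaling only duplicates the survivors.

```python
import numpy as np

def digital_zoom(img, factor):
    """Simulate digital zoom: crop the center, then upscale.

    The output has the same dimensions as the input, but it is
    built from only 1/factor^2 of the original pixels, which is
    why digitally zoomed shots look soft compared to optical zoom
    at the same framing.
    """
    h, w = img.shape
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    # nearest-neighbor upscale back to the original frame size
    ys = np.arange(h) * ch // h
    xs = np.arange(w) * cw // w
    return crop[np.ix_(ys, xs)]
```

With an optical zoom, the lens fills the whole sensor with the distant subject, so every output pixel is a genuinely measured one; no amount of clever upscaling fully recovers detail that was never captured.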
Parents wanting to take great photos of their child playing softball games on the weekends are still better served with a cheap DSLR and a good zoom lens than trying to rely on a mobile device and digital zoom. While there are some sports where a mobile device would be acceptable for photos, sports played on an open field simply involve too much distance for acceptable photography with a mobile device.
That problem is going to get solved sooner or later. Moment lenses that clip on or mount to a special phone case are already providing limited additional optical zoom. How long will it be before the optical zoom on a mobile device and available additional optics remove the need for big glass? I’m not sure, but I don’t think it is a priority in the mobile world right now. Give it another couple of years and it may be a selling point of the next great mobile picture shooter.
Making a living as a photographer
Making a living as a photographer can be a tough proposition these days. Everyone has a camera in their pocket masquerading as a cell phone. As has been discussed here, the imaging of those cameras is improving each year.
Good enough is good enough mentality
I have seen it in my work and have heard the same from other photographers: “Good enough is good enough.” Meaning that a technically inferior photo that still looks good is often good enough for the consumer. Years of being subjected to selfies and snapshots on social media have blunted most people’s taste for quality work.
For example, I used to get paid for taking real estate and architectural photos of buildings and residences for real estate agents. They wanted quality photos to show off the home or building they were selling. Good photos give a prospective buyer an impression of the property.
Agents started finding out that iPhone photos taken of properties were being used and the properties still sold. Even though the photos were technically deficient (crooked, bad lighting, distortion, bad verticals, etc.), the homes were still selling. Once the COVID pandemic hit, along with the panic that went with it, many agents dropped their pro photographer as a cost-saving measure and shot their own photos during the market downturn. Houses still sold, and those agents never returned to using a pro; they just kept shooting the images themselves. True architectural photography is, for the time being, still firmly in the realm of the pro, but I’m not sure for how much longer.
Even though professional work would absolutely produce better images, photos that were good enough did the job. Additionally, the pandemic gave rise to virtual tours of properties, so prospective buyers relied less on photos to understand the layout of a house, but that is a whole other issue, at least in my local market.
People will still pay for quality
There are times when quality really does matter. Real estate imagery is transient and the images lose their value after the sale of the home. Other types of imagery, however, do not.
Wedding photography is still in demand, even as the pandemic rages on. Many brides are not keen on risking the memories of their special day to a friend with a cell phone. It has been tried, and was trendy at one time, to have guests take candid photos with disposable cameras for the newlywed couple. As you might imagine, the results were quite, ahem, varied.
The same can be said of portrait photography. While anyone with a recent smartphone can take a pretty good image in portrait mode, a pro photographer has something that the best mobile device will never have. Their eye.
A photographer’s eye is what you are really paying for when you hire a portrait or wedding photographer. Not only will that photographer have experience in composition, they will also have preferences in posing their subjects. That is, they have their own style, developed through experience. That isn’t something the average person is just going to pick up through YouTube videos and taking photos with their iPhone. It is a set of skills that is earned, not taught, and a quality that people will pay for if they want high-end photos.
Being able to edit photos in post-processing is something else that people will pay for. Much like composition and the posing of subjects, post-processing also ends up being largely characteristic of the photographer’s style. A photographer’s finished photos will have a certain look to them, a style that points back to the individual. I like contrasty photos, so, among other things, my photos tend to have a lot of contrast at the end of the editing process.
In decades gone by, photographers were wizards who understood the magic of capturing light. They had to be; there was no instant-gratification viewscreen on the back of a camera twenty or thirty years ago. They had to know what they were doing at the moment of image capture, or they ended up in a darkroom with unusable images.
In the present, anyone can snap a photo and know right away what it looks like. Image capture is much easier than it used to be. Automated settings, bolstered by software algorithms and machine learning, have evolved to the point that shooting on “auto” in many instances is easier and produces acceptable or better results.
In this manner, general photography is no longer squarely in the realm of the expert. General photographic ability is now the province of the everyman and everywoman. That doesn’t make every iPhone owner Ansel Adams, but it does make them considerably better at capturing properly exposed images than the average DSLR owner of twenty years ago.
I do believe that when the desired quality of work passes a certain threshold, the professional photographer will be employed more for their experience in the art and their style rather than technical expertise in capturing photons. Year over year, technical expertise will mean less and less as camera capabilities, especially mobile, grow. Style and experience will be what sets the professional apart from the amateur.
Finally, Nikon, Canon, Sony, et al. must align their product lines to incorporate computational photography. Even though the software algorithms driving the current computational trends are still in relative infancy, the results are most impressive. The big-body camera players would do well to move mountains to incorporate the technology into their future offerings before it becomes too late and they are seen as antiquated relics.
If you enjoyed this article, please leave a comment below. If you would like to suggest ideas for future articles, use the suggestion link at the top. Until next time, go take some pictures!