Moving Past Analog Analogies
The first wave of development in digital photography was a struggle to replace film in quality and use. Now that it has, a new wave of development will push photography into new places and redefine still image creation. Digital photography is about twenty years old; its acne is gone and it's about time for it to move out of the family house and become its own medium.
Photoshop has already begun this transition. Most features of Photoshop replace analog tools such as airbrushing and dodging and burning. Even seemingly new tools such as Unsharp Mask and Liquify have analog origins.* But tools such as HDR move past the imaginings of the analog world into entirely new techniques possible only in a digital workflow. HDR is part of the new wave in photography, with an entirely new way of representing photographic vision.
Four attributes of digital photography have improved rapidly since the first mainstream professional digital cameras: resolution, ISO, burst-capture speed, and dynamic range (hereafter: DR). New workflows will arise from these technological advancements, and each will bring complete virtualization closer. By virtualization I mean that a particular decision, now made at the time of capture, will become part of the information contained in a raw file, able to be easily and accurately changed during post.
Some of these changes may not take place for decades, but some have already taken place. Exposure is virtualized in current high-end cameras. At the time of capture a photographer still has to pick a certain exposure, but the raw file contains enough information that the exposure can be manipulated during post without artifacts.
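A rough sketch of why this works: raw files store (approximately) linear sensor data, so an exposure change in post is just a multiplication applied before the tone curve. The function below is illustrative only; real raw converters also handle black levels, white balance, and demosaicing.

```python
def push_exposure(linear_values, ev, clip_point=1.0):
    """Shift the exposure of linear raw values by `ev` stops.

    Because the data is linear, one stop is exactly a factor of two;
    highlights only clip once they exceed the sensor's ceiling.
    """
    scaled = [v * (2.0 ** ev) for v in linear_values]
    return [min(v, clip_point) for v in scaled]

# A mid-grey patch pushed one stop simply doubles in brightness...
one_stop = push_exposure([0.18], 1.0)   # [0.36]
# ...while a bright patch pushed two stops runs into the ceiling.
clipped = push_exposure([0.4], 2.0)     # [1.0]
```

The same multiplication applied to an 8-bit JPEG would posterize shadows and blow highlights irrecoverably, which is why this flexibility lives in the raw file and not the finished image.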
Drastically higher ISOs, burst speeds, and dynamic range will lead to the increased virtualization of photography, which will shake the very foundations of still photography. If you thought the transition from film to digital was a game-changer, you ain't seen nothing yet.
ISOs, Burst Speeds and the Path to the Virtualized Shutter
In the next several generations of existing camera lines, higher ISOs and higher burst-capture speeds will incrementally improve the flexibility with which a photographer can pursue a photo. These improvements will be incremental because shooting at ISO 25,600 is very much like shooting at ISO 400. Those six stops may open up venues that were previously not photo-friendly (e.g. on-stage music performances, night clubs, and dark wedding receptions), but the difference is evolutionary rather than revolutionary. A photographer plucked out of 1998 might be incredulous that he could shoot at ISO 25,600, but show him where the buttons are and he would be immediately at home.
The true revolution, a virtual shutter, is on the horizon, and is almost inevitable given the constant progress in higher ISOs, DR, and faster burst speeds. I am NOT about to suggest that the future of still imagery is in frame-pulls from video cameras. That argument has been made and is fundamentally flawed. The technologies of moving and still image capture may merge, but the subtleties of capturing powerful emotions in each medium will never be identical. Some concepts and imagery are better conveyed through a single frame than through video, so shooting with the intention to produce either video or stills necessarily means sacrificing the nuances of the other.
Still, the underlying concept of replacing the mechanical shutter with a virtual one introduces flexibility previously unimagined. By starting and stopping exposures electronically, turning the sensor on and off rather than tripping a mechanical shutter, the limit on frame rates will become a function of chip design rather than mechanics. This will revolutionize sports and action shooting. The first step will come when 100+ frames per second of production quality can be captured and the process, interface, and mindset of selecting the hero shot during editing becomes more akin to scrubbing through video than selecting among an image sequence. The next step will come when the concept of individual exposures gives way to photographic raw files with time-line based exposures. A single raw file will contain several seconds of the bride and groom cutting the wedding cake, and in post the photographer will pick out the perfect 1/100th of a second to process into a final image. Though this will blur the line between video and still capture, there will remain a fundamental difference between this time-line based exposure and shooting video, because all elements of the workflow will be optimized for the capture and reproduction of a single image.
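Navigating such a file might feel something like the sketch below. `TimelineRaw`, its fields, and the 100 fps figure are hypothetical rather than any real format; the point is that "which moment" becomes an editing decision instead of a capture decision.

```python
class TimelineRaw:
    """Hypothetical time-line based raw file: several seconds of
    continuous capture, from which one frame is pulled in post."""

    def __init__(self, frames, fps):
        self.frames = frames  # one raw frame per 1/fps of a second
        self.fps = fps

    def frame_at(self, seconds):
        """Return the frame covering the given moment in the capture."""
        index = int(seconds * self.fps)
        # Clamp so scrubbing past the end lands on the last frame.
        return self.frames[min(index, len(self.frames) - 1)]

# Three seconds of the cake-cutting at 100 fps = 300 selectable moments.
clip = TimelineRaw(frames=list(range(300)), fps=100)
hero = clip.frame_at(1.5)  # scrub to 1.5 s into the capture
```

The editing interface over such a file would look like a video scrubber, but every frame it lands on would carry the full bit depth and resolution expected of a still.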
Already at a major media event there are video cameras dedicated to wide shots, tight shots, and detail shots, so imagining the photographer becoming part of the video team, but remaining focused on the creation of still images seems quite natural.
The Path to Virtualized Cameras
The use of virtual shutters to capture variable-time raw files will push photography closer to the post-processing intensive world of videography. Already a major fashion shoot feels more like a video production than a traditional photo shoot, with a producer and camera operator (i.e. digital tech) added to the traditional cast of photographer and photo assistant, pushing the role of the photographer towards that of a movie director. This chasm between camera operator and director will continue to widen until the development of virtualized cameras finally kills off the old idea of the photographer as a mechanical tradesman.
A single lens camera is limited to reproducing a single point of view. As dynamic range and ISO continue to improve, the concept of a proper exposure at time of capture will disappear; the raw file will contain clean and accurate data in all tonal ranges, and the reproduction exposure and response curves will be decided in post. The remaining creative decisions at the time of capture, aperture and camera position, will become part of the raw file when the camera itself is virtualized. The concept is simple: place multiple lens/sensor systems around the subject (e.g. a football field) and feed all image streams, along with position and lens information, to a central computer. This computer will create a 3D model of the subject with such detail that scenes shown within it will be indistinguishable from a photograph. There, at that computer (perhaps with an immersive headset), the photographer-director will control the position, shutter speed, and aperture of the virtual camera. Imagine being able to walk out into the middle of the field during a play and snap a wide-angle picture of a quarterback sack from the quarterback's point of view, all the while scrubbing forward and backward through time to snap the photo at just the right moment.
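The end state can be pictured as a data structure: once the scene exists as a 3D model with a time axis, "taking a photo" reduces to choosing render parameters after the fact. Every name and value in this sketch is hypothetical, invented only to show which decisions move into post.

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    """Hypothetical render parameters for a photo taken inside a
    reconstructed 3D scene: all former capture-time decisions,
    now just fields chosen during editing."""
    position: tuple       # (x, y, z) metres, anywhere in the modeled scene
    time: float           # seconds into the captured time-line
    focal_length: float   # mm, chosen after the fact
    aperture: float       # f-number, sets the virtual depth of field

# The quarterback-sack shot: wide angle, from the quarterback's spot,
# scrubbed to the decisive moment.
shot = VirtualCamera(position=(26.0, 1.7, 0.0), time=3.42,
                     focal_length=20.0, aperture=2.8)
```

Nothing in this record is fixed at capture time; re-rendering the same scene from the sideline at 200mm would mean changing four fields, not reshooting the play.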
For portable photojournalistic photography, where the scene cannot be rigged in advance, a camera with three or more lenses could allow for virtualized focus as well as limited point-of-view repositioning (imagine a camera lens in each hand and one on the head). In addition, a system capable of 100 frames per second could also bracket several focus points during a single burst.
Post-Production Lighting
The final step in moving the creation of still images to post-production will be narrow-band lights and the separation of luminance capture from color capture. Already, programs like Capture One work in the LAB color space, which treats color information separately from luminance information. By assigning each light a specific narrow wavelength band outside of the visible spectrum, the photographer will be able to balance lighting ratios or turn specific lights "on" or "off" in post. To maintain color accuracy, which depends on source lighting that spans the entire visible spectrum, a base light must be present in the scene; but because ISO and dynamic range will be so enhanced, this ambient light used for color accuracy could be many stops dimmer than required for an "optimal exposure". The creative lighting could then be added outside of the visible spectrum.

Imagine a typical portrait with four lights: key, fill, hair, and background. Now filter the key light to emit only 700nm, the fill light 720nm, the hair light 740nm, and the background light 760nm. A camera sensitive to this range could then isolate the luminance at each of these wavelengths, assign each to a virtual light source, and allow the photographer to boost or dim the light from each, leaving only the position and shape of the light sources as capture-time decisions. In fact, by increasing the number of lights with unique wavelengths and decreasing the spacing between them, these attributes too could be virtualized. This technique has the added benefit of keeping the main light sources invisible to the subject, reducing stress and distraction, which is especially useful in event and wedding coverage.
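If a camera could deliver a separate luminance channel per narrow-band light, relighting in post would reduce to a weighted sum of those channels. The sketch below assumes exactly that; the channel names, pixel values, and gains are invented for illustration.

```python
def relight(channels, gains):
    """Mix per-light luminance maps using a linear gain per light.

    A gain of 1.0 leaves a light as captured, 0.0 switches it 'off',
    and other values brighten or dim it, all during post.
    """
    pixel_count = len(next(iter(channels.values())))
    mixed = [0.0] * pixel_count
    for light, luminance in channels.items():
        gain = gains.get(light, 1.0)
        for i, value in enumerate(luminance):
            mixed[i] += gain * value
    return mixed

# One pixel of the four-light portrait described above:
channels = {
    "key_700nm":        [0.40],
    "fill_720nm":       [0.20],
    "hair_740nm":       [0.05],
    "background_760nm": [0.10],
}

# Halve the fill and switch the background "off", after the shoot:
mixed = relight(channels, {"fill_720nm": 0.5, "background_760nm": 0.0})
# 0.40 (key) + 0.10 (halved fill) + 0.05 (hair) + 0.00 = 0.55
```

Lighting ratio becomes a slider rather than a trip back to the set, which is the whole point of pushing the decision into the raw file.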
If this sounds far off, it probably is. Consider, however, that this can already be done for the production of black-and-white images. A red key-light, a blue fill-light, and a green background-light can be individually controlled using Photoshop's channel mixer, provided the output is black and white. For example, by increasing the ratio of blue to red in the creation of the black-and-white image, the key-to-fill lighting ratio can be reduced.
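The black-and-white version can be sketched directly: with a red key and a blue fill, channel-mixer weights act as per-light gains. The RGB values and weights below are invented examples, not Photoshop's actual numbers.

```python
def channel_mix(rgb_pixel, r_weight, g_weight, b_weight):
    """Convert one RGB pixel to a grey value via channel-mixer weights.

    With colored lights, each channel carries one light's contribution,
    so the weights double as per-light dimmers.
    """
    r, g, b = rgb_pixel
    return r_weight * r + g_weight * g + b_weight * b

# Red key light and blue fill light on the subject:
pixel = (0.8, 0.0, 0.2)  # red channel = key, blue channel = fill

# Equal weights: the captured 4:1 key-to-fill ratio carries through.
flat = channel_mix(pixel, 0.5, 0.0, 0.5)    # grey value 0.5
# Raise blue relative to red: the effective ratio softens to 2:1.
softer = channel_mix(pixel, 0.4, 0.0, 0.8)  # roughly 0.48
```

The narrow-band scheme described earlier is this same trick generalized past three channels, and freed from the black-and-white restriction by capturing color through a separate broad-spectrum base light.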
Analogous progress has taken music post-production to the next level, allowing producers to change the pitch, length, and loudness of any note within a live recording
Generation of 3D models from still images (see the lighting corrections at 4:06)
Adobe’s 3D camera (retroactive camera repositioning and focus shifts)
*Manipulating a Polaroid emulsion is NOT an approximation of the Liquify tool.