Objective Photography is an Oxymoron (all photos lie…)

August 18, 2016 · by parasam

There is no such thing as an objective photograph

A recent article in the Wall Street Journal (here) entitled “When Pictures Are Too Perfect” prompted this post. The premise of the article is that too much ‘manipulation’ (i.e. Photoshopping) is present in many of today’s images, particularly in photojournalism and photo contests. There is evidently an arbitrary standard – one that no one seems able to define objectively – which posits that only an image ‘straight out of the camera’ is ‘honest’ or acceptable, particularly if one is a photojournalist or is entering an image into some form of competition. Examples are given, such as Harry Fisch having a top prize from National Geographic (for the image “Preparing the Prayers at the Ganges”) taken away because he digitally removed an extraneous plastic bag from an unimportant area of the image. Steve McCurry, best known for his iconic “Afghan Girl” photo on the cover of National Geographic magazine in 1985, was accused of digital manipulation of some images shot in 1983 in Bangladesh and India.

On the whole, I find this absurd and the logic behind such attempts at defining an ‘objective photograph’ fatally flawed. From a purely scientific point of view, there is absolutely no such thing as an ‘objective’ photograph – for a host of reasons. All photographs lie, permanently and absolutely. The only distinction is by how much, and in how many areas.

The First Lie: Framing

The very nature of photography, from the earliest days until now, has at its core an essential feature: the frame. Only a portion of what the photographer can see is captured as an image; there are four edges to every photograph. Whether the final ‘edges’ presented to the viewer are due to the limitations of the camera/film/image sensor, or to cropping during the editing process, is immaterial. The initial choice of frame is made by the photographer, in concert with the camera in use, which presents physical limitations that cannot be exceeded. The choice of frame is completely subjective: it is the eye/brain/intuition of the photographer that decides in the moment where to point the camera and what to include in the frame. Is pivoting the camera a few degrees to the left to avoid an unsightly telephone pole “unwarranted digital manipulation?” Most news editors and photo contest judges would probably say no. But if the exact same result is obtained by cropping the image during editing, we already start to see disagreement in the literature.

If Mr. Fisch had simply walked over and picked up the offending plastic bag before exposing the image, he would likely still be the deserved recipient of his 1st place prize from National Geographic; but because he removed the bag during editing, his photograph was disqualified. By this same logic, consider the “Mona Lisa”: Leonardo da Vinci painted a balustrade with two columns behind his subject, and the placement of Lisa Gherardini (the presumed model) between the columns is perfectly symmetrical, which helps frame the subject. Painting takes time; it is likely that a bird landed on the balustrade from time to time. Was Leonardo supposed to include the bird or not? Did he ‘manipulate’ the image by including only the elements that were important to the composition? Would any editor or judge dare ask him today, were that possible?

“So-So Happy!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

“So-So Happy… NOT!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

A combination example of framing and depth-of-field. One photographer is standing 6 ft further away (from my camera position) than the other, but the foreshortening of the 200mm telephoto appears to depict ‘dueling photographers’. [©2012 Ed Elliott / Clearlight Imagery]

The Second Lie: The Lens

No photograph can occur without a lens. Every lens has certain irrefutable properties, focal length and maximum aperture being the most important, and each of these parameters imparts a vital, and subjective, aspect to the image subsequently captured. Since the ‘lingua franca’ of focal length is the ubiquitous 35mm camera, we can generalize here: 50mm is the so-called ‘normal’ lens; 35mm is considered ‘wide angle’, 24mm ‘very wide angle’ and 10mm a ‘fisheye’. Going in the other direction, 85mm is often considered a ‘portrait’ lens (slight close-up), 105mm a medium ‘telephoto’, 200mm a ‘telephoto’ and anything beyond is for sports or space exploration. Focal length also governs depth of field: the longer the lens, the shallower the zone of sharp focus. Wide angle lenses tend to bring the entire field of view into sharp focus, while telephotos blur out everything except what the photographer has selected as the prime focus point.

Normal lens [©2016 Ed Elliott / Clearlight Imagery]

Telephoto lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

Wide Angle lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

FishEye lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) – curvature and edge distortions are normal for such an extreme angle-of-view lens   [©2016 Ed Elliott / Clearlight Imagery]

In addition, each lens type distorts the field of view noticeably: wide angle lenses tend to exaggerate the distance between foreground and background, making the closer objects in the frame look larger than they actually are and distant objects even smaller. Telephoto lenses have the opposite effect, foreshortening and ‘flattening’ the resulting picture. For example, in a long telephoto shot of a tree on a ridge backlit by the moon, both the tree and the moon can be tack sharp, and the moon appears to sit directly behind the tree, even though it is 239,000 miles away.
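
This ‘flattening’ falls straight out of geometry: in a simple pinhole model, the rendered size of an object is proportional to focal length divided by distance. A minimal Swift sketch makes the effect concrete (my own illustrative numbers, not from any particular shot):

// Pinhole model: rendered height on the sensor (mm) of an object
// of a given height (m) at a given distance (m).
func renderedHeightMM(objectHeightM: Double, distanceM: Double, focalLengthMM: Double) -> Double {
    return focalLengthMM * objectHeightM / distanceM
}

// Wide angle up close: subject at 2 m vs. background object at 12 m
print(renderedHeightMM(objectHeightM: 2, distanceM: 2, focalLengthMM: 24))    // 24.0 mm
print(renderedHeightMM(objectHeightM: 2, distanceM: 12, focalLengthMM: 24))   //  4.0 mm (6x smaller)

// Telephoto from afar: subject at 100 m vs. background object at 110 m
print(renderedHeightMM(objectHeightM: 2, distanceM: 100, focalLengthMM: 200)) //  4.0 mm
print(renderedHeightMM(objectHeightM: 2, distanceM: 110, focalLengthMM: 200)) // ~3.6 mm (nearly equal)

The wide shot renders the near object six times larger than the far one; the telephoto renders the two within about ten percent of each other – precisely the ‘dueling photographers’ effect in the image above.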

The other major ‘subjective’ quality of any lens is the aperture chosen by the photographer. Commonly known as the “f-stop”, this is the ratio of the focal length of the lens divided by the diameter of the ‘entrance pupil’ (the size of the hole that the aperture diaphragm is set to for a given capture). The maximum aperture (the largest ‘hole’ that can be set by the photographer) depends on the diameter of the lens itself in relation to the focal length. For example, with a ‘normal’ 50mm lens, if the lens is 25mm in diameter then the maximum aperture is f/2 (50/25). Larger apertures (lower f-stop ratios) require larger lenses, which are correspondingly heavier, more difficult to use and more expensive. One can see that an f/2 lens for a 50mm focal length is not that huge; to obtain the same f/2 ratio for a 200mm telephoto would require a lens at least 100mm (4in) in diameter – making such a device huge, heavy and obscenely expensive. As a quick comparison (Nikon lenses, full frame, prime lens, priced from B&H Photo – a discount photo equipment supplier), a 50mm f/2.8 lens costs $300, while the same lens in f/1.2 costs $700. A 400mm telephoto in f/5.6 would be $2,200, while an identical focal length with a maximum aperture of f/2.8 will set you back a little over $12,000.
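
The arithmetic is easy to verify; a trivial Swift sketch (illustrative only):

// f-stop = focal length / entrance pupil diameter, so the glass
// must be at least this large:
func pupilDiameterMM(focalLengthMM: Double, fNumber: Double) -> Double {
    return focalLengthMM / fNumber
}

print(pupilDiameterMM(focalLengthMM: 50, fNumber: 2))  //  25.0 mm - modest
print(pupilDiameterMM(focalLengthMM: 200, fNumber: 2)) // 100.0 mm - huge, heavy, expensive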

Exaggeration of object size with wide angle lens: farther objects appear much smaller than in ‘reality’. [©2011 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image as a result of long telephoto lens (f/8, 400mm lens) – the crane is hundreds of feet closer to the camera than the dark buildings behind, but looks like they are directly adjacent. [©2013 Ed Elliott / Clearlight Imagery]

Depth of field with shallow aperture (f/2.4) – in this case even with a wide angle lens the background is out of focus due to the large distance between the foreground and the background (in this case the Hudson River separated the two…) [©2013 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image with a long telephoto lens. The ship is almost 1/4 mile further away than the green roadway sign, yet appears to be directly behind it… (f/4, 400mm) [©2013 Ed Elliott / Clearlight Imagery]

Wide angle lens (14-24mm zoom lens, set at 16mm – f/2.8) [©2012 Ed Elliott / Clearlight Imagery]

Shallow depth of field due to large aperture on telephoto lens (f/4 – 200mm lens on full-frame 35mm DSLR) [©2012 Ed Elliott / Clearlight Imagery]

Wide angle shot, demonstrating sharp focus from foreground to the background. Also exaggeration of perspective makes the bow of the vessel appear much taller than the stern. [©2013 Ed Elliott / Clearlight Imagery]

The bottom line is that the choice of lens and aperture is a controlling element of the photographer (or her pocketbook) – and has a huge effect on the image taken with that lens and setting. None of these choices can be deemed to be either ‘analog’ or ‘digital’ manipulation of the image during editing, but they arguably have a greater effect on the outcome, message, impact and tenor of the photograph than anything that can be done subsequently in the darkroom (whether chemical or digital).

The Third Lie: Shutter Speed

Every exposure is a product of two factors: Light × Time. The amount of light that strikes a negative (or digital sensor) is governed solely by the selected aperture (and possibly by any additional filters placed in front of the lens); the duration for which the light is allowed to impinge on the negative is set by the shutter speed. While the main purpose of setting the shutter speed is to produce the correct exposure once the aperture has been selected (to avoid either under- or over-exposing the image), shutter speed has a huge secondary effect on any motion of either the camera or objects in the frame. Fast shutter speeds (over 1/125th of a second with a normal lens) will essentially freeze any motion, while slow shutter speeds will result in ‘shake’, ‘blur’ and other motion artifacts. While some of these can be merely annoying, in the hands of a skilled photographer motion artifacts tell a story. Likewise, a ‘freeze-frame’ from a very fast shutter speed can distort reality in the other direction, giving the observer a point of view that the human eye could never glimpse in reality. The hours-long time exposure of star trails and the suspended-animation shot of a bullet about to pierce a balloon are both ‘manipulations’ of reality – but they take place as the image is formed, not in the darkroom. The subjective experience of a football distorted as the kicker’s foot impacts it – locked in time by a shutter speed of 1/2000th second – is very different to the same shot of the kicker at 1/15th second, where his leg is a blurry arc against a sharp background of grass. Two entirely different stories, just from shutter speed choice.
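
The Light × Time trade can be made concrete with the standard exposure value formula, EV = log2(N²/t): any aperture/shutter pair with the same EV admits the same total light. A short Swift sketch (the formula is standard; the example pairs are my own):

import Foundation

// EV = log2(N^2 / t): equal EV means equal total light on the film/sensor.
func exposureValue(fNumber: Double, shutterSeconds: Double) -> Double {
    return log2(fNumber * fNumber / shutterSeconds)
}

print(exposureValue(fNumber: 2.8, shutterSeconds: 1.0/500.0)) // ~11.9 - freezes the kicker's leg
print(exposureValue(fNumber: 8.0, shutterSeconds: 1.0/60.0))  // ~11.9 - same light, motion starts to blur

Same exposure, two different renderings of motion (with a depth-of-field difference thrown in as well).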

Fast shutter speed to stop action [©2013 Ed Elliott / Clearlight Imagery]

Combination of two effects: fast shutter speed to stop motion (but not too fast, slight blurring of left foot imparts motion) – and shallow depth of field to render background soft-focus (f/4, 200mm lens) [©2013 Ed Elliott / Clearlight Imagery]

High shutter speed to freeze the motion. 1/2000 sec. [©2012 Ed Elliott / Clearlight Imagery]

Fast shutter speed to provide clarity and freeze the motion. 1/800 sec @ f/8 [©2012 Ed Elliott / Clearlight Imagery]

Although a hand-held shot, I wanted as fine-grained a result as possible, so took advantage of the stillness of the subjects and a convenient wall on which to place the camera. 2 sec exposure with ISO 500 at f/8 to keep the depth of field. [©2012 Ed Elliott / Clearlight Imagery]

The Fourth Lie: Film (or Sensor) Sensitivity [ISO]

As if Pinocchio’s nose hasn’t grown long enough already, we have yet another ‘distortion’ of reality that every image contains as a basic building block: film/sensor sensitivity. We have discussed exposure as a product of Light Intensity × Time of Exposure, but one further parameter remains. A so-called ‘correct’ exposure is one that has a balance of tonal values and (more or less) represents the tonal values of the scene that was photographed. This means essentially that blacks, shadows, mid-tones, highlights and whites are all apparent and distinct in the resulting photograph, and the contrast values are more or less in line with those of the original scene. The sensitivity of the film (or digital sensor) is critical in this regard. Very sensitive film will allow a correct image with a lower exposure (either a smaller aperture, faster shutter speed, or both), while a ‘slow’ [insensitive] film will require the opposite.
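
Sensitivity folds into the same arithmetic: for a fixed aperture, the required shutter time scales inversely with ISO. A brief Swift sketch (my own numbers, chosen to echo the twilight caption below):

// For a fixed aperture, shutter time scales inversely with ISO.
func equivalentShutterSeconds(_ t: Double, fromISO: Double, toISO: Double) -> Double {
    return t * fromISO / toISO
}

// The 1/15 s, ISO 6400 twilight shot below, re-imagined at fine-grained ISO 100:
print(equivalentShutterSeconds(1.0/15.0, fromISO: 6400, toISO: 100)) // ~4.3 s - tripod territory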

A high ISO was necessary to capture the image during late twilight. In addition a slow shutter speed was used – 1/15 sec with ISO of 6400. [©2011 Ed Elliott / Clearlight Imagery]

Low ISO (50) to achieve relatively fine grain and best possible resolution (this was a cellphone shot). [©2015 Ed Elliott / Clearlight Imagery]

Cellphone image at dusk, resulting in ISO 800 with 1/15 sec exposure. Taken from a parking garage, the highlight on the palm is from car headlights. [©2012 Ed Elliott / Clearlight Imagery]

Night photography often requires very high ISO values and slow shutter speeds. The resulting grain can provide texture as opposed to being a detriment to the shot. [©2012 Ed Elliott / Clearlight Imagery]

Fine grain achieved with low ISO of 50. [©2012 Ed Elliott / Clearlight Imagery]

Slow ISO setting for high resolution, minimal grain (ISO 50) [©2012 Ed Elliott / Clearlight Imagery]

Sometimes you frame the shot and do the best you can with the other parameters – and it works. Cellphone image at night meant slow shutter speed (1/15 sec) and lots of grain with ISO 800 – but the resultant grain and blurring did not detract from the result. [©2012 Ed Elliott / Clearlight Imagery]

A corollary to film sensitivity is grain (in film) or noise (in digital sensors). If you desire a fine-grained, super sharp negative, then you must use a slow film. If you need a fast film that can produce an acceptable image in low light without a flash, say for photojournalism or surveillance work, then you must accept grain the size of rice in some cases… Life is all about compromise. Again, the final outcome is subjective, and totally within the control of the accomplished photographer, but this exists completely outside the darkroom (or Photoshop). Two identical scenes shot with widely disparate ISO films (or sensor settings) will give very different results: a slow ISO will produce a very sharp, super-realistic image, while a very fast ISO will be grainy, somewhat fuzzy, and can tend towards surrealism if pushed to an extreme. [Technical note: the arithmetic portion of the ISO rating is the same as the older ASA rating scale; I use the current nomenclature.]

Editing: White Lies, Black Lies, Duotone and Technicolor…

In my personal work as a streetphotographer (my gallery is here) I tell ‘white lies’ all the time in the edit. By that I mean small adjustments to focus, color balance, contrast, highlight and shadow balance, etc. This is a highly personal and subjective process. I learned from master photographers (including Ansel Adams), from books, and from much trial and even more error to pre-visualize my shots, mentally placing the components of the image on the Zone Scale as accurately as possible with the equipment and lighting on hand. This discipline was most helpful when I was in university with no money – every shot cost, both in film and in developing ingredients. I would often choose between beer and film… film always won… fewer friends, more images… not quite sure about that choice, but I was fascinated with imagery. While pre-visualization is, I feel, an important magic and can make the difference between an ok image and a great one, it’s not an easy process to follow in candid streetphotography, where the window between recognizing a potential shot and the chance to grab it is often 1-2 seconds.

This results, quite frequently, in things in the image not being where I imagined them in terms of composition, lighting, color balance, etc. So enter my ‘white lies’. I used to accomplish these in the darkroom with push/pull development, and with significant tweaking during printing (burning, dodging, different choices of contrast printing papers, etc.). Now I use Photoshop. I’m not particularly an Adobe disciple, but I started with this program in 1989 with version 0.87 (then bundled as part of Barneyscan, on my Mac Classic) and we’ve kind of grown up together… I just haven’t bothered to learn another program. It does what I need; I’m sure I only know about 20% of its current capabilities, but that’s enough for my requirements.

The other extreme that can be accomplished by Photoshop experts (and I use the term generically here) is the ‘black lie’. This is where one puts Oprah’s head on someone else’s body, performs ‘digital liposuction’ to the extent that Lena Dunham and Adele both scream “enough!”, and many celebrities find their faces applied to actors and scenes (typically in North Hollywood) where they have never been, nor would want to be… There’s actually a great novel by the late Michael Crichton [Rising Sun, 1992] that contains a detailed subplot about digital photomanipulation of video imagery. At that time it took a supercomputer to accomplish the detailed and sophisticated retouching of long video sequences – today tools such as Photoshop and After Effects could accomplish this on a desktop workstation in a matter of hours.

"Duotone" technique [background masked and converted to monochrome to focus the viewer on the foreground image]

“Duotone” technique [background masked and converted to monochrome to focus the viewer on the foreground image] [©2016 Ed Elliott / Clearlight Imagery]

A technique I frequently use is ‘duotone’ – and even here I am being technically inaccurate. What I mean by this is separating the object of interest from the background by masking the subject and turning the rest of the image into black and white. The juxtaposition of a color subject against a monochrome background helps isolate and focus the viewer’s attention on the subject. Frequently in streetphotography the opportunity to place the subject against a non-intrusive background doesn’t exist, so this technique is quite effective in ‘turning down’ the importance of the often busy and distracting surroundings. [Technically the term duotone refers to printing the entire image in gradations of only two colors.] Is this ‘manipulation’? Yes. Does it materially detract from, or alter the intent of, the original image that I pre-visualized in my head? No. I firmly stand behind this point of view: all photographs “lie” to one extent or another, and any tool that the photographer has at hand to generate a final image in accordance with the original intent is fair game. What matters is the act of conveying the vision of the photographer to the brain of the viewer. Period.

The ‘photograph’ is just the medium that transports that image. At the end of the day, a photo is a conglomeration of pixels (either printed or glowing) that transmit photons to the human visual system, and ultimately end up in the visual cortex in the back of the human brain. That is where we actually “see”.

Early photographs (and motion picture films) were available only in black & white. When color photography first came along, the colors were not ‘natural’. As emulsions improved things got better, but even so there was a marked deviation from ‘natural’ that was actually ‘designed in’ by Kodak and other film manufacturers. The saturation and color mapping of Kodachrome did not match reality, but it did satisfy a public that equated punchy colors with a ‘good color photo’, made those vacation memories happy ones, and therefore sold more film. The more subdued, and realistic, Ektachrome came along as professional photographers pushed for choice (and, quite frankly, an easier and more open developing process – Kodachrome could only be processed by licensed labs and was notoriously difficult to process well). The downside of early Ektachrome emulsions was the unfortunate instability of the dye layers in color transparency film – leading to rapid fading of both slides and movies.

As one who has worked in film preservation and restoration for decades, I found it interesting that an early color process (the Technicolor 3-stripe method), originally designed just to get vibrant colors onto the movie screen in the 1930s, had a resurgence in film preservation. It turned out that so many of the early Ektachrome films from the 1950s and 1960s experienced rapid fading that significant restoration efforts were necessary to salvage some important movies. The only way at that time (before economical digital scanning of movies was possible) was – after restoration of the color negative – to scan using the Technicolor process and make 3 separate black & white films representing the cyan, magenta and yellow dye layers. Then, someday in the future, the 3 negatives could be optically combined and printed back onto color film for viewing.

There is No Objective Truth in Photography (or Painting, Music…)

All photography is an illusion. Using a lens, a photo-sensitive element of some sort and a box that restricts the image to only the light coming through the lens, a photograph is a rendering of what is before the lens. Nothing more. Distorted and limited by the photographer’s choice of point of view, lens, aperture, shutter speed, film/sensor and so on, the resultant image – if correctly executed – reflects at most the inner vision of the photographer’s mind and perception of the original scene. Every photograph has a story (some more boring than others).

One of the great challenges of photography (and possibly one of the reasons that until quite recently this art form was not taken seriously) is that on first glance many photos appear to be just a ‘copy of reality’ – and therefore to contain no inherent artistic value. Nothing could be further from the truth. It’s just that the ‘art’ hides in plain sight… It is our collective, subjective, and inaccurate view that photographs are ‘truthful’ and accurately represent the reality before the lens that is the root of the problem that engendered this post. We naively assume that photos can be trusted, that they show us the only possible view of reality. It’s time to grow up, to accept that photography, just like every other art form, is a product of the artist, first and foremost.

Even the unassuming mom who is taking snapshots of her kids is making choices – whether she knows it or not – about each of the parameters already discussed. Since most snapshot (or cellphone) cameras have wide angle lenses, the ‘huge nose’ effect in close-up pics of babies and youngsters (which will haunt these innocent children forever on Facebook and Instagram – data never dies…) is just an objective artifact of lens choice and distance to subject. Somewhere along the line our moral compass got out of whack when we started drawing highly artificial lines around ‘acceptable editorial behavior’. An entirely different discussion – worthy of a separate post – can be had about the photographer’s (or publisher’s) intention in sharing an image. A deliberate attempt to misrepresent a scene for financial gain, the allocation of justice, a change in power, etc. is an issue. But the same issue exists whether the medium that transports such a distortion is the written word, an audio recording, a painting or a 3D hologram. It is illogical to apply a set of standards or restrictions to one art form and not another, just to attempt to rein in inadvertent or deliberate distortions in a story that may be deduced from the art by an observer.

To use another common example, we have all seen many photos of a full moon rising behind a skyline, trees on a ridge, etc. – typically with a really large moon – and most observers just appreciate the image, the impact, the feeling. Even rudimentary science, and a bit of experience with photography, reveals that most such images are composites, with an enlarged moon image layered in behind the foreground. The moon is simply never that large in relation to the rest of the image. In many cases I have seen, the lighting of the rest of the scene clearly shows that the foreground was shot at a different time of night than the moon (a full moon on the horizon only occurs at dusk). I have also seen many full moons in photographs at astronomically impossible locations in the sky, given the longitude and latitude of the foreground shown in the image.

An example of "Moon on Steroids"... The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it's obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.

An example of “Moon on Steroids”… The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it’s obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.
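
The caption’s arithmetic is easy to check: the moon subtends about 31 arc-minutes, and in a pinhole model its image diameter is roughly focal length × angle (in radians). A Swift sketch:

import Foundation

// Image diameter of the moon on the sensor, in mm.
func moonImageDiameterMM(focalLengthMM: Double) -> Double {
    let moonAngleRadians = (31.0 / 60.0) * Double.pi / 180.0 // ~0.009 rad
    return focalLengthMM * moonAngleRadians
}

print(moonImageDiameterMM(focalLengthMM: 50))  // ~0.45 mm on a 24x36 mm full frame
print(moonImageDiameterMM(focalLengthMM: 400)) // ~3.6 mm - still a small fraction of the frame

Any moon that fills a large fraction of the frame behind a 10 ft tree was either shot at an extreme focal length from very far away, or composited.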

Why is it that such an esteemed, and talented, photographer as Steve McCurry is chastised for removing some distracting bits of an image – bits which in no way detracted from the ‘story’ of the image – and yet I dare say no one in their right mind would criticize Leonardo da Vinci for including a physically impossible background (the almost mythological mountains and seas) in his rendition of Lisa Gherardini in the “Mona Lisa”? As someone who has worked in the film/video/audio industry for my entire professional life, I can tell you with absolute certainty that no modern audio recording – from Adele to Ziggy Marley – is released without being ‘digitally altered’ in some fashion. Period. It is just an absolute in today’s production environment to ‘clean up’ every track, every mix, every completed master – removing unwanted echoes, noise, coughs, burps, and other audio equivalents of Mr. Fisch’s plastic bag… and no one, ever, has complained about this or accused the artists of being ‘dishonest’.

This double standard needs to be put to rest permanently. It reflects poorly on those who take this position, demonstrating their lack of technical knowledge and a narrow perception of the art form of photography, and it gives power to those whose only interest is to malign others and detract from the powerful impact that a great image can create. If ignorant observers can really believe that an airplane in an image as depicted is ‘real’ (for the airplane to be of such a size in relation to the tunnel and ladders, it would have to be flying at a massively illegal low altitude in that location), then such observers must take responsibility. Does the knowledge that this placement of the plane is ‘not real’ detract from the photo? Does the contraposition of ‘stillness vs movement’ (concrete and steel silo vs rapidly moving aircraft) create a visually stimulating image? Is it important whether that occurred ‘in reality’ or not? Would an observer judge it differently if this were a painting or a sketch instead of a photograph?

I love the art and science of photography. I am daily enamored with the images that talented and creative people all over the world share, whether camera originals, composites, pure fiction created in the ‘darkroom’, or some combination of all three. This is a wondrous art form, and it must be supported at all costs. It’s not easy; it takes dedication, effort, skill, perseverance, money, time and love – just as any art form does. I would hope that we could move the conversation to what matters: ‘truth in advertising’. In a photo contest, nothing – repeat, nothing – should matter except the image itself. Just like painting, sculpture, music, ceramics, dance, etc., the observed ‘art’ should be judged only on the merits of the work itself, without subjective expectations or philosophical distortions. If an image is used to reinforce a particular ‘story’ – whether for ethical, legal or news purposes – then both the words and the images must be authentic. Authentic does not mean ‘un-retouched’; it means that there is no ‘black lie’ in what is conveyed.

To summarize: let’s stop believing that photographs are ‘real’ – and start appreciating the art, craftsmanship, effort and focus that this medium brings to all of us. Let’s apply a common frame of reference to all forms of art, whether painting, writing, photography or music – in terms of authenticity and purpose. Would we chide Escher for attempting to fool us with visual cues of an impossible reality?

iPhone4S – Section 4: Software

March 13, 2012 · by parasam

This section of the series of posts on the iPhone4S camera system will address the all-important aspect of software – the glue that connects the hardware we discussed in the last section with the human operator. Without software, the camera would have little function. Our discussion is divided into three parts: an overview; the camera subsystem of the iOS operating system; and the actual applications (apps) through which users normally interact to take and process images.

As the audience of this post will likely cover a wide range of knowledge, I will try to not assume too much – and yet also attempt not to bore those of you who likely know far more than I do about writing software and getting it to behave in a somewhat consistent fashion…

Overview

The iPhone – surprise, surprise – is a computer. A full-fledged computer, just like what sits on your desk (or your lap). It has a CPU (brain), memory, graphics controller, keyboard, touch surface (i.e. mouse), network interfaces (WiFi & Bluetooth), a sound card and many other chips and circuits. It even has things most desktops and laptops don’t have: a GPS radio for location services, an accelerometer (a really tiny gyroscope-like device that senses movement and position of the phone), a vibrating motor (to bzzzzzz at you when you get a phone call in a meeting) – and a camera. A rather cool, capable little camera. Which is rather the point of our discussion…

So… like any good computer, it needs an operating system – a basic set of instructions that allows the phone to make and receive calls, data to be written to and read from memory, information to be sent and retrieved via WiFi – and on and on. In the case of the iDevice crowd (iPod, iPhone, iPad) this is called iOS. It’s a specialized, somewhat scaled-down version of the full-blown OS that runs on a Mac. (Actually it’s quite different in the details, but the concept is exactly the same.) The important part for our discussion is that a number of basic functions affecting camera operation are baked into the operating system. All an app has to do is interact via software with these command structures in the OS, present the variables to the user in a friendly manner (like turning the flash on or off), and most importantly, take the image data (i.e. the photograph) and allow the user to save or modify it, based on the capability of the app in question.

The basic parameters available to the developer of an app are the same for everyone. It’s a level playing field. Every app developer has exactly the same toolset, the same available parameters from the OS, and the same hardware. It’s up to the cleverness of the development team to achieve either brilliance or mediocrity.

The Core OS functions – iOS Camera subsystem

The following is a very brief introduction to some of the basic functions that the OS exposes to any app developer – which forms the basis for what an app can and cannot do. This is not an attempt to show anyone how to program a camera app for the iPhone! Rather, a small glimpse into some of the constraints that are put on ALL app developers – the only connection any app has with the actual hardware is through the iOS software interface – also known as the API (Application Programming Interface). For instance, Apple passes on to the developers through the API only 3 focus modes. That’s it. So you will start to see certain similarities between all camera apps, as they all have common roots.

There are many differences, due to the way a given developer uses the functions of the camera, the human interface, the graphical design, the accuracy and speed of computations in the app, etc. It’s a wide open field, even if everyone starts from the same place.

In addition, the feature sets made available through the iOS API change with each hardware model, and can (and do!) change with upgrades of the iOS. Of course, each time Apple changes the underlying API, each app developer is likely to need to update their software as well. So then you’ll get the little red number on your App Store icon, telling you it’s time to upgrade your app – again.

The capabilities of the two cameras (front-facing and rear-facing) are markedly different. In fact, all of the discussion in this series has dealt only with the rear-facing camera. That will continue to be the case, since the front-facing camera is of very low resolution, intended pretty much just to support FaceTime and other video calling apps.

Basic iOS structure

The iOS is like an onion, layers built upon layers. At the center of the universe… is the Core. The most basic is the Core OS. Built on top of this are additional Core Layers: Services, Data, Foundation, Graphics, Audio, Video, Motion, Media, Location, Text, Image, Bluetooth – you get the idea…

Wrapped around these “apple cores” are Layers, Frameworks and Kits. These Apple-provided structures further simplify the work of the developer, provide a common and well tuned user interface, and expand the basic functionality of the core systems. Some examples are:  Media Layer (including MediaPlayer, MessageUI, etc.); the AddressBook Framework; the Game Kit; and so on.

Our concern here will be only with a few structures – the whole reason for bringing this up is to allow you, the user, to understand what parameters on the camera and imaging systems can be changed and what can’t.

Focus Modes

There are three focus modes:

  • AVCaptureFocusModeLocked: the focal area is fixed.

This is useful when you want to allow the user to compose a scene then lock the focus.

  • AVCaptureFocusModeAutoFocus: the camera does a single scan focus then reverts to locked.

This is suitable for a situation where you want to select a particular item on which to focus and then maintain focus on that item even if it is not the center of the scene.

  • AVCaptureFocusModeContinuousAutoFocus: the camera continuously auto-focuses as needed.

Exposure Modes

There are two exposure modes:

  • AVCaptureExposureModeLocked: the exposure mode is fixed.
  • AVCaptureExposureModeAutoExpose: the camera continuously changes the exposure level as needed.

Flash Modes

There are three flash modes:

  • AVCaptureFlashModeOff: the flash will never fire.
  • AVCaptureFlashModeOn: the flash will always fire.
  • AVCaptureFlashModeAuto: the flash will fire if needed.

Torch Mode

Torch mode is where a camera uses the flash continuously at a low power to illuminate a video capture. There are three torch modes:

  • AVCaptureTorchModeOff: the torch is always off.
  • AVCaptureTorchModeOn: the torch is always on.
  • AVCaptureTorchModeAuto: the torch is switched on and off as needed.

White Balance Mode

There are two white balance modes:

  • AVCaptureWhiteBalanceModeLocked: the white balance mode is fixed.
  • AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: the camera continuously changes the white balance as needed.
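
For readers who want to see what these modes look like in practice: in today’s Swift AVFoundation the same concepts surface as enums on AVCaptureDevice, and any change must be wrapped in a configuration lock. A minimal sketch (modern Swift API shown for illustration – the Objective-C constants listed above map onto it directly):

import AVFoundation

func configureCamera(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()      // required before changing any mode
    defer { device.unlockForConfiguration() }

    // Always check support first - not every device offers every mode.
    if device.isFocusModeSupported(.continuousAutoFocus) {
        device.focusMode = .continuousAutoFocus
    }
    if device.isExposureModeSupported(.continuousAutoExposure) {
        device.exposureMode = .continuousAutoExposure
    }
    if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
        device.whiteBalanceMode = .continuousAutoWhiteBalance
    }
    if device.hasTorch, device.isTorchModeSupported(.auto) {
        device.torchMode = .auto           // low-power continuous light for video
    }
}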

You can see from the above examples that many of the features of the camera apps you use today inherit these basic structures from the underlying AVFoundation capture API. There are obviously many, many more parameters available for control by a development team – depending on whether you are doing basic image capture, video capture, audio playback, modifying images with built-in filters, etc.

While we are on the subject of core functionality exposed by Apple, let’s discuss camera resolution.

Yes, I know we have heard a million times already that the iPhone4S has an 8MP maximum resolution (3264×2448). But there ARE other resolutions available. Sometimes you don’t want or need the full resolution – particularly if the photo function is only a portion of your app (ID, inventory control, etc.) – or even, as a photographer, you may want more storage capacity when a lower-resolution image is acceptable for the purpose at hand.

It’s almost impossible to find this data, even on Apple’s website. Very few apps give access to different resolutions, and the ones that do don’t give numbers – it’s ‘shirt sizes’ [S-M-L]. Deep in Apple’s programming guidelines I found the class AVCaptureStillImageOutput, which allows ‘presetting the session’ to one of the values below:

Still image presets:

  Photo    3264×2448
  High     1920×1080
  Med      640×480
  Lo       192×144

Video presets:

  1080P    1920×1080
  720P     1280×720
  480P     640×480

I then found one of the very few apps that supports ALL of these resolutions (almost DSLR, covered below) and shot test stills and video at each resolution to verify. Everything matched the above settings EXCEPT for the “Lo” preset in still image capture. The output frame measured 640×480, the same as “Med” – however the image quality was much lower. I believe the actual image IS captured at 192×144 and then scaled up to 640×480 – why, I am not sure, but it is apparent that the Lo image is of far lower quality than Med. The file size was smaller for the Lo image, but not by enough that I would ever use it: on the tests I shot, Lo = 86kB, Med = 91kB. The very small difference in size is not worth the big drop in quality.
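
For completeness: in later, Swift-era AVFoundation these presets are selected on the capture session itself. A brief sketch using the modern API names (shown as an orientation aid, not the iOS 5 calls this post was written against):

import AVFoundation

let session = AVCaptureSession()

// Verify support before setting - available presets vary by device.
if session.canSetSessionPreset(.photo) {
    session.sessionPreset = .photo          // full-resolution stills ("Photo" above)
} else if session.canSetSessionPreset(.hd1920x1080) {
    session.sessionPreset = .hd1920x1080    // "High" / 1080P
}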

So… now you know. You may never have need of this, or not have an app that supports it – but if you do require the ability to shoot thousands of images and have them all fit in your phone, now you know how.

There are two other important aspects of image capture that are set by the OS and not changeable by any app:  color space and image compression format. These are fixed, but different, for still images and video footage. The color space (which for the uninitiated is essentially the gamut – or range of colors – that can be reproduced by a color imaging system) is set to sRGB. This is a common and standard setting for many digital cameras, whether full sized DSLR or cellphones.

It’s beyond the scope of this post to get into color space, but I personally will be overjoyed when the relatively limited gamut of sRGB is put to rest… however, it is appropriate for the iPhone and other cellphone camera systems due to the limitations of the small sensors.

The image compression format used by the iPhone (all models) is JPEG, producing the well-known .jpg file format. Additional comments on this format, and potential artifacts, were discussed in the last post. Since there is nothing one can do about this, no further discussion at this time.

In the video world, things are a little different. We actually have to be aware of audio as well – we get stereo audio along with the video, so we have two different compression formats to consider (audio and video), as well as the wrapper format (think of this as the envelope that contains the audio and video track together in sync).

One note on audio:  if you use a stereo external microphone, you can record stereo audio along with the video shot by the iPhone4S. This requires an external device which connects via the 30-pin docking connector. You will get far superior results – but of course it’s not as convenient. Video recordings made with the on-board microphone (same one you use to speak into the phone) are mono only.

The parameters of the video and audio streams are detailed below: (this example is for the full 1080P resolution)

General

Format : MPEG-4
Format profile : QuickTime
Codec ID : qt
Overall bit rate : 22.9 Mbps

Video

ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Baseline@L4.1
Format settings, CABAC : No
Format settings, ReFrames : 1 frame
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Bit rate : 22.4 Mbps
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Rotation : 90°
Frame rate mode : Variable
Frame rate : 29.500 fps
Minimum frame rate : 15.000 fps
Maximum frame rate : 30.000 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.367
Title : Core Media Video
Color primaries : BT.709-5, BT.1361, IEC 61966-2-4, SMPTE RP177
Transfer characteristics : BT.709-5, BT.1361
Matrix coefficients : BT.709-5, BT.1361, IEC 61966-2-4 709, SMPTE RP177

Audio

ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Bit rate mode : Constant
Bit rate : 64.0 Kbps
Channel(s) : 1 channel
Channel positions : Front: C
Sampling rate : 44.1 KHz
Compression mode : Lossy
Title : Core Media Audio

The highlights of the video/audio stream format are:

  • H.264 (MPEG-4) video compression, Baseline Profile @ Level 4.1, 22Mb/s
  • QuickTime wrapper (.mov)
  • AAC-LC audio compression, 44.1kHz, 64kb/s

The color space for the video is the standard adopted for HD television, Rec709. Importantly, this means that videos shot on the iPhone will look correct when played out on an HDTV.

The particular sample video I shot for this exercise was recorded at just under 30 frames per second (fps); the video camera supports a range of 15-30fps, controlled by the application.

Software Applications for Still & Video Imaging on the iPhone4S

The following part of the discussion will cover a few of the apps that I use on the iPhone4S. These are just what I have come across and find useful – this is not even close to all the imaging apps available for the iPhone. I obtained all of the apps via the normal retail Apple App Store – I have no relationship with any of the vendors, and they are unaware of this article (well, at least until it’s published…).

I am not a professional reviewer, and take no stance as to absolute objectivity – I do always try to be accurate in my observations, but reserve the right to have favorites! The purpose in this section is really to give examples of how a few representative apps manage to expose the hardware and underlying iOS software to the user, showing the differences in design and functionality.

These apps are mostly ‘purpose-built’ for photography – as opposed to some other apps that have a different overall purpose but contain imaging capabilities as part of the overall feature set. One example (that I have included below) is EasyRelease, an app for obtaining a ‘model release’ [legal approval from the subject to use his/her likeness for commercial purposes]. This app allows taking a picture with the iPhone/iPad for identification purposes – so has some very basic image capture abilities – it’s not a true ‘photo app’.

BTW, this entire post has focused only on the iPhone camera, not the iPad (both 2nd & 3rd generation iPads contain cameras). I personally don’t think a tablet is an ideal imaging device – it’s more a handy accessory for when you have your tablet out and need to take a quick snap than a camera. Evidently Apple feels this way as well, since the camera hardware in the iPads has always lagged significantly behind that of the iPhone. However, most photo apps will work on the iPad as well as the iPhone (even on the 1st generation model – with no camera), since many of the apps support working with photos from the Camera Roll (library) as well as directly from the camera.

I frequently work this way – shoot on iPhone, transfer to iPad for easier editing (better for tired eyes and big fingers…), then store or share. I won’t get into the workflows of moving images around – it’s not anywhere near as easy as it should be, even with iCloud – but it’s certainly possible and often worth the effort.

Here is the list of apps that will be covered. For quick reference I have listed them all below with a simple description, a more detailed set of discussions on each app follows.

[Note:  due to the level of detail, including many screenshots and photo examples used for the discussion of each app, I have separated the detailed discussions into separate posts – one for each app. This allows the reader to only select the app(s) they may be interested in, as well as keep the overall size of an individual post to a reasonable size. This is important for mobile readers…]

Still Imaging

Each of the app names (except for original Camera) is a link that will take you to the corresponding page in the App Store.

Camera  The original photo app included on every iPhone. Basic but intuitive – and of course the biggest plus is the ability to fast-launch it without unlocking to the home screen first. For streetphotography (my genre) this is a big feature.

Camera+  I use this as much for editing as shooting; its biggest advantage over the native iPhone camera app is that you can set different parts of the frame for exposure and focus. The info here covers the just-released version 3.0.

Camera Plus Pro  This is similar to the above app (Camera+) – with some additional features, not the least of which is that it shoots video as well as still images. Although made by a different company, it has many similar features, filters, etc. It allows for some additional editing functions and features ‘live filters’ – where you can add the filter before you start shooting, instead of as a post-production workflow as in Camera+. However, there are tradeoffs (compression ratio, shooting speed, etc.). Compare the apps carefully – as always, know your tools… {NOTE: There are three different apps with very similar names: Camera+, made by TapTapTap with the help of pixel wizard Lisa Bettany; Camera Plus, made by Global Delight Technologies; and Camera Plus Pro, also by Global Delight – the app under discussion here. Camera+ costs $0.99 at the time of this post; Camera Plus is free; Camera Plus Pro is $1.99 — are you confused yet? I was… to the point where I felt I needed to clarify this situation of unfortunately very similar brand names for somewhat similar apps – but there are indeed differences. I’m going to be as objective in my observations as possible. I am not reviewing Camera Plus, as I don’t use it. Don’t infer anything from that – this whole blog is about what I personally use. I will be as scientific and accurate as possible once I write about a topic, but it’s just personal preference as to what I use.}

almost DSLR  The closest thing to fully manual control of the iPhone camera you can get. Takes some training, but is very powerful once you get the hang of it.

ProHDR  I use this a lot for HDR photography. The pic below was taken with it – unretouched! That’s how it came out of the camera…

Big Lens  This allows you to manually ‘blur’ the background to simulate shallow depth of field. Quite useful, since the iPhone’s roughly 30mm-equivalent (in 35mm terms) lens puts almost everything in focus.

Squareready  If you use Instagram then you know you need to upload in square format. Here’s the best way to do that.

PhotoForge2  Powerful editing app. Basically Photoshop on the iPhone.

Snapseed  Another very good editing app. I use it for straightening pix, as well as for its ability to tweak small areas of a picture differently. On some iPhone snaps I have adjusted 9 different areas of the picture with things like saturation, contrast, brightness, etc.

TrueDoF  This one calculates true depth-of-field for a given lens, sensor size, etc. I use this when shooting DSLR to plan my range of focus once I know my shooting distance.

OptimumCS-Pro  This is sort of the inverse of the above app – here you enter the depth of field you want, then OCSP tells you the shooting distance and aperture you need to achieve it.

Iris Photo Suite  A powerful editing app, particularly in color balance, changing histograms, etc. Can work with layers like Photoshop, perform noise reduction, etc.

Filterstorm  I use this app to add visible watermarks to images, as well as many other editing functions. Works with layers, masks, variable brushes for effects, etc.

Genius Scan+  While this app was intended for scanning documents with the camera to pdf (and I use it for that as well), I found that it works really well to straighten photos… like when you are shooting architecture and have unavoidable keystone distortion… Just be sure to pull back and give yourself some surround on your subject, as the perspective-cropping technique used to straighten costs you some of your frame…

Juxtaposer  This app lets you layer two different photos onto each other, with very controllable blending.

Frame X Frame  Camera app, used for stop motion video production as well as general photography.

Phonto  One of the best apps for adding titles and text to shots.

SkipBleach  This mimics the effect of skipping (or reducing) the bleach step in photochemical film processing. It’s what gives that high contrast, faded and harsh ‘look’.

Monochromia  You probably know that getting a good B&W shot out of a color original is not as simple as just desaturating… here’s the best iPhone app for that.

MagicShutter  This app is for time exposures on iPhone, also ‘light painting’ techniques.

Easy Release  Professional model release. Really, really good – I use it on iPad and have never gone back to paper. Full contractual terms & conditions, you can customize with your additional wording, logo, etc. – a relatively expensive app ($10) but totally worth it in terms of convenience and time saved if you need this function.

Photoshop Express  This is actually a bit disappointing for a $5 app – others above do more for less – except that the noise reduction (a new feature) is worth the price alone. It’s really, really good.

Motion Imaging

Movie*Slate  A very good slate app.

Storyboard Composer  Excellent app for building storyboards from shot or library photos, adding actors, camera motion, script, etc. Powerful.

Splice  Unbelievable – a full video editor for the iPhone/iPad. Yes, you can drop movies and stills on a timeline, add multiple sound tracks and mix them, and work in full HD; it has loads of video and audio efx, plus transitions, burned-in titles, resizing, cropping, etc. Now that doesn’t mean that I would choose to edit my next feature on a phone…

iTC Calc  The ultimate time code app for iDevices. I use on both iPad and iPhone.

FilmiC Pro  Serious movie camera app for iPhone. Select shooting mode, resolution, 26 frame rates, in-camera slating, colorbars, multiple bitrates for each resolution, etc. etc.

Camera Plus Pro  This app is listed under both sections, as it has so many features for both still and motion photography. The video capture/edit portion even has numerous filters that can be used during capture.

Camcorder Pro  Simple but powerful HD camera app. Anti-shake and other features.

This concludes this post on the iPhone4S camera software. Please check out the individual posts following for each app mentioned above. I will be posting each app discussion as I complete it, so it may be a few days before all the app posts are uploaded. Please remember these discussions of the apps are merely my observations on their behavior – they are not intended to be a full tutorial, operations manual or other such guide. However, in many cases the app publisher offers little or no extra information, so I believe the data provided will be useful.
