Objective Photography is an Oxymoron (all photos lie…)

August 18, 2016 · by parasam

There is no such thing as an objective photograph

A recent article in the Wall Street Journal (here) entitled “When Pictures Are Too Perfect” prompted this post. The premise of the article is that too much ‘manipulation’ (i.e. Photoshopping) is present in many of today’s images, particularly in photojournalism and photo contests. There is evidently an arbitrary standard (that no one appears able to objectively define) which posits that essentially only an image ‘straight out of the camera’ is ‘honest’ or acceptable – particularly if one is a photojournalist or is entering an image into some form of competition. Examples are given, such as Harry Fisch having a top prize from National Geographic (for the image “Preparing the Prayers at the Ganges”) taken away because he digitally removed an extraneous plastic bag from an unimportant area of the image. Steve McCurry, best known for his iconic “Afghan Girl” photo on the cover of National Geographic magazine in 1985, was accused of digital manipulation of some images shot in 1983 in Bangladesh and India.

On the whole, I find this absurd and the logic behind such attempts at defining an ‘objective photograph’ fatally flawed. From a purely scientific point of view, there is absolutely no such thing as an ‘objective’ photograph – for a host of reasons. All photographs lie, permanently and absolutely. The only distinction is by how much, and in how many areas.

The First Lie: Framing

The very nature of photography, from the earliest days until now, has at its core an essential feature: the frame. Only a portion of what the photographer can see can be captured as an image; there are four edges to every photograph. Whether the final ‘edges’ presented to the viewer are due to the limitations of the camera/film/image sensor, or to cropping during the editing process, is immaterial. The initial choice of frame is made by the photographer, in concert with the camera in use, which presents physical limitations that cannot be exceeded. The choice of frame is completely subjective: it is the eye/brain/intuition of the photographer that decides in the moment where to point the camera and what to include in the frame. Is pivoting the camera a few degrees to the left to avoid an unsightly telephone pole “unwarranted digital manipulation?” Most news editors and photo contest judges would say no. But what if the exact same result is obtained by cropping the image during editing? Already we start to see disagreement in the literature.

If Mr. Fisch had simply walked over and picked up the offending plastic bag before exposing the image, he would likely still be the deserved recipient of his 1st-place prize from National Geographic; but because he removed the bag during editing, his photograph was disqualified. Apply the same logic to Leonardo da Vinci’s “Mona Lisa”: there is a balustrade with two columns behind the sitter, and the perfect symmetry in the placement of Lisa Gherardini (the presumed model) between the columns helps frame the subject. Painting takes time; it is likely that a bird landed on the balustrade from time to time. Was Leonardo supposed to include the bird or not? Did he ‘manipulate’ the image by including only the elements that were important to the composition? Would any editor or judge dare ask him today, if that were possible?

“So-So Happy!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

“So-So Happy… NOT!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

A combination example of framing and depth-of-field. One photographer is standing 6 ft further away (from my camera position) than the other, but the foreshortening of the 200mm telephoto appears to depict ‘dueling photographers’. [©2012 Ed Elliott / Clearlight Imagery]

The Second Lie: The Lens

No photograph can occur without a lens. Every lens has certain irrefutable properties, focal length and maximum aperture being the most important. Each of these parameters imparts a vital, and subjective, aspect to the image subsequently captured. Since the ‘lingua franca’ of focal length is the ubiquitous 35mm camera, we can generalize here: 50mm is the so-called ‘normal’ lens; 35mm is considered ‘wide angle’, 24mm ‘very wide angle’ and 10mm a ‘fisheye’. Going in the other direction, 85mm is often considered a ‘portrait’ lens (slight close-up), 105mm a medium ‘telephoto’, 200mm a ‘telephoto’ and anything beyond is for sports or space exploration. As focal length increases, less of the scene fits in the frame and, for a given aperture, depth of field shrinks: wide angle lenses tend to bring the entire field of view into sharp focus, while telephotos blur out everything except what the photographer has selected as the prime focus point.
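
To put rough numbers on these focal-length categories, here is a quick sketch in Python (my own illustration, assuming a full-frame 36mm-wide sensor and an ideal rectilinear lens, so the 10mm ‘fisheye’ figure is only nominal):

```python
import math

SENSOR_WIDTH_MM = 36.0   # horizontal dimension of a full-frame ("35mm") sensor

def horizontal_angle_of_view(focal_length_mm: float) -> float:
    """Horizontal angle of view, in degrees, for a rectilinear lens on a full-frame sensor."""
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_length_mm)))

for f in (10, 24, 35, 50, 85, 105, 200, 400):
    print(f"{f:>4} mm  ->  {horizontal_angle_of_view(f):5.1f} degrees horizontal")
```

A 50mm lens works out to roughly 40° of horizontal view, while a 200mm telephoto sees only about 10° – one more reason two lenses can tell such different stories about the same scene.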

Normal lens [©2016 Ed Elliott / Clearlight Imagery]

Telephoto lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

Wide Angle lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

FishEye lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) – curvature and edge distortions are normal for such an extreme angle-of-view lens   [©2016 Ed Elliott / Clearlight Imagery]

In addition, each lens type distorts the field of view noticeably: wide angle lenses tend to exaggerate the distance between foreground and background, making the closer objects in the frame look larger than they actually are and distant objects even smaller. Telephoto lenses have the opposite effect, foreshortening the image and ‘flattening’ the resulting picture. For example, in a long telephoto shot of a tree on a ridge backlit by the moon, both the tree and the moon can be tack sharp, and the moon appears to sit directly behind the tree even though it is roughly 239,000 miles away.

The other major ‘subjective’ quality of any lens is the aperture chosen by the photographer. Commonly known as the “f-stop”, this is the ratio of the focal length of the lens divided by the diameter of the ‘entrance pupil’ (the size of the hole that the aperture diaphragm is set to for a given capture). The maximum aperture (the largest ‘hole’ that can be set by the photographer) depends on the diameter of the lens itself in relation to the focal length. For example, with a ‘normal’ 50mm lens, if the entrance pupil is 25mm in diameter then the maximum aperture is f/2 (50/25). Larger apertures (lower f-stop ratios) require larger lenses, and are correspondingly more difficult to use, heavier and more expensive. One can see that an f/2 lens for a 50mm focal length is not that huge; to obtain the same f/2 ratio for a 200mm telephoto would require a front element at least 100mm (4in) in diameter – making such a device huge, heavy and obscenely expensive. As a quick comparison (Nikon lenses, full frame, prime lens, priced from B&H Photo – a discount photo equipment supplier), a 50mm f/2.8 lens costs $300, while the same lens in f/1.2 costs $700. A 400mm telephoto in f/5.6 would be $2,200, while an identical focal length with a maximum aperture of f/2.8 will set you back a little over $12,000.
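
The arithmetic is simple enough to sketch. The snippet below (Python, simply restating the 50mm f/2 and 200mm f/2 examples from the paragraph above) computes the f-stop and the front opening a given f-stop demands:

```python
def f_number(focal_length_mm: float, entrance_pupil_mm: float) -> float:
    """f-stop = focal length divided by entrance-pupil diameter."""
    return focal_length_mm / entrance_pupil_mm

def pupil_needed_mm(focal_length_mm: float, f_stop: float) -> float:
    """Entrance-pupil diameter required to reach a given f-stop."""
    return focal_length_mm / f_stop

print(f_number(50, 25))         # 2.0   -> the 50mm f/2 example above
print(pupil_needed_mm(200, 2))  # 100.0 -> a 200mm f/2 needs a ~100mm opening
```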

Exaggeration of object size with wide angle lens: farther objects appear much smaller than in ‘reality’. [©2011 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image as a result of long telephoto lens (f/8, 400mm lens) – the crane is hundreds of feet closer to the camera than the dark buildings behind, but looks like they are directly adjacent. [©2013 Ed Elliott / Clearlight Imagery]

Depth of field with shallow aperture (f/2.4) – in this case even with a wide angle lens the background is out of focus due to the large distance between the foreground and the background (in this case the Hudson River separated the two…) [©2013 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image with a long telephoto lens. The ship is almost 1/4 mile further away than the green roadway sign, yet appears to be directly behind it… (f/4, 400mm) [©2013 Ed Elliott / Clearlight Imagery]

Wide angle lens (14-24mm zoom lens, set at 16mm – f/2.8) [©2012 Ed Elliott / Clearlight Imagery]

Shallow depth of field due to large aperture on telephoto lens (f/4 – 200mm lens on full-frame 35mm DSLR) [©2012 Ed Elliott / Clearlight Imagery]

Wide angle shot, demonstrating sharp focus from foreground to the background. Also exaggeration of perspective makes the bow of the vessel appear much taller than the stern. [©2013 Ed Elliott / Clearlight Imagery]

The bottom line is that the choice of lens and aperture is a controlling element of the photographer (or her pocketbook) – and has a huge effect on the image taken with that lens and setting. None of these choices can be deemed to be either ‘analog’ or ‘digital’ manipulation of the image during editing, but they have arguably a greater effect on the outcome, message, impact and tenor of the photograph than anything that can be done subsequently in the darkroom (whether chemical or digital).

The Third Lie: Shutter Speed

Every exposure is a product of two factors: Light × Time. The amount of light that strikes the negative (or digital sensor) is governed by the selected aperture (and possibly by any additional filters placed in front of the lens); the duration for which the light is allowed to impinge on the negative is set by the shutter speed. While the main purpose of setting the shutter speed is to produce the correct exposure once the aperture has been selected (to avoid either under- or over-exposing the image), there is a huge secondary effect of shutter speed on any motion of either the camera or objects in the frame. Fast shutter speeds (over 1/125th of a second with a normal lens) will essentially freeze any motion, while slow shutter speeds will result in ‘shake’, ‘blur’ and other motion artifacts. While some of these can be just annoying, in the hands of a skilled photographer motion artifacts tell a story. And likewise a ‘freeze-frame’ (from a very fast shutter speed) can distort reality in the other direction, giving the observer a point of view that the human eye could never glimpse in reality. The hours-long time exposure of star trails or the suspended-animation shot of a bullet about to pierce a balloon are both ‘manipulations’ of reality – but they take place as the image is formed, not in the darkroom. The subjective experience of a football distorted as the kicker’s foot impacts it – locked in time by a shutter speed of 1/2000th of a second – is very different from the same shot of the kicker at 1/15th of a second, where his leg is a blurry arc against a sharp background of grass. Two entirely different stories, just from the choice of shutter speed.
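
Some back-of-the-envelope arithmetic shows why the two shutter speeds tell such different stories. The sketch below (Python; the 20 m/s foot speed is an assumed, purely illustrative figure) computes how far a moving subject travels while the shutter is open:

```python
def travel_during_exposure(speed_m_per_s: float, shutter_s: float) -> float:
    """Distance the subject moves (in subject space) while the shutter is open."""
    return speed_m_per_s * shutter_s

kick_speed = 20.0   # assumed speed of the kicker's foot, m/s (illustrative only)
for shutter in (1 / 2000, 1 / 125, 1 / 15):
    travel_mm = travel_during_exposure(kick_speed, shutter) * 1000
    print(f"1/{round(1 / shutter):>4} s  ->  {travel_mm:7.1f} mm of subject travel")
```

At 1/2000 s the foot travels about a centimeter – effectively frozen – while at 1/15 s it sweeps well over a meter, which is the blurry arc described above.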

Fast shutter speed to stop action [©2013 Ed Elliott / Clearlight Imagery]

Combination of two effects: fast shutter speed to stop motion (but not too fast, slight blurring of left foot imparts motion) – and shallow depth of field to render background soft-focus (f/4, 200mm lens) [©2013 Ed Elliott / Clearlight Imagery]

High shutter speed to freeze the motion. 1/2000 sec. [©2012 Ed Elliott / Clearlight Imagery]

Fast shutter speed to provide clarity and freeze the motion. 1/800 sec @ f/8 [©2012 Ed Elliott / Clearlight Imagery]

Although a hand-held shot, I wanted as fine-grained a result as possible, so took advantage of the stillness of the subjects and a convenient wall on which to place the camera. 2 sec exposure with ISO 500 at f/8 to keep the depth of field. [©2012 Ed Elliott / Clearlight Imagery]

The Fourth Lie: Film (or Sensor) Sensitivity [ISO]

As if Pinocchio’s nose hasn’t grown long enough already, we have yet another ‘distortion’ of reality that every image contains as a basic building block: that of film/sensor sensitivity. While we have discussed exposure as a product of Light Intensity X Time of Exposure, one further parameter remains. A so-called ‘correct’ exposure is one that has a balance of tonal values, and (more or less) represents the tonal values of the scene that was photographed. This means essentially that blacks, shadows, mid-tones, highlights and whites are all apparent and distinct in the resulting photograph, and the contrast values are more or less in line with that of the original scene. The sensitivity of the film (or digital sensor) is critical in this regard. Very sensitive film will allow a correct image with a lower exposure (either a smaller aperture, faster shutter speed, or both), while a ‘slow’ [insensitive] film will require the opposite.
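
Aperture, shutter speed and sensitivity all trade off against one another, and that trade-off is easy to sketch. The snippet below (Python; the particular settings are arbitrary examples of mine, not taken from the text) shows three very different-looking combinations that land at essentially the same exposure value:

```python
import math

def exposure_value(f_stop: float, shutter_s: float, iso: float) -> float:
    """Exposure value referenced to ISO 100: log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_stop ** 2 / shutter_s) - math.log2(iso / 100)

print(exposure_value(8, 1 / 125, 100))     # ~13.0  deep depth of field, moderate shutter
print(exposure_value(2.8, 1 / 1000, 100))  # ~12.9  wide open, motion-freezing shutter
print(exposure_value(8, 1 / 500, 400))     # ~13.0  same f/8, faster shutter bought with 2 stops of ISO
```

All three land near EV 13, yet the resulting images – depth of field, motion rendering, grain or noise – would look quite different. That difference is precisely the photographer’s subjective choice.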

A high ISO was necessary to capture the image during late twilight. In addition a slow shutter speed was used – 1/15 sec with ISO of 6400. [©2011 Ed Elliott / Clearlight Imagery]

Low ISO (50) to achieve relatively fine grain and best possible resolution (this was a cellphone shot). [©2015 Ed Elliott / Clearlight Imagery]

Cellphone image at dusk, resulting in ISO 800 with 1/15 sec exposure. Taken from a parking garage, the highlight on the palm is from car headlights. [©2012 Ed Elliott / Clearlight Imagery]

Night photography often requires very high ISO values and slow shutter speeds. The resulting grain can provide texture as opposed to being a detriment to the shot. [©2012 Ed Elliott / Clearlight Imagery]

Fine grain achieved with low ISO of 50. [©2012 Ed Elliott / Clearlight Imagery]

Slow ISO setting for high resolution, minimal grain (ISO 50) [©2012 Ed Elliott / Clearlight Imagery]

Sometimes you frame the shot and do the best you can with the other parameters – and it works. Cellphone image at night meant slow shutter speed (1/15 sec) and lots of grain with ISO 800 – but the resultant grain and blurring did not detract from the result. [©2012 Ed Elliott / Clearlight Imagery]

A corollary to film sensitivity is grain (in film) or noise (in digital sensors). If you desire a fine-grained, super-sharp negative, then you must use a slow film. If you need a fast film that can produce an acceptable image in low light without a flash – say for photojournalism or surveillance work – then you must accept grain the size of rice in some cases… Life is all about compromise. Again, the final outcome is subjective, and totally within the control of the accomplished photographer, but this exists completely outside the darkroom (or Photoshop). Two identical scenes shot with widely disparate ISO films (or sensor settings) will give very different results. A slow ISO will produce a very sharp, super-realistic image, while a very fast ISO will be grainy, somewhat fuzzy and can tend towards surrealism if pushed to an extreme. [Technical note: the arithmetic portion of the ISO rating is the same as the older ASA rating scale; I use the current nomenclature.]

Editing: White Lies, Black Lies, Duotone and Technicolor…

In my personal work as a streetphotographer (my gallery is here) I tell ‘white lies’ all the time in editing. By that I mean the small adjustments to focus, color balance, contrast, highlight and shadow balance, etc. This is a highly personal and subjective process. I learned from master photographers (including Ansel Adams), books, and much trial and even more error… to pre-visualize my shots, and to mentally place the components of the image on the Zone Scale as accurately as possible with the equipment and lighting at hand. This process was most helpful when I was in university with no money – every shot cost, both in film and in developing chemicals. I would often choose between beer and film… film always won… fewer friends, more images… not quite sure about that choice, but I was fascinated with imagery. While pre-visualization is, I feel, an important bit of magic and can make the difference between an ok image and a great one, it’s not an easy process to follow in candid streetphotography, where the recognition of a potential shot and the chance to grab it is often a matter of 1-2 seconds.

This results, quite frequently, in things in the image not being where I imagined them in terms of composition, lighting, color balance, etc. So enter my ‘white lies’. I used to accomplish this in the darkroom with push/pull developing and significant tweaking during printing (burning, dodging, different choices of contrast printing papers, etc.). Now I use Photoshop. I’m not particularly an Adobe disciple, but I started with this program in 1989 with version 0.87 (known as part of Barneyscan, on my Mac Classic) and we’ve kind of grown up together… I just haven’t bothered to learn another program. It does what I need; I’m sure that I only know about 20% of its current capabilities, but that’s enough for my requirements.

The other extreme that can be accomplished by Photoshop experts (and I use the term generically here) is the ‘black lie’. This is where one puts Oprah’s head on someone else’s body, performs ‘digital liposuction’ to the extent that Lena Dunham and Adele both scream “enough!”, and many celebrities find their faces applied to actors and scenes (typically in North Hollywood) where they have never been, nor would want to be… There’s actually a great novel by the late Michael Crichton [Rising Sun, 1992] that contains a detailed subplot about digital photomanipulation of video imagery. At that time, it took a supercomputer to accomplish the detailed and sophisticated retouching of long video sequences – today tools such as Photoshop and After Effects could accomplish this on a desktop workstation in a matter of hours.

"Duotone" technique [background masked and converted to monochrome to focus the viewer on the foreground image]

“Duotone” technique [background masked and converted to monochrome to focus the viewer on the foreground image] [©2016 Ed Elliott / Clearlight Imagery]

A technique I frequently use is Duotone – and even here I am being technically inaccurate. What I mean by this is separating the object of interest from the background by masking the subject and turning the rest of the image into black and white. The juxtaposition of a color subject against a monochrome background helps isolate and focus the viewer’s attention on the subject. Frequently in streetphotography the opportunity to place the subject against a non-intrusive background doesn’t exist, so this technique is quite effective in ‘turning down’ the importance of the often busy and distracting surrounds. [Technically the term duotone is used for printing the entire image in gradations of only two colors]. Is this ‘manipulation’? Yes. Does it materially detract from, or alter the intent of, the original image that I pre-visualized in my head? No. I firmly stand behind this point of view, that all photographs “lie” to one extent or another, and any tool that the photographer has at his or her hand to generate a final image that is in accordance with the original intent is fair game. What matters is the act of conveying the vision of the photographer to the brain of the viewer. Period.
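
For anyone curious how this selective-color effect can be done outside of Photoshop, here is a minimal sketch using Python and the Pillow library. The filenames are placeholders, and the mask (white over the subject, black elsewhere) is assumed to have been created separately with whatever masking method you prefer:

```python
from PIL import Image

photo = Image.open("photo.jpg").convert("RGB")
mask = Image.open("mask.png").convert("L")            # 8-bit mask: 255 = subject, 0 = background

background_bw = photo.convert("L").convert("RGB")     # monochrome version of the whole frame
result = Image.composite(photo, background_bw, mask)  # color where the mask is white, B&W elsewhere
result.save("duotone_style.jpg", quality=95)
```

Image.composite simply keeps the color pixels where the mask is white and the monochrome pixels everywhere else – which is all the ‘duotone’ trick described above really amounts to.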

The ‘photograph’ is just the medium that transports that image. At the end of the day, a photo is a conglomeration of pixels (either printed or glowing) that transmit photons to the human visual system, and ultimately end up in the visual cortex in the back of the human brain. That is where we actually “see”.

Early photographs (and motion picture films) were available only in black & white. When color photography first came along, the colors were not ‘natural’. As emulsions improved things got better, but even so there was a marked deviation from ‘natural’ that was actually ‘designed in’ by Kodak and other film manufacturers. The saturation and color mapping of Kodachrome did not match reality, but it did satisfy a public that equated punchy colors with a ‘good color photo’ and made those vacation memories happy ones… and therefore sold more film. The more subdued, and realistic, Ektachrome came along as professional photographers pushed for choice (and, quite frankly, an easier and more open developing process – Kodachrome could only be processed by licensed labs and was notoriously difficult to process well). The downside of early Ektachrome emulsions was the unfortunate instability of the dye layers in color transparency film – leading to rapid fading of both slides and movies.

As one who has worked in film preservation and restoration for decades, I found it interesting that an early color process (the Technicolor 3-stripe method), originally designed just to get vibrant colors onto the movie screen in the 1930s, had a resurgence in film preservation. It turned out that so many of the early Ektachrome films from the 1950s and 1960s experienced rapid fading that significant restoration efforts were necessary to salvage some important movies. The only way at that time (before economical digital scanning of movies was possible) was – after restoration of the color negative – to use the Technicolor separation process and make three separate black & white films representing the cyan, magenta and yellow dye layers. Then, someday in the future, the three negatives could be optically combined and printed back onto color film for viewing.
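
The digital-era analogue of that separation idea is easy to sketch. The snippet below (Python with Pillow; filenames are illustrative, and it works on RGB channels rather than the actual dye densities a lab separation records) splits a color frame into three monochrome records and recombines them later:

```python
from PIL import Image

frame = Image.open("frame.png").convert("RGB")
r, g, b = frame.split()                      # three single-channel "separation" records
r.save("sep_red.png")
g.save("sep_green.png")
b.save("sep_blue.png")

# Years later: recombine the three monochrome records into a color image.
restored = Image.merge("RGB", (Image.open("sep_red.png").convert("L"),
                               Image.open("sep_green.png").convert("L"),
                               Image.open("sep_blue.png").convert("L")))
restored.save("recombined.png")
```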

There is No Objective Truth in Photography (or Painting, Music…)

All photography is an illusion. Using a lens, a photo-sensitive element of some sort, and a box to restrict the image to only the light coming through the lens, a photograph is a rendering of what is before the lens. Nothing more. Distorted and limited by the photographer’s choice of point of view, lens, aperture, shutter speed, film/sensor and so on, the resultant image – if correctly executed – reflects at most the inner vision of the photographer’s mind and perception of the original scene. Every photograph has a story (some more boring than others).

One of the great challenges of photography (and possibly one of the reasons that until quite recently this art form was not taken seriously) is that on first glance many photos appear to be just a ‘copy of reality’ – and therefore to contain no inherent artistic value. Nothing could be further from the truth. It’s just that the ‘art’ hides in plain sight… The root of the problem that engendered this post is our collective, subjective, and inaccurate view that photographs are ‘truthful’ and accurately represent the reality that was before the lens. We naively assume that photos can be trusted, that they show us the only possible view of reality. It’s time to grow up, and to accept that photography, just like all other art forms, is a product of the artist, first and foremost.

Even the unassuming mom who is taking snapshots of her kids is making choices – whether she knows it or not – about each of the parameters already discussed. Since most snapshot (or cellphone) cameras have wide angle lenses, the ‘huge nose’ effect in close-up pics of babies and youngsters (which will haunt these innocent children forever on Facebook and Instagram – data never dies…) is just an artifact of lens choice and distance to subject. Somewhere along the line our moral compass got out of whack when we started drawing highly artificial lines around ‘acceptable editorial behavior’ and so on. An entirely different discussion – worthy of a separate post – can be had about the photographer’s (or publisher’s) intention in sharing an image. If a deliberate attempt is made to misrepresent the scene – for financial gain, the allocation of justice, a change in power, etc. – that is an issue. But the same issue exists whether the medium that transports such a distortion is the written word, an audio recording, a painting or a 3D hologram. It is illogical to apply a set of standards or restrictions to one art form and not another, just to attempt to rein in inadvertent or deliberate distortions in a story that may be deduced from the art by an observer.

To use another common example, we have all seen many photos of a full moon rising behind a skyline, trees on a ridge, etc. – typically with a really large moon – and most observers just appreciate the image, the impact, the feeling. Even some rudimentary science, and a bit of experience with photography, reveals that most such images are composites, with a moon image enlarged and layered in behind the foreground. The moon is simply never that large in relation to the rest of the image. In many cases I have seen, the lighting of the rest of the scene clearly shows that the foreground was shot at a different time of night than the moon (a full moon sits on the horizon only around dusk or dawn). I have also seen many full moons in photographs at astronomically impossible locations in the sky, given the longitude and latitude of the foreground shown in the image.

An example of "Moon on Steroids"... The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it's obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.

An example of “Moon on Steroids”… The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it’s obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.
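
As a quick sanity check on the ‘31 minutes of arc’ figure in the caption above, here is a small sketch (Python, assuming a full-frame sensor 24mm tall) of how large the moon’s image actually is at various focal lengths:

```python
import math

MOON_ARC_MIN = 31.0                       # apparent diameter of the moon, per the caption above
FRAME_HEIGHT_MM = 24.0                    # height of a full-frame sensor

theta = math.radians(MOON_ARC_MIN / 60)   # ~0.52 degrees, in radians

for focal_mm in (50, 200, 400, 800):
    moon_mm = 2 * focal_mm * math.tan(theta / 2)   # size of the moon's image on the sensor
    print(f"{focal_mm:>4} mm lens: moon is {moon_mm:4.1f} mm tall "
          f"({100 * moon_mm / FRAME_HEIGHT_MM:4.1f}% of the frame height)")
```

Even a 400mm telephoto renders the moon at only about 15% of the frame height, so a moon that dwarfs an entire tree almost certainly arrived in the picture some other way.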

Why is it that such an esteemed, and talented, photographer as Steve McCurry is chastised for removing some distracting bits of an image – which in no way detracted from the ‘story’ of the image – and yet I dare say that no one in their right mind would criticize Leonardo da Vinci for including a physically impossible background (the almost mythological mountains and seas) in his rendition of Lisa Gherardini for his painting of “Mona Lisa”? As someone who has worked in the film/video/audio industry for my entire professional life, I can tell you with absolute certainty that no modern audio recording – from Adele to Ziggy Marley – is released that is not ‘digitally altered’ in some fashion. Period. It is just an absolute in today’s production environment to ‘clean up’ every track, every mix, every completed master – removing unwanted echoes, noise, coughs, burps, and other audio equivalents of Mr. Fisch’s plastic bag… and no one, ever, has complained about this or accused the artists of being ‘dishonest’.

This double standard needs to be put to rest permanently. It reflects poorly on those who take this position, demonstrating their lack of technical knowledge and a narrow perception of the art form of photography, and furthermore gives power to those whose only interest is to malign others and detract from the powerful impact that a great image can create. If ignorant observers can really believe that an airplane in an image as depicted is ‘real’ (for the airplane to be of such a size in relation to the tunnel and ladders it would have to be flying at a massively illegal low altitude in that location) then such observers must take responsibility. Does the knowledge that this placement of the plane is ‘not real’ detract from the photo? Does the contraposition of ‘stillness vs movement’ (concrete and steel silo vs rapidly moving aircraft) create a visually stimulating image? Is it important whether that occurred ‘in reality’ or not? Would an observer judge it differently if this was a painting or a sketch instead of a photograph?

I love the art and science of photography. I am daily enamored with the images that talented and creative people all over the world share, whether camera originals, composites, pure fiction created in the ‘darkroom’ or some combination of all. This is a wondrous art form, and must be supported at all costs. It’s not easy; it takes dedication, effort, skill, perseverance, money, time and love – just as any art form does. I would hope that we could move the conversation to what matters: ‘truth in advertising’. In a photo contest, nothing, repeat nothing, should matter except the image itself. Just like painting, sculpture, music, ceramics, dance, etc., the observed ‘art’ should be judged only on the merits of the work itself, without subjective expectations or philosophical distortions. If an image is used to reinforce a particular ‘story’ – whether for ethical, legal or news purposes – then both the words and the images must be authentic. Authentic does not mean ‘un-retouched’; it does mean that there is no ‘black lie’ in what is conveyed.

To summarize: let’s stop believing that photographs are ‘real’ – and let’s start accepting the art, craftsmanship, effort and focus that this medium brings to all of us. Let’s apply a common frame of reference to all forms of art, whether they be painting, writing, photography, music, etc. – in terms of authenticity and purpose. Would we chide Escher for attempting to fool us with visual cues of an impossible reality?

Where Did My Images Go? [the challenge of long-term preservation of digital images]

August 13, 2016 · by parasam

Littered Memories – Photos in the Gutter (© 2016 Paul Watson, used with permission)

Image Preservation – The Early Days

After viewing the above image from fellow streetphotographer Paul Watson, I wanted to update an issue I’ve addressed previously: the major challenge that digital storage presents in terms of long-term archival endurance and accessibility. Back in my analog days, when still photography was a smelly endeavor in the darkroom for both developing and printing, I slowly learned about careful washing and fixing of negatives, how to make ‘museum’ archival prints (B&W), and the intricacies of dye-transfer color printing (at the time the only color print technology that offered substantial lifetimes). Prints still needed carefully restricted environments for both display and storage, but if all was done properly, a lifetime of 100 years could be expected for monochrome prints and even longer for carefully preserved negatives. Color negatives and prints were much more fragile, particularly color positive film. The emulsions were unstable, and many of the early Ektachrome slides (and motion picture films) faded rapidly after only a decade or so. A well-preserved dye-transfer print could be expected to last for almost 50 years if stored in the dark.

I served for a number of years as a consultant to the Los Angeles County Museum of Art, advising them on photographic archival practices, particularly relating to motion picture films. For many years the Bing Theatre hosted a fantastic series of screenings that offered a rare tapestry of great movies from the past – and helped many current directors and others in the industry become better at their craft. In particular, Ron Haver (the film historian, preservationist and LACMA director with whom I worked during that time) was instrumental in supervising the restoration, screening and preservation of many films that would now be in the dust bin of history without his efforts. I learned much from him, and the principles last to this day, even in a digital world that he never experienced.

One project in particular was interesting: bringing the projection room (and associated film storage facilities) up to Los Angeles County Fire Code so we could store and screen early nitrate films from the 1920s. [For those who don’t know, nitrate film is highly flammable, and once on fire will quite happily burn under water until all the film is consumed – it supplies its own oxygen while burning…] Fire departments were not great fans of this stuff… Due to both the large (and expensive) challenges of projecting this type of film and the continual degradation of the film stock, almost all surviving nitrate film has since been digitally scanned for preservation and safety. I also designed the telecine transfer bay for the only approved nitrate scanning facility in Los Angeles at that time.

What this all underscored was the considerable effort, expense and planning required for long-term image preservation. Now, while we may think that once digitized, all our image preservation problems are over – the exact opposite is true! We have ample evidence (glass plate negatives from the 1880s, B&W sheet film negatives from the early 1900s) that properly stored monochrome film can easily last 100 years or more, and is as readable today as the day the film was exposed, with no extra knowledge or specialized machinery. B&W movie film is just as stable, as long as it is printed onto a safety film base. Due to the inherent fading of so many early color emulsions, the only sure method for preservation (in the analog era) was to ‘color separate’ the negative film and print the three layers (cyan, magenta and yellow) onto three individual B&W films – the so-called “Technicolor 3-stripe process”.

Digital Image Preservation

The problem with digital image preservation is not due to the inherent technology of digital conversion – if done well that can yield a perfect reproduction of the original after theoretically an infinite time period. The challenge is how we store, read and write the “0s and 1s” that make up the digital image. Our computer storage and processing capability has moved so quickly over the last 40 years that almost all digital storage from more than 25 years ago is somewhere between difficult and impossible to recover today. This problem is growing worse, not better, in every succeeding year…

IBM 305 RAMAC Disk System 1956: IBM ships the first hard drive in the RAMAC 305 system. The drive holds 5MB of data at $10,000 a megabyte.

This is a hard drive. It holds less than 0.01% of the data of the smallest iPhone today…

One of the earliest hard drives available for microcomputers, c.1980. The cost then was $350/MB, today’s cost (based on 1TB hard drive) is $0.00004/MB or a factor of 8,750,000 times cheaper.

Paper tape digital storage as used by DEC PDP-11 minicomputers in 1975.

Paper punch card, a standard for data entry in the 1970s.

Floppy disks: (from left) 8in; 5-1/4″; 3-1/2″. The standard data storage format for microcomputers in the 1980s.

As can be seen from the above examples, digital storage has changed remarkably over the last few decades. Even though today we look at multi-terabyte hard drives and SSDs (Solid State Drives) as ‘cutting edge’, will we chuckle 20 years from now when we look back at something as archaic as spinning disks or NAND flash memory? With quantum memory, holographic storage and other technologies already showing promise in the labs, it’s highly likely that even the 60TB SSDs that Samsung just announced will take their place alongside 8-inch floppy disks in a decade or so…

And these issues (the physical storage medium) are actually the least of the problem. Yes, if you put your ‘digital negatives’ on a floppy disk 15 years ago and now want to read them, you have a challenge at hand… but with patience and some time on eBay you could probably assemble the appropriate hardware to retrieve the data onto a modern computer. The bigger issue is that of the data format: both of the drives themselves and of the actual image files. The file systems – the method used to catalog and find the individual images stored on whatever kind of physical storage device, whether ancient hard drive or floppy disk – have changed rapidly over the years. Most early file systems are no longer supported by current operating systems, so hooking up an old drive to a modern computer won’t work.

Even if one could find a translator from an older file system to a current one (there is very limited capability in this regard; many older file systems can literally only be read by a computer as old as the drive), that doesn’t solve the next issue: the image format itself. The issue of ‘backwards compatibility’ is one of the great Achilles heels of the entire IT industry. The huge push by all vendors to keep their users relentlessly updating to the latest software, firmware and hardware is largely to avoid these same companies having to support older versions of hardware and software. This is not a totally self-serving issue (although there are significant costs and time involved in doing so) – frequently certain changes in technology just can’t support an older paradigm any longer. The earliest versions of Photoshop files, PICT, etc. are not easily opened with current applications. Anyone remember Corel Draw? Even ‘common interchange’ formats such as TIFF and JPEG have evolved, and not every version is supported by every current image processing application.

The more proprietary and specific the image format, the more fragile it is in terms of archival longevity. For instance, it may seem that the best archival format would be the camera raw format – essentially the full original capture directly from the camera. File types such as RAW, NEF, CR2 and so on are typical. However, each of these is proprietary and typically has about a five-year life span in terms of active application support by the vendor. As camera models keep changing – more or less on a yearly cycle – the raw formats change as well. Third-party vendors, such as Adobe (with Photoshop), are under no obligation to support earlier raw formats forever… and as previously discussed, the challenge of maintaining backwards compatibility grows more complex with each passing year. There will always come a time when such formats will no longer be supported by currently active image retrieval, viewing or processing software.

Challenges of Long-Term Digital Image Preservation

Therefore two major challenges must be resolved in order to achieve long term storage and future accessibility of digital images. The first is the physical storage medium itself, whether that is tape (such as LTO-6), hard disk, SSD, optical, etc. The second is the actual image format. Both must be usable and able to transfer images back to the operating system, device and software that is current at the time of retrieval in order for the entire exercise of archival digital storage to be successful. Unfortunately, this is highly problematic at this time. As the pace of technological advance is exponentially increasing, the continual challenge of obsolescence becomes greater every year.

Currently there is no perfect answer to this dilemma – the only solution is proactivity on the part of the user. One must accommodate the continuing obsolescence of physical storage media, file systems, operating systems and file formats by moving the image files, on a regular and continual basis, to current versions of all of the above. Typically this is an exercise that must be repeated every five years – at current rates of technological development. For uncompressed images, other than the cost of the move/update there is no impact on the digital image – that is one of the plus sides of digital imagery. However, many images (almost all, if you are other than a professional photographer or filmmaker) are stored in a compressed format (JPG, TIFF-LZW/ZIP, MPG, MOV, WMV, etc.). Images and movies stored with lossy compression (JPG and most video formats – lossless schemes such as TIFF-LZW/ZIP are unaffected) will experience a small degradation in quality each time they are re-encoded into an updated format. The amount and type of artifacts introduced are highly variable, depending on the level of compression and many other factors. The bottom line is that after a number of re-encode cycles of a lossy-compressed file (say 10) it is quite likely that a visible difference from the original file can be seen.

Therefore, particularly for compressed files, a balance must be struck between updating often enough to avoid technical obsolescence and making the fewest number of copies over time in order to avoid image degradation. [It should be noted that potential image degradation will typically only be due to changing/updating the image file format, not moving a bit-perfect copy from one type of storage medium to another].
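
Both points can be illustrated with a short sketch (Python; ‘archive.jpg’ is a placeholder filename, and the Pillow library is assumed for the re-encoding half):

```python
import hashlib
import shutil
from PIL import Image

def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# 1. Moving a file between storage media: a bit-perfect copy can be proven with a checksum.
shutil.copy2("archive.jpg", "archive_copy.jpg")
assert sha256("archive.jpg") == sha256("archive_copy.jpg")   # identical bits, zero degradation

# 2. Re-encoding a lossy format: every save introduces a little more damage.
img = Image.open("archive.jpg")
for generation in range(10):
    img.save(f"gen_{generation}.jpg", quality=85)            # re-compress this generation
    img = Image.open(f"gen_{generation}.jpg")                # and use it as the next source
```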

This process, while a bit tedious, can be automated with scripts or other similar tools, and for the casual photographer or filmmaker will not be too arduous if undertaken every five years or so. It’s another matter entirely for professionals with large libraries, or for museums, archives and anyone else with thousands or millions of image files. A lot of effort, research and thought has been applied to this problem by these professionals, as this is a large cost of both time and money – and no solution other than what’s been described above has been discovered to date. Some useful practices have been developed, both to preserve the integrity of the original images as well as reduce the time and complexity of the upgrade process.

Methods for Successful Digital Image Archiving

A few of those processes are shared below to serve as a guide for those who are interested. Further searching will yield a large number of sites and information sources that address this challenge in detail.

  • The most important aspect of ensuring a long-term archival process that will result in the ability to retrieve your images in the future is planning. Know what you want, and how much effort you are willing to put in to achieve that.
  • While this may be a significant undertaking for professionals with very large libraries, even a few simple steps will benefit the casual user and can protect family albums for decades.
  • In addition to the steps discussed above (updating storage media, OS and file systems, and image formats) another very important aspect is “Where do I store the backup media?” Making just one copy and having it on the hard drive of your computer is not sufficient. (Think about fire, theft, complete breakdown of the computer, etc.)
    • The current ‘best practices’ recommendation is the “3-2-1” approach: Make 3 copies of the archival backup. Store in at least 2 different locations. Place at least 1 copy off-site. A simple but practical example (for a home user) would be: one copy of your image library in your computer. A 2nd copy on a backup drive that is only used for archival image storage. A 3rd copy either on another hard drive that is stored in a vault environment (fireproof data storage or equivalent) or cloud storage.
    • A note on cloud storage: while this can be convenient, be sure to check the fine print on liability, access, etc. from the cloud provider. This solution is typically feasible for up to a few terabytes; beyond that the cost can become significant, particularly when you consider storage for 10-20 years. Also, will the cloud provider be around in 20 years? What assurance do they provide in terms of buyout, bankruptcy, etc.? And while storage media and file systems are not an issue with cloud storage (it is incumbent on the cloud provider to keep those updated), you are still personally responsible for the image format issue: the cloud vendor is only storing a set of binary files, and cannot guarantee that these files will be readable in 20 years.
    • Unless you have a fairly small image library, current optical media (DVD, etc.) is impractical: even dual-layer DVDs only hold about 8GB of formatted data. In addition, since you would need to burn these DVDs yourself, note that the longevity of ‘burned’ DVDs is not great (compared to the pressed DVDs you purchase when you buy a movie). With DVD usage falling off noticeably, this is most likely not a good long-term archival format.
    • The best current solution for off-premise archival storage is to physically store external hard drives (or SSDs) with a well known data vaulting vendor (Iron Mountain is one example). The cost is low, and since you only need access every 5 years or so the extra cost for retrieval and re-storage (after updating the storage media) is acceptable even for the casual user.
  • Another vitally important aspect of image preservation is metadata – the information about the images. If you don’t know what you have, then future retrieval can be difficult and frustrating. In addition to the very basic metadata (file name, simple description, and a master catalog of all your images) it is highly desirable to put in place a metadata schema that can store keywords and a multitude of other information about the images. This can be invaluable to yourself or others who may want to access these images decades in the future. A full discussion of image metadata is beyond the scope of this post, but there is a wealth of information available. One notable challenge is that the most basic (and therefore future-proof) still image formats in use today [JPG and TIFF] have only limited facilities for embedding rich metadata directly within the image file (EXIF and IPTC tags cover the basics), so much of it must be stored externally and cross-referenced somehow. Photoshop files, on the other hand, store both extensive metadata and the image within the same file – but as discussed above this is not the best format for archival storage. There are techniques to cross-reference information to images: from purpose-built archival image software to a simple spreadsheet that uses the filename of the image as a key to the metadata.
  • An important reminder: the whole purpose of an archival exercise is to be able to recover the images at a future date. So test this – don’t just assume. After putting it all in place, pull up some images from your local offline storage every 3-6 months and see that everything works. Pull one of your archival drives from off-site storage once a year and test it to be sure you can still read everything. Set up reminders in your calendar – it’s so easy to forget until you need a set of images that was accidentally deleted from your computer, only to find out that your backup didn’t work as expected. (A minimal sketch of a catalog-and-verify script follows this list.)
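
As mentioned in the list above, the cataloging and the periodic checks can be automated. Here is a rough sketch in Python; the folder paths, CSV layout and field names are purely illustrative, not a prescribed schema:

```python
import csv
import hashlib
import pathlib

def sha256(path: pathlib.Path) -> str:
    """Checksum used to prove a file is still bit-identical to what was archived."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(archive_dir: str, manifest_csv: str) -> None:
    """Write a CSV catalog (relative path, checksum, free-text description) for an archive folder."""
    base = pathlib.Path(archive_dir)
    with open(manifest_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["filename", "sha256", "description"])
        for p in sorted(q for q in base.rglob("*") if q.is_file()):
            writer.writerow([str(p.relative_to(base)), sha256(p), ""])  # add keywords/descriptions by hand

def verify_manifest(archive_dir: str, manifest_csv: str) -> None:
    """Re-read every cataloged file and report anything missing or altered."""
    base = pathlib.Path(archive_dir)
    with open(manifest_csv, newline="") as f:
        for row in csv.DictReader(f):
            target = base / row["filename"]
            ok = target.exists() and sha256(target) == row["sha256"]
            print(("OK   " if ok else "FAIL ") + row["filename"])

# build_manifest("/path/to/archive_2016", "manifest_2016.csv")   # when the archive copy is made
# verify_manifest("/path/to/archive_2016", "manifest_2016.csv")  # during the periodic checks
```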

A final note: if you look at entities that store valuable images as their sole activity (the Library of Congress, the National Archives, etc.) you will find [for still images] that the two most popular image formats are low-compression JPG and uncompressed TIFF. It’s a good place to start…

 

iPhone4S – Section 4c: Camera Plus Pro app

April 13, 2012 · by parasam

This app is similar to Camera+ in design and function, but is not made by the same developers. (This version costs $1.99 at the time of this post – $1 more than Camera+.) The biggest differences are:

  • Ability to tag photos
  • More setup options on selections (self-timer, burst mode, resolution, time lapse, etc.)
  • More sharing options
  • Ability to add date and copyright text to photo
  • A ‘Quick Roll’ (light table type function) has 4 ‘bins’ (All, Photos, Video, Private – can be password protected)
  • Can share photos via WiFi or FTP
  • Bing search from within app
  • Separate ‘Digital Flash’ filter with 3 intensity settings
  • Variable ‘pro’ adjustments in edit mode (Brightness, Saturation, Hue, Contrast, Sharpness, Tint, Color Temperature)
  • Different filters than Camera+, including special ‘geometric distortion’ filters
  • Quick Roll design for selecting which photos to Edit, Share, Sync, Tag, etc.
  • Still Camera Functions [NOTE: the Video Camera functions will be discussed separately later in this series when I compare video apps for the iPhone]
    • Ability to split Focus area from Exposure area
    • Can lock White Balance
    • Flash: Off/On (for the 4 & 4S); this feature changes to “Soft Flash” for iPhone 3GS and Touch 4G.
    • Front or Rear camera selection
    • Digital Zoom
    • 4 Shooting Modes: Normal/Stabilized/Self-Timer/Burst (part of the below Photo Options menu)
    • Photo options:
      • Sound On/Off
      • Zoom On/Off
      • Grid Lines On/Off
      • Geo Tags On/Off
      • SubMenu:
        • Tags:  select and add tags from list to the shot; or add a new tag
        • Settings:  a number of advanced settings for the app
          • Photos:
            • Timer (select the time delay for self-timer: 2-10 seconds in 1-second increments)
            • Burst Mode (select the number of pictures taken when in burst mode: 3-10)
            • Resolution (Original [3264×2448]; Medium [1632×1224]; Low [816×612]) – NOTE: these are the resolutions for the iPhone 4S; each hardware model supported by this app has a different set of resolutions determined by its sensor. Essentially they are full, half and quarter linear resolution. The exact numbers for each model are in the manual.
            • Copyright (sets the copyright text and text color)  [note: this is a preset – the actual ‘burn in’ of the copyright notice into the image is controlled during Editing]
            • Date (toggle date display on/off; set date format; text color)
        • Videos (covered in later section)
        • Private Access Restriction (Set or Change password for the Private bin inside the Quick Roll)
        • Tags (edit, delete, add tag names here)
        • Share (setup and credentials for social sharing services are entered here):
          • Facebook
          • Twitter
          • Flickr
          • Picasa
          • YouTube (for videos)
        • Review (a link to review the app)
      • Info:
        • Some ‘adware’ is here for other apps from this vendor, and a list of FAQs, Tips, Tricks (all of which are also in the manual available for download as a pdf from here)
  • Live Filters:
    • A set of 18 filters that can be applied before taking your shot, as opposed to adding a filter after the shot during Editing.
      • BW
      • Vintage
      • Antique
      • Retro
      • Nostalgia
      • Old
      • Holga
      • Polaroid
      • Hipster
      • XPro
      • Lomo
      • Crimson
      • Sienna
      • Emerald
      • Bourbon
      • Washed
      • Arctic
      • Warm
    • A note on image quality using Live Filters. A bit more about the filters will be discussed below when we dive into the filter details, but some test shots using various Live Filters show a few interesting things:
      • The pixel resolution stays the same whether the filter is on or off (3264×2448 in the case of the iPhone4S).
      • While the Live Filter function is fully active during preview of an image, once you take the shot there is a delay of about 3 seconds while the filtering is actually applied to the image. Some moving icons on the screen notify the user. Remember that the screen is 960×640 while the full image is 3264×2448 (13 X larger!) so it takes a few seconds to filter all those additional pixels.
      • This does mean that when using Live Filter you can’t use Burst Mode (it is turned off when you turn on a Live Filter), and you can’t shoot that rapidly.
      • Although the pixel dimensions are unchanged, the size of the image file is noticeably smaller when using Live Filters than when not. This can only mean that the jpeg compression ratio is higher (same amount of input data; smaller output data; compression ratio mathematically must be higher).
      • I first noticed this when I went to email myself a full resolution image from my phone to my laptop [faster for one or two pix than syncing with iTunes] as I’m researching for this blog – the images were on average 1.7MB instead of the 2.7MB average for normal iPhone shots.
      • I tested against four other camera apps, including the native Camera app from Apple, and all of them delivered images averaging 2.7MB per image.
      • I then tested this app (Camera Plus Pro) in Unfiltered mode, and the size of the output file jumps up to an average of 2.3MB per image. Not as high as most of the others, but about 35% larger than the filtered files – which means the filtered images are compressed correspondingly harder. I’ll run some more objective tests during the filter analysis section below, but both in file size and by visual observation, the filtered images appear more highly compressed.
      • This does not mean that a more compressed picture is inferior, or softer, etc. – that is highly dependent on subject material, lighting, etc. What is true is that a more highly compressed picture will tend to show artifacts more easily in difficult parts of the frame than will the same image at a lower compression ratio.
      • Just all part of my “Know Your Tools” motto…
  • Edit Functions
    • Crop
      • Freeform (variable aspect ratio)
      • Square (1:1 aspect ratio)
      • Rectangular (2:3 aspect ratio) [portrait]
      • Rectangular (3:2 aspect ratio) [landscape]
      • Rectangular (4:3 aspect ratio) [landscape]
    • Rotation
      • Flip Horizontal
      • Right
      • Left
      • Flip Vertical
    • Digital Flash [a filter that simulates flash illumination]
      • Small
      • Medium
      • Large
    • Adjust [image parameter adjustments]
      • Brightness
      • Saturation
      • Hue
      • Contrast
      • Sharpness
      • Tint
      • Color Temperature
    • Effects
      • Nostalgia – 9 ‘retro’ effects
        • Coffee
        • Retro Red
        • Vintage
        • Nostalgia
        • Retro
        • Retro Green
        • 70s
        • Antique
        • Washed
      • Special – 9 custom effects
        • XPro
        • Pop
        • Lomo
        • Holga
        • Diana
        • Polaroid
        • Rust
        • Glamorize
        • Hipster
      • Color – 9 tints
        • Black & White
        • Sepia
        • Sunset
        • Moss
        • Lucifer
        • Faded
        • Warm
        • Arctic
        • Allure
      • Artistic – 9 special filters
        • HDR
        • Fantasy
        • Vignette
        • Grunge
        • Pop Art
        • GrayScale
        • Emboss
        • Xray
        • Heat Signature
      • Distortion – 9 geometric distortion (warping) filters
        • Center Offset
        • Pixelate
        • Bulge
        • Squeeze
        • Swirl
        • Noise
        • Light Tunnel
        • Fish Eye
        • Mirror
    • Borders
      • Original (no border)
      • 9 border styles
        • Thin White
        • Rounded Black
        • Double Frame
        • White Frame
        • Polaroid
        • Stamp
        • Torn
        • Striped
        • Grainy

Camera Functions

[Note:  Since this app has a manual available for download that does a pretty fair job of describing the features and how to access and use them, I will not repeat that information here. I will discuss and comment on the features where I believe this will add value to my audience. You may want to have a copy of the manual available for clarity while reading this blog.]

The basic use and function of the camera is addressed in the manual; what I will discuss here are the Live Filters. I have run a series of tests to attempt to illustrate the use of the filters, and to provide some basic analysis of each filter to help the user understand how the image will be affected by the filter choice. The resolution of the image is not reduced by the use of a Live Filter – in my case (testing with an iPhone4S) the resultant images are still 3264×2448 – native resolution. There are of course the effects of the filter itself, which in some cases can reduce apparent sharpness, etc.

A note on my testing procedure:  In order to present a uniform set of comparison images to the reader, and have them be similar to my standard test images, the following steps were taken:

Firstly:  my standard test images that I use to analyze filters/scenes/etc. for any iPhone camera app consist of two initial test images:  a technical image (calibrated color and grayscale image), and a ‘real-world’ image – a photo I shot of a woman in the foreground with a slightly out-of-focus background. The shot has a wide range of lighting, color, a large amount of skin tone for judging how a given filter changes that important parameter, and a fairly wide exposure range.

The original source for the calibration chart was a precision 35mm slide (Kodak Q60, Ektachrome) that was scanned on a Nikon Super Coolscan 5000ED using Silverfast custom scanner software. The original image was scanned at 4000dpi, yielding a 21megapixel image sampled at 16bits per pixel. This image was subsequently reduced in gamut (from ProPhotoRGB to sRGB), size (to match the native iPhone4S resolution of 3264×2448) and bit depth (8bits per pixel). The image processing was performed using Photoshop CS5.5 in a fully color-calibrated workflow.

The source for the ‘real-world’ image was initially captured using a Nikon D5000 DSLR fitted with a Nikkor 200mm F2.8 prime lens (providing an equivalent focal length of 300mm compared to full-frame 35mm – the D5000 is a 2/3 size sensor [4288×2848]). The exposure was 1/250 sec @ f5.6 using camera raw format – no compression. That camera body captures in sRGB color space, and although it outputs a 16bit per pixel format, the sensor is really not capable of anything more than 12 bits in a practical sense. The image was processed in Photoshop CS5.5 in a similar manner as above to yield a working image of 3264×2448, 8 bits per pixel, sRGB.

This image pair is used throughout my blog for analyzing filters, by importing the files into each camera app.

For this test of Live Filters, I needed to actually shoot with the iPhone, since there is no way using this app to apply the Live Filters to a pre-existing image. To replicate the images discussed above as closely as possible, the following procedure was used:

For the calibration chart, the same source image was used (Kodak Q60), this time as a precision print in 4″x5″ size. These prints were manufactured by Kodak under rigidly controlled processes and yield a highly accurate reflective target. (Most unfortunately, with the demise of Kodak, and film/print processing in general, these are no longer available. Even with the best of storage techniques, prints will fade and become inaccurate for calibration. It will be a challenge to replace these…)  I used my iPhone4S to make the exposures under controlled lighting (special purpose full-spectrum lighting set to 5000°K).

For the ‘real-world’ image, I wanted to stay with the same image of the woman for uniformity, and it provides a good range of test values. To accomplish that (and be able to take the pictures with the iPhone) was challenging, since the original shot was impossible to duplicate in real life. I started with the same original high resolution image (in Photoshop) in its original 16bit, high-gamut format. I then printed that image using a Canon fine art inkjet printer (Pixma Pro 9500 MkII), using a 16 bit driver, onto high quality glossy photo paper at a paper size of 13″ x 19″. At a print density of 267dpi, this yielded an image of over 17megapixels when printed. The purpose was to ensure that no subsampling of printed pixels would occur when photographed by the 8megapixel sensor in the iPhone. [Nyquist sampling theory demands a minimum of 2x sampling – 16megapixels in this case – to ensure that.] I photographed the image with the same controlled lighting as used above for the calibration chart. I made one adjustment to each image for normalization purposes: I mapped the highest white level in the photograph (the clipped area on the subject’s right shoulder – which was pure white in the original raw image) to just reach pure white in the iPhone image. This matched the tonal range for each shot, and made up for the fact that even with a lot of light in the studio it wasn’t enough to fully saturate the little tiny iPhone sensor. No other adjustments of any kind were made. [This adjustment was carried out by exporting the original iPhone image to Photoshop to map the levels].
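For the numerically inclined, here is a quick sanity check of the print and Nyquist figures quoted above, sketched as a few lines of Python (the dimensions and dpi are the ones stated in the text):

    # Back-of-the-envelope check of the print vs. sensor sampling numbers
    print_w_in, print_h_in, dpi = 19, 13, 267            # 13" x 19" paper at 267 dpi
    printed_pixels = (print_w_in * dpi) * (print_h_in * dpi)
    sensor_pixels = 3264 * 2448                           # iPhone4S full resolution
    print(round(printed_pixels / 1e6, 1))                 # ~17.6 megapixels printed
    print(round(sensor_pixels / 1e6, 1))                  # ~8.0 megapixels captured
    print(printed_pixels >= 2 * sensor_pixels)            # 2x (Nyquist) criterion met -> True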

While even further steps could have been taken to make the process more scientifically accurate, the purpose here is one of relative comparison, not absolute measurement, so I feel the steps taken are sufficient for this exercise.

The Live Filters:

Live Filter = BW

Live Filter = BW

The BW filter provides a monochrome adaptation of the original scene. It is a high contrast filter; this can clearly be seen in the test chart, where columns 1-3 are solid black, as are all grayscale chips from 19-22. Likewise, on the highlight end of the scale, chips 1-3 have no differentiation. The live image shows this as well, with a strong contrast throughout the scene.

Live Filter = Vintage

Live Filter = Vintage

The Vintage filter is a warming filter that adds a reddish-brown cast to the image. It increases the contrast some (not nearly as much as the previous BW filter) – this can be seen in the chart in the area of columns 1-2 and rows A-J. The white and black ends of the grayscale are likewise compressed. Any cool pastel colors either turn white or a pale warm shade (look at columns 9-11). The live image shows these effects; note particularly how the man’s blue shirt and shorts change color remarkably. The increase in contrast, coupled with the warming tint, does tend to make skin tones blotchy – note the subject’s face and chest.

Live Filter = Antique

Live Filter = Antique

The Antique filter offers a large amount of desaturation, a cooling of what color remains, and an increase in contrast. Basically, only pinks and navy blues remain in the color spectrum, and the chart shows the clipping of blacks and whites. The live image shows very little saturation, only some dark blue remains, with a faint pink tinge on what was originally the yellow sign in the window.

Live Filter = Retro

Live Filter = Retro

The Retro filter attempts to recreate the look of cheap film cameras of the 1960’s and 1970’s. These low quality cameras often had simple plastic lenses, light leaks due to imperfect fit of components, etc. The noticeable chromatic aberrations of the lens and other optical ‘faults’ have now seen a resurgence as a style, and that is emulated with digital filters in this and others shown below. This particular filter shows a general warming, but with a pronounced red shift in the low lights. This is easily observable in the gray scale strip on the chart.

Live Filter = Nostalgia

Live Filter = Nostalgia

Nostalgia offers another variation on early low-cost film camera ‘look and feel’. As opposed to the strong red shift in the lowlights of Retro, this filter shifts the low-lights to blue. There is also an increase in saturation of both red and blue – notice in the chart that the green column, #18, hardly has any change in saturation from the original, while the reds and blues show noticeable increases, particularly in the low-lights. The highlights have a general warming trend, shown in the area bounded by columns 13-19 and rows A-C. The live shot shows the strong magenta/red shift that this filter causes on skin tones.

Live Filter = Old

Live Filter = Old

The Old filter applies significant shifts to the tonal range. It’s not exactly a high contrast filter, although that result is apparent in the ratio of the highlight brightness to the rest of the picture. There is a strong overall reduction in brightness – in the chart all differentiation is lost below chip #16. There is also desaturation; this is more obvious when studying the chart. The highlights, like many of these filter types, are warmed toward the yellow spectrum.

Live Filter = Holga

Live Filter = Holga

The Holga filter is named after the all-plastic camera of the same name – from Hong Kong in 1982. A 120 format roll-film camera, it takes its name from the phrase “ho gwong” – meaning ‘very bright’. The marketing people twisted that phrase into HOLGA. The actual variations show a warming in the highlights and cooling (blue) in the lowlights. The contrast is also increased. In addition, as with many of the Camera Plus Pro filters, there is a spatial element as well as the traditional tonal and chromatic shifts:  in this case a strong red tint in one corner of the frame. My tests appear to indicate that the placement of this (which corner) is randomized, but the actual shape of the red tint overlay is relatively consistent. Notice that in the chart the overlay is in the upper right corner, while in the live shot it moved to lower right. There is also desaturation; this is noticeable in her skin, as well as in the central columns of the chart.

Live Filter = Polaroid

Live Filter = Polaroid

The Polaroid filter mimics the look of one of the first ‘instant gratification’ cameras – the forerunner of digital instant photography. The PLC look (Polaroid Land Camera) was contrasty with crushed blacks, tended towards blue in the shadows, and had slightly yellowish highlights. This particular filter has a pronounced magenta shift in the skin tones that is not readily apparent from the chart – one of the reasons I always use these two different types of test images.

Live Filter = Hipster

Live Filter = Hipster

The Hipster filter effect is another of the digital memorials to the original Hipstamatic camera – a cheap all plastic 35mm camera that shot square photos. Copied from an original low-cost Russian camera, it was invented by two brothers who produced only 157 units. The camera cost $8.25 in 1982 when it was introduced. With a hand-molded plastic lens, this camera was another of the “Lo-Fi” group of older analog film cameras whose ‘look’ has once again become popular. The CameraPlusPro version shows pronounced red in the midtones, crushed blacks (see columns 1-2 in the chart and chips #18 and below), along with increased contrast and saturation. In my personal view, this look is harsher and darker than the actual Hipstamatic film look, which tended towards raised blacks (a common trait of cheap film cameras: the backs always leaked a bit of light, so a low level ‘fog’ of the film base tended to raise deep blacks [areas of no light exposure in a negative] to a dull gray), a softer look (lower contrast due to raised blacks) and brighter highlights. But that’s purely a personal observation; the naming of filters is arbitrary at best, which is why I like to ‘look under the hood’ with these detailed comparisons.

Live Filter = XPro

Live Filter = XPro

The XPro filter as manifested by the CameraPlusPro team looks very similar to their Nostalgia version, but the XPro has highlights that are more white than the yellow of Nostalgia. The term XPro comes from ‘cross-process’ – what happens when you process film in the wrong developer, for instance developing E-6 transparency film in C-41 color negative chemistry. The effects of this process are highly random, although there is a general tendency towards high contrast, unnatural colors, and staining. In this instance, the whites are crushed a bit, blacks tend blue, and contrast is raised.

Live Filter = Lomo

Live Filter = Lomo

The Lomo filter effect is designed to mimic some of the style of photograph produced by the original LOMO Plc camera company of Russia (Leningrad Optical Mechanical Amalgamation). This was a low cost automatic 35mm film camera. While still in production today, this and similar cameras account for only a fraction of LOMO’s production – the bulk is military and medical optical systems – and are world class… Due to the low cost of components and production methods, the LOMO camera exhibited frequent optical defects in imaging, color tints, light leaks, and other artifacts. While anathema to professional photographers, a large community that appreciates the quirky effects of this (and other so-called “Lo-Fi” or Low Fidelity) cameras has sprung up with a world-wide following. Hence the Lomo filter…

This particular instance shows increased contrast and saturation, warming in the highlights, green midtones, and like some other CameraPlusPro filters, an added spatial effect (the red streak – again randomized in location, it shows in upper left in the chart, lower right in the live shot). [Pardon the pilot error:  the soft focus of the live shot was due to faulty autofocus on that iPhone shot – but I didn’t notice it until comping the comparison shots several days later, and didn’t have the time to reset the environment and reshoot for one shot. I think the important issues can be resolved in spite of that, but did not want my readers to assume that soft focus was part of the filter!]

Live Filter = Crimson

Live Filter = Crimson

The Crimson filter is, well, crimson! A bit overstated for my taste, but if you need a filter to make your viewers think of “The Shining” then this one’s for you! What more can I say. Red. Lots of it.

Live Filter = Sienna

Live Filter = Sienna

The Sienna filter always makes me think of my early art school days, when my well-meaning parents thought I needed to be exposed to painting… (burnt sienna is a well-known oil pigment, an iron oxide derivative that is reddish-brown. My art instructor said “think tree trunks”.) Alas, it didn’t take me (or my instructor) long to learn that painting with oils and brushes was not going to happen in this lifetime. Fortunately I discovered painting with light shortly after that, and I’ve been in love with the camera ever since. The Sienna as shown here is colder than the pigment, a somewhat austere brown. The brown tint is more evident in the lowlights, while the whites warm up just slightly. As in many of the CameraPlusPro filters, the blacks are crushed, which creates an overall look of higher contrast, even if the midtone and highlight contrast levels are unchanged (look at the grayscale in the chart). There is also an overall desaturation.

Live Filter = Emerald

Live Filter = Emerald

Emerald brings us, well, green… along with what should now be familiar:  crushed blacks, increased contrast, desaturation.

Live Filter = Bourbon

Live Filter = Bourbon

The Bourbon filter resembles the Sienna filter, but has a decidedly magenta cast in the shadows, while the upper midtones are yellowish. The lowered saturation is another common trait of the CameraPlusPro filters.

Live Filter = Washed

Live Filter = Washed

The Washed filter actually looks more like ‘unwashed’ print paper to me… Let me explain:  before the world of digits descended on photography, during the print process (well, this applies to film as well but the effect is much better known in the printing process), after developing, stopping and fixing, you need to wash the prints. Really, really well. For a long time, like 30-45 minutes under flowing water. This is necessary to wash out almost all of the residual thiosulfate fixing chemical – if you don’t, your prints will age prematurely, showing bleaching and staining, due to the slow annihilation of elemental silver in the emulsion by the remaining thiosulfate. The prints will end up yellowed and a bit faded, in an uneven manner. In this digital approximation, the biggest difference is (as usual for this filter set) the crushed blacks. In the chemical world, just the opposite would occur, as the blacks in a photographic print have the highest accumulation of silver crystals (that block light or cover up the white paper underneath). The other attributes of this particular filter are: strongly yellowed highlights, lowlights that tend to blue, increased contrast and raised saturation.

Live Filter = Arctic

Live Filter = Arctic

This Arctic filter looks cold! Unlike the true arctic landscape (which is subtle but has an amazing spectrum of colors), this filter is actually a tinted monochrome. The image is first reduced to black and white, then tinted with a cold blue. This is very clear by looking at the chart. It’s an effect.

Live Filter = Warm

Live Filter = Warm

After looking so cold in the last shot, our subject is better when Warm. Slightly increased saturation and a yellow-brown cast to the entire tonal range are the basic components of this filter.

Edit Functions

This app has 6 groups of edit functions:  Crop, Rotate, Flash, Adjust, Filters and Borders. The first two are self-evident, and are more than adequately explained in the manual. The “how-to” of the remaining functions I will leave to the manual; what will be discussed here are examples of each variable in the remaining four groups.

Flash – also known as “Digital Flash” – a filter designed to brighten an overly dark scene. Essentially, this filter attempts to bring the image levels up to what they might have been if a flash had been used to take the photograph initially. As always, this will be a ‘best effort’ – nothing can take the place of a correct exposure in the first place. The most frequent ‘side effects’ of this type of filter are increased noise – since the image was dark in the first place (and therefore would have substantial noise due to the nature of CCD/CMOS sensors), raising the brightness level also raises the visibility of that noise – and white clipping of those areas of the picture that did receive normal, or near-normal, illumination.

This app supports 3 levels of ‘flash’ [brightness elevation] – I call this ‘shirt-sizing’ – S, M, L.  Below are 4 screen shots of the Flash filter in action: None, Small, Medium, Large.

This filter attempts to be somewhat realistic – it is not just an across-the-board brightness increase. For instance, objects that are very dark in the original scene (such as her handbag or the interior revealed by the doorway in the rear of the scene) are only increased slightly in level, while midtones and highlights are raised much more substantially.
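To make the idea of a level-dependent lift concrete, here is a minimal sketch in Python of how such a ‘digital flash’ could be implemented. To be clear, this is my own guess at the general approach, not the app’s actual algorithm, and the constant k is purely illustrative:

    import numpy as np

    def digital_flash(img, k=0.8):
        # img: floating point RGB (or grayscale) array scaled 0..1
        # Gain grows with pixel level: near-black pixels are barely lifted,
        # midtones and highlights are raised much more - and bright areas can
        # clip to white, which is exactly the side effect noted above.
        return np.clip(img * (1.0 + k * img), 0.0, 1.0)

The three ‘shirt sizes’ (Small/Medium/Large) would then simply correspond to three preset values of k.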

Flash: Original

Flash: Small / Medium / Large

Adjust – there are 7 sub-functions within the Adjust edit function: Brightness, Saturation, Hue, Contrast, Sharpness, Tint and Color Temperature. Each function has a slider that is initially centered; moving it left reduces the named parameter, moving it right increases it. Once moved off the zero center position, a small “x” on the upper right of the associated icon can be tapped to return the slider to the middle position, effectively turning off any changes. Examples for each of the sub-functions are shown below.

Brightness: Minimum / Original / Maximum

Saturation: Minimum / Original / Maximum

Hue: Minimum / Original / Maximum

Contrast: Minimum / Original / Maximum

Sharpness: Minimum / Original / Maximum

Tint: Minimum / Original / Maximum

Color Temperature: Minimum / Original / Maximum

Color Temperature: Cooler / Original / Warmer

Filters – There are 45 image filters in the Edit section of the app. Some of them are similar or identical in function to the filters of the same name that were discussed in the Live Filter section above. These are contained in 5 groups: Nostalgia, Special, Colorize, Artistic and Distortion. The examples below are similar in format to the presentation of the Live Filters. The source images for these comparisons are imported files (see the note at the beginning of this section for details).

Nostalgia filters:

Nostalgia filter = Coffee

Nostalgia filter = Coffee

The Coffee filter is rather well-named:  it looks like your photo had weak coffee spread over it! You can see from the chart that, as usual for many of the CameraPlusPro filters, increased contrast, crushed blacks and desaturation are the base on which a subtle warm-brown cast is overlaid. The live example shows the increased contrast around her eyes, and the skin tones in both the woman and the man in the background have tended to pale brown as opposed to the original red/yellow/pink.

Nostalgia filter = Retro Red

Nostalgia filter = Retro Red

The Retro Red filter shows increased saturation, a red tint across the board (highlights and lowlights), and does not alter the contrast – note all the steps in the grayscale are mostly discernible – although there is a slight blending/clipping of the top highlights. The overall brightness levels are raised from midtones through the highlights.

Nostalgia filter = Vintage

Nostalgia filter = Vintage

The Vintage filter here in the Edit portion of the app is very similar to the filter of the same name in the Live Filter section. The overall brightness appears higher, but some of that may be due to the different process of shooting with a live filter and applying a filter in the post-production process. This is more noticeable in the live shot as opposed to the charts – a comparison of the “Vintage” filter test charts from the Live Filter section and the Edit section shows almost a dead match. This filter is a warming filter that adds a reddish-brown cast to the image. It increases the contrast some – this can be seen in the chart in the area of columns 1-2 and rows A-J. The white and black ends of the grayscale are likewise compressed. Any cool pastel colors either turn white or a pale warm shade (look at columns 9-11). The live image shows these effects; note particularly how the man’s blue shirt and shorts change color remarkably. The increase in contrast, coupled with the warming tint, does tend to make skin tones blotchy – note the subject’s face and chest.

Nostalgia filter = Nostalgia

Nostalgia filter = Nostalgia

The Nostalgia filter, like Vintage above, is basically the same filter as the instance offered in the Live Filter section. The main difference is the Live Filter version is more magenta and a bit darker than this filter. Also the cyans tend green more strongly in this version of the filter – check out columns 12-13 in the chart. Some increased contrast, pronounced yellows in the highlights and increased red/blue saturation are also evident.

Nostalgia filter = Retro

Nostalgia filter = Retro

The Retro filter, as in the version in the Live Filter section, attempts to recreate the look of cheap film cameras of the 1960’s and 1970’s. These low quality cameras often had simple plastic lenses, light leaks due to imperfect fit of components, etc. The noticeable chromatic aberrations of the lens and other optical ‘faults’ have now seen a resurgence as a style, and that is emulated with digital filters in this and others shown below. This particular filter shows a general warming, but with a pronounced red shift in the low lights. This is easily observable in the gray scale strip on the chart.

Nostalgia filter = Retro Green

Nostalgia filter = Retro Green

The Retro Green filter is a bit of a twist on Retro, with some of Nostalgia thrown in (yes, filter design is a lot like cooking with spices…). The lowlights are similar to Nostalgia, with a blue cast, and the highlights show the same yellows as both Retro and Nostalgia; the big difference is in the midtones, which are now strongly green.

Nostalgia filter = 70s

Nostalgia filter = 70s

The 70s filter gives us some desaturation, no change in contrast, red shift in midtones and lowlights, yellow shift in highlights.

Nostalgia filter = Antique

Nostalgia filter = Antique

The Antique filter is similar to the Antique Live Filter, but is much lighter in terms of brightness. There is a large degree of desaturation, some increase in contrast, significant brightness increase in the highlights, and very slight color shifts at the ends of the grayscale:  yellow in the highlights, blue in the lowlights.

Nostalgia filter = Washed

Nostalgia filter = Washed

The Washed filter here in the Edit section is very different from the filter of the same name in Live Filters. The only real similarity is the strongly yellowed highlights. This filter, like many of the others we have reviewed so far, has a much lighter look (brightness levels raised), a very slight magenta shift, slightly increased contrast, enhanced blues in the lowlights and some increase in cyan in the midtones.

Special filters:

Special filter = XPro

Special filter = XPro

The XPro filter in the Edit functions has a different appearance than the filter of the same name in Live Filters. This instance of the digital emulation of a ‘cross-process’ filter is less contrasty, less magenta, and has more yellow in the highlights. The chart shows the yellows in the highlights, blues in the lowlights, and increased saturation. The live shot reveals the increased white clipping on her dress (due to increased contrast), as well as the crushed blacks (notice the detail of the folds in her leather handbag are lost).

Special filter = Pop

Special filter = Pop

The Pop filter brings the familiar basic tonal adjustments (increased contrast, with crushed whites and blacks, an overall increase in midtone and highlight brightness levels) but this time the lowlights have a distinct red/magenta cast, with midtones and highlights tending greenish/yellow. This is particularly evident in the live shot. Look at the black doorway in the original, which is now very reddish in the filtered shot.

Special filter = Lomo

Special filter = Lomo

The Lomo filter here in the Edit area is rather different than the same named filter in Live Filters. This particular instance shows increased contrast and saturation, yellowish warming in the highlights, and like some other CameraPlusPro filters, an added spatial effect (the red splotch – in this example the red tint is in the same lower right corner for both chart and woman – if the placement is random, then this is just coincidence – but… it makes it look like the lowlights in the grayscale chart are pushed hard to red:  not so, it’s just that’s where the red tint overlay is this time…). Look at the top of her handbag in the live shot to see that the blacks are not actually shifted red. As with many other CameraPlusPro filters, the whites and blacks are crushed some – you can see on her dress how the highlights are now clipped.

Special filter = Holga

Special filter = Holga

The Holga filter is one where there is a marked similarity between the Live Filter and this instance as an Edit Filter. This version is lighter overall, with a more greenish-yellow cast, particularly in the shadows. The vignette effect is stronger in this Edit filter as well.

Special filter = Diana

Special filter = Diana

The Diana filter is another ‘retro camera’ effect:  based on, wow – surprise, the Diana camera… another of the cheap plastic cameras prevalent in the 1960’s. The vignetting, light leaks, chromatic aberrations and other side-effects of a $10 camera have been brought into the digital age. In a similar fashion to several of the previous ‘retro’ filters discussed already, you will notice crushed blacks & highlights, increased contrast, odd tints (in this case unsaturated highlights tend yellow), increased saturation of colors – and a slight twist in this filter due to even monochrome areas becoming tinted – the silver pendant on her chest now takes on a greenish/yellow tint.

Special filter = Polaroid

Special filter = Polaroid

The Polaroid filter here in the Edit section resembles the effects of the same filter in Live Filters in the highlights (tends yellow with some mild clipping), but diverges in the midtones and shadows. Overall, this instance is lighter, with much less magenta shift in the skin tones. The contrast is not as high as in the Live Filter version, and the saturation is a bit lower.

Special filter = Rust

Special filter = Rust

The Rust filter is really very similar to old-style sepia printing:  this is a post-tint process to a monochrome image. In this filter, the image is first rendered to a black & white image, then colorized with a warm brown overlay. The chart clearly shows this effect.
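For those curious what a ‘monochrome-then-tint’ filter looks like in code, here is a minimal Python sketch of the general technique (the luma weights are the standard Rec.601 values; the tint color is my guess at a warm brown, not the app’s actual value):

    import numpy as np

    def mono_tint(img, tint=(1.0, 0.85, 0.65)):
        # img: floating point RGB array scaled 0..1
        # Step 1: reduce to a single luminance (grayscale) channel
        luma = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
        # Step 2: colorize by scaling the gray values with a tint color
        return np.clip(luma[..., None] * np.array(tint), 0.0, 1.0)

Sepia, Arctic and (with an added inversion step) Xray, discussed below, are all variations on this same two-step recipe.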

Special filter = Glamorize

Special filter = Glamorize

The Glamorize filter is a high contrast effect, with considerable clipping in both the blacks and the whites. The overall color balance is mostly unchanged, with a slight increase in saturation in the midtones and lowlights. The highlights, on the other hand, are somewhat desaturated.

Special filter = Hipster

Special filter = Hipster

The Hipster filter follows the same pattern as other filters that have the same name in both the Live Filters section and the Edit section: the Edit version is usually lighter with higher brightness levels, less of a magenta cast in skin tones and lowlights, and a bit less contrast. Still, in relation to the originals, the Hipster has the typical crushed whites and blacks, raised contrast, and in this case an overall warming (red/yellow) of midtones and highlights.

Colorize filters:

Colorize filter = Black & White

Colorize filter = Black & White

The Black & White filter here is almost identical to the effects produced by the same filter in the Live Filter section. A comparison of the chart images shows that. The live shots also render in a similar manner, with as usual the Edit filter being a bit lighter with slightly lower contrast. This is yet another reason to always evaluate a filter with at least two (and the more, the better) different types of source material. While digital filters offer a wealth of possibilities that optical filters never could, there are very fundamental differences in how these filters work.

At a simple level, an optical filter is far more predictable across a wide range of input images than a digital filter. The more complex a digital filter becomes (and many of the filters discussed here that attempt to emulate a multitude of ‘retro’ camera effects are quite complex), the more unexpected results are possible. Consider that a Wratten #85 warming filter is really very simple (an orange filter that essentially partially blocks bluish/cyan light) – that action will occur no matter what the source image is.

A filter such as Hipster, for example, attempts to mimic what is essentially a series of composited effects from a cheap analog film camera:  chromatic aberration of the cheap plastic lens, spherical lens aberration, light leaks, vignetting due to incomplete coverage of the film (sensor) rectangle, focus anomalies due to imperfect alignment of the focal plane of the lens with the film plane, etc. etc. Trying to mimic all this with mathematics (which is what a digital filter does, it simply applies a set of algorithms to each pixel) means that it’s impossible for even the most skilled visual programmer to fully predict what outputs will occur from a wide variety of inputs.
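To illustrate the difference in predictability, here is roughly what the simple optical case looks like when translated into per-pixel math – a crude ‘warming’ operation of my own for illustration, not any particular filter from this app:

    import numpy as np

    def simple_warming(img, strength=0.1):
        # img: floating point RGB array scaled 0..1
        # Boost red and trim blue - roughly what an orange optical warming
        # filter does, and it behaves the same way on any input image.
        out = img.copy()
        out[..., 0] = np.clip(out[..., 0] * (1.0 + strength), 0.0, 1.0)
        out[..., 2] = np.clip(out[..., 2] * (1.0 - strength), 0.0, 1.0)
        return out

Three lines of arithmetic, fully predictable. A ‘retro camera’ emulation, by contrast, chains together dozens of such operations (tints, vignettes, localized overlays, contrast curves), and the interactions between them are what produce the surprises.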

Colorize filter = Sepia

Colorize filter = Sepia

The Sepia filter is very similar to the Rust filter – it’s another ‘monochrome-then-tint’ filter. This time instead of a reddish-brown tint, the color overlay is a warm yellow.

Colorize filter = Sunset

Colorize filter = Sunset

The Sunset filter brings increased brightness, crushed whites and blacks, increased contrast and an overall warming towards yellow/red. Looks like it’s attempting to emulate the late afternoon light.

Colorize filter = Moss

Colorize filter = Moss

The Moss filter is, well, greenish… It’s a somewhat interesting filter, as most of the tinting effect is concentrated solely on monochromatic midtones. The chart clearly shows this. The live shot demonstrates this as well, the saturated bits keep their colors, the neutrals turn minty-green. Note his shirt, her dress, yellow sign stays yellow, and skin tones/hair don’t take on that much color.

Colorize filter = Lucifer

Colorize filter = Lucifer

The Lucifer filter is – surprise – a reddish warming look. There is an overall desaturation, followed by a magenta/red cast to midtones and lowlights. A slight decrease in contrast actually gives this filter a more faded, retro look than ‘devilish’, and in some ways I prefer this look to some of the previous filters with more ‘retro-sounding’ names.

Colorize filter = Faded

Colorize filter = Faded

The Faded filter offers a desaturated, but contrasty, look. Usually I interpret a ‘faded’ look to mean the kind of visual fading that light causes on a photographic print, where all the blacks and strongly saturated colors fade to a much lighter, softer tone. In this case, much of the color has faded, but the luminance is unchanged (in terms of brightness) and the contrast is increased, resulting in the crushed whites and blacks common to Camera Plus Pro filter design.

Colorize filter = Warm

Colorize filter = Warm

The Warm filter is basically a “plus yellow” filter. Looking at the chart you can see that there is an across-the-board increase in yellow. That’s it.

Colorize filter = Arctic

Colorize filter = Arctic

The Arctic filter is, well, cold. Like several of the other tinted monochromatic filters (Rust, Sepia), this filter first renders the image to a monochrome version, then tints it at all levels with a cold blue color.

Colorize filter = Allure

Colorize filter = Allure

The Allure filter is similar to the Warm filter – an even application of a single color increase – in this case magenta. There is also a slight increase in contrast.

Artistic filters:

Artistic filter = HDR

Artistic filter = HDR

The HDR filter is an attempt to mimic the result from ‘real’ HDR (High Dynamic Range) photography. Of course without true double (or more) exposures this is not possible, but since the ‘look’ that some instances of HDR processing produce shows increased contrast, saturation and so on, this filter emulates some of that. Personally, I believe that true HDR photography should be indistinguishable from a ‘normal’ image – except that it should correctly map a very wide range of illumination levels. A lot of “HDR” images tend to be a bit ‘gimmicky’ with excessive edge glow, false saturation, etc. While this can make an interesting ‘special effect’, I think it would better serve the imaging community if we correctly labeled those images as ‘cartoon’ or some other more accurate name – those filter side-effects really have nothing to do with true HDR imaging. Nevertheless, to complete the description of this filter: it is actually quite ‘color-neutral’ (no cast), but does add contrast, particularly edge contrast, and significant vibrance and saturation.
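As an aside, one common way the exaggerated ‘HDR look’ (edge glow, boosted local contrast) is produced digitally is a large-radius unsharp mask on the luminance channel. The sketch below shows that general technique in Python – I’m not claiming this is what Camera Plus Pro does, and the radius and amount values are purely illustrative:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fake_hdr_look(luma, radius=25, amount=0.6):
        # luma: floating point grayscale array scaled 0..1
        # Subtracting a heavily blurred copy and adding back the difference
        # boosts local contrast and creates the familiar 'glow' around edges.
        blurred = gaussian_filter(luma, sigma=radius)
        return np.clip(luma + amount * (luma - blurred), 0.0, 1.0)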

Artistic filter = Fantasy

Artistic filter = Fantasy

The Fantasy filter is another across-the-board ‘color cast’ filter, this time with an increase in yellow-orange. Virtually no change in contrast, just a big shift in color balance.

Artistic filter = Vignette

Artistic filter = Vignette

The Vignette filter is a spatial filter, in that it really just changes the ‘shape’ of the image, not the overall color balance or tonal gradations. It mimics the light fall-off that was typical of early cameras whose lenses had inadequate covering power (the image rendered by the lens did not extend to the edges of the film). There is a tiny loss of brightness even in the center of the frame, but essentially this filter darkens the corners.

Artistic filter = Grunge

Artistic filter = Grunge

The Grunge filter is a combination filter:  both a spatial and tonal filter. It first, like past filters that are ‘tinted monochromatic’ filters, renders the image to black & white, then tints it – in this case with a grayish-yellow cast. There is also a marked decrease in contrast, along with elevated brightness levels. This is easily evident from the grayscale strip in the chart. In the live shot you can see her handbag is now a dark gray instead of black. The spatial elements are then added:  specialized vignetting, to mimic frayed or over-exposed edges of a print, as well as ‘scratches’ and ‘wrinkles’ (formed by spatially localized changes in brightness and contrast). All this combines to offer the look of an old, faded, bent and generally funky print.

Artistic filter = Pop Art

Artistic filter = Pop Art

The Pop Art filter is very much a ‘special effects’ filter. This particular filter is based on the solarization technique. This process (solarization) is in fact a rather complex and highly variable technique. It was initially discovered by Daguerre and others who first pioneered photography in the mid-1800’s. The name comes from the reversal of image tone of a drastically over-exposed part of an image:  in this case, pictures that included the sun in direct view. Instead of the image of the sun going pure white (on the print, pure black in the negative), the sun’s image actually went back to a light gray on the negative, rendering the sun a very dark orb in the final print. One of the very first “optical special effects” in the new field of photography. This is actually caused by halogen ions released within the halide grain by over-exposure diffusing to the grain surface in amounts sufficient to destroy the latent image.

In negatives, this is correctly known as the Sabattier effect after the French photographer, who published an article in Le Moniteur de la Photographie 2 in 1862. The digital equivalent of this technique, as shown in this filter, uses image tonal mapping computation to create high contrast bands where the levels of the original image are ‘flattened’ into distinct and constant brightness bands. This is clearly seen in the grayscale strip in the chart image. It is a very distinctive look and can be visually interesting when used in a creative manner on the correct subject matter.
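The ‘flattening into bands’ described above is essentially posterization, and the core of it is a one-line quantization step. A minimal Python sketch (the number of bands is a guess, not the app’s value):

    import numpy as np

    def posterize(channel, levels=4):
        # channel: floating point array scaled 0..1
        # Quantize the tonal range into 'levels' constant bands - the banding
        # is what shows up in the grayscale strip of the chart image.
        bands = np.clip(np.floor(channel * levels), 0, levels - 1)
        return bands / (levels - 1)

A digital Sabattier look would typically add a partial tonal inversion on top of this quantization.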

Artistic filter = Grayscale

Artistic filter = Grayscale

The Grayscale filter is just that:  the rendering of the original image into a grayscale image. The difference between this filter and the Black & White filters (in both Live Filters and this Edit section) is a much lower contrast. By comparing the grayscale strips in the original and filtered chart images, you can see there is virtually no difference. The Black & White filters noticeably increase the contrast.

Artistic filter = Emboss

Artistic filter = Emboss (40%)

Artistic filter = Emboss (100%)

The Emboss filter is another highly specialized effects filter. As can be seen from the chart image, the picture is rendered to a constant monochrome shade of gray, with only contrasting edges being represented by either an increase or decrease in brightness. This creates the appearance of a flat gray sheet that is ‘stamped’ or embossed with the outline of the image elements. High contrast edges are rendered sharply, lower contrast edges are softer in shape. Reading from left to right, a transition from dark to light is represented by a dark edge, from light to dark is shown as light edge. Since each of these Edit filters has an intensity slider, the effect’s strength can be ‘dialed in’ as desired. I have shown all the filters up to now at full strength, for illustrative purposes. Here I have included a sample of this filter at a 40% level, since it shows just how different a look can be achieved in some cases by not using a filter at full strength.
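Under the hood, an emboss is usually a small directional convolution plus a mid-gray offset. Here is a generic sketch in Python of that technique (the kernel is a textbook emboss kernel, not necessarily the one this app uses):

    import numpy as np
    from scipy.ndimage import convolve

    def emboss(gray):
        # gray: floating point grayscale array scaled 0..1
        # Flat areas sum to zero and land on mid-gray; tonal transitions
        # become light or dark edges depending on their direction.
        kernel = np.array([[-1.0, -1.0,  0.0],
                           [-1.0,  0.0,  1.0],
                           [ 0.0,  1.0,  1.0]])
        return np.clip(convolve(gray, kernel, mode='reflect') + 0.5, 0.0, 1.0)

An intensity slider like the one in this app can then be modeled as a simple blend between the original image and the full-strength embossed result.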

Artistic filter = Xray

Artistic filter = Xray

The Xray filter is yet another ‘monochromatic tint’ filter, with the image first being rendered to a grayscale image, then (in this case) undergoing a complete tonal reversal (to make the image look like a negative), then finally a tint with a dark greenish-cyan color. It’s just a look (since all ‘real’ x-ray films are black and white only), but I’m certain at least one of the millions of people that have downloaded this app will find a use for it.

Artistic filter = Heat Signature

Artistic filter = Heat Signature

The Heat Signature filter is the final filter in this Artistic group. It is illustrative of a scientific imaging method whereby infrared camera images (that see only wavelengths too long for the human eye to see) are rendered into a visual color spectrum to help illustrate relative temperatures of the observed object. In the real scientific camera systems, cooler temperatures are rendered blue, the hottest parts of the image in reds. In between temperatures are rendered in green. Here, this mapping technique is applied against the grayscale. Blacks are blue, midtones are green, highlights are red.
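The mapping described above is a classic false-color lookup: grayscale in, a blue-to-green-to-red ramp out. A minimal Python sketch of the idea (my own rough ramp, not the app’s exact color table):

    import numpy as np

    def heat_signature(luma):
        # luma: floating point grayscale array scaled 0..1
        # Shadows map to blue, midtones to green, highlights to red.
        x = np.clip(luma, 0.0, 1.0)
        r = np.interp(x, [0.0, 0.5, 1.0], [0.0, 0.0, 1.0])
        g = np.interp(x, [0.0, 0.5, 1.0], [0.0, 1.0, 0.0])
        b = np.interp(x, [0.0, 0.5, 1.0], [1.0, 0.0, 0.0])
        return np.stack([r, g, b], axis=-1)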

Distortion filters:

The geometric distortion filters are presented differently, since these are spatial filters only. There is no need, nor advantage, to using the color chart test image. I have presented each filter as a triptych, with the first image showing the control as found when the filter is opened within the app, the second image showing a manipulation of the “effects circle” (which can be moved and resized), and the third image is the resultant image after applying the filter. There are no intensity sliders on the distortion filters.

Geometric Filter: Center Offset - Initial / Targeted Area / Result

The Center Offset filter ‘pulls’ the image to the center of the circle, as if the image was on an elastic rubber sheet, and was stretched towards the center of the control circle.

Geometric Filter: Pixelate - Initial / Targeted Area / Result

The Pixelate filter distorts the image inside of the control circle by greatly enlarging the quantization factors in the affected area, causing a large ‘chunking’ of the picture. This renders the affected area virtually unrecognizable – this effect is often used in candid video to obfuscate the identity of a subject.

Geometric Filter: Bulge - Initial / Targeted Area / Result

The Bulge filter is similar to the Center Offset, but this time the image is ‘pulled into’ the control circle, as if a magnifying fish-eye lens was applied to just a portion of the image.
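All of these geometric filters boil down to the same mechanism: for each output pixel, compute where in the original image to sample from. Here is a rough Python sketch of a bulge-style radial remap within a circle – a generic illustration of the technique, not the app’s actual code (the ‘strength’ exponent is an arbitrary choice):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def bulge(channel, cx, cy, radius, strength=2.0):
        # channel: 2-D floating point array (apply per color channel for RGB)
        h, w = channel.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        dx, dy = xx - cx, yy - cy
        r = np.hypot(dx, dy)
        scale = np.ones_like(r)
        inside = (r > 0) & (r < radius)
        # Inside the control circle, sample from closer to the center,
        # which magnifies ('bulges') that part of the image.
        scale[inside] = (r[inside] / radius) ** (strength - 1.0)
        src_x = cx + dx * scale
        src_y = cy + dy * scale
        return map_coordinates(channel, [src_y, src_x], order=1, mode='nearest')

Squeeze, Swirl, Fish Eye and Light Tunnel differ only in the formula used to compute the source coordinates.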

Geometric Filter: Squeeze - Initial / Targeted Area / Result

The Squeeze filter is somewhat the opposite of the Bulge filter, with the image within the control circle being reduced in size and ‘pushed back’ visually.

Geometric Filter: Swirl - Initial / Targeted Area / Result

The Swirl filter does just that:  takes the image within the control circle and rotates it. Moving the little dot controls the amount and direction of the swirl. She needs a chiropractor after this…

Geometric Filter: Noise - Initial / Targeted Area / Result

The Noise filter works in a similar way to the Pixelate filter, only this time large-scale noise is introduced, rather than pixelation.

Geometric Filter: Light Tunnel - Initial / Targeted Area / Result

The Light Tunnel filter is probably a nod to Star Trek – what part of our common culture has not been affected by that far-seeing series? Remember the ‘communicator’?  Flip type cell phones, invented 30 years later, looked suspiciously like that device…

Geometric Filter: Fish Eye - Initial / Result

The Fish Eye filter mimics what a ‘fish eye’ lens might make the picture look like. There is no control circle on this filter – it is a fixed effect. The center of the image is the center of the fish-eye effect. In this case, it’s really not that strong of a curvature effect, to me it looks about like what a 12mm lens (on a 35mm camera system) would look like. If you want to see just how wide a look is possible, go to Nikon’s site and look for examples of their 6.5mm fisheye lens. That is wide!

Geometric Filter: Mirror - Initial / Result

The Mirror filter divides the image down the middle (vertically) and reflects the left half of the image onto the right side. There are no controls – it’s a fixed effect.

Borders:

Borders: Thin White / Rounded Black / Double Frame

Borders: White Frame / Polaroid / Stamp

Borders: Torn / Striped / Grainy

Ok, that’s it. Another iPhone camera app dissected, inspected, respected. Enjoy.

iPhone4S – Section 4b: Camera+ app

March 19, 2012 · by parasam

Camera+  A full-featured camera and editing application. Version described is 3.02

Feature Sets:

  • Light Table design for selecting which photos to Edit, Share, Save or get Info.
  • Camera Functions
    • Ability to split Focus area from Exposure area
    • Can lock White Balance
    • Flash: Off/Auto/On/Torch
    • Front or Rear camera selection
    • Digital Zoom
    • 4 Shooting Modes: Normal/Stabilized/Self-Timer/Burst
  • Camera options:
    • VolumeSnap On/Off
    • Sound On/Off
    • Zoom On/Off
    • Grid On/Off
    • Geotagging On/Off
    • Workflow selection:  Classic (shoot to Lightbox) / Shoot&Share (edit/share after each shot)
    • AutoSave selection:  Lightbox/CameraRoll/Both
    • Quality:  Full/Optimized (1200×1200)
    • Sharing:  [Add social services for auto-post to Twitter, Facebook, etc.]
    • Notifications:
      • App updates On/Off
      • News On/Off
      • Contests On/Off
  • Edit Functions
    • Scenes
      • None
      • Clarity
      • Auto
      • Flash
      • Backlit
      • Darken
      • Cloudy
      • Shade
      • Fluorescent
      • Sunset
      • Night
      • Portrait
      • Beach
      • Scenery
      • Concert
      • Food
      • Text
    • Rotation
      • Left
      • Right
      • Flip Horizontal
      • Flip Vertical
    • Crop
      • Freeform (variable aspect ratio)
      • Original (camera taking aspect ratio)
      • Golden rectangle (1:1.618 aspect ratio)
      • Square (1:1 aspect ratio)
      • Rectangular (3:2 aspect ratio)
      • Rectangular (4:3 aspect ratio)
      • Rectangular (4:6 aspect ratio)
      • Rectangular (5:7 aspect ratio)
      • Rectangular (8:10 aspect ratio)
      • Rectangular (16:9 aspect ratio)
    • Effects
      • Color – 9 tints
        • Vibrant
        • Sunkiss’d
        • Purple Haze
        • So Emo
        • Cyanotype
        • Magic Hour
        • Redscale
        • Black & White
        • Sepia
      • Retro – 9 ‘old camera’ effects
        • Lomographic
        • ‘70s
        • Toy Camera
        • Hipster
        • Tailfins
        • Fashion
        • Lo-Fi
        • Ansel
        • Antique
      • Special – 9 custom effects
        • HDR
        • Miniaturize
        • Polarize
        • Grunge
        • Depth of Field
        • Color Dodge
        • Overlay
        • Faded
        • Cross Process
      • Analog – 9 special filters (in-app purchase)
        • Diana
        • Silver Gelatin
        • Helios
        • Contessa
        • Nostalgia
        • Expired
        • XPRO C-41
        • Pinhole
        • Chromogenic
    • Borders
      • None
      • Simple – 9 basic border styles
        • Thick White
        • Thick Black
        • Light Mat
        • Thin White
        • Thin Black
        • Dark Mat
        • Round White
        • Round Black
        • Vignette
      • Styled – 9 artistic border styles
        • Instant
        • Vintage
        • Offset
        • Light Grit
        • Dark Grit
        • Viewfinder
        • Old-Timey
        • Film
        • Sprockets

Camera Functions

After launching the Camera+ app, the first screen the user sees is the basic camera viewfinder.

Camera view, combined focus & exposure box (normal start screen)

On the top of the screen the Flash selector button is on the left, the Front/Rear Camera selector is on the right. The Flash modes are: Off/Auto/On/Torch. Auto turns the flash off in bright light, on in lower light conditions. Torch is a lower-powered continuous ‘flash’ – also known as a ‘battery-killer’ – use sparingly! Virtually all of the functions of this app are directed to the high-quality rear-facing camera – the front-facing camera is typically reserved for quick low-resolution ID snaps, video calling, etc.

On the bottom of the screen, the Lightbox selector button is on the left, the Shutter release button is in the middle (with the Shutter Release Mode button just to the right-center), and on the right is the Menu button. The Digital Zoom slider is located on the right side of the frame (Digital Zoom will be discussed at the end of this section). Notice in the center of the frame the combined Focus & Exposure area box (square red box with “+” sign). This indicates that both the focus and the exposure for the entire frame are adjusted using the portion of the scene that is contained within this box.

You will notice that the bottle label is correctly exposed and focused, while the background is dark and out of focus.

The next screen shows what happens when the user selects the “+” sign on the upper right edge of the combined Focus/Exposure area box:

Split focus and exposure areas (both on label)

Now the combined box splits into two areas:  a Focus area (square box with bull’s eye), and an Exposure area (round circle resembling the adjustable f-stop ring in a camera lens). The exposure is now measured separately from the focus – allowing more control over the composition and exposure of the image.

In this case the resultant image looks like the previous one, since both the focus and the exposure areas are still placed on the label, which has consistent focus and lighting.

In the next example, the exposure area is left in place – on the label – but the focus area is moved to a point in the rear of the room. You will now notice that the rear of the room has come into focus, and the label has gone soft – out of focus. However, since the exposure area is unchanged, the relative exposure stays the same – label well lit – but the room beyond still dark.

Split focus and exposure areas, focus moved to rear of room

This level of control allows greater freedom and creativity for the photographer.  [please excuse the slight blurring of some of the screen shot examples – it’s not easy to hold the iPhone completely still while taking a screen shot – which requires simultaneously pressing the Home button and the Power button – even on a tripod]

The next image shows the results of selecting the little ‘padlock’ icon in lower left of the image – this is Lock/Unlock button for Exposure, Focus and White Balance (WB).

Showing 'lock' panel for White Balance (WB), exposure and focus

Each of the three functions (Focus, Exposure, White Balance) can be locked or unlocked independently.

Focus moved back to label, still showing lock panel

In the above example, the focus area has been moved back to the label, showing how the focus now returns to the label, leaving the rear of the room once again out of focus.

The next series of screens demonstrate the options revealed when the Shutter Release Mode button (the little gear icon to the right of the shutter button) is selected:

Shutter type 'Settings' sub-menu displayed, showing 4 options

The ‘Normal’ mode exposes one image each time the shutter button is depressed.

Shutter type changed to Stabilized mode

When the ‘Stabilizer’ mode is selected, the button icon changes to indicate this mode has been selected. This mode monitors the stability of the iPhone camera (and releases the shutter automatically) once the Stabilizer Shutter Release button is depressed – it is NOT a true motion-stabilized lens as in some expensive DSLR cameras. You still have to hold the camera still to get a sharp picture – this function just helps the user know that the camera is indeed still. Once the Stabilizer Shutter Release is pushed, it glows red if the camera is moving, and text on the screen urges the user to hold still. As the camera detects that motion has stopped (using the iPhone’s internal accelerometer – motion detector), little beeps sound, the shutter button changes color from red to yellow to green, and then the picture is taken.
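The underlying logic is simple enough to sketch: keep reading the accelerometer, and only fire the shutter once the recent readings settle below a jitter threshold. The Python below is just the gist of that idea (the sample window and threshold are made-up numbers, and it is obviously not the app’s actual iOS code):

    import numpy as np

    def camera_is_still(accel_samples, threshold=0.02):
        # accel_samples: list of recent (x, y, z) accelerometer readings in g,
        # e.g. the last half second of samples.
        a = np.asarray(accel_samples, dtype=float)
        jitter = np.std(np.linalg.norm(a, axis=1))
        return jitter < threshold   # True -> stop beeping, release the shutter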

Shutter type changed to Timer mode - screen shows beginning of 5sec countdown timer

The Self-Timer Shutter Release mode allows a time delay before the actual shutter release occurs – after the shutter button is depressed. The most common use for this feature is a self-portrait (of course you need either a tripod or other method of securing the iPhone so the composition does not change!). This mode can also be useful to avoid jiggling the camera while pressing the shutter release – important in low light situations. The count-down timer is indicated by the numbers in the center of the screen. Once the shutter is depressed, the numbers count down (in seconds) until the exposure occurs. The default time delay is 5 seconds; this can be adjusted by tapping the number on the screen before the shutter button is selected. The choices are 5, 15 and 30 seconds.

Shutter type changed to Burst mode

The final of the four shutter release modes is the ‘Burst’ mode. This exposes a short series of exposures, one right after the other. This can be useful for sports or other fast moving activity, where the photographer wants to be sure of catching a particular moment. The number of exposures taken is a function of how long you hold down the shutter release – the camera keeps taking pictures as fast as it can as long as you hold down the shutter.

There are a number of things to be aware of while using this mode:

  • You must be in the ‘Classic’ Workflow, not the ‘Shoot & Share’ workflow (more on this below when we discuss that option)
  • The best performance is obtained when the AutoSave mode is set to ‘Lightbox’ – writing directly to the Camera Roll (using the ‘Camera Roll’ option) is slower, leading to more elapsed time between each exposure. The last option of AutoSave (‘Lightbox & CameraRoll’) is even slower, and not recommended for burst mode.
  • The resolution of burst photos is greatly reduced (from 3264×2448 down to 640×480). This is the only way the data from the camera sensor can be transferred quickly enough – but it is one of the big differences between the iPhone camera system and a DSLR. The full resolution is 8 megapixels, while the burst resolution is only 0.3 megapixels – more than 25x less resolution (see the quick calculation below).
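
As a quick sanity check on those numbers (my own arithmetic, nothing from the app itself), the pixel counts work out as follows:

    # rough pixel-count comparison between full-resolution and burst-mode frames
    full_pixels = 3264 * 2448       # ~8.0 megapixels
    burst_pixels = 640 * 480        # ~0.3 megapixels
    print(full_pixels, burst_pixels, round(full_pixels / burst_pixels))  # ~26x fewer pixels per burst frame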

Resultant picture taken with exposure & focus on label

The above shot is an actual unretouched image using the settings from the first example (focus and exposure areas both set on label of the bottle).

Here is an example of how changing the placement of the Exposure Area box within the frame affects the outcome of the image:

exposure area set on wooden desktop - normal range of exposure

exposure area set on white dish - resulting picture is darker than normal

exposure area set on black desk mat - resulting image is lighter than normal

To fully understand what is happening above you need to remember that any camera light metering system sets the exposure assuming that you have placed the exposure area on a ‘middle gray’ value (Zone V). If you place the exposure measurement area on a lighter or darker area of the image the exposure may not be what you envisioned: metering on the white dish makes the camera darken the whole frame, while metering on the black desk mat makes it brighten the frame. Further discussion of this topic is outside the scope of this blog – but it’s very important, so if you don’t know – look it up.
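
If you’d like a concrete picture of what the meter is doing, here is a minimal Python sketch of generic average metering – not Camera+’s actual code, and the numbers are purely illustrative – showing why a bright metering target drives the exposure down and a dark target drives it up:

    import numpy as np

    MIDDLE_GRAY = 0.18   # reflectance the meter assumes it is pointed at

    def metered_exposure(luminance, box):
        """Scale the whole frame so the metered patch averages out to middle gray.

        luminance : float array in [0, 1], shape (height, width)
        box       : (top, bottom, left, right) region under the exposure square
        """
        top, bottom, left, right = box
        patch_mean = luminance[top:bottom, left:right].mean()
        gain = MIDDLE_GRAY / patch_mean          # bright patch -> gain < 1 -> darker picture
        return np.clip(luminance * gain, 0.0, 1.0)

    # a mid-toned frame with a bright 'white dish' in one corner
    frame = np.full((100, 100), 0.4)
    frame[:30, :30] = 0.9

    print(metered_exposure(frame, (0, 30, 0, 30)).mean())      # metered on the dish: dark result
    print(metered_exposure(frame, (60, 100, 60, 100)).mean())  # metered on mid tones: normal result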

The Lightbox

The next step after shooting a frame (or 20) is to process (edit) the images. This is done from the Lightbox. This function is entered by pressing the little icon of a ‘film frame’ on the left of the bottom control bar.

empty Lightbox, ready to import a photograph for editing

The above case shows an empty Lightbox (which is how the app looks after all shots are edited and saved). If you have just exposed a number of images, they will be waiting for you in the Lightbox – you will not need to import them. The following steps are for when you are processing previously exposed images – and it doesn’t matter whether they were shot with Camera+ or any other camera app. I sometimes shoot with my film camera, scan the film, import the scans to the iPhone and edit with Camera+ in order to use a particular filter that is available.

selection window of the Lightbox image picker

an image selected in the Lightbox image picker, ready for loading into the Lightbox for editing

image imported into the Lightbox, ready for an action

When entering the Edit mode after loading an image, the following screen is displayed. There are two buttons on the top:  Cancel and Done. Cancel returns the user to the Lightbox, abandoning any edits or changes made while in the Edit screen, while Done applies all the edits made and returns the user to the Lightbox where the resultant image can be Shared or Saved to the Camera Roll.

Along the bottom of the screen are two ribbons showing all the edit functions. The bottom ribbon selects the particular edit mode, while the top ribbon selects the actual Scene, Rotation, Crop, Effect or Border that should be applied. The first set of individual edit functions that we will discuss are the Scenes. The following screen shots show the different Scene choices in the upper ribbon.

Scene modes 1 - 5

Scene modes 6 - 9

Scene modes 10 - 14

Scene modes 15 - 17

The ‘Scenes’ that Camera+ offers are one of the most powerful functions of this app. Nevertheless, there are some quirks (mostly about the naming – and the most appropriate way to apply the Scenes, based on the actual content of your image) that will be discussed. The first thing to understand is the basic difference between Scenes and Effects. Both, at the most fundamental level, transform the brightness, contrast, color, etc. of the image (essentially the visual qualities of the image) – as opposed to the spatial qualities of the image that are adjusted with Rotation, Crop and Border. However, a Scene typically adjusts overall contrast, color balance, and sometimes white balance, brightness and so on. An Effect is a more specialized filter – often significantly distorting the original colors, changing brightness or contrast in a certain range of values, etc. – the purpose being to introduce a desired effect to the image. Many times a Scene can be used to ‘rescue’ an image that was not correctly exposed, or to change the feeling, mood, etc. of the original image. Another way to think about a Scene is that the result of applying a Scene will still almost always look as if the image had just been taken by the camera, while an Effect very often is clearly an artificially applied filter.

In order to best demonstrate and evaluate the various Scenes that are offered, I have assembled a number of images that show a “before and after” of each Scene type. Within each Scene pair, the left-hand image is always the unadjusted original image, while the right-hand image has the Scene applied. The first series of test images is constructed with two comparisons of each Scene type: the first image pair shows a calibrated color test chart, the second image pair shows a woman in a typical outdoor scene. The color chart can be used to analyze how various ranges of the image (blacks, grays, whites, colors) are affected by the Scene adjustment; while the woman subject image is often a good representation of how the Scene will affect a typical real-world image.

After all of the Scene types are shown in this manner, I have added a number of sample images, with certain Scene types applied – and discussed – to better give a feeling of how and why certain Scene types may work best in certain situations.

Scene = Clarity

Scene = Clarity

The Clarity scene type is one of the most powerful scene manipulations offered – it’s not an accident that it is the first scene type in the ribbon… The power of this scene is not that obvious from the color chart, but it is more obvious in the human subject. This particular subject, while it shows most of the attributes of the clarity filter well, is not ideally suited for application of this filter – better examples follow at the end of this section. The real effect of this scene is to cause an otherwise flat image to ‘pop’ more, and have more visual impact. However, just like in other parts of life – less is often more. My one wish is that an “intensity slider” was included with Scenes (it is only offered on Effects, not Scenes) – as many times I feel that the amount of Clarity is overblown. There are techniques to accomplish a ‘toning down’ of Clarity, but those will only be discussed in Part 5 of this series – Tips & Techniques for iPhonography – as currently this requires the use of multiple apps – which is beyond the scope of the app introduction in this part of the series. The underlying enhancement appears to be a spatially localized increase of contrast, and an increase in vibrance and saturation of color.

Notice in the gray scale of the chart that the edges of each density chip are enhanced – but the overall gamma is unchanged (the steps from white to black remain even and separately identifiable). Look at the color patches – there is an increase in saturation (vividness of color) – but this is more pronounced in colors that are already somewhat saturated. For instance, look at the pastel colors in the range of columns 9-19 and rows B-D:  there is little change in overall saturation. Now look at, for instance, the saturated reds and greens of columns 17-18 and rows J-L:  these colors have picked up noticeably increased saturation.

Looking at the live subject, the local increase in contrast can easily be seen in her face, with the subtle variations in skin tone in the original becoming much more pronounced in the Clarity scene type. The contrast between the light print on her dress and the gray background is more obvious with Clarity applied. Observe how the wrinkles in the man’s shirt and shorts are much more obvious with Clarity. Notice the shading on the aqua-colored steel piping in the lower left of the image: in the original the square pipes look very evenly illuminated, with Clarity applied there is a noticeable transition from light to dark along the pipe.
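
Camera+ doesn’t publish how Clarity is built, but the behavior described above – a spatially localized contrast boost plus extra color saturation – can be approximated with a wide-radius unsharp mask followed by a color enhance step. A rough Python/Pillow sketch (the radius, percent and saturation values are my guesses, and the file names are placeholders):

    from PIL import Image, ImageEnhance, ImageFilter

    def clarity_like(img, radius=40, percent=30, saturation=1.25):
        """Approximate a 'Clarity'-style scene: local contrast plus extra saturation.

        A large-radius, low-percent unsharp mask raises contrast between
        neighboring regions without changing the overall gamma; a mild color
        enhance then lifts saturation (most visible in already-vivid colors).
        """
        local_contrast = img.filter(
            ImageFilter.UnsharpMask(radius=radius, percent=percent, threshold=2))
        return ImageEnhance.Color(local_contrast).enhance(saturation)

    clarity_like(Image.open("original.jpg").convert("RGB")).save("clarity_approx.jpg")

The large radius with a low percent is the key: it raises contrast between neighboring regions without touching the overall gamma, which matches what the gray-scale chips show in the chart comparison.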

Scene = Auto

Scene = Auto

The Auto scene type is sort of like using a ‘point and shoot’ camera set to full automatic – the user ideally doesn’t have to worry about exposure, etc. Of course, in this case since the image has already been exposed there is a limit to the corrections that can be applied! Basically this Scene attempts to ensure full tonal range in the image and will manipulate levels, gamma, etc. to achieve a ‘centered’ look. For completeness I have included this Scene – but as you will notice, and this should be expected – there is almost no difference between the before and after images. With correctly exposed initial images this is what should happen… It may not be apparent on the blog, but when looking carefully at the color chart images on a large calibrated monitor the contrast is increased slightly at both ends of the gray scale:  the whites and blacks appear to clip a little bit.

Scene = Flash

Scene = Flash

The Flash scene type is an attempt to ‘repair’ an image that was taken in low light without a flash – when it was probably needed. Again, like any post-processing technique, this Scene cannot make up for something that is not there:  areas of shadow in which there was no detail in the original image can at best only be turned into lighter noise… But in many cases it will help underexposed images. The test chart clearly shows the elevation in brightness levels all through the gray scale – look for example at gray chip #11 – the middle gray value of the original is considerably lightened in the right-hand image. This Scene works best on images that are of overall low light level – as you can see from both the chart and the woman, areas of the picture that are already well-lit tend to be blown out and clipped.
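
My guess – and it is only a guess – is that a ‘Flash’ style rescue is essentially a shadow-weighted brightness lift: a gamma curve plus a little gain, which lightens the dark end far more than the light end and clips whatever was already near white. A Python sketch with illustrative curve values and placeholder file names:

    import numpy as np
    from PIL import Image

    def flash_like(img, gamma=0.6, gain=1.1):
        """Lift shadows aggressively; already-bright areas may clip, as noted above."""
        x = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
        lifted = np.clip(gain * np.power(x, gamma), 0.0, 1.0)   # gamma < 1 lifts the dark end most
        return Image.fromarray((lifted * 255).astype(np.uint8))

    flash_like(Image.open("dim_shot.jpg")).save("flash_approx.jpg")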

Scene = Backlit

Scene = Backlit

The Backlit scene type tries to correct for the ‘silhouette’ effect that occurs when strong light is coming from behind the subject without corresponding fill light illuminating the subject from the front. While I agree that this is a useful scene correction, it is hard to perform after the fact – and personally I think this is one Scene that is not well executed. My issue is with the over-saturation of reds and yellows (Caucasian skin tones) that more often than not makes the subject look like a boiled lobster. I think this comes from the attempt to raise the perceived brightness of an overly dark skin tone (since the most common subject in such a situation is a person standing in front of a brightly lit background). You will notice on the chart that the gray scale is hardly changed from the original (a slight overall brightness increase) – but the general color saturation is raised. A very noticeable increase in red/orange/yellow saturation is obvious:  look at the red group of columns 7-8 and rows A-B. In the original these four squares are clearly differentiated – in the ‘after’ image they have merged into a single fully saturated area. A glance at the woman’s image also shows overly hot saturation of skin tones – even the man in the background has a hot pink face now. So, to summarize, I would reserve this Scene for very dark silhouette situations – where you need to rescue an otherwise potentially unusable shot.

Scene = Darken

Scene = Darken

The Darken scene type does just what it says – darkens the overall scene fairly uniformly. This can often help a somewhat overexposed scene. It cannot fix one of the most common problems with digital photography however:  the clipping of light areas due to overexposure. As explained in a previous post in this series, once a given pixel has been clipped (driven into pure white due to the amount of light it has received) nothing can recover this detail. Lowering the level will only turn the bright white into a dull gray, but no detail will come back. Ever. A quick look at the gray scale in the right-hand image clearly shows the lowering of overall brightness – with the whites manipulated more than the blacks. For instance, chip #1 turns from almost white into a pale gray, while chip #17 shows only a slight darkening. This is appropriate so the blacks in the image are not crushed. You will notice by looking at the colors on the chart that the darkening effect is pretty much luminance only – no change in color balance. The apparent increase in reds in the skin tone of the woman is a natural side-effect of less luminance with chrominance held constant – once you remove the ‘white’ from a color mixture the remaining color appears more intense. You can see the same effect in the man’s shirt and the yellow background. Ideally the saturation should be reduced slightly as well as the luminance in this type of filter effect – but that can be tricky with a single filter designed to work with any kind of content. Overall this is a useful filter.
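
The ‘luminance only, chrominance held constant’ behavior is easy to reproduce: convert to a luma/chroma representation, scale only the luma channel, and leave the color channels alone – which is also why the remaining colors then look a touch more intense. A minimal sketch (the 0.8 factor and file names are my own choices, not the app’s):

    from PIL import Image

    def darken_like(img, factor=0.8):
        """Darken luminance uniformly while leaving chrominance untouched."""
        y, cb, cr = img.convert("YCbCr").split()
        y = y.point(lambda v: int(v * factor))       # whites lose more, in absolute terms, than blacks
        return Image.merge("YCbCr", (y, cb, cr)).convert("RGB")

    darken_like(Image.open("overexposed.jpg")).save("darken_approx.jpg")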

Scene = Cloudy

Scene = Cloudy

The Cloudy scene type appears to normalize the exposure and color shift that occurs when the subject is photographed under direct cloudy skies. This is in contrast to the Shade scene type (discussed next) where the subject is shot in indirect light (open shade) while illuminated from a clear sky. This is mostly a color temperature problem to solve – remember that noon sunlight is approximately 5500°K (degrees Kelvin is a measure of color temperature, low numbers are reddish, high numbers are bluish, middle ‘white’ is about 5000°K). Illumination from a cloudy sky is often very ‘blue’ (in terms of color temperature) – between 8000°K – 10000°K, while open shade is less so, usually between 7000°K – 8000°K. If you compare the two scene types (Cloudy and Shade) you will notice that the image is ‘warmed up’ more with the Cloudy scene type. There is a slight increase in brightness but the main function of this scene type is to warm up the image to compensate for the cold ‘look’ often associated with shots of this type. You can see this occurring in the reddening of the woman’s skin tones, and the warming of the gray values of the sidewalk between her and the man.
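
A warming correction like this is usually implemented as simple per-channel gains – nudge red up, pull blue down, perhaps with a whisper of extra brightness – rather than a true color-temperature conversion. A hedged sketch of that idea (gain values and file names are purely illustrative):

    import numpy as np
    from PIL import Image

    def warm_like(img, r_gain=1.08, b_gain=0.92, brightness=1.03):
        """Counteract a bluish (high color temperature) cast with per-channel gains."""
        x = np.asarray(img.convert("RGB"), dtype=np.float32)
        x = x * np.array([r_gain, 1.0, b_gain], dtype=np.float32) * brightness
        return Image.fromarray(np.clip(x, 0, 255).astype(np.uint8))

    warm_like(Image.open("cloudy_shot.jpg")).save("cloudy_approx.jpg")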

Scene = Shade

Scene = Shade

The Shade scene type is similar in function to the Cloudy scene (see above for comparison) but differs in two areas: There is a noticeable increase in brightness (often images exposed in the shade are slightly underexposed) and there is less warming of the image (due to open shade being warmer in color temperature than a full cloudy scene illumination). One easy way to compare the two scene types is to examine the color charts – looking at the surround (where the numbers and letters are) – the differences are easy to see there. A glance at the test shot of the woman shows a definite increase in brightness, as well as a slight warming – again look at the skin tones and the sidewalk.

Scene = Fluorescent

Scene = Fluorescent

The Fluorescent scene type is designed to correct for images shot under fluorescent lighting. This form of illumination is not a ‘full-spectrum’ light source, and as such has some unnatural effects when used to take photographs. Our human vision corrects for this – but film or digital exposures do not – so such photos tend to have a somewhat green/cyan cast to them. In particular this makes light-colored skin look a bit washed out and sickish. The only real change I can see in this scene filter is an increase in magenta in the overall color balance (which will help to correct for the green/cyan shift under fluorescent light). The difference introduced is very small – I think it will help in some cases and may be insufficient in others. It is more noticeable in the image of the woman than the test chart (her skin tones shift towards a magenta hue).

Scene = Sunset

Scene = Sunset

The Sunset scene type is the first of a slightly different kind of scene filter – ones that are apparently designed to make an image look like the scene name, instead of correcting the exposure for images taken under the named situation (like Shade, Cloudy, etc.). This twist in naming conventions is another reason to always really test and understand your tools – know what the filter you are applying does at a fundamental level and you will have much better results. It’s obvious from both the chart and the woman that a marked increase in red/orange is applied by this scene type. There is also a general increase in saturation – just more in the reds than the blues. Look at the man’s shirt and shorts to see how the blue saturation increases. Again, like the Backlit filter, I personally feel this effect is a bit overdone – and within Camera+ there is no simple way to remedy this. There are methods, using another app to follow the corrections of this app – these techniques will be discussed in the next of the series (Tips & Techniques for iPhonography).

Scene = Night

Scene = Night

The Night scene type attempts to correct for images taken at night under very low illumination. This scene is a bit like the Flash type – but on steroids! The full gray scale is pushed towards white a lot, with even the darkest shadow values receiving a noticeable bump in brightness. There is also a general increase in saturation – as colors tend to be undersaturated when poorly illuminated. Of course this will make images that are normally exposed look greatly overexposed (see both the chart and the woman), but it still gives you an idea of how the scene filter works. Some better ‘real-world’ examples of the Night scene type follow this introduction of all the scene types.

Scene = Portrait

Scene = Portrait

The Portrait scene type is basically a contrast and brightness adjustment. It’s easy to see in the chart comparison, both in the gray scale and on the color chip chart. Look first at the gray scale: all of the gray and white values from chip #17 and lighter are raised, the chips below #17 are darkened. Chips #1 and #2 are now both pure white, having no differentiation. Likewise with chips #20-22, they are pure black. The area defined by columns 13-19 and rows A-C are now pure white, compared to the original where clear differences can be noted. Likewise row L between columns 20-22 can no longer be differentiated. In the shot of the woman, this results in a lightening of skin tones and increased contrast (notice her black bag is solid black as opposed to very dark gray; the patch of sunlight just above her waist is now solid white instead of a bright highlight). Again, like several scene effects I have noted earlier, I find this one a little overstated – I personally don’t like to see details clipped and crushed – but like any filter, the art is in the application. This can be very effective on a head shot that is flat and without punch. The trick is applying the scene type appropriately. Knowledge furthers…

Scene = Beach

Scene = Beach

The Beach scene type looks like the type of filter first mentioned in Sunset (above). In other words, an effect to make the image look like it was taken on a beach (or at least that type of light!). It’s a little bit like the previous Portrait scene (in that there is an increase in both brightness and contrast – but less than Portrait) but also has a bit of Sunset in it as well (increased saturation in reds and yellows – but again not as much as Sunset). While the Sunset type had greater red saturation, this Beach filter is more towards the yellow. See column #15 in the chart – in the original each square is clearly differentiated, in the right-hand image the more intense yellows are very hard to tell apart. Just to the right, in column #17, the reds have not changed that much. When looking at the woman, you can see that this scene type makes the image ‘bright and yellow/red’ – sort of a ‘beachy’ effect I guess.

Scene = Scenery

Scene = Scenery

The Scenery scene type produces a slight increase in contrast, along with increased saturation in both reds and blues – with blue getting a bit more intensity. Easy to compare using the chart, and in the shot of the woman this can be seen in the reddish skin tone, as well as significantly increased saturation in the man’s shirt and shorts. While this effect makes people look odd, it can work well on so-called “postcard” landscape shots (which tend to be flat panoramic views with low contrast and saturation). However, as you will see in the ‘real-world’ examples below, often a different scene or filter can help out a landscape in an even better way – it all depends on the original photo with which you are starting.

Scene = Concert

Scene = Concert

The Concert scene type is oddly enough very similar to the previous scene type (Scenery) – just turned up really loud! A generalized increase in contrast, along with red and blue saturation increases – it attempts to make your original scene look like, well, I guess a rock-and-roll concert… Normal exposures (see the woman test shot) come out ‘hot’ (overly warm, contrasty, with elevated brightness), but if you need some color and punch in your shot, or desire an overstated effect, this scene type could be useful.

Scene = Food

Scene = Food

The Food scene type offers a slight increase in contrast and a bit of increased saturation in the reds. This can be seen in both the charts and the woman shot. It’s less overdone than several of the other scene types, so keep this in mind for any shot that needs just a bit of punch and warming up – not just for shots of food. And again, I see this scene type in the same vein as Beach, Sunset, etc. – an effect to make your shot look like the feeling of the filter name, not to correct shots of food…

Scene = Text

Scene = Text

The Text scene type is an extreme effect – and best utilized as its name implies: to help shots of text material come out clearly and legibly. An example of actual text is shown below, but you can see from the chart that this is accomplished by a very high contrast setting. The apparent increase in saturation in the test chart is really more a result of the high contrast – I don’t think a deliberate increase in saturation was added to this effect.
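
That ‘very high contrast setting’ can be approximated with an aggressive contrast enhancement, or – at the extreme – a straight threshold to pure black and white. A quick sketch (the factor, cutoff and file names are arbitrary, not taken from the app):

    from PIL import Image, ImageEnhance

    def text_like(img, factor=3.0):
        """Push contrast hard so text separates cleanly from the page."""
        return ImageEnhance.Contrast(img.convert("RGB")).enhance(factor)

    def text_threshold(img, cutoff=128):
        """A more extreme variant: pure black-and-white thresholding."""
        return img.convert("L").point(lambda v: 255 if v >= cutoff else 0)

    text_like(Image.open("receipt.jpg")).save("text_approx.jpg")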

(Note:  the ‘real-world’ examples referred to in several of the above explanations will be shown at the very end of this discussion, after we have illustrated the remaining basic edit functions (Rotation, Crop, Effects and Borders) in a similar comparative manner as above with the Scene types. This is due to many of the sample shots combining both a Scene and an Effect (one of the powerful capabilities of Camera+) – I want the viewer to fully understand the individual instruments before we assemble a symphony…)

The next group of Edit functions is the image Rotation set.

Rotation submenu

The usual four choices of image rotation are presented.

The Crop functions are displayed next.

group 1 of crop styles

group 2 of crop styles

group 3 of crop styles

The Effects (FX) filters, after the Scene types, are the most complex image manipulation filters included in the Camera+ app. As opposed to Scenes, which tend to affect overall lighting, contrast and saturation – and keep a somewhat realistic look to the image – the Effects filters seriously bend the image into a wide variety of (often but not always) unrealistic appearances. Many of the effects mimic early film processes, emulsions or camera types; some offer recent special filtering techniques (such as HDR – High Dynamic Range photography), and so on.

The Effects filters are grouped into four sections:  Color, Retro, Special and Analog. The Analog filter group requires an in-app purchase ($0.99 at the time of this article). In a similar manner to how Scene types were introduced above, an image comparison is shown along with the description of each filter type. The left-hand image is the original, the right-hand image has the specific Effects filter applied. Some further ‘real-world’ examples of using filters on different types of photography are included at the end of this section.

Color Effects filters

Color Effects filters

Color Filter = Vibrant

The Vibrant filter significantly increases the saturation of colors, and appears to enhance the red channel more than green or blue.

Color Filter = Sunkiss'd

The Sunkiss’d filter is a generalized warming filter. Notice that, in contrast to the Vibrant filter above, even the tinfoil dress on the left-hand dancer has turned a warm gold color – whereas the Vibrant filter did not alter the silver tint, as that filter has no effect on portions of the image that are monochrome. Also, all colors in Sunkiss’d are moved towards the warm end of the spectrum – note the gobo light effect projected above the dancers: it is pale blue in the original, and becomes a warmer green/aqua after the application of Sunkiss’d.

Color Filter = Purple Haze

The Purple Haze filter does not provide hallucinogenic experiences for the photographer, but does perhaps simulate what the retina might imagine… Increased contrast, increased red/blue saturation and a color shift towards.. well… purple.

Color Filter = So Emo

The So Emo filter type (So Emotional?) starkly increases contrast and shifts the color balance towards cyan; as a counterpoint to the overall cyan tint there appears to be a narrow-band enhancement of magenta – notice how the tulle skirt on the center dancer stays vivid when it should otherwise be almost monochromatic under so much cyan shift in the color balance. However, the flesh tones of the dancers’ legs (more reddish) are rendered almost colorless by the cyan tint; this shows that the enhancement is narrow-band and does not include red.

Color Filter = Cyanotype

The Cyanotype effects filter is reminiscent of early photogram techniques (putting plant leaves, etc. on photo paper sensitized with the cyanotype process in direct sunlight to get a silhouette exposure). This is the same process that makes blueprints. Later it was used (in a similar way as sepia toning) to tint black & white photographs. In the case of this effects filter, the image is first rendered to monochrome, then subsequently tinted with a slightly yellowish cyan color.
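
‘Render to monochrome, then tint’ is only a couple of lines with most imaging libraries; the same skeleton with warm brown endpoints gives a Sepia-style result. A sketch of the general approach (the tint endpoints are my rough approximation of what this filter produces, and the file names are placeholders):

    from PIL import Image, ImageOps

    def tinted_mono(img, shadow=(10, 40, 60), highlight=(225, 250, 245)):
        """Grayscale extraction followed by a duotone tint (cyanotype-ish endpoints here)."""
        gray = ImageOps.grayscale(img)                          # drop all color information
        return ImageOps.colorize(gray, black=shadow, white=highlight)

    tinted_mono(Image.open("portrait.jpg")).save("cyanotype_approx.jpg")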

Color Filter = Magic Hour

The Magic Hour filter attempts to make the image look like it was taken during the so-called “Magic Hour” – the last hour before sunset – when many photographers feel the light is best for all types of photography, particularly landscape or interpretive portraits. Brightness is raised, contrast is slightly reduced and a generalized warming color shift is applied.

Color Filter = Redscale

The Redscale effects filter is a bit like the previous Magic Hour, but instead of a wider spectrum warming, the effect is more localized to reds and yellows. The contrast is slightly raised, instead of lowered as for Magic Hour, and the effect of the red filter can clearly be seen on the gobo light projected above the dancers:  the original cyan portion of the light is almost completely neutralized by the red enhancement, leaving only the green portion of the original aqua light remaining.

Color Filter = Black White

The Black & White filter does just what it says: renders the original color photograph into a monochrome-only version. It looks like this is a simple monochrome conversion of the RGB channels (color information deleted, only luminance kept). There are a number of advanced techniques for rendering a color image into a high quality black & white photo – it’s not as simple as it sounds. If you look at a great black and white image from a film camera, and compare it to a color photograph of the same scene, you will see what I mean. There are specialized apps for taking monochrome pictures with the iPhone (one of which will be reviewed later in this series of posts); and there is a whole set of custom filters in Photoshop devoted to just this topic – getting the best possible conversion to black & white from a color original. In many cases however a simple filter like this will do the trick.

Color Filter = Sepia

The Sepia filter, like Cyanotype, is a throwback to the early days of photography – before color film – when black & white images were toned to increase interest. In the case of this digital filter, the image is first turned into monochrome, then tinted with a sepia tone via color correction.

Retro Effects filters

Retro Effects filters

Retro Filter = Lomographic

The Lomographic filter effect is designed to mimic some of the style of photographs produced by the original LOMO Plc camera company of Russia (Leningrad Optical Mechanical Amalgamation). This was a low cost automatic 35mm film camera. While still in production today, this and similar cameras account for only a fraction of LOMO’s output – the bulk is military and medical optical systems, and those are world class… Due to the low cost of components and production methods, the LOMO camera exhibited frequent optical defects in imaging, color tints, light leaks, and other artifacts. While anathema to professional photographers, a large community that appreciates the quirky effects of this (and other so-called “Lo-Fi” or Low Fidelity) cameras has sprung up with a world-wide following. Hence the Lomographic filter…

As with all my analysis of Scenes and Effects, I have no direct knowledge of how the effect is produced; I am applying my scientific training and decades of photographic experience to describe what I feel is a likely design, based on empirical study of the result. That said, this effect appears to add increased contrast and a greenish/yellow tint in the mid-tones (notice that the highlights, such as the white front stage, stay almost pure white). A narrow-band enhancement of red/magenta keeps the skin tones and the center dancer’s dress from desaturating in the face of the green tint.

Retro Filter = '70s

The ’70s effect is another nod to the look of older film photographs, this one more like what Kodachrome looked like when the camera was left in a hot car… All film stock is heat sensitive, with color emulsions, particularly older ones, being even more so. While at first this filter has a resemblance to the Sunkiss’d color filter, the difference lies in the multi-tonal enhancements of the ’70s filter. The reds are indeed punched up, but that’s in the midtones and shadows – the highlights take on a distinct greenish cast. Notice that once the enhanced red nulls out the cyan in the overhead gobo projection, the remaining highlights turn bright green – with a similar process occurring on the light stage surface.

Retro Filter = Toy Camera

The Toy Camera effects filter emulates the low-cost roll-film cameras of the ’50s and ’60s – with the light leaks, uneven processing, poor focus and other attributes often associated with photographs from that genre of camera. Increased saturation, a slightly raised black level, spatially localized contrast enhancement (a technique borrowed from HDR filtering – notice how the slight flare on the far right, over the seated woman’s head, becomes a bright hot flare in the filtered image), and streaking to simulate light leaks on film all add to the multiplicity of effects in this filter.

Retro Filter = Hipster

The Hipster effect is another of the digital memorials to the original Hipstamatic camera – a cheap all-plastic 35mm camera that shot square photos. A copy of an original low-cost Russian camera, it was produced by the two brothers who invented it in a run of only 157 units. The camera cost $8.25 in 1982 when it was introduced. With a hand-molded plastic lens, this camera was another of the “Lo-Fi” group of older analog film cameras whose ‘look’ has once again become popular. As derived by the Camera+ crew, the Hipster effect offers a warm, brownish-red image. It is achieved apparently with raised black levels (a common trait of cheap film cameras: the backs always leaked a bit of light, so a low-level ‘fog’ of the film base tended to raise deep blacks [areas of no light exposure in a negative] to a dull gray); a pronounced color shift towards red/brown in the midtones and lowlights; and an overall white level increase (note the relative brightness of the front stage between the original and the filtered version).

Retro Filter = Tailfins

The Tailfins retro effect is yet another take on the ’50s and ’60s – with an homage to the 1959 Cadillac no doubt – the epitome of the ‘tailfin era’. It’s similar to the ’70s filter described above, but lacks the distinct ‘overcooked Kodachrome’ look with the green highlights. Red saturation is again pushed up, as well as overall brightness. Once again blacks are raised to simulate the common film fog of the day. Lowered contrast finishes the look.

Retro Filter = Fashion

The Fashion effects filter is an interesting and potentially very useful filter. Although I am sure there are styles of fashion photography that have used this muted look, the potential uses for this filter extend far beyond fashion or portraiture. Essentially this is a desaturating filter that also warms the lowlights more than the highlights. Notice the rear wall – almost neutral gray in the original, a very warm gray in the filtered version. The gobo projected light, the greenish-yellow spill on the ceiling, the center dancer’s dress – all greatly desaturated. The contrast appears just a bit raised:  the white front stage is brighter than the original version, and the black dress of the right-hand dancer is darker. With so many photos today – and filters – that tend to make things go pop! bang! and sparkle! it’s sometimes nice to present an image that is understated, but not cold. This just might be a useful tool to help tell that story.

Retro Filter = Lo-Fi

The Lo-Fi filter is another retro effect that is similar in some respects to the Toy Camera filter reviewed above, but does not exhibit the light streak and obvious film fog artifacts. It again provides an unnatural intensity of color – this time through greatly increased saturation. There is also an increase in contrast – note the front stage is nearly pure white and the ceiling to the right of the gobo projection has gone almost pure black. There is a non-uniform assignment of color balance and saturation, dependent on the relative luminance of the original scene. The lighter the original scene, the less saturation is added:  compare the white stage to the dark gray interior of the large “1” on the back wall.

Retro Filter = Ansel

The Ansel filter is of course a tip of the hat to the iconic Ansel Adams – one of the premier black & white photographers ever. Although… Ansel would likely have something to say about separation of gray values in the shadows, particularly around Zones II – III.  Compared to the ‘Black & White’ color filter discussed earlier, this filter is definitely of a higher contrast. Personally, I think the blacks are crushed a bit – most of the detail is lost in the black dress, and the faces of the dancers are almost lost now in dark shadow. But for the right original exposure, this filter will offer more dynamism than the “Black & White” filter.

Retro Filter = Antique

The Antique effects filter is in the same vein as Sepia and Cyanotype: a filter that first extracts a monochrome image from the color original, then tints it – in this case with a yellow cast. The contrast is increased as well.

Special Effects filters

Special Effects filters

Special Filter = HDR @ 100%

Special Filter = HDR @ 50%

The HDR Special filter is, along with the Clarity Scene type, one of the potentially more powerful filters in this entire application. Because of this (and to demonstrate the Intensity Slider function) I have inserted two examples of this filter, one with the intensity set at 100%, and one with the intensity at 50%. All of the Effects filters have an intensity control, so the relative level of the effect can be adjusted from 0-100%. All of the other examples are shown at full intensity to discuss the attributes of the filter with the greatest ease, but many times a lessening of the intensity will give better results. That is nowhere more evident than with the HDR effect. This term stands for High Dynamic Range photography. Normally, this can only be performed with multiple exposures of precisely the same shot in the camera – then through complex post-production digital computations, the two (or more) images are superimposed on top of each other, with the various parts of the images seamlessly blended. The whole purpose of this is to make a composite image that has a greater range of exposure than was possible with the taking camera/sensor/film.
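
A quick aside on that Intensity Slider: whether or not Camera+ does it exactly this way, an intensity control of this kind is almost always just a crossfade between the untouched image and the fully filtered one. A minimal sketch:

    from PIL import Image

    def apply_with_intensity(original, filtered, intensity=0.5):
        """Blend the untouched and fully-filtered images (they must share size and mode)."""
        return Image.blend(original, filtered, alpha=intensity)   # intensity=1.0 is the full effect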

The usual reason for this is an extreme range of brightness. An example:  if you stand in a dimly lit barn and shoot a photograph out the open barn door at the brightly lit exterior at noon, the ratio of brightness from the barn interior to the exterior scene can easily approach 100,000:1 – which is impossible for any medium, whether film or digital, to capture in a single exposure. The widest range film stock ever produced could capture about 14 stops – about 16,000:1. And that is theoretical – once you add the imperfections of lens, the small amount of unavoidable base fog and development noise, 12 stops (about 4,000:1) is more realistic. With CCD arrays (high quality digital sensors as found in expensive DSLR cameras, not the CMOS sensors used in the iPhone), it is theoretically possible to get about the same. While the top of the line DSLRs boast a 16-bit converter, and do output 16-bit images, the actual capability of the sensor is not that good. I personally doubt it’s any better than the 12 stops of a good film camera – and that only on camera backs costing the same as a small car…
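
For reference, a ‘stop’ is a doubling of light, so the contrast ratio spanned by n stops is simply 2 to the power n – which is where the rounded figures above come from:

    # each stop doubles the light, so n stops span a 2**n : 1 contrast ratio
    for stops in (6, 10, 12, 14, 16):
        print(stops, "stops =", 2 ** stops, ": 1")   # 12 stops = 4096:1, 14 stops = 16384:1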

What this means in practice is that capturing such a scene leads to one of two outcomes:  either the blacks are underexposed (if you try to avoid blowing out the whites), or the white detail is lost in clipping (if you try to keep the black shadow detail in the barn visible). The only other option (employed by professional photographers with a budget of both time and money) is to light the inside of the barn sufficiently that the contrast of the overall scene is brought within range of the taking film or digital array.

With HDR, a whole new possibility has arrived: take two photographs (identical, must line up perfectly so the camera has to be on a tripod, and no motion in the scene is allowed – a rather restrictive element, but critical) – then with the magic of digital post-processing, the low-light image (correctly exposed for the shadows, so the highlights are blown out) and the hi-light image (correctly exposed for the brightly lit part of the scene, so the inside of the barn is just solid black with no detail) are combined into a composite photograph that has incredible dynamic range. There is a lot more to it than this, and you can’t get around the display part of the equation (how do you then show an image that has 16 or more stops of dynamic range on a computer monitor that has at best 10 stops of range? or worse yet, on ink jet printers, which even on the best high gloss art paper may be able to render 6 stops of dynamic range?). We’ll leave those questions for the next part of this blog series, but for now it’s enough to understand that high dynamic range exposures (HDR) are very challenging for the photographer.

So what exactly IS an HDR filter? Obviously it cannot duplicate the true HDR technique (multiple exposures)… (First, to be clear, there are different types of “HDR filters” – for instance the very complex one in Photoshop is designed to work with multiple source images as discussed above – here we are talking about the HDR filter included with Camera+, and what it can, and cannot, do.) The type of filtering process that appears to be used by the HDR filter in this app is known as a “tone mapping” filter. This is actually a very complex process, chock full of high mathematics, and if it weren’t for the power of the iPhone hardware and core software this would be impossible to do on anything but a desktop computer. Essentially, through a process of both global and local tone mapping using specific algorithms, an output image is derived from the input image. As you can see from the results in the right-hand images, tone mapping HDR has a unique look. It tends to enhance local contrast, so image sharpness is enhanced. A side effect – that some like and others don’t – is an apparent ‘glow’ around the edges of dark objects in the scene when they are in front of lighter objects. In these examples, look around the edges of the black dress, and the edges of the black outline of the “1” on the back wall. Notice also that in the original photo the white stage looks almost smooth, while in the resultant filtered image you can see every bit of dust and the foot scrapes from the models. The overall brightness of the image is enhanced, but nothing is clipped. Due to the enhancement of small detail, noise in low light areas (always an issue with digital sensors) is increased. Look at the area of the ceiling to the right of the projected gobo image:  in the original the low-lit area looks relatively smooth, while in the filtered image there is noticeably more mottling and other noise artifacts. Due to the amount of detail added, side effects, etc. it is often desirable not to ‘overdo’ the tone mapping effect. This can clearly be seen in the second set of comparisons, which has the intensity of the effect set to 50% instead of 100%.
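
To be clear, I have no knowledge of the actual algorithm used here, but a single-image tone-mapping pass of the general kind described above can be sketched as: take a heavily blurred copy of the luminance as the ‘large-scale’ light distribution, compress that, then amplify the fine detail measured against it. The blurred base is also what produces the tell-tale glow around dark edges. A rough Python sketch with arbitrary constants and placeholder file names, including the 50% intensity blend:

    import numpy as np
    from PIL import Image, ImageFilter

    def tonemap_like(img, radius=25, compression=0.6, detail=1.6, intensity=1.0):
        """Crude single-image tone mapping: compress large-scale light, amplify fine detail."""
        rgb = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
        lum = rgb.mean(axis=2, keepdims=True) + 1e-4             # simple luminance estimate
        blurred = Image.fromarray((lum[..., 0] * 255).astype(np.uint8)).filter(
            ImageFilter.GaussianBlur(radius))                    # the 'base' layer (large-scale light)
        base = np.asarray(blurred, dtype=np.float32)[..., None] / 255.0 + 1e-4
        new_lum = np.power(base, compression) * np.power(lum / base, detail)
        mapped = np.clip(rgb * (new_lum / lum), 0.0, 1.0)
        out = (1.0 - intensity) * rgb + intensity * mapped       # same idea as the Intensity Slider
        return Image.fromarray((out * 255).astype(np.uint8))

    tonemap_like(Image.open("barn_door.jpg"), intensity=0.5).save("hdr_50_approx.jpg")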

Special Filter = Miniaturize

The Special filter Miniaturize initially confused me – I didn’t understand the naming of this filter in reference to its effects: this filter is very similar to the Depth of Field filter which will be discussed shortly. Essentially this filter increases saturation a bit, and then applies a blurring technique to defocus the upper third and lower third of the image, leaving the middle third sharp. A reader of my initial release of this section was kind enough to point out that this filter is attempting to mimic the planar depth-of-field effect that happens when the lens is tilted about the axis of focus. With a physical tilt-shift lens, the areas of soft focus are due to one area of the image being too far to be in focus, the other area being too near to be in focus. This technique is used to simulate miniature photography, hence the filter name. Thanks to darkmain for the update.
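
The ‘sharp middle band, blurred top and bottom thirds’ behavior is straightforward to reproduce: blur the whole frame, then composite the original back in through a mask that is white across the middle third (a real filter would feather that transition rather than cut it hard). A sketch with arbitrary values and placeholder file names:

    from PIL import Image, ImageEnhance, ImageFilter

    def miniaturize_like(img, blur_radius=8, saturation=1.2):
        """Keep a horizontal middle band sharp, blur the rest, and boost saturation a bit."""
        img = ImageEnhance.Color(img.convert("RGB")).enhance(saturation)
        blurred = img.filter(ImageFilter.GaussianBlur(blur_radius))
        width, height = img.size
        mask = Image.new("L", (width, height), 0)                  # black = use the blurred copy
        mask.paste(255, (0, height // 3, width, 2 * height // 3))  # white band = keep the sharp original
        return Image.composite(img, blurred, mask)

    miniaturize_like(Image.open("street.jpg")).save("miniature_approx.jpg")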

Special Filter = Polarize

The Polarize filter is another somewhat odd name for the actual observed effect – since polarizing filtering must take place optically – there is no way to electronically substitute this. Polarizing filters are often used to reduce reflections (from windows, water surface, etc.) – as well as allow us to see 3D movies. None of these techniques are offered by this filter. What this one does do is to substantially increase the contrast, add significant red/blue saturation – but, like the earlier Lo-Fi filter the increase in saturation is inversely proportional to the brightness of the element in the scene: dark areas get increased saturation, light areas do not.

Special Filter = Grunge

The Grunge effect does have a name that makes sense! Looking like the photo was taken through a grungy piece of glass, it has a faded look reminiscent of old, damaged print photos. The apparent components of this filter are:  substantial desaturation, significant lightening (brightness level raised), a golden/yellow tint added, then the ‘noise’ (scratches).

Special Filter = Depth of Field

The Depth of Field effect is, as mentioned, very similar to the Miniaturize effect. The overall brightness is a bit lower, and the other main difference appears to be a circular area of sharpness in the center of the frame, as opposed to the edge to edge horizontal band of sharpness apparent with the Miniaturize filter. Check the focus of the woman seated on the far right in the two filters and you’ll see what I mean.

Special Filter = Color Dodge

The Color Dodge special filter has me scratching my head again as far as naming goes… In photographic terminology, “dodge” means to hold back, to reduce – while “burn” means to increase. These are techniques originally used in darkroom printing (actually one of the first methods of tone mapping!) to locally increase or decrease the light falling on the print in order to change the local contrast/brightness. In the resultant image from this filter, red saturation has not just been increased, it has been firewalled! Basically, areas in the original image that had little color in them stayed about the same, areas that have significant color values have those values increased dramatically. There is additionally an increase in overall contrast.

Special Filter = Overlay

The Overlay effect has the same contrast and saturation functions as the previous Color Dodge filter, but the saturation is not turned up as high (gratefully!). In addition, there is a pronounced vignette effect – easy to see in the bottom of the frame. It’s circular, just harder to see in this particular image at the top. Like the rest of the effects filters, the ability to reduce the intensity of this effect can make it useful for situations that at first may not be obvious. For instance, since the saturation only works on existing chroma in the image, if one applies this filter to a monochrome image, you now have a variable vignette filter – with no color component…
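
A vignette of this sort is typically just a radial falloff multiplied into the image; combined with the crossfade-style intensity control shown earlier it becomes exactly the ‘variable vignette filter’ mentioned above. A rough sketch (the falloff strength and file names are arbitrary):

    import numpy as np
    from PIL import Image

    def vignette_like(img, strength=0.5):
        """Darken toward the corners with a radial falloff; no color shift involved."""
        rgb = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
        height, width = rgb.shape[:2]
        yy, xx = np.mgrid[0:height, 0:width]
        # normalized distance from the frame center: 0 at the center, ~1 at the corners
        dist = np.sqrt(((xx - width / 2) / (width / 2)) ** 2 +
                       ((yy - height / 2) / (height / 2)) ** 2) / np.sqrt(2)
        falloff = 1.0 - strength * dist ** 2
        out = np.clip(rgb * falloff[..., None], 0.0, 1.0)
        return Image.fromarray((out * 255).astype(np.uint8))

    vignette_like(Image.open("dancers.jpg")).save("vignette_approx.jpg")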

Special Filter = Faded

The Faded special effect filter does just what it says… this is a simple desaturating filter with no other functions visible. With the ability to vary the intensity of desaturation it makes for a powerful tool. Often one would like to just take a ‘bit off the top’ in terms of color gain – digital sensors are inherently more saturated looking than many film emulsions – just compare (if you can – there aren’t many film labs left…) a shot of the same scene taken on Ektachrome transparency film with one taken on the iPhone.

Special Filter = Cross Process

The Cross Process filter is a true special effect! The name comes from a technique, first discovered by accident, where film is processed in a chemical developer that was intended for a different type of film. While the effect in this particular filter is not indicative of any particular chemical cross-process, the overall effect is similar:  high contrast, unnatural colors, and a general ‘retro’ look that has found an audience today. Just like with the chemical version of cross-processing, the results are unpredictable – one just has to try it out and see. Of course, with film, if you didn’t like it… tough… go reshoot… with digital, just back up and do something else…

Analog Effects filters

Analog Effects filters

Analog Filter = Diana

The Diana effect is based on, wow – surprise, the Diana camera… another of the cheap plastic cameras prevalent in the 1960’s. The vignetting, light leaks, chromatic aberrations and other side-effects of a $10 camera have been brought into the digital age. In a similar fashion to several of the previous ‘retro’ filters discussed already, you will notice raised blacks, slight lowering of contrast, odd tints (in this case unsaturated highlights tend yellow), increased saturation of colors – and a slight twist in this filter due to even monochrome areas becoming tinted – the silver dress (which stayed silver in even some of the strongest effects discussed above) now takes on a greenish/yellow tint.

Analog Filter = Silver Gelatin

The Silver Gelatin effect, based on the original photochemical process that is over 140 years old – is a wonderful and soft effect for the appropriate subject matter. While the process itself was of course only black & white, the very nature of the process (small silver molecules suspended in a thin gelatin coating) caused fading relatively soon after printing. The gelatin fades to a pale yellow, and the silver (which creates the dark parts of the print) tended to turn a purplish color instead of the original pure black.

Analog Filter = Helios

The Helios analog effect, while it’s possible that it was named for the Helios lens that was fabricated for the Russian “Zenit” 35mm SLR camera in 1958 – is just as likely to be called this due to the burning-fire red tint of virtually the entire frame. In a similar manner to other filters we have discussed, the tinting (and this is clearly another example of a tinted monochrome extraction from the original color image) is based on relative luma values:  near whites and near blacks are not tinted, all other mid-range values of gray are tinted strongly red. It’s an interesting technique, but personally I would have only sparing use for this one.

Analog Filter = Contessa

The Contessa effect is named after one of the really great early 35mm Zeiss/Ikon cameras, produced in the late 1940’s. The effect as it exists here is actually not true to the Contessa:  this original film camera would not have caused the vignetting seen – not with one of the world’s greatest lenses attached! However, that’s immaterial, it’s just the name… what we can say about this filter is that obviously it’s another black & white extraction from the color original – but it adds a sense of ‘old time photograph’ with the vignette, the staining/spotting on the sides of the image, and a very slight warm tint that reads more as fading than as an actual tint. It’s a nice filter – I would adjust it a bit to add more detail/contrast in the dancers’ faces (the left and middle dancers’ faces are a bit dark) – but a nice addition to the toolbox.

Analog Filter = Nostalgia

The Nostalgia effect is now reaching into true ‘special effects’ territory. With a cross process look to start with (similar to Diana and Cross Process), some added saturation in the reds, and then the ‘fog’ effect around the perimeter of the frame – this is leaving photorealism behind…

Analog Filter = Expired

The Expired analog effect is a rather good copy of what film used to look like when you left it in the glove box of your car in the summer… or just plain let it get too old. The look created here is just one of many, many possible effects from expired film – it’s a highly unpredictable entity. In this filter, we have strong red saturation increase – again, with no color in the front of the white stage, nothing to saturate… The overall brightness is raised, contrast is lowered, and a light streak is added.

Analog Filter = XPRO C-41

The XPRO C-41 effect is another cross-process filter. This one is loosely based on what happens when you process E-6 film (color transparency) with color negative developer (C-41). Whites get a bit blown out, light areas tend to greenish, with darker areas tending bluish. The red saturation is (I believe) just something these software developers added – I’ve personally never seen this happen in chemical cross processing.

Analog Filter = Pinhole

The Pinhole analog effect is based, of course, on the oldest camera type of all. With major vignetting, considerable grain, monochrome only, enhanced contrast and lowered brightness, this filter does a fair job imitating what an iPhone would have produced in 1850 (when the first actual photograph was taken with a pinhole camera). The issue was that of a photosensitive material – the pinhole camera itself has been around for well over a thousand years (the Book of Optics published in 1021AD describes it in detail).

Analog Filter = Chromogenic

The Chromogenic analog effect is based on the core methodology of all modern color film emulsions: the coupling of color dyes to exposed silver halide crystals. All current photochemical film emulsions use light-sensitive silver crystals for exposure to light. To make color, typically 3 layers on the film (cyan, yellow, magenta) are actually specialized chromogenic layers where the dye colors attach themselves to exposed silver crystals in that layer only. This leads to the buildup of a color negative (or positive) by the ‘stacking’ of the three layers to make a completed color image. Very early on, during the experimentation that led to this development, the process was not nearly as well defined – and this filter is one software artist’s idea of what an early chromogenic print may have looked like. In terms of analysis, the overall cast is reddish-brown, with enhanced contrast, slightly crushed blacks and very desaturated colors (aside from the overall tint).

The Border variations

Simple Border styles

Styled border styles

There are, in addition to the “No Border” option, 9 simple borders and 9 styled borders.

‘Real World’ examples using Scenes and Effects

The following examples show a variety of images, each processed with one or more of the techniques introduced above. In each case the original image is shown on the left, the processed image on the right. Since, just like in real life, we often learn more from our mistakes than from what we get right the first time – I have included some ‘mistakes’ to demonstrate things I believe did not work so well.

Retro Filter = Ansel

The Ansel filter on this subject has too much contrast – there is no detail in the dress and the subtle detail reflected in the glass behind her is washed out.

Scene = Concert

The Concert filter used here looks unnatural:  skin tones too red. It does make the reflections in the glass pop out though…

Scene = Clarity

The Clarity scene is mostly successful here:  improved detail in her dress, the reflections in the window are clearer, the detail in the stone floor pops more. I would opt for a bit less saturation in her skin tones, particularly the legs – the best technique here would be to ‘turn down’ the Clarity a bit. This can be done, but not within Camera+ (at this time).

Special Filter = Overlay

The Overlay filter as used here offers another interpretation of the scene. The slight vignetting helps focus on the subject, the floor is now defocused, the background reflection in the glass behind her is lessened – the only thing I would like to see improved is her dress, which is a bit dark – hard to see the detail.

Scene = Portrait

The Portrait scene punches up the contrast, and in this case works fairly well. The floor is a bit bright, and it’s a personal decision if the greater detail revealed in the reflection behind the subject is distracting or not… The upper half of her dress is a bit dark due to the increased contrast, it would be nice to see more detail there.

Scene = Shade

The Shade scene applied here shows its effects rather well: the scene is warmed up and the brightness is slightly raised.

Analog Filter = Silver Gelatin

The Silver Gelatin effect is demonstrated well here: blacks are a bit purple, all the whites/grays go a bit yellow. It’s a nice soft look for the right subject matter.

Scene = Backlit

The Backlit scene helps add fill to the otherwise underexposed subject. Since in this case she is standing in open shade (very cool in terms of color temperature), the tendency for this Scene to make skin tones too warm is ok. The background is blown out a bit though.

Scene = Food

In this version, the Food scene is used, well, not for food… it doesn’t add as much fill light to the subject as Backlit, but the background isn’t as overexposed either.

Scene = Backlit

Here’s an example of using the Backlit scene for a subject that is not a person – rather for the whole foreground that is in virtual silhouette. It helps marginally, and does tend to wash out the sky some. But the path and the parked cars have more visibility.

Scene = Backlit

Here is the Backlit scene used in the traditional manner – and as was discussed when this scene was introduced above, I find the skin tones just too red. It does fix the lighting however. All is not lost – once we move on to other techniques to be discussed in a future post:  using another app for editing the results of Camera+. For instance, if we now bring this image into PhotoForge2 and apply color correction and some desaturation we can tone down the red skin and back off the overly saturated colors of their tops.

Scene = Backlit

Here’s another example of the Backlit scene. It does again resolve the fill light issue, but once again oversaturates both the skin tones and the background.

Scene = Backlit

Here is Backlit scene used to attempt to fix this shot where insufficient fill light was available on the subject.

Scene = Clarity

This shows the Clarity scene used to help the lack of fill light. I think it works much better than Backlit in this case. I would still follow up with raising the black levels to bring out details in her shirt.

Scene = Beach

Here are three versions of another backlit scene, with different solutions: the first one uses the Beach scene…

Scene = Flash

This version uses Flash…

Scene = Clarity

and the final version tries Clarity. I personally like this one the best, although I would follow up by raising the deep blacks just a bit to bring out the detail in the first girl’s top and pants.

Scene = Beach

This is a woman at the beach… showing what the Beach scene will do… in my opinion, it doesn’t add anything, and has three issues:  the breaking wave is now clipped, with loss of detail in the white foam; the added contrast reduces the detail in her shirt and pants; and the sand at the bottom of the image has lost detail and become blocky. This exemplifies my earlier comment on this type of scene filter:  use it to change the lighting of your image to ‘look like it was shot at the beach’, not to fix images that were taken at the beach…

Scene = Clarity

Here once again is our best friend Clarity… this scene really brings out the detail in the waves, sand and her clothes. It’s a great use of this filter, and shows the added ‘punch’ that Clarity can often bring to a shot that is correctly exposed, but just a bit flat.

Scene = Beach, followed by Special Filter = Overlay @ 67%

Now here is a very different interpretation of the same original shot. It all goes to what story you want to tell with your shot. The Clarity version above is a better depiction of the scene than either the original or the version using Beach, but the method shown above (using the Special filter Overlay, at 67% intensity) brings a completely different feeling to the shot. The woman becomes the central figure, with the beach only a hint of the surroundings…

Scene = Scenery

Using the traditional approach… the Scenery scene on, well, scenery…  It doesn’t work well – the mid-ground goes dark, the mountains in the rear are too blue, and the foreground bush loses detail and snap.

Scene = Shade

Here is the same scene using Shade. It does bring out the detail in the middle of the image, and warms up the reflection of the sky in the lake.

Scene = Sunset

Another view, this time using Sunset. Here we have many of the same issues as the first shot did using the Scenery version.

Scene = Beach

Now here I have used the Beach scene type. Although maybe not an intuitive choice, I like the results:  the lake has a warmer reflection of the sky, the contrast between the foreground bush and the middle area is increased, but without losing detail in the middle trees; the mountains in the rear have picked up detail, and even the shore on the left of the lake has more punch. Know your tools…

Scene = Portrait

Next shown are five different versions of a group with some challenging parameters:  white dresses (that pick up reflected light), dark pants and jacket on the man, theatrical lighting (it’s actually a white rug!), and bright backlighting on the left. This first version, using Portrait, with its increased contrast, helps the dresses to a more pure white look, but now the man’s head blends right into the picture – not enough black detail to separate the objects.

Special Filter = HDR

Here is a version using the special filter HDR. Very stylized. Does clearly separate all the details, the picture on the wall is now visible, and easily separated from the man’s head. The glow and the heavy tinting of the model’s dresses is just part of the ‘look’…

Special Filter = Faded @ 50%

As overstated as the last version may have been, this one goes the other way. Using the special filter Faded (at 50% intensity), the desaturation afforded by this method removes the tinting of the white dresses (they pick up the turquoise lighting), and the skin tones look more natural. Maybe a bit flat…

Scene = Clarity

Here is the Clarity scene type. While it does separate out his head from the picture again, the enhanced edges don’t really work as well in this shot. The increased local saturation actually causes the rug in the foreground to blend together – the original has more detail. This is one of the potential side-effects of this filter – when the source has highly saturated colors to start with, some weird things can happen.

Scene = Beach

And the last version, using the Beach scene type. The dresses have punch, the skin tones are warmer, but the higher contrast once again merges his head with the picture. The lighting on the rug also looks overdone.

Scene = Scenery

This is a typical landscape shot, treated with the Scenery filter. It doesn’t work. The mountains have gone almost black, the sea has lost its shading and beautiful turquoise color, and the rocks in the foreground have gone harsh.

Scene = Clarity

Now here’s Clarity used on the same scene. This scene type brings out all the detail without overwhelming any one area. The only two minor faults I would point out are the small ‘halo’ effect in the sky near the edges of the mountains, and the emerald area of the ocean (just above the rocks, next to the foam on the shore), which is a little oversaturated. But all in all, a much better filter than Scenery – for this shot. It’s all in using the right tool for the right job.

Scene = Clarity

Now, as good as Clarity can be for some things, here it causes a very different effect. Not to say it’s wrong – if you are looking for posterization and noise, then this can be a great effect.

Scene = Darken

Although the Darken scene may not seem at all what one would choose on an already poorly lit scene, it focuses attention purely on the subject, and reduces some of the noise and mottling in her top.

Special Filter = Faded @ 33%

Now here is an interesting solution: using the special filter Faded (at 33% intensity) to reduce the saturation of the scene. This immediately brings out a more natural modeling of her face, makes her right hand look less blocky, and brings a more natural look to the subject in general.

Scene = Clarity

A sunset scene using the Clarity filter. In this case, Clarity is not our BFF… the brick goes oversaturated, and the shadows in the foreground are just too much – the eye gets confused and doesn’t know where to look.

Scene = Darken

Here, the Darken scene type is tried. Not much better than Clarity, above. (Hint: sometimes the best result is to leave things alone… a properly exposed and composed image often is perfect as it stands, without additional meddling…)

Scene = Sunset

Here we have a number of different expressions of a train leaving the station at sunset… this one using the Sunset scene type. While it does add some color to the sky, most of the detail in the shadows is now gone.

Scene = Clarity

This version shows the use of Clarity. Detail that was barely visible now pops out. The shot now is almost a bit too busy – there is so much extraneous detail to the right and left of the train that the focus of the moment has changed…

Special Filter = HDR (100%)

Here’s a very different version, using HDR at full intensity. Stylized – tells a certain story – but the style is almost overwhelming the image.

Special Filter = HDR @ 30%

This is HDR again, but this time at 30% intensity. What a difference! I feel this selection is even better than Clarity – detail in the middle of the shot is visible, but not detracting from the train. There is now just enough information in the shadows to round out the shot, yet not pull the eye from the story.

Scene = Cloudy

The difference between the Cloudy and Shade scene types is useful to understand. Here are two subjects standing in open shade – i.e. only illuminated by open sky that has no clouds. This version is filtered with the Cloudy scene type. Note that the back wall has changed hue from the original (gone warmer), as has the street. The subjects are still a bit dark as well.

Scene = Shade

Here is the same shot, processed with the Shade scene type. The brightness is improved, the color temperature is not warmed up as much, and overall this is a better solution. (Well, after all, they were standing in the shade 🙂)

Scene = Cloudy

Here is another pair of comparisons between the Cloudy and Shade scene type filters. This choice was less obvious, as the hostess was standing just outside the entrance to a restaurant, in partial shade; but the sky was very overcast – a ‘cloudy’ illumination. Notice that, since the Cloudy filter warms the scene more than the Shade filter does, her skin tone has warmed up considerably, as have her sheer top and the menu. The top and menu are also a bit clipped, losing detail in the highlights.

Scene = Shade

Here is the ‘Shade’ version. Her skin is not as warm, and the top and menu retain more of their original whiteness. The white levels are better as well, with less clipping on the menu and the left side of her top. This shows that often you must try several filter types and make a selection based on the results – this could have easily gone either way, given the nature of the lighting.

Scene = Cloudy

College campus, showing the use of the Cloudy filter. Here this choice is clearly correct. The sky is obviously cloudy <grin> and the resultant warmth added by the filter is appreciated in the scene.

Scene = Shade

This version, using the Shade scene type, is not as effective. The scene is still a bit cold, and the additional brightness added by this filter makes the sky overly bright – the general illumination now somewhat contradicts the feeling of the scene – a cool and cloudy day.

Scene = Concert

We discussed learning from our mistakes… here is why you usually don’t want to use the Concert scene type at a concert…

Scene = Darken

The Darken scene type is called for here to help with the overexposure. While it doesn’t completely fix the problem, it certainly helps.

Scene = Darken

The Darken filter used again to good effect.

Scene = Darken, followed by Special Filter = HDR @ 20%

This example shows a powerful feature of Camera+:  the ability to layer an Effect on top of a Scene type. You can only use one of each, and you can’t put a scene on top of another scene (or an effect on top of an effect) – but the potential range of changes has now multiplied enormously. Here, the original scene was overexposed. A combination of the Darken scene type, followed by the HDR filter at 20% intensity, made all the difference.

Scene = Flash

This shows a typical challenging scene to capture correctly – a dark interior looking out to a brightly lit exterior. In the original exposure you can see that the camera exposed for the outdoor portion, leaving the interior very dark. The first attempt to rectify this is using the Flash scene type. While this helped the bookcases in the hall, the exterior is now totally blown out, and the right foreground is still too dark.

Scene = Night

Here is the result of using the Night scene type. Better detail in both the hallway as well as the right foreground – and the exterior is now visible, even though still a bit overexposed.

Special Filter = HDR

This version uses the HDR effect filter – giving the best overall exposure between the outside, the bookcase and the foreground. Ideally, I would follow up with another app and raise the black levels a bit to bring out more detail in the shadows in the foreground and near part of the hall.

Scene = Flash

A night-blooming cactus photographed before sunrise – it’s a bit underexposed. Using the Flash scene type brings out the plant well, but the white flowers are now too hot.

Scene = Night

The Night scene type provides a better result, with good balance between the flowers, plant and trees behind.

Special Filter = HDR

Using the HDR filter to attempt to improve the foreground illumination of this shot. It helps… but the typical style of this tone-mapping filter oversaturates the reds in the wood, and the foreground is still a bit dark.

Scene = Flash

And here’s another attempt, using the Flash scene type. A different set of side effects… the sunlight on the wall is now overexposed, and the foreground is still not ideally lit. Camera+ can’t fix everything… (in this case, using a different app – PhotoForge2 – which has a powerful tool “Shadows & Highlights”, did a better job. We’ll see that when we get around to discussing that app).

Scene = Flash

Here’s a good use for the Flash scene. The original is very dark, the filtered version really does look like a flash had been used.

Scene = Flash

And here’s one that didn’t work so well. The Flash filter didn’t do what a real flash would have done: illuminate the interior without making any difference at all in the exterior lighting. Here, the opposite took place: the sunset sky is blown out, yet the interior isn’t helped at all.

Scene = Flash

The Flash scene used on a shot out an airplane window, during takeoff from London in late twilight. It doesn’t work for me…

Scene = Night

This time, Night was used. Less dramatic – personally I prefer the original. But it’s a good example to show how the different filters operate.

Scene = Food

Ok, just had to try food with Food (scene type)… You can see the filter at work:  whites are warmed up with more red, the potatoes now look almost like little sausages, and contrast is increased. I would really like it if intensity sliders were added to the Scene types as well as the Effects… a better result would be found here with about 40% of this filter dialed in, I believe…

Special Filter = HDR

Main street in Montagu, a little town in South Africa. The HDR filter shows how to help resolve a high contrast scene. I didn’t redo this one, but I would have had a more natural look if I had dialed back the intensity of the HDR filter to about 50%.

Scene = Scenery

The Scenery filter applied to an outdoor scene. I find this too contrasty, and even the sky looks a bit oversaturated.

Scene = Clarity

Here is Clarity applied to the same image. Shadow detail much improved (except right under nearest arch). However the right side of the image looks a bit ‘post-cardy’ (flat and washed out).

Special Filter = HDR @ 35%

HDR filter applied at 35%. A different set of ‘almost but not quite’ issues… Arches in shadows are too blue, the sunlit portions are blown out, and the sky is too saturated. The trees on the right are a big improvement over the previous version (Clarity) however. Sometimes you really do need Photoshop….

Scene = Clarity

This is a tough shot:  extreme brightness differences – my estimate is over 14 stops of exposure – way more than the iPhone camera can handle. So it’s really a matter of how best to interpret a scene that will inevitably have both white and black clipping. I used Clarity on this version – didn’t help out the highlights at all, but did add some detail in the shadows, as well as a bit of punch to her pants.

Special Filter = HDR @ 50%

The HDR effect was used here at 50% intensity. This brought a bit more control to at least the edges of the highlights, and still opened up the shadows. Note the difference in the shaded carpet at lower left of the image from this version to the previous one (done with Clarity). I actually prefer this version – the Clarity one seems a bit too much. Overall, I think this does a better job of this particular scene.

Scene = Night

Showing the use of the Night scene type at the last few minutes of twilight. As is often the case, I prefer the original shot, but wanted to demonstrate the capabilities of this scene type. Technically it did a great job – it just didn’t tell the story in the same way as the original.

Scene = Night

Now this scene is an excellent example of what the Night filter can do in the right circumstance. Almost full dark, only illumination was from store windows, streetlights and headlights.

Scene = Night

The Night scene type bringing out enough detail to make the shot.

Scene = Night

One more Night shot – again, a very good use of this scene type.

Scene = Night, followed by Special Filter = HDR @ 33%

This is an example of ‘stacking’ two corrections on top of each other to fix a shot that really needed help. You don’t always have to use the Night scene type at night… Here, due to both underexposure and backlight, the Night filter was applied first, then HDR effect at 33% intensity on top of that. It’s not a perfect result but it definitely shows you what can be done with these tools.

Scene = Portrait

The Portrait scene type applied. The contrast is too much, and the red saturation makes her skin tones look like a lobster.

Color Filter = So Emo

A very different effect, using the So Emo filter.

Scene = Clarity

Just to show what Clarity does in this instance. Again, not the best choice – the background gets too busy, and her face suffers…

Scene = Portrait

Here is Portrait applied to, well, a portrait type shot. But once again the high contrast of this filter works against the best result:  blown out highlights, skin tones too bright.

Scene = Shade

Sometimes you use filters in not-obvious ways:  the Shade scene type was applied here, and from that we got a slight improvement in background brightness, without driving her top into clipping. You can now see some definition between her hair and the background, and the slight warmth added does not detract from the image.

Scene = Sunset

The Sunset filter applied at sunset… I think this is too much. The extra contrast killed the detail in the trees at right center, and the red roof is artificially saturated now.

Scene = Scenery

The Scenery filter applied at sunset. Looks a bit different than the Sunset filter, above, but has many of the same issues when used on this scene: loss of detail in the shadows, too much red saturation.

Scene = Scenery

The Scenery filter at sunset. If this scene type could be ratcheted down with a slider, as is possible with the Effects, then this shot could be enhanced nicely. The original is just a tad flat looking, but the full Monty of the Scenery filter is way over the top. I would love to see what 20% of this effect would do.

Scene = Shade

This shows the Shade scene type doing exactly what it is supposed to: warm up the skin tones, add a bit of brightness. Altogether a less forlorn look…

Scene = Clarity

The same shot with Clarity applied. Much punchier – while I like the detail it’s almost too much. Again, Camera+ engineers, please bring us intensity sliders for scene types!

Scene = Sunset

Now here’s a use for the Sunset scene type at last. While it may not be the best storytelling tool for this particular shot, it shows the nice punch and improvement in detail and saturation this filter can provide – provided the original image is soft enough to take the sometimes overpowering influence of this scene type.

Scene = Sunset

Trying the Sunset scene type once again. If I could use this at half strength it would actually make this shot a bit more colorful – but in its current incarnation the saturation is too much.

Scene = Sunset

The last Sunset for this post, I promise… but just to prove one should never say never, here is a sunset where applying Sunset certainly helps the sky and water. I would like the sign not as saturated, and the increased contrast cost us the few remaining details hidden in the market building at the bottom of the frame.

Scene = Text

This is an example of the Text scene type. Ultra high contrast – really just for what it says (or for another special effect you want to create using an almost vertical gamma curve!)

Digital Zoom

Now… one last thing before the end of this post: a very short discussion of Digital Zoom. I have ignored this topic up until now, but it’s really important in iPhonography (actually this affects all cellphone cameras, as well as many small inexpensive digital cameras). ‘Real’ cameras (i.e. all DSLRs and many mid-priced and up digital cameras) use optical zoom – or, as in some consumer digital cameras, offer both optical and digital zoom. The difference is that with optical zoom, the lens elements physically move (i.e. a real zoom lens), changing both the magnification and the field of view that is projected onto the film or digital sensor. The so-called ‘Digital Zoom’ technique is a result of physical lenses that cannot “zoom”. Cellphone lenses are too small to adjust their focal length like a DSLR lens does. Also, the optical complexity of a zoom lens is far greater than that of a prime lens (fixed focal length) – and the light-gathering power of a zoom lens is always significantly less than that of a prime lens. (You can get relatively fast zoom lenses for DSLRs… as long as you have the budget of a small country…)

What the “digital zoom” technique is actually doing is cropping the image on the CMOS sensor (using only a small portion of the available pixels), then digitally magnifying that area back out to the original size of the sensor (in pixels). For example, the iPhone sensor is 3264×2448. If one were to crop down to half the original width and height (to 1632×1224), the lens would now only be covering 1/4 of the original area. Do the math:  3264×2448 is 8 megapixels; 1632×1224 is only 2 megapixels. What you do get for this is an apparent ‘zoom’ – a greater perceived magnification, due to the smaller effective sensor size and the fact that the original focal length of the lens has not changed. The original 35mm-equivalent focal length of the iPhone camera system is 32mm – by ‘cropping/zooming’ as described, the equivalent focal length is now 2x greater, or 64mm (in 35mm-equivalent terms), since the equivalent focal length scales with the linear crop factor. However – and this is a HUGE however – you pay a large price for this: resolution and noise. You now only have a 2 megapixel sensor… not the super-sharp 8 megapixel sensor that you started with. This is like stepping all the way back to the iPhone 3G – which had 2MP resolution. Wow! In addition, these 2 megapixels are now “zoomed” back to fill the full size of the original 8MP sensor (in memory): each original pixel of the taking sensor now covers 4 pixels in the newly formed zoomed image, so any noise in the original pixel is spread over 4 pixels and becomes far more visible… So the bottom line is that digital zoom ALWAYS makes noisy, low resolution images.
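To make the arithmetic above concrete, here is a minimal sketch (illustrative names only, not part of any camera API) that computes the pixel count and equivalent focal length left after a given amount of digital zoom, using the iPhone4S figures quoted above:

import Foundation

// Illustrative sketch of the "digital zoom" arithmetic described above.
// Sensor and focal length values are the iPhone4S figures quoted in the text.
let sensorWidth  = 3264.0   // native pixels
let sensorHeight = 2448.0
let equivFocal   = 32.0     // 35mm-equivalent focal length, in mm

func digitalZoom(factor: Double) -> (megapixels: Double, equivalentFocal: Double) {
    // Cropping by a linear factor keeps only (1/factor)^2 of the pixels...
    let remainingPixels = (sensorWidth / factor) * (sensorHeight / factor)
    // ...while the equivalent focal length scales with the linear factor.
    return (remainingPixels / 1_000_000, equivFocal * factor)
}

let (mp, mm) = digitalZoom(factor: 2.0)
print("2x digital zoom: \(mp) MP at \(mm)mm equivalent")   // ~2.0 MP at 64mm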

Now… just like in any good sales effort, you will hear grand ‘snake oil’ stories of how good this camera or that camera does at ‘digital zoom’ – and that it’s “just as good” as optical zoom. BS. Period. You can’t change the laws of physics… What is possible (and bear in mind that this is a really small band-aid on a big owie…) is to use some really sophisticated noise reduction and image processing algorithms to try to make this pot of beans look like filet again… and most hardware and software camera manufacturers try at some level. Yes, if such attempts succeed, then you are a LITTLE better off than you were before. Not much. So what’s the answer? Don’t use digital zoom. Just say no. Unless you can accept the consequences (noisy, low resolution images). We’ll discuss this further in the last part of this series on Tips & Techniques for iPhonography, but for now you can see why I don’t address it as a feature.

Ok, that’s it. Really. I hope you have a bit more info now about this very useful app. Many of my upcoming posts on the rest of the software tools for the iPhone will not be nearly as detailed, but this was an opportunity to discuss many topics that are germane to most photography apps, offer a bit of a guide to a very popular and useful app that currently publishes no manual or even a help screen, and demonstrate the thought process involved in working with filters, lighting and so on.

Many thanks for your attention.

iPhone4S – Section 4a: Camera app

March 14, 2012 · by parasam

Camera   The original iPhone photo app. Pretty simple – use the camera icon button to take a picture, or use the Volume Up (+) button on the side of the phone. Option buttons on top of screen:

  • Flash: On/Auto/Off
  • Options:  Grid – On/Off;  HDR – On/Off
  • Rear-facing/Front-facing camera selector

Features/Buttons on bottom of screen:

  • Last shot preview thumbnail
  • Shutter release button
  • Still/Video selector

The blue box is the area where both exposure and focus are measured; the option buttons run across the top of the screen.

When "Options" is selected, you can choose to display a grid overlay on the screen as an aid in composition, or turn the HDR feature on or off.

When 'video' mode is selected, the options change, and the shutter button changes to a 'record' button. Press to start recording, press again to stop.

iPhone4S – Section 4: Software

March 13, 2012 · by parasam

This section of the series of posts on the iPhone4S camera system will address the all-important aspect of software – the glue that connects the hardware we discussed in the last section with the human operator. Without software, the camera would have little function. Our discussion will be divided into three parts:  an overview; the iOS camera subsystem of the operating system; and the actual applications (apps) that users normally interact with to take and process images.

As the audience of this post will likely cover a wide range of knowledge, I will try to not assume too much – and yet also attempt not to bore those of you who likely know far more than I do about writing software and getting it to behave in a somewhat consistent fashion…

Overview

The iPhone, surprise-surprise – is a computer. A full-fledged computer, just like what sits on your desk (or your lap). It has a CPU (brain), memory, graphics controller, keyboard, touch surface (i.e. mouse), network card (WiFi & Bluetooth), a sound card and many other chips and circuits. It even has things most desktops and laptops don’t have:  a GPS radio for location services, an accelerometer (a really tiny gyroscope-like device that senses movement and position of the phone), a vibrating motor (to bzzzzzz at you when you get a phone call in a meeting) – and a camera. A rather cool, capable little camera. Which is rather the point of our discussion…

So… like any good computer, it needs an operating system – a basic set of instructions that allows the phone to make and receive calls, data to be written to and read from memory, information to be sent and retrieved via WiFi – and on and on. In the case of the iDevice crowd (iPod, iPhone, iPad) this is called iOS. It’s a specialized, somewhat scaled down version of the full-blown OS that runs on a Mac. (Actually it’s quite different in the details, but the concept is exactly the same.) The important part of all this for our discussion is that a number of basic functions that affect camera operation are baked into the operating system. All an app has to do is interact via software with these command structures in the OS, present the variables to the user in a friendly manner (like turning the flash on or off), and most importantly, take the image data (i.e. the photograph) and allow the user to save or modify it, based on the capability of the app in question.

The basic parameters that are available to the developer of an app are the same for everyone. It’s an equal playing field. Every app developer has exactly the same toolset, the same available parameters from the OS, and the same hardware. It’s up to the cleverness of the development team to achieve either brilliance or mediocrity.

The Core OS functions – iOS Camera subsystem

The following is a very brief introduction to some of the basic functions that the OS exposes to any app developer – which forms the basis for what an app can and cannot do. This is not an attempt to show anyone how to program a camera app for the iPhone! Rather, a small glimpse into some of the constraints that are put on ALL app developers – the only connection any app has with the actual hardware is through the iOS software interface – also known as the API (Application Programming Interface). For instance, Apple passes on to the developers through the API only 3 focus modes. That’s it. So you will start to see certain similarities between all camera apps, as they all have common roots.

There are many differences, due to the way a given developer uses the functions of the camera, the human interface, the graphical design, the accuracy and speed of computations in the app, etc. It’s a wide open field, even if everyone starts from the same place.

In addition, the feature sets made available through the iOS API change with each hardware model, and can (and do!) change with upgrades of the iOS. Of course, each time Apple changes the underlying API, each app developer is likely to need to update their software as well. So then you’ll get the little red number on your App Store icon, telling you it’s time to upgrade your app – again.

The capabilities of the two cameras (front-facing and rear-facing) are markedly different. In fact, all of the discussion in this series has dealt only with the rear-facing camera. That will continue to be the case, since the front-facing camera is of very low resolution, intended pretty much just to support FaceTime and other video calling apps.

Basic iOS structure

The iOS is like an onion, layers built upon layers. At the center of the universe… is the Core. The most basic is the Core OS. Built on top of this are additional Core Layers: Services, Data, Foundation, Graphics, Audio, Video, Motion, Media, Location, Text, Image, Bluetooth – you get the idea…

Wrapped around these “apple cores” are Layers, Frameworks and Kits. These Apple-provided structures further simplify the work of the developer, provide a common and well tuned user interface, and expand the basic functionality of the core systems. Some examples are:  Media Layer (including MediaPlayer, MessageUI, etc.); the AddressBook Framework; the Game Kit; and so on.

Our concern here will be only with a few structures – the whole reason for bringing this up is to allow you, the user, to understand what parameters on the camera and imaging systems can be changed and what can’t.

Focus Modes

There are three focus modes:

  • AVCaptureFocusModeLocked: the focal area is fixed.

This is useful when you want to allow the user to compose a scene then lock the focus.

  • AVCaptureFocusModeAutoFocus: the camera does a single scan focus then reverts to locked.

This is suitable for a situation where you want to select a particular item on which to focus and then maintain focus on that item even if it is not the center of the scene.

  • AVCaptureFocusModeContinuousAutoFocus: the camera continuously auto-focuses as needed.
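As a hedged illustration (using the modern Swift names, where the constants above appear as .locked, .autoFocus and .continuousAutoFocus), this is roughly how an app would request continuous autofocus from AVFoundation – a sketch, not a complete capture pipeline:

import AVFoundation

// Minimal sketch: ask the camera for continuous autofocus, if it supports it.
func enableContinuousAutofocus(on device: AVCaptureDevice) {
    guard device.isFocusModeSupported(.continuousAutoFocus) else { return }
    do {
        try device.lockForConfiguration()       // required before changing any camera setting
        device.focusMode = .continuousAutoFocus
        device.unlockForConfiguration()
    } catch {
        print("Could not lock camera for configuration: \(error)")
    }
}

if let camera = AVCaptureDevice.default(for: .video) {
    enableContinuousAutofocus(on: camera)
}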

Exposure Modes

There are two exposure modes:

  • AVCaptureExposureModeLocked: the exposure mode is fixed.
  • AVCaptureExposureModeAutoExpose: the camera continuously changes the exposure level as needed.

Flash Modes

There are three flash modes:

  • AVCaptureFlashModeOff: the flash will never fire.
  • AVCaptureFlashModeOn: the flash will always fire.
  • AVCaptureFlashModeAuto: the flash will fire if needed.

Torch Mode

Torch mode is where a camera uses the flash continuously at a low power to illuminate a video capture. There are three torch modes:

  •    AVCaptureTorchModeOff: the torch is always off.
  •    AVCaptureTorchModeOn: the torch is always on.
  •    AVCaptureTorchModeAuto: the torch is switched on and off as needed.

White Balance Mode

There are two white balance modes:

  •    AVCaptureWhiteBalanceModeLocked: the white balance mode is fixed.
  •    AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: the camera continuously changes the white balance as needed.
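Putting several of these together, here is a hedged sketch (again in modern Swift naming) of how an app might set the exposure, white balance, torch and flash behaviors listed above – a simplified example, not a full capture setup:

import AVFoundation

// Sketch: configure continuous auto exposure / white balance and auto torch on the device.
func configureAutoModes(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.isExposureModeSupported(.continuousAutoExposure) {
        device.exposureMode = .continuousAutoExposure       // camera keeps adjusting exposure
    }
    if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
        device.whiteBalanceMode = .continuousAutoWhiteBalance
    }
    if device.hasTorch, device.isTorchModeSupported(.auto) {
        device.torchMode = .auto                            // continuous low-power light for video
    }
}

// Flash for a still photo is requested per shot, e.g. via AVCapturePhotoSettings:
let photoSettings = AVCapturePhotoSettings()
photoSettings.flashMode = .auto                             // .off / .on / .auto, as listed above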

You can see from the above examples that many of the features of the camera apps you use today inherit these basic structures from the underlying AVFoundation API. There are obviously many, many more parameters that are available for control by a developer team – depending on whether you are doing basic image capture, video capture, audio playback, modifying images with built-in filters, etc. etc.

While we are on the subject of core functionality exposed by Apple, let’s discuss camera resolution.

Yes, I know we have heard a million times already that the iPhone4S has an 8MP maximum resolution (3264×2448). But there ARE other resolutions available. Sometimes you don’t want or need the full resolution – particularly if the photo function is only a portion of your app (ID, inventory control, etc.) – or even as a photographer you want more memory capacity and for the purpose at hand a lower resolution image is acceptable.

It’s almost impossible to find this data, even on Apple’s website. Very few apps give access to different resolutions, and the ones that do don’t give numbers – it’s ‘shirt sizes’ [S-M-L]. Deep in the programming guidelines for AVFoundation I found a session preset (used together with AVCaptureStillImageOutput) that allows ‘presetting the session’ to one of the values below:

Still image presets:

  • Photo – 3264×2448
  • High – 1920×1080
  • Med – 640×480
  • Lo – 192×144

Video presets:

  • 1080P – 1920×1080
  • 720P – 1280×720
  • 480P – 640×480

I then found one of the very few apps that supports ALL of these resolutions (almost DSLR) and shot test stills and video at each resolution to verify. Everything matched the above settings EXCEPT for the “Lo” preset in still image capture. The output frame measured 640×480, the same as “Med” – however the image quality was much lower. I believe that the actual image IS captured at 192×144, but is then scaled up to 640×480 – why, I am not sure, but it is apparent that the Lo image is of far lower quality than Med. The file size was smaller for the Lo quality image – but not enough that I would ever use it. On the tests I shot, Lo = 86kB, Med = 91kB. The very small difference in size is not worth the big drop in quality.
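For reference, here is a minimal sketch of what ‘presetting the session’ looks like in code today (modern Swift naming; the preset names map to .photo, .high, .medium and .low) – shown only to illustrate the mechanism, not as part of any particular app:

import AVFoundation

let session = AVCaptureSession()
// Trade resolution for smaller files, per the preset table above.
if session.canSetSessionPreset(.medium) {
    session.sessionPreset = .medium
}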

So… now you know. You may never have need of this, or not have an app that supports it – but if you do require the ability to shoot thousands of images and have them all fit in your phone, now you know how.

There are two other important aspects of image capture that are set by the OS and not changeable by any app:  color space and image compression format. These are fixed, but different, for still images and video footage. The color space (which for the uninitiated is essentially the gamut – or range of colors – that can be reproduced by a color imaging system) is set to sRGB. This is a common and standard setting for many digital cameras, whether full sized DSLR or cellphones.

It’s beyond the scope of this post to get into color space, but I personally will be overjoyed when the relatively limited gamut of sRGB is put to rest… however, it is appropriate for the iPhone and other cellphone camera systems due to the limitations of the small sensors.

The image compression format used by the iPhone (all models) is JPEG, producing the well-known .jpg file format. Additional comments on this format, and potential artifacts, were discussed in the last post. Since there is nothing one can do about this, no further discussion at this time.

In the video world, things are a little different. We actually have to be aware of audio as well – we get stereo audio along with the video, so we have two different compression formats to consider (audio and video), as well as the wrapper format (think of this as the envelope that contains the audio and video track together in sync).

One note on audio:  if you use a stereo external microphone, you can record stereo audio along with the video shot by the iPhone4S. This requires an external device which connects via the 30-pin docking connector. You will get far superior results – but of course it’s not as convenient. Video recordings made with the on-board microphone (same one you use to speak into the phone) are mono only.

The parameters of the video and audio streams are detailed below: (this example is for the full 1080P resolution)

General

Format : MPEG-4
Format profile : QuickTime
Codec ID : qt
Overall bit rate : 22.9 Mbps

Video

ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Baseline@L4.1
Format settings, CABAC : No
Format settings, ReFrames : 1 frame
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Bit rate : 22.4 Mbps
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Rotation : 90°
Frame rate mode : Variable
Frame rate : 29.500 fps
Minimum frame rate : 15.000 fps
Maximum frame rate : 30.000 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.367
Title : Core Media Video
Color primaries : BT.709-5, BT.1361, IEC 61966-2-4, SMPTE RP177
Transfer characteristics : BT.709-5, BT.1361
Matrix coefficients : BT.709-5, BT.1361, IEC 61966-2-4 709, SMPTE RP177

Audio

ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Bit rate mode : Constant
Bit rate : 64.0 Kbps
Channel(s) : 1 channel
Channel positions : Front: C
Sampling rate : 44.1 KHz
Compression mode : Lossy
Title : Core Media Audio

The highlights of the video/audio stream format are:

  • H.264 (MPEG-4) video compression, Baseline Profile @ Level 4.1, 22Mb/s
  • QuickTime wrapper (.mov)
  • AAC-LC audio compression, 44.1kHz, 64kb/s

The color space for the video is the standard adopted for HD television, Rec709. Importantly, this means that videos shot on the iPhone will look correct when played out on an HDTV.

This particular sample video, which I shot for this exercise, was recorded at just under 30 frames per second (fps); the video camera supports a range of 15-30fps, controlled by the application.
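If you want to check these stream parameters yourself, here is a hedged sketch (modern Swift/AVFoundation naming) of reading the basic video track properties back out of a recorded clip – the URL and function name are just placeholders:

import AVFoundation

// Sketch: print resolution, frame rate and data rate of a recorded clip.
func describeVideo(at url: URL) {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let size = track.naturalSize.applying(track.preferredTransform)   // honors the 90° rotation flag
    print("Resolution: \(abs(size.width)) x \(abs(size.height))")
    print("Nominal frame rate: \(track.nominalFrameRate) fps")
    print("Data rate: \(track.estimatedDataRate / 1_000_000) Mb/s")
}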

Software Applications for Still & Video Imaging on the iPhone4S

The following part of the discussion will cover a few of the apps that I use on the iPhone4S. These are just what I have come across and find useful – this is not even close to all the apps available for the iPhone for imaging. I obtained all of the apps via the normal retail Apple App Store – I have no relationship with any of the vendors – they are unaware of this article (well, at least until it’s published…)

I am not a professional reviewer, and take no stance as to absolute objectivity – I do always try to be accurate in my observations, but reserve the right to have favorites! The purpose in this section is really to give examples of how a few representative apps manage to expose the hardware and underlying iOS software to the user, showing the differences in design and functionality.

These apps are mostly ‘purpose-built’ for photography – as opposed to some other apps that have a different overall purpose but contain imaging capabilities as part of the overall feature set. One example (that I have included below) is EasyRelease, an app for obtaining a ‘model release’ [legal approval from the subject to use his/her likeness for commercial purposes]. This app allows taking a picture with the iPhone/iPad for identification purposes – so has some very basic image capture abilities – it’s not a true ‘photo app’.

BTW, this entire post has been focused only on the iPhone camera, not the iPad (both 2nd & 3rd generation iPads contain cameras) – I personally don’t think a tablet is an ideal imaging device – it’s more a handy accessory, for when you have your tablet out and need to take a quick snap, than a camera. Evidently Apple feels this way as well, since the camera hardware in the iPads has always lagged significantly behind that of the iPhone. However, most photo apps will work on both the iPad as well as the iPhone (even on the 1st generation model – with no camera), since many of the apps support working with photos from the Camera Roll (library) as well as directly from the camera.

I frequently work this way – shoot on iPhone, transfer to iPad for easier editing (better for tired eyes and big fingers…), then store or share. I won’t get into the workflows of moving images around – it’s not anywhere near as easy as it should be, even with iCloud – but it’s certainly possible and often worth the effort.

Here is the list of apps that will be covered. For quick reference I have listed them all below with a simple description, a more detailed set of discussions on each app follows.

[Note:  due to the level of detail, including many screenshots and photo examples used for the discussion of each app, I have separated the detailed discussions into separate posts – one for each app. This allows the reader to select only the app(s) they may be interested in, and keeps each individual post to a reasonable length. This is important for mobile readers…]

Still Imaging

Each of the app names (except for original Camera) is a link that will take you to the corresponding page in the App Store.

Camera  The original photo app included on every iPhone. Basic but intuitive – and of course the biggest plus is the ability to fast-launch this from the lock screen without unlocking the phone first. For street photography (my genre) this is a big feature.

Camera+  I use this as much for editing as shooting; the biggest advantage over the native iPhone camera app is that you can set different parts of the frame for exposure and focus. The info covers the just-released version 3.0.

Camera Plus Pro  This is similar to the above app (Camera+) – some additional features, not the least of which it shoots video as well as still images. Although made by a different company, it has many similar features, filters, etc. It allows for some additional editing functions and features ‘live filters’ – where you can add the filter before you start shooting, instead of as a post-production workflow in Camera+. However, there are tradeoffs (compression ratio, shooting speed, etc.)  Compare the apps carefully – as always, know your tools…  {NOTE: There are two different apps with very similar names: Camera+, made by TapTapTap with the help of pixel wizard Lisa Bettany; and Camera Plus, made by Global Delight Technologies – who also make Camera Plus Pro – the app under discussion here. Camera+ costs $0.99 at the time of this post; Camera Plus is free; Camera Plus Pro is $1.99 — are you confused yet? I was… to the point where I felt I needed to clarify this situation of unfortunately very similar brand names for somewhat similar apps – but there are indeed differences. I’m going to be as objective in my observations as possible. I am not reviewing Camera Plus, as I don’t use it. Don’t infer anything from that – this whole blog is about what I know about what I personally use. I will be as scientific and accurate as possible once I write about a topic, but it’s just personal preference as to what I use}

almost DSLR is the closest thing to fully manual control of the iPhone camera you can get. Takes some training, but is very powerful once you get the hang of it.

ProHDR I use this a lot for HDR photography. Pic below was taken with this. It’s unretouched! That’s how it came out of the camera…

Big Lens This allows you to manually ‘blur’ the background to simulate shallow depth of field. Quite useful, since the roughly 30mm (35mm-equivalent) focal length puts almost everything in focus.

Squareready  If you use Instagram then you know you need to upload in square format. Here’s the best way to do that.

PhotoForge2  Powerful editing app. Basically Photoshop on the iPhone.

Snapseed  Another very good editing app. I use this for straightening pix, as well as for its ability to tweak small areas of the picture differently. On some iPhone snaps I have adjusted 9 different areas of the picture with things like saturation, contrast, brightness, etc.

TrueDoF  This one calculates true depth-of-field for a given lens, sensor size, etc. I use this when shooting DSLR to plan my range of focus once I know my shooting distance.

OptimumCS-Pro  This is sort of the inverse of the above app – here you enter the depth of field you want, then OCSP tells you the shooting distance and aperture you need for that.

Iris Photo Suite  A powerful editing app, particularly in color balance, changing histograms, etc. Can work with layers like Photoshop, perform noise reduction, etc.

Filterstorm  I use this app to add visible watermarks to images, as well as many other editing functions. Works with layers, masks, variable brushes for effects, etc.

Genius Scan+  While this app was intended for scanning documents with the camera to pdf (and I use it for this as well), I found that it works really well to straighten photos… like when you are shooting architecture and have unavoidable keystone distortion… Just be sure to pull back and give yourself some surround on your subject, as the perspective cropping technique that is used to straighten costs you some of your frame…

Juxtaposer  This app lets you layer two different photos onto each other, with very controllable blending.

Frame X Frame  Camera app, used for stop motion video production as well as general photography.

Phonto  One of the best apps for adding titles and text to shots.

SkipBleach  This mimics the effect of skipping (or reducing) the bleach step in photochemical film processing. It’s what gives that high contrast, faded and harsh ‘look’.

Monochromia  You probably know that getting a good B&W shot out of a color original is not as simple as just desaturating… here’s the best iPhone app for that.

MagicShutter  This app is for time exposures on iPhone, also ‘light painting’ techniques.

Easy Release  Professional model release. Really, really good – I use it on iPad and have never gone back to paper. Full contractual terms & conditions, you can customize with your additional wording, logo, etc. – a relatively expensive app ($10) but totally worth it in terms of convenience and time saved if you need this function.

Photoshop Express  This is actually a bit disappointing for a $5 app, others above do more for less – except the noise reduction (a new feature) is worth it for that alone. It’s really, really good.

Motion Imaging

Movie*Slate  A very good slate app.

Storyboard Composer  Excellent app for building storyboards from shot or library photos, adding actors, camera motion, script, etc. Powerful.

Splice  Unbelievable – a full video editor for the iPhone/iPad. Yes, you can: drop movies and stills on a timeline, add multiple sound tracks and mix them, work in full HD, has loads of video and audio efx, add transitions, burn in titles, resize, crop, etc. etc. Now that doesn’t mean that I would choose to edit my next feature on a phone…

iTC Calc  The ultimate time code app for iDevices. I use on both iPad and iPhone.

FilmiC Pro  Serious movie camera app for iPhone. Select shooting mode, resolution, 26 frame rates, in-camera slating, colorbars, multiple bitrates for each resolution, etc. etc.

Camera Plus Pro  This app is listed under both sections, as it has so many features for both still and motion photography. The video capture/edit portion even has numerous filters that can be used during capture.

Camcorder Pro  A simple but powerful HD camera app. Anti-shake and other features.

This concludes this post on the iPhone4S camera software. Please check out the individual posts following for each app mentioned above. I will be posting each app discussion as I complete it, so it may be a few days before all the app posts are uploaded. Please remember these discussions on the apps are merely my observations on their behavior – they are not intended to be a full tutorial, operations manual or other such guide. However, in many cases, the app publisher offers little or no extra information, so I believe the data provided will be useful.

iPhone4S – Section 2: Contrast – the essence of photography – and what that has to do with an iPhone…

March 9, 2012 · by parasam

If a photo was all white – or all black – there would be no contrast, no differentiation, no nothing. A photo is many things, but first and foremost it must contain something. And something is recognized by one shape, one entity, standing out from another. Hence… contrast. This is the prima facie of a photograph – whether color or monochrome, whether tack sharp like a Weston or a blur against a rain smeared window – contrast of elements is the core of a photograph.

After that can come focus, composition, tonal range, texture, evocative subject… all the layers that distinguish great from mundane – but they all run second.

Although this set of posts is indeed concerned with an exploration of the mechanics and limitations of a rather cool cellphone camera (iPhone4S), the larger intent is to embolden the user with a tool to image his or her surroundings. The fact that such a small and portable device is capable of imaging at a level that was only a few years ago relegated to DSLR cameras is a technological wonder. Absolutely it is not a replacement for a high quality camera – but in the hands of a trained and experienced person with a vision and the patience to understand the possibilities of such a device, improbable things are possible.

Contrast of devices – DSLR vs cellphone

This post will cover a few of the limitations and benefits of a relatively high quality cellphone camera. While I am discussing the iPhone4S camera in particular, these observations apply to any modern reasonably high quality cellphone camera.

For the first 50 years or so of photography, portability was not even a possibility. Transportability yes, but large view cameras, glass or tin plates and the need for both camera (on tripod) and subjects (either mountains that didn’t move or people frozen in a tableau) to remain locked in place didn’t do much for spontaneity. Roll film, the Brownie camera, the Instamatic, eventually the 35mm camera system – not to mention Polaroid – changed the way we shot pictures forever.

But more or less, for the first hundred years of this art form, the process was one of delayed gratification. One took a photo, then waited through hours or days of photochemical processes to see what actually happened. The art of previsualization became paramount for a professional photographer, for only if you could reasonably predict how your photo would turn out could you stay in business!

With the first digital picture ever produced in 1975 (in a Kodak lab), this is indeed a young science. Consumer digital photography is only about 20 years old – and a good portion of that was relatively low performance ‘snapshot’ cameras. High end digital cameras for professionals only came on the scene in the late 1990’s – at obscene prices. The pace of development since then has been nothing short of stratospheric.

We now have DSLR (Digital Single Lens Reflex) cameras that have more resolution than any film stock ever produced; with lenses that automatically compensate for vibration, assist in exposure and focus, and have light-gathering capabilities that will allow excellent pictures in starlight.

These high-end systems do not come cheaply, nor are they small and lightweight. Even though they are based on the 35mm film camera system, and employ a digital sensor about the same size as a 35mm film frame – they are considerably complex imaging computers and optical systems – and are not for the faint of heart or pocketbook!

Full-sized DSLR with zoom lens and bellows hood

With camera backs only (no lens) going for $7,000 and high quality lenses costing from $2,000 – $12,000 each, these wonders of modern imaging technology require substantial investment – of both knowledge and cash.

On the other end of the spectrum we have the consumer ‘point and shoot’ cameras, usually of a few megapixels resolution, and mostly automatic in function. The digital equivalent of the Kodak Brownie.

Original Kodak Brownie camera

These digital snapshot cameras revolutionized candid photography. The biggest change was the immediacy – no more waiting and expensive disappointment of a poorly exposed shot – one just looked and tried again. If nothing else, the opportunity to ‘self-teach’ has already improved the general photographic skill of millions of people.

Almost as soon as the cellphone was invented, the idea of stuffing a camera inside came along. With the first analog cellphones arriving in the mid-1980s, the first cellphone with a built-in camera (the Kyocera VP-210) was announced in 1999.

A little over a decade later we have the iPhone4S and similar camera systems routinely used by millions of people worldwide. In many cases, the user has no photographic training and yet the results are often quite acceptable. This blog however is for those who want to take a bit of time to ‘look under the hood’ and extract the maximum capabilities of these small but powerful imaging devices.

Major differences between DSLR cameras and cellphone cameras

The essence of a cellphone camera is portability, while the prime focus of a DSLR camera is to produce the highest quality photograph possible – given the constraints of cost, weight and complexity of operation. It is only natural then that many compromises are made in the design of a cellphone camera. The challenges of very light weight, low cost, small size and other technical issues forced cellphone cameras into a low quality genre for some time. Not any more. Yes, it is absolutely correct that there are many limitations to even ‘high quality’ cellphone cameras such as the iPhone, but with an understanding of these limitations, it is possible to take photos that many would never assume came from a phone.

One of the primary limitations on a cellphone camera is size. Given the physical constraints of the design package for modern cellphones, the entire camera assembly must be very small, usually on the order of ½” square and less than ¼” thick. Compared to an average DSLR, which is often 4” wide by 3” high and 2” thick – the cellphone camera is microscopic.

The first challenge this presents is sensor size. Two factors actually come into play here:  the actual X-Y dimensions of the sensor, and the lens focal length. The covering power of the lens (the area that a lens can cover with a focused image) is a function of the focal length of the lens, and the physical dimensions required in the lens barrel assembly materially affect the overall thickness of the lens/camera assembly – so compromises have to be made here to keep the overall size within limits.

The combination of physical sensor size and the depth that would be required if the actual focal length was more than about 5mm, mandates typical cellphone camera sensor sizes to be in the range of 1/3” in size. For example, the iPhone4S sensor is 4.54mm x  3.42mm and the lens has a focal length of 4.28mm. Most other quality cellphone cameras are in this range.
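As a quick check of those numbers, the 35mm-equivalent focal length is just the real focal length scaled by the ratio of the full-frame diagonal (about 43.3mm) to the sensor diagonal – a worked sketch using the figures above:

import Foundation

// iPhone4S figures quoted above.
let sensorW = 4.54, sensorH = 3.42          // sensor dimensions, mm
let focal   = 4.28                          // actual lens focal length, mm
let fullFrameDiagonal = 43.27               // 36x24mm frame diagonal, mm

let sensorDiagonal = (sensorW * sensorW + sensorH * sensorH).squareRoot()
let cropFactor = fullFrameDiagonal / sensorDiagonal
print(String(format: "Crop factor ~%.1f, equivalent focal length ~%.0fmm",
             cropFactor, focal * cropFactor))   // ~7.6x, ~33mm – close to the 32mm quoted earlier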

Full details (and photographs) of the actual iPhone hardware will be discussed in the following blog, iPhone4S Specs & Hardware.

The limitation of sensor size then sets the physical size of the pixels that make up the sensor. With a desire to offer a relatively high megapixel count – for sharp resolution – the camera manufacturer is then forced to a very small pixel size. The iPhone4S pixel size is 1.4μm. That is really, really small – only about 55 millionths of an inch on a side. The average size of a pixel on a high quality “35mm” style DSLR camera is roughly 40X larger by area…

The small pixel size is one of the largest factors in the differences that make cellphone cameras less capable than full-fledged DSLR cameras. The light sensitivity is much lower, due to the basic nature of how a CCD/CMOS sensor works.

Film vs CCD/CMOS sensor technology – Blacks & Whites

To fully understand the issues with small digital sensor pixel size we need to briefly revisit the differences between film and digital image capture first. Up until 50 years ago, only photochemical film emulsions could capture images. The fundamental way that light is ‘frozen’ into a captured image is very different for film as compared to digital techniques – and it is visible in the resultant photographs.

That is not to say one is better; they are just different. Furthermore, there is a difference in appearance to the human eye between a fully ‘chemical’ process (a photograph captured on film, then printed directly onto photo paper and developed chemically) and a film image that is scanned and printed digitally. Film scanners use the same type of CCD array that digital cameras use, and the basic difference in image capture once again comes into play.

Without getting lost in the wonderful details of materials science, physics and chemistry that all play a part in how photochemical photography works, when light strikes film the energy of the light starts changing certain molecules of the film emulsion. The more light that hits certain areas of the film negative, the more certain molecules start clumping together and changing. Once developed, these little groups of matter become the shadows and light of a photograph. All film photographs show something we call ‘grain’ – very small bits of optical gravel that actually constitute the photograph.

The important bit here is to remember that with film, exposure (light intensity X time) results in increased amounts of ‘clumped optical gravel’ – which when developed looks black on a film negative. Of course black on a negative prints to white on a positive – the print that we actually view.

Conversely, on film, very lightly exposed portions of the negative (the shadows, those portions of the picture that were illuminated the least) show up as very light on the negative. This brings us to one of the MOST important aspects of film photography as compared to digital photography:

  • With film, you expose for the shadows and print for the highlights
  • With digital, you expose for the highlights and print for the shadows

The two mediums really ARE different. An additional challenge here is when we shoot film, but then scan the negative and print digitally. What then? Well, you have to treat this scenario as two serial processes:  expose the film as you should – for the shadows. Then when you scan the film, you must expose for the highlights (since you are in reality taking a picture of a picture) and now that you are in the digital domain, print for the shadows.

The reason behind all this is the difference between how film reacts to light and how a digital sensor works. As mentioned above, film emulsions react to light by increasing the amount of 'converted molecules' – silver halide crystals to be exact – leaving areas that received little light (the dark areas in the original scene) virtually unexposed.

Digital sensor capture, using either CCD or CMOS technology (more on the difference in a moment), responds to light in a different manner: the photons that make up light fall on the sensor elements (pixels) and 'fill up' the energy level of each 'pixel container'. The resulting voltage level of each pixel is read out and turned into an image by the computational circuits associated with the sensor. The dark areas in the original image, since they contribute very little illumination, leave the 'pixel tanks' mostly unfilled. The high sensitivity of the photo-sensitive array means that any stray light, electrical noise, etc. can be interpreted as 'illumination' by the sensor electronics – and it is. The bottom line is that the low-light areas (shadows) in an image captured digitally are always the noisiest.

To sum it up: blacks in a film negative are the least noisy, as basically nothing is happening there; blacks in a digital image are the most noisy, since the unfilled 'pixel containers' are like little magnets for any kind of stray energy. Fundamentally, digitally captured images look different from film: noisy blacks in digital, clean blacks in film.
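
To see why the shadows end up noisiest, here is a toy simulation in Python. The noise model (Poisson photon shot noise plus a fixed read-noise floor) and every number in it are my own illustrative assumptions, not a model of any real sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensor model: photon shot noise (Poisson) plus a constant electronic read-noise floor.
READ_NOISE_E = 5.0   # electrons of read noise - an illustrative assumption, not a measured value

def capture(signal_e, n=100_000):
    """Simulate n pixels each receiving 'signal_e' electrons of light on average."""
    photons = rng.poisson(signal_e, n)                  # shot noise from the light itself
    electronics = rng.normal(0.0, READ_NOISE_E, n)      # stray/electronic noise
    return photons + electronics

for label, signal in [("deep shadow", 10), ("midtone", 1_000), ("bright highlight", 50_000)]:
    pixels = capture(signal)
    snr = pixels.mean() / pixels.std()
    print(f"{label:>16}: {signal:>6} e-  ->  SNR ≈ {snr:6.1f}")
```

The signal-to-noise ratio collapses exactly where the 'pixel containers' are least filled, which is the noisy-blacks effect described above.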

There are two technologies for digital sensor image capture: CCD and CMOS. While similar at a high level, there are significant differences between them. CCD (Charge Coupled Device) sensors are older and typically produce high-quality, low-noise images. CMOS (Complementary Metal Oxide Semiconductor) arrays consume much less power, are less sensitive to light, and are far less expensive to fabricate. This is why all cellphone cameras use CMOS technology – the iPhone included – while many medium- to high-end DSLR cameras have used CCD technology (though CMOS has been steadily taking over in DSLRs as well).

The above issue only adds to the quality challenge for a cellphone camera: using less expensive technology means higher noise, lower quality, etc. in the produced image. One would therefore think that to get a good exposure on digital you would want to 'expose for the shadows' – adding exposure to the shadow areas to keep noise down. Unfortunately, the opposite is actually the case!

The reason is that in digital capture, once a pixel has been 'filled up' (i.e. has received enough light that its level is at the maximum [255 for an 8-bit system]) it can no longer hold any detail – that pixel is simply clipped at pure white. No amount of post-processing (Photoshop, etc.) can recover detail that was clipped at capture. With under-exposed blacks, you can always apply noise reduction, raise levels, etc. and play with an 'empty-ish container'.

That is why it's so important to 'expose for the highlights' with digital – once you have clipped an area of the image at pure white, you can't ever get that detail back – it's burnt out. For film, the opposite is true, due to the negative process: if you didn't get SOME exposure in the shadows, you can't make something out of nothing – all you get is gray noise if you try to pump up blacks that have no detail. In film, you expose for the shadows; you can always tone down the highlights later.
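
Here is a minimal numerical sketch of that asymmetry (the numbers are arbitrary, chosen only to illustrate the clipping behaviour):

```python
import numpy as np

# A smooth gradient standing in for highlight detail (e.g. texture in a bright sky).
scene = np.linspace(200, 400, 11)   # "true" light levels; some exceed the sensor's 255 ceiling

# Over-exposed capture: everything above 255 clips to pure white.
clipped = np.clip(scene, 0, 255)
recovered = clipped * 0.6           # pulling exposure down in post just darkens a flat patch
print(recovered)                    # the top of the gradient is a featureless plateau - detail gone

# Under-exposed capture: a quarter of the light, plus a little noise, then lifted 4x in post.
rng = np.random.default_rng(1)
under = scene / 4 + rng.normal(0, 2, scene.shape)
lifted = under * 4
print(lifted)                       # the gradient survives, just with visible noise
```

Scaling the clipped values back down cannot reinvent detail that was thrown away at capture, whereas the lifted shadows keep their shape at the cost of amplified noise.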

So, with your cellphone camera, ALWAYS make sure you don't blow out the highlights, even if you have to compromise with noisy blacks – you can use various techniques to minimize that issue during post-production.

Fixed vs Adjustable Aperture

Another major difference between digital cameras and cellphone cameras is the issue of fixed aperture. All cellphone cameras have a fixed aperture – i.e. no way to adjust the lens aperture as one does with the “f-stop” ring on a camera lens. Essentially, the cellphone lens is “wide open” all the time.  This is purely a function of cost and complexity. In normal cameras, the aperture is controlled with a ‘variable vane’ system, a rather complex set of curved thin pieces of metal that open and close a portal within the lens assembly to allow either more or less light through the lens as a whole.

With a typical camera lens measuring 3” in diameter and about 2” – 6” long, this is not a difficult mechanical design problem. A cellphone lens, on the other hand, is usually less than ¼” in diameter and less than ¼” deep. The mechanical engineering required to fit an aperture control mechanism into that space would be very difficult and exorbitantly expensive.

Also, most cellphone manufacturers want the device to be very simple to operate, so adding another control that significantly affects camera operation was not high on the feature list.

A fixed aperture means several things. Normally a wide-open aperture implies a shallow depth of field, but with the relatively wide-angle (short focal length) lenses employed by most mobile phone manufacturers the depth of field is usually more than enough. It also means that for daylight exposures, adjustment of both ISO and shutter speed is necessary to avoid over-exposure.
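
To make the trade-off concrete, here is a small sketch of the standard exposure-value arithmetic with the aperture pinned wide open. The f/2.4 figure is an assumption based on published iPhone4S specs, and the scene EV numbers are just typical textbook values:

```python
import math

F_NUMBER = 2.4      # assumed fixed, wide-open aperture of the iPhone4S lens

def shutter_for(ev100, iso, n=F_NUMBER):
    """Shutter time (seconds) needed to hit a target exposure value at a given ISO.
    Uses the standard relationship: log2(N^2 / t) = EV100 + log2(ISO / 100)."""
    return n * n / (2 ** (ev100 + math.log2(iso / 100.0)))

for scene, ev in [("hazy sun", 14), ("overcast", 12), ("indoor mall", 7)]:
    for iso in (64, 400):
        t = shutter_for(ev, iso)
        print(f"{scene:>12} @ ISO {iso:>3}: ~1/{round(1 / t):>5} sec")
```

In bright light, anything above the base ISO would demand a shutter speed faster than the hardware can deliver, so the camera must stay at its lowest ISO; indoors, the only levers left are a slower shutter or a higher (noisier) ISO.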

Exposure settings

On a digital camera, you can use either automatic or manual controls to set the exposure. Many cameras allow either “shutter priority” or “aperture priority.” With shutter priority, the user selects a shutter speed (or range of speeds), and the camera adjusts the aperture as required to get the correct exposure. Since this setting does not let the user set the f-stop, the depth of field of a photograph will vary depending on the light level.

With aperture priority, the user selects an f-stop setting, and the camera selects a shutter speed that is appropriate. With this setting, the user does not set the shutter speed, so care is required if slow shutter speeds are anticipated:  camera and/or subject movement must be minimized.

On a film or digital camera, the user can set the ISO speed rating manually. This is not possible on most cellphone cameras. The speed rating of the sensor (ISO #) is really a ‘baseline’ for the exposure setting.

Here is an example of setting the base ISO speed correctly:

[Three example photos: under exposure, normal exposure, over exposure]

The exposure algorithm inside the cellphone camera software computes both the shutter speed and the ISO (the only two factors it can change, since the aperture is fixed) and arrives at a compromise that the camera software believes will make the best exposure. Here is where art and experience come into play – no cellphone hardware or software manufacturer has yet published its algorithms, nor do I ever expect this to happen. One must shoot lots and lots of exposures under controlled conditions to figure out how a given camera (and app) decides to set these parameters.

Like anything else, if you take the time to know your tools, you get a better result. From what I have observed by shooting several thousand frames with my iPhone4S, using about 10 different camera apps to do so, the following is a very rough approximation of a typical algorithmic process:

  • A very fast ‘pre-exposure’ of the entire frame is performed  (averaging together all the pixels without regard to an exposure ‘area’) to arrive at a sense of the overall illumination of the frame.
  • From this an initial ISO setting is assigned.
  • Based on that ISO, then the ‘exposure area’ (usually shown in a box in the display, or sometimes just centered in the frame) is used to further set the exposure:  the pixels within the exposure area are averaged, then, based on the ISO setting, a shutter speed is chosen to place the average light level at Zone 5 (middle gray value) of a standard exposure index. [Look for an upcoming blog on Zones if you are not familiar with this]

It appears that subsequent adjustments to this process can happen (and most likely do!) – again, dependent on a particular vendor's choice of algorithm. For instance, if the final shutter speed produced by the sequence above is very slow (under 1/30 second), the base ISO sensitivity will likely be raised, since slow shutter speeds reveal both camera shake and subject movement.
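
Purely as an illustration of that sequence, here is a toy Python sketch of the kind of decision logic described above. Every constant in it (the Zone 5 target, the ISO ladder, the shutter limits, the thresholds) is my own assumption – this is emphatically not Apple's algorithm:

```python
import numpy as np

# A toy re-creation of the metering sequence described above. All constants are
# illustrative assumptions, not Apple's code.
ISO_STEPS = [64, 100, 200, 400, 800, 1000]
MIDDLE_GRAY = 118            # "Zone 5" target for an 8-bit frame (assumption)
SLOWEST_SHUTTER = 1 / 15     # below this, trade shutter for ISO to fight shake/subject blur
FASTEST_SHUTTER = 1 / 2000

def meter(frame, roi, base_shutter=1 / 60, base_iso=64):
    # 1. Fast "pre-exposure": average the whole frame to gauge overall illumination.
    overall = frame.mean()
    # 2. Pick an initial ISO: the dimmer the overall scene, the higher the starting ISO.
    iso_index = int(max(0, (MIDDLE_GRAY - overall) // 30))
    iso = ISO_STEPS[min(iso_index, len(ISO_STEPS) - 1)]
    # 3. Average the exposure area (ROI) and scale the shutter so its mean lands on Zone 5.
    y0, y1, x0, x1 = roi
    roi_mean = frame[y0:y1, x0:x1].mean()
    shutter = base_shutter * (MIDDLE_GRAY / max(roi_mean, 1)) * (base_iso / iso)
    # 4. If the result is slower than the hand-holdable limit, bump ISO and halve the time.
    while shutter > SLOWEST_SHUTTER and iso < ISO_STEPS[-1]:
        iso = ISO_STEPS[ISO_STEPS.index(iso) + 1]
        shutter /= 2
    return iso, float(np.clip(shutter, FASTEST_SHUTTER, SLOWEST_SHUTTER))

# Example: a dim frame with a slightly brighter metering box in the middle.
frame = np.full((480, 640), 40.0)
frame[200:280, 280:360] = 70.0
print(meter(frame, roi=(200, 280, 280, 360)))
```

A real implementation would weight the metering area far more cleverly, react to face detection, and so on – the point is only to show how ISO and shutter speed are the two knobs being juggled against each other.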

Neither Apple nor any other manufacturer seems to publish the exact limits of their embedded camera's ISO sensitivity or range of shutter speeds. Through many, many controlled tests, I have determined (and apparently so have others, as published info I have seen on the web corroborates my findings) that the shutter speeds of the iPhone4S range from 1/15 sec down to 1/2000 sec, while the ISO speeds range from ISO 64 to ISO 1000.

There are apps that will allow much longer exposures, and potentially a wider range of ISO values – I have not had time to run extensive tests with all the apps I have tried. Each camera app vendor has the choice to implement more or less features that Apple exposes in the programmatic interface (more on that in Part 4 of this series), so the potential variations are large.

Movement

As you have undoubtedly experienced, many photographs are spoiled by inadvertent movement of the subject, camera, or both. While sometimes movement is intended – and can make the shot artistically – most often this is not the case. With the very tiny sensor that is normal for any cellphone camera, the sensor is very often hungry for light, so the more you can give it, the better quality picture you will get.

What this means in practice is that when possible, in lower light conditions, brace the camera against a solid object, put it on a tripod (using an adapter), etc. Here again you will get better results with experience: our eyes have fantastic adaptive properties – cameras do not. When we walk into a mall from the sunny outdoors, within seconds we perceive the mall to be as well lit as the outside – even though in real terms the average light level is less than 1% of what it was outdoors!

However, our little iPhone is now struggling to get enough light to expose a picture. Outdoors we might have found that at ISO 64 we were getting shutter speeds of 1/600 second; indoors we have changed to ISO 400 at 1/15 second! Such slow shutter speeds will almost always show blurred movement, whether from a person walking, camera shake, or both.
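
For a feel of how damaging that 1/15 second really is, here is a rough back-of-the-envelope estimate; the walking speed, scene width and framing are assumptions chosen only for illustration:

```python
# Rough motion-blur estimate for a walking subject at various shutter speeds.
WALKING_SPEED_M_S = 1.5    # assumed casual walking pace
FRAME_WIDTH_M = 3.0        # assumed width of scene covered by the frame at the subject's distance
IMAGE_WIDTH_PX = 3264      # iPhone4S image width in pixels

for shutter in (1 / 600, 1 / 60, 1 / 15):
    blur_m = WALKING_SPEED_M_S * shutter                   # distance moved during the exposure
    blur_px = blur_m / FRAME_WIDTH_M * IMAGE_WIDTH_PX      # mapped onto image pixels
    print(f"1/{round(1 / shutter):>3} sec -> ~{blur_px:.0f} pixels of blur")
```

A couple of pixels of blur at 1/600 second is invisible; well over a hundred pixels at 1/15 second is exactly the kind of smear visible in the example below.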

Here are a few examples:

1/20 sec @ ISO 400, camera handheld: camera shake is visible in the blurred lines of the glass panels (upper left), along with subject movement (her left leg is almost totally blurred).