
Objective Photography is an Oxymoron (all photos lie…)

August 18, 2016 · by parasam

There is no such thing as an objective photograph

A recent article in the Wall Street Journal (here) entitled “When Pictures Are Too Perfect” prompted this post. The premise of the article is that too much ‘manipulation’ (i.e. Photoshopping) is present in many of today’s images, particularly in photojournalism and photo contests. There is evidently an arbitrary standard (one that no one seems able to define objectively) which posits that essentially only an image ‘straight out of the camera’ is ‘honest’ or acceptable – particularly if one is a photojournalist or is entering an image into some form of competition. Examples are given, such as Harry Fisch having a top prize from National Geographic (for the image “Preparing the Prayers at the Ganges”) taken away because he digitally removed an extraneous plastic bag from an unimportant area of the image. Steve McCurry, best known for his iconic “Afghan Girl” photo on the cover of National Geographic magazine in 1985, was accused of digital manipulation of some images shot in 1983 in Bangladesh and India.

On the whole, I find this absurd and the logic behind such attempts at defining an ‘objective photograph’ fatally flawed. From a purely scientific point of view, there is absolutely no such thing as an ‘objective’ photograph – for a host of reasons. All photographs lie, permanently and absolutely. The only distinction is by how much, and in how many areas.

The First Lie: Framing

The very nature of photography, from the earliest days until now, has at its core an essential feature: the frame. Only a certain amount of what the photographer can see can be captured as an image. There are four edges to every photograph. Whether the final ‘edges’ presented to the viewer are due to the limitations of the camera/film/image sensor, or to cropping during the editing process, is immaterial. The initial choice of frame is made by the photographer, in concert with the camera in use, which imposes physical limitations that cannot be exceeded. The choice of frame is completely subjective: it is the eye/brain/intuition of the photographer that decides in the moment where to point the camera and what to include in the frame. Is pivoting the camera a few degrees to the left to avoid an unsightly telephone pole “unwarranted digital manipulation?” Most news editors and photo contest judges would probably say no. But if the same exact result is obtained by cropping the image during editing, we already start to see disagreement in the literature.

If Mr. Fisch had simply walked over and picked up the offending plastic bag before exposing the image, he would likely be the deserved recipient of his 1st place prize from National Geographic; but because he removed the bag during editing, his photograph was disqualified. By this same logic: when Leonardo da Vinci painted the “Mona Lisa”, he included a balustrade with two columns behind her. There is perfect symmetry in the placement of Lisa Gherardini (the presumed model) between the columns, which helps frame the subject. Painting takes time, and it is likely that a bird would land on the balustrade from time to time. Was Leonardo supposed to include the bird or not? Did he ‘manipulate’ the image by only including the parts of the scene that were important to the composition? Would any editor or judge dare ask him today, were that possible?

“So-So Happy!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

“So-So Happy… NOT!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

A combination example of framing and depth-of-field. One photographer is standing 6 ft further away (from my camera position) than the other, but the foreshortening of the 200mm telephoto appears to depict ‘dueling photographers’. [©2012 Ed Elliott / Clearlight Imagery]

The Second Lie: The Lens

No photograph can occur without a lens. Every lens has certain irrefutable properties, focal length and maximum aperture being the most important. Each of these parameters imparts a vital, and subjective, aspect to the image subsequently captured. Since the ‘lingua franca’ of focal length is the ubiquitous 35mm camera, we can generalize here: 50mm is the so-called ‘normal’ lens; 35mm is considered ‘wide angle’, 24mm ‘very wide angle’ and 10mm a ‘fisheye’. Going in the other direction, 85mm is often considered a ‘portrait’ lens (slight close-up), 105mm a medium ‘telephoto’, 200mm a ‘telephoto’, and anything beyond is for sports or space exploration. Focal length also governs how much of the scene appears in focus: depth of field shrinks as focal length grows. Wide angle lenses tend to bring the entire field of view into sharp focus, while telephotos blur out everything except what the photographer has selected as the prime focus point.
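
That depth-of-field behavior can be estimated with the standard thin-lens approximation. The sketch below is my own illustration (the function name and the 0.03mm circle-of-confusion value commonly assumed for full-frame 35mm are not from this post); it shows why a wide angle stopped down gives near-total sharpness while a telephoto wide open isolates a razor-thin slice of the scene.

```python
import math

def depth_of_field(focal_mm, f_stop, subject_m, coc_mm=0.03):
    """Approximate near/far limits of acceptable focus (thin-lens model).
    coc_mm is the circle of confusion; 0.03 mm is a common full-frame value."""
    f = focal_mm
    s = subject_m * 1000.0                       # work in millimetres
    hyperfocal = f * f / (f_stop * coc_mm) + f   # H = f^2 / (N * c) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = math.inf if s >= hyperfocal else s * (hyperfocal - f) / (hyperfocal - s)
    return near / 1000.0, far / 1000.0           # back to metres

print(depth_of_field(24, 8, 3))     # wide angle at f/8:   ~1.3 m to infinity
print(depth_of_field(200, 2.8, 3))  # telephoto at f/2.8:  ~2.98 m to ~3.02 m
```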

Normal lens [©2016 Ed Elliott / Clearlight Imagery]

Telephoto lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

Wide Angle lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

FishEye lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) – curvature and edge distortions are normal for such an extreme angle-of-view lens   [©2016 Ed Elliott / Clearlight Imagery]

In addition, each lens type distorts the field of view noticeably: wide angle lenses tend to exaggerate the distance between foreground and background, making closer objects in the frame look larger than they actually are and distant objects even smaller. Telephoto lenses have the opposite effect, foreshortening and ‘flattening’ the resulting picture. For example, in a long telephoto shot of a tree on a ridge backlit by the moon, both the tree and the moon can be tack sharp, with the moon apparently sitting directly behind the tree even though it is roughly 239,000 miles away.

The other major ‘subjective’ quality of any lens is the aperture chosen by the photographer. Commonly known as the “f-stop”, this is the ratio of the focal length of the lens to the diameter of the ‘entrance pupil’ (the size of the hole that the aperture diaphragm is set to for a given capture). The maximum aperture (the largest ‘hole’ that can be set by the photographer) depends on the diameter of the lens itself in relation to the focal length. For example, with a ‘normal’ 50mm lens, if the lens is 25mm in diameter then the maximum aperture is f/2 (50/25). Larger apertures (lower f-stop ratios) require larger lenses, which are correspondingly heavier, more difficult to use and more expensive. An f/2 lens for a 50mm focal length is not that huge; to obtain the same f/2 ratio for a 200mm telephoto would require a lens that is at least 100mm (4in) in diameter – making such a device huge, heavy and obscenely expensive. As a quick comparison (Nikon full-frame prime lenses, priced from B&H Photo – a discount photo equipment supplier), a 50mm f/2.8 lens costs $300, while the same lens in f/1.2 costs $700. A 400mm telephoto at f/5.6 would be $2,200, while an identical focal length with a maximum aperture of f/2.8 will set you back a little over $12,000.
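
Since the f-number is simply the ratio described above, the size of glass required is easy to check. A minimal sketch (the function name is mine) reproducing the numbers in the previous paragraph:

```python
def entrance_pupil_mm(focal_length_mm, f_stop):
    """f-stop = focal length / entrance-pupil diameter, so pupil = focal / N."""
    return focal_length_mm / f_stop

print(entrance_pupil_mm(50, 2.0))   # 25.0 mm  - a modest piece of glass
print(entrance_pupil_mm(200, 2.0))  # 100.0 mm - roughly 4 in of front element
print(entrance_pupil_mm(400, 2.8))  # ~143 mm  - the $12,000 class of telephoto
```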

Exaggeration of object size with wide angle lens: farther objects appear much smaller than in ‘reality’. [©2011 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image as a result of long telephoto lens (f/8, 400mm lens) – the crane is hundreds of feet closer to the camera than the dark buildings behind, but looks like they are directly adjacent. [©2013 Ed Elliott / Clearlight Imagery]

Depth of field with shallow aperture (f/2.4) – in this case even with a wide angle lens the background is out of focus due to the large distance between the foreground and the background (in this case the Hudson River separated the two…) [©2013 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image with a long telephoto lens. The ship is almost 1/4 mile further away than the green roadway sign, yet appears to be directly behind it… (f/4, 400mm) [©2013 Ed Elliott / Clearlight Imagery]

Wide angle lens (14-24mm zoom lens, set at 16mm – f/2.8) [©2012 Ed Elliott / Clearlight Imagery]

Shallow depth of field due to large aperture on telephoto lens (f/4 – 200mm lens on full-frame 35mm DSLR) [©2012 Ed Elliott / Clearlight Imagery]

Wide angle shot, demonstrating sharp focus from foreground to the background. Also exaggeration of perspective makes the bow of the vessel appear much taller than the stern. [©2013 Ed Elliott / Clearlight Imagery]

The bottom line is that the choice of lens and aperture rests with the photographer (or her pocketbook) – and has a huge effect on the image taken with that lens and setting. None of these choices can be deemed either ‘analog’ or ‘digital’ manipulation of the image during editing, yet they arguably have a greater effect on the outcome, message, impact and tenor of the photograph than anything that can be done subsequently in the darkroom (whether chemical or digital).

The Third Lie: Shutter Speed

Every exposure is a product of two factors: Light × Time. The amount of light that strikes a negative (or digital sensor) is governed by the selected aperture (and possibly by any additional filters placed in front of the lens); the duration for which the light is allowed to impinge on the negative is set by the shutter speed. While the main purpose of setting the shutter speed is to produce the correct exposure once the aperture has been selected (to avoid either under- or over-exposing the image), shutter speed has a huge secondary effect on any motion of either the camera or objects in the frame. Fast shutter speeds (over 1/125th of a second with a normal lens) will essentially freeze any motion, while slow shutter speeds will result in ‘shake’, ‘blur’ and other motion artifacts. While some of these can be just annoying, in the hands of a skilled photographer motion artifacts tell a story. Likewise, a ‘freeze-frame’ (from a very fast shutter speed) can distort reality in the other direction, giving the observer a point of view that the human eye could never glimpse in reality. The hours-long time exposure of star trails or the suspended-animation shot of a bullet about to pierce a balloon are both ‘manipulations’ of reality – but they take place as the image is formed, not in the darkroom. The subjective experience of a football distorted as the kicker’s foot impacts it – locked in time by a shutter speed of 1/2000th second – is very different from the same shot of the kicker at 1/15th second, where his leg is a blurry arc against a sharp background of grass. Two entirely different stories, just from the choice of shutter speed.
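
The aperture/shutter trade can be made concrete with the textbook exposure-value relationship, EV = log2(N²/t), which is not defined in this post but follows directly from “Light × Time”. Combinations with the same EV admit the same amount of light yet tell very different stories, exactly as in the football example above. A minimal sketch:

```python
import math

def exposure_value(f_stop, shutter_s):
    """EV = log2(N^2 / t); a one-EV difference is one 'stop' of light."""
    return math.log2(f_stop ** 2 / shutter_s)

# Nearly identical EV (same light), very different photographs:
print(exposure_value(2.8, 1 / 2000))  # ~13.9 EV: frozen motion, shallow focus
print(exposure_value(8.0, 1 / 250))   # ~14.0 EV: deeper focus, a hint of blur
print(exposure_value(16.0, 1 / 60))   # ~13.9 EV: deep focus, limbs become arcs
```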

Fast shutter speed to stop action [©2013 Ed Elliott / Clearlight Imagery]

Combination of two effects: fast shutter speed to stop motion (but not too fast, slight blurring of left foot imparts motion) – and shallow depth of field to render background soft-focus (f/4, 200mm lens) [©2013 Ed Elliott / Clearlight Imagery]

High shutter speed to freeze the motion. 1/2000 sec. [©2012 Ed Elliott / Clearlight Imagery]

Fast shutter speed to provide clarity and freeze the motion. 1/800 sec @ f/8 [©2012 Ed Elliott / Clearlight Imagery]

Although I had no tripod, I wanted as fine-grained a result as possible, so I took advantage of the stillness of the subjects and a convenient wall on which to rest the camera. 2 sec exposure with ISO 500 at f/8 to keep the depth of field. [©2012 Ed Elliott / Clearlight Imagery]

The Fourth Lie: Film (or Sensor) Sensitivity [ISO]

As if Pinocchio’s nose hadn’t grown long enough already, we have yet another ‘distortion’ of reality that every image contains as a basic building block: film/sensor sensitivity. We have discussed exposure as a product of Light Intensity × Time of Exposure, but one further parameter remains. A so-called ‘correct’ exposure is one that has a balance of tonal values, and (more or less) represents the tonal values of the scene that was photographed. This means essentially that blacks, shadows, mid-tones, highlights and whites are all apparent and distinct in the resulting photograph, and the contrast values are more or less in line with those of the original scene. The sensitivity of the film (or digital sensor) is critical in this regard. Very sensitive film will produce a correct image with less exposure (a smaller aperture, a faster shutter speed, or both), while a ‘slow’ [insensitive] film will require the opposite.
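
In stops, every doubling of ISO halves the light required, so a sensitivity change can be traded directly against shutter speed (or aperture). A minimal sketch of that arithmetic (the helper names are mine; the price, as discussed further below, is grain or noise):

```python
import math

def stops_gained(iso_from, iso_to):
    """Each doubling of ISO is one stop less light required."""
    return math.log2(iso_to / iso_from)

def equivalent_shutter(shutter_s, iso_from, iso_to):
    """Keep the aperture fixed and shorten the shutter by the stops gained."""
    return shutter_s * iso_from / iso_to

print(stops_gained(100, 6400))             # 6.0 stops
print(equivalent_shutter(2.0, 100, 6400))  # a 2 s exposure becomes 1/32 s
```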

A high ISO was necessary to capture the image during late twilight. In addition a slow shutter speed was used – 1/15 sec with ISO of 6400. [©2011 Ed Elliott / Clearlight Imagery]

Low ISO (50) to achieve relatively fine grain and best possible resolution (this was a cellphone shot). [©2015 Ed Elliott / Clearlight Imagery]

Cellphone image at dusk, resulting in ISO 800 with 1/15 sec exposure. Taken from a parking garage, the highlight on the palm is from car headlights. [©2012 Ed Elliott / Clearlight Imagery]

Night photography often requires very high ISO values and slow shutter speeds. The resulting grain can provide texture as opposed to being a detriment to the shot. [©2012 Ed Elliott / Clearlight Imagery]

Fine grain achieved with low ISO of 50. [©2012 Ed Elliott / Clearlight Imagery]

Slow ISO setting for high resolution, minimal grain (ISO 50) [©2012 Ed Elliott / Clearlight Imagery]

Sometimes you frame the shot and do the best you can with the other parameters – and it works. Cellphone image at night meant slow shutter speed (1/15 sec) and lots of grain with ISO 800 – but the resultant grain and blurring did not detract from the result. [©2012 Ed Elliott / Clearlight Imagery]

A corollary to film sensitivity is grain (in film) or noise (in digital sensors). If you desire a fine-grained, super-sharp negative, then you must use a slow film. If you need to produce an acceptable image in low light without a flash, say for photojournalism or surveillance work, then you must use a fast film and accept grain the size of rice in some cases… Life is all about compromise. Again, the final outcome is subjective, and totally within the control of the accomplished photographer, but this exists completely outside the darkroom (or Photoshop). Two identical scenes shot on films (or sensor settings) with widely disparate ISOs will give very different results. A slow ISO will produce a very sharp, super-realistic image, while a very fast ISO will be grainy, somewhat fuzzy, and can tend towards surrealism if pushed to an extreme. [Technical note: the arithmetic portion of the ISO rating is the same as the older ASA rating scale; I use the current nomenclature.]

Editing: White Lies, Black Lies, Duotone and Technicolor…

In my personal work as a streetphotographer (my gallery is here) I tell ‘white lies’ all the time in editing. By that I mean small adjustments to focus, color balance, contrast, highlight and shadow balance, etc. This is a highly personal and subjective process. I learned from master photographers (including Ansel Adams), from books, and from much trial and even more error… to pre-visualize my shots, and to mentally place the components of the image on the Zone Scale as accurately as possible with the equipment and lighting on hand. This discipline was most helpful when I was in university with no money – every shot cost, both in film and in developing chemicals. I would often choose between beer and film… film always won… fewer friends, more images… not quite sure about that choice, but I was fascinated with imagery. While pre-visualization is, I feel, an important bit of magic that can make the difference between an OK image and a great one, it’s not an easy process to follow in candid streetphotography, where the recognition of a potential shot and the chance to grab it is often 1-2 seconds.

This results, quite frequently, in things in the image not being where I imagined them in terms of composition, lighting, color balance, etc. So enter my ‘white lies’. I used to accomplish this in the darkroom with push/pull development and significant tweaking during printing (burning, dodging, different choices of contrast printing papers, etc.). Now I use Photoshop. I’m not particularly an Adobe disciple, but I started with this program in 1989 with version 0.87 (known as part of Barneyscan, on my Mac Classic) and we’ve kind of grown up together… I just haven’t bothered to learn another program. It does what I need; I’m sure I only know about 20% of its current capabilities, but that’s enough for my requirements.

The other extreme that can be accomplished by Photoshop experts (and I use the term generically here) is the ‘black lie’. This is where one puts Oprah’s head on someone else’s body, performs ‘digital liposuction’ to the extent that Lena Dunham and Adele both scream “enough!”, and many celebrities find their faces applied to actors and scenes (typically in North Hollywood) where they have never been, nor would want to be… There’s actually a great novel by the late Michael Crichton [Rising Sun, 1992] that contains a detailed subplot about digital photomanipulation of video imagery. At that time, it took a supercomputer to accomplish the detailed and sophisticated retouching of long video sequences – today tools such as Photoshop and After Effects could accomplish this on a desktop workstation in a matter of hours.

"Duotone" technique [background masked and converted to monochrome to focus the viewer on the foreground image]

“Duotone” technique [background masked and converted to monochrome to focus the viewer on the foreground image] [©2016 Ed Elliott / Clearlight Imagery]

A technique I frequently use is Duotone – and even here I am being technically inaccurate. What I mean by this is separating the object of interest from the background by masking the subject and turning the rest of the image into black and white. The juxtaposition of a color subject against a monochrome background helps isolate and focus the viewer’s attention on the subject. Frequently in streetphotography the opportunity to place the subject against a non-intrusive background doesn’t exist, so this technique is quite effective in ‘turning down’ the importance of the often busy and distracting surrounds. [Technically the term duotone is used for printing the entire image in gradations of only two colors]. Is this ‘manipulation’? Yes. Does it materially detract from, or alter the intent of, the original image that I pre-visualized in my head? No. I firmly stand behind this point of view, that all photographs “lie” to one extent or another, and any tool that the photographer has at his or her hand to generate a final image that is in accordance with the original intent is fair game. What matters is the act of conveying the vision of the photographer to the brain of the viewer. Period.

The ‘photograph’ is just the medium that transports that image. At the end of the day, a photo is a conglomeration of pixels (either printed or glowing) that transmit photons to the human visual system, and ultimately end up in the visual cortex in the back of the human brain. That is where we actually “see”.

Early photography (and motion picture film) was available only in black & white. When color photography first came along, the colors were not ‘natural’. As emulsions improved things got better, but even so there was a marked deviation from ‘natural’ that was actually ‘designed in’ by Kodak and other film manufacturers. The saturation and color mapping of Kodachrome did not match reality, but it did satisfy a public that equated punchy colors with a ‘good color photo’ and made those vacation memories happy ones… and therefore sold more film. The more subdued, and realistic, Ektachrome came along as professional photographers pushed for choice (and, quite frankly, for an easier and more open developing process – Kodachrome could only be processed by licensed labs and was notoriously difficult to process well). The downside of early Ektachrome emulsions was the unfortunate instability of the dye layers in color transparency film – leading to rapid fading of both slides and movies.

As one who has worked in film preservation and restoration for decades, I found it interesting that an early color process (the Technicolor 3-strip method), originally designed just to get vibrant colors onto the movie screen in the 1930’s, had a resurgence in film preservation. It turned out that so many of the early Ektachrome films from the 1950’s and 1960’s experienced rapid fading that significant restoration efforts were necessary to salvage some important movies. The only way at that time (before economical digital scanning of movies was possible) was – after restoration of the color negative – to use the Technicolor separation process to make three separate black & white films that represented the cyan, magenta and yellow dye layers. Then, someday in the future, the three negatives could be optically combined and printed back onto color film for viewing.

There is No Objective Truth in Photography (or Painting, Music…)

All photography is an illusion. Using a lens, a photo-sensitive element of some sort, and a box to restrict the image to only the light coming through the lens, a photograph is a rendering of what is before the lens. Nothing more. Distorted and limited by the photographer’s choice of point of view, lens, aperture, shutter speed, film/sensor and so on, the resultant image – if correctly executed – reflects at most the inner vision of the photographer’s mind and perception of the original scene. Every photograph has a story (some more boring than others).

One of the great challenges of photography (and possibly one of the reasons this art form was not taken seriously until quite recently) is that on first glance many photos appear to be just a ‘copy of reality’ – and therefore to contain no inherent artistic value. Nothing could be further from the truth. It’s just that the ‘art’ hides in plain sight… The root of the problem that engendered this post is our collective, subjective, and inaccurate view that photographs are ‘truthful’ and accurately represent the reality that was before the lens. We naively assume that photos can be trusted, that they show us the only possible view of reality. It’s time to grow up and accept that photography, just like every other art form, is first and foremost a product of the artist.

Even the unassuming mom who is taking snapshots of her kids is making choices – whether she knows it or not – about each of the parameters already discussed. Since most snapshot (or cellphone) cameras have wide angle lenses, the ‘huge nose’ effect in close-up pics of babies and youngsters (which will haunt these innocent children forever on Facebook and Instagram – data never dies…) is just an objective artifact of lens choice and distance to subject. Somewhere along the line our moral compass got out of whack when we started drawing highly artificial lines around ‘acceptable editorial behavior’ and so on. An entirely different discussion – worthy of a separate post – can be had about the photographer’s (or publisher’s) intention in sharing an image. If a deliberate attempt is made to misrepresent the scene – for financial gain, the allocation of justice, a change in power, etc. – that is an issue. But the same issue exists whether the medium that transports such a distortion is the written word, an audio recording, a painting or a 3D holograph. It is illogical to apply a set of standards or restrictions to one art form and not another, just to attempt to rein in inadvertent or deliberate distortions in a story that may be deduced from the art by an observer.

To use another common example, we have all seen photos of a full moon rising behind a skyline, or behind trees on a ridge – typically with a really large moon – and most observers just appreciate the image, the impact, the feeling. Yet a little rudimentary science, and a bit of experience with photography, reveals that most such images are composites, with an enlarged moon image layered in behind the foreground. The moon is simply never that large in relation to the rest of the image. In many cases I have seen, the lighting of the rest of the scene clearly shows that the foreground was shot at a different time of night than the moon (a full moon sits on the horizon only around dusk as it rises, or dawn as it sets). I have also seen many full moons in photographs at astronomically impossible locations in the sky, given the longitude and latitude of the foreground shown in the image.

An example of "Moon on Steroids"... The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it's obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.

An example of “Moon on Steroids”… The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it’s obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.
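
A quick back-of-the-envelope check shows why giant-moon images are almost always composites. Using the ~31 arc-minute diameter mentioned in the caption, the moon’s image on the sensor is roughly focal length × angular size; this small sketch (the function name and the chosen focal lengths are mine) shows how extreme a lens you would need before the moon dominates a full-frame (36 × 24 mm) image:

```python
import math

MOON_ARC_MINUTES = 31.0  # apparent diameter of the full moon, ~0.52 degrees

def moon_image_mm(focal_length_mm):
    """Size of the moon's image on the sensor for a given focal length."""
    theta = math.radians(MOON_ARC_MINUTES / 60.0)
    return 2 * focal_length_mm * math.tan(theta / 2)

for f in (50, 200, 400, 2000):
    print(f"{f} mm lens -> moon is {moon_image_mm(f):.2f} mm on the sensor")
# 50 mm: ~0.45 mm (a speck); 400 mm: ~3.6 mm; even 2000 mm: ~18 mm of the 24 mm frame height
```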

Why is it that such an esteemed, and talented, photographer as Steve McCurry is chastised for removing some distracting bits of an image – which in no way detracted from the ‘story’ of the image – and yet I dare say that no one in their right mind would criticize Leonardo da Vinci for including a physically impossible background (the almost mythological mountains and seas) in his rendition of Lisa Gherardini in the “Mona Lisa”? As someone who has worked in the film/video/audio industry for my entire professional life, I can tell you with absolute certainty that no modern audio recording – from Adele to Ziggy Marley – is released without being ‘digitally altered’ in some fashion. Period. It is simply an absolute in today’s production environment to ‘clean up’ every track, every mix, every completed master – removing unwanted echoes, noise, coughs, burps and other audio equivalents of Mr. Fisch’s plastic bag… and no one, ever, has complained about this or accused the artists of being ‘dishonest’.

This double standard needs to be put to rest permanently. It reflects poorly on those who take this position, demonstrating a lack of technical knowledge and a narrow perception of the art form of photography, and it gives power to those whose only interest is to malign others and detract from the powerful impact a great image can create. If ignorant observers really believe that the airplane depicted in such an image is ‘real’ (for the airplane to be of that size in relation to the tunnel and ladders, it would have to be flying at a massively illegal low altitude in that location), then those observers must take responsibility. Does the knowledge that the placement of the plane is ‘not real’ detract from the photo? Does the juxtaposition of stillness and movement (a concrete-and-steel silo versus a rapidly moving aircraft) create a visually stimulating image? Is it important whether that occurred ‘in reality’ or not? Would an observer judge it differently if this were a painting or a sketch instead of a photograph?

I love the art and science of photography. I am daily enamored with the images that talented and creative people all over the world share, whether camera originals, composites, pure fiction created in the ‘darkroom’, or some combination of all three. This is a wondrous art form, and it must be supported at all costs. It’s not easy; it takes dedication, effort, skill, perseverance, money, time and love – just like any art form. I would hope that we could move the conversation to what matters: ‘truth in advertising’. In a photo contest, nothing – repeat, nothing – should matter except the image itself. Just as with painting, sculpture, music, ceramics, dance, etc., the observed ‘art’ should be judged only on the merits of the work itself, without subjective expectations or philosophical distortions. If an image is used to reinforce a particular ‘story’ – whether for ethical, legal or news purposes – then both the words and the images must be authentic. Authentic does not mean ‘un-retouched’; it means that there is no ‘black lie’ in what is conveyed.

To summarize: let’s stop believing that photographs are ‘real’, and start appreciating the art, craftsmanship, effort and focus that this medium brings to all of us. Let’s apply a common frame of reference to all forms of art – painting, writing, photography, music, etc. – in terms of authenticity and purpose. Would we chide Escher for attempting to fool us with visual cues of an impossible reality?

A Historical Moment: The Sylmar Earthquake of 1971 (Los Angeles, CA)

August 14, 2016 · by parasam

A little over 45-1/2 years ago, at a few seconds past 6:00 AM on Feb. 9, 1971, I was jolted out of bed by a massive earthquake in Los Angeles. Or, more accurately, the bed moved so far sideways that I fell on the floor… Perhaps a good thing, as the bookshelves over my bed promptly dumped all the books, and shelves, onto the bed I had recently occupied. Other than the Kern County earthquake in 1952, this was the first major quake in California since the calamitous 1906 disaster in San Francisco. Although I went on to experience two more severe earthquakes in California (Loma Prieta / San Francisco in 1989, and Northridge / Los Angeles in 1994), this was the first in my lifetime. As a high school senior, already accepted to an engineering college where I would study physics – including geophysics (the study of earthquakes, among other things) – I knew instantly what was happening. Still, the force and sound amazed me: they were so much greater than I could have imagined.

At 6.6 on the Richter Scale, this was a massive, but not apocalyptic, event. The 1906 quake measured 7.8, the later Loma Prieta was 7.1 and the Northridge was 6.7 – however the ‘shaking index’ of this quake on the Mercalli Intensity Scale (a measure of the actual movement perceived and damage caused by an earthquake) was “XI Extreme”, only one step from the end of the scale, which is labelled “Total Destruction of Everything”. In comparison, both the Loma Prieta (1989) and Northridge (1994) quakes measured “IX Violent” on the Mercalli Scale, two steps below this quake (the Sylmar Earthquake, 1971). The historic San Francisco earthquake of 1906 measured the same (XI Extreme) in four locations just to the north of San Francisco, but the city itself only felt “X Extreme” shaking intensity on the Mercalli Scale. Remember that most of the damage in the SF quake was from the subsequent fires, not the earthquake itself.

Bottom line: the 1971 Sylmar quake in Los Angeles produced the most destructive power of any earthquake in California since the Fort Tejon quake of 1857 (Richter 7.9). The quake technically lasted 12 seconds – it felt like a lot more than that! – and caused $553 million in damage [in 1971 dollars; that would be about $3.28 billion in 2016 dollars]. The recently completed freeway interchange in the north San Fernando Valley was destroyed, and took 2 years to rebuild – only to collapse again in the 1994 Northridge quake… it seems the structural engineers keep learning after the fact…

Unless you have lived through a massive earthquake such as this, you simply cannot grasp the intensity of such an event. Words, even pictures, just fail. The noise is beyond incredible. The takeoff roll of a 747 aircraft is a whisper in comparison; the sight of the houses across the street rising and falling as if on a wave 20 feet high is beyond comprehension. I will never take the word ‘stability’ for granted again. Ever. We take for granted that the earth under our feet is a constant. It doesn’t move. Or it’s not supposed to… the disorientation is extreme.

As soon as I had determined that my family was safe, and that our home, although damaged, was not in immediate danger (and I had turned off the gas and electricity), I got in my truck and headed off to where I had heard on the radio the epicenter of damage was: the northern San Fernando Valley. The quake occurred at 6 AM; I arrived at the destroyed freeway interchange (for those who know LA, the I-5 / Hiway 14 interchange) at about 10 AM, when the images below were taken. No police, fire or other emergency personnel had arrived yet. It was surreal: a few other curious humans like myself wandering around, and absolute quiet. One of the busiest freeways in Los Angeles was empty. The only sound was an occasional crow. The real major calamity (the collapse of the Olive View and Veterans Hospitals, which ended up with a death toll of 62) was several miles to the south of my location. There were cracks in the ground several feet wide and many feet deep. The Sylmar Converter Station (a major component of the LA basin electrical power grid) was totaled, with large transformers lying helter-skelter on their sides. I was reminded of H. G. Wells’ “The War of the Worlds”, with a strange and previously unknown landscape in front of me.

Although I had already been shooting images for almost 10 years (starting with a Kodak Brownie), the pictures below were probably my first real foray into photojournalism and streetphotography. Taken with a plastic (both lens and body) Kodak Instamatic 100 that exposed a proprietary 26mm-square film (Kodacolor-X, ASA 64), the resulting prints are not of the best quality. While I may still discover the negatives someday on a long-forgotten shelf, all I have at present are the prints from 1971. I’ve scanned, restored and processed them to recover as much of the original integrity as possible, but there is no retouching.

I’ve also included scans of some Los Angeles newspapers from the first days after the quake (I found those folded up and stored away – reading some of the ads from that period was almost as interesting as the major stories…).

Destroyed overpass and roadway on the I-5.

Severe damage to McMahan’s Furniture in north San Fernando.

Roadway torn asunder by the force of the earthquake.

Fallen transformer at the Sylmar Power station.

Damaged office building, north San Fernando.

Lateral deformation of the ground near the Sylmar Power station, close to the epicenter of the earthquake.

Fallen roadway on the I-5.

Observers looking at the earthquake damage to the I-5 a few hours after the initial event.

Overpass on the I-5 / Hiway 14 interchange showing separation and subsiding of the roadway.

Massive split in the roadway of the I-5, looking north.

Destroyed overpass on the I-5 freeway.

Brick walls destroyed in suburban San Fernando Valley.

The I-5 / Hiway 14 interchange was still in the final stages of construction when the earthquake hit. This image is deceptive as most of the damage was at the top of the frame. However the roadway that leads in from the lower right is split just after the overpass (detail in another image).

Fallen overpass on the I-5 freeway. A few seconds after this image was taken a major aftershock occurred. I ran really, really fast from under the remaining bridge elements…

I-5 / Hiway 14 freeway interchange damage

LA Times, Feb 10 1971, page 1

The Valley News, Feb 9 1971, page 1

The Valley News, Feb 9 1971, page 2

Kodak Instamatic 100, released in 1963 with a price of $16 at the time.

Where Did My Images Go? [the challenge of long-term preservation of digital images]

August 13, 2016 · by parasam

Littered Memories – Photos in the Gutter (© 2016 Paul Watson, used with permission)

Image Preservation – The Early Days

After viewing the above image from fellow streetphotographer Paul Watson, I wanted to update an issue I’ve addressed previously: the major challenge that digital storage presents in terms of long-term archival endurance and accessibility. Back in my analog days, when still photography was a smelly endeavor in the darkroom for both developing and printing, I slowly learned about careful washing and fixing of negatives, how to make ‘museum’ archival prints (B&W), and the intricacies of dye-transfer color printing (at the time the only color print technology that offered substantial lifetimes). Prints still needed carefully restricted environments for both display and storage, but if all was done properly, a lifetime of 100 years could be expected for monochrome prints and even longer for carefully preserved negatives. Color negatives and prints were much more fragile, particularly color positive film. The emulsions were unstable, and many of the early Ektachrome slides (and motion picture films) faded rapidly after only a decade or so. A well-preserved dye-transfer print could be expected to last for almost 50 years if stored in the dark.

I served for a number of years as a consultant to the Los Angeles County Museum of Art, advising them on photographic archival practices, particularly relating to motion picture films. For many years the Bing Theatre presented a fantastic set of screenings – a rare tapestry of great movies from the past – that helped many current directors and others in the industry become better at their craft. In particular, Ron Haver (the film historian, preservationist and LACMA director with whom I worked during that time) was instrumental in supervising the restoration, screening and preservation of many films that would now be in the dustbin of history without his efforts. I learned much from him, and those principles last to this day, even in a digital world he never experienced.

One project in particular was interesting: bringing the projection room (and associated film storage facilities) up to Los Angeles County Fire Code so we could store and screen early nitrate films from the 1920’s. [For those who don’t know, nitrate film is highly flammable, and once on fire will quite happily burn under water until all the film is consumed. It makes its own oxygen while burning…] Fire departments were not great fans of this stuff… Due to both the considerable (and expensive) challenges of projecting this type of film, as well as the continual degradation of the film stock, almost all remaining nitrate film has since been digitally scanned for preservation and safety. I also designed the telecine transfer bay for the only approved nitrate scanning facility in Los Angeles at that time.

What this all underscored was the considerable effort, expense and planning required for long-term image preservation. Now, while we may think that once digitized, all our image preservation problems are over – the exact opposite is true! We have ample evidence (glass plate negatives from the 1880’s, B&W sheet film negatives from the early 1900’s) that properly stored monochrome film can easily last 100 years or more, and is as readable today as the day the film was exposed, with no extra knowledge or specialized machinery. B&W movie film is just as stable, as long as it is printed onto a safety film base. Due to the inherent fading of so many early color emulsions, the only sure method of preservation (in the analog era) was to ‘color separate’ the negative and print the three layers (cyan, magenta and yellow) onto three individual B&W films – the so-called “Technicolor 3-strip process”.

Digital Image Preservation

The problem with digital image preservation is not the inherent technology of digital conversion – done well, that can yield a perfect reproduction of the original after a theoretically infinite time period. The challenge is how we store, read and write the “0s and 1s” that make up the digital image. Our computer storage and processing capability has moved so quickly over the last 40 years that almost all digital storage from more than 25 years ago is somewhere between difficult and impossible to recover today. This problem is growing worse, not better, with every succeeding year…

IBM 305 RAMAC Disk System 1956: IBM ships the first hard drive in the RAMAC 305 system. The drive holds 5MB of data at $10,000 a megabyte.

This is a hard drive. It holds less than 0.01% of the data of the smallest iPhone today…

One of the earliest hard drives available for microcomputers, c.1980. The cost then was $350/MB; today’s cost (based on a 1TB hard drive) is $0.00004/MB, a factor of 8,750,000 times cheaper.

Paper tape digital storage as used by DEC PDP-11 minicomputers in 1975.

Paper punch card, a standard for data entry in the 1970s.

Floppy disks: (from left) 8in; 5-1/4″; 3-1/2″. The standard data storage format for microcomputers in the 1980s.

As can be seen from the above examples, digital storage has changed remarkably over the last few decades. Even though today we look at multi-terabyte hard drives and SSDs (Solid State Drives) as ‘cutting edge’, will we chuckle 20 years from now when we look back at something as archaic as spinning disks or NAND flash memory? With quantum memory, holographic storage and other technologies already showing promise in the labs, it’s highly likely that even the 60TB SSDs that Samsung just announced will take their place alongside 8-inch floppy disks in a decade or so…

And the physical storage medium is actually the least of the problem. Yes, if you put your ‘digital negatives’ on a floppy disk 15 years ago and now want to read them, you have a challenge at hand… but with patience and some time on eBay you could probably assemble the appropriate hardware to retrieve the data onto a modern computer. The bigger issue is that of data format: both of the drives themselves and of the actual image files. The file systems – the method used to catalog and find the individual images stored on whatever kind of physical storage device, whether ancient hard drive or floppy disk – have changed rapidly over the years. Most early file systems are no longer supported by current operating systems (OS), so hooking up an old drive to a modern computer won’t work.

Even if one could find a translator from an older file system to a current one (there is very limited capability in this regard; many older file systems can literally only be read by a computer as old as the drive), that doesn’t solve the next issue: the image format itself. The issue of ‘backwards compatibility’ is one of the great Achilles’ heels of the entire IT industry. The huge push by vendors to keep their users relentlessly updating to the latest software, firmware and hardware is largely to avoid having to support older versions of hardware and software. This is not a totally self-serving issue (although there are significant costs and time involved) – frequently certain changes in technology simply can’t support an older paradigm any longer. The earliest versions of Photoshop files, PICT, etc. are not easily opened with current applications. Anyone remember Corel Draw? Even ‘common interchange’ formats such as TIFF and JPEG have evolved, and not every version is supported by every current image processing application.

The more proprietary and specific the image format, the more fragile it is in terms of archival longevity. For instance, it may seem that the best archival format would be the camera raw format – essentially the full original capture directly from the camera. File types such as RAW, NEF, CR2 and so on are typical. However, each of these is proprietary and typically has about a 5-year life span in terms of active application support by the vendor. As camera models keep changing – more or less on a yearly cycle – the raw formats change as well. Third-party vendors, such as Adobe with Photoshop, are under no obligation to support earlier raw formats forever… and, as previously discussed, the challenge of maintaining backwards compatibility grows more complex with each passing year. There will always come a time when such formats are no longer supported by currently active image retrieval, viewing or processing software.

Challenges of Long-Term Digital Image Preservation

Therefore two major challenges must be resolved in order to achieve long term storage and future accessibility of digital images. The first is the physical storage medium itself, whether that is tape (such as LTO-6), hard disk, SSD, optical, etc. The second is the actual image format. Both must be usable and able to transfer images back to the operating system, device and software that is current at the time of retrieval in order for the entire exercise of archival digital storage to be successful. Unfortunately, this is highly problematic at this time. As the pace of technological advance is exponentially increasing, the continual challenge of obsolescence becomes greater every year.

Currently there is no perfect answer to this dilemma – the only solution is proactivity on the part of the user. One must accommodate the continuing obsolescence of physical storage media, file systems, operating systems and file formats by moving the image files, on a regular and continual basis, to current versions of all of the above. For uncompressed images, other than the cost of the move/update there is no impact on the digital image – that is one of the plus sides of digital imagery. However, many images (almost all, unless you are a professional photographer or filmmaker) are stored in a lossy compressed format (JPG, MPG, MOV, WMV, etc. – note that TIFF with LZW/ZIP compression is lossless and unaffected). Such images and movies will experience a small degradation in quality each time they are re-encoded into a new format. The amount and type of artifacts introduced are highly variable, depending on the level of compression and many other factors. The bottom line is that after a number of re-encoding cycles of a compressed file (say 10) it is quite likely that a visible difference from the original can be seen.

Therefore, particularly for compressed files, a balance must be struck between updating often enough to avoid technical obsolescence and performing the fewest format conversions over time in order to avoid image degradation. [It should be noted that potential image degradation will typically only come from changing/updating the image file format, not from moving a bit-perfect copy from one type of storage medium to another.]
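
To make the re-encoding point concrete, here is a small experiment of my own (it assumes the Pillow and NumPy libraries are installed) that repeatedly re-saves a synthetic test image as a lossy JPEG and measures how far each generation drifts from the original. A bit-perfect copy, by contrast, would show zero drift:

```python
import io
import numpy as np
from PIL import Image

# Build a smooth synthetic "photo" as a stand-in for a real archive image.
g = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
original = np.dstack([g, g.T, np.flipud(g)])
img = Image.fromarray(original)

for generation in range(1, 11):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)          # lossy re-encode
    buf.seek(0)
    img = Image.open(buf).convert("RGB")
    drift = np.abs(np.asarray(img).astype(int) - original.astype(int)).mean()
    print(f"generation {generation}: mean per-pixel drift = {drift:.2f}")
```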

This process, while a bit tedious, can be automated with scripts or similar tools, and for the casual photographer or filmmaker it will not be too arduous if undertaken every five years or so. It’s another matter entirely for professionals with large libraries, or for museums, archives and anyone else with thousands or millions of image files. A lot of effort, research and thought has been applied to this problem by these professionals, as it represents a large cost in both time and money – and no solution other than what’s described above has been discovered to date. Some useful practices have been developed, both to preserve the integrity of the original images and to reduce the time and complexity of the upgrade process.
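
As one example of the kind of script mentioned above, the following sketch (the paths, file names and function names are my own illustrations, not from this post) builds a SHA-256 manifest of an archive folder and later verifies that a refreshed copy on new media is still bit-perfect:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Digest a file in 1 MB chunks so large images don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(archive_dir, manifest="manifest.json"):
    """Record a digest for every file under the archive folder."""
    root = Path(archive_dir)
    digests = {str(p.relative_to(root)): sha256_of(p)
               for p in root.rglob("*") if p.is_file()}
    Path(manifest).write_text(json.dumps(digests, indent=2))

def verify_copy(copy_dir, manifest="manifest.json"):
    """Return the list of files that are missing or altered in the copy."""
    root = Path(copy_dir)
    digests = json.loads(Path(manifest).read_text())
    return [name for name, digest in digests.items()
            if not (root / name).is_file() or sha256_of(root / name) != digest]

# build_manifest("/archive/photos_2016")
# problems = verify_copy("/new_drive/photos_2016")  # empty list == bit-perfect
```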

Methods for Successful Digital Image Archiving

A few of those practices are shared below to serve as a guide for those who are interested. Further searching will yield a large number of sites and resources that address this challenge in detail.

  • The most important aspect of ensuring a long-term archival process that will result in the ability to retrieve your images in the future is planning. Know what you want, and how much effort you are willing to put in to achieve that.
  • While this may be a significant undertaking for professionals with very large libraries, even a few simple steps will benefit the casual user and can protect family albums for decades.
  • In addition to the steps discussed above (updating storage media, OS and file systems, and image formats) another very important aspect is “Where do I store the backup media?” Making just one copy and having it on the hard drive of your computer is not sufficient. (Think about fire, theft, complete breakdown of the computer, etc.)
    • The current ‘best practices’ recommendation is the “3-2-1” approach: keep 3 copies of the archival backup, on at least 2 different types of storage media, with at least 1 copy off-site. A simple but practical example for a home user: one copy of your image library on your computer; a 2nd copy on a backup drive used only for archival image storage; and a 3rd copy either on another hard drive stored in a vault environment (fireproof data storage or equivalent) or in cloud storage.
    • A note on cloud storage: while this can be convenient, be sure to check the fine print on liability, access, etc. of the cloud provider. This solution is typically feasible for up to a few terabytes; beyond that the cost can become significant, particularly when you consider storage for 10-20 years. Also, will the cloud provider be around in 20 years? What insurance do they provide in terms of buyout, bankruptcy, etc.? While storage media and file systems are not an issue with cloud storage (it is incumbent on the cloud provider to keep those updated), you are still personally responsible for the image format issue: the cloud vendor is only storing a set of binary files, and they cannot guarantee that these files will be readable in 20 years.
    • Unless you have a fairly small image library, current optical media (DVD, etc.) is impractical: even double-sided DVDs only hold about 8GB of formatted data. In addition, since you would need to burn these DVDs in your computer, the longevity of ‘burned’ DVDs is not great (compared to pressed DVDs such as those you purchase when you buy a movie). With DVD usage falling off noticeably this is most likely not a good long-term archival format.
    • The best current solution for off-premise archival storage is to physically store external hard drives (or SSDs) with a well known data vaulting vendor (Iron Mountain is one example). The cost is low, and since you only need access every 5 years or so the extra cost for retrieval and re-storage (after updating the storage media) is acceptable even for the casual user.
  • Another vitally important aspect of image preservation is metadata. This is the information about the images. If you don’t know what you have then future retrieval can be difficult and frustrating. In addition to the very basic metadata (file name, simple description, and a master catalog of all your images) it is highly desirable to put in place a metadata schema that can store keywords and a multitude of other information about the images. This can be invaluable to yourself or others who may want to access these images decades in the future. A full discussion of image metadata is beyond the scope of this post, but there is a wealth of information available. One notable challenge: while the most basic (and therefore future-proof) still image formats in use today [JPG and TIFF] can carry a limited amount of embedded metadata (EXIF and IPTC fields), a richer catalog of keywords and descriptions is best stored externally and cross-referenced to the files somehow. Photoshop files on the other hand store extensive metadata and the image within the same file – but as discussed above this is not the best format for archival storage. There are techniques to cross-reference information to images: from purpose-built archival image software to a simple spreadsheet that uses the filename of the image as a key to the metadata (a minimal sketch of such a filename-keyed catalog appears just after this list).
  • An important reminder: the whole purpose of an archival exercise is to be able to recover the images at a future date. So test this. Don’t just assume. After putting it all in place, pull up some images from your local offline storage every 3-6 months and see that everything works. Pull one of your archival drives from off-site storage once a year and test it to be sure you can still read everything. Set up reminders in your calendar – it’s so easy to forget until you need a set of images that was accidentally deleted from your computer – and then find out your backup did not work as expected.
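Here is that spreadsheet idea in its most minimal form – a plain CSV catalog keyed by filename. This is only a sketch: the field names are illustrative, and plain CSV is chosen deliberately because plain text is itself one of the most durable formats around.

```python
# Sketch: a filename-keyed image catalog stored as plain CSV.
# Field names are illustrative; plain text is itself a very durable format.
import csv
from pathlib import Path

CATALOG = Path("image_catalog.csv")
FIELDS = ["filename", "date_taken", "location", "keywords", "description"]

def add_entry(entry: dict) -> None:
    """Append one image's metadata row, writing the header on first use."""
    new_file = not CATALOG.exists() or CATALOG.stat().st_size == 0
    with CATALOG.open("a", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

def find(keyword: str) -> list[dict]:
    """Return every catalog entry whose keywords contain the given term."""
    with CATALOG.open(newline="") as handle:
        return [row for row in csv.DictReader(handle)
                if keyword.lower() in row["keywords"].lower()]

add_entry({"filename": "IMG_0412.jpg", "date_taken": "2014-06-15",
           "location": "Cape Town", "keywords": "family;beach;sunset",
           "description": "Evening walk on the beach"})
print(find("beach"))
```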

A final note:  if you look at entities that store valuable images as their sole activity (Library of Congress, The National Archives, etc.) you will find [for still images] that the two most popular image formats are low-compression JPG and uncompressed TIFF. It’s a good place to start…

 

iPhone5 – Part 1: Features, Performance… and 4G

September 16, 2012 · by parasam

[Note: This is the first of either 2 or 3 posts on the new iPhone5 – depending on how quickly accurate information becomes available on this device. This post covers what Apple has announced, along with info gleaned from other technical sources to date. Further details will have to wait until actual phones are shipped, then torn down by specialists and real benchmarks are run against the new hardware and iOS6]

Introduction

Unless you’ve been living under a very large rock, you couldn’t help but hear that Apple has introduced the next version of its iPhone. This article will look at what this device actually purports to offer the user, along with some of my comments and observations. All of these comments are based on current press releases and ‘paper’ information:  the actual hardware won’t ship until Sept. 21, and due to high demand, it may take me a bit longer to get one in hand for personal testing. I’ll go into details below, but I don’t intend to upgrade from my 4S at this time. I do have a good relationship with my local Apple business retailer, and my rep will be setting aside one of the new phones for me to come in and play with for a few hours as soon as she has one that is not immediately promised. Currently we are looking at about the first week of October – so look for another post then. As of the date of writing (15 Sep) Apple has said their initial online allocation has sold out, so I expect demand to be high for the first few weeks.

Front and Back of iPhone5

The basic specifications and comparisons to previous models are shown below:

Physical Comparison

|  | Apple iPhone 4 | Apple iPhone 4S | Apple iPhone 5 | Samsung Galaxy S 3 |
| --- | --- | --- | --- | --- |
| Height | 115.2 mm (4.5″) | 115.2 mm (4.5″) | 123.8 mm (4.87″) | 136.6 mm (5.38″) |
| Width | 58.6 mm (2.31″) | 58.6 mm (2.31″) | 58.6 mm (2.31″) | 70.6 mm (2.78″) |
| Depth | 9.3 mm (0.37″) | 9.3 mm (0.37″) | 7.6 mm (0.30″) | 8.6 mm (0.34″) |
| Weight | 137 g (4.8 oz) | 140 g (4.9 oz) | 112 g (3.95 oz) | 133 g (4.7 oz) |
| CPU | Apple A4 @ ~800MHz Cortex A8 | Apple A5 @ ~800MHz Dual Core Cortex A9 | Apple A6 (Dual Core Cortex A15?) | 1.5 GHz MSM8960 Dual Core Krait |
| GPU | PowerVR SGX 535 | PowerVR SGX 543MP2 | ? | Adreno 225 |
| RAM | 512MB LPDDR1-400 | 512MB LPDDR2-800 | 1GB LPDDR2 | 2GB LPDDR2 |
| NAND | 16GB or 32GB integrated | 16GB, 32GB or 64GB integrated | 16GB, 32GB or 64GB integrated | 16GB or 32GB NAND with up to 64GB microSDXC |
| Camera | 5MP with LED Flash + Front Facing Camera | 8MP with LED Flash + Front Facing Camera | 8MP with LED Flash + 720p Front Facing Camera | 8 MP with LED flash + 1.9 MP front facing |
| Screen | 3.5″ 640 x 960 LED backlit LCD | 3.5″ 640 x 960 LED backlit LCD | 4″ 1136 x 640 LED backlit LCD | 4.8″ 1280 x 720 HD Super AMOLED |
| Battery | Integrated 5.254 Whr | Integrated 5.291 Whr | Integrated ?? Whr | Removable 7.98 Whr |
| WiFi/BT | 802.11 b/g/n, Bluetooth 2.1 | 802.11 b/g/n, Bluetooth 4.0 | 802.11 a/b/g/n, Bluetooth 4.0 | 802.11 a/b/g/n, Bluetooth 4.0 |
As can be seen from the above chart, the iPhone5 is an improvement in several areas from the 4S, but in pure technological features still is behind some of the latest Android devices. We’ll now go through some of the details, and what they actually may mean for a user.

Case

The biggest external change is the shape and size of the iPhone5: due to the larger screen (true 16:9 aspect ratio for the first time), the phone is longer while maintaining the same width. It is also slightly thinner. The construction of the case is a bit different as well: the iPhone4S used glass panels for the full front and rear; the iPhone5 replaces the rear panel with a solid aluminum panel except for the very top and bottom of the rear shell which remain glass. This is required for the Wi-Fi, Bluetooth and GPS antennas to receive radio signals (metal blocks reception).

There are two major changes in the case design, both of which will have significant impacts to usage and accessories: the headphone/microphone jack has been moved to the bottom of the case, and the docking connector has been completely redesigned: this is now a new proprietary “Lightning” connector that is much smaller. Both of these changes have instantly rendered obsolete all 3rd-party devices that use the docking connector to plug the iPhone into external accessories such as charging bases, car charging cords, clock-radios and HiFi units, etc. While Apple is offering an adaptor cable in several forms, there are serious drawbacks for many uses.

The basic Lightning-to-USB adaptor cable ($19 if purchased separately) is provided as part of the iPhone5 package [along with the small charger]. If you have other desktop power supplies or chargers, or are fortunate enough to have a car charger that accepts a USB cable (as opposed to a built-in docking connector, as most have), you can spend the extra cash and still use those devices with the new iPhone5.

Lightning to USB adaptor cable (1m)

For connecting the new iPhone5 to current 30-pin docking connector devices, Apple offers two solutions: a short cable (0.2m – 8″) [$39] or a stub connector [$29]:

Lightning to 30-pin cable (0.2m)

Lightning to 30-pin stub connector

The Lightning-to-USB adaptor is growing scarce already:  in the last 48 hours the shipping dates have slipped from 1-2 days to 3 weeks or more. Neither of the Lightning-to-30-pin adaptors has a ship date yet; a rather nebulous statement of “October” is all that is stated on the Apple store. So early adopters of the iPhone5 should expect a substantial delay before they can make use of any current aftermarket devices that use the docking connector. Another issue is the cost of the adaptors. As part of Apple’s incredible branding of the closed universe of Mac/iDevice products, users have been conditioned to paying a hefty premium for basic utility devices compared to devices that perform the same function for other brands such as Android phones. For example, the same phone-to-USB cable (1m) that Apple sells for $19 is available for the latest-model Samsung Galaxy S3 for between $6 and $9 at a number of online retailers. It’s very easy to end up spending $100 or more on iPhone accessories just for a case and a few adaptors.

Now let’s get to the real issue of this new Lightning adaptor – even assuming that one can eventually purchase the necessary adaptors shown above. Basically there are two classes of devices that use the docking connector: those that connect via a flexible cable (chargers and similar devices), and those that mechanically support the iPhone with the docking connector, such as clock/radios, HiFi units, audio and other adaptors, phone holders for cars, just to name a few. The old style 30-pin connector was wide enough, along with the mechanical design, to actually support the iPhone with a minimum of external ‘cradle’ to not put undue stress on the connector. The Apple desktop docking adaptor is such an example:

30-pin docking adaptor

The new Lightning connector is so small that it offers no mechanical stability. Any device that will hold the iPhone will need a new design, not only to add sufficient mechanical support to avoid bending or disconnecting the new docking adaptor, but to accommodate the thinner case as well. Here is a small sample of devices that will be affected by this design change:

As can be seen, this connector change has a profound and wide-reaching effect. Users who have a substantial investment in aftermarket devices will need to carefully consider any decision to upgrade to the iPhone5. Virtually all of the above devices will simply not work with the new phone, even if the ‘stub adaptor’ were employed. While a large number of 3rd-party providers of iPhone accessories will be happy (they can resell the same product again each time a design change occurs), the end user may be less enchanted. Even simple things such as protective cases cannot be ‘recycled’ for use on the new phone. I’ll give one personal example: I have an external camera lens adaptor set, the iPro by Schneider. This set of lenses will not work at all with the iPhone5. Not only is the case different (which is critical for mounting the lenses to the phone in precise alignment with the internal iPhone camera), but the current evidence is that Apple has changed the optics slightly on the iPhone5, such that an optical redesign of accessory lenses would be required. A very careful and methodical analysis of the side-effects of potentially upgrading your iPhone should be performed if you own any significant devices that use the docking connector.

The other design change is the movement of the headphone jack to the bottom left of the case. While this does not in and of itself present the same challenges that the docking connector poses, it does have ramifications that may not be immediately apparent. For a user who is just carrying the iPhone as a music playback device (iPod-ish use), the headphone cable connected to the bottom is a superior design choice – but it once again poses a challenge for any device where the iPhone is physically ‘docked’. The headphone jack is no longer accessible! For instance, with the original iPhone dock, I could be on the phone (using a headphone/microphone cable assembly) and walk to my desk and drop the iPhone in the docking station/charger and keep talking while my depleted battery was being refueled… no longer… the cable from the bottom won’t allow the phone to be inserted into the docking station…

The bottom line is that Apple has drawn an absolute line in the sand with the iPhone5:  the user is forced to start completely over with all accessories, from the trivial to the expensive. While it is likely that some of the aftermarket devices can be, and will be, eventually adapted to the new case design, there will be a cost in terms of both money and time delay. Depending on the complexity (plastic cases for the iPhone5 will show up in a few months, while high-end home HiFi units that accept an iPhone may take 6 months to a year to arrive) there will be a significant delay in being able to use the iPhone5 in as ubiquitous a manner as all previous iPhones (which shared the same docking and case design).

The last issue to raise in regards to the change in case design is simply the size of the new phone. It’s longer. We’ve already discussed that this will require new cases, shells, etc. – but this will also affect  many ‘fashion-oriented’ aftermarket handbags, belt-cases, messenger bags, etc. With the iPhone being the darling of the artistic, entertainment and fashion groups, many stylish (and expensive) accoutrements have been created that specifically fit the iPhone 3/4 case size. Those too will have to adapt.

Screen

The driving factor behind the new case size is the increase in screen resolution from 960×640 (1.50:1 aspect ratio) to 1136×640 (1.77:1 aspect ratio). The new size matches the current HD display aspect ratio of 16:9 (1.77) so movies viewed on the iPhone will correctly fit the screen. With the iPhone4S, which had the technical capability to both shoot and output 1920×1080 (FHD or Full HD), HD movies were either cut off on the left and right side, or letterboxed (black bars at top and bottom of the picture) when displayed on the phone’s own 3:2 screen. Many Android devices have had full 16:9 display capabilities for a year or more now. Very few technical details have been released so far by Apple on the actual screen; here is what I have been able to glean to date:

  • The touch-screen interface has changed from “on-cell” to “in-cell” technology. Without getting overly geeky, this means that the actual touch-sensitive surface is now built-in to the LCD surface itself, instead of being a separate layer glued on top of the LCD display. This has three advantages:
    • Thinner display
    • Simplifies manufacture, as one less assembly step (aligning and gluing the touch layer)
    • Slightly brighter and more saturated visible display, due to not having a separate layer on top of the actual LCD layer.
  • The color gamut for virtually all cellphone and computer displays is currently the sRGB standard (which itself is a low-gamut color space – in the future we will see much improved color spaces, but for now that is the best thing that can economically be manufactured, particularly for mobile devices). None of the current devices fully reproduce the full sRGB gamut, even as limited as it is. But this improvement gets the iPhone that much closer. One of the tests I intend to run when I get my test drive of the iPhone5 is a gamut check with a precision optical color gamut tester (a simple way of turning such a measurement into a ‘percentage of sRGB’ figure is sketched just after this list).
  • No firm data is available yet, but anecdotal reports, coupled with known ‘side-effects’ of “in-cell” technology, promise a slightly more efficient display, in terms of battery life. Since the LCD display is one of the largest consumers of battery power, this is significant.
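For those curious how a ‘percentage of the sRGB gamut’ figure is arrived at, here is a minimal sketch: compare the area of the triangle formed by a display’s measured red/green/blue primaries (CIE 1931 xy chromaticity coordinates) with the area of the sRGB triangle. The ‘measured’ primaries below are invented numbers purely for illustration – a real measurement would come from a colorimeter – and the simple area ratio ignores exactly how the two triangles overlap.

```python
# Sketch: estimate sRGB gamut coverage from measured display primaries.
# Primaries are CIE 1931 xy chromaticity coordinates; the "measured" values
# below are made-up numbers purely for illustration.

def triangle_area(p1, p2, p3):
    """Area of the triangle spanned by three (x, y) chromaticity points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Standard sRGB red, green and blue primaries
SRGB = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]

# Hypothetical measured primaries of a phone display
measured = [(0.620, 0.340), (0.310, 0.570), (0.155, 0.070)]

coverage = triangle_area(*measured) / triangle_area(*SRGB)
print(f"Gamut area relative to sRGB: {coverage:.1%}")
```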

Camera(s)

The rear-facing camera (high resolution one that is used for still and video photography) is essentially unchanged. However… there are potentially three small but significant updates that will likely affect serious iPhonographers: 

  1. Though no firm details have been released by Apple yet, when images taken at the press conference are compared to images taken with an iPhone4S of the same subject from the same position, the iPhone5 images appear to have a slightly larger field of view. This, if accurate, would indicate that the focal length of the lens has changed slightly. The iPhone4S has an actual focal length of 4.28mm (equivalent to a 32mm lens on a 35mm camera); this may indicate a reduction of focal length to 3.75mm (28mm equivalent focal length – a short sketch of how equivalent focal length and field of view are computed follows this list). There are several strong reasons that support this theory:
    1. The iPhone5 is thinner, and everything else has to accommodate this. A shorter focal length lens allows the camera lens/sensor assembly to be thinner.
    2. Many users have expressed a desire for a slightly wider angle of view, in fact the most popular aftermarket adaptor lenses for the iPhone are wide angle format.
    3. The slightly wider field of view simplifies the new panoramic ‘stitch’ capability of the camera hardware/software.
  2. Apple claims the camera is “25% smaller”. We have no idea what that really means, but IF this in fact results in a smaller sensor surface then the individual pixels will be smaller. The same number of pixels are used (it is still an 8MP sensor), but smaller pixels mean less light-gathering capability, potentially making low light photography more difficult.
    1. Apple does claim new hardware/software to make the camera perform better in low light. What this means is not yet clear.
    2. The math and geometry of optics, sensor size and lens mechanics essentially show us that small sensors are more subject to camera movement, shaking and vibration. (The same angular movement of a full sized 35mm digital camera will cause far less blurring in the resultant image than an iPhone4S. If the sensor is even smaller in the iPhone5, this effect will be more pronounced).
  3. Apple claims a redesigned lens cover for the iPhone5. (In all iPhones, there is a clear plastic window that protects the actual lens. This is part of the exterior case). With the iPhone5, this window is now “sapphire glass” – whatever that actually is… The important issue is that any change is a change – even if this window material is harder and ‘more clear’, it will be different from the iPhone4 or iPhone4S – different materials have different transmissive characteristics. Where this may cause an effect is with external adaptor lenses designed for iPhone4/4S devices.
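As promised above, here is a minimal sketch of the geometry behind those focal length figures: the 35mm-equivalent focal length is the actual focal length scaled by the crop factor (the ratio of sensor diagonals), and the angle of view follows from simple trigonometry. The sensor dimensions used (roughly 4.54 × 3.42 mm, a 1/3.2″-type sensor) are approximate numbers from public teardowns rather than Apple specifications, so the results land within a millimetre or so of the published 32mm/28mm equivalents, and the field-of-view figure is similarly ballpark – it depends strongly on the exact sensor size assumed.

```python
# Sketch: 35mm-equivalent focal length and field of view from sensor geometry.
# Sensor dimensions are approximate figures from public teardowns (a 1/3.2"-type
# sensor, roughly 4.54 x 3.42 mm), not official Apple specifications.
import math

FULL_FRAME = (36.0, 24.0)          # 35mm film frame, mm
SENSOR = (4.54, 3.42)              # approximate iPhone4S sensor, mm

def diagonal(size):
    width, height = size
    return math.hypot(width, height)

def equivalent_focal_length(actual_mm):
    """Scale by the crop factor (ratio of sensor diagonals)."""
    return actual_mm * diagonal(FULL_FRAME) / diagonal(SENSOR)

def field_of_view(actual_mm, dimension_mm):
    """Angle of view (degrees) across one sensor dimension."""
    return math.degrees(2 * math.atan(dimension_mm / (2 * actual_mm)))

for focal in (4.28, 3.75):          # current lens vs rumoured wider lens
    print(f"{focal} mm actual -> "
          f"{equivalent_focal_length(focal):.1f} mm equivalent, "
          f"{field_of_view(focal, diagonal(SENSOR)):.0f} deg diagonal FOV")
```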

The front-facing camera (FaceTime, self-portrait) has in the past been a very low resolution device of VGA quality (640×480). This produced very fuzzy images, the sensor was not very sensitive in low light, and the images did not match the display aspect ratio. The iPhone5 has increased the resolution of the front-facing sensor to 1280×720 (720P) for video, 1280×960 for still (1.2MP). While no other specs on this camera have been released, one can assume some degree of other improvements in the combined camera/lens assembly, such that overall image quality will improve.

The faster CPU and other system hardware, combined with new improvements in iOS 6.0, bring several new enhancements to iPhonography. Details are skimpy at this time, but panoramic photos, faster image-taking in general, improved speed for image processing within the phone, and better noise reduction for low-light photography are some of the new features mentioned. Experience, testing and a full tear-down of an iPhone5 are the only way we will know for sure. More to come in future posts…

CPU/SystemChips/Memory

Inside the iPhone5

As best as can be determined at this early stage, there are a number of changes inside the iPhone5. Some (very little actually!) of the information below is from Apple, many of the other observations are based on the same detective work that was used for earlier reporting on the iPhone4S:  careful reading of industry trends, tracking of orders for components of typical iPhone parts manufacturers, comments and interviews with industry experts that track Apples, Androids and other such odd things, and to a certain extent just experience. Even though Apple is a phenomenally secretive company, even they can’t make something out of nothing. There are only so many chips to choose from, and when one factors in things like power consumption, desired performance, physical size, compatibility with other parts of the phone and so on, there really aren’t that many choices. So even if some of the assumptions at this early stage are slightly in error, the overall capabilities and functionality will be the same.

Ok, yes, Apple has said there is a new CPU in the iPhone5, and it’s named the “A6”. But that doesn’t actually tell one what it is, how it’s made, or what it does. About all that Apple has said directly so far is that it’s “up to twice as fast as the A5 chip [used in iPhone4S]”, and “the A6 chip offers graphics performance that’s up to twice as fast as the A5.” That’s not a lot of detailed information… Once companies such as Anandtech and Chipworks get a few actual iPhone5 units and tear them apart we will know more. These firms are exhaustive in their analysis (and no, the phone does not work again once they take it to bits!) – they even ‘decap’ the chips and use x-ray techniques to analyze the actual chip substrate to look for vendor codes and other clues as to the makeup of each part. I will report on that once this data becomes available.

At this time, some think that the A6 chip is using 28/32nm technology (absolutely cutting edge for mobile chipsets) and packing in two ARM Cortex A15 cores to create the CPU. Others think that this may in fact be an entirely Apple ‘home-grown’ ARM dual-core chip. The GPU (Graphics Processing Unit) is likely an assembly using four of Imagination’s PowerVR SGX543 cores, doubling the GPU cores that are in the iPhone4S. In addition to the actual advanced hardware, the final performance is almost certainly a careful amalgamation of peripheral chips, tweaking and tuning of both firmware and kernel software, etc. The design criteria and implementation of devices such as the iPhone5 are just about as close to the edge of what’s currently possible as current science and human cleverness can get. This is one area where, for all of the downsides to a ‘closed ecosystem’ that is the World of Apple, the upside is that when a company has total control over both the hardware and the software of a device, a level of systems tuning is possible that open-source implementations such as Android simply can never match. If one is interested further in this philosophy, please see my further comments about such “complementary design techniques” in my post on iPhone accessory lenses here.

There are two types of memory in all advanced smartphones, including the iPhone5. The first is SDRAM (similar to the RAM in your computer – the very fast working memory that is directly addressed by the CPU chips); the second is NAND (similar to the hard disk in your computer – slower, but with much greater storage capacity). In smartphones, the NAND is also a solid-state device (not a spinning disk) to save weight and power, but it is still considerably slower in access time than the SDRAM. As a point of reference, it would not be practical, either in terms of economics, power or size, to attempt to use SDRAM for all the memory in a smartphone. The chart at the beginning of this article shows the increase in size of the SDRAM over the various iPhone models; to date the mass storage (NAND) has been available in 3 sizes: 16, 32 and 64GB.

Radios:  Wi-Fi/Cellular/GPS/Bluetooth

Although most of us don’t think about a cellphone in this way, once you get all the peripheral bits out of the way, these devices are just highly sophisticated portable radio transceivers. Sort of CB radio handsets on steroids. There are four main categories of radios used in smartphones: Wi-Fi; cellular radios for both voice and data; GPS; and Bluetooth. The design, frequencies used and other parameters are so different for each of these classes that entirely separate radios must be used for each function. In fact, as we will see shortly, even within the cellular radio group it is frequently required to have multiple radios to handle all the variations found in world-wide networks. Each separate radio adds complexity, cost, weight, power consumption and the added issue of antenna design and inter-device interference. It is truly a complicated design task to integrate all the distinct RF components in a device such as the iPhone.

Again, this initial review is lacking in hard facts ‘from the horse’s mouth’ – our particular horse (the Rocking Apple) is mute… but using similar techniques as outlined above for the CPU/GPU chips, here is what my best guess is for the innards of “radio-land” inside an iPhone5:

Wi-Fi

    • At this time there are four Wi-Fi standards in use, all of which are ‘subparts’ of the IEEE 802.11 wireless communications standard: 802.11a, 802.11b, 802.11g and 802.11n
    • There are a lot of subtle details, but in essence each increase in the appending letter is equivalent to a higher data transfer speed. In a perfect world (pay attention to this – Wi-Fi almost never gets even close to what is theoretically possible! Marketing hype alert…) the highest speed strata, 802.11n, is capable of up to 150Mb/s.
    • Again, I am oversimplifying, but older WiFi technology used a single band of radio frequencies, centered around 2.4GHz. The newest form, 802.11n, allows the use of two bands of frequencies, 2.4GHz and 5.0GHz. If the designer implements two WiFi radios, it is possible to use both frequency bands simultaneously, thereby increasing the aggregate data transfer, or to better avoid interference that may be present on one of the bands. As always, adding radios adds cost, complexity, etc.

Cellular

This is the area that causes the most confusion, and ultimately required (in the case of the iPhone) two entirely separate versions of hardware (GSM for AT&T, CDMA for Verizon – in the US. It gets even more complicated overseas). Cellular telephone systems unfortunately were developed by different groups in different countries at different times. Adding to this were social, political, geographical, economic and engineering issues that were anything but uniform. This led to a large number of completely incompatible cellular networks over time. Even in the earliest days of analog cellphones there were multiple, incompatible networks. Once the world switched to digital carrier technology, the diaspora continued… This is such a complicated subject that I have decided to write a separate blog on this – it is really a bit off-topic (in terms of detail) for this post, and may unreasonably distract those who are not interested in such details. I’ll post that in the next week, with a link from here once complete.

For the purposes of this iPhone5 introduction, here is a very simple and brief primer so we can understand the importance – and limitations!! – of what is (incorrectly) called 4G – that bit of marketing hype that has everyone so fascinated even though 93% of humanity has absolutely no idea what it really is. Such is the power of marketing…

Another warning:  telecommunications industries are totally in love with acronyms. Really arcane weird and hard to understand acronyms. If a telecomms engineer can’t wedge in at least eight of them in every sentence, he/she starts twitching and otherwise showing physical symptoms of distress and feelings of incompetence… I’m just going to list them here, in all their other-worldly glory… if you want them deciphered, wait for my blog (promised above) on the cellular system.

To add some semblance of control to the chaotic jungle of wireless networks there are a number of standards bodies that attempt to set up some rules. Without them we would have no interoperability of cellphones from one network to another. The two main groups, in terms of this discussion, are the 3GPP (3rd Generation Partnership Project) and the ITU (International Telecommunication Union). That’s where nomenclature such as 2G, 3G, 4G comes from. And, you guessed it, “G” is generation. (Never mind “1G” – that too will be in the upcoming blog…). For practical purposes, most of us are used to 3G – that was the best data technology for cellular systems until recently. 4G is “better”… sort of… we’ll see why in a moment.

The biggest reason I am delving into this arcane stuff is to (as simply as I can) educate the user as to why you can’t browse the web or perform other data functions while simultaneously talking on the phone IF you are using an iPhone on the Sprint or Verizon networks – but can if you are on AT&T. The reason is that LTE is an extension of GSM (the technology that AT&T uses for voice and data currently), whereas both Sprint and Verizon use a different technology for voice/data (CDMA). Each of these technologies requires a separate radio and a separate antenna. For AT&T customers, the iPhone needs 2 radios/antennas (4G LTE + 3G for voice [and 3G data fallback if no 4G/LTE is available in that location]); if the iPhone were going to support the same functionality for Sprint/Verizon, a 3rd radio and antenna would be required (4G/LTE for high speed data; 3G fallback data; and CDMA voice). Apple decided not to add the weight, complexity and expense to the iPhone5, so customers on those networks face an either/or choice: voice or data, but not at the same time.
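The carrier logic above can be boiled down to a few lines of code. This is purely an illustration of the reasoning in this post – the carrier-to-technology mapping below simply restates the paragraph above and is not an official compatibility matrix.

```python
# Sketch: the simultaneous voice + data logic described above, expressed in code.
# The carrier-to-technology mapping restates the paragraph above and is
# illustrative only, not an official compatibility matrix.
IPHONE5_RADIOS = {
    "AT&T":    {"voice": "GSM/HSPA", "data": ["LTE", "HSPA (fallback)"]},
    "Verizon": {"voice": "CDMA",     "data": ["LTE", "EV-DO (fallback)"]},
    "Sprint":  {"voice": "CDMA",     "data": ["LTE", "EV-DO (fallback)"]},
}

def simultaneous_voice_and_data(carrier: str) -> bool:
    # On the GSM/HSPA path, voice and LTE data run on separate radio chains,
    # so both can be active at once; with CDMA voice, the iPhone5 lacks the
    # third radio chain that would be needed, so it is voice OR data.
    return IPHONE5_RADIOS[carrier]["voice"].startswith("GSM")

for carrier in IPHONE5_RADIOS:
    answer = "yes" if simultaneous_voice_and_data(carrier) else "no"
    print(f"{carrier}: voice call and LTE data at the same time? {answer}")
```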

Apple is making some serious claims on improved battery life when using 4G, saying that the battery will last the same (up to 8 hours) whether on 3G or 4G. That’s impressive; early 4G phones from other vendors have had notoriously low battery life on 4G. One assumption, other than OS tweaks, is the possible use of a new Qualcomm chip, the MDM9615 LTE modem.

The range of cellular voice and data types/bands/variations that are said to be supported by the iPhone5 are:  GSM (AT&T), CDMA (Verizon & Sprint), EDGE, EV-DO, HSPA, HSPA+, DC-HSPA, LTE.

Now, another important few points on 4G:

    • The current technology that everyone is calling 4G… isn’t really. The marketing monsters won the battle however, and even the standards bodies caved. LTE (Long Term Evolution – and this does have a technical meaning in terms of digital symbol reconstitution from a multiplexed data stream, as opposed to the actual advancement of intellect, compassion, health and heart of the human species – something that I hold in serious doubt right now…) is a ‘stepping-stone’ on the way to “True 4G”, and is not necessarily the only way to implement 4G – but the marketing folks just HAD to have a ‘higher number means better’ term, so just like at one point we had “2.5G” (not quite real 3G but better than 2G in a few weird ways), we now have 4G… to be supplemented next year with “LTE Advanced” or “4G Advanced”. Hmmmm. And once the networks improve to “True 4G” or whatever, will the iPhone5 still work? Yes, but it won’t necessarily support all the features of “LTE Advanced” – for instance, LTE Advanced will support “VoLTE” [Voice over LTE] so that only a single radio/antenna would be required for all voice and data – essentially the voice call is muxed into the data layer and just carried as another stream of data. However, and this is a BIG however, that would require essentially full global coverage of “4G/LTE Advanced” – something that is years away due to cost and time to build out networks.
    • Even with the current “baby 4G”, this is a new technology, and most networks in the world only support this in certain limited areas, if at all. It will improve every month as the carriers slowly build out the networks, but it will take time. The actual radio/antenna systems are different from everything currently deployed, so new hardware has to be stuck onto every single cell tower in the world… not a trivial task… Trying to determine where 4G actually works, on which carrier, is effectively impossible at this time. No one tells the whole story, and you can be sure that Pinocchio would look like a snub-nose in comparison to many of the claims put forth by various cellular carriers… In the US, both Verizon and AT&T claim about 65-75% coverage of their respective markets: but these are in high density population areas where the subscriber base makes this economically attractive.
    • The situation is much more spotty overseas, with two challenges: even within the LTE world there are different frequencies used in different areas, and the iPhone5 does not support all of them. If you are planning to use the iPhone5 outside of the US, and want to use LTE, check carefully. And of course the build-out of 4G is nowhere near as complete as in the US.
    • The final issue with 4G is economic, not technical. Since data usage is what gobbles up network capacity (as opposed to voice/text), the plans that the carriers sell to their users are rapidly changing to offer either high limit or unlimited voice/text at fairly reasonable rates, with data now being capped and the prices increasing. While a typical data plan (say 5GB) allows that much data to be transferred, regardless of whether on 3G or 4G, the issue is speed. Since LTE can run as fast as 100Mb/s (again, your individual mileage may vary…) – which is much, much faster than 3G, and in fact often faster than most Wi-Fi networks, it is easy for the user to consume their cap much faster. If you have ever stood on a street corner and s-l-o-w-l-y waited for a single page to load on your iPhone4, you are not really motivated to stand there for an hour cruising the web or watching sport. But… if the pages go snap! snap! snap!, or the US Open plays great in HD without any pauses or that dreaded ‘buffering’ message – then normal human tendency will be to use more. And the carriers are just loving that!!
    • As an example, (just to be theoretical and keep the math simple) if we assume 100Mb/s on LTE, then your monthly 5GB cap would be consumed in about 7 minutes (the arithmetic is sketched just after this list)!! Now this example assumes a constant download at that data rate, which is unrealistic – a typical page load for a mobile device is under 1 MB, and then you stare at it for a bit, then load another one, and so on – so for web browsing you get snappy loads without consuming a ridiculous amount of data – but beware video streaming – which DOES consume constant data. It will take users some time (and sticker shock at bill time if you have auto-renew set on your data plan!) to learn how to manage their data consumption. (Tip: set to lower resolution streaming when on LTE, switch back to high resolution when on WiFi).
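The 7-minute figure in the example above is simple arithmetic: a 5GB cap is 40,000 megabits, and at a sustained 100Mb/s that is 400 seconds. Here is the same calculation as a tiny sketch, using the same assumed numbers:

```python
# Sketch: how long a sustained download takes to exhaust a data cap.
# Uses the same illustrative numbers as the example above (5GB cap, 100Mb/s LTE).
def minutes_to_exhaust_cap(cap_gigabytes: float, rate_megabits_per_s: float) -> float:
    cap_megabits = cap_gigabytes * 8 * 1000        # GB -> megabits (decimal units)
    return cap_megabits / rate_megabits_per_s / 60

print(f"{minutes_to_exhaust_cap(5, 100):.1f} minutes at 100 Mb/s (LTE)")     # ~6.7
print(f"{minutes_to_exhaust_cap(5, 5):.0f} minutes at a 3G-ish 5 Mb/s")      # ~133
```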

GPS

Global Positioning Service, or “Location Services” as Apple likes to call it, requires yet another radio and set of antennas. This is a receive-only technology where simultaneous reception of data from multiple satellites allows the device to be located in 3D space (longitude, latitude and altitude) rather accurately. The actual process used by the underlying hardware, the OS and the apps on the iPhone is quite complex, merging together information from the actual GPS radio, WiFi (if it’s on, which helps a lot with accuracy) and even the internal gyroscope that is built in to each iPhone. This is necessary since consumers just want things to work, no matter the laws of physics (yes my radio should receive satellite signals even if I’m six stories underground in a car park…), interference from cars, electrical wires, etc. etc. The bottom line is we have come to depend on GPS to the point that I see people yelping at their Yelp app when it doesn’t know exactly where the next pizza house is…

Bluetooth

Again, we have become totally dependent on this technology for everyday use of a cellphone. In most states now (and in countries outside the US), there are rather strict laws on ‘hands-free’ cellphone use while driving a car. While legally this can be accomplished with a wired earplug (know your laws: some places ONLY allow wireless [Bluetooth] headsets! – others allow wired headsets but only in one ear, and it must be of the ‘earbud’ type, not an ‘over the ear’ version), the Bluetooth headset is the most common.

There are other uses for Bluetooth with the iPhone: I frequently use a Bluetooth keyboard when I am actually using the iPhone as a little computer at a coffee bar – it’s SO much faster than pecking on that tiny glass keyboard… There are starting to be a number of interesting external ‘appliances’ that communicate with the iPhone via Bluetooth as well – temperature/humidity meter; various sports/exercise measuring devices; even civil engineering transits can now communicate their reading via Bluetooth to an app for automatic recording and triangulation of data.

And yes, it takes another radio and antenna…

And last but certainly not least:  iOS6

A number of new features are either totally OS-related, or the new hardware improvements are expressed to the user via the new OS. The good news is that some of these new features will now show up in earlier iPhone models, commensurate of course with hardware limitations.

A few of the new features:

  • Improvements to Siri:  open apps and post comments to social apps with voice commands
  • Facebook: integrated into Calendar, Camera, Maps, Photos. (yes, you can turn off sharing via FB, but in typical FB fashion everything is ‘opt out’…)
  • Passbook: a little digital vault for movie tickets, airline boarding passes, etc. Still ‘under construction’ in terms of getting vendors to sign up with Apple
  • FaceTime: now works over 3G/4G as well as WiFi (watch out for your data usage when not at WiFi – with the new 720P front facing video camera, that nice long chat with your significant other just smoked your entire data plan for the month…)
  • Safari:  links open web pages on multiple Apple devices that are all on same iCloud account. Be careful… if you are bored in the office and are cruising ‘artistic’ web sites, they may reflect in real time in your kitchen or your daughter’s iMac…
  • Maps:  Google Maps kicked out of Apple-land, now a home-grown map app that finally includes turn-by-turn navigation.

Summary

It’s a nice upgrade Apple. As usual, the industrial design is good. For me personally, it’s starting to get a bit big – but I’ll admit I have an iPad, so if I want more screen space than my 4S I’ll just pick up the Pad. Most of the improvements are incremental, but good nonetheless. In terms of pure technology, the iPhone5 is a bit behind some of the Android devices, but this is not the article to start on that! Those arguments could go on for years… I’m only commenting here on what’s in this particular phone, and my personal thoughts on upgrading from a recent 4S to the 5. For me, I won’t do that at this time. A lot of this is very individual, and depends on your use, needs, etc. I tend to almost always be near either my own or free Wi-Fi locations, so 4G is just not a huge deal. The improved speed sounds very nice, but my 4S currently is fast enough – I am an avid photographer, and retouch/filter a lot using the iPhone4S, and find that it’s fast enough. I love speedy devices, and if the upgrade were free I would perhaps think differently, but at this point I am not suffering with any aspect of my 4S enough to feel that I have to move to the 5 right away.

Now, I would absolutely feel differently if I had anything earlier than a 4S. I upgraded from the iPhone4 to the 4S without hesitation – in my view the improvements were totally worth it: much better camera, much faster processor, etc. So in the end, my personal recommendation: a highly recommended upgrade for anything at the level of the iPhone4 or earlier; for the 4S, it comes down to individual choices and your budget.

Lens Adaptors for iPhone4S: technical details, issues and usage

August 31, 2012 · by parasam

[Note: before moving on with this post, a comment on stupid spell-checkers… my blog writer (and even Microsoft Word!) insists that “adaptor” is a mis-spelling. Not so… “adaptor” is a device that adapts one thing to another that would otherwise be incompatible, while an “adapter” is a person that adapts to new situations or environments… I’ve seen countless instances of mis-use… I fear that even educated users are deferring to software, assuming that it’s always correct. The amount of flat-out wrong instances in both spelling and grammar in most major software applications is actually scary…]

Okay, now for the good stuff…  While the iPhone (in all models) is a fantastic camera for a cellphone, it does have many limitations, some of which have been discussed in previous articles in this blog. The one we’ll address today is the fixed field-of-view (FOV) of the camera lens. Most users are familiar with 35mm SLR (Single Lens Reflex) cameras – or, if you are young enough never to have used film, then DSLR (Digital SLR) cameras – and have at least an acquaintance with the relative FOV of different focal length lenses. As a quick review, the so-called “normal” lens for a 35mm sensor size is a 50mm focal length. Anything less than that is termed a “wide angle” lens, anything greater than that is termed a “telephoto” lens. This is a somewhat loose description, and at very small focal lengths (which lead to a very wide angle of view) the terminology changes to a “fisheye” lens. For a more detailed explanation of focal length and other issues please see my original post on the iPhone4S camera “Basic Overview” here.

Overview

The lens that is part of the iPhone4S camera system is a fixed aperture / fixed focal length lens. The aperture is set at f2.4 while the 35mm equivalent focal length of the lens is 32mm – a moderately wide angle lens. The FOV (Field of View) for this lens is 62° [for still photos], 46° [for video]. {Note: since the video mode of 1920×1080 pixels is smaller than the sensor size used for still photos (3264×2448) the angle of view changes with the focal length held constant} The fixed FOV (i.e. not a zoom lens) affects composition of the image, as well as depth of field. A quick note: yes the iPhone (and most other cellphone cameras) have a “zoom” function, but this is a so-called “digital zoom” which is achieved by cropping and magnifying a small portion of the original image as captured on the sensor. This produces a poor quality image that has low resolution, and is avoided for any serious photography. A true zoom lens (sometimes called ‘optical zoom’) achieves this function by mechanically changing the focal length – something that is impossible to engineer for a cellphone. As a rule of thumb, the smaller the focal length, the greater the depth of field (the areas of the image that are in focus, in relation to the distance from the lens); and the greater the field of view (how much of the total scene fits into the captured image).
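To put a rough number on why digital zoom “produces a poor quality image”, here is a minimal sketch: digital zoom simply crops the centre of the sensor and scales it up, so the number of real captured pixels falls with the square of the zoom factor. The 3264×2448 dimensions are the still-image sensor resolution quoted above.

```python
# Sketch: effective captured resolution after an N-x digital zoom.
# A digital zoom crops the sensor centre and upscales it, so the real
# pixel count falls with the square of the zoom factor.
SENSOR_W, SENSOR_H = 3264, 2448          # iPhone4S still-image resolution

for zoom in (1, 2, 3, 4, 5):
    width, height = SENSOR_W / zoom, SENSOR_H / zoom
    print(f"{zoom}x digital zoom: {width:.0f} x {height:.0f} "
          f"= {width * height / 1e6:.1f} MP actually captured")
```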

In order to add some variety to the compositional choices afforded by the fixed iPhone lens, the only option is to fit external adaptor lenses to the iPhone. There are several manufacturers that offer these, using a variety of mechanical devices to mount the lens. There are two basic divisions of adaptor type: those that provide external lenses and the mounting hardware; and those that provide a mechanical adaptor to use commonly available 35mm lenses with the iPhone. One example of an adaptor for 35mm lenses is here, while an example of lens+mount is here.

I personally don’t find a use for adapting 35mm lenses to the iPhone:  if I am going to deal with the bulk of a full sized lens then I will always choose to attach a real camera body and take advantage of the resolution and control that a full purpose-built camera provides. Not everyone may share this sentiment, and for those that find this useful there are several adaptors available. I do shoot a lot with the iPhone, and found that I did really want to have a relatively small and lightweight set of adaptor lenses to offer more choice in framing an image. I researched the several vendors offering such devices, and for my personal use I chose the iPro lens system manufactured by Schneider Optics. I made this choice based on two primary factors:  I had prior experience with lenses made by Schneider (their unparalleled Super Angulon wide angle for my view camera), and the precision, quality and versatility of the iPro system. This is a personal choice – ultimately any user will find what works for them – but the principles discussed here will apply to any external adaptor lens. As I have mentioned in previous posts, I am not a professional reviewer, have no relationship with any hardware or software vendor (other than the support offered as an end user), and have no commercial interest in any product I mention in this blog. I pick what I like, then write about it.

I do want to point out, however, that once I started using the iPro lenses and had some questions, I received a great deal of time and assistance from the staff at Schneider Optics, particularly Niki Mustain. I would like to thank her and all the staff that so generously answered my incessant questions, and did a fair amount of additional research and testing prompted by some of my observations. They kindly made available an internal report on iPro lens performance, and the interactions with the iPhone camera (some of these issues are discussed below). When and if they make that public (likely as an application note on their website) I will update this blog with a comment to point to that; in the meantime they have allowed me to use some of their comments on the general technology and limitations of any adaptor lens system as background for this post.

Technical specs on the iPro lens adaptor system

This particular system offers three different adaptor lenses (they can be purchased individually or as a set): Wide Angle, Telephoto and Fisheye. Here are the basic specifications:

As can be seen from the above details, the Telephoto is a 2X magnification, doubling the focal length and halving the FOV (Field of View). The Wide Angle changes the stock medium wide-angle view of the iPhone to a “very wide” wide angle (19mm equivalent – about the widest FOV provided by most variable focal length** 35mm lenses). The Fisheye offers what I would consider a ‘medium’ fisheye look, with a 12mm equivalent focal length. With fisheye lenses generally accepted as having focal lengths of 18mm or less, this falls about midway between 6mm*** and 18mm.

**There is a difference between “variable focal length” and “zoom” lenses, although most use the term interchangeably not being aware of the distinction between the two. A variable focal length lens allows a continuous change of focal length, but once the new focal length is established, the image must be refocused. A true zoom lens will maintain focus throughout the entire range of focal lengths allowed by the lens design. Obviously a true zoom lens is more difficult (and therefore costly) to manufacture. Typically, zoom lenses are larger and heavier than a variable focal length lens. It is also more difficult to create such a lens with a wide aperture (low f/stop number). To give an example, you can purchase a reasonable 70-200mm zoom lens for about $200 (with a maximum aperture of f5.6); a high quality zoom lens of the same range (70-200mm) that opens up to f2.8 will run about $2,500.

Another thing to keep in mind is that most ‘variable focal length’ lenses are not advertised as such; they are often marketed as zoom lenses, but careful testing will show that accurate focus is not maintained throughout the full range of focal lengths. Not surprising, as this is a difficult optical feat to do well, which is why high quality zoom lenses cost so much. Really good HD video or cinematography zoom lenses that have an extremely wide range (often used for sports television – for example the Canon DigiSuper 80 with a zoom range of 8.8 to 710mm) can cost upwards of $163,000. Warning: playing with one of these for a few days will produce depression and optical frustration once returning to ‘normal’ inexpensive zoom lenses… A good lens is simply the most important factor in getting a great image. Period.

*** The extreme wide end of fisheye lenses is held by the Nikkor 6mm/f2.8 which is a masterpiece of engineering. With an almost insane 220° FOV, this is the widest lens for 35mm cameras of which I am aware. You won’t find this in your local camera shop however, only a few hundred were ever made – during the 1970s – 1980s. The last time one went on auction (in the UK in April 2012) it sold for just over $160,000. The objective lens is a bit over 236mm (9.25″) in diameter! Here are a few pix of this awesome lens:

actual image taken with 6mm f2.8 Nikkor fisheye

Ok, back to reality (both size and wallet-wise…)

Here are some images of my iPro lenses to give the reader a better idea of the devices which we’ll be discussing further:

The 3-lens iPro kit fully assembled for carrying/storage.

An ‘exploded view’ of all the bits that make up the 3-lens iPro system.

The Fisheye, Telephoto and Wide Angle iPro lenses.

Front view of iPro case mounted to iPhone4S, showing the attached tripod adaptor.

Rear view of the iPro case mounted on iPhone4S.

Close-up of the bayonet lens mounting feature of the iPro case.

2X Telephoto mounted on iPhone.

WideAngle lens mounted on iPhone.

Fisheye lens mounted on iPhone.

Basic use of the iPro lens system

The essential parts of the iPro lens system are the case, which allows precision alignment of the lens with the iPhone camera, and the detachable lens elements themselves. As we will discuss below, the precision and accuracy of mounting an external adaptor lens is crucial to good optical performance. It may seem trivial, but the material and case design is an important overall part of the performance of this adaptor lens system. Due to the necessary rigidity of the case material, once it is installed on the iPhone it is not the easiest to remove… I missed this important part of the instructions provided:  you must attach the tripod adaptor to the case body to provide the additional leverage needed to slightly flex the case for removal. (the hole in the rear of the case that shows the Apple logo is actually a critical design element: that is where you push with a finger of your opposite hand while flexing the case in order to pop out the phone from the case).

In addition to providing the necessary means for taking the iPhone out of the case if you should need to (and you really won’t: I found that this case works just fine as an everyday shell for the phone, protecting the edges, insulating the metallic sideband to avoid the infamous ‘hand soaking up microwaves dropped call iPhone effect’, and is slim enough that it fits perfectly in my belt-mounted carrying case), the tripod mounting screw provides a very important improvement for iPhonography: stability. Even if you don’t use any of the adaptor lenses, the ability to affix the phone to a tripod (or even a small mono-pod) is a boon to getting better photographs with the iPhone. Rather than bore you with various laws of physics and optic science, just know that the smaller the sensor, the more a resultant image is affected by camera movement. The simple truth is that the very small sensor size of the iPhone camera, coupled with the light weight and small case size of the phone, means that most users unconsciously jiggle the camera a lot when taking an image. This is the single greatest reason for lack of sharpness in iPhone images. To compound things, the smaller the sensor size, the less sensitive it is for gathering light, which means that often, in virtually anything but direct sunlight, the iPhone is shooting at relatively slow shutter speeds, which only exaggerates camera movement.

Since the EXIF data (camera image metadata) is collected with each shot, you can see afterwards what shutter speed was used by the iPhone on each of your shots. The range of shutter speeds on the iPhone4S is from 1/15 sec to 1/2000 sec. Any shutter speed slower than 1/250 sec will show some blurring if the camera moves at all during the shot. So, whenever possible, brace your phone against a rigid object when shooting, particularly in partial shade or darker surroundings. Since often a suitable fence post, lamp pole or other object is not right where you need it for your shot, the ability to use some form of tripod will often provide a superior result for your image.
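Checking this is easy to automate. Below is a minimal sketch that reads the shutter speed out of each image’s EXIF data and flags anything slower than the 1/250 sec rule of thumb mentioned above; it assumes a reasonably recent version of the Pillow library, and the folder name is just a placeholder.

```python
# Sketch: flag shots whose EXIF shutter speed risks visible motion blur.
# Assumes a reasonably recent Pillow; the folder name is a placeholder and
# 1/250 s is the rule-of-thumb threshold discussed above.
from pathlib import Path
from PIL import Image
from PIL.ExifTags import TAGS

BLUR_THRESHOLD_S = 1 / 250

for path in sorted(Path("camera_roll").glob("*.jpg")):
    raw = Image.open(path)._getexif() or {}
    exif = {TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}
    exposure = exif.get("ExposureTime")       # seconds, e.g. 0.05 for 1/20 s
    if not exposure:
        continue
    seconds = float(exposure)
    note = "brace the phone or use a tripod" if seconds > BLUR_THRESHOLD_S else "ok"
    print(f"{path.name}: 1/{round(1 / seconds)} s -> {note}")
```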

The adaptor lenses themselves twist into the case with a simple bayonet mount. As usual with any fine optics, take care to avoid dropping, scratching or otherwise damaging the delicate optical surfaces of the lenses. The telephoto lens will most benefit from tripod use (when possible), as the narrower the angle of view, the more pronounced camera shake is on the image. On the other hand, the fisheye lens can be handheld for most work with no visible impairment. A note on use of the fisheye lens:  the FOV is so wide that it’s easy for your hand to end up in the image… take some care and practice with how you hold the phone when using this lens.

Optical issues with adaptor lenses, including the iPro lens system

After using the adaptor lenses for a short time, I found several impairments in the images taken. Essentially the artifacts result in a lack of sharpness towards the edge of the image, and color fringing of certain objects near the edge of the frame. I went on to perform extensive tests of each of the lenses and then forwarded my concerns to the staff at Schneider Optics. To my pleasure, they were open to my concerns, and performed a number of tests in their own lab as well. While I will discuss the details below, the bottom line is that both I and the iPro team agree that external adaptor lenses are not a perfect science, particularly with the iPhone. We must remember, for all the fantastic capabilities that this device exhibits… it’s a bloody cellphone! I have every confidence that Schneider (and probably other vendors as well) have made every effort within the scope of practicality and budget for such lenses to minimize the side-effects. I have found that the actual optical precision of the iPro lenses (as measured for such things as MTF [Modulation Transfer Function – an objective measurement of the resolving capability of a lens system], illumination fall-off, chromatic and geometric aberrations, optical alignment and contrast ratio) is excellent – particularly for lenses that are really quite inexpensive compared to their quality.

The real issue lies with the iPhone camera system itself: Apple never designed this camera to interoperate with external adaptor lenses. One cannot fault the original manufacturer for attempting to produce a piece of hardware that offers good performance at a reasonable price within a self-contained system. The iPhone designers have treated the totality of the hardware and software of the camera system as a fixed and closed universe. This is typical of the way that Apple designs both their hardware and software. There are both pros and cons to this philosophy:  the strong advantage is the ability to blend design characteristics of both hardware and software to mutually complement each other in the effort to meet design criteria with a time/cost budget; the disadvantage is the lack of easy adaptability in many cases for external hardware or software to easily interoperate with Apple products. For example, the software development guidelines for Apple devices are the most stringent in the entire industry. You work within the framework provided, or you don’t get approval for your app. Every app intended for any iDevice must be submitted to Apple directly for testing and approval. This is virtually unique in the entire computer/cellphone industry. (I’m obviously not talking about the gray area of ‘jailbroken’ phones and software).

The way in which this design philosophy shows up in relation to external adaptor lenses is this: the iPhone camera is an amazingly good camera for its size, cost and weight, but it was never designed to be complementary to external lenses. Certain design choices that are not evident when images are taken with the native camera show up, sometimes rather glaringly, when external lenses are coupled with the iPhone camera. One might say that latent issues in the lens and sensor design are significantly amplified by external adaptor lenses. This issue is endemic to any external lens, not just the iPro lenses I am discussing here. Each one will of course have its own unique 'fingerprint' of interaction with the iPhone camera, but the general issues discussed will be the same.

As usual, I bring all this up to share with my readers the best information I can find or develop in the pursuit of what's realistically possible with this great little camera. The better we know the capabilities and limitations of our tools, the better able we are to make the images we want. I have taken some great shots with these adaptor lenses that would have been impossible to create any other way. I can live with the distortions introduced as a compromise to get the kind of shot that I want. The more aware I am of what the issues are, the better I can attempt (while composing a shot) to minimize the visibility of some of these artifacts.

To get started, here are some example shots:

[Note:  all shots are unretouched from the iPhone camera, the only adjustment is resizing to fit the constraints of this blog format]

iPhone4 Normal (no adaptor lens)

iPhone4 WideAngle adaptor lens

iPhone4 Fisheye adaptor lens

iPhone4 Telephoto adaptor lens

iPhone4S #1 Normal

iPhone4S #1 WideAngle

iPhone4S #1 Fisheye

iPhone4S #1 Telephoto

iPhone4S #2 Normal

iPhone4S #2 WideAngle

iPhone4S #2 Fisheye

iPhone4S #2 Telephoto

The above shots were taken to test one of the first potential causes for the artifacts in the images: the softening towards the edges as well as the color fringing of bright areas near the edge of the image (chromatic aberration). A big potential issue with externally mounted adaptor lenses for the iPhone is lens alignment. The iPhone lens is physically aligned to the sensor as part of the entire camera assembly. This unitary assembly is then inserted into the case during final manufacture of the device. Since Apple never considered the use of external adaptor lenses, no effort was made to ensure perfect alignment of the camera assembly into the case. As can be seen from my blog on the iPhone hardware (showing detailed images of an iPhone torn apart), the camera assembly is simply pressed into place – there is no precision mechanical lock to align the optical axis of the camera with the case. In addition, the actual camera lens is protected by being installed behind a clear plastic window that is part of the outer case itself.

What this means is that if the camera assembly is tilted even very slightly it will produce a “tilt-shift” de-focus effect when coupled with an external lens:  the center of the image will be in focus, but both edges will be out of focus. One side will actually be focused a bit behind the sensor plane, the other side will be focused a bit in front of the sensor plane.

The above diagram represents an extreme example, but you can see that if the lens is tilted in relation to the image sensor plane, the plane of focus changes. Objects at the edge of the frame will no longer be in focus, while objects in the center of the frame will remain in focus.
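For those who like a bit of math, here is a back-of-the-envelope way to look at it (my own simplification, not anything from Apple or Schneider). If the lens axis is tilted by a small angle θ relative to the sensor normal, a point sitting a distance x from the center of the sensor sees its lens-to-sensor distance changed by roughly

    \Delta s_i(x) \approx x \sin\theta, \qquad
    \frac{1}{f} = \frac{1}{s_o} + \frac{1}{s_i}
    \;\Rightarrow\;
    \Delta s_o \approx -\frac{s_o^{2}}{s_i^{2}}\,\Delta s_i

where f is the focal length and s_o, s_i are the object and image distances from the thin-lens equation. Because x changes sign from one edge of the frame to the other, so does the shift in object-side focus: one edge of the frame ends up focused slightly in front of the subject plane and the opposite edge slightly behind it, which is exactly the symptom described above. Since the image distance on a phone camera is only a few millimeters, even a tiny tilt translates into a noticeable focus shift on the subject side.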

In order to rule out this possibility in my tests, I used three separate iPhones (one iPhone4 and two iPhone4S models). While not a large sample statistically, it did provide reasonable confidence that the issues I was observing were not related to a single iPhone. You can see from the examples above that all of the adaptor lens shots exhibit some degree of the two artifacts (defocused edges and chromatic aberration). So further investigation was required in order to attempt to understand the root cause of these distortions.

Since the first set of test shots was not overly ‘scientific’ (back yard), I was advised by the staff at Schneider that a brick wall was a good test subject. It was easy to visualize the truth of this, so I went off in search of a large public test chart (brick wall…)

WideAngle taken from 15 ft.

Fisheye taken from 15 ft.

Telephoto taken from 15 ft.

To add some control to the shots, and reduce potential errors of camera movement that may affect sharpness in the image, the above and all subsequent test shots were taken while the iPhone was mounted on a stable tripod. In addition, each shot was taken from exactly the same camera position (in the above shots, 15 feet from the wall). Two things stood out here: 1) there was a lack of visible chromatic aberration [I think likely due to the flat lighting on the wall and lack of high contrast edges, which typically enhance that form of artifact]; and 2) the soft focus artifact is more pronounced on the left and right sides as opposed to the top and bottom edges. [More on why I think this may occur later in this article].

WideAngle, 8 ft.

Fisheye, 8 ft.

WideAngle, 30 ft.

Fisheye, 30 ft.

Telephoto, 30 ft.

WideAngle, 50 ft.

Fisheye, 50 ft.

Telephoto, 150 ft.

Telephoto, 150 ft.

Telephoto, 500 ft.

The above set of images represented the next test series of shots. Here, various distances to the "test chart" [this time I needed an even larger 'chart', so had to find a 3-story brick building…] were used in order to see what effect distance may have on the resultant image. A few 'real world' images were shot using just the telephoto at long distances – here the large distance from camera to subject, using a telephoto lens, would normally result in a completely 'flat' image with everything in the same focal plane. Once again, we continue to see soft focus and chromatic aberrations at the edges.

Normal (no adaptor), auto-focus

Normal, Selective Focus

WideAngle, Auto Focus

WideAngle, Selective Focus

Fisheye, Auto Focus

Fisheye, Selective Focus

Telephoto, Auto Focus

Telephoto, Selective Focus

This last set of test shots was suggested by the Schneider staff, based on some tests they ran and subsequent discussions. One theory is that there is a difference in how the iPhone camera internal software (firmware + OS kernel software – not anything a camera app developer has access to) handles auto-focus vs selective-focus. Selective focus is where the user can select the focus area, usually with a little square that can be moved to different parts of the image. In all the above tests, the selective focus area was set to the center of the image. In theory, since my test images were flat and all at the same distance from the camera, there should have been no difference between auto-focus or selective-focus, no matter which lens was used. Careful examination of the above images shows an inconsistent result: the fisheye showed no difference between the two focus modes, the normal and telephoto looked better with selective focus, while the wideangle looked best when auto focus was applied.

The internal test report I received from Schneider pointed out another potential anomaly, one I have not yet had time to attempt to reproduce: using selective focus off-center in the image. This usage appeared to generate results that would be unexpected in normal photographic work: the area of selective focus was sharp, most of the rest of the image was a bit softer, but a mirror image position of the original selective focus region was once again sharp on the opposite side of the image. This does seem to clearly point to some image-enhancement algorithms behaving in an unexpected fashion.

The issue of auto-focus methods is a bit beyond the scope of this article, but considerable research shows that the most likely methodology used in the iPhone camera is passive detection (that much is certain – there is no range finder on an iPhone!) driving a lens barrel or lens element adjustment. A large number of vendors support this form of auto-focus (and here I mean 'not manual focus', since there is no mechanical focus ring on cellphones – the 'auto-focus' can either be entirely automatic [as I use the term "auto-focus" in my tests above] or selective-area auto-focus, where the user indicates a region of the image on which the auto-focus is concentrated). One of the most advanced methods is MEMS (Micro-Electro-Mechanical Systems), which moves a single optical element within the lens barrel; another popular method is the 'voice-coil' micro-motor, which moves the entire lens barrel to effect focus.

With the advances brought to bear with iOS5, including face area recognition (the camera attempts to recognize faces in the image and focus on those when in full auto-focus mode), it is apparent that significant image recognition and processing are being done at the kernel level, before any camera app 'gets its hands on' the camera controls. The bottom line is that there may well be some interactions between the passive detection and image processing algorithms and the unexpected (to the iPhone software) presence of an external adaptor lens. Another way to put this is that the internal software of the camera is likely 'tuned' to the lens that is part of the camera assembly, and a significant change to the optical pattern drawn on the camera sensor (now that a telephoto lens adaptor is attached) alters the focusing algorithm in an unexpected manner, producing the artifacts we see in the examples.

This issue is not at all unknown in engineering and quality control: a holistically designed system, where all of the variables are thought to be known, can be significantly degraded when even one element is externally modified without knowledge of the full scope of design parameters. This often occurs with after-market additions or changes to automobiles. One simple example: if you change the tire size (radius, not width) the speedometer is no longer accurate – the entire system of the car, including wheel and tire diameter, was part of the calculus for determining how many turns of the axle per minute (all the speedometer mechanism actually measures) are required to indicate a given speed in kph (or mph) on the instrument panel.
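To put numbers on the tire example, here is a tiny Python calculation; the radii are made up purely for illustration, not taken from any particular car:

    # Illustration of the speedometer example (illustrative numbers only).
    # The speedometer measures axle revolutions and multiplies by the rolling
    # circumference it was calibrated for; change the tire radius and the
    # indicated speed no longer matches the true speed.
    import math

    calibrated_radius_m = 0.31      # radius the factory calibration assumed
    actual_radius_m     = 0.33      # slightly taller after-market tire

    true_speed_kph = 100.0
    # wheel revolutions per hour at the true speed, given the actual tire
    revs_per_hour = (true_speed_kph * 1000) / (2 * math.pi * actual_radius_m)
    # what the instrument panel reports, still assuming the original tire
    indicated_kph = revs_per_hour * (2 * math.pi * calibrated_radius_m) / 1000

    print("True: %.0f kph, indicated: %.1f kph" % (true_speed_kph, indicated_kph))
    # -> indicated reads about 94 kph, roughly 6% low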

Another factor that may have a material effect on the focus and observed chromatic aberration is the lens design itself, and how an external adaptor lens may interact with the native design. Simple lenses are often portions of a sphere, so called “spherical lenses.”  Such a lens suffers from significant optical aberrations, as not all of the light rays that are focused by a spherical lens converge to a single point (producing a lack of sharp focus). Also, such lenses bend different colors of light differently, leading to chromatic aberrations (where one sees color fringing, usually blue/purple on one side of a high contrast object and green/yellow on the opposite side). Most high quality modern camera lenses are either aspherical (specially modified shapes that deviate away from a perfect spheroid shape) or groups of elements, some of which may be spherical and others aspherical. Several examples are shown below:

We know from published literature that the lens used in the iPhone4S is a 5 element lens system, including several aspherical elements. A diagram released by Apple is shown below:

iPhone4 lens system [top] and iPhone4S lens system [bottom]

Again, as described earlier, the iPhone camera system was designed as a unitary system, with factors from the lens system, the individual lens elements, the sensor, firmware and kernel software all becoming known variables in a highly complex opto-electronic equation. The introduction of an external adaptor – an array of additional elements – can produce unplanned effects. All in all, the various vendors of such adaptor lenses, including iPro, have done a good job in dealing with many unknowns. Apple is a highly secretive manufacturer, and does not publish much information. Attempts to gain further technical knowledge are very difficult; at some point one invariably comes up against Apple's draconian NDAs (Non-Disclosure Agreements), which have penalties large enough to deter even the most aggressive seekers of information. Even the accumulation of knowledge that I have acquired over the past year while writing about the iPhone has been slow and tedious, and has taken a tremendous amount of research and 'fact comparison.'

As a final example, using a more real-world subject, here are a few camera images and screen shots that demonstrate the challenge if one attempts to correct, using post-production techniques, some of the errors introduced by such a lens adaptor:

Original image, unretouched but annotated.

The original image shows significant chromatic aberrations (color fringing) around the reflections in the shop window, grout lines in the brickwork on the pavement, and on the left side of the man’s shirt.

Editing using Photoshop ‘CameraRaw’ to attempt to correct the chromatic aberrations.

Using the Photoshop Camera Raw module, it is possible to manually correct for color fringing shifts… but this affects the entire image. So a fix for the edges causes a new set of errors in the middle of the image.

Chromatic aberrations removed from around the reflections in the window…

Notice here that the color fringing is gone from around the bright reflections in the window, but now the left edge of the man’s shirt has the color shifted, leaving only the monochromatic outline behind, producing a dark gray edge instead of the uniform blue that should exist.

…but reciprocal chromatic edge errors are introduced in the central portion of the image where highly saturated colors abut more neutral areas.

Likewise, the green paint on the steel column has shifted, revealing a gray line on the right of the woman’s leg, with a corresponding shift of the flesh tone onto the green steelwork on the left side of her leg.

Final retouched shot after 'painting in' was performed to resolve the chroma offset errors in the central portion of the image.

To fix all these new errors, a technique known as 'painting in' was used, sampling and filling the color errors with the correct shade, texture and intensity. This takes time, skill and patience. It is impractical for the most part – this was done as an example.

Summary

The use of external adaptor lenses, including the iPro system discussed here, can offer a useful extension to the creative composition of images with the iPhone. Such lenses bring a set of compromises with them, but once these are known, careful choice of lighting, camera position and other factors can reduce the visibility of such effects. As with any 'creative device' less is often more… sparing use of such adaptors will likely bring the best results. However, there are shots that I have obtained with the iPro that would have been impossible with the basic iPhone camera/lens, so I am happy to have this additional tool.

To close, here are a few more examples using the iPro lenses:

Fisheye

WideAngle

Telephoto

iPhone Cinemaphotography – A Proof of Concept (Part 1)

August 3, 2012 · by parasam

I'm introducing a concept that I hope some of my readers may find interesting: the production of an HD video that is entirely built using only the iPhone (and/or iPad). Everything from storyboard to all photography, editing, sound, titles and credits, graphics and special effects, etc. – and final distribution – can now be performed on a "cellphone." I'll show you how. Most of the attention paid to the new crop of highly capable 'cellphone cameras', such as those in the iPhone and certain Android phones, has been focused on still photography. While motion photography (video) is certainly well-known, it has not received the same attention and detail – nor the number of apps – as its single-image sibling.

While I am using a single platform with which I am familiar (iOS on the iPhone/iPad), I believe this concept can be carried out on the Android class of devices as well. I have not (nor do I intend to) research that possibility – I'll leave that for others who are more familiar with that platform. The purpose is to show that such a feat CAN be done – and hopefully done reasonably well. It's only been a few years since the production of HD video was strictly in the realm of serious professionals, with budgets of hundreds of thousands of dollars or more. While there are of course many compromises – and I don't for a minute pretend that the range of possible shots or quality will come anywhere near what a high quality DSLR, RED, Arri or other professional video camera can produce – I do know that a full HD (1080P) video can now be totally produced on a low-cost mobile platform.

This POC (Proof Of Concept) is intended as more than just a lark or a geeky way to eat some spare time:  the real purpose is to bring awareness that the previous bar of high cost cinemaphotography/editing/distribution has been virtually eliminated. This paves the way for creative individuals almost anywhere in the world to express themselves in a way that was heretofore impossible. Outside of America and Western Europe both budgets and skilled operator/engineers are in far lower supply. But there are just as many people who have a good story to tell in South Africa, Nigeria, Uruguay, Aruba, Nepal, Palestine, Montenegro and many other places as there are in France, Canada or the USA. The internet has now connected all of us – information is being democratized in a huge way. Of course there are still the ‘firewalls’ of North Korea, China and a few others – but the human thirst for knowledge, not to mention the unbelievable cleverness and endurance of 13-year-old boys and girls in figuring out ‘holes in the wall’ shows us that these last bastions of stolidity are doomed to fall in short order.

With Apple and other manufacturers doing their best to leave nary a potential customer anywhere in the world 'out in the cold', such devices are now both available and affordable almost everywhere. With apps now typically costing a few dollars (it's almost insane – the Avid editor for iOS is $5; the Avid Media Composer software for PC/Mac is $2,500) an entire production / post-production platform can be assembled for under $1,000. This exercise is about what's possible, not what is the easiest or most capable. Yes, there are many limitations. Yes, some things will take a lot longer. But what you CAN do is nothing short of amazing. That's the story I'm going to share with you.

A note to my readers:  None of the hardware or software used in this exercise was provided by any vendor. I have no commercial relationship with any vendor, manufacturer or distributor. Choices I have made or examples I use in this post are based purely on my own preference. I am not a professional reviewer, and have made no attempt to exhaustively research every possible solution for the hardware or software that I felt was required to produce this video. All of the hardware and software used in this exercise is currently commercially available – any reasonably competent user should be able to reproduce this process.

Before I get into detail on hardware or software, I need to remind you that the most important part of any video is the story. Just having a low-cost, relatively high quality platform on which to tell your 'story' won't help if you don't have something compelling to say – and the people/places/things in front of the lens to say it. We have all seen that vast amounts of money and technical talent mean nothing in the face of a lousy script or poor production values – just look over some of the (unfortunately many) Hollywood bombs… I'm the first one to admit that motion picture storytelling is not my strong point. I'm an engineer by training and my personal passion is still photography – telling a story with a single image. So… in order to bring this idea to fruition – I needed help. After some thought, I decided that 'piggybacking' on an existing production was the most feasible way to realize this idea: basically adding a few iPhone cameras to a shoot where I could take advantage of an existing set, actors, lighting, direction, etc. For me, this was the only practical way to make it happen in a relatively short time frame.

I was lucky enough to know a very talented director, Ambika Leigh, who was receptive and supportive of my idea. After we discussed my general idea of ‘piggybacking’ she kindly identified a potential shoot. After initial discussions with the producers, the green light for the project was given. The details of the process will come in future posts, but what I can say now (the project is an upcoming series that is not released yet – so be patient! It will be worth the wait) is that without the support and willingness of these three incredible women (Ambika Leigh, director; Tiffany Price & Lauren DeLong, producers/actors/writers) this project would not have moved forward with the speed, professionalism and just plain fun that it has. At a very high level, the series brings us into the clever and humorous world of the “Craft Ladies” – a couple of friends that, well, like to craft – and drink wine.

"Craft Ladies is the story of Karen and Jane, best friends forever, who love to
craft…they just aren’t any good at it. Over the years Karen and Jane’s lives
have taken slightly different paths but their love of crafting (and wine)
remains strong. Tune in in September to watch these ladies fulfill their
dream…a craft show to call their own. You won’t find Martha Stewart here,
this is crafting Craft Ladies style. Craft Up Nice Things!”

Please check out their links for further updates and details on the 'real thing':

www.facebook.com/CraftUpNiceThings
www.twitter.com/#!/2craftladies
www.CraftUpNiceThings.com

I am solely responsible for the iPhone portion of this program – so all errors, technical gaffes, editorial bloops and other stumbles are mine. As said, this is a proof of concept – not the next Spielberg epic… My intention is to follow – as closely as my expertise and the available iOS technology will allow – the editorial decisions, effects, titles, etc. that end up on the 'real show'. To this end I will necessarily be lagging a bit in my production, as I have to review the assembled and edited footage first. However, I will make every effort to have my iPhone version of this series ready for distribution shortly after the real version launches. Currently this is planned for some time in September.

For the iPhone shoot, two iPhone4S devices were used. I need to thank my capable 2nd camerawoman – Tara Lacarna – for her endurance, professionalism and support over two very long days of shooting! In addition to her new career as an iPhonographer (ha!) she is a highly capable engineer, musician and creative spirit. While more detail will be provided later in this post, I would also like to thank Niki Mustain of Schneider Optics for her time (and the efforts of others at this company) in helping me get the best possible performance from the “iPro” supplementary lenses that I used on portions of the shoot.

Before getting down to the technical details of equipment and procedure, I’ll lay out the environment in which I shot the video. Of course, this can vary widely, and therefore the exact technique used, as well as some hardware, may have to change and adapt as required. In this case the entire shoot was indoors using two sets. Professional lighting was provided (3200°K) for the principal photography (which used various high-end DSLR cameras with cinema lenses). I had to work around the available camera positions for the two iPhone cameras, so my shots will not be the same as were used in principal photography. Most shots were locked off with both iPhones on tripods; there were some camera moves and a few handheld shots. The first set of episodes was filmed over two days (two very, very long days!!) and resulted in about 116GB of video material from the two iPhones. In addition to Ambika, Tiffany, Lauren and Tara there was a dedicated and professional crew of camera operators, gaffers, grips, etc. (with many functions often performed by just one person – this was after all about quality not quantity – not to mention the lack of a 7-figure Hollywood budget!). A full list of credits will be in a later post.

Aside from the technical challenges; the basic job of getting lines and emotion on camera; taking enough camera angles, close-ups, inserts and so on to ensure raw material for editorial continuity; and just plain endurance (San Fernando Valley, middle of summer, had to close all windows and turn off all fans and A/C for each shot due to noise, a pile of people on a small set, hot lights… you get the picture…) – the single most important ingredient was laughter. And there was lots of it!! At one time or another, we had to stop down for several minutes until one or the other of us stopped laughing so hard that we couldn’t hold a camera, say a line or direct the next sequence. That alone should prompt you to check this series out – these women are just plain hilarious.

Hardware:

As mentioned previously, two iPhone4S cameras were used. Each one was the 32GB model. Since shooting video generates large files, most user data was temporarily deleted off each phone (easy to restore later with a sync using iTunes). Approximately 20GB free space was made available on each phone. If one was going to use an iPhone for a significant amount of video photography the 64GB version would probably be useful. The down side is that (unless you are shooting very short events) you will still have to download several times a day to an external storage device or computer – and the more you have to download the longer that takes! As in any process, good advance planning is critical. In my case with this shoot, I needed to coordinate ‘dumping times’ with the rest of the shoot:  there was a tight schedule and the production would not wait for me to finish dumping data off the phones. The DSLR cameras use removable memory cards, so it only takes a few minutes to swap cards, then those cameras are ready to roll again. I’ll discuss the logistics of dumping files from the phones in more detail in the software section below. If one was going to attempt long takes with insufficient break time to fully dump the phone before needing to shoot again, the best solution would be to have two iPhones for each camera position, so that one phone could be transferring data while the other one was filming.

In order to provide more visual control, as well as interest, a set of external adapter lenses (the “iPro” system by Schneider Optics) was used on various shots. A total of three different lenses are available: telephoto, wide-angle and a fisheye. A detailed post on these lenses – and adaptor lenses in general – is here. For now, you can visit their site for further detail. These lenses attach to a custom shell that is affixed to the iPhone. The lenses are easily interchanged with a bayonet mounting system. Another vital feature of the iPro shell for the phone is the provision for tripod mounting – a must for serious cinemaphotography – especially with the telephoto lens which magnifies camera movement. Each phone was fitted with one of the iPro shells to facilitate tripod mounting. This also made each phone available for attaching one of the lenses as required for the shot.

iPro “Fisheye” lens

iPro “Wide Angle” lens

iPro “Telephoto” lens

Another hardware requirement is power: shooting video kills batteries faster than just about any other activity on the iPhone. You are using most of the highest power-consuming parts of the phone – all at the same time: the camera sensor, the display, the processor, and high bandwidth memory writing. A fully charged iPhone won't even last two hours shooting video, so one must run the phone on external power, or plan the shoot for frequent (and lengthy!) recharge sessions. Bring plenty of extra cables, spare chargers, extension cords, etc. – it's very cheap insurance to keep the phones running. Damage to cables while on a shoot is almost a guaranteed experience – don't let that ruin your session.

A particular challenge that I had was a lack of a ‘feed through’ docking connector on the Line6 “Mobile In” audio adapter (more on this below). This meant that while I was using this high quality audio input adapter I was forced to run on battery, since I could not plug in the Mobile In device and the power cable at the same time to the docking connector on the bottom of the phone. I’m not aware of a “Y” adapter for iPhone docking connectors, but that would have really helped. It took a lot of juggling to keep that phone charged enough to keep shooting. On several shots, I had to forgo the high quality audio as I had insufficient power remaining and had to plug in to the charger.

As can be seen, the lack of both removable storage and a removable battery is a significant challenge for using the iPhone in cinemaphotography. This can be managed, but it's a critical point that requires careful attention. Another point to keep in mind is heat. Continual use of the phone as a video camera definitely heats up the phone. While neither phone ever overheated to the point where it became an issue, one should be aware of this fact. If one was shooting outside, it may be helpful to (if possible) shade the phone(s) from direct sunlight as much as practical. However, do not put the iPhones in the ice bucket to keep them cool…

Gitzo tripod with fluid head attached

Close-up of fluid head

Tripods are a must for any real video work: camera judder and shake is very distracting to the viewer, and is impossible to remove (with any current iPhone app). Even with serious desktop horsepower (there is a rather good toolset in Adobe AfterEffects for helping to remove camera shake) it takes a lot of time, skill and computing power. Far better to avoid it in the first place whenever possible. Since 'locked off' shots are not as interesting, it's worth getting fluid heads for your tripods so you can pan and tilt smoothly. A good high quality tripod is also well worth the investment: flimsy ones will bend and shake. While the iPhone is very light – and this may tempt one to go with a very lightweight tripod – this will work against you if you want to make any camera tilts or pans. The very light weight of the phone actually causes problems in this case: it's hard to smoothly move a camera that has almost no mass. At least having a very rigid and sturdy tripod will help in this regard. You will need considerable practice to get used to the feel of your particular fluid head, get the tension settings just right, etc. – in order to effect the smoothest camera movements. Remember this is a very small sensor, and the best results will be obtained with slow and even camera pans/tilts.

For certain situations, miniature tripods or dollies can be very useful, but they don't take the place of a normal tripod. I used a tiny tripod for one shot, and experimented with the Pico Dolly (sort of a miniature skateboard that holds a small camera), although I did not actually use it for a finished shot. This is where the small size and light weight of the iPhone can be a plus: you can hang it and place it in locations that would be difficult to impossible with a normal camera. Like anything else though, don't get too creative and gimmicky: the job of the camera is to record the story, not call attention to itself or its technology. If a trick or a gadget can help you visually tell the story – then it's useful. Otherwise stick with the basics.

Another useful trick I discovered that helped stabilize my hand-held shots:  my tripod (as many do) has a removable center post on which the fluid head is mounted (that in turn holds the camera). By removing the entire camera/fluid-head/center-post assembly I was able to hold the camera with far greater accuracy and stability. The added weight of the central post and fluid head, while not much – maybe 500 grams – certainly added stability to those shots.

Tripod showing center shaft extended before removal.

Center shaft removed for “hand-held” use

If you are planning on any camera moves while on the tripod (pans or tilts), it is imperative that the tripod be leveled first – and rechecked every time you move it or dismount the phone. Nothing is worse than watching a camera pan move uphill as you traverse from left to right… A small circular spirit level is the perfect accessory. While I have seen very small circular levels actually attached to tripod heads, I find them too small for real accuracy. I prefer a small removable device that I can place on top of the phone itself, which then accounts for all the hardware – up to and including the shell – that can affect alignment. The one I use is 25mm (1″) in diameter.

I touched on the external audio input adapter earlier while discussing power for the iPhones; I'll detail that now. For any serious video photography you must use external microphones: the one in the phone itself – although amazingly sensitive – has many drawbacks. It is single channel, where the iPhone hardware (and several of the better video camera apps) is capable of recording stereo; you can't focus the sensitivity of the microphone; and most importantly, the mike is on the front of the phone at the bottom – pointing away from where your lens is aimed!

While it is possible to plug a microphone into the combination headphone/microphone connector on the top of the phone, there are a number of drawbacks. The first is that it's still a mono input – only 1 channel of sound. The next is that the audio quality is not that great. This input was designed for telephone headset use, so extended frequency response, low noise and reduced harmonic distortion were not part of the design parameters. Far better audio quality is available on the digital docking connector on the bottom of the phone. That said, there are very few devices actually on the market today (that I have been able to locate) that will function in the environment of video cinemaphotography, particularly if one is using the iPro shell and tripod mounting the iPhone. Many of the devices treat the iPhone as just an audio device (the phone actually snaps into several of the units, making it impossible to use as a camera); with others the mechanical design is not compatible with either the iPro case or tripod mounting. Others offer only a single channel input (these are mostly designed for guitar input so budding Hendrix types can strum into GarageBand). The only unit I was able to find that met all of my requirements (stereo line input, high audio quality, mechanically did not interfere with tripod or the iPro case) was a unit called "Mobile In" manufactured by Line6. Even this device is primarily a guitar input unit, but it does have a line in stereo connector that works very well. In order to use the hardware, you must download and install their free app (and it's on the fat side, about 55MB) which contains a huge amount of guitar effects. Totally useless for the line input – but it won't work without it. So just install it and forget about it. You never need to open the MobilePOD app in order to use the line input connector. As discussed above in the section on power, the only major drawback is that once this device is plugged in you can't run your phone off external power. Really need to find that "Y" adapter for the docking connector…

“Mobile In” audio input adapter attached.

Now you may ask, why do I need a line input connector when I'm using microphones?? My attempt here is to produce the highest quality content possible, while still using the iPhone as the camera/recorder. For the reasons already discussed above, the use of external microphones is required. Typically a number of mikes will be placed, fed into a mixer, and then a line level feed (usually stereo) will be fed to the sound recorder. In all 'normal' (aka not using cellphones as cameras!!) video shoots, the sound is almost always recorded on a separate device, just synchronized in some fashion to each of the cameras so the entire shoot is in sync. In this particular shoot, the two actors on the set were individually miked with lavalier microphones (there is a whole hysterical story on that subject, but it will have to wait until after that episode airs…) and a third directional boom mike was used for ambient sound. The three mikes were fed into a small portable mixer/sound recorder. The stereo output (usually used for headphone monitoring – a line level output) was fed (through a "Y" cable) to both the monitoring headphones and the input to the Mobile In device. Essentially, I just 'piggybacked' on top of the existing audio feed for the shoot.

This didn't violate my POC – as one would need this same equipment – or something like it – on any professional shoot. At a minimum, one could just use a small mixer; obviously, if the iPhone is recording the sound, an external recorder is not required. I won't attempt to further discuss all the issues in recording high quality sound – that would take a full post (if not a book!) – but there is a massive amount of literature out there on the web if one looks. Good sound recording is an art – if possible avail yourself of someone who knows this skill to assist you on your shoot – it will be invaluable. I'll just mention a few pointers to complete this part of the discussion:

  • Record the most dynamic range possible without distortion (big range between soft and loud sounds). This will markedly improve the presence of your audio tracks.
  • Keep all background noise to an absolute minimum. Turn off all cellphones! (Put the iPhones that are 'cameras' in "airplane mode" so they won't be disturbed by phone calls, texts or e-mails.) Turn off fans, air conditioners, refrigerators (if you are near a kitchen), etc. Take a few moments after calling 'quiet on the set' to sit still and really listen to your headphones to ensure you don't hear any noise.
  • As much as possible, keep the loudness levels consistent from take to take – it will help keep your editor (or yourself…) from taking out the long knives after way too many hours trying to normalize levels between takes…
  • If you use lavalier mikes (those tiny microphones that clip onto clothing – they are available in ‘wired’ or ‘wireless’ versions) you need to listen carefully during rehearsals and actual takes for clothing rustle. That can be very distracting – you may have to stop and reposition the mike so that the housing is not touching any clothing. These mikes come with little clips that actually mount on to the cable just below the actual microphone body – thereby insulating clothing movement (rustle) from being transmitted to the sensor through the body of the microphone. Take care in mounting and test with your actor as they move – and remind them that clasping their hands to their chest in excitement (and thumping the mike) will make your sound person deaf – and ruin the audio for that shot!

Actors’ view of the camera setup for a shot, (2 iPhones, 3 DSLRs)

Storage and the process of dumping (transferring video files from the iPhones to external storage) is a vital part of hardware, software and procedure. The hardware I used will be discussed here; the software and procedure are covered in the next section. Since the HD video files consume about 2.5GB for every 10 minutes of filming, even the largest capacity iPhone (64GB) will run out of space in short order. As mentioned earlier, I used the 32GB models on this shoot, with about 20GB free space on each phone. That meant that, at a maximum, I had a little over an hour's storage on each phone. During the two days of shooting, we shot just under 5 hours of actual footage – which amounted to 116GB from the two iPhones. (Not every shot was shadowed by the iPhones: some of the close-ups and inserts could not be covered by the iPhones, as they would have been visible in the shot composed by the DSLR cameras.)

The challenge of this project was to not involve anything other than the iPhone/iPad for all aspects of the production. The dumping of footage from the iPhones to external storage is one area where neither Apple nor any 3rd party developer (that I have found) offers a purely iOS solution. With the lack of removable storage, there are only two ways to move files off the iPhone: Wi-Fi or the USB cable attached to the docking connector. Wi-Fi is not a practical solution in this environment: the main reason is it's too slow. You can find as many 'facts' on iPhone Wi-Fi speed as there are types of orchids in the Amazon, but my research (verified by personal tests) shows that, in a real-world and practical manner, 8Mb/s is a top-end average for upload (which is what you need to transmit files FROM the phone to an external storage device). That's only about 800KB/s – so it would take the better part of an hour to upload one 2.5GB movie file, which represents only 10 minutes of shooting! Not to mention the issues of Wi-Fi interference, dropped connections, etc.
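The arithmetic is simple enough to put in a few lines of Python (the 2.5GB-per-10-minutes figure is discussed further down; the throughput number is my own ballpark estimate, not a spec):

    # Rough transfer-time arithmetic for one 10-minute 1080p clip.
    # Assumptions: ~2.5GB per 10 minutes of footage, ~800KB/s effective Wi-Fi upload.
    clip_gb = 2.5
    throughput_kb_s = 800

    seconds = clip_gb * 1024 * 1024 / throughput_kb_s
    print("About %.0f minutes to move one 10-minute clip" % (seconds / 60))
    # -> roughly 55 minutes: far too slow to clear a phone between takes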

That brings us to cabled connections. Currently, the only way to move data off of (or on to, for that matter) an iPhone is to use a computer. While the Apple Time Capsule could in theory function as a direct-to-phone data storage device, it only connects via Wi-Fi. However, the method I chose only uses the computer as a 'connection link' to an external hard drive, so in my view it does not break my premise of an "all iOS" project. When I get to the editing stage, I just reverse the process and pull files back from the external drive through the computer back to the phone (in this case using iTunes).

I will discuss the precise technique and software used below, but suffice to say here that I used a PC as the computer – mainly just because that is the laptop that I have. It also does prove however that there is no issue of “Mac vs PC” as far as the computer goes. I feel this is an important point, as in many countries outside USA and Western Europe the price premium on Apple computers is such that they are very scarce. For this project, I wanted to make sure the required elements were as widely available as possible.

The choice of external storage is important for speed and reliability's sake. Since the USB connection from the phone to the computer is limited to v2.0 (480Mb/s theoretical), one may assume that just any USB2.0 external drive would be sufficient. That's not actually the case, as we shall see… While the 480Mb/s link speed of USB2.0 works out to a nominal 60MB/s, that is never matched in reality. USB chipsets in the internal hub in the computer, processing power in the phone and the computer, other processes running on the computer during transfer, bus and cpu speed in the computer, actual disk controller and disk speed of the external storage – all these factors serve to significantly reduce transfer speed.

Probably the most important is the actual speed of the external disk. Most common portable USB2.0 disks (the small 2.5″ format) run at 5400RPM, and have disk controller chipsets that are commensurate, with actual performance in the 5-10MB/s range. This is too slow for our purposes. The best solution is to use an external RAID array of two 'striped' disks [RAID 0] using high performance 7200RPM SATA disks with an appropriately designed disk controller. The G-RAID Mini system is a good example. If you are using a PC, get the best performance with an eSATA connection to the drive (my laptop has a built-in eSATA connector, but PC Card adapters are available that easily support this connectivity for computers that don't have it built in). This offers the highest performance (real-world tests show average write speeds of 115MB/s using this device). If you are using an Apple computer, opt for the FW800 connection (I'm not aware of eSATA on any Mac computer). While this limits the performance to around 70MB/s maximum, it's still much faster than the USB2.0 interface from the phone, so it's not an issue. I have found that having a significant amount of speed 'headroom' on the external drive is desirable – you just don't need the drive to slow things down any.

There are other viable alternatives for external drives, particularly if one needed a drive that did not require an external power supply (which the G-RAID does due to the performance). Keep in mind that while it’s possible to run a laptop and external drive all off battery power, you really won’t want to do this – for one, unless you are on a remote outdoor location shoot, you will have AC power – and disk writing at continuous high throughput is a battery killer! That said, a good alternative (for PC) is one of the Seagate GoFlex USB3.0 drives. I use a 1.5TB model that houses a high-performance 7200RPM drive and supports up to 50MB/s write speeds. For the Mac, Seagate has a Thunderbolt model. Although the Thunderbolt interface is twice as fast (10Gb/s vs 5Gb/s) as USB3.0 it makes no difference in transfer speed (these single drive storage devices can’t approach the transfer speeds of either interface). However, there is a very good reason to go with USB3.0/eSATA/Thunderbolt instead of USB2.0 – overall performance. With the newer high-speed interfaces, the full system (hard disk controller, interface chipset, etc.) is designed for high-speed data transfer, and I have proved to myself that it DOES make a difference. It’s very hard to find a USB2.0 system that matches the performance of a USB3.0/etc system – even on a 2.5″ single drive subsystem.
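If you want a quick sanity check of a candidate drive before shoot day, a few lines of Python will give you a rough sequential write number. This is my own crude test, not a proper benchmark, and the path is a placeholder for a folder on the drive you plan to dump footage onto:

    # Crude sequential write-speed check for an external drive (writes then deletes 1GB).
    import os, time

    target = "F:/speedtest.bin"                 # placeholder path - point at your dump drive
    chunk = b"\0" * (8 * 1024 * 1024)           # 8MB blocks
    total_mb = 1024                             # write 1GB in total

    start = time.time()
    with open(target, "wb") as f:
        for _ in range(total_mb // 8):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                    # make sure the data actually hit the disk
    elapsed = time.time() - start

    print("~%.0f MB/s sequential write" % (total_mb / elapsed))
    os.remove(target)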

The last thing to cover here under storage is backup. Your video footage is irreplaceable. Procedure will be covered below, but under hardware, provide a second external drive on the set. It’s simply imperative that you immediately back up the footage on to a second physical drive as soon as practical – NOT at the end of the day! If you have a powerful enough computer, with the correct connectivity, etc. – you can actually copy the iPhone files to two drives simultaneously (best solution), but otherwise plan on copying the files from one external drive to the backup while the next scenes are being shot (background task).
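If your computer has the horsepower and connections for the 'two drives at once' approach, it doesn't take much: here is a minimal Python sketch that reads each clip once and writes it to both drives in the same pass (all paths are placeholders):

    # Write each clip to the primary and backup drives in a single pass,
    # so the backup never lags behind the working copy.
    def copy_to_both(src, dest_a, dest_b, block=8 * 1024 * 1024):
        with open(src, "rb") as s, open(dest_a, "wb") as a, open(dest_b, "wb") as b:
            while True:
                chunk = s.read(block)
                if not chunk:
                    break
                a.write(chunk)
                b.write(chunk)

    copy_to_both("E:/DCIM/100APPLE/IMG_2334.mov",          # clip on the mounted iPhone
                 "F:/CraftLadies/Day1/IMG_2334.mov",       # primary external drive
                 "G:/CraftLadies_Backup/Day1/IMG_2334.mov")  # backup drive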

I’ll close with a final suggestion:  while this description of hardware and process is not meant in any way to be a tutorial on cinemaphotography, audio, etc. etc. – here is a small list (again, this is under ‘hardware’ as it concerns ‘stuff’) of useful items that will make your life easier “on the set”:

  • Proper transport cases, bags, etc. to store and carry all these bits. Organization, labeling, color-coding, etc. all helps a lot when on a set with lots of activity and other equipment.
  • Spare cables for everything! Murphy will see to it that the one item for which you have no duplicate will get bent during the shoot…
  • Plenty of power strips and extension cords.
  • Gorilla tape or camera tape (this is NOT ‘duct tape’). Find a gaffer and he/she will explain it to you…
  • Small folding table or platform (for your PC/Mac and drives) – putting high value equipment on the floor is asking for BigFoot to visit…
  • Small folding stool (appropriate for the table above), or an ‘apple box’ – crouching in front of computer while manipulating high value content files is distracting, not to mention tiring.
  • If you are shooting outside, more issues come into play. Dust is the big one. Cans of compressed air, lens tissue, camel-hair brushes, zip-lock baggies, etc. etc. – none of the items discussed in this entire post appreciate dust or dirt…
    • Cooling. Mentioned earlier, but you’ll need to keep the phone and computer as cool as practical (unless of course you are shooting in Scotland in February in which case the opposite will be true: trying to figure out how to keep things warm and dry in the middle of a wet and freezing moor will become paramount).
    • Special mention for ocean-front shoots:  corrosion is a deadly enemy of iPhones and other such equipment. Wipe down ALL equipment (with appropriate cloths and solutions) every night after the shoot. Even the salt air makes deposits on every exposed metal surface – and later on a very hard to remove scale will become apparent.
  • A final note for sunny outdoor shoots: seeing the iPhone screen is almost impossible in bright sunlight, and unlike DSLRs the iPhone does not have an optical viewfinder. Some sort of ‘sunshade’ will be required. While researching this online, I came across this little video that shows one possible solution. Obviously this would have to be modified to accommodate the audio adapter, iPro lenses, etc. shown in my project, but it will hopefully give you some ideas. (Thanks to triplelucky for this video).

Software:

As amazing as the hardware capabilities of the above system are (iPhone, supplemental lenses, audio adapters, etc.) – none of this would be possible without the sophisticated software that is now available for this platform at such low cost. The list of software that I am currently using to produce this video is purely of my own choosing – there may be other equally viable solutions for each step or process. I feel what is important is the possibility of the process, not the precise piece of kit used to accomplish the task. Obviously, as I am using the iOS platform, all the apps are “Apple iPhone/iPad compliant”. The reader that chooses an alternate platform will need to do a bit of research to find similar functionality.

As a parallel project, I am currently describing my experiences with the iPhone camera in general, as well as many of the software packages (apps) that support the iPhone still and video camera. These posts are elsewhere in this same blog location. For that reason, I will not describe in any detail the apps here. If software that is discussed or listed here is not yet in my stable of posts, please be patient – I promise that each app used in this project will be discussed in this blog at some point. I will refer the reader to this post where an initial list of apps that will be discussed is located.

Here is a short list of the apps I am currently using. I may add to this list before I complete this project! If so, I will update this and other posts appropriately.

Storyboard Composer Excellent app for building storyboards from shot or library photos, adding actors, camera motion, script, etc. Powerful.

Movie*Slate A very good slate app.

Splice Unbelievable – a full video editor for the iPhone/iPad. Yes, you can: drop movies and stills on a timeline, add multiple sound tracks and mix them, work in full HD, use loads of video and audio efx, add transitions, burn in titles, resize, crop, etc. etc. Now that doesn't mean that I would choose to edit my next feature on a phone…

Avid Studio  The renowned capability of Avid now stuffed into the iPad. Video, audio, transitions, etc. etc. Similar in capability to Splice (above) – I’ll have a lot more to say after these two apps get a serious test drive while editing all the footage I have shot.

iTC Calc The ultimate time code app for iDevices. I use on both iPad and iPhone.

FilmiC Pro Serious movie camera app for iPhone. Select shooting mode, resolution, 26 frame rates, in-camera slating, colorbars, multiple bitrates for each resolution, etc. etc.

Camera+ I use this as much for editing stills as shooting; its biggest advantage over the native iPhone camera app is that you can set different parts of the frame for exposure and focus.

almost DSLR is the closest thing to fully manual control of iPhone camera you can get. Takes some training, but is very powerful once you get the hang of it.

PhotoForge2 Powerful editing app. Basically Photoshop on the iPhone.

TrueDoF This one calculates true depth-of-field for a given lens, sensor size, etc. I use this to plan my range of focus once I know my shooting distance.

OptimumCS-Pro This is sort of inverse of the above app – here you enter the depth of field you want, then OCSP tells you the shooting distance and aperture you need for that.

Juxtaposer This app lets you layer two different photos onto each other, with very controllable blending.

Phonto One of the best apps for adding titles and text to shots.

Some of the above apps are designed for still photography only, but since stills can be laid down in the video timeline, they will likely come into use during transitions, effects, title sequences, etc.
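For the curious: the two depth-of-field apps in the list above (TrueDoF and OptimumCS-Pro) are doing variations on the classic thin-lens depth-of-field math, each with its own refinements. Here is the textbook version in a few lines of Python; the focal length, aperture and circle-of-confusion values below are just ballpark figures in the spirit of a small phone sensor, not official iPhone specs:

    # Standard thin-lens depth-of-field approximation (placeholder values).
    def depth_of_field(f_mm, N, coc_mm, subject_mm):
        """Return (near, far) limits of acceptable focus, in mm."""
        H = f_mm ** 2 / (N * coc_mm) + f_mm        # hyperfocal distance
        near = H * subject_mm / (H + (subject_mm - f_mm))
        if subject_mm >= H:
            far = float("inf")                     # everything out to infinity is sharp
        else:
            far = H * subject_mm / (H - (subject_mm - f_mm))
        return near, far

    near, far = depth_of_field(f_mm=4.3, N=2.4, coc_mm=0.004, subject_mm=1000)
    print("In focus from %.2f m to %.2f m" % (near / 1000, far / 1000))
    # -> roughly 0.66 m to 2.07 m for a subject at 1 m, with these example values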

I used FilmiC Pro as the only video camera app for this project. This was based primarily on personal preference and the capabilities that it provided (the ability to lock focus, exposure and white balance was critical to maintaining continuity across takes, in my opinion). Once I had selected a video camera app with which I was comfortable, I felt it important to use it on both of the iPhones – again for continuity of the content. There may be other equally capable apps for this purpose. My focus was on producing as high a quality product as possible within the means and capabilities at my disposal. The particular tools are less important than the totality of the process.

The process of dumping footage off the iPhone (transferring video files to external storage) requires some additional discussion. The required hardware has been mentioned above, now let’s dive into process and the required software. The biggest challenge is logistics: finding enough time in between takes to transfer footage. If the iPhones are the only cameras used, then in one way this is easier – you have control over the timeline in that regard. In my case, this was even more challenging, as I was ‘piggybacking’ on an existing shoot so I had to fit in with the timeline and process in place. Since professional video cameras all use removable storage, they only require a few minutes to effectively be ready to shoot again after the on-camera storage is full. But even if iPhones are the only cameras, taking long ‘time-outs’ to dump footage will hinder your production.

There are several ways to maximize the transfer speed of files off the iPhone, but the best way is to make use of time management: try to schedule dumping for normal 'down time' on the set (breaks, scene changes, wardrobe changes, meal breaks, etc.). In order to do this you need to have your 'transfer station' [computer and external drive] ready and powered up so you can take advantage of even a short break to clear files from the phone. I typically transferred only one to three files at a time, so in case we started up sooner than expected I was not stuck in the middle of a long transfer. The other advantage in my situation was that the iPhone charges while connected via USB cable, so I was able to accomplish two things at once: replenish the battery (depleted because the Mobile In audio adapter prevents shooting on line power) and dump the files to external storage.

My 2nd camerawoman, Tara, brought her MacBook Air for file transfer to an external USB drive; I used a Dell PC laptop (discussed above in the hardware section). In both cases, I found that using the native OS file management (Image Capture [part of the OS] for the Mac, Windows Explorer for the PC) was hideously slow. It does work: after plugging the iPhone into the USB connector on the computer, the iPhone shows up as just another external disk, and you can navigate down through a few folders and find your video files. On my PC (which BTW is a very fast machine – basically a 4-core mobile workstation that can routinely transfer files to/from external drives at over 150MB/s) the best transfer speed I could obtain with Windows Explorer amounted to needing almost an hour to transfer 10 minutes of video off the iPhone – a complete non-starter in this case. After some research, I located software from WideAngle Software called TouchCopy that solved my problem. They make versions for both Mac and PC, and it allowed transfer off the iPhone to external storage about 6x faster than Windows Explorer. My average transfer times were approximately 'real time' – i.e. 10 minutes of footage took about 10 minutes to transfer. There may be other similar applications out there – as mentioned earlier I am not in the software reviewing business – once I find something that works for me I will use that, until I find something "better/faster/cheaper."

To summarize the challenging file transfer issue:

  • Use the fastest hardware connections and drives that you can.
  • Use time management skills and basic logistics to optimize your ‘windows’ for file transfer.
  • Use supplemental software to maximize your transfer speed from phone to external storage.
  • Transfer in small chunks so you don’t hold up production.

The last bit that requires a mention is file backup. Your original footage is impossible to replace, so you need to take exquisite care with it. The first thing to do is back it up to a second external physical drive immediately after the file transfer. Typically I started this task as soon as I was done dumping files off the iPhone – it could run unsupervised during the next takes. However, one thing to consider before doing that (and this may depend on how much time you have during breaks): the relabeling of the video files. The footage is stored on your iPhone as a generically labeled .mov file, usually something like IMG_2334.mov – not a terribly insightful description of your scene/take. I never change the original label, only add to it. There is a reason… it helps to keep all the files in sequential order when starting the scene selection and editorial process later. This can be very helpful when things go a bit awry – as they always do during a shoot. For instance, if the slate is missing on a clip (you DO slate every take, correct??) having the original ‘shot order’ can really help place the orphan take into its correct sequence. In my case, this happened several times due to slate placement:  since my iPhone cameras were in different locations, sometimes the slate was pointed where it was in frame for the DSLR cameras but was not visible to the iPhones.

I developed a short-hand description, taken from the slate at the head of each shot, that I appended to the original file name. This takes only a few seconds (launch QuickTime or VLC, shuttle in to the slate, pause and read the slate info), but the sooner you do this, the better. If you have time to rename the shots before the backup, then you don’t have to rename twice – or face the possibility of human error during this task. Here is a sample of one of my files after renaming: IMG_2334_Roll-A1_EP1-1_T-3.mov  This is short for Roll A1, Episode 1, Scene 1, Take 3.
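For anyone who prefers to script this renaming step, here is a minimal sketch of the ‘append, never replace’ approach described above (Python; the folder path and slate values in the example are hypothetical):

    import os

    def rename_clip(folder, original_name, roll, episode, scene, take):
        # Append the slate info to the original iPhone file name, keeping the
        # IMG_nnnn prefix so the clips still sort in their original shot order.
        base, ext = os.path.splitext(original_name)     # e.g. 'IMG_2334', '.mov'
        new_name = "{}_Roll-{}_EP{}-{}_T-{}{}".format(base, roll, episode, scene, take, ext)
        os.rename(os.path.join(folder, original_name), os.path.join(folder, new_name))
        return new_name

    # Example (hypothetical path and slate values):
    # rename_clip("D:/Shoot/Day1", "IMG_2334.mov", "A1", 1, 1, 3)
    # -> "IMG_2334_Roll-A1_EP1-1_T-3.mov"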

However you go about this, just ensure that you back up the original files quickly. The last step of course is to delete the original video files off the iPhone so you have room for more footage. To double-check this process (you NEVER want to realize you just deleted footage that was not successfully transferred!!!) I do three things:

  1. Play into the file with headphones on to ensure that I have video and audio at head, middle and end of each clip. That only takes a few seconds, but just do it.
  2. Using Finder or Explorer, get the file size directly off the still-connected iPhone and compare it to the copied file on your external drive. Look at the actual file size, not ‘size on disk’ (your external disk may have different sector sizes than the iPhone). If they are different, re-transfer the file. (A minimal size-check sketch follows after this list.)
  3. Using the ‘scrub bar’, quickly traverse the entire file using your player of choice (Quicktime, VLC, etc.) and make sure you have picture from end to end in the clip.
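If you want to automate step 2, a sketch along these lines works – assuming the phone’s DCIM folder is reachable as an ordinary path on your system (it is when mounted on a Mac; on Windows the phone may appear as an MTP device instead, in which case compare your working copy against the backup copy). The paths are hypothetical:

    import os

    def verify_sizes(source_folder, backup_folder):
        # Compare actual byte sizes (not 'size on disk') of each .mov clip in the
        # source folder against the copy of the same name in the backup folder.
        mismatches = []
        for name in os.listdir(source_folder):
            if not name.lower().endswith(".mov"):
                continue
            src = os.path.join(source_folder, name)
            dst = os.path.join(backup_folder, name)
            if not os.path.exists(dst) or os.path.getsize(src) != os.path.getsize(dst):
                mismatches.append(name)   # re-transfer these before deleting anything
        return mismatches

    # Example (hypothetical paths):
    # print(verify_sizes("/Volumes/iPhone/DCIM/100APPLE", "/Volumes/ShootDrive/Day1"))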

Then and only then, double-check exactly what you are about to delete, offer a small prayer to your production spirit of choice, and delete the file(s).

Summary:

This is only the beginning! I will write more as this project moves ahead, but wanted to introduce the concept to my audience. A deep thanks to all of you who have read my past posts on various subjects, and please return for more of this journey. Your comments and appreciation provide the fuel for this blog.

Support and Contact Details:

Please visit and support the talented women that have enabled me to produce this experiment. This would not have been possible otherwise.

Tiffany Price, Writer, Producer, Actress
Lauren DeLong, Writer, Producer, Actress
Ambika Leigh, Director, Producer

The Perception of Privacy

June 5, 2012 · by parasam

Another in my series of posts on privacy in our connected world…  with a particular focus on photography and imaging

As I continue to listen and communicate with many others in our world – both ‘real’ and ‘virtual’ (although the lines are blurring more and more) – I recognize that the concept of privacy is rather elusive and hard to define. It changes all the time. It is affected by cultural norms, age, education, location and upbringing. There are differing perceptions of personal privacy vs collective privacy. Among other things, this means that most often, heavy-handed regulatory schemes by governments will fail – since, by the very nature of a centralized entity, a one-size-must-fit-all solution will never work well in this regard.

A few items that have recently made news show just how far, and how fast, our perception of privacy is changing – and how comfortable many of us are now with a level of social sharing that would have been unthinkable just a few years ago. An article (here) explains ‘ambient video’ as a new way that many young people are ‘chatting’ using persistent video feeds. With technologies such as Skype and OoVoo that allow simultaneous video ‘group calls’ – teenagers are coming home from school, putting on the webcam and leaving it on in the background for the rest of the day. The group of connected friends are all ‘sharing’ each other’s lives, in real time, on video. If someone has a problem with homework, they just shout out to the ‘virtual room’ for help. [The implications for bandwidth usage on the backbone of networks for connecting millions of teens with simultaneous live video will be reserved for a future article!]

More and more videos are posted to YouTube, Vimeo and others now that are ‘un-edited’ – we appear, collectively, to be moving to more acceptance of a casual and ‘candid’ portrayal of our daily lives. Things like FaceTime, Skype video calls and so on make us all more comfortable with sharing not only our voices, but our visual surroundings during communication. Maybe this shouldn’t be so surprising, since that is what conversation was ‘back in the day’ when face-to-face communication was all there was…

We are surrounded by cameras today:  you cannot walk anywhere in a major city (or even increasingly in small towns) without being recorded by thousands of cameras. Almost every street corner now has cameras on the light poles, every shop has cameras, people by the billions have cellphone cameras, not to mention Google (with StreetView camera cars, GoogleEarth, etc.)  One of the odd things about cameras and photography in general is that our perceptions are not necessarily aligned with logic. If I walk down a busy street and look closely at someone, even if they see me looking at them, there might be either complete disregard, or at most a glance implying “I see you seeing me” – and life moves on. If I repeat the same action but take that person’s picture with a big DSLR and a 200mm lens I will almost certainly get a different reaction, usually one that implies the subject has a different perception of being ‘seen’ by a camera than by a person. If I repeat the action again with a cellphone camera, the typical reaction is somewhere in between. Logically, there is no difference: one person is seeing another; the only difference is a record in a brain, a small sensor or a bigger sensor.

Emotionally, there is a difference, and therein lies the title of this post – The Perception of Privacy. Our interpretations of reality govern our response to that reality, and these are most often colored by past history, perceptions, feelings, projections, etc. etc.  Many years ago, some people had an unreasonable fear of photography, feeling that it ‘took’ something from them. In reality we know this to be a complete fallacy:  a camera captures light just like a human eye (well, not quite, but you get the idea). The sense of permanence – that a moment could be frozen and looked at again – was the difference. With video, we can now record whole streams of ‘moments’ and play them back. But how different really is this from replaying an image in one’s head, whether still or moving? Depending on one’s memory, not very different at all. What is different then? The fact that we can share these moments… Photography, for the first time, gave us a way to socialize one person’s vision of a scene with a group. It’s one thing to try to describe in words to a friend what you saw – it’s a whole different effect when you can share a picture.

Again, we need to see the logic of the objective situation:  if a large group shares a visual experience (watching a street performer for example) what is the difference between direct vision and photography? Here, the subject should feel no difference, as this is already a ‘shared visual experience’ – but if asked, almost every person would say it is different, in some way. There is still a feeling that a photograph or video is different from even a crowd of people watching the same event. Once again, we have to look at what IS different – and the answer can only be that not only can a photo be shared, but it can be shared ‘out of time’ with others. The real ‘difference’ then of a photo or video of a person or an event is that it can be viewed in a different manner than ‘in the moment’ of occurrence.

As our collective technology has improved, we now can share more efficiently, in higher resolution, than in the days of campfire songs and tales. Books, newspapers, movies, photos, videos… it’s amazing to think just how much of technology (in the largest sense – not just Apple products!) has been focused on methods for improving the sharing of human thought, voice and image. We are extremely social creatures and appear to crave, at a molecular level, this activity. In many cultures today, we see a far more relaxed and tolerant attitude towards sharing of expression and appearance (nudity / partial nudity, no makeup, candid or casual appearance in public, etc. etc.) than existed a decade ago. We are becoming more comfortable in ‘existing’ in public – whether that ‘public’ is a small group of ‘friends’ or the world at large.

One way of looking at this ‘perception of privacy’ is through the lens of a particular genre of photography:  street photography. While, like most descriptions of a genre, it’s hard to pin down – basically this has evolved to mean candid shots in public – sort of ‘cinema vérité’ in a still photo. Actually, the term paparazzi describes a ‘sub-group’ of this genre, with their focus typically limited to ‘people of note’ (fashion, movie, sports personalities) – whose likenesses can be sold to magazines. While this small section has undoubtedly overstepped the bounds of acceptable behavior in some cases, it should not be allowed to taint the larger genre of artistic practice.

The facts about what’s legally permissible for ‘street photography’ do vary by state and country, but for most of the USA here are the basics – and just like other perceptions surrounding photography, they may surprise some:

  • Basically, as the starting premise, anything can be photographed at any time, in any place where there is NOT a ‘reasonable expectation of privacy’.
  • This means that, similar to our judicial system where ‘innocent until proven guilty’ is the byword, in photography the assumption is that it is always permissible to take a picture, unless specifically told not to by the owner of the property on which you are standing, by posted signs, or if you are taking pictures of what would generally be accepted as ‘private locations’ – and interestingly there are far fewer of these than you might think.
  • The practice of public photography is strongly protected in our legal system under First Amendment rulings, and has been litigated thousands of times – with most of the rulings coming down in the favor of the photographer.
  • Here are some basic guidelines:  [and, I have to say this:  I am not a lawyer. This is not legal advice. This is a commentary and reporting on publicly available information. Please consult an attorney for specific advice on any legal matter].
    • Public property, in terms of photography, is “any location that offers unfettered access to the public, and where there is not a reasonable expectation of privacy”
    • This means that, in addition to technically public property (streets, sidewalks, public land, beaches, etc. etc.), malls, shops, outdoor patios of restaurants, airports, train stations, ships, etc. etc. are all ‘fair game’ for photos, unless specifically signposted to the contrary, or if the owner (or a representative such as a security guard) asks you to refrain from photography while on their private property.
    • If the photographer is standing on public property, he or she can shoot anything they can see, even if the object of their photography is on private property. This means that it is perfectly legal to stand on the sidewalk and shoot through the front window of a residence to capture people sitting on a sofa… or for those low flying GoogleEarth satellites to capture you sun-bathing in your back yard… or to shoot people while inside a car (entering the car is forbidden, that is clearly private property).
    • In many states there are specific rulings about areas within ‘public places’ that are considered “areas where one has a reasonable expectation of privacy” such as restrooms, changing rooms, and so on. One would think that common sense and basic decorum would suffice… but alas the laws had to be made…
    • And here’s an area that is potentially challenging:  photography of police officers ‘at work’ in public. It is legal. It has been consistently upheld in the courts. It is not popular with many in police work, and often photographers have been unjustifiably hassled, detained, etc. – but ‘unless a clear and obvious threat to the security of the police officer or the general public would occur due to the photography’ this is permitted in all fifty states.
    • Now, some common sense… be polite. If requested to not shoot, then don’t. Unless you feel that you have just captured the next Pulitzer (and you did it legally), then go on your way. There’s always another day, another subject.
    • It is not legal for a policeman, security guard or any other person to demand your camera, film, memory cards – or even to demand to be shown what you photographed. If they attempt to take your camera they can be prosecuted for theft.
    • One last, but very important, item:  laws are local. Don’t get yourself into a situation where you are getting up close and personal with the inside of a Ugandan jail… many foreign countries have drastically different laws on photography (and even in places where national law may permit, local police may be ignorant… and they have the keys to the cell…)  Always check first, and balance your need for the shot against your need for freedom… 🙂

What this all shows is that photography (still or moving) is accepted, even at the legal level, as a fundamental right in the US. That’s actually a very interesting premise, as not many things are specifically called out in this way. Most other practices are not prohibited, but very few are specifically allowed. For instance, there is no specific legal right to carpentry, although of course it is not prohibited. The fact that imaging, along with reporting and a few other activities, is specifically allowed points to the importance of these social activities within our culture.

The public/private interface is fundamental to literally all aspects of collective life. This will be a constantly evolving process – and it is being pushed and challenged now at a rate that has never before existed in our history – mainly due to the incredible pace of technological innovation. While I have focused most of this discussion on the issues of privacy surrounding imaging, the same issues pertain to what is now called Big Data – that collection of data that describes YOU – what you do, what you like, what you buy, where you go, who you see, etc. Just as in imaging, the basic tenet of Big Data is “it’s ok unless specifically prohibited.” While that is under discussion at many levels (with potentially some changes from ‘opt out’ to ‘opt in’), many of the same issues of ‘what is private’ will continue to be open.

A few comments on the iPhone camera posts…

April 13, 2012 · by parasam

I have just posted the third of my ongoing series of discussions on iPhone camera apps (Camera Plus Pro). Thanks to all of you around the world who have taken the time and interest to read my most recent post on Camera+.  To date I have had about 7,500 views from 93 countries – that is a fantastic response! Please share with your friends:  according to the developers of Camera+ they have sold about 10 million copies of their app, so that means there are millions more of you out there that might like to have a manual for this app (which does not come with one) – my blog serves as at least a rudimentary guide for this cool app.

It’s taken several weeks to get this next one written; hopefully the rest will come a bit faster. Apps that have a lot of filters require more testing (and a lot of image uploading – the most recent post on Camera Plus Pro has 280 images!). BTW, I know this makes the posts a bit on the large side, and increases your download times. However, since the subject matter is comparing details of color, tonal values, etc., I feel that high resolution images are required for the reader to gain useful information, so I believe the extra time is worth it.

Although this list is contained in my intro to the iPhone camera app software, here are the apps for which I intend to post analysis and discussions:

Still Imaging Apps:

  • Camera
  • Camera+
  • Camera Plus Pro
  • almost DSLR
  • ProHDR
  • Big Lens
  • Squareready
  • PhotoForge2
  • Snapseed
  • TrueDoF
  • OptimumCS-Pro
  • Iris Photo Suite
  • Filterstorm
  • Genius Scan+
  • Juxtaposer
  • Frame X Frame
  • Phonto
  • SkipBleach
  • Monochromia
  • MagicShutter
  • Easy Release
  • Photoshop Express
  • 6×6
  • Camera!

Motion Imaging Apps:

  • Movie*Slate
  • Storyboard Composer
  • Splice
  • iTC Calc
  • FilmiC Pro
  • Camera
  • Camera Plus Pro
  • Camcorder Pro

The above apps are selected only because I use them. I am not a professional reviewer, have no relationship with any of the developers of the above apps, am not paid or otherwise motivated externally. I got started on this little mission as I love photography, science, and explaining how things work. At first, I just wanted to know what made the iPhone tick… as my article on the hardware explains, that was more of a mission than I had counted on… but fun! I then turned to the software that makes the hardware actually do something useful… and here we are.

The choice of apps is strictly personal – this is just what I have found useful to me so far. I am sure there are other apps that are equally good – and I will leave it to others to discuss them. It’s a big world – lots of room for lots of writing… Undoubtedly I will add things from time to time, but this is a fair list to start with!

Readers like you (and so many thanks to those that have commented!) are what bring me back to the keyboard. Please keep the comments coming. If I have made errors, or confused you, please let me know so I can correct that. Blogs are live things – continually open to reshaping.

Thanks!

iPhone4S – Section 4c: Camera Plus Pro app

April 13, 2012 · by parasam

This app is similar in design and function to Camera+ (but not made by the same developers). (This version costs $1.99 at the time of this post – $1 more than Camera+.) The biggest differences are:

  • Ability to tag photos
  • More setup options on selections (self-timer, burst mode, resolution, time lapse, etc.)
  • More sharing options
  • Ability to add date and copyright text to photo
  • A ‘Quick Roll’ (light table type function) has 4 ‘bins’ (All, Photos, Video, Private – can be password protected)
  • Can share photos via WiFi or FTP
  • Bing search from within app
  • Separate ‘Digital Flash’ filter with 3 intensity settings
  • Variable ‘pro’ adjustments in edit mode (Brightness, Saturation, Hue, Contrast, Sharpness, Tint, Color Temperature)
  • Different filters than Camera+, including special ‘geometric distortion’ filters
  • Quick Roll design for selecting which photos to Edit, Share, Sync, Tag, etc.
  • Still Camera Functions  [NOTE: the Video Camera functions will be discussed separately later in this series when I compare video apps for the iPhone]
    • Ability to split Focus area from Exposure area
    • Can lock White Balance
    • Flash: Off/On (for the 4 & 4S); this feature changes to “Soft Flash” for iPhone 3GS and Touch 4G.
    • Front or Rear camera selection
    • Digital Zoom
    • 4 Shooting Modes: Normal/Stabilized/Self-Timer/Burst (part of the below Photo Options menu)
    • Photo options:
      • Sound On/Off
      • Zoom On/Off
      • Grid Lines On/Off
      • Geo Tags On/Off
      • SubMenu:
        • Tags:  select and add tags from list to the shot; or add a new tag
        • Settings:  a number of advanced settings for the app
          • Photos:
            • Timer (select the time delay for self-timer: 2-10 seconds in 1 second increments)
            • Burst Mode (select the number of pictures taken when in burst mode: 3-10)
            • Resolution (Original [3264×2448]; Medium [1632×1224]; Low [816×612]) – NOTE: these resolutions are for the iPhone4S; each hardware model supported by this app has a different set of resolutions determined by its sensor. Essentially it is Full, Half and Quarter resolution. The exact numbers for each model are in the manual.
            • Copyright (sets the copyright text and text color)  [note: this is a preset – the actual ‘burn in’ of the copyright notice into the image is controlled during Editing]
            • Date (toggle date display on/off; set date format; text color)
        • Videos (covered in later section)
        • Private Access Restriction (Set or Change password for the Private bin inside the Quick Roll)
        • Tags (edit, delete, add tag names here)
        • Share (setup and credentials for social sharing services are entered here):
          • Facebook
          • Twitter
          • Flickr
          • Picasa
          • YouTube (for videos)
        • Review (a link to review the app)
      • Info:
        • Some ‘adware’ is here for other apps from this vendor, and a list of FAQs, Tips, Tricks (all of which are also in the manual available for download as a pdf from here)
  • Live Filters:
    • A set of 18 filters that can be applied before taking your shot, as opposed to adding a filter after the shot during Editing.
      • BW
      • Vintage
      • Antique
      • Retro
      • Nostalgia
      • Old
      • Holga
      • Polaroid
      • Hipster
      • XPro
      • Lomo
      • Crimson
      • Sienna
      • Emerald
      • Bourbon
      • Washed
      • Arctic
      • Warm
    • A note on image quality using Live Filters. A bit more about the filters will be discussed below when we dive into the filter details, but some test shots using various Live Filters show a few interesting things:
      • The pixel resolution stays the same whether the filter is on or off (3264×2448 in the case of the iPhone4S).
      • While the Live Filter function is fully active during preview of an image, once you take the shot there is a delay of about 3 seconds while the filtering is actually applied to the image. Some moving icons on the screen notify the user. Remember that the screen is 960×640 while the full image is 3264×2448 (13 X larger!) so it takes a few seconds to filter all those additional pixels.
      • This does mean that when using Live Filter you can’t use Burst Mode (it is turned off when you turn on a Live Filter), and you can’t shoot that rapidly.
      • Although the pixel dimensions are unchanged, the size of the image file is noticeably smaller when using Live Filters than when not. This can only mean that the jpeg compression ratio is higher (same amount of input data; smaller output data; compression ratio mathematically must be higher).
      • I first noticed this when I went to email myself a full resolution image from my phone to my laptop [faster for one or two pix than syncing with iTunes] as I’m researching for this blog – the images were on average 1.7MB instead of the 2.7MB average for normal iPhone shots.
      • I tested against four other camera apps, including the native Camera app from Apple, and all of them delivered images averaging 2.7MB per image.
      • I then tested this app (Camera Plus Pro) in Unfiltered mode, and the size of the output file jumps up to an average of 2.3MB per image. Not as high as most of the others, but about 35% larger than the Live Filter output – which means the Live Filter images must be compressed correspondingly harder. I’ll run some more objective tests during the filter analysis section below, but both in file size and visual observation, the filtered images appear more highly compressed.
      • This does not mean that a more compressed picture is inferior, or softer, etc. – it is highly dependent on subject material, lighting, etc. But, what is true is that a more highly compressed picture will tend to show artifacts more easily in difficult parts of the frame than will the same image at a lower compression ratio.
      • Just all part of my “Know Your Tools” motto…
  • Edit Functions
    • Crop
      • Freeform (variable aspect ratio)
      • Square (1:1 aspect ratio)
      • Rectangular (2:3 aspect ratio) [portrait]
      • Rectangular (3:2 aspect ratio) [landscape]
      • Rectangular (4:3 aspect ratio) [landscape]
  • Rotation
    • Flip Horizontal
    • Right
    • Left
    • Flip Vertical
  • Digital Flash [a filter that simulates flash illumination]
    • Small
    • Medium
    • Large
  • Adjust [image parameter adjustments]
    • Brightness
    • Saturation
    • Hue
    • Contrast
    • Sharpness
    • Tint
    • Color Temperature
  • Effects
    • Nostalgia – 9 ‘retro’ effects
      • Coffee
      • Retro Red
      • Vintage
      • Nostalgia
      • Retro
      • Retro Green
      • 70s
      • Antique
      • Washed
    • Special – 9 custom effects
      • XPro
      • Pop
      • Lomo
      • Holga
      • Diana
      • Polaroid
      • Rust
      • Glamorize
      • Hipster
    • Color – 9 tints
      • Black & White
      • Sepia
      • Sunset
      • Moss
      • Lucifer
      • Faded
      • Warm
      • Arctic
      • Allure
    • Artistic – 9 special filters
      • HDR
      • Fantasy
      • Vignette
      • Grunge
      • Pop Art
      • GrayScale
      • Emboss
      • Xray
      • Heat Signature
    • Distortion – 9 geometric distortion (warping) filters
      • Center Offset
      • Pixelate
      • Bulge
      • Squeeze
      • Swirl
      • Noise
      • Light Tunnel
      • Fish Eye
      • Mirror
  • Borders
    • Original (no border)
    • 9 border styles
      • Thin White
      • Rounded Black
      • Double Frame
      • White Frame
      • Polaroid
      • Stamp
      • Torn
      • Striped
      • Grainy

Camera Functions

[Note:  Since this app has a manual available for download that does a pretty fair job of describing the features and how to access and use them, I will not repeat that information here. I will discuss and comment on the features where I believe this will add value to my audience. You may want to have a copy of the manual available for clarity while reading this blog.]

The basic use and function of the camera is addressed in the manual; what I will discuss here are the Live Filters. I have run a series of tests to attempt to illustrate the use of the filters, and to provide some basic analysis of each filter to help the user understand how the image will be affected by the filter choice. The resolution of the image is not reduced by the use of a Live Filter – in my case (testing with the iPhone4S) the resultant images are still 3264×2448 – native resolution. There are of course the effects of the filter, which in some cases can reduce apparent sharpness, etc.

A note on my testing procedure:  In order to present a uniform set of comparison images to the reader, and have them be similar to my standard test images, the following steps were taken:

Firstly:  the standard test images that I use to analyze filters/scenes/etc. for any iPhone camera app consist of two initial test images:  a technical image (a calibrated color and grayscale chart), and a ‘real-world’ image – a photo I shot of a woman in the foreground with a slightly out-of-focus background. The shot has a wide range of lighting, color, a large amount of skin tone for judging how a given filter changes that important parameter, and a fairly wide exposure range.

The original source for the calibration chart was a precision 35mm slide (Kodak Q60, Ektachrome) that was scanned on a Nikon Super Coolscan 5000ED using Silverfast custom scanner software. The original image was scanned at 4000dpi, yielding a 21megapixel image sampled at 16bits per pixel. This image was subsequently reduced in gamut (from ProPhotoRGB to sRGB), size (to match the native iPhone4S resolution of 3264×2448) and bit depth (8bits per pixel). The image processing was performed using Photoshop CS5.5 in a fully color-calibrated workflow.

The source for the ‘real-world’ image was initially captured using a Nikon D5000 DSLR fitted with a Nikkor 200mm F2.8 prime lens (providing an equivalent focal length of 300mm compared to full-frame 35mm – the D5000 is a 2/3 size sensor [4288×2848]). The exposure was 1/250 sec @ f5.6 using camera raw format – no compression. That camera body captures in sRGB color space, and although it outputs a 16-bit per pixel format, the sensor is really not capable of anything more than 12 bits in a practical sense. The image was processed in Photoshop CS5.5 in a similar manner as above to yield a working image of 3264×2448, 8 bits per pixel, sRGB.

These image pairs are used throughout my blog for analyzing filters, by importing them into each camera app as files.

For this test of Live Filters, I needed to actually shoot with the iPhone, since there is no way using this app to apply the Live Filters to a pre-existing image. To replicate the images discussed above as closely as possible, the following procedure was used:

For the calibration chart, the same source image was used (Kodak Q60), this time as a precision print in 4″x5″ size. These prints were manufactured by Kodak under rigidly controlled processes and yield a highly accurate reflective target. (Most unfortunately, with the demise of Kodak, and film/print processing in general, these are no longer available. Even with the best of storage techniques, prints will fade and become inaccurate for calibration. It will be a challenge to replace these…)  I used my iPhone4S to make the exposures under controlled lighting (special purpose full-spectrum lighting set to 5000°K).

For the ‘real-world’ image, I wanted to stay with the same image of the woman for uniformity, and it provides a good range of test values. To accomplish that (and be able to take the pictures with the iPhone) was challenging, since the original shot was impossible to duplicate in real life. I started with the same original high resolution image (in Photoshop) in its original 16bit, high-gamut format. I then printed that image using a Canon fine art inkjet printer (Pixma Pro 9500 MkII), using a 16 bit driver, onto high quality glossy photo paper at a paper size of 13″ x 19″. At a print density of 267dpi, this yielded an image of over 17megapixels when printed. The purpose was to ensure that no subsampling of printed pixels would occur when photographed by the 8megapixel sensor in the iPhone. [Nyquist sampling theory demands a minimum of 2x sampling – 16megapixels in this case – to ensure that.] I photographed the image with the same controlled lighting as used above for the calibration chart. I made one adjustment to each image for normalization purposes:  I mapped the highest white level in the photograph (the clipped area on the subject’s right shoulder – which was pure white in the original raw image) to just reach pure white in the iPhone image. This matched the tonal range for each shot, and made up for the fact that even with a lot of light in the studio it wasn’t enough to fully saturate the little tiny iPhone sensor. No other adjustments of any kind were made. [This adjustment was carried out by exporting the original iPhone image to Photoshop to map the levels].

While even further steps could have been taken to make the process more scientifically accurate, the purpose here is one of relative comparison, not absolute measurement, so I feel the steps taken are sufficient for this exercise.
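For those who like to see the arithmetic behind the print-size choice, the numbers from the paragraphs above work out as follows (plain arithmetic, no new assumptions):

    sensor_px = 3264 * 2448              # iPhone 4S sensor: ~8.0 megapixels
    nyquist_px = 2 * sensor_px           # 2x sampling minimum: ~16.0 megapixels
    print_px = (13 * 267) * (19 * 267)   # 13" x 19" print at 267 dpi: ~17.6 megapixels
    print(print_px > nyquist_px)         # True - the print out-resolves the sensor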

The Live Filters:

Live Filter = BW

Live Filter = BW

The BW filter provides a monochrome adaptation of the original scene. It is a high contrast filter; this can clearly be seen in the test chart, where columns 1-3 are solid black, as well as all grayscale chips from 19-22. Likewise, on the highlight end of the scale, chips 1-3 have no differentiation. The live image shows this as well, with strong contrast throughout the scene.

Live Filter = Vintage

Live Filter = Vintage

The Vintage filter is a warming filter that adds a reddish-brown cast to the image. It increases the contrast some (not nearly as much as the previous BW filter) – this can be seen in the chart in the area of columns 1-2 and rows A-J. The white and black ends of the grayscale are likewise compressed. Any cool pastel colors either turn white or a pale warm shade (look at columns 9-11). The live image shows these effects, note particularly how the man’s blue shirt and shorts change color remarkably. The increase in contrast, coupled with the warming tint, does tend to make skin tones blotchy – note the subject’s face and chest.

Live Filter = Antique

Live Filter = Antique

The Antique filter offers a large amount of desaturation, a cooling of what color remains, and an increase in contrast. Basically, only pinks and navy blues remain in the color spectrum, and the chart shows the clipping of blacks and whites. The live image shows very little saturation, only some dark blue remains, with a faint pink tinge on what was originally the yellow sign in the window.

Live Filter = Retro

Live Filter = Retro

The Retro filter attempts to recreate the look of cheap film cameras of the 1960’s and 1970’s. These low quality cameras often had simple plastic lenses, light leaks due to imperfect fit of components, etc. The noticeable chromatic aberrations of the lens and other optical ‘faults’ have now seen a resurgence as a style, and that is emulated with digital filters in this and others shown below. This particular filter shows a general warming, but with a pronounced red shift in the low lights. This is easily observable in the gray scale strip on the chart.

Live Filter = Nostalgia

Live Filter = Nostalgia

Nostalgia offers another variation on early low-cost film camera ‘look and feel’. As opposed to the strong red shift in the lowlights of Retro, this filter shifts the low-lights to blue. There is also an increase in saturation of both red and blue – notice that in the chart the green column, #18, hardly has any change in saturation from the original, while the reds and blues show noticeable increases, particularly in the low-lights. The highlights have a general warming trend, shown in the area bounded by columns 13-19 and rows A-C. The live shot shows the strong magenta/red shift that this filter caused on skin tones.

Live Filter = Old

Live Filter = Old

The Old filter applies significant shifts to the tonal range. It’s not exactly a high contrast filter, although that result is apparent in the ratio of the highlight brightness to the rest of the picture. There is a strong overall reduction in brightness – in the chart all differentiation is lost below chip #16. There is also desaturation; this is more obvious when studying the chart. The highlights, as in many of these filter types, are warmed toward the yellow spectrum.

Live Filter = Holga

Live Filter = Holga

The Holga filter is named after the all-plastic camera of the same name – from Hong Kong in 1982. A 120 format roll-film camera, the name comes from the phrase “ho gwong” – meaning ‘very bright’. The marketing people twisted that phrase into HOLGA. The actual variations show a warming in the highlights and cooling (blue) in the lowlights. The contrast is also increased. In addition, as with many of the Camera Plus Pro filters, there is a spatial element as well as the traditional tonal and chromatic shifts:  in this case a strong red tint in one corner of the frame. My tests appear to indicate that the placement of this (which corner) is randomized, but the actual shape of the red tint overlay is relatively consistent. Notice that in the chart the overlay is in the upper right corner, while in the live shot it moved to the lower right. There is also desaturation; this is noticeable in her skin, as well as in the central columns of the chart.

Live Filter = Polaroid

Live Filter = Polaroid

The Polaroid filter mimics the look of one of the first ‘instant gratification’ cameras – the forerunner of digital instant photography. The PLC look (Polaroid Land Camera) was contrasty with crushed blacks, tended towards blue in the shadows, and had slightly yellowish highlights. This particular filter has a pronounced magenta shift in the skin tones that is not readily apparent from the chart – one of the reasons I always use these two different types of test images.

Live Filter = Hipster

Live Filter = Hipster

The Hipster filter effect is another of the digital memorials to the original Hipstamatic camera – a cheap all plastic 35mm camera that shot square photos. Copied from an original low-cost Russian camera, it was produced in only 157 units by the two brothers that invented it. The camera cost $8.25 in 1982 when it was introduced. With a hand-molded plastic lens, this camera was another of the “Lo-Fi” group of older analog film cameras whose ‘look’ has once again become popular. The CameraPlusPro version shows pronounced red in the midtones, crushed blacks (see columns 1-2 in the chart and chips #18 and below), along with increased contrast and saturation. In my personal view, this look is harsher and darker than the actual Hipstamatic film look, which tended towards raised blacks (a common trait of cheap film cameras, the backs always leaked a bit of light so a low level ‘fog’ of the film base always tended to raise deep blacks [areas of no light exposure in a negative] to a dull gray); a softer look (lower contrast due to raised blacks) and brighter highlights. But that’s purely a personal observation – the naming of filters is arbitrary at best, which is why I like to ‘look under the hood’ with these detailed comparisons.

Live Filter = XPro

Live Filter = XPro

The XPro filter as manifested by the CameraPlusPro team looks very similar to their Nostalgia version, but the XPro has highlights that are more white than the yellow of Nostalgia. The term XPro comes from ‘cross-process’ – what happens when you process film in the wrong developer, for instance developing E-6 transparency film in C-41 color negative chemistry. The effects of this process are highly random, although there is a general tendency towards high contrast, unnatural colors, and staining. In this instance, the whites are crushed a bit, blacks tend blue, and contrast is raised.

Live Filter = Lomo

Live Filter = Lomo

The Lomo filter effect is designed to mimic some of the style of photograph produced by the original LOMO Plc camera company of Russia (Leningrad Optical Mechanical Amalgamation). This was a low cost automatic 35mm film camera. While still in production today, this and similar cameras account for only a fraction of LOMO’s production – the bulk is military and medical optical systems – and are world class… Due to the low cost of components and production methods, the LOMO camera exhibited frequent optical defects in imaging, color tints, light leaks, and other artifacts. While anathema to professional photographers, a large community that appreciates the quirky effects of this (and other so-called “Lo-Fi” or Low Fidelity) cameras has sprung up with a world-wide following. Hence the Lomo filter…

This particular instance shows increased contrast and saturation, warming in the highlights, green midtones, and like some other CameraPlusPro filters, an added spatial effect (the red streak – again randomized in location, it shows in upper left in the chart, lower right in the live shot). [Pardon the pilot error:  the soft focus of the live shot was due to faulty autofocus on that iPhone shot – but I didn’t notice it until comping the comparison shots several days later, and didn’t have the time to reset the environment and reshoot for one shot. I think the important issues can be resolved in spite of that, but did not want my readers to assume that soft focus was part of the filter!]

Live Filter = Crimson

Live Filter = Crimson

The Crimson filter is, well, crimson! A bit overstated for my taste, but if you need a filter to make your viewers think of “The Shining” then this one’s for you! What more can I say. Red. Lots of it.

Live Filter = Sienna

Live Filter = Sienna

The Sienna filter always makes me think of my early art school days, when my well-meaning parents thought I needed to be exposed to painting… (burnt sienna is a well-known oil pigment, an iron oxide derivative that is reddish-brown. My art instructor said “think tree trunks”.)   Alas, it didn’t take me (or my instructor) long to learn that painting with oils and brushes was not going to happen in this lifetime. Fortunately I discovered painting with light shortly after that, and I’ve been in love with the camera ever since. The Sienna as shown here is colder than the pigment, a somewhat austere brown. The brown tint is more evident in the lowlights; the whites warm up just slightly. As in many of the CameraPlusPro filters, the blacks are crushed, which creates an overall look of higher contrast, even if the midtone and highlight contrast levels are unchanged (look at the grayscale in the chart). There is also an overall desaturation.

Live Filter = Emerald

Live Filter = Emerald

Emerald brings us, well, green… along with what should now be familiar:  crushed blacks, increased contrast, desaturation.

Live Filter = Bourbon

Live Filter = Bourbon

The Bourbon filter resembles the Sienna filter, but has a decidedly magenta cast in the shadows, while the upper midtones are yellowish. The lowered saturation is another common trait of the CameraPlusPro filters.

Live Filter = Washed

Live Filter = Washed

The Washed filter actually looks more like ‘unwashed’ print paper to me.. Let me explain:  before the world of digits descended on photography, during the print process (well, this applies to film as well but the effect is much better known in the printing process), after developing, stopping and fixing, you need to wash the prints. Really, really well. For a long time, like 30-45 minutes under flowing water. This is necessary to wash out almost all of the residual thiosulfate fixing chemical – if you don’t, your prints will age prematurely, showing bleaching and staining, due to the slow annihilation of elemental silver in the emulsion by the remaining thiosulfate. The prints will end up yellowed and a bit faded, in an uneven manner. In this digital approximation, the biggest difference is (as usual for this filter set) the crushed blacks. In the chemical world, just the opposite would occur, as the blacks in a photographic print have the highest accumulation of silver crystals (that block light or cover up the white paper underneath). The other attributes of this particular filter are: strongly yellowed highlights, lowlights tend to blue, increased contrast and raised saturation.

Live Filter = Arctic

Live Filter = Arctic

This Arctic filter looks cold! Unlike the true arctic landscape (which is subtle but has an amazing spectrum of colors), this filter is actually a tinted monochrome. The image is first reduced to black and white, then tinted with a cold blue. This is very clear by looking at the chart. It’s an effect.

Live Filter = Warm

Live Filter = Warm

After looking so cold in the last shot, our subject is better when Warm. Slightly increased saturation and a yellow-brown cast to the entire tonal range are the basic components of this filter.

Edit Functions

This app has 6 groups of edit functions:  Crop, Rotate, Flash, Adjust, Filters and Borders. The first two are self-evident, and are more than adequately explained in the manual. The “how-to” of the remaining functions I will leave to the manual, what will be discussed here are examples of each variable in the remaining four groups.

Flash – also known as “Digital Flash” – a filter designed to brighten an overly dark scene. Essentially, this filter attempts to bring the image levels up to what they might have been if a flash had been used to take the photograph initially. As always, this will be a ‘best effort’ – nothing can take the place of a correct exposure in the first place. The most frequent ‘side effects’ of this type of filter are increased noise in the image (since the image was dark in the first place – and therefore would have substantial noise due to the nature of CCD/CMOS sensors), raising the brightness level will also raise the appearance of the noise; and white clipping of those areas of the picture that did receive normal, or near-normal, illumination.

This app supports 3 levels of ‘flash’ [brightness elevation] – I call this ‘shirt-sizing’ – S, M, L.  Below are 4 screen shots of the Flash filter in action: None, Small, Medium, Large.

This filter attempts to be somewhat realistic – it is not just an across-the-board brightness increase. For instance, objects that are very dark in the original scene (such as her handbag or the interior revealed by the doorway in the rear of the scene) are only increased slightly in level, while midtones and highlights are raised much more substantially.
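I have no idea what curve the developers actually use, but the behaviour described above can be sketched as a brightness lift that scales with the pixel’s own level, so dark areas barely move while brighter areas are pushed up (and may clip). Purely illustrative (Python, pixel values normalized to 0.0–1.0):

    def digital_flash(value, strength=0.6):
        # Toy 'digital flash' curve: the lift is proportional to the pixel's own
        # brightness, so near-black pixels barely change, midtones rise noticeably,
        # and already-bright areas can clip to pure white - matching the side
        # effects described above (visible shadow noise, highlight clipping).
        return min(1.0, value * (1.0 + strength * value))

    # print(digital_flash(0.05), digital_flash(0.5), digital_flash(0.9))
    # -> ~0.05 (shadow), ~0.65 (midtone), 1.0 (highlight clips)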

Flash: Original

Flash: Small / Medium / Large

Adjust – there are 7 sub-functions within the Adjust edit function: Brightness, Saturation, Hue, Contrast, Sharpness, Tint and Color Temperature. Each function has a slider that is initially centered; moving it left reduces the named parameter, moving it right increases it. Once moved off the zero center position, a small “x” on the upper right of the associated icon can be tapped to return the slider to the middle position, effectively turning off any changes. Examples for each of the sub-functions are shown below.

Brightness: Minimum / Original / Maximum

Saturation: Minimum / Original / Maximum

Hue: Minimum / Original / Maximum

Contrast: Minimum / Original / Maximum

Sharpness: Minimum / Original / Maximum

Tint: Minimum / Original / Maximum

Color Temperature: Minimum / Original / Maximum

Color Temperature: Cooler / Original / Warmer

Filters – There are 45 image filters in the Edit section of the app. Some of them are similar or identical in function to the filters of the same name that were discussed in the Live Filter section above. These are contained in 5 groups: Nostalgia, Special, Colorize, Artistic and Distortion. The examples below are similar in format to the presentation of the Live Filters. The source images for these comparisons are imported files (see the note at the beginning of this section for details).

Nostalgia filters:

Nostalgia filter = Coffee

Nostalgia filter = Coffee

The Coffee filter is rather well-named:  it looks like your photo had weak coffee spread over it! You can see from the chart that, as usual for many of the CameraPlusPro filters, increased contrast, crushed blacks and desaturation is the base on which a subtle warm-brown cast is overlayed. The live example shows the increased contrast around her eyes, and the skin tones in both the woman and the man in the background have tended to pale brown as opposed to the original red/yellow/pink.

Nostalgia filter = Retro Red

Nostalgia filter = Retro Red

The Retro Red filter shows increased saturation, a red tint across the board (highlights and lowlights), and does not alter the contrast – note that all the steps in the grayscale are mostly discernible – although there is a slight blending/clipping of the top highlights. The overall brightness levels are raised from the midtones through the highlights.

Nostalgia filter = Vintage

Nostalgia filter = Vintage

The Vintage filter here in the Edit portion of the app is very similar to the filter of the same name in the Live Filter section. The overall brightness appears higher, but some of that may be due to the different process of shooting with a live filter versus applying a filter in the post-production process. This is more noticeable in the live shot as opposed to the charts – a comparison of the “Vintage” filter test charts from the Live Filter section and the Edit section shows almost a dead match. This filter is a warming filter that adds a reddish-brown cast to the image. It increases the contrast some – this can be seen in the chart in the area of columns 1-2 and rows A-J. The white and black ends of the grayscale are likewise compressed. Any cool pastel colors either turn white or a pale warm shade (look at columns 9-11). The live image shows these effects, note particularly how the man’s blue shirt and shorts change color remarkably. The increase in contrast, coupled with the warming tint, does tend to make skin tones blotchy – note the subject’s face and chest.

Nostalgia filter = Nostalgia

Nostalgia filter = Nostalgia

The Nostalgia filter, like Vintage above, is basically the same filter as the instance offered in the Live Filter section. The main difference is the Live Filter version is more magenta and a bit darker than this filter. Also the cyans tend green more strongly in this version of the filter – check out columns 12-13 in the chart. Some increased contrast, pronounced yellows in the highlights and increased red/blue saturation are also evident.

Nostalgia filter = Retro

Nostalgia filter = Retro

The Retro filter, as in the version in the Live Filter section, attempts to recreate the look of cheap film cameras of the 1960s and 1970s. These low quality cameras often had simple plastic lenses, light leaks due to imperfect fit of components, etc. The noticeable chromatic aberrations of the lens and other optical ‘faults’ have now seen a resurgence as a style, and that is emulated with digital filters in this and others shown below. This particular filter shows a general warming, but with a pronounced red shift in the low lights. This is easily observable in the gray scale strip on the chart.

Nostalgia filter = Retro Green

Nostalgia filter = Retro Green

The Retro Green filter is a bit of a twist on Retro, with some of Nostalgia thrown in (yes, filter design is a lot like cooking with spices..)  The lowlights are similar to Nostalgia, with a blue cast, the highlights show the same yellows as both Retro and Nostalgia, the big difference is in the midtones which are now strongly green.

Nostalgia filter = 70s

Nostalgia filter = 70s

The 70s filter gives us some desaturation, no change in contrast, red shift in midtones and lowlights, yellow shift in highlights.

Nostalgia filter = Antique

Nostalgia filter = Antique

The Antique filter is similar to the Antique Live Filter, but is much lighter in terms of brightness. There is a large degree of desaturation, some increase in contrast, significant brightness increase in the highlights, and very slight color shifts at the ends of the grayscale:  yellow in the highlights, blue in the lowlights.

Nostalgia filter = Washed

Nostalgia filter = Washed

The Washed filter here in the Edit section is very different from the filter of the same name in Live Filters. The only real similarity is the strongly yellowed highlights. This filter, like many of the others we have reviewed so far, has a much lighter look (brightness levels raised), a very slight magenta shift, slightly increased contrast, enhanced blues in the lowlights and some increase in cyan in the midtones.

Special filters:

Special filter = XPro

Special filter = XPro

The XPro filter in the Edit functions has a different appearance than the filter of the same name in Live Filters. This instance of the digital emulation of a ‘cross-process’ filter is less contrasty, less magenta, and has more yellow in the highlights. The chart shows the yellows in the highlights, blues in the lowlights, and increased saturation. The live shot reveals the increased white clipping on her dress (due to increased contrast), as well as the crushed blacks (notice the detail of the folds in her leather handbag are lost).

Special filter = Pop

Special filter = Pop

The Pop filter brings the familiar basic tonal adjustments (increased contrast, with crushed whites and blacks, and an overall increase in midtone and highlight brightness levels) but this time the lowlights have a distinct red/magenta cast, with midtones and highlights tending greenish/yellow. This is particularly evident in the live shot. Look at the black doorway in the original, which is now very reddish in the filtered shot.

Special filter = Lomo

Special filter = Lomo

The Lomo filter here in the Edit area is rather different than the same named filter in Live Filters. This particular instance shows increased contrast and saturation, yellowish warming in the highlights, and like some other CameraPlusPro filters, an added spatial effect (the red splotch – in this example the red tint is in the same lower right corner for both chart and woman – if the placement is random, then this is just coincidence – but… it makes it look like the lowlights in the grayscale chart are pushed hard to red:  not so, it’s just that’s where the red tint overlay is this time…). Look at the top of her handbag in the live shot to see that the blacks are not actually shifted red. As with many other CameraPlusPro filters, the whites and blacks are crushed some – you can see on her dress how the highlights are now clipped.

Special filter = Holga

Special filter = Holga

The Holga filter is one where there is a marked similarity between the Live Filter and this instance as an Edit Filter. This version is lighter overall, with a more greenish-yellow cast, particularly in the shadows. The vignette effect is stronger in this Edit filter as well.

Special filter = Diana

Special filter = Diana

The Diana filter is another ‘retro camera’ effect:  based on, wow – surprise, the Diana camera… another of the cheap plastic cameras prevalent in the 1960s. The vignetting, light leaks, chromatic aberrations and other side-effects of a $10 camera have been brought into the digital age. In a similar fashion to several of the previous ‘retro’ filters discussed already, you will notice crushed blacks & highlights, increased contrast, odd tints (in this case unsaturated highlights tend yellow), increased saturation of colors – and a slight twist in this filter due to even monochrome areas becoming tinted – the silver pendant on her chest now takes on a greenish/yellow tint.

Special filter = Polaroid

Special filter = Polaroid

The Polaroid filter here in the Edit section resembles the effects of the same filter in Live Filters in the highlights (tends yellow with some mild clipping), but diverges in the midtones and shadows. Overall, this instance is lighter, with much less magenta shift in the skin tones. The contrast is not as high as in the Live Filter version, and the saturation is a bit lower.

Special filter = Rust

Special filter = Rust

The Rust filter is really very similar to old-style sepia printing:  this is a post-tint process to a monochrome image. In this filter, the image is first rendered to a black & white image, then colorized with a warm brown overlay. The chart clearly shows this effect.
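This ‘monochrome-then-tint’ recipe (the same idea behind the Sepia filter below and the Arctic Live Filter earlier) is simple enough to sketch. The tint colour and scaling here are my own guesses, not the app’s actual values:

    def monochrome_tint(r, g, b, tint=(180, 120, 70)):
        # Toy 'render to black & white, then colorize' filter for one RGB pixel
        # (0-255 integers): reduce to luminance with the standard BT.601 weights,
        # then re-colour by scaling a warm brown tint by that luminance.
        luma = 0.299 * r + 0.587 * g + 0.114 * b
        return tuple(min(255, int(t * luma / 160.0)) for t in tint)

    # A white pixel becomes a warm highlight, mid-gray a medium brown, black stays black:
    # print(monochrome_tint(255, 255, 255), monochrome_tint(128, 128, 128), monochrome_tint(0, 0, 0))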

Special filter = Glamorize

Special filter = Glamorize

The Glamorize filter is a high contrast effect, with considerable clipping in both the blacks and the whites. The overall color balance is mostly unchanged, with a slight increase in saturation in the midtones and lowlights. The highlights, on the other hand, are somewhat desaturated.

Special filter = Hipster

Special filter = Hipster

The Hipster filter follows the same pattern as other filters that have the same name in both the Live Filters section and the Edit section: the Edit version is usually lighter with higher brightness levels, less of a magenta cast in skin tones and lowlights, and a bit less contrast. Still, in relation to the originals, the Hipster has the typical crushed whites and blacks, raised contrast, and in this case an overall warming (red/yellow) of midtones and highlights.

Colorize filters:

Colorize filter = Black & White

Colorize filter = Black & White

The Black & White filter here is almost identical to the effects produced by the same filter in the Live Filter section. A comparison of the chart images shows that. The live shots also render in a similar manner, with as usual the Edit filter being a bit lighter with slightly lower contrast. This is yet another reason to always evaluate a filter with at least two (and the more, the better) different types of source material. While digital filters offer a wealth of possibilities that optical filters never could, there are very fundamental differences in how these filters work.

At a simple level, an optical filter is far more predictable across a wide range of input images than a digital filter. The more complex a digital filter becomes (and many of the filters discussed here, which attempt to emulate a multitude of ‘retro’ camera effects, are quite complex), the more unexpected results are possible. A Wratten #85 warming filter, by contrast, is really very simple – an orange filter that essentially partially blocks bluish/cyan light – so its action is the same no matter what the source image is.

A filter such as Hipster, for example, attempts to mimic what is essentially a series of composited effects from a cheap analog film camera:  chromatic aberration of the cheap plastic lens, spherical lens aberration, light leaks, vignetting due to incomplete coverage of the film (sensor) rectangle, focus anomalies due to imperfect alignment of the focal plane of the lens with the film plane, etc. etc. Trying to mimic all this with mathematics (which is what a digital filter does, it simply applies a set of algorithms to each pixel) means that it’s impossible for even the most skilled visual programmer to fully predict what outputs will occur from a wide variety of inputs.
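
To make the ‘set of algorithms applied to each pixel’ idea concrete, here is a deliberately simple per-pixel warming filter, roughly in the spirit of a Wratten #85: a fixed gain on red and a cut on blue. The gains are purely illustrative (not from any app), and Pillow/numpy are assumed. Because the same arithmetic is applied to every pixel regardless of content, its behavior is as predictable as the optical filter; the ‘retro’ filters pile many more, content-dependent steps on top of this.

```python
import numpy as np
from PIL import Image

def warm(img: Image.Image, r_gain=1.10, b_gain=0.85) -> Image.Image:
    """Simple per-pixel warming: boost red, cut blue (illustrative gains only)."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32)
    rgb[..., 0] *= r_gain   # red channel
    rgb[..., 2] *= b_gain   # blue channel
    return Image.fromarray(np.clip(rgb, 0, 255).astype(np.uint8))
```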

Colorize filter = Sepia

Colorize filter = Sepia

The Sepia filter is very similar to the Rust filter – it’s another ‘monochrome-then-tint’ filter. This time instead of a reddish-brown tint, the color overlay is a warm yellow.

Colorize filter = Sunset

Colorize filter = Sunset

The Sunset filter brings increased brightness, crushed whites and blacks, increased contrast and an overall warming towards yellow/red. Looks like it’s attempting to emulate the late afternoon light.

Colorize filter = Moss

Colorize filter = Moss

The Moss filter is, well, greenish… It’s a somewhat interesting filter, as most of the tinting effect is concentrated solely on monochromatic midtones. The chart clearly shows this. The live shot demonstrates it as well: the saturated bits keep their colors while the neutrals turn minty green. Note that his shirt and her dress keep their colors, the yellow sign stays yellow, and the skin tones and hair don’t take on that much of the tint.

Colorize filter = Lucifer

Colorize filter = Lucifer

The Lucifer filter is – surprise – a reddish warming look. There is an overall desaturation, followed by a magenta/red cast to midtones and lowlights. A slight decrease in contrast actually gives this filter a more faded, retro look than ‘devilish’, and in some ways I prefer this look to some of the previous filters with more ‘retro-sounding’ names.

Colorize filter = Faded

Colorize filter = Faded

The Faded filter offers a desaturated, but contrasty, look. Usually I interpret a ‘faded’ look to mean the kind of visual fading that light causes on a photographic print, where all the blacks and strongly saturated colors fade to a much lighter, softer tone. In this case, much of the color has faded, but the luminance is unchanged (in terms of brightness) and the contrast is increased, resulting in the crushed whites and blacks common to Camera Plus Pro filter design.

Colorize filter = Warm

Colorize filter = Warm

The Warm filter is basically a “plus yellow” filter. Looking at the chart you can see that there is an across-the-board increase in yellow. That’s it.

Colorize filter = Arctic

Colorize filter = Arctic

The Arctic filter is, well, cold. Like several of the other tinted monochromatic filters (Rust, Sepia), this filter first renders the image to a monochrome version, then tints it at all levels with a cold blue color.

Colorize filter = Allure

Colorize filter = Allure

The Allure filter is similar to the Warm filter – an even application of a single color increase, in this case magenta. There is also a slight increase in contrast.

Artistic filters:

Artistic filter = HDR

Artistic filter = HDR

The HDR filter is an attempt to mimic the result of ‘real’ HDR (High Dynamic Range) photography. Without true double (or more) exposures this is not possible, but since some of the ‘look’ of HDR processing involves increased contrast, saturation and so on, this filter emulates some of that. Personally, I believe that true HDR photography should be indistinguishable from a ‘normal’ image – except that it should correctly map a very wide range of illumination levels. A lot of “HDR” images tend to be a bit gimmicky, with excessive edge glow, false saturation, etc. While this can make an interesting ‘special effect’, I think it would better serve the imaging community if we correctly labeled those images as ‘cartoon’ or some other more accurate name – those filter side-effects really have nothing to do with true HDR imaging. Nevertheless, to complete the description of this filter: it is actually quite ‘color-neutral’ (no cast), but it does add contrast – particularly edge contrast – along with significant vibrance and saturation.

Artistic filter = Fantasy

Artistic filter = Fantasy

The Fantasy filter is another across-the-board ‘color cast’ filter, this time with an increase in yellow-orange. Virtually no change in contrast, just a big shift in color balance.

Artistic filter = Vignette

Artistic filter = Vignette

The Vignette filter is a spatial filter, in that it really just changes the ‘shape’ of the image, not the overall color balance or tonal gradations. It mimics the light fall-off that was typical of early cameras whose lenses had inadequate covering power (the image rendered by the lens did not extend to the edges of the film). There is a tiny loss of brightness even in the center of the frame, but essentially this filter darkens the corners.
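
A vignette of this kind amounts to multiplying the image by a mask that falls off toward the corners. A rough sketch follows (Pillow/numpy assumed; the falloff curve and strength are my guesses, not the app’s values):

```python
import numpy as np
from PIL import Image

def vignette(img: Image.Image, strength=0.6) -> Image.Image:
    """Darken toward the corners with a quadratic radial falloff."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32)
    h, w = rgb.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # normalized distance from image center: 0 at the center, ~1 at the corners
    r = np.sqrt(((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2) / np.sqrt(2)
    mask = 1.0 - strength * r ** 2
    return Image.fromarray(np.clip(rgb * mask[..., None], 0, 255).astype(np.uint8))
```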

Artistic filter = Grunge

Artistic filter = Grunge

The Grunge filter is a combination filter: both a spatial and a tonal filter. Like the ‘tinted monochromatic’ filters above, it first renders the image to black & white, then tints it – in this case with a grayish-yellow cast. There is also a marked decrease in contrast, along with elevated brightness levels. This is easily evident from the grayscale strip in the chart. In the live shot you can see her handbag is now a dark gray instead of black. The spatial elements are then added: specialized vignetting, to mimic frayed or over-exposed edges of a print, as well as ‘scratches’ and ‘wrinkles’ (formed by spatially localized changes in brightness and contrast). All this combines to offer the look of an old, faded, bent and generally funky print.

Artistic filter = Pop Art

Artistic filter = Pop Art

The Pop Art filter is very much a ‘special effects’ filter, based on the solarization technique. Solarization is in fact a rather complex and highly variable process, first noticed by Daguerre and others who pioneered photography in the mid-1800s. The name comes from the reversal of image tone in a drastically over-exposed part of an image – in this case, pictures that included the sun in direct view. Instead of the image of the sun going pure white (on the print; pure black in the negative), the sun’s image actually went back to a light gray on the negative, rendering the sun a very dark orb in the final print – one of the very first “optical special effects” in the new field of photography. This is actually caused by halogen ions, released within the halide grain by over-exposure, diffusing to the grain surface in amounts sufficient to destroy the latent image.

In negatives this is correctly known as the Sabattier effect, after the French photographer who published an article on it in Le Moniteur de la Photographie in 1862. The digital equivalent of this technique, as shown in this filter, uses tonal mapping computation to create high-contrast bands where the levels of the original image are ‘flattened’ into distinct, constant brightness bands. This is clearly seen in the grayscale strip in the chart image. It is a very distinctive look and can be visually interesting when used in a creative manner on the correct subject matter.
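
The digital ‘banding’ described above is essentially posterization – quantizing the tonal range down to a handful of levels. Here is a minimal sketch (the band count is arbitrary, and this is not the app’s code); Pillow’s own ImageOps.posterize() and ImageOps.solarize() are related built-ins worth knowing about.

```python
import numpy as np
from PIL import Image

def posterize_bands(img: Image.Image, bands=4) -> Image.Image:
    """Flatten tonal levels into a few constant-brightness bands."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    levels = np.clip(np.floor(rgb * bands), 0, bands - 1)   # 0 .. bands-1 per channel
    return Image.fromarray((levels / (bands - 1) * 255).astype(np.uint8))
```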

Artistic filter = Grayscale

Artistic filter = Grayscale

The Grayscale filter is just that:  the rendering of the original image into a grayscale image. The difference between this filter and the Black & White filters (in both Live Filters and this Edit section) is a much lower contrast. By comparing the grayscale strips in the original and filtered chart images, you can see there is virtually no difference. The Black & White filters noticeably increase the contrast.

Artistic filter = Emboss

Artistic filter = Emboss (40%)

Artistic filter = Emboss (100%)

The Emboss filter is another highly specialized effects filter. As can be seen from the chart image, the picture is rendered to a constant monochrome shade of gray, with only contrasting edges represented by either an increase or decrease in brightness. This creates the appearance of a flat gray sheet that is ‘stamped’ or embossed with the outline of the image elements. High contrast edges are rendered sharply, lower contrast edges are softer in shape. Reading from left to right, a transition from dark to light is represented by a dark edge, while a transition from light to dark is shown as a light edge. Since each of these Edit filters has an intensity slider, the effect’s strength can be ‘dialed in’ as desired. I have shown all the filters up to now at full strength, for illustrative purposes. Here I have included a sample of this filter at a 40% level, since it shows just how different a look can be achieved in some cases by not using a filter at full strength.
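
The classic emboss trick is to difference the image against a copy of itself shifted one pixel diagonally, then sit the result on middle gray: flat areas cancel to gray, while edges become light or dark depending on their direction – exactly the behavior described above. A sketch (Pillow/numpy assumed; Pillow also ships a built-in ImageFilter.EMBOSS):

```python
import numpy as np
from PIL import Image

def emboss(img: Image.Image, strength=1.0) -> Image.Image:
    """Difference against a diagonally shifted copy, centered on mid-gray."""
    g = np.asarray(img.convert("L"), dtype=np.float32)
    shifted = np.roll(np.roll(g, 1, axis=0), 1, axis=1)   # one pixel down and right
    out = 128 + strength * (g - shifted)                   # flat areas -> mid-gray
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8), "L")
```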

Artistic filter = Xray

Artistic filter = Xray

The Xray filter is yet another ‘monochromatic tint’ filter, with the image first being rendered to a grayscale image, then (in this case) undergoing a complete tonal reversal (to make the image look like a negative), then finally a tint with a dark greenish-cyan color. It’s just a look (since all ‘real’ x-ray films are black and white only), but I’m certain at least one of the millions of people that have downloaded this app will find a use for it.
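
This one chains three steps we have already seen: grayscale, tonal inversion, tint. A compact sketch (the greenish-cyan tint value is my guess, not the app’s):

```python
import numpy as np
from PIL import Image

def xray(img: Image.Image, tint=(40, 160, 140)) -> Image.Image:
    """Grayscale -> negative -> (assumed) greenish-cyan tint."""
    gray = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
    neg = 1.0 - gray
    out = neg[..., None] * np.asarray(tint, dtype=np.float32)
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8), "RGB")
```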

Artistic filter = Heat Signature

Artistic filter = Heat Signature

The Heat Signature filter is the final filter in this Artistic group. It is illustrative of a scientific imaging method whereby images from infrared cameras (which see only wavelengths too long for the human eye) are rendered into a visible color spectrum to help illustrate relative temperatures of the observed object. In real scientific camera systems, the coolest temperatures are rendered blue, the hottest parts of the image in reds, and in-between temperatures in green. Here, this mapping technique is applied against the grayscale: blacks are blue, midtones are green, highlights are red.
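
The mapping itself is a simple ‘false color’ lookup: luminance is interpolated along a blue → green → red ramp. A sketch (again, not the app’s actual ramp – the stop positions and colors are illustrative):

```python
import numpy as np
from PIL import Image

def heat_signature(img: Image.Image) -> Image.Image:
    """Map luminance onto a blue -> green -> red ramp (shadows blue, highlights red)."""
    g = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
    stops = np.array([[0, 0, 255], [0, 255, 0], [255, 0, 0]], dtype=np.float32)
    x = np.array([0.0, 0.5, 1.0])                 # ramp positions for the three stops
    out = np.stack([np.interp(g, x, stops[:, c]) for c in range(3)], axis=-1)
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8), "RGB")
```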

Distortion filters:

The geometric distortion filters are presented differently, since these are spatial filters only. There is no need for, nor advantage to, using the color chart test image. I have presented each filter as a triptych, with the first image showing the control as found when the filter is opened within the app, the second image showing a manipulation of the “effects circle” (which can be moved and resized), and the third image showing the resultant image after applying the filter. There are no intensity sliders on the distortion filters.

Geometric Filter: Center Offset - Initial / Targeted Area / Result

The Center Offset filter ‘pulls’ the image to the center of the circle, as if the image was on an elastic rubber sheet, and was stretched towards the center of the control circle.

Geometric Filter: Pixelate - Initial / Targeted Area / Result

The Pixelate filter distorts the image inside of the control circle by greatly enlarging the quantization factors in the affected area, causing a large ‘chunking’ of the picture. This renders the affected area virtually unrecognizable – the same technique often used in candid video to obfuscate the identity of a subject.
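
Conceptually, this is just a coarse (nearest-neighbor down- and up-sampled) copy of the frame swapped in only inside the control circle. A sketch, with the block size and circle coordinates purely illustrative:

```python
import numpy as np
from PIL import Image

def pixelate_circle(img: Image.Image, cx: int, cy: int, radius: int, block: int = 24) -> Image.Image:
    """Swap in a blocky, down/up-sampled copy of the image inside a circle."""
    src = img.convert("RGB")
    rgb = np.asarray(src)
    h, w = rgb.shape[:2]
    small = src.resize((max(1, w // block), max(1, h // block)), Image.NEAREST)
    coarse = np.asarray(small.resize((w, h), Image.NEAREST))
    y, x = np.mgrid[0:h, 0:w]
    mask = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2   # inside the control circle
    out = rgb.copy()
    out[mask] = coarse[mask]
    return Image.fromarray(out)

# pixelate_circle(Image.open("shot.jpg"), cx=300, cy=200, radius=120)   # hypothetical values
```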

Geometric Filter: Bulge - Initial / Targeted Area / Result

The Bulge filter is similar to the Center Offset, but this time the image is ‘pulled into’ the control circle, as if a magnifying fish-eye lens was applied to just a portion of the image.

Geometric Filter: Squeeze - Initial / Targeted Area / Result

The Squeeze filter is somewhat the opposite of the Bulge filter, with the image within the control circle being reduced in size and ‘pushed back’ visually.

Geometric Filter: Swirl - Initial / Targeted Area / Result

The Swirl filter does just that:  takes the image within the control circle and rotates it. Moving the little dot controls the amount and direction of the swirl. She needs a chiropractor after this…

Geometric Filter: Noise - Initial / Targeted Area / Result

The Noise filter works in a similar way to the Pixelate filter, only this time large-scale noise is introduced, rather than pixelation.

Geometric Filter: Light Tunnel - Initial / Targeted Area / Result

The Light Tunnel filter probably owes a debt to Star Trek – what part of our common culture has not been affected by that far-seeing series? Remember the ‘communicator’?  Flip-type cell phones, invented 30 years later, looked suspiciously like that device…

Geometric Filter: Fish Eye - Initial / Result

The Fish Eye filter mimics what a ‘fish eye’ lens might make the picture look like. There is no control circle on this filter – it is a fixed effect. The center of the image is the center of the fish-eye effect. In this case, it’s really not that strong of a curvature effect, to me it looks about like what a 12mm lens (on a 35mm camera system) would look like. If you want to see just how wide a look is possible, go to Nikon’s site and look for examples of their 6.5mm fisheye lens. That is wide!

Geometric Filter: Mirror - Initial / Result

The Mirror filter divides the image down the middle (vertically) and reflects the left half of the image onto the right side. There are no controls – it’s a fixed effect.

Borders:

Borders: Thin White / Rounded Black / Double Frame

Borders: White Frame / Polaroid / Stamp

Borders: Torn / Striped / Grainy

Ok, that’s it. Another iPhone camera app dissected, inspected, respected. Enjoy.

iPhone4S – Section 4b: Camera+ app

March 19, 2012 · by parasam

Camera+  A full-featured camera and editing application. Version described is 3.02

Feature Sets:

  • Light Table design for selecting which photos to Edit, Share, Save or get Info.
  • Camera Functions
    • Ability to split Focus area from Exposure area
    • Can lock White Balance
    • Flash: Off/Auto/On/Torch
    • Front or Rear camera selection
    • Digital Zoom
    • 4 Shooting Modes: Normal/Stabilized/Self-Timer/Burst
  • Camera options:
    • VolumeSnap On/Off
    • Sound On/Off
    • Zoom On/Off
    • Grid On/Off
    • Geotagging On/Off
    • Workflow selection:  Classic (shoot to Lightbox) / Shoot&Share (edit/share after each shot)
    • AutoSave selection:  Lightbox/CameraRoll/Both
    • Quality:  Full/Optimized (1200×1200)
    • Sharing:  [Add social services for auto-post to Twitter, Facebook, etc.]
    • Notifications:
      • App updates On/Off
      • News On/Off
      • Contests On/Off
  • Edit Functions
    • Scenes
      • None
      • Clarity
      • Auto
      • Flash
      • Backlit
      • Darken
      • Cloudy
      • Shade
      • Fluorescent
      • Sunset
      • Night
      • Portrait
      • Beach
      • Scenery
      • Concert
      • Food
      • Text
    • Rotation
      • Left
      • Right
      • Flip Horizontal
      • Flip Vertical
    • Crop
      • Freeform (variable aspect ratio)
      • Original (camera taking aspect ratio)
      • Golden rectangle (1:1.618 aspect ratio)
      • Square (1:1 aspect ratio)
      • Rectangular (3:2 aspect ratio)
      • Rectangular (4:3 aspect ratio)
      • Rectangular (4:6 aspect ratio)
      • Rectangular (5:7 aspect ratio)
      • Rectangular (8:10 aspect ratio)
      • Rectangular (16:9 aspect ratio)
    • Effects
      • Color – 9 tints
        • Vibrant
        • Sunkiss’d
        • Purple Haze
        • So Emo
        • Cyanotype
        • Magic Hour
        • Redscale
        • Black & White
        • Sepia
      • Retro – 9 ‘old camera’ effects
        • Lomographic
        • ‘70s
        • Toy Camera
        • Hipster
        • Tailfins
        • Fashion
        • Lo-Fi
        • Ansel
        • Antique
      • Special – 9 custom effects
        • HDR
        • Miniaturize
        • Polarize
        • Grunge
        • Depth of Field
        • Color Dodge
        • Overlay
        • Faded
        • Cross Process
      • Analog – 9 special filters (in-app purchase)
        • Diana
        • Silver Gelatin
        • Helios
        • Contessa
        • Nostalgia
        • Expired
        • XPRO C-41
        • Pinhole
        • Chromogenic
    • Borders
      • None
      • Simple – 9 basic border styles
        • Thick White
        • Thick Black
        • Light Mat
        • Thin White
        • Thin Black
        • Dark Mat
        • Round White
        • Round Black
        • Vignette
      • Styled – 9 artistic border styles
        • Instant
        • Vintage
        • Offset
        • Light Grit
        • Dark Grit
        • Viewfinder
        • Old-Timey
        • Film
        • Sprockets

Camera Functions

After launching the Camera+ app, the first screen the user sees is the basic camera viewfinder.

Camera view, combined focus & exposure box (normal start screen)

On the top of the screen the Flash selector button is on the left, the Front/Rear Camera selector is on the right. The Flash modes are: Off/Auto/On/Torch. Auto turns the flash off in bright light, on in lower light conditions. Torch is a lower-powered continuous ‘flash’ – also known as a ‘battery-killer’ – use sparingly! Virtually all of the functions of this app are directed to the high-quality rear-facing camera – the front-facing camera is typically reserved for quick low-resolution ID snaps, video calling, etc.

On the bottom of the screen, the Lightbox selector button is on the left, the Shutter release button is in the middle (with the Shutter Release Mode button just to the right-center), and on the right is the Menu button. The Digital Zoom slider is located on the right side of the frame (Digital Zoom will be discussed at the end of this section). Notice in the center of the frame the combined Focus & Exposure area box (square red box with “+” sign). This indicates that both the focus and the exposure for the entire frame are adjusted using the portion of the scene that is contained within this box.

You will notice that the bottle label is correctly exposed and focused, while the background is dark and out of focus.

The next screen shows what happens when the user selects the “+” sign on the upper right edge of the combined Focus/Exposure area box:

Split focus and exposure areas (both on label)

Now the combined box splits into two areas: a Focus area (square box with bull’s eye), and an Exposure area (a circle resembling the adjustable f-stop ring in a camera lens). The exposure is now measured separately from the focus – allowing more control over the composition and exposure of the image.

In this case the resultant image looks like the previous one, since both the focus and the exposure areas are still placed on the label, which has consistent focus and lighting.

In the next example, the exposure area is left in place – on the label – but the focus area is moved to a point in the rear of the room. You will now notice that the rear of the room has come into focus, and the label has gone soft – out of focus. However, since the exposure area is unchanged, the relative exposure stays the same – the label well lit, but the room beyond still dark.

Split focus and exposure areas, focus moved to rear of room

This level of control allows greater freedom and creativity for the photographer.  [please excuse the slight blurring of some of the screen shot examples – it’s not easy to hold the iPhone completely still while taking a screen shot – which requires simultaneously pressing the Home button and the Power button – even on a tripod]

The next image shows the results of selecting the little ‘padlock’ icon in the lower left of the image – this is the Lock/Unlock button for Exposure, Focus and White Balance (WB).

Showing 'lock' panel for White Balance (WB), exposure and focus

Each of the three functions (Focus, Exposure, White Balance) can be locked or unlocked independently.

Focus moved back to label, still showing lock panel

In the above example, the focus area has been moved back to the label, showing how the focus now returns to the label, leaving the rear of the room once again out of focus.

The next series of screens demonstrate the options revealed when the Shutter Release Mode button (the little gear icon to the right of the shutter button) is selected:

Shutter type 'Settings' sub-menu displayed, showing 4 options

The ‘Normal’ mode exposes one image each time the shutter button is depressed.

Shutter type changed to Stabilized mode

When the ‘Stabilizer’ mode is selected, the button icon changes to indicate this mode has been selected. This mode monitors the stability of the iPhone camera (and releases the shutter automatically) once the Stabilizer Shutter Release button is depressed – it is NOT a true motion-stabilized lens as in some expensive DSLR cameras. You have to hold the camera still to get a sharp picture – this function just helps the user know that the camera is indeed still. Once the Stabilizer Shutter Release is pushed, it glows red if the camera is moving, and text on the screen urges the user to hold still. As the camera detects that motion has stopped (using the iPhone’s internal accelerometer – a motion detector), little beeps sound, the shutter button changes color from red to yellow to green, and then the picture is taken.

Shutter type changed to Timer mode - screen shows beginning of 5sec countdown timer

The Self-Timer Shutter Release mode allows a time delay before the actual shutter release occurs – after the shutter button is depressed. The most common use for this feature is a self-portrait (of course you need either a tripod or some other method of securing the iPhone so the composition does not change!). This mode can also be useful to avoid jiggling the camera while pressing the shutter release – important in low light situations. The count-down timer is indicated by the numbers in the center of the screen. Once the shutter is depressed, the numbers count down (in seconds) until the exposure occurs. The default time delay is 5 seconds; this can be adjusted by tapping the number on the screen before the shutter button is selected. The choices are 5, 15 and 30 seconds.

Shutter type changed to Burst mode

The last of the four shutter release modes is the ‘Burst’ mode. This captures a short series of exposures, one right after the other. It can be useful for sports or other fast-moving activity, where the photographer wants to be sure of catching a particular moment. The number of exposures taken is a function of how long you hold down the shutter release – the camera keeps taking pictures as fast as it can for as long as you hold down the shutter.

There are a number of things to be aware of while using this mode:

  • You must be in the ‘Classic’ Workflow, not the ‘Shoot & Share’ (more on this below when we discuss that option)
  • The best performance is obtained when the AutoSave mode is set to ‘Lightbox’ – writing directly to the Camera Roll (using the ‘Camera Roll’ option) is slower, leading to more elapsed time between each exposure. The last option of AutoSave (‘Lightbox & CameraRoll’) is even slower, and not recommended for burst mode.
  • The resolution of burst photos is greatly reduced (from 3264×2448 down to 640×480). This is the only way the data from the camera sensor can be transferred quickly enough – but it is one of the big differences between the iPhone camera system and a DSLR. The full resolution is 8 megapixels, the burst resolution is only 0.3 megapixels – more than 25x less resolution!

Resultant picture taken with exposure & focus on label

The above shot is an actual unretouched image using the settings from the first example (focus and exposure areas both set on label of the bottle).

Here is an example of how changing the placement of the Exposure Area box within the frame affects the outcome of the image:

exposure area set on wooden desktop - normal range of exposure

exposure area set on white dish - resulting picture is darker than normal

exposure area set on black desk mat - resulting image is lighter than normal

To fully understand what is happening above you need to remember that any camera light metering system sets the exposure assuming that you have placed the exposure area on a ‘middle gray’ value (Zone V). If you place the exposure measurement area on a lighter or darker area of the image the exposure may not be what you envisioned. Further discussion of this topic is outside the scope of this blog – but it’s very important, so if you don’t know – look it up.
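
For those who want the arithmetic: a reflected-light meter drives whatever it is pointed at toward roughly 18% reflectance (middle gray). A tiny sketch of that logic in Python (numpy assumed; the 18% target and the example means are illustrative, not measurements from the app):

```python
import numpy as np

def meter_shift_stops(patch_luminance: np.ndarray, target: float = 0.18) -> float:
    """Stops of exposure shift a meter applies to drag the metered patch toward
    middle gray (linear luminance, 0..1 range)."""
    return float(np.log2(target / float(np.mean(patch_luminance))))

# White dish (mean ~0.8):  log2(0.18/0.8)  ~ -2.2 stops -> picture comes out darker.
# Black desk mat (~0.04):  log2(0.18/0.04) ~ +2.2 stops -> picture comes out lighter.
```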

The Lightbox

The next step after shooting a frame (or 20) is to process (edit) the images. This is done from the Lightbox. This function is entered by pressing the little icon of a ‘film frame’ on the left of the bottom control bar.

empty Lightbox, ready to import a photograph for editing

The above case shows an empty Lightbox (which is how the app looks after all shots are edited and saved). If you have just exposed a number of images, they will be waiting for you in the Lightbox – you will not need to import them. The following steps are for when you are processing previously exposed images (it doesn’t matter whether they were shot with Camera+ or any other camera app – I sometimes shoot with my film camera, scan the film, import to the iPhone and edit with Camera+ in order to use a particular filter that is available there).

selection window of the Lightbox image picker

an image selected in the Lightbox image picker, ready for loading into the Lightbox for editing

image imported into the Lightbox, ready for an action

When entering the Edit mode after loading an image, the following screen is displayed. There are two buttons on the top:  Cancel and Done. Cancel returns the user to the Lightbox, abandoning any edits or changes made while in the Edit screen, while Done applies all the edits made and returns the user to the Lightbox where the resultant image can be Shared or Saved to the Camera Roll.

Along the bottom of the screen are two ribbons showing all the edit functions. The bottom ribbon selects the particular edit mode, while the top ribbon selects the actual Scene, Rotation, Crop, Effect or Border that should be applied. The first set of individual edit functions that we will discuss are the Scenes. The following screen shots show the different Scene choices in the upper ribbon.

Scene modes 1 - 5

Scene modes 6 - 9

Scene modes 10 - 14

Scene modes 15 - 17

The ‘Scenes’ that Camera+ offers are one of the most powerful functions of this app. Nevertheless, there are some quirks (mostly about the naming – and the most appropriate way to apply the Scenes, based on the actual content of your image) that will be discussed. The first thing to understand is the basic difference between Scenes and Effects. At the most fundamental level, both transform the brightness, contrast, color, etc. of the image (essentially its visual qualities) – as opposed to the spatial qualities of the image, which are adjusted with Rotation, Crop and Border. However, a Scene typically adjusts overall contrast, color balance, and sometimes white balance, brightness and so on. An Effect is a more specialized filter – often significantly distorting the original colors, changing brightness or contrast in a certain range of values, etc. – the purpose being to introduce a desired effect to the image. Many times a Scene can be used to ‘rescue’ an image that was not correctly exposed, or to change the feeling, mood, etc. of the original image. Another way to think about it: the result of applying a Scene will almost always still look as if the image had just been taken by the camera, while an Effect very often is clearly an artificially applied filter.

In order to best demonstrate and evaluate the various Scenes that are offered, I have assembled a number of images that show a “before and after” of each Scene type. Within each Scene pair, the left-hand image is always the unadjusted original image, while the right-hand image has the Scene applied. The first series of test images is constructed with two comparisons of each Scene type: the first image pair shows a calibrated color test chart, the second image pair shows a woman in a typical outdoor scene. The color chart can be used to analyze how various ranges of the image (blacks, grays, whites, colors) are affected by the Scene adjustment; while the woman subject image is often a good representation of how the Scene will affect a typical real-world image.

After all of the Scene types are shown in this manner, I have added a number of sample images, with certain Scene types applied – and discussed – to better give a feeling of how and why certain Scene types may work best in certain situations.

Scene = Clarity

Scene = Clarity

The Clarity scene type is one of the most powerful scene manipulations offered – it’s not an accident that it is the first scene type in the ribbon… The power of this scene is not that obvious from the color chart, but it is more obvious in the human subject. This particular subject, while it shows most of the attributes of the clarity filter well, is not ideally suited for application of this filter – better examples follow at the end of this section. The real effect of this scene is to cause an otherwise flat image to ‘pop’ more, and have more visual impact. However, just like in other parts of life – less is often more. My one wish is that an “intensity slider” was included with Scenes (it is only offered on Effects, not Scenes) – as many times I feel that the amount of Clarity is overblown. There are techniques to accomplish a ‘toning down’ of Clarity, but those will only be discussed in Part 5 of this series – Tips & Techniques for iPhonography – as currently this requires the use of multiple apps – which is beyond the scope of the app introduction in this part of the series. The underlying enhancement appears to be a spatially localized increase of contrast, and an increase in vibrance and saturation of color.
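
If you want a feel for how such a ‘localized contrast’ enhancement can be built, the usual trick is a large-radius unsharp mask on the luminance channel, plus a saturation nudge. This is only my guess at the mechanism, sketched with Pillow/numpy; all the numbers are arbitrary and nothing here is Camera+’s actual code.

```python
import numpy as np
from PIL import Image, ImageFilter

def clarity(img: Image.Image, amount=0.5, radius=30, sat=1.15) -> Image.Image:
    """Large-radius unsharp mask on luminance (local contrast) + mild saturation boost."""
    hsv = np.asarray(img.convert("RGB").convert("HSV"), dtype=np.float32)
    v = hsv[..., 2]
    blurred = np.asarray(
        Image.fromarray(v.astype(np.uint8), "L").filter(ImageFilter.GaussianBlur(radius)),
        dtype=np.float32)
    hsv[..., 2] = np.clip(v + amount * (v - blurred), 0, 255)   # push detail away from its local mean
    hsv[..., 1] = np.clip(hsv[..., 1] * sat, 0, 255)            # saturation nudge
    return Image.fromarray(hsv.astype(np.uint8), "HSV").convert("RGB")
```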

Notice in the gray scale of the chart that the edges of each density chip are enhanced – but the overall gamma is unchanged (the steps from white to black remain even and separately identifiable). Look at the color patches – there is an increase in saturation (vividness of color), but this is more pronounced in colors that are already somewhat saturated. For instance, look at the pastel colors in the range of columns 9-19 and rows B-D: there is little change in overall saturation. Now look at, for instance, the saturated reds and greens of columns 17-18 and rows J-L: these colors have picked up noticeably increased saturation.

Looking at the live subject, the local increase in contrast can easily be seen in her face, with the subtle variations in skin tone in the original becoming much more pronounced in the Clarity scene type. The contrast between the light print on her dress and the gray background is more obvious with Clarity applied. Observe how the wrinkles in the man’s shirt and shorts are much more obvious with Clarity. Notice the shading on the aqua-colored steel piping in the lower left of the image: in the original the square pipes look very evenly illuminated, with Clarity applied there is a noticeable transition from light to dark along the pipe.

Scene = Auto

Scene = Auto

The Auto scene type is sort of like using a ‘point and shoot’ camera set to full automatic – the user ideally doesn’t have to worry about exposure, etc. Of course, in this case, since the image has already been exposed there is a limit to the corrections that can be applied! Basically this Scene attempts to ensure full tonal range in the image and will manipulate levels, gamma, etc. to achieve a ‘centered’ look. For completeness I have included this Scene – but as you will notice, and as should be expected, there is almost no difference between the before and after images. With correctly exposed initial images this is what should happen… It may not be apparent on the blog, but when looking carefully at the color chart images on a large calibrated monitor, the contrast is increased slightly at both ends of the gray scale: the whites and blacks appear to clip a little bit.

Scene = Flash

Scene = Flash

The Flash scene type is an attempt to ‘repair’ an image that was taken in low light without a flash – when it was probably needed. Again, like any post-processing technique, this Scene cannot make up for something that is not there: areas of shadow in which there was no detail in the original image can at best only be turned into lighter noise… But in many cases it will help underexposed images. The test chart clearly shows the elevation in brightness levels all through the gray scale – look for example at gray chip #11 – the middle gray value of the original is considerably lightened in the right-hand image. This Scene works best on images with an overall low light level – as you can see from both the chart and the woman, areas of the picture that are already well-lit tend to be blown out and clipped.

Scene = Backlit

Scene = Backlit

The Backlit scene type tries to correct for the ‘silhouette’ effect that occurs when strong light is coming from behind the subject without corresponding fill light illuminating the subject from the front. While I agree that this is a useful scene correction, it is hard to perform after the fact – and personally I think this is one Scene that is not well executed. My issue is with the over-saturation of reds and yellows (Caucasian skin tones) that more often than not makes the subject look like a boiled lobster. I think this comes from the attempt to raise the perceived brightness of an overly dark skin tone (since the most common subject in such a situation is a person standing in front of a brightly lit background). You will notice on the chart that the gray scale is hardly changed from the original (a slight overall brightness increase) – but the general color saturation is raised. A very noticeable increase in red/orange/yellow saturation is obvious: look at the red group of columns 7-8 and rows A-B. In the original these four squares are clearly differentiated – in the ‘after’ image they have merged into a single fully saturated area. A glance at the woman’s image also shows overly hot saturation of skin tones – even the man in the background has a hot pink face now. So, to summarize, I would reserve this Scene for very dark silhouette situations – where you need to rescue an otherwise potentially unusable shot.

Scene = Darken

Scene = Darken

The Darken scene type does just what it says – darkens the overall scene fairly uniformly. This can often help a somewhat overexposed scene. It cannot fix one of the most common problems with digital photography, however: the clipping of light areas due to overexposure. As explained in a previous post in this series, once a given pixel has been clipped (driven into pure white due to the amount of light it has received) nothing can recover this detail. Lowering the level will only turn the bright white into a dull gray, but no detail will come back. Ever. A quick look at the gray scale in the right-hand image clearly shows the lowering of overall brightness – with the whites manipulated more than the blacks. For instance, chip #1 turns from almost white into a pale gray, while chip #17 shows only a slight darkening. This is appropriate so the blacks in the image are not crushed. You will notice by looking at the colors on the chart that the darkening effect is pretty much luminance only – no change in color balance. The apparent increase in reds in the skin tone of the woman is a natural side-effect of less luminance with chrominance held constant – once you remove the ‘white’ from a color mixture the remaining color appears more intense. You can see the same effect in the man’s shirt and the yellow background. Ideally the saturation should be reduced slightly along with the luminance in this type of filter effect – but that can be tricky with a single filter designed to work with any kind of content. Overall this is a useful filter.

Scene = Cloudy

Scene = Cloudy

The Cloudy scene type appears to normalize the exposure and color shift that occurs when the subject is photographed under direct cloudy skies. This is in contrast to the Shade scene type (discussed next) where the subject is shot in indirect light (open shade) while illuminated from a clear sky. This is mostly a color temperature problem to solve – remember that noon sunlight is approximately 5500°K (degrees Kelvin is a measure of color temperature: low numbers are reddish, high numbers are bluish, middle ‘white’ is about 5000°K). Illumination from a cloudy sky is often very ‘blue’ (in terms of color temperature) – between 8000°K and 10000°K – while open shade is less so, usually between 7000°K and 8000°K. If you compare the two scene types (Cloudy and Shade) you will notice that the image is ‘warmed up’ more with the Cloudy scene type. There is a slight increase in brightness, but the main function of this scene type is to warm up the image to compensate for the cold ‘look’ often associated with shots of this type. You can see this occurring in the reddening of the woman’s skin tones, and the warming of the gray values of the sidewalk between her and the man.

Scene = Shade

Scene = Shade

The Shade scene type is similar in function to the Cloudy scene (see above for comparison) but differs in two areas: There is a noticeable increase in brightness (often images exposed in the shade are slightly underexposed) and there is less warming of the image (due to open shade being warmer in color temperature than a full cloudy scene illumination). One easy way to compare the two scene types is to examine the color charts – looking at the surround (where the numbers and letters are) – the differences are easy to see there. A glance at the test shot of the woman shows a definite increase in brightness, as well as a slight warming – again look at the skin tones and the sidewalk.

Scene = Fluorescent

Scene = Fluorescent

The Fluorescent scene type is designed to correct for images shot under fluorescent lighting. This form of illumination is not a ‘full-spectrum’ light source, and as such has some unnatural effects when used to take photographs. Our human vision corrects for this – but film or digital exposures do not – so such photos tend to have a somewhat green/cyan cast to them. In particular this makes light-colored skin look a bit washed out and sickish. The only real change I can see in this scene filter is an increase in magenta in the overall color balance (which will help to correct for the green/cyan shift under fluorescent light). The difference introduced is very small – I think it will help in some cases and may be insufficient in others. It is more noticeable in the image of the woman than the test chart (her skin tones shift towards a magenta hue).

Scene = Sunset

Scene = Sunset

The Sunset scene type starts a slightly different type of scene filter – ones that are apparently designed to make an image look like the scene name, instead of correcting the exposure for images taken under the named situation (like Shade, Cloudy, etc.). This twist in naming conventions is another reason to always really test and understand your tools – know what the filter you are applying does at a fundamental level and you will have much better results. It’s obvious from both the chart and the woman that a marked increase in red/orange is applied by this scene type. There is also a general increase in saturation – just more in the reds than the blues. See the difference in the man’s shirt and shorts to see how the blue saturation increases. Again, like the Backlit filter, I personally feel this effect is a bit overdone – and within Camera+ there is no simple way to remedy this. There are methods, using another app to follow the corrections of this app – these techniques will be discussed in the next of the series (Tips & Techniques for iPhonography).

Scene = Night

Scene = Night

The Night scene type attempts to correct for images taken at night under very low illumination. This scene is a bit like the Flash type – but on steroids! The full gray scale is pushed towards white a lot, with even the darkest shadow values receiving a noticeable bump in brightness. There is also a general increase in saturation – as colors tend to be undersaturated when poorly illuminated. Of course this will make images that are normally exposed look greatly overexposed (see both the chart and the woman), but it still gives you an idea of how the scene filter works. Some better ‘real-world’ examples of the Night scene type follow this introduction of all the scene types.

Scene = Portrait

Scene = Portrait

The Portrait scene type is basically a contrast and brightness adjustment. It’s easy to see in the chart comparison, both in the gray scale and on the color chip chart. Look first at the gray scale: all of the gray and white values from chip #17 and lighter are raised, the chips below #17 are darkened. Chips #1 and #2 are now both pure white, having no differentiation. Likewise with chips #20-22, they are pure black. The area defined by columns 13-19 and rows A-C are now pure white, compared to the original where clear differences can be noted. Likewise row L between columns 20-22 can no longer be differentiated. In the shot of the woman, this results in a lightening of skin tones and increased contrast (notice her black bag is solid black as opposed to very dark gray; the patch of sunlight just above her waist is now solid white instead of a bright highlight). Again, like several scene effects I have noted earlier, I find this one a little overstated – I personally don’t like to see details clipped and crushed – but like any filter, the art is in the application. This can be very effective on a head shot that is flat and without punch. The trick is applying the scene type appropriately. Knowledge furthers…

Scene = Beach

Scene = Beach

The Beach scene type looks like the type of filter first mentioned in Sunset (above). In other words, an effect to make the image look like it was taken on a beach (or at least that type of light!). It’s a little bit like the previous Portrait scene (in that there is an increase in both brightness and contrast – but less than Portrait) but also has a bit of Sunset in it as well (increased saturation in reds and yellows – but again not as much as Sunset). While the Sunset type had greater red saturation, this Beach filter is more towards the yellow. See column #15 in the chart – in the original each square is clearly differentiated, in the right-hand image the more intense yellows are very hard to tell apart. Just to the right, in column #17, the reds have not changed that much. When looking at the woman, you can see that this scene type makes the image ‘bright and yellow/red’ – sort of a ‘beachy’ effect I guess.

Scene = Scenery

Scene = Scenery

The Scenery scene type produces a slight increase in contrast, along with increased saturation in both reds and blues – with blue getting a bit more intensity. Easy to compare using the chart, and in the shot of the woman this can be seen in reddish skin tone, as well as significantly increased saturation in the man’s shirt and shorts. While this effect makes people look odd, it can work well on so-called “postcard” landscape shots (which tend to be flat panoramic views with low contrast and saturation). However, as you will see in the ‘real-world’ examples below, often a different scene or filter can help out a landscape in even a better way – it all depends on the original photo with which you are starting.

Scene = Concert

Scene = Concert

The Concert scene type is, oddly enough, very similar to the previous scene type (Scenery) – just turned up really loud! A generalized increase in contrast, along with red and blue saturation increases, attempts to make your original scene look like, well, I guess a rock-and-roll concert… Normal exposures (see the woman test shot) come out ‘hot’ (overly warm and contrasty, with elevated brightness), but if you need some color and punch in your shot, or desire an overstated effect, this scene type could be useful.

Scene = Food

Scene = Food

The Food scene type offers a slight increase in contrast and a bit of increased saturation in the reds. This can be seen in both the charts and the woman shot. It’s less overdone than several of the other scene types, so keep this in mind for any shot that needs just a bit of punch and warming up – not just for shots of food. And again, I see this scene type in the same vein as Beach, Sunset, etc. – an effect to make your shot look like the feeling of the filter name, not to correct shots of food…

Scene = Text

Scene = Text

The Text scene type is an extreme effect – and best utilized as its name implies – to help render shots of text material come out clearly and legibly. An example of actual text is shown below, but you can see from the chart that this is accomplished by a very high contrast setting. The apparent increase in saturation in the test chart is really more of a result of the high contrast – I don’t think a deliberate increase in saturation was added to this effect.

(Note:  the ‘real-world’ examples referred to in several of the above explanations will be shown at the very end of this discussion, after we have illustrated the remaining basic edit functions (Rotation, Crop, Effects and Borders) in a similar comparative manner as above with the Scene types. This is due to many of the sample shots combining both a Scene and an Effect (one of the powerful capabilities of Camera+) – I want the viewer to fully understand the individual instruments before we assemble a symphony…)

The next group of Edit functions is the image Rotation set.

Rotation submenu

The usual four choices of image rotation are presented.

The Crop functions are displayed next.

group 1 of crop styles

group 2 of crop styles

group 3 of crop styles

The Effects (FX) filters, after the Scene types, are the most complex image manipulation filters included in the Camera+ app. As opposed to Scenes, which tend to affect overall lighting, contrast and saturation – and keep a somewhat realistic look to the image – the Effects filters seriously bend the image into a wide variety of (often, but not always) unrealistic appearances. Many of the effects mimic early film processes, emulsions or camera types; some offer recent special filtering techniques (such as HDR – High Dynamic Range photography), and so on.

The Effects filters are grouped into four sections:  Color, Retro, Special and Analog. The Analog filter group requires an in-app purchase ($0.99 at the time of this article). In a similar manner to how Scene types were introduced above, an image comparison is shown along with the description of each filter type. The left-hand image is the original, the right-hand image has the specific Effects filter applied. Some further ‘real-world’ examples of using filters on different types of photography are included at the end of this section.

Color Effects filters

Color Effects filters

Color Filter = Vibrant

The Vibrant filter significantly increases the saturation of colors, and appears to enhance the red channel more than green or blue.
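
As a rough idea of what such a filter does under the hood: scale saturation up, then give red a small extra push. The values below are illustrative only – not Camera+’s – and Pillow/numpy are assumed.

```python
import numpy as np
from PIL import Image

def vibrant(img: Image.Image, sat=1.5, r_gain=1.05) -> Image.Image:
    """Strong saturation boost plus a small extra gain on the red channel."""
    hsv = np.asarray(img.convert("RGB").convert("HSV"), dtype=np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat, 0, 255)
    rgb = np.asarray(Image.fromarray(hsv.astype(np.uint8), "HSV").convert("RGB"),
                     dtype=np.float32)
    rgb[..., 0] = np.clip(rgb[..., 0] * r_gain, 0, 255)
    return Image.fromarray(rgb.astype(np.uint8))
```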

Color Filter = Sunkiss'd

The Sunkiss’d filter is a generalized warming filter. Notice that, in contrast to the Vibrant filter above, even the tinfoil dress on the left-hand dancer has turned a warm gold color – where the Vibrant filter did not alter the silver tint, as that filter has no effect on portions of the image that are monochrome. Also, all colors in Sunkiss’d are moved towards the warm end of the spectrum – note the gobo light effect projected above the dancers: it is pale blue in the original, and becomes a warmer green/aqua after the application of Sunkiss’d.

Color Filter = Purple Haze

The Purple Haze filter does not provide hallucinogenic experiences for the photographer, but does perhaps simulate what the retina might imagine… Increased contrast, increased red/blue saturation and a color shift towards… well… purple.

Color Filter = So Emo

The So Emo filter type (So Emotional?) starkly increases contrast and shifts the color balance towards cyan; as a counterpoint to the overall cyan tint there is apparently a narrow-band enhancement of magenta – notice the enhancement of the tulle skirt on the center dancer, which should otherwise be almost monochromatic with the addition of so much cyan shift in the color balance. However, the flesh tones of the dancers’ legs (more reddish) are rendered almost colorless by the cyan tint; this shows that the enhancement is narrow-band – it does not include red.

Color Filter = Cyanotype

The Cyanotype effects filter is reminiscent of early photogram techniques (putting plant leaves, etc. on photo paper sensitized with the cyanotype process in direct sunlight to get a silhouette exposure). This is the same process that makes blueprints. Later it was used (in a similar way as sepia toning) to tint black & white photographs. In the case of this effects filter, the image is first rendered to monochrome, then subsequently tinted with a slightly yellowish cyan color.

Color Filter = Magic Hour

The Magic Hour filter attempts to make the image look like it was taken during the so-called “Magic Hour” – the last hour before sunset – when many photographers feel the light is best for all types of photography, particularly landscape or interpretive portraits. Brightness is raised, contrast is slightly reduced and a generalized warming color shift is applied.

Color Filter = Redscale

The Redscale effects filter is a bit like the previous Magic Hour, but instead of a wider spectrum warming, the effect is more localized to reds and yellows. The contrast is slightly raised, instead of lowered as for Magic Hour, and the effect of the red filter can clearly be seen on the gobo light projected above the dancers:  the original cyan portion of the light is almost completely neutralized by the red enhancement, leaving only the green portion of the original aqua light remaining.

Color Filter = Black White

The Black & White filter does just what it says: renders the original color photograph into a monochrome only version. It looks like this is a simple monochrome conversion of the RGB channels (color information deleted, only luminance kept). There are a number of advanced techniques for rendering a color image into a high quality black & white photo – it’s not as simple as it sounds. If you look at a great black and white image from a film camera, and compare it to a color photograph of the same scene, you will know what I am saying. There are specialized apps for taking monochrome pictures with the iPhone (one of which will be reviewed later in this series of posts); and there is a whole set of custom filters in Photoshop devoted to just this topic – getting the best possible conversion to black & white from a color original. In many cases however a simple filter like this will do the trick.
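
For reference, a plain luminance conversion (what Pillow’s convert("L") does, using the ITU-R 601 weights) can be contrasted with a ‘channel mixer’ conversion, where the per-channel weights act like colored filters did on black & white film – weighting red heavily darkens blue skies, the classic red-filter look. A small sketch; the alternative weights shown are just examples, not any particular app’s recipe.

```python
import numpy as np
from PIL import Image

REC601 = (0.299, 0.587, 0.114)   # weights used by a plain luminance conversion

def to_mono(img: Image.Image, weights=REC601) -> Image.Image:
    """Channel-mixer black & white: a normalized weighted sum of R, G, B per pixel."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32)
    w = np.asarray(weights, dtype=np.float32)
    gray = rgb @ (w / w.sum())
    return Image.fromarray(np.clip(gray, 0, 255).astype(np.uint8), "L")

# to_mono(img)                     # plain luminance, similar to img.convert("L")
# to_mono(img, (0.7, 0.2, 0.1))    # 'red filter' mix: dramatic skies, lighter skin
```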

Color Filter = Sepia

The Sepia filter, like Cyanotype, is a throwback to the early days of photography – before color film – when black & white images were toned to increase interest. In the case of this digital filter, the image is first turned into monochrome, then tinted with a sepia tone via color correction.

Retro Effects filters

Retro Effects filters

Retro Filter = Lomographic

The Lomographic filter effect is designed to mimic some of the style of photograph produced by the original LOMO Plc camera company of Russia (Leningrad Optical Mechanical Amalgamation). This was a low cost automatic 35mm film camera. While still in production today, this and similar cameras account for only a fraction of LOMO’s production – the bulk is military and medical optical systems – and are world class… Due to the low cost of components and production methods, the LOMO camera exhibited frequent optical defects in imaging, color tints, light leaks, and other artifacts. While anathema to professional photographers, a large community that appreciates the quirky effects of this (and other so-called “Lo-Fi” or Low Fidelity) cameras has sprung up with a world-wide following. Hence the Lomographic filter…

While, like all my analysis on Scenes and Effects, I have no direct knowledge of how the effect is produced, I bring my scientific training and decades of photographic experience to help explain what I feel is a likely design, based on empirical study of the effect. That said, this effect appears to show increased contrast, a greenish/yellow tint for the mid-tones (notice the highlights, such as the white front stage, stay almost pure white). A narrow-band enhancement filter for red/magenta keeps the skin tones and center dancer’s dress from desaturating in the face of the green tint.

Retro Filter = '70s

The ’70s effect is another nod to the look of older film photographs – this one more like what Kodachrome looked like when the camera was left in a hot car… All film stock is heat sensitive, with color emulsions, particularly older ones, being even more so. While at first this filter has a resemblance to the Sunkiss’d color filter, the difference lies in the multi-tonal enhancements of the ’70s filter. The reds are indeed punched up, but that’s in the midtones and shadows – the highlights take on a distinct greenish cast. Notice that once the enhanced red has nulled out the cyan in the overhead gobo projection, the remaining highlights turn bright green – with a similar process occurring on the light stage surface.

Retro Filter = Toy Camera

The Toy Camera effects filter emulates the low cost roll-film cameras of the ’50s and ’60s – with the light leaks, uneven processing, poor focus and other attributes often associated with photographs from that genre of cameras. Increased saturation, a slightly raised black level, spatially localized contrast enhancement (a technique borrowed from HDR filtering) – notice how the slight flare on the far right, over the seated woman’s head, becomes a bright hot flare in the filtered image – and streaking to simulate light leakage on film all add to the multiplicity of effects in this filter.

Retro Filter = Hipster

The Hipster effect is another of the digital memorials to the original Hipstamatic camera – a cheap all-plastic 35mm camera that shot square photos. Copied from an original low-cost Russian camera, the two brothers who invented it produced only 157 units. The camera cost $8.25 in 1982 when it was introduced. With a hand-molded plastic lens, this camera was another of the “Lo-Fi” group of older analog film cameras whose ‘look’ has once again become popular. As derived by the Camera+ crew, the Hipster effect offers a warm, brownish-red image. It is achieved apparently with raised black levels (a common trait of cheap film cameras: the backs always leaked a bit of light, so a low-level ‘fog’ of the film base tended to raise deep blacks [areas of no light exposure in a negative] to a dull gray); a pronounced color shift towards red/brown in the midtones and lowlights; and an overall white level increase (note the relative brightness of the front stage between the original and the filtered version).

Retro Filter = Tailfins

The Tailfins retro effect is yet another take on the ’50s and ’60s – with an homage to the 1959 Cadillac no doubt – the epitome of the ‘tailfin era’. It’s similar to the ’70s filter described above, but lacks the distinct ‘overcooked Kodachrome’ look with the green highlights. Red saturation is again pushed up, as well as overall brightness. Once again blacks are raised to simulate the common film fog of the day. Lowered contrast finishes the look.