Archive For August, 2016

Objective Photography is an Oxymoron (all photos lie…)

August 18, 2016 · by parasam

There is no such thing as an objective photograph

A recent article in the Wall Street Journal (here) entitled “When Pictures Are Too Perfect” prompted this post. The premise of the article is that too much ‘manipulation’ (i.e. Photoshopping) is present in many of today’s images, particularly in photojournalism and photo contests. There is evidently an arbitrary standard (that no one seems able to objectively define) that posits that essentially only an image ‘straight out of the camera’ is ‘honest’ or acceptable – particularly if one is a photojournalist or is entering an image into some form of competition. Examples are given, such as Harry Fisch having a top prize from National Geographic (for the image “Preparing the Prayers at the Ganges”) taken away because he digitally removed an extraneous plastic bag from an unimportant area of the image. Steve McCurry, best known for his iconic “Afghan Girl” photo on the cover of National Geographic magazine in 1985, was accused of digital manipulation of some images shot in 1983 in Bangladesh and India.

On the whole, I find this absurd and the logic behind such attempts at defining an ‘objective photograph’ fatally flawed. From a purely scientific point of view, there is absolutely no such thing as an ‘objective’ photograph – for a host of reasons. All photographs lie, permanently and absolutely. The only distinction is by how much, and in how many areas.

The First Lie: Framing

The very nature of photography, from the earliest days until now, has at its core an essential feature: the frame. Only a certain amount of what can be seen by the photographer can be captured as an image. There are four edges to every photograph. Whether the final ‘edges’ presented to the viewer are due to the limitations of the camera/film/image sensor, or cropping during the editing process, is immaterial. The initial choice of frame is made by the photographer, in concert with the camera in use, which presents physical limitations that cannot be exceeded. The choice of frame is completely subjective: it is the eye/brain/intuition of the photographer that decides in the moment where to point the camera, what to include in the frame. Is pivoting the camera a few degrees to the left to avoid an unsightly telephone pole “unwarranted digital manipulation”? Most news editors and photo contest judges would probably say it is not. But what if the exact same result is obtained by cropping the image during editing? Already we start to see disagreement in the literature.

If Mr. Fisch had simply walked over and picked up the offending plastic bag before exposing the image, he would likely still be the deserved recipient of his 1st place prize from National Geographic; but as he removed the bag during editing, his photograph was disqualified. By this same logic, consider that when Leonardo da Vinci painted the “Mona Lisa” he placed a balustrade with two columns behind her. There is perfect symmetry in the placement of Lisa Gherardini (the presumed model) between the columns, which helps frame the subject. Painting takes time; it is likely that a bird would land on the balustrade from time to time. Was Leonardo supposed to include the bird or not? Did he ‘manipulate’ the image by only including the parts of the scene that were important to the composition? Would any editor or judge dare ask him today, if that were possible?

“So-So Happy!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

“So-So Happy… NOT!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

A combination example of framing and depth-of-field. One photographer is standing 6 ft further away (from my camera position) than the other, but the foreshortening of the 200mm telephoto appears to depict ‘dueling photographers’. [©2012 Ed Elliott / Clearlight Imagery]

The Second Lie: The Lens

No photograph can occur without a lens. Every lens has certain fixed properties: focal length and maximum aperture being the most important. Each of these parameters imparts a vital, and subjective, aspect to the image subsequently captured. Since the ‘lingua franca’ of focal length is the ubiquitous 35mm camera, we can generalize here: 50mm is the so-called ‘normal’ lens; 35mm is considered ‘wide angle’, 24mm ‘very wide angle’ and 10mm a ‘fisheye’. Going in the other direction, 85mm is often considered a ‘portrait’ lens (slight close-up), 105mm a medium ‘telephoto’, 200mm a ‘telephoto’ and anything beyond is for sports or space exploration. Each focal length takes in a wider or narrower slice of the scene, and with it a deeper or shallower apparent depth of field: wide angle lenses tend to bring the entire field of view into sharp focus, while telephotos blur out everything except what the photographer has selected as the prime focus point.
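
As a rough illustration of how much the choice of focal length alone changes what lands inside the frame, here is a minimal sketch (Python, assuming a full-frame 36 × 24 mm sensor and a simple rectilinear lens; fisheyes use a different projection) that computes the horizontal angle of view for the focal lengths mentioned above:

    import math

    def angle_of_view(focal_length_mm, sensor_width_mm=36.0):
        # Horizontal angle of view (degrees) for a rectilinear lens
        # on a full-frame 36 x 24 mm sensor (fisheyes use a different projection).
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

    for f in (24, 35, 50, 85, 105, 200):
        print(f"{f:>3} mm lens: about {angle_of_view(f):.0f} degrees horizontal")

A 24mm lens takes in roughly 74 degrees horizontally, while a 200mm lens sees only about 10 degrees – a very different slice of reality before any ‘editing’ has taken place.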

Normal lens [©2016 Ed Elliott / Clearlight Imagery]

Telephoto lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

Wide Angle lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

FishEye lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) – curvature and edge distortions are normal for such an extreme angle-of-view lens   [©2016 Ed Elliott / Clearlight Imagery]

In addition, each lens type distorts the field of view noticeably: wide angle lenses tend to exaggerate the distance between foreground and background, making the closer objects in the frame look larger than they actually are, and making distant objects even smaller. Telephoto lenses have the opposite effect, foreshortening the image and ‘flattening’ the resulting picture. For example, in a long telephoto shot of a tree on a ridge backlit by the moon, both the tree and the moon can be tack sharp, and the moon can appear to sit directly behind the tree, even though it is some 239,000 miles away.

The other major ‘subjective’ quality of any lens is the aperture chosen by the photographer. Commonly known as the “f-stop”, this is the ratio of the focal length of the lens divided by the diameter of the ‘entrance pupil’ (the size of the hole that the aperture diaphragm is set to on a given capture). The maximum aperture (the largest ‘hole’ that can be set by the photographer) depends on the diameter of the lens itself, in relation to the focal length. For example, with a ‘normal’ 50mm lens, if the lens is 25mm in diameter then the maximum aperture is f/2 (50/25). Larger apertures (lower f-stop ratios) require larger lenses, and are correspondingly more difficult to use, heavier and more expensive. One can see that an f/2 lens for a 50mm focal length is not that huge; to obtain the same f/2 ratio for a 200mm telephoto would require a lens that is at least 100mm (4in) in diameter – making such a device huge, heavy and obscenely expensive. As a quick comparison (Nikon lenses, full frame, prime lens, priced from B&H Photo – discount photo equipment supplier), a 50mm f/2.8 lens costs $300, while the same lens in f/1.2 costs $700. A 400mm telephoto in f/5.6 would be $2,200, while an identical focal length with a maximum aperture of f/2.8 will set you back a little over $12,000.
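
To make the arithmetic concrete, the small sketch below (Python; the lens and aperture pairs are simply the ones discussed above) works out the entrance-pupil diameter implied by the definition N = focal length ÷ pupil diameter:

    def entrance_pupil_mm(focal_length_mm, f_number):
        # From the definition N = focal length / entrance-pupil diameter.
        return focal_length_mm / f_number

    # Lens / aperture pairs from the paragraph above:
    for focal, n in ((50, 2.0), (200, 2.0), (400, 5.6), (400, 2.8)):
        print(f"{focal} mm at f/{n:g}: entrance pupil about {entrance_pupil_mm(focal, n):.0f} mm")

The jump from a 25mm pupil (50mm at f/2) to a 100mm pupil (200mm at f/2) is what drives the size, weight and cost differences described above.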

Exaggeration of object size with wide angle lens: farther objects appear much smaller than in ‘reality’. [©2011 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image as a result of long telephoto lens (f/8, 400mm lens) – the crane is hundreds of feet closer to the camera than the dark buildings behind, but looks like they are directly adjacent. [©2013 Ed Elliott / Clearlight Imagery]

Depth of field with shallow aperture (f/2.4) – in this case even with a wide angle lens the background is out of focus due to the large distance between the foreground and the background (in this case the Hudson River separated the two…) [©2013 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image with a long telephoto lens. The ship is almost 1/4 mile further away than the green roadway sign, yet appears to be directly behind it… (f/4, 400mm) [©2013 Ed Elliott / Clearlight Imagery]

Wide angle lens (14-24mm zoom lens, set at 16mm – f/2.8) [©2012 Ed Elliott / Clearlight Imagery]

Shallow depth of field due to large aperture on telephoto lens (f/4 – 200mm lens on full-frame 35mm DSLR) [©2012 Ed Elliott / Clearlight Imagery]

Wide angle shot, demonstrating sharp focus from foreground to the background. Also exaggeration of perspective makes the bow of the vessel appear much taller than the stern. [©2013 Ed Elliott / Clearlight Imagery]

The bottom line is that the choice of lens and aperture is a controlling element of the photographer (or her pocketbook) – and has a huge effect on the image taken with that lens and setting. None of these choices can be deemed to be either ‘analog’ or ‘digital’ manipulation of the image during editing, but they have arguably a greater effect on the outcome, message, impact and tenor of the photograph than anything that can be done subsequently in the darkroom (whether chemical or digital).

The Third Lie: Shutter Speed

Every exposure is a product of two factors: light × time. The amount of light that strikes a negative (or digital sensor) is governed solely by the selected aperture (and possibly by any additional filters placed in front of the lens); the duration for which the light is allowed to impinge on the negative is set by the shutter speed. While the main property of setting the shutter speed is to produce the correct exposure once the aperture has been selected (to avoid either under- or over-exposing the image), there is a huge secondary effect of shutter speed on any motion of either the camera or objects in the frame. Fast shutter speeds (1/125th of a second or faster with a normal lens) will essentially freeze any motion, while slow shutter speeds will result in ‘shake’, ‘blur’ and other motion artifacts. While some of these can be just annoying, in the hands of a skilled photographer motion artifacts tell a story. And likewise a ‘freeze-frame’ (from a very fast shutter speed) can distort reality in the other direction, giving the observer a point of view that the human eye could never glimpse in reality. The hours-long time exposure of star trails or the suspended animation shot of a bullet about to pierce a balloon are both ‘manipulations’ of reality – but they take place as the image is formed, not in the darkroom. The subjective experience of a football distorted as the kicker’s foot impacts it – locked in time by a shutter speed of 1/2000th second – is very different from the same shot of the kicker at 1/15th second where his leg is a blurry arc against a sharp background of grass. Two entirely different stories, just from shutter speed choice.
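
The trade-off between aperture and shutter speed can be made concrete with the standard exposure-value relation EV = log2(N²/t). The sketch below (Python; the three settings are hypothetical examples, not taken from any image in this post) shows three very different aperture/shutter combinations that admit essentially the same amount of light – the tiny differences come from the rounded ‘nominal’ stop values – yet would tell three very different stories of motion and depth of field:

    import math

    def exposure_value(f_number, shutter_s):
        # Exposure value at a fixed sensitivity: EV = log2(N^2 / t).
        return math.log2(f_number ** 2 / shutter_s)

    # Three hypothetical but (nearly) equivalent exposures - same light, different stories:
    for n, t in ((2.8, 1 / 2000), (8.0, 1 / 250), (16.0, 1 / 60)):
        print(f"f/{n:g} at 1/{round(1 / t)} s -> EV {exposure_value(n, t):.1f}")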

Fast shutter speed to stop action [©2013 Ed Elliott / Clearlight Imagery]

Combination of two effects: fast shutter speed to stop motion (but not too fast, slight blurring of left foot imparts motion) – and shallow depth of field to render background soft-focus (f/4, 200mm lens) [©2013 Ed Elliott / Clearlight Imagery]

High shutter speed to freeze the motion. 1/2000 sec. [©2012 Ed Elliott / Clearlight Imagery]

Fast shutter speed to provide clarity and freeze the motion. 1/800 sec @ f/8 [©2012 Ed Elliott / Clearlight Imagery]

Although a hand-held shot, I wanted as fine-grained a result as possible, so took advantage of the stillness of the subjects and a convenient wall on which to place the camera. 2 sec exposure with ISO 500 at f/8 to keep the depth of field. [©2012 Ed Elliott / Clearlight Imagery]

The Fourth Lie: Film (or Sensor) Sensitivity [ISO]

As if Pinocchio’s nose hasn’t grown long enough already, we have yet another ‘distortion’ of reality that every image contains as a basic building block: that of film/sensor sensitivity. While we have discussed exposure as a product of light intensity × time of exposure, one further parameter remains. A so-called ‘correct’ exposure is one that has a balance of tonal values, and (more or less) represents the tonal values of the scene that was photographed. This means essentially that blacks, shadows, mid-tones, highlights and whites are all apparent and distinct in the resulting photograph, and the contrast values are more or less in line with those of the original scene. The sensitivity of the film (or digital sensor) is critical in this regard. Very sensitive film will allow a correct image with a lower exposure (either a smaller aperture, faster shutter speed, or both), while a ‘slow’ [insensitive] film will require the opposite.
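
Sensitivity slots into the same exposure arithmetic sketched earlier: each doubling of ISO lets the camera work with one stop less light. A minimal sketch (Python; the ‘dim scene’ value is hypothetical), assuming the standard relation EV_usable = EV_at_ISO100 + log2(ISO/100):

    import math

    def usable_ev(scene_ev_at_iso100, iso):
        # Each doubling of ISO lets the camera work one stop 'darker':
        # EV_usable = EV_at_ISO100 + log2(ISO / 100).
        return scene_ev_at_iso100 + math.log2(iso / 100)

    # A hypothetical dim scene that would need EV 5 at ISO 100:
    for iso in (100, 400, 1600, 6400):
        print(f"ISO {iso:>4}: settings totalling EV {usable_ev(5, iso):.0f} will expose correctly")

Going from ISO 100 to ISO 6400 buys six stops – the difference between a tripod-bound time exposure and a hand-holdable shutter speed – at the cost of the grain and noise discussed below.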

A high ISO was necessary to capture the image during late twilight. In addition a slow shutter speed was used – 1/15 sec with ISO of 6400. [©2011 Ed Elliott / Clearlight Imagery]

Low ISO (50) to achieve relatively fine grain and best possible resolution (this was a cellphone shot). [©2015 Ed Elliott / Clearlight Imagery]

Cellphone image at dusk, resulting in ISO 800 with 1/15 sec exposure. Taken from a parking garage, the highlight on the palm is from car headlights. [©2012 Ed Elliott / Clearlight Imagery]

Night photography often requires very high ISO values and slow shutter speeds. The resulting grain can provide texture as opposed to being a detriment to the shot. [©2012 Ed Elliott / Clearlight Imagery]

Fine grain achieved with low ISO of 50. [©2012 Ed Elliott / Clearlight Imagery]

Slow ISO setting for high resolution, minimal grain (ISO 50) [©2012 Ed Elliott / Clearlight Imagery]

Sometimes you frame the shot and do the best you can with the other parameters – and it works. Cellphone image at night meant slow shutter speed (1/15 sec) and lots of grain with ISO 800 – but the resultant grain and blurring did not detract from the result. [©2012 Ed Elliott / Clearlight Imagery]

A corollary to film sensitivity is grain (in film) or noise (in digital sensors). If you desire a fine-grained, super sharp negative, then you must use a slow film. If you need to produce an acceptable image in low light without a flash, say for photojournalism or surveillance work, then you must use a fast film and accept grain the size of rice in some cases… Life is all about compromise. Again, the final outcome is subjective, and totally within the control of the accomplished photographer, but this exists completely outside the darkroom (or Photoshop). Two identical scenes shot with widely disparate ISO films (or sensor settings) will give very different results. A slow ISO will produce a very sharp, super-realistic image, while a very fast ISO will be grainy, somewhat fuzzy and can tend towards surrealism if pushed to an extreme.  [technical note: the arithmetic portion of the ISO rating is the same as the older ASA rating scale; I use the current nomenclature]

Editing: White Lies, Black Lies, Duotone and Technicolor…

In my personal work as a streetphotographer (my gallery is here) I tell ‘white lies’ all the time in editing. By that I mean the small adjustments to focus, color balance, contrast, highlight and shadow balance, etc. This is a highly personal and subjective experience. I learned from master photographers (including Ansel Adams), books and much trial and even more error… to pre-visualize my shots, and mentally place the components of the image on the Zone Scale as accurately as possible with the equipment and lighting on hand. This process was most helpful when I was in university with no money – every shot cost money, both in film and developing chemicals. I would often choose between beer and film… film always won… fewer friends, more images… not quite sure about that choice, but I was fascinated with imagery. While pre-visualization is, I feel, an important magic and can make the difference between an ok image and a great one – it’s not an easy process to follow in candid streetphotography, where the recognition of a potential shot and the chance to grab it often lasts only 1-2 seconds.

This results, quite frequently, in things in the image not being where I imagined them in terms of composition, lighting, color balance, etc. So enter my ‘white lies’. I used to accomplish this in the darkroom with push/pull of developing, and significant tweaking during printing (burning, dodging, different choices of contrast printing papers, etc.). Now I use Photoshop. I’m not particularly an Adobe disciple, but I started with this program in 1989 with version 0.87 (known then as part of Barneyscan, on my Mac Classic) and we’ve kind of grown up together… I just haven’t bothered to learn another program. It does what I need; I’m sure that I only know about 20% of its current capabilities, but that’s enough for my requirements.

The other extreme that can be accomplished by Photoshop experts (and I use the term generically here) is the ‘black lie’. This is where one puts Oprah’s head on someone else’s body, performs ‘digital liposuction’ to the extent that Lena Dunham and Adele both scream “enough!”, and many celebrities find their faces applied to actors and scenes (typically in North Hollywood) where they have never been, nor would want to be… There’s actually a great novel by the late Michael Crichton [Rising Sun, 1992] that contains a detailed subplot about digital photomanipulation of video imagery. At that time, it took a supercomputer to accomplish the detailed and sophisticated retouching of long video sequences – today tools such as Photoshop and After Effects could accomplish this on a desktop workstation in a matter of hours.

“Duotone” technique [background masked and converted to monochrome to focus the viewer on the foreground image] [©2016 Ed Elliott / Clearlight Imagery]

A technique I frequently use is Duotone – and even here I am being technically inaccurate. What I mean by this is separating the object of interest from the background by masking the subject and turning the rest of the image into black and white. The juxtaposition of a color subject against a monochrome background helps isolate and focus the viewer’s attention on the subject. Frequently in streetphotography the opportunity to place the subject against a non-intrusive background doesn’t exist, so this technique is quite effective in ‘turning down’ the importance of the often busy and distracting surrounds. [Technically the term duotone is used for printing the entire image in gradations of only two colors]. Is this ‘manipulation’? Yes. Does it materially detract from, or alter the intent of, the original image that I pre-visualized in my head? No. I firmly stand behind this point of view, that all photographs “lie” to one extent or another, and any tool that the photographer has at his or her hand to generate a final image that is in accordance with the original intent is fair game. What matters is the act of conveying the vision of the photographer to the brain of the viewer. Period.
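
For readers who prefer code to Photoshop, the same selective-color idea can be sketched in a few lines of Python with the Pillow imaging library – a rough sketch only, with hypothetical file names and a hand-painted subject mask assumed to already exist:

    from PIL import Image

    def selective_color(photo_path, mask_path, out_path):
        # Keep the masked subject in color, render everything else monochrome.
        # The mask is greyscale: white = subject (keeps color), black = background.
        color = Image.open(photo_path).convert("RGB")
        mask = Image.open(mask_path).convert("L").resize(color.size)
        mono = color.convert("L").convert("RGB")  # desaturated copy of the whole frame
        Image.composite(color, mono, mask).save(out_path)

    # selective_color("street_shot.jpg", "subject_mask.png", "street_shot_selective.jpg")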

The ‘photograph’ is just the medium that transports that image. At the end of the day, a photo is a conglomeration of pixels (either printed or glowing) that transmit photons to the human visual system, and ultimately end up in the visual cortex in the back of the human brain. That is where we actually “see”.

Early photographs (and motion picture films) were only available in black & white. When color photography first came along, the colors were not ‘natural’. As emulsions improved things got better, but even so there was a marked deviation from ‘natural’ that was actually ‘designed in’ by Kodak and other film manufacturers. The saturation and color mapping of Kodachrome did not match reality, but it did satisfy the public that equated punchy colors with a ‘good color photo’ and made those vacation memories happy ones… and therefore sold more film. The more subdued, and realistic, Ektachrome came along as professional photographers pushed for choice (and quite frankly an easier and more open developing process – Kodachrome could only be processed by licensed labs and was notoriously difficult to process well). The down side of early Ektachrome emulsions was the unfortunate instability of the dye layers in color transparency film – leading to rapid fading of both slides and movies.

As one who has worked in film preservation and restoration for decades, I found it interesting that an early color process (the Technicolor 3-stripe method), originally designed just to get vibrant colors onto the movie screen in the 1930’s, had a resurgence in film preservation. It turned out that so many of the early Ektachrome films from the 1950’s and 1960’s experienced rapid fading that significant restoration efforts were necessary to salvage some important movies. The only way at that time (before economical digital scanning of movies was possible) was – after restoration of the color negative – to use the Technicolor separation process and make 3 separate black & white films that represented the cyan, magenta and yellow dye layers. Then, someday in the future, the 3 negatives could be optically combined and printed back onto color film for viewing.
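
A loose digital analogue of making such separation masters – purely illustrative, using the Pillow library and hypothetical file names rather than any real archival toolchain – is simply to split a scanned frame into three monochrome channel files and merge them back together later:

    from PIL import Image

    def make_separations(color_path, stem):
        # Write three monochrome 'separation' files, one per color record,
        # loosely analogous to the three B&W strips of a separation master.
        r, g, b = Image.open(color_path).convert("RGB").split()
        for name, channel in (("R", r), ("G", g), ("B", b)):
            channel.save(f"{stem}_{name}.tif")

    def recombine(stem, out_path):
        # Rebuild the color image from the three separation files.
        Image.merge("RGB", [Image.open(f"{stem}_{n}.tif") for n in ("R", "G", "B")]).save(out_path)

    # make_separations("restored_frame.tif", "frame_sep")
    # recombine("frame_sep", "frame_rgb.tif")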

There is No Objective Truth in Photography (or Painting, Music…)

All photography is an illusion. Using a lens, a photo-sensitive element of some sort and a box to restrict the image to only the light coming through the lens, a photograph is a rendering of what is before the lens. Nothing more. Distorted and limited by the photographer’s choice of point of view, lens, aperture, shutter speed, film/sensor and so on, the resultant image – if correctly executed – reflects at most the inner vision of the photographer’s mind and perception of the original scene. Every photograph has a story (some more boring than others).

One of the great challenges of photography (and possibly one of the reasons that until quite recently this art form was not taken seriously) is that on first glance many photos appear to be just a ‘copy of reality’ – and therefore contain no inherent artistic value. Nothing could be further from the truth. It’s just that the ‘art’ hides in plain sight… Our collective, subjective and inaccurate view that photographs are ‘truthful’ – that they accurately represent the reality that was before the lens – is the root of the problem that engendered this post. We naively assume that photos can be trusted, that they show us the only possible view of reality. It’s time to grow up, to accept that photography, just like all other art forms, is a product of the artist, first and foremost.

Even the unassuming mom who is taking snapshots of her kids is making choices – whether she knows it or not – about each of the parameters already discussed. Since most snapshot (or cellphone) cameras have wide angle lenses, the ‘huge nose’ effect of close-up pics of babies and youngsters (that will haunt these innocent children forever on Facebook and Instagram – data never dies…) is just an objective artifact of lens choice and distance to subject. Somewhere along the line our moral compass got out of whack when we started drawing highly artificial lines around ‘acceptable editorial behavior’ and so on. An entirely different discussion – which is worthy of a separate post – can be had in terms of the photographer’s (or publisher’s) intention in sharing an image. If a deliberate attempt is made to misrepresent the scene – for financial gain, the allocation of justice, a change in power, etc. – that is an issue. But the same issue exists whether the medium that transports such a distortion is the written word, an audio recording, a painting or a 3D holograph. It is illogical to apply a set of standards or restrictions to one art form and not another, just to attempt to rein in inadvertent or deliberate distortions in a story that may be deduced from the art by an observer.

To use another common example, we have all seen many photos of a full moon rising behind a skyline, trees on a ridge, etc. – typically with a really large moon – and most observers just appreciate the image, the impact, the feeling. Even some rudimentary science, and a bit of experience with photography, reveals that most such images are a composite, with a moon image enlarged and layered in behind the foreground. The moon is simply never that large in relation to the rest of the image. In many cases I have seen, the lighting of the rest of the scene clearly shows that the foreground was shot at a different time of night than the moon (a full moon sits on the horizon only around dusk as it rises, or dawn as it sets). I have also seen many full moons in photographs that are at astronomically impossible locations in the sky, given the longitude and latitude of the foreground that is shown in the image.

An example of “Moon on Steroids”… The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it’s obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.

Why is it that such an esteemed, and talented, photographer as Steve McCurry is chastised for removing some distracting bits of an image – which in no way detracted from the ‘story’ of the image – and yet I dare say that no one in their right mind would criticize Leonardo da Vinci for including a physically impossible background (the almost mythological mountains and seas) in his rendition of Lisa Gherardini for his painting of “Mona Lisa”? As someone who has worked in the film/video/audio industry for my entire professional life, I can tell you with absolute certainty that no modern audio recording – from Adele to Ziggy Marley – is released that is not ‘digitally altered’ in some fashion. Period. It is just an absolute in today’s production environment to ‘clean up’ every track, every mix, every completed master – removing unwanted echoes, noise, coughs, burps, and other audio equivalents of Mr. Fisch’s plastic bag… and no one, ever, has complained about this or accused the artists of being ‘dishonest’.

This double standard needs to be put to rest permanently. It reflects poorly on those who take this position, demonstrating their lack of technical knowledge and a narrow perception of the art form of photography, and furthermore gives power to those whose only interest is to malign others and detract from the powerful impact that a great image can create. If ignorant observers can really believe that an airplane depicted in an image is ‘real’ (for the airplane to be of such a size in relation to the tunnel and ladders it would have to be flying at a massively illegal low altitude in that location), then such observers must take responsibility. Does the knowledge that this placement of the plane is ‘not real’ detract from the photo? Does the contraposition of ‘stillness vs movement’ (concrete and steel silo vs rapidly moving aircraft) create a visually stimulating image? Is it important whether that occurred ‘in reality’ or not? Would an observer judge it differently if this was a painting or a sketch instead of a photograph?

I love the art and science of photography. I am daily enamored with the images that talented and creative people all over the world share, whether a mixture of camera originals, composites, pure fiction created in the ‘darkroom’ or some combination of all. This is a wondrous art form, and must be supported at all costs. It’s not easy; it takes dedication, effort, skill, perseverance, money, time and love – just as any art form does. I would hope that we could move the conversation to what matters: ‘truth in advertising’. In a photo contest, nothing, repeat nothing, should matter except the image itself. Just like painting, sculpture, music, ceramics, dance, etc. – the observed ‘art’ should be judged only by the merits of the entity itself, without subjective expectations or philosophical distortions. If an image is used to reinforce a particular ‘story’ – whether for ethical, legal or news purposes – then both the words and the images must be authentic. Authentic does not mean ‘un-retouched’; it does mean that there is no ‘black lie’ in what is conveyed.

To summarize, let’s stop believing that photographs are ‘real’ – but let’s start accepting the art, craftsmanship, effort and focus that this medium brings to all of us. Let’s apply a common frame of reference to all forms of art, whether they be painting, writing, photography, music, etc. – in terms of authenticity and purpose. Would we chide Escher for attempting to fool us with visual cues of an impossible reality?

A Historical Moment: The Sylmar Earthquake of 1971 (Los Angeles, CA)

August 14, 2016 · by parasam

A little over 45-1/2 years ago, at a few seconds past 6:00AM on Feb. 9, 1971, I was jolted out of bed by a massive earthquake in Los Angeles. Or more accurately, the bed moved so far sideways that I fell on the floor… Perhaps a good thing, as the bookshelves over my bed promptly dumped all the books, and shelves, onto the bed which I had recently occupied. Other than the Kern County earthquake in 1952, this was the first major quake in California since the calamitous 1906 disaster in San Francisco. Although I went on to experience two more severe earthquakes in California (Loma Prieta / San Francisco in 1989; and Northridge / Los Angeles in 1994), this was the first in my lifetime. As a high school senior, already accepted to an engineering college where I would study physics – including geophysics (the study of earthquakes, among other things) – I knew instantly what was happening. Still, the force and sound amazed me: they were so much greater than I could have imagined.

At 6.6 on the Richter Scale, this was a massive, but not apocalyptic, event. The 1906 quake measured 7.8, the later Loma Prieta was 7.1 and the Northridge was 6.7 – however the ‘shaking index’ of this quake (on the Mercalli Intensity Scale, a measure of the actual movement perceived and damage caused by an earthquake) was “XI Extreme”, only one step from the end of the scale, which is labelled “Total Destruction of Everything”. In comparison, both the Loma Prieta (1989) and Northridge (1994) quakes measured “IX Violent” on the Mercalli Scale, two steps below this quake (the Sylmar Earthquake, 1971). The historical San Francisco earthquake of 1906 measured the same (XI Extreme) in four locations just to the north of San Francisco, but the city itself only felt “X Extreme” shaking intensity on the Mercalli Scale. Remember that most of the damage in the SF quake was from the subsequent fires, not the earthquake itself.

Bottom line: the 1971 Sylmar quake in Los Angeles produced the most destructive power of any earthquake in California since the Fort Tejon quake in 1857 (Richter 7.9). Technically the quake lasted for 12 seconds – it felt like a lot more than that! – and caused $553 million in damage [in 1971 dollars; that would be about $3.28 billion in 2016 dollars]. The recently completed freeway interchange in the north San Fernando Valley was destroyed, and took 2 years to rebuild – only to collapse again in the 1994 Northridge quake… it seems the structural engineers keep learning after the fact…

Unless you have lived through a massive earthquake such as this, you simply cannot grasp the physical intensity of the event. Words, even pictures, just fail. The noise is beyond incredible. The takeoff roll of a 747 aircraft is a whisper in comparison; the sight of the houses across the street rising and falling as if on a wave 20 feet high is beyond comprehension. I will never take the word ‘stability’ for granted again. Ever. We take for granted that the earth under our feet is a constant. It doesn’t move. Or it’s not supposed to… The disorientation is extreme.

As soon as I had determined that my family was safe, and our home, although damaged, was not in immediate danger (and I had turned off gas and electricity), I got in my truck and headed off to where I had heard on the radio that the epicenter of damage was: the northern San Fernando Valley. The quake occurred at 6AM; I arrived at the destroyed freeway interchange (for those that know LA, the I-5/Hiway 14 interchange) at about 10AM, when the images below were taken. No police, fire or other emergency personnel had arrived yet. It was surreal: a few other curious humans like myself wandering around – and absolute quiet. One of the busiest freeways in Los Angeles was empty. The only sound was an occasional crow. The real major calamity (the collapse of the Olive View and Veterans Hospitals, which ended up with a death toll of 62) was several miles to the south of my location. There were cracks in the ground several feet wide and many feet deep. The Sylmar Converter Station (a major component of the LA basin electrical power grid) was totaled, with large transformers lying helter-skelter on their sides. I was reminded of H. G. Wells’ “War of The Worlds”, with a strange and previously unknown landscape in front of me.

Although I had already been shooting images for almost 10 years (starting with a Kodak Brownie), the pictures below were probably my first real entry into photojournalism and streetphotography. They were taken with a plastic (both lens and body) Kodak Instamatic 100 that exposed a proprietary 26mm-square film (Kodacolor-X, ASA 64), so the resulting prints are not of the best quality. While I may still discover the negatives someday on a long-forgotten shelf, all I have at present are the prints from 1971. I’ve scanned, restored and processed them to recover as much of the original integrity as possible, but there is no retouching.

I’ve also included scans of some newspapers in LA from the first days after the quake (I found those folded up and stored away – reading some of the ads from that period was almost as interesting as the major stories…)

Destroyed overpass and roadway on the I-5.

Severe damage to McMahan’s Furniture in north San Fernando.

Roadway torn asunder by the force of the earthquake.

Fallen transformer at the Sylmar Power station.

Damaged office building, north San Fernando.

Lateral deformation of the ground near the Sylmar Power station, close to the epicenter of the earthquake.

Fallen roadway on the I-5.

Observers looking at the earthquake damage to the I-5 a few hours after the initial event.

Overpass on the I-5 / Hiway 14 interchange showing separation and subsiding of the roadway.

Massive split in the roadway of the I-5, looking north.

Destroyed overpass on the I-5 freeway.

Brick walls destroyed in suburban San Fernando Valley.

The I-5 / Hiway 14 interchange was still in the final stages of construction when the earthquake hit. This image is deceptive as most of the damage was at the top of the frame. However the roadway that leads in from the lower right is split just after the overpass (detail in another image).

Fallen overpass on the I-5 freeway. A few seconds after this image was taken a major aftershock occurred. I ran really, really fast from under the remaining bridge elements…

I-5 / Hiway 14 freeway interchange damage

LA Times, Feb 10 1971, page 1

The Valley News, Feb 9 1971, page 1

The Valley News, Feb 9 1971, page 2

Kodak Instamatic 100, released in 1963 with a price of $16 at the time.

Where Did My Images Go? [the challenge of long-term preservation of digital images]

August 13, 2016 · by parasam

Littered Memories – Photos in the Gutter (© 2016 Paul Watson, used with permission)

Image Preservation – The Early Days

After viewing the above image from fellow streetphotographer Paul Watson, I wanted to update an issue I’ve addressed previously: the major challenge that digital storage presents in terms of long-term archival endurance and accessibility. Back in my analog days, when still photography was a smelly endeavor in the darkroom for both developing and printing, I slowly learned about careful washing and fixing of negatives, how to make ‘museum’ archival prints (B&W), and the intricacies of dye-transfer color printing (at the time the only color print technology that offered substantial lifetimes). Prints still needed carefully restricted environments for both display and storage, but if all was done properly, a lifetime of 100 years could be expected for monochrome prints and even longer for carefully preserved negatives. Color negatives and prints were much more fragile, particularly color positive film. The emulsions were unstable, and many of the early Ektachrome slides (and motion picture films) faded rapidly after only a decade or so. A well-preserved dye-transfer print could be expected to last for almost 50 years if stored in the dark.

I served for a number of years as a consultant to the Los Angeles County Museum of Art, advising them on photographic archival practices, particularly relating to motion picture films. For many years the Bing Theatre hosted a fantastic set of screenings that offered a rare tapestry of great movies from the past – and helped many current directors and others in the industry become better at their craft. In particular, Ron Haver (the film historian, preservationist and director of film programs at LACMA with whom I worked during that time) was instrumental in supervising the restoration, screening and preservation of many films that would now be in the dust bin of history without his efforts. I learned much from him, and those principles last to this day, even in a digital world that he never experienced.

One project in particular was interesting: bringing the projection room (and associated film storage facilities) up to Los Angeles County Fire Code so we could store and screen early nitrate films from the 1920’s. [For those that don’t know, nitrate film is highly flammable, and once on fire will quite happily burn under water until all the film is consumed. It makes its own oxygen while burning…] Fire departments were not great fans of this stuff… Due both to the large (and expensive) challenges in projecting this type of film and to the continual degradation of the film stock, almost all remaining nitrate film has since been digitally scanned for preservation and safety. I also designed the telecine transfer bay for the only approved nitrate scanning facility in Los Angeles at that time.

What this all underscored was the considerable effort, expense and planning required for long-term image preservation. Now, while we may think that once digitized, all our image preservation problems are over – the exact opposite is true! We have ample evidence (glass plate negatives from the 1880’s, B&W sheet film negatives from the early 1900’s) that properly stored monochrome film can easily last 100 years or more, and is as readable today as the day the film was exposed, with no extra knowledge or specialized machinery. B&W movie film is just as stable, as long as it is printed onto safety film base. Due to the inherent fading of so many early color emulsions, the only sure method for preservation (in the analog era) was to ‘color separate’ the negative film and print the three layers (cyan, magenta and yellow) onto three individual B&W films – the so-called “Technicolor 3-stripe process”.

Digital Image Preservation

The problem with digital image preservation is not the inherent technology of digital conversion – done well, that can yield a perfect reproduction of the original after a theoretically infinite time period. The challenge is how we store, read and write the “0s and 1s” that make up the digital image. Our computer storage and processing capability has moved so quickly over the last 40 years that almost all digital storage from more than 25 years ago is somewhere between difficult and impossible to recover today. This problem is growing worse, not better, with every succeeding year…

IBM 305 RAMAC Disk System 1956: IBM ships the first hard drive in the RAMAC 305 system. The drive holds 5MB of data at $10,000 a megabyte.

This is a hard drive. It holds less than 0.01% of the data of the smallest iPhone today…

One of the earliest hard drives available for microcomputers, c.1980. The cost then was $350/MB; today’s cost (based on a 1TB hard drive) is $0.00004/MB – a factor of 8,750,000 times cheaper.

Paper tape digital storage as used by DEC PDP-11 minicomputers in 1975.

Paper punch card, a standard for data entry in the 1970s.

Floppy disks: (from left) 8in; 5-1/4″; 3-1/2″. The standard data storage format for microcomputers in the 1980s.

As can be seen from the above examples, digital storage has changed remarkably over the last few decades. Even though today we look at multi-terabyte hard drives and SSDs (Solid State Drives) as ‘cutting edge’, will we chuckle 20 years from now when we look back at something as archaic as spinning disks or NAND flash memory? With quantum memory, holographic storage and other technologies already showing promise in the labs, it’s highly likely that even the 60TB SSD disks that Samsung just announced will take their place alongside 8-inch floppy disks in a decade or so…

And the physical storage medium is actually the least of the problem. Yes, if you put your ‘digital negatives’ on a floppy disk 15 years ago and now want to read them you have a challenge at hand… but with patience and some time on eBay you could probably assemble the appropriate hardware to retrieve the data onto a modern computer. The bigger issue is that of the data format: both of the drives themselves and of the actual image files. The file systems – the method used to catalog and find the individual images stored on whatever kind of physical storage device, whether ancient hard drive or floppy disk – have changed rapidly over the years. Most early file systems are no longer supported by current operating systems, so hooking up an old drive to a modern computer won’t work.

Even if one could find a translator from an older file system to a current one (there is very limited capability in this regard; many older file systems can literally only be read by a computer as old as the drive), that doesn’t solve the next issue: the image format itself. The issue of ‘backwards compatibility’ is one of the great Achilles’ heels of the entire IT industry. The huge push by all vendors to keep their users relentlessly updating to the latest software, firmware and hardware exists largely so that these same companies can avoid having to support older versions of hardware and software. This is not a totally self-serving issue (although there are significant costs and time involved in doing so) – frequently certain changes in technology just can’t support an older paradigm any longer. The earliest versions of Photoshop files, PICT, etc. are not easily opened with current applications. Anyone remember Corel Draw?? Even ‘common interchange’ formats such as TIFF and JPEG have evolved, and not every version is supported by every current image processing application.

The more proprietary and specific the image format is, the more fragile it is – in terms of archival longevity. For instance, it may seem that the best archival format would be the camera raw format – essentially the full original capture directly from the camera. File types such as RAW, NEF, CR2 and so on are typical. However, each of these is proprietary and typically has about a 5-year life span in terms of active application support by the vendor. As camera models keep changing – more or less on a yearly cycle – the raw formats change as well. Third-party vendors, such as Adobe (with Photoshop), are under no obligation to support earlier raw formats forever… and as previously discussed, the challenge of maintaining backwards compatibility grows more complex with each passing year. There will always come a time when such formats will no longer be supported by currently active image retrieval, viewing or processing software.

Challenges of Long-Term Digital Image Preservation

Therefore two major challenges must be resolved in order to achieve long term storage and future accessibility of digital images. The first is the physical storage medium itself, whether that is tape (such as LTO-6), hard disk, SSD, optical, etc. The second is the actual image format. Both must be usable and able to transfer images back to the operating system, device and software that is current at the time of retrieval in order for the entire exercise of archival digital storage to be successful. Unfortunately, this is highly problematic at this time. As the pace of technological advance is exponentially increasing, the continual challenge of obsolescence becomes greater every year.

Currently there is no perfect answer to this dilemma – the only solution is proactivity on the part of the user. One must accommodate the continuing obsolescence of physical storage media, file systems, operating systems and file formats by moving the image files, on a regular and continual basis, to current versions of all of the above. Typically this is an exercise that must be repeated every five years – at current rates of technological development. For uncompressed (or losslessly compressed, such as TIFF-LZW/ZIP) images, other than the cost of the move/update there is no impact on the digital image – that is one of the plus sides of digital imagery. However, many images (almost all, unless you are a professional photographer or filmmaker) are stored in a lossy compressed format (JPG, MPG, MOV, WMV, etc.). These images/movies will experience a small degradation in quality each time they are re-encoded into a new or updated format (a bit-for-bit copy loses nothing). The amount and type of artifacts introduced are highly variable, depending on the level of compression and many other factors. The bottom line is that after a number of re-encoding cycles of a compressed file (say 10) it is quite likely that a visible difference from the original file can be seen.

Therefore, particularly for lossy-compressed files, a balance must be struck between updating often enough to avoid technical obsolescence and making the fewest format conversions over time in order to avoid image degradation. [It should be noted that potential image degradation will typically only be due to changing/updating the image file format, not moving a bit-perfect copy from one type of storage medium to another.]
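
The generation-loss effect is easy to demonstrate. The sketch below (Python, using the Pillow and NumPy libraries; the source path is just an example) re-encodes a JPEG repeatedly and measures how far each generation drifts from the original – a bit-for-bit copy would of course show zero drift:

    import io
    from PIL import Image, ImageChops
    import numpy as np

    def generation_loss(src_path, cycles=10, quality=85):
        # Re-encode a JPEG repeatedly and report how far it drifts from the original.
        # A bit-for-bit copy would show zero drift; only re-encoding loses information.
        original = Image.open(src_path).convert("RGB")
        current = original
        for generation in range(1, cycles + 1):
            buffer = io.BytesIO()
            current.save(buffer, format="JPEG", quality=quality)
            buffer.seek(0)
            current = Image.open(buffer).convert("RGB")
            error = np.asarray(ImageChops.difference(original, current), dtype=float).mean()
            print(f"generation {generation:2d}: mean pixel error {error:.2f} / 255")

    # generation_loss("holiday_snapshot.jpg")   # path is just an example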

This process, while a bit tedious, can be automated with scripts or other similar tools, and for the casual photographer or filmmaker will not be too arduous if undertaken every five years or so. It’s another matter entirely for professionals with large libraries, or for museums, archives and anyone else with thousands or millions of image files. A lot of effort, research and thought has been applied to this problem by these professionals, as it represents a large cost in both time and money – and no solution other than what’s been described above has been discovered to date. Some useful practices have been developed, both to preserve the integrity of the original images and to reduce the time and complexity of the upgrade process.
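
As an example of what such automation might look like, here is a minimal sketch (Python; the directory names are hypothetical) that copies an image library onto a fresh storage medium, verifies every file with a SHA-256 checksum, and writes a manifest that can be re-checked at the next refresh cycle:

    import hashlib
    import shutil
    from pathlib import Path

    def refresh_archive(source_dir, dest_dir):
        # Copy an image library onto a new storage medium, verify every file by
        # SHA-256 checksum, and write a manifest that can be re-checked later.
        source, dest = Path(source_dir), Path(dest_dir)
        manifest_lines = []
        for src_file in sorted(p for p in source.rglob("*") if p.is_file()):
            dst_file = dest / src_file.relative_to(source)
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)
            src_hash = hashlib.sha256(src_file.read_bytes()).hexdigest()
            if hashlib.sha256(dst_file.read_bytes()).hexdigest() != src_hash:
                raise OSError(f"copy verification failed for {src_file}")
            manifest_lines.append(f"{src_hash}  {dst_file.relative_to(dest)}")
        (dest / "MANIFEST.sha256").write_text("\n".join(manifest_lines) + "\n")

    # refresh_archive("/archive/images_2016", "/new_drive/images_2016")   # example paths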

Methods for Successful Digital Image Archiving

A few of those practices are shared below to serve as a guide for those who are interested. Further searching will yield a large number of sites and a wealth of information that address this challenge in detail.

  • The most important aspect of ensuring a long-term archival process that will result in the ability to retrieve your images in the future is planning. Know what you want, and how much effort you are willing to put in to achieve that.
  • While this may be a significant undertaking for professionals with very large libraries, even a few simple steps will benefit the casual user and can protect family albums for decades.
  • In addition to the steps discussed above (updating storage media, OS and file systems, and image formats) another very important aspect is “Where do I store the backup media?” Making just one copy and having it on the hard drive of your computer is not sufficient. (Think about fire, theft, complete breakdown of the computer, etc.)
    • The current ‘best practices’ recommendation is the “3-2-1” approach: keep 3 copies of the archival backup, on at least 2 different types of storage media, with at least 1 copy off-site. A simple but practical example (for a home user) would be: one copy of your image library on your computer; a 2nd copy on a backup drive that is only used for archival image storage; and a 3rd copy either on another hard drive that is stored in a vault environment (fireproof data storage or equivalent) or in cloud storage.
    • A note on cloud storage: while this can be convenient, be sure to check the fine print on liability, access, etc. of the cloud provider. This solution is typically feasible for up to a few terabytes; beyond that the cost can become significant, particularly when you consider storage for 10-20 years. Also, will the cloud provider be around in 20 years? What insurance do they provide in terms of buyout, bankruptcy, etc.? And while the storage-medium and file-system issues are not a concern with cloud storage (it is incumbent on the cloud provider to keep those updated), you are still personally responsible for the image format issue: the cloud vendor is only storing a set of binary files; they cannot guarantee that these files will be readable in 20 years.
    • Unless you have a fairly small image library, current optical media (DVD, etc.) is impractical: even dual-layer DVDs only hold about 8GB of formatted data. In addition, since you would need to burn these DVDs yourself, note that the longevity of ‘burned’ DVDs is not great (compared to the pressed DVDs you purchase when you buy a movie). With DVD usage falling off noticeably, this is most likely not a good long-term archival format.
    • The best current solution for off-premise archival storage is to physically store external hard drives (or SSDs) with a well known data vaulting vendor (Iron Mountain is one example). The cost is low, and since you only need access every 5 years or so the extra cost for retrieval and re-storage (after updating the storage media) is acceptable even for the casual user.
  • Another vitally important aspect of image preservation is metadata. This is the information about the images. If you don’t know what you have, then future retrieval can be difficult and frustrating. In addition to the very basic metadata (file name, simple description, and a master catalog of all your images) it is highly desirable to put in place a metadata schema that can store keywords and a multitude of other information about the images. This can be invaluable to yourself or others who may want to access these images decades in the future. A full discussion of image metadata is beyond the scope of this post, but there is a wealth of information available. One notable challenge is that while the most basic (and therefore future-proof) still image formats in use today [JPG and TIFF] can embed a limited set of metadata fields (EXIF/IPTC/XMP), there is no guarantee that richer descriptive metadata will survive inside the files themselves – so it is best also stored externally and cross-referenced somehow. Photoshop files on the other hand store both metadata and the image within the same file – but as discussed above this is not the best format for archival storage. There are techniques to cross-reference information to images: from purpose-built archival image software to a simple spreadsheet that uses the filename of the image as a key to the metadata (a minimal sketch of this approach appears after this list).
  • An important reminder: the whole purpose of an archival exercise is to be able to recover the images at a future date. So test this. Don’t just assume. After putting it all in place, pull up some images from your local offline storage every 3-6 months and see that everything works. Pull one of your archival drives from off-site storage once a year and test it to be sure you can still read everything. Set up reminders in your calendar – it’s so easy to forget until you need a set of images that was accidentally deleted from your computer, only to find out then that your backup did not work as expected.
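
As promised in the metadata item above, here is a minimal sketch of the ‘filename as key’ approach (Python; the paths and column names are hypothetical) – it builds an empty CSV catalog that can be filled in by hand or by a cataloging tool and archived alongside the images themselves:

    import csv
    from pathlib import Path

    IMAGE_TYPES = {".jpg", ".jpeg", ".tif", ".tiff"}
    FIELDS = ["filename", "description", "keywords", "date", "location"]

    def build_catalog(image_dir, catalog_csv):
        # Create a simple CSV catalog keyed by filename, ready to be filled in
        # with descriptions, keywords and other metadata, and stored with the images.
        image_dir = Path(image_dir)
        with open(catalog_csv, "w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=FIELDS)
            writer.writeheader()
            for image in sorted(image_dir.rglob("*")):
                if image.suffix.lower() in IMAGE_TYPES:
                    writer.writerow({"filename": str(image.relative_to(image_dir)),
                                     "description": "", "keywords": "", "date": "", "location": ""})

    # build_catalog("archive_2016", "archive_2016_catalog.csv")   # example paths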

A final note: if you look at entities that store valuable images as their sole activity (the Library of Congress, the National Archives, etc.) you will find [for still images] that the two most popular image formats are low-compression JPG and uncompressed TIFF. It’s a good place to start…

 
