
iPhone4S – Section 4c: Camera Plus Pro app

April 13, 2012 · by parasam

This app is similar in design and function to Camera+, but is not made by the same developers. (It costs $1.99 at the time of this post – $1 more than Camera+.) The biggest differences are:

  • Ability to tag photos
  • More setup options on selections (self-timer, burst mode, resolution, time lapse, etc.)
  • More sharing options
  • Ability to add date and copyright text to photo
  • A ‘Quick Roll’ (light table type function) has 4 ‘bins’ (All, Photos, Video, Private – can be password protected)
  • Can share photos via WiFi or FTP
  • Bing search from within app
  • Separate ‘Digital Flash’ filter with 3 intensity settings
  • Variable ‘pro’ adjustments in edit mode (Brightness, Saturation, Hue, Contrast, Sharpness, Tint, Color Temperature)
  • Different filters than Camera+, including special ‘geometric distortion’ filters
  • Quick Roll design for selecting which photos to Edit, Share, Sync, Tag, etc.
  • Still Camera Functions  [NOTE: the Video Camera functions will be discussed separately later in this series when I compare video apps for the iPhone]
    • Ability to split Focus area from Exposure area
    • Can lock White Balance
    • Flash: Off/On (for the 4 & 4S); this feature changes to “Soft Flash” for iPhone 3GS and Touch 4G.
    • Front or Rear camera selection
    • Digital Zoom
    • 4 Shooting Modes: Normal/Stabilized/Self-Timer/Burst (part of the below Photo Options menu)
    • Photo options:
      • Sound On/Off
      • Zoom On/Off
      • Grid Lines On/Off
      • Geo Tags On/Off
      • SubMenu:
        • Tags:  select and add tags from list to the shot; or add a new tag
        • Settings:  a number of advanced settings for the app
          • Photos:
            • Timer (select the time delay for self-timer: 2-10 seconds in 1 second increments)
            • Burst Mode (select the number of pictures taken when in burst mode: 3-10)
            • Resolution (Original [3264×2448]; Medium [1632×1224]; Low [816×612]) – NOTE: these resolutions are for the iPhone4S; each hardware model supported by this app has a different set of resolutions determined by its sensor. Essentially they are Full, Half and Quarter resolution. The exact numbers for each model are in the manual.
            • Copyright (sets the copyright text and text color)  [note: this is a preset – the actual ‘burn in’ of the copyright notice into the image is controlled during Editing]
            • Date (toggle date display on/off; set date format; text color)
        • Videos (covered in later section)
        • Private Access Restriction (Set or Change password for the Private bin inside the Quick Roll)
        • Tags (edit, delete, add tag names here)
        • Share (setup and credentials for social sharing services are entered here):
          • Facebook
          • Twitter
          • Flickr
          • Picasa
          • YouTube (for videos)
        • Review (a link to review the app)
      • Info:
        • Some ‘adware’ is here for other apps from this vendor, and a list of FAQs, Tips, Tricks (all of which are also in the manual available for download as a pdf from here)
  • Live Filters:
    • A set of 18 filters that can be applied before taking your shot, as opposed to adding a filter after the shot during Editing.
      • BW
      • Vintage
      • Antique
      • Retro
      • Nostalgia
      • Old
      • Holga
      • Polaroid
      • Hipster
      • XPro
      • Lomo
      • Crimson
      • Sienna
      • Emerald
      • Bourbon
      • Washed
      • Arctic
      • Warm
    • A note on image quality using Live Filters. A bit more about the filters will be discussed below when we dive into the filter details, but some test shots using various Live Filters show a few interesting things:
      • The pixel resolution stays the same whether the filter is on or off (3264×2448 in the case of the iPhone4S).
      • While the Live Filter function is fully active during preview of an image, once you take the shot there is a delay of about 3 seconds while the filtering is actually applied to the image. Some moving icons on the screen notify the user. Remember that the screen is 960×640 while the full image is 3264×2448 (13 X larger!) so it takes a few seconds to filter all those additional pixels.
      • This does mean that when using Live Filter you can’t use Burst Mode (it is turned off when you turn on a Live Filter), and you can’t shoot that rapidly.
      • Although the pixel dimensions are unchanged, the size of the image file is noticeably smaller when using Live Filters than when not. This can only mean that the jpeg compression ratio is higher (same amount of input data; smaller output data; compression ratio mathematically must be higher).
      • I first noticed this when I went to email myself a full resolution image from my phone to my laptop [faster for one or two pix than syncing with iTunes] as I’m researching for this blog – the images were on average 1.7MB instead of the 2.7MB average for normal iPhone shots.
      • I tested against four other camera apps, including the native Camera app from Apple, and all of them delivered images averaging 2.7MB per image.
      • I then tested this app (Camera Plus Pro) in Unfiltered mode, and the size of the output file jumps up to an average of 2.3MB per image. Not as high as most of the others, but about 35% larger than the filtered files – which implies a correspondingly lower compression ratio for the unfiltered shots. (See the short worked example just after this feature list.) I’ll run some more objective tests during the filter analysis section below, but both in file size and visual observation, the filtered images appear more highly compressed.
      • This does not mean that a more compressed picture is inferior, or softer, etc. – it is highly dependent on subject material, lighting, etc. But, what is true is that a more highly compressed picture will tend to show artifacts more easily in difficult parts of the frame than will the same image at a lower compression ratio.
      • Just all part of my “Know Your Tools” motto…
  • Edit Functions
    • Crop
      • Freeform (variable aspect ratio)
      • Square (1:1 aspect ratio)
      • Rectangular (2:3 aspect ratio) [portrait]
      • Rectangular (3:2 aspect ratio) [landscape]
      • Rectangular (4:3 aspect ratio) [landscape]
    • Rotation
      • Flip Horizontal
      • Right
      • Left
      • Flip Vertical
    • Digital Flash [a filter that simulates flash illumination]
      • Small
      • Medium
      • Large
    • Adjust [image parameter adjustments]
      • Brightness
      • Saturation
      • Hue
      • Contrast
      • Sharpness
      • Tint
      • Color Temperature
    • Effects
      • Nostalgia – 9 ‘retro’ effects
        • Coffee
        • Retro Red
        • Vintage
        • Nostalgia
        • Retro
        • Retro Green
        • 70s
        • Antique
        • Washed
      • Special – 9 custom effects
        • XPro
        • Pop
        • Lomo
        • Holga
        • Diana
        • Polaroid
        • Rust
        • Glamorize
        • Hipster
      • Color – 9 tints
        • Black & White
        • Sepia
        • Sunset
        • Moss
        • Lucifer
        • Faded
        • Warm
        • Arctic
        • Allure
      • Artistic – 9 special filters
        • HDR
        • Fantasy
        • Vignette
        • Grunge
        • Pop Art
        • GrayScale
        • Emboss
        • Xray
        • Heat Signature
      • Distortion – 9 geometric distortion (warping) filters
        • Center Offset
        • Pixelate
        • Bulge
        • Squeeze
        • Swirl
        • Noise
        • Light Tunnel
        • Fish Eye
        • Mirror
    • Borders
      • Original (no border)
      • 9 border styles
        • Thin White
        • Rounded Black
        • Double Frame
        • White Frame
        • Polaroid
        • Stamp
        • Torn
        • Striped
        • Grainy
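Before moving on to the camera functions, here is the short worked example promised above for the file-size and compression-ratio observation. This is purely my own back-of-the-envelope arithmetic using the average file sizes quoted in the notes above; the app itself reports nothing of the sort.

```python
# Rough arithmetic behind the file-size observation above (illustrative only;
# the 1.7 / 2.3 / 2.7 MB figures are the averages quoted in the text).
width, height = 3264, 2448            # iPhone4S native resolution
raw_bytes = width * height * 3        # uncompressed 8-bit RGB, roughly 24 MB

for label, size_mb in [("Live Filter shots", 1.7),
                       ("this app, unfiltered", 2.3),
                       ("other apps / native camera", 2.7)]:
    ratio = raw_bytes / (size_mb * 1024 * 1024)
    print(f"{label:27s} ~{ratio:4.1f}:1 JPEG compression")
# Same pixel count in, smaller file out: the compression ratio must be higher.
```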

Camera Functions

[Note:  Since this app has a manual available for download that does a pretty fair job of describing the features and how to access and use them, I will not repeat that information here. I will discuss and comment on the features where I believe this will add value to my audience. You may want to have a copy of the manual available for clarity while reading this blog.]

The basic use and function of the camera are addressed in the manual; what I will discuss here are the Live Filters. I have run a series of tests to attempt to illustrate the use of the filters, and provide some basic analysis of each filter to help the user understand how the image will be affected by the filter choice. The resolution of the image is not reduced by the use of a Live Filter – in my case (testing with iPhone4S) the resultant images are still 3264×2448 – native resolution. There are of course the effects of the filter, which in some cases can reduce apparent sharpness, etc.

A note on my testing procedure:  In order to present a uniform set of comparison images to the reader, and have them be similar to my standard test images, the following steps were taken:

Firstly:  my standard test images that I use to analyze filters/scenes/etc for any iPhone camera app consist of two initial test images:  a technical image (calibrated color and grayscale image), and a ‘real-world’ image – a photo I shot of a woman in the foreground with a slightly out-of-focus background. The shot has a wide range of lighting, color, a large amount of skin tone for judging how a given filter changes that important parameter, and a fairly wide exposure range.

The original source for the calibration chart was a precision 35mm slide (Kodak Q60, Ektachrome) that was scanned on a Nikon Super Coolscan 5000ED using Silverfast custom scanner software. The original image was scanned at 4000dpi, yielding a 21-megapixel image sampled at 16 bits per pixel. This image was subsequently reduced in gamut (from ProPhotoRGB to sRGB), size (to match the native iPhone4S resolution of 3264×2448) and bit depth (8 bits per pixel). The image processing was performed using Photoshop CS5.5 in a fully color-calibrated workflow.
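For readers who like to check the numbers, that 21-megapixel figure follows directly from the scan resolution. A quick sketch of the arithmetic, assuming the standard 36 × 24 mm image area of a 35mm slide:

```python
# Quick check of the scan arithmetic (my own sketch; assumes the standard
# 36 x 24 mm image area of a 35mm slide).
dpi = 4000
width_px  = round(36 / 25.4 * dpi)    # ~5669
height_px = round(24 / 25.4 * dpi)    # ~3780
print(width_px, height_px, round(width_px * height_px / 1e6, 1), "megapixels")  # ~21.4
```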

The source for the ‘real-world’ image was initially captured using a Nikon D5000 DSLR fitted with a Nikkor 200mm F2.8 prime lens (providing an equivalent focal length of 300mm compared to full-frame 35mm – the D5000 is a 2/3 size sensor [4288×2848]). The exposure was 1/250 sec @ f5.6 using camera raw format – no compression. That camera body captures in sRGB color space, and although it outputs a 16-bit per pixel format, the sensor is really not capable of anything more than 12 bits in a practical sense. The image was processed in Photoshop CS5.5 in a similar manner to the above to yield a working image of 3264×2448, 8 bits per pixel, sRGB.

This image pair is used throughout my blog for analyzing filters, by importing the files into each camera app.

For this test of Live Filters, I needed to actually shoot with the iPhone, since there is no way using this app to apply the Live Filters to a pre-existing image. To replicate the images discussed above as closely as possible, the following procedure was used:

For the calibration chart, the same source image was used (Kodak Q60), this time as a precision print in 4″x5″ size. These prints were manufactured by Kodak under rigidly controlled processes and yield a highly accurate reflective target. (Most unfortunately, with the demise of Kodak, and film/print processing in general, these are no longer available. Even with the best of storage techniques, prints will fade and become inaccurate for calibration. It will be a challenge to replace these…)  I used my iPhone4S to make the exposures under controlled lighting (special purpose full-spectrum lighting set to 5000°K).

For the ‘real-world’ image, I wanted to stay with the same image of the woman for uniformity, and it provides a good range of test values. To accomplish that (and be able to take the pictures with the iPhone) was challenging, since the original shot was impossible to duplicate in real life. I started with the same original high resolution image (in Photoshop) in its original 16bit, high-gamut format. I then printed that image using a Canon fine art inkjet printer (Pixma Pro 9500 MkII), using a 16 bit driver, onto high quality glossy photo paper at a paper size of 13″ x 19″. At a print density of 267dpi, this yielded an image of over 17 megapixels when printed. The purpose was to ensure that no subsampling of printed pixels would occur when photographed by the 8megapixel sensor in the iPhone. [Nyquist sampling theory demands a minimum of 2x sampling – 16megapixels in this case – to ensure that.] I photographed the image with the same controlled lighting as used above for the calibration chart. I made one adjustment to each image for normalization purposes:  I mapped the highest white level in the photograph (the clipped area on the subject’s right shoulder – which was pure white in the original raw image) to just reach pure white in the iPhone image. This matched the tonal range for each shot, and made up for the fact that even with a lot of light in the studio it wasn’t enough to fully saturate the tiny iPhone sensor. No other adjustments of any kind were made. [This adjustment was carried out by exporting the original iPhone image to Photoshop to map the levels].
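The same kind of quick check covers the print sizing and the sampling margin described above. Again, this is just my own arithmetic following the reasoning in the text:

```python
# Print pixels at 13 x 19 inches, 267 dpi, versus the 8 MP iPhone4S sensor.
print_px  = (13 * 267) * (19 * 267)   # ~17.6 million printed "pixels"
sensor_px = 3264 * 2448               # ~8.0 million sensor pixels
print(round(print_px / 1e6, 1), round(sensor_px / 1e6, 1),
      print_px >= 2 * sensor_px)      # True: the 2x margin the text calls for
```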

While even further steps could have been taken to make the process more scientifically accurate, the purpose here is one of relative comparison, not absolute measurement, so I feel the steps taken are sufficient for this exercise.

The Live Filters:

Live Filter = BW

Live Filter = BW

The BW filter provides a monochrome adaptation of the original scene. It is a high contrast filter; this can clearly be seen in the test chart, where columns 1-3 are solid black, as well as all grayscale chips from 19-22. Likewise, on the highlight end of the scale, chips 1-3 have no differentiation. The live image shows this as well, with a strong contrast throughout the scene.

Live Filter = Vintage

Live Filter = Vintage

The Vintage filter is a warming filter that adds a reddish-brown cast to the image. It increases the contrast some (not nearly as much as the previous BW filter) – this can be seen in the chart in the area of columns 1-2 and rows A-J. The white and black ends of the grayscale are likewise compressed. Any cool pastel colors either turn white or a pale warm shade (look at columns 9-11). The live image shows these effects; note particularly how the man’s blue shirt and shorts change color remarkably. The increase in contrast, coupled with the warming tint, does tend to make skin tones blotchy – note the subject’s face and chest.

Live Filter = Antique

Live Filter = Antique

The Antique filter offers a large amount of desaturation, a cooling of what color remains, and an increase in contrast. Basically, only pinks and navy blues remain in the color spectrum, and the chart shows the clipping of blacks and whites. The live image shows very little saturation, only some dark blue remains, with a faint pink tinge on what was originally the yellow sign in the window.

Live Filter = Retro

Live Filter = Retro

The Retro filter attempts to recreate the look of cheap film cameras of the 1960’s and 1970’s. These low quality cameras often had simple plastic lenses, light leaks due to imperfect fit of components, etc. The noticeable chromatic aberrations of the lens and other optical ‘faults’ have now seen a resurgence as a style, and that is emulated with digital filters in this and others shown below. This particular filter shows a general warming, but with a pronounced red shift in the low lights. This is easily observable in the gray scale strip on the chart.

Live Filter = Nostalgia

Live Filter = Nostalgia

Nostalgia offers another variation on early low-cost film camera ‘look and feel’. As opposed to the strong red shift in the lowlights of Retro, this filter shifts the low-lights to blue. There is also an increase in saturation of both red and blue; notice that in the chart. The green column, #18, hardly has any change in saturation from the original, while the reds and blues show noticeable increases, particularly in the low-lights. The highlights have a general warming trend, shown in the area bounded by columns 13-19 and rows A-C. The live shot shows the strong magenta/red shift that this filter caused on skin tones.

Live Filter = Old

Live Filter = Old

The Old filter applies significant shifts to the tonal range. It’s not exactly a high contrast filter, although that result is apparent in the ratio of the highlight brightness to the rest of the picture. There is a strong overall reduction in brightness – in the chart all differentiation is lost below chip #16. There is also desaturation; this is more obvious when studying the chart. The highlights, like many of these filter types, are warmed toward the yellow spectrum.

Live Filter = Holga

Live Filter = Holga

The Holga filter is named after the all-plastic camera of the same name – from Hong Kong in 1982. A 120 format roll-film camera, it takes its name from the phrase “ho gwong” – meaning ‘very bright’. The marketing people twisted that phrase into HOLGA. The actual variations show a warming in the highlights and cooling (blue) in the lowlights. The contrast is also increased. In addition, as with many of the Camera Plus Pro filters, there is a spatial element as well as the traditional tonal and chromatic shifts:  in this case a strong red tint in one corner of the frame. My tests appear to indicate that the placement of this (which corner) is randomized, but the actual shape of the red tint overlay is relatively consistent. Notice that in the chart the overlay is in the upper right corner, while in the live shot it moved to the lower right. There is also desaturation; this is noticeable in her skin, as well as in the central columns of the chart.

Live Filter = Polaroid

Live Filter = Polaroid

The Polaroid filter mimics the look of one of the first ‘instant gratification’ cameras – the forerunner of digital instant photography. The PLC look (Polaroid Land Camera) was contrasty with crushed blacks, tended towards blue in the shadows, and had slightly yellowish highlights. This particular filter has a pronounced magenta shift in the skin tones that is not readily apparent from the chart – one of the reasons I always use these two different types of test images.

Live Filter = Hipster

Live Filter = Hipster

The Hipster filter effect is another of the digital memorials to the original Hipstamatic camera – a cheap all plastic 35mm camera that shot square photos. Copied from an original low-cost Russian camera, it was produced in only 157 units by the two brothers who invented it. The camera cost $8.25 in 1982 when it was introduced. With a hand-molded plastic lens, this camera was another of the “Lo-Fi” group of older analog film cameras whose ‘look’ has once again become popular. The CameraPlusPro version shows pronounced red in the midtones, crushed blacks (see columns 1-2 in the chart and chips #18 and below), along with increased contrast and saturation. In my personal view, this look is harsher and darker than the actual Hipstamatic film look, which tended towards raised blacks (a common trait of cheap film cameras: the backs always leaked a bit of light, so a low level ‘fog’ of the film base always tended to raise deep blacks [areas of no light exposure in a negative] to a dull gray); a softer look (lower contrast due to raised blacks) and brighter highlights. But that’s purely a personal observation; the naming of filters is arbitrary at best, which is why I like to ‘look under the hood’ with these detailed comparisons.

Live Filter = XPro

Live Filter = XPro

The XPro filter as manifested by the CameraPlusPro team looks very similar to their Nostalgia version, but the XPro has highlights that are more white than the yellow of Nostalgia. The term XPro comes from ‘cross-process’ – what happens when you process film in the wrong developer, for instance developing E-6 transparency film in C-41 color negative chemistry. The effects of this process are highly random, although there is a general tendency towards high contrast, unnatural colors, and staining. In this instance, the whites are crushed a bit, blacks tend blue, and contrast is raised.

Live Filter = Lomo

Live Filter = Lomo

The Lomo filter effect is designed to mimic some of the style of photograph produced by the original LOMO Plc camera company of Russia (Leningrad Optical Mechanical Amalgamation). This was a low cost automatic 35mm film camera. While still in production today, this and similar cameras account for only a fraction of LOMO’s production – the bulk is military and medical optical systems – and are world class… Due to the low cost of components and production methods, the LOMO camera exhibited frequent optical defects in imaging, color tints, light leaks, and other artifacts. While anathema to professional photographers, a large community that appreciates the quirky effects of this (and other so-called “Lo-Fi” or Low Fidelity) cameras has sprung up with a world-wide following. Hence the Lomo filter…

This particular instance shows increased contrast and saturation, warming in the highlights, green midtones, and like some other CameraPlusPro filters, an added spatial effect (the red streak – again randomized in location, it shows in upper left in the chart, lower right in the live shot). [Pardon the pilot error:  the soft focus of the live shot was due to faulty autofocus on that iPhone shot – but I didn’t notice it until comping the comparison shots several days later, and didn’t have the time to reset the environment and reshoot for one shot. I think the important issues can be resolved in spite of that, but did not want my readers to assume that soft focus was part of the filter!]

Live Filter = Crimson

Live Filter = Crimson

The Crimson filter is, well, crimson! A bit overstated for my taste, but if you need a filter to make your viewers think of “The Shining” then this one’s for you! What more can I say. Red. Lots of it.

Live Filter = Sienna

Live Filter = Sienna

The Sienna filter always makes me think of my early art school days, when my well-meaning parents thought I needed to be exposed to painting… (burnt sienna is a well-known oil pigment, an iron oxide derivative that is reddish-brown. My art instructor said “think tree trunks”.)   Alas, it didn’t take me (or my instructor) long to learn that painting with oils and brushes was not going to happen in this lifetime. Fortunately I discovered painting with light shortly after that, and I’ve been in love with the camera ever since. The Sienna as shown here is colder than the pigment, a somewhat austere brown. The brown tint is more evident in the lowlights; the whites warm up just slightly. As in many of the CameraPlusPro filters, the blacks are crushed, which creates an overall look of higher contrast, even if the midtone and highlight contrast levels are unchanged (look at the grayscale in the chart). There is also an overall desaturation.

Live Filter = Emerald

Live Filter = Emerald

Emerald brings us, well, green… along with what should now be familiar:  crushed blacks, increased contrast, desaturation.

Live Filter = Bourbon

Live Filter = Bourbon

The Bourbon filter resembles the Sienna filter, but has a decidedly magenta cast in the shadows, while the upper midtones are yellowish. The lowered saturation is another common trait of the CameraPlusPro filters.

Live Filter = Washed

Live Filter = Washed

The Washed filter actually looks more like ‘unwashed’ print paper to me… Let me explain:  before the world of digits descended on photography, during the print process (well, this applies to film as well but the effect is much better known in the printing process), after developing, stopping and fixing, you need to wash the prints. Really, really well. For a long time, like 30-45 minutes under flowing water. This is necessary to wash out almost all of the residual thiosulfate fixing chemical – if you don’t, your prints will age prematurely, showing bleaching and staining, due to the slow annihilation of elemental silver in the emulsion by the remaining thiosulfate. The prints will end up yellowed and a bit faded, in an uneven manner. In this digital approximation, the biggest difference is (as usual for this filter set) the crushed blacks. In the chemical world, just the opposite would occur, as the blacks in a photographic print have the highest accumulation of silver crystals (that block light or cover up the white paper underneath). The other attributes of this particular filter are: strongly yellowed highlights, lowlights that tend to blue, increased contrast and raised saturation.

Live Filter = Arctic

Live Filter = Arctic

This Arctic filter looks cold! Unlike the true arctic landscape (which is subtle but has an amazing spectrum of colors), this filter is actually a tinted monochrome. The image is first reduced to black and white, then tinted with a cold blue. This is very clear by looking at the chart. It’s an effect.

Live Filter = Warm

Live Filter = Warm

After looking so cold in the last shot, our subject is better when Warm. Slightly increased saturation and a yellow-brown cast to the entire tonal range are the basic components of this filter.

Edit Functions

This app has 6 groups of edit functions:  Crop, Rotate, Flash, Adjust, Filters and Borders. The first two are self-evident, and are more than adequately explained in the manual. The “how-to” of the remaining functions I will leave to the manual; what will be discussed here are examples of each variable in the remaining four groups.

Flash – also known as “Digital Flash” – a filter designed to brighten an overly dark scene. Essentially, this filter attempts to bring the image levels up to what they might have been if a flash had been used to take the photograph initially. As always, this will be a ‘best effort’ – nothing can take the place of a correct exposure in the first place. The most frequent ‘side effects’ of this type of filter are increased noise in the image (since the image was dark in the first place – and therefore already had substantial noise due to the nature of CCD/CMOS sensors – raising the brightness level also raises the visibility of that noise), and white clipping of those areas of the picture that did receive normal, or near-normal, illumination.
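To make the idea concrete, here is a tiny sketch of the kind of tone mapping a ‘digital flash’ might apply. A gamma-style lift is a common choice for this sort of thing; I have no idea what math this particular app actually uses, so treat this purely as an illustration.

```python
import numpy as np

def digital_flash(img, strength=0.6):
    """Toy 'digital flash': a gamma-style lift. Deep shadows rise only slightly
    in absolute terms, midtones get the biggest boost. img is scaled 0..1."""
    gamma = 1.0 - strength * 0.5          # strength 0.6 -> gamma 0.7
    return np.clip(img ** gamma, 0.0, 1.0)

levels = np.array([0.02, 0.25, 0.50, 0.90])   # shadow, low-mid, mid, highlight
print(digital_flash(levels))                   # ~[0.06, 0.38, 0.62, 0.93]
```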

This app supports 3 levels of ‘flash’ [brightness elevation] – I call this ‘shirt-sizing’ – S, M, L.  Below are 4 screen shots of the Flash filter in action: None, Small, Medium, Large.

This filter attempts to be somewhat realistic – it is not just an across-the-board brightness increase. For instance, objects that are very dark in the original scene (such as her handbag or the interior revealed by the doorway in the rear of the scene) are only increased slightly in level, while midtones and highlights are raised much more substantially.

Flash: Original

Flash: Small / Medium / Large

Adjust – there are 7 sub-functions within the Adjust edit function: Brightness, Saturation, Hue, Contrast, Sharpness, Tint and Color Temperature. Each function has a slider that is initially centered; moving it left reduces the named parameter, moving it right increases it. Once moved off the zero center position, a small “x” on the upper right of the associated icon can be tapped to return the slider to the middle position, effectively turning off any changes. Examples for each of the sub-functions are shown below.

Brightness: Minimum / Original / Maximum

Saturation: Minimum / Original / Maximum

Hue: Minimum / Original / Maximum

Contrast: Minimum / Original / Maximum

Sharpness: Minimum / Original / Maximum

Tint: Minimum / Original / Maximum

Color Temperature: Minimum / Original / Maximum

Color Temperature: Cooler / Original / Warmer

Filters – There are 45 image filters in the Edit section of the app. Some of them are similar or identical in function to the filters of the same name that were discussed in the Live Filter section above. These are contained in 5 groups: Nostalgia, Special, Colorize, Artistic and Distortion. The examples below are similar in format to the presentation of the Live Filters. The source images for these comparisons are imported files (see the note at the beginning of this section for details).

Nostalgia filters:

Nostalgia filter = Coffee

Nostalgia filter = Coffee

The Coffee filter is rather well-named:  it looks like your photo had weak coffee spread over it! You can see from the chart that, as usual for many of the CameraPlusPro filters, increased contrast, crushed blacks and desaturation are the base on which a subtle warm-brown cast is overlaid. The live example shows the increased contrast around her eyes, and the skin tones in both the woman and the man in the background have tended to pale brown as opposed to the original red/yellow/pink.

Nostalgia filter = Retro Red

Nostalgia filter = Retro Red

The Retro Red filter shows increased saturation, a red tint across the board (highlights and lowlights), and does not alter the contrast – note all the steps in the grayscale are mostly discernible – although there is a slight blending/clipping of the top highlights. The overall brightness levels are raised from midtones through the highlights.

Nostalgia filter = Vintage

Nostalgia filter = Vintage

The Vintage filter here in the Edit portion of the app is very similar to the filter of the same name in the Live Filter section. The overall brightness appears higher, but some of that may be due to the different process of shooting with a live filter and applying a filter in the post-production process. This is more noticeable in the live shot as opposed to the charts – a comparison of the “Vintage” filter test charts from the Live Filter section and the Edit section shows almost a dead match. This filter is a warming filter that adds a reddish-brown cast to the image. It increases the contrast some – this can be seen in the chart in the area of columns 1-2 and rows A-J. The white and black ends of the grayscale are likewise compressed. Any cool pastel colors either turn white or a pale warm shade (look at columns 9-11). The live image shows these effects; note particularly how the man’s blue shirt and shorts change color remarkably. The increase in contrast, coupled with the warming tint, does tend to make skin tones blotchy – note the subject’s face and chest.

Nostalgia filter = Nostalgia

Nostalgia filter = Nostalgia

The Nostalgia filter, like Vintage above, is basically the same filter as the instance offered in the Live Filter section. The main difference is the Live Filter version is more magenta and a bit darker than this filter. Also the cyans tend green more strongly in this version of the filter – check out columns 12-13 in the chart. Some increased contrast, pronounced yellows in the highlights and increased red/blue saturation are also evident.

Nostalgia filter = Retro

Nostalgia filter = Retro

The Retro filter, as in the version in the Live Filter section, attempts to recreate the look of cheap film cameras of the 1960's and 1970's. These low quality cameras often had simple plastic lenses, light leaks due to imperfect fit of components, etc. The noticeable chromatic aberrations of the lens and other optical ‘faults’ have now seen a resurgence as a style, and that is emulated with digital filters in this and others shown below. This particular filter shows a general warming, but with a pronounced red shift in the low lights. This is easily observable in the gray scale strip on the chart.

Nostalgia filter = Retro Green

Nostalgia filter = Retro Green

The Retro Green filter is a bit of a twist on Retro, with some of Nostalgia thrown in (yes, filter design is a lot like cooking with spices…). The lowlights are similar to Nostalgia, with a blue cast; the highlights show the same yellows as both Retro and Nostalgia; the big difference is in the midtones, which are now strongly green.

Nostalgia filter = 70s

Nostalgia filter = 70s

The 70s filter gives us some desaturation, no change in contrast, red shift in midtones and lowlights, yellow shift in highlights.

Nostalgia filter = Antique

Nostalgia filter = Antique

The Antique filter is similar to the Antique Live Filter, but is much lighter in terms of brightness. There is a large degree of desaturation, some increase in contrast, significant brightness increase in the highlights, and very slight color shifts at the ends of the grayscale:  yellow in the highlights, blue in the lowlights.

Nostalgia filter = Washed

Nostalgia filter = Washed

The Washed filter here in the Edit section is very different from the filter of the same name in Live Filters. The only real similarity is the strongly yellowed highlights. This filter, like many of the others we have reviewed so far, has a much lighter look (brightness levels raised), a very slight magenta shift, slightly increased contrast, enhanced blues in the lowlights and some increase in cyan in the midtones.

Special filters:

Special filter = XPro

Special filter = XPro

The XPro filter in the Edit functions has a different appearance than the filter of the same name in Live Filters. This instance of the digital emulation of a ‘cross-process’ filter is less contrasty, less magenta, and has more yellow in the highlights. The chart shows the yellows in the highlights, blues in the lowlights, and increased saturation. The live shot reveals the increased white clipping on her dress (due to increased contrast), as well as the crushed blacks (notice the detail of the folds in her leather handbag are lost).

Special filter = Pop

Special filter = Pop

The Pop filter brings the familiar basic tonal adjustments (increased contrast, with crushed whites and blacks, an overall increase in midtone and highlight brightness levels) but this time the lowlights have a distinct red/magenta cast, with midtones and highlights tending greenish/yellow. This is particularly evident in the live shot. Look at the black doorway in the original, which is now very reddish in the filtered shot.

Special filter = Lomo

Special filter = Lomo

The Lomo filter here in the Edit area is rather different than the same named filter in Live Filters. This particular instance shows increased contrast and saturation, yellowish warming in the highlights, and like some other CameraPlusPro filters, an added spatial effect (the red splotch – in this example the red tint is in the same lower right corner for both chart and woman – if the placement is random, then this is just coincidence – but… it makes it look like the lowlights in the grayscale chart are pushed hard to red:  not so, it’s just that’s where the red tint overlay is this time…). Look at the top of her handbag in the live shot to see that the blacks are not actually shifted red. As with many other CameraPlusPro filters, the whites and blacks are crushed some – you can see on her dress how the highlights are now clipped.

Special filter = Holga

Special filter = Holga

The Holga filter is one where there is a marked similarity between the Live Filter and this instance as an Edit Filter. This version is lighter overall, with a more greenish-yellow cast, particularly in the shadows. The vignette effect is stronger in this Edit filter as well.

Special filter = Diana

Special filter = Diana

The Diana filter is another ‘retro camera’ effect:  based on, wow – surprise, the Diana camera… another of the cheap plastic cameras prevalent in the 1960's. The vignetting, light leaks, chromatic aberrations and other side-effects of a $10 camera have been brought into the digital age. In a similar fashion to several of the previous ‘retro’ filters discussed already, you will notice crushed blacks & highlights, increased contrast, odd tints (in this case unsaturated highlights tend yellow), increased saturation of colors – and a slight twist in this filter due to even monochrome areas becoming tinted – the silver pendant on her chest now takes on a greenish/yellow tint.

Special filter = Polaroid

Special filter = Polaroid

The Polaroid filter here in the Edit section resembles the effects of the same filter in Live Filters in the highlights (tends yellow with some mild clipping), but diverges in the midtones and shadows. Overall, this instance is lighter, with much less magenta shift in the skin tones. The contrast is not as high as in the Live Filter version, and the saturation is a bit lower.

Special filter = Rust

Special filter = Rust

The Rust filter is really very similar to old-style sepia printing:  this is a post-tint process to a monochrome image. In this filter, the image is first rendered to a black & white image, then colorized with a warm brown overlay. The chart clearly shows this effect.
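As an aside, this ‘monochrome then tint’ recipe is simple enough to sketch in a few lines. The snippet below is my own illustration of the general technique (with made-up tint values), not the app’s code.

```python
import numpy as np

def mono_then_tint(rgb, tint=(1.0, 0.65, 0.40)):
    """Step 1: reduce to luminance. Step 2: multiply by a warm-brown tint.
    rgb holds 0..1 values with the color channels on the last axis."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])       # Rec.601 luminance weights
    return np.clip(luma[..., None] * np.array(tint), 0.0, 1.0)

pixel = np.array([[0.2, 0.5, 0.8]])                    # one bluish pixel
print(mono_then_tint(pixel))                           # now a warm brownish gray
```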

Special filter = Glamorize

Special filter = Glamorize

The Glamorize filter is a high contrast effect, with considerable clipping in both the blacks and the whites. The overall color balance is mostly unchanged, with a slight increase in saturation in the midtones and lowlights. The highlights, on the other hand, are somewhat desaturated.

Special filter = Hipster

Special filter = Hipster

The Hipster filter follows the same pattern as other filters that have the same name in both the Live Filters section and the Edit section: the Edit version is usually lighter with higher brightness levels, less of a magenta cast in skin tones and lowlights, and a bit less contrast. Still, in relation to the originals, the Hipster has the typical crushed whites and blacks, raised contrast, and in this case an overall warming (red/yellow) of midtones and highlights.

Colorize filters:

Colorize filter = Black & White

Colorize filter = Black & White

The Black & White filter here is almost identical to the effects produced by the same filter in the Live Filter section. A comparison of the chart images shows that. The live shots also render in a similar manner, with as usual the Edit filter being a bit lighter with slightly lower contrast. This is yet another reason to always evaluate a filter with at least two (and the more, the better) different types of source material. While digital filters offer a wealth of possibilities that optical filters never could, there are very fundamental differences in how these filters work.

At a simple level, an optical filter is far more predictable across a wide range of input images than a digital filter. The more complex a digital filter becomes (and many of the filters discussed here that attempt to emulate a multitude of ‘retro’ camera effects are quite complex), the more unexpected results are possible. A Wratten #85 warming filter, by contrast, is really very simple (an orange filter that essentially partially blocks bluish/cyan light) – that action will occur no matter what the source image is.

A filter such as Hipster, for example, attempts to mimic what is essentially a series of composited effects from a cheap analog film camera:  chromatic aberration of the cheap plastic lens, spherical lens aberration, light leaks, vignetting due to incomplete coverage of the film (sensor) rectangle, focus anomalies due to imperfect alignment of the focal plane of the lens with the film plane, etc. etc. Trying to mimic all this with mathematics (which is what a digital filter does, it simply applies a set of algorithms to each pixel) means that it’s impossible for even the most skilled visual programmer to fully predict what outputs will occur from a wide variety of inputs.
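To illustrate just how simple the per-pixel math can be at the ‘Wratten #85’ end of the spectrum, here is a toy warming filter. The gain values are made up for illustration; a real filter emulation, as discussed above, would be far more involved.

```python
import numpy as np

def warm(rgb, r_gain=1.10, g_gain=1.00, b_gain=0.80):
    """Crude per-pixel warming: boost red a little, cut blue. Every pixel gets
    exactly the same arithmetic, regardless of what the image contains."""
    return np.clip(rgb * np.array([r_gain, g_gain, b_gain]), 0.0, 1.0)

print(warm(np.array([[0.5, 0.5, 0.5]])))   # a neutral gray becomes a warm gray
```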

Colorize filter = Sepia

Colorize filter = Sepia

The Sepia filter is very similar to the Rust filter – it’s another ‘monochrome-then-tint’ filter. This time instead of a reddish-brown tint, the color overlay is a warm yellow.

Colorize filter = Sunset

Colorize filter = Sunset

The Sunset filter brings increased brightness, crushed whites and blacks, increased contrast and an overall warming towards yellow/red. Looks like it’s attempting to emulate the late afternoon light.

Colorize filter = Moss

Colorize filter = Moss

The Moss filter is, well, greenish… It’s a somewhat interesting filter, as most of the tinting effect is concentrated solely on monochromatic midtones. The chart clearly shows this. The live shot demonstrates this as well: the saturated bits keep their colors, while the neutrals turn minty-green. Note his shirt and her dress; the yellow sign stays yellow, and skin tones/hair don’t take on that much color.

Colorize filter = Lucifer

Colorize filter = Lucifer

The Lucifer filter is – surprise – a reddish warming look. There is an overall desaturation, followed by a magenta/red cast to midtones and lowlights. A slight decrease in contrast actually gives this filter a more faded, retro look than ‘devilish’, and in some ways I prefer this look to some of the previous filters with more ‘retro-sounding’ names.

Colorize filter = Faded

Colorize filter = Faded

The Faded filter offers a desaturated, but contrasty, look. Usually I interpret a ‘faded’ look to mean the kind of visual fading that light causes on a photographic print, where all the blacks and strongly saturated colors fade to a much lighter, softer tone. In this case, much of the color has faded, but the luminance is unchanged (in terms of brightness) and the contrast is increased, resulting in the crushed whites and blacks common to Camera Plus Pro filter design.

Colorize filter = Warm

Colorize filter = Warm

The Warm filter is basically a “plus yellow” filter. Looking at the chart you can see that there is an across-the-board increase in yellow. That’s it.

Colorize filter = Arctic

Colorize filter = Arctic

The Arctic filter is, well, cold. Like several of the other tinted monochromatic filters (Rust, Sepia), this filter first renders the image to a monochrome version, then tints it at all levels with a cold blue color.

Colorize filter = Allure

Colorize filter = Allure

The Allure filter is similar to the Warm filter – an even application of a single color increase – in this case magenta. There is also a slight increase in contrast.

Artistic filters:

Artistic filter = HDR

Artistic filter = HDR

The HDR filter is an attempt to mimic the result from ‘real’ HDR (High Dynamic Range) photography. Of course without true double (or more) exposures, this is not possible, but since the ‘look’ that some instances of HDR processing produce shows increased contrast, saturation and so on, this filter emulates some of that. Personally, I believe that true HDR photography should be indistinguishable from a ‘normal’ image – except that it should correctly map a very wide range of illumination levels. A lot of “HDR” images tend to be a bit ‘gimmicky’ with excessive edge glow, false saturation, etc. While this can make an interesting ‘special effect’ I think that it would better serve the imaging community if we correctly labeled those images as ‘cartoon’ or some other more accurate name – those filter side-effects really have nothing to do with true HDR imaging. Nevertheless, to complete the description of this filter, it is actually quite ‘color-neutral’ (no cast), but does add contrast, particularly edge contrast; and significant vibrance and saturation.

Artistic filter = Fantasy

Artistic filter = Fantasy

The Fantasy filter is another across-the-board ‘color cast’ filter, this time with an increase in yellow-orange. Virtually no change in contrast, just a big shift in color balance.

Artistic filter = Vignette

Artistic filter = Vignette

The Vignette filter is a spatial filter, in that it really just changes the ‘shape’ of the image, not the overall color balance or tonal gradations. It mimics the light fall-off that was typical of early cameras whose lenses had inadequate covering power (the image rendered by the lens did not extend to the edges of the film). There is a tiny loss of brightness even in the center of the frame, but essentially this filter darkens the corners.
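A vignette like this is also easy to sketch: darken each pixel by its distance from the center of the frame. Again, this is my own toy illustration with an arbitrary falloff, not the app’s implementation.

```python
import numpy as np

def vignette(img, strength=0.6):
    """Darken toward the corners with a radial falloff. img is HxWx3, 0..1."""
    h, w = img.shape[:2]
    y, x = np.ogrid[:h, :w]
    # normalized distance from frame center: 0 at center, ~1 in the corners
    r = np.sqrt(((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2) / np.sqrt(2)
    return img * (1.0 - strength * r ** 2)[..., None]

frame = np.ones((480, 640, 3))
print(vignette(frame)[240, 320, 0], vignette(frame)[0, 0, 0])   # center vs. corner
```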

Artistic filter = Grunge

Artistic filter = Grunge

The Grunge filter is a combination filter:  both a spatial and tonal filter. It first, like past filters that are ‘tinted monochromatic’ filters, renders the image to black & white, then tints it – in this case with a grayish-yellow cast. There is also a marked decrease in contrast, along with elevated brightness levels. This is easily evident from the grayscale strip in the chart. In the live shot you can see her handbag is now a dark gray instead of black. The spatial elements are then added:  specialized vignetting, to mimic frayed or over-exposed edges of a print, as well as ‘scratches’ and ‘wrinkles’ (formed by spatially localized changes in brightness and contrast). All this combines to offer the look of an old, faded, bent and generally funky print.

Artistic filter = Pop Art

Artistic filter = Pop Art

The Pop Art filter is very much a ‘special effects’ filter. This particular filter is based on the solarization technique. This process (solarization) is in fact a rather complex and highly variable technique. It was initially discovered by Daguerre and others who first pioneered photography in the mid-1800’s. The name comes from the reversal of image tone of a drastically over-exposed part of an image:  in this case, pictures that included the sun in direct view. Instead of the image of the sun going pure white (on the print, pure black in the negative), the sun’s image actually went back to a light gray on the negative, rendering the sun a very dark orb in the final print. One of the very first “optical special effects” in the new field of photography. This is actually caused by halogen ions released within the halide grain by over-exposure diffusing to the grain surface in amounts sufficient to destroy the latent image.

In negatives, this is correctly known as the Sabattier effect after the French photographer, who published an article in Le Moniteur de la Photographie 2 in 1862. The digital equivalent of this technique, as shown in this filter, uses image tonal mapping computation to create high contrast bands where the levels of the original image are ‘flattened’ into distinct and constant brightness bands. This is clearly seen in the grayscale strip in the chart image. It is a very distinctive look and can be visually interesting when used in a creative manner on the correct subject matter.
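The ‘flattening into bands’ part of the digital version is essentially posterization, which takes only a line or two to sketch. This is my own illustration of the general technique, not the app’s exact mapping.

```python
import numpy as np

def posterize(gray, bands=4):
    """Collapse a continuous 0..1 tonal range into a few flat brightness bands."""
    return np.floor(gray * bands).clip(max=bands - 1) / (bands - 1)

ramp = np.linspace(0.0, 1.0, 8)     # a smooth grayscale ramp
print(posterize(ramp))               # now just four constant steps
```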

Artistic filter = Grayscale

Artistic filter = Grayscale

The Grayscale filter is just that:  the rendering of the original image into a grayscale image. The difference between this filter and the Black & White filters (in both Live Filters and this Edit section) is a much lower contrast. By comparing the grayscale strips in the original and filtered chart images, you can see there is virtually no difference. The Black & White filters noticeably increase the contrast.

Artistic filter = Emboss

Artistic filter = Emboss (40%)

Artistic filter = Emboss (100%)

The Emboss filter is another highly specialized effects filter. As can be seen from the chart image, the picture is rendered to a constant monochrome shade of gray, with only contrasting edges being represented by either an increase or decrease in brightness. This creates the appearance of a flat gray sheet that is ‘stamped’ or embossed with the outline of the image elements. High contrast edges are rendered sharply, lower contrast edges are softer in shape. Reading from left to right, a transition from dark to light is represented by a dark edge, from light to dark is shown as light edge. Since each of these Edit filters has an intensity slider, the effect’s strength can be ‘dialed in’ as desired. I have shown all the filters up to now at full strength, for illustrative purposes. Here I have included a sample of this filter at a 40% level, since it shows just how different a look can be achieved in some cases by not using a filter at full strength.
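For completeness, the classic emboss recipe is a flat mid-gray plus a directional brightness difference, so only edges show up as relief. Here is a bare-bones sketch of that standard technique; the app’s actual kernel and direction are unknown to me.

```python
import numpy as np

def emboss(gray, strength=1.0):
    """Toy emboss: start from flat mid-gray, add the diagonal brightness
    difference so only edges appear as light or dark relief (gray is 2-D, 0..1)."""
    out = np.full_like(gray, 0.5)
    diff = gray[:-1, :-1] - gray[1:, 1:]          # pixel minus down-right neighbor
    out[1:, 1:] = np.clip(0.5 + strength * diff, 0.0, 1.0)
    return out

step = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))      # a hard dark-to-light edge
print(emboss(step))          # mid-gray everywhere except a dark line at the edge
```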

Artistic filter = Xray

Artistic filter = Xray

The Xray filter is yet another ‘monochromatic tint’ filter, with the image first being rendered to a grayscale image, then (in this case) undergoing a complete tonal reversal (to make the image look like a negative), then finally a tint with a dark greenish-cyan color. It’s just a look (since all ‘real’ x-ray films are black and white only), but I’m certain at least one of the millions of people that have downloaded this app will find a use for it.

Artistic filter = Heat Signature

Artistic filter = Heat Signature

The Heat Signature filter is the final filter in this Artistic group. It is illustrative of a scientific imaging method whereby infrared camera images (that see only wavelengths too long for the human eye to see) are rendered into a visual color spectrum to help illustrate relative temperatures of the observed object. In the real scientific camera systems, cooler temperatures are rendered blue, the hottest parts of the image in reds. In between temperatures are rendered in green. Here, this mapping technique is applied against the grayscale. Blacks are blue, midtones are green, highlights are red.
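The mapping itself is straightforward to sketch: brightness drives a blue-to-green-to-red ramp. This is my own toy version of the idea, not the app’s exact palette.

```python
import numpy as np

def heat_map(gray):
    """Map luminance (0..1) onto a blue -> green -> red 'thermal' ramp."""
    t = np.clip(gray, 0.0, 1.0)
    r = np.clip(2.0 * t - 1.0, 0.0, 1.0)     # red appears in the highlights
    g = 1.0 - np.abs(2.0 * t - 1.0)          # green peaks in the midtones
    b = np.clip(1.0 - 2.0 * t, 0.0, 1.0)     # blue dominates the shadows
    return np.stack([r, g, b], axis=-1)

print(heat_map(np.array([0.0, 0.5, 1.0])))   # pure blue, green, red
```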

Distortion filters:

The geometric distortion filters are presented differently, since these are spatial filters only. There is no need, nor advantage, to using the color chart test image. I have presented each filter as a triptych, with the first image showing the control as found when the filter is opened within the app, the second image showing a manipulation of the “effects circle” (which can be moved and resized), and the third image is the resultant image after applying the filter. There are no intensity sliders on the distortion filters.

Geometric Filter: Center Offset - Initial / Targeted Area / Result

The Center Offset filter ‘pulls’ the image to the center of the circle, as if the image was on an elastic rubber sheet, and was stretched towards the center of the control circle.

Geometric Filter: Pixelate - Initial / Targeted Area / Result

The Pixelate filter distorts the image inside of the control circle by greatly enlarging the quantization factors in the affected area, causing a large ‘chunking’ of the picture. This renders the affected area virtually unrecognizable – often used in candid video to obfuscate the identity of a subject.

Geometric Filter: Bulge - Initial / Targeted Area / Result

The Bulge filter is similar to the Center Offset, but this time the image is ‘pulled into’ the control circle, as if a magnifying fish-eye lens was applied to just a portion of the image.

Geometric Filter: Squeeze - Initial / Targeted Area / Result

The Squeeze filter is somewhat the opposite of the Bulge filter, with the image within the control circle being reduced in size and ‘pushed back’ visually.

Geometric Filter: Swirl - Initial / Targeted Area / Result

The Swirl filter does just that:  takes the image within the control circle and rotates it. Moving the little dot controls the amount and direction of the swirl. She needs a chiropractor after this…

Geometric Filter: Noise - Initial / Targeted Area / Result

The Noise filter works in a similar way to the Pixelate filter, only this time large-scale noise is introduced, rather than pixelation.

Geometric Filter: Light Tunnel - Initial / Targeted Area / Result

The Light Tunnel filter is probably a nod to Star Trek – what part of our common culture has not been affected by that far-seeing series? Remember the ‘communicator’?  Flip type cell phones, invented 30 years later, looked suspiciously like that device…

Geometric Filter: Fish Eye - Initial / Result

The Fish Eye filter mimics what a ‘fish eye’ lens might make the picture look like. There is no control circle on this filter – it is a fixed effect. The center of the image is the center of the fish-eye effect. In this case, it’s really not that strong of a curvature effect, to me it looks about like what a 12mm lens (on a 35mm camera system) would look like. If you want to see just how wide a look is possible, go to Nikon’s site and look for examples of their 6.5mm fisheye lens. That is wide!

Geometric Filter: Mirror - Initial / Result

The Mirror filter divides the image down the middle (vertically) and reflects the left half of the image onto the right side. There are no controls – it’s a fixed effect.

Borders:

Borders: Thin White / Rounded Black / Double Frame

Borders: White Frame / Polaroid / Stamp

Borders: Torn / Striped / Grainy

Ok, that’s it. Another iPhone camera app dissected, inspected, respected. Enjoy.

iPhone4S – Section 4b: Camera+ app

March 19, 2012 · by parasam

Camera+ is a full-featured camera and editing application. The version described is 3.02.

Feature Sets:

  • Light Table design for selecting which photos to Edit, Share, Save or get Info.
  • Camera Functions
    • Ability to split Focus area from Exposure area
    • Can lock White Balance
    • Flash: Off/Auto/On/Torch
    • Front or Rear camera selection
    • Digital Zoom
    • 4 Shooting Modes: Normal/Stabilized/Self-Timer/Burst
  • Camera options:
    • VolumeSnap On/Off
    • Sound On/Off
    • Zoom On/Off
    • Grid On/Off
    • Geotagging On/Off
    • Workflow selection:  Classic (shoot to Lightbox) / Shoot&Share (edit/share after each shot)
    • AutoSave selection:  Lightbox/CameraRoll/Both
    • Quality:  Full/Optimized (1200×1200)
    • Sharing:  [Add social services for auto-post to Twitter, Facebook, etc.]
    • Notifications:
      • App updates On/Off
      • News On/Off
      • Contests On/Off
  • Edit Functions
    • Scenes
      • None
      • Clarity
      • Auto
      • Flash
      • Backlit
      • Darken
      • Cloudy
      • Shade
      • Fluorescent
      • Sunset
      • Night
      • Portrait
      • Beach
      • Scenery
      • Concert
      • Food
      • Text
    • Rotation
      • Left
      • Right
      • Flip Horizontal
      • Flip Vertical
    • Crop
      • Freeform (variable aspect ratio)
      • Original (camera taking aspect ratio)
      • Golden rectangle (1:1.618 aspect ratio)
      • Square (1:1 aspect ratio)
      • Rectangular (3:2 aspect ratio)
      • Rectangular (4:3 aspect ratio)
      • Rectangular (4:6 aspect ratio)
      • Rectangular (5:7 aspect ratio)
      • Rectangular (8:10 aspect ratio)
      • Rectangular (16:9 aspect ratio)
    • Effects
      • Color – 9 tints
        • Vibrant
        • Sunkiss’d
        • Purple Haze
        • So Emo
        • Cyanotype
        • Magic Hour
        • Redscale
        • Black & White
        • Sepia
      • Retro – 9 ‘old camera’ effects
        • Lomographic
        • ‘70s
        • Toy Camera
        • Hipster
        • Tailfins
        • Fashion
        • Lo-Fi
        • Ansel
        • Antique
      • Special – 9 custom effects
        • HDR
        • Miniaturize
        • Polarize
        • Grunge
        • Depth of Field
        • Color Dodge
        • Overlay
        • Faded
        • Cross Process
      • Analog – 9 special filters (in-app purchase)
        • Diana
        • Silver Gelatin
        • Helios
        • Contessa
        • Nostalgia
        • Expired
        • XPRO C-41
        • Pinhole
        • Chromogenic
    • Borders
      • None
      • Simple – 9 basic border styles
        • Thick White
        • Thick Black
        • Light Mat
        • Thin White
        • Thin Black
        • Dark Mat
        • Round White
        • Round Black
        • Vignette
      • Styled – 9 artistic border styles
        • Instant
        • Vintage
        • Offset
        • Light Grit
        • Dark Grit
        • Viewfinder
        • Old-Timey
        • Film
        • Sprockets

Camera Functions

After launching the Camera+ app, the first screen the user sees is the basic camera viewfinder.

Camera view, combined focus & exposure box (normal start screen)

On the top of the screen the Flash selector button is on the left, the Front/Rear Camera selector is on the right. The Flash modes are: Off/Auto/On/Torch. Auto turns the flash off in bright light, on in lower light conditions. Torch is a lower-powered continuous ‘flash’ – also known as a ‘battery-killer’ – use sparingly! Virtually all of the functions of this app are directed to the high-quality rear-facing camera – the front-facing camera is typically reserved for quick low-resolution ID snaps, video calling, etc.

On the bottom of the screen, the Lightbox selector button is on the left, the Shutter release button is in the middle (with the Shutter Release Mode button just to the right-center), and on the right is the Menu button. The Digital Zoom slider is located on the right side of the frame (Digital Zoom will be discussed at the end of this section). Notice in the center of the frame the combined Focus & Exposure area box (square red box with “+” sign). This indicates that both the focus and the exposure for the entire frame are adjusted using the portion of the scene that is contained within this box.

You will notice that the bottle label is correctly exposed and focused, while the background is dark and out of focus.

The next screen shows what happens when the user selects the “+” sign on the upper right edge of the combined Focus/Exposure area box:

Split focus and exposure areas (both on label)

Now the combined box splits into two areas:  a Focus area (square box with bull’s eye), and an Exposure area (a circle resembling the adjustable f-stop ring in a camera lens). The exposure is now measured separately from the focus – allowing more control over the composition and exposure of the image.

In this case the resultant image looks like the previous one, since both the focus and the exposure areas are still placed on the label, which has consistent focus and lighting.

In the next example, the exposure area is left in place – on the label – but the focus area is moved to a point in the rear of the room. You will now notice that the rear of the room has come into focus, and the label has gone soft – out of focus. However, since the exposure area is unchanged, the relative exposure stays the same – the label is well lit, but the room beyond is still dark.

Split focus and exposure areas, focus moved to rear of room

This level of control allows greater freedom and creativity for the photographer.  [please excuse the slight blurring of some of the screen shot examples – it’s not easy to hold the iPhone completely still while taking a screen shot – which requires simultaneously pressing the Home button and the Power button – even on a tripod]

The next image shows the results of selecting the little ‘padlock’ icon in the lower left of the image – this is the Lock/Unlock button for Exposure, Focus and White Balance (WB).

Showing 'lock' panel for White Balance (WB), exposure and focus

Each of the three functions (Focus, Exposure, White Balance) can be locked or unlocked independently.

Focus moved back to label, still showing lock panel

In the above example, the focus area has been moved back to the label, showing how the focus now returns to the label, leaving the rear of the room once again out of focus.

The next series of screens demonstrates the options revealed when the Shutter Release Mode button (the little gear icon to the right of the shutter button) is selected:

Shutter type 'Settings' sub-menu displayed, showing 4 options

The ‘Normal’ mode exposes one image each time the shutter button is depressed.

Shutter type changed to Stabilized mode

When the ‘Stabilizer’ mode is selected, the button icon changes to indicate this mode has been chosen. This mode monitors the stability of the iPhone camera (and releases the shutter automatically) once the Stabilizer Shutter Release button is depressed – it is NOT a true motion-stabilized lens as found in some expensive DSLR cameras. You still have to hold the camera still to get a sharp picture – this function just helps the user know that the camera is indeed still. Once the Stabilizer Shutter Release is pushed, it glows red if the camera is moving, and text on the screen urges the user to hold still. As the camera detects that motion has stopped (using the iPhone’s internal accelerometer), little beeps sound, the shutter button changes color from red to yellow to green, and then the picture is taken.
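
For the technically curious, the logic behind this kind of ‘wait until still, then fire’ shutter is simple to sketch. The snippet below is purely illustrative – `read_accelerometer` is a hypothetical stand-in for whatever motion-sensing API the platform actually provides, and the thresholds are made up, not taken from the app.

```python
import time

def wait_until_still(read_accelerometer, threshold=0.02, hold_time=0.4):
    """Block until the device has been (nearly) motionless for hold_time seconds.

    read_accelerometer : hypothetical callable returning (x, y, z) in g's -
                         stands in for whatever motion API the platform provides.
    """
    last_moving = time.time()
    prev = read_accelerometer()
    while True:
        time.sleep(0.05)
        cur = read_accelerometer()
        jitter = sum(abs(c - p) for c, p in zip(cur, prev))
        prev = cur
        if jitter > threshold:
            last_moving = time.time()          # still shaking - reset the clock
        elif time.time() - last_moving >= hold_time:
            return                             # steady long enough - fire the shutter
```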

Shutter type changed to Timer mode - screen shows beginning of 5sec countdown timer

The Self-Timer Shutter Release mode allows a time delay before the actual shutter release occurs – after the shutter button is depressed. The most common use for this feature is a self-portrait (of course you need either a tripod or some other method of securing the iPhone so the composition does not change!). This mode can also be useful to avoid jiggling the camera while pressing the shutter release – important in low light situations. The count-down timer is indicated by the numbers in the center of the screen. Once the shutter is depressed, the numbers count down (in seconds) until the exposure occurs. The default time delay is 5 seconds; this can be adjusted by tapping the number on the screen before the shutter button is selected. The choices are 5, 15 and 30 seconds.

Shutter type changed to Burst mode

The last of the four shutter release modes is the ‘Burst’ mode. This takes a short series of exposures, one right after the other. It can be useful for sports or other fast-moving activity, where the photographer wants to be sure of catching a particular moment. The number of exposures taken is a function of how long you hold down the shutter release – the camera keeps taking pictures as fast as it can for as long as you hold down the shutter.

There are a number of things to be aware of while using this mode:

  • You must be in the ‘Classic’ Workflow, not the ‘Shoot & Share’ (more on this below when we discuss that option)
  • The best performance is obtained when the AutoSave mode is set to ‘Lightbox’ – writing directly to the Camera Roll (using the ‘Camera Roll’ option) is slower, leading to more elapsed time between each exposure. The last option of AutoSave (‘Lightbox & CameraRoll’) is even slower, and not recommended for burst mode.
  • The resolution of burst photos is greatly reduced (from 3264×2448 down to 640×480). This is the only way the data from the camera sensor can be transferred quickly enough – but it is one of the big differences between the iPhone camera system and a DSLR. The full resolution is 8 megapixels, while the burst resolution is only 0.3 megapixels – more than 25x less resolution (see the quick pixel math below).
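
A quick back-of-the-envelope check of those numbers (plain Python, just arithmetic – nothing app-specific):

```python
# Rough pixel math for the full-resolution vs. burst-mode images (iPhone 4S
# numbers quoted above).
full_w, full_h = 3264, 2448      # full-resolution still
burst_w, burst_h = 640, 480      # burst-mode resolution

full_px = full_w * full_h        # ~7.99 million pixels (~8 MP)
burst_px = burst_w * burst_h     # 307,200 pixels (~0.3 MP)

print(f"full: {full_px/1e6:.1f} MP, burst: {burst_px/1e6:.2f} MP, "
      f"ratio: {full_px/burst_px:.0f}x")   # roughly 26x fewer pixels in burst mode
```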

Resultant picture taken with exposure & focus on label

The above shot is an actual unretouched image using the settings from the first example (focus and exposure areas both set on label of the bottle).

Here is an example of how changing the placement of the Exposure Area box within the frame affects the outcome of the image:

exposure area set on wooden desktop - normal range of exposure

exposure area set on white dish - resulting picture is darker than normal

exposure area set on black desk mat - resulting image is lighter than normal

To fully understand what is happening above, you need to remember that any camera light metering system sets the exposure assuming that you have placed the exposure area on a ‘middle gray’ value (Zone V). If you place the exposure measurement area on a lighter or darker part of the image, the exposure may not be what you envisioned. A full discussion of this topic is outside the scope of this blog – but it’s very important, so if you don’t know it, look it up.
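
To make the ‘middle gray’ behavior concrete, here is a minimal sketch (plain Python/NumPy, my own illustration – not anything from the app itself) of how a spot meter effectively works: it measures the average luminance inside the exposure box and shifts the whole exposure so that region lands on middle gray. Meter on something white and the shift is negative (the picture goes dark); meter on something black and the shift is positive (the picture goes light).

```python
import numpy as np

MIDDLE_GRAY = 0.18   # classic 18% reflectance "Zone V" target

def metering_shift_in_stops(image, box):
    """Return the exposure shift (in stops) a simple spot meter would apply.

    image : float array, values 0..1 (linear-ish luminance)
    box   : (y0, y1, x0, x1) - the exposure-area rectangle
    """
    y0, y1, x0, x1 = box
    measured = image[y0:y1, x0:x1].mean()       # average luminance in the box
    return np.log2(MIDDLE_GRAY / measured)      # stops needed to reach Zone V

# Illustration with made-up patches, not real photos:
scene = np.full((100, 100), 0.18)               # mostly middle-gray desktop
scene[:30, :30] = 0.85                          # a white dish
scene[70:, 70:] = 0.03                          # a black desk mat

print(metering_shift_in_stops(scene, (40, 60, 40, 60)))    # ~0 stops  (gray target)
print(metering_shift_in_stops(scene, (0, 30, 0, 30)))      # negative  -> image darkens
print(metering_shift_in_stops(scene, (70, 100, 70, 100)))  # positive  -> image lightens
```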

The Lightbox

The next step after shooting a frame (or 20) is to process (edit) the images. This is done from the Lightbox. This function is entered by pressing the little icon of a ‘film frame’ on the left of the bottom control bar.

empty Lightbox, ready to import a photograph for editing

The above case shows an empty Lightbox (which is how the app looks after all shots are edited and saved). If you have just exposed a number of images, they will be waiting for you in the Lightbox – you will not need to import them. The following steps are for when you are processing previously exposed images (it doesn’t matter whether they were shot with Camera+ or any other camera app – I sometimes shoot with my film camera, scan the film, import to the iPhone and edit with Camera+ in order to use a particular filter that is available there).

selection window of the Lightbox image picker

an image selected in the Lightbox image picker, ready for loading into the Lightbox for editing

image imported into the Lightbox, ready for an action

When entering the Edit mode after loading an image, the following screen is displayed. There are two buttons on the top:  Cancel and Done. Cancel returns the user to the Lightbox, abandoning any edits or changes made while in the Edit screen, while Done applies all the edits made and returns the user to the Lightbox where the resultant image can be Shared or Saved to the Camera Roll.

Along the bottom of the screen are two ribbons showing all the edit functions. The bottom ribbon selects the particular edit mode, while the top ribbon selects the actual Scene, Rotation, Crop, Effect or Border that should be applied. The first set of individual edit functions that we will discuss are the Scenes. The following screen shots show the different Scene choices in the upper ribbon.

Scene modes 1 - 5

Scene modes 6 - 9

Scene modes 10 - 14

Scene modes 15 - 17

The ‘Scenes’ that Camera+ offers are one of the most powerful functions of this app. Nevertheless, there are some quirks (mostly about the naming – and the most appropriate way to apply the Scenes, based on the actual content of your image) that will be discussed. The first thing to understand is the basic difference between Scenes and Effects. Both, at the most fundamental level, transform the brightness, contrast, color, etc. of the image (essentially the visual qualities of the image) – as opposed to the spatial qualities of the image that are adjusted with Rotation, Crop and Border. However, a Scene typically adjusts overall contrast, color balance, and sometimes white balance, brightness and so on. An Effect is a more specialized filter – often significantly distorting the original colors, changing brightness or contrast in a certain range of values, etc. – the purpose being to introduce a desired effect to the image. Many times a Scene can be used to ‘rescue’ an image that was not correctly exposed, or to change the feeling, mood, etc. of the original image. Another way to think about it: the result of applying a Scene will almost always still look as if the image had just been taken by the camera, while an Effect very often is clearly an artificially applied filter.

In order to best demonstrate and evaluate the various Scenes that are offered, I have assembled a number of images that show a “before and after” of each Scene type. Within each Scene pair, the left-hand image is always the unadjusted original image, while the right-hand image has the Scene applied. The first series of test images is constructed with two comparisons of each Scene type: the first image pair shows a calibrated color test chart, the second image pair shows a woman in a typical outdoor scene. The color chart can be used to analyze how various ranges of the image (blacks, grays, whites, colors) are affected by the Scene adjustment; while the woman subject image is often a good representation of how the Scene will affect a typical real-world image.

After all of the Scene types are shown in this manner, I have added a number of sample images, with certain Scene types applied – and discussed – to better give a feeling of how and why certain Scene types may work best in certain situations.

Scene = Clarity

Scene = Clarity

The Clarity scene type is one of the most powerful scene manipulations offered – it’s not an accident that it is the first scene type in the ribbon… The power of this scene is not that obvious from the color chart, but it is more obvious in the human subject. This particular subject, while it shows most of the attributes of the clarity filter well, is not ideally suited for application of this filter – better examples follow at the end of this section. The real effect of this scene is to make an otherwise flat image ‘pop’ more and have more visual impact. However, just like in other parts of life – less is often more. My one wish is that an “intensity slider” were included with Scenes (it is only offered on Effects, not Scenes), as many times I feel that the amount of Clarity is overblown. There are techniques to accomplish a ‘toning down’ of Clarity, but those will only be discussed in Part 5 of this series – Tips & Techniques for iPhonography – as currently this requires the use of multiple apps, which is beyond the scope of the app introduction in this part of the series. The underlying enhancement appears to be a spatially localized increase of contrast, along with an increase in vibrance and saturation of color.

Notice in the gray scale of the chart that the edges of each density chip are enhanced – but the overall gamma is unchanged (the steps from white to black remain even and separately identifiable). Look at the color patches – there is an increase in saturation (vividness of color) – but this is more pronounced in colors that are already somewhat saturated. For instance, look at the pastel colors in the range of columns 9-19 and rows B-D:  there is little change in overall saturation. Now look at, for instance, the saturated reds and greens of columns 17-18 and rows J-L:  these colors have picked up noticeably increased saturation.

Looking at the live subject, the local increase in contrast can easily be seen in her face, with the subtle variations in skin tone in the original becoming much more pronounced in the Clarity scene type. The contrast between the light print on her dress and the gray background is more obvious with Clarity applied. Observe how the wrinkles in the man’s shirt and shorts are much more obvious with Clarity. Notice the shading on the aqua-colored steel piping in the lower left of the image: in the original the square pipes look very evenly illuminated, with Clarity applied there is a noticeable transition from light to dark along the pipe.
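
For readers who want a mental model of what a ‘local contrast plus saturation’ adjustment like this might look like under the hood, here is a rough sketch – my own guess, not the Camera+ implementation – built from a large-radius unsharp mask on the luminance channel plus a simple saturation gain. It assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def clarity_like(rgb, local_amount=0.6, sat_gain=1.2, radius=25):
    """Very rough 'Clarity'-style sketch: local contrast on luma + saturation boost.

    rgb : float array (H, W, 3), values 0..1
    """
    luma = rgb @ np.array([0.299, 0.587, 0.114])      # simple Rec.601 luminance
    blurred = gaussian_filter(luma, sigma=radius)     # large-radius local average
    detail = luma - blurred                           # deviation from that average
    new_luma = np.clip(luma + local_amount * detail, 0, 1)

    # Scale RGB so each pixel keeps its hue but takes on the new luminance,
    # then push chroma a little further from gray for the saturation boost
    # (pastels gain little in absolute terms, saturated colors gain more).
    scale = np.where(luma > 1e-4, new_luma / np.maximum(luma, 1e-4), 1.0)
    out = rgb * scale[..., None]
    gray = out @ np.array([0.299, 0.587, 0.114])
    out = gray[..., None] + sat_gain * (out - gray[..., None])
    return np.clip(out, 0, 1)
```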

Scene = Auto

Scene = Auto

The Auto scene type is sort of like using a ‘point and shoot’ camera set to full automatic – the user ideally doesn’t have to worry about exposure, etc. Of course, in this case, since the image has already been exposed, there is a limit to the corrections that can be applied! Basically this Scene attempts to ensure full tonal range in the image and will manipulate levels, gamma, etc. to achieve a ‘centered’ look. For completeness I have included this Scene – but as you will notice (and this should be expected), there is almost no difference between the before and after images. With correctly exposed initial images this is what should happen… It may not be apparent on the blog, but when looking carefully at the color chart images on a large calibrated monitor, the contrast is increased slightly at both ends of the gray scale:  the whites and blacks appear to clip a little bit.

Scene = Flash

Scene = Flash

The Flash scene type is an attempt to ‘repair’ an image that was taken in low light without a flash – when one was probably needed. Again, like any post-processing technique, this Scene cannot make up for something that is not there:  areas of shadow in which there was no detail in the original image can at best only be turned into lighter noise… But in many cases it will help underexposed images. The test chart clearly shows the elevation in brightness levels all through the gray scale – look for example at gray chip #11 – the middle gray value of the original is considerably lightened in the right-hand image. This Scene works best on images with an overall low light level – as you can see from both the chart and the woman, areas of the picture that are already well lit tend to be blown out and clipped.

Scene = Backlit

Scene = Backlit

The Backlit scene type tries to correct for the ‘silhouette’ effect that occurs when strong light is coming from behind the subject without corresponding fill light illuminating the subject from the front. While I agree that this is a useful scene correction, it is hard to perform after the fact – and personally I think this is one Scene that is not well executed. My issue is with the over-saturation of reds and yellows (Caucasian skin tones) that more often than not makes the subject look like a boiled lobster. I think this comes from the attempt to raise the perceived brightness of an overly dark skin tone (since the most common subject in such a situation is a person standing in front of a brightly lit background). You will notice on the chart that the gray scale is hardly changed from the original (a slight overall brightness increase) – but the general color saturation is raised. A very noticeable increase in red/orange/yellow saturation is obvious:  look at the red group of columns 7-8 and rows A-B. In the original these four squares are clearly differentiated – in the ‘after’ image they have merged into a single fully saturated area. A glance at the woman’s image also shows overly hot saturation of skin tones – even the man in the background has a hot pink face now. So, to summarize, I would reserve this Scene for very dark silhouette situations – where you need to rescue an otherwise potentially unusable shot.

Scene = Darken

Scene = Darken

The Darken scene type does just what it says – it darkens the overall scene fairly uniformly. This can often help a somewhat overexposed scene. It cannot fix one of the most common problems with digital photography, however:  the clipping of light areas due to overexposure. As explained in a previous post in this series, once a given pixel has been clipped (driven into pure white due to the amount of light it has received) nothing can recover this detail. Lowering the level will only turn the bright white into a dull gray, but no detail will come back. Ever. A quick look at the gray scale in the right-hand image clearly shows the lowering of overall brightness – with the whites manipulated more than the blacks. For instance, chip #1 turns from almost white into a pale gray, while chip #17 shows only a slight darkening. This is appropriate so the blacks in the image are not crushed. You will notice by looking at the colors on the chart that the darkening effect is pretty much luminance only – no change in color balance. The apparent increase in reds in the skin tone of the woman is a natural side-effect of less luminance with chrominance held constant – once you remove the ‘white’ from a color mixture, the remaining color appears more intense. You can see the same effect in the man’s shirt and the yellow background. Ideally the saturation should be reduced slightly along with the luminance in this type of filter effect – but that can be tricky with a single filter designed to work with any kind of content. Overall this is a useful filter.
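
The ‘luminance only, chrominance held constant’ behavior described above is easy to sketch: convert to a luma/chroma space, pull the luma down with a curve that bites harder on the highlights, and convert back. This is only an illustration of the principle, using standard BT.601 conversions – not the actual filter.

```python
import numpy as np

def darken_luma_only(rgb, strength=0.35):
    """Darken highlights more than shadows while leaving chroma untouched.

    rgb : float array (H, W, 3), values 0..1
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # BT.601 luma/chroma split
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)
    cr = 0.713 * (r - y)

    # A curve that darkens whites a lot and blacks hardly at all:
    # subtract an amount proportional to y^2 (zero at black, maximum at white).
    y2 = np.clip(y - strength * y**2, 0, 1)

    # Recombine with the *original* chroma - colors look more intense because
    # the same chroma now sits on a darker luma.
    r2 = y2 + 1.403 * cr
    g2 = y2 - 0.344 * cb - 0.714 * cr
    b2 = y2 + 1.773 * cb
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0, 1)
```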

Scene = Cloudy

Scene = Cloudy

The Cloudy scene type appears to normalize the exposure and color shift that occurs when the subject is photographed under direct cloudy skies. This is in contrast to the Shade scene type (discussed next), where the subject is shot in indirect light (open shade) while illuminated from a clear sky. This is mostly a color temperature problem to solve – remember that noon sunlight is approximately 5500°K (degrees Kelvin is a measure of color temperature; low numbers are reddish, high numbers are bluish, middle ‘white’ is about 5000°K). Illumination from a cloudy sky is often very ‘blue’ (in terms of color temperature) – between 8000°K – 10000°K, while open shade is less so, usually between 7000°K – 8000°K. If you compare the two scene types (Cloudy and Shade) you will notice that the image is ‘warmed up’ more with the Cloudy scene type. There is a slight increase in brightness, but the main function of this scene type is to warm up the image to compensate for the cold ‘look’ often associated with shots of this type. You can see this occurring in the reddening of the woman’s skin tones, and the warming of the gray values of the sidewalk between her and the man.
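
A very simplified way to think about this kind of ‘warming’ correction, in code form (an illustrative sketch only – not necessarily how Camera+ does it): treat the correction as per-channel white-balance gains, raising red and lowering blue, with the strength loosely tied to how ‘blue’ the original light was.

```python
import numpy as np

def warm_up(rgb, amount=0.12):
    """Crude warming correction: boost red, cut blue, add a touch of brightness.

    amount ~ 0.1 roughly corresponds to pulling a cloudy-sky (~8000-10000K)
    image back toward a neutral/daylight look; larger values warm more.
    rgb : float array (H, W, 3), values 0..1
    """
    gains = np.array([1.0 + amount,        # red channel up
                      1.0 + amount * 0.3,  # green barely touched
                      1.0 - amount])       # blue channel down
    return np.clip(rgb * gains * 1.03, 0, 1)   # small overall brightness lift
```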

Scene = Shade

Scene = Shade

The Shade scene type is similar in function to the Cloudy scene (see above for comparison) but differs in two areas: There is a noticeable increase in brightness (often images exposed in the shade are slightly underexposed) and there is less warming of the image (due to open shade being warmer in color temperature than a full cloudy scene illumination). One easy way to compare the two scene types is to examine the color charts – looking at the surround (where the numbers and letters are) – the differences are easy to see there. A glance at the test shot of the woman shows a definite increase in brightness, as well as a slight warming – again look at the skin tones and the sidewalk.

Scene = Fluorescent

Scene = Fluorescent

The Fluorescent scene type is designed to correct for images shot under fluorescent lighting. This form of illumination is not a ‘full-spectrum’ light source, and as such has some unnatural effects when used to take photographs. Our human vision corrects for this – but film or digital exposures do not – so such photos tend to have a somewhat green/cyan cast to them. In particular this makes light-colored skin look a bit washed out and sickish. The only real change I can see in this scene filter is an increase in magenta in the overall color balance (which will help to correct for the green/cyan shift under fluorescent light). The difference introduced is very small – I think it will help in some cases and may be insufficient in others. It is more noticeable in the image of the woman than the test chart (her skin tones shift towards a magenta hue).

Scene = Sunset

Scene = Sunset

The Sunset scene type is the first of a slightly different kind of scene filter – ones that are apparently designed to make an image look like the scene name, instead of correcting the exposure for images taken under the named situation (like Shade, Cloudy, etc.). This twist in naming conventions is another reason to always really test and understand your tools – know what the filter you are applying does at a fundamental level and you will have much better results. It’s obvious from both the chart and the woman that a marked increase in red/orange is applied by this scene type. There is also a general increase in saturation – just more in the reds than the blues. Look at the man’s shirt and shorts to see how the blue saturation increases. Again, like the Backlit filter, I personally feel this effect is a bit overdone – and within Camera+ there is no simple way to remedy this. There are methods, using another app to follow the corrections of this app – these techniques will be discussed in the next part of the series (Tips & Techniques for iPhonography).

Scene = Night

Scene = Night

The Night scene type attempts to correct for images taken at night under very low illumination. This scene is a bit like the Flash type – but on steroids! The full gray scale is pushed towards white a lot, with even the darkest shadow values receiving a noticeable bump in brightness. There is also a general increase in saturation – as colors tend to be undersaturated when poorly illuminated. Of course this will make images that are normally exposed look greatly overexposed (see both the chart and the woman), but it still gives you an idea of how the scene filter works. Some better ‘real-world’ examples of the Night scene type follow this introduction of all the scene types.

Scene = Portrait

Scene = Portrait

The Portrait scene type is basically a contrast and brightness adjustment. It’s easy to see in the chart comparison, both in the gray scale and on the color chip chart. Look first at the gray scale: all of the gray and white values from chip #17 and lighter are raised, the chips below #17 are darkened. Chips #1 and #2 are now both pure white, having no differentiation. Likewise with chips #20-22, they are pure black. The area defined by columns 13-19 and rows A-C are now pure white, compared to the original where clear differences can be noted. Likewise row L between columns 20-22 can no longer be differentiated. In the shot of the woman, this results in a lightening of skin tones and increased contrast (notice her black bag is solid black as opposed to very dark gray; the patch of sunlight just above her waist is now solid white instead of a bright highlight). Again, like several scene effects I have noted earlier, I find this one a little overstated – I personally don’t like to see details clipped and crushed – but like any filter, the art is in the application. This can be very effective on a head shot that is flat and without punch. The trick is applying the scene type appropriately. Knowledge furthers…

Scene = Beach

Scene = Beach

The Beach scene type looks like the type of filter first mentioned in Sunset (above). In other words, an effect to make the image look like it was taken on a beach (or at least that type of light!). It’s a little bit like the previous Portrait scene (in that there is an increase in both brightness and contrast – but less than Portrait) but also has a bit of Sunset in it as well (increased saturation in reds and yellows – but again not as much as Sunset). While the Sunset type had greater red saturation, this Beach filter is more towards the yellow. See column #15 in the chart – in the original each square is clearly differentiated, in the right-hand image the more intense yellows are very hard to tell apart. Just to the right, in column #17, the reds have not changed that much. When looking at the woman, you can see that this scene type makes the image ‘bright and yellow/red’ – sort of a ‘beachy’ effect I guess.

Scene = Scenery

Scene = Scenery

The Scenery scene type produces a slight increase in contrast, along with increased saturation in both reds and blues – with blue getting a bit more intensity. Easy to compare using the chart, and in the shot of the woman this can be seen in reddish skin tone, as well as significantly increased saturation in the man’s shirt and shorts. While this effect makes people look odd, it can work well on so-called “postcard” landscape shots (which tend to be flat panoramic views with low contrast and saturation). However, as you will see in the ‘real-world’ examples below, often a different scene or filter can help out a landscape in even a better way – it all depends on the original photo with which you are starting.

Scene = Concert

Scene = Concert

The Concert scene type is, oddly enough, very similar to the previous scene type (Scenery) – just turned up really loud! A generalized increase in contrast, along with a red and blue saturation increase, attempts to make your original scene look like, well, I guess a rock-and-roll concert… Normal exposures (see the woman test shot) come out ‘hot’ (overly warm and contrasty, with elevated brightness), but if you need some color and punch in your shot, or desire an overstated effect, this scene type could be useful.

Scene = Food

Scene = Food

The Food scene type offers a slight increase in contrast and a bit of increased saturation in the reds. This can be seen in both the charts and the woman shot. It’s less overdone than several of the other scene types, so keep this in mind for any shot that needs just a bit of punch and warming up – not just for shots of food. And again, I see this scene type in the same vein as Beach, Sunset, etc. – an effect to make your shot look like the feeling of the filter name, not to correct shots of food…

Scene = Text

Scene = Text

The Text scene type is an extreme effect – and best utilized as its name implies: to help shots of text material come out clearly and legibly. An example of actual text is shown below, but you can see from the chart that this is accomplished by a very high contrast setting. The apparent increase in saturation in the test chart is really more a result of the high contrast – I don’t think a deliberate increase in saturation was added to this effect.

(Note:  the ‘real-world’ examples referred to in several of the above explanations will be shown at the very end of this discussion, after we have illustrated the remaining basic edit functions (Rotation, Crop, Effects and Borders) in a similar comparative manner as above with the Scene types. This is due to many of the sample shots combining both a Scene and an Effect (one of the powerful capabilities of Camera+) – I want the viewer to fully understand the individual instruments before we assemble a symphony…)

The next group of Edit functions is the image Rotation set.

Rotation submenu

The usual four choices of image rotation are presented.

The Crop functions are displayed next.

group 1 of crop styles

group 2 of crop styles

group 3 of crop styles

The Effects (FX) filters, after the Scene types, are the most complex image manipulation filters included in the Camera+ app. As opposed to Scenes, which tend to affect overall lighting, contrast and saturation – and keep a somewhat realistic look to the image – the Effects filters seriously bend the image into a wide variety of (often but not always) unrealistic appearances. Many of the effects mimic early film processes, emulsions or camera types; some offer recent special filtering techniques (such as HDR – High Dynamic Range photography), and so on.

The Effects filters are grouped into four sections:  Color, Retro, Special and Analog. The Analog filter group requires an in-app purchase ($0.99 at the time of this article). In a similar manner to how Scene types were introduced above, an image comparison is shown along with the description of each filter type. The left-hand image is the original, the right-hand image has the specific Effects filter applied. Some further ‘real-world’ examples of using filters on different types of photography are included at the end of this section.

Color Effects filters

Color Effects filters

Color Filter = Vibrant

The Vibrant filter significantly increases the saturation of colors, and appears to enhance the red channel more than green or blue.

Color Filter = Sunkiss'd

The Sunkiss’d filter is a generalized warming filter. Notice that, in contrast to the Vibrant filter above, even the tinfoil dress on the left-hand dancer has turned a warm gold color – the Vibrant filter did not alter the silver tint, as that filter has no effect on portions of the image that are monochrome. Also, all colors in Sunkiss’d are moved towards the warm end of the spectrum – note the gobo light effect projected above the dancers: it is pale blue in the original, and becomes a warmer green/aqua after the application of Sunkiss’d.

Color Filter = Purple Haze

The Purple Haze filter does not provide hallucinogenic experiences for the photographer, but does perhaps simulate what the retina might imagine… Increased contrast, increased red/blue saturation and a color shift towards.. well… purple.

Color Filter = So Emo

The So Emo filter type (So Emotional?) starkly increases contrast and shifts the color balance towards cyan; as a counterpoint to the overall cyan tint there is apparently a narrow-band enhancement of magenta – notice the boost in the tulle skirt on the center dancer, which would otherwise be almost washed out by so much cyan shift in the color balance. However, the flesh tones of the dancers’ legs (more reddish) are rendered almost colorless by the cyan tint; this shows that the enhancement is narrow-band – it does not include red.

Color Filter = Cyanotype

The Cyanotype effects filter is reminiscent of early photogram techniques (putting plant leaves, etc. on photo paper sensitized with the cyanotype process in direct sunlight to get a silhouette exposure). This is the same process that makes blueprints. Later it was used (in a similar way as sepia toning) to tint black & white photographs. In the case of this effects filter, the image is first rendered to monochrome, then subsequently tinted with a slightly yellowish cyan color.

Color Filter = Magic Hour

The Magic Hour filter attempts to make the image look like it was taken during the so-called “Magic Hour” – the last hour before sunset – when many photographers feel the light is best for all types of photography, particularly landscape or interpretive portraits. Brightness is raised, contrast is slightly reduced and a generalized warming color shift is applied.

Color Filter = Redscale

The Redscale effects filter is a bit like the previous Magic Hour, but instead of a wider spectrum warming, the effect is more localized to reds and yellows. The contrast is slightly raised, instead of lowered as for Magic Hour, and the effect of the red filter can clearly be seen on the gobo light projected above the dancers:  the original cyan portion of the light is almost completely neutralized by the red enhancement, leaving only the green portion of the original aqua light remaining.

Color Filter = Black & White

The Black & White filter does just what it says: it renders the original color photograph into a monochrome-only version. It looks like this is a simple monochrome conversion of the RGB channels (color information deleted, only luminance kept). There are a number of advanced techniques for rendering a color image into a high quality black & white photo – it’s not as simple as it sounds. If you look at a great black and white image from a film camera and compare it to a color photograph of the same scene, you will see what I mean. There are specialized apps for taking monochrome pictures with the iPhone (one of which will be reviewed later in this series of posts), and there is a whole set of custom filters in Photoshop devoted to just this topic – getting the best possible conversion to black & white from a color original. In many cases, however, a simple filter like this will do the trick.
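
For reference, a ‘simple monochrome conversion’ usually amounts to a weighted sum of the color channels (the eye is far more sensitive to green than to blue). A minimal sketch, and a hint at why dedicated black & white tools can do better – they let you choose your own channel weights, much like using a red or yellow filter on a film camera:

```python
import numpy as np

def to_grayscale(rgb, weights=(0.299, 0.587, 0.114)):
    """Collapse an RGB image (H, W, 3, values 0..1) to luminance.

    The default weights are the standard Rec.601 luma coefficients; a 'custom'
    black & white conversion is the same operation with different weights
    (e.g. heavier red to darken skies, as a red filter would on film).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # keep overall brightness roughly constant
    return np.clip(rgb @ w, 0.0, 1.0)
```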

Color Filter = Sepia

The Sepia filter, like Cyanotype, is a throwback to the early days of photography – before color film – when black & white images were toned to increase interest. In the case of this digital filter, the image is first turned into monochrome, then tinted with a sepia tone via color correction.
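
The ‘first monochrome, then tint’ recipe described for Sepia (and, with different tint colors, for Cyanotype, Antique and similar filters) can be sketched in a couple of lines: extract the gray image, then multiply it into a tint color so shadows stay dark and highlights take on the tone. The tint values below are my own guesses, purely illustrative.

```python
import numpy as np

def tinted_monochrome(rgb, tint=(1.00, 0.85, 0.60)):
    """Monochrome extraction followed by a tint (sepia-ish by default).

    tint : RGB multipliers applied to the gray image; (1, 1, 1) gives plain B&W,
           something like (0.7, 0.9, 1.0) gives a cyanotype-style blue.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])     # luminance extraction
    return np.clip(gray[..., None] * np.asarray(tint), 0, 1)
```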

Retro Effects filters

Retro Effects filters

Retro Filter = Lomographic

The Lomographic filter effect is designed to mimic the style of photographs produced by the original LOMO Plc camera company of Russia (Leningrad Optical Mechanical Amalgamation). This was a low-cost automatic 35mm film camera. While still in production today, this and similar cameras account for only a fraction of LOMO’s production – the bulk is military and medical optical systems, which are world class… Due to the low cost of components and production methods, the LOMO camera exhibited frequent optical defects in imaging, color tints, light leaks, and other artifacts. While anathema to professional photographers, these quirky effects of the LOMO (and other so-called “Lo-Fi” or Low Fidelity cameras) have attracted a large world-wide following. Hence the Lomographic filter…

While, like all my analysis on Scenes and Effects, I have no direct knowledge of how the effect is produced, I bring my scientific training and decades of photographic experience to help explain what I feel is a likely design, based on empirical study of the effect. That said, this effect appears to show increased contrast, a greenish/yellow tint for the mid-tones (notice the highlights, such as the white front stage, stay almost pure white). A narrow-band enhancement filter for red/magenta keeps the skin tones and center dancer’s dress from desaturating in the face of the green tint.

Retro Filter = '70s

The ’70s effect is another nod to the look of older film photographs, this one more like what Kodachrome looked like when the camera was left in a hot car… All film stock is heat sensitive, with color emulsions, particularly older ones, being even more so. While at first this filter has a resemblance to the Sunkiss’d color filter, the difference lies in the multi-tonal enhancements of the ’70s filter. The reds are indeed punched up, but that’s in the midtones and shadows – the highlights take on a distinct greenish cast. Notice that once the enhanced red nulls out the cyan in the overhead gobo projection, the remaining highlights turn bright green – with a similar process occurring on the light stage surface.

Retro Filter = Toy Camera

The Toy Camera effects filter emulates the low cost roll-film cameras of the ’50s and ’60s – with the light leaks, uneven processing, poor focus and other attributes often associated with photographs from that genre of cameras. Increased saturation, a slightly raised black level, spatially localized contrast enhancement (a technique borrowed from HDR filtering) – notice how the slight flare on the far right over the seated woman’s head becomes a bright hot flare in the filtered image – and streaking to simulate light leakage on film all add to the multiplicity of effects in this filter.

Retro Filter = Hipster

The Hipster effect is another of the digital memorials to the original Hipstamatic camera – a cheap all-plastic 35mm camera that shot square photos. Copied from an original low-cost Russian camera, it was produced in a run of only 157 units by the two brothers who invented it. The camera cost $8.25 in 1982 when it was introduced. With a hand-molded plastic lens, this camera was another of the “Lo-Fi” group of older analog film cameras whose ‘look’ has once again become popular. As derived by the Camera+ crew, the Hipster effect offers a warm, brownish-red image. It is achieved apparently with raised black levels (a common trait of cheap film cameras – the backs always leaked a bit of light, so a low level ‘fog’ of the film base always tended to raise deep blacks [areas of no light exposure in a negative] to a dull gray); a pronounced color shift towards red/brown in the midtones and lowlights; and an overall white level increase (note the relative brightness of the front stage between the original and the filtered version).

Retro Filter = Tailfins

The Tailfins retro effect is yet another take on the ’50s and ’60s – with an homage to the 1959 Cadillac, no doubt – the epitome of the ‘tailfin era’. It’s similar to the ’70s filter described above, but lacks the distinct ‘overcooked Kodachrome’ look with the green highlights. Red saturation is again pushed up, as well as overall brightness. Once again blacks are raised to simulate the common film fog of the day. Lowered contrast finishes the look.

Retro Filter = Fashion

The Fashion effects filter is an interesting and potentially very useful filter. Although I am sure there are styles of fashion photography that have used this muted look, the potential uses for this filter extend far beyond fashion or portraiture. Essentially this is a desaturating filter that also warms the lowlights more than the highlights. Notice the rear wall – almost neutral gray in the original, a very warm gray in the filtered version. The gobo projected light, the greenish-yellow spill on the ceiling, the center dancer’s dress – all greatly desaturated. The contrast appears just a bit raised:  the white front stage is brighter than the original version, and the black dress of the right-hand dancer is darker. With so many photos today – and filters – that tend to make things go pop! bang! and sparkle! it’s sometimes nice to present an image that is understated, but not cold. This just might be a useful tool to help tell that story.

Retro Filter = Lo-Fi

The Lo-Fi filter is another retro effect that is similar in some respects to the Toy Camera filter reviewed above, but does not show the light streak and obvious film fog artifacts. It again provides an unnatural intensity of color – this time through greatly increased saturation. There is also an increase in contrast – note the front stage is nearly pure white and the ceiling to the right of the gobo projection has gone almost pure black. There is a non-uniform assignment of color balance and saturation, dependent on the relative luminance of the original scene. The lighter the original scene, the less saturation is added:  compare the white stage to the dark gray interior of the large “1” on the back wall.

Retro Filter = Ansel

The Ansel filter is of course a tip of the hat to the iconic Ansel Adams – one of the premier black & white photographers ever. Although… Ansel would likely have something to say about the separation of gray values in the shadows, particularly around Zones II – III.  Compared to the ‘Black & White’ color filter discussed earlier, this filter is definitely of a higher contrast. Personally, I think the blacks are crushed a bit – most of the detail is lost in the black dress, and the faces of the dancers are almost lost now in dark shadow. But for the right original exposure, this filter will offer more dynamism than the “Black & White” filter.

Retro Filter = Antique

The Antique effects filter is in the same vein as Sepia and Cyanotype: a filter that first extracts a monochrome image from the color original, then tints it – in this case with a yellow cast. The contrast is increased as well.

Special Effects filters

Special Effects filters

Special Filter = HDR @ 100%

Special Filter = HDR @ 50%

The HDR Special filter is, along with the Clarity Scene type, one of the potentially more powerful filters in this entire application. Because of this (and to demonstrate the Intensity Slider function) I have inserted two examples of this filter, one with the intensity set at 100%, and one with the intensity at 50%. All of the Effects filters have an intensity control, so the relative level of the effect can be adjusted from 0-100%. All of the other examples are shown at full intensity to discuss the attributes of the filter with the greatest ease, but many times a lessening of the intensity will give better results. That is nowhere more evident than with the HDR effect. This term stands for High Dynamic Range photography. Normally, this can only be performed with multiple exposures of precisely the same shot in the camera – then, through complex post-production digital computations, the two (or more) images are superimposed on top of each other, with the various parts of the images seamlessly blended. The whole purpose of this is to make a composite image that has a greater range of exposure than was possible with the taking camera/sensor/film.

The usual reason for this is an extreme range of brightness. An example:  if you stand in a dimly lit barn and shoot a photograph out the open barn door at the brightly lit exterior at noon, the ratio of brightness from the barn interior to the exterior scene can easily approach 100,000:1 – which is impossible for any medium, whether film or digital, to capture in a single exposure. The widest range film stock ever produced could capture about 14 stops – about 16,000:1. And that is theoretical – once you add the imperfections of the lens, the small amount of unavoidable base fog and development noise, 12 stops (about 4,000:1) is more realistic. With CCD arrays (the high quality digital sensors found in expensive DSLR cameras, not the CMOS sensors used in the iPhone), it is theoretically possible to get about the same. While the top of the line DSLRs boast a 16-bit converter, and do output 16-bit images, the actual capability of the sensor is not that good. I personally doubt it’s any better than the 12 stops of a good film camera – and that only on camera backs costing the same as a small car…
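
The stop-to-contrast-ratio numbers quoted here are just powers of two, since each stop is a doubling of light (a quick check):

```python
# One photographic stop = a factor of 2 in light, so n stops = 2**n : 1 contrast.
for stops in (14, 12, 10, 6):
    print(f"{stops} stops -> about {2**stops:,}:1")
# 14 stops -> about 16,384:1   (the '~16,000:1' figure above)
# 12 stops -> about  4,096:1   (the '~4,000:1' figure above)
```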

What this means in practicality is that to capture such a scene leads to one of two scenarios:  either the blacks are underexposed (if you try to avoid blowing out the whites); or the white detail is lost in clipping if you try to keep the black shadow detail in the barn visible. The only other option (employed by professional photographers with a budget of both time and money) is to light the inside of the barn sufficiently that the contrast of the overall scene is brought within range of the taking film or digital array.

With HDR, a whole new possibility has arrived: take two photographs (identical, which must line up perfectly, so the camera has to be on a tripod, and no motion in the scene is allowed – a rather restrictive element, but critical) – then, with the magic of digital post-processing, the low-light image (correctly exposed for the shadows, so the highlights are blown out) and the high-light image (correctly exposed for the brightly lit part of the scene, so the inside of the barn is just solid black with no detail) are combined into a composite photograph that has incredible dynamic range. There is a lot more to it than this, and you can’t get around the display part of the equation (how do you then show an image that has 16 or more stops of dynamic range on a computer monitor that has at best 10 stops of range? or, worse yet, on ink jet printers, which even on the best high gloss art paper may be able to render 6 stops of dynamic range?). We’ll leave those questions for the next part of this blog series, but for now it’s enough to understand that high dynamic range exposures (HDR) are very challenging for the photographer.

So what exactly IS an HDR filter? Obviously it cannot duplicate the true HDR technique (multiple exposures)… First, to be clear, there are different types of “HDR filters” – for instance the very complex one in Photoshop is designed to work with multiple source images as discussed above – here we are talking about the HDR filter included with Camera+, and what it can, and cannot, do. The type of filtering process that appears to be used by the HDR filter in this app is known as a “tone mapping” filter. This is actually a very complex process, chock full of high mathematics, and if it weren’t for the power of the iPhone hardware and core software this would be impossible to do on anything but a desktop computer. Essentially, through a process of both global and local tone mapping using specific algorithms, an output image is derived from the input image. As you can see from the results in the right hand images, tone mapping HDR has a unique look. It tends to enhance local contrast, so image sharpness is enhanced. A side effect – that some like and others don’t – is an apparent ‘glow’ around the edges of dark objects in the scene when they are in front of lighter objects. In these examples, look around the edges of the black dress, and the edges of the black outline of the “1” on the back wall. Notice also that in the original photo the white stage looks almost smooth, while in the resultant filtered image you can see every bit of dust and the footscrapes from the models. The overall brightness of the image is enhanced, but nothing is clipped. Due to the enhancement of small detail, noise in low light areas (always an issue with digital sensors) is increased. Look at the area of the ceiling to the right of the projected gobo image:  in the original the low-lit area looks relatively smooth, in the filtered image there are many more mottling and other noise artifacts. Due to the amount of detail added, side effects, etc. it is often desirable not to ‘overdo’ the tone mapping effect. This can clearly be seen in the second set of comparisons, which has the intensity of the effect set to 50% instead of 100%.
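
To give a flavor of what a single-image ‘tone mapping’ filter can look like in code – a toy sketch of the general idea, with no claim that Camera+ works this way – here is a global Reinhard-style compression combined with a local-detail boost. The local term is exactly what produces the extra micro-detail, and the halo or ‘glow’ around dark edges mentioned above; the final line is the ‘intensity slider’ as a simple blend.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_tonemap(rgb, detail_gain=1.8, sigma=15, intensity=1.0):
    """Single-image, tone-mapping-style 'HDR look' sketch (toy, illustrative only).

    rgb : float array (H, W, 3), values 0..1
    """
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    base = gaussian_filter(luma, sigma=sigma)        # large-scale lighting
    detail = luma - base                             # detail relative to that lighting

    compressed = base / (1.0 + base)                 # Reinhard-style global curve
    compressed = compressed / compressed.max()       # re-stretch to use the full range
    new_luma = np.clip(compressed + detail_gain * detail, 1e-4, 1)

    out = rgb * (new_luma / np.maximum(luma, 1e-4))[..., None]   # reapply color
    # The 'intensity slider': blend between the original and the full effect.
    return np.clip((1 - intensity) * rgb + intensity * out, 0, 1)
```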

Special Filter = Miniaturize

The Special filter Miniaturize initially confused me – I didn’t understand the naming of this filter in reference to its effects: this filter is very similar to the Depth of Field filter which will be discussed shortly. Essentially this filter increases saturation a bit, and then applies a blurring technique to defocus the upper third and lower third of the image, leaving the middle third sharp. A reader of my initial release of this section was kind enough to point out that this filter is attempting to mimic the planar depth-of-field effect that happens when the lens is tilted about the axis of focus. With a physical tilt-shift lens, the areas of soft focus are due to one area of the image being too far to be in focus, the other area being too near to be in focus. This technique is used to simulate miniature photography, hence the filter name. Thanks to darkmain for the update.
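
A bare-bones version of that ‘keep a horizontal band sharp, blur above and below’ idea might look like the sketch below (illustrative only, assuming SciPy for the blur; the real filter is surely more sophisticated about how the blur ramps up).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_tilt_shift(rgb, band_center=0.5, band_half_width=0.17, max_sigma=6.0):
    """Blur the image more the further a row is from a sharp horizontal band.

    rgb : float array (H, W, 3), values 0..1
    """
    h = rgb.shape[0]
    rows = np.linspace(0.0, 1.0, h)
    # 0 inside the sharp band, ramping up to 1 at the top/bottom edges
    dist = np.clip(np.abs(rows - band_center) - band_half_width, 0, None)
    weight = np.clip(dist / (0.5 - band_half_width), 0, 1)

    blurred = np.stack([gaussian_filter(rgb[..., c], sigma=max_sigma)
                        for c in range(3)], axis=-1)
    # Per-row blend between the sharp original and the blurred copy
    return weight[:, None, None] * blurred + (1 - weight[:, None, None]) * rgb
```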

Special Filter = Polarize

The Polarize filter is another somewhat odd name for the actual observed effect – since polarizing filtering must take place optically, there is no way to substitute for it electronically. Polarizing filters are often used to reduce reflections (from windows, water surfaces, etc.) – as well as to allow us to see 3D movies. None of these techniques are offered by this filter. What this one does do is substantially increase the contrast and add significant red/blue saturation – but, like the earlier Lo-Fi filter, the increase in saturation is inversely proportional to the brightness of the element in the scene: dark areas get increased saturation, light areas do not.
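
That ‘more saturation in dark areas, little in light areas’ behavior is easy to express on its own; a short sketch of just that one idea (again my reconstruction, not the app’s code – the contrast boost is left out):

```python
import numpy as np

def shadow_weighted_saturation(rgb, max_gain=0.8):
    """Boost chroma by an amount that shrinks as the pixel gets brighter.

    rgb : float array (H, W, 3), values 0..1
    """
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    gain = 1.0 + max_gain * (1.0 - luma)      # dark pixels get up to 1.8x chroma
    return np.clip(luma[..., None] + gain[..., None] * (rgb - luma[..., None]), 0, 1)
```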

Special Filter = Grunge

The Grunge effect does have a name that makes sense! Looking like the photo was taken through a grungy piece of glass, it has a faded look that is somewhat reminiscent of old, damaged print photos. The apparent components of this filter are:  substantial desaturation, significant lightening (brightness level raised), a golden/yellow tint added, then the ‘noise’ (scratches).

Special Filter = Depth of Field

The Depth of Field effect is, as mentioned, very similar to the Miniaturize effect. The overall brightness is a bit lower, and the other main difference appears to be a circular area of sharpness in the center of the frame, as opposed to the edge to edge horizontal band of sharpness apparent with the Miniaturize filter. Check the focus of the woman seated on the far right in the two filters and you’ll see what I mean.

Special Filter = Color Dodge

The Color Dodge special filter has me scratching my head again as far as naming goes… In photographic terminology, “dodge” means to hold back, to reduce – while “burn” means to increase. These are techniques originally used in darkroom printing (actually one of the first methods of tone mapping!) to locally increase or decrease the light falling on the print in order to change the local contrast/brightness. In the resultant image from this filter, red saturation has not just been increased, it has been firewalled! Basically, areas in the original image that had little color in them stayed about the same, areas that have significant color values have those values increased dramatically. There is additionally an increase in overall contrast.

Special Filter = Overlay

The Overlay effect has the same contrast and saturation functions as the previous Color Dodge filter, but the saturation is not turned up as high (gratefully!). In addition, there is a pronounced vignette effect – easy to see in the bottom of the frame. It’s circular, just harder to see in this particular image at the top. Like the rest of the effects filters, the ability to reduce the intensity of this effect can make it useful for situations that at first may not be obvious. For instance, since the saturation only works on existing chroma in the image, if one applies this filter to a monochrome image you now have a variable vignette filter – with no color component…
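
Since the distinctive part of this filter is the vignette, here is a minimal radial-falloff mask sketch (illustrative only) of the kind of thing that gets multiplied into the image; as suggested above, the same mask applied to a monochrome image gives a plain variable vignette with no color effect.

```python
import numpy as np

def vignette(rgb, strength=0.5, softness=2.5):
    """Darken the image toward the corners with a smooth radial falloff.

    rgb : float array (H, W, 3), values 0..1
    """
    h, w = rgb.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized distance from the center: 0 at the center, ~1 at the corners
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    mask = 1.0 - strength * np.clip(r, 0, 1) ** softness
    return np.clip(rgb * mask[..., None], 0, 1)
```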

Special Filter = Faded

The Faded special effect filter does just what it says… this is a simple desaturating filter with no other functions visible. With the ability to vary the intensity of desaturation, it makes for a powerful tool. Often one would like to just take a ‘bit off the top’ in terms of color gain – digital sensors are inherently more saturated looking than many film emulsions – just compare (if you can, there are not that many film labs left…) a shot of the same scene taken with Ektachrome transparency film and with the iPhone.
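
Variable-strength desaturation like this is just a linear blend between the image and its gray version; a tiny illustrative sketch:

```python
import numpy as np

def fade(rgb, amount=0.5):
    """amount=0 leaves the image alone, amount=1 is fully monochrome."""
    gray = (rgb @ np.array([0.299, 0.587, 0.114]))[..., None]
    return (1.0 - amount) * rgb + amount * gray
```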

Special Filter = Cross Process

The Cross Process filter is a true special effect! The name comes from a technique, first discovered by accident, where film is processed in a chemical developer that was intended for a different type of film. While the effect in this particular filter is not indicative of any particular chemical cross-process, the overall effect is similar:  high contrast, unnatural colors, and a general ‘retro’ look that has found an audience today. Just like with the chemical version of cross-processing, the results are unpredictable – one just has to try it out and see. Of course, with film, if you didn’t like it… tough… go reshoot… With digital, just back up and do something else…

Analog Effects filters

Analog Effects filters

Analog Filter = Diana

The Diana effect is based on, wow – surprise, the Diana camera… another of the cheap plastic cameras prevalent in the 1960’s. The vignetting, light leaks, chromatic aberrations and other side-effects of a $10 camera have been brought into the digital age. In a similar fashion to several of the previous ‘retro’ filters discussed already, you will notice raised blacks, slight lowering of contrast, odd tints (in this case unsaturated highlights tend yellow), increased saturation of colors – and a slight twist in this filter due to even monochrome areas becoming tinted – the silver dress (which stayed silver in even some of the strongest effects discussed above) now takes on a greenish/yellow tint.

Analog Filter = Silver Gelatin

The Silver Gelatin effect, based on the original photochemical process that is over 140 years old – is a wonderful and soft effect for the appropriate subject matter. While the process itself was of course only black & white, the very nature of the process (small silver molecules suspended in a thin gelatin coating) caused fading relatively soon after printing. The gelatin fades to a pale yellow, and the silver (which creates the dark parts of the print) tended to turn a purplish color instead of the original pure black.

Analog Filter = Helios

The Helios analog effect, while it’s possible that it was named for the Helios lens that was fabricated for the Russian “Zenit” 35mm SLR camera in 1958 – is just as likely to be called this due to the burning-fire red tint of virtually the entire frame. In a similar manner to other filters we have discussed, the tinting (and this is clearly another example of a tinted monochrome extraction from the original color image) is based on relative luma values:  near whites and near blacks are not tinted, all other mid-range values of gray are tinted strongly red. It’s an interesting technique, but personally I would have only sparing use for this one.

Analog Filter = Contessa

The Contessa effect is named after one of the really great early 35mm Zeiss/Ikon cameras, produced in the late 1940’s. The effect as it exists here is actually not true to the Contessa:  the original film camera would not have caused the vignetting seen – not with one of the world’s greatest lenses attached! However, that’s immaterial, it’s just the name… What we can say about this filter is that it’s obviously another black & white extraction from the color original – but it adds a sense of ‘old time photograph’ with the vignette, the staining/spotting on the sides of the image, and a very slight warm tint that reads more as fading than as an actual tint. It’s a nice filter – I would adjust it a bit to add more detail/contrast in the dancers’ faces (the left and middle dancers’ faces are a bit dark) – but it is a nice addition to the toolbox.

Analog Filter = Nostalgia

The Nostalgia effect is now reaching into true ‘special effects’ territory. With a cross process look to start with (similar to Diana and Cross Process), some added saturation in the reds, and then the ‘fog’ effect around the perimeter of the frame – this is leaving photorealism behind…

Analog Filter = Expired

The Expired analog effect is a rather good copy of what film used to look like when you left it in the glove box of your car in the summer… or just plain let it get too old. The look created here is just one of many, many possible effects from expired film – it’s a highly unpredictable entity. In this filter, we have strong red saturation increase – again, with no color in the front of the white stage, nothing to saturate… The overall brightness is raised, contrast is lowered, and a light streak is added.

Analog Filter = XPRO C-41

The XPRO C-41 effect is another cross-process filter. This one is loosely based on what happens when you process E-6 film (color transparency) with color negative developer (C-41). Whites get a bit blown out, light areas tend to greenish, with darker areas tending bluish. The red saturation is (I believe) just something these software developers added – I’ve personally never seen this happen in chemical cross processing.

Analog Filter = Pinhole

The Pinhole analog effect is based, of course, on the oldest camera type of all. With major vignetting, considerable grain, monochrome only, enhanced contrast and lowered brightness, this filter does a fair job imitating what an iPhone would have produced in 1850 (when the first actual photograph was taken with a pinhole camera). The issue back then was finding a photosensitive material – the pinhole camera itself has been around for well over a thousand years (the Book of Optics, published in 1021 AD, describes it in detail).

Analog Filter = Chromogenic

The Chromogenic analog effect is based on the core methodology of all modern color film emulsions: the coupling of color dyes to exposed silver halide crystals. All current photochemical film emulsions use light-sensitive silver crystals for exposure to light. To make color, typically three layers on the film (cyan, yellow, magenta) are specialized chromogenic layers where the dye colors attach themselves to exposed silver crystals in that layer only. This leads to the buildup of a color negative (or positive) by the ‘stacking’ of the three layers into a completed color image. Very early on, during the experimentation that led to this development, the process was not nearly as well defined – and this filter is one software artist’s idea of what an early chromogenic print may have looked like. In terms of analysis, the overall cast is reddish-brown, with enhanced contrast, slightly crushed blacks and very desaturated colors (aside from the overall tint).

The Border variations

Simple Border styles

Styled border styles

There are, in addition to the “No Border” option, 9 simple borders and 9 styled borders.

‘Real World’ examples using Scenes and Effects

The following examples show various images, each processed with one or more of the techniques introduced above. In each case the original image is shown on the left and the processed image on the right. Since, just like in real life, we often learn more from our mistakes than from what we get right the first time, I have included some ‘mistakes’ to demonstrate things I believe did not work so well.

Retro Filter = Ansel

The Ansel filter on this subject has too much contrast – there is no detail in the dress and the subtle detail reflected in the glass behind her is washed out.

Scene = Concert

The Concert filter used here looks unnatural:  skin tones too red. It does make the reflections in the glass pop out though…

Scene = Clarity

The Clarity scene is mostly successful here:  improved detail in her dress, the reflections in the window are clearer, the detail in the stone floor pops more. I would opt for a bit less saturation in her skin tones, particularly the legs – the best technique here would be to ‘turn down’ the Clarity a bit. This can be done, but not within Camera+ (at this time).

Special Filter = Overlay

The Overlay filter as used here offers another interpretation of the scene. The slight vignetting helps focus on the subject, the floor is now defocused, the background reflection in the glass behind her is lessened – the only thing I would like to see improved is her dress is a bit dark – hard to see the detail.

Scene = Portrait

The Portrait scene punches up the contrast, and in this case works fairly well. The floor is a bit bright, and it’s a personal decision if the greater detail revealed in the reflection behind the subject is distracting or not… The upper half of her dress is a bit dark due to the increased contrast, it would be nice to see more detail there.

Scene = Shade

This Shade scene applied shows the effects rather well: the scene is warmed up and the brightness is slightly raised.

Analog Filter = Silver Gelatin

The Silver Gelatin effect is demonstrated well here: blacks are a bit purple, all the whites/grays go a bit yellow. It’s a nice soft look for the right subject matter.

Scene = Backlit

The Backlit scene helps add fill to the otherwise underexposed subject. Since in this case she is standing in open shade (very cool in terms of color temperature), the tendency for this Scene to make skin tones too warm is ok. The background is blown out a bit though.

Scene = Food

In this version, the Food scene is used, well, not for food… it doesn’t add as much fill light to the subject as Backlit, but the background isn’t as overexposed either.

Scene = Backlit

Here’s an example of using the Backlit scene for a subject that is not a person – rather for the whole foreground that is in virtual silhouette. It helps marginally, and does tend to wash out the sky some. But the path and the parked cars have more visibility.

Scene = Backlit

Here is the Backlit scene used in the traditional manner – and as was discussed when this scene was introduced above, I find the skin tones just too red. It does fix the lighting however. All is not lost – once we move on to other techniques to be discussed in a future post:  using another app for editing the results of Camera+. For instance, if we now bring this image into PhotoForge2 and apply color correction and some desaturation we can tone down the red skin and back off the overly saturated colors of their tops.

Scene = Backlit

Here’s another example of the Backlit scene. It does again resolve the fill light issue, but once again oversaturates both the skin tones and the background.

Scene = Backlit

Here is Backlit scene used to attempt to fix this shot where insufficient fill light was available on the subject.

Scene = Clarity

This shows the Clarity scene used to help the lack of fill light. I think it works much better than Backlit in this case. I would still follow up with raising the black levels to bring out details in her shirt.

Scene = Beach

Here are three versions of another backlit scene, with different solutions: the first one uses the Beach scene…

Scene = Flash

This version uses Flash…

Scene = Clarity

and the final version tries Clarity. I personally like this one the best, although I would follow up by raising the deep blacks just a bit to bring out the detail in the first girl’s top and pants.

Scene = Beach

This is a woman at the beach… showing what the Beach scene will do… in my opinion, it doesn’t add anything, and has three issues:  the breaking wave is now clipped, with loss of detail in the white foam, the added contrast reduces the detail in her shirt and pants, and the sand at the bottom of the image has lost detail and become blocky. This exemplifies my earlier comment on this type of scene filter:  use it to change the lighting of your image to ‘look like it was shot at the beach’, not fix images that were taken at the beach…

Scene = Clarity

Here once again is our best friend Clarity… this scene really brings out the detail in the waves, sand and her clothes. It’s a great use of this filter, and shows the added ‘punch’ that Clarity can often bring to a shot that is correctly exposed, but just a bit flat.

Scene = Beach, followed by Special Filter = Overlay @ 67%

Now here is a very different interpretation of the same original shot. It all goes to what story you want to tell with your shot. The Clarity version above is a better depiction of the scene than either the original or the version using Beach, but the method shown here (the Special filter Overlay, at 67% intensity) brings a completely different feeling to the shot. The woman becomes the central figure, with the beach only a hint of the surroundings…

Scene = Scenery

Using the traditional approach… the Scenery scene on, well, scenery… Doesn’t work well – the mid-ground goes dark, the mountains in the rear are too blue, and the foreground bush loses detail and snap.

Scene = Shade

Here is the same scene using Shade. It does bring out the detail in the middle of the image, and warm up the reflection of the sky in the lake.

Scene = Sunset

Another view, this time using Sunset. Here we have many of the same issues as the first shot did using the Scenery version.

Scene = Beach

Now here I have used the Beach scene type. Although maybe not an intuitive choice, I like the results:  the lake has a warmer reflection of the sky, the contrast between the foreground bush and the middle area is increased, but without losing detail in the middle trees; the mountains in the rear have picked up detail, and even the shore on the left of the lake has more punch. Know your tools…

Scene = Portrait

Next shown are five different versions of a group with some challenging parameters:  white dresses (that pick up reflected light), dark pants and jacket on the man, theatrical lighting (it’s actually a white rug!), and bright backlighting on the left. This first version, using Portrait, with its increased contrast, helps the dresses to a more pure white look, but now the man’s head blends right into the picture – not enough black detail to separate the objects.

Special Filter = HDR

Here is a version using the special filter HDR. Very stylized. Does clearly separate all the details, the picture on the wall is now visible, and easily separated from the man’s head. The glow and the heavy tinting of the model’s dresses is just part of the ‘look’…

Special Filter = Faded @ 50%

As overstated as the last version may have been, this one goes the other way. Using the special filter Faded (at 50% intensity), the desaturation afforded by this method removes the tinting of the white dresses (they pick up the turquoise lighting), and the skin tones look more natural. Maybe a bit flat…

Scene = Clarity

Here is the Clarity scene type. While it does separate out his head from the picture again, the enhanced edges don’t really work as well in this shot. The increased local saturation actually causes the rug in the foreground to blend together – the original has more detail. This is one of the potential side-effects of this filter – when the source has highly saturated colors to start with, some weird things can happen.

Scene = Beach

And the last version, using the Beach scene type. The dresses have punch, the skin tones are warmer, but the higher contrast once again merges his head with the picture. The lighting on the rug also looks overdone.

Scene = Scenery

This is a typical landscape shot, treated with the Scenery filter. Doesn’t work. The mountains have gone almost black, the sea has lost its shading and beautiful turquoise color, and the rocks in the foreground have gone harsh.

Scene = Clarity

Now here’s Clarity used on the same scene. This scene type brings out all the detail without overwhelming any one area. The only two minor faults I would point out are the small ‘halo’ effect in the sky near the edges of the mountains, and the emerald area of the ocean (just above the rocks, next to the foam on the shore), which is a little oversaturated. But all in all, a much better filter than Scenery – for this shot. It’s all in using the right tool for the right job.

Scene = Clarity

Now, as good as Clarity can be for some things, here it causes a very different effect. Not to say it’s wrong – if you are looking for posterization and noise, then this can be a great effect.

Scene = Darken

Although the Darken scene may not seem at all what one would choose on an already poorly lit scene, it focuses attention purely on the subject, and reduces some of the noise and mottling in her top.

Special Filter = Faded @ 33%

Now here is an interesting solution: using the special filter Faded (at 33% intensity) to reduce the saturation of the scene. This immediately brings out a more natural modeling of her face, makes her right hand look less blocky, and brings a more natural look to the subject in general.

Scene = Clarity

A sunset scene using the Clarity filter. In this case, Clarity is not our BFF… the brick goes oversaturated, and the shadows in the foreground are just too much – the eye gets confused and doesn’t know where to look.

Scene = Darken

Here, the Darken scene type is tried. Not much better than Clarity, above. (Hint: sometimes the best result is to leave things alone… a properly exposed and composed image often is perfect as it stands, without additional meddling…)

Scene = Sunset

Here we have a number of different expressions of a train leaving the station at sunset… this one using the Sunset scene type. While it does add some color to the sky, most of the detail in the shadows is now gone.

Scene = Clarity

This version shows the use of Clarity. Detail that was barely visible now pops out. The shot now is almost a bit too busy – there is so much extraneous detail to the right and left of the train that the focus of the moment has changed…

Special Filter = HDR (100%)

Here’s a very different version, using HDR at full intensity. Stylized – tells a certain story – but the style is almost overwhelming the image.

Special Filter = HDR @ 30%

This is HDR again, but this time at 30% intensity. What a difference! I feel this selection is even better than Clarity – detail in the middle of the shot is visible, but not detracting from the train. There is now just enough information in the shadows to round out the shot, yet not pull the eye from the story.

Scene = Cloudy

The difference between the Cloudy and Shade scene types is useful to understand. Here are two subjects standing in open shade – i.e. illuminated only by open sky with no clouds. This version is filtered with the Cloudy scene type. Note that the back wall has changed hue from the original (gone warmer), as has the street. The subjects are still a bit dark as well.

Scene = Shade

Here is the same shot, processed with the Shade scene type. The brightness is improved, the color temperature is not warmed up as much, and overall this is a better solution. (Well, after all, they were standing in the shade 🙂

Scene = Cloudy

Here is another pair of comparisons between the Cloudy and Shade scene type filters. This choice was less obvious, as the hostess was standing just outside the entrance to a restaurant, in partial shade; but the sky was very overcast – a ‘cloudy’ illumination. Notice that, since the Cloudy filter warms the scene more than the Shade filter does, her skin tone has warmed up considerably, as have her sheer top and the menu. The top and menu are also a bit clipped, losing detail in the highlights.

Scene = Shade

Here is the ‘Shade’ version. Her skin is not as warm, and the top and menu retain more of their original whiteness. The white levels are better as well, with less clipping on the menu and the left side of her top. This shows that often you must try several filter types and make a selection based on the results – this could have easily gone either way, given the nature of the lighting.

Scene = Cloudy

College campus, showing the use of the Cloudy filter. Here this choice is clearly correct. The sky is obviously cloudy <grin> and the resultant warmth added by the filter is appreciated in the scene.

Scene = Shade

This version, using the Shade scene type, is not as effective. The scene is still a bit cold, and the additional brightness added by this filter makes the sky overly bright – the general illumination now somewhat contradicts the feeling of the scene – a cool and cloudy day.

Scene = Concert

We discussed learning from our mistakes… here is why you usually don’t want to use the Concert scene type at a concert…

Scene = Darken

The Darken scene type is called for here to help with the overexposure. While it doesn’t completely fix the problem, it certainly helps.

Scene = Darken

The Darken filter used again to good effect.

Scene = Darken, followed by Special Filter = HDR @ 20%

This example shows a powerful feature of Camera+: the ability to layer an Effect on top of a Scene type. You can only use one of each, and you can’t put a scene on top of another scene (or an effect on top of an effect) – but the range of possible changes has now multiplied enormously. Here, the original scene was overexposed. A combination of the Darken scene type, followed by the HDR filter at 30% intensity, made all the difference.

Scene = Flash

This shows a typical challenging scene to capture correctly – a dark interior looking out to a brightly lit exterior. In the original exposure you can see that the camera exposed for the outdoor portion, leaving the interior very dark. The first attempt to rectify this uses the Flash scene type. While this helped the bookcases in the hall, the exterior is now totally blown out, and the right foreground is still too dark.

Scene = Night

Here is the result of using the Night scene type. Better detail in both the hallway as well as the right foreground – and the exterior is now visible, even though still a bit overexposed.

Special Filter = HDR

This version uses the HDR effect filter – giving the best overall exposure between the outside, the bookcase and the foreground. Ideally, I would follow up with another app and raise the black levels a bit to bring out more detail in the shadows in the foreground and near part of the hall.

Scene = Flash

A night-blooming cactus photographed before sunrise – it’s a bit underexposed. Using the Flash scene type brings out the plant well, but the white flowers are now too hot.

Scene = Night

The Night scene type provides a better result, with good balance between the flowers, plant and trees behind.

Special Filter = HDR

Using the HDR filter to attempt to improve the foreground illumination of this shot. It helps… but the typical style of this tone-mapping filter oversaturates the reds in the wood, and the foreground is still a bit dark.

Scene = Flash

And here’s another attempt, using the Flash scene type. A different set of side effects… the sunlight on the wall is now overexposed, and the foreground is still not ideally lit. Camera+ can’t fix everything… (in this case, a different app – PhotoForge2, which has a powerful “Shadows & Highlights” tool – did a better job. We’ll see that when we get around to discussing that app).

Scene = Flash

Here’s a good use for the Flash scene. The original is very dark, the filtered version really does look like a flash had been used.

Scene = Flash

And here’s one that didn’t work so well. The Flash filter didn’t do what a real flash would have done: illuminate the interior without making any difference in the exterior lighting. Here, the opposite took place: the sunset sky is blown out, yet the interior isn’t helped at all.

Scene = Flash

The Flash scene used on a shot out an airplane window, during takeoff from London in late twilight. Doesn’t work for me…

Scene = Night

This time, Night was used. Less dramatic – personally I prefer the original. But it’s a good example to show how the different filters operate.

Scene = Food

Ok, just had to try food with Food (scene type)… You can see the filter at work: whites are warmed up with more red; the potatoes now look almost like little sausages; contrast is increased. I would really like it if Intensity sliders were added to the Scene types as well as the Effects… here a better result would be found with about 40% of this filter dialed in, I believe…

Special Filter = HDR

Main street in Montagu, a little town in South Africa. The HDR filter shows how to help resolve a high contrast scene. I didn’t redo this one, but I would have had a more natural look if I had dialed back the intensity of the HDR filter to about 50%.

Scene = Scenery

The Scenery filter applied to an outdoor scene. I find this too contrasty, and even the sky looks a bit oversaturated.

Scene = Clarity

Here is Clarity applied to the same image. Shadow detail much improved (except right under nearest arch). However the right side of the image looks a bit ‘post-cardy’ (flat and washed out).

Special Filter = HDR @ 35%

HDR filter applied at 35%. A different set of ‘almost but not quite’ issues… Arches in shadows are too blue, the sunlit portions are blown out, and the sky is too saturated. The trees on the right are a big improvement over the previous version (Clarity) however. Sometimes you really do need Photoshop….

Scene = Clarity

This is a tough shot:  extreme brightness differences – my estimate is over 14 stops of exposure – way more than the iPhone camera can handle. So it’s really a matter of how best to interpret a scene that will inevitably have both white and black clipping. I used Clarity on this version – didn’t help out the highlights at all, but did add some detail in the shadows, as well as a bit of punch to her pants.

Special Filter = HDR @ 50%

The HDR effect was used here at 50% intensity. This brought a bit more control to at least the edges of the highlights, and still opened up the shadows. Note the difference in the shaded carpet at lower left of the image between this version and the previous one (done with Clarity). I actually prefer this version – the Clarity one seems a bit too much. Overall, I think this does a better job on this particular scene.

Scene = Night

Showing the use of the Night scene type at the last few minutes of twilight. As is often the case, I prefer the original shot, but wanted to demonstrate the capabilities of this scene type. Technically it did a great job – it just didn’t tell the story in the same way as the original.

Scene = Night

Now this scene is an excellent example of what the Night filter can do in the right circumstance. Almost full dark, only illumination was from store windows, streetlights and headlights.

Scene = Night

The Night scene type bringing out enough detail to make the shot.

Scene = Night

One more Night shot – again, a very good use of this scene type.

Scene = Night, followed by Special Filter = HDR @ 33%

This is an example of ‘stacking’ two corrections on top of each other to fix a shot that really needed help. You don’t always have to use the Night scene type at night… Here, due to both underexposure and backlight, the Night filter was applied first, then HDR effect at 33% intensity on top of that. It’s not a perfect result but it definitely shows you what can be done with these tools.

Scene = Portrait

The Portrait scene type applied. The contrast is too much, and the red saturation makes her skin tones look like a lobster.

Color Filter = So Emo

A very different effect, using the So Emo filter.

Scene = Clarity

Just to show what Clarity does in this instance. Again, not the best choice – the background gets too busy, and her face suffers…

Scene = Portrait

Here is Portrait applied to, well, a portrait type shot. But once again the high contrast of this filter works against the best result:  blown out highlights, skin tones too bright.

Scene = Shade

Sometimes you use filters in not-obvious ways:  the Shade scene type was applied here, and from that we got a slight improvement in background brightness, without driving her top into clipping. You can now see some definition between her hair and the background, and the slight warmth added does not detract from the image.

Scene = Sunset

The Sunset filter applied at sunset… I think this is too much. The extra contrast killed the detail in the trees at right center, and the red roof is artificially saturated now.

Scene = Scenery

The Scenery filter applied at sunset. Looks a bit different than the Sunset filter, above, but has many of the same issues when used on this scene: loss of detail in the shadows, too much red saturation.

Scene = Scenery

The Scenery filter at sunset. If this scene type could be ratcheted down with a slider, like is possible with the Effects, then this shot could be enhanced nicely. The original is just a tad flat looking, but the full Monty of the Scenery filter is way over the top. I would love to see what 20% of this effect would do.

Scene = Shade

This shows the Shade scene type doing exactly what it is supposed to: warm up the skin tones, add a bit of brightness. Altogether a less forlorn look…

Scene = Clarity

The same shot with Clarity applied. Much punchier – while I like the detail it’s almost too much. Again, Camera+ engineers, please bring us intensity sliders for scene types!

Scene = Sunset

Now here’s a use for the Sunset scene type at last. While it may not be the best storytelling tool for this particular shot, it shows the nice punch and improvement in detail and saturation this filter can provide – once the original image is basically soft enough to take the sometimes overpowering influence of this scene type.

Scene = Sunset

Trying the Sunset scene type once again. If I could use this at half strength it would actually make this shot a bit more colorful – but in its current incarnation the saturation is too much.

Scene = Sunset

The last Sunset for this post. I promise… but just to prove that one should never say never – here is a sunset where applying Sunset certainly helps the sky and water. I would like the sign less saturated, and the increased contrast cost us the few remaining details hidden in the market building at the bottom of the frame.

Scene = Text

This is an example of the Text scene type. Ultra high contrast – really just for what it says (or other special effect you want to create using an almost vertical gamma curve!)

Digital Zoom

Now… one last thing before the end of this post: a very short discussion of Digital Zoom. I have ignored this topic up until now, but it’s really important in iPhonography (actually this affects all cellphone cameras, as well as many small inexpensive digital cameras). ‘Real’ cameras (i.e. all DSLRs and many mid-priced and up digital cameras) use Optical Zoom (or, as in some consumer-type digital cameras, both optical and digital zoom are offered). The difference is that with optical zoom, the lens elements physically move (i.e. a real zoom lens), changing both the magnification and the field of view that is projected onto the film or digital sensor. The so-called ‘Digital Zoom’ technique is a result of physical lenses that cannot “zoom”. Cellphone lenses are too small to adjust their focal length like a DSLR lens does. Also, the optical complexity of a zoom lens is far greater than that of a prime lens (fixed focal length) – and the light-gathering power of a zoom lens is always significantly less than that of a prime lens. (You can get relatively fast zoom lenses for DSLRs… as long as you have the budget of a small country…)

What the “digital zoom” technique is actually doing is cropping the image on the CMOS sensor (using only a small portion of the available pixels), then digitally magnifying that area back out to the original size of the sensor (in pixels). For example, the iPhone sensor is 3264×2448. If one were to crop down to half the linear dimensions (1632×1224), the lens would now only be covering 1/4 of the original area. Do the math: 3264×2448 is 8 megapixels; 1632×1224 is only 2 megapixels. What you do get for this is an apparent ‘zoom’ – greater perceived magnification, due to the smaller effective ‘sensor size’ and the fact that the original focal length of the lens has not changed. The original 35mm-equivalent focal length of the iPhone camera system is 32mm – by ‘cropping/zooming’ as described, the equivalent focal length is now 2x greater, or 64mm (in 35mm-equivalent terms). However – and this is a HUGE however – you pay a large price for this: resolution and noise. You now only have a 2 megapixel sensor… not the super-sharp 8 megapixel sensor that you started with. This is like stepping all the way back to the iPhone 3G – which had 2MP resolution. Wow! In addition, these 2 megapixels are now “zoomed” back up to fill the full size of the original 8MP sensor (in memory); essentially this means that each original pixel of the taking sensor now covers 4 pixels in the newly formed zoomed image, so any noise in the original pixel is magnified 4 times… The bottom line is that digital zoom ALWAYS makes noisy, low resolution images.
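
To make the arithmetic concrete, here is a tiny Swift sketch of the crop-and-magnify math. This is purely my own illustration (not anything from Apple’s API); the 2.0 factor is the linear crop used in the example above.

import Foundation

// Back-of-the-envelope digital zoom math for the iPhone4S still sensor.
// The factor is the linear crop (2.0 = keep half the width and half the height).
let fullWidth = 3264.0
let fullHeight = 2448.0
let equivalentFocalLength = 32.0   // 35mm-equivalent, still mode

func digitalZoom(factor: Double) {
    let w = fullWidth / factor
    let h = fullHeight / factor
    let megapixels = (w * h) / 1_000_000
    let newEquivalentFocal = equivalentFocalLength * factor
    print(String(format: "%.0f x %.0f -> %.1f MP, ~%.0f mm equivalent", w, h, megapixels, newEquivalentFocal))
}

digitalZoom(factor: 2.0)   // prints: 1632 x 1224 -> 2.0 MP, ~64 mm equivalent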

Now… just like in any good sales effort, you will hear grand ‘snake oil’ stories of how good this camera or that camera does at ‘digital zoom’ – and that it’s “just as good” as optical zoom. BS. Period. You can’t change the laws of physics… What is possible (and bear in mind that this is a really small band-aid on a big owie…) is to use some really sophisticated noise reduction and image processing algorithms to try to make this pot of beans look like filet again… and most hardware and software camera manufacturers try at some level. Yes, if such attempts succeed, then you are a LITTLE better off than you were before. Not much. So what’s the answer? Don’t use digital zoom. Just say no. Unless you can accept the consequences (noisy, low resolution images). We’ll discuss this further in the last part of this series on Tips & Techniques for iPhonography, but for now you can see why I don’t address it as a feature.

Ok, that’s it. Really. I hope you have a bit more info now about this very useful app. Many of my upcoming posts on the rest of the software tools for the iPhone will not be nearly as detailed, but this was an opportunity to discuss many topics that are germane to most photography apps, offer a bit of a guide to a very popular and useful app that currently publishes no manual or even a help screen, and demonstrate the thought process involved in working with filters, lighting and so on.

Many thanks for your attention.

iPhone4S – Section 4a: Camera app

March 14, 2012 · by parasam

Camera   The original iPhone photo app. Pretty simple – use the camera icon button to take a picture, or use Volume Up (+) on the side of the phone. Option buttons on top of screen:

  • Flash: On/Auto/Off
  • Options:  Grid – On/Off;  HDR – On/Off
  • Rear-facing/Front-facing camera selector

Features/Buttons on bottom of screen:

  • Last shot preview thumbnail
  • Shutter release button
  • Still/Video selector

The blue box is the area where both exposure and focus are measured. Option buttons are across the top of the screen.

When “Options” is selected, you can choose to display a grid overlay on the screen as an aid in composition, and turn the HDR feature on or off.

When 'video' mode is selected, the options change, and the shutter button changes to a 'record' button. Press to start recording, press again to stop.

iPhone4S – Section 4: Software

March 13, 2012 · by parasam

This section of the series of posts on the iPhone4S camera system will address the all-important aspect of software – the glue that connects the hardware we discussed in the last section with the human operator. Without software, the camera would have little function. Our discussion will be divided into three parts:  Overview; the iOS camera subsystem of the Operating System; and the actual applications (apps) that users normally interact through to take and process images.

As the audience of this post will likely cover a wide range of knowledge, I will try to not assume too much – and yet also attempt not to bore those of you who likely know far more than I do about writing software and getting it to behave in a somewhat consistent fashion…

Overview

The iPhone, surprise-surprise – is a computer. A full-fledged computer, just like what sits on your desk (or your lap). It has a CPU (brain), memory, graphics controller, keyboard, touch surface (i.e. mouse), network card (WiFi & Bluetooth), a sound card and many other chips and circuits. It even has things most desktops and laptops don’t have:  a GPS radio for location services, an accelerometer (a really tiny gyroscope-like device that senses movement and position of the phone), a vibrating motor (to bzzzzzz at you when you get a phone call in a meeting) – and a camera. A rather cool, capable little camera. Which is rather the point of our discussion…

So… like any good computer, it needs an operating system – a basic set of instructions that allows the phone to make and receive calls, data to be written to and read from memory, information to be sent and retrieved via WiFi – and on and on. In the case of the iDevice crowd (iPod, iPhone, iPad) this is called iOS. It’s a specialized, somewhat scaled down version of the full-blown OS that runs on a Mac. (Actually it’s quite different in the details, but the concept is exactly the same). The important part of all this for our discussion is that a number of basic functions that affect camera operation are baked into the operating system. All an app has to do is interact via software with these command structures in the OS, present the variables to the user in a friendly manner (like turning the flash on or off), and most importantly, take the image data (i.e. the photograph) and allow the user to save it or modify it, based on the capability of the app in question.

The basic parameters that are available to the developer of an app are the same for everyone. It’s an equal playing field. Every app developer has exactly the same toolset, the same available parameters from the OS, and the same hardware. It’s up to the cleverness of the development team to achieve either brilliance or mediocrity.

The Core OS functions – iOS Camera subsystem

The following is a very brief introduction to some of the basic functions that the OS exposes to any app developer – which forms the basis for what an app can and cannot do. This is not an attempt to show anyone how to program a camera app for the iPhone! Rather, a small glimpse into some of the constraints that are put on ALL app developers – the only connection any app has with the actual hardware is through the iOS software interface – also known as the API (Application Programming Interface). For instance, Apple passes on to the developers through the API only 3 focus modes. That’s it. So you will start to see certain similarities between all camera apps, as they all have common roots.

There are many differences, due to the way a given developer uses the functions of the camera, the human interface, the graphical design, the accuracy and speed of computations in the app, etc. It’s a wide open field, even if everyone starts from the same place.

In addition, the feature sets made available through the iOS API change with each hardware model, and can (and do!) change with upgrades of the iOS. Of course, each time Apple changes the underlying API, each app developer is likely to need to update their software as well. So then you’ll get the little red number on your App Store icon, telling you it’s time to upgrade your app – again.

The capabilities of the two cameras (front-facing and rear-facing) are markedly different. In fact, all of the discussion in this series has dealt only with the rear-facing camera. That will continue to be the case, since the front-facing camera is of very low resolution, intended pretty much just to support FaceTime and other video calling apps.

Basic iOS structure

The iOS is like an onion, layers built upon layers. At the center of the universe… is the Core. The most basic is the Core OS. Built on top of this are additional Core Layers: Services, Data, Foundation, Graphics, Audio, Video, Motion, Media, Location, Text, Image, Bluetooth – you get the idea…

Wrapped around these “apple cores” are Layers, Frameworks and Kits. These Apple-provided structures further simplify the work of the developer, provide a common and well tuned user interface, and expand the basic functionality of the core systems. Some examples are:  Media Layer (including MediaPlayer, MessageUI, etc.); the AddressBook Framework; the Game Kit; and so on.

Our concern here will be only with a few structures – the whole reason for bringing this up is to allow you, the user, to understand what parameters on the camera and imaging systems can be changed and what can’t.

Focus Modes

There are three focus modes:

  • AVCaptureFocusModeLocked: the focal area is fixed.

This is useful when you want to allow the user to compose a scene then lock the focus.

  • AVCaptureFocusModeAutoFocus: the camera does a single scan focus then reverts to locked.

This is suitable for a situation where you want to select a particular item on which to focus and then maintain focus on that item even if it is not the center of the scene.

  • AVCaptureFocusModeContinuousAutoFocus: the camera continuously auto-focuses as needed.

Exposure Modes

There are two exposure modes:

  • AVCaptureExposureModeLocked: the exposure mode is fixed.
  • AVCaptureExposureModeAutoExpose: the camera continuously changes the exposure level as needed.

Flash Modes

There are three flash modes:

  • AVCaptureFlashModeOff: the flash will never fire.
  • AVCaptureFlashModeOn: the flash will always fire.
  • AVCaptureFlashModeAuto: the flash will fire if needed.

Torch Mode

Torch mode is where a camera uses the flash continuously at a low power to illuminate a video capture. There are three torch modes:

  •    AVCaptureTorchModeOff: the torch is always off.
  •    AVCaptureTorchModeOn: the torch is always on.
  •    AVCaptureTorchModeAuto: the torch is switched on and off as needed.

White Balance Mode

There are two white balance modes:

  •    AVCaptureWhiteBalanceModeLocked: the white balance mode is fixed.
  •    AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: the camera continuously changes the white balance as needed.

You can see from the above examples that many of the features of the camera apps you use today inherit these basic structures from the underlying AVFoundation capture API. There are obviously many, many more parameters available for control by a developer team – depending on whether you are doing basic image capture, video capture, audio playback, modifying images with built-in filters, etc. etc.
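
For the programmers in the audience, here is a minimal sketch of how an app asks iOS for these behaviors. It uses today’s Swift spellings of the AVFoundation calls (which map onto the Objective-C constants listed above); the device selection and error handling are simplified, so treat it as an illustration rather than production code.

import AVFoundation

// Sketch: configure the camera's focus, exposure, white balance and torch
// behavior through AVFoundation. Each mode is checked for support first,
// since not every device (or camera) offers every mode.
func configureCamera() {
    guard let device = AVCaptureDevice.default(for: .video) else { return }
    do {
        try device.lockForConfiguration()
        if device.isFocusModeSupported(.continuousAutoFocus) {
            device.focusMode = .continuousAutoFocus          // ContinuousAutoFocus
        }
        if device.isExposureModeSupported(.locked) {
            device.exposureMode = .locked                    // ExposureModeLocked
        }
        if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
            device.whiteBalanceMode = .continuousAutoWhiteBalance
        }
        if device.hasTorch, device.isTorchModeSupported(.auto) {
            device.torchMode = .auto                         // low-power 'flash' for video
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the camera for configuration: \(error)")
    }
}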

While we are on the subject of core functionality exposed by Apple, let’s discuss camera resolution.

Yes, I know we have heard a million times already that the iPhone4S has an 8MP maximum resolution (3264×2448). But there ARE other resolutions available. Sometimes you don’t want or need the full resolution – particularly if the photo function is only a portion of your app (ID, inventory control, etc.) – or even as a photographer you want more memory capacity and for the purpose at hand a lower resolution image is acceptable.

It’s almost impossible to find this data, even on Apple’s website. Very few apps give access to different resolutions, and the ones that do don’t give numbers – it’s ‘shirt sizes’ [S-M-L]. Deep in the programming guidelines I found that the capture session (together with AVCaptureStillImageOutput) can be ‘preset’ to one of the values below:

Still image presets:

  • Photo – 3264×2448
  • High – 1920×1080
  • Med – 640×480
  • Lo – 192×144

Video presets:

  • 1080P – 1920×1080
  • 720P – 1280×720
  • 480P – 640×480

I then found one of the very few apps that supports ALL of these resolutions (almost DSLR) and shot test stills and video at each resolution to verify. Everything matched the above settings EXCEPT for the “Lo” preset in still image capture. The output frame measured 640×480, the same as “Med” – however the image quality was much lower. I believe that the actual image IS captured at 192×144, but then is scaled up to 640×480 – why, I am not sure, but it is apparent that the Lo image is of far lower quality than Med. The file size was smaller for the Lo quality image – but not by enough that I would ever use it. On the tests I shot, Lo = 86kB, Med = 91kB. The very small difference in size is not worth the big drop in quality.
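
If you want to experiment with this as a developer, the knob is exposed (in today’s AVFoundation) as a session preset – a minimal sketch under that assumption, with the caveat that the exact pixel dimensions each preset maps to depend on the device:

import AVFoundation

// Sketch: request a lower-resolution capture pipeline via a session preset.
let session = AVCaptureSession()
if session.canSetSessionPreset(.medium) {
    session.sessionPreset = .medium     // roughly the 640x480 "Med" row above
}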

So… now you know. You may never have need of this, or not have an app that supports it – but if you do require the ability to shoot thousands of images and have them all fit in your phone, now you know how.

There are two other important aspects of image capture that are set by the OS and not changeable by any app:  color space and image compression format. These are fixed, but different, for still images and video footage. The color space (which for the uninitiated is essentially the gamut – or range of colors – that can be reproduced by a color imaging system) is set to sRGB. This is a common and standard setting for many digital cameras, whether full sized DSLR or cellphones.

It’s beyond the scope of this post to get into color space, but I personally will be overjoyed when the relatively limited gamut of sRGB is put to rest… however, it is appropriate for the iPhone and other cellphone camera systems due to the limitations of the small sensors.

The image compression format used by the iPhone (all models) is JPEG, producing the well-known .jpg file format. Additional comments on this format, and potential artifacts, were discussed in the last post. Since there is nothing one can do about this, no further discussion at this time.

In the video world, things are a little different. We actually have to be aware of audio as well – we get stereo audio along with the video, so we have two different compression formats to consider (audio and video), as well as the wrapper format (think of this as the envelope that contains the audio and video track together in sync).

One note on audio:  if you use a stereo external microphone, you can record stereo audio along with the video shot by the iPhone4S. This requires an external device which connects via the 30-pin docking connector. You will get far superior results – but of course it’s not as convenient. Video recordings made with the on-board microphone (same one you use to speak into the phone) are mono only.

The parameters of the video and audio streams are detailed below: (this example is for the full 1080P resolution)

General

Format : MPEG-4
Format profile : QuickTime
Codec ID : qt
Overall bit rate : 22.9 Mbps

Video

ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Baseline@L4.1
Format settings, CABAC : No
Format settings, ReFrames : 1 frame
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Bit rate : 22.4 Mbps
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Rotation : 90°
Frame rate mode : Variable
Frame rate : 29.500 fps
Minimum frame rate : 15.000 fps
Maximum frame rate : 30.000 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.367
Title : Core Media Video
Color primaries : BT.709-5, BT.1361, IEC 61966-2-4, SMPTE RP177
Transfer characteristics : BT.709-5, BT.1361
Matrix coefficients : BT.709-5, BT.1361, IEC 61966-2-4 709, SMPTE RP177

Audio

ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Bit rate mode : Constant
Bit rate : 64.0 Kbps
Channel(s) : 1 channel
Channel positions : Front: C
Sampling rate : 44.1 KHz
Compression mode : Lossy
Title : Core Media Audio

The highlights of the video/audio stream format are:

  • H.264 (MPEG-4) video compression, Baseline Profile @ Level 4.1, 22Mb/s
  • QuickTime wrapper (.mov)
  • AAC-LC audio compression, 44.1kHz, 64kb/s

The color space for the video is the standard adopted for HD television, Rec709. Importantly, this means that videos shot on the iPhone will look correct when played out on an HDTV.

This particular sample video, shot for this exercise, was recorded at just under 30 frames per second (fps); the video camera supports a range of 15–30 fps, controlled by the application.
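
If you want to verify these numbers on your own clips, the basic video-track parameters are readable through AVFoundation. A quick sketch follows – the file path is just a placeholder:

import AVFoundation

// Read back a recorded clip's basic video-track parameters.
let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/clip.mov"))   // placeholder path
if let track = asset.tracks(withMediaType: .video).first {
    print("Frame size: \(track.naturalSize)")                          // e.g. 1920.0 x 1080.0
    print("Frame rate: \(track.nominalFrameRate) fps")                 // e.g. ~29.5
    print("Data rate:  \(track.estimatedDataRate / 1_000_000) Mb/s")   // e.g. ~22
}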

Software Applications for Still & Video Imaging on the iPhone4S

The following part of the discussion will cover a few of the apps that I use on the iPhone4S. These are just what I have come across and find useful – this is not even close to all the apps available for the iPhone for imaging. I obtained all of the apps through the normal retail Apple App Store – I have no relationship with any of the vendors – they are unaware of this article (well, at least until it’s published…)

I am not a professional reviewer, and take no stance as to absolute objectivity – I do always try to be accurate in my observations, but reserve the right to have favorites! The purpose in this section is really to give examples of how a few representative apps manage to expose the hardware and underlying iOS software to the user, showing the differences in design and functionality.

These apps are mostly ‘purpose-built’ for photography – as opposed to some other apps that have a different overall purpose but contain imaging capabilities as part of the overall feature set. One example (that I have included below) is EasyRelease, an app for obtaining a ‘model release’ [legal approval from the subject to use his/her likeness for commercial purposes]. This app allows taking a picture with the iPhone/iPad for identification purposes – so has some very basic image capture abilities – it’s not a true ‘photo app’.

BTW, this entire post has been focused on only the iPhone camera, not the iPad (both 2nd & 3rd generation iPads contain cameras) – I personally don’t think a tablet is an ideal imaging device – it’s more like a handy accessory if you have your tablet out and need to take a quick snap than a camera. Evidently Apple feels this way as well, since the camera hardware in the iPads has always lagged significantly behind that of the iPhone. However, most photo apps will work on both the iPad and the iPhone (even on the 1st generation model – with no camera), since many of the apps support working with photos from the Camera Roll (library) as well as directly from the camera.

I frequently work this way – shoot on iPhone, transfer to iPad for easier editing (better for tired eyes and big fingers…), then store or share. I won’t get into the workflows of moving images around – it’s not anywhere near as easy as it should be, even with iCloud – but it’s certainly possible and often worth the effort.

Here is the list of apps that will be covered. For quick reference I have listed them all below with a simple description, a more detailed set of discussions on each app follows.

[Note:  due to the level of detail, including many screenshots and photo examples used for the discussion of each app, I have separated the detailed discussions into separate posts – one for each app. This allows the reader to select only the app(s) they may be interested in, as well as keeping an individual post to a reasonable length. This is important for mobile readers…]

Still Imaging

Each of the app names (except for original Camera) is a link that will take you to the corresponding page in the App Store.

Camera  The original photo app included on every iPhone. Basic but intuitive – and of course the biggest plus is the ability to fast-launch this without logging in to the home page first. For street photography (my genre) this is a big feature.

Camera+  I use this as much for editing as shooting; the biggest advantage over the native iPhone camera app is that you can set different parts of the frame for exposure and focus. The info covers the just-released version 3.0

Camera Plus Pro  This is similar to the above app (Camera+) – some additional features, not the least of which it shoots video as well as still images. Although made by a different company, it has many similar features, filters, etc. It allows for some additional editing functions and features ‘live filters’ – where you can add the filter before you start shooting, instead of as a post-production workflow in Camera+. However, there are tradeoffs (compression ratio, shooting speed, etc.)  Compare the apps carefully – as always, know your tools…  {NOTE: There are two different apps with very similar names: Camera+, made by TapTapTap with the help of pixel wizard Lisa Bettany; and Camera Plus, made by Global Delight Technologies – who also make Camera Plus Pro – the app under discussion here. Camera+ costs $0.99 at the time of this post; Camera Plus is free; Camera Plus Pro is $1.99 — are you confused yet? I was… to the point where I felt I needed to clarify this situation of unfortunately very similar brand names for somewhat similar apps – but there are indeed differences. I’m going to be as objective in my observations as possible. I am not reviewing Camera Plus, as I don’t use it. Don’t infer anything from that – this whole blog is about what I know about what I personally use. I will be as scientific and accurate as possible once I write about a topic, but it’s just personal preference as to what I use}

almost DSLR is the closest thing to fully manual control of the iPhone camera you can get. Takes some training, but is very powerful once you get the hang of it.

ProHDR I use this a lot for HDR photography. Pic below was taken with this. It’s unretouched! That’s how it came out of the camera…

Big Lens This allows you to manually ‘blur’ the background to simulate shallow depth of field. Quite useful, since the 30mm focal length lens (35mm equivalent) puts almost everything in focus.

Squareready  If you use Instagram then you know you need to upload in square format. Here’s the best way to do that.

PhotoForge2  Powerful editing app. Basically Photoshop on the iPhone.

Snapseed  Another very good editing app. I use this for straightening pix, as well as ability to tweak small areas of picture differently. On some iPhone snaps I have changed 9 different areas of picture with things like saturation, contrast, brightness, etc.

TrueDoF  This one calculates true depth-of-field for a given lens, sensor size, etc. I use this when shooting DSLR to plan my range of focus once I know my shooting distance.

OptimumCS-Pro  This is sort of inverse of the above app – here you enter the depth of field you want, then OCSP tells you the shooting distance and aperture you need for that.

Iris Photo Suite  A powerful editing app, particularly in color balance, changing histograms, etc. Can work with layers like Photoshop, perform noise reduction, etc.

Filterstorm  I use this app to add visible watermarks to images, as well as many other editing functions. Works with layers, masks, variable brushes for effects, etc.

Genius Scan+  While this app was intended for (and I use it for this as well) scanning documents with the camera to pdf, I found that it works really well to straighten photos… like when you are shooting architectural and have unavoidable keystoning distortion… Just be sure to pull back and give yourself some surround on your subject, as the perspective cropping technique that is used to straighten costs you some of your frame…

Juxtaposer  This app lets you layer two different photos onto each other, with very controllable blending.

Frame X Frame  Camera app, used for stop motion video production as well as general photography.

Phonto  One of the best apps for adding titles and text to shots.

SkipBleach  This mimics the effect of skipping (or reducing) the bleach step in photochemical film processing. It’s what gives that high contrast, faded and harsh ‘look’.

Monochromia  You probably know that getting a good B&W shot out of a color original is not as simple as just desaturating… here’s the best iPhone app for that.

MagicShutter  This app is for time exposures on iPhone, also ‘light painting’ techniques.

Easy Release  Professional model release. Really, really good – I use it on iPad and have never gone back to paper. Full contractual terms & conditions, you can customize with your additional wording, logo, etc. – a relatively expensive app ($10) but totally worth it in terms of convenience and time saved if you need this function.

Photoshop Express  This is actually a bit disappointing for a $5 app – others above do more for less – except that the noise reduction (a new feature) is worth the price alone. It’s really, really good.

Motion Imaging

Movie*Slate  A very good slate app.

Storyboard Composer  Excellent app for building storyboards from shot or library photos, adding actors, camera motion, script, etc. Powerful.

Splice  Unbelievable – a full video editor for the iPhone/iPad. Yes, you can: drop movies and stills on a timeline, add multiple sound tracks and mix them, work in full HD, has loads of video and audio efx, add transitions, burn in titles, resize, crop, etc. etc. Now that doesn’t mean that I would choose to edit my next feature on a phone…

iTC Calc  The ultimate time code app for iDevices. I use on both iPad and iPhone.

FilmiC Pro  Serious movie camera app for iPhone. Select shooting mode, resolution, 26 frame rates, in-camera slating, colorbars, multiple bitrates for each resolution, etc. etc.

Camera Plus Pro  This app is listed under both sections, as it has so many features for both still and motion photography. The video capture/edit portion even has numerous filters that can be used during capture.

Camcorder Pro  Simple but powerful HD camera app. Anti-shake and other features.

This concludes this post on the iPhone4S camera software. Please check out the individual posts following for each app mentioned above. I will be posting each app discussion as I complete it, so it may be a few days before all the app posts are uploaded. Please remember these discussions on the apps are merely my observations on their behavior – they are not intended to be a full tutorial, operations manual or other such guide. However, in many cases, the app publisher offers little or no extra information, so I believe the data provided will be useful.

iPhone4S – Section 3: Specifications & Hardware

March 11, 2012 · by parasam

This chapter of my series on the iPhone4S attempts to share what I have discovered on the actual hardware device – restricted to the camera in the iPhone. While this is specific to the iPhone, this is also representative of most high quality cellphone cameras.

First, a few notes on how I went about this, and some acknowledgements for those who discovered these bits first. All of the info I am sharing in this post was derived from the public internet. Where feasible I have tried to make direct acknowledgment of the source, but the formatting of this blog doesn’t always allow that without confusion (footnotes not supported, etc.), so I will insert a short list of sources just below. Although this info was pulled from the web, it has taken a LOT of research – it is not easy to find, and often the useful bits are buried in long, sometimes boring epistles on the entire iPhone – I want to focus just on the camera.

Apple, more than most manufacturers, is an incredibly secretive company. They put highly onerous stipulations on all their vendors – in terms of saying anything about anything at all that concerns work they do for Apple. Apple publishes only the most vague of specifications, and often that is not enough for a photographer that wants to get the most from his or her hardware. This policy will likely never change, so the continued efforts of myself and others will be required to unearth the useful bits about the device so we can use it to its fullest potential.

Here are some of the sources/people that published information on the iPhone4S that were used as sources for this article:

engadget.com

apple.com

iFixit.com

chipworks.com

arstechnica.com

sony.com

omnivision.com

jawsnap.net

whatdigitalcamera.com – Nigel Atherton

bcove.me

wired.com

geekbench.com

iprolens.com

eoshd.com

macrumors.com

forbes.com

image-sensors-world.blogspot.com

opco.bluematrix.com

oppenheimer.com

barons.com

motleyfool.com

isuppli.com

anandtech.com

teledyne.com

thephoblographer.com

popphoto.com

dvxuser.com – Barry Green

campl.us – Jonathan

Often the only way to finally arrive at a somewhat accurate idea of what made the iPhone tick was to study the ordering patterns of Chinese supply companies – using publicly available financial statements; review observations and suppositions from a large number of commentators and look for sufficient agreement; find labs that tore the phones apart and reverse-engineered them; and often using my decades of experience as a scientist and photographer to intuit the likelihood of a particular way of doing things. It all started with a bit of curiosity on my part – I had no idea what a lengthy treasure hunt this would turn out to be. The research for this chapter has taken several months (after all, I have a day job…) – and then some effort to weed through all the data and assemble what I believe to be as accurate an explanation of what goes on inside this little device as possible.

One important thing to remember:  Apple is in the habit of sourcing parts for their phone from two or more suppliers – a generally accepted business practice, since a single-source supplier could be a problem if that company had either financial or physical difficulties in fulfillment. This means that a description, photos, etc. of one vendor’s parts may not hold true for all iPhone4S models or inventory – but the general principles will be accurate.

Specifications

This post will be presented in three parts:  the hardware specs first, followed by details of the construction of the iPhone camera system (with photos of the insides of the phone/camera), then some examples of photos taken with various models of the iPhone and some DSLR cameras for comparison – this to show the relative capability of the iPhone hardware. The software apps that are the other half of the overall imaging system will be discussed in the next post.

iPhone4S detailed specs

  • Camera Sensor: OmniVision OV8830 or Sony IMX105
  • Sensor Type: CMOS-BSI (Complementary Metal Oxide Semiconductor – Backside Illumination)
  • Sensor Size: 1/3.2″ (4.54 x 3.42 mm)
  • Pixel Size: 1.4 µm
  • Optical Elements: 5 plastic lens elements
  • Focal Length: 4.28 mm
  • Equivalent Focal Length (ref. to 35mm system) – Still: 32 mm
  • Equivalent Focal Length (ref. to 35mm system) – Video: 42 mm
  • Aperture: f/2.4
  • Angle of View – Still: 62°
  • Aspect Ratio – Still: 4:3
  • Angle of View – Video: 46°
  • Aspect Ratio – Video: 16:9
  • Shutter Speed – Still: 1/15 sec – 1/2000 sec
  • ISO Rating: 64 – 800
  • Sensor Resolution – Still: 3264 x 2448 (8 MP)
  • Sensor Resolution – Motion: 1920 x 1080 (1080P HD)
  • External Size of Camera System Module: 8.5 mm W x 8.5 mm L x 6 mm D

Features:

  • Video Image Stabilization
  • Temporal Noise Reduction
  • Hybrid IR Filter
  • Improved Automatic White Balance (AWB)
  • Improved light sensitivity
  • Macro focus down to 3”
  • Improved Color Accuracy
  • Improved Color Uniformity
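
Before moving on to the discussion, a quick sanity check on these numbers: the pixel pitch multiplied by the still resolution should land very close to the quoted sensor dimensions. A minimal sketch in Python (my own arithmetic, nothing from Apple):

```python
# Quick consistency check: pixel pitch x still resolution vs. quoted sensor size.
pixel_pitch_mm = 1.4e-3            # 1.4 µm expressed in mm
width_px, height_px = 3264, 2448   # still resolution (8 MP)

sensor_w = width_px * pixel_pitch_mm    # ~4.57 mm
sensor_h = height_px * pixel_pitch_mm   # ~3.43 mm
print(f"computed active area: {sensor_w:.2f} x {sensor_h:.2f} mm")
# Close to the quoted 4.54 x 3.42 mm; the small difference is rounding in the
# published figures.
```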

Discussion on Specifications

iPhone4S Camera Assembly

Sensor – “Improved Light Sensitivity”

We’ll start with some basics on the heart of the camera assembly, the sensor. There are two types of solid-state devices that are used to image light into a set of electrical signals that can eventually be computed into an image file:  CCD (Charge Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor). CCD is an older technology, but produces superior images due to the way it’s constructed. However, these positive characteristics come at the cost of more outboard circuitry, higher power consumption, and higher cost. These CCD arrays are used almost exclusively in mid-to-high-end DSLR cameras. CMOS offers lower cost, much lower power consumption and requires fewer off-sensor electronics. For these reasons all cellphone cameras, including the iPhone, use CMOS sensors.

For those slightly more technically inclined, here is a good paragraph from Teledyne.com that sums up the differences:

Both types of imagers convert light into electric charge and process it into electronic signals. In a CCD sensor, every pixel’s charge is transferred through a very limited number of output nodes (often just one) to be converted to voltage, buffered, and sent off-chip as an analog signal. All of the pixel can be devoted to light capture, and the output’s uniformity (a key factor in image quality) is high. In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise-correction, and digitization circuits, so that the chip outputs digital bits. These other functions increase the design complexity and reduce the area available for light capture. With each pixel doing its own conversion, uniformity is lower. But the chip can be built to require less off-chip circuitry for basic operation.

Apple claims “73% more light” for the sensor used in the iPhone4S. Whatever that means… 73% of what?? Anyway, here is what, after some web-diving, I think is meant:  73% more light is converted to electricity as compared to the sensor used in the iPhone4. The reason is an improvement in the design of the CMOS sensor. The new device uses a technology called “Back Side Illumination”, or BSI. To understand this we must briefly discuss the way a CMOS sensor is built.

One of the big differences between CCD and CMOS sensors is that the CMOS chip has a lot of electronics built right into the surface of the sensor (this pre-processes the captured light and digitizes it before it leaves the sensor, greatly reducing the amount of external processing that must be done with additional circuitry in the phone). In the original CMOS design (FSI – or Front Side Illumination), the light had to pass through the layers of transistors, etc. before striking the diode surface that actually absorbs the photons and converts them to electricity. With BSI, the pixel is essentially “turned upside down” so the light strikes the back of the pixel first, increasing the device’s sensitivity to light, and reducing cross-talk to adjacent pixels. Here is a diagram from OmniVision, one of the suppliers of sensors for iPhone products:

Notice that the iPhone4S has 1.4µm pixels, so we will assume that if a given camera is using an OmniVision chip, it is provisioned with BSI, as opposed to BSI-2, technology. (Remember that Sony also apparently makes sensors for the iPhone4S using similar technology).

Sensor – Hybrid IR Filter

Although Apple has never clarified what is meant by this term, some research gives us a fairly good idea of this improvement. CMOS sensors are sensitive to a wide range of light, including IR (Infra Red) and UV (Ultra Violet). Since both of these light frequencies are outside the visible spectrum, they don’t contribute anything to a photograph, but can detract from it. Both IR and UV cause problems that result in diffraction, color non-uniformity and other issues in the final image.

For cost and manufacturing reasons, we believe that prior to the iPhone4S, the filter used was a simple thin-film IR filter (essentially another layer deposited on top of the upper-most layer of the sensor). These ‘thin-film’ IR filters have several down-sides:  due to their very thin design, they are subject to diffraction as the angle of the incident light on the surface of the filter/sensor changes – this leads to color gradations (non-uniformity) over the area of the image. Previously reported “magenta/green circle” issues with some images taken with the iPhone4 are likely due to this issue.

Also, the thin-film filter employed in the iPhone4 offered only a certain reduction in IR, not total by any means. This has been proven by taking pictures of IR laser light with an iPhone4 – the laser light is clearly visible! This should not be the case if the IR filter were efficient. Since silicon (the base material of chips) is transparent to IR light, what happens is the extraneous IR light rays bounce around off the metal structures that make up the CMOS circuitry, adding reflections to the image and affecting color balance, etc. UV light, while not visible to the human eye, is absorbed by the light sensitive diode and therefore adds noise to the image.

It appears that a proper ‘thick-film’ combination IR/UV filter has been fitted to the iPhone4S camera assembly, right on top of the sensor assembly. The proof of a more effective filter was a test photograph of the same IR lasers as the iPhone4 shot – and no laser light was visible on an iPhone4S. The color balance does appear to be better, with more uniformity, and less change of color gradation as the camera is moved about its axis.

A good test to try (and this is useful for any camera, not just a cellphone) is to evenly illuminate a large flat white wall (easier said than done BTW! – use a good light meter to ensure a true even illumination – this usually requires multiple diffuse light sources). You will need to put some small black targets on the wall (print a very large “X” on white paper and stick it to the wall in several places) so the auto-focus in the iPhone will work (the test needs to be focused on the wall for accuracy in the result). Then take several pictures, starting with the camera perfectly parallel to the wall, both horizontally and vertically. Then angle the camera very slightly (only a few degrees) and take another shot. Repeat this a few times, angling the camera both horizontally then vertically. This really requires a tripod. Ensure that the white wall is the only thing captured in the shot (don’t turn the camera so much that the edge of the wall is visible). Stay back as far as possible from the wall – this won’t be more than a few feet unfortunately due to the wide angle lens used on the iPhone – this will give more uniform results.

Ideally, all the shots should be pure white (except for the black targets). Likely, you will see some color shading creep in here and there. To really see this, import the shots into Photoshop or similar image application, enlarge to full screen, then increase the saturation. If the image has no color in it, increasing the saturation should produce no change. If there are color non-uniformities, increasing the saturation will make them more visible.
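
If you would rather do that saturation boost programmatically than in Photoshop, here is a minimal sketch using the Pillow library (the filename is hypothetical – any saturation control will do the same job):

```python
# Boost saturation heavily to make any color non-uniformity obvious.
from PIL import Image, ImageEnhance

img = Image.open("white_wall_test.jpg")         # hypothetical test shot of the white wall
boosted = ImageEnhance.Color(img).enhance(4.0)  # multiply saturation well past normal
boosted.save("white_wall_saturation_x4.jpg")
# A truly neutral frame stays white/grey; any magenta/green shading from the
# IR filter or lens becomes obvious in the boosted copy.
```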

The claims for “improved color accuracy” and “improved color uniformity” are both most likely due to this filter as well.

Sensor – Improved Automatic White Balance (AWB)

The iPhone4 was notorious for poor white balance, particularly under fluorescent lighting. The iPhone4S shows notable improvement in this area. While AWB is actually a function of software and the information received from the sensor, it is discussed here as well – more on this function will be introduced in the next section on camera apps and basic iPhone imaging software. The improved speed of this sensor, together with the better color accuracy discussed above, supplies more accurate data to the software so that better white balance can be achieved.

Sensor – Full HD (1080P) for video

A big improvement in the iPhone4S is the upgrade from 720P video to 1080P. The resolution is now 1920×1080 at the highest setting, allowing full HD to be shot. Since this resolution uses only a portion of the image sensor’s overall resolution (3264 x 2448), another feature is made possible – video image stabilization. One of the largest impediments to high quality video from cellphones is camera movement – which is often jerky and materially detracts from the image. To help ameliorate this problem, subsequent frames are compared and the edges of the frames matched up to each other – this essentially offsets the camera movement on a frame-by-frame basis.

In order for this to be possible, the image sensor must be larger than the image frame – which is true in this case. This allows for the image of the scene to be moved around and offset so the frames can all be stacked up on top of each other, reducing the apparent image movement. There are side-effects to this image-stabilization method:  some softness results from the image processing steps, and this technique works best for small jerky movements such as result from hand-held videography. This method does not really help larger movements (car, airplane, speedboat, roller-coaster).
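
For the curious, here is a minimal sketch of the general idea: estimate the small shift between consecutive frames, then crop the output window from the oversized sensor image so the scene lines up. This is only an illustration of the principle – Apple has not published its actual stabilization pipeline, which is certainly far more sophisticated:

```python
# Simple digital stabilization sketch: frames are 2D grayscale numpy arrays.
import numpy as np

def estimate_shift(prev, curr, max_shift=8):
    """Brute-force search for the (dy, dx) that best aligns curr to prev."""
    best, best_err = (0, 0), np.inf
    h, w = prev.shape
    m = max_shift
    a = prev[m:h - m, m:w - m].astype(float)
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            b = curr[m + dy:h - m + dy, m + dx:w - m + dx].astype(float)
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def stabilized_crop(frame, dy, dx, margin=8):
    """Crop the output window, offset so the estimated camera motion is cancelled."""
    h, w = frame.shape
    return frame[margin - dy:h - margin - dy, margin - dx:w - margin - dx]
```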

Sensor – Temporal Noise Reduction

In a similar fashion to the discussion about Automatic White Balance above, the process of Temporal Noise Reduction is a combination of both sensor electronics and associated software. More details of noise reduction will be discussed in the upcoming section on imaging software. Still images can only take advantage of spatial noise reduction, while moving images (video) can also take advantage of temporal noise reduction – a powerful algorithm to reduce background noise in video.

It must be noted that high quality temporal noise reduction is a massively intensive computational task for real-time high-quality results:  purpose built hardware devices used in the professional broadcast industry cost tens of thousands of dollars and are a cubic foot in size… not something that will fit in a cellphone! However, it is quite amazing to see what can be done with modern microelectronics – the NR that is included in the iPhone4S does offer improvements over previous models of video capture.

Essentially, what TNR (Temporal Noise Reduction) does is to find relatively ‘flat’ areas of the image (sky, walls, etc.); analyze the noise by comparing the small background deviations in density and color; then average those statistically and filter the noise out using one of several mathematical models. Because of this, TNR works best on relatively stationary or slow-moving images:  if the image comparator can’t match up similar areas of the image from frame to frame the technique fails. Often that is not important, as the eye can’t easily see small noise in the face of rapidly moving camera or subject material. TNR, like image stabilization, does have side-effects:  it can lead to some softening of the image (due to the filtering process) – typically this is seen as reduction of fine detail. One way to test this is to shoot some video of a subject standing in front of a screen door or window screen in relatively low light. Shoot the same video again in bright light. (More noise is apparent in low light due to the nature of CMOS sensors). You will likely see a softening of the fine detail of the screen in the lower light exposure – due to the TNR kicking in at a higher level.
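
As a rough illustration of the principle (and only that – Apple's actual algorithm is unpublished, and the constants here are arbitrary), one simple form of temporal noise reduction blends each new frame into a running average, but only where the frame-to-frame difference is small, so moving edges pass through untouched:

```python
# Minimal temporal-averaging TNR sketch; frames are 2D float numpy arrays.
import numpy as np

def tnr_step(running_avg, frame, blend=0.25, motion_thresh=12.0):
    frame = frame.astype(float)
    diff = np.abs(frame - running_avg)
    static = diff < motion_thresh               # True in 'flat', unmoving areas
    out = running_avg.copy()
    out[static] = (1 - blend) * running_avg[static] + blend * frame[static]
    out[~static] = frame[~static]               # moving areas pass through unfiltered
    return out
```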

Overall however, this is a good addition to the arsenal of tools that are included in the iPhone4S – particularly since most people end up shooting both stills and videos in less than ideal lighting conditions.

Sensor – Aspect Ratio, ISO Sensitivity, Shutter Speed

The last bits to discuss on the sensor before moving on to the lens assembly are the native aspect ratio and the sensitivity to light (ISO and shutter speed). The base sensor is a 4:3 aspect ratio (3264 x 2448) – the mathematically correct expression is 1:1.33 – but common usage dictates the integer relationship of 4:3. This is the same aspect ratio as older “standard definition” television sets. All of the still pictures from the iPhone4S have this as their native size – of course this is often cropped by the user or various applications. As a note, the aspect ratio of 35mm film is 1:1.5 (3:2), while the aspect ratio of the common 8”x10” photographic print is 1:1.25 (5:4) – so there will always be some cropping or uneven borders when comparing digital photos to print to film.

Shutter speed, ISO and their relationships were discussed in the previous section of this blog series:  Contrast (differences between DSLR & Cellphone Cameras). However, this deserves a brief mention here with regard to how the basic sensor specifications factor into the ranges of ISO and shutter speed.

ISO rating is basically a measure of the light sensitivity of a photosensitive material (whether film, a CMOS sensor, or an organic soup). [yes, there are organic broths that are light sensitive – interesting possibilities await…]  With the increased sensitivity of the sensor used in the iPhone4S, the base ISO rating should be improved as well. Since Apple does not give details here, the results are from testing by many people (including myself). The apparent lower rating (least sensitive end of the ISO range) has moved from 80 [iPhone4] to 64 [iPhone4S]. The upper rating appears to be the same on each – 800. There is some discrepancy here in the literature – some have reported an ISO of 1000 being shown in the EXIF data (the metadata included with each exposure, the only way to know what occurred during the shot), but others, including myself, have been unable to reproduce that finding. What is for sure is that the noise in the picture is reduced in the iPhone4S as compared to the iPhone4 (for identical exposures of same subject taken at same time under identical lighting conditions).

Shutter speed is, other than base ISO rating, the only way that the iPhone has to modulate the exposure – since the aperture of the lens is fixed. The less time that the little ‘pixel buckets’ in the CMOS sensor have to accumulate photons, the lower the exposure. Since in bright light conditions more photons arrive per second than in low light conditions, a faster shutter speed is necessary to avoid over-exposure – certain death to a good image from a digital sensor. For more on this please read my last post – this is discussed in detail.

The last thing we need to discuss is, however, very important – and it affects virtually all current CMOS-based video cameras, not just cellphones. This concerns the side-effects of the ‘rolling shutter’ technology that is used on all CMOS-based cameras (still or video) to date. CCD sensors use a ‘global shutter’ – i.e. a shuttering mechanism that exposes all the pixels at once, in a similar fashion to film using a mechanical shutter. However, CMOS sensors “roll” the exposure from top to bottom of the sensor – essentially telling one row of pixels at a time to start gathering light, then stop at the end of the exposure period, in sequence. Without getting into the technical details of why this is done, at a high level it allows more rapid readout of image information from the sensor (one of the benefits of CMOS as compared to CCD), uses less power, and avoids overheating of the sensor under continuous use (as in video).

The problem with a rolling shutter is when either the camera or subject is moving rapidly. Substantial visual artifacts are introduced: these are called Skew, Wobble and Partial Exposure. There is virtually no way to resolve these issues. You can minimize them by the use of tripods, etc. in some cases – but of course this is not useful for hand held cellphone shots! Rather than reproduce an already excellent short explanation of this topic, please look at Barry Green’s post on this issue, complete with video examples.

The bottom line is that, as I have mentioned before, keep camera movement to an absolute minimum to experience the best performance from the iPhone4S video camera.

Lens – the new 5-element version

The iPhone4S has increased the optical elements in the lens assembly to 5, from the 4 elements in the iPhone4. Current research indicates that (following Apple’s usual policy of dual-sourcing all components of their products) Largan Precision Co., Ltd. and Genius Electronic Optical Co., Ltd. are supplying the 5-element lens assembly for the iPhone4S.

The optical elements are most likely manufactured from plastic optical material (Polystyrene and PMMA – a type of acrylic). Although plastic lenses have many issues (coatings don’t stick well, few plastics have great optical properties, they have a high coefficient of thermal expansion, high index variation with temperature, and less heat resistance and durability, among others) – there are two huge mitigating factors:  much lower cost and weight than glass, and the ability to be formed into complex shapes that glass cannot. Both of these factors are extremely important in cellphone camera lens design, to the point that they outweigh any of the disadvantages.

Upper diagram is the 4-element design of the iPhone4 lens; lower diagram is the 5-element design of the iPhone4S.

The complex shapes are aspheres, which are difficult to fabricate out of glass, and afford much finer control over aberrations using fewer elements, which is an absolute necessity when working with very little package depth. A modern lens, even a prime lens (which is what the iPhone uses – i.e. a fixed focal length lens) for a DSLR usually has from 4-6 elements, similar to the iPhone lens with 5 elements – but the depth of the lens barrel that holds all the elements is often at least 2” if not greater. This is required for the optical alignment of the basically spheroid ground lenses.

The iPhone lens assembly (similar to many quality cellphone camera lenses) is barely over ¼” in length! Only the properties of highly malleable plastic, which allow the molding of aspherically shaped lenses, make this possible. Optical system design, particularly for high quality photography, is a massive subject by itself – I will make no attempt here to delve into that. However, the basic reason for multiple element lenses is that they allow the lens system designer to mitigate basic flaws in one lens material by balancing them with another element made from a different material. Also, there are physics and materials science issues that have to do with how light is bent by a lens that often are greatly improved with a system of multiple elements.

The iPhone lens is a fixed aperture, fixed focal length lens. See the previous post for details on how this lens compares to a similar lens that would be used for 35mm photography (it’s equivalent to a 32mm lens if used on a 35mm camera, even though this actual lens’ focal length is only 4.28mm).

The aperture is fixed at f2.4 – a fairly fast lens. Both cost and manufacturing complexity make an adjustable aperture on a lens this small impossible to consider.

The AOV (Angle of View) of the lens for still photography is 62° – this changes to only 46° for video mode. The reduction in AOV is due to the smaller size of the image for video (2MP instead of 8MP). Remember that AOV is a function of the actual focal length of the lens and the size of the sensor. It’s the size of the effective sensor area that matters, not how big the physical sensor may be. The smaller effective size of the video sensor – factored against the same focal length lens – means that the AOV gets smaller as well.

Since the effective sensor size is different, this also alters the equivalent focal length of the lens (as referenced to a 35mm system), in this case the focal length increases to an effective value of 42mm.

The bottom line is that in video mode, the iPhone camera sees less of a given scene than when in still camera mode.
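
To make the relationship between focal length, sensor size, angle of view and 35mm equivalence concrete, here is a back-of-envelope sketch using the spec figures above. This is my own arithmetic, not an Apple formula; it works on the sensor diagonal, so the angles come out near, but not exactly at, the quoted 62°/46° (which Apple does not qualify):

```python
import math

FULL_FRAME_DIAG = 43.27   # mm, diagonal of a 36 x 24 mm "35mm" frame
focal = 4.28              # mm, actual iPhone4S lens focal length

def angle_of_view(dim, f):
    return math.degrees(2 * math.atan(dim / (2 * f)))

# Still mode: the full 4.54 x 3.42 mm sensor is used.
still_diag = math.hypot(4.54, 3.42)            # ~5.7 mm
print(angle_of_view(still_diag, focal))        # ~67 deg diagonal AOV
print(focal * FULL_FRAME_DIAG / still_diag)    # ~32.6 mm equivalent -- matches the spec

# Video mode: only part of the sensor is used, so back out the effective
# diagonal implied by the quoted 42 mm equivalent focal length.
print(focal * FULL_FRAME_DIAG / 42.0)          # ~4.4 mm effective diagonal in video mode
```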

Discussion on Hardware

This next portion shows a bit of the actual insides of an actual iPhone4S. While the focus of this entire series of posts is on just the camera of the iPhone4S, a few details about the phone in general will be shown. The camera is not totally isolated within the ecosystem of the iPhone; a large portion of the actual usability of this wonderful device is assisted by the powerful dual-core A5 CPU, the substantial on-board memory, the graphics processors, the high-resolution display, etc.

[The following photos and disassembly process from the wonderful site iFixit.com – which apparently loves to rip to pieces any iDevice they can get their hands on. Don’t try this at home – unless you are very brave, have good coordination and tools – and have no use for a warranty or a working iPhone afterwards…]

Yes, it’s a box with a Cloud, a genie named Siri, and a cool camera inside – and BTW you can make phone calls on it as well…

The iPhone4S

Disassembly starts here – specialized screwdriver required (it’s a Pentalobe screw – otherwise known as “Evil Proprietary Tamper Proof Five Point Screw”)

Yes really. Apple engineers looked far and wide for a screw type that no one else had a screwdriver for.

You want one anyway? Order one here.

Inside for the first time…

Closer look

The battery

Prying out the focus of this entire set of posts – the rear-facing camera.

The camera assembly

The rear of the camera assembly.

The logic board. Note the white triangular piece stuck on the shield – it’s one of several dreaded ‘liquid detectors’. Apple service engineers check these to see if you gave it a bath. (they turn color if wet with ANYTHING) – and there goes your warranty…

Logic board with shields removed (lower the shields Scotty…) The main bits are:

  • Apple A5 Dual-core Processor (more on this later)
  • Qualcomm RTR8605 Multi-band/mode RF Transceiver (Chipworks has provided us with a die photo)
  • Skyworks 77464-20 Load-Insensitive Power Amplifier (LIPA®) module developed for WCDMA applications
  • Avago ACPM-7181 Power Amplifier
  • TriQuint TQM9M9030 surface acoustic wave (SAW) filter
  • TriQuint TQM666052 PA-Duplexer Module

Further detail on the logic board:

  • TI 343S0538 touchscreen controller
  • STMicro AGD8 2135 LUSDI gyroscope
  • STMicro 8134 33DH 00D35 three-axis accelerometer
  • Apple 338S0987 B0FL1129 SGP, believed by Chipworks to be a Cirrus Logic audio codec chip

The brain – the dual-core A5 cpu chip

An inside look at the A5 chip

A5 chip, with some areas explained

More of the logic board:

  • Qualcomm MDM6610 chipset (an upgrade from the iPhone 4's MDM6600)
  • Apple 338S0973, which appears to be a power management IC, according to Chipworks

Murata SW SS1830010. We suspect that this contains the Broadcom chip that reportedly provides Wi-Fi/Bluetooth connectivity

Toshiba THGVX1G7D2GLA08 16 GB 24 nm MLC NAND flash memory (or 32GB or 64GB if your budget allowed it)

The 960 x 640 pixel Retina display. What appear to be the ambient light sensor and the infra-red LED for the proximity sensor also come off the display assembly.

Close-up of sensor assembly.

Rear view of front-facing camera (the little low resolution camera for FaceTime)

Front view of front-facing camera

Ok, there it is.. in bits… now just put it all back and see if it works again…

Another view of the camera assembly.

Rear of camera assembly

Now it gets interesting – an X-ray photomicrograph of the edge of the CMOS sensor board – clearly showing that at least this unit was manufactured by Sony….

Sectional view of original iPhone4 camera assembly. Note the Front Side Illumination (FSI) design – where the light from the lens must pass through the circuitry to get to the light sensitive part of the pixels.

Section view of the iPhone4S camera assembly. Note the change in design to the Back Side Illumination version.

Another photomicrograph of the sensor board, showing the Sony model number of the sensor.

As promised in the introduction, here are a few sample photos showing the differences between various models of the iPhone as well as several DSLR cameras. Many more such comparisons, as well as other example photos, will be introduced in the next section of this series – Camera Software.

(Thanks to Jonathan at campl.us for these great comparison shots from his blog on iPhone cameras)

iPhone Original

iPhone3G

iPhone3GS

iPhone4

iPhone4S

CanonS95

CanonEOS5DMkII

iPhone Original

iPhone3G

iPhone3GS

iPhone4

iPhone4S

CanonS95

CanonEOS5DMkII

Well, this concludes this chapter of the blog on the iPhone4S. Hopefully this has shed some light on how the hardware is put together, along with some further details on the technical specifications of this device. A full knowledge of your tools will always help in making better images, particularly in challenging situations.

Stay tuned for the next chapter, which will deal with all the software that makes this hardware actually produce useful images. Both the core software of the Apple operating system (iOS 5.1 at the time of this writing) and a number of popular camera apps (both still and video) will be discussed.

iPhone4S – Section 2: Contrast – the essence of photography – and what that has to do with an iPhone…

March 9, 2012 · by parasam

If a photo was all white – or all black – there would be no contrast, no differentiation, no nothing. A photo is many things, but first and foremost it must contain something. And something is recognized by one shape, one entity, standing out from another. Hence… contrast. This is the prima facie of a photograph – whether color or monochrome, whether tack sharp like a Weston or a blur against a rain smeared window – contrast of elements is the core of a photograph.

After that can come focus, composition, tonal range, texture, evocative subject… all the layers that distinguish great from mundane – but they all run second.

Although this set of posts is indeed concerned with an exploration of the mechanics and limitations of a rather cool cellphone camera (iPhone4S), the larger intent is to embolden the user with a tool to image his or her surroundings. The fact that such a small and portable device is capable of imaging at a level that was only a few years ago relegated to DSLR cameras is a technological wonder. Absolutely it is not a replacement for a high quality camera – but in the hands of a trained and experienced person with a vision and the patience to understand the possibilities of such a device, improbable things are possible.

Contrast of devices – DSLR vs cellphone

This post will cover a few of the limitations and benefits of a relatively high quality cellphone camera. While I am discussing the iPhone4S camera in particular, these observations apply to any modern reasonably high quality cellphone camera.

For the first 50 years or so of photography, portability was not even a possibility. Transportability yes, but large view cameras, glass or tin plates and the need for both camera (on tripod) and subjects (either mountains that didn’t move or people frozen in a tableau) to remain locked in place didn’t do much for spontaneity. Roll film, the Brownie camera, the Instamatic, eventually the 35mm camera system – not to mention Polaroid – changed the way we shot pictures forever.

But more or less, for the first hundred years of this art form, the process was one of delayed gratification. One took a photo, then waited through hours or days of photochemical processes to see what actually happened. The art of previsualization became paramount for a professional photographer, for only if you could reasonably predict how your photo would turn out could you stay in business!

With the first digital picture ever produced in 1975 (in a Kodak lab), this is indeed a young science. Consumer digital photography is only about 20 years old – and a good portion of that was relatively low performance ‘snapshot’ cameras. High end digital cameras for professionals only came on the scene in the late 1990’s – at obscene prices. The pace of development since then has been nothing short of stratospheric.

We now have DSLR (Digital Single Lens Reflex) cameras that have more resolution than any film stock ever produced; with lenses that automatically compensate for vibration, assist in exposure and focus, and have light-gathering capabilities that will allow excellent pictures in starlight.

These high-end systems do not come cheaply, nor are they small and lightweight. Even though they are based on the 35mm film camera system, and employ a digital sensor about the same size as a 35mm film frame – they are considerably complex imaging computers and optical systems – and are not for the faint of heart or pocketbook!

Full-sized DSLR with zoom lens and bellows hood

With camera backs only (no lens) going for $7,000 and high quality lenses costing from $2,000 – $12,000 each, these wonders of modern imaging technology require substantial investment – of both knowledge and cash.

On the other end of the spectrum we have the consumer ‘point and shoot’ cameras, usually of a few megapixels resolution, and mostly automatic in function. The digital equivalent of the Kodak Brownie.

Original Kodak Brownie camera

These digital snapshot cameras revolutionized candid photography. The biggest change was the immediacy – no more waiting and expensive disappointment of a poorly exposed shot – one just looked and tried again. If nothing else, the opportunity to ‘self-teach’ has already improved the general photographic skill of millions of people.

Almost as soon as the cellphone was invented, the idea of stuffing a camera inside came along. With the first analog cellphones arriving in the mid-1980s, within 10 years (1997 to be exact) the first cellphone with a built-in camera was announced (Kyocera, 1MP).

A scant 15 years later we have the iPhone4S and similar camera systems routinely used by millions of people worldwide. In many cases, the user has no photographic training and yet the results are often quite acceptable. This blog however is for those that want to take a bit of time to ‘look under the hood’ and extract the maximum capabilities of these small but powerful imaging devices.

Major differences between DSLR cameras and cellphone cameras

The essence of a cellphone camera is portability, while the prime focus of a DSLR camera is to produce the highest quality photograph possible – given the constraints of cost, weight and complexity of operation. It is only natural then that many compromises are made in the design of a cellphone camera. The challenges of very light weight, low cost, small size and other technical issues forced cellphone cameras into a low quality genre for some time. Not any more. Yes, it is absolutely correct that there are many limitations to even ‘high quality’ cellphone cameras such as the iPhone, but with an understanding of these limitations, it is possible to take photos that many would never assume came from a phone.

One of the primary limitations on a cellphone camera is size. Given the physical constraints of the design package for modern cellphones, the entire camera assembly must be very small, usually on the order of ½” square and less than ¼” thick. Compared to an average DSLR, which is often 4” wide by 3” high and 2” thick – the cellphone camera is microscopic.

The first challenge this presents then is sensor size. Two factors actually come into play here:  actual X-Y sensor dimensions, and lens focal length. The covering power of the lens (the area that a lens can cover with a focused image) is a function of the focal length of the lens, and the focal length drives the physical dimensions required in the lens barrel assembly – which materially affects the overall thickness of the lens/camera assembly – so compromises have to be made here to keep the overall size within limits.

The combination of physical sensor size and the depth that would be required if the actual focal length was more than about 5mm, mandates typical cellphone camera sensor sizes to be in the range of 1/3” in size. For example, the iPhone4S sensor is 4.54mm x  3.42mm and the lens has a focal length of 4.28mm. Most other quality cellphone cameras are in this range.

Full details (and photographs) of the actual iPhone hardware will be discussed in the following blog, iPhone4S Specs & Hardware.

The limitation of sensor size then sets the physical size of the pixels that make up the sensor. With a desire to offer a relatively high megapixel count – for sharp resolution – the camera manufacturer is then forced to a very small pixel size. The iPhone4S pixel size is 1.4μm. That is really, really small. Less than a millionth of an inch square. The average size of a pixel on a high quality “35mm” style DSLR camera is 40X larger…

The small pixel size is one of the largest factors in the differences that make cellphone cameras less capable than full-fledged DSLR cameras. The light sensitivity is much less, due to the basic nature of how a CCD/CMOS sensor works.

Film vs CCD/CMOS sensor technology – Blacks & Whites

To fully understand the issues with small digital sensor pixel size we need to briefly revisit the differences between film and digital image capture first. Up until 50 years ago, only photochemical film emulsions could capture images. The fundamental way that light is ‘frozen’ into a captured image is very different from film as compared to digital techniques – and it is visible in the resultant photographs.

That is not to say one is better, they are just different. Furthermore, there is a difference in appearance to the human eye between a fully ‘chemical’ process (a photograph captured on film, then printed directly onto photo paper and developed chemically) and a film image that is scanned and printed digitally. Film scanners also use the same CCD arrays that digital cameras use, and the basic difference of image capture once again comes into play.

Without getting lost in the wonderful details of materials science, physics and chemistry that all play a part in how photochemical photography works, when light strikes film the energy of the light starts changing certain molecules of the film emulsion. The more light that hits certain areas of the film negative, the more certain molecules start clumping together and changing. Once developed, these little groups of matter become the shadows and light of a photograph. All film photographs show something we call ‘grain’ – very small bits of optical gravel that actually constitute the photograph.

The important bit here is to remember that with film, exposure (light intensity X time) results in increased amounts of ‘clumped optical gravel’ – which when developed looks black on a film negative. Of course black on a negative prints to white on a positive – the print that we actually view.

Conversely, on film, very lightly exposed portions of the negative (the shadows, those portions of the picture that were illuminated the least) show up as very light on the negative. This brings us to one of the MOST important aspects of film photography as compared to digital photography:

  • With film, you expose for the shadows and print for the highlights
  • With digital, you expose for the highlights and print for the shadows

The two mediums really ARE different. An additional challenge here is when we shoot film, but then scan the negative and print digitally. What then? Well, you have to treat this scenario as two serial processes:  expose the film as you should – for the shadows. Then when you scan the film, you must expose for the highlights (since you are in reality taking a picture of a picture) and now that you are in the digital domain, print for the shadows.

The reason behind all this is due to the difference between how film reacts to light and how a digital sensor works. As mentioned above, film emulsions react to light by increasing the amount of ‘converted molecules’ – silver halide crystals to be exact – leaving unexposed areas (dark areas in the original scene) virtually unexposed.

Digital sensor capture, using either the CCD or CMOS technology (more on the difference in a moment), responds to light in a different manner:  the photons that make up light fall on the sensor elements (pixels) and ‘fill up’ the energy levels of the ‘pixel container’. The resultant voltage level of each pixel is read out and turned into an image by the computational circuits associated with the sensor. The dark areas in the original image, since they contribute very little illumination, leave the ‘pixel tanks’ mostly unfilled. The high sensitivity of the photo-sensitive arrays means that any stray light, electrical noise, etc. can be interpreted as ‘illumination’ by the sensor electronics – and is. The bottom line therefore is that the low light areas (shadows) in an image captured by digital means are always the most noisy.

To sum it up:  blacks in a film negative are the least noisy, as basically nothing is happening there; blacks in a digital image are the most noisy, since the unfilled ‘pixel containers’ are like little magnets for any kind of energy. What this means is that fundamentally, digitally captured images are different looking than film:  noisy blacks in digital, clean blacks in film.

There are two technologies for digital sensor image capture:  CCD and CMOS. While similar at the high level, there are significant differences. CCD (Charge Coupled Devices) are older, typically create high quality, low-noise images. CMOS (Complementary Metal Oxide Semiconductor) arrays consume much less power, are less sensitive to light, and are far less expensive to fabricate. This means that all cellphone cameras use CMOS technology – the iPhone included. Most medium to high-end DSLR cameras use CCD technology.

The above issue only adds to the quality challenge for a cellphone camera:  using less expensive technology means higher noise, lower quality, etc. for the produced image. Therefore, to get a good exposure on digital, one would think that you would want to ‘expose for the shadows’ to be sure you reduced noise by adding exposure to the shadow areas. Unfortunately, the opposite is actually the case!

The reason is that in digital capture, once a pixel has been ‘filled up’ (i.e. received enough light that the level of that pixel is at the maximum [255 for an 8-bit system]) it can no longer hold any detail – that pixel is just clipped at pure white. No amount of post-processing (Photoshop, etc.) can recover detail that is lost since the original capture was clipped. With under-exposed blacks, you can always apply noise-reduction, raise levels, etc. and play with ‘an empty-ish container’.

That is why it’s so important to ‘expose for the highlights’ with digital – once you have clipped an area of image at pure white, you can’t ever get that detail back again – it’s burnt out. For film, the opposite was true due the negative process:  if you didn’t get SOME exposure in the shadows, you can’t make something out of nothing – all you get is gray noise if you try to pump up blacks that have no detail. In film, you have to expose for the shadows, then you can always tone down the highlights.

So, with your cellphone camera, ALWAYS make sure you don’t blow out the highlights, even if you have to compromise with noisy blacks – you can use various techniques to minimize that issue during post-production.
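
A tiny numeric illustration of why this advice works, using made-up values and assuming a simple 8-bit pipeline:

```python
# Why clipped highlights are gone for good but dark shadows can still be lifted.
import numpy as np

scene = np.array([10.0, 40.0, 120.0, 200.0, 350.0])   # "true" light reaching five pixels

normal = np.clip(scene, 0, 255)         # the 350 clips to 255 -- that detail no longer exists
print(normal)                           # [ 10.  40. 120. 200. 255.]

darker = np.clip(scene * 0.25, 0, 255)  # expose for the highlights instead (deliberately darker)
print(darker * 4.0)                     # [ 10.  40. 120. 200. 350.] -- detail survives, just noisier
```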

Fixed vs Adjustable Aperture

Another major difference between digital cameras and cellphone cameras is the issue of fixed aperture. All cellphone cameras have a fixed aperture – i.e. no way to adjust the lens aperture as one does with the “f-stop” ring on a camera lens. Essentially, the cellphone lens is “wide open” all the time.  This is purely a function of cost and complexity. In normal cameras, the aperture is controlled with a ‘variable vane’ system, a rather complex set of curved thin pieces of metal that open and close a portal within the lens assembly to allow either more or less light through the lens as a whole.

With a typical lens measuring 3” in diameter and about 2” – 6” long this was not a mechanical design issue. A cellphone lens on the other hand is usually less than ¼” in diameter and less than ¼” in depth. The mechanical engineering required to insert such an aperture control mechanism would be very difficult and exorbitantly expensive.

Also, the operational desire of most cellphone manufacturers is to keep the device very simple to operate, so adding another control that significantly affected camera operation was not high on the feature list.

A fixed aperture means several things:  normally a wide-open aperture gives a shallow depth of field, but with the relatively wide angle lenses employed by most mobile phone manufacturers the depth of field is usually more than enough; and for daylight exposures, adjustment of both ISO and shutter speed is necessary to avoid over-exposure.
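
To see why the wide angle lens bails out the fixed aperture, here is a rough hyperfocal-distance estimate. The circle of confusion is my assumption (scaled from the usual full-frame value by the crop factor), not Apple data:

```python
# Rough hyperfocal estimate for the iPhone4S lens.
focal = 4.28       # mm
f_number = 2.4
coc = 0.004        # mm, assumed circle of confusion for a 1/3.2" sensor
                   # (~0.03 mm full-frame value divided by the ~7.6x crop factor)

hyperfocal = focal**2 / (f_number * coc) + focal     # ~1900 mm
print(f"hyperfocal ~ {hyperfocal/1000:.1f} m; focus there and everything from "
      f"~ {hyperfocal/2000:.2f} m to infinity is acceptably sharp")
```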

Exposure settings

On a digital camera, you can use either automatic or manual controls to set the exposure. Many cameras allow either “shutter priority” or “aperture priority.”  What this means is that with shutter priority the user selects a shutter speed (or range of speeds), and the camera adjusts the aperture as is required to get the correct exposure. This setting does not allow the user to set the f-stop, so the depth of field on a photograph will vary depending on the light level.

With aperture priority, the user selects an f-stop setting, and the camera selects a shutter speed that is appropriate. With this setting, the user does not set the shutter speed, so care is required if slow shutter speeds are anticipated:  camera and/or subject movement must be minimized.

On a film or digital camera, the user can set the ISO speed rating manually. This is not possible on most cellphone cameras. The speed rating of the sensor (ISO #) is really a ‘baseline’ for the exposure setting.

Here is an example of setting the base ISO speed correctly:

Under exposure

Normal exposure

Over exposure

The exposure algorithm inside the cellphone camera software computes both the shutter speed and the ISO (the only two factors that it can change, since the aperture is fixed) and arrives at a compromise that the camera software believes will make the best exposure. Here is where art and experience come into play – no cellphone hardware or software manufacturer has yet published their algorithms, nor do I ever expect this to happen. One must shoot lots and lots of exposures under controlled conditions to attempt to figure out how a given camera (and app) is deciding to set these parameters.

Like anything else, if one takes the time to know your tools, you get a better result. From what I have observed by shooting several thousand frames with my iPhone4S, and using about 10 different camera apps to do so, the following is a very rough approximation of a typical algorithmic process:

  • A very fast ‘pre-exposure’ of the entire frame is performed  (averaging together all the pixels without regard to an exposure ‘area’) to arrive at a sense of the overall illumination of the frame.
  • From this an initial ISO setting is assigned.
  • Based on that ISO, then the ‘exposure area’ (usually shown in a box in the display, or sometimes just centered in the frame) is used to further set the exposure:  the pixels within the exposure area are averaged, then, based on the ISO setting, a shutter speed is chosen to place the average light level at Zone 5 (middle gray value) of a standard exposure index. [Look for an upcoming blog on Zones if you are not familiar with this]

It appears that subsequent adjustments to this process can happen (and most likely do!) – again, dependent on a particular vendor’s choice of algorithm:  for instance, if, based on the above sequence, the final shutter speed is very slow (under 1/30 second) the base ISO sensitivity will likely be raised, as slow shutter speeds reveal both camera shake and subject movement.
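
Putting the above together, here is a minimal sketch of what such a two-step (plus adjustment) auto-exposure routine might look like. To be clear, this is my approximation from observation, not Apple's or any vendor's published algorithm, and the constants are purely illustrative:

```python
# Hypothetical auto-exposure sketch; frame is a 2D numpy array of 8-bit luminance.
import numpy as np

ISO_MIN, ISO_MAX = 64, 800
FASTEST, SLOWEST = 1/2000, 1/15     # shutter speed limits in seconds
MIDDLE_GREY = 118                   # Zone 5 target on an 8-bit scale

def auto_expose(frame, box, base_shutter=1/250):
    # Step 1: average the whole frame (no exposure area) to pick a starting ISO.
    overall = frame.mean()
    iso = float(np.clip(ISO_MIN * MIDDLE_GREY / max(overall, 1), ISO_MIN, ISO_MAX))

    # Step 2: average only the exposure area and pick a shutter speed that places
    # that average at middle grey, given the ISO chosen above.
    y0, y1, x0, x1 = box
    area = frame[y0:y1, x0:x1].mean()
    shutter = float(np.clip(base_shutter * MIDDLE_GREY / max(area, 1) * ISO_MIN / iso,
                            FASTEST, SLOWEST))

    # Step 3: the likely 'subsequent adjustment' -- if the shutter ends up slower
    # than 1/30 s, trade shutter time for ISO to limit blur from camera shake.
    if shutter > 1/30 and iso < ISO_MAX:
        iso = min(ISO_MAX, iso * shutter * 30)
        shutter = 1/30
    return iso, shutter
```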

Neither Apple nor any other manufacturer seems to publish the exact limits on their embedded camera’s ISO sensitivity or range of shutter speeds. With many, many controlled tests, I have determined (and apparently so have others, as the published info I have seen on the web corroborates my findings) that the shutter speeds of the iPhone4S range from 1/15 sec to 1/2000 sec, while the ISO speeds range from ISO 64 to ISO 1000.

There are apps that will allow much longer exposures, and potentially a wider range of ISO values – I have not had time to run extensive tests with all the apps I have tried. Each camera app vendor has the choice to implement more or less features that Apple exposes in the programmatic interface (more on that in Part 4 of this series), so the potential variations are large.

Movement

As you have undoubtedly experienced, many photographs are spoiled by inadvertent movement of the subject, camera, or both. While sometimes movement is intended – and can make the shot artistically – most often this is not the case. With the very tiny sensor that is normal for any cellphone camera, the sensor is very often hungry for light, so the more you can give it, the better quality picture you will get.

What this means in practicality is that when possible, in lower light conditions, brace the camera against a solid object, put it on a tripod (using an adaptor), etc. Now here is where you again will get better results with experience:  our eyes have fantastic adaptive properties – cameras do not. When we walk inside a mall from a sunny outdoors, within seconds we perceive the mall to be as well lit as the outside – even though in real terms the average light value is less than 1% of what was outdoors!

However, our little iPhone is now struggling to get enough light to expose a picture. Outdoors, we might have found that at ISO 64 we were getting shutter speeds of 1/600 second; indoors we have changed to ISO 400 at 1/15 second! Such slow shutter speeds will almost always show blurred movement, whether from a person walking, camera shake, or both.
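
A quick back-of-envelope check of just how much dimmer that indoor scene really is. With the aperture fixed, the ISO × shutter time needed for a correct exposure is inversely proportional to scene brightness:

```python
# Relative scene brightness from the two example exposures above.
outdoor = 64 * (1/600)    # ISO 64 at 1/600 s
indoor = 400 * (1/15)     # ISO 400 at 1/15 s
print(outdoor / indoor)   # ~0.004 -> the indoor scene has well under 1% of the outdoor light
```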

Here are a few examples:

1/20 sec @ ISO400 Camera handheld, you can see camera shake in blurred lines in glass panels, upper left; then in addition subject movement (her left leg is almost totally blurred).

1/15 sec @ ISO800 iPhone held against a signal light post for stability

Image format

Another big difference between DSLR cameras and cellphone cameras is the type (and variations) on image capture format. Cellphone cameras exclusively (at this time) capture only to compressed formats, .jpg usually. Often the user gets some limited control over the amount of compression and the resultant output resolution (I call it ‘shirt size formatting’ – as usually it’s S-M-L).

Regardless, the output format is significantly compressed from the original taking format. For instance, the 8 megapixel capture of the iPhone4S typically outputs a frame that is about 2.9MB in file size, in .jpg format. A semi-pro DSLR (2/3 format) at the same megapixel rating can output around 48MB per frame when saved uncompressed (16 bits per color channel). This is done mainly to conserve memory space in the cellphone system, as well as greatly speed up transfers of data out of the phone.
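
The arithmetic behind those numbers, for the curious (my own calculation, using the figures above):

```python
# Rough file-size arithmetic for an 8 MP frame.
width, height = 3264, 2448
pixels = width * height            # ~8.0 million

uncompressed = pixels * 3 * 2      # 3 color channels x 2 bytes (16 bits) each
iphone_jpg = 2.9e6                 # ~2.9 MB typical iPhone4S .jpg output
print(uncompressed / 1e6)          # ~47.9 -> the "48MB per frame" figure
print(iphone_jpg / uncompressed)   # ~0.06 -> the jpg keeps roughly 6% of that data
```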

However, one loses much by storing pictures only in the compressed jpg format. Without digressing into details of digital photography post-production, once a picture is locked into the compressed world many potential adjustments are lost. That is not to say you can’t get a good picture if compressed, only that a correct exposure up front is even more important, since you can’t employ the rescue methods that ‘camera Raw’ allows.

The process of jpg compression introduces some artifacts as well, and these can range from invisible to annoying, depending on the content. Again, there is nothing you can do about it in the world of cellphone cameras, other than understand it, and try to mitigate it by careful composition and exposure when possible. The scope of this discussion precludes a detailed treatise, but suffice it to say that jpeg artifacts become more noticeable with extremes of lighting conditions, so low light, brilliant glare and other such situations may show these more than a normally lit scene.

Flash photography

The ‘flash’ on cellphones is nothing more than a little LED lamp that can ‘flash’ fairly rapidly. Yes, it allows one to take pictures in low light that would otherwise not be possible, but that’s about it. It has almost no similarity to a real strobe light used by DSLR cameras, whether built-in to the camera or a professional outboard unit.

The three big differences:

  1. Speed:  the length of a strobe flash is typically 1ms (1/1000 sec), while the iPhone LED ‘flash’ is about 100ms (1/10 sec). That is a factor of 100x.
  2. Light output:  DSLR strobe units put out MUCH more light than cellphone flash units. Exactly how much is not easy to measure, as Apple does not publish specs, and the LED light unit works very differently than a strobe unit. But conservatively, a typical outboard flash unit (Nikon SB-900 for example) produces 2,500 lumen-seconds of illumination, while the iPhone4S is estimated at about 25 lumen-seconds. That means a strobe flash is 100x brighter…
  3. Color temperature:  Commercial strobe lights are carefully calibrated to output at approximately 5500°K, while the iPhone (and similar cellphone flashes) are uncalibrated. The iPhone in particular seems quite blue, probably around 7000°K or so. The automatic white balance will try to fix this, but this function often fails in two common scenarios:  mixed lighting (for instance flash in a room lit with tungsten lamps); and subjects that don’t have many black or white areas (which AWB circuits use to compute the white point).

The bottom line: reserve the use of the cellphone ‘flash’ for emergencies only. And it kills battery life…

Colorimetry, color balance and color temperature

The above terms are all different. Colorimetry is the science of human color perception. Color Balance is the relative balance of colors within a given object in a scene, or within a scene as a whole. Color Temperature (in photographic terms) is the overall hue of a scene based on the white reference associated with the scene.

To give a few examples, in reverse order from the introduction:

The color temperature of an outdoor shot at sunset will be lower (warmer) than that of a shot taken at high noon. The standard for photography for daylight is 5000°K (degrees Kelvin is the unit for color temperature), with sunset being about 2500°K, indoor household lighting equivalent to about 2800°K, while outdoor light in the shade from an open sky (no direct sunlight) is often about 9000°K. The lower numbers look reddish-orange, the higher numbers are bluish.

Color balance can be affected by illumination, the object itself, errors in the color response of the taking sensor, as well as other factors. Sometimes we need to correct this, as the original color balance can look unnatural and detract from the photograph – for instance if human skin happens to be illuminated with a fluorescent light, although the larger scene is lit with traditional tungsten (household) lamps, the skin will take on an odd greenish tinge.

Colorimetry comes into play in how the Human Visual System (HVS) actually ‘sees’ what it is looking at. Many variables come into play here, but for our purposes we need to understand that the relative light levels and contrast of the viewing environment can significantly affect what we see. So don’t try to critically judge your cellphone shots outdoors – you can’t. Wait until you are indoors, and it’s best to review them on a monitor you can trust – with proper lighting conditions.

More on all these issues will be discussed later, this is just a taste and some quick guides on how cellphones differ from more traditional photography.

Motion Picture Photography vs Still Photography

Most of this discussion so far has focused on still photography as opposed to video (motion photography). All of the principles hold true for video in the same way as for still camera shots. A few things bear mentioning – again in the vein of differences between a traditional video camera and a cellphone camera.

The typical built-in app for video in a cellphone runs at 24 fps (frames per second), the same speed as professional movie cameras. Some after-market apps allow the user to change that, but for our discussion we’ll stick with 24fps. The important bit to remember here is that frame rate is equivalent to a shutter speed of 1/24 sec for each frame shot. (For various technical reasons, the actual shutter speed is a bit higher, since there has to be some time between each frame to read the image from the sensor – so the actual shutter speed is closer to 1/30 sec).

This has two important by-products:  the shutter speed is now fixed, and since the aperture is also fixed the only thing left for the camera app to use to adjust for exposure is the ISO speed. This means less control over lighting conditions. The other issue is, since even 1/30 sec is a fairly slow shutter speed, camera movement is a very, very bad thing. Keep your movements slow and smooth, not jerky. Brace yourself whenever possible. Fast moving objects in the frame will be blurred – there is nothing you can do about that.

Another issue of concern with “cellphone cinemaphotography” is actual sensor size (which affects noise and light sensitivity). For still photography, the full sensor is used, in the case of the iPhone4S that is 3264 x 2448, but in video mode the resolution is decreased to 1920 x 1080. This is a significant decrease in resolution – from 8 megapixels to 2 megapixels per frame! There are a number of reasons for this:

  • The HD video standard is 1920 x 1080
  • Using less than the full sensor allows for vibration reduction to take place in software – as the image jiggles around on the sensor, fast and complex arithmetic can move the offset frames back into place to help reduce the effects of camera shake.
  • The data rate from the sensor is reduced:  no current cellphone cpu and memory could keep up with full motion video at 8megapixels per frame.
  • The resultant file size is manageable.
  • The compression engine can keep up with the output from the camera – again, the iPhone uses H.264 as the video compression codec for movies, and that process uses a lot of computer power – not to mention drains the battery faster than the sun melts butter on hot pavement. Yes, the iPhone will give you a full day of charge if you are not on WiFi, are mostly on standby or just making some phone calls. Want to drain it flat in under 2 hours? Just start shooting video…
  • And, of course, if you are shooting in low light and turn on the ‘torch’ (the little LED flash that stays on for videography) then your battery life can be measured in minutes! Use that sparingly, only when you have to  – it doesn’t actually give that much light, and being so close to the lens, causes some strange lighting effects.

BTW, even still photography uses a ton of battery power. I have surprised myself more than once by walking around shooting for an hour, then noticing my battery is at 21%. More than anything else, this motivated me to get a high quality car charger…

Summary

Ok, that’s it for this section – hope it’s provided a few useful bits of information that will help you make better cellphone shots. Here’s a very brief summary of tips to take away from the above discussion:

  • Give your pictures as much light as you can
  • Hold camera still, brace on solid object if at all possible
  • Expose for the highlights (i.e. don’t let them get overexposed or ‘blown out’)
  • Don’t use the built-in flash unless absolutely necessary

The iPhone4S camera – unraveled and explained…

March 7, 2012 · by parasam

All you ever wanted to know about the cool little camera inside the iPhone4S – but didn’t know to ask…

As a serious semi-pro still photographer I used to look on cellphone cameras as little toys… until the iPhone cameras – and the associated apps – started to change the world of imaging. The first real jump into a half-serious camera was the iPhone4 – then the 4S raised the bar that much higher. Along with some very powerful apps, this combination of hardware and software has brought relatively high quality imaging to a new audience.

I have been shooting with the 4S since it was released last year, and finally wanted to really find out what made it tick – what its limitations were and how it could be used. I have always approached photography from my scientific background – with a firm desire to know all that was possible about my tools so I could know what to expect in a variety of conditions. Turned out this is a rather complex subject. Data was not easy to find:  Apple very deliberately hides or says nothing about even the most basic of specifications on their products, and the iPhone camera is no exception.

I want to be clear about a few things before progressing with this series of articles:  I am not a professional product reviewer – these are just my observations from research and using this device since October 14 of last year. I am also not in any way attempting to offer a course on photography or the physics of lenses, sensors, etc. I will attempt to explain things in what I hope is a clear fashion for anyone who is mildly interested in technology and how this camera can be compared to more traditional DSLR cameras. All of the research was accessed from publicly available documents – whether online or offline.

The expertise I hope to bring to this discussion is the discovery of data that is available, but hard to find; the organization of that data in a way that will hopefully benefit current and future users of this device (or similar cellphone cameras); and the sharing of what I have found useful (in terms of both hardware and software) to maximize the performance of this technology.

I didn’t realize at the start of this little project how it would grow. I thought I would write a few pages and toss in a couple of diagrams on the camera and some of the software. I was kidding myself! Over about a month I amassed over 500 pages of research, tested scores of apps, and took hundreds of shots to test ideas and software. Even then, I am sure that others will find things that I have not seen – and have opinions that differ from or contradict those expressed here. That’s the beauty of a more or less democratic web – anyone gets a chance to have their say.

To keep the posts manageable – and allow me to start posting something before I have another birthday – I am breaking this topic into a series of shorter posts. In addition to this introduction, I plan five additional parts:

  1. Basic camera overview – a short discussion on basic terms, including a glossary. Just enough to allow for a common understanding of terms and words I will use in the rest of the discussion. For further details there are literally thousands of books, websites, etc. on photography.
  2. Primary differences between cellphone cameras and traditional film/digital cameras. This is important as a basis for understanding the limitations of this type of hardware – and to relate many of the numbers and terms used in traditional photography to the new world of high quality “cellphotography.”
  3. The iPhone4S specifications, hardware details, construction and other info.
  4. Software apps I have found useful on the 4S platform. I make no attempt here to review all apps for this device – that would take more days than I have left – but will rather exemplify the apps I have found useful, and how I use them in everyday practice. It’s a starting point…
  5. Techniques, limitations and other general discussion from my experience of this camera, in relation to the decades of shooting with film and digital hardware in more traditional photography.

While the bulk of this series will deal with still photography, I will also address the video capabilities of both the hardware and software. The video features (1080P, etc.) are just as powerful as the still camera features, particularly with good software.

I aim to post all five blogs within the next week – time permitting. I hope you all find this interesting – it’s been fun to discover all this information, and I continue to enjoy seeing just what this little camera can do.

Just as a hint of what’s possible, here are a few shots I took with my iPhone4S. They are unretouched except for basic adjustments of exposure, contrast, etc. – no Photoshop effects, no ‘tarting up’ – these are here to show what the basic camera can do, given an understanding of both the capabilities and limitations of the hardware/software of this platform.