Parasam

iPhone4S – Section 4b: Camera+ app

March 19, 2012 · by parasam

Camera+ is a full-featured camera and editing application. The version described is 3.02.

Feature Sets:

  • Light Table design for selecting which photos to Edit, Share, Save or get Info.
  • Camera Functions
    • Ability to split Focus area from Exposure area
    • Can lock White Balance
    • Flash: Off/Auto/On/Torch
    • Front or Rear camera selection
    • Digital Zoom
    • 4 Shooting Modes: Normal/Stabilized/Self-Timer/Burst
  • Camera options:
    • VolumeSnap On/Off
    • Sound On/Off
    • Zoom On/Off
    • Grid On/Off
    • Geotagging On/Off
    • Workflow selection:  Classic (shoot to Lightbox) / Shoot&Share (edit/share after each shot)
    • AutoSave selection:  Lightbox/CameraRoll/Both
    • Quality:  Full/Optimized (1200×1200)
    • Sharing:  [Add social services for auto-post to Twitter, Facebook, etc.]
    • Notifications:
      • App updates On/Off
      • News On/Off
      • Contests On/Off
  • Edit Functions
    • Scenes
      • None
      • Clarity
      • Auto
      • Flash
      • Backlit
      • Darken
      • Cloudy
      • Shade
      • Fluorescent
      • Sunset
      • Night
      • Portrait
      • Beach
      • Scenery
      • Concert
      • Food
      • Text
    • Rotation
      • Left
      • Right
      • Flip Horizontal
      • Flip Vertical
    • Crop
      • Freeform (variable aspect ratio)
      • Original (camera taking aspect ratio)
      • Golden rectangle (1:1.618 aspect ratio)
      • Square (1:1 aspect ratio)
      • Rectangular (3:2 aspect ratio)
      • Rectangular (4:3 aspect ratio)
      • Rectangular (4:6 aspect ratio)
      • Rectangular (5:7 aspect ratio)
      • Rectangular (8:10 aspect ratio)
      • Rectangular (16:9 aspect ratio)
    • Effects
      • Color – 9 tints
        • Vibrant
        • Sunkiss’d
        • Purple Haze
        • So Emo
        • Cyanotype
        • Magic Hour
        • Redscale
        • Black & White
        • Sepia
      • Retro – 9 ‘old camera’ effects
        • Lomographic
        • ‘70s
        • Toy Camera
        • Hipster
        • Tailfins
        • Fashion
        • Lo-Fi
        • Ansel
        • Antique
      • Special – 9 custom effects
        • HDR
        • Miniaturize
        • Polarize
        • Grunge
        • Depth of Field
        • Color Dodge
        • Overlay
        • Faded
        • Cross Process
      • Analog – 9 special filters (in-app purchase)
        • Diana
        • Silver Gelatin
        • Helios
        • Contessa
        • Nostalgia
        • Expired
        • XPRO C-41
        • Pinhole
        • Chromogenic
    • Borders
      • None
      • Simple – 9 basic border styles
        • Thick White
        • Thick Black
        • Light Mat
        • Thin White
        • Thin Black
        • Dark Mat
        • Round White
        • Round Black
        • Vignette
      • Styled – 9 artistic border styles
        • Instant
        • Vintage
        • Offset
        • Light Grit
        • Dark Grit
        • Viewfinder
        • Old-Timey
        • Film
        • Sprockets

Camera Functions

After launching the Camera+ app, the first screen the user sees is the basic camera viewfinder.

Camera view, combined focus & exposure box (normal start screen)

On the top of the screen the Flash selector button is on the left, the Front/Rear Camera selector is on the right. The Flash modes are: Off/Auto/On/Torch. Auto turns the flash off in bright light, on in lower light conditions. Torch is a lower-powered continuous ‘flash’ – also known as a ‘battery-killer’ – use sparingly! Virtually all of the functions of this app are directed to the high-quality rear-facing camera – the front-facing camera is typically reserved for quick low-resolution ID snaps, video calling, etc.

On the bottom of the screen, the Lightbox selector button is on the left, the Shutter release button is in the middle (with the Shutter Release Mode button just to the right-center), and on the right is the Menu button. The Digital Zoom slider is located on the right side of the frame (Digital Zoom will be discussed at the end of this section). Notice in the center of the frame the combined Focus & Exposure area box (square red box with “+” sign). This indicates that both the focus and the exposure for the entire frame are adjusted using the portion of the scene that is contained within this box.

You will notice that the bottle label is correctly exposed and focused, while the background is dark and out of focus.

The next screen shows what happens when the user selects the “+” sign on the upper right edge of the combined Focus/Exposure area box:

Split focus and exposure areas (both on label)

Now the combined box splits into two areas:  a Focus area (square box with bull’s eye) and an Exposure area (a circle resembling the adjustable f-stop ring in a camera lens). The exposure is now measured separately from the focus – allowing more control over the composition and exposure of the image.

In this case the resultant image looks like the previous one, since both the focus and the exposure areas are still placed on the label, which has consistent focus and lighting.

In the next example, the exposure area is left in place – on the label – but the focus area is moved to a point in the rear of the room. You will now notice that the rear of the room has come into focus, and the label has gone soft – out of focus. However, since the exposure area is unchanged, the relative exposure stays the same – the label well lit, but the room beyond still dark.

Split focus and exposure areas, focus moved to rear of room

This level of control allows greater freedom and creativity for the photographer.  [please excuse the slight blurring of some of the screen shot examples – it’s not easy to hold the iPhone completely still while taking a screen shot – which requires simultaneously pressing the Home button and the Power button – even on a tripod]

The next image shows the result of selecting the little ‘padlock’ icon in the lower left of the image – this is the Lock/Unlock button for Exposure, Focus and White Balance (WB).

Showing 'lock' panel for White Balance (WB), exposure and focus

Each of the three functions (Focus, Exposure, White Balance) can be locked or unlocked independently.

Focus moved back to label, still showing lock panel

In the above example, the focus area has been moved back to the label, showing how the focus now returns to the label, leaving the rear of the room once again out of focus.

The next series of screens demonstrates the options revealed when the Shutter Release Mode button (the little gear icon to the right of the shutter button) is selected:

Shutter type 'Settings' sub-menu displayed, showing 4 options

The ‘Normal’ mode exposes one image each time the shutter button is depressed.

Shutter type changed to Stabilized mode

When the ‘Stabilizer’ mode is selected, the button icon changes to indicate this. Once the Stabilizer Shutter Release button is depressed, this mode monitors the stability of the iPhone and releases the shutter automatically when the camera is steady – it is NOT a true motion-stabilized lens as in some expensive DSLR cameras. You still have to hold the camera still to get a sharp picture – this function just helps the user know that the camera is indeed still. Once the Stabilizer Shutter Release is pushed, it glows red if the camera is moving, and text on the screen urges the user to hold still. As the camera detects (using the iPhone’s internal accelerometer – a motion detector) that motion has stopped, little beeps sound, the shutter button changes color from red to yellow to green, and then the picture is taken.

Shutter type changed to Timer mode - screen shows beginning of 5sec countdown timer

The Self-Timer Shutter Release mode introduces a time delay between pressing the shutter button and the actual shutter release. The most common use for this feature is a self-portrait (of course you need either a tripod or some other method of securing the iPhone so the composition does not change!). This mode can also be useful to avoid jiggling the camera while pressing the shutter release – important in low-light situations. The countdown timer is indicated by the numbers in the center of the screen. Once the shutter is depressed, the numbers count down (in seconds) until the exposure occurs. The default time delay is 5 seconds; this can be adjusted by tapping the number on the screen before the shutter button is selected. The choices are 5, 15 and 30 seconds.

Shutter type changed to Burst mode

The last of the four shutter release modes is the ‘Burst’ mode. This captures a short series of exposures, one right after the other. This can be useful for sports or other fast-moving activity, where the photographer wants to be sure of catching a particular moment. The number of exposures taken is a function of how long you hold down the shutter release – the camera keeps taking pictures as fast as it can for as long as you hold down the shutter.

There are a number of things to be aware of while using this mode:

  • You must be in the ‘Classic’ Workflow, not the ‘Shoot & Share’ (more on this below when we discuss that option)
  • The best performance is obtained when the AutoSave mode is set to ‘Lightbox’ – writing directly to the Camera Roll (using the ‘CameraRoll’ option) is slower, leading to more elapsed time between exposures. The last AutoSave option (‘Both’ – saving to Lightbox and CameraRoll) is even slower, and not recommended for burst mode.
  • The resolution of burst photos is greatly reduced (from 3264×2448 down to 640×480). This is the only way the data from the camera sensor can be transferred quickly enough – but it is one of the big differences between the iPhone camera system and a DSLR. The full resolution is 8 megapixels, while the burst resolution is only 0.3 megapixels – more than 25x less resolution!
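As a quick sanity check on those numbers, here is a small arithmetic sketch (the resolutions are the ones quoted above; the variable names are mine):

```python
# Arithmetic check of the full-resolution vs. burst-mode figures
# quoted above for the iPhone 4S camera.
full_w, full_h = 3264, 2448      # full-resolution capture
burst_w, burst_h = 640, 480      # burst-mode capture

full_mp = full_w * full_h / 1e6      # ~8.0 megapixels
burst_mp = burst_w * burst_h / 1e6   # ~0.31 megapixels
ratio = (full_w * full_h) / (burst_w * burst_h)

print(f"{full_mp:.1f} MP vs {burst_mp:.2f} MP -> {ratio:.0f}x fewer pixels")
```

The ratio works out to about 26× fewer pixels in burst mode, which is why “more than 25x less resolution” is no exaggeration.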

Resultant picture taken with exposure & focus on label

The above shot is an actual unretouched image using the settings from the first example (focus and exposure areas both set on label of the bottle).

Here is an example of how changing the placement of the Exposure Area box within the frame affects the outcome of the image:

exposure area set on wooden desktop - normal range of exposure

exposure area set on white dish - resulting picture is darker than normal

exposure area set on black desk mat - resulting image is lighter than normal

To fully understand what is happening above you need to remember that any camera light metering system sets the exposure assuming that you have placed the exposure area on a ‘middle gray’ value (Zone V). If you place the exposure measurement area on a lighter or darker area of the image the exposure may not be what you envisioned. Further discussion of this topic is outside the scope of this blog – but it’s very important, so if you don’t know – look it up.
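The middle-gray assumption can be illustrated numerically. The sketch below uses typical textbook reflectance figures (not measurements of this scene), and the function name is my own:

```python
import math

# A reflective light meter picks an exposure that would render the
# metered area as middle gray (~18% reflectance, Zone V).
MIDDLE_GRAY = 0.18

def exposure_error_stops(metered_reflectance):
    """Stops by which the frame shifts when the metered surface is not
    actually middle gray (positive = frame rendered darker)."""
    return math.log2(metered_reflectance / MIDDLE_GRAY)

# Metering a bright white dish (~90% reflectance): the meter assumes it
# is gray and stops down ~2.3 stops, darkening the whole frame.
print(round(exposure_error_stops(0.90), 1))

# Metering a black desk mat (~4% reflectance): the meter opens up
# ~2.2 stops, lightening the whole frame.
print(round(exposure_error_stops(0.04), 1))
```

This matches the three example shots above: metering on the white dish darkens the picture, metering on the black mat lightens it, and metering on the (roughly middle-toned) wooden desktop gives a normal exposure.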

The Lightbox

The next step after shooting a frame (or 20) is to process (edit) the images. This is done from the Lightbox. This function is entered by pressing the little icon of a ‘film frame’ on the left of the bottom control bar.

empty Lightbox, ready to import a photograph for editing

The above case shows an empty Lightbox (which is how the app looks after all shots are edited and saved). If you have just exposed a number of images, they will be waiting for you in the Lightbox – you will not need to import them. The following steps are for when you are processing previously exposed images (it doesn’t matter whether they were shot with Camera+ or any other camera app – I sometimes shoot with my film camera, scan the film, import to iPhone and edit with Camera+ in order to use a particular filter that is available).

selection window of the Lightbox image picker

an image selected in the Lightbox image picker, ready for loading into the Lightbox for editing

image imported into the Lightbox, ready for an action

When entering the Edit mode after loading an image, the following screen is displayed. There are two buttons on the top:  Cancel and Done. Cancel returns the user to the Lightbox, abandoning any edits or changes made while in the Edit screen, while Done applies all the edits made and returns the user to the Lightbox where the resultant image can be Shared or Saved to the Camera Roll.

Along the bottom of the screen are two ribbons showing all the edit functions. The bottom ribbon selects the particular edit mode, while the top ribbon selects the actual Scene, Rotation, Crop, Effect or Border that should be applied. The first set of individual edit functions that we will discuss are the Scenes. The following screen shots show the different Scene choices in the upper ribbon.

Scene modes 1 - 5

Scene modes 6 - 9

Scene modes 10 - 14

Scene modes 15 - 17

The ‘Scenes’ that Camera+ offers are one of the most powerful functions of this app. Nevertheless, there are some quirks (mostly about the naming – and the most appropriate way to apply the Scenes, based on the actual content of your image) that will be discussed. The first thing to understand is the basic difference between Scenes and Effects. Both, at the most fundamental level, transform the brightness, contrast, color, etc. of the image (essentially its visual qualities) – as opposed to the spatial qualities of the image that are adjusted with Rotation, Crop and Border. However, a Scene typically adjusts overall contrast and color balance, and sometimes modifies white balance, brightness and so on. An Effect is a more specialized filter – often significantly distorting the original colors, or changing brightness or contrast in a certain range of values – the purpose being to introduce a desired effect into the image. Many times a Scene can be used to ‘rescue’ an image that was not correctly exposed, or to change the feeling or mood of the original image. Another way to think about it: the result of applying a Scene will almost always still look as if the image had just been taken by the camera, while an Effect is very often clearly an artificially applied filter.

In order to best demonstrate and evaluate the various Scenes that are offered, I have assembled a number of images that show a “before and after” of each Scene type. Within each Scene pair, the left-hand image is always the unadjusted original image, while the right-hand image has the Scene applied. The first series of test images is constructed with two comparisons of each Scene type: the first image pair shows a calibrated color test chart, the second image pair shows a woman in a typical outdoor scene. The color chart can be used to analyze how various ranges of the image (blacks, grays, whites, colors) are affected by the Scene adjustment; while the woman subject image is often a good representation of how the Scene will affect a typical real-world image.

After all of the Scene types are shown in this manner, I have added a number of sample images, with certain Scene types applied – and discussed – to better give a feeling of how and why certain Scene types may work best in certain situations.

Scene = Clarity

Scene = Clarity

The Clarity scene type is one of the most powerful scene manipulations offered – it’s not an accident that it is the first scene type in the ribbon… The power of this scene is not that obvious from the color chart, but it is more obvious in the human subject. This particular subject, while it shows most of the attributes of the Clarity filter well, is not ideally suited for application of this filter – better examples follow at the end of this section. The real effect of this scene is to make an otherwise flat image ‘pop’ more and have more visual impact. However, just like in other parts of life – less is often more. My one wish is that an “intensity slider” were included with Scenes (it is only offered on Effects, not Scenes) – as many times I feel that the amount of Clarity is overblown. There are techniques to ‘tone down’ Clarity, but those will only be discussed in Part 5 of this series – Tips & Techniques for iPhonography – as currently this requires the use of multiple apps, which is beyond the scope of the app introduction in this part of the series. The underlying enhancement appears to be a spatially localized increase in contrast, along with an increase in the vibrance and saturation of color.

Notice in the gray scale of the chart that the edges of each density chip are enhanced – but the overall gamma is unchanged (the steps from white to black remain even and separately identifiable). Look at the color patches – there is an increase in saturation (vividness of color) – but this is more pronounced in colors that are already somewhat saturated. For instance, the pastel colors in the range of columns 9-19 and rows B-D show little change in overall saturation. Now look at the saturated reds and greens of columns 17-18 and rows J-L:  these colors have picked up noticeably increased saturation.

Looking at the live subject, the local increase in contrast can easily be seen in her face, with the subtle variations in skin tone in the original becoming much more pronounced in the Clarity scene type. The contrast between the light print on her dress and the gray background is more obvious with Clarity applied. Observe how the wrinkles in the man’s shirt and shorts are much more obvious with Clarity. Notice the shading on the aqua-colored steel piping in the lower left of the image: in the original the square pipes look very evenly illuminated, with Clarity applied there is a noticeable transition from light to dark along the pipe.
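One common way to get this kind of localized ‘pop’ is local contrast enhancement: subtract a blurred copy of the image from the original, then feed a fraction of the difference back in. This is only a plausible sketch of how a Clarity-style effect can work – not Camera+’s actual algorithm – demonstrated on a made-up one-dimensional brightness signal:

```python
# Local contrast enhancement sketch on a 1-D brightness signal.
def box_blur(signal, radius):
    """Simple moving average; stands in for a real blur kernel."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def local_contrast(signal, radius=2, amount=0.5):
    """Push each value away from its local average by `amount`."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

flat_edge = [0.4, 0.4, 0.4, 0.6, 0.6, 0.6]   # a soft brightness edge
print(local_contrast(flat_edge))
```

Values near an edge get pushed away from their local average (the edge step widens from 0.2 to about 0.28 here), while flat regions are left essentially alone – which is why edges and textures ‘pop’ without the overall gamma changing, just as the gray-scale chips show.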

Scene = Auto

Scene = Auto

The Auto scene type is sort of like using a ‘point and shoot’ camera set to full automatic – the user ideally doesn’t have to worry about exposure, etc. Of course, in this case, since the image has already been exposed, there is a limit to the corrections that can be applied! Basically this Scene attempts to ensure full tonal range in the image and will manipulate levels, gamma, etc. to achieve a ‘centered’ look. For completeness I have included this Scene – but as you will notice (and as should be expected) there is almost no difference between the before and after images. With correctly exposed initial images this is what should happen… It may not be apparent on the blog, but when looking carefully at the color chart images on a large calibrated monitor, the contrast is increased slightly at both ends of the gray scale:  the whites and blacks appear to clip a little bit.

Scene = Flash

Scene = Flash

The Flash scene type is an attempt to ‘repair’ an image that was taken in low light without a flash – when it was probably needed. Again, like any post-processing technique, this Scene cannot make up for something that is not there:  areas of shadow in which there was no detail in the original image can at best only be turned into lighter noise… But in many cases it will help underexposed images. The test chart clearly shows the elevation in brightness levels all through the gray scale – look for example at gray chip #11 – the middle gray value of the original is considerably lightened in the right-hand image. This Scene works best on images with an overall low light level – as you can see from both the chart and the woman, areas of the picture that are already well lit tend to be blown out and clipped.

Scene = Backlit

Scene = Backlit

The Backlit scene type tries to correct for the ‘silhouette’ effect that occurs when strong light is coming from behind the subject without corresponding fill light illuminating the subject from the front. While I agree that this is a useful scene correction, it is hard to perform after the fact – and personally I think this is one Scene that is not well executed. My issue is with the over-saturation of reds and yellows (Caucasian skin tones) that more often than not makes the subject look like a boiled lobster. I think this comes from the attempt to raise the perceived brightness of an overly dark skin tone (since the most common subject in such a situation is a person standing in front of a brightly lit background). You will notice on the chart that the gray scale is hardly changed from the original (a slight overall brightness increase) – but the general color saturation is raised. A very noticeable increase in red/orange/yellow saturation is obvious:  look at the red group of columns 7-8 and rows A-B. In the original these four squares are clearly differentiated – in the ‘after’ image they have merged into a single fully saturated area. A glance at the woman’s image also shows overly hot saturation of skin tones – even the man in the background has a hot pink face now. So, to summarize, I would reserve this Scene for very dark silhouette situations – where you need to rescue an otherwise potentially unusable shot.

Scene = Darken

Scene = Darken

The Darken scene type does just what it says – darkens the overall scene fairly uniformly. This can often help a somewhat overexposed scene. However, it cannot fix one of the most common problems with digital photography:  the clipping of light areas due to overexposure. As explained in a previous post in this series, once a given pixel has been clipped (driven into pure white due to the amount of light it has received) nothing can recover this detail. Lowering the level will only turn the bright white into a dull gray, but no detail will come back. Ever. A quick look at the gray scale in the right-hand image clearly shows the lowering of overall brightness – with the whites manipulated more than the blacks. For instance, chip #1 turns from almost white into a pale gray, while chip #17 shows only a slight darkening. This is appropriate so the blacks in the image are not crushed. You will notice by looking at the colors on the chart that the darkening effect is pretty much luminance only – no change in color balance. The apparent increase in reds in the skin tone of the woman is a natural side-effect of less luminance with chrominance held constant – once you remove the ‘white’ from a color mixture the remaining color appears more intense. You can see the same effect in the man’s shirt and the yellow background. Ideally the saturation should be reduced slightly as well as the luminance in this type of filter effect – but that can be tricky with a single filter designed to work with any kind of content. Overall this is a useful filter.
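The ‘remove the white and the remaining color looks more intense’ point can be verified numerically. The sketch below darkens a sample color in YCbCr space (luminance scaled down, chrominance untouched) using the standard BT.601 coefficients; the sample color and the 20% reduction are illustrative choices of mine:

```python
# Darkening luminance only (chrominance held constant) raises the
# apparent saturation of a color, as discussed above.
def rgb_to_ycbcr(r, g, b):
    """BT.601 luma plus scaled color-difference channels."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, (b - y) * 0.564, (r - y) * 0.713

def ycbcr_to_rgb(y, cb, cr):
    """Inverse of the BT.601 conversion above."""
    return (y + 1.402 * cr,
            y - 0.344136 * cb - 0.714136 * cr,
            y + 1.772 * cb)

def hsv_saturation(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    return 0.0 if mx == 0 else (mx - mn) / mx

pale_red = (0.9, 0.6, 0.6)                 # a light, desaturated red
y, cb, cr = rgb_to_ycbcr(*pale_red)
darkened = ycbcr_to_rgb(0.8 * y, cb, cr)   # luminance down 20%, chroma kept

print(hsv_saturation(*pale_red), hsv_saturation(*darkened))
```

The HSV saturation of the pale red rises from about 0.33 to about 0.39 after the luminance-only darkening – the same effect seen in the woman’s skin tones and the man’s shirt.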

Scene = Cloudy

Scene = Cloudy

The Cloudy scene type appears to normalize the exposure and color shift that occurs when the subject is photographed under direct cloudy skies. This is in contrast to the Shade scene type (discussed next) where the subject is shot in indirect light (open shade) while illuminated from a clear sky. This is mostly a color temperature problem to solve – remember that noon sunlight is approximately 5500K (the Kelvin scale measures color temperature: low numbers are reddish, high numbers are bluish, and middle ‘white’ is about 5000K). Illumination from a cloudy sky is often very ‘blue’ (in terms of color temperature) – between 8000K and 10000K – while open shade is less so, usually between 7000K and 8000K. If you compare the two scene types (Cloudy and Shade) you will notice that the image is ‘warmed up’ more with the Cloudy scene type. There is a slight increase in brightness but the main function of this scene type is to warm up the image to compensate for the cold ‘look’ often associated with shots of this type. You can see this occurring in the reddening of the woman’s skin tones, and the warming of the gray values of the sidewalk between her and the man.
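In practice a correction like this boils down to a white-balance shift: scale the red channel up and the blue channel down. The gains and sample color below are invented for illustration – a real correction derives its gains from the source and target color temperatures:

```python
# Illustrative "warming" white-balance adjustment of the kind the
# Cloudy scene performs. Gain values are made-up examples, not
# Camera+'s actual parameters.
def warm(rgb, red_gain=1.08, blue_gain=0.92):
    r, g, b = rgb
    return (min(1.0, r * red_gain), g, min(1.0, b * blue_gain))

cloudy_gray = (0.55, 0.58, 0.65)   # a cool, bluish mid-gray
print(warm(cloudy_gray))
```

The bluish gray comes out with red and blue nearly balanced – visibly warmer. A Shade-style correction would simply use gains closer to 1.0 (less warming) plus a small overall brightness boost.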

Scene = Shade

Scene = Shade

The Shade scene type is similar in function to the Cloudy scene (see above for comparison) but differs in two areas: There is a noticeable increase in brightness (often images exposed in the shade are slightly underexposed) and there is less warming of the image (due to open shade being warmer in color temperature than a full cloudy scene illumination). One easy way to compare the two scene types is to examine the color charts – looking at the surround (where the numbers and letters are) – the differences are easy to see there. A glance at the test shot of the woman shows a definite increase in brightness, as well as a slight warming – again look at the skin tones and the sidewalk.

Scene = Fluorescent

Scene = Fluorescent

The Fluorescent scene type is designed to correct for images shot under fluorescent lighting. This form of illumination is not a ‘full-spectrum’ light source, and as such has some unnatural effects when used to take photographs. Our human vision corrects for this – but film or digital exposures do not – so such photos tend to have a somewhat green/cyan cast to them. In particular this makes light-colored skin look a bit washed out and sickly. The only real change I can see in this scene filter is an increase in magenta in the overall color balance (which helps correct the green/cyan shift of fluorescent light). The difference introduced is very small – I think it will help in some cases and may be insufficient in others. It is more noticeable in the image of the woman than the test chart (her skin tones shift towards a magenta hue).

Scene = Sunset

Scene = Sunset

The Sunset scene type starts a slightly different type of scene filter – one apparently designed to make an image look like the scene name, instead of correcting the exposure for images taken under the named situation (like Shade, Cloudy, etc.). This twist in naming conventions is another reason to always really test and understand your tools – know what the filter you are applying does at a fundamental level and you will have much better results. It’s obvious from both the chart and the woman that a marked increase in red/orange is applied by this scene type. There is also a general increase in saturation – just more in the reds than the blues. Look at the man’s shirt and shorts to see how the blue saturation increases. Again, like the Backlit filter, I personally feel this effect is a bit overdone – and within Camera+ there is no simple way to remedy this. There are methods, using another app to follow the corrections of this app – these techniques will be discussed in the next of the series (Tips & Techniques for iPhonography).

Scene = Night

Scene = Night

The Night scene type attempts to correct for images taken at night under very low illumination. This scene is a bit like the Flash type – but on steroids! The full gray scale is pushed towards white a lot, with even the darkest shadow values receiving a noticeable bump in brightness. There is also a general increase in saturation – as colors tend to be undersaturated when poorly illuminated. Of course this will make images that are normally exposed look greatly overexposed (see both the chart and the woman), but it still gives you an idea of how the scene filter works. Some better ‘real-world’ examples of the Night scene type follow this introduction of all the scene types.

Scene = Portrait

Scene = Portrait

The Portrait scene type is basically a contrast and brightness adjustment. It’s easy to see in the chart comparison, both in the gray scale and on the color chip chart. Look first at the gray scale: all of the gray and white values from chip #17 and lighter are raised, while the chips below #17 are darkened. Chips #1 and #2 are now both pure white, with no differentiation. Likewise with chips #20-22, they are pure black. The area defined by columns 13-19 and rows A-C is now pure white, compared to the original where clear differences can be noted. Likewise row L between columns 20-22 can no longer be differentiated. In the shot of the woman, this results in a lightening of skin tones and increased contrast (notice her black bag is solid black as opposed to very dark gray; the patch of sunlight just above her waist is now solid white instead of a bright highlight). Again, like several scene effects I have noted earlier, I find this one a little overstated – I personally don’t like to see details clipped and crushed – but like any filter, the art is in the application. This can be very effective on a head shot that is flat and without punch. The trick is applying the scene type appropriately. Knowledge furthers…
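What the chart shows is consistent with a contrast expansion with clipping: values above a light threshold are pushed to pure white, values below a dark threshold to pure black, and everything in between is stretched. The thresholds below are illustrative guesses on my part, not Camera+’s actual curve:

```python
# Contrast stretch with clipping, as the Portrait chart comparison
# suggests. black_point/white_point are illustrative values only.
def stretch(value, black_point=0.10, white_point=0.85):
    if value <= black_point:
        return 0.0                     # shadows crush to pure black
    if value >= white_point:
        return 1.0                     # highlights clip to pure white
    return (value - black_point) / (white_point - black_point)

# Two near-white chips merge into pure white, a deep shadow crushes
# to black, and a midtone is stretched across a wider range.
print([round(stretch(v), 2) for v in (0.95, 0.88, 0.5, 0.05)])
```

This is also why the effect is irreversible: once two chips map to the same pure white (or black), no later adjustment can tell them apart again.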

Scene = Beach

Scene = Beach

The Beach scene type looks like the type of filter first mentioned in Sunset (above) – in other words, an effect to make the image look like it was taken on a beach (or at least in that type of light!). It’s a little bit like the previous Portrait scene (there is an increase in both brightness and contrast – but less than Portrait) and has a bit of Sunset in it as well (increased saturation in reds and yellows – but again not as much as Sunset). While the Sunset type had greater red saturation, this Beach filter leans more towards the yellow. See column #15 in the chart – in the original each square is clearly differentiated, while in the right-hand image the more intense yellows are very hard to tell apart. Just to the right, in column #17, the reds have not changed that much. When looking at the woman, you can see that this scene type makes the image ‘bright and yellow/red’ – sort of a ‘beachy’ effect I guess.

Scene = Scenery

Scene = Scenery

The Scenery scene type produces a slight increase in contrast, along with increased saturation in both reds and blues – with blue getting a bit more intensity. This is easy to compare using the chart, and in the shot of the woman it can be seen in the reddish skin tone, as well as the significantly increased saturation in the man’s shirt and shorts. While this effect makes people look odd, it can work well on so-called “postcard” landscape shots (which tend to be flat panoramic views with low contrast and saturation). However, as you will see in the ‘real-world’ examples below, often a different scene or filter can help out a landscape in an even better way – it all depends on the original photo with which you are starting.

Scene = Concert

Scene = Concert

The Concert scene type is oddly enough very similar to the previous scene type (Scenery) – just turned up really loud! A generalized increase in contrast, along with a red and blue saturation increase, attempts to make your original scene look like, well, I guess a rock-and-roll concert… Normal exposures (see the woman test shot) come out ‘hot’ (overly warm and contrasty, with elevated brightness), but if you need some color and punch in your shot, or desire an overstated effect, this scene type could be useful.

Scene = Food

Scene = Food

The Food scene type offers a slight increase in contrast and a bit of increased saturation in the reds. This can be seen in both the charts and the woman shot. It’s less overdone than several of the other scene types, so keep this in mind for any shot that needs just a bit of punch and warming up – not just for shots of food. And again, I see this scene type in the same vein as Beach, Sunset, etc. – an effect to make your shot look like the feeling of the filter name, not to correct shots of food…

Scene = Text

Scene = Text

The Text scene type is an extreme effect – and best utilized as its name implies: to help shots of text material come out clearly and legibly. An example of actual text is shown below, but you can see from the chart that this is accomplished by a very high contrast setting. The apparent increase in saturation in the test chart is really more a result of the high contrast – I don’t think a deliberate increase in saturation was added to this effect.

(Note:  the ‘real-world’ examples referred to in several of the above explanations will be shown at the very end of this discussion, after we have illustrated the remaining basic edit functions (Rotation, Crop, Effects and Borders) in a similar comparative manner as above with the Scene types. This is due to many of the sample shots combining both a Scene and an Effect (one of the powerful capabilities of Camera+) – I want the viewer to fully understand the individual instruments before we assemble a symphony…)

The next group of Edit functions is the image Rotation set.

Rotation submenu

The usual four choices of image rotation are presented.

The Crop functions are displayed next.

group 1 of crop styles

group 2 of crop styles

group 3 of crop styles

The Effects (FX) filters, after the Scene types, are the most complex image manipulation filters included in the Camera+ app. As opposed to Scenes, which tend to affect overall lighting, contrast and saturation – and keep a somewhat realistic look to the image – the Effects filters seriously bend the image into a wide variety of (often but not always) unrealistic appearances. Many of the effects mimic early film processes, emulsions or camera types; some offer recent special filtering techniques (such as HDR – High Dynamic Range photography), and so on.

The Effects filters are grouped into four sections:  Color, Retro, Special and Analog. The Analog filter group requires an in-app purchase ($0.99 at the time of this article). In a similar manner to how Scene types were introduced above, an image comparison is shown along with the description of each filter type. The left-hand image is the original, the right-hand image has the specific Effects filter applied. Some further ‘real-world’ examples of using filters on different types of photography are included at the end of this section.

Color Effects filters

Color Effects filters

Color Filter = Vibrant

The Vibrant filter significantly increases the saturation of colors, and appears to enhance the red channel more than green or blue.

Color Filter = Sunkiss'd

The Sunkiss’d filter is a generalized warming filter. Notice that, in contrast to the Vibrant filter above, even the tinfoil dress on the left-hand dancer has turned a warm gold color – where the Vibrant filter did not alter the silver tint, as that filter has no effect on portions of the image that are monochrome. Also, all colors in Sunkiss’d are moved towards the warm end of the spectrum – note the gobo light effect projected above the dancers: it is pale blue in the original, and becomes a warmer green/aqua after the application of Sunkiss’d.

Color Filter = Purple Haze

The Purple Haze filter does not provide hallucinogenic experiences for the photographer, but does perhaps simulate what the retina might imagine… Increased contrast, increased red/blue saturation and a color shift towards.. well… purple.

Color Filter = So Emo

The So Emo filter type (So Emotional?) starkly increases contrast and shifts the color balance towards cyan; as a counterpoint to the overall cyan tint there is apparently a narrow-band enhancement of magenta – notice the tulle skirt on the center dancer, which would otherwise be rendered almost monochromatic by so much cyan shift. The flesh tones of the dancers’ legs (more reddish), however, are left almost colorless by the cyan tint; this shows that the enhancement is narrow-band – it does not include red.

Color Filter = Cyanotype

The Cyanotype effects filter is reminiscent of early photogram techniques (putting plant leaves, etc. on photo paper sensitized with the cyanotype process in direct sunlight to get a silhouette exposure). This is the same process that makes blueprints. Later it was used (in a similar way as sepia toning) to tint black & white photographs. In the case of this effects filter, the image is first rendered to monochrome, then subsequently tinted with a slightly yellowish cyan color.
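
The two-step process described above – render to monochrome, then tint – can be sketched for a single pixel roughly like this (the tint values are my own guess at a "slightly yellowish cyan", not anything extracted from the app):

```python
def cyanotype(pixel, tint=(0.55, 0.9, 1.0)):
    """Toy cyanotype: reduce an RGB pixel (0-255) to luminance, then
    scale the result by a cyan-ish tint. Tint values are illustrative.
    """
    r, g, b = pixel
    # Rec.601 luma weights approximate perceived brightness
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return tuple(min(255, round(luma * t)) for t in tint)
```

Because the tint scales a single luminance value, every output pixel lies on the same cyan axis – which is why the result reads as a toned monochrome rather than a color shift.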

Color Filter = Magic Hour

The Magic Hour filter attempts to make the image look like it was taken during the so-called “Magic Hour” – the last hour before sunset – when many photographers feel the light is best for all types of photography, particularly landscape or interpretive portraits. Brightness is raised, contrast is slightly reduced and a generalized warming color shift is applied.

Color Filter = Redscale

The Redscale effects filter is a bit like the previous Magic Hour, but instead of a wider spectrum warming, the effect is more localized to reds and yellows. The contrast is slightly raised, instead of lowered as for Magic Hour, and the effect of the red filter can clearly be seen on the gobo light projected above the dancers:  the original cyan portion of the light is almost completely neutralized by the red enhancement, leaving only the green portion of the original aqua light remaining.

Color Filter = Black White

The Black & White filter does just what it says: renders the original color photograph into a monochrome only version. It looks like this is a simple monochrome conversion of the RGB channels (color information deleted, only luminance kept). There are a number of advanced techniques for rendering a color image into a high quality black & white photo – it’s not as simple as it sounds. If you look at a great black and white image from a film camera, and compare it to a color photograph of the same scene, you will know what I am saying. There are specialized apps for taking monochrome pictures with the iPhone (one of which will be reviewed later in this series of posts); and there is a whole set of custom filters in Photoshop devoted to just this topic – getting the best possible conversion to black & white from a color original. In many cases however a simple filter like this will do the trick.
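
To illustrate why the conversion is not as simple as it sounds, compare a naive channel average against a perceptually weighted conversion (Rec.601 luma weights; I have no knowledge of which weights Camera+ actually uses):

```python
def to_gray_naive(pixel):
    """Simple average of R, G, B - what a quick-and-dirty conversion does."""
    return round(sum(pixel) / 3)

def to_gray_weighted(pixel):
    """Rec.601 luma weighting, which better matches perceived brightness:
    green contributes the most, blue the least."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```

Pure blue looks far darker to the eye than pure green, and the weighted conversion reflects that where the naive average cannot: both pure blue and pure green average to 85, but the weighted version maps blue to 29 and green to 150. Advanced black & white conversions go further still, letting the photographer mix the channels to taste.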

Color Filter = Sepia

The Sepia filter, like Cyanotype, is a throwback to the early days of photography – before color film – when black & white images were toned to increase interest. In the case of this digital filter, the image is first turned into monochrome, then tinted with a sepia tone via color correction.

Retro Effects filters

Retro Effects filters

Retro Filter = Lomographic

The Lomographic filter effect is designed to mimic some of the style of photograph produced by the original LOMO Plc camera company of Russia (Leningrad Optical Mechanical Amalgamation). This was a low cost automatic 35mm film camera. While still in production today, this and similar cameras account for only a fraction of LOMO’s production – the bulk is military and medical optical systems – and are world class… Due to the low cost of components and production methods, the LOMO camera exhibited frequent optical defects in imaging, color tints, light leaks, and other artifacts. While anathema to professional photographers, a large community that appreciates the quirky effects of this (and other so-called “Lo-Fi” or Low Fidelity) cameras has sprung up with a world-wide following. Hence the Lomographic filter…

While, like all my analysis on Scenes and Effects, I have no direct knowledge of how the effect is produced, I bring my scientific training and decades of photographic experience to help explain what I feel is a likely design, based on empirical study of the effect. That said, this effect appears to show increased contrast, a greenish/yellow tint for the mid-tones (notice the highlights, such as the white front stage, stay almost pure white). A narrow-band enhancement filter for red/magenta keeps the skin tones and center dancer’s dress from desaturating in the face of the green tint.

Retro Filter = '70s

The ’70s effect is another nod to the look of older film photographs, this one more like what Kodachrome looked like when the camera was left in a hot car… All film stock is heat sensitive, with color emulsions, particularly older ones, being even more so. While at first this filter has a resemblance to the Sunkiss’d color filter, the difference lies in the multi-tonal enhancements of the ’70s filter. The reds are indeed punched up, but that’s in the midtones and shadows – the highlights take on a distinct greenish cast. Notice that once the enhanced red has nulled out the cyan in the overhead gobo projection, the remaining highlights turn bright green – with a similar process occurring on the light stage surface.

Retro Filter = Toy Camera

The Toy Camera effects filter emulates the low cost roll-film cameras of the ’50s and ’60s – with the light leaks, uneven processing, poor focus and other attributes often associated with photographs from that genre of cameras. Increased saturation, a slightly raised black level, spatially localized contrast enhancement (a technique borrowed from HDR filtering) – notice the slight flare on the far right over the seated woman’s head becomes a bright hot flare in the filtered image – and streaking to simulate light leakage on film all add to the multiplicity of effects in this filter.

Retro Filter = Hipster

The Hipster effect is another of the digital memorials to the original Hipstamatic camera – a cheap all-plastic 35mm camera that shot square photos. The two brothers who invented it – copying an original low-cost Russian camera – produced only 157 units. The camera cost $8.25 in 1982 when it was introduced. With a hand-molded plastic lens, this camera was another of the “Lo-Fi” group of older analog film cameras whose ‘look’ has once again become popular. As derived by the Camera+ crew, the Hipster effect offers a warm, brownish-red image. This is achieved, apparently, with raised black levels (a common trait of cheap film cameras – the backs always leaked a bit of light, so a low-level ‘fog’ of the film base tended to raise deep blacks [areas of no light exposure in a negative] to a dull gray); a pronounced color shift towards red/brown in the midtones and lowlights; and an overall white level increase (note the relative brightness of the front stage between the original and the filtered version).

Retro Filter = Tailfins

The Tailfins retro effect is yet another take on the ’50s and ’60s – with an homage to the 1959 Cadillac no doubt – the epitome of the ‘tailfin era’. It’s similar to the ’70s filter described above, but lacks the distinct ‘overcooked Kodachrome’ look with the green highlights. Red saturation is again pushed up, as well as overall brightness. Once again blacks are raised to simulate the common film fog of the day. Lowered contrast finishes the look.

Retro Filter = Fashion

The Fashion effects filter is an interesting and potentially very useful filter. Although I am sure there are styles of fashion photography that have used this muted look, the potential uses for this filter extend far beyond fashion or portraiture. Essentially this is a desaturating filter that also warms the lowlights more than the highlights. Notice the rear wall – almost neutral gray in the original, a very warm gray in the filtered version. The gobo projected light, the greenish-yellow spill on the ceiling, the center dancer’s dress – all greatly desaturated. The contrast appears just a bit raised:  the white front stage is brighter than the original version, and the black dress of the right-hand dancer is darker. With so many photos today – and filters – that tend to make things go pop! bang! and sparkle! it’s sometimes nice to present an image that is understated, but not cold. This just might be a useful tool to help tell that story.

Retro Filter = Lo-Fi

The Lo-Fi is another retro effect filter that is similar in some respects to the Toy Camera filter reviewed above, but does not express the light streak and obvious film fog artifacts. It again provides an unnatural intensity of color – this through greatly increased saturation. There is also an increase in contrast – note the front stage is nearly pure white and the ceiling to the right of the gobo projection has gone almost pure black. There is a non-uniform assignment of color balance and saturation, dependent on the relative luminance of the original scene. The lighter the original scene, the less saturation is added:  compare the white stage to the dark gray interior of the large “1” on the back wall.

Retro Filter = Ansel

The Ansel filter is of course a tip of the hat to the iconic Ansel Adams – one of the premier black & white photographers ever. Although… Ansel would likely have something to say about separation of gray values in the shadows, particularly around Zones II – III.  Compared to the ‘Black & White’ color filter discussed earlier, this filter is definitely of a higher contrast. Personally, I think the blacks are crushed a bit – most of the detail is lost in the black dress, and the faces of the dancers are almost lost now in dark shadow. But for the right original exposure, this filter will offer more dynamism than the “Black & White” filter.

Retro Filter = Antique

The Antique effects filter is in the same vein as Sepia and Cyanotype: a filter that first extracts a monochrome image from the color original, then tints it – in this case with a yellow cast. The contrast is increased as well.

Special Effects filters

Special Effects filters

Special Filter = HDR @ 100%

Special Filter = HDR @ 50%

The HDR Special filter is, along with the Clarity Scene type, one of the potentially more powerful filters in this entire application. Because of this (and to demonstrate the Intensity Slider function) I have inserted two examples of this filter, one with the intensity set at 100%, and one with the intensity at 50%. All of the Effects filters have an intensity control, so the relative level of the effect can be adjusted from 0-100%. All of the other examples are shown at full intensity to discuss the attributes of the filter with the greatest ease, but many times a lessening of the intensity will give better results. That is nowhere more evident than with the HDR effect. The term stands for High Dynamic Range photography. Normally, this can only be performed with multiple exposures of precisely the same shot in the camera – then through complex post-production digital computations, the two (or more) images are superimposed on top of each other, with the various parts of the images seamlessly blended. The whole purpose of this is to make a composite image that has a greater range of exposure than was possible with the taking camera/sensor/film.

The usual reason for this is an extreme range of brightness. An example:  if you stand in a dimly lit barn and shoot a photograph out the open barn door at the brightly lit exterior at noon, the ratio of brightness from the barn interior to the exterior scene can easily approach 100,000:1 – which is impossible for any medium, whether film or digital, to capture in a single exposure. The widest range film stock ever produced could capture about 14 stops – about 16,000:1. And that is theoretical – once you add the imperfections of lens, the small amount of unavoidable base fog and development noise, 12 stops (about 4,000:1) is more realistic. With CCD arrays (high quality digital sensors as found in expensive DSLR cameras, not the CMOS sensors used in the iPhone), it is theoretically possible to get about the same. While the top of the line DSLRs boast a 16-bit converter, and do output 16-bit images, the actual capability of the sensor is not that good. I personally doubt it’s any better than the 12 stops of a good film camera – and that only on camera backs costing the same as a small car…
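
The stop figures above follow directly from the fact that each stop doubles the light, so the arithmetic is easy to check:

```python
import math

def stops_to_ratio(stops):
    """Each photographic stop doubles the light, so N stops
    spans a contrast ratio of 2**N : 1."""
    return 2 ** stops

def ratio_to_stops(ratio):
    """Inverse: how many stops a given contrast ratio spans."""
    return math.log2(ratio)
```

So 14 stops is 2^14 = 16,384:1 (about 16,000:1 as stated), 12 stops is 4,096:1, and the 100,000:1 barn-door scene works out to roughly 16.6 stops – well beyond any single exposure.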

What this means in practicality is that to capture such a scene leads to one of two scenarios:  either the blacks are underexposed (if you try to avoid blowing out the whites); or the white detail is lost in clipping if you try to keep the black shadow detail in the barn visible. The only other option (employed by professional photographers with a budget of both time and money) is to light the inside of the barn sufficiently that the contrast of the overall scene is brought within range of the taking film or digital array.

With HDR, a whole new possibility has arrived: take two photographs (identical, must line up perfectly so camera has to be on a tripod, and no motion in the scene is allowed – a rather restrictive element, but critical) – then with the magic of digital post-processing, the low-light image (correctly exposed for the shadows, so the highlights are blown out) and the hi-light image (correctly exposed for the brightly lit part of the scene, so the inside of the barn is just solid black with no detail) are combined into a composite photograph that has incredible dynamic range. There is a lot more to it than this, and you can’t get around the display part of the equation (how do you then show an image that has 16 or more stops of dynamic range on a computer monitor that has at best 10 stops of range? or worse yet, ink jet printers, that even on the best high gloss art paper may be able to render 6 stops of dynamic range?). We’ll leave those questions for my next part of this blog series, but for now it’s enough to understand that high dynamic range exposures (HDR) are very challenging for the photographer.

So what exactly IS an HDR filter? Obviously it cannot duplicate the true HDR technique (multiple exposures)… (First, to be clear, there are different types of “HDR filters” – for instance the very complex one in Photoshop is designed to work with multiple source images as discussed above – here we are talking about the HDR filter included with Camera+, and what it can, and cannot, do.) The type of filtering process that appears to be used by the HDR filter in this app is known as a “tone mapping” filter. This is actually a very complex process, chock full of high mathematics, and if it weren’t for the power of the iPhone hardware and core software this would be impossible to do on anything but a desktop computer. Essentially, through a process of both global and local tone mapping using specific algorithms, an output image is derived from the input image. As you can see from the results in the right-hand images, tone mapping HDR has a unique look. It tends to enhance local contrast, so image sharpness is enhanced. A side effect – that some like and others don’t – is an apparent ‘glow’ around the edges of dark objects in the scene when they are in front of lighter objects. In these examples, look around the edges of the black dress, and the edges of the black outline of the “1” on the back wall. Notice also that in the original photo the white stage looks almost smooth, while in the filtered image you can see every bit of dust and scuff marks from the models. The overall brightness of the image is enhanced, but nothing is clipped. Due to the enhancement of small detail, noise in low-light areas (always an issue with digital sensors) is increased. Look at the area of the ceiling to the right of the projected gobo image: in the original the dimly lit area looks relatively smooth, while in the filtered image there is noticeably more mottling and other noise. Due to the amount of detail added, side effects, etc., it is often desirable not to ‘overdo’ the tone mapping effect. This can clearly be shown with the second set of comparisons, which has the intensity of the effect set to 50% instead of 100%.
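
A flavor of what a tone-mapping operator does can be sketched with the classic Reinhard global operator, which compresses an unbounded luminance range into 0–1 without ever clipping. The real Camera+ filter almost certainly uses a far more sophisticated local variant; this is only a sketch of the principle:

```python
def reinhard(luminance):
    """Reinhard global tone mapping: L / (1 + L).

    Maps luminance from 0..infinity smoothly into 0..1 - bright values
    are compressed hard while dark values pass through nearly unchanged,
    so nothing clips. A 'local' tone mapper applies a curve like this
    with parameters that vary per region of the image, which is what
    produces the halo/'glow' artifacts around high-contrast edges.
    """
    return luminance / (1.0 + luminance)
```

Even a luminance of 1000 maps to just under 1.0, while a shadow value of 0.1 maps to about 0.09 – the extreme range is tamed without hard clipping at either end.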

Special Filter = Miniaturize

The Special filter Miniaturize initially confused me – I didn’t understand the naming of this filter in reference to its effects: this filter is very similar to the Depth of Field filter which will be discussed shortly. Essentially this filter increases saturation a bit, and then applies a blurring technique to defocus the upper third and lower third of the image, leaving the middle third sharp. A reader of my initial release of this section was kind enough to point out that this filter is attempting to mimic the planar depth-of-field effect that happens when the lens is tilted about the axis of focus. With a physical tilt-shift lens, the areas of soft focus are due to one area of the image being too far to be in focus, the other area being too near to be in focus. This technique is used to simulate miniature photography, hence the filter name. Thanks to darkmain for the update.
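
The blur-the-thirds approach can be sketched very simply on a grayscale image represented as a list of rows. This is a crude stand-in using a horizontal box blur; the actual filter surely uses a smoother, graduated defocus:

```python
def box_blur_row(row, radius=1):
    """Horizontal box blur of one grayscale row (list of 0-255 values)."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def miniaturize(image, radius=2):
    """Blur the top and bottom thirds of a grayscale image (list of
    rows), leaving the middle third sharp - a crude imitation of the
    planar defocus of a tilt-shift lens."""
    h = len(image)
    top, bottom = h // 3, 2 * h // 3
    return [row[:] if top <= y < bottom else box_blur_row(row, radius)
            for y, row in enumerate(image)]
```

A real tilt-shift simulation would also ramp the blur radius up gradually with distance from the focus band, rather than switching it on abruptly.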

Special Filter = Polarize

The Polarize filter is another somewhat odd name for the actual observed effect – since polarizing filtering must take place optically – there is no way to electronically substitute this. Polarizing filters are often used to reduce reflections (from windows, water surface, etc.) – as well as allow us to see 3D movies. None of these techniques are offered by this filter. What this one does do is to substantially increase the contrast, add significant red/blue saturation – but, like the earlier Lo-Fi filter the increase in saturation is inversely proportional to the brightness of the element in the scene: dark areas get increased saturation, light areas do not.
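
The luminance-dependent saturation described above (for both this filter and Lo-Fi) might be modeled per pixel like this – the boost curve and its maximum are illustrative guesses, not the app's actual parameters:

```python
def polarize_like(pixel, max_boost=1.8):
    """Toy model of luminance-dependent saturation: the darker the
    pixel, the stronger the saturation boost. Saturation is applied by
    pushing each channel away from the pixel's own average. The
    max_boost value is an illustrative guess."""
    r, g, b = pixel
    avg = (r + g + b) / 3
    # boost fades from max_boost at black down to 1.0 (no change) at white
    boost = 1.0 + (max_boost - 1.0) * (1.0 - avg / 255)

    def sat(v):
        return max(0, min(255, round(avg + (v - avg) * boost)))

    return (sat(r), sat(g), sat(b))
```

A dark red pixel gains far more red than an equally tinted light pixel, while pure white passes through untouched – matching the chart behavior described above.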

Special Filter = Grunge

The Grunge effect does have a name that makes sense! Looking like the photo was taken through a grungy piece of glass, it has a faded look that is somewhat realistic of old, damaged print photos. The apparent components of this filter are:  substantial desaturation, significant lightening (brightness level raised), a golden/yellow tint added, then the ‘noise’ (scratches).

Special Filter = Depth of Field

The Depth of Field effect is, as mentioned, very similar to the Miniaturize effect. The overall brightness is a bit lower, and the other main difference appears to be a circular area of sharpness in the center of the frame, as opposed to the edge to edge horizontal band of sharpness apparent with the Miniaturize filter. Check the focus of the woman seated on the far right in the two filters and you’ll see what I mean.

Special Filter = Color Dodge

The Color Dodge special filter has me scratching my head again as far as naming goes… In photographic terminology, “dodge” means to hold back, to reduce – while “burn” means to increase. These are techniques originally used in darkroom printing (actually one of the first methods of tone mapping!) to locally increase or decrease the light falling on the print in order to change the local contrast/brightness. In the resultant image from this filter, red saturation has not just been increased, it has been firewalled! Basically, areas in the original image that had little color in them stayed about the same, areas that have significant color values have those values increased dramatically. There is additionally an increase in overall contrast.
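
For what it's worth, "color dodge" is also the name of a standard digital blend mode, which may be where the filter name comes from: the result divides the base value by the inverse of the blend value, driving anything with existing brightness or color toward full intensity very aggressively – consistent with the "firewalled" look described above. A per-channel sketch of the standard blend formula (whether Camera+ actually blends the image with itself this way is my assumption):

```python
def color_dodge(base, blend):
    """Standard color-dodge blend for one 0-255 channel value:
    out = base / (1 - blend/255), clipped to 255. Bright blend values
    push the result toward white very quickly."""
    if blend >= 255:
        return 255
    return min(255, round(base * 255 / (255 - blend)))
```

Dodging an image against a copy of itself leaves dark, low-chroma areas nearly alone while saturated areas explode upward – much like the behavior this filter exhibits.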

Special Filter = Overlay

The Overlay effect has the same contrast and saturation functions as the previous Color Dodge filter, but the saturation is not turned up as high (gratefully!). In addition, there is a pronounced vignette effect – easy to see in the bottom of the frame. It’s circular, just harder to see in this particular image at the top. Like the rest of the effects filters, the ability to reduce the intensity of this effect can make it useful for situations that at first may not be obvious. For instance, since the saturation only works on existing chroma in the image, if one applies this effect to a monochrome image you now have a variable vignette filter – with no color component…
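
A circular vignette like the one in this filter is typically just a radial light falloff. A minimal sketch of the gain function, with an assumed quadratic falloff and an illustrative strength parameter:

```python
import math

def vignette_gain(x, y, width, height, strength=0.6):
    """Light falloff for a circular vignette: gain is 1.0 at the image
    center, falling quadratically to (1 - strength) at the corners.
    The strength value and quadratic curve are illustrative choices."""
    cx, cy = width / 2, height / 2
    # distance from center, normalized so the corners land at 1.0
    d = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)
    return 1.0 - strength * d * d
```

Multiplying each pixel's brightness by this gain darkens the frame edges while leaving the center untouched.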

Special Filter = Faded

The Faded special effect filter does just what it says… this is a simple desaturating filter with no other functions visible. With the ability to vary the intensity of desaturation it makes for a powerful tool. Often one would like to just take a ‘bit off the top’ in terms of color gain – digital sensors are inherently more saturated looking than many film emulsions – just compare (if you can, not that many film labs left…) a shot taken of the same scene with both Ektachrome transparency film and the same scene with the iPhone.

Special Filter = Cross Process

The Cross Process is a true special effect! The name comes from a technique, first discovered by accident, where film is processed in a chemical developer that was intended for a different type of film. While the effect in this particular filter is not indicative of any particular similar chemical cross-process, the overall effect is similar:  high contrast, unnatural colors, and a general ‘retro’ look that has found an audience today. Just like with the chemical version of cross-processing, the results are unpredictable – one just has to try it out and see. Of course, with film, if you didn’t like it… tough.. go reshoot… with digital, just back up and do something else….

Analog Effects filters

Analog Effects filters

Analog Filter = Diana

The Diana effect is based on, wow – surprise, the Diana camera… another of the cheap plastic cameras prevalent in the 1960’s. The vignetting, light leaks, chromatic aberrations and other side-effects of a $10 camera have been brought into the digital age. In a similar fashion to several of the previous ‘retro’ filters discussed already, you will notice raised blacks, slight lowering of contrast, odd tints (in this case unsaturated highlights tend yellow), increased saturation of colors – and a slight twist in this filter due to even monochrome areas becoming tinted – the silver dress (which stayed silver in even some of the strongest effects discussed above) now takes on a greenish/yellow tint.

Analog Filter = Silver Gelatin

The Silver Gelatin effect, based on the original photochemical process that is over 140 years old – is a wonderful and soft effect for the appropriate subject matter. While the process itself was of course only black & white, the very nature of the process (small silver molecules suspended in a thin gelatin coating) caused fading relatively soon after printing. The gelatin fades to a pale yellow, and the silver (which creates the dark parts of the print) tended to turn a purplish color instead of the original pure black.

Analog Filter = Helios

The Helios analog effect may have been named for the Helios lens fabricated for the Russian “Zenit” 35mm SLR camera in 1958, but it’s just as likely to be called this due to the burning-fire red tint of virtually the entire frame. In a similar manner to other filters we have discussed, the tinting (and this is clearly another example of a tinted monochrome extraction from the original color image) is based on relative luma values:  near whites and near blacks are not tinted, all other mid-range values of gray are tinted strongly red. It’s an interesting technique, but personally I would have only sparing use for this one.

Analog Filter = Contessa

The Contessa effect is named after one of the really great early 35mm Zeiss/Ikon cameras, produced in the late 1940’s. The effect as it exists here is actually not true to the Contessa:  this original film camera would not have caused the vignetting seen – not with one of the world’s greatest lenses attached! However, that’s immaterial, it’s just the name… what we can say about this filter is that obviously it’s another black & white extraction from the color original – but it adds a sense of ‘old time photograph’ with the vignette, the staining/spotting on the sides of the image, and the very slight warm tint – really appears to look faded as opposed to an actual tint. It’s a nice filter, I would adjust it a bit to add more detail/contrast in the dancers’ faces (left and middle dancers’ faces are a bit dark) – but a nice addition to the toolbox.

Analog Filter = Nostalgia

The Nostalgia effect is now reaching into true ‘special effects’ territory. With a cross process look to start with (similar to Diana and Cross Process), some added saturation in the reds, and then the ‘fog’ effect around the perimeter of the frame – this is leaving photorealism behind…

Analog Filter = Expired

The Expired analog effect is a rather good copy of what film used to look like when you left it in the glove box of your car in the summer… or just plain let it get too old. The look created here is just one of many, many possible effects from expired film – it’s a highly unpredictable entity. In this filter, we have strong red saturation increase – again, with no color in the front of the white stage, nothing to saturate… The overall brightness is raised, contrast is lowered, and a light streak is added.

Analog Filter = XPRO C-41

The XPRO C-41 effect is another cross-process filter. This one is loosely based on what happens when you process E-6 film (color transparency) with color negative developer (C-41). Whites get a bit blown out, light areas tend to greenish, with darker areas tending bluish. The red saturation is (I believe) just something these software developers added – I’ve personally never seen this happen in chemical cross processing.

Analog Filter = Pinhole

The Pinhole analog effect is based, of course, on the oldest camera type of all. With major vignetting, considerable grain, monochrome only, enhanced contrast and lowered brightness, this filter does a fair job imitating what an iPhone would have produced in 1850 (when the first actual photograph was taken with a pinhole camera). The issue was that of a photosensitive material – the pinhole camera has been around for well over a thousand years (the Book of Optics published in 1021 AD describes it in detail).

Analog Filter = Chromogenic

The Chromogenic analog effect is based on the core methodology of all modern color film emulsions: the coupling of color dyes to exposed silver halide crystals. All current photochemical film emulsions use light-sensitive silver crystals for exposure to light. To make color, typically 3 layers on the film (cyan, yellow, magenta) are actually specialized chromogenic layers where the dye colors attach themselves to exposed silver crystals in that layer only. This leads to the buildup of a color negative (or positive) by the ‘stacking’ of the three layers to make a completed color image. Very early on, during the experimentation that led to this development, the process was not nearly as well defined – and this filter is one software artist’s idea of what an early chromogenic print may have looked like. In terms of analysis, the overall cast is reddish-brown, with enhanced contrast, slightly crushed blacks and very desaturated colors (aside from the overall tint).

The Border variations

Simple Border styles

Styled border styles

There are, in addition to the “No Border” option, 9 simple borders and 9 styled borders.

‘Real World’ examples using Scenes and Effects

The following examples show some various images, each processed with one or more techniques that have been introduced above. In each case the original image is shown on the left, the processed image is shown on the right. Since, just like in real life, we often learn more from our mistakes than what we get right the first time – I have included some ‘mistakes’ to demonstrate things I believe did not work so well.

Retro Filter = Ansel

The Ansel filter on this subject has too much contrast – there is no detail in the dress and the subtle detail reflected in the glass behind her is washed out.

Scene = Concert

The Concert filter used here looks unnatural:  skin tones too red. It does make the reflections in the glass pop out though…

Scene = Clarity

The Clarity scene is mostly successful here:  improved detail in her dress, the reflections in the window are clearer, the detail in the stone floor pops more. I would opt for a bit less saturation in her skin tones, particularly the legs – the best technique here would be to ‘turn down’ the Clarity a bit. This can be done, but not within Camera+ (at this time).

Special Filter = Overlay

The Overlay filter as used here offers another interpretation of the scene. The slight vignetting helps focus on the subject, the floor is now defocused, and the background reflection in the glass behind her is lessened – the only thing I would like to see improved is her dress, which is a bit dark, making the detail hard to see.

Scene = Portrait

The Portrait scene punches up the contrast, and in this case works fairly well. The floor is a bit bright, and it’s a personal decision if the greater detail revealed in the reflection behind the subject is distracting or not… The upper half of her dress is a bit dark due to the increased contrast, it would be nice to see more detail there.

Scene = Shade

This Shade scene applied shows the effects rather well: the scene is warmed up and the brightness is slightly raised.

Analog Filter = Silver Gelatin

The Silver Gelatin effect is demonstrated well here: blacks are a bit purple, all the whites/grays go a bit yellow. It’s a nice soft look for the right subject matter.

Scene = Backlit

The Backlit scene helps add fill to the otherwise underexposed subject. Since in this case she is standing in open shade (very cool in terms of color temperature), the tendency for this Scene to make skin tones too warm is ok. The background is blown out a bit though.

Scene = Food

In this version, the Food scene is used, well, not for food… it doesn’t add as much fill light to the subject as Backlit, but the background isn’t as overexposed either.

Scene = Backlit

Here’s an example of using the Backlit scene for a subject that is not a person – rather for the whole foreground that is in virtual silhouette. It helps marginally, and does tend to wash out the sky some. But the path and the parked cars have more visibility.

Scene = Backlit

Here is the Backlit scene used in the traditional manner – and as was discussed when this scene was introduced above, I find the skin tones just too red. It does fix the lighting, however. All is not lost – other techniques, to be discussed in a future post, involve editing the results of Camera+ in another app. For instance, if we bring this image into PhotoForge2 and apply color correction and some desaturation, we can tone down the red skin and back off the overly saturated colors of their tops.

Scene = Backlit

Here’s another example of the Backlit scene. It does again resolve the fill light issue, but once again oversaturates both the skin tones and the background.

Scene = Backlit

Here is Backlit scene used to attempt to fix this shot where insufficient fill light was available on the subject.

Scene = Clarity

This shows the Clarity scene used to help the lack of fill light. I think it works much better than Backlit in this case. I would still follow up with raising the black levels to bring out details in her shirt.

Scene = Beach

Here are three versions of another backlit scene, with different solutions: the first one uses the Beach scene…

Scene = Flash

This version uses Flash…

Scene = Clarity

and the final version tries Clarity. I personally like this one the best, although I would follow up with raising the deep blacks just a bit to bring out the first girl’s top and pants’ detail.

Scene = Beach

This is a woman at the beach… showing what the Beach scene will do… in my opinion, it doesn’t add anything, and has three issues:  the breaking wave is now clipped, with loss of detail in the white foam; the added contrast reduces the detail in her shirt and pants; and the sand at the bottom of the image has lost detail and become blocky. This exemplifies my earlier comment on this type of scene filter:  use it to change the lighting of your image to ‘look like it was shot at the beach’, not to fix images that were taken at the beach…

Scene = Clarity

Here once again is our best friend Clarity… this scene really brings out the detail in the waves, sand and her clothes. It’s a great use of this filter, and shows the added ‘punch’ that Clarity can often bring to a shot that is correctly exposed, but just a bit flat.

Scene = Beach, followed by Special Filter = Overlay @ 67%

Now here is a very different interpretation of the same original shot. It all goes to what story you want to tell with your shot. The Clarity version above is a better depiction of the scene than either the original or the version using Beach, but the method shown above (using the Special filter Overlay, at 67% intensity) brings a completely different feeling to the shot. The woman becomes the central figure, with the beach only a hint of the surroundings…

Scene = Scenery

Using the traditional approach… the Scenery scene on, well, scenery…  Doesn’t work well – the mid-ground goes dark, the mountains in the rear are too blue, and the foreground bush loses detail and snap.

Scene = Shade

Here is the same scene using Shade. It does bring out the detail in the middle of the image, and warm up the reflection of the sky in the lake.

Scene = Sunset

Another view, this time using Sunset. Here we have many of the same issues as the first shot did using the Scenery version.

Scene = Beach

Now here I have used the Beach scene type. Although maybe not an intuitive choice, I like the results:  the lake has a warmer reflection of the sky, the contrast between the foreground bush and the middle area is increased, but without losing detail in the middle trees; the mountains in the rear have picked up detail, and even the shore on the left of the lake has more punch. Know your tools…

Scene = Portrait

Next shown are five different versions of a group with some challenging parameters:  white dresses (that pick up reflected light), dark pants and jacket on the man, theatrical lighting (it’s actually a white rug!), and bright backlighting on the left. This first version, using Portrait, with its increased contrast, helps the dresses to a more pure white look, but now the man’s head blends right into the picture – not enough black detail to separate the objects.

Special Filter = HDR

Here is a version using the special filter HDR. Very stylized. It does clearly separate all the details; the picture on the wall is now visible, and easily separated from the man’s head. The glow and the heavy tinting of the models’ dresses are just part of the ‘look’…

Special Filter = Faded @ 50%

As overstated as the last version may have been, this one goes the other way. Using the special filter Faded (at 50% intensity), the desaturation afforded by this method removes the tinting of the white dresses (they pick up the turquoise lighting), and the skin tones look more natural. Maybe a bit flat…

Scene = Clarity

Here is the Clarity scene type. While it does separate out his head from the picture again, the enhanced edges don’t really work as well in this shot. The increased local saturation actually causes the rug in the foreground to blend together – the original has more detail. This is one of the potential side-effects of this filter – when the source has highly saturated colors to start with, some weird things can happen.

Scene = Beach

And the last version, using the Beach scene type. The dresses have punch, the skin tones are warmer, but the higher contrast once again merges his head with the picture. The lighting on the rug also looks overdone.

Scene = Scenery

This is a typical landscape shot, treated with the Scenery filter. Doesn’t work. The mountains have gone almost black, the sea has lost its shading and beautiful turquoise color, and the rocks in the foreground have gone harsh.

Scene = Clarity

Now here’s Clarity used on the same scene. This scene type brings out all the detail without overwhelming any one area. The only two minor faults I would point out are the small ‘halo’ effect in the sky near the edges of the mountains, and the emerald area of the ocean (just above the rocks, next to the foam on the shore), which is a little oversaturated. But all in all, a much better filter than Scenery – for this shot. It’s all in using the right tool for the right job.

Scene = Clarity

Now as good as Clarity can be for some things, here it causes a very different effect. Not to say it’s wrong – if you are looking for posterization and noise, then this can be a great effect.

Scene = Darken

Although the Darken scene may not seem at all what one would choose on an already poorly lit scene, it focuses attention purely on the subject, and reduces some of the noise and mottling in her top.

Special Filter = Faded @ 33%

Now here is an interesting solution: using the special filter Faded (at 33% intensity) to reduce the saturation of the scene. This immediately brings out a more natural modeling of her face, makes her right hand look less blocky, and brings a more natural look to the subject in general.

Scene = Clarity

A sunset scene using the Clarity filter. In this case, Clarity is not our BFF… the brick goes oversaturated, and the shadows in the foreground are just too much – the eye gets confused and doesn’t know where to look.

Scene = Darken

Here, the Darken scene type is tried. Not much better than Clarity, above. (Hint: sometimes the best result is to leave things alone… a properly exposed and composed image often is perfect as it stands, without additional meddling…)

Scene = Sunset

Here we have a number of different expressions of a train leaving the station at sunset… this one using the Sunset scene type. While it does add some color to the sky, most of the detail in the shadows is now gone.

Scene = Clarity

This version shows the use of Clarity. Detail that was barely visible now pops out. The shot now is almost a bit too busy – there is so much extraneous detail to the right and left of the train that the focus of the moment has changed…

Special Filter = HDR (100%)

Here’s a very different version, using HDR at full intensity. Stylized – tells a certain story – but the style is almost overwhelming the image.

Special Filter = HDR @ 30%

This is HDR again, but this time at 30% intensity. What a difference! I feel this selection is even better than Clarity – detail in the middle of the shot is visible, but not detracting from the train. There is now just enough information in the shadows to round out the shot, yet not pull the eye from the story.

Scene = Cloudy

The difference between the Cloudy and Shade scene types is useful to understand. Here are two subjects standing in open shade – i.e. only illuminated by open sky that has no clouds. This version is filtered with the Cloudy scene type. Note that the back wall has changed hue from the original (gone warmer), as has the street. The subjects are still a bit dark as well.

Scene = Shade

Here is the same shot, processed with the Shade scene type. The brightness is improved, the color temperature is not warmed up as much, and overall this is a better solution. (Well, after all, they were standing in the shade 🙂 )

Scene = Cloudy

Here is another pair of comparisons between the Cloudy and Shade scene type filters. This choice was less obvious, as the hostess was standing just outside the entrance to a restaurant, in partial shade; but the sky was very overcast – a ‘cloudy’ illumination. Notice that, since the Cloudy filter warms the scene more than the Shade filter does, her skin tone has warmed up considerably, as has her sheer top and the menu. The top and menu are also a bit clipped, losing detail in the highlights.

Scene = Shade

Here is the ‘Shade’ version. Her skin is not as warm, and the top and menu retain more of their original whiteness. The white levels are better as well, with less clipping on the menu and the left side of her top. This shows that often you must try several filter types and make a selection based on the results – this could have easily gone either way, given the nature of the lighting.

Scene = Cloudy

College campus, showing the use of the Cloudy filter. Here this choice is clearly correct. The sky is obviously cloudy <grin> and the resultant warmth added by the filter is appreciated in the scene.

Scene = Shade

This version, using the Shade scene type, is not as effective. The scene is still a bit cold, and the additional brightness added by this filter makes the sky overly bright – the general illumination now somewhat contradicts the feeling of the scene – a cool and cloudy day.

Scene = Concert

We discussed learning from our mistakes… here is why you usually don’t want to use the Concert scene type at a concert…

Scene = Darken

The Darken scene type is called for here to help with the overexposure. While it doesn’t completely fix the problem, it certainly helps.

Scene = Darken

The Darken filter used again to good effect.

Scene = Darken, followed by Special Filter = HDR @ 20%

This example shows a powerful feature of Camera+:  the ability to layer an Effect on top of a Scene type. You can only use one of each, and you can’t put a scene on top of another scene (or an effect on top of an effect) – but the potential of changes has now multiplied enormously. Here, the original scene was overexposed. A combination of the Darken scene type, followed by the HDR filter at 20% intensity, made all the difference.

Scene = Flash

This shows a typical challenging scene to capture correctly – a dark interior looking out to a brightly lit exterior. In the original exposure you can see that the camera exposed for the outdoor portion, leaving the interior very dark. The first attempt to rectify this is using the Flash scene type. While this helped the bookcases in the hall, the exterior is now totally blown out, and the right foreground is still too dark.

Scene = Night

Here is the result of using the Night scene type. Better detail in both the hallway as well as the right foreground – and the exterior is now visible, even though still a bit overexposed.

Special Filter = HDR

This version uses the HDR effect filter – giving the best overall exposure between the outside, the bookcase and the foreground. Ideally, I would follow up with another app and raise the black levels a bit to bring out more detail in the shadows in the foreground and near part of the hall.

Scene = Flash

A night-blooming cactus photographed before sunrise – it’s a bit underexposed. Using the Flash scene type brings out the plant well, but the white flowers are now too hot.

Scene = Night

The Night scene type provides a better result, with good balance between the flowers, plant and trees behind.

Special Filter = HDR

Using the HDR filter to attempt to improve the foreground illumination of this shot. It helps… but the typical style of this tone-mapping filter oversaturates the reds in the wood, and the foreground is still a bit dark.

Scene = Flash

And here’s another attempt, using the Flash scene type. A different set of side effects… the sunlight on the wall is now overexposed, and the foreground is still not ideally lit. Camera+ can’t fix everything… (in this case a different app – PhotoForge2, which has a powerful “Shadows & Highlights” tool – did a better job. We’ll see that when we get around to discussing that app).

Scene = Flash

Here’s a good use for the Flash scene. The original is very dark, the filtered version really does look like a flash had been used.

Scene = Flash

And here’s one that didn’t work so well. The Flash filter didn’t do what a real flash would have done: illuminated the interior without making any difference at all in the exterior lighting. Here, the opposite took place: the sunset sky is blown out, yet the interior isn’t helped out any at all.

Scene = Flash

Flash scene used on shot out airplane window, takeoff from London in late twilight. Doesn’t work for me…

Scene = Night

This time, Night was used. Less dramatic – personally I prefer the original. But it’s a good example to show how the different filters operate.

Scene = Food

Ok, just had to try food with Food (scene type)… You can see the filter at work:  whites are warmed up with more red, the potatoes now look almost like little sausages, and contrast is increased. I would really like it if the Intensity sliders were added to the Scene types as well as the Effects… here a better result would be found with about 40% of this filter dialed in, I believe…

Special Filter = HDR

Main street in Montagu, a little town in South Africa. The HDR filter shows how to help resolve a high contrast scene. I didn’t redo this one, but I would have had a more natural look if I had dialed back the intensity of the HDR filter to about 50%.

Scene = Scenery

The Scenery filter applied to an outdoor scene. I find this too contrasty, and even the sky looks a bit oversaturated.

Scene = Clarity

Here is Clarity applied to the same image. Shadow detail much improved (except right under nearest arch). However the right side of the image looks a bit ‘post-cardy’ (flat and washed out).

Special Filter = HDR @ 35%

HDR filter applied at 35%. A different set of ‘almost but not quite’ issues… Arches in shadows are too blue, the sunlit portions are blown out, and the sky is too saturated. The trees on the right are a big improvement over the previous version (Clarity) however. Sometimes you really do need Photoshop….

Scene = Clarity

This is a tough shot:  extreme brightness differences – my estimate is over 14 stops of exposure – way more than the iPhone camera can handle. So it’s really a matter of how best to interpret a scene that will inevitably have both white and black clipping. I used Clarity on this version – didn’t help out the highlights at all, but did add some detail in the shadows, as well as a bit of punch to her pants.

Special Filter = HDR @ 50%

The HDR effect was used here at 50% intensity. This brought a bit more control to at least the edges of the highlights, and still opened up the shadows. Note the difference in the shaded carpet at lower left of the image between this version and the previous one (done with Clarity). I actually prefer this version – the Clarity one seems a bit too much. Overall, I think this does a better job of this particular scene.

Scene = Night

Showing the use of the Night scene type at the last few minutes of twilight. As is often the case, I prefer the original shot, but wanted to demonstrate the capabilities of this scene type. Technically it did a great job – it just didn’t tell the story in the same way as the original.

Scene = Night

Now this scene is an excellent example of what the Night filter can do in the right circumstance. Almost full dark, only illumination was from store windows, streetlights and headlights.

Scene = Night

The Night scene type bringing out enough detail to make the shot.

Scene = Night

One more Night shot – again, a very good use of this scene type.

Scene = Night, followed by Special Filter = HDR @ 33%

This is an example of ‘stacking’ two corrections on top of each other to fix a shot that really needed help. You don’t always have to use the Night scene type at night… Here, due to both underexposure and backlight, the Night filter was applied first, then HDR effect at 33% intensity on top of that. It’s not a perfect result but it definitely shows you what can be done with these tools.

Scene = Portrait

The Portrait scene type applied. The contrast is too much, and the red saturation makes her skin tones look like a lobster.

Color Filter = So Emo

A very different effect, using the So Emo filter.

Scene = Clarity

Just to show what Clarity does in this instance. Again, not the best choice – the background gets too busy, and her face suffers…

Scene = Portrait

Here is Portrait applied to, well, a portrait type shot. But once again the high contrast of this filter works against the best result:  blown out highlights, skin tones too bright.

Scene = Shade

Sometimes you use filters in not-obvious ways:  the Shade scene type was applied here, and from that we got a slight improvement in background brightness, without driving her top into clipping. You can now see some definition between her hair and the background, and the slight warmth added does not detract from the image.

Scene = Sunset

The Sunset filter applied at sunset… I think this is too much. The extra contrast killed the detail in the trees at right center, and the red roof is artificially saturated now.

Scene = Scenery

The Scenery filter applied at sunset. Looks a bit different than the Sunset filter, above, but has many of the same issues when used on this scene: loss of detail in the shadows, too much red saturation.

Scene = Scenery

The Scenery filter at sunset. If this scene type could be ratcheted down with a slider, as is possible with the Effects, then this shot could be enhanced nicely. The original is just a tad flat looking, but the full Monty of the Scenery filter is way over the top. I would love to see what 20% of this effect would do.

Scene = Shade

This shows the Shade scene type doing exactly what it is supposed to: warm up the skin tones, add a bit of brightness. Altogether a less forlorn look…

Scene = Clarity

The same shot with Clarity applied. Much punchier – while I like the detail it’s almost too much. Again, Camera+ engineers, please bring us intensity sliders for scene types!

Scene = Sunset

Now here’s a use for the Sunset scene type at last. While it may not be the best storytelling tool for this particular shot, it shows the nice punch and improvement in detail and saturation this filter can provide – once the original image is basically soft enough to take the sometimes overpowering influence of this scene type.

Scene = Sunset

Trying the Sunset scene type once again. If I could use this at half strength it would actually make this shot a bit more colorful – but in its current incarnation the saturation is too much.

Scene = Sunset

Last Sunset for this post. I promise… but just to prove that you should never say never… here is a sunset where applying Sunset certainly helps the sky and water. I would like the sign not as saturated, and the increased contrast cost us the few remaining details hidden in the market building at the bottom of the frame.

Scene = Text

This is an example of the Text scene type. Ultra high contrast – really just for what it says (or other special effect you want to create using an almost vertical gamma curve!)

Digital Zoom

Now… one last thing before the end of this post: a very short discussion of Digital Zoom. I have ignored this topic up until now, but it’s really important in iPhonography (actually this affects all cellphone cameras, as well as many small inexpensive digital cameras). ‘Real’ cameras (i.e. all DSLRs and many mid-priced and up digital cameras) use Optical Zoom (or, as in some consumer digital cameras, both optical and digital zoom are offered). The difference is that with optical zoom, the lens elements physically move (i.e. a real zoom lens), changing both the magnification and the field of view that is projected onto the film or digital sensor. The so-called ‘Digital Zoom’ technique is a result of physical lenses that cannot “zoom”. Cellphone lenses are too small to adjust their focal length like a DSLR lens does. Also, the optical complexity of a zoom lens is far greater than that of a prime lens (fixed focal length) – and the light-gathering power of a zoom lens is always significantly less than that of a prime lens. (You can get relatively fast zoom lenses for DSLRs… as long as you have the budget of a small country…)

What the “digital zoom” technique is actually doing is cropping the image on the CMOS sensor (using only a small portion of the available pixels), then digitally magnifying that area back out to the original size of the sensor (in pixels). For example, the iPhone sensor is 3264×2448. If one were to crop to 50% of the original linear dimensions (to 1632×1224), the crop would now cover only 1/4 of the original sensor area. Do the math:  3264×2448 is 8 megapixels; 1632×1224 is only 2 megapixels. What you get for this is an apparent ‘zoom’ – greater perceived magnification, due to the smaller effective sensor area and the fact that the original focal length of the lens has not changed. The 35mm-equivalent focal length of the iPhone camera system is 32mm – by ‘cropping/zooming’ as described, the equivalent focal length scales with the linear crop factor, so it is now 2x greater, or 64mm (in 35mm-equivalent terms). However – and this is a HUGE however – you pay a large price for this: resolution and noise. You now only have a 2-megapixel sensor… not the supersharp 8-megapixel sensor that you started with. This is like stepping all the way back to the iPhone 3G – which had 2MP resolution. Wow! In addition, these 2 megapixels are now “zoomed” back to fill the full size of the original 8MP frame (in memory); essentially this means that each original pixel of the taking sensor now covers 4 pixels in the newly formed zoomed image, so any noise in the original pixel is spread over 4 times the area… The bottom line is that digital zoom ALWAYS makes noisy, low-resolution images.
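The crop arithmetic above can be sketched in a few lines of code. This is a toy illustration using the sensor figures quoted in the text, not Apple’s actual imaging pipeline; the key point is that the equivalent focal length scales with the *linear* crop factor (2x here), while resolution falls with the *area* factor (4x):

```python
# Digital zoom modeled as a crop + upscale, using the figures from the text:
# a 3264x2448 (~8 MP) sensor with a 32mm-equivalent lens, cropped to 1632x1224.
def digital_zoom_cost(full_w, full_h, crop_w, crop_h, equiv_focal_mm):
    crop_factor = full_w / crop_w            # linear crop factor
    megapixels = (crop_w * crop_h) / 1e6     # pixels actually captured
    new_focal = equiv_focal_mm * crop_factor # apparent 35mm-equivalent focal length
    upscale_area = crop_factor ** 2          # output pixels filled per captured pixel
    return crop_factor, megapixels, new_focal, upscale_area

factor, mp, focal, spread = digital_zoom_cost(3264, 2448, 1632, 1224, 32)
print(factor)  # 2.0  – linear zoom factor
print(mp)      # ~2.0 – megapixels left of the original ~8
print(focal)   # 64.0 – mm equivalent
print(spread)  # 4.0  – each captured pixel stretched over 4 output pixels
```

Halving each linear dimension quarters the pixel count – which is exactly why a modest-looking digital zoom costs so much resolution.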

Now… just like in any good sales effort, you will hear grand ‘snake oil’ stories of how good this camera or that camera does at ‘digital zoom’ – and that it’s “just as good” as optical zoom. BS. Period. You can’t change the laws of physics… What is possible (and bear in mind that this is a really small band-aid on a big owie…) is to use some really sophisticated noise reduction and image processing algorithms to try to make this pot of beans look like filet again – and most hardware and software camera manufacturers try at some level. Yes, if such attempts succeed, then you are a LITTLE better off than you were before. Not much. So what’s the answer? Don’t use digital zoom. Just say no. Unless you can accept the consequences (noisy, low resolution images). We’ll discuss this further in the last part of this series on Tips & Techniques for iPhonography, but for now you can see why I don’t address it as a feature.

Ok, that’s it. Really. Hope you have a bit more info now about this very useful app. Many of my upcoming posts on the rest of the software tools for the iPhone will not be nearly as detailed, but this was an opportunity to discuss many topics that are germane to most photography apps, offer a bit of a guide to a very popular and useful app that currently publishes no manual or even a help screen, and demonstrate the thought process involved in working with filters, lighting and so on.

Many thanks for your attention.

iPhone4S – Section 4a: Camera app

March 14, 2012 · by parasam

Camera   The original iPhone photo app. Pretty simple – Use camera icon button to take picture, or use Volume Up (+) on side of phone. Option buttons on top of screen:

  • Flash: On/Auto/Off
  • Options:  Grid – On/Off;  HDR – On/Off
  • Rear-facing/Front-facing camera selector

Features/Buttons on bottom of screen:

  • Last shot preview thumbnail
  • Shutter release button
  • Still/Video selector

Blue box is area where both exposure and focus are measured. Option buttons across top of screen.

When "Options" is selected, you can choose to display a grid overlay on the screen as an aid in composition, and turn the HDR feature on or off.

When 'video' mode is selected, the options change, and the shutter button changes to a 'record' button. Press to start recording, press again to stop.

iPhone4S – Section 4: Software

March 13, 2012 · by parasam

This section of the series of posts on the iPhone4S camera system will address the all-important aspect of software – the glue that connects the hardware we discussed in the last section with the human operator. Without software, the camera would have little function. Our discussion will be divided into three parts:  Overview; the iOS camera subsystem of the Operating System; and the actual applications (apps) that users normally interact through to take and process images.

As the audience of this post will likely cover a wide range of knowledge, I will try to not assume too much – and yet also attempt not to bore those of you who likely know far more than I do about writing software and getting it to behave in a somewhat consistent fashion…

Overview

The iPhone, surprise-surprise – is a computer. A full-fledged computer, just like what sits on your desk (or your lap). It has a CPU (brain), memory, graphics controller, keyboard, touch surface (i.e. mouse), network card (WiFi & Bluetooth), a sound card and many other chips and circuits. It even has things most desktops and laptops don’t have:  a GPS radio for location services, an accelerometer (a really tiny gyroscope-like device that senses movement and position of the phone), a vibrating motor (to bzzzzzz at you when you get a phone call in a meeting) – and a camera. A rather cool, capable little camera. Which is rather the point of our discussion…

So… like any good computer, it needs an operating system – a basic set of instructions that allows the phone to make and receive calls, data to be written to and read from memory, information to be sent and retrieved via WiFi – and on and on. In the case of the iDevice crowd (iPod, iPhone, iPad) this is called iOS. It’s a specialized, somewhat scaled down version of the full-blown OS that runs on a Mac. (Actually it’s quite different in the details, but the concept is exactly the same). The important part of all this for our discussion is that a number of basic functions that affect camera operation are baked into the operating system. All an app has to do is interact via software with these command structures in the OS, present the variables to the user in a friendly manner (like turning the flash on or off), and most importantly, take the image data (i.e. the photograph) and allow the user to save it or modify it, based on the capability of the app in question.

The basic parameters that are available to the developer of an app are the same for everyone. It’s an equal playing field. Every app developer has exactly the same toolset, the same available parameters from the OS, and the same hardware. It’s up to the cleverness of the development team to achieve either brilliance or mediocrity.

The Core OS functions – iOS Camera subsystem

The following is a very brief introduction to some of the basic functions that the OS exposes to any app developer – which forms the basis for what an app can and cannot do. This is not an attempt to show anyone how to program a camera app for the iPhone! Rather, a small glimpse into some of the constraints that are put on ALL app developers – the only connection any app has with the actual hardware is through the iOS software interface – also known as the API (Application Programming Interface). For instance, Apple passes on to the developers through the API only 3 focus modes. That’s it. So you will start to see certain similarities between all camera apps, as they all have common roots.

There are many differences, due to the way a given developer uses the functions of the camera, the human interface, the graphical design, the accuracy and speed of computations in the app, etc. It’s a wide open field, even if everyone starts from the same place.

In addition, the feature sets made available through the iOS API change with each hardware model, and can (and do!) change with upgrades of the iOS. Of course, each time Apple changes the underlying API, each app developer is likely to need to update their software as well. So then you’ll get the little red number on your App Store icon, telling you it’s time to upgrade your app – again.

The capabilities of the two cameras (front-facing and rear-facing) are markedly different. In fact, all of the discussion in this series has dealt only with the rear-facing camera. That will continue to be the case, since the front-facing camera is of very low resolution, intended pretty much just to support FaceTime and other video calling apps.

Basic iOS structure

The iOS is like an onion, layers built upon layers. At the center of the universe… is the Core. The most basic is the Core OS. Built on top of this are additional Core Layers: Services, Data, Foundation, Graphics, Audio, Video, Motion, Media, Location, Text, Image, Bluetooth – you get the idea…

Wrapped around these “apple cores” are Layers, Frameworks and Kits. These Apple-provided structures further simplify the work of the developer, provide a common and well tuned user interface, and expand the basic functionality of the core systems. Some examples are:  Media Layer (including MediaPlayer, MessageUI, etc.); the AddressBook Framework; the Game Kit; and so on.

Our concern here will be only with a few structures – the whole reason for bringing this up is to allow you, the user, to understand what parameters on the camera and imaging systems can be changed and what can’t.

Focus Modes

There are three focus modes:

  • AVCaptureFocusModeLocked: the focal area is fixed.

This is useful when you want to allow the user to compose a scene then lock the focus.

  • AVCaptureFocusModeAutoFocus: the camera does a single scan focus then reverts to locked.

This is suitable for a situation where you want to select a particular item on which to focus and then maintain focus on that item even if it is not the center of the scene.

  • AVCaptureFocusModeContinuousAutoFocus: the camera continuously auto-focuses as needed.

Exposure Modes

There are two exposure modes:

  • AVCaptureExposureModeLocked: the exposure level is fixed.
  • AVCaptureExposureModeContinuousAutoExposure: the camera continuously changes the exposure level as needed.

Flash Modes

There are three flash modes:

  • AVCaptureFlashModeOff: the flash will never fire.
  • AVCaptureFlashModeOn: the flash will always fire.
  • AVCaptureFlashModeAuto: the flash will fire if needed.

Torch Mode

Torch mode is where a camera uses the flash continuously at a low power to illuminate a video capture. There are three torch modes:

  •    AVCaptureTorchModeOff: the torch is always off.
  •    AVCaptureTorchModeOn: the torch is always on.
  •    AVCaptureTorchModeAuto: the torch is switched on and off as needed.

White Balance Mode

There are two white balance modes:

  •    AVCaptureWhiteBalanceModeLocked: the white balance mode is fixed.
  •    AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: the camera continuously changes the white balance as needed.

You can see from the above examples that many of the features of the camera apps you use today inherit these basic structures from the underlying AVFoundation API. There are obviously many, many more parameters available for control by a development team – depending on whether you are doing basic image capture, video capture, audio playback, modifying images with built-in filters, etc.

While we are on the subject of core functionality exposed by Apple, let’s discuss camera resolution.

Yes, I know we have heard a million times already that the iPhone4S has an 8MP maximum resolution (3264×2448). But there ARE other resolutions available. Sometimes you don’t want or need the full resolution – particularly if the photo function is only a portion of your app (ID, inventory control, etc.) – or even as a photographer you may want more storage capacity when a lower-resolution image is acceptable for the purpose at hand.

It’s almost impossible to find this data, even on Apple’s website. Very few apps give access to different resolutions, and the ones that do don’t give numbers – it’s ‘shirt sizes’ [S-M-L]. Deep in the programming guidelines for AVFoundation I found a parameter, AVCaptureStillImageOutput, that allows ‘presetting the session’ to one of the values below:

Preset (Still)          Resolution

Photo                   3264×2448
High                    1920×1080
Med                     640×480
Lo                      192×144

Preset (Video)          Resolution

1080P                   1920×1080
720P                    1280×720
480P                    640×480

I then found one of the very few apps that support ALL of these resolutions (almost DSLR) and shot test stills and video at each resolution to verify. Everything matched the above settings EXCEPT for the “Lo” preset in still image capture. The output frame measured 640×480, the same as “Med” – however the image quality was much lower. I believe that the actual image IS captured at 192×144, but is then scaled up to 640×480 – why, I am not sure, but it is apparent that the Lo image is of far lower quality than Med. The file size was smaller for the Lo image – but not by enough that I would ever use it. On the tests I shot, Lo = 86kB, Med = 91kB. The very small difference in size is not worth the big drop in quality.

So… now you know. You may never have need of this, or not have an app that supports it – but if you do require the ability to shoot thousands of images and have them all fit in your phone, now you know how.
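To put the storage tradeoff into numbers, here is a rough back-of-the-envelope calculation, using the 91kB Med-preset file size measured above and an assumed (not measured) ~2.5MB for a typical full-resolution 8MP JPEG – actual JPEG sizes vary with scene content:

```python
# Rough capacity math: how many stills fit in 1 GB of free space.
# The 91 kB "Med" figure is the measurement quoted above; the 2.5 MB
# full-resolution figure is an assumption for a typical 8 MP JPEG.
GB = 1_000_000_000       # decimal gigabyte, in bytes

med_jpeg = 91_000        # measured "Med" (640x480) still, bytes
full_jpeg = 2_500_000    # assumed typical 8 MP (3264x2448) still, bytes

med_per_gb = GB // med_jpeg
full_per_gb = GB // full_jpeg

print(med_per_gb)    # 10989 - roughly 11,000 images per GB at Med
print(full_per_gb)   # 400 images per GB at full resolution
```

Roughly a 27:1 difference – which is why the low-resolution presets exist at all.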

There are two other important aspects of image capture that are set by the OS and not changeable by any app:  color space and image compression format. These are fixed, but different, for still images and video footage. The color space (which for the uninitiated is essentially the gamut – or range of colors – that can be reproduced by a color imaging system) is set to sRGB. This is a common and standard setting for many digital cameras, whether full-sized DSLRs or cellphones.

It’s beyond the scope of this post to get into color space, but I personally will be overjoyed when the relatively limited gamut of sRGB is put to rest… however, it is appropriate for the iPhone and other cellphone camera systems due to the limitations of the small sensors.

The image compression format used by the iPhone (all models) is JPEG, producing the well-known .jpg file format. Additional comments on this format, and potential artifacts, were discussed in the last post. Since there is nothing one can do about this, no further discussion at this time.

In the video world, things are a little different. We actually have to be aware of audio as well – we get stereo audio along with the video, so we have two different compression formats to consider (audio and video), as well as the wrapper format (think of this as the envelope that contains the audio and video track together in sync).

One note on audio:  if you use a stereo external microphone, you can record stereo audio along with the video shot by the iPhone4S. This requires an external device which connects via the 30-pin docking connector. You will get far superior results – but of course it’s not as convenient. Video recordings made with the on-board microphone (same one you use to speak into the phone) are mono only.

The parameters of the video and audio streams are detailed below: (this example is for the full 1080P resolution)

General

Format : MPEG-4
Format profile : QuickTime
Codec ID : qt
Overall bit rate : 22.9 Mbps

Video

ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Baseline@L4.1
Format settings, CABAC : No
Format settings, ReFrames : 1 frame
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Bit rate : 22.4 Mbps
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Rotation : 90°
Frame rate mode : Variable
Frame rate : 29.500 fps
Minimum frame rate : 15.000 fps
Maximum frame rate : 30.000 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.367
Title : Core Media Video
Color primaries : BT.709-5, BT.1361, IEC 61966-2-4, SMPTE RP177
Transfer characteristics : BT.709-5, BT.1361
Matrix coefficients : BT.709-5, BT.1361, IEC 61966-2-4 709, SMPTE RP177

Audio

ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Bit rate mode : Constant
Bit rate : 64.0 Kbps
Channel(s) : 1 channel
Channel positions : Front: C
Sampling rate : 44.1 KHz
Compression mode : Lossy
Title : Core Media Audio

The highlights of the video/audio stream format are:

  • H.264 (MPEG-4 AVC) video compression, Baseline Profile @ Level 4.1, 22 Mb/s
  • QuickTime wrapper (.mov)
  • AAC-LC audio compression, 44.1kHz, 64kb/s

The color space for the video is the standard adopted for HD television, Rec709. Importantly, this means that videos shot on the iPhone will look correct when played out on an HDTV.

This particular sample video I shot for this exercise was recorded at just under 30 frames per second (fps); the video camera supports a range of 15-30 fps, controlled by the application.
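As a quick cross-check, the “Bits/(Pixel*Frame)” figure reported above is simply the video bit rate divided by the pixel throughput (width × height × frame rate):

```python
# Sanity-check the "Bits/(Pixel*Frame)" figure from the stream data above.
bitrate = 22_400_000        # 22.4 Mbps video bit rate
width, height = 1920, 1080  # 1080P frame size
fps = 29.5                  # average frame rate reported for this clip

bits_per_pixel_frame = bitrate / (width * height * fps)
print(round(bits_per_pixel_frame, 3))  # ~0.366, matching the reported 0.367
```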

Software Applications for Still & Video Imaging on the iPhone4S

The following part of the discussion will cover a few of the apps that I use on the iPhone4S. These are just what I have come across and find useful – this is not even close to all the apps available for the iPhone for imaging. I obtained all of the apps via the normal retail Apple App Store – I have no relationship with any of the vendors – they are unaware of this article (well, at least until it’s published…)

I am not a professional reviewer, and take no stance as to absolute objectivity – I do always try to be accurate in my observations, but reserve the right to have favorites! The purpose in this section is really to give examples of how a few representative apps manage to expose the hardware and underlying iOS software to the user, showing the differences in design and functionality.

These apps are mostly ‘purpose-built’ for photography – as opposed to some other apps that have a different overall purpose but contain imaging capabilities as part of the overall feature set. One example (that I have included below) is EasyRelease, an app for obtaining a ‘model release’ [legal approval from the subject to use his/her likeness for commercial purposes]. This app allows taking a picture with the iPhone/iPad for identification purposes – so has some very basic image capture abilities – it’s not a true ‘photo app’.

BTW, this entire post has been focused on only the iPhone camera, not the iPad (both 2nd & 3rd generation iPads contain cameras) – I personally don’t think a tablet is an ideal imaging device – it’s more a handy accessory, if you have your tablet out and need to take a quick snap, than a camera. Evidently Apple feels this way as well, since the camera hardware in the iPads has always lagged significantly behind that of the iPhone. However, most photo apps will work on both the iPad as well as the iPhone (even on the 1st generation model – with no camera), since many of the apps support working with photos from the Camera Roll (library) as well as directly from the camera.

I frequently work this way – shoot on iPhone, transfer to iPad for easier editing (better for tired eyes and big fingers…), then store or share. I won’t get into the workflows of moving images around – it’s not anywhere near as easy as it should be, even with iCloud – but it’s certainly possible and often worth the effort.

Here is the list of apps that will be covered. For quick reference I have listed them all below with a simple description, a more detailed set of discussions on each app follows.

[Note:  due to the level of detail, including many screenshots and photo examples used for the discussion of each app, I have separated the detailed discussions into separate posts – one for each app. This allows the reader to select only the app(s) they may be interested in, as well as keeping each individual post to a reasonable size. This is important for mobile readers…]

Still Imaging

Each of the app names (except for the original Camera) is a link that will take you to the corresponding page in the App Store.

Camera  The original photo app included on every iPhone. Basic but intuitive – and of course the biggest plus is the ability to fast-launch this without logging in to the home page first. For street photography (my genre) this is a big feature.

Camera+  I use this as much for editing as shooting; its biggest advantage over the native iPhone camera app is that you can set different parts of the frame for exposure and focus. The info covers the just-released version 3.0

Camera Plus Pro  This is similar to the above app (Camera+) – some additional features, not the least of which is that it shoots video as well as still images. Although made by a different company, it has many similar features, filters, etc. It allows for some additional editing functions and features ‘live filters’ – where you can add the filter before you start shooting, instead of as a post-production workflow as in Camera+. However, there are tradeoffs (compression ratio, shooting speed, etc.)  Compare the apps carefully – as always, know your tools…  {NOTE: There are two different apps with very similar names: Camera+, made by TapTapTap with the help of pixel wizard Lisa Bettany; and Camera Plus, made by Global Delight Technologies – who also make Camera Plus Pro – the app under discussion here. Camera+ costs $0.99 at the time of this post; Camera Plus is free; Camera Plus Pro is $1.99 — are you confused yet? I was… to the point where I felt I needed to clarify this situation of unfortunately very similar brand names for somewhat similar apps – but there are indeed differences. I’m going to be as objective in my observations as possible. I am not reviewing Camera Plus, as I don’t use it. Don’t infer anything from that – this whole blog is about what I know about what I personally use. I will be as scientific and accurate as possible once I write about a topic, but it’s just personal preference as to what I use}

almost DSLR  The closest thing to fully manual control of the iPhone camera you can get. Takes some training, but is very powerful once you get the hang of it.

ProHDR I use this a lot for HDR photography. Pic below was taken with this. It’s unretouched! That’s how it came out of the camera…

Big Lens  This allows you to manually ‘blur’ the background to simulate shallow depth of field. Quite useful, since the wide-angle lens (roughly 32mm in 35mm-equivalent terms) puts almost everything in focus.

Squareready  If you use Instagram then you know you need to upload in square format. Here’s the best way to do that.

PhotoForge2  Powerful editing app. Basically Photoshop on the iPhone.

Snapseed  Another very good editing app. I use this for straightening pix, as well as for the ability to tweak small areas of a picture differently. On some iPhone snaps I have adjusted 9 different areas of the picture with things like saturation, contrast, brightness, etc.

TrueDoF  This one calculates true depth-of-field for a given lens, sensor size, etc. I use this when shooting DSLR to plan my range of focus once I know my shooting distance.

OptimumCS-Pro  This is sort of the inverse of the above app – here you enter the depth of field you want, then OCSP tells you the shooting distance and aperture you need for that.
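Both of these apps are built on the standard thin-lens depth-of-field formulas (hyperfocal distance, then the near and far limits of acceptable focus). A minimal sketch – the example numbers are generic full-frame values, not taken from either app:

```python
# Standard depth-of-field math. All distances in millimetres;
# c is the circle of confusion for the format in use.
def hyperfocal(f, N, c):
    """Hyperfocal distance for focal length f, aperture N, CoC c."""
    return f * f / (N * c) + f

def dof_limits(f, N, c, s):
    """Near and far limits of acceptable focus at subject distance s."""
    H = hyperfocal(f, N, c)
    near = s * (H - f) / (H + s - 2 * f)
    far = float('inf') if s >= H else s * (H - f) / (H - s)
    return near, far

# Illustrative example: full-frame 50 mm lens at f/8 (CoC 0.03 mm),
# subject focused at 3 m.
near, far = dof_limits(50, 8, 0.03, 3000)
print(round(near), round(far))  # 2338 4185 - i.e. sharp from ~2.3 m to ~4.2 m
```

Focusing at or beyond the hyperfocal distance H pushes the far limit to infinity – which is exactly the trick these calculators help you plan.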

Iris Photo Suite  A powerful editing app, particularly in color balance, changing histograms, etc. Can work with layers like Photoshop, perform noise reduction, etc.

Filterstorm  I use this app to add visible watermarks to images, as well as many other editing functions. Works with layers, masks, variable brushes for effects, etc.

Genius Scan+  While this app was intended for scanning documents with the camera to pdf (and I use it for that as well), I found that it works really well to straighten photos… like when you are shooting architecture and have unavoidable keystoning distortion… Just be sure to pull back and give yourself some surround on your subject, as the perspective-cropping technique that is used to straighten costs you some of your frame…

Juxtaposer  This app lets you layer two different photos onto each other, with very controllable blending.

Frame X Frame  Camera app, used for stop motion video production as well as general photography.

Phonto  One of the best apps for adding titles and text to shots.

SkipBleach  This mimics the effect of skipping (or reducing) the bleach step in photochemical film processing. It’s what gives that high contrast, faded and harsh ‘look’.

Monochromia  You probably know that getting a good B&W shot out of a color original is not as simple as just desaturating… here’s the best iPhone app for that.

MagicShutter  This app is for time exposures on iPhone, also ‘light painting’ techniques.

Easy Release  Professional model release. Really, really good – I use it on iPad and have never gone back to paper. Full contractual terms & conditions, you can customize with your additional wording, logo, etc. – a relatively expensive app ($10) but totally worth it in terms of convenience and time saved if you need this function.

Photoshop Express  This is actually a bit disappointing for a $5 app – others above do more for less. The exception is the noise reduction (a new feature), which alone is worth the price. It’s really, really good.

Motion Imaging

Movie*Slate  A very good slate app.

Storyboard Composer  Excellent app for building storyboards from shot or library photos, adding actors, camera motion, script, etc. Powerful.

Splice  Unbelievable – a full video editor for the iPhone/iPad. Yes, you can: drop movies and stills on a timeline, add multiple sound tracks and mix them, work in full HD, apply loads of video and audio efx, add transitions, burn in titles, resize, crop, etc. etc. Now that doesn’t mean that I would choose to edit my next feature on a phone…

iTC Calc  The ultimate time code app for iDevices. I use on both iPad and iPhone.

FilmiC Pro  Serious movie camera app for iPhone. Select shooting mode, resolution, 26 frame rates, in-camera slating, colorbars, multiple bitrates for each resolution, etc. etc.

Camera Plus Pro  This app is listed under both sections, as it has so many features for both still and motion photography. The video capture/edit portion even has numerous filters that can be used during capture.

Camcorder Pro  Simple but powerful HD camera app. Anti-shake and other features.

This concludes this post on the iPhone4S camera software. Please check out the individual posts following for each app mentioned above. I will be posting each app discussion as I complete it, so it may be a few days before all the app posts are uploaded. Please remember these discussions on the apps are merely my observations on their behavior – they are not intended to be a full tutorial, operations manual or other such guide. However, in many cases, the app publisher offers little or no extra information, so I believe the data provided will be useful.

iPhone4S – Section 3: Specifications & Hardware

March 11, 2012 · by parasam

This chapter of my series on the iPhone4S attempts to share what I have discovered on the actual hardware device – restricted to the camera in the iPhone. While this is specific to the iPhone, this is also representative of most high quality cellphone cameras.

First, a few notes on how I went about this, and some acknowledgements for those who discovered these bits first. All of the info I am sharing in this post was derived from the public internet. Where feasible I have tried to make direct acknowledgment of the source, but the formatting of this blog doesn’t always allow that without confusion (footnotes not supported, etc.) so I will insert a short list of sources just below. Although this info was pulled from the web, it has taken a LOT of research – it is not easy to find, and often the useful bits are buried in long, sometimes boring epistles on the entire iPhone – I want to focus just on the camera.

Apple, more than most manufacturers, is an incredibly secretive company. They impose highly onerous stipulations on all their vendors against saying anything at all about the work they do for Apple. Apple publishes only the vaguest of specifications, and often that is not enough for a photographer who wants to get the most from his or her hardware. This policy will likely never change, so the continued efforts of myself and others will be required to unearth the useful bits about the device so we can use it to its fullest potential.

Here are some of the sources/people that published information on the iPhone4S that were used as sources for this article:

engadget.com
apple.com
iFixit.com
chipworks.com
arstechnica.com
sony.com
omnivision.com
jawsnap.net
whatdigitalcamera.com – Nigel Atherton
bcove.me
wired.com
geekbench.com
iprolens.com
eoshd.com
macrumors.com
forbes.com
image-sensors-world.blogspot.com
opco.bluematrix.com
oppenheimer.com
barons.com
motleyfool.com
isuppli.com
anandtech.com
teledyne.com
thephoblographer.com
popphoto.com
dvxuser.com – Barry Green
campl.us – Jonathan

Often the only way to finally arrive at a somewhat accurate idea of what made the iPhone tick was to study the ordering patterns of Chinese supply companies – using publicly available financial statements; review observations and suppositions from a large number of commentators and look for sufficient agreement; find labs that tore the phones apart and reverse-engineered them; and often using my decades of experience as a scientist and photographer to intuit the likelihood of a particular way of doing things. It all started with a bit of curiosity on my part – I had no idea what a lengthy treasure hunt this would turn out to be. The research for this chapter has taken several months (after all, I have a day job…) – and then some effort to weed through all the data and assemble what I believe to be as accurate an explanation of what goes on inside this little device as possible.

One important thing to remember:  Apple is in the habit of sourcing parts for their phone from two or more suppliers – a generally accepted business practice, since a single-source supplier could be a problem if that company had either financial or physical difficulties in fulfillment. This means that a description, photos, etc. of one vendor’s parts may not hold true for all iPhone4S models or inventory – but the general principles will be accurate.

Specifications

This post will be presented in three parts:  the hardware specs first, followed by details of the construction of the iPhone camera system (with photos of the insides of the phone/camera), then some examples of photos taken with various models of the iPhone and some DSLR cameras for comparison – this to show the relative capability of the iPhone hardware. The software apps that are the other half of the overall imaging system will be discussed in the next post.

iPhone4S detailed specs

Camera Sensor                 Omnivision OV8830 or Sony IMX105
Sensor Type                   CMOS-BSI (Complementary Metal Oxide Semiconductor – Backside Illumination)
Sensor Size                   1/3.2″ (4.54 x 3.42 mm)
Pixel Size                    1.4 µm
Optical Elements              5 plastic lens elements
Focal Length                  4.28 mm
Equivalent Focal Length (ref. to 35mm system) – Still     32 mm
Equivalent Focal Length (ref. to 35mm system) – Video     42 mm
Aperture                      f/2.4
Angle of View – Still         62°
Aspect Ratio – Still          4:3
Angle of View – Video         46°
Aspect Ratio – Video          16:9
Shutter Speed – Still         1/15 sec – 1/2000 sec
ISO Rating                    64 – 800
Sensor Resolution – Still     3264 x 2448 (8 MP)
Sensor Resolution – Motion    1920 x 1080 (1080P HD)
External Size of Camera System Module     8.5 mm W x 8.5 mm L x 6 mm D
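The 32 mm still-equivalent figure above can be sanity-checked from the sensor geometry – the 35mm-equivalent focal length is the actual focal length times the ratio of the sensor diagonals (the video equivalent is longer because 1080P uses only a central crop of the sensor). A quick check:

```python
import math

# Crop factor from sensor diagonals: a full 35mm frame is 36 x 24 mm.
sensor_w, sensor_h = 4.54, 3.42     # iPhone4S sensor dimensions, mm
ff_diag = math.hypot(36, 24)        # ~43.3 mm full-frame diagonal
sensor_diag = math.hypot(sensor_w, sensor_h)

crop = ff_diag / sensor_diag        # ~7.6x crop factor
equiv_still = 4.28 * crop           # actual focal length x crop factor
print(round(crop, 1), round(equiv_still))  # 7.6 33 - close to the quoted 32 mm
```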

Features:

  • Video Image Stabilization
  • Temporal Noise Reduction
  • Hybrid IR Filter
  • Improved Automatic White Balance (AWB)
  • Improved light sensitivity
  • Macro focus down to 3”
  • Improved Color Accuracy
  • Improved Color Uniformity

Discussion on Specifications

iPhone4S Camera Assembly

Sensor – “Improved Light Sensitivity”

We’ll start with some basics on the heart of the camera assembly, the sensor. There are two types of solid-state devices that are used to convert light into a set of electrical signals that can eventually be computed into an image file:  CCD (Charge Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor). CCD is an older technology, but produces superior images due to the way it’s constructed. However, these positive characteristics come at the cost of more outboard circuitry, higher power consumption, and higher cost. These CCD arrays are used almost exclusively in mid-to-high-end DSLR cameras. CMOS offers lower cost, much lower power consumption and requires fewer off-sensor electronics. For these reasons all cellphone cameras, including the iPhone, use CMOS sensors.

For those slightly more technically inclined, here is a good paragraph from Teledyne.com that sums up the differences:

Both types of imagers convert light into electric charge and process it into electronic signals. In a CCD sensor, every pixel’s charge is transferred through a very limited number of output nodes (often just one) to be converted to voltage, buffered, and sent off-chip as an analog signal. All of the pixel can be devoted to light capture, and the output’s uniformity (a key factor in image quality) is high. In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise-correction, and digitization circuits, so that the chip outputs digital bits. These other functions increase the design complexity and reduce the area available for light capture. With each pixel doing its own conversion, uniformity is lower. But the chip can be built to require less off-chip circuitry for basic operation.

Apple claims “73% more light” for the sensor used in the iPhone4S. Whatever that means… 73% of what?? Anyway, here is what, after some web-diving, I think is meant:  73% more light is converted to electricity as compared to the sensor used in the iPhone4. The reason is an improvement in the design of the CMOS sensor. The new device uses a technology called “Back Side Illumination”, or BSI. To understand this we must briefly discuss the way a CMOS sensor is built.

One of the big differences between CCD and CMOS sensors is that the CMOS chip has a lot of electronics built right into the surface of the sensor (this pre-processes the captured light and digitizes it before it leaves the sensor, greatly reducing the amount of external processing that must be done with additional circuitry in the phone). In the original CMOS design (FSI – or Front Side Illumination), the light had to pass through the layers of transistors, etc. before striking the diode surface that actually absorbs the photons and converts them to electricity. With BSI, the pixel is essentially “turned upside down” so the light strikes the back of the pixel first, increasing the device’s sensitivity to light, and reducing cross-talk to adjacent pixels. Here is a diagram from OmniVision, one of the suppliers of sensors for iPhone products:

Notice that the iPhone4S has 1.4 µm pixels, so we will assume that if a given camera is using an OmniVision chip, it is provisioned with BSI, as opposed to BSI-2, technology. (Remember that Sony also apparently makes sensors for the iPhone4S using similar technology.)

Sensor – Hybrid IR Filter

Although Apple has never clarified what is meant by this term, some research gives us a fairly good idea of this improvement. CMOS sensors are sensitive to a wide range of light, including IR (Infra Red) and UV (Ultra Violet). Since both of these light frequencies are outside the visible spectrum, they don’t contribute anything to a photograph, but can detract from it. Both IR and UV cause problems that result in diffraction, color non-uniformity and other issues in the final image.

For cost and manufacturing reasons, we believe that prior to the iPhone4S, the filter used was a simple thin-film IR filter (essentially another layer deposited on top of the upper-most layer of the sensor). These ‘thin-film’ IR filters have several downsides:  due to their very thin design, they are subject to diffraction as the angle of the incident light on the surface of the filter/sensor changes – this leads to color gradations (non-uniformity) over the area of the image. Previously reported “magenta/green circle” issues with some images taken with the iPhone4 are likely due to this issue.

Also, the thin-film filter employed in the iPhone4 offered only a partial reduction in IR, not total by any means. This has been proven by taking pictures of IR laser light with an iPhone4 – in which the laser is clearly visible! This would not be the case if the IR filter were efficient. Since silicon (the base material of chips) is transparent to IR light, the extraneous IR light rays bounce around off the metal structures that make up the CMOS circuitry, adding reflections to the image and affecting color balance, etc. UV light, while not visible to the human eye, is absorbed by the light-sensitive diode and therefore adds noise to the image.

It appears that a proper ‘thick-film’ combination IR/UV filter has been fitted to the iPhone4S camera assembly, right on top of the sensor assembly. The proof of a more effective filter was a test photograph of the same IR lasers used in the iPhone4 test – this time no laser light was visible on the iPhone4S. The color balance does appear to be better, with more uniformity, and less change of color gradation as the camera is moved about its axis.

A good test to try (and this is useful for any camera, not just a cellphone) is to evenly illuminate a large flat white wall (easier said than done BTW! – use a good light meter to ensure truly even illumination – this usually requires multiple diffuse light sources). You will need to put some small black targets on the wall (print a very large “X” on white paper and stick it to the wall in several places) so the auto-focus in the iPhone will work (the test needs to be focused on the wall for accuracy in the result). Then take several pictures, starting with the camera perfectly parallel to the wall, both horizontally and vertically. Then angle the camera very slightly (only a few degrees) and take another shot. Repeat this a few times, angling the camera both horizontally and vertically. This really requires a tripod. Ensure that the white wall is the only thing captured in the shot (don’t turn the camera so much that the edge of the wall is visible). Stay back as far as possible from the wall – this won’t be more than a few feet, unfortunately, due to the wide-angle lens used on the iPhone – as this will give more uniform results.

Ideally, all the shots should be pure white (except for the black targets). Likely, you will see some color shading creep in here and there. To really see this, import the shots into Photoshop or similar image application, enlarge to full screen, then increase the saturation. If the image has no color in it, increasing the saturation should produce no change. If there are color non-uniformities, increasing the saturation will make them more visible.
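If you’d rather not fire up Photoshop, the saturation-boost step can be sketched in a few lines using Python’s standard-library colorsys module (a per-pixel stand-in for the Photoshop step, not anything these apps actually use):

```python
import colorsys

def boost_saturation(rgb, factor):
    """Scale a pixel's HSV saturation by `factor` (RGB as 0..1 floats)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, min(1.0, s * factor), v)

# A truly neutral gray pixel has zero saturation - boosting changes nothing:
print(boost_saturation((0.5, 0.5, 0.5), 4))   # (0.5, 0.5, 0.5)

# A pixel with a faint magenta cast becomes obviously tinted when boosted,
# which is exactly how the color non-uniformities are made visible:
print(boost_saturation((0.52, 0.50, 0.52), 4))
```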

The claims for “improved color accuracy” and “improved color uniformity” are both most likely due to this filter as well.

Sensor – Improved Automatic White Balance (AWB)

The iPhone4 was notorious for poor white balance, particularly under fluorescent lighting. The iPhone4S shows notable improvement in this area. While AWB is actually a function of software and the information received from the sensor, it is discussed here as well – more on this function will be introduced in the next section on camera apps and basic iPhone imaging software. The improved speed of this sensor, along with the better color accuracy discussed above, contributes to more accurate data being supplied to the software so that better white balance is achieved.

Sensor – Full HD (1080P) for video

A big improvement in the iPhone4S is the upgrade from 720P video to 1080P. The resolution is now 1920×1080 at the highest setting, allowing full HD to be shot. Since this resolution uses only a portion of the image sensor’s overall resolution (3264 x 2448), another feature is made possible – video image stabilization. One of the largest impediments to high quality video from cellphones is camera movement – which is often jerky and materially detracts from the image. To help ameliorate this problem, the solution is to compare subsequent frames and match up the edges of the frame to each other – this essentially offsets the camera movement on a frame-by-frame basis.

In order for this to be possible, the image sensor must be larger than the image frame – which is true in this case. This allows for the image of the scene to be moved around and offset so the frames can all be stacked up on top of each other, reducing the apparent image movement. There are side-effects to this image-stabilization method:  some softness results from the image processing steps, and this technique works best for small jerky movements such as result from hand-held videography. This method does not really help larger movements (car, airplane, speedboat, roller-coaster).
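Apple’s actual stabilization algorithm is proprietary, but the frame-matching idea described above can be sketched with a simple sum-of-absolute-differences (SAD) search: try small shifts of the new frame against the previous one and keep the shift that lines up best. The frame sizes, search window, and test pattern below are all my own illustrative choices.

```python
import random

def best_offset(prev, curr, search=2):
    """Find the integer (dy, dx) shift that best aligns curr to prev,
    by minimizing the mean absolute difference over the overlapping
    region -- the core idea behind digital image stabilization."""
    h, w = len(prev), len(prev[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = n = 0
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(prev[y][x] - curr[y + dy][x + dx])
                    n += 1
            if n and sad / n < best_sad:
                best_sad, best = sad / n, (dy, dx)
    return best

# Synthetic test: the second frame is the first shifted one pixel right,
# as if the camera jerked slightly between frames.
random.seed(1)
base = [[random.randrange(256) for _ in range(6)] for _ in range(6)]
shifted = [[base[y][x - 1] if x > 0 else 0 for x in range(6)] for y in range(6)]
offset = best_offset(base, shifted)
# The stabilizer would then shift the frame back by this amount.
```

Real implementations work on subsampled images and fractional-pixel offsets in dedicated hardware, but the matching principle is the same.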

Sensor – Temporal Noise Reduction

In a similar fashion to the discussion about Automatic White Balance above, the process of Temporal Noise Reduction is a combination of both sensor electronics and associated software. More details of noise reduction will be discussed in the upcoming section on imaging software. Still images can only take advantage of spatial noise reduction, while moving images (video) can also take advantage of temporal noise reduction – a powerful algorithm to reduce background noise in video.

It must be noted that high quality temporal noise reduction is a massively intensive computational task for real-time high-quality results:  purpose built hardware devices used in the professional broadcast industry cost tens of thousands of dollars and are a cubic foot in size… not something that will fit in a cellphone! However, it is quite amazing to see what can be done with modern microelectronics – the NR that is included in the iPhone4S does offer improvements over previous models of video capture.

Essentially, what TNR (Temporal Noise Reduction) does is to find relatively ‘flat’ areas of the image (sky, walls, etc.); analyze the noise by comparing the small background deviations in density and color; then average those statistically and filter the noise out using one of several mathematical models. Because of this, TNR works best on relatively stationary or slow-moving images:  if the image comparator can’t match up similar areas of the image from frame to frame the technique fails. Often that is not important, as the eye can’t easily see small noise in the face of rapidly moving camera or subject material. TNR, like image stabilization, does have side-effects:  it can lead to some softening of the image (due to the filtering process) – typically this is seen as reduction of fine detail. One way to test this is to shoot some video of a subject standing in front of a screen door or window screen in relatively low light. Shoot the same video again in bright light. (More noise is apparent in low light due to the nature of CMOS sensors). You will likely see a softening of the fine detail of the screen in the lower light exposure – due to the TNR kicking in at a higher level.
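A toy version of that idea (strictly a sketch of the principle – the real iPhone pipeline is far more sophisticated, and the threshold here is an arbitrary value of mine): average a pixel across frames only when it is stable over time, and leave it alone when it changes a lot, since large changes mean motion rather than noise.

```python
def temporal_nr(frames, threshold=10):
    """Very simplified temporal noise reduction over a list of frames
    (each frame a 2D list of brightness values). Pixels that are stable
    across frames get averaged (denoised); pixels that change a lot are
    assumed to be real motion and are passed through untouched."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [row[:] for row in frames[-1]]
    for y in range(h):
        for x in range(w):
            samples = [f[y][x] for f in frames]
            if max(samples) - min(samples) <= threshold:  # static area
                out[y][x] = sum(samples) / len(samples)
    return out

# Pixel 0 is a flat wall with slight sensor noise; pixel 1 has real motion.
frames = [[[100, 0]], [[104, 50]], [[98, 200]]]
result = temporal_nr(frames)
# the noisy static pixel is averaged; the moving pixel keeps its latest value
```

This also shows why TNR fails on fast motion: once nothing matches frame to frame, the filter simply stops averaging.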

Overall however, this is a good addition to the arsenal of tools included in the iPhone4S – particularly since most people end up shooting both stills and videos in less than ideal lighting conditions.

Sensor – Aspect Ratio, ISO Sensitivity, Shutter Speed

The last bits to discuss on the sensor before moving on to the lens assembly are the native aspect ratio and the sensitivity to light (ISO and shutter speed). The base sensor is a 4:3 aspect ratio (3264 x 2448) – the mathematically correct expression is 1:1.33 – but common usage dictates the integer relationship of 4:3. This is the same aspect ratio as older “standard definition” television sets. All of the still pictures from the iPhone4S have this as their native size – of course this is often cropped by the user or various applications. As a note, the aspect ratio of 35mm film is 1:1.5 (3:2), while the aspect ratio of the common 8”x10” photographic print is 1:1.25 (5:4) – so there will always be some cropping or uneven borders when comparing digital photos to print to film.
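The aspect ratios quoted above all reduce from the raw dimensions with a greatest-common-divisor step; a few lines of Python (illustrative only) make the arithmetic explicit:

```python
from math import gcd

def aspect(w, h):
    """Reduce a pixel or mm dimension pair to its simplest integer ratio."""
    g = gcd(w, h)
    return (w // g, h // g)

print(aspect(3264, 2448))  # iPhone4S sensor -> (4, 3)
print(aspect(36, 24))      # 35mm film frame -> (3, 2)
print(aspect(10, 8))       # 8x10 print -> (5, 4)
```

Since 4:3, 3:2 and 5:4 are all different, moving an image between these formats always forces a crop or uneven borders, as noted above.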

Shutter speed, ISO and their relationships were discussed in the previous section of this blog series:  Contrast (differences between DSLR & Cellphone Cameras). However, this deserves a brief mention here in regard to how the basic sensor specifications factor into the ranges of ISO and shutter speed.

ISO rating is basically a measure of the light sensitivity of a photosensitive material (whether film, a CMOS sensor, or an organic soup). [Yes, there are organic broths that are light sensitive – interesting possibilities await…]  With the increased sensitivity of the sensor used in the iPhone4S, the base ISO rating should be improved as well. Since Apple does not publish details here, the results come from testing by many people (including myself). The apparent lower rating (least sensitive end of the ISO range) has moved from 80 [iPhone4] to 64 [iPhone4S]. The upper rating appears to be the same on each – 800. There is some discrepancy in the literature – some have reported an ISO of 1000 being shown in the EXIF data (the metadata included with each exposure, and the only way to know what occurred during the shot), but others, including myself, have been unable to reproduce that finding. What is certain is that noise in the picture is reduced in the iPhone4S as compared to the iPhone4 (for identical exposures of the same subject taken at the same time under identical lighting conditions).

Shutter speed is, other than base ISO rating, the only way that the iPhone has to modulate the exposure – since the aperture of the lens is fixed. The less time that the little ‘pixel buckets’ in the CMOS sensor have to accumulate photons, the lower the exposure. Since in bright light conditions more photons arrive per second than in low light conditions, a faster shutter speed is necessary to avoid over-exposure – certain death to a good image from a digital sensor. For more on this please read my last post – this is discussed in detail.

The last thing we need to discuss is however very, very important – and this affects virtually all current CMOS-based video cameras, not just cellphones. This concerns the side-effects of the ‘rolling shutter’ technology that is used on all CMOS-based cameras (still or video) to date. CCD sensors use a ‘global shutter’ – i.e. a shuttering mechanism that exposes all the pixels at once, in a similar fashion to film using a mechanical shutter. However, CMOS sensors “roll” the exposure from top to bottom of the sensor – essentially telling one row of pixels at a time to start gathering light, then stop at the end of the exposure period, in sequence. Without getting into the technical details of why this is done, at a high level it allows more rapid readout of image information from the sensor (one of the benefits of CMOS as compared to CCD), uses less power, and avoids overheating of the sensor under continuous use (as in video).

The problem with a rolling shutter is when either the camera or subject is moving rapidly. Substantial visual artifacts are introduced: these are called Skew, Wobble and Partial Exposure. There is virtually no way to resolve these issues. You can minimize them by the use of tripods, etc. in some cases – but of course this is not useful for hand held cellphone shots! Rather than reproduce an already excellent short explanation of this topic, please look at Barry Green’s post on this issue, complete with video examples.
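A toy simulation makes the skew artifact concrete (this is a sketch of the geometry, not camera firmware; the scene and timing are invented for illustration). Each sensor row samples the scene one time-step later than the row above it, so a vertical object moving sideways is recorded as a diagonal:

```python
def rolling_shutter(scene_at, height=8, row_delay=1):
    """Simulate a rolling shutter: row y of the output is taken from the
    scene as it looked at time y * row_delay, not at a single instant."""
    return [scene_at(y * row_delay)[y] for y in range(height)]

def moving_bar(t):
    """A vertical bright bar, one pixel wide, moving right one pixel per
    time step -- stand-in for a fast-moving subject."""
    return [[255 if x == t else 0 for x in range(8)] for _ in range(8)]

# A global shutter would record a straight vertical bar; the rolling
# shutter smears it into a diagonal -- the classic 'skew' artifact.
frame = rolling_shutter(moving_bar)
bar_positions = [row.index(255) for row in frame]
```

With a global shutter, `bar_positions` would be constant; here it steps one pixel per row, which is exactly the leaning-verticals look seen in real rolling-shutter footage.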

The bottom line is that, as I have mentioned before, keep camera movement to an absolute minimum to experience the best performance from the iPhone4S video camera.

Lens – the new 5-element version

The iPhone4S has increased the number of optical elements in the lens assembly to 5, up from the 4 elements used in the iPhone4. Current research indicates that (following Apple’s usual policy of dual-sourcing all components of their products) Largan Precision Co., Ltd. and Genius Electronic Optical Co., Ltd. are supplying the 5-element lens assembly for the iPhone4S.

The optical elements are most likely manufactured from plastic optical material (Polystyrene and PMMA – a type of acrylic). Although plastic lenses have many issues (coatings don’t stick well, few plastics have great optical properties, they have a high coefficient of thermal expansion, high index variation with temperature, and less heat resistance and durability, among others) – there are two huge mitigating factors:  much lower cost and weight than glass, and the ability to be formed into complex shapes that glass cannot. Both of these factors are extremely important in cellphone camera lens design, to the point that they outweigh any of the disadvantages.

Upper diagram is the 4-element design of the iPhone4 lens; lower diagram is the 5-element design of the iPhone4S.

The complex shapes are aspheres, which are difficult to fabricate out of glass, and afford much finer control over aberrations using fewer elements – an absolute necessity when working with very little package depth. A modern prime lens (a fixed focal length lens, which is what the iPhone uses) for a DSLR usually has 4-6 elements, similar to the iPhone lens with 5 elements – but the depth of the lens barrel that holds all the elements is often at least 2” if not greater. This depth is required for the optical alignment of the essentially spherically ground glass elements.

The iPhone lens assembly (similar to many quality cellphone camera lenses) is barely over ¼” in length! Only the properties of highly malleable plastic that allow the molding of aspherical shaped lenses makes this possible. Optical system design, particularly for high quality photography, is a massive subject by itself – I will make no attempt here to delve into that. However, the basic reason for multiple element lenses is they allow the lens system designer to mitigate basic flaws in one lens material by balancing this with another element made from a different material. Also, there are physics and material sciences issues that have to do with how light is bent by a lens that often are greatly improved with a system of multiple elements.

The iPhone lens is a fixed aperture, fixed focal length lens. See the previous post for details on how this lens compares to a similar lens that would be used for 35mm photography (it’s equivalent to a 32mm lens if used on a 35mm camera, even though this actual lens’ focal length is only 4.28mm)

The aperture is fixed at f2.4 – a fairly fast lens. Both the cost and the manufacturing complexity of an adjustable aperture on a lens this small make it impossible to consider.

The AOV (Angle of View) of the lens for still photography is 62° – this changes to only 46° for video mode. The reduction in AOV is due to the smaller size of the image for video (2MP instead of 8MP). Remember that AOV is a function of the actual focal length of the lens and the size of the sensor. It’s the size of the effective sensor that matters, not how big the physical sensor may be. The smaller effective size of the video sensor – factored against the same focal length lens – means that the AOV gets smaller as well.

Since the effective sensor size is different, this also alters the equivalent focal length of the lens (as referenced to a 35mm system), in this case the focal length increases to an effective value of 42mm.

The bottom line is that in video mode, the iPhone camera sees less of a given scene than when in still camera mode.

Discussion on Hardware

This next portion shows a bit of the insides of an actual iPhone4S. While the focus of this entire series of posts is on just the camera of the iPhone4S, a few details about the phone in general will be shown. The camera is not totally isolated within the ecosystem of the iPhone: a large portion of the usability of this wonderful device comes from the powerful dual-core A5 CPU, the substantial on-board memory, the graphics processors, the high-resolution display, and so on.

[The following photos and disassembly process from the wonderful site iFixit.com – which apparently loves to rip to pieces any iDevice they can get their hands on. Don’t try this at home – unless you are very brave, have good coordination and tools – and have no use for a warranty or a working iPhone afterwards…]

Yes, it’s a box with a Cloud, a genie named Siri, and a cool camera inside – and BTW you can make phone calls on it as well…

The iPhone4S

Disassembly starts here – specialized screwdriver required (it’s a Pentalobe screw – otherwise known as “Evil Proprietary Tamper Proof Five Point Screw”)

Yes really. Apple engineers looked far and wide for a screw type that no one else had a screwdriver for.

You want one anyway? Order one here.

Inside for the first time…

Closer look

The battery

Prying out the focus of this entire set of posts – the rear-facing camera.

The camera assembly

The rear of the camera assembly.

The logic board. Note the white triangular piece stuck on the shield – it’s one of several dreaded ‘liquid detectors’. Apple service engineers check these to see if you gave it a bath. (they turn color if wet with ANYTHING) – and there goes your warranty…

Logic board with shields removed (lower the shields, Scotty…) The main bits are:

  • Apple A5 dual-core processor (more on this later)
  • Qualcomm RTR8605 multi-band/mode RF transceiver (Chipworks has provided us with a die photo)
  • Skyworks 77464-20 Load-Insensitive Power Amplifier (LIPA®) module developed for WCDMA applications
  • Avago ACPM-7181 power amplifier
  • TriQuint TQM9M9030 surface acoustic wave (SAW) filter
  • TriQuint TQM666052 PA-duplexer module

Further detail on the logic board:

  • TI 343S0538 touchscreen controller
  • STMicro AGD8 2135 LUSDI gyroscope
  • STMicro 8134 33DH 00D35 three-axis accelerometer
  • Apple 338S0987 B0FL1129 SGP, believed by Chipworks to be a Cirrus Logic audio codec chip

The brain – the dual-core A5 cpu chip

An inside look at the A5 chip

A5 chip, with some areas explained

More of the logic board:

  • Qualcomm MDM6610 chipset (an upgrade from the iPhone 4's MDM6600)
  • Apple 338S0973, which appears to be a power management IC, according to Chipworks

Murata SW SS1830010. We suspect that this contains the Broadcom chip that reportedly provides Wi-Fi/Bluetooth connectivity

Toshiba THGVX1G7D2GLA08 16 GB 24 nm MLC NAND flash memory (or 32GB or 64GB if your budget allowed it)

The 960 x 640 pixel Retina display. Also what appears to be the ambient light sensor and infra-red LED for the proximity sensor comes off the display assembly.

Close-up of sensor assembly.

Rear view of front-facing camera (the little low resolution camera for FaceTime)

Front view of front-facing camera

Ok, there it is… in bits… now just put it all back and see if it works again…

Another view of the camera assembly.

Rear of camera assembly

Now it gets interesting – an X-ray photomicrograph of the edge of the CMOS sensor board – clearly showing that at least this unit was manufactured by Sony….

Sectional view of original iPhone4 camera assembly. Note the Front Side Illumination (FSI) design – where the light from the lens must pass through the circuitry to get to the light sensitive part of the pixels.

Section view of the iPhone4S camera assembly. Note the change in design to the Back Side Illumination version.

Another photomicrograph of the sensor board, showing the Sony model number of the sensor.

As promised in the introduction, here are a few sample photos showing the differences between various models of the iPhone as well as several DSLR cameras. Many more such comparisons as well as other example photos will be introduced in the next section of this series – Camera Software.

(Thanks to Jonathan at campl.us for these great comparison shots from his blog on iPhone cameras)

iPhone Original

iPhone3G

iPhone3GS

iPhone4

iPhone4S

CanonS95

CanonEOS5DMkII

iPhone Original

iPhone3G

iPhone3GS

iPhone4

iPhone4S

CanonS95

CanonEOS5DMkII

Well, this concludes this chapter of the blog on the iPhone4S. Hopefully this has shed some light on how the hardware is put together, along with some further details on the technical specifications of this device. A full knowledge of your tools will always help in making better images, particularly in challenging situations.

Stay tuned for the next chapter, which will deal with all the software that makes this hardware actually produce useful images. Both the core software of the Apple operating system (iOS 5.1 at the time of this writing) and a number of popular camera apps (both still and video) will be discussed.

iPhone4S – Section 1: Basic Overview & Glossary of Terms

March 8, 2012 · by parasam

This is the first of a five-part blog I am posting on the details of the iPhone4S camera. Please see the intro if you have not read this – it will give you an overview of what I am presenting and why.

Introduction

Before I dive into details of the camera system included in the iPhone4S, I want to establish some common ground with terminology. I have only included terms that are pertinent to this discussion, and have tried to keep the explanations simple. Hopefully this will be of use to both newcomers and professionals alike – I find that even with experienced users there can be differences in interpretation of meaning – and that stating up front how I will use terms will reduce any confusion. For those of you that are fully knowledgeable in photography and/or optical systems – I know I have left many details out. Deliberately. I am not trying to teach optics or photography – just establish some baseline understanding for discussing this cool little camera that is included in the iPhone.

There are many limitations to this, or any other cellphone camera. However, with a bit of understanding of the hardware and software – and what is realistic to expect – some surprisingly good photography can result. The bar for entry to quality imaging is lowered even further. My basic desire with this set of blogs is to share what I have learned in the hope that even more people can express themselves creatively with better control and knowledge over their toolsets.

I’ll begin now with a Glossary of Terms that will be useful during the rest of the posts.

Angle Of View

This is the area of a scene that is visible through a particular lens. While technically this value is expressed in degrees, in general terms we use the phrases “wide angle”, “normal” and “telephoto.” The actual angle of view of an optical system is dependent on focal length of the lens and the sensor (or film) size.

Since by and large the general public, and photographers themselves, have adopted the “35mm” camera as the de-facto lens/body system, most references to angle of view in relation to a particular focal length assume this ‘sensor’ size (even though until very recently the sensor was photochemical film). A “normal” lens for 35mm is typically a 50mm focal length, which gives approximately a 40° AOV (Angle Of View). A “wide angle” lens for 35mm is typically anything from 18-35mm focal length (with so-called ‘fisheye’ lenses having focal lengths less than that). For comparison, a 30mm lens would have a 62° AOV. On the other hand, a “telephoto” lens – typically anything over 85mm focal length – has a narrower angle of view than a “normal” lens. A 200mm focal length is a fairly common telephoto lens, and for the 35mm format this would have a 10° AOV.
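The AOV figures quoted above follow directly from the focal length and the sensor width. As a sketch (thin-lens approximation, focus at infinity, horizontal AOV against the 36mm width of a 35mm frame – the same basis the numbers above appear to use):

```python
from math import atan, degrees

def angle_of_view(focal_mm, sensor_width_mm=36):
    """Horizontal angle of view in degrees: 2 * atan(w / 2f).
    Default width is the 36mm horizontal dimension of a 35mm frame."""
    return degrees(2 * atan(sensor_width_mm / (2 * focal_mm)))

print(round(angle_of_view(50)))   # "normal" lens  -> ~40 degrees
print(round(angle_of_view(30)))   # wide angle     -> ~62 degrees
print(round(angle_of_view(200)))  # telephoto      -> ~10 degrees
```

Note that published AOV figures vary depending on whether the horizontal, vertical, or diagonal sensor dimension is used.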

Since AOV is such a powerful factor in composition of a photograph it should be well understood. There are other factors that are affected by the choice of a particular focal length of a lens – not just the Angle of View. A few of the important ones (that affect composition and the ‘look’ of the photo) are:

  • Scale:  objects taken with a ‘normal’ lens tend to look, well… normal.
    • A wide angle lens distorts the relative size of objects closer to the lens in relation to those further away – close objects look bigger and far-away objects look smaller.
    • A telephoto lens tends to compress, or flatten, the apparent depth of view. There is little apparent difference in size from a near object to a far one (as long as they are both in focus).
  • Vignetting:  Any lens has light fall-off from the center of the lens towards the edge – this is a basic fact of optics. High quality lenses minimize this, but the effect gets more pronounced the wider the angle of view of the lens. Apparent vignetting will show in most lenses – no matter the quality – with a focal length of less than 24mm (for a 35mm system).

Just as a reference, for the 35mm camera system, the range of AOV currently available in lenses goes from about 220°  (Nikkor 6mm fisheye) to 1° (Reflex Nikkor 2000mm).

A very important aspect of AOV is that of sensor size – a separate discussion below will address how sensor size is a factor along with focal length to give the actual Angle of View. (See Equivalent Focal Length).

Aperture

The size of the lens opening through which light rays can strike the sensor or film. The term “f-stop” is usually associated with aperture – as these numbers are often engraved on the aperture control ring on adjustable aperture lenses (as in 35mm photography). This terminology comes from the “f” in focal length (the numerical relationship between the physical diameter of the aperture and the focal length of the lens) and the physical “stop” – or click – that is felt when the user adjusts the aperture on a lens.

For instance, on a normal 50mm lens, an aperture of f2.8 means that the actual ‘hole’ through which light can pass through the lens/diaphragm assembly is about 18mm in diameter. [50 / 2.8 = 17.9] The correct nomenclature for aperture is actually f/2.8 – the f-number always expresses a ratio (focal length divided by the diameter of the opening). As an example, the same numerical aperture (f2.8) on a 200mm lens would equate to a physical opening of about 71mm.
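The arithmetic is just the focal length divided by the f-number; a two-line sketch:

```python
def aperture_diameter_mm(focal_mm, f_number):
    """Physical diameter of the lens opening: focal length / f-number."""
    return focal_mm / f_number

print(round(aperture_diameter_mm(50, 2.8)))   # ~18 mm on a 50mm lens
print(round(aperture_diameter_mm(200, 2.8)))  # ~71 mm on a 200mm lens
```

This is why the same f2.8 demands a much bigger (and costlier) piece of glass on a telephoto lens than on a normal lens.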

From this you can see that a low f-number (equating to a larger light-hole) requires a physically larger lens. Typical hobbyist/semi-pro lenses for 35mm systems usually have an aperture that ranges from f3.5 to f5.6 – as this makes the lenses affordable and not too heavy. So called ‘fast’ lenses (ones with wide apertures) – which lets more light in, and therefore a higher shutter speed can be used – are expensive and heavy.

Here is a quick comparison:

f0.95 50mm lens

f3.5 50mm lens

(Note the big difference in the diameter of the glass in the lens)

f5.6 400mm

f2.8 400mm lens (Fast lenses are big, heavy and expensive. As an example the above Canon lens weighs 12 lbs., the objective lens is almost 6″ in diameter, and costs $11,500).


ASA

American Standards Association. Originally this group set the standards for film sensitivity to light (and the ratings were expressed with an “ASA” number). The rating index was arbitrary, the important aspect was that of an arithmetic scale – i.e. a film emulsion with a speed of 200 was twice as sensitive as an emulsion with a rating of ASA100.

Essentially, ‘slow’ films (low ASA number) are less sensitive to light, therefore require longer exposure times, while ‘fast’ films (high ASA number) are more sensitive and can acquire a good exposure with a fast shutter speed. The tradeoff is that fast films are more grainy and have lower resolution.

The ASA (as the standards body that handles film/sensor speed ratings) has been supplanted by the ISO (International Organization for Standardization). The same arithmetic speed numbers apply (i.e. ASA100 = ISO100).

The whole subject of film/sensor sensitivity is complex and deserving of a long discussion, for this purpose the important issue to know is that as sensitivity is set higher (higher ISO #) exposures can be made at higher shutter speeds or higher f-stop settings – with the tradeoff of more grain and more noise in the resultant picture.


Aspect Ratio

The ratio of width to height of the sensor or film in a photographic system. In the 35mm film system, the actual image area is 36mm wide x 24mm high, for an aspect ratio of 1:1.5 (often labeled as 3:2). Many digital sensors have a slightly different aspect ratio – and there is no standard here like there was with film – for instance a common ‘consumer’ format sensor measures 5.3mm wide x 4mm high, for an aspect ratio of 1:1.3 (often labeled as 4:3). The important factor from this is actually the size of the diagonal measurement of the sensor, as that is the longest dimension – and sets the minimum size that the lens must cover in terms of optical area.

Barrel Distortion

A type of geometric distortion occurring in lens design and manufacture. The effect is that straight lines are bowed outwards at the edge of the picture, resembling the sides of a barrel. This is commonly found in wide angle lenses, and due to their relatively low cost, can be prevalent in cellphone and consumer cameras.

CCD

Charge-Coupled Device. This describes the technology of the image sensor used in virtually all modern still and video camera imaging systems. It consists of a flat plate that is covered with a matrix of very small ‘pixels’ that convert light to electrical signals that are then converted into an electronic image.

The size of the sensor is usually given in mm, and the number of pixels in ‘megapixels’ due to the very high number of individual sensing elements. An older consumer camera or cellphone may only have 1.9megapixels (1600×1200), while a new high-end DSLR camera may have over 24megapixels (6048×4032).

Another important factor to consider is the actual size of the pixel in a sensor. If you take two identical sensors (in terms of megapixels) – say 4288×2848 (12megapixels) – but they are from two different sensor sizes [one is full sized:  36mm x 24mm; the other is 2/3 size:  24mm x 16mm], the individual pixel sizes are 50% larger for the full sized format. This makes a big difference in light-gathering capability as well as noise in low light areas of the picture.

Larger sensors gather light more quickly, hence inherently have a higher ISO rating – and have less noise in the low-light (shadow) areas of the exposure. For these reasons the full sized sensor is used on all professional quality CCD cameras.

Just to give some perspective, the iPhone4S sensor has 8megapixels (3264×2448), all packed into a sensor that is 4.54mm x 3.42mm in size. An individual pixel in this sensor is 1.4μm square – that is about 0.000055” (55 millionths of an inch) on a side. Put it another way:  you could fit many thousands of those pixels into the period at the end of this sentence.

In comparison, the pixel size in a professional DSLR camera (Canon EOS5 for example) is 8.2μm square – therefore having 34 times the area! The performance (in terms of exposure speed, low noise, etc.) starts to become apparent…

Crop Factor

While I personally don’t think this term is the best choice semantically, it appears to be in use so I will explain it here. This term is related to the term Equivalent Focal Length discussed below, please see that term for full details. Essentially this term (Crop Factor) refers to the ratio of the diagonal of the sensor size of the camera system under discussion, in relation to a reference 35mm camera system.

Professional digital cameras have the same sensor size (well almost, but that’s another story – we’ll save that for a blog on another day…) as the original 35mm film systems. Most of the ‘semi-pro/advanced hobbyist’ DSLR cameras use a so-called “2/3” system (the sensor is 2/3 the size of a full-sized sensor). A full size (35mm) sensor is 36x24mm; the 2/3 system uses 24x16mm. If you take the ratio of these sizes, the full size sensor is 1.5x the 2/3 system. Therefore the “crop factor” of the 2/3 system is 1.5 [the ratio to the 35mm reference system].

Essentially, as the sensor size decreases, with the lens focal length held constant the Angle Of View (see above) decreases. The importance of this comes as photographers attempt to use their older interchangeable lenses (from their 35mm film camera systems) with newer digital camera backs – often with a sensor size smaller than the 35mm system.

For example, a 50mm lens that was used with a 35mm film camera, when attached to a 2/3 system digital back (very common with semi-pro Canon or Nikon digital cameras) now becomes the equivalent of a 75mm lens. That makes a significant difference in angle of view, as well as flattening of the field of view, etc.

Typical crop factors for cellphone cameras are about 7 (the iPhone4S is 7.62).

Depth of Field

The zone of acceptable sharpness in front of and behind the subject on which the lens is focused. Strictly speaking, an optical system can only focus on one image plane at a time, so only one distance from the focal plane of the camera to the subject is in perfect focus. However, within a range – called depth of field – all other elements are so close to perfect focus that a viewer cannot tell the difference.

To be accurate, what is happening is that the ‘circle of confusion’ of an imaged spot is less than or equal to the ‘visual acuity’ of the observer within the range of “depth of field.” In optics, the term ‘circle of confusion’ is the optical spot caused by a cone of light rays coming from a lens that is not perfectly focused. The term ‘visual acuity’ refers to the limits of the human visual system in resolving detail in an observed image. So, simply put, if the circle of confusion (blur) of an image is less than the ability of the eye to see it (acuity), then the object still looks perfectly sharp to the viewer.

Once the CoC (Circle of Confusion) becomes equal to or greater than the resolving power of the human visual system (HVS), then we start to observe blur – or an out-of-focus condition. For complex reasons that will not be discussed here, this range is not perfectly centered on the focal point: with normal camera systems the depth of field extends approximately 1/3 in front and 2/3 behind the point of optical focus of the lens.

The actual depth of field is dependent on three factors:  focal length of the lens, aperture of the lens, and distance from the focal plane of the camera to the subject. In general terms:

  • Shorter focal lengths give greater depth of field.
  • Smaller apertures give greater depth of field.
  • Longer distances from camera to subject give greater depth of field.

In relation to common lens types, (and point #1 above), wide angle lenses (shorter focal length) have greater depth of field than do telephoto lenses.
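The three rules above fall out of the textbook hyperfocal-distance approximation. The following sketch uses the standard formulas (H = f²/(N·c), with near/far limits derived from it); the 0.03mm circle of confusion is the conventional value assumed for the 35mm format, and the example distances are mine:

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Approximate near/far limits of acceptable focus (all in mm),
    using the hyperfocal approximation H = f^2 / (N * c)."""
    H = focal_mm ** 2 / (f_number * coc_mm)
    near = H * subject_mm / (H + subject_mm)
    far = H * subject_mm / (H - subject_mm) if subject_mm < H else float("inf")
    return near, far

# 50mm lens, subject at 3 meters:
n1, f1 = depth_of_field(50, 8, 3000)    # baseline
n2, f2 = depth_of_field(50, 16, 3000)   # smaller aperture -> deeper zone
n3, f3 = depth_of_field(28, 8, 3000)    # shorter focal length -> deeper zone
```

Each of the general rules can be checked numerically: stopping down from f8 to f16, or swapping the 50mm for a 28mm, visibly widens the near-to-far zone.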

Equivalent Focal Length

With the advent of digital photography comes a large variation in sensor size. The original frame size associated with 35mm film photography (36mm wide x 24mm high) has been tossed in the blender, and often it seems like every new camera that comes out has a slightly different sensor size. Fortunately, for serious DSLR (Digital Single Lens Reflex) cameras, the choices are very limited – because these cameras use detachable interchangeable lenses, the covering area of the lens must match the film/sensor size. A lens designed for a 2/3 size sensor (24x16mm) will not work on a full sized sensor (36x24mm).

But with fixed assemblies (consumer cameras, point and shoot, cellphones, etc.) as long as the lens is matched to the sensor virtually any sensor size is possible. In order for a photographer to have an understanding of the particular lens/sensor system that is being used it is common to relate the actual sizes of the given system to that of a reference 35mm camera system – since that system is so ubiquitous throughout the world.

Like most of this discussion, this subject is more complex than it first appears, but to simplify: the diagonal of the reference 35mm frame is divided by the diagonal of the sensor under discussion, and that ratio is multiplied by the actual focal length of the camera system under discussion to arrive at the equivalent focal length in a reference 35mm system.

Here’s an example:  The iPhone4S has an image sensor that measures 4.54mm x 3.42mm. The diagonal measurement computes to 5.68mm. The diagonal measurement of a reference 35mm frame (36mm x 24mm) is 43.3mm. The ratio (or Lens Multiplication Factor) is therefore 7.62 [43.3 / 5.68]. The actual focal length of the iPhone lens is 4.28mm (when focused on infinity, which is the standard way all lens focal lengths are measured). Therefore the Equivalent Focal Length of the iPhone camera system is approximately 32.6mm in terms of a 35mm camera system (4.28 x 7.62). This is considered a ‘wide angle’ lens, and is rather common for cellphone cameras.
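The calculation above is easily generalized to any sensor. Here is a short sketch following the article’s method (crop factor = 35mm frame diagonal ÷ sensor diagonal), using the iPhone4S numbers as the example:

```python
# 35mm-equivalent focal length, following the article's method:
# crop factor = 35mm frame diagonal / sensor diagonal.
import math

def equivalent_focal_length(sensor_w, sensor_h, focal_length):
    """All dimensions in mm; returns the 35mm-equivalent focal length."""
    ref_diag = math.hypot(36, 24)                    # 35mm frame diagonal ≈ 43.3 mm
    crop = ref_diag / math.hypot(sensor_w, sensor_h) # lens multiplication factor
    return crop * focal_length

eq = equivalent_focal_length(4.54, 3.42, 4.28)  # iPhone4S sensor and lens
print(f"{eq:.1f} mm")  # ≈ 32.6 mm – a moderate wide angle
```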

This focal length strikes a good balance in terms of field of view:  enough for typical photography without having too much distortion.

Exposure

This is the quantity of light that falls on the film or sensor during the total exposure time. It is a formula:  Exposure = Light Intensity × Duration. Light intensity is usually modified by the f-stop of the taking lens; duration is modified by the shutter speed. The film or sensor sensitivity (measured by ASA or ISO #) also affects the exposure, as it is the ‘base’ on which an exposure takes place. If the ISO # is higher, an acceptable exposure can be made with either less time or a smaller aperture (less light intensity).
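The intensity × duration trade-off is what photographers call “equivalent exposures”: open the aperture one stop and halve the shutter time, and the total light is unchanged. A small sketch using the conventional exposure-value (EV) formula makes this concrete; the specific f/8 and f/5.6 pairing is my own illustrative choice, not from the article:

```python
# Exposure = intensity x duration: combinations with the same N^2/t
# (scaled by ISO) yield the same exposure. EV below follows the usual
# ISO-100 convention; each +1 EV means half as much light.
import math

def exposure_value(f_number, shutter_s, iso=100):
    """Exposure value for a given f-number, shutter time (s), and ISO."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# f/8 @ 1/125 s and f/5.6 @ 1/250 s: one stop less light intensity,
# twice the duration – a classic equivalent-exposure pair.
ev_a = exposure_value(8, 1 / 125)
ev_b = exposure_value(5.6, 1 / 250)
print(round(ev_a, 2), round(ev_b, 2))
```

The two values agree to within a few hundredths of a stop (nominal f-numbers like 5.6 are rounded from √32, so the match is not exact to the last digit).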

ISO Speed

(See discussion on ASA above)

Normal Lens

A lens that makes the image in a photograph appear in a perspective similar to that of the original scene (approximately 45°). A normal lens has a shorter focal length and a wider field of view than a telephoto lens, and a longer focal length and narrower field of view than a wide-angle lens. Normal lenses correspond to that portion of human vision in which we can discern sharp detail; technically defined as a lens whose focal length is approximately equal to the diagonal of the film frame; in 35mm photography, the diagonal measures 43mm, but in practice, lenses with focal lengths from 50mm to 60mm are considered normal.

Telephoto Lens

A lens that makes a subject appear larger in an image than does a normal lens at the same camera-to-subject distance. A telephoto lens has a longer focal length and narrower field of view than a normal lens, and a shallower depth of field than a wide-angle lens. Telephoto lenses have a focal length longer than the diagonal of the film or sensor frame; in 35mm photography, lenses longer than 60mm; also referred to as a “long” lens.

Wide-Angle Lens

A lens that has a shorter focal length and a wider field of view (includes more subject area) than a normal lens. A wide-angle lens has a focal length shorter than the diagonal of the film or sensor frame; in 35mm photography, lenses shorter than 50mm; also referred to as a “short” lens.

Summary

As a closing, some of the basic specifications of the iPhone4S camera are listed below. They will make a bit more sense now that the above terms are understood. The next post will focus on the main differences between ‘normal’ DSLR cameras and cellphone cameras in general – and the iPhone camera in particular.

Sensor Format:     1/3.2″  (4.54mm x 3.42mm)

Optical Elements:     5 plastic elements

Pixel Size:     1.4μm

Focal Length:     4.28mm

Aperture:     f/2.4  (fixed)

Image Array Size:     3264 x 2448  (8MP)

Equivalent Focal Length:     32mm (referenced to 35mm camera system)

ISO Range:     64 – 1000

Shutter Speed (minimum duration):     1/2000 sec.

The iPhone4S camera – unraveled and explained…

March 7, 2012 · by parasam

All you ever wanted to know about the cool little camera inside the iPhone4S – but didn’t know to ask…

As a serious semi-pro still photographer I used to look on cellphone cameras as little toys… until the iPhone cameras – and the associated apps – started to change the world of imaging. The first real jump into a half-serious camera was the iPhone4 – then the 4S pushed the bar that much higher. Along with some very powerful apps this combination of hardware and software has brought relatively high quality imaging to a new audience.

I have been shooting with the 4S since it was released last year, and finally wanted to really find out what made it tick – what its limitations were and how it could be used. I have always approached photography from my scientific background – with a firm desire to know all that was possible about my tools so I could know what to expect in a variety of conditions. It turned out to be a rather complex subject. Data was not easy to find:  Apple very deliberately says little or nothing about even the most basic specifications of its products, and the iPhone camera is no exception.

I want to be clear about a few things before progressing with this series of articles:  I am not a professional product reviewer – these are just my observations from research and using this device since October 14 of last year. I am also not in any way attempting to offer a course on photography or the physics of lenses, sensors, etc. I will attempt to explain things in what I hope is a clear fashion for anyone who is mildly interested in technology and how this camera can be compared to more traditional DSLR cameras. All of the research was accessed from publicly available documents – whether online or offline.

The expertise I hope to bring to this discussion is the discovery of data that is available, but hard to find; the organization of that data in a way that will hopefully benefit current and future users of this device (or similar cellphone cameras); and the sharing of what I have found useful (in terms of both hardware and software) to maximize the performance of this technology.

I didn’t realize at the start of this little project how it would grow. I thought I would write a few pages and toss in a couple of diagrams on the camera and some of the software. I was kidding myself! Over about a month I amassed over 500 pages of research, tested scores of apps, and took hundreds of shots to test ideas and software. Even then, I am sure that others will find things that I have not seen – and hold opinions that differ from or contradict those expressed here. That’s the beauty of a more or less democratic web – anyone gets a chance to have their say.

To keep the posts manageable – and allow me to start posting something before I have another birthday – I am breaking this topic into a series of shorter posts. In addition to this introduction, I plan five additional parts:

  1. Basic camera overview – a short discussion on basic terms, including a glossary. Just enough to allow for a common understanding of terms and words I will use in the rest of the discussion. For further details there are literally thousands of books, websites, etc. on photography.
  2. Primary differences between cellphone cameras and traditional film/digital cameras. This is important as a basis for understanding the limitations of this type of hardware – and to relate many of the numbers and terms used in traditional photography to the new world of high quality “cellphotography.”
  3. The iPhone4S specifications, hardware details, construction and other info.
  4. Software apps I have found useful on the 4S platform. I make no attempt here to review all apps for this device – that would take more days than I have left – but will rather exemplify the apps I have found useful, and how I use them in everyday practice. It’s a starting point…
  5. Techniques, limitations and other general discussion from my experience of this camera, in relation to the decades of shooting with film and digital hardware in more traditional photography.

While the bulk of this series will deal with still photography, I will also address the video capabilities of both the hardware and software. The video features (1080P, etc.) are just as powerful as the still camera features, particularly with good software.

I aim to post all five parts within the next week – time permitting. I hope you all find this interesting – it’s been fun to discover all this information, and I continue to enjoy seeing just what this little camera can do.

Just as a hint of what’s possible, here are a few shots I took with my iPhone4S. They are unretouched except for basic adjustments of exposure, contrast, etc. – no Photoshop effects, no ‘tarting up’ – these are here to show what the basic camera can do, given an understanding of both the capabilities and limitations of the hardware/software of this platform.
