If a photo were all white – or all black – there would be no contrast, no differentiation, no nothing. A photo is many things, but first and foremost it must contain something. And something is recognized by one shape, one entity, standing out from another. Hence… contrast. This is the first principle of a photograph – whether color or monochrome, whether tack sharp like a Weston or a blur against a rain-smeared window – contrast of elements is the core of a photograph.
After that can come focus, composition, tonal range, texture, evocative subject… all the layers that distinguish great from mundane – but they all run second.
Although this set of posts is indeed concerned with exploring the mechanics and limitations of a rather cool cellphone camera (the iPhone4S), the larger intent is to give the user a tool with which to image his or her surroundings. The fact that such a small and portable device can image at a level that only a few years ago was the province of DSLR cameras is a technological wonder. It is absolutely not a replacement for a high quality camera – but in the hands of a trained and experienced person with a vision, and the patience to understand the possibilities of such a device, improbable things are possible.
Contrast of devices – DSLR vs cellphone
This post will cover a few of the limitations and benefits of a relatively high quality cellphone camera. While I am discussing the iPhone4S camera in particular, these observations apply to any modern reasonably high quality cellphone camera.
For the first 50 years or so of photography, portability was not even a possibility. Transportability yes, but large view cameras, glass or tin plates and the need for both camera (on tripod) and subjects (either mountains that didn’t move or people frozen in a tableau) to remain locked in place didn’t do much for spontaneity. Roll film, the Brownie camera, the Instamatic, eventually the 35mm camera system – not to mention Polaroid – changed the way we shot pictures forever.
But more or less, for the first hundred years of this art form, the process was one of delayed gratification. One took a photo, then waited through hours or days of photochemical processes to see what actually happened. The art of previsualization became paramount for a professional photographer, for only if you could reasonably predict how your photo would turn out could you stay in business!
With the first digital picture ever produced in 1975 (in a Kodak lab), this is indeed a young science. Consumer digital photography is only about 20 years old – and a good portion of that was relatively low performance ‘snapshot’ cameras. High end digital cameras for professionals only came on the scene in the late 1990’s – at obscene prices. The pace of development since then has been nothing short of stratospheric.
We now have DSLR (Digital Single Lens Reflex) cameras with more resolution than any film stock ever produced; with lenses that automatically compensate for vibration, assist in exposure and focus, and have light-gathering capabilities that allow excellent pictures in starlight.
These high-end systems do not come cheaply, nor are they small and lightweight. Even though they are based on the 35mm film camera system, and employ a digital sensor about the same size as a 35mm film frame, they are complex imaging computers and optical systems – and are not for the faint of heart or pocketbook!
With camera backs only (no lens) going for $7,000 and high quality lenses costing from $2,000 – $12,000 each, these wonders of modern imaging technology require substantial investment – of both knowledge and cash.
On the other end of the spectrum we have the consumer ‘point and shoot’ cameras, usually of a few megapixels resolution, and mostly automatic in function. The digital equivalent of the Kodak Brownie.
These digital snapshot cameras revolutionized candid photography. The biggest change was the immediacy – no more waiting and expensive disappointment of a poorly exposed shot – one just looked and tried again. If nothing else, the opportunity to ‘self-teach’ has already improved the general photographic skill of millions of people.
Almost as soon as the cellphone was invented, the idea of stuffing a camera inside came along. The first analog cellphones arrived in the mid-1980s; the first commercial cellphone with a built-in camera, the Kyocera VP-210 with a 0.11-megapixel sensor, followed in 1999.
A scant dozen years later we have the iPhone4S and similar camera systems routinely used by millions of people worldwide. In many cases, the user has no photographic training and yet the results are often quite acceptable. This blog however is for those who want to take a bit of time to ‘look under the hood’ and extract the maximum capabilities of these small but powerful imaging devices.
Major differences between DSLR cameras and cellphone cameras
The essence of a cellphone camera is portability, while the prime focus of a DSLR camera is to produce the highest quality photograph possible – given the constraints of cost, weight and complexity of operation. It is only natural then that many compromises are made in the design of a cellphone camera. The challenges of very light weight, low cost, small size and other technical issues forced cellphone cameras into a low quality genre for some time. Not any more. Yes, it is absolutely correct that there are many limitations to even ‘high quality’ cellphone cameras such as the iPhone, but with an understanding of these limitations, it is possible to take photos that many would never assume came from a phone.
One of the primary limitations on a cellphone camera is size. Given the physical constraints of the design package for modern cellphones, the entire camera assembly must be very small, usually on the order of ½” square and less than ¼” thick. Compared to an average DSLR, which is often 4” wide by 3” high and 2” thick – the cellphone camera is microscopic.
The first challenge this presents is sensor size. Two factors come into play here: the actual X-Y dimensions of the sensor, and the focal length of the lens. The covering power of the lens (the area over which it can project a focused image) is a function of its focal length, and the focal length in turn dictates the physical depth of the lens barrel assembly – which materially affects the overall thickness of the lens/camera module. Compromises have to be made here to keep the overall size within limits.
The combination of physical sensor size and the depth that would be required if the actual focal length was more than about 5mm, mandates typical cellphone camera sensor sizes to be in the range of 1/3” in size. For example, the iPhone4S sensor is 4.54mm x 3.42mm and the lens has a focal length of 4.28mm. Most other quality cellphone cameras are in this range.
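To put those numbers in perspective, here is a quick back-of-envelope sketch in Python of the ‘35mm equivalent’ focal length, using the standard sensor-diagonal method:

```python
import math

# iPhone4S sensor and lens figures quoted above
sensor_w, sensor_h = 4.54, 3.42    # mm
focal_length = 4.28                # mm

# Reference: a full 35mm film frame is 36 x 24 mm
full_frame_diag = math.hypot(36, 24)           # ~43.3 mm
sensor_diag = math.hypot(sensor_w, sensor_h)   # ~5.7 mm

crop_factor = full_frame_diag / sensor_diag    # ~7.6
equiv_focal = focal_length * crop_factor       # ~33 mm

print(f"crop factor ~{crop_factor:.1f}, 35mm-equivalent ~{equiv_focal:.0f}mm")
```

That works out to roughly a 33mm-equivalent lens – a moderate wide angle, which squares with how iPhone shots look.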
Full details (and photographs) of the actual iPhone hardware will be discussed in the following blog, iPhone4S Specs & Hardware.
The limitation of sensor size then sets the physical size of the pixels that make up the sensor. With a desire to offer a relatively high megapixel count – for sharp resolution – the camera manufacturer is then forced to a very small pixel size. The iPhone4S pixel size is 1.4μm. That is really, really small – about 55 millionths of an inch on a side. The average pixel on a high quality “35mm” style DSLR camera is roughly 40X larger in area…
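Where does that ‘40X’ come from? A pixel’s light gathering scales with its area. A quick sketch (the DSLR pixel pitch below is my assumption – roughly typical for a 12-megapixel full-frame sensor of that era):

```python
phone_pitch = 1.4    # μm, iPhone4S pixel pitch
dslr_pitch = 8.5     # μm, assumed pitch for a ~12MP full-frame DSLR

area_ratio = (dslr_pitch / phone_pitch) ** 2
print(f"DSLR pixel gathers ~{area_ratio:.0f}x the light of a phone pixel")
# -> ~37x: the light-gathering gap scales with area, not linear size
```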
The small pixel size is one of the largest factors making cellphone cameras less capable than full-fledged DSLR cameras. The light sensitivity is much lower, due to the basic nature of how a CCD/CMOS sensor works.
Film vs CCD/CMOS sensor technology – Blacks & Whites
To fully understand the issues with small digital sensor pixel size we need to briefly revisit the differences between film and digital image capture. Until a few decades ago, only photochemical film emulsions could capture images. The fundamental way that light is ‘frozen’ into a captured image is very different in film as compared to digital capture – and it is visible in the resultant photographs.
That is not to say one is better; they are just different. Furthermore, a fully ‘chemical’ print (a photograph captured on film, printed directly onto photo paper and developed chemically) appears different to the human eye even from the same film image scanned and printed digitally. Film scanners use the same sort of CCD array that digital cameras use, so the basic difference of image capture once again comes into play.
Without getting lost in the wonderful details of materials science, physics and chemistry that all play a part in how photochemical photography works, when light strikes film the energy of the light starts changing certain molecules of the film emulsion. The more light that hits certain areas of the film negative, the more certain molecules start clumping together and changing. Once developed, these little groups of matter become the shadows and light of a photograph. All film photographs show something we call ‘grain’ – very small bits of optical gravel that actually constitute the photograph.
The important bit here is to remember that with film, exposure (light intensity X time) results in increased amounts of ‘clumped optical gravel’ – which when developed looks black on a film negative. Of course black on a negative prints to white on a positive – the print that we actually view.
Conversely, on film, very lightly exposed portions of the negative (the shadows, those portions of the picture that were illuminated the least) show up as very light on the negative. This brings us to one of the MOST important aspects of film photography as compared to digital photography:
- With film, you expose for the shadows and print for the highlights
- With digital, you expose for the highlights and print for the shadows
The two mediums really ARE different. An additional challenge here is when we shoot film, but then scan the negative and print digitally. What then? Well, you have to treat this scenario as two serial processes: expose the film as you should – for the shadows. Then when you scan the film, you must expose for the highlights (since you are in reality taking a picture of a picture) and now that you are in the digital domain, print for the shadows.
The reason behind all this is due to the difference between how film reacts to light and how a digital sensor works. As mentioned above, film emulsions react to light by increasing the amount of ‘converted molecules’ – silver halide crystals to be exact – leaving unexposed areas (dark areas in the original scene) virtually unexposed.
Digital sensors, whether CCD or CMOS (more on the difference in a moment), respond to light in a different manner: the photons that make up light fall on the sensor elements (pixels) and ‘fill up’ the energy level of each ‘pixel container’. The resultant voltage level of each pixel is read out and turned into an image by the computational circuits associated with the sensor. The dark areas in the original image, since they contribute very little illumination, leave the ‘pixel tanks’ mostly unfilled. The high sensitivity of the photo-sensitive array means that any stray light, electrical noise, etc. can be interpreted as ‘illumination’ by the sensor electronics – and it is. The bottom line is that the low light areas (shadows) in an image captured by digital means are always the noisiest.
To sum it up: blacks in a film negative are the least noisy, as basically nothing is happening there; blacks in a digital image are the most noisy, since the unfilled ‘pixel containers’ are like little magnets for any kind of energy. What this means is that fundamentally, digitally captured images are different looking than film: noisy blacks in digital, clean blacks in film.
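If you like to see this numerically, here is a toy simulation of shadow noise. It models only photon shot noise plus a constant read noise (real sensors add several other noise sources, and the numbers here are invented), but the trend is the point:

```python
import math

READ_NOISE = 5.0   # electrons RMS per read (assumed constant)

# Signal levels in collected photoelectrons: shadow vs highlight
for label, signal in [("shadow", 20), ("midtone", 400), ("highlight", 8000)]:
    shot_noise = math.sqrt(signal)                      # photon shot noise
    total_noise = math.hypot(shot_noise, READ_NOISE)    # add in quadrature
    print(f"{label:9s}: SNR ~ {signal / total_noise:5.1f}")
# shadow: ~3, midtone: ~19, highlight: ~89 - the shadows are by far
# the noisiest part of the capture, and small pixels make it worse
```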
There are two technologies for digital image sensors: CCD and CMOS. While similar at a high level, there are significant differences. CCDs (Charge Coupled Devices) are the older technology and typically produce high quality, low-noise images. CMOS (Complementary Metal Oxide Semiconductor) arrays consume much less power and are far less expensive to fabricate, historically at some cost in noise and light sensitivity. This is why essentially all cellphone cameras use CMOS technology – the iPhone included. Medium format digital backs and many earlier high-end cameras used CCDs, though most current DSLRs have moved to refined CMOS designs.
The above only adds to the quality challenge for a cellphone camera: less expensive sensor technology and tiny pixels mean higher noise, lower quality, etc. in the produced image. To get a good exposure on digital, then, one might think you would want to ‘expose for the shadows’, adding exposure to the shadow areas to reduce noise. Unfortunately, the opposite is actually the case!
The reason is that in digital capture, once a pixel has been ‘filled up’ (i.e. received enough light that the level of that pixel is at the maximum – 255 for an 8-bit system) it can no longer hold any detail – that pixel is just clipped at pure white. No amount of post-processing (Photoshop, etc.) can recover detail that was clipped in the original capture. With under-exposed blacks, you can always apply noise reduction, raise levels, etc. and play with an ‘empty-ish container’.
That is why it’s so important to ‘expose for the highlights’ with digital – once you have clipped an area of the image at pure white, you can’t ever get that detail back – it’s burnt out. For film, the opposite is true, due to the negative process: if you didn’t get SOME exposure in the shadows, you can’t make something out of nothing – all you get is gray noise if you try to pump up blacks that have no detail. In film, you have to expose for the shadows; then you can always tone down the highlights.
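A minimal numeric illustration of the asymmetry (a sketch using NumPy – the values are arbitrary):

```python
import numpy as np

# True scene values, some brighter than the sensor can record
scene = np.array([10, 40, 120, 250, 300, 400], dtype=float)

captured = np.clip(scene, 0, 255)   # 8-bit capture clips at 255
print(captured)                     # [ 10.  40. 120. 250. 255. 255.]

# The 300 and 400 areas are now identical - no levels or curves tweak
# in post can ever separate them again. The dark values, in contrast,
# can still be lifted to reveal their relative detail:
print(np.clip(captured * 1.5, 0, 255))
# ...at the cost of amplifying whatever noise lives in the shadows
```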
So, with your cellphone camera, ALWAYS make sure you don’t blow out the highlights, even if you have to compromise with noisy blacks – you can use various techniques to minimize that issue in post-production.
Fixed vs Adjustable Aperture
Another major difference between DSLR cameras and cellphone cameras is the issue of fixed aperture. All cellphone cameras have a fixed aperture – i.e. no way to adjust the lens aperture as one does with the “f-stop” ring on a conventional camera lens. Essentially, the cellphone lens is “wide open” all the time. This is purely a function of cost and complexity. In conventional cameras, the aperture is controlled with a ‘variable vane’ system – a rather complex set of curved, thin pieces of metal that open and close a portal within the lens assembly to pass more or less light through the lens as a whole.
With a typical lens measuring 3” in diameter and 2” – 6” long, this is not a difficult mechanical design issue. A cellphone lens, on the other hand, is usually less than ¼” in diameter and less than ¼” deep. The mechanical engineering required to fit an aperture control mechanism inside would be very difficult and exorbitantly expensive.
Also, most cellphone manufacturers want the device to be very simple to operate, so adding another control that significantly affects camera operation was not high on the feature list.
A fixed aperture means several things. Normally a wide-open aperture means shallow depth of field, but with the relatively wide angle lenses employed by most cellphone manufacturers the depth of field is usually more than enough. It also means that for daylight exposures, the camera must adjust both ISO and shutter speed to avoid over-exposure.
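You can verify the ‘more than enough’ depth of field with the standard hyperfocal formula. The f/2.4 aperture is the published iPhone4S value; the circle of confusion is my assumption, scaled down from the usual full-frame 0.029mm by the crop factor:

```python
f = 4.28            # mm, iPhone4S focal length
N = 2.4             # the published (fixed) iPhone4S aperture
coc = 0.029 / 7.6   # mm, full-frame circle of confusion scaled by crop

hyperfocal = f * f / (N * coc) + f            # ~2000 mm
print(f"hyperfocal distance ~{hyperfocal/1000:.1f} m")
# Focused at ~2 m, everything from ~1 m to infinity is acceptably
# sharp - which is why phone shots rarely have blurred backgrounds.
```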
Exposure settings
On a digital camera, you can use either automatic or manual controls to set the exposure. Many cameras allow either “shutter priority” or “aperture priority.” What this means is that with shutter priority the user selects a shutter speed (or range of speeds), and the camera adjusts the aperture as is required to get the correct exposure. This setting does not allow the user to set the f-stop, so the depth of field on a photograph will vary depending on the light level.
With aperture priority, the user selects an f-stop setting, and the camera selects a shutter speed that is appropriate. With this setting, the user does not set the shutter speed, so care is required if slow shutter speeds are anticipated: camera and/or subject movement must be minimized.
On a film or digital camera, the user can set the ISO speed rating manually. This is not possible on most cellphone cameras. The speed rating of the sensor (ISO #) is really a ‘baseline’ for the exposure setting.
So how does the camera arrive at a base ISO speed and exposure without user input?
The exposure algorithm inside the cellphone camera software computes both the shutter speed and the ISO (the only two factors it can change, since the aperture is fixed) and arrives at a compromise that the camera software believes will make the best exposure. Here is where art and experience come into play – no cellphone hardware or software manufacturer has yet published its algorithms, nor do I ever expect this to happen. One must shoot lots and lots of exposures under controlled conditions to figure out how a given camera (and app) decides to set these parameters.
Like anything else, if you take the time to know your tools, you get a better result. From what I have observed by shooting several thousand frames with my iPhone4S, using about 10 different camera apps, the following is a very rough approximation of a typical algorithmic process:
- A very fast ‘pre-exposure’ of the entire frame is performed (averaging together all the pixels without regard to an exposure ‘area’) to arrive at a sense of the overall illumination of the frame.
- From this an initial ISO setting is assigned.
- Based on that ISO, then the ‘exposure area’ (usually shown in a box in the display, or sometimes just centered in the frame) is used to further set the exposure: the pixels within the exposure area are averaged, then, based on the ISO setting, a shutter speed is chosen to place the average light level at Zone 5 (middle gray value) of a standard exposure index. [Look for an upcoming blog on Zones if you are not familiar with this]
It appears that subsequent adjustments to this process can happen (and most likely do!) – again depending on a particular vendor’s choice of algorithm. For instance, if the sequence above yields a very slow final shutter speed (under 1/30 second), the base ISO sensitivity will likely be raised, as slow shutter speeds reveal both camera shake and subject movement.
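To make the sequence concrete, here is the guessed-at process expressed as a sketch – emphatically not Apple’s algorithm, just a toy model of the three steps above, with every constant invented for illustration:

```python
def auto_expose(frame_mean, meter_mean):
    """Toy two-stage auto-exposure, NOT Apple's algorithm.
    Inputs are 0-255 averages measured during a pre-exposure taken
    at an assumed reference of ISO 64 and 1/250 sec."""
    MIDDLE_GRAY = 118   # Zone 5 target on an 8-bit scale (assumed)

    # Step 1: whole-frame average assigns a base ISO
    iso = 64 if frame_mean > 100 else 200

    # Step 2: pick the shutter time that lands the metering-box
    # average on middle gray (exposure is linear in time and ISO)
    shutter = (1 / 250) * (MIDDLE_GRAY / meter_mean) * (64 / iso)

    # Step 3: slow shutters reveal shake, so trade noise for speed
    while shutter > 1 / 30 and iso < 1000:
        iso, shutter = min(iso * 2, 1000), shutter / 2

    return iso, shutter

print(auto_expose(frame_mean=180, meter_mean=150))  # sunny scene
print(auto_expose(frame_mean=12, meter_mean=4))     # dim interior
```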
Neither Apple nor any other manufacturer seems to publish the exact limits of their embedded camera’s ISO sensitivity or shutter speed range. Through many, many controlled tests, I have determined (and apparently so have others, as the published info I have seen on the web corroborates my findings) that the shutter speeds of the iPhone4S range from 1/15 sec to 1/2000 sec, while the ISO ranges from 64 to 1000.
There are apps that allow much longer exposures, and potentially a wider range of ISO values – I have not had time to run extensive tests with all the apps I have tried. Each camera app vendor chooses which of the features Apple exposes in the programmatic interface to implement (more on that in Part 4 of this series), so the potential variations are large.
Movement
As you have undoubtedly experienced, many photographs are spoiled by inadvertent movement of the subject, camera, or both. While sometimes movement is intended – and can make the shot artistically – most often this is not the case. With the very tiny sensor that is normal for any cellphone camera, the sensor is very often hungry for light, so the more you can give it, the better quality picture you will get.
What this means in practice is that when possible, in lower light conditions, brace the camera against a solid object, put it on a tripod (using an adapter), etc. Here again experience brings better results: our eyes have fantastic adaptive properties – cameras do not. When we walk into a mall from the sunny outdoors, within seconds we perceive the mall to be as well lit as the outside – even though in real terms the average light level is less than 1% of what it was outdoors!
However, our little iPhone is now struggling to get enough light to expose a picture. Outdoors, we might have found that at ISO 64 we were getting shutter speeds of 1/600 second; indoors we have dropped to ISO 400 at 1/15 second! Such slow shutter speeds will almost always show blurred movement, whether from a person walking, camera shake, or both.
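Those two exposure pairs let us check the ‘less than 1%’ claim directly, since scene brightness at a fixed aperture is proportional to 1/(shutter time × ISO):

```python
import math

outdoor = 1 / ((1 / 600) * 64)    # 1/600 sec at ISO 64
indoor = 1 / ((1 / 15) * 400)     # 1/15 sec at ISO 400

print(f"indoor light ~{indoor / outdoor:.1%} of outdoor")      # ~0.4%
print(f"difference: ~{math.log2(outdoor / indoor):.0f} stops") # ~8 stops
```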
Here is an example:

[Image] 1/20 sec @ ISO 400, camera handheld – you can see camera shake in the blurred lines of the glass panels (upper left), plus subject movement (her left leg is almost totally blurred).
Image format
Another big difference between DSLR cameras and cellphone cameras is the type of (and variations in) image capture format. Cellphone cameras (at this time) capture exclusively to compressed formats, usually .jpg. Often the user gets some limited control over the amount of compression and the resultant output resolution (I call it ‘shirt size formatting’ – as usually it’s S-M-L).
Regardless, the output is significantly compressed from the original capture. For instance, the 8 megapixel capture of the iPhone4S typically outputs a frame that is about 2.9MB in file size, in .jpg format. A semi-pro DSLR at the same megapixel rating will output about 48MB per frame when its RAW file is opened at 16 bits per color channel. The compression is done mainly to conserve memory space in the cellphone, as well as to greatly speed up transfers of data out of the phone.
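The arithmetic behind that gap, as a quick sketch:

```python
w, h = 3264, 2448                 # iPhone4S still frame
uncompressed = w * h * 3 / 1e6    # 8 bits x 3 channels -> ~24 MB
jpeg = 2.9                        # MB, typical iPhone4S .jpg

print(f"uncompressed 8-bit RGB: ~{uncompressed:.0f} MB")
print(f"jpg compression ratio:  ~{uncompressed / jpeg:.0f}:1")

# The DSLR figure above: the same 8MP opened at 16 bits per channel
print(f"16-bit RGB: ~{w * h * 3 * 2 / 1e6:.0f} MB")   # ~48 MB
```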
However, one loses much by storing pictures only in the compressed jpg format. Without digressing into the details of digital photography post-production, once a picture is locked into the compressed world many potential adjustments are lost. That is not to say you can’t get a good picture from a compressed file, only that a correct exposure up front is even more important, since you can’t employ the rescue methods that ‘camera RAW’ allows.
The process of jpg compression introduces some artifacts as well, and these can range from invisible to annoying, depending on the content. Again, there is nothing you can do about it in the world of cellphone cameras other than understand it, and try to mitigate it with careful composition and exposure when possible. The scope of this discussion precludes a detailed treatise, but suffice it to say that jpg artifacts become more noticeable at extremes of lighting, so low light, brilliant glare and other such situations may show them more than a normally lit scene.
Flash photography
The ‘flash’ on cellphones is nothing more than a little LED lamp that can ‘flash’ fairly rapidly. Yes, it allows one to take pictures in low light that would otherwise not be possible, but that’s about it. It has almost no similarity to a real strobe light used by DSLR cameras, whether built-in to the camera or a professional outboard unit.
The three big differences:
- Speed: the duration of a strobe flash is typically 1ms (1/1000 sec), while the iPhone LED ‘flash’ is about 100ms (1/10 sec) – a factor of 100x.
- Light output: DSLR strobe units put out MUCH more light than cellphone flash units. Exactly how much is not easy to measure, as Apple does not publish specs, and the LED unit works very differently from a strobe. But conservatively, a typical outboard flash unit (Nikon SB-900 for example) produces about 2,500 lumen-seconds of illumination, while the iPhone4S is estimated at about 25 lumen-seconds. That means a strobe flash is roughly 100x brighter (see the sketch after this list)…
- Color temperature: Commercial strobe lights are carefully calibrated to output at approximately 5500°K, while the iPhone (and similar cellphone flashes) are uncalibrated. The iPhone in particular seems quite blue, probably around 7000°K or so. Automatic white balance will try to fix this, but it often fails in two common scenarios: mixed lighting (for instance, flash in a room lit with tungsten lamps), and subjects without many black or white areas (which AWB circuits use to compute the white point).
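To see what the light-output difference means in practice, apply the inverse-square law to the (estimated) figures above – a back-of-envelope sketch:

```python
import math

strobe = 2500   # lumen-seconds, typical outboard unit (quoted estimate)
phone = 25      # lumen-seconds, estimated iPhone4S LED

# Illumination falls off with the square of distance, so equal-exposure
# distance scales with the square root of light output
reach_ratio = math.sqrt(strobe / phone)
print(f"strobe reaches ~{reach_ratio:.0f}x farther for the same exposure")
# i.e. if the LED flash is useful to ~1.5 m, the strobe covers ~15 m
```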
The bottom line: reserve the cellphone ‘flash’ for emergencies only. And it kills battery life…
Colorimetry, color balance and color temperature
The above terms are all different. Colorimetry is the science of human color perception. Color Balance is the relative balance of colors within a given object in a scene, or within a scene as a whole. Color Temperature (in photographic terms) is the overall hue of a scene based on the white reference associated with the scene.
To give a few examples, in reverse order from the introduction:
The color temperature of an outdoor shot at sunset will be lower (warmer) than that of a shot taken at high noon. The photographic standard for daylight is 5000°K (degrees Kelvin being the unit of color temperature); sunset is about 2500°K, indoor household lighting about 2800°K, while outdoor light in open shade (no direct sunlight) is often about 9000°K. The lower numbers look reddish-orange; the higher numbers look bluish.
Color balance can be affected by the illumination, the object itself, errors in the color response of the sensor, and other factors. Sometimes we need to correct it, as the original color balance can look unnatural and detract from the photograph – for instance, if human skin happens to be illuminated by a fluorescent light while the larger scene is lit with traditional tungsten (household) lamps, the skin will take on an odd greenish tinge.
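For a concrete sense of how automatic white balance works – and why it fails on scenes without neutral areas – here is a minimal ‘gray world’ correction in NumPy. It is one of the simplest AWB methods, and I make no claim that any phone actually uses it:

```python
import numpy as np

def gray_world_awb(img):
    """Scale R/G/B so the image averages to neutral gray.
    img: float array, shape (H, W, 3). Fails visibly when the scene
    legitimately isn't gray on average (e.g. a wall of red brick)."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255)

# A bluish cast (like an uncalibrated ~7000K LED flash): blue runs high
bluish = np.random.rand(4, 4, 3) * [180, 190, 230]
corrected = gray_world_awb(bluish)
print(corrected.reshape(-1, 3).mean(axis=0))  # roughly equal channels
```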
Colorimetry comes into play in how the Human Visual System (HVS) actually ‘sees’ what it is looking at. Many variables are involved, but for our purposes we need to understand that the relative light levels and contrast of the viewing environment can significantly affect what we see. So don’t try to critically judge your cellphone shots outdoors – you can’t. Wait until you are indoors, and review them on a monitor you can trust – under proper lighting conditions.
More on all these issues later; this is just a taste, with some quick guides on how cellphones differ from more traditional photography.
Motion Picture Photography vs Still Photography
Most of this discussion so far has focused on still photography as opposed to video (motion photography). All of the principles hold true for video in the same way as for still camera shots. A few things bear mentioning – again in the vein of differences between a traditional video camera and a cellphone camera.
The built-in video app on the iPhone4S records at 30 fps (frames per second); professional movie cameras traditionally run at 24 fps, and some after-market apps let the user change the rate, but for our discussion we’ll stick with the built-in 30fps. The important bit to remember is that the frame rate caps the exposure time at 1/30 sec per frame – and since some time is needed between frames to read the image off the sensor, the effective shutter speed is actually a bit faster than that.
This has two important by-products: the shutter speed is now essentially fixed, and since the aperture is also fixed, the only thing left for the camera app to adjust for exposure is the ISO. This means less latitude for varying lighting conditions. The other issue is that, since even 1/30 sec is a fairly slow shutter speed, camera movement is a very, very bad thing. Keep your movements slow and smooth, not jerky. Brace yourself whenever possible. Fast moving objects in the frame will be blurred – there is nothing you can do about that.
Another issue of concern with “cellphone cinematography” is the actual sensor area used (which affects noise and light sensitivity). For still photography the full sensor is used – in the case of the iPhone4S that is 3264 x 2448 – but in video mode the resolution drops to 1920 x 1080. This is a significant decrease – from 8 megapixels to 2 megapixels per frame! There are a number of reasons for this:
- The HD video standard is 1920 x 1080
- Using less than the full sensor allows for vibration reduction to take place in software – as the image jiggles around on the sensor, fast and complex arithmetic can move the offset frames back into place to help reduce the effects of camera shake.
- The data rate from the sensor is reduced: no current cellphone CPU and memory system could keep up with full motion video at 8 megapixels per frame (see the sketch after this list).
- The resultant file size is manageable.
- The compression engine can keep up with the output from the camera – again, the iPhone uses H.264 as the video compression codec for movies, and that process uses a lot of computer power – not to mention drains the battery faster than the sun melts butter on hot pavement. Yes, the iPhone will give you a full day of charge if you are not on WiFi, are mostly on standby or just making some phone calls. Want to drain it flat in under 2 hours? Just start shooting video…
- And, of course, if you are shooting in low light and turn on the ‘torch’ (the little LED flash that stays on for videography) then your battery life can be measured in minutes! Use that sparingly, only when you have to – it doesn’t actually give that much light, and being so close to the lens, causes some strange lighting effects.
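To make the data-rate bullet concrete, here is a rough sketch, assuming about 12 bits per pixel coming off the sensor:

```python
# Rough uncompressed sensor data rates at 30 fps
fps = 30
full_sensor = 3264 * 2448 * 1.5 * fps / 1e6   # 1.5 bytes/pixel -> MB/s
video_crop = 1920 * 1080 * 1.5 * fps / 1e6

print(f"full sensor: ~{full_sensor:.0f} MB/s")   # ~360 MB/s
print(f"1080p crop:  ~{video_crop:.0f} MB/s")    # ~93 MB/s
# ...and H.264 still has to squeeze that down to a few MB/s of storage
```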
BTW, even still photography uses a ton of battery power. I have surprised myself more than once by walking around shooting for an hour, then noticing my battery is at 21%. More than anything else, this motivated me to get a high quality car charger…
Summary
Ok, that’s it for this section – hope it’s provided a few useful bits of information that will help you make better cellphone shots. Here’s a very brief summary of tips to take away from the above discussion:
- Give your pictures as much light as you can
- Hold camera still, brace on solid object if at all possible
- Expose for the highlights (i.e. don’t let them get overexposed or ‘blown out’)
- Don’t use the built-in flash unless absolutely necessary