This chapter of my series on the iPhone4S attempts to share what I have discovered on the actual hardware device – restricted to the camera in the iPhone. While this is specific to the iPhone, this is also representative of most high quality cellphone cameras.
First, a few notes on how I went about this, and some acknowledgements for those that discovered these bits first. All of the info I am sharing in this post was derived from the public internet. Where feasible I have tried to make direct acknowledgment of the source, but the formatting of this blog doesn’t always allow that without confusion (footnotes not supported, etc.) so I will insert a short list of sources just below. Although this info was pulled from the web, it has taken a LOT of research – it is not easy to find, and often the useful bits are buried in long sometimes boring epistles on the entire iPhone – I want to focus just on the camera.
Apple, more than most manufacturers, is an incredibly secretive company. They put highly onerous stipulations on all their vendors – in terms of saying anything about anything at all that concerns work they do for Apple. Apple publishes only the most vague of specifications, and often that is not enough for a photographer that wants to get the most from his or her hardware. This policy will likely never change, so the continued efforts of myself and others will be required to unearth the useful bits about the device so we can use it to its fullest potential.
Here are some of the sources/people that published information on the iPhone4S that were used as sources for this article:
whatdigitalcamera.com – Nigel Atherton
dvxuser.com – Barry Green
campl.us – Jonathan
Often the only way to finally arrive at a somewhat accurate idea of what made the iPhone tick was to study the ordering patterns of Chinese supply companies (using publicly available financial statements); review observations and suppositions from a large number of commentators and look for sufficient agreement; find labs that tore the phones apart and reverse-engineered them; and use my decades of experience as a scientist and photographer to intuit the likelihood of a particular way of doing things. It all started with a bit of curiosity on my part – I had no idea what a lengthy treasure hunt this would turn out to be. The research for this chapter has taken several months (after all, I have a day job…) – and then some effort to weed through all the data and assemble what I believe to be as accurate an explanation of what goes on inside this little device as possible.
One important thing to remember: Apple is in the habit of sourcing parts for their phone from two or more suppliers – a generally accepted business practice, since a single-source supplier could be a problem if that company had either financial or physical difficulties in fulfillment. This means that a description, photos, etc. of one vendor’s parts may not hold true for all iPhone4S models or inventory – but the general principles will be accurate.
This post will be presented in three parts: the hardware specs first, followed by details of the construction of the iPhone camera system (with photos of the insides of the phone/camera), then some examples of photos taken with various models of the iPhone and some DSLR cameras for comparison – this to show the relative capability of the iPhone hardware. The software apps that are the other half of the overall imaging system will be discussed in the next post.
iPhone4S detailed specs
Camera Sensor: OmniVision OV8830 or Sony IMX105
Sensor Type: CMOS-BSI (Complementary Metal Oxide Semiconductor – Backside Illumination)
Sensor Size: 1/3.2″ (4.54 x 3.42 mm)
Pixel Size: 1.4 µm
Optical Elements: 5 plastic lens elements
Focal Length: 4.28 mm
Equivalent Focal Length (ref. to 35mm system) – Still: 32 mm
Equivalent Focal Length (ref. to 35mm system) – Video: 42 mm
Aperture: f/2.4
Angle of View – Still: 62°
Aspect Ratio – Still: 4:3
Angle of View – Video: 46°
Aspect Ratio – Video: 16:9
Shutter Speed – Still: 1/15 sec – 1/2000 sec
ISO Rating: 64 – 800
Sensor Resolution – Still: 3264 x 2448 (8 MP)
Sensor Resolution – Video: 1920 x 1080 (1080P HD)
External Size of Camera System Module: 8.5 mm W x 8.5 mm L x 6 mm D
- Video Image Stabilization
- Temporal Noise Reduction
- Hybrid IR Filter
- Improved Automatic White Balance (AWB)
- Improved light sensitivity
- Macro focus down to 3”
- Improved Color Accuracy
- Improved Color Uniformity
Discussion on Specifications
Sensor – “Improved Light Sensitivity”
We’ll start with some basics on the heart of the camera assembly, the sensor. There are two types of solid-state devices that are used to image light into a set of electrical signals that can eventually be computed into an image file: CCD (Charge Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor). CCD is an older technology, but produces superior images due to the way it’s constructed. However, these positive characteristics come at the cost of more outboard circuitry, higher power consumption, and higher cost. CCD arrays have therefore been used mostly in mid-to-high-end dedicated cameras. CMOS offers lower cost, much lower power consumption and requires fewer off-sensor electronics. For these reasons all cellphone cameras, including the iPhone, use CMOS sensors.
For those slightly more technically inclined, here is a good paragraph from Teledyne.com that sums up the differences:
Both types of imagers convert light into electric charge and process it into electronic signals. In a CCD sensor, every pixel’s charge is transferred through a very limited number of output nodes (often just one) to be converted to voltage, buffered, and sent off-chip as an analog signal. All of the pixel can be devoted to light capture, and the output’s uniformity (a key factor in image quality) is high. In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise-correction, and digitization circuits, so that the chip outputs digital bits. These other functions increase the design complexity and reduce the area available for light capture. With each pixel doing its own conversion, uniformity is lower. But the chip can be built to require less off-chip circuitry for basic operation.
Apple claims “73% more light” for the sensor used in the iPhone4S. Whatever that means… 73% of what?? Anyway, here is what, after some web-diving, I think is meant: 73% more light is converted to electricity as compared to the sensor used in the iPhone4. The reason is an improvement in the design of the CMOS sensor. The new device uses a technology called “Back Side Illumination”, or BSI. To understand this we must briefly discuss the way a CMOS sensor is built.
One of the big differences between CCD and CMOS sensors is that the CMOS chip has a lot of electronics built right into the surface of the sensor (this pre-processes the captured light and digitizes it before it leaves the sensor, greatly reducing the amount of external processing that must be done with additional circuitry in the phone). In the original CMOS design (FSI – or Front Side Illumination), the light had to pass through the layers of transistors, etc. before striking the diode surface that actually absorbs the photons and converts them to electricity. With BSI, the pixel is essentially “turned upside down” so the light strikes the back of the pixel first, increasing the device’s sensitivity to light and reducing cross-talk to adjacent pixels. Here is a diagram from OmniVision, one of the suppliers of sensors for iPhone products:
Sensor – Hybrid IR Filter
Although Apple has never clarified what is meant by this term, some research gives us a fairly good idea of this improvement. CMOS sensors are sensitive to a wide range of light, including IR (infrared) and UV (ultraviolet). Since both of these wavelengths lie outside the visible spectrum, they don’t contribute anything to a photograph, but they can detract from it: both IR and UV cause problems that result in diffraction, color non-uniformity and other issues in the final image.
For cost and manufacturing reasons, we believe that prior to the iPhone4S the filter used was a simple thin-film IR filter (essentially another layer deposited on top of the upper-most layer of the sensor). These ‘thin-film’ IR filters have several down-sides: due to their very thin design, they are subject to diffraction as the angle of the incident light on the surface of the filter/sensor changes – this leads to color gradations (non-uniformity) over the area of the image. Previously reported “magenta/green circle” issues with some images taken with the iPhone4 are likely due to this.
Also, the thin-film filter employed in the iPhone4 offered only a partial reduction in IR, not total by any means. This has been proven by taking pictures of IR laser light with an iPhone4 – the laser light is clearly visible, which would not be the case if the IR filter were efficient. Since silicon (the base material of chips) is transparent to IR light, the extraneous IR rays bounce around off the metal structures that make up the CMOS circuitry, adding reflections to the image and affecting the color balance. UV light, while not visible to the human eye, is absorbed by the light-sensitive diode and therefore adds noise to the image.
It appears that a proper ‘thick-film’ combination IR/UV filter has been fitted to the iPhone4S camera assembly, right on top of the sensor assembly. The proof of a more effective filter was a test photograph of the same IR lasers as the iPhone4 shot – and no laser light was visible on an iPhone4S. The color balance does appear to be better, with more uniformity, and less change of color gradation as the camera is moved about its axis.
A good test to try (and this is useful for any camera, not just a cellphone) is to evenly illuminate a large flat white wall (easier said than done, BTW – use a good light meter to ensure truly even illumination; this usually requires multiple diffuse light sources). You will need to put some small black targets on the wall (print a very large “X” on white paper and stick it to the wall in several places) so the auto-focus in the iPhone will work (the test needs to be focused on the wall for an accurate result). Then take several pictures, starting with the camera perfectly parallel to the wall, both horizontally and vertically. Then angle the camera very slightly (only a few degrees) and take another shot. Repeat this a few times, angling the camera both horizontally and vertically. This really requires a tripod. Ensure that the white wall is the only thing captured in the shot (don’t turn the camera so much that the edge of the wall becomes visible). Stay back as far as possible from the wall – unfortunately this won’t be more than a few feet due to the wide-angle lens used on the iPhone – as this will give more uniform results.
Ideally, all the shots should be pure white (except for the black targets). Likely, you will see some color shading creep in here and there. To really see this, import the shots into Photoshop or similar image application, enlarge to full screen, then increase the saturation. If the image has no color in it, increasing the saturation should produce no change. If there are color non-uniformities, increasing the saturation will make them more visible.
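If you prefer to script the saturation-boost step rather than do it by hand in Photoshop, the idea is simple to express. Here is a minimal sketch in Python/NumPy (my own illustration, not any tool Apple or Adobe ships): it pushes each pixel away from its own gray value, so a truly neutral wall stays unchanged while any faint color cast becomes several times more visible.

```python
import numpy as np

def boost_saturation(img, factor=3.0):
    """Exaggerate color casts: push each pixel away from its own gray value.

    img: float array of shape (H, W, 3), values in [0, 1].
    A perfectly neutral (gray/white) image is unchanged; any color
    non-uniformity becomes roughly `factor` times more visible.
    """
    gray = img.mean(axis=2, keepdims=True)      # per-pixel luminance proxy
    return np.clip(gray + factor * (img - gray), 0.0, 1.0)

# A neutral "white wall" with a faint magenta tint in one corner:
wall = np.full((4, 4, 3), 0.9)
wall[0, 0] = [0.92, 0.88, 0.92]                 # subtle magenta cast
boosted = boost_saturation(wall, factor=4.0)    # the cast is now obvious
```

Running this on the wall-test shots (loaded as float RGB arrays) makes any shading creep jump out immediately.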
The claims for “improved color accuracy” and “improved color uniformity” are both most likely due to this filter as well.
Sensor – Improved Automatic White Balance (AWB)
The iPhone4 was notorious for poor white balance, particularly under fluorescent lighting. The iPhone4S shows notable improvement in this area. While AWB is actually a function of software and the information received from the sensor, it is discussed here as well – more on this function will be introduced in the next section on camera apps and basic iPhone imaging software. The improved speed of this sensor, along with the better color accuracy discussed above, contributes to more accurate data being supplied to the software so that better white balance can be achieved.
Sensor – Full HD (1080P) for video
A big improvement in the iPhone4S is the upgrade from 720P video to 1080P. The resolution is now 1920×1080 at the highest setting, allowing full HD to be shot. Since this resolution uses only a portion of the image sensor’s overall resolution (3264 x 2448), another feature is made possible – video image stabilization. One of the largest impediments to high quality video from cellphones is camera movement, which is often jerky and materially detracts from the image. To help ameliorate this problem, subsequent frames are compared and the edges of the frames matched to each other – this essentially offsets the camera movement on a frame-by-frame basis.
In order for this to be possible, the image sensor must be larger than the image frame – which is true in this case. This allows for the image of the scene to be moved around and offset so the frames can all be stacked up on top of each other, reducing the apparent image movement. There are side-effects to this image-stabilization method: some softness results from the image processing steps, and this technique works best for small jerky movements such as result from hand-held videography. This method does not really help larger movements (car, airplane, speedboat, roller-coaster).
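The frame-matching step can be sketched in a few lines of code. The snippet below is my own toy illustration (Apple has never published its stabilization algorithm) using phase correlation, a standard technique for estimating the translation between two frames; a real stabilizer would then shift each frame by the opposite amount and crop away the spare border.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer (row, col) translation of frame_a relative to
    frame_b using phase correlation -- one standard way an electronic
    stabilizer can measure frame-to-frame camera shake."""
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Fake a shaky camera: frame 2 is frame 1 shifted down 3 px and right 5 px
rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))
print(estimate_shift(frame2, frame1))         # → (3, 5)
```

The stabilizer would apply the negated shift to frame 2, stacking it back on top of frame 1 – which is why the sensor must be larger than the output frame: the correction needs spare pixels around the edges.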
Sensor – Temporal Noise Reduction
In a similar fashion to the discussion about Automatic White Balance above, the process of Temporal Noise Reduction is a combination of both sensor electronics and associated software. More details of noise reduction will be discussed in the upcoming section on imaging software. Still images can only take advantage of spatial noise reduction, while moving images (video) can also take advantage of temporal noise reduction – a powerful algorithm to reduce background noise in video.
It must be noted that high quality temporal noise reduction is a massively intensive computational task for real-time high-quality results: purpose built hardware devices used in the professional broadcast industry cost tens of thousands of dollars and are a cubic foot in size… not something that will fit in a cellphone! However, it is quite amazing to see what can be done with modern microelectronics – the NR that is included in the iPhone4S does offer improvements over previous models of video capture.
Essentially, what TNR (Temporal Noise Reduction) does is to find relatively ‘flat’ areas of the image (sky, walls, etc.); analyze the noise by comparing the small background deviations in density and color; then average those statistically and filter the noise out using one of several mathematical models. Because of this, TNR works best on relatively stationary or slow-moving images: if the image comparator can’t match up similar areas of the image from frame to frame the technique fails. Often that is not important, as the eye can’t easily see small noise in the face of rapidly moving camera or subject material. TNR, like image stabilization, does have side-effects: it can lead to some softening of the image (due to the filtering process) – typically this is seen as reduction of fine detail. One way to test this is to shoot some video of a subject standing in front of a screen door or window screen in relatively low light. Shoot the same video again in bright light. (More noise is apparent in low light due to the nature of CMOS sensors). You will likely see a softening of the fine detail of the screen in the lower light exposure – due to the TNR kicking in at a higher level.
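The averaging idea behind TNR can be illustrated with a toy recursive filter – again, my own sketch of the general technique on a static scene, not the actual iPhone implementation (which adds motion compensation and much more sophisticated statistics).

```python
import numpy as np

def temporal_denoise(frames, strength=0.8):
    """Toy temporal NR: blend each frame toward a running average.

    Works because true scene detail repeats from frame to frame while
    sensor noise is random, so averaging cancels the noise. This sketch
    assumes a static scene; real TNR must first align moving areas.
    """
    avg = frames[0].astype(float)
    out = [avg.copy()]
    for f in frames[1:]:
        avg = strength * avg + (1.0 - strength) * f   # recursive (IIR) average
        out.append(avg.copy())
    return out

rng = np.random.default_rng(1)
scene = np.full((32, 32), 0.5)                 # a flat gray wall
noisy = [scene + 0.05 * rng.standard_normal(scene.shape) for _ in range(30)]
cleaned = temporal_denoise(noisy)
# The per-frame noise (standard deviation) drops markedly after filtering
```

The softening side-effect mentioned above is visible in this model too: if the scene moved between frames, the same averaging would smear fine detail instead of cancelling noise.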
Overall however, this is a good addition to the arsenal of tools that are included in the iPhone4S – particularly since most people end up shooting both stills and videos in less than ideal lighting conditions.
Sensor – Aspect Ratio, ISO Sensitivity, Shutter Speed
The last bits to discuss on the sensor before moving on to the lens assembly are the native aspect ratio and the sensitivity to light (ISO and shutter speed). The base sensor is a 4:3 aspect ratio (3264 x 2448) – the mathematically correct expression is 1:1.33, but common usage dictates the integer relationship of 4:3. This is the same aspect ratio as older “standard definition” television sets. All of the still pictures from the iPhone4S have this as their native size – of course this is often cropped by the user or various applications. As a note, the aspect ratio of 35mm film is 1:1.5 (3:2), while the aspect ratio of the common 8”x10” photographic print is 1:1.25 (5:4) – so there will always be some cropping or uneven borders when moving between digital photos, prints, and film.
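For reference, the crop arithmetic between these aspect ratios is easy to work out. This little helper (my own illustration) computes the largest centered crop of the native 4:3 frame at the film (3:2) and print (5:4) ratios:

```python
def crop_to_ratio(w, h, target_w, target_h):
    """Largest centered crop of a w x h frame with aspect target_w:target_h."""
    if w * target_h > h * target_w:           # frame too wide: trim width
        return (h * target_w // target_h, h)
    return (w, w * target_h // target_w)      # frame too tall: trim height

# Native iPhone4S still frame is 4:3 (3264 x 2448)
print(crop_to_ratio(3264, 2448, 3, 2))        # 35mm-film 3:2 → (3264, 2176)
print(crop_to_ratio(3264, 2448, 5, 4))        # 8"x10"-print 5:4 → (3060, 2448)
```

Note the direction of the crop flips: going to 3:2 trims height (about 11% of the pixels), while going to 5:4 trims width.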
Shutter speed, ISO and their relationships were discussed in the previous section of this blog series: Contrast (differences between DSLR & Cellphone Cameras). However, this deserves a brief mention here in regards to how the basic sensor specifications factor into the ranges of ISO and shutter speed.
ISO rating is basically a measure of the light sensitivity of a photosensitive material (whether film, a CMOS sensor, or an organic soup). [Yes, there are organic broths that are light sensitive – interesting possibilities await…] With the increased sensitivity of the sensor used in the iPhone4S, the base ISO rating should be improved as well. Since Apple does not give details here, the results are from testing by many people (including myself). The apparent lower rating (least sensitive end of the ISO range) has moved from 80 [iPhone4] to 64 [iPhone4S]. The upper rating appears to be the same on each – 800. There is some discrepancy here in the literature – some have reported an ISO of 1000 being shown in the EXIF data (the metadata included with each exposure, the only way to know what occurred during the shot), but others, including myself, have been unable to reproduce that finding. What is for sure is that the noise in the picture is reduced in the iPhone4S as compared to the iPhone4 (for identical exposures of the same subject taken at the same time under identical lighting conditions).
Shutter speed is, other than base ISO rating, the only way that the iPhone has to modulate the exposure – since the aperture of the lens is fixed. The less time that the little ‘pixel buckets’ in the CMOS sensor have to accumulate photons, the lower the exposure. Since in bright light conditions more photons arrive per second than in low light conditions, a faster shutter speed is necessary to avoid over-exposure – certain death to a good image from a digital sensor. For more on this please read my last post – this is discussed in detail.
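The interplay of the fixed aperture, shutter speed, and ISO can be put into numbers with the standard exposure value (EV) formula. The short calculation below (my own worked example from the published specs, not Apple data) shows the total metering range the spec sheet implies – roughly 10–11 stops between the dimmest and brightest scenes the camera can expose correctly:

```python
import math

def exposure_value(f_number, shutter_s, iso):
    """Standard EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

# With the iPhone4S aperture fixed at f/2.4, only shutter speed and ISO
# can move exposure. The extremes of the published ranges:
bright = exposure_value(2.4, 1/2000, 64)   # fastest shutter, lowest ISO
dim    = exposure_value(2.4, 1/15, 800)    # slowest shutter, highest ISO
print(round(bright - dim, 1), "stops of metering range")
```

A DSLR, by contrast, can also open or close its aperture and typically offers both longer and shorter shutter speeds, giving a much wider usable range.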
The last thing we need to discuss is, however, very, very important – and it affects virtually all current CMOS-based video cameras, not just cellphones. This concerns the side-effects of the ‘rolling shutter’ technology that is used on all CMOS-based cameras (still or video) to date. CCD sensors use a ‘global shutter’ – i.e. a shuttering mechanism that exposes all the pixels at once, in a similar fashion to film behind a mechanical shutter. CMOS sensors, however, “roll” the exposure from top to bottom of the sensor – essentially telling one row of pixels at a time to start gathering light, then stop at the end of the exposure period, in sequence. Without getting into the technical details of why this is done, at a high level it allows more rapid readout of image information from the sensor (one of the benefits of CMOS as compared to CCD), uses less power, and avoids overheating of the sensor under continuous use (as in video).
The problem with a rolling shutter arises when either the camera or the subject is moving rapidly: substantial visual artifacts are introduced, called skew, wobble and partial exposure. There is virtually no way to resolve these issues. You can minimize them with tripods, etc. in some cases – but of course that is not practical for hand-held cellphone shots! Rather than reproduce an already excellent short explanation of this topic, please look at Barry Green’s post on this issue, complete with video examples.
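To see why a rolling shutter skews moving subjects, it helps to simulate one. In this toy model (purely my own illustration), each sensor row samples the scene slightly later than the row above it, so a vertical bar panning sideways is recorded as a diagonal:

```python
import numpy as np

def rolling_shutter(scene_at, n_rows=8, readout_time=1.0):
    """Simulate rolling-shutter capture: row r is sampled at a later time
    than row 0, so anything moving horizontally comes out slanted.

    scene_at(t) -> 2D array: the instantaneous scene at time t.
    """
    rows = []
    for r in range(n_rows):
        t = readout_time * r / n_rows        # each row reads out a bit later
        rows.append(scene_at(t)[r])          # row r of the scene at time t
    return np.array(rows)

def scene_at(t):
    """A single vertical white bar panning right one column per time unit."""
    img = np.zeros((8, 8))
    img[:, int(t * 8) % 8] = 1.0             # bar position drifts with time
    return img

frame = rolling_shutter(scene_at)
# The vertical bar is captured as a diagonal: by the time row r is read,
# the bar has already moved r columns to the right.
```

This is exactly the skew artifact: vertical edges of fast-panning shots lean over, because the bottom of the frame was captured measurably later than the top.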
The bottom line is that, as I have mentioned before, keep camera movement to an absolute minimum to experience the best performance from the iPhone4S video camera.
Lens – the new 5-element version
The iPhone4S has increased the number of optical elements in the lens assembly to 5, up from the 4 elements in the iPhone4. Current research indicates that (following Apple’s usual policy of dual-sourcing all components of their products) Largan Precision Co., Ltd. and Genius Electronic Optical Co., Ltd. are supplying the 5-element lens assembly for the iPhone4S.
The optical elements are most likely manufactured from plastic optical material (polystyrene and PMMA – a type of acrylic). Although plastic lenses have many issues (coatings don’t stick well, few plastics have great optical properties, they have a high coefficient of thermal expansion, high index variation with temperature, and less heat resistance and durability, among others), there are two huge mitigating factors: much lower cost and weight than glass, and the ability to be formed into complex shapes that glass cannot. Both of these factors are extremely important in cellphone camera lens design, to the point that they outweigh any of the disadvantages.
The complex shapes are aspheres, which are difficult to fabricate out of glass, and afford much finer control over aberrations using fewer elements – an absolute necessity when working with very little package depth. A modern prime lens (which is what the iPhone uses – i.e. a fixed focal length lens) for a DSLR usually has 4-6 elements, similar to the iPhone lens with 5 elements – but the depth of the lens barrel that holds all the elements is often at least 2” if not greater. This depth is required for the optical alignment of the essentially spherical ground-glass elements.
The iPhone lens assembly (similar to many quality cellphone camera lenses) is barely over ¼” in length! Only the properties of highly malleable plastic, which allow the molding of aspherically shaped lenses, make this possible. Optical system design, particularly for high quality photography, is a massive subject by itself – I will make no attempt here to delve into that. However, the basic reason for multiple-element lenses is that they allow the lens system designer to mitigate basic flaws in one lens material by balancing it with another element made from a different material. Also, there are physics and materials science issues that have to do with how light is bent by a lens that are often greatly improved with a system of multiple elements.
The iPhone lens is a fixed-aperture, fixed-focal-length lens. See the previous post for details on how this lens compares to a similar lens that would be used for 35mm photography (it’s equivalent to a 32mm lens on a 35mm camera, even though this actual lens’ focal length is only 4.28mm).
The aperture is fixed at f/2.4 – a fairly fast lens. Both the cost and the manufacturing complexity of an adjustable aperture on a lens this small make one impossible to consider.
The AOV (Angle of View) of the lens for still photography is 62° – this changes to only 46° for video mode. The reduction in AOV is due to the smaller size of the image for video (2MP instead of 8MP). Remember that AOV is a function of the actual focal length of the lens and the size of the sensor. It’s the effective sensor area that matters, not how big the physical sensor may be. The smaller effective size of the video sensor – factored against the same focal length lens – means that the AOV gets smaller as well.
Since the effective sensor size is different, this also alters the equivalent focal length of the lens (as referenced to a 35mm system), in this case the focal length increases to an effective value of 42mm.
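These figures can be sanity-checked with basic lens geometry. The sketch below (my own calculation from the published sensor dimensions, not Apple data) applies the standard angle-of-view and crop-factor formulas; the computed 35mm-equivalent focal length lands right at Apple’s ~32mm figure, while the published 62° AOV appears to sit between the horizontal (~56°) and diagonal (~67°) values, suggesting it follows a different measurement convention.

```python
import math

def angle_of_view_deg(sensor_dim_mm, focal_mm):
    """Standard AOV formula: 2 * atan(d / 2f) for sensor dimension d, focal f."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

def equiv_focal_35mm(focal_mm, sensor_diag_mm, full_frame_diag_mm=43.27):
    """Scale the focal length by the ratio of the 35mm frame diagonal
    (43.27 mm) to the diagonal of the sensor area actually in use."""
    return focal_mm * full_frame_diag_mm / sensor_diag_mm

f = 4.28                                 # actual iPhone4S focal length, mm
still_diag = math.hypot(4.54, 3.42)      # full 1/3.2" sensor diagonal, 4:3
print(round(equiv_focal_35mm(f, still_diag), 1))   # ≈ 32.6 mm equivalent
print(round(angle_of_view_deg(4.54, f), 1))        # horizontal AOV, ~56°
print(round(angle_of_view_deg(still_diag, f), 1))  # diagonal AOV, ~67°
```

Running the same crop-factor formula backwards against the 42mm video figure implies an effective sensor diagonal of about 4.4mm for video – consistent with the 16:9 crop using only part of the chip.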
The bottom line is that in video mode, the iPhone camera sees less of a given scene than when in still camera mode.
Discussion on Hardware
This next portion shows a bit of the insides of an actual iPhone4S. While the focus of this entire series of posts is just the camera of the iPhone4S, a few details about the phone in general will be shown. The camera is not totally isolated within the ecosystem of the iPhone: a large portion of the actual usability of this wonderful device is assisted by the powerful dual-core A5 CPU, the substantial on-board memory, the graphics processors, the high-resolution display, etc.
[The following photos and disassembly process from the wonderful site iFixit.com – which apparently loves to rip to pieces any iDevice they can get their hands on. Don’t try this at home – unless you are very brave, have good coordination and tools – and have no use for a warranty or a working iPhone afterwards…]
You want one anyway? Order one here.
As promised in the introduction, here are a few sample photos showing the differences between various models of the iPhone as well as several DSLR cameras. Many more such comparisons as well as other example photos will be introduced in the next section of this series – Camera Software.
(Thanks to Jonathan at campl.us for these great comparison shots from his blog on iPhone cameras)
Well, this concludes this chapter of the blog on the iPhone4S. Hopefully this has shed some light on how the hardware is put together, along with some further details on the technical specifications of this device. A full knowledge of your tools will always help in making better images, particularly in challenging situations.
Stay tuned for the next chapter, which will deal with all the software that makes this hardware actually produce useful images. Both the core software of the Apple operating system (iOS 5.1 at the time of this writing) and a number of popular camera apps (both still and video) will be discussed.