
Take Control of your Phone

September 28, 2020 · by parasam

A ton has been written on the addictive qualities of the smartphone, its intrusion into our daily lives, and the double-edged sword of “free” apps. I won’t repeat any of that here; instead I’ll offer a short set of solutions to make your phone work for you, instead of for the platforms, ad agencies and data resellers that all too often have made your attention the product.

If you have not seen it, the movie “The Social Dilemma” is a good summary of the issues: https://www.netflix.com/za/title/81254224

The core of the situation is that our phones (and to a lesser extent our tablets and computers) have become a tool for a relatively few large firms to command and hold our attention, which is then used to present the ads that fuel this economic ecosystem. You may have heard terms such as “data is the new oil” or “your data is for sale”. These aphorisms miss the point: what is for sale is your attention; the underlying data about your behavior and what is likely to hold your attention is merely the mechanism.

The software that grabs, and then holds, our attention has two main aspects: the User Experience / User Interface (UX/UI) of the device itself (iPhone, Android, etc.), and the design of individual apps (particularly social media such as Facebook, Instagram, Twitter, etc.).

This post only deals with the former: the things one can easily do to reduce the actions, noise, and other programmatic functions of your phone that are designed to trigger a response (to pick up your phone and interact).

I have used the Apple ecosystem (iOS) as the example here, mainly as it is well-known and consistent, while there are a large number of variations on the Android OS, with each hardware manufacturer often tweaking it a bit. However, the principles are exactly the same, and in most cases my suggestions can be duplicated on Android.

Notifications

This is the animal you need to tame. The blinking, dinging and buzzing that says “Look At Me“; the little red badges that induce the anxiety of FOMO (Fear Of Missing Out)…

To a lesser extent, the layout and organization of your apps, along with a few other adjustments, also shape how often the phone pulls you into interacting with it.

Using iOS as the example, open Settings and tap Notifications. You will see a list of all your apps. Turn off ALL your notifications [the switch that says Allow Notifications]. As a deterrent, you cannot switch them all off at once; you must turn each one off individually. I recommend going through the entire list so you don’t miss any. Turning certain app notifications back on then becomes a conscious decision.

When it comes to turning a notification back on, think hard about what you absolutely have to see without first allowing yourself to be in control – when do you want to check, what do you want to check, why do you want to check. I recommend only turning notifications on for apps that tell you that people want to connect with you, not things (such as social media, news sites, etc.). For example, here is the list of what is turned on in my phone:

  • Phone
  • FaceTime
  • Messages
  • Signal (an encrypted messaging app)
  • WhatsApp

That’s all! In addition, for the few apps that you do allow to draw your attention, you can modify the behavior of the notification to further lower the level of disturbance. Once the Allow Notifications switch is turned on, the choices listed under Alerts are Lock Screen (which allows the Notification to appear even if your phone is asleep); Notification Center (showing the Alert there); and Banners (which show up at the top of your screen when you are looking at another app).

As a suggestion and example, for WhatsApp I have Lock Screen and Notification Center turned on, but Banners turned off. Here in South Africa WhatsApp is the primary means of text communication, so I do depend on seeing that even on my Lock Screen to know when another person is trying to reach me. But it’s not so vital that my attention needs to be dragged away from answering an e-mail with a banner interrupting me that someone wants to chat on WhatsApp.

If you turn on Banners, I suggest you always use Temporary, as this makes the Banner go away after a few seconds. Otherwise you must further divert your attention to manually dismiss the Banner.

The next group of alert behaviors consists of two switches: Sounds and Badges. Again, be sparing in your use of sound, as it can be quite distracting. I only have Sounds turned on for the Phone app; everything else I can see the next time I look at my phone. Badges are insidious: that little red circle with a count of everything you haven’t yet given your attention to. Once you open an app you will see what you haven’t dealt with anyway – turn Badges off!

The last section (Options) has one important setting: Show Previews. This has three possibilities: Always (the default), When Unlocked, and Never. A preview shows the first few lines of a message (e-mail, WhatsApp, etc.) – and if Always is selected, then even on your Lock Screen (for those apps you have set to alert you there) messages that may be private are displayed for anyone who can see your phone. I set this to either When Unlocked or Never, depending on the app. The remaining setting (Notification Grouping) is fine left on Automatic.
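
To pull the above together, here is the end state I recommend, summarized as a small Python structure. This is purely an illustrative checklist – iOS has no API for applying these values, so each entry corresponds to switches you set by hand under Settings > Notifications. Per-app values not spelled out in the text above (e.g. the banner setting for Phone) are my assumptions.

```python
# Illustrative checklist only -- these switches must be set by hand
# under Settings > Notifications; iOS exposes no API for this.
recommended = {
    # People-oriented apps: allowed, but toned down.
    "Phone":    {"allow": True,  "lock_screen": True, "banners": "Temporary",
                 "sounds": True,  "badges": False, "previews": "When Unlocked"},
    "Messages": {"allow": True,  "lock_screen": True, "banners": "Temporary",
                 "sounds": False, "badges": False, "previews": "When Unlocked"},
    "WhatsApp": {"allow": True,  "lock_screen": True, "banners": False,
                 "sounds": False, "badges": False, "previews": "When Unlocked"},
    # Everything else: notifications off entirely.
    "default":  {"allow": False},
}
```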

You will notice I have not allowed notifications for e-mail, even though this can be from people. It is far too disturbing and unnecessary to receive alerts for every e-mail.

There are a few apps that are not “people oriented” for which I do allow notifications: mainly security. Here is my list as an example:

  • Buzzer (neighborhood security app)
  • Earthquake
  • Find iPhone
  • LoadShed CT (we have lots of load shedding here in Cape Town)
  • Reminders
  • Weather (for severe weather alerts only)
  • Waze (so I know when to leave for a planned trip to an appointment)

There are three last things that will help tame your phone.

  1. Only put task-oriented apps on the Home Page (Reminders, Calendar, Settings, etc.). Put all other apps on additional pages, and put ALL apps inside folders – this not only helps with organization, it also requires at least three actions to reach a social media app such as Facebook: 1) swipe to the 2nd page; 2) open the folder; 3) select the app.
  2. Set your phone’s display to monochrome (far less disturbing and distracting). On the iPhone this is done by going to Settings/General/Accessibility, scrolling all the way to the bottom of the list and tapping Accessibility Shortcut. Choose Color Filters, then exit Settings. Triple-clicking the Home button will now switch between normal colored icons and gray-scale icons. Try it out…
  3. Lastly, turn on Night Shift (Settings/Display & Brightness). This warms up the color temperature of the screen in dark surroundings, normally in the evening and at night. You may find that you want to move the slider to the left a bit – I find the default middle setting too orange – but it really does reduce the ‘blue light’ effect associated with sleep disturbance.

Hope this is of use.

iPhone5 – Part 1: Features, Performance… and 4G

September 16, 2012 · by parasam

[Note: This is the first of either 2 or 3 posts on the new iPhone5 – depending on how quickly accurate information becomes available on this device. This post covers what Apple has announced, along with info gleaned from other technical sources to date. Further details will have to wait until actual phones are shipped, then torn down by specialists and real benchmarks are run against the new hardware and iOS6]

Introduction

Unless you’ve been living under a very large rock, you couldn’t have helped hearing that Apple has introduced the next version of its iPhone. This article will look at what this device actually purports to offer the user, along with some of my comments and observations. All of these comments are based on current press releases and ‘paper’ information: the actual hardware won’t be released until Sept. 21, and due to high demand, it may take me a bit longer to get one in hand for personal testing. I’ll go into details below, but I don’t intend to upgrade from my 4S at this time. I do have a good relationship with my local Apple business retailer, and my rep will be setting aside one of the new phones for me to come in and play with for a few hours as soon as she has one that is not immediately promised. Currently we are looking at about the first week of October – so look for another post then. As of the date of writing (15 Sep) Apple has said their initial online allocation has sold out, so I expect demand to be high for the first few weeks.

Front and Back of iPhone5

The basic specifications and comparisons to previous models are shown below:

Physical Comparison

|  | Apple iPhone 4 | Apple iPhone 4S | Apple iPhone 5 | Samsung Galaxy S 3 |
| --- | --- | --- | --- | --- |
| Height | 115.2 mm (4.5″) | 115.2 mm (4.5″) | 123.8 mm (4.87″) | 136.6 mm (5.38″) |
| Width | 58.6 mm (2.31″) | 58.6 mm (2.31″) | 58.6 mm (2.31″) | 70.6 mm (2.78″) |
| Depth | 9.3 mm (0.37″) | 9.3 mm (0.37″) | 7.6 mm (0.30″) | 8.6 mm (0.34″) |
| Weight | 137 g (4.8 oz) | 140 g (4.9 oz) | 112 g (3.95 oz) | 133 g (4.7 oz) |
| CPU | Apple A4 @ ~800MHz Cortex A8 | Apple A5 @ ~800MHz Dual Core Cortex A9 | Apple A6 (Dual Core Cortex A15?) | 1.5 GHz MSM8960 Dual Core Krait |
| GPU | PowerVR SGX 535 | PowerVR SGX 543MP2 | ? | Adreno 225 |
| RAM | 512MB LPDDR1-400 | 512MB LPDDR2-800 | 1GB LPDDR2 | 2GB LPDDR2 |
| NAND | 16GB or 32GB integrated | 16GB, 32GB or 64GB integrated | 16GB, 32GB or 64GB integrated | 16GB or 32GB NAND with up to 64GB microSDXC |
| Camera | 5MP with LED flash + front facing camera | 8MP with LED flash + front facing camera | 8MP with LED flash + 720p front facing camera | 8 MP with LED flash + 1.9 MP front facing |
| Screen | 3.5″ 640 x 960 LED backlit LCD | 3.5″ 640 x 960 LED backlit LCD | 4″ 1136 x 640 LED backlit LCD | 4.8″ 1280 x 720 HD Super AMOLED |
| Battery | Integrated 5.254 Whr | Integrated 5.291 Whr | Integrated ?? Whr | Removable 7.98 Whr |
| WiFi/BT | 802.11 b/g/n, Bluetooth 2.1 | 802.11 b/g/n, Bluetooth 4.0 | 802.11 a/b/g/n, Bluetooth 4.0 | 802.11 a/b/g/n, Bluetooth 4.0 |

As can be seen from the above chart, the iPhone5 is an improvement over the 4S in several areas, but in pure technological features it still trails some of the latest Android devices. We’ll now go through some of the details, and what they actually may mean for a user.

Case

The biggest external change is the shape and size of the iPhone5: due to the larger screen (true 16:9 aspect ratio for the first time), the phone is longer while maintaining the same width. It is also slightly thinner. The construction of the case is a bit different as well: the iPhone4S used glass panels for the full front and rear; the iPhone5 replaces the rear panel with a solid aluminum panel except for the very top and bottom of the rear shell which remain glass. This is required for the Wi-Fi, Bluetooth and GPS antennas to receive radio signals (metal blocks reception).

There are two major changes in the case design, both of which will have significant impacts on usage and accessories: the headphone/microphone jack has been moved to the bottom of the case, and the docking connector has been completely redesigned as a new proprietary “Lightning” connector that is much smaller. Both of these changes instantly render obsolete all 3rd-party devices that use the docking connector to plug the iPhone into external accessories such as charging bases, car charging cords, clock-radios, HiFi units, etc. While Apple is offering an adaptor cable in several forms, there are serious drawbacks for many uses.

The basic Lightning-to-USB adaptor cable ($19) is provided as part of the iPhone5 package [along with the small charger]. If you have other desktop power supplies or chargers, or are fortunate enough to have a car charger that accepts a USB cable (as opposed to the built-in docking connector most have), you can spend the extra cash and still use those devices with the new iPhone5.

Lightning to USB adaptor cable (1m)

For connecting the new iPhone5 to current 30-pin docking connector devices, Apple offers two solutions: a short cable (0.2m – 8″) [$39] or a stub connector [$29]:

Lightning to 30-pin cable (0.2m)

Lightning to 30-pin stub connector

The Lightning-to-USB adaptor is growing scarce already: in the last 48 hours the shipping dates have slipped from 1-2 days to 3 weeks or more. Neither of the Lightning-to-30-pin adaptors has a ship date yet; a rather nebulous statement of “October” is all that appears on the Apple store. So early adopters of the iPhone5 should expect a substantial delay before they can make use of any current aftermarket devices that use the docking connector. Another issue: the cost of the adaptors. As part of Apple’s incredible branding of its closed universe (Apple/Mac/iDevice), users have been conditioned to paying a hefty premium for basic utility devices compared to devices that perform the same function for other brands such as Android phones. For example, the same phone-to-USB cable (1m) that Apple sells for $19 is available for the latest model Samsung Galaxy S3 for between $6 and $9 at a number of online retailers. It’s very easy to end up spending $100 or more on iPhone accessories just for a case and a few adaptors.

Now let’s get to the real issue with this new Lightning connector – even assuming that one can eventually purchase the necessary adaptors shown above. Basically there are two classes of devices that use the docking connector: those that connect via a flexible cable (chargers and similar devices), and those that mechanically support the iPhone with the docking connector, such as clock/radios, HiFi units, audio and other adaptors, and phone holders for cars, just to name a few. The old style 30-pin connector was wide enough, along with its mechanical design, to actually support the iPhone with a minimal external ‘cradle’, without putting undue stress on the connector. The Apple desktop docking adaptor is such an example:

30-pin docking adaptor

The new Lightning connector is so small that it offers no mechanical stability. Any device that will hold the iPhone will need a new design, not only to add sufficient mechanical support to avoid bending or disconnecting the new docking connector, but to accommodate the thinner case as well. Here is a small sample of devices that will be affected by this design change:

As can be seen, this connector change has a profound and wide-reaching effect. Users that have a substantial investment in aftermarket devices will need to carefully consider any decision to upgrade to the iPhone5. Virtually all of the above devices will simply not work with the new phone, even if the ‘stub adaptor’ were employed. While a large number of 3rd-party providers of iPhone accessories will be happy (they can resell the same product again each time a design change occurs), the end user may be less enchanted. Even simple things such as protective cases cannot be ‘recycled’ for use on the new phone. I’ll give one personal example: I have an external camera lens adaptor set, the iPro by Schneider. This set of lenses will not work at all with the iPhone5. Not only is the case different (which is critical for mounting the lenses to the phone in precise alignment with the internal iPhone camera), but the current evidence is that Apple has changed the optics slightly on the iPhone5, such that an optical redesign of accessory lenses would be required. A very careful and methodical analysis of the side-effects of a potential upgrade should be performed if you own any significant devices that use the docking connector.

The other design change is the movement of the headphone jack to the bottom left of the case. While this does not in and of itself present the same challenges that the docking connector poses, it does have ramifications that may not be immediately apparent. For a user who is just carrying the iPhone as a music playback device (iPod-ish use), the headphone cable connecting at the bottom is a superior design choice; but it once again poses a challenge for any device where the iPhone is physically ‘docked’ – the headphone cable is no longer accessible! For instance, with the original iPhone dock, I could be on the phone (using a headphone/microphone cable assembly), walk to my desk, drop the iPhone in the docking station/charger and keep talking while my depleted battery was being refueled… no longer… the cable from the bottom won’t allow the phone to be inserted into the docking station…

The bottom line is that Apple has drawn an absolute line in the sand with the iPhone5: the user is forced to start completely over with all accessories, from the trivial to the expensive. While it is likely that some of the aftermarket devices can be, and will be, eventually adapted to the new case design, there will be a cost in terms of both money and time delay. Depending on the complexity (plastic cases for the iPhone5 will show up in a few months, while high-end home HiFi units that accept an iPhone may take 6 months to a year to arrive) there will be a significant delay before the iPhone5 can be used in as ubiquitous a manner as all previous iPhones (which shared the same docking and case design).

The last issue to raise in regards to the change in case design is simply the size of the new phone. It’s longer. We’ve already discussed that this will require new cases, shells, etc. – but this will also affect  many ‘fashion-oriented’ aftermarket handbags, belt-cases, messenger bags, etc. With the iPhone being the darling of the artistic, entertainment and fashion groups, many stylish (and expensive) accoutrements have been created that specifically fit the iPhone 3/4 case size. Those too will have to adapt.

Screen

The driving factor behind the new case size is the increase in screen resolution from 960×640 (1.50:1 aspect ratio) to 1136×640 (1.77:1 aspect ratio). The new size matches the current HD display aspect ratio of 16:9 (1.77), so movies viewed on the iPhone will correctly fit the screen. On the iPhone4S – which could shoot and play back 1920×1080 (FHD, or Full HD) material – HD movies were either cut off on the left and right, or letterboxed (black bars at the top and bottom of the picture). Many Android devices have had full 16:9 displays for a year or more now.
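
The aspect-ratio arithmetic is easy to verify. Here is a minimal sketch (plain Python, nothing iPhone-specific) that reproduces the ratios above and the size of the letterbox bars a 16:9 movie needs on the older 3:2 screen:

```python
# Aspect ratios of the screens and of HD content.
print(round(960 / 640, 2))     # iPhone4/4S screen: 3:2
print(round(1136 / 640, 2))    # iPhone5 screen:    ~16:9
print(round(1920 / 1080, 2))   # Full HD content:   16:9

# Letterboxing a 16:9 movie on the 960x640 screen:
# scale to the full 960px width, then measure the leftover height.
picture_height = 960 / (16 / 9)            # ~540 px of actual picture
bar = (640 - picture_height) / 2           # ~50 px black bar, top and bottom
print(round(picture_height), round(bar))   # 540 50
```

Very few technical details have been released so far by Apple on the actual screen; here is what I have been able to glean to date: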

  • The touch-screen interface has changed from “on-cell” to “in-cell” technology. Without getting overly geeky, this means that the actual touch-sensitive surface is now built-in to the LCD surface itself, instead of being a separate layer glued on top of the LCD display. This has three advantages:
    • Thinner display
    • Simplifies manufacture, as one less assembly step (aligning and gluing the touch layer)
    • Slightly brighter and more saturated visible display, due to not having a separate layer on top of the actual LCD layer.
  • The color gamut for virtually all cellphone and computer displays is currently the sRGB standard (itself a low-gamut color space – in the future we will see much improved color spaces, but for now that is the best thing that can economically be manufactured, particularly for mobile devices). None of the current devices fully reproduce the sRGB gamut, limited as it is, but this improvement gets the iPhone that much closer. One of the tests I intend to run when I get my test drive of the iPhone5 is a gamut check with a precision optical color gamut tester.
  • No firm data is available yet, but anecdotal reports, coupled with known ‘side-effects’ of “in-cell” technology, promise a slightly more efficient display, in terms of battery life. Since the LCD display is one of the largest consumers of battery power, this is significant.

Camera(s)

The rear-facing camera (the high-resolution camera used for still and video photography) is essentially unchanged. However… there are potentially three small but significant updates that will likely affect serious iPhonographers:

  1. Though no firm details have been released by Apple yet, when images taken at the press conference were compared to images taken with an iPhone4S of the same subject from the same position, the iPhone5 images appear to have a slightly larger field of view. This, if accurate, would indicate that the focal length of the lens has changed slightly. The iPhone4S has an actual focal length of 4.28mm (equivalent to a 32mm lens on a 35mm camera); this may indicate a reduction of focal length to 3.75mm (a 28mm equivalent) – see the sketch after this list. There are several strong reasons that support this theory:
    1. The iPhone5 is thinner, and everything else has to accommodate this. A shorter focal length allows the camera lens/sensor assembly to be thinner.
    2. Many users have expressed a desire for a slightly wider angle of view, in fact the most popular aftermarket adaptor lenses for the iPhone are wide angle format.
    3. The slightly wider field of view simplifies the new panoramic ‘stitch’ capability of the camera hardware/software.
  2. Apple claims the camera is “25% smaller”. We have no idea what that really means, but IF this in fact results in a smaller sensor surface then the individual pixels will be smaller. The same number of pixels are used (it is still an 8MP sensor), but smaller pixels mean less light-gathering capability, potentially making low light photography more difficult.
    1. Apple does claim new hardware/software to make the camera perform better in low light. What this means is not yet clear.
    2. The math and geometry of optics, sensor size and lens mechanics essentially show us that small sensors are more subject to camera movement, shaking and vibration. (The same angular movement of a full sized 35mm digital camera will cause far less blurring in the resultant image than an iPhone4S. If the sensor is even smaller in the iPhone5, this effect will be more pronounced).
  3. Apple claims a redesigned lens cover for the iPhone5. (In all iPhones, there is a clear plastic window that protects the actual lens. This is part of the exterior case). With the iPhone5, this window is now “sapphire glass” – whatever that actually is… The important issue is that any change is a change – even if this window material is harder and ‘more clear’, it will be different from the iPhone4 or iPhone4S – different materials have different transmissive characteristics. Where this may cause an effect is with external adaptor lenses designed for iPhone4/4S devices.
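
If the field-of-view observation in point 1 above is correct, the implied focal length follows from simple proportion: the 35mm-equivalent figure is the actual focal length times a fixed crop factor. A quick sketch using only the numbers quoted above:

```python
# Crop factor implied by the iPhone4S figures quoted above:
# 4.28mm actual focal length ~ 32mm equivalent (35mm terms).
crop_factor = 32 / 4.28                 # ~7.48

# Apply it to the rumored 3.75mm lens in the iPhone5.
equivalent = 3.75 * crop_factor         # ~28mm equivalent
print(round(crop_factor, 2), round(equivalent))   # 7.48 28
```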

The front-facing camera (FaceTime, self-portrait) has in the past been a very low resolution device of VGA quality (640×480). This produced very fuzzy images, the sensor was not very sensitive in low light, and the images did not match the display aspect ratio. The iPhone5 has increased the resolution of the front-facing sensor to 1280×720 (720P) for video, 1280×960 for still (1.2MP). While no other specs on this camera have been released, one can assume some degree of other improvements in the combined camera/lens assembly, such that overall image quality will improve.
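
Those resolution numbers translate into a large jump in pixel count; a quick check, using nothing beyond the figures just quoted:

```python
# Pixel counts for the front-facing camera, old vs. new.
vga = 640 * 480          # 307,200 px (~0.3 MP) -- pre-iPhone5 front camera
video = 1280 * 720       # 921,600 px (~0.9 MP) -- iPhone5 720P video
still = 1280 * 960       # 1,228,800 px (~1.2 MP) -- iPhone5 stills
print(video / vga, still / vga)   # 3.0 4.0 -- three to four times the pixels
```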

The faster CPU and other system hardware, combined with new improvements in iOS 6.0, bring several enhancements to iPhonography. Details are skimpy at this time, but panoramic photos, faster image-taking in general, improved speed for in-phone image processing, and better noise reduction for low-light photography are some of the new features mentioned. Experience, testing and a full tear-down of an iPhone5 are the only ways we will know for sure. More to come in future posts…

CPU/SystemChips/Memory

Inside the iPhone5

As best as can be determined at this early stage, there are a number of changes inside the iPhone5. Some (very little actually!) of the information below is from Apple; many of the other observations are based on the same detective work that was used for earlier reporting on the iPhone4S: careful reading of industry trends, tracking of component orders at typical iPhone parts suppliers, comments and interviews with industry experts that track Apples, Androids and other such odd things, and to a certain extent just experience. Even though Apple is a phenomenally secretive company, even they can’t make something out of nothing. There are only so many chips to choose from, and when one factors in things like power consumption, desired performance, physical size, compatibility with other parts of the phone and so on, there really aren’t that many choices. So even if some of the assumptions at this early stage are slightly in error, the overall capabilities and functionality will be the same.

Ok, yes, Apple has said there is a new CPU in the iPhone5, and it’s named the “A6”. But that doesn’t actually tell one what it is, how it’s made, or what it does. About all that Apple has said directly so far is that it’s “up to twice as fast as the A5 chip [used in iPhone4S]”, and “the A6 chip offers graphics performance that’s up to twice as fast as the A5.” That’s not a lot of detailed information… Once companies such as Anandtech and Chipworks get a few actual iPhone5 units and tear them apart we will know more. These firms are exhaustive in their analysis (and no, the phone does not work again once they take it to bits!) – they even ‘decap’ the chips and use x-ray techniques to analyze the actual chip substrate to look for vendor codes and other clues as to the makeup of each part. I will report on that once this data becomes available.

At this time, some think that the A6 chip is using 28/32nm technology (absolutely cutting edge for mobile chipsets) and packing in two ARM Cortex A15 cores to create the CPU. Others think that this may in fact be an entirely Apple ‘home-grown’ ARM dual-core chip. The GPU (Graphics Processing Unit) is likely an assembly using four of Imagination’s PowerVR SGX543 cores, doubling the GPU cores in the iPhone4S. In addition to the advanced hardware itself, the final performance is almost certainly a careful amalgamation of peripheral chips and the tweaking and tuning of both firmware and kernel software. The design and implementation of devices such as the iPhone5 is just about as close to the edge of what’s currently possible as science and human cleverness can get. This is one area where, for all the downsides of the ‘closed ecosystem’ that is the World of Apple, the upside is that when a company has total control over both the hardware and the software of a device, a level of systems tuning is possible that open-source implementations such as Android simply can never match. If one is interested further in this philosophy, please see my further comments about such “complementary design techniques” in my post on iPhone accessory lenses here.

There are two types of memory in all advanced smartphones, including the iPhone5. The first is SDRAM (similar to the RAM in your computer: the very fast working memory that is directly addressed by the CPU); the second is NAND (similar to the hard disk in your computer: slower, but with much greater storage capacity). In smartphones the NAND is also a solid-state device (not a spinning disk) to save weight and power, but it is still considerably slower in access time than the SDRAM. As a practical matter, it would not be feasible, in terms of economics, power or size, to use SDRAM for all the memory in a smartphone. The chart at the beginning of this article shows the increase in SDRAM size over the various iPhone models; to date the mass storage (NAND) has been available in 3 sizes: 16, 32 and 64GB.

Radios:  Wi-Fi/Cellular/GPS/Bluetooth

Although most of us don’t think about a cellphone in this way, once you get all the peripheral bits out of the way, these devices are just highly sophisticated portable radio transceivers – sort of CB radio handsets on steroids. There are four main categories of radios used in smartphones: Wi-Fi, cellular (for both voice and data), GPS and Bluetooth. The design, frequencies used and other parameters are so different for each of these classes that entirely separate radios must be used for each function. In fact, as we will see shortly, even within the cellular radio group it is frequently necessary to have multiple radios to handle all the variations found in worldwide networks. Each separate radio adds complexity, cost, weight and power consumption, plus the added issues of antenna design and inter-device interference. It is truly a complicated design task to integrate all the distinct RF components in a device such as the iPhone.

Again, this initial review is lacking in hard facts ‘from the horse’s mouth’ – our particular horse (the Rocking Apple) is mute… but using techniques similar to those outlined above for the CPU/GPU chips, here is my best guess for the innards of “radio-land” inside an iPhone5:

Wi-Fi

    • At this time there are four Wi-Fi standards in use, all of which are ‘subparts’ of the IEEE 802 wireless communications standard: 802.11a, 802.11b, 802.11g and 802.11n.
    • There are a lot of subtle details, but in essence each increase in the appended letter is equivalent to a higher data transfer speed. In a perfect world (pay attention to this – Wi-Fi almost never gets even close to what is theoretically possible! Marketing hype alert…) the highest-speed tier, 802.11n, is capable of up to 150Mb/s.
    • Again, I am oversimplifying, but older WiFi technology used a single band of radio frequencies, centered around 2.4GHz. The newest form, 802.11n, allows the use of two bands, 2.4GHz and 5.0GHz. If the designer implements two WiFi radios, it is possible to use both frequency bands simultaneously, thereby increasing the aggregate data transfer, or to better avoid interference that may be present on one of the bands. As always, adding radios adds cost, complexity, etc.

Cellular

This is the area that causes the most confusion, and it ultimately required (in the case of the iPhone) two entirely separate versions of hardware (in the US: GSM for AT&T, CDMA for Verizon – it gets even more complicated overseas). Cellular telephone systems unfortunately were developed by different groups in different countries at different times. Adding to this were social, political, geographical, economic and engineering issues that were anything but uniform. This led to a large number of completely incompatible cellular networks over time. Even in the earliest days of analog cellphones there were multiple, incompatible networks; once the world switched to digital carrier technology, the diaspora continued… This is such a complicated subject that I have decided to write a separate blog post on it – it is really a bit off-topic (in terms of detail) for this post, and might unreasonably distract those who are not interested in such details. I’ll post that in the next week, with a link from here once complete.

For the purposes of this iPhone5 introduction, here is a very simple and brief primer so we can understand the importance – and the limitations!! – of what is (incorrectly) called 4G: that bit of marketing hype that has everyone so fascinated even though 93% of humanity has absolutely no idea what it really is. Such is the power of marketing…

Another warning: telecommunications industries are totally in love with acronyms. Really arcane, weird and hard-to-understand acronyms. If a telecomms engineer can’t wedge at least eight of them into every sentence, he/she starts twitching and otherwise showing physical symptoms of distress and feelings of incompetence… I’m just going to list them here, in all their other-worldly glory… if you want them deciphered, wait for my blog post (promised above) on the cellular system.

To add some semblance of control to the chaotic jungle of wireless networks, there are a number of standards bodies that attempt to set up some rules. Without them we would have no interoperability of cellphones from one network to another. The two main groups, in terms of this discussion, are the 3GPP (3rd Generation Partnership Project) and the ITU (International Telecommunication Union). That’s where nomenclature such as 2G, 3G, 4G comes from – and, you guessed it, “G” is generation. (Never mind “1G” – that too will be in the upcoming post…). For practical purposes, most of us are used to 3G – that was the best data technology for cellular systems until recently. 4G is “better”… sort of… we’ll see why in a moment.

The biggest reason I am delving into this arcane stuff is to (as simply as I can) explain why you can’t browse the web or perform other data functions while simultaneously talking on the phone IF you are using an iPhone on the Sprint or Verizon networks – but can if you are on AT&T. The reason is that LTE is an extension of GSM (the technology that AT&T currently uses for voice and data), whereas both Sprint and Verizon use a different technology for voice/data (CDMA). Each of these technologies requires a separate radio and a separate antenna. For AT&T customers, the iPhone needs 2 radios/antennas (4G/LTE, plus 3G for voice [and 3G data fallback if no 4G/LTE is available in that location]). If the iPhone were going to support the same functionality for Sprint/Verizon, a 3rd radio and antenna would be required (4G/LTE for high speed data, 3G fallback data, and CDMA voice). Apple decided not to add the weight, complexity and expense to the iPhone5, so customers on those networks face an either/or choice: voice or data, but not both at the same time.

Apple is making some serious claims of improved battery life when using 4G, saying that the battery will last the same (up to 8 hours) whether on 3G or 4G. That’s impressive: early 4G phones from other vendors have had notoriously poor battery life on 4G. Beyond OS tweaks, one possible explanation is the use of a new Qualcomm chip, the MDM9615 LTE modem.

The range of cellular voice and data types/bands/variations said to be supported by the iPhone5 is: GSM (AT&T), CDMA (Verizon & Sprint), EDGE, EV-DO, HSPA, HSPA+, DC-HSPA, LTE.

Now, another important few points on 4G:

    • The current technology that everyone is calling 4G… isn’t really. The marketing monsters won the battle, however, and even the standards bodies caved. LTE (Long Term Evolution – and this does have a technical meaning in terms of digital symbol reconstitution from a multiplexed data stream, as opposed to the actual advancement of intellect, compassion, health and heart of the human species – something that I hold in serious doubt right now…) is a ‘stepping-stone’ on the way to “True 4G”, and is not necessarily the only way to implement 4G – but the marketing folks just HAD to have a ‘higher number means better’ term, so just as at one point we had “2.5G” (not quite real 3G but better than 2G in a few weird ways), we now have 4G… to be supplemented next year with “LTE Advanced” or “4G Advanced”. Hmmmm. And once the networks improve to “True 4G” or whatever, will the iPhone5 still work? Yes, but it won’t necessarily support all the features of “LTE Advanced” – for instance, LTE Advanced will support “VoLTE” [Voice over LTE], so that only a single radio/antenna would be required for all voice and data – essentially the voice call is muxed into the data layer and just carried as another stream of data. However, and this is a BIG however, that would require essentially full global coverage of “4G/LTE Advanced” – something that is years away due to the cost and time needed to build out networks.
    • Even with the current “baby 4G”, this is a new technology, and most networks in the world only support it in certain limited areas, if at all. It will improve every month as the carriers slowly build out their networks, but it will take time. The actual radio/antenna systems are different from everything currently deployed, so new hardware has to be added to every single cell tower in the world… not a trivial task… Trying to determine where 4G actually works, on which carrier, is effectively impossible at this time. No one tells the whole story, and you can be sure that Pinocchio would look like a snub-nose in comparison to many of the claims put forth by various cellular carriers… In the US, both Verizon and AT&T claim about 65-75% coverage of their respective markets – but these are high-density population areas where the subscriber base makes the build-out economically attractive.
    • The situation is much more spotty overseas, with two challenges: even within the LTE world there are different frequencies used in different areas, and the iPhone5 does not support all of them. If you are planning to use the iPhone5 outside of the US, and want to use LTE, check carefully. And of course the build-out of 4G is nowhere near as complete as in the US.
    • The final issue with 4G is economic, not technical. Since data usage is what gobbles up network capacity (as opposed to voice/text), the plans that the carriers sell are rapidly changing to offer either high-limit or unlimited voice/text at fairly reasonable rates, with data now being capped and the prices increasing. While a typical data plan (say 5GB) allows that much data to be transferred regardless of whether it’s on 3G or 4G, the issue is speed. LTE can run as fast as 100Mb/s (again, your individual mileage may vary…) – much, much faster than 3G, and in fact often faster than most Wi-Fi networks – so it is easy for the user to consume their cap much faster. If you have ever stood on a street corner and s-l-o-w-l-y waited for a single page to load on your iPhone4, you are not really motivated to stand there for an hour cruising the web or watching sport. But… if the pages go snap! snap! snap!, or the US Open plays great in HD without any pauses or that dreaded ‘buffering’ message – then the normal human tendency will be to use more. And the carriers are just loving that!!
    • As an example (just to be theoretical and keep the math simple), if we assume 100Mb/s on LTE, then your monthly 5GB cap would be consumed in about 7 minutes (see the sketch after this list)!! Now this example assumes constant download at that data rate, which is unrealistic – a typical page load for a mobile device is under 1MB, and then you stare at it for a bit, then load another one, and so on – so for web browsing you get snappy loads without consuming a ridiculous amount of data. But beware video streaming – which DOES consume data constantly. It will take users some time (and sticker shock at bill time if you have auto-renew set on your data plan!) to learn how to manage their data consumption. (Tip: set to lower resolution streaming when on LTE, switch back to high resolution when on WiFi.)
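
The arithmetic behind that last bullet is worth seeing once. A minimal sketch, assuming a constant 100Mb/s download (wildly optimistic, as noted) against a 5GB cap:

```python
# Time to consume a monthly data cap at a constant LTE rate.
cap_gb = 5                  # monthly cap, gigabytes
rate_mbps = 100             # assumed constant LTE throughput, megabits/sec

megabits = cap_gb * 1000 * 8          # 5 GB = 40,000 megabits
minutes = megabits / rate_mbps / 60   # 400 seconds
print(round(minutes, 1))              # ~6.7 -- "about 7 minutes"
```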

GPS

Global Positioning Service, or “Location Services” as Apple likes to call it, requires yet another radio and set of antennas. This is a receive-only technology where simultaneous reception of data from multiple satellites allows the device to be located in 3D space (longitude, latitude and altitude) rather accurately. The actual process used by the underlying hardware, the OS and the apps on the iPhone is quite complex, merging together information from the actual GPS radio, WiFi (if it’s on, which helps a lot with accuracy) and even the internal gyroscope that is built in to each iPhone. This is necessary since consumers just want things to work, no matter the laws of physics (yes my radio should receive satellite signals even if I’m six stories underground in a car park…), interference from cars, electrical wires, etc. etc. The bottom line is we have come to depend on GPS to the point that I see people yelping at their Yelp app when it doesn’t know exactly where the next pizza house is…

Bluetooth

Again, we have become totally dependent on this technology for everyday use of a cellphone. In most US states now (and in countries outside the US), there are rather strict laws on ‘hands-free’ cellphone use while driving a car. While legally this can be accomplished with a wired earplug (know your laws: some places ONLY allow wireless [Bluetooth] headsets! – others allow wired headsets but only in one ear, and they must be of the ‘earbud’ type, not an ‘over the ear’ version), the Bluetooth headset is the most common.

There are other uses for Bluetooth with the iPhone: I frequently use a Bluetooth keyboard when I am using the iPhone as a little computer at a coffee bar – it’s SO much faster than pecking on that tiny glass keyboard… A number of interesting external ‘appliances’ that communicate with the iPhone via Bluetooth are also starting to appear: temperature/humidity meters; various sports/exercise measuring devices; even civil engineering transits can now communicate their readings via Bluetooth to an app for automatic recording and triangulation of data.

And yes, it takes another radio and antenna…

And last but certainly not least:  iOS6

A number of new features are either totally OS-related, or the new hardware improvements are expressed to the user via the new OS. The good news is that some of these new features will now show up in earlier iPhone models, commensurate of course with hardware limitations.

A few of the new features:

  • Improvements to Siri:  open apps and post comments to social apps with voice commands
  • Facebook: integrated into Calendar, Camera, Maps, Photos. (yes, you can turn off sharing via FB, but in typical FB fashion everything is ‘opt out’…)
  • Passbook: a little digital vault for movie tickets, airline boarding passes, etc. Still ‘under construction’ in terms of getting vendors to sign up with Apple
  • FaceTime: now works over 3G/4G as well as WiFi (watch out for your data usage when not on WiFi – with the new 720P front-facing video camera, that nice long chat with your significant other just smoked your entire data plan for the month… see the sketch after this list)
  • Safari: links open web pages on multiple Apple devices that are all on the same iCloud account. Be careful… if you are bored in the office and are cruising ‘artistic’ web sites, they may show up in real time in your kitchen or on your daughter’s iMac…
  • Maps:  Google Maps kicked out of Apple-land, now a home-grown map app that finally includes turn-by-turn navigation.
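
On the FaceTime-over-cellular caution above: the bitrate of a 720P FaceTime call is not published, so the figure below is purely an illustrative assumption, but it shows how quickly video chat can eat a capped plan:

```python
# How long until a video chat exhausts a data cap.
cap_gb = 5
call_mbps = 1.0      # assumed bitrate of a 720P FaceTime call (illustration only)

hours = cap_gb * 1000 * 8 / call_mbps / 3600
print(round(hours, 1))   # ~11.1 -- a few long chats and the 5GB plan is gone
```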

Summary

It’s a nice upgrade, Apple. As usual, the industrial design is good. For me personally, it’s starting to get a bit big – but I’ll admit I have an iPad, so if I want more screen space than my 4S offers I’ll just pick up the Pad. Most of the improvements are incremental, but good nonetheless. In terms of pure technology, the iPhone5 is a bit behind some of the Android devices, but this is not the article to start on that! Those arguments could go on for years… I’m only commenting here on what’s in this particular phone, and my personal thoughts on upgrading from a recent 4S to the 5. For me, I won’t do that at this time. A lot of this is very individual, and depends on your use, needs, etc. I tend to almost always be near either my own or free Wi-Fi locations, so 4G is just not a huge deal. The improved speed sounds very nice, but my 4S is currently fast enough – I am an avid photographer, and retouch/filter a lot using the iPhone4S, and find that it’s fast enough. I love speedy devices, and if the upgrade were free I would perhaps think differently, but at this point I am not suffering with any aspect of my 4S enough to feel that I have to move to the 5 right away.

Now, I would absolutely feel differently if I had anything earlier than a 4S. I upgraded from the iPhone4 to the 4S without hesitation – in my view the improvements were totally worth it: much better camera, much faster processor, etc. So in the end, my personal recommendation: a highly recommended upgrade from anything at the level of the iPhone4 or earlier; from the 4S, it’s down to individual choice and your budget.

iPhone Cinemaphotography – A Proof of Concept (Part 1)

August 3, 2012 · by parasam

I’m introducing a concept that I hope some of my readers may find interesting: the production of an HD video that is built entirely using only the iPhone (and/or iPad). Everything from storyboard to all photography, editing, sound, titles and credits, graphics and special effects, etc. – and final distribution – can now be performed on a “cellphone.” I’ll show you how. Most of the attention paid to the new crop of highly capable ‘cellphone cameras’, such as those in the iPhone and certain Android phones, has been focused on still photography. While motion photography (video) is certainly well-known, it has not received the same attention and detail – nor the same number of apps – as its single-image sibling.

While I am using a single platform with which I am familiar (iOS on the iPhone/iPad), this concept can, I believe, be carried out on the Android class of devices as well. I have not researched (nor do I intend to research) that possibility – I’ll leave that for others who are more familiar with that platform. The purpose is to show that such a feat CAN be done – and hopefully done reasonably well. It’s only been a few years since the production of HD video was strictly the realm of serious professionals, with budgets of hundreds of thousands of dollars or more. While there are of course many compromises – and I don’t for a minute pretend that the range of possible shots or quality will come anywhere near what a high quality DSLR, RED, Arri or other professional video camera can produce – I do know that a full HD (1080P) video can now be totally produced on a low-cost mobile platform.

This POC (Proof Of Concept) is intended as more than just a lark or a geeky way to eat some spare time: the real purpose is to bring awareness that the previous bar of high-cost cinemaphotography/editing/distribution has been virtually eliminated. This paves the way for creative individuals almost anywhere in the world to express themselves in a way that was heretofore impossible. Outside of America and Western Europe, both budgets and skilled operators/engineers are in far lower supply. But there are just as many people who have a good story to tell in South Africa, Nigeria, Uruguay, Aruba, Nepal, Palestine, Montenegro and many other places as there are in France, Canada or the USA. The internet has now connected all of us – information is being democratized in a huge way. Of course there are still the ‘firewalls’ of North Korea, China and a few others – but the human thirst for knowledge, not to mention the unbelievable cleverness and endurance of 13-year-old boys and girls in figuring out ‘holes in the wall’, shows us that these last bastions of stolidity are doomed to fall in short order.

With Apple and other manufacturers doing their best to leave nary a potential customer anywhere in the world ‘out in the cold’, these devices are both available and affordable almost everywhere. With apps now typically costing a few dollars (it’s almost insane – the Avid editor for iOS is $5; the Avid Media Composer software for PC/Mac is $2,500), an entire production / post-production platform can be assembled for under $1,000. This exercise is about what’s possible, not what is easiest or most capable. Yes, there are many limitations. Yes, some things will take a lot longer. But what you CAN do is nothing short of amazing. That’s the story I’m going to share with you.

A note to my readers:  None of the hardware or software used in this exercise was provided by any vendor. I have no commercial relationship with any vendor, manufacturer or distributor. Choices I have made or examples I use in this post are based purely on my own preference. I am not a professional reviewer, and have made no attempt to exhaustively research every possible solution for the hardware or software that I felt was required to produce this video. All of the hardware and software used in this exercise is currently commercially available – any reasonably competent user should be able to reproduce this process.

Before I get into detail on hardware or software, I need to remind you that the most important part of any video is the story. Just having a low-cost, relatively high quality platform on which to tell your ‘story’ won’t help if you don’t have something compelling to say – and the people/places/things in front of the lens to say it. We have all seen that vast amounts of money and technical talent mean nothing in the face of a lousy script or poor production values – just look over some of the (unfortunately many) Hollywood bombs… I’m the first to admit that motion picture storytelling is not my strong point. I’m an engineer by training and my personal passion is still photography – telling a story with a single image. So… in order to bring this idea to fruition, I needed help. After some thought, I decided that ‘piggybacking’ on an existing production was the most feasible way to realize this idea: basically adding a few iPhone cameras to a shoot where I could take advantage of the existing set, actors, lighting, direction, etc. For me, this was the only practical way to make this happen in a relatively short time frame.

I was lucky enough to know a very talented director, Ambika Leigh, who was receptive and supportive of my idea. After we discussed my general idea of ‘piggybacking’ she kindly identified a potential shoot. After initial discussions with the producers, the green light for the project was given. The details of the process will come in future posts, but what I can say now (the project is an upcoming series that is not released yet – so be patient! It will be worth the wait) is that without the support and willingness of these three incredible women (Ambika Leigh, director; Tiffany Price & Lauren DeLong, producers/actors/writers) this project would not have moved forward with the speed, professionalism and just plain fun that it has. At a very high level, the series brings us into the clever and humorous world of the “Craft Ladies” – a couple of friends that, well, like to craft – and drink wine.

“Craft Ladies is the story of Karen and Jane, best friends forever, who love to
craft…they just aren’t any good at it. Over the years Karen and Jane’s lives
have taken slightly different paths but their love of crafting (and wine)
remains strong. Tune in in September to watch these ladies fulfill their
dream…a craft show to call their own. You won’t find Martha Stewart here,
this is crafting Craft Ladies style. Craft Up Nice Things!”

Please check out their links for further updates and details on the ‘real thing’:

www.facebook.com/CraftUpNiceThings
www.twitter.com/#!/2craftladies
www.CraftUpNiceThings.com

I am solely responsible for the iPhone portion of this program – so all errors, technical gaffes, editorial bloops and other stumbles are mine. As said, this is a proof of concept – not the next Spielberg epic… My intention is to follow – as closely as my expertise and the available iOS technology will allow – the editorial decisions, effects, titles, etc. that end up in the ‘real show’. To this end I will necessarily lag a bit in my production, as I have to review the assembled and edited footage first. However, I will make every effort to have my iPhone version of this series ready for distribution shortly after the real version launches. Currently this is planned for some time in September.

For the iPhone shoot, two iPhone4S devices were used. I need to thank my capable 2nd camerawoman – Tara Lacarna – for her endurance, professionalism and support over two very long days of shooting! In addition to her new career as an iPhonographer (ha!) she is a highly capable engineer, musician and creative spirit. While more detail will be provided later in this post, I would also like to thank Niki Mustain of Schneider Optics for her time (and the efforts of others at this company) in helping me get the best possible performance from the “iPro” supplementary lenses that I used on portions of the shoot.

Before getting down to the technical details of equipment and procedure, I’ll lay out the environment in which I shot the video. Of course this can vary widely, and the exact technique used, as well as some hardware, may have to change and adapt as required. In this case the entire shoot was indoors on two sets. Professional lighting was provided (3200K) for the principal photography (which used various high-end DSLR cameras with cinema lenses). I had to work around the available camera positions for the two iPhone cameras, so my shots will not be the same as those used in principal photography. Most shots were locked off with both iPhones on tripods; there were some camera moves and a few handheld shots. The first set of episodes was filmed over two days (two very, very long days!!) and resulted in about 116GB of video material from the two iPhones. In addition to Ambika, Tiffany, Lauren and Tara there was a dedicated and professional crew of camera operators, gaffers, grips, etc. (with many functions often performed by just one person – this was, after all, about quality not quantity – not to mention the lack of a 7-figure Hollywood budget!). A full list of credits will be in a later post.

Aside from the technical challenges; the basic job of getting lines and emotion on camera; taking enough camera angles, close-ups, inserts and so on to ensure raw material for editorial continuity; and just plain endurance (San Fernando Valley, middle of summer, had to close all windows and turn off all fans and A/C for each shot due to noise, a pile of people on a small set, hot lights… you get the picture…) – the single most important ingredient was laughter. And there was lots of it!! At one time or another, we had to stop down for several minutes until one or the other of us stopped laughing so hard that we couldn’t hold a camera, say a line or direct the next sequence. That alone should prompt you to check this series out – these women are just plain hilarious.

Hardware:

As mentioned previously, two iPhone4S cameras were used, each the 32GB model. Since shooting video generates large files, most user data was temporarily deleted off each phone (easy to restore later with an iTunes sync). Approximately 20GB of free space was made available on each phone. If one were going to use an iPhone for a significant amount of video photography, the 64GB version would probably be useful. The downside is that (unless you are shooting very short events) you will still have to download several times a day to an external storage device or computer – and the more you have to download, the longer that takes! As in any process, good advance planning is critical. On this shoot I needed to coordinate ‘dumping times’ with the rest of the production: there was a tight schedule, and the production would not wait for me to finish dumping data off the phones. The DSLR cameras use removable memory cards, so it only takes a few minutes to swap cards and those cameras are ready to roll again. I’ll discuss the logistics of dumping files from the phones in more detail in the software section below. If one were going to attempt long takes with insufficient break time to fully dump the phone before shooting again, the best solution would be two iPhones for each camera position, so that one phone could be transferring data while the other one was filming.
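
A rough budget for those ‘dumping times’ is easy to work out. The bitrate below is my own assumption for what 1080P recording on the iPhone4S produces, not a published spec; substitute a figure measured from a short test clip:

```python
# Recording-time budget for a given amount of free space.
free_gb = 20            # space cleared on each phone
bitrate_mbps = 24       # assumed 1080P recording bitrate (measure your own!)

gb_per_hour = bitrate_mbps / 8 * 3600 / 1000   # ~10.8 GB per hour of footage
hours = free_gb / gb_per_hour                  # ~1.9 hours between dumps
print(round(gb_per_hour, 1), round(hours, 1))
```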

In order to provide more visual control, as well as interest, a set of external adapter lenses (the “iPro” system by Schneider Optics) was used on various shots. A total of three different lenses are available: telephoto, wide-angle and a fisheye. A detailed post on these lenses – and adaptor lenses in general – is here. For now, you can visit their site for further detail. These lenses attach to a custom shell that is affixed to the iPhone. The lenses are easily interchanged with a bayonet mounting system. Another vital feature of the iPro shell for the phone is the provision for tripod mounting – a must for serious cinemaphotography – especially with the telephoto lens which magnifies camera movement. Each phone was fitted with one of the iPro shells to facilitate tripod mounting. This also made each phone available for attaching one of the lenses as required for the shot.

iPro “Fisheye” lens

iPro “Wide Angle” lens

iPro “Telephoto” lens

Another hardware requirement is power:  shooting video kills batteries faster than just about any other activity on the iPhone. You are using most of the highest power consuming parts of the phone – all at the same time:  the camera sensor, the display, the processor, and high bandwidth memory writing. A fully charged iPhone won’t even last two hours shooting video, so one must run the phone on external power, or plan the shoot for frequent (and lengthy!) recharge sessions. Bring plenty of extra cables, spare chargers, extension cords, etc. – it’s very cheap insurance to keep the phones running. Damage to cables while on a shoot is almost a guaranteed experience – don’t let that ruin your session.

A particular challenge that I had was a lack of a ‘feed through’ docking connector on the Line6 “Mobile In” audio adapter (more on this below). This meant that while I was using this high quality audio input adapter I was forced to run on battery, since I could not plug in the Mobile In device and the power cable at the same time to the docking connector on the bottom of the phone. I’m not aware of a “Y” adapter for iPhone docking connectors, but that would have really helped. It took a lot of juggling to keep that phone charged enough to keep shooting. On several shots, I had to forgo the high quality audio as I had insufficient power remaining and had to plug in to the charger.

As can be seen, the lack of both removable storage and a removable battery are significant challenges for using the iPhone in cinemaphotography. This can be managed, but it’s a critical point that requires careful attention. Another point to keep in mind is heat. Continual use of the phone as a video camera definitely heats up the phone. While neither phone ever overheated to the point where it became an issue, one should be aware of this fact. If one was shooting outside, it may be helpful to (if possible) shade the phone(s) from direct sunlight as much as practical. However, do not put the iPhones in the ice bucket to keep them cool…

Gitzo tripod with fluid head attached

Close-up of fluid head

Tripods are a must for any real video work:  camera judder and shake is very distracting to the viewer, and is impossible to remove (with any current iPhone app). Even with serious desktop horsepower (there is a rather good toolset in Adobe AfterEffects for helping to remove camera shake) it takes a lot of time, skill and computing power. Far better to avoid it in the first place whenever possible. Since ‘locked off’ shots are not as interesting, it’s worth getting fluid heads for your tripods so you can pan and tilt smoothly. A good high quality tripod is also well worth the investment:  flimsy ones will bend and shake. While the iPhone is very light – and this may tempt one to go with a very lightweight tripod – this will work against you if you want to make any camera tilts or pans. The very light weight of the phone actually causes problems in this case: it’s hard to smoothly move a camera that has almost no mass. At least having a very rigid and sturdy tripod will help in this regard. You will need considerable practice to get used to the feel of your particular fluid head, get the tension settings just right, etc. – in order to effect the smoothest camera movements. Remember this is a very small sensor, and the best results will be obtained with slow and even camera pans/tilts.

For certain situations, miniature tripods or dollies can be very useful, but they don’t take the place of a normal tripod. I used a tiny tripod for one shot, and experimented with the Pico Dolly (sort of a miniature skateboard that holds a small camera) although I did not actually use it for a finished shot. This is where the small size and light weight of the iPhone can be a plus: you can hang it and place it in locations that would be difficult to impossible with a normal camera. Like anything else though, don’t get too creative and gimmicky:  the job of the camera is to record the story, not call attention to itself or technology. If a trick or a gadget can help you visually tell the story – then it’s useful. Otherwise stick with the basics.

Another useful trick I discovered that helped stabilize my hand-held shots:  my tripod (as many do) has a removable center post on which the fluid head is mounted (that in turn holds the camera). By removing the entire camera/fluid-head/center-post assembly I was able to hold the camera with far greater accuracy and stability. The added weight of the central post and fluid head, while not much – maybe 500 grams – certainly added stability to those shots.

Tripod showing center shaft extended before removal.

Center shaft removed for “hand-held” use

If you are planning on any camera moves while on the tripod (pans or tilts), it is imperative that the tripod be leveled first – and rechecked every time you move it or dismount the phone. Nothing is worse than watching a camera pan move uphill as you traverse from left to right… A small circular spirit level is the perfect accessory. While I have seen very small circular levels actually attached to tripod heads, I find them too small for real accuracy. I prefer a small removable device that I can place on top of the phone itself, which then accounts for all the hardware – up to and including the shell – that can affect alignment. The one I use is 25mm (1″) in diameter.

I touched on the external audio input adapter earlier while discussing power for the iPhones; I’ll detail that now. For any serious video photography you must use external microphones: the one in the phone itself – although amazingly sensitive – has many drawbacks. It is single channel, where the iPhone hardware (and several of the better video camera apps) is capable of recording stereo; you can’t focus the sensitivity of the microphone; and most importantly, the mike is on the front of the phone at the bottom – pointing away from where your lens is aimed!

While it is possible to plug a microphone into the combination headphone/microphone connector on the top of the phone, there are a number of drawbacks. The first is that it’s still a mono input – only 1 channel of sound. The next is that the audio quality is not that great. This input was designed for telephone conversation headset use, so extended frequency response, low noise and reduced harmonic distortion were not part of the design parameters. Far better audio quality is available on the digital docking connector on the bottom of the phone. That said, there are very few devices actually on the market today (that I have been able to locate) that will function in the environment of video cinemaphotography, particularly if one is using the iPro shell and tripod mounting the iPhone. Many of the devices treat the iPhone as just an audio device (the phone actually snaps into several of the units, making it impossible to use as a camera); with others the mechanical design is not compatible with either the iPro case or tripod mounting. Others offer only a single channel input (these are mostly designed for guitar input so budding Hendrix types can strum into GarageBand). The only unit I was able to find that met all of my requirements (stereo line input, high audio quality, mechanically did not interfere with tripod or the iPro case) was the “Mobile In”, manufactured by Line6. Even this device is primarily a guitar input unit, but it does have a stereo line in connector that works very well. In order to use the hardware, you must download and install their free app (and it’s on the fat side, about 55MB) which contains a huge amount of guitar effects. Totally useless for the line input – but it won’t work without it. So just install it and forget about it. You never need to open the MobilePOD app in order to use the line input connector. As discussed above in the section on power, the only major drawback is that once this device is plugged in you can’t run your phone off external power. Really need to find that “Y” adapter for the docking connector…

“Mobile In” audio input adapter attached.

Now you may ask, why do I need a line input connector when I’m using microphones?? My attempt here is to produce the highest quality content possible, while still using the iPhone as the camera/recorder. For the reasons already discussed above, the use of external microphones is required. Typically a number of mikes will be placed, fed into a mixer, and then a line level feed (usually stereo) will be fed to the sound recorder. In all ‘normal’ (aka not using cellphones as cameras!!) video shoots, the sound is almost always recorded on a separate device, just synchronized in some fashion to each of the cameras so the entire shoot is in sync. In this particular shoot, the two actors on the set were individually miked with lavalier microphones (there is a whole hysterical story on that subject, but it will have to wait until after that episode airs…) and a third directional boom mike was used for ambient sound. The three mikes were fed into a small portable mixer/sound recorder. The stereo output (usually used for headphone monitoring – a line level output) was fed (through a “Y” cable) to both the monitoring headphones and the input to the Mobile In device. Essentially, I just ‘piggybacked’ on top of the existing audio feed for the shoot.

This didn’t violate my POC – as one would need this same equipment – or something like it – on any professional shoot. At a minimum, one could just use a small mixer, obviously if the iPhone was recording the sound an external recorder is not required. I won’t attempt to further discuss all the issues in recording high quality sound – that would take a full post (if not a book!) – but there is a massive amount of literature out there on the web if one looks. Good sound recording is an art – if possible avail yourself of someone who knows this skill to assist you on your shoot – it will be invaluable. I’ll just mention a few pointers to complete this part of the discussion:

  • Record the most dynamic range possible without distortion (big range between soft and loud sounds). This will markedly improve the presence of your audio tracks.
  • Keep all background noise to an absolute minimum. Turn off all cellphones! (put the iPhones that are ‘cameras’ in “airplane mode” so they won’t be disturbed by phone calls, texts or e-mails). Turn off fans, air conditioners, refrigerators (if you are near a kitchen), etc. etc. Take a few moments after calling ‘quiet on the set’ to sit still and really listen to your headphones to ensure you don’t hear any noise.
  • As much as possible, keep the loudness levels consistent from take to take – it will help keep your editor (or yourself…) from taking out the long knives after way too many hours trying to normalize levels between takes…
  • If you use lavalier mikes (those tiny microphones that clip onto clothing – they are available in ‘wired’ or ‘wireless’ versions) you need to listen carefully during rehearsals and actual takes for clothing rustle. That can be very distracting – you may have to stop and reposition the mike so that the housing is not touching any clothing. These mikes come with little clips that actually mount on to the cable just below the actual microphone body – thereby insulating clothing movement (rustle) from being transmitted to the sensor through the body of the microphone. Take care in mounting and test with your actor as they move – and remind them that clasping their hands to their chest in excitement (and thumping the mike) will make your sound person deaf – and ruin the audio for that shot!

Actors’ view of the camera setup for a shot, (2 iPhones, 3 DSLRs)

Storage and the process of dumping (transferring video files from the iPhones to external storage) is a vital part of hardware, software and procedure. The hardware I used will be discussed here; the software and procedure are covered in the next section. Since the HD video files consume about 2.5GB for every 10 minutes of filming, even the largest capacity iPhone (64GB) will run out of space in short order. As mentioned earlier, I used the 32GB models on this shoot, with about 20GB free space on each phone. That meant that, at a maximum, I had a little over an hour’s storage on each phone. During the two days of shooting, we shot just under 5 hours of actual footage – which amounted to a total of 116GB from the two iPhones. (Not every shot was shadowed by the iPhones: some of the close-ups and inserts could not be performed by the iPhones as they would have been in the shot composed by the DSLR cameras.)
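To make the storage arithmetic concrete, here is a minimal budgeting sketch in Python. The 2.5GB-per-10-minutes figure is what I measured on this shoot; treat it as an assumption and re-measure for your own camera app and settings:

```python
# Rough storage budgeting for an iPhone video shoot.
# ~2.5 GB per 10 minutes of HD footage was typical on this shoot;
# measure your own app/settings before trusting these numbers.

GB_PER_MINUTE = 2.5 / 10   # ~0.25 GB of footage per minute

def max_minutes(free_gb):
    """Minutes of footage that fit in the given free space."""
    return free_gb / GB_PER_MINUTE

def footage_gb(minutes):
    """Approximate size of a take of the given length."""
    return minutes * GB_PER_MINUTE

print(f"20 GB free -> ~{max_minutes(20):.0f} minutes per phone")  # ~80 min
print(f"1 hour of footage -> ~{footage_gb(60):.0f} GB")           # ~15 GB
```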

The challenge to this project was to involve nothing other than the iPhone/iPad for all factors of the production. The dumping of footage from the iPhones to external storage is one area where neither Apple nor any 3rd party developer that I have found offers a purely iOS solution. With the lack of removable storage, there are only two ways to move files off the iPhone: Wi-Fi or the USB cable attached to the docking connector. Wi-Fi is not a practical solution in this environment:  the main reason is it’s too slow. You can find as many ‘facts’ on iPhone Wi-Fi speed as there are types of orchids in the Amazon, but my research (verified by personal tests) shows that, in a real-world and practical manner, 8Mb/s is a top-end average for upload (which is what you need to transmit files FROM the phone to an external storage device). That’s only about 1MB/s – so it would take over 40 minutes to upload one 2.5GB movie file, which is just 10 minutes of shooting! Not to mention the issues of Wi-Fi interference, dropped connections, etc. etc.
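A quick sketch of that arithmetic, using real-world rates from this post (the USB figures are the Windows Explorer and roughly-real-time transfer-utility rates I report in the software section below; your numbers will vary):

```python
# Time to move one 10-minute clip (~2.5 GB) off the phone at various
# real-world rates observed on this shoot.

CLIP_MB = 2.5 * 1024

observed_MBps = {
    "Wi-Fi upload, top-end (~8 Mb/s)": 1.0,   # ~1 MB/s effective
    "USB 2.0 via Windows Explorer":    0.7,   # ~1 hour per clip (observed)
    "USB 2.0 via transfer utility":    4.3,   # ~real time (observed)
}

for link, rate in observed_MBps.items():
    print(f"{link:34s} ~{CLIP_MB / rate / 60:5.0f} min per clip")
```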

That brings us to cabled connections. Currently, the only way to move data off of (or on to, for that matter) an iPhone is to use a computer. While the Apple Time Capsule could in theory function as a direct-to-phone data storage device, it only connects to the phone via Wi-Fi. However, the method I chose only uses the computer as a ‘connection link’ to an external hard drive, so in my view it does not break my premise of an “all iOS” project. When I get to the editing stage, I just reverse the process and pull files back from the external drive through the computer back to the phone (in this case using iTunes).

I will discuss the precise technique and software used below, but suffice to say here that I used a PC as the computer – mainly just because that is the laptop that I have. It also does prove however that there is no issue of “Mac vs PC” as far as the computer goes. I feel this is an important point, as in many countries outside USA and Western Europe the price premium on Apple computers is such that they are very scarce. For this project, I wanted to make sure the required elements were as widely available as possible.

The choice of external storage is important for speed and reliability’s sake. Since the USB connection from the phone to the computer is limited to v2.0 (480Mb/s theoretical) one may assume that just any USB2.0 external drive would be sufficient. That’s not actually the case, as we shall see…  While the 480Mb/s link speed of USB2.0 nominally equates to 60MB/s, protocol overhead brings the practical ceiling closer to 40MB/s – and even that is never matched in reality. USB chipsets in the internal hub in the computer, processing power in the phone and the computer, other processes running on the computer during transfer, bus and cpu speed in the computer, actual disk controller and disk speed of the external storage – all these factors serve to significantly reduce transfer speed.

Probably the most important is the actual speed of the external disk. Most common portable USB2.0 disks (the small 2.5″ format) run at 5400RPM, and have disk controller chipsets that are commensurate, with actual performance in the 5-10MB/s range. This is too slow for our purposes. The best solution is to use an external RAID array of two ‘striped’ disks [RAID 0] using high performance 7200RPM SATA disks with an appropriately designed disk controller. A device such as the G-RAID Mini is a good example. If you are using a PC, get the best performance with an eSATA connection to the drive (my laptop has a built-in eSATA connector, but PC Card adapters are available that easily support this connectivity for computers that don’t have it built in). This offers the highest performance (real-world tests show average write speeds of 115MB/s using this device). If you are using an Apple computer, opt for the FW800 connection (I’m not aware of eSATA on any Mac computer). While this limits the performance to around 70MB/s maximum, it’s still much faster than the USB2.0 interface from the phone so it’s not an issue. I have found that having a significant amount of speed headroom on the external drive is desirable – you just don’t want the drive to slow things down.

There are other viable alternatives for external drives, particularly if one needed a drive that did not require an external power supply (which the G-RAID does due to the performance). Keep in mind that while it’s possible to run a laptop and external drive all off battery power, you really won’t want to do this – for one, unless you are on a remote outdoor location shoot, you will have AC power – and disk writing at continuous high throughput is a battery killer! That said, a good alternative (for PC) is one of the Seagate GoFlex USB3.0 drives. I use a 1.5TB model that houses a high-performance 7200RPM drive and supports up to 50MB/s write speeds. For the Mac, Seagate has a Thunderbolt model. Although the Thunderbolt interface is twice as fast (10Gb/s vs 5Gb/s) as USB3.0 it makes no difference in transfer speed (these single drive storage devices can’t approach the transfer speeds of either interface). However, there is a very good reason to go with USB3.0/eSATA/Thunderbolt instead of USB2.0 – overall performance. With the newer high-speed interfaces, the full system (hard disk controller, interface chipset, etc.) is designed for high-speed data transfer, and I have proved to myself that it DOES make a difference. It’s very hard to find a USB2.0 system that matches the performance of a USB3.0/etc system – even on a 2.5″ single drive subsystem.
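Another way to look at the interface question: the end-to-end dump speed is set by the slowest element in the chain, which is why headroom everywhere else is cheap insurance. A trivial illustration – the hub/chipset figure is my assumption; the other two are from the observations in this post:

```python
# The dump speed is governed by the slowest link in the chain.
# Illustrative MB/s figures only - re-measure on your own kit.

chain = {
    "iPhone USB 2.0 output":      4.3,    # observed, see software section
    "computer USB hub/chipset":   30.0,   # assumed
    "external drive write speed": 115.0,  # eSATA RAID 0 example above
}

slowest = min(chain, key=chain.get)
print(f"effective ~{chain[slowest]} MB/s, limited by the {slowest}")
```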

The last thing to cover here under storage is backup. Your video footage is irreplaceable. Procedure will be covered below, but under hardware, provide a second external drive on the set. It’s simply imperative that you immediately back up the footage on to a second physical drive as soon as practical – NOT at the end of the day! If you have a powerful enough computer, with the correct connectivity, etc. – you can actually copy the iPhone files to two drives simultaneously (best solution), but otherwise plan on copying the files from one external drive to the backup while the next scenes are being shot (background task).

I’ll close with a final suggestion:  while this description of hardware and process is not meant in any way to be a tutorial on cinemaphotography, audio, etc. etc. – here is a small list (again, this is under ‘hardware’ as it concerns ‘stuff’) of useful items that will make your life easier “on the set”:

  • Proper transport cases, bags, etc. to store and carry all these bits. Organization, labeling, color-coding, etc. all helps a lot when on a set with lots of activity and other equipment.
  • Spare cables for everything! Murphy will see to it that the one item for which you have no duplicate will get bent during the shoot…
  • Plenty of power strips and extension cords.
  • Gorilla tape or camera tape (this is NOT ‘duct tape’). Find a gaffer and he/she will explain it to you…
  • Small folding table or platform (for your PC/Mac and drives) – putting high value equipment on the floor is asking for BigFoot to visit…
  • Small folding stool (appropriate for the table above), or an ‘apple box’ – crouching in front of computer while manipulating high value content files is distracting, not to mention tiring.
  • If you are shooting outside, more issues come into play. Dust is the big one. Cans of compressed air, lens tissue, camel-hair brushes, zip-lock baggies, etc. etc. – none of the items discussed in this entire post appreciate dust or dirt…
    • Cooling. Mentioned earlier, but you’ll need to keep the phone and computer as cool as practical (unless of course you are shooting in Scotland in February in which case the opposite will be true: trying to figure out how to keep things warm and dry in the middle of a wet and freezing moor will become paramount).
    • Special mention for ocean-front shoots:  corrosion is a deadly enemy of iPhones and other such equipment. Wipe down ALL equipment (with appropriate cloths and solutions) every night after the shoot. Even the salt air makes deposits on every exposed metal surface – and later on a very hard to remove scale will become apparent.
  • A final note for sunny outdoor shoots: seeing the iPhone screen is almost impossible in bright sunlight, and unlike DSLRs the iPhone does not have an optical viewfinder. Some sort of ‘sunshade’ will be required. While researching this online, I came across this little video that shows one possible solution. Obviously this would have to be modified to accommodate the audio adapter, iPro lenses, etc. shown in my project, but it will hopefully give you some ideas. (Thanks to triplelucky for this video).

Software:

As amazing as the hardware capabilities of the above system are (iPhone, supplemental lenses, audio adapters, etc.) – none of this would be possible without the sophisticated software that is now available for this platform at such low cost. The list of software that I am currently using to produce this video is purely of my own choosing – there may be other equally viable solutions for each step or process. I feel what is important is the possibility of the process, not the precise piece of kit used to accomplish the task. Obviously, as I am using the iOS platform, all the apps are “Apple iPhone/iPad compliant”. The reader who chooses an alternate platform will need to do a bit of research to find similar functionality.

As a parallel project, I am currently describing my experiences with the iPhone camera in general, as well as many of the software packages (apps) that support the iPhone still and video camera. These posts are elsewhere in this same blog location. For that reason, I will not describe in any detail the apps here. If software that is discussed or listed here is not yet in my stable of posts, please be patient – I promise that each app used in this project will be discussed in this blog at some point. I will refer the reader to this post where an initial list of apps that will be discussed is located.

Here is a short list of the apps I am currently using. I may add to this list before I complete this project! If so, I will update this and other posts appropriately.

Storyboard Composer Excellent app for building storyboards from shot or library photos, adding actors, camera motion, script, etc. Powerful.

Movie*Slate A very good slate app.

Splice Unbelievable – a full video editor for the iPhone/iPad. Yes, you can: drop movies and stills on a timeline, add multiple sound tracks and mix them, work in full HD, apply loads of video and audio efx, add transitions, burn in titles, resize, crop, etc. etc. Now that doesn’t mean that I would choose to edit my next feature on a phone…

Avid Studio  The renowned capability of Avid now stuffed into the iPad. Video, audio, transitions, etc. etc. Similar in capability to Splice (above) – I’ll have a lot more to say after these two apps get a serious test drive while editing all the footage I have shot.

iTC Calc The ultimate time code app for iDevices. I use on both iPad and iPhone.

FilmiC Pro Serious movie camera app for iPhone. Select shooting mode, resolution, 26 frame rates, in-camera slating, colorbars, multiple bitrates for each resolution, etc. etc.

Camera+ I use this as much for editing stills as shooting; the biggest advantage over the native iPhone camera app is that you can set different parts of the frame for exposure and focus.

almost DSLR is the closest thing to fully manual control of the iPhone camera that you can get. Takes some training, but is very powerful once you get the hang of it.

PhotoForge2 Powerful editing app. Basically Photoshop on the iPhone.

TrueDoF This one calculates true depth-of-field for a given lens, sensor size, etc. I use this to plan my range of focus once I know my shooting distance.

OptimumCS-Pro This is sort of the inverse of the above app – here you enter the depth of field you want, then OCSP tells you the shooting distance and aperture you need for that. (A worked depth-of-field calculation appears just after this list.)

Juxtaposer This app lets you layer two different photos onto each other, with very controllable blending.

Phonto One of the best apps for adding titles and text to shots.

Some of the above apps are designed for still photography only, but since stills can be laid down in the video timeline, they will likely come into use during transitions, effects, title sequences, etc.
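For readers curious what apps like TrueDoF and OptimumCS-Pro compute under the hood, here is a minimal sketch using the standard thin-lens depth-of-field formulas. The 4.28mm focal length comes from the hardware discussion elsewhere in this blog; the f/2.4 aperture and the circle of confusion are my assumptions – substitute your own values:

```python
import math

# Depth-of-field sketch in the spirit of TrueDoF / OptimumCS-Pro.
# f/2.4 and the circle of confusion are assumptions, not measured specs.

f_mm   = 4.28     # iPhone4S focal length (mm)
N      = 2.4      # fixed aperture (assumed)
coc_mm = 0.004    # circle of confusion for a ~1/3" sensor (assumed)

def dof_limits(subject_mm):
    """Near/far limits of acceptable focus for a subject distance in mm."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm    # hyperfocal distance
    near = subject_mm * (H - f_mm) / (H + subject_mm - 2 * f_mm)
    far = subject_mm * (H - f_mm) / (H - subject_mm) if subject_mm < H else math.inf
    return near, far

near, far = dof_limits(1000)    # subject at 1 m
print(f"At 1 m: sharp from ~{near/1000:.2f} m to ~{far/1000:.2f} m")
```

Note how generous the result is even wide open – the tiny sensor and very short focal length give a deep zone of focus, a point that comes up again in the fixed-aperture discussion later in this series.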

I used FilmiC Pro as the only video camera app for this project. This was based firstly on personal preference and on the capabilities that it provided (the ability to lock focus, exposure and white balance was critical to maintaining continuity across takes in my opinion). Once I had selected a video camera app with which I was comfortable, I felt it important to use it on both iPhones – again for continuity of the content. There may be other equally capable apps for this purpose. My focus was on producing as high a quality product as possible within the means and capabilities at my disposal. The particular tools are less important than the totality of the process.

The process of dumping footage off the iPhone (transferring video files to external storage) requires some additional discussion. The required hardware has been mentioned above, now let’s dive into process and the required software. The biggest challenge is logistics: finding enough time in between takes to transfer footage. If the iPhones are the only cameras used, then in one way this is easier – you have control over the timeline in that regard. In my case, this was even more challenging, as I was ‘piggybacking’ on an existing shoot so I had to fit in with the timeline and process in place. Since professional video cameras all use removable storage, they only require a few minutes to effectively be ready to shoot again after the on-camera storage is full. But even if iPhones are the only cameras, taking long ‘time-outs’ to dump footage will hinder your production.

There are several ways to maximize the transfer speed of files off the iPhone, but the best way is to make use of time management:  try to schedule dumping for normal ‘down time’ on the set (breaks, scene changes, wardrobe changes, meal breaks, etc.)  In order to do this you need to have your ‘transfer station’ [computer and external drive] ready and powered up so you can take advantage of even a short break to clear files from the phone. I typically transferred only one to three files at a time, so in case we started up sooner than expected I was not stuck in the middle of a long transfer. The other advantage in my situation was that the iPhone charges while connected via USB cable, so I was able to accomplish two things at once: replenish the battery capacity depleted by shooting with the Mobile In audio adapter (which prevents running on line power), and dump the files to external storage.

My 2nd camerawoman, Tara, brought her Mac Air laptop for file transfer to an external USB drive; I used a Dell PC laptop (discussed above in the hardware section). In both cases, I found that using the native OS file management (Image Capture [part of OS] for the Mac, Windows Explorer for the PC) was hideously slow. It does work: after plugging the iPhone into the USB connector on the computer, the iPhone shows up as just another external disk, and you can navigate down through a few folders and find your video files. On my PC (which BTW is a very fast machine – basically a 4-core mobile workstation that can routinely transfer files to/from external drives at over 150MB/s) the best transfer speed I could obtain with Windows Explorer amounted to needing almost an hour to transfer 10 minutes of video off the iPhone – a complete non-starter in this case. After some research, I located software from WideAngle Software called TouchCopy that solved my problem. They make versions for both Mac and PC, and it allowed transfer off the iPhone to external storage about 6x faster than Windows Explorer. My average transfer times were approximately ‘real time’ – i.e. 10 minutes of footage took about 10 minutes to transfer. There may be other similar applications out there – as mentioned earlier I am not in the software reviewing business – once I find something that works for me I will use that, until I find something “better/faster/cheaper.”

To summarize the challenging file transfer issue:

  • Use the fastest hardware connections and drives that you can.
  • Use time management skills and basic logistics to optimize your ‘windows’ for file transfer.
  • Use supplemental software to maximize your transfer speed from phone to external storage.
  • Transfer in small chunks so you don’t hold up production.

The last bit that requires a mention is file backup. Your original footage is impossible to replace, so you need to take exquisite care with it. The first thing to do is back it up to a second external physical drive immediately after the file transfer. Typically I started this task as soon as I was done dumping files off the iPhone – this task could run unsupervised during the next takes. However, one thing to consider before doing that (and this may depend on how much time you have during breaks): the relabeling of the video files. The footage is stored on your iPhone as a generically labeled .mov file, usually something like IMG_2334.mov – not a terribly insightful description of your scene/take. I never change the original label, only add to it. There is a reason… it helps to keep all the files in sequential order when starting the scene selection and editorial process later. This can be very helpful when things go a bit askew – as they always do during a shoot. For instance, if the slate is missing on a clip (you DO slate every take, correct??) having the original ‘shot order’ can really help place the orphan take into its correct sequence. In my case, this happened several times due to slate placement:  since my iPhone cameras were in different locations, sometimes the slate was pointed where it was in frame for the DSLR cameras but was not visible to the iPhones.

I developed a short-hand description, taken from the slate at the head of each shot, that I appended to the original file name. This takes a few seconds (to launch Quicktime or VLC, shuttle in to the slate, pause and get the slate info), but the sooner you do this, the better. If you have time to rename the shots before the backup, then you don’t have to rename twice – or face the possibility of human error during this task. Here is a sample of one of my files after renaming: IMG_2334_Roll-A1_EP1-1_T-3.mov  This is short for Roll A1, Episode 1, Scene 1, Take 3.
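Renaming by hand works, but the convention is mechanical enough to script. A minimal sketch – the helper name is my own invention, not part of any tool mentioned here:

```python
import os

# Sketch of the naming convention above: keep the original camera file
# name and append roll / episode-scene / take info read off the slate.

def slated_name(original, roll, episode, scene, take):
    """IMG_2334.mov -> IMG_2334_Roll-A1_EP1-1_T-3.mov"""
    base, ext = os.path.splitext(original)
    return f"{base}_Roll-{roll}_EP{episode}-{scene}_T-{take}{ext}"

print(slated_name("IMG_2334.mov", roll="A1", episode=1, scene=1, take=3))
# -> IMG_2334_Roll-A1_EP1-1_T-3.mov
# To actually rename on disk:
# os.rename("IMG_2334.mov", slated_name("IMG_2334.mov", "A1", 1, 1, 3))
```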

However you go about this, just ensure that you back up the original files quickly. The last step of course is to delete the original video files off the iPhone so you have room for more footage. To double-check this process (you NEVER want to realize you just deleted footage that was not successfully transferred!!!) I do three things:

  1. Play into the file with headphones on to ensure that I have video and audio at head, middle and end of each clip. That only takes a few seconds, but just do it.
  2. Using Finder or Explorer, get the file size directly off the still-connected iPhone and compare it to the copied file on your external drive (look at actual file size, not ‘size on disk’, as your external disk may have different sector sizes than the iPhone). If they are different, re-transfer the file. (A small scripted version of this check appears after this checklist.)
  3. Using the ‘scrub bar’, quickly traverse the entire file using your player of choice (Quicktime, VLC, etc.) and make sure you have picture from end to end in the clip.

Then and only then, double-check exactly what you are about to delete, offer a small prayer to your production spirit of choice, and delete the file(s).
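Check #2 is also easy to script while the phone is still mounted. A minimal sketch – both mount points are hypothetical placeholders for wherever your phone and backup drive appear:

```python
import os

# Compare actual byte counts (not 'size on disk') between the file still
# on the phone and the copy on the external drive, before any deletion.
# Both paths below are hypothetical - substitute your own.

phone_file  = r"E:\DCIM\100APPLE\IMG_2334.mov"
backup_file = r"F:\Shoot\IMG_2334_Roll-A1_EP1-1_T-3.mov"

a, b = os.path.getsize(phone_file), os.path.getsize(backup_file)
if a == b:
    print(f"OK: {a:,} bytes match – candidate for deletion")
else:
    print(f"MISMATCH: phone={a:,} backup={b:,} – re-transfer!")
```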

Summary:

This is only the beginning! I will write more as this project moves ahead, but wanted to introduce the concept to my audience. A deep thanks to all of you who have read my past posts on various subjects, and please return for more of this journey. Your comments and appreciation provide the fuel for this blog.

Support and Contact Details:

Please visit and support the talented women that have enabled me to produce this experiment. This would not have been possible otherwise.

Tiffany Price, Writer, Producer, Actress
Lauren DeLong, Writer, Producer, Actress
Ambika Leigh, Director, Producer

A few comments on the iPhone camera posts…

April 13, 2012 · by parasam

I have just posted the third of my ongoing series of discussions on iPhone camera apps (Camera Plus Pro). Thanks to all of you around the world who have taken the time and interest to read my most recent post on Camera+. To date I have had about 7,500 views from 93 countries – that is a fantastic response! Please share with your friends:  according to the developers of Camera+ they have sold about 10 million copies of their app, so that means there are millions more of you out there that might like to have a manual for this app (which does not come with one) – my blog serves as at least a rudimentary guide for this cool app.

It’s taken several weeks to get this next one written; hopefully the rest will come a bit faster. Apps that have a lot of filters require more testing (and a lot of image uploading – the most recent post on Camera Plus Pro has 280 images!). BTW, I know this makes the posts a bit on the large side, and increases your download times. However, since the subject matter is comparing details of color, tonal values, etc. I feel that high resolution images are required for the reader to gain useful information, so I believe the extra time is worth it.

Although this list is contained in my intro to the iPhone camera app software, here are the apps for which I intend to post analysis and discussions:

Still Imaging Apps:

  • Camera
  • Camera+
  • Camera Plus Pro
  • almost DSLR
  • ProHDR
  • Big Lens
  • Squareready
  • PhotoForge2
  • Snapseed
  • TrueDoF
  • OptimumCS-Pro
  • Iris Photo Suite
  • Filterstorm
  • Genius Scan+
  • Juxtaposer
  • Frame X Frame
  • Phonto
  • SkipBleach
  • Monochromia
  • MagicShutter
  • Easy Release
  • Photoshop Express
  • 6×6
  • Camera!

Motion Imaging Apps:

  • Movie*Slate
  • Storyboard Composer
  • Splice
  • iTC Calc
  • FilmiC Pro
  • Camera
  • Camera Plus Pro
  • Camcorder Pro

The above apps are selected only because I use them. I am not a professional reviewer, have no relationship with any of the developers of the above apps, am not paid or otherwise motivated externally. I got started on this little mission as I love photography, science, and explaining how things work. At first, I just wanted to know what made the iPhone tick… as my article on the hardware explains, that was more of a mission than I had counted on… but fun! I then turned to the software that makes the hardware actually do something useful… and here we are.

The choice of apps is strictly personal – this is just what I have found useful to me so far. I am sure there are others that are equally as good for others – and I will leave it to those others to discuss. It’s a big world – lots of room for lots of writing… Undoubtedly I will add things from time to time, but this is a fair list to start with!

Readers like you (and so many thanks to those that have commented!) are what brings me back to the keyboard. Please keep the comments coming. If I have made errors, or confused you, please let me know so I can correct that. Blogs are live things – continually open to reshaping.

Thanks!

iPhone4S – Section 2: Contrast – the essence of photography – and what that has to do with an iPhone…

March 9, 2012 · by parasam

If a photo was all white – or all black – there would be no contrast, no differentiation, no nothing. A photo is many things, but first and foremost it must contain something. And something is recognized by one shape, one entity, standing out from another. Hence… contrast. This is the first principle of a photograph – whether color or monochrome, whether tack sharp like a Weston or a blur against a rain smeared window – contrast of elements is the core of a photograph.

After that can come focus, composition, tonal range, texture, evocative subject… all the layers that distinguish great from mundane – but they all run second.

Although this set of posts is indeed concerned with an exploration of the mechanics and limitations of a rather cool cellphone camera (iPhone4S), the larger intent is to equip the user with a tool to image his or her surroundings. The fact that such a small and portable device is capable of imaging at a level that only a few years ago was relegated to DSLR cameras is a technological wonder. Absolutely it is not a replacement for a high quality camera – but in the hands of a trained and experienced person with a vision and the patience to understand the possibilities of such a device, improbable things are possible.

Contrast of devices – DSLR vs cellphone

This post will cover a few of the limitations and benefits of a relatively high quality cellphone camera. While I am discussing the iPhone4S camera in particular, these observations apply to any modern reasonably high quality cellphone camera.

For the first 50 years or so of photography, portability was not even a possibility. Transportability yes, but large view cameras, glass or tin plates and the need for both camera (on tripod) and subjects (either mountains that didn’t move or people frozen in a tableau) to remain locked in place didn’t do much for spontaneity. Roll film, the Brownie camera, the Instamatic, eventually the 35mm camera system – not to mention Polaroid – changed the way we shot pictures forever.

But more or less, for the first hundred years of this art form, the process was one of delayed gratification. One took a photo, then waited through hours or days of photochemical processes to see what actually happened. The art of previsualization became paramount for a professional photographer, for only if you could reasonably predict how your photo would turn out could you stay in business!

With the first digital picture ever produced in 1975 (in a Kodak lab), this is indeed a young science. Consumer digital photography is only about 20 years old – and a good portion of that was relatively low performance ‘snapshot’ cameras. High end digital cameras for professionals only came on the scene in the late 1990’s – at obscene prices. The pace of development since then has been nothing short of stratospheric.

We now have DSLR (Digital Single Lens Reflex) cameras that have more resolution than any film stock ever produced; with lenses that automatically compensate for vibration, assist in exposure and focus, and have light-gathering capabilities that will allow excellent pictures in starlight.

These high-end systems do not come cheaply, nor are they small and lightweight. Even though they are based on the 35mm film camera system, and employ a digital sensor about the same size as a 35mm film frame – they are highly complex imaging computers and optical systems – and are not for the faint of heart or pocketbook!

Full-sized DSLR with zoom lens and bellows hood

With camera backs only (no lens) going for $7,000 and high quality lenses costing from $2,000 – $12,000 each, these wonders of modern imaging technology require substantial investment – of both knowledge and cash.

On the other end of the spectrum we have the consumer ‘point and shoot’ cameras, usually of a few megapixels resolution, and mostly automatic in function. The digital equivalent of the Kodak Brownie.

Original Kodak Brownie camera

These digital snapshot cameras revolutionized candid photography. The biggest change was the immediacy – no more waiting and expensive disappointment of a poorly exposed shot – one just looked and tried again. If nothing else, the opportunity to ‘self-teach’ has already improved the general photographic skill of millions of people.

Almost as soon as the cellphone was invented, the idea of stuffing a camera inside came along. With the first analog cellphones arriving in the mid-1980s, within about 15 years (1999 to be exact) the first cellphone with a built-in camera was announced (the Kyocera VP-210, 0.11MP).

A scant 13 years later we have the iPhone4S and similar camera systems routinely used by millions of people worldwide. In many cases, the user has no photographic training and yet the results are often quite acceptable. This blog however is for those that want to take a bit of time to ‘look under the hood’ and extract the maximum capabilities of these small but powerful imaging devices.

Major differences between DSLR cameras and cellphone cameras

The essence of a cellphone camera is portability, while the prime focus of a DSLR camera is to produce the highest quality photograph possible – given the constraints of cost, weight and complexity of operation. It is only natural then that many compromises are made in the design of a cellphone camera. The challenges of very light weight, low cost, small size and other technical issues forced cellphone cameras into a low quality genre for some time. Not any more. Yes, it is absolutely correct that there are many limitations to even ‘high quality’ cellphone cameras such as the iPhone, but with an understanding of these limitations, it is possible to take photos that many would never assume came from a phone.

One of the primary limitations on a cellphone camera is size. Given the physical constraints of the design package for modern cellphones, the entire camera assembly must be very small, usually on the order of ½” square and less than ¼” thick. Compared to an average DSLR, which is often 4” wide by 3” high and 2” thick – the cellphone camera is microscopic.

The first challenge this presents is sensor size. Two factors come into play here:  the actual X-Y dimensions of the sensor, and lens focal length. The covering power of the lens (the area that a lens can cover with a focused image) is a function of its focal length, and the focal length in turn dictates the physical depth of the lens barrel assembly – which materially affects the overall thickness of the lens/camera assembly. Compromises have to be made here to keep the overall size within limits.

The combination of physical sensor size and the depth that would be required if the actual focal length were more than about 5mm mandates typical cellphone camera sensors to be in the 1/3” range. For example, the iPhone4S sensor is 4.54mm x 3.42mm and the lens has a focal length of 4.28mm. Most other quality cellphone cameras are in this range.
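To put those figures in familiar 35mm terms, here is a quick calculation of the sensor diagonal, the crop factor versus a full 35mm frame, and the equivalent focal length (the 43.3mm full-frame diagonal is the standard figure; everything else is from the numbers above):

```python
import math

# Sensor geometry of the iPhone4S in familiar 35mm terms.

w, h, f = 4.54, 3.42, 4.28     # sensor dimensions and focal length (mm)
FULL_FRAME_DIAG = 43.3         # diagonal of a 35mm film frame (mm)

diag = math.hypot(w, h)                  # ~5.68 mm
crop = FULL_FRAME_DIAG / diag            # ~7.6x crop factor
print(f"diagonal {diag:.2f} mm, crop {crop:.1f}x, "
      f"~{f * crop:.0f} mm equivalent")  # roughly a 33 mm moderate wide-angle
```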

Full details (and photographs) of the actual iPhone hardware will be discussed in the following blog, iPhone4S Specs & Hardware.

The limitation of sensor size then sets the physical size of the pixels that make up the sensor. With a desire to offer a relatively high megapixel count – for sharp resolution – the camera manufacturer is then forced to a very small pixel size. The iPhone4S pixel size is 1.4μm. That is really, really small – about 55 millionths of an inch on a side. The average pixel on a high quality “35mm” style DSLR camera is roughly 40X larger in area…

The small pixel size is one of the largest factors in the differences that make cellphone cameras less capable than full-fledged DSLR cameras. The light sensitivity is much less, due to the basic nature of how a CCD/CMOS sensor works.

Film vs CCD/CMOS sensor technology – Blacks & Whites

To fully understand the issues with small digital sensor pixel size we need to briefly revisit the differences between film and digital image capture first. Up until 50 years ago, only photochemical film emulsions could capture images. The fundamental way that light is ‘frozen’ into a captured image differs greatly between film and digital techniques – and it is visible in the resultant photographs.

That is not to say one is better; they are just different. Furthermore, there is a difference in appearance to the human eye between a fully ‘chemical’ process (a photograph captured on film, then printed directly onto photo paper and developed chemically) and even a film image that is scanned and printed digitally. Film scanners use the same CCD arrays that digital cameras use, and the basic difference of image capture once again comes into play.

Without getting lost in the wonderful details of materials science, physics and chemistry that all play a part in how photochemical photography works, when light strikes film the energy of the light starts changing certain molecules of the film emulsion. The more light that hits certain areas of the film negative, the more certain molecules start clumping together and changing. Once developed, these little groups of matter become the shadows and light of a photograph. All film photographs show something we call ‘grain’ – very small bits of optical gravel that actually constitute the photograph.

The important bit here is to remember that with film, exposure (light intensity X time) results in increased amounts of ‘clumped optical gravel’ – which when developed looks black on a film negative. Of course black on a negative prints to white on a positive – the print that we actually view.

Conversely, on film, very lightly exposed portions of the negative (the shadows, those portions of the picture that were illuminated the least) show up as very light on the negative. This brings us to one of the MOST important aspects of film photography as compared to digital photography:

  • With film, you expose for the shadows and print for the highlights
  • With digital, you expose for the highlights and print for the shadows

The two mediums really ARE different. An additional challenge here is when we shoot film, but then scan the negative and print digitally. What then? Well, you have to treat this scenario as two serial processes:  expose the film as you should – for the shadows. Then when you scan the film, you must expose for the highlights (since you are in reality taking a picture of a picture) and now that you are in the digital domain, print for the shadows.

The reason behind all this is due to the difference between how film reacts to light and how a digital sensor works. As mentioned above, film emulsions react to light by increasing the amount of ‘converted molecules’ – silver halide crystals to be exact – leaving unexposed areas (dark areas in the original scene) virtually unexposed.

Digital sensor capture, using either the CCD or CMOS technology (more on the difference in a moment), responds to light in a different manner:  the photons that make up light fall on the sensor elements (pixels) and ‘fill up’ the energy levels of the ‘pixel container’. The resultant voltage level of each pixel is read out and turned into an image by the computational circuits associated with the sensor. The dark areas in the original image, since they contribute very little illumination, leave the ‘pixel tanks’ mostly unfilled. The high sensitivity of the photo-sensitive arrays means that any stray light, electrical noise, etc. can be interpreted as ‘illumination’ by the sensor electronics – and is. The bottom line therefore is that the low light areas (shadows) in an image captured by digital means are always the most noisy.

To sum it up:  blacks in a film negative are the least noisy, as basically nothing is happening there; blacks in a digital image are the most noisy, since the unfilled ‘pixel containers’ are like little magnets for any kind of energy. What this means is that fundamentally, digitally captured images are different looking than film:  noisy blacks in digital, clean blacks in film.

There are two technologies for digital sensor image capture:  CCD and CMOS. While similar at the high level, there are significant differences. CCD (Charge Coupled Device) sensors are older and typically create high quality, low-noise images. CMOS (Complementary Metal Oxide Semiconductor) arrays consume much less power, are less sensitive to light, and are far less expensive to fabricate. This means that all cellphone cameras use CMOS technology – the iPhone included. Many medium to high-end DSLR cameras have used CCD technology, though recent DSLRs increasingly use highly refined CMOS sensors.

The above issue only adds to the quality challenge for a cellphone camera:  using less expensive technology means higher noise, lower quality, etc. for the produced image. Therefore, to get a good exposure on digital, one would think that you would want to ‘expose for the shadows’ to be sure you reduced noise by adding exposure to the shadow areas. Unfortunately, the opposite is actually the case!

The reason is that in digital capture, once a pixel has been ‘filled up’ (i.e. received enough light that the level of that pixel is at the maximum [255 for an 8-bit system]) it can no longer hold any detail – that pixel is just clipped at pure white. No amount of post-processing (Photoshop, etc.) can recover detail that is lost since the original capture was clipped. With under-exposed blacks, you can always apply noise-reduction, raise levels, etc. and play with an empty-ish container.

That is why it’s so important to ‘expose for the highlights’ with digital – once you have clipped an area of the image at pure white, you can’t ever get that detail back again – it’s burnt out. For film, the opposite is true due to the negative process:  if you didn’t get SOME exposure in the shadows, you can’t make something out of nothing – all you get is gray noise if you try to pump up blacks that have no detail. In film, you have to expose for the shadows, then you can always tone down the highlights.

So, with your cellphone cameras, ALWAYS make sure you don’t blow out the highlights, even if you have to compromise with noisy blacks – you can use various techniques to minimize that issue during post-production.
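A toy simulation makes the asymmetry obvious – clipped highlights are lost for good, while lifted shadows merely get noisier (simulated 8-bit values only, not real camera data):

```python
import numpy as np

# Why clipped highlights are unrecoverable but dark areas can be lifted.
# Simulated 8-bit pixel values only.

scene = np.array([10.0, 50.0, 120.0, 200.0, 240.0])   # "true" light levels

over  = np.clip(scene * 2.0, 0, 255)   # over-exposed: bright values clip
under = np.clip(scene * 0.5, 0, 255)   # under-exposed: everything survives

print("true values:    ", scene)
print("undo over-exp:  ", over / 2.0)   # 200 and 240 both come back as 127.5
print("undo under-exp: ", under * 2.0)  # all values restored (noise doubles too)
```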

Fixed vs Adjustable Aperture

Another major difference between digital cameras and cellphone cameras is the issue of fixed aperture. All cellphone cameras have a fixed aperture – i.e. no way to adjust the lens aperture as one does with the “f-stop” ring on a camera lens. Essentially, the cellphone lens is “wide open” all the time.  This is purely a function of cost and complexity. In normal cameras, the aperture is controlled with a ‘variable vane’ system, a rather complex set of curved thin pieces of metal that open and close a portal within the lens assembly to allow either more or less light through the lens as a whole.

With a typical lens measuring 3” in diameter and about 2” – 6” long this was not a mechanical design issue. A cellphone lens on the other hand is usually less than ¼” in diameter and less than ¼” in depth. The mechanical engineering required to insert such an aperture control mechanism would be very difficult and exorbitantly expensive.

Also, the operational desire of most cellphone manufacturers is to keep the device very simple to operate, so having another control that significantly affected camera operations and control was not high on the feature list.

A fixed aperture means several things:  a lens that is always wide open would normally imply a shallow depth of field, but with the relatively wide angle lenses employed by most mobile phone manufacturers the depth of field is usually more than enough; and for daylight exposures, adjustment of both ISO and shutter speed is necessary to avoid over-exposure.

Exposure settings

On a digital camera, you can use either automatic or manual controls to set the exposure. Many cameras allow either “shutter priority” or “aperture priority.”  What this means is that with shutter priority the user selects a shutter speed (or range of speeds), and the camera adjusts the aperture as is required to get the correct exposure. This setting does not allow the user to set the f-stop, so the depth of field on a photograph will vary depending on the light level.

With aperture priority, the user selects an f-stop setting, and the camera selects a shutter speed that is appropriate. With this setting, the user does not set the shutter speed, so care is required if slow shutter speeds are anticipated:  camera and/or subject movement must be minimized.

On a film or digital camera, the user can set the ISO speed rating manually. This is not possible on most cellphone cameras. The speed rating of the sensor (ISO #) is really a ‘baseline’ for the exposure setting.

Here is an example of setting the base ISO speed correctly:

Under exposure

Normal exposure

Over exposure

The exposure algorithm inside the cellphone camera software computes both the shutter speed and the ISO (the only two factors that it can change, since the aperture is fixed) and arrives at a compromise that the camera software believes will make the best exposure. Here is where art and experience come into play – no cellphone hardware or software manufacturer has yet published their algorithms, nor do I ever expect this to happen. One must shoot lots and lots of exposures under controlled conditions to attempt to figure out how a given camera (and app) is deciding to set these parameters.

Like anything else, if you take the time to know your tools, you get a better result. From what I have observed by shooting several thousand frames with my iPhone4S, and using about 10 different camera apps to do so, the following is a very rough approximation of a typical algorithmic process:

  • A very fast ‘pre-exposure’ of the entire frame is performed  (averaging together all the pixels without regard to an exposure ‘area’) to arrive at a sense of the overall illumination of the frame.
  • From this an initial ISO setting is assigned.
  • Based on that ISO, then the ‘exposure area’ (usually shown in a box in the display, or sometimes just centered in the frame) is used to further set the exposure:  the pixels within the exposure area are averaged, then, based on the ISO setting, a shutter speed is chosen to place the average light level at Zone 5 (middle gray value) of a standard exposure index. [Look for an upcoming blog on Zones if you are not familiar with this]

It appears that subsequent adjustments to this process can happen (and most likely do!) – again, dependent on a particular vendor’s choice of algorithm:  for instance, if, based on the above sequence, the final shutter speed is very slow (under 1/30 second) the base ISO sensitivity will likely be raised, as slow shutter speeds reveal both camera shake and subject movement.
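To make that guess concrete, here is a toy version of the decision flow in code. To be clear: this is my speculation rendered as a sketch, NOT any vendor's actual algorithm; the shutter and ISO limits are the ones from my own tests below:

```python
# Toy auto-exposure sketch of the process guessed at above.
# Pure speculation - no vendor's actual algorithm.

SHUTTERS = [1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000, 1/2000]
ISO_MIN, ISO_MAX = 64, 1000     # limits from my iPhone4S tests
MIDDLE_GRAY = 118               # ~Zone 5 in 8-bit values

def auto_expose(frame_mean, box_mean):
    """frame_mean / box_mean: average 0-255 levels of the whole frame
    and of the user's exposure box."""
    # 1) Whole-frame pre-exposure sets a starting ISO.
    iso = min(ISO_MAX, max(ISO_MIN, int(ISO_MIN * 128 / max(frame_mean, 1))))
    # 2) Pick the shutter placing the box average nearest middle gray
    #    (brightness modeled as linear in shutter time vs a 1/60 baseline).
    shutter = min(SHUTTERS, key=lambda s: abs(box_mean * s / (1/60) - MIDDLE_GRAY))
    # 3) If that shutter is slower than 1/30, trade ISO for shutter speed.
    if shutter < 1/30 and iso < ISO_MAX:
        iso, shutter = min(iso * 2, ISO_MAX), 1/30
    return iso, shutter

iso, shutter = auto_expose(frame_mean=40, box_mean=60)   # a dim scene
print(f"ISO {iso}, shutter 1/{round(1/shutter)} s")      # e.g. ISO 204, 1/30 s
```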

Neither Apple nor any other manufacturer seems to publish the exact limits on their embedded camera’s ISO sensitivity or range of shutter speeds. With many, many controlled tests, I have determined (and apparently so have others, as the published info I have seen on the web corroborates my findings) that the shutter speeds of the iPhone4S range from 1/15 sec to 1/2000 sec, while the ISO speeds range from ISO 64 to ISO 1000.

There are apps that will allow much longer exposures, and potentially a wider range of ISO values – I have not had time to run extensive tests with all the apps I have tried. Each camera app vendor has the choice to implement more or less features that Apple exposes in the programmatic interface (more on that in Part 4 of this series), so the potential variations are large.

Movement

As you have undoubtedly experienced, many photographs are spoiled by inadvertent movement of the subject, camera, or both. While sometimes movement is intended – and can make the shot artistically – most often this is not the case. With the very tiny sensor that is normal for any cellphone camera, the sensor is very often hungry for light, so the more you can give it, the better quality picture you will get.

What this means in practice is that, when possible in lower light conditions, brace the camera against a solid object, put it on a tripod (using an adaptor), etc. Here again you will get better results with experience:  our eyes have fantastic adaptive properties – cameras do not. When we walk inside a mall from the sunny outdoors, within seconds we perceive the mall to be as well lit as the outside – even though in real terms the average light value is less than 1% of what it was outdoors!

However, our little iPhone is now struggling to get enough light to expose a picture. Outdoors we might have found that at ISO 64 we were getting shutter speeds of 1/600 second; indoors we have changed to ISO 400 at 1/15 second! Such slow shutter speeds will almost always show blurred movement, whether from a person walking, camera shake, or both.
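To put numbers on that, here is a quick back-of-the-envelope calculation using the ISO and shutter figures just quoted (each photographic ‘stop’ is a doubling or halving of light):

```python
import math

# Figures from the text: sunny exterior vs mall interior.
outdoor_iso, outdoor_shutter = 64, 1/600
indoor_iso, indoor_shutter = 400, 1/15

# The camera compensated by raising ISO and slowing the shutter; the
# total compensation tells us how far the light level actually dropped.
iso_stops = math.log2(indoor_iso / outdoor_iso)              # ~2.6 stops
shutter_stops = math.log2(indoor_shutter / outdoor_shutter)  # ~5.3 stops

total = iso_stops + shutter_stops                            # ~8 stops
print(f"~{total:.1f} stops, i.e. ~{100 / 2**total:.1f}% of outdoor light")
```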

Here are a few examples:

1/20 sec @ ISO 400 – camera handheld: camera shake shows as blurred lines in the glass panels (upper left), plus subject movement (her left leg is almost totally blurred).

1/15 sec @ ISO 800 – iPhone braced against a signal light post for stability

Image format

Another big difference between DSLR cameras and cellphone cameras is the type (and variations) of image capture format. Cellphone cameras exclusively (at this time) capture only to compressed formats, usually .jpg. Often the user gets some limited control over the amount of compression and the resultant output resolution (I call it ‘shirt size formatting’ – as usually it’s S-M-L).

Regardless, the output format is significantly compressed from the original taking format. For instance, the 8-megapixel capture of the iPhone4S typically outputs a frame that is about 2.9MB in file size, in .jpg format. A semi-pro DSLR (2/3 format) at the same megapixel rating will output 48MB per frame in RAW format (16 bits per pixel). This is done mainly to conserve memory space in the cellphone system, as well as to greatly speed up transfers of data out of the phone.
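Some rough arithmetic makes the gap clear. Assuming (my assumption, not a published spec) that an uncompressed 8-bit RGB version of the frame would occupy width × height × 3 bytes:

```python
# Rough compression arithmetic for the iPhone4S still frame quoted above.
width, height = 3264, 2448                        # 8-megapixel resolution
uncompressed_mb = width * height * 3 / 1_000_000  # 8-bit RGB, ~24 MB
jpg_mb = 2.9                                      # typical output, per the text

print(f"Uncompressed: ~{uncompressed_mb:.0f} MB")
print(f"JPEG output:  {jpg_mb} MB (~{uncompressed_mb / jpg_mb:.0f}:1 compression)")
```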

However, one loses much by storing pictures only in the compressed jpg format. Without digressing into the details of digital photography post-production, once a picture is locked into the compressed world many potential adjustments are lost. That is not to say you can’t get a good picture if compressed, only that a correct exposure up front is even more important, since you can’t employ the rescue methods that ‘camera Raw’ allows.

The process of jpg compression introduces some artifacts as well, and these can range from invisible to annoying, depending on the content. Again, there is nothing you can do about it in the world of cellphone cameras, other than understand it and try to mitigate it by careful composition and exposure when possible. The scope of this discussion precludes a detailed treatise, but suffice it to say that jpeg artifacts become more noticeable with extremes of lighting conditions, so low light, brilliant glare and other such situations may show these more than a normally lit scene.

Flash photography

The ‘flash’ on cellphones is nothing more than a little LED lamp that can ‘flash’ fairly rapidly. Yes, it allows one to take pictures in low light that would otherwise not be possible, but that’s about it. It has almost no similarity to a real strobe light used by DSLR cameras, whether built-in to the camera or a professional outboard unit.

The three big differences:

  1. Speed:  the length of a strobe flash is typically 1ms (1/1000 sec), while the iPhone LED ‘flash’ is about 100ms (1/10 sec). That is a factor of 100x.
  2. Light output:  DSLR strobe units put out MUCH more light than cellphone flash units. Exactly how much is not easy to measure, as Apple does not publish specs, and the LED light unit works very differently than a strobe unit. But conservatively, a typical outboard flash unit (Nikon SB-900 for example) produces 2,500 lumen-seconds of illumination, while the iPhone4S is estimated at about 25 lumen-seconds. That means a strobe flash delivers about 100x more light…
  3. Color temperature:  Commercial strobe lights are carefully calibrated to output at approximately 5500°K, while the iPhone (and similar cellphone flashes) are uncalibrated. The iPhone in particular seems quite blue, probably around 7000°K or so. The automatic white balance will try to fix this, but this function often fails in two common scenarios:  mixed lighting (for instance flash in a room lit with tungsten lamps); and subjects that don’t have many black or white areas (which AWB circuits use to compute the white point – see the sketch after this list).
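For a feel of what a white balance correction does, here is a minimal sketch of one common textbook approach, the ‘gray world’ assumption. To be clear, this is not what the iPhone actually does (its algorithm is unpublished); it simply illustrates the per-channel gain idea:

```python
import numpy as np

def gray_world_awb(img):
    """Minimal 'gray world' auto white balance sketch.

    Assumes the scene averages out to neutral gray, then scales each
    channel so its mean matches the overall mean. Real AWB circuits
    (the iPhone's included) are far more sophisticated, and as noted
    above they can fail under mixed lighting or when the scene lacks
    neutral reference areas.
    """
    img = img.astype(np.float64)                  # H x W x 3 RGB array
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means  # per-channel correction
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# A bluish 2x2 test patch gets pulled back toward neutral:
patch = np.full((2, 2, 3), [90, 100, 140], dtype=np.uint8)
print(gray_world_awb(patch)[0, 0])  # roughly [110, 110, 110]
```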

The bottom line: reserve the use of the cellphone ‘flash’ for emergencies only. And it kills battery life…

Colorimetry, color balance and color temperature

The above terms are all different. Colorimetry is the science of human color perception. Color Balance is the relative balance of colors within a given object in a scene, or within a scene as a whole. Color Temperature (in photographic terms) is the overall hue of a scene based on the white reference associated with the scene.

To give a few examples, in reverse order from the introduction:

The color temperature of an outdoor shot at sunset will be lower (warmer) than that of a shot taken at high noon. The standard for photography for daylight is 5000°K (degrees Kelvin is the unit for color temperature), with sunset being about 2500°K, indoor household lighting equivalent to about 2800°K, while outdoor light in the shade from an open sky (no direct sunlight) is often about 9000°K. The lower numbers look reddish-orange, the higher numbers are bluish.

Color balance can be affected by illumination, the object itself, errors in the color response of the taking sensor, as well as other factors. Sometimes we need to correct this, as the original color balance can look unnatural and detract from the photograph – for instance, if human skin happens to be illuminated with a fluorescent light while the larger scene is lit with traditional tungsten (household) lamps, the skin will take on an odd greenish tinge.

Colorimetry comes into play in how the Human Visual System (HVS) actually ‘sees’ what it is looking at. Many variables come into play here, but for our purposes we need to understand that the relative light levels and contrast of the viewing environment can significantly affect what we see. So don’t try to critically judge your cellphone shots outdoors – you can’t. Wait until you are indoors, and it’s best to review them on a monitor you can trust – with proper lighting conditions.

More on all these issues will be discussed later; this is just a taste, with some quick guides on how cellphones differ from more traditional photography.

Motion Picture Photography vs Still Photography

Most of this discussion so far has focused on still photography as opposed to video (motion photography). All of the principles hold true for video in the same way as for still camera shots. A few things bear mentioning – again in the vein of differences between a traditional video camera and a cellphone camera.

The typical built-in app for video in a cellphone runs at 24 fps (frames per second), the same speed as professional movie cameras. Some after-market apps allow the user to change that, but for our discussion we’ll stick with 24fps. The important bit to remember is that the frame rate caps the exposure time at roughly 1/24 sec per frame. (For various technical reasons the actual shutter speed is a bit faster, since there has to be some time between each frame to read the image from the sensor – so the actual shutter speed is closer to 1/30 sec.)

This has two important by-products:  the shutter speed is now effectively fixed, and since the aperture is also fixed, the only thing left for the camera app to adjust for exposure is the ISO speed. This means far less flexibility across lighting conditions. The other issue is that, since even 1/30 sec is a fairly slow shutter speed, camera movement is a very, very bad thing. Keep your movements slow and smooth, not jerky. Brace yourself whenever possible. Fast moving objects in the frame will be blurred – there is nothing you can do about that.
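In sketch form (again my own simplification, not Apple's algorithm), video exposure reduces to solving for a single variable:

```python
# Sketch of video-mode exposure: the aperture is fixed by hardware and
# the shutter is pinned near the frame interval, so ISO is the only
# remaining exposure lever.
FIXED_APERTURE = 2.4          # iPhone4S lens f-stop, not adjustable
FIXED_SHUTTER = 1 / 30        # ~frame interval at 24 fps, per the text
ISO_MIN, ISO_MAX = 64, 1000   # measured stills range quoted earlier

def video_iso(metered, target=0.18):
    """Pick the ISO that brings the metered frame level to mid-gray."""
    gain = target / metered
    return min(max(ISO_MIN * gain, ISO_MIN), ISO_MAX)  # clamp to range

# A dim interior metering at a quarter of mid-gray needs 4x the gain:
print(video_iso(0.045))  # ISO 256.0
```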

Another issue of concern with “cellphone cinematography” is the actual sensor area used (which affects noise and light sensitivity). For still photography the full sensor is used – in the case of the iPhone4S that is 3264 x 2448 – but in video mode the resolution is decreased to 1920 x 1080. This is a significant decrease in resolution, from 8 megapixels to 2 megapixels per frame! There are a number of reasons for this:

  • The HD video standard is 1920 x 1080
  • Using less than the full sensor allows for vibration reduction to take place in software – as the image jiggles around on the sensor, fast and complex arithmetic can move the offset frames back into place to help reduce the effects of camera shake.
  • The data rate from the sensor is reduced:  no current cellphone cpu and memory could keep up with full motion video at 8 megapixels per frame (see the arithmetic sketch after this list).
  • The resultant file size is manageable.
  • The compression engine can keep up with the output from the camera – again, the iPhone uses H.264 as the video compression codec for movies, and that process uses a lot of computer power – not to mention drains the battery faster than the sun melts butter on hot pavement. Yes, the iPhone will give you a full day of charge if you are not on WiFi, are mostly on standby or just making some phone calls. Want to drain it flat in under 2 hours? Just start shooting video…
  • And, of course, if you are shooting in low light and turn on the ‘torch’ (the little LED flash that stays on for videography), then your battery life can be measured in minutes! Use that sparingly, only when you have to – it doesn’t actually give that much light, and, being so close to the lens, it causes some strange lighting effects.
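Here is the back-of-the-envelope arithmetic behind the data-rate point above (my assumptions: 8 bits per channel, 3 channels, uncompressed):

```python
# Uncompressed data rates, assuming 8 bits per channel x 3 channels:
FPS = 24
full_sensor = 3264 * 2448     # ~8 MP stills resolution
hd_video = 1920 * 1080        # ~2 MP video resolution

for name, pixels in [("full sensor", full_sensor), ("1080p HD", hd_video)]:
    mb_per_sec = pixels * 3 * FPS / 1_000_000
    print(f"{name}: ~{mb_per_sec:.0f} MB/s uncompressed")

# full sensor: ~575 MB/s -- far beyond what a 2012-era phone could move;
# 1080p HD:    ~149 MB/s -- still heavy, hence the H.264 compression.
```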

BTW, even still photography uses a ton of battery power. I have surprised myself more than once by walking around shooting for an hour, then noticing my battery is at 21%. More than anything else, this motivated me to get a high quality car charger…

Summary

Ok, that’s it for this section – hope it’s provided a few useful bits of information that will help you make better cellphone shots. Here’s a very brief summary of tips to take away from the above discussion:

  • Give your pictures as much light as you can
  • Hold camera still, brace on solid object if at all possible
  • Expose for the highlights (i.e. don’t let them get overexposed or ‘blown out’)
  • Don’t use the built-in flash unless absolutely necessary

Blog Site Design Update

February 26, 2012 · by parasam

The site for this blog has been redesigned. It now supports additional features, including search, a more graphical selection pane, drop-down category selection and more. In addition, when the blog is viewed on an iPad a different ‘app-like’ interface is presented that is more appropriate to the “tap & swipe” navigation of a tablet. A slimmed-down, mostly text version is presented to smartphone users. The aim is to present the blog posts in a clean, uncluttered manner no matter what device is used to view the site.

Please comment to this post with any bugs or suggestions. Many thanks for reading!
