
Branding: my comments on possession of ‘mind share’

September 20, 2012 · by parasam

[Note:  I will be using names, logos, service marks, trade marks, etc. of various companies as ‘fair-use’ examples in this essay. The individual marks are owned and copyrighted by their respective owners, and should be respected as such. No association is implied or intended between myself and any of the aforementioned companies.]

Overview

I’m writing this article as a commentary on how pervasive “branding” has become in our lives: it affects the design and manufacture of most things we buy and, more importantly, it vies for a share of our minds, shaping how we think, how we perceive the reality around us, and how we make decisions. I believe this trend has overstepped logic, rational thought, common sense and even good business sense. I will present a brief history, some examples of current practice, and summarize with some observations.

Brand {definition}

According to Webster, a brand is:

  • a mark made by burning with a hot iron to attest manufacture or quality or to designate ownership
  • a printed mark made for similar purposes
  • a mark put on criminals with a hot iron
  • a mark of disgrace
  • a class of goods identified by name as the product of a single firm or manufacturer
  • an arbitrarily adopted name that is given by a manufacturer or merchant to an article or service to distinguish it as produced or sold by that manufacturer or merchant and that may be used and protected as a trademark
  • one having a well-known and usually highly regarded or marketable name

The American Marketing Association Dictionary defines brand as:

  • a “Name, term, design, symbol, or any other feature that identifies one seller’s good or service as distinct from those of other sellers.”

History

The word “brand” is derived from the Old Norse brandr meaning “to burn.” It refers to the practice of producers burning their mark (or brand) onto their products.

The oldest known generic brand in the world is Chyawanprash, च्यवनप्राश – a jam-like mixture of approximately 45 herbs, spices and other ingredients. It has been in continuous use in India and surrounding areas since the Vedic period, about 10,000 years ago. According to Ayurvedic tradition, the formulation was originally prepared by the ‘Royal Vaids’ – the twin ‘Ashwini Kumar brothers’, medical advisers to the Devas – for Chyawan Rishi at his ashram near Narnaul, Haryana, India, from which the name Chyawanprash derives. The first historically documented formula for Chyawanprash appears in the Ayurvedic treatise Charaka Samhita. The current annual market for this product is about $80 million US.

Other early ‘branding’ examples include the use of watermarks on paper by the Italians in the 1200s, the use of distinctive signatures by artists during the Renaissance (1500s), and the branding of cattle and criminals with hot iron tools (1800s). There is other evidence of ‘marking’ or ‘branding’ such as potter’s marks on porcelain and pottery in China, India, Greece and Italy as long ago as 1300 BCE; some early reporting of livestock branding dating back to 2000 BCE [no physical evidence survives today to assert this]; and some archeologists believe that the Babylonians used advertisements as long ago as 3000 BCE. So, for common discussion, the concept of branding has been around for the last 5,000 – 10,000 years – hardly an invention of Madison Avenue.

In terms of slightly more modern expressions of branding, the idea of permanence has long been associated with the concept of a brand – the use of a hot iron to burn a brand into the hide of cattle or the skin of a criminal was considered technologically advanced at the time. For instance, in England during the late Renaissance and right up to the beginning of the Industrial Revolution (1600s – 1800s), criminals were frequently branded with a letter on the cheek [for men] or on the breast [for women]: V for being convicted of the ‘crime’ of being a vagabond or a gypsy, F for ‘fraymaker’ [brawler or troublemaker], S for a runaway slave, M for malefactor, etc. France used iconographic brands such as the fleur-de-lis on the shoulder. In the American Colonies the branding of suspected/convicted adulterers with the letter A was common practice. The Puritans of that time were not known for their objectivity or legal accuracy, so the brand unfortunately ruined the lives of many based on conjecture and supposition. (And we’ll leave the issue of witches and early Massachusetts alone for now…) The novel “The Scarlet Letter” is based in part on this most unfortunate chapter of early American history.

The ‘branding’ of humans with permanent marks also uses the technology of the tattoo, as opposed to burning. While this practice was also used by many governments to mark ‘criminals’ – perhaps the most notorious example being the ‘prisoner serial number’ at Auschwitz – by far the larger use of tattoos has been by individuals themselves, either as an expression of body art or of alignment with a group/gang. I will return to this form of branding later in this article.

Although we often associate the branding of cattle with the “wild west” of America during the 1800s, this practice predates US cowboys by at least 3,500 years. Nevertheless, it is one area where this ‘hot iron’ method is still practiced today. The cattle still don’t seem to like it much. With the advent of modern technology, this may finally be changing, as various methods of alternative marking are being tested. Embedded chips, long-range RFID tags and other devices that can be read from a vehicle or airplane are much more useful for automatic counting and tracking of livestock than chasing down an otherwise uninterested cow to look at the burn mark on its hind quarter. In theory, using buried detector cables, Wi-Max and other combinations of modern technology, virtual fences may be a possibility, with real-time maps showing each rancher where their livestock is at any time, and allowing easy sorting and retrieval for breeding, medical treatment or harvesting.
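As a small aside on how such a ‘virtual fence’ might actually work, here is a minimal sketch in Python: a GPS fix from a hypothetical ear-tag transponder is tested against a ranch boundary polygon using the standard ray-casting point-in-polygon test. All names and coordinates are illustrative assumptions on my part, not any vendor’s actual system:

```python
def inside_fence(lon, lat, fence):
    """Return True if the point (lon, lat) lies inside the polygon `fence`,
    given as a list of (lon, lat) vertices; standard ray-casting test."""
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # Count how many polygon edges a horizontal ray from the point crosses.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# A rectangular paddock and two hypothetical ear-tag readings:
paddock = [(18.40, -33.90), (18.45, -33.90), (18.45, -33.95), (18.40, -33.95)]
print(inside_fence(18.42, -33.92, paddock))  # True  - animal is inside
print(inside_fence(18.50, -33.92, paddock))  # False - raise an alert
```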

As we moved into the 1800s, most parts of the modernizing world started to make rapid use of marking or branding. Silversmiths and goldsmiths, book publishers, manufactured goods – the list gets long very quickly. In the UK, for example, Bass & Co [brewery] claims their red triangle as the world’s first trademark. Lyle’s Golden Syrup, with its green-and-gold packaging – unchanged since 1885 – claims status as Britain’s oldest brand. All of this was done for various reasons.

To the proponents of branding (marketing-oriented people, and obviously many consumers), the reasons commonly listed are: to ensure honesty, provide quality assurance, identify source or ownership, hold producers responsible, and differentiate one product from another.

Current Practice and Effects of Branding

The current use of ‘brands’ is primarily commercial in nature: to increase or maintain sales and market share of a product or a service. The practice and concepts associated with branding are typically overseen by the marketing department of companies that own or manage such brands. From the point of view of brand owners/users, the following elements are often associated with the practice:

  • A brand is the personality that identifies a product, service or company.
  • The brand experience is the experiential aspect of the points of contact with a brand; the perception of a brand’s action or function.
  • The brand image is the psychological aspect of the brand within the mind of the user/consumer. This is a symbolic construct composed of thoughts, information and expectations of the branded product/service.
  • A brand is one of the core elements in an advertising campaign, as it is often the identifier used to relate a particular product, model, individual service, etc. with the larger commonality of the company.
  • The art and business of creating and maintaining a brand is known as brand management.
  • Focusing the entirety of a business or organization around its brand is called brand orientation.
  • A brand which is widely known in the marketplace has achieved brand recognition. Examples are Coca-Cola, Pepsi, Mercedes, Louis Vuitton, etc.
  • A brand franchise is an achievement of successful branding such that a large positive sentiment is generally held towards the brand and the associated product/service. For example, a Ferrari is known as a “hip, cool, fast, desirable car” – whether or not an individual can afford the car, the mechanics or the insurance.
  • Brand awareness is the impression that is instilled into a customer or user of a brand, such that they will recognize and link the brand to the underlying company or set of products/services. It involves both brand recognition and brand recall. Brand awareness is considered critical by marketers as consumers won’t consider your brand if they are unaware of it in the first place. Typically, brand awareness is promoted by repeated indoctrination of the consumer with a combination of brand name, logo, jingles, taglines, etc. to reinforce the awareness of the brand and associate it with a particular product or class of products.
  • The “Holy Grail” of brand awareness for a firm is called Top-of-Mind Awareness. This is when a consumer, asked without any external prompting which brand they associate with a particular product, names your brand; an example might be “Kleenex” when asked for a brand association for facial tissues.
  • Aided Awareness occurs when a prompt such as a list of brands is shown to a consumer, and they express recognition or awareness of your brand once this memory aid has been provided.
  • Strategic Awareness is the combination of Top-of-Mind Awareness coupled with the belief by the consumer that this brand is superior to other brands in the marketplace for similar products or services.
  • The elements that typically comprise a ‘brand experience’ often include some or all of the following:
    • Name – identifying word or words of the product, service, company.
    • Logo – visual glyph or symbol that is associated with the brand.
    • Graphics – associated graphical elements that often supplement the name or logo to create a unique visual reminder that helps to visually associate the brand with the underlying product/service.
    • Tagline – a short phrase often used in advertising, and repeated on product packaging, that is used primarily for memory association of the brand.
    • Shapes – certain product shapes are often associated (and patented/trademarked/etc.) with particular products. Examples might be the Coca-Cola bottle, the iPod and the Hershey’s Chocolate Bar.
    • Colors – certain colors or color schemes can be associated (and protected if you have good enough lawyers and patent attorneys) with products. Examples are the red-soled shoes of Christian Louboutin, the distinctive pink color of Owens-Corning fiberglass insulation.
    • Sounds – similar to a jingle or a catchphrase, a short melodic tune can be trademarked to a particular brand: the NBC tv network’s ‘chimes’ when the animated logo is displayed; the ‘five tones’ of the alien spaceship in Close Encounters of the Third Kind; etc.
    • Scents – an example is the unique fragrance of Chanel No. 5 perfume: the top notes of aldehydes, bergamot, lemon, neroli and ylang-ylang; the heart of jasmine, rose, lily of the valley and iris; the base of vetiver, sandalwood, vanilla, amber and patchouli.
    • Taste – as noted in the introductory history of branding in this article, Chyawanprash is an Indian paste of typically 45 spices; another example is Kentucky Fried Chicken (not as healthy as Chyawanprash…) with its “11 Herbs & Spices”.
    • Movements – even the directional movement of a car door can be trademarked – as Lamborghini has done with its upward-swinging doors.
  • A Global Brand is one that represents a similar product or service no matter where it is sold. We see this more commonly now that both the internet and global consumption of products and services have proliferated. Some examples are: Nike, Adidas, Mastercard, Facebook, Google, Apple, Coca-Cola, Pepsi, Mercedes, VISA, Gap, Sony, etc.
    • The practice of global branding is somewhat new, and brings both advantages and challenges. Obviously it is only attractive to those that market in a global way, but that does not mean that only huge multi-national corporations should think in terms of global branding. If you offer a service over the internet, you are immediately (if only potentially) exposed to a global market. Even this blog is currently being read by 20,000 people in over 100 countries. (Thank you, all my readers, by the way! Your interest and comments are what sustain my writing…)
    • Some advantages of global branding are:
      • Economy of scale (lower marketing, production and distribution costs)
      • Worldwide consistency of brand images
      • Increased exposure to media (international press as well as domestic)
      • Attractiveness to international travellers, both business and pleasure – as people show a preference for buying what they know as opposed to unknowns.
      • Potential of leveraging current domestic market share into international markets, even if your product is relatively new or unknown in those global markets.
    • Some of the challenges are:
      • All wording (company slogan, name, product names, description of services, etc.) must be thoroughly reviewed and translated for each global market segment. This must be revisited frequently, as language, custom and mores change quickly today. A seemingly innocuous tagline from two years ago could have an entirely different association in a region where recent political instability may have changed the landscape of expression.
      • The infamous (and quite possibly apocryphal) story of the Chevy Nova – exported to Mexico without a thorough vetting of the model name – should be remembered: “no va” means “doesn’t go” in Spanish – probably not the best name for a car…
        • the Audi “E-Tron”… étron means “excrement” in French
        • Hulu [tv network] translates to “butt” in Indonesian
        • SyFy [tv network] means “syphilitics” in Polish
        • Gerber [baby food] translated to French means “vomit”
        • WaterPik [electric toothbrush] means [roughly] “morning wood” in Danish… (I’m trying to be somewhat PC here…)
        • Mensa [group of supposedly really smart people] translates to “stupid woman” {Spanish slang}
      • Different cultures communicate differently, so marketing material, focus and visual tone may have to differ from area to area
      • Different locales place varying levels of importance on products and services, so a differentiating factor in the USA may not be appreciated in Nigeria.
      • Regulatory issues, local legislation (most important with medicines, foodstuffs and products that carry liability issues [cars, boats, planes, structural elements, etc.]) must be considered carefully. All of these issues tend to counteract savings that may otherwise result from scale.
      • Consumption patterns can vary widely for both products and services.
  • A Brand Name is arguably the most important feature or aspect of an overall brand. Often this is the first element of a brand that is trademarked, servicemarked, etc. Brand names come in a wide variety of styles; some of the common ones are:
    • Acronym Adaptation:  IBM, UPS, NBC, CBS, etc.
    • Descriptions:  Whole Foods, Best Buy, New Balance, etc.
    • Alliterations and rhymes:  Bed, Bath & Beyond, Coca-Cola, Spic and Span, Krispy Kreme (alliterations) [actually Krispy Kreme is also an oxymoron, and Spic and Span is also reduplication]; Reese’s Pieces, YouTube, Lean Cuisine, Mello Yello (rhymes)
    • Evocative imagery:  Amazon, Crest, BlueSky, RedBull
    • Neologisms: (made up words)   Kodak, Wii, Accenture, Brangelina, webinar, Frisbee, Xerox, etc.
    • Foreign words:  Volvo (at least here the marketers got it right – it’s Latin for “I roll”), Samsung (Korean for “Three Stars”), Häagen-Dazs (sounds Scandinavian, but the ice cream was invented by Polish-Jewish immigrants in the Bronx…) [BTW it’s now owned by Pillsbury]
    • Combination:  Walkman
    • Tautology:  Crown Royal
    • Theronym:  Mustang  [a theronym is a name derived from an animal name, not Charlize Theron…]
    • Mimetics:  Google  [mimetics is the practice of mimicry, in this case to stare ‘goggle-eyed’ at something to better understand it]
    • Eponym:  Trump Tower
    • Synecdoche:  Staples
    • Metonymy:  Starbucks
    • Allusion:  London Fog
    • Haplology:  Land O’Lakes
    • Clipping:  FedEx
    • Morphological borrowing:  Nikon  [morphology of language gives us that the Japanese word Naikan, which is pronounced Nikon… – and the meaning of Naikan is a spiritual state of gratitude, even for small things – such as when you push a shutter button you get a great picture…]
    • Omission:  RAZR
    • Founder’s Names:  Porsche, Ferrari, Hewlett-Packard
    • Geography:  Cisco, Fuji Film
    • Personification:  Nike, Betty Crocker [no such woman, William Crocker was an advertising executive at Washburn/Crosby who thought this up, using the first name Betty because it ‘was a cheery, All-American name’.]
  • The concept of a brandnomer is highly desired, where Top-of-Mind association leads people to refer to a general class of products by a brand name. Examples are Band-Aid for an adhesive bandage, Kleenex as facial tissue, SkilSaw for a rotary hand-held electric saw, etc.
  • The concept of brand identity, particularly visual brand identity, has become paramount in the ecosystem of marketing, branding and intellectual property ownership. Many corporations now issue very detailed manuals on the correct usage of their visual brands, down to precise measurements of placement on printed or screen material, etc. The courts are continually littered with the ongoing process of various firms either suing each other over alleged violations of branding, or attempting to establish ownership over some aspect of a visual identity for a new or existing brand.
  • One of the original reasons put forth by early businesses (and this belief carries into current times) is that a brand implies a certain trust or perception of quality by the consumer. This gets to the core of what will be discussed further in this post, but advertisers, marketers and even top-level executives of the firms that own major brands view this as vitally important to their bottom line and ongoing customer allegiance. This concept of brand trust is part of what is often called “goodwill” when valuing a firm at a time of sale or stock appraisal. Some companies have been valued far higher than their actual assets or current sales warrant, based strictly on a collective belief in the value of that firm’s “goodwill”, which often includes brand value, brand trust and brand identity.
  • The role that brands play in commerce, and in cultures at large, has changed considerably since the late 1800s, when branding of products exploded as a practice. Initially, as discussed above, brands were used to help differentiate one similar product from another, with the hope of persuading the consumer that, A) there was in fact a difference at all [which was/is often just not true], and B) once trust was established for a brand (based on one product), that same firm could trade on that trust and extend whatever consumer belief there was in the original product to a new and different type of product – which may or may not be of similar quality or value. For example {and please note, this is not an accusation or assumption of lack of value, it’s merely an example}, the fact that Michelin became known for high-quality motorcar tires was no guarantee that in a totally unrelated field (restaurant guides) they would provide equal value. (It turns out they did, and they have an excellent reputation for this: a Michelin “star” is a highly sought-after mark of prestige for a restaurant anywhere in the world.)
  • Brands today have become synonymous with the promise of a certain performance, reliability, quality, “cool-ness”, etc., not only for the advertised product, but for the company (or organization, country, etc.) behind the product or service. Brands have inexorably become intertwined with politics, economics and social issues. The use of icons, visual identities and short taglines – all the elements of a successful branding campaign – has allowed ‘branding’ to communicate complex feelings quickly. Brands have often become a shorthand for entire soliloquies on a particular subject. For instance, the term “McMansion”, as used by the real-estate industry (originally in Los Angeles), borrows from the McDonald’s chain – generic food, often in “Super Sizes” – to refer to a generic, over-sized house that is usually stuffed onto a lot proportionally too small for a home of that size. This somewhat pejorative derivation of a well-known brand in one sector has now been carried over to a completely different sector, and is often used in social commentary.
  • Modern branding is now a complex exercise that combines virtually all the senses: psychology, linguistics, cultural analysis, BigData, focus-group testing, etc. We now have new buzz-words even in the esoteric world of branding (which, as you have seen already in this article, delves further into the arcane sciences of words, glyphs and meaning than one ever thought possible). Such concepts as attitude branding [where the brand no longer represents a single product or service, but the entire ‘feeling’ behind the type of person that would consume such a product or service] and iconic branding [where the goal is for the consumer of such brands to self-identify with the brand, to the point of using the brand to express personal identity and the preferred mode of self-expression] are now pervasive. For example, many consumers of Apple products (computers/phones) or Harley-Davidson (motorcycles) are often unreasonably attached to those brands, and view themselves as a particular type of person just because they use those products.

    • The consumer/user behavior around iconic brands is interesting, and worthy of a bit of additional analysis. People who use, identify with and consume iconic brands are the most loyal, and they exhibit two tendencies that make this group exceptionally valuable to the brand owner:  1) a very low ‘churn’ factor [they don’t switch brands, even in the face of objective criticism, without tremendous reason]; and 2) they actively proselytize the product/service without any inducement from the brand owner.
    • (Did you ever try to get a die-hard Mac user to switch back to a PC? Have you approached a guy in leathers on a Harley and suggested that he would be happier on a Suzuki??)
    • Several of the factors that help make a brand ‘iconic’ are:
      • It’s actually got to be a good product/service – the general reputation must uphold this iconic status. It should have a reputation of high quality, with a bit of an esteem factor.
      • There is a story/myth associated with the product/service. Again, like actual quality, the story has to be believable (I didn’t say real…) and cohesive with the product/service. For example, the stories/myths/perceptions of Steve Jobs filled this requirement for Apple.
      • The brand that wishes to be iconic must provide a solution for pent-up desires (doesn’t actually have to provide these, just appear that it can). Most people are less than totally fulfilled in some area of their lives. If a brand can offer a product or service that helps a person feel like they are overcoming one of those frustrations, they will be incredibly supportive and loyal. (Don’t you just feel more cool when you are typing on a Mac Air as opposed to a desktop PC???)
      • The iconic brand must be continually managed to keep its position in the constant change that inevitably surrounds all modern products/services. (Hmmm… didn’t we just get an iPhone5…)
  • The last area of brand analysis we will touch on here is brand extension and brand dilution. I have lumped them together, since the inappropriate use of the first inevitably results in the second… Once a brand has been established in one area/product, the brand owner, in search of more, often desires to repeat the success of the brand in other areas. The hope/assumption is that if Hugo Boss makes well-liked men’s clothes, that same cachet can be extended to fragrance, sunglasses, etc. I use this as an example (not picking on dear Hugo, just making an example of the fashion industry, where it seems that every designer now can’t just make clothes but must equip us from shoes to hats and everything in between…) because here, more often than most, we see attempts at brand extension actually result in brand dilution. None of the current clothing designers actually make sunglasses. Not one. They are all made in China (or if not, the parts are, and they are then assembled in a more ‘respected’ country for labelling purposes). And from an optical standpoint, they are about as differentiated from one another as one pineapple is from another. This is not to say that brand extension doesn’t work – just that the brand owner should treat a new venture as just that, and almost resist ‘carrying over’ the hard-won success of a current brand to a new segment. There are certainly many success stories (the example of Michelin that I used earlier is one that comes to mind; another [oddly enough, another tire maker] is the iconic Pirelli calendar, with some of the most prestigious fashion models and photographers vying each year to model/shoot for this event).

Observations on the Psychology of Branding

There is an interesting novel written by William Gibson, “Pattern Recognition”, [which I highly recommend, not only for the actual subject and story, but Gibson is a master storyteller, and just the act of digesting words so well laid down on the page is worth one’s time], which I bring to your attention not for the main story (go read it for that answer) but for part of the subtext: the protagonist of the story, Cayce Pollard, is “brand-phobic”. What’s fascinating is the level to which she attempts to be ‘un-branded’ – and just how obscenely difficult that is in modern times.

Here’s a challenge. Just spend a few minutes looking around right now in your immediate surroundings, and see if anything, anything at all, doesn’t have a brand mark on it somewhere – usually in such a place that it cannot be easily removed or covered. I’ll play guinea pig for a minute right now:  my keyboard is Kensington, as is the trackball. The graphics tablet is Wacom, the computer is Dell, the monitor is Eizo – all of which have logos and names baked into the surface. No chance of ‘brand X’ here… If we move on to clothes, car, backpack, luggage, etc. etc. – well, you get the picture. We live today in a completely branded environment. It is truly impossible to hide from branding. Part of the reason for this is that ‘brand marks’ have now been extended not just to names and logos, but to actual colors, shapes, and even the “look and feel” of software. In fact, the motivating factors that propelled me to write this treatise were the recent decisions of patent courts to award Louboutin the sole right (okay, really no pun intended, it just came out of my fingers that way – I write these blogs ‘live’, i.e. directly online, with very little editing – just a quick spell/grammar check and push the button – that’s what a blog is for me) to use the color red on the bottom of his shoes. The only exception granted to Yves St. Laurent (the challenger) was if the shoe is all red. So YSL gets to keep red soles on their red shoes; otherwise – if you see those flashy red contrasting soles on 6″ heels, you know it’s a set of pricey Loubs… The other two recent decisions that factored into my motivation were Lululemon (fashion again, against Calvin Klein – for yoga pants design) and Apple (the infamous case with Samsung, which stung Samsung to the iTune of roughly $1.05B).

All three of these cases had a couple of rather new features in the ‘win’:  the ‘brand mark’ was intrinsic to the actual design – a watershed statement by the courts, with many ramifications – and the ‘wins’ all went to the defenders (i.e. the designers that first came up with the designs). What this can be construed to mean is that new challengers to a market segment now have an even harder time ahead when desiring to unseat an established rival:  your design had better not be anything close at all to what’s out there, or you will be spending time and considerable cash in court instead of on a marketing campaign.

But all this is just the surface, and not really the most important aspect of our current ‘branded’ reality. The more insidious aspect is how these companies fight for, and win, our actual ‘mindshare’. We have now become so embedded in the constant barrage of branding that we have sublimated it – exactly where the brand owners want it! The last thing any brand owner wants is for a consumer to start thinking. Because then we might actually ask ourselves: is a Chevy truck really better than a Ford? Does it do more? At the end of the day, does any basic truck allow me to put a few hundred pounds of stuff in the back from the local hardware store and bring it home? How many of those tricked-out gas monsters jacked up on 8 shocks and balloon tires (for the difficult-to-navigate off-road experience of Sunset Blvd.) that can – according to the tv ads – actually pull a jet airplane away from the gate really carry more than beer and groceries and an occasional box of bits from the DIY store? The most useful aspect of these high-ground-clearance Prius-eaters I have seen is the contortions – and resultant fashion shows – that result from the girlfriends trying to get in and out of a vehicle that is 4 ft off the ground…

But that’s all somewhat obvious surface commentary. The important, somewhat darker bits are the subliminal messaging and actual thought patterns that become embedded in our brains. We no longer just put on a pair of jeans. It’s Levis or Sevens or TrueReligion or Calvins or… When you meet a well-dressed woman at a party and ask her what she’s wearing, the automatic answer is “Oh, I’m in Vera/Burberry/Donna/Michael/whomever tonight.” I guess she assumes we already know she’s wearing a dress… We no longer think objectively – we don’t put on jeans or a shirt or a pair of shoes, we put on our Diesels with a Michael Kors and a pair of Cole Haans. We write with a Mont Blanc or an iPad or a Galaxy. We drive a Merc or a Beemer or a Lambo (or, to be egalitarian, a Mini, Leaf or Prius). We eat not just a tomato, but a local, certified-organic Kenter Farms Pineapple heirloom. We spend time, money, status and nervous energy selecting the ‘best’ wine at a restaurant – when the vast bulk of us can’t tell the difference between a sauvignon blanc and a chardonnay in a blind taste test. Here is something that has been tested many, many times:  take five mid-range lager beers. Pour them into identical glasses, let them sit for one minute (some say the initial head can be a ‘tell’), then give them to a group of die-hard beer drinkers who have strong opinions on Bud/Miller/Amstel/etc. Uh-huh… how many get that one right… (Now, this is from a dedicated personal set of tests with some of the brightest engineers and scientists currently working in broadcast and post-production engineering and standards bodies – I mean, these people are objective, right???) How about an average of 10%? With five beers, random guessing alone would score about 20%, so that result is actually below chance. What our minds tell us is far more potent than reality. In fact (and this is a discussion for another day), our minds actually make our reality in each moment.
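If you want to sanity-check that arithmetic, here is a tiny Python simulation (illustrative only – the trial count and setup are simplifications, not my original tasting protocol). Random guessing among five unlabeled beers converges on a 20% hit rate, which is what makes the observed ~10% genuinely worse than chance:

```python
import random

def blind_test(n_trials=100_000, n_beers=5):
    """Each trial: a taster picks one of n_beers unlabeled glasses at
    random; glass 0 holds their preferred brand."""
    hits = sum(random.randrange(n_beers) == 0 for _ in range(n_trials))
    return hits / n_trials

print(f"random-guess hit rate: {blind_test():.1%}")  # ~20.0%, vs ~10% observed
```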

None of the above should be construed to mean that I am against all branding, or that I don’t want companies to be successful in their marketing and sales efforts. What I am asking for is some semblance of objectivity to return to what I see as an imbalanced system. We are so focused on the ‘brand’ that we have lost sight of the product or service. Do we actually examine the stitching on a Kate Spade bag to see if it’s even? Do we compare the fit of the doors to the surrounding body on a Mercedes vs a BMW vs an Audi? Can we tell if asparagus organically raised by monks in Mendocino tastes better than what’s at Safeway? I’m not saying one or the other – but do we look? Do we see? Do we taste? Do we discern and formulate our own opinions?

Imagine this scenario:  a woman goes into a shop to buy jeans. There are no brand names. The pocket designs, attractive as they may be, are unknown to her in terms of an identifiable brand. How will she choose? She would actually have to look at the quality of construction, try them on, feel the denim, see if the legs work with her calves, her thighs, her shoes. All of this can be done, but the biggest issue is one that can’t be solved with examination, fit or feel: what will her friends think? How will she know if she is wearing ‘cool’ or ‘yesterday’? What if it didn’t matter…

We are so brand-focused today that we let the brands think for us:  we assume that if it’s a BMW, it’s a good car. We assume that if we pay $50 for a bottle of wine, it must be good. (Don’t get me started: the absolute worst offenders on the planet, in terms of branding, brand extension, etc. are the wine farms and distributors. I love wine and respect the incredible effort and experience it takes to make good wine – but the marketing and distribution of this substance makes Barnum & Bailey look like saints…) We have collectively abdicated our reasoning, observations and critical thinking to the marketing departments of those who make products and services. We need to reclaim some of our own decision-making power.

So far, most of this article has focused on commercial products and services. However, the most important aspect of branding, in my opinion, is when these same techniques are applied to other areas – ones that have the capability to impact far more than our choice of a computer, phone or car: things like politics, religion, intelligence, health, sexual proclivity and so on. I would now like the reader to go back to the section above on iconic branding – but this time re-read it from the point of view of a particular religion as an ‘iconic brand’. Do any of the points raised ring a bell?

  • An iconic brand user won’t switch brands, even when faced with objective evidence that should spawn reconsideration.
  • An iconic brand user will often proselytize the brand, even without inducement of the brand owner.
  • At some point, the iconic brand had to offer ‘good’ and have some esteem amongst a population.
  • There must be a story or myth associated with an iconic brand, and it must be believable to at least some degree.
  • The iconic brand must offer the hope of fulfillment of currently unsatisfied desires, which use/consumption of the brand will provide.
  • The iconic brand must be continually managed to keep it alive as change occurs.

Interesting… and very, very, very profitable for the brand owners. Again, I am using this for analysis, and asking ultimately for each human to take command of his or her own thoughts – to be internally responsible for choices of belief, not a puppet in the hands of any particular religion, software, car, culture, shoes or lingerie. I am not taking any particular religion to task (I personally do not see much use for organized religion, which in my view has very little to do with spirituality, but that is just my own position and I am not arguing it here), but am pointing out that the vast cadre of ‘brand managers’ – aka priests, rabbis, pastors, cardinals, sangomas, shamans, etc. – do their jobs well, promoting and adapting the ‘iconic brand’ so that it continues to be seen as ‘necessary’ (for ‘saving your soul’, being better than the other tribe, being more likely to get more [fill in the blank] in the next life/heaven/etc. – very convenient that the delayed gratification must wait until you are dead, when it’s a bit harder to come back to customer service with a complaint about false advertising…)

None of this would be so much of an issue if it merely affected an individual – after all, free choice is supposedly what makes us human, right? But blind belief in, and adherence to, some ‘iconic brands’ can be dangerous. When we are talking Manolos vs Louboutins, the worst that can happen is a catty comment from Joan – when rabid blind belief in certain deities lays waste to millions of lives, that is rather another thing entirely. Now, just to be accurate here:  many, many of the atrocities carried out in the past and present have wrongly claimed the mantle of religion or other affiliation to attempt to justify plainly criminal or abhorrent sociopathic behavior. It would actually be very good ‘brand management’ if the current brand owners would police this aspect much more rigidly, and disallow the perverted use of supposedly benign deities by those that only aim to disrupt civilization with mayhem and murder.

Brand Trust for the Big Issues

As discussed earlier, one of the major underlying reasons for branding is to establish a sense of trust in the consumer/user of the brand. At the commercial level, firms like Apple, Hermes, Volkswagen, etc. all desire that the consumer will trust their products as being of quality, and that they can expect a continued level of similar form and function from the product in the future. This brand loyalty is incredibly important to the brand owner.

Now, carry this over to branded entities such as political parties, religions, nation-states, cults, social organizations, etc. – and we see that the same issues apply. Whether one expresses brand allegiance to the Democrats or Republicans, Labour or Conservative, ANC or DA – all of these groups wish to instill trust in their brand. They use most of the same advertising techniques that firms such as Ford, Calvin Klein, General Foods or Apple use to inspire loyalty, establish and preserve identity, etc. They all have unique brand names, logos, catchphrases, etc. Some logos of political elements have become so identified with a particular movement that they are ‘super-iconic’ – such as the swastika. That logo is now so identified with the Nazi movement and the philosophy of a certain group that it can never again be separated from that meaning. This is the true power of branding – a single graphic element can say so very much. The ‘tagging’ of a synagogue wall with a spray-painted swastika says volumes…

Just as has been posited for brands of cars, clothing or computers, the giving of trust to a brand should be examined, tested and questioned on an ongoing basis. There is nothing at all wrong or illogical about deciding that one prefers Calvin Klein jeans to Diesel; but once trust is given, the tendency is to submit to inertia and go back to the same well. We often stay with a current brand long past the time when a new analysis should have been performed and another decision taken. Inertia, brand trust (and existing contracts) have kept Blackberry alive far longer than an objective analysis of their performance would have mandated. Oftentimes people will just drift away from a high level of trust in a brand, but not ‘re-trust’ a competitive brand:  we may find many ‘lapsed’ Catholics – but rather few that switch to either agnosticism or Islam. We are nearing election time here in the US, and a concomitant amount of rabid brand awareness has taken over our airwaves, newspapers and conversations. Wait a couple of years, and the amount of brand allegiance will be much lower, as once again the actuality of political promise fades in the face of reality, coercion, corruption and apathy.

Social organizations that promote one viewpoint or another (whether for/against reproductive rights, gay/lesbian issues, global warming/cooling, etc.) also use the same techniques to gather and keep followers. If one reviews the above list of brand-naming techniques (acronyms are big here: PETA, NOW, LGBT, etc.), global branding, and so on, it can be seen that most social groups have learned quickly from their commercial counterparts. With a little insight, we can see that branding and marketing have become absolutely pervasive in our cultures. And this is world-wide, cuts across all socio-economic groups and affects virtually all groups of people:  children are marketed to with as much fervor as yuppies in search of the next new car.

Personal Branding

We have discussed the issue of branding as it applies to groups – whether these be companies that manufacture goods, provide services, offer a belief structure, or purport to provide a better method of government – but one of the remaining issues is how we brand ourselves. This has two distinct connotations: actual physical branding (typically with tattoos or piercings/embedded jewelry), and psychological branding. Here I am not discussing alignment with external brands, which is what we have reviewed above, but something different.

In terms of personal physical branding: while it is true that a number of people will tattoo themselves to state alignment with an external group (gangs, religion, etc.), that is not the focus of this point. This is an individual choice (assuming the person was afforded a choice; as mentioned earlier in this article, that has not always been the case) and one must live with that choice. A tattoo does make brand-switching somewhat more of an issue than changing which shoes you wear…  Tattoos are often an expression of rebellion, individual control, etc. – they are not ‘mainstream’, at least in western cultures, and have a high degree of individualism. Many are beautiful and are works of art in their own right. The issue here is not the practice of tattooing or piercing, but rather the identification or ‘self-branding’ aspect of that choice. These are relatively permanent decisions, and therefore represent the expression of an internal psychological branding that is not transitory. (Well, as always, there are exceptions:  the actions of an inebriated sailor on leave, inking his current girlfriend’s name on his shoulder, may be reviewed later as a less than stellar decision…)

In one way or another, tattoos express a brand alignment that is strong. However, in this case, there is a strength to this choice that all of us could take away and use as a model for other brand decisions. The person that chooses to ink a motif, logo, design, etc. has a strong alignment with whatever that represents to him or her. And (as said, we are not discussing brand marks here that express alignment with well-known external brands) these ‘brands’ are individual. They represent what this person feels, and feels strongly enough to share (with either the world or someone close to them, depending on location of the tattoo…) potentially for the rest of their lives. Not many of us are courageous enough, feel strongly enough about anything, or are committed enough to make that kind of decision.

Now, let’s move on to what I will refer to as ‘the invisible tattoo’ – personal psychological branding that is as permanent, courageous and committed as external ink. This is the rarest form of branding. It is sustained only by strong personal will, continuous and committed choice, and at some level a degree of self-observation / self-honesty. Again, I am not discussing alignment here with any external brand – this is not being a Democrat, wearing Vera Wang or riding a Harley. This topic is referring to the brand of one – yourself. Some questions may make this point a bit more clear:

  • Are your views on (fill in the blank) consistent and strong enough to constitute a brand?
  • Is your personal brand cohesive enough to evoke a feeling, a visual description, etc. in others who interact with you or see you?
  • Does this personal brand inspire loyalty and respect in others? In other words, apply the aspects of branding I stated at the beginning of this article – brand image, experience, orientation, recognition, etc. – to yourself as a gauge, and see what answers you find.

A final observation:  people who have a strong personal ‘brand’ tend to be strong, powerful people. Writers, scientists, actors, political leaders, etc. do not arrive at those places by accident. The world is too brutal, the pressures too great, for accidental positioning to last more than a minute. No matter whether you like, agree with or support any of their actions or positions, people such as Reagan, Newman, Angelina, Einstein, Coelho, etc. have or had strong personal brands. You know/knew where they stood, what they felt, what they believed in.

People that have strong personal brands are, interestingly enough, the least subject to blind allegiance to external brands. They believe in themselves enough to take their own decisions, and whether due to arrogance or internal strength of character, will seldom ‘jump on a bandwagon’ without due consideration. This leads to the end purpose for writing this article: for each reader to take a moment to reconsider his or her own brand, to regain considered choice, and not be a lemming to the tide of advertisements and the pressure of campaigns for your attention, money and time. There is absolutely nothing wrong with choosing to wear Proenza Schouler instead of Brian Atwood – but if the choice is made each time as a personal decision based on considered parameters, rather than out of habitual following, it is a different kind of decision.

In terms of fashion/cars/electronics, it would be nice to see visual corporate branding take a lesser position in design: often now the logo/name/etc. has overtaken the actual design of the product. If we all had stronger personal brands, we would possibly not feel as great a need to align with or belong to some set of external brands. I, for one, do not like to wear what are effectively billboards for clothing or accessory manufacturers, and choose not to do so. Yes, it limits some choices, but I find there are more than enough alternatives to satisfy my need for putting on shirts, pants and shoes in the morning.

We as individual people have enormous power if we take it:  if certain branded items stop selling, the vendors will very quickly adapt, believe me. If understated became “in”, the market would respond. Ultimately the choice is yours. Take back some power, some individuality, some level of informed choice – whether that concerns a handbag, a belief, a social group or a car. You’ll be better off for it, and will gain individuality in the process.

iPhone5 – Part 1: Features, Performance… and 4G

September 16, 2012 · by parasam

[Note: This is the first of either 2 or 3 posts on the new iPhone5 – depending on how quickly accurate information becomes available on this device. This post covers what Apple has announced, along with info gleaned from other technical sources to date. Further details will have to wait until actual phones have shipped, been torn down by specialists, and real benchmarks run against the new hardware and iOS6.]

Introduction

Unless you’ve been living under a very large rock, you couldn’t have helped hearing that Apple has introduced the next version of its iPhone. This article will look at what this device actually purports to offer the user, along with some of my comments and observations. All of these comments are based on current press releases and ‘paper’ information:  the actual hardware won’t be released until Sept. 21, and due to high demand, it may take me a bit longer to get one in hand for personal testing. I’ll go into details below, but I don’t intend to upgrade from my 4S at this time. I do have a good relationship with my local Apple business retailer, and my rep will be setting aside one of the new phones for me to come in and play with for a few hours as soon as she has one that is not immediately promised. Currently we are looking at about the first week in October – so look for another post then. As of the date of writing (15 Sep), Apple has said their initial online allocation has sold out, so I expect demand to be high for the first few weeks.

Front and Back of iPhone5

The basic specifications and comparisons to previous models are shown below:

Physical Comparison (values listed as: Apple iPhone 4 | Apple iPhone 4S | Apple iPhone 5 | Samsung Galaxy S 3)

  • Height: 115.2 mm (4.5″) | 115.2 mm (4.5″) | 123.8 mm (4.87″) | 136.6 mm (5.38″)
  • Width: 58.6 mm (2.31″) | 58.6 mm (2.31″) | 58.6 mm (2.31″) | 70.6 mm (2.78″)
  • Depth: 9.3 mm (0.37″) | 9.3 mm (0.37″) | 7.6 mm (0.30″) | 8.6 mm (0.34″)
  • Weight: 137 g (4.8 oz) | 140 g (4.9 oz) | 112 g (3.95 oz) | 133 g (4.7 oz)
  • CPU: Apple A4 @ ~800 MHz Cortex A8 | Apple A5 @ ~800 MHz dual-core Cortex A9 | Apple A6 (dual-core Cortex A15?) | 1.5 GHz MSM8960 dual-core Krait
  • GPU: PowerVR SGX 535 | PowerVR SGX 543MP2 | ? | Adreno 225
  • RAM: 512 MB LPDDR1-400 | 512 MB LPDDR2-800 | 1 GB LPDDR2 | 2 GB LPDDR2
  • NAND: 16 GB or 32 GB integrated | 16, 32 or 64 GB integrated | 16, 32 or 64 GB integrated | 16 GB or 32 GB NAND with up to 64 GB microSDXC
  • Camera: 5 MP with LED flash + front-facing camera | 8 MP with LED flash + front-facing camera | 8 MP with LED flash + 720p front-facing camera | 8 MP with LED flash + 1.9 MP front-facing camera
  • Screen: 3.5″ 640×960 LED-backlit LCD | 3.5″ 640×960 LED-backlit LCD | 4″ 1136×640 LED-backlit LCD | 4.8″ 1280×720 HD Super AMOLED
  • Battery: integrated 5.254 Whr | integrated 5.291 Whr | integrated ?? Whr | removable 7.98 Whr
  • Wi-Fi/Bluetooth: 802.11 b/g/n, BT 2.1 | 802.11 b/g/n, BT 4.0 | 802.11 a/b/g/n, BT 4.0 | 802.11 a/b/g/n, BT 4.0

As can be seen from the above chart, the iPhone5 is an improvement over the 4S in several areas, but in pure technological features it still trails some of the latest Android devices. We’ll now go through some of the details, and what they may actually mean for a user.

Case

The biggest external change is the shape and size of the iPhone5: due to the larger screen (a true 16:9 aspect ratio for the first time), the phone is longer while maintaining the same width. It is also slightly thinner. The construction of the case is a bit different as well: the iPhone4S used glass panels for the full front and rear; the iPhone5 replaces the rear panel with a solid aluminum panel, except for the very top and bottom of the rear shell, which remain glass. This is required for the Wi-Fi, Bluetooth and GPS antennas to receive radio signals (metal blocks reception).

There are two major changes in the case design, both of which will have significant impacts to usage and accessories: the headphone/microphone jack has been moved to the bottom of the case, and the docking connector has been completely redesigned: this is now a new proprietary “Lightning” connector that is much smaller. Both of these changes have instantly rendered obsolete all 3rd-party devices that use the docking connector to plug the iPhone into external accessories such as charging bases, car charging cords, clock-radios and HiFi units, etc. While Apple is offering an adaptor cable in several forms, there are serious drawbacks for many uses.

The basic Lightning-to-USB adaptor cable ($19) is provided as part of the iPhone5 package [along with the small charger]. If you have other desktop power supplies or chargers, or are fortunate enough to have a car charger that accepts a USB cable (as opposed to a built-in docking connector, as most have), you can spend the extra cash on additional cables and still use those devices with the new iPhone5.

Lightning to USB adaptor cable (1m)

For connecting the new iPhone5 to current 30-pin docking connector devices, Apple offers two solutions: a short cable (0.2m – 8″) [$39] or a stub connector [$29]:

Lightning to 30-pin cable (0.2m)

Lightning to 30-pin stub connector

The Lightning-to-USB adaptor is growing scarce already:  in the last 48 hours the shipping dates have slipped from 1-2 days to 3 weeks or more. Neither of the Lightning-to-30-pin adaptors has a ship date yet; a rather nebulous statement of “October” is all that appears on the Apple store. So early adopters of the iPhone5 should expect a substantial delay before they can make use of any current aftermarket devices that use the docking connector. Another issue:  the cost of the adaptors. As part of Apple’s incredible branding of the closed Apple/Mac/iDevice universe, users have been conditioned to paying a hefty premium for basic utility devices compared to devices that perform the same function for other brands such as Android phones. For example, the same phone-to-USB cable (1m) that Apple sells for $19 is available for the latest-model Samsung Galaxy S3 for between $6 and $9 at a number of online retailers. It’s very easy to end up spending $100 or more on iPhone accessories just for a case and a few adaptors.

Now let’s get to the real issue of this new Lightning adaptor – even assuming that one can eventually purchase the necessary adaptors shown above. Basically there are two classes of devices that use the docking connector: those that connect via a flexible cable (chargers and similar devices), and those that mechanically support the iPhone with the docking connector, such as clock/radios, HiFi units, audio and other adaptors, and phone holders for cars, just to name a few. The old-style 30-pin connector, together with its mechanical design, was wide enough to actually support the iPhone with a minimum of external ‘cradle’ and without putting undue stress on the connector. The Apple desktop docking adaptor is such an example:

30-pin docking adaptor

The new Lightning connector is so small that it offers no mechanical stability. Any device that will hold the iPhone will need a new design, not only to add sufficient mechanical support to avoid bending or disconnecting the new docking adaptor, but to accommodate the thinner case as well. Here is a small sample of devices that will be affected by this design change:

As can be seen, this connector change has a profound and wide-reaching effect. Users that have a substantial investment in aftermarket devices will need to carefully consider any decision to upgrade to the iPhone5. Virtually all of the above devices will simply not work with the new phone, even if the ‘stub adaptor’ were employed. While a large number of 3rd-party providers of iPhone accessories will be happy (they can resell the same product again each time a design change occurs), the end user may be less enchanted. Even simple things such as protective cases cannot be ‘recycled’ for use on the new phone. I’ll give one personal example: I have an external camera lens adaptor set, the iPro by Schneider. This set of lenses will not work at all with the iPhone5. Not only is the case different (which is critical for mounting the lenses to the phone in precise alignment with the internal iPhone camera), but the current evidence is that Apple has changed the optics slightly on the iPhone5, such that an optical redesign of accessory lenses would be required. A very careful and methodical analysis of the side-effects of a potential upgrade should be performed if you own any significant devices that use the docking connector.

The other design change is the movement of the headphone jack to the bottom left of the case. While this does not in and of itself present the same challenges that the docking connector poses, it does have ramifications that may not be immediately apparent. For a user who is just carrying the iPhone as a music playback device (iPod-ish use), the headphone cable connecting at the bottom is a superior design choice – but it once again poses a challenge for any device where the iPhone is physically ‘docked’. The headphone cable is no longer accessible! For instance, with the original iPhone dock, I could be on the phone (using a headphone/microphone cable assembly) and walk to my desk and drop the iPhone in the docking station/charger and keep talking while my depleted battery was refueled… no longer… the cable from the bottom won’t allow the phone to be inserted into the docking station…

The bottom line is that Apple has drawn an absolute line in the sand with the iPhone5: the user is forced to start completely over with all accessories, from the trivial to the expensive. While it is likely that some of the aftermarket devices can be, and will be, eventually adapted to the new case design, there will be a cost in terms of both money and time delay. Depending on the complexity (plastic cases for the iPhone5 will show up in a few months, while high-end home HiFi units that accept an iPhone may take 6 months to a year to arrive) there will be a significant delay before the iPhone5 can be used in as ubiquitous a manner as all previous iPhones (which shared the same docking and case design).

The last issue to raise in regards to the change in case design is simply the size of the new phone. It’s longer. We’ve already discussed that this will require new cases, shells, etc. – but this will also affect many ‘fashion-oriented’ aftermarket handbags, belt-cases, messenger bags, etc. With the iPhone being the darling of the artistic, entertainment and fashion groups, many stylish (and expensive) accoutrements have been created that specifically fit the iPhone 3/4 case size. Those, too, will have to adapt.

Screen

The driving factor behind the new case size is the increase in screen resolution from 960×640 (1:1.50 aspect ratio) to 1136×640 (1:1.77 aspect ratio). The new size matches the current HD display aspect ratio of 16:9 (1.77), so movies viewed on the iPhone will correctly fit the screen (a quick sketch of the aspect-ratio arithmetic follows the list below). With the iPhone4S, which had full technical capability to both shoot and output 1920×1080 (FHD or Full HD) video, HD movies were either cut off on the left and right side, or letterboxed (black bars at top and bottom of the picture) when displayed. Many Android devices have had full 16:9 display capabilities for a year or more now. Very few technical details have been released so far by Apple on the actual screen; here is what I have been able to glean to date:

  • The touch-screen interface has changed from “on-cell” to “in-cell” technology. Without getting overly geeky, this means that the actual touch-sensitive surface is now built-in to the LCD surface itself, instead of being a separate layer glued on top of the LCD display. This has three advantages:
    • Thinner display
    • Simplifies manufacture, as there is one less assembly step (aligning and gluing the touch layer)
    • Slightly brighter and more saturated visible display, due to not having a separate layer on top of the actual LCD layer.
  • The color gamut for virtually all cellphone and computer displays is currently the sRGB standard (which itself is a low-gamut color space – in the future we will see much improved color spaces, but for now that is the best thing that can economically be manufactured, particularly for mobile devices). None of the current devices fully reproduce the full sRGB gamut, even as limited as it is. But this improvement gets the iPhone that much closer. One of the tests I intend to run when I get my test drive of the iPhone5 is a gamut check with a precision optical color gamut tester.
  • No firm data is available yet, but anecdotal reports, coupled with known ‘side-effects’ of “in-cell” technology, promise a slightly more efficient display, in terms of battery life. Since the LCD display is one of the largest consumers of battery power, this is significant.
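
For those who like to check the arithmetic, here is a minimal Python sketch (using only the resolution numbers quoted above) showing why the new screen finally matches 16:9 video:

    # Aspect-ratio check: iPhone4/4S screen vs iPhone5 screen
    old_w, old_h = 960, 640    # iPhone4/4S
    new_w, new_h = 1136, 640   # iPhone5

    print(old_w / old_h)   # 1.5   -> 3:2, so 16:9 video letterboxes
    print(new_w / new_h)   # 1.775 -> essentially 16:9
    print(16 / 9)          # 1.7777...

Nothing profound – but it does show that 1136×640 is about as close to true 16:9 as one can get while keeping the 640-pixel dimension of the older screens.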

Camera(s)

The rear-facing camera (the high-resolution camera used for still and video photography) is essentially unchanged. However… there are potentially three small but significant updates that will likely affect serious iPhonographers:

  1. Though no firm details have been released by Apple yet, when images were taken at the press conference and compared to images taken with an iPhone4S of the same subject from the same position, the iPhone5 images appear to have a slightly larger field of view. This, if accurate, would indicate that the focal length of the lens has changed slightly. The iPhone4S has an actual focal length of 4.28mm (equivalent to a 32mm lens on a 35mm camera); this may indicate a reduction of focal length to 3.75mm (a 28mm equivalent focal length) – see the arithmetic sketch following this list. There are several strong reasons that support this theory:
    1. The iPhone5 is thinner, and everything else has to accommodate this. A shorter focal length lens allows the camera lens/sensor assembly to be thinner.
    2. Many users have expressed a desire for a slightly wider angle of view, in fact the most popular aftermarket adaptor lenses for the iPhone are wide angle format.
    3. The slightly wider field of view simplifies the new panoramic ‘stitch’ capability of the camera hardware/software.
  2. Apple claims the camera is “25% smaller”. We have no idea what that really means, but IF this in fact results in a smaller sensor surface then the individual pixels will be smaller. The same number of pixels are used (it is still an 8MP sensor), but smaller pixels mean less light-gathering capability, potentially making low light photography more difficult.
    1. Apple does claim new hardware/software to make the camera perform better in low light. What this means is not yet clear.
    2. The math and geometry of optics, sensor size and lens mechanics essentially show us that small sensors are more subject to camera movement, shaking and vibration. (The same angular movement of a full sized 35mm digital camera will cause far less blurring in the resultant image than an iPhone4S. If the sensor is even smaller in the iPhone5, this effect will be more pronounced).
  3. Apple claims a redesigned lens cover for the iPhone5. (In all iPhones, there is a clear plastic window that protects the actual lens. This is part of the exterior case). With the iPhone5, this window is now “sapphire glass” – whatever that actually is… The important issue is that any change is a change – even if this window material is harder and ‘more clear’, it will be different from the iPhone4 or iPhone4S – different materials have different transmissive characteristics. Where this may cause an effect is with external adaptor lenses designed for iPhone4/4S devices.
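
To make the focal-length arithmetic above concrete, here is a small Python sketch of the standard 35mm-equivalence calculation. Note that the implied sensor diagonal is derived from the quoted 4.28mm/32mm pair – it is my assumption, not a published Apple spec:

    import math

    FF_DIAG = math.hypot(36, 24)         # full-frame 35mm diagonal, ~43.27mm

    # Known pair for the iPhone4S: 4.28mm actual ~ 32mm equivalent
    crop_factor = 32 / 4.28              # ~7.48
    sensor_diag = FF_DIAG / crop_factor  # implied sensor diagonal, ~5.8mm (assumption)

    # IF the iPhone5 lens really is 3.75mm and the sensor diagonal is unchanged:
    print(round(3.75 * crop_factor, 1))  # ~28.0 -> a 28mm-equivalent wide angle

And of course if the “25% smaller” camera means a smaller sensor diagonal, the crop factor goes up and the equivalent focal length shifts again – which is exactly why firm specs are needed before drawing conclusions.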

The front-facing camera (FaceTime, self-portrait) has in the past been a very low resolution device of VGA quality (640×480). This produced very fuzzy images, the sensor was not very sensitive in low light, and the images did not match the display aspect ratio. The iPhone5 has increased the resolution of the front-facing sensor to 1280×720 (720P) for video, 1280×960 for still (1.2MP). While no other specs on this camera have been released, one can assume some degree of other improvements in the combined camera/lens assembly, such that overall image quality will improve.

The faster CPU and other system hardware, combined with new improvements in iOS 6.0, bring several new enhancements to iPhonography. Details are skimpy at this time, but panoramic photos, faster image-taking in general, improved speed for image processing within the phone, and better noise reduction for low-light photography are some of the new features mentioned. Experience, testing and a full tear-down of an iPhone5 are the only way we will know for sure. More to come in future posts…

CPU/SystemChips/Memory

Inside the iPhone5

As best as can be determined at this early stage, there are a number of changes inside the iPhone5. Some (very little actually!) of the information below is from Apple, many of the other observations are based on the same detective work that was used for earlier reporting on the iPhone4S:  careful reading of industry trends, tracking of orders for components of typical iPhone parts manufacturers, comments and interviews with industry experts that track Apples, Androids and other such odd things, and to a certain extent just experience. Even though Apple is a phenomenally secretive company, even they can’t make something out of nothing. There are only so many chips to choose from, and when one factors in things like power consumption, desired performance, physical size, compatibility with other parts of the phone and so on, there really aren’t that many choices. So even if some of the assumptions at this early stage are slightly in error, the overall capabilities and functionality will be the same.

Ok, yes, Apple has said there is a new CPU in the iPhone5, and it’s named the “A6”. But that doesn’t actually tell one what it is, how it’s made, or what it does. About all that Apple has said directly so far is that it’s “up to twice as fast as the A5 chip [used in iPhone4S]”, and “the A6 chip offers graphics performance that’s up to twice as fast as the A5.” That’s not a lot of detailed information… Once companies such as Anandtech and Chipworks get a few actual iPhone5 units and tear them apart we will know more. These firms are exhaustive in their analysis (and no, the phone does not work again once they take it to bits!) – they even ‘decap’ the chips and use x-ray techniques to analyze the actual chip substrate to look for vendor codes and other clues as to the makeup of each part. I will report on that once this data becomes available.

At this time, some think that the A6 chip is using 28/32nm technology (absolutely cutting edge for mobile chipsets) and packing in two ARM Cortex A15 cores to create the CPU. Others think that this may in fact be an entirely Apple ‘home-grown’ ARM dual-core chip. The GPU (Graphics Processing Unit) is likely an assembly using four of Imagination’s PowerVR SGX543 cores, doubling the number of GPU cores found in the iPhone4S. In addition to the actual advanced hardware, the final performance is almost certainly a careful amalgamation of peripheral chips and the tweaking and tuning of both firmware and kernel software. The design criteria and implementation of devices such as the iPhone5 are just about as close to the edge of what’s currently possible as science and human cleverness can get. This is one area where, for all of the downsides to the ‘closed ecosystem’ that is the World of Apple, the upside is that when a company has total control over both the hardware and the software of a device, a level of systems tuning is possible that open-source implementations such as Android simply can never match. If one is interested further in this philosophy, please see my further comments about such “complementary design techniques” in my post on iPhone accessory lenses here.

There are two types of memory in all advanced smartphones, including the iPhone5. The first is SDRAM (similar to the RAM in your computer: the very fast working memory that is directly addressed by the CPU chips); the second is NAND (similar to the hard disk in your computer: slower, but with much greater storage capacity). In smartphones, the NAND is also a solid-state device (not a spinning disk) to save weight and power, but it is still considerably slower in access time than the SDRAM. As a practical point, it would not be feasible, in terms of economics, power or size, to use SDRAM for all the memory in a smartphone. The chart at the beginning of this article shows the increase in size of the SDRAM over the various iPhone models; to date the mass storage (NAND) has been available in 3 sizes: 16, 32 and 64GB.

Radios:  Wi-Fi/Cellular/GPS/Bluetooth

Although most of us don’t think about a cellphone in this way, once you get all the peripheral bits out of the way, these devices are just highly sophisticated portable radio transceivers. Sort of CB radio handsets on steroids. There are four main categories of radios used in smartphones: Wi-Fi; cellular radios for both voice and data; GPS and Bluetooth. The design, frequencies used and other parameters are so different for each of these classes that entirely separate radios must be used for each function. In fact, as we will see shortly, even within the cellular radio group it is frequently required to have multiple radios to handle all the variations found in world-wide networks. Each separate radio adds complexity, cost, weight, power consumption and the added issue of antenna design and inter-device interference. It is truly a complicated design task to integrate all the distinct RF components in a device such as the iPhone.

Again, this initial review is lacking in hard facts ‘from the horse’s mouth’ – our particular horse (the Rocking Apple) is mute… but using similar techniques as outlined above for the CPU/GPU chips, here is what my best guess is for the innards of “radio-land” inside an iPhone5:

Wi-Fi

    • At this time there are four Wi-Fi standards in use, all of which are ‘subparts’ of the IEEE 802 wireless communications standard: 802.11a, 802.11b, 802.11g and 802.11n
    • There are a lot of subtle details, but in essence each newer standard supports a higher data transfer speed. In a perfect world (pay attention to this – Wi-Fi almost never gets even close to what is theoretically possible! Marketing hype alert… the sketch after this list puts some numbers to this) the highest speed tier, 802.11n, is capable of up to 150Mb/s.
    • Again, I am oversimplifying, but older Wi-Fi technology used a single band of radio frequencies, centered around 2.4GHz. The newest form, 802.11n, allows the use of two bands of frequencies, 2.4GHz and 5.0GHz. If the designer implements two Wi-Fi radios, it is possible to use both frequency bands simultaneously, thereby increasing the aggregate data transfer, or to better avoid interference that may be present on one of the bands. As always, adding radios adds cost, complexity, etc.
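
To put some numbers behind that “marketing hype alert”, here is a tiny Python sketch comparing the commonly cited theoretical maxima for each standard against a deliberately crude, assumed 50% real-world efficiency factor – actual throughput depends heavily on range, interference and the access point:

    # Commonly cited theoretical maxima in Mb/s (the 802.11n figure is per the
    # text above, i.e. a single-stream configuration typical of phones)
    wifi_max = {"802.11b": 11, "802.11a": 54, "802.11g": 54, "802.11n": 150}

    EFFICIENCY = 0.5  # assumed; real Wi-Fi rarely delivers even half the rated speed

    for std, rate in wifi_max.items():
        print(f"{std}: rated {rate} Mb/s, realistic ballpark ~{rate * EFFICIENCY:.0f} Mb/s")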

Cellular

This is the area that causes the most confusion, and ultimately required (in the case of the iPhone) two entirely separate versions of hardware (GSM for AT&T, CDMA for Verizon – in the US; it gets even more complicated overseas). Cellular telephone systems unfortunately were developed by different groups in different countries at different times. Adding to this were social, political, geographical, economic and engineering issues that were anything but uniform. This led to a large number of completely incompatible cellular networks over time. Even in the earliest days of analog cellphones there were multiple, incompatible networks. Once the world switched to digital carrier technology, the divergence continued… This is such a complicated subject that I have decided to write a separate blog post on it – it is really a bit off-topic (in terms of detail) for this post, and may unreasonably distract those that are not interested in such details. I’ll post that in the next week, with a link from here once complete.

For the purposes of this iPhone5 introduction, here is a very simple and brief primer so we can understand the importance – and limitations! – of what is (incorrectly) called 4G – that bit of marketing hype that has everyone so fascinated even though 93% of humanity has absolutely no idea what it really is. Such is the power of marketing…

Another warning:  telecommunications industries are totally in love with acronyms. Really arcane weird and hard to understand acronyms. If a telecomms engineer can’t wedge in at least eight of them in every sentence, he/she starts twitching and otherwise showing physical symptoms of distress and feelings of incompetence… I’m just going to list them here, in all their other-worldly glory… if you want them deciphered, wait for my blog (promised above) on the cellular system.

To add some semblance of control to the chaotic jungle of wireless networks, there are a number of standards bodies that attempt to set up some rules. Without them we would have no interoperability of cellphones from one network to another. The two main groups, in terms of this discussion, are the 3GPP (3rd Generation Partnership Project) and the ITU (International Telecommunication Union). That’s where nomenclature such as 2G, 3G, 4G comes from. And, you guessed it, “G” is for generation. (Never mind “1G” – that too will be in the upcoming blog…). For practical purposes, most of us are used to 3G – that was the best data technology for cellular systems until recently. 4G is “better”… sort of… we’ll see why in a moment.

The biggest reason I am delving into this arcane stuff is to (as simply as I can) educate the user as to why you can’t browse the web or perform other data functions while simultaneously talking on the phone IF you are using an iPhone on the Sprint or Verizon networks – but can if you are on AT&T. The reason is that LTE is an extension of GSM (the technology that AT&T currently uses for voice and data), whereas both Sprint and Verizon use a different technology for voice/data (CDMA). Each of these technologies requires a separate radio and a separate antenna. For AT&T customers, the iPhone needs 2 antennas (4G/LTE, plus 3G for voice [and 3G data fallback if no 4G/LTE is available in that location]); if the iPhone were to support the same functionality for Sprint/Verizon, a 3rd radio and antenna would be required (4G/LTE for high speed data, 3G fallback data, and CDMA voice). Apple decided not to add the weight, complexity and expense to the iPhone5, so customers on those networks face an either/or choice: voice or data, but not at the same time.

Apple is making some serious claims on improved battery life when using 4G, saying that the battery will last the same (up to 8 hours) whether on 3G or 4G. That’s impressive; early 4G phones from other vendors have had notoriously poor battery life on 4G. Some assumptions, beyond OS tweaks, include the possible use of a new Qualcomm LTE chip, the MDM9615.

The range of cellular voice and data types/bands/variations that are said to be supported by the iPhone5 are:  GSM (AT&T), CDMA (Verizon & Sprint), EDGE, EV-DO, HSPA, HSPA+, DC-HSPA, LTE.

Now, another important few points on 4G:

    • The current technology that everyone is calling 4G… isn’t really. The marketing monsters won the battle however, and even the standards bodies caved. LTE (Long Term Evolution – and this does have a technical meaning in terms of digital symbol reconstitution from a multiplexed data stream, as opposed to the actual advancement of intellect, compassion, health and heart of the human species – something that I hold in serious doubt right now…) is a ‘stepping-stone’ on the way to “True 4G”, and is not necessarily the only way to implement 4G – but the marketing folks just HAD to have a ‘higher number means better’ term, so just like at one point we had “2.5G” (not quite real 3G but better than 2G in a few weird ways), we now have 4G… to be supplemented next year with “LTE Advanced” or “4G Advanced”. Hmmmm. And once the networks improve to “True 4G” or whatever, will the iPhone5 still work? Yes, but it won’t necessarily support all the features of “LTE Advanced” – for instance, LTE Advanced will support “VoLTE” [Voice over LTE] so that only a single radio/antenna would be required for all voice and data – essentially the voice call is muxed into the data layer and just carried as another stream of data. However, and this is a BIG however, that would require essentially full global coverage of “4G/LTE Advanced” – something that is years away due to the cost and time needed to build out networks.
    • Even with the current “baby 4G”, this is a new technology, and most networks in the world only support this in certain limited areas, if at all. It will improve every month as the carriers slowly build out the networks, but it will take time. The actual radio/antenna systems are different from everything currently deployed, so new hardware has to be stuck onto every single cell tower in the world… not a trivial task… Trying to determine where 4G actually works, on which carrier, is effectively impossible at this time. No one tells the whole story, and you can be sure that Pinocchio would look like a snub-nose in comparison to many of the claims put forth by various cellular carriers… In the US, both Verizon and AT&T claim about 65-75% coverage of their respective markets: but these are in high density population areas where the subscriber base makes this economically attractive.
    • The situation is much more spotty overseas, with two challenges: even within the LTE world there are different frequencies used in different areas, and the iPhone5 does not support all of them. If you are planning to use the iPhone5 outside of the US, and want to use LTE, check carefully. And of course the build-out of 4G is nowhere near as complete as in the US.
    • The final issue with 4G is economic, not technical. Since data usage is what gobbles up network capacity (as opposed to voice/text), the plans that the carriers sell to their users are rapidly changing to offer either high-limit or unlimited voice/text at fairly reasonable rates, with data now being capped and the prices increasing. While a typical data plan (say 5GB) allows that much data to be transferred, regardless of whether on 3G or 4G, the issue is speed. Since LTE can run as fast as 100Mb/s (again, your individual mileage may vary…) – which is much, much faster than 3G, and in fact often faster than most Wi-Fi networks – it is easy for the user to consume their cap much faster. If you have ever stood on a street corner and s-l-o-w-l-y waited for a single page to load on your iPhone4, you are not really motivated to stand there for an hour cruising the web or watching sports. But… if the pages go snap! snap! snap!, or the US Open plays great in HD without any pauses or that dreaded ‘buffering’ message – then normal human tendency will be to use more. And the carriers are just loving that!!
    • As an example (just to be theoretical and keep the math simple – it is sketched in code after this list), if we assume 100Mb/s on LTE, then your monthly 5GB cap would be consumed in about 7 minutes!! Now this example is assuming constant download at that data rate, which is unrealistic – a typical page load for a mobile device is under 1 MB, and then you stare at it for a bit, then load another one, and so on – so for web browsing you get snappy loads without consuming a ridiculous amount of data – but beware video streaming – which DOES consume constant data. It will take users some time (and sticker shock at bill time if you have auto-renew set on your data plan!) to learn how to manage their data consumption. (Tip: set to lower resolution streaming when on LTE, switch back to high resolution when on WiFi).
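
Here is that back-of-envelope example as a Python sketch – the 100Mb/s figure is the theoretical LTE rate used above, and as noted, your individual mileage may vary:

    # How fast a 5GB cap evaporates at full theoretical LTE speed
    cap_gb = 5
    rate_mbps = 100                     # theoretical LTE downlink

    cap_megabits = cap_gb * 8 * 1000    # 5 GB -> 40,000 Mb (decimal GB for simplicity)
    seconds = cap_megabits / rate_mbps  # 400 s of constant download
    print(seconds / 60)                 # ~6.7 minutes -- "about 7 minutes"

    # A more realistic browsing model: roughly 1 MB per mobile page load
    print(cap_gb * 1000)                # ~5,000 page loads before hitting the cap

Constant-rate download is the worst case (video streaming comes close); bursty web browsing is far gentler on the cap, as the second number shows.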

GPS

Global Positioning System, or “Location Services” as Apple likes to call it, requires yet another radio and set of antennas. This is a receive-only technology where simultaneous reception of data from multiple satellites allows the device to be located in 3D space (longitude, latitude and altitude) rather accurately. The actual process used by the underlying hardware, the OS and the apps on the iPhone is quite complex, merging together information from the actual GPS radio, Wi-Fi (if it’s on, which helps a lot with accuracy) and even the internal gyroscope that is built in to each iPhone. This is necessary since consumers just want things to work, no matter the laws of physics (yes my radio should receive satellite signals even if I’m six stories underground in a car park…), interference from cars, electrical wires, etc. etc. The bottom line is we have come to depend on GPS to the point that I see people yelping at their Yelp app when it doesn’t know exactly where the next pizza house is…

Bluetooth

Again, we have become totally dependent on this technology for everyday use of a cellphone. In most states now (and countries outside the US), there are rather strict ‘hands-free’ laws for cellphone use while driving a car. While legally this can be accomplished with a wired earplug (know your laws, some places ONLY allow wireless [Bluetooth] headsets! – others allow wired headsets but only in one ear, and it must be of the ‘earbud’ type, not an ‘over the ear’ version), the Bluetooth headset is the most common.

There are other uses for Bluetooth with the iPhone: I frequently use a Bluetooth keyboard when I am actually using the iPhone as a little computer at a coffee bar – it’s SO much faster than pecking on that tiny glass keyboard… There are starting to be a number of interesting external ‘appliances’ that communicate with the iPhone via Bluetooth as well – temperature/humidity meters; various sports/exercise measuring devices; even civil engineering transits can now communicate their readings via Bluetooth to an app for automatic recording and triangulation of data.

And yes, it takes another radio and antenna…

And last but certainly not least:  iOS6

A number of new features are either totally OS-related, or the new hardware improvements are expressed to the user via the new OS. The good news is that some of these new features will now show up in earlier iPhone models, subject of course to hardware limitations.

A few of the new features:

  • Improvements to Siri:  open apps and post comments to social apps with voice commands
  • Facebook: integrated into Calendar, Camera, Maps, Photos. (yes, you can turn off sharing via FB, but in typical FB fashion everything is ‘opt out’…)
  • Passbook: a little digital vault for movie tickets, airline boarding passes, etc. Still ‘under construction’ in terms of getting vendors to sign up with Apple
  • FaceTime: now works over 3G/4G as well as WiFi (watch out for your data usage when not at WiFi – with the new 720P front facing video camera, that nice long chat with your significant other just smoked your entire data plan for the month…)
  • Safari:  links open web pages on multiple Apple devices that are all on same iCloud account. Be careful… if you are bored in the office and are cruising ‘artistic’ web sites, they may reflect in real time in your kitchen or your daughter’s iMac…
  • Maps:  Google Maps kicked out of Apple-land, now a home-grown map app that finally includes turn-by-turn navigation.

Summary

It’s a nice upgrade, Apple. As usual, the industrial design is good. For me personally, it’s starting to get a bit big – but I’ll admit I have an iPad, so if I want more screen space than my 4S offers I’ll just pick up the Pad. Most of the improvements are incremental, but good nonetheless. In terms of pure technology, the iPhone5 is a bit behind some of the Android devices, but this is not the article to start on that! Those arguments could go on for years… I’m only commenting here on what’s in this particular phone, and my personal thoughts on upgrading from a recent 4S to the 5. For me, I won’t do that at this time. A lot of this is very individual, and depends on your use, needs, etc. I tend to almost always be near either my own or free Wi-Fi locations, so 4G is just not a huge deal. The improved speed sounds very nice, but my 4S is currently fast enough – I am an avid photographer, and retouch/filter a lot using the iPhone4S, and it keeps up. I love speedy devices, and if the upgrade were free I would perhaps think differently, but at this point I am not suffering with any aspect of my 4S enough to feel that I have to move to the 5 right away.

Now, I would absolutely feel differently if I had anything earlier than a 4S. I upgraded from the iPhone4 to the 4S without hesitation – in my view the improvements were totally worth it: much better camera, much faster processor, etc. So in the end, my personal recommendation: a highly recommended upgrade from anything at the level of the iPhone4 or earlier; from the 4S, it’s down to individual choice and your budget.

Lens Adaptors for iPhone4S: technical details, issues and usage

August 31, 2012 · by parasam

[Note: before moving on with this post, a comment on stupid spell-checkers… my blog writer (and even Microsoft Word!) insists that “adaptor” is a misspelling. Not so… an “adaptor” is a device that adapts one thing to another that would otherwise be incompatible, while an “adapter” is a person who adapts to new situations or environments… I’ve seen countless instances of misuse… I fear that even educated users are deferring to software, assuming that it’s always correct. The amount of flat-out wrong instances in both spelling and grammar in most major software applications is actually scary…]

Okay, now for the good stuff…  While the iPhone (in all models) is a fantastic camera for a cellphone, it does have many limitations, some of which have been discussed in previous articles in this blog. The one we’ll address today is the fixed field-of-view (FOV) of the camera lens. Most users are familiar with 35mm SLR (Single Lens Reflex) cameras – or, if you are young enough never to have used film, then DSLR (Digital SLR) – and so have at least an acquaintance with the relative FOV of different focal length lenses. As a quick review, the so-called “normal” lens for a 35mm sensor size is a 50mm focal length. Anything less than that is termed a “wide angle” lens, anything greater than that is termed a “telephoto” lens. This is a somewhat loose description, and at very small focal lengths (which leads to very wide angle of view) the terminology changes to a “fisheye” lens. For a more detailed explanation of focal length and other issues please see my original post on the iPhone4S camera “Basic Overview” here.

Overview

The lens that is part of the iPhone4S camera system is a fixed-aperture / fixed-focal-length lens. The aperture is set at f2.4, while the 35mm equivalent focal length of the lens is 32mm – a moderately wide angle lens. The FOV (Field of View) for this lens is 62° [for still photos], 46° [for video]. {Note: since the video mode of 1920×1080 pixels uses a smaller area than the sensor size used for still photos (3264×2448), the angle of view changes with the focal length held constant} The fixed FOV (i.e. not a zoom lens) affects composition of the image, as well as depth of field. A quick note: yes, the iPhone (and most other cellphone cameras) have a “zoom” function, but this is a so-called “digital zoom”, which is achieved by cropping and magnifying a small portion of the original image as captured on the sensor. This produces a poor quality, low resolution image, and should be avoided for any serious photography. A true zoom lens (sometimes called ‘optical zoom’) achieves this function by mechanically changing the focal length – something that is essentially impossible to engineer within a cellphone form factor. As a rule of thumb, the smaller the focal length, the greater the depth of field (the areas of the image that are in focus, in relation to the distance from the lens), and the greater the field of view (how much of the total scene fits into the captured image) – the FOV arithmetic is sketched in code just below.
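
For the geometrically inclined, the relationship between focal length, sensor size and FOV is simple trigonometry: FOV = 2·arctan(d / 2f), where d is the sensor dimension of interest and f the focal length. The Python sketch below is illustrative only – the sensor width is my assumption, not a published Apple figure, so the outputs will not exactly match the quoted 62°/46° – but it shows why cropping the sensor for 1920×1080 video narrows the angle of view even though the focal length never changes:

    import math

    def fov_degrees(sensor_dim_mm, focal_mm):
        # FOV = 2 * arctan(d / 2f)
        return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

    FOCAL = 4.28                      # iPhone4S actual focal length, mm
    STILL_W = 4.54                    # assumed sensor width, mm (not an Apple spec)
    VIDEO_W = STILL_W * 1920 / 3264   # video mode reads a smaller region of the sensor

    print(fov_degrees(STILL_W, FOCAL))  # wider FOV for stills
    print(fov_degrees(VIDEO_W, FOCAL))  # narrower FOV for video, same lens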

In order to add some variety to the compositional choices afforded by the fixed iPhone lens, the only option is to fit external adaptor lenses to the iPhone. There are several manufacturers that offer these, using a variety of mechanical devices to mount the lens. There are two basic divisions of adaptor type: those that provide external lenses and the mounting hardware; and those that provide a mechanical adaptor to use commonly available 35mm lenses with the iPhone. One example of an adaptor for 35mm lenses is here, while an example of lens+mount is here.

I personally don’t find a use for adapting 35mm lenses to the iPhone:  if I am going to deal with the bulk of a full sized lens then I will always choose to attach a real camera body and take advantage of the resolution and control that a full purpose-built camera provides. Not everyone may share this sentiment, and for those that find this useful there are several adaptors available. I do shoot a lot with the iPhone, and found that I did really want to have a relatively small and lightweight set of adaptor lenses to offer more choice in framing an image. I researched the several vendors offering such devices, and for my personal use I chose the iPro lens system manufactured by Schneider Optics. I made this choice based on two primary factors:  I had prior experience with lenses made by Schneider (their unparalleled Super Angulon wide angle for my view camera), and the precision, quality and versatility of the iPro system. This is a personal choice – ultimately any user will find what works for them – but the principles discussed here will apply to any external adaptor lens. As I have mentioned in previous posts, I am not a professional reviewer, have no relationship with any hardware or software vendor (other than the support offered as an end user), and have no commercial interest in any product I mention in this blog. I pick what I like, then write about it.

I do want to point out, however, that once I started using the iPro lenses and had some questions, I received a large amount of time and assistance from the staff at Schneider Optics, particularly Niki Mustain. I would like to thank her and all the staff who so generously answered my incessant questions, and did a fair amount of additional research and testing prompted by some of my observations. They kindly made available an internal report on iPro lens performance and the interactions with the iPhone camera (some of these issues are discussed below). When and if they make that public (likely as an application note on their website) I will update this blog with a comment pointing to it; in the meantime they have allowed me to use some of their comments on the general technology and limitations of any adaptor lens system as background for this post.

Technical specs on the iPro lens adaptor system

This particular system offers three different adaptor lenses (they can be purchased individually or as a set): Wide Angle, Telephoto and Fisheye. Here are the basic specifications:

As can be seen from the above details, the Telephoto is a 2X magnification, doubling the focal length and roughly halving the FOV (Field of View). The Wide Angle changes the stock medium wide-angle view of the iPhone to a “very wide” wide angle (19mm equivalent – about the widest FOV provided by most variable focal length** 35mm lenses). The Fisheye offers what I would consider a ‘medium’ fisheye look, with a 12mm equivalent focal length. With fisheye lenses generally accepted as having focal lengths of 18mm or less, this falls about midway between 6mm*** and 18mm.

**There is a difference between “variable focal length” and “zoom” lenses, although most people use the terms interchangeably, unaware of the distinction between the two. A variable focal length lens allows a continuous change of focal length, but once the new focal length is established, the image must be refocused. A true zoom lens will maintain focus throughout the entire range of focal lengths allowed by the lens design. Obviously a true zoom lens is more difficult (and therefore more costly) to manufacture. Typically, zoom lenses are larger and heavier than a variable focal length lens. It is also more difficult to create such a lens with a wide aperture (low f/stop number). To give an example, you can purchase a reasonable 70-200mm zoom lens for about $200 (with a maximum aperture of f5.6); a high quality zoom lens of the same range (70-200mm) that opens up to f2.8 will run about $2,500.

Another thing to keep in mind is that most ‘variable focal length’ lenses are not advertised as such; they are often marketed as zoom lenses, but careful testing will show that accurate focus is not maintained throughout the full range of focal lengths. Not surprising, as this is a difficult optical feat to do well, which is why high quality zoom lenses cost so much. Really good HD video or cinematography zoom lenses that have an extremely wide range (often used for sports television – for example the Canon DigiSuper 80 with a zoom range of 8.8 to 710mm) can cost upwards of $163,000. Warning: playing with one of these for a few days will produce depression and optical frustration once returning to ‘normal’ inexpensive zoom lenses… A good lens is simply the most important factor in getting a great image. Period.

*** The extreme wide end of fisheye lenses is held by the Nikkor 6mm/f2.8 which is a masterpiece of engineering. With an almost insane 220° FOV, this is the widest lens for 35mm cameras of which I am aware. You won’t find this in your local camera shop however, only a few hundred were ever made – during the 1970s – 1980s. The last time one went on auction (in the UK in April 2012) it sold for just over $160,000. The objective lens is a bit over 236mm (9.25″) in diameter! Here are a few pix of this awesome lens:

actual image taken with 6mm f2.8 Nikkor fisheye

Ok, back to reality (both size and wallet-wise…)

Here are some images of my iPro lenses to give the reader a better idea of the devices which we’ll be discussing further:

The 3-lens iPro kit fully assembled for carrying/storage.

An ‘exploded view’ of all the bits that make up the 3-lens iPro system.

The Fisheye, Telephoto and Wide Angle iPro lenses.

Front view of iPro case mounted to iPhone4S, showing the attached tripod adaptor.

Rear view of the iPro case mounted on iPhone4S.

Close-up of the bayonet lens mounting feature of the iPro case.

2X Telephoto mounted on iPhone.

WideAngle lens mounted on iPhone.

Fisheye lens mounted on iPhone.

Basic use of the iPro lens system

The essential parts of the iPro lens system are the case, which allows precision alignment of the lens with the iPhone camera, and the detachable lens elements themselves. As we will discuss below, the precision and accuracy of mounting an external adaptor lens is crucial to good optical performance. It may seem trivial, but the material and case design is an important overall part of the performance of this adaptor lens system. Due to the necessary rigidity of the case material, once it is installed on the iPhone it is not the easiest to remove… I missed this important part of the instructions provided: you must attach the tripod adaptor to the case body to provide the additional leverage needed to slightly flex the case for removal. (The hole in the rear of the case that shows the Apple logo is actually a critical design element: that is where you push with a finger of your opposite hand while flexing the case in order to pop the phone out of the case.)

In addition to providing the necessary means for taking the iPhone out of the case if you should need to (and you really won’t: I found that this case works just fine as an everyday shell for the phone, protecting the edges, insulating the metallic sideband to avoid the infamous ‘hand soaking up microwaves dropped call iPhone effect’, and is slim enough that it fits perfectly in my belt-mounted carrying case), the tripod mounting screw provides a very important improvement for iPhonography: stability. Even if you don’t use any of the adaptor lenses, the ability to affix the phone to a tripod (or even a small mono-pod) is a boon to getting better photographs with the iPhone. Rather than bore you with various laws of physics and optical science, just know that the smaller the sensor, the more a resultant image is affected by camera movement. The simple truth is that the very small sensor size of the iPhone camera, coupled with the light weight and small case size of the phone, means that most users unconsciously jiggle the camera a lot when taking an image. This is the single greatest reason for lack of sharpness in iPhone images. To compound things, the smaller the sensor size, the less sensitive it is for gathering light, which means that often, in virtually anything but direct sunlight, the iPhone is shooting at relatively slow shutter speeds, which only exaggerates camera movement.

Since the EXIF data (camera image metadata) is collected with each shot, you can see afterwards what shutter speed was used by the iPhone on each of your shots (a short sketch below shows how to read this). The range of shutter speeds on the iPhone4S is from 1/15 sec to 1/2000 sec. Any shutter speed slower than 1/250 sec will show some blurring if the camera moves at all during the shot. So, whenever possible, brace your phone against a rigid object when shooting, particularly in partial shade or darker surroundings. Since often a suitable fence post, lamp pole or other object is not right where you need it for your shot, the ability to use some form of tripod will often provide a superior result for your image.
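
If you want to check your own shots, the shutter speed is easy to pull out of a photo’s EXIF data with a few lines of Python and the Pillow imaging library (the filename here is of course hypothetical, and _getexif() is Pillow’s legacy JPEG helper):

    from PIL import Image
    from PIL.ExifTags import TAGS

    img = Image.open("IMG_0001.JPG")   # hypothetical filename
    exif = img._getexif() or {}

    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        if name in ("ExposureTime", "FNumber", "ISOSpeedRatings"):
            print(name, value)         # e.g. an ExposureTime of 1/250 sec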

The adaptor lenses themselves twist into the case with a simple bayonet mount. As usual with any fine optics, take care to avoid dropping, scratching or otherwise damaging the delicate optical surfaces of the lenses. The telephoto lens will most benefit from tripod use (when possible), as the narrower the angle of view, the more pronounced camera shake is on the image. On the other hand, the fisheye lens can be handheld for most work with no visible impairment. A note on use of the fisheye lens:  the FOV is so wide that it’s easy for your hand to end up in the image… take some care and practice with how you hold the phone when using this lens.

Optical issues with adaptor lenses, including the iPro lens system

After using the adaptor lenses for a short time, I found several impairments in the images taken. Essentially the artifacts result in a lack of sharpness towards the edge of the image, and color fringing of certain objects near the edge of the frame. I went on to perform extensive tests of each of the lenses and then forwarded my concerns to the staff at Schneider Optics. To my pleasure, they were open to my concerns, and performed a number of tests in their own lab as well. While I will discuss the details below, the bottom line is that both I and the iPro team agree that external adaptor lenses are not a perfect science, particularly with the iPhone. We must remember, for all the fantastic capabilities that this device exhibits… it’s a bloody cellphone! I have every confidence that Schneider (and probably other vendors as well) have made every effort within the scope of practicality and budget for such lenses to minimize the side-effects. I have found the actual optical precision of the iPro lenses (as measured for such things as MTF [Modulation Transfer Function – an objective measurement of the resolving capability of a lens system], illumination fall-off, chromatic and geometric aberrations, optical alignment and contrast ratio) to be excellent – particularly for lenses that are really quite inexpensive compared to their quality.

The real issue lies with the iPhone camera system itself: Apple never designed this camera to interoperate with external adaptor lenses. One cannot fault the original manufacturer for attempting to produce a piece of hardware that offers good performance at a reasonable price within a self-contained system. The iPhone designers have treated the totality of the hardware and software of the camera system as a fixed and closed universe. This is typical of the way that Apple designs both their hardware and software. There are both pros and cons to this philosophy:  the strong advantage is the ability to blend design characteristics of both hardware and software to mutually complement each other in the effort to meet design criteria with a time/cost budget; the disadvantage is the lack of easy adaptability in many cases for external hardware or software to easily interoperate with Apple products. For example, the software development guidelines for Apple devices are the most stringent in the entire industry. You work within the framework provided, or you don’t get approval for your app. Every app intended for any iDevice must be submitted to Apple directly for testing and approval. This is virtually unique in the entire computer/cellphone industry. (I’m obviously not talking about the gray area of ‘jailbroken’ phones and software).

The way in which this design philosophy shows up in relation to external adaptor lenses is this: the iPhone camera is an amazingly good camera for its size, cost and weight, but it was never designed to be complementary to external lenses. Certain design choices that are not evident when images are taken with the native camera show up, sometimes rather glaringly, when external lenses are coupled with the iPhone camera. One might say that latent issues in the lens and sensor design are significantly amplified by external adaptor lenses. This issue is endemic to any external lens, not just the iPro lenses I am discussing here. Each one will of course have its own unique ‘fingerprint’ of interaction with the iPhone camera, but the general issues discussed will be the same.

As usual, I bring all this up to share with my readers the best information I can find or develop in the pursuit of what’s realistically possible with this great little camera. The better we know the capabilities and limitations of our tools, the better able we are to make the images we want. I have taken some great shots with these adaptor lenses that would have been impossible to create any other way. I can live with the distortions introduced as a compromise to get the kind of shot that I want. The more aware I am of what the issues are, the better I can attempt (while composing a shot) to minimize the visibility of some of these artifacts.

To get started, here are some example shots:

[Note:  all shots are unretouched from the iPhone camera, the only adjustment is resizing to fit the constraints of this blog format]

iPhone4 Normal (no adaptor lens)

iPhone4 WideAngle adaptor lens

iPhone4 Fisheye adaptor lens

iPhone4 Telephoto adaptor lens

iPhone4S #1 Normal

iPhone4S #1 WideAngle

iPhone4S #1 Fisheye

iPhone4S #1 Telephoto

iPhone4S #2 Normal

iPhone4S #2 WideAngle

iPhone4S #2 Fisheye

iPhone4S #2 Telephoto

The above shots were taken to test one of the first potential causes for the artifacts in the images: the softening towards the edges as well as the color fringing of bright areas near the edge of the image (chromatic aberration). A big potential issue with externally mounted adaptor lenses for the iPhone is lens alignment. The iPhone lens is physically aligned to the sensor as part of the entire camera assembly. This unitary assembly is then inserted into the case during final manufacture of the device. Since Apple never considered the use of external adaptor lenses, no effort was made to ensure perfect alignment of the camera assembly into the case. As can be seen from my blog on the iPhone hardware (showing detailed images of an iPhone torn apart), the camera assembly is simply pressed into place – there is no precision mechanical lock to align the optical axis of the camera with the case. In addition, the actual camera lens is protected by being installed behind a clear plastic window that is part of the outer case itself.

What this means is that if the camera assembly is tilted even very slightly, it will produce a “tilt-shift” de-focus effect when coupled with an external lens: the center of the image will be in focus, but both edges will be out of focus. One side will actually be focused a bit behind the sensor plane, while the other side will be focused a bit in front of the sensor plane.

The above diagram represents an extreme example, but you can see that if the lens is tilted in relation to the image sensor plane, the plane of focus changes. Objects at the edge of the frame will no longer be in focus, while objects in the center of the frame will remain in focus. A quick trigonometric sketch below shows why even a tiny tilt overwhelms the available depth of focus.
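
The Python sketch below uses assumed, plausible numbers (a sensor half-width and a circle of confusion of roughly one pixel pitch) – none of these are measured iPhone values – but the order-of-magnitude mismatch between the tilt-induced focus error and the available depth of focus is the point:

    import math

    tilt_deg = 1.0        # assumed tilt of the camera module relative to the case
    half_width_mm = 2.3   # assumed sensor half-width
    f_number = 2.4        # iPhone4S aperture
    coc_mm = 0.0014       # circle of confusion ~ one pixel pitch (assumed)

    # Focus-plane error at the sensor edge caused by the tilt
    edge_error_um = half_width_mm * math.tan(math.radians(tilt_deg)) * 1000

    # Total depth of focus at the sensor: t = 2 * N * c
    depth_of_focus_um = 2 * f_number * coc_mm * 1000

    print(round(edge_error_um))         # ~40 um of focus error at the frame edge
    print(round(depth_of_focus_um, 1))  # ~6.7 um of total depth of focus

With numbers like these, even a fraction of a degree of tilt is enough to push the frame edges visibly out of focus once an external adaptor lens is in the optical path.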

In order to rule out a fault in any single phone, I used three separate iPhones (one iPhone4 and two iPhone4S models). While not a large sample statistically, it did provide some certainty that the issues I was observing were not related to a single iPhone. You can see from the examples above that all of the adaptor lens shots exhibit some degree of the two artifacts (defocused edges and chromatic aberration). So further investigation was required in order to attempt to understand the root cause of these distortions.

Since the first set of test shots was not overly ‘scientific’ (back yard), I was advised by the staff at Schneider that a brick wall was a good test subject. It was easy to visualize the truth of this, so I went off in search of a large public test chart (brick wall…)

WideAngle taken from 15 ft.

Fisheye taken from 15 ft.

Telephoto taken from 15 ft.

To add some control to the shots, and reduce potential errors of camera movement that may affect sharpness in the image, the above and all subsequent test shots were taken while the iPhone was mounted on a stable tripod. In addition, each shot was taken from exactly the same camera position (in the above shots, 15 feet from the wall). Two things stood out here: 1) there was a lack of visible chromatic aberration [I think likely due to the flat lighting on the wall and lack of high contrast edges, which typically enhance that form of artifact]; and 2) the soft focus artifact is more pronounced on the left and right sides as opposed to the top and bottom edges. [More on why I think this may occur later in this article].

WideAngle, 8 ft.

Fisheye, 8 ft.

WideAngle, 30 ft.

Fisheye, 30 ft.

Telephoto, 30 ft.

WideAngle, 50 ft.

Fisheye, 50 ft.

Telephoto, 150 ft.

Telephoto, 150 ft.

Telephoto, 500 ft.

The above set of images represented the next test series of shots. Here, various distances to the “test chart” [this time I needed even a larger ‘chart’ so had to find a 3-story brick building…] were used in order to see what effect that may have on the resultant image. A few ‘real world’ images were shot using just the telephoto at long distances – here the large distance from camera to subject, using a telephoto lens, would normally result in a completely ‘flat’ image with everything in the same focal plane. Once again, we continue to see soft focus and chromatic aberrations at the edges.

Normal (no adaptor), auto-focus

Normal, Selective Focus

WideAngle, Auto Focus

WideAngle, Selective Focus

Fisheye, Auto Focus

Fisheye, Selective Focus

Telephoto, Auto Focus

Telephoto, Selective Focus

This last set of test shots was suggested by the Schneider staff, based on some tests they ran and subsequent discussions. One theory is that there is a difference in how the iPhone camera internal software (firmware + OS kernel software – not anything a camera app developer has access to) handles auto-focus vs selective-focus. Selective focus is where the user can select the focus area, usually with a little square that can be moved to different parts of the image. In all the above tests, the selective focus area was set to the center of the image. In theory, since my test images were flat and all at the same distance from the camera, there should have been no difference between auto-focus and selective-focus, no matter which lens was used. Careful examination of the above images shows an inconsistent result: the fisheye showed no difference between the two focus modes, the normal and telephoto looked better with selective focus, while the wideangle looked best when auto focus was applied.

The internal test report I received from Schneider pointed out another potential anomaly, one I have not yet had time to attempt to reproduce: using selective focus off-center in the image. This usage appeared to generate results that would be unexpected in normal photographic work: the area of selective focus was sharp, most of the rest of the image was a bit softer, but a mirror image position of the original selective focus region was once again sharp on the opposite side of the image. This does seem to clearly point to some image-enhancement algorithms behaving in an unexpected fashion.

The issue of auto-focus methods is a bit beyond the scope of this article, but considerable research suggests that the most likely methodology used in the iPhone camera is passive detection (that much is certain – there is no rangefinder on an iPhone!) controlling lens barrel or lens element adjustment. There are a large number of vendors that support this form of auto-focus (and here I mean ‘not manual focus’, since there is no mechanical focus ring on cellphones… the ‘auto-focus’ can either be entirely automatic [as I use the term “auto-focus” in my tests above] or selective-area auto-focus, where the user indicates a region of the image on which the auto-focus is concentrated). One of the most advanced methods is MEMS (Micro-Electrical Mechanical Systems), which moves a single optical element within the lens barrel; another popular method is the ‘voice-coil’ micro-motor, which moves the entire lens barrel to effect focus.

With the advances brought to bear with iOS5, including face area recognition (the camera attempts to recognize faces in the image and focus on those when in full auto-focus mode), it is apparent that significant image recognition and processing are being done at the kernel level, before any camera app ‘gets its hands on’ the camera controls. The bottom line is that there may well be interactions between the passive detection and image processing algorithms and the presence of an external adaptor lens – something the iPhone software never expects. Another way to put this is that the internal software of the camera is likely ‘tuned’ to the lens that is part of the camera assembly, and a significant change to the optical pattern drawn on the camera sensor (now that a telephoto lens adaptor is attached) alters the focusing algorithm in an unexpected manner, producing the artifacts we see in the examples.

This issue is not at all unknown in engineering and quality control: a holistically designed system where all of the variables are thought to be known can be significantly degraded when even one element is externally modified without knowledge of the full scope of design parameters. This often occurs with after-market additions or changes to automobiles. One simple example: if you change the tire size (radius, not width), the speedometer is no longer accurate – the entire system of the car, including wheel and tire diameter, was part of the calculus for determining how many turns of the axle per minute (all the speedometer mechanism actually measures) are required to indicate X amount of kph (or mph) on the instrument panel. A quick sketch below works the numbers.
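
Here is that speedometer example as a Python sketch (the tire radii and axle RPM are made-up illustration values):

    import math

    def speed_kph(axle_rpm, tire_radius_m):
        # Speed is axle RPM times tire circumference
        return axle_rpm * 2 * math.pi * tire_radius_m * 60 / 1000

    stock_r, new_r = 0.30, 0.33   # assumed stock and oversized tire radii (m)
    axle_rpm = 884                # ~100 kph on the stock tires

    indicated = speed_kph(axle_rpm, stock_r)  # what the speedometer reports
    actual = speed_kph(axle_rpm, new_r)       # what the car is really doing

    print(round(indicated), round(actual))    # ~100 vs ~110 kph: a 10% error

The speedometer only ever sees axle revolutions; change the tire radius and every reading is silently off by the same ratio – exactly the kind of ‘unknown variable’ that an external adaptor lens introduces into the iPhone’s closed camera design.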

Another factor that may have a material effect on the focus and observed chromatic aberration is the lens design itself, and how an external adaptor lens may interact with the native design. Simple lenses are often portions of a sphere, so called “spherical lenses.”  Such a lens suffers from significant optical aberrations, as not all of the light rays that are focused by a spherical lens converge to a single point (producing a lack of sharp focus). Also, such lenses bend different colors of light differently, leading to chromatic aberrations (where one sees color fringing, usually blue/purple on one side of a high contrast object and green/yellow on the opposite side). Most high quality modern camera lenses are either aspherical (specially modified shapes that deviate away from a perfect spheroid shape) or groups of elements, some of which may be spherical and others aspherical. Several examples are shown below:

We know from published literature that the lens used in the iPhone4S is a 5 element lens system with at least several aspherical elements. A diagram released by Apple is shown below:

iPhone4 lens system [top] and iPhone4S lens system [bottom]

Again, as described earlier, the iPhone camera system was designed as a unitary system, with factors from the lens system, the individual lens elements, the sensor, firmware and kernel software all becoming known variables in a highly complex opto-electronic equation. The introduction of an external adaptor array of additional elements can produce unplanned effects. All in all, the various vendors of such adaptor lenses, including iPro, have done a good job in dealing with many unknowns. Apple is a highly secretive manufacturer, and does not publish much information. Attempts to gain further technical knowledge are very difficult, at some point one invariably comes up against Apple’s draconian NDAs (Non-Disclosure Agreements) which have penalties large enough to deter even the most aggressive seekers of information. Even the accumulation of knowledge that I have acquired over the past year while writing about the iPhone has been slow, tedious and has taken a tremendous amount of research and ‘fact comparison.’

As a final example, using a more real-world subject, here are a few camera images and screen shots that demonstrate the challenge of attempting to correct, using post-production techniques, some of the errors introduced by such a lens adaptor:

Original image, unretouched but annotated.

The original image shows significant chromatic aberrations (color fringing) around the reflections in the shop window, grout lines in the brickwork on the pavement, and on the left side of the man’s shirt.

Editing using Photoshop ‘CameraRaw’ to attempt to correct the chromatic aberrations.

Using the Photoshop Camera Raw module, it is possible to manually correct for color fringing shifts… but the correction applies to the entire image. So a fix for the edges causes a new set of errors in the middle of the image.
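To see why the correction is inherently global, here is a rough sketch of the underlying principle (this is not Adobe’s actual algorithm, and the file name and scale factors are invented for the example): lateral chromatic aberration is typically corrected by radially rescaling the red and blue channels about the image centre, so every pixel is touched, not just the fringed edges:

    import numpy as np
    from PIL import Image

    def scale_channel(plane, factor):
        # Resample one colour plane about the image centre by `factor`.
        # (A shrunken plane leaves a thin black border in this crude sketch.)
        h, w = plane.shape
        img = Image.fromarray(np.ascontiguousarray(plane))
        sw, sh = int(round(w * factor)), int(round(h * factor))
        scaled = img.resize((sw, sh), Image.BILINEAR)
        canvas = Image.new("L", (w, h))
        canvas.paste(scaled, ((w - sw) // 2, (h - sh) // 2))
        return np.asarray(canvas)

    rgb = np.asarray(Image.open("window_shot.jpg")).copy()   # hypothetical file
    rgb[..., 0] = scale_channel(rgb[..., 0], 1.002)   # nudge red outwards
    rgb[..., 2] = scale_channel(rgb[..., 2], 0.998)   # nudge blue inwards
    Image.fromarray(rgb).save("window_shot_corrected.jpg")

A rescaling that lines the channels up at the window reflections necessarily moves them everywhere else too – which is exactly the reciprocal error seen in the centre of this image.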

Chromatic aberrations removed from around the reflections in the window…

Notice here that the color fringing is gone from around the bright reflections in the window, but now the left edge of the man’s shirt has the color shifted, leaving only the monochromatic outline behind, producing a dark gray edge instead of the uniform blue that should exist.

…but reciprocal chromatic edge errors are introduced in the central portion of the image where highly saturated colors abut more neutral areas.

Likewise, the green paint on the steel column has shifted, revealing a gray line on the right of the woman’s leg, with a corresponding shift of the flesh tone onto the green steelwork on the left side of her leg.

Final retouched shot after ‘painting in’ was performed to resolve the chroma offset errors in the central portion of the image.

To fix all these new errors, a technique known as ‘painting in’ was used: sampling and filling the color errors with the correct shade, texture and intensity. This takes time, skill and patience. For the most part it is impractical – it was done here only as an example.

Summary

The use of external adaptor lenses, including the iPro system discussed here, can offer a useful extension to the creative composition of images with the iPhone. Such lenses bring a set of compromises with them, but once these are known, careful choice of lighting, camera position and other factors can reduce the visibility of such effects. As with any ‘creative device’, less is often more… sparing use of such adaptors will likely bring the best results. However, there are shots I have obtained with the iPro that would have been impossible with the basic iPhone camera/lens, so I am happy to have this additional tool.

To close, here are a few more examples using the iPro lenses:

Fisheye

WideAngle

Telephoto

Why do musicians have lousy music systems?

August 18, 2012 · by parasam

[NOTE: this article is a repost of an e-mail thread started by a good friend of mine. It raised an interesting question, and I found the answers and comments fascinating and wanted to share with you. The original thread has been slightly edited for continuity and presentation here].

To begin, the original post that started this discussion:

Why do musicians have lousy hi-fis?

It’s one of life’s little mysteries, but most musicians have the crappiest stereo systems.

  by Steve Guttenberg

August 11, 2012 7:36 AM PDT

I know it doesn’t make sense, but it’s true: most musicians don’t have good hi-fis.

To be fair, most musicians don’t have hi-fis at all, because like most people musicians listen in their cars, on computers, or with cheap headphones. Musicians don’t have turntables, CD players, stereo amplifiers, and speakers. Granted, most musicians aren’t rich, so they’re more likely to invest whatever available cash they have in buying instruments. That’s understandable, but since they so rarely hear music over a decent system they’re pretty clueless about the sound of their recordings.

(Credit: Steve Guttenberg/CNET)

Musicians who are also audiophiles are rare, though I’ve met quite a few. Trumpet player Jon Faddis was definitely into it, and I found he had a great set of ears when he came to my apartment years ago to listen to his favorite Dizzy Gillespie recordings. Most musicians I’ve met at recording sessions focus on the sound of their own instrument, and how it stands out in the mix. They don’t seem all that interested in the sound of the group.

I remember a bass player at a jazz recording session who grew impatient with the time the engineer was taking to get the best possible sound from his 200-year-old-acoustic bass. After ten minutes the bassist asked the engineer to plug into a pickup on his instrument, so he wouldn’t take up any more time setting up the microphone. The engineer wasn’t thrilled with the idea, because he would then just have the generic sound of a pickup rather than the gorgeous sound of the instrument. I was amazed: the man probably paid $100,000 for his bass, and he didn’t care if its true sound was recorded or not. His performance was what mattered.

From what I’ve seen, musicians listen differently from everyone else. They focus on how well the music is being played, the structure of the music, and the production. The quality of the sound? Not so much!

Some musicians have home studios, but very few of today’s home (or professional) studios sound good in the audiophile sense. Studios use big pro monitor speakers designed to be hyperanalytical, so you hear all of even the most subtle details in the sound. That’s the top requirement, but listening for pleasure is not the same as monitoring. That’s not just my opinion — very, very few audiophiles use studio monitors at home. It’s not their large size or four-figure price tags that stop them, as most high-end audiophile speakers are bigger and more expensive. No, studio monitor sound has little appeal for the cognoscenti because pro speakers don’t sound good.

I have seen the big Bowers & Wilkins, Energy, ProAc, and Wilson audiophile speakers used by mastering engineers, so it does work the other way around. Audiophile speakers can be used as monitors, but I can’t name one pro monitor that has found widespread acceptance in the audiophile world.

Like I said, musicians rarely listen over any sort of decent hi-fi, and that might be part of the reason they make so few great-sounding records. They don’t know what they’re missing.

——–

Now, in order, the original comment and replies:  [due to the authors of these e-mails being located in USA, Sweden, UK, etc. not all of the timestamps line up, but the messages are in order]

From: Tom McMahon
Sent: Saturday, August 11, 2012 6:09 PM
To: Mikael Reichel; ‘Per Sjofors’; John Watkinson
Subject: Why do musicians have lousy hi-fis?

I agree with some of this, and have made the same observations.

But I don’t agree with the broad use of “most musicians”, as it may be interpreted to mean the majority. Neither of us can know this. Neil Young evidently cares a lot.

However, the statement “pro speakers do not sound good” is a subjective statement. It’s like saying distilled water (i.e. 100% H2O) doesn’t taste good. Possibly many think so, but distilled water is the purest form of water, and by definition anything less pure is not pure water – whether you like it or not.

The water is the messenger and shooting it for delivering the truth is not productive.

If audiophiles don’t like to hear the truth, it sort of deflates them.

A friend – a singer/songwriter with fifteen CDs behind her in the rock/blues genre – on a rare occasion when I got her to listen to her own stuff over a pair of Earo speakers, commented on the detail and realism. When I then suggested that her forthcoming CD be mastered over these speakers, she replied, “I don’t dare.”

Best/Mike

——-

From: John Watkinson
Sent: Sun 8/12/2012 6:46 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Hello All,

If a pro loudspeaker reproduces the input waveform and an audiophool [ed.note: letting this possible mis-spelling stand, in case it’s intended…] speaker also does, then why do they sound different?

We know the reasons, which are that practically no loudspeakers are accurate enough.  We have specialist speakers that fail in different ways.

The reason musicians are perceived to have lousy hi-fis may be that practically everyone does. The resultant imprinting means that my crap speaker is correct and your crap speaker is wrong, whereas in fact they are all crap.

Our author doesn’t seem to know any of this, so he is just wasting our time.

Furthermore I know plenty of musicians with good ears and good hi-fi.

Best,

John

——-

From: Mikael Reichel
Sent: Sun 8/12/2012 12:58 PM
To: John Watkinson; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Andrew is a really nice guy.

He has a talent for selecting demo material, and his TAD speakers sound quite good. But they are passive and also use bass-reflex. This more or less puts the attainable quality level against a brick wall. Add the soft dome tweeter and I am a bit surprised at Mr. Jones’s choices – dome tweeters are fundamentally flawed designs.

One logical result of making “new” drivers is to skip ferrite magnets, because they are a size and weight thief and also limit mechanical freedom for the design engineer. You almost automatically get higher sensitivity by using neodymium. But this is also something of a myth, as little is done to address the fundamental mismatch of the driver to the air itself. I would guess Andrew has had the good sense to go with neodymium magnets.

Delivering affordable speakers is a matter of having a strong brand to begin with – one that allows for volumes, so that clients will buy without listening first. That in turn allows for direct delivery, removing the importer and retail links from the chain. Typically only 25% of the MSRP reaches the manufacturer; remove the manufacturing cost and you realize it’s a numbers game.

This is exactly what is going on in the “audio” business today. The classical sales structures are being torn down. A very large number of speaker manufacturers are going to disappear because they don’t have the brand and volumes to sell over the web. To survive, new ways of reaching clients will have to be invented. A true paradigm shift.

TAD has been the motor to provide this brand recognition and consumers are marketed to believe that they can get almost $80 performance from a less than $1 speaker, which is naïve.

If the speakers can be made active with DSP, they can be made to sound unbelievably good.  This is the real snapshot of the future.

/Mike

—-

From: John Watkinson
Sent: Sun 8/12/2012 11:13 PM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Hello All,

Mike is right. The combination of dome tweeter, bass reflex and passive crossover is a recipe for failure. But our journalist friend doesn’t know. I wonder what he does know?

Best,

John

——

From: Ed Elliott
Sent: Mon 8/13/2012 7:02 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; John Watkinson
Subject: Why do musicians have lousy hi-fis?

Hi Mike,

Well, this must be answered at several levels. Firstly, the author has erred in two major, but unfortunately not at all uncommon, ways: the linguistic construction “most <fill_in_the_blank>” is inaccurate and unscientific at the best of times, and all too often a device for lending some margin of factuality to a desired hypothesis; and the very premise of the article is left undefined – what is “a good hi-fi system”?

Forgoing for the moment the gaps in logic and ontological reasoning that may be applied to the world of aural perception, the author does raise a most interesting question – one that, if pursued in a different manner, would have made for a far more interesting article. Setting aside issues of cost or availability (a total red herring today – quality components have never been more affordable) – WHY don’t ‘most’ musicians apparently care to have ‘better’ sound systems? There is no argument that many musicians DO have excellent systems, at all levels of affordability, and appreciate the aural experience provided. However – and I personally have spent many decades closely connected to the professional audio industry, musicians in general, and the larger post-production community – I do agree, based purely on anecdotal observation, that many talented musicians do not in fact attach much importance to the expense or quality of their ‘retail playback equipment.’ The same of course is not true of their instruments, or of any equipment they deem necessary to express their music.

The answer, I believe, is most interesting: good musicians simply don’t need a high-quality audio system in order to hear music. The same synaptic wiring and neural-fabric connectedness – the stuff that really is the “application layer” in the brain – means that this group of people actually ‘hears’ differently. Hearing, just like seeing, is almost 90% a neurological activity. Beyond the very basic mechanical issues of sound capture, focus, filtering and conversion from pressure waves to chemico-electrical impulses (provided by the ears, ear canal, eardrum and cochlea), all the rest of ‘hearing’ is provided by the ‘brain software.’

To cut to the chase: this group of people already has a highly refined ‘sample set’ of musical notes, harmonies, melodies, rhythms, etc. in their brains, and needs very little external stimulation in order to ‘fire off’ those internalized ‘letters and words’ of musical sound. Just as an inexperienced reader may ‘read’ individual words while a highly competent and experienced reader digests entire sentences as a single optic-with-meaning element, so a lay person may actually need a ‘better’ sound system in order to ‘hear’ the same things that a trained musician would hear.

That is not to say that musicians don’t hear – and appreciate – external acoustic reality: just try playing a bit out of tune, lagging a few microseconds on a lead guitar riff, or not expressing the same voice as the others in the brass section, and you will get a quick lesson in just how acute their hearing is. It’s just tuned to different things. Once a composed song is underway, the merest hint of a well-known chord progression ‘fires off’ that experience in the musician’s brain software – so they ‘hear’ it as it was intended; the harmonic distortion, the lack of coherent imaging, the flappy bass – all those ‘noise elements’ are filtered out by their brains, because they already know what it’s supposed to sound like.

If you realize that someone like Anne-Sophie Mutter has most likely played over 100,000 hours of violin already in her life, and look at what this has done to her brain in terms of listening (forgoing for the moment the musculo-skeletal reprogramming that has turned her body into as much of a musical instrument as the Stradivarius) – you can see that there is not a single passage of classical stringed or piano music that is not already etched into her neural fabric at almost an atomic level.

With this level of ‘programming’ it just doesn’t take a lot of external stimulation in order for the brain to start ‘playing the music.’ Going at this issue from a different but orthogonal point of view:  a study of how hearing impaired people ‘hear’ music is also revealing – as well as the other side of that equation: those that have damaged or uncommon neural software for hearing. People in this realm include autistics (who often have an extreme sensitivity to sound); stroke victims; head trauma victims, etc. A study here shows that the ‘brain software’ is far more of an issue in terms of quality of hearing than the mechanics or objective scientific ‘quality’ (perhaps an oxymoron) of the acoustic pressure waves provided to the human ear.

Evelyn Glennie – profoundly deaf – is a master percussionist (we just saw her play at the Opening Ceremonies) – and has adapted ‘hearing’ to an astounding level of physical sense in vibrations – including her feet (she mostly plays barefoot for this reason). I would strongly encourage the reading of three short and highly informative letters she published on hearing, disabilities and professional music. Evelyn does not need, nor can she appreciate, DACs with only .0001%THD and time-domain accuracies of sub-milliseconds – but there is no question whatsoever that this woman hears music!

This may have been a bit of a round-about answer to the issues of why ‘most musicians’ may have what the author perceives to be ‘sub-optimal’ hi-fi systems – but I believe it more fully answers the larger question of aural perception. I for instance completely appreciate (to the limits of my ability as a listener – which are professional but not ‘golden ears’) the accuracy, imaging and clarity of high end sound systems (most of which are way beyond my budget for personal consumption); but the lack of such does not get in the way of my personal enjoyment of many musicians’ work – even if played back from my iPod. Maybe I have trained my own brain software just a little bit…

In closing, I would like to take an analogy from the still photographer’s world: this group, amateurs and professionals alike, puts an almost unbelievable level of importance on its kit – with various bits of hardware (and now software) taking either the blame or the glory for the quality (or lack thereof) of the images. My personal observation is that the eye/brain behind the viewfinder is responsible for about 93% of both the successes and failures of a given image to match the desired state. I think a very similar situation exists today in both ‘audiophile’ and ‘professional audio’ circles – it would be a welcome change to discuss facts, not fancy.

Warmest regards,

Ed

——-

From: John Watkinson
Sent: Mon 8/13/2012 12:50 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Hello All,

I think Ed has hit the nail on the head. It is generally true that people hear what they ought to hear and see what they ought to see, not what is actually there. This is not restricted to musicians, but they have refined it for music.

The consequences are that transistor radios and portable cassette recorders, which sound like strangled cats, were popular, as iPods with their MP3 distortion are today. But in photography, the Brownie and the Instamatic were popular, yet the realism or quality of the snaps was in the viewer’s mind. Most people are content to watch television sets that are grossly misadjusted and they don’t see spelling mistakes.

I would go a little further than Ed’s erudite analysis and say that most people not only see and hear what they ought to, but they also think what they ought to, even if it defies logic. People in groups reach consensus, even if it is wrong, because the person who is right suffers peer pressure to conform else risk being ejected from the group. This is where urban myths that have no basis in physics come from. The result is that most decisions are emotive and science or physics will be ignored. Why else do 40 percent of Americans believe in Creation? I look forward to having problems with groups because it confirms that my ability to use logic is undiminished. Was it Wittgenstein who said what most people think doesn’t amount to much?

Marketing, that modern cancer, leaps onto this human failing by playing on emotions to sell things. It follows that cars do not need advanced technology if they can be sold by draping them with scantily-clad women. Modern cars are still primitive because the technical requirements are distorted downwards by emotion. In contrast, Ed’s accurate observation that photographers obsess about their kit, as audiophiles do, illustrates that technical requirements can also be distorted upwards by emotion.

Marketing also preys on people to convince them that success depends on having all the right accessories and clothing for the endeavour. Look at all the stuff that sportsmen wear.

Whilst it would be nice for hi-fi magazines to discuss facts instead of fancy, I don’t see it happening as it gets in the way of the marketing.

Best,

John

——

From: Ed Elliott
Sent: Monday, August 13, 2012 8:11 PM
To: Mikael Reichel; Per Sjofors; Tom McMahon; John Watkinson
Subject: Why do musicians have lousy hi-fis?

Hi John, Mike, et al

Love your further comments, but I’m afraid that “marketing, that modern cancer” is a bit older than we would all like to think… one example that comes to mind is about 400-odd years old now – and actually represents one of the most powerful and enduring ‘brands’ ever promoted in Western culture: Shakespeare. Never mind that allusions to and adaptations of his plays have permeated our culture for hundreds of years – even ‘in the time’, Shakespeare created, bonded with and nurtured his customer base. Now, this was admittedly marketing in a purer sense (you actually got something for your money) – but nonetheless repeat business was just as much of an issue then as now. Understanding his audience; knowing that both tragedy and comedy were required to build the dramatic tension that would bring crowds back for more; recognizing the capabilities and understanding of his audience so that they were stimulated but not puzzled, entertained but not insulted – there was a mastery of marketing there beyond just the tights, skulls and iambic pentameter.

Unfortunately I do agree that with time, marketing hype has diverged so far from the underlying product that often they don’t share the same solar system… What’s worse is that in many large corporations the marketing department now actually runs product development… and I love your comments on the ‘stuff sportsmen wear’ – to extend my earlier analogy on photography, if I were to pack up all the things that the latest consumer photo magazine and camera shop said I needed in order to take a picture, I would need a band of Sherpas…

Now there is potentially a bit of light ahead: the almost certain demise of most printed magazines (along with newspapers, etc.) is creating a tumultuous landscape that won’t stabilize right away. This means that the entities that do survive to publish information no longer have to sell X pages of ads to keep the magazine alive (and hence pander to marketing, etc.). There are two very interesting facts about digital publishing that to date have mostly been ignored (and IMHO they are the root cause of digital mags being so poorly constructed and read – those who want to think they can convert all their print readers to e-zine subscriptions need to check out multi-year retention stats; they are abysmal.)

#1 is people read digital material in a very different way than paper. (The details must wait for another thread – too much now). Bottom line is that real information (aka CONTENT) is what keeps readership. Splash and video might get some hits, but the fickle-factor is astronomical in online reading – if you don’t give your reader useful facts or real entertainment they won’t stay.

#2 is that, if done correctly, digital publishing can be very effective, beautiful, evocative and compelling at a very low cost. There simply isn’t the need for massive ad dollars any more. So the type of information that you all are sharing here can be distributed much more widely than ever before. I do believe there is a window of opportunity for getting real info out in front of a large audience, to start chipping away at this Himalayan pile of stink that defines so much of (fill in the blank: audio, tv, cars, vitamins, anti-aging creams, etc.)

Ok, off to answer some e-mails from that dwindling supply of real importance: paying clients!

Many thanks

Ed

——–

From: John Watkinson
Sent: Tue 8/14/2012 12:57 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Dear Ed,

This is starting to be interesting. I take your point about Shakespeare being marketed, but if we want to go back even further, we have to look at religion as the oldest use of marketing. It’s actually remarkable that the religions managed to prosper to the point of being degenerate when they had no tangible product at all. Invent a problem, namely evil, and then sell a solution, namely a god. It’s a protection racket. Give us money so we can build cathedrals and you go to heaven. It makes oxygen free speaker cables seem fairly innocuous. At least the hi-fi industry doesn’t threaten you with torture. If you  read anything about the evolution of self-replicating viruses, suddenly you see why the Pope is opposed to contraception.

I read an interesting book about Chartres cathedral, in which it was pointed out that the engineering skills and the underlying science needed to make the place stand up (it’s more air than stone) had stemmed from a curiosity about the laws of nature that would ultimately lead to the conclusion that there was no Creation and no evidence for a god, that the earth goes round the sun and that virgin birth is due to people living in poverty sharing bathwater.

If you look at the achievements of hi-fi and religion in comparison to the achievements of science and engineering, the results are glaring. The first two have made no progress in decades, because they are based on marketing and have nothing to offer. Prayer didn’t defeat Hitler, but radar, supercharging and decryption may have played a small part.

Your comments about printed magazines and newspapers are pertinent. These are marketing tools, and as a result the copy is seldom of any great merit, as Steve Guttenberg continues to demonstrate in his own way. Actually the same is true for television. People think the screensaver was a computer invention. It’s not – it’s what television broadcasts between commercial breaks.

So yes, you are right that digital/internet publishing is in the process of pulling the rug on traditional media. Television is over. I don’t have one and I don’t miss the dumbed-down crap and the waste of time. Himalayan pile of stink is a wonderful and evocative term!

Actually services like eBay are changing the world as well. I hardly ever buy anything new if I can get one someone doesn’t want on eBay. It’s good for the vendor, for me and the environment.

In a sense the present slump/recession has been good in some ways. Certainly it has eroded people’s faith in politicians and bankers, and the shortage of ready cash has led many to question consumerism.

Once you stop being a consumer, reverse the spiral and decide to tread lightly on the earth, the need to earn lots of money goes away. My carbon neutral house has zero energy bills and my  policy of buying old things and repairing them means I have all the gadgets I need, but without the cost. The time liberated by not needing to earn lots of money allows me to make things I can’t buy, like decent loudspeakers. It means I never buy pre-prepared food because I’m not short of time. Instead I can buy decent ingredients and know what I’m eating.

One of the experiences I treasure due to reversing the spiral was turning up at a gas station in Luxembourg. There must have been a couple of million dollars’ worth of pretentious cars filling up – BMW, Lexus, Mercedes, the lot. And they all turned to stare at my old Jaguar when I turned up. It was something they couldn’t have, because they were too busy running on the treadmill to run a car that needs some fixing.

Best,

John

——

From: Ed Elliott
Sent: Wed 8/15/2012 1:01 PM
To: Mikael Reichel; Per Sjofors; Tom McMahon; John Watkinson
Subject: Why do musicians have lousy hi-fis?

Hi John,

Yes, I’m finding this part of my inbox so much more interesting than the chatterings of well-intentioned (but boring) missives; and of course the ubiquitous efforts of (who else!) the current transformation of tele-marketers into spam producers… I never knew that so many of my body parts needed either extending, flattening, bulking up, slimming down, etc. etc!

Ahh! Religion… yes, got that one right the first time. I actually find that there’s a more nefarious aspect to organized religion: to act as a proxy for nation-states that couldn’t get away with the murder, land grabs, misogyny, physical torture and mutilation if these practices were “state sponsored” as opposed to “expressions of religious freedom.”  Always makes me think of that Bob Dylan song, “With God on Our Side…”

On to marketing in television… and tv in general… I actually turned mine on the other day (well, since I don’t have a ‘real’ tv – but I do have the cable box, as I use it for high-speed internet – I turned on the little tuner in my laptop so I could watch the Olympics in HD; the bandwidth of NBC’s streaming left something to be desired) – and as usual found that the production quality and techniques used in the adverts mostly exceed the filler… The message, well, that went the way of all adverts: straight back out of my head into the ether… What I want to know – and this is a better trick than almost anything – is how the advertisers ever got convinced that watching this drivel actually affects what people buy. Or am I just an odd-bod who is not swayed by hype, mesmerizing disinformation [if I buy those sunglasses I’ll get Giselle to come home with me…], or downright charlatanry?

And yay for fixing things and older cars… I bought my last car in 1991 and have found no reason [or desire] to replace it. And since (thank g-d) it was “pre-computer” it is still ‘fixable’ with things like screwdrivers and spanners… I think another issue in general is that our cultures have lost the understanding of ‘preventative maintenance’ – a lot of what ends up in the rubbish bin is there due to lack of care while it was alive…

Which brings me back to a final point:  I do like quality, high tech equipment, when it does something useful and fulfills a purpose. But I see a disappointing tendency with one of the prime vendors in this sector:  Apple. I am currently (in my blog) writing about the use of iPhones as a practical camera system for HD cinemaphotography – with all of the issues and compromises well understood! Turns out that two of the fundamental design decisions by Apple are at the core of limiting the broader adoption of this platform (I describe how to work around this, but it’s challenging):  the lack of a removable battery and removable storage.

While there are obvious advantages to those decisions in terms of reliability and industrial design, it can’t be ignored that the lack of both of those features certainly militates towards a user ‘upgrading’ at regular intervals (since they can’t swap out a battery or add more storage). And now they have migrated this ‘sealed design’ to the laptops… the new MacBook Air is for all practical purposes unrepairable (again, no removable battery, the screen is glued into the aluminium case, and all sub-assemblies are wave-soldered to the main board). The construction of even the MacBook Pro is moving in that direction.

So my trusty Dell laptop, with all of its warts, is still appreciated for its many little screws and flaps… when a bit breaks, I can take it apart and actually change out just a Bluetooth receiver, or upgrade memory, or even swap the cpu. Makes me feel a little less redundant in this throw-away world.

I’ll leave you with this:

Jay Leno driving down Olive Ave. last Sunday in his recently restored 1909 Doble & White steam car. At 103 years old, this car would qualify for all current California “low emissions” and “fuel efficiency” standards…

(snapped from my iPhone)

Here is the link to Jay’s videos on the restoration process.

Enjoy!

Ed

iPhone Cinemaphotography – A Proof of Concept (Part 1)

August 3, 2012 · by parasam

I’m introducing a concept that I hope some of my readers may find interesting: the production of an HD video built entirely using only the iPhone (and/or iPad). Everything from storyboard to all photography, editing, sound, titles and credits, graphics and special effects – and final distribution – can now be performed on a “cellphone.” I’ll show you how. Most of the attention paid to the new crop of highly capable ‘cellphone cameras’, such as those in the iPhone and certain Android phones, has gone to still photography. While motion photography (video) is certainly well-known, it has not received the same attention and detail – nor the same number of apps – as its single-image sibling.

While I am using a single platform with which I am familiar (iOS on the iPhone/iPad), I believe this concept could be carried out on the Android class of devices as well. I have not researched (nor do I intend to research) that possibility – I’ll leave that for others who are more familiar with that platform. The purpose is to show that such a feat CAN be done – and hopefully done reasonably well. It’s only been a few years since the production of HD video was strictly the realm of serious professionals, with budgets of hundreds of thousands of dollars or more. While there are of course many compromises – and I don’t for a minute pretend that the range of possible shots or their quality will come anywhere near what a high-quality DSLR, RED, Arri or other professional video camera can produce – I do know that a full HD (1080p) video can now be produced entirely on a low-cost mobile platform.

This POC (Proof Of Concept) is intended as more than just a lark or a geeky way to eat some spare time:  the real purpose is to bring awareness that the previous bar of high cost cinemaphotography/editing/distribution has been virtually eliminated. This paves the way for creative individuals almost anywhere in the world to express themselves in a way that was heretofore impossible. Outside of America and Western Europe both budgets and skilled operator/engineers are in far lower supply. But there are just as many people who have a good story to tell in South Africa, Nigeria, Uruguay, Aruba, Nepal, Palestine, Montenegro and many other places as there are in France, Canada or the USA. The internet has now connected all of us – information is being democratized in a huge way. Of course there are still the ‘firewalls’ of North Korea, China and a few others – but the human thirst for knowledge, not to mention the unbelievable cleverness and endurance of 13-year-old boys and girls in figuring out ‘holes in the wall’ shows us that these last bastions of stolidity are doomed to fall in short order.

With Apple and other manufacturers doing their best to leave nary a potential customer anywhere in the world ‘out in the cold’, availability, in terms of both distribution and affordability, is almost ubiquitous. With apps now typically costing a few dollars (it’s almost insane – the Avid editor for iOS is $5; the Avid Media Composer software for PC/Mac is $2,500), an entire production / post-production platform can be assembled for under $1,000. This exercise is about what’s possible, not what is easiest or most capable. Yes, there are many limitations. Yes, some things will take a lot longer. But what you CAN do is nothing short of amazing. That’s the story I’m going to share with you.

A note to my readers:  None of the hardware or software used in this exercise was provided by any vendor. I have no commercial relationship with any vendor, manufacturer or distributor. Choices I have made or examples I use in this post are based purely on my own preference. I am not a professional reviewer, and have made no attempt to exhaustively research every possible solution for the hardware or software that I felt was required to produce this video. All of the hardware and software used in this exercise is currently commercially available – any reasonably competent user should be able to reproduce this process.

Before I get into detail on hardware or software, I need to remind you that the most important part of any video is the story. Just having a low-cost, relatively high-quality platform on which to tell your ‘story’ won’t help if you don’t have something compelling to say – and the people/places/things in front of the lens to say it. We have all seen that vast amounts of money and technical talent mean nothing in the face of a lousy script or poor production values – just look over some of the (unfortunately many) Hollywood bombs… I’m the first to admit that motion-picture storytelling is not my strong point. I’m an engineer by training and my personal passion is still photography – telling a story with a single image. So… in order to bring this idea to fruition, I needed help. After some thought, I decided that ‘piggybacking’ on an existing production was the most feasible way to realize the idea: basically adding a few iPhone cameras to a shoot where I could take advantage of an existing set, actors, lighting, direction, etc. For me, this was the only practical way to make it happen in a relatively short time frame.

I was lucky enough to know a very talented director, Ambika Leigh, who was receptive and supportive of my idea. After we discussed my general idea of ‘piggybacking’ she kindly identified a potential shoot. After initial discussions with the producers, the green light for the project was given. The details of the process will come in future posts, but what I can say now (the project is an upcoming series that is not released yet – so be patient! It will be worth the wait) is that without the support and willingness of these three incredible women (Ambika Leigh, director; Tiffany Price & Lauren DeLong, producers/actors/writers) this project would not have moved forward with the speed, professionalism and just plain fun that it has. At a very high level, the series brings us into the clever and humorous world of the “Craft Ladies” – a couple of friends that, well, like to craft – and drink wine.

“Craft Ladies is the story of Karen and Jane, best friends forever, who love to craft… they just aren’t any good at it. Over the years Karen and Jane’s lives have taken slightly different paths but their love of crafting (and wine) remains strong. Tune in in September to watch these ladies fulfill their dream… a craft show to call their own. You won’t find Martha Stewart here, this is crafting Craft Ladies style. Craft Up Nice Things!”

Please check out their links for further updates and details on the ‘real thing’

www.facebook.com/CraftUpNiceThings
www.twitter.com/#!/2craftladies
www.CraftUpNiceThings.com

I am solely responsible for the iPhone portion of this program – so all errors, technical gaffes, editorial bloops and other stumbles are mine. As said, this is a proof of concept – not the next Spielberg epic… My intention is to follow – as closely as my expertise and the available iOS technology will allow – the editorial decisions, effects, titles, etc. that end up in the ‘real show.’ To this end I will necessarily lag a bit in my production, as I have to review the assembled and edited footage first. However, I will make every effort to have my iPhone version of this series ready for distribution shortly after the real version launches. Currently this is planned for some time in September.

For the iPhone shoot, two iPhone4S devices were used. I need to thank my capable 2nd camerawoman – Tara Lacarna – for her endurance, professionalism and support over two very long days of shooting! In addition to her new career as an iPhonographer (ha!) she is a highly capable engineer, musician and creative spirit. While more detail will be provided later in this post, I would also like to thank Niki Mustain of Schneider Optics for her time (and the efforts of others at this company) in helping me get the best possible performance from the “iPro” supplementary lenses that I used on portions of the shoot.

Before getting down to the technical details of equipment and procedure, I’ll lay out the environment in which I shot the video. Of course, this can vary widely, and therefore the exact technique used, as well as some hardware, may have to change and adapt as required. In this case the entire shoot was indoors using two sets. Professional lighting (3200K) was provided for the principal photography (which used various high-end DSLR cameras with cinema lenses). I had to work around the available camera positions for the two iPhone cameras, so my shots will not be the same as those used in principal photography. Most shots were locked off with both iPhones on tripods; there were some camera moves and a few handheld shots. The first set of episodes was filmed over two days (two very, very long days!!) and resulted in about 116GB of video material from the two iPhones. In addition to Ambika, Tiffany, Lauren and Tara there was a dedicated and professional crew of camera operators, gaffers, grips, etc. (with many functions often performed by just one person – this was after all about quality, not quantity – not to mention the lack of a 7-figure Hollywood budget!). A full list of credits will be in a later post.

Aside from the technical challenges; the basic job of getting lines and emotion on camera; taking enough camera angles, close-ups, inserts and so on to ensure raw material for editorial continuity; and just plain endurance (San Fernando Valley, middle of summer, had to close all windows and turn off all fans and A/C for each shot due to noise, a pile of people on a small set, hot lights… you get the picture…) – the single most important ingredient was laughter. And there was lots of it!! At one time or another, we had to stop down for several minutes until one or the other of us stopped laughing so hard that we couldn’t hold a camera, say a line or direct the next sequence. That alone should prompt you to check this series out – these women are just plain hilarious.

Hardware:

As mentioned previously, two iPhone4S cameras were used. Each one was the 32GB model. Since shooting video generates large files, most user data was temporarily deleted off each phone (easy to restore later with a sync using iTunes). Approximately 20GB free space was made available on each phone. If one was going to use an iPhone for a significant amount of video photography the 64GB version would probably be useful. The down side is that (unless you are shooting very short events) you will still have to download several times a day to an external storage device or computer – and the more you have to download the longer that takes! As in any process, good advance planning is critical. In my case with this shoot, I needed to coordinate ‘dumping times’ with the rest of the shoot:  there was a tight schedule and the production would not wait for me to finish dumping data off the phones. The DSLR cameras use removable memory cards, so it only takes a few minutes to swap cards, then those cameras are ready to roll again. I’ll discuss the logistics of dumping files from the phones in more detail in the software section below. If one was going to attempt long takes with insufficient break time to fully dump the phone before needing to shoot again, the best solution would be to have two iPhones for each camera position, so that one phone could be transferring data while the other one was filming.

In order to provide more visual control, as well as interest, a set of external adapter lenses (the “iPro” system by Schneider Optics) was used on various shots. A total of three different lenses are available: telephoto, wide-angle and a fisheye. A detailed post on these lenses – and adaptor lenses in general – is here. For now, you can visit their site for further detail. These lenses attach to a custom shell that is affixed to the iPhone. The lenses are easily interchanged with a bayonet mounting system. Another vital feature of the iPro shell for the phone is the provision for tripod mounting – a must for serious cinemaphotography – especially with the telephoto lens which magnifies camera movement. Each phone was fitted with one of the iPro shells to facilitate tripod mounting. This also made each phone available for attaching one of the lenses as required for the shot.

iPro “Fisheye” lens

iPro “Wide Angle” lens

iPro “Telephoto” lens

Another hardware requirement is power: shooting video kills batteries faster than just about any other activity on the iPhone. You are using most of the highest power-consuming parts of the phone – all at the same time: the camera sensor, the display, the processor, and high-bandwidth memory writes. A fully charged iPhone won’t even last two hours shooting video, so one must run the phone on external power, or plan the shoot around frequent (and lengthy!) recharge sessions. Bring plenty of extra cables, spare chargers, extension cords, etc. – it’s very cheap insurance to keep the phones running. Damage to cables while on a shoot is almost a guaranteed experience – don’t let that ruin your session.

A particular challenge that I had was a lack of a ‘feed through’ docking connector on the Line6 “Mobile In” audio adapter (more on this below). This meant that while I was using this high quality audio input adapter I was forced to run on battery, since I could not plug in the Mobile In device and the power cable at the same time to the docking connector on the bottom of the phone. I’m not aware of a “Y” adapter for iPhone docking connectors, but that would have really helped. It took a lot of juggling to keep that phone charged enough to keep shooting. On several shots, I had to forgo the high quality audio as I had insufficient power remaining and had to plug in to the charger.

As can be seen, the lack of both removable storage and a removable battery are significant challenges for using the iPhone in cinemaphotography. This can be managed, but it’s a critical point that requires careful attention. Another point to keep in mind is heat. Continual use of the phone as a video camera definitely heats up the phone. While neither phone ever overheated to the point where it became an issue, one should be aware of this fact. If one was shooting outside, it may be helpful to (if possible) shade the phone(s) from direct sunlight as much as practical. However, do not put the iPhones in the ice bucket to keep them cool…

Gitzo tripod with fluid head attached

Close-up of fluid head

Tripods are a must for any real video work: camera judder and shake are very distracting to the viewer, and are impossible to remove (with any current iPhone app). Even with serious desktop horsepower (there is a rather good toolset in Adobe After Effects for helping to remove camera shake), it takes a lot of time, skill and computing power. Far better to avoid it in the first place whenever possible. Since ‘locked off’ shots are not as interesting, it’s worth getting fluid heads for your tripods so you can pan and tilt smoothly. A good high-quality tripod is also well worth the investment: flimsy ones will bend and shake. While the iPhone is very light – and this may tempt one to go with a very lightweight tripod – this will work against you if you want to make any camera tilts or pans. The very light weight of the phone actually causes problems here: it’s hard to smoothly move a camera that has almost no mass. A very rigid and sturdy tripod will at least help in this regard. You will need considerable practice to get used to the feel of your particular fluid head, get the tension settings just right, etc., in order to effect the smoothest camera movements. Remember this is a very small sensor, and the best results will be obtained with slow and even camera pans/tilts.

For certain situations, miniature tripods or dollies can be very useful, but they don’t take the place of a normal tripod. I used a tiny tripod for one shot, and experimented with the Pico Dolly (sort of a miniature skateboard that holds a small camera), although I did not actually use it for a finished shot. This is where the small size and light weight of the iPhone can be a plus: you can hang it and place it in locations that would be difficult to impossible with a normal camera. Like anything else though, don’t get too creative and gimmicky: the job of the camera is to record the story, not call attention to itself or to technology. If a trick or a gadget can help you visually tell the story – then it’s useful. Otherwise stick with the basics.

Another useful trick I discovered that helped stabilize my hand-held shots:  my tripod (as many do) has a removable center post on which the fluid head is mounted (that in turn holds the camera). By removing the entire camera/fluid-head/center-post assembly I was able to hold the camera with far greater accuracy and stability. The added weight of the central post and fluid head, while not much – maybe 500 grams – certainly added stability to those shots.

Tripod showing center shaft extended before removal.

Center shaft removed for “hand-held” use

If you are planning on any camera moves while on the tripod (pans or tilts), it is imperative that the tripod be leveled first – and rechecked every time you move it or dismount the phone. Nothing worse than watching a camera pan move uphill as you traverse from left to right… A small circular spirit level is the perfect accessory. While I have seen very small circular levels actually attached to tripod heads, I find them too small for real accuracy. I prefer a small removable device that I can place on top of the phone itself (which then accounts for all the hardware up to and including the shell) that can affect alignment. The one I use is 25mm (1″) in diameter.

I touched on the external audio input adapter earlier while discussing power for the iPhones; I’ll detail it now. For any serious video photography you must use external microphones: the one in the phone itself, although amazingly sensitive, has many drawbacks. It is single-channel, where the iPhone hardware (and several of the better video camera apps) is capable of recording stereo; you can’t focus the sensitivity of the microphone; and most importantly, the mike is on the front of the phone at the bottom – pointing away from where your lens is aimed!

While it is possible to plug a microphone into the combination headphone/microphone connector on the top of the phone, there are a number of drawbacks. The first is that it’s still a mono input – only one channel of sound. The next is that the audio quality is not that great: this input was designed for telephone headset use, so extended frequency response, low noise and reduced harmonic distortion were not part of the design parameters. Far better audio quality is available on the digital docking connector on the bottom of the phone. That said, there are very few devices actually on the market today (that I have been able to locate) that will function in the environment of video cinemaphotography, particularly if one is using the iPro shell and tripod-mounting the iPhone. Many of the devices treat the iPhone as just an audio device (the phone actually snaps into several of the units, making it impossible to use as a camera); with others, the mechanical design is not compatible with either the iPro case or tripod mounting. Others offer only a single-channel input (these are mostly designed for guitar input, so budding Hendrix types can strum into GarageBand). The only unit I was able to find that met all of my requirements (stereo line input, high audio quality, and a mechanical design that did not interfere with the tripod or the iPro case) was the “Mobile In”, manufactured by Line6. Even this device is primarily a guitar input unit, but it does have a stereo line-in connector that works very well. In order to use the hardware, you must download and install their free app (and it’s on the fat side, about 55MB), which contains a huge amount of guitar effects – totally useless for the line input, but the hardware won’t work without it. So just install it and forget about it; you never need to open the MobilePOD app in order to use the line input connector. As discussed above in the section on power, the only major drawback is that once this device is plugged in you can’t run your phone off external power. I really need to find that “Y” adapter for the docking connector…

“Mobile In” audio input adapter attached.

Now you may ask: why do I need a line input connector when I’m using microphones? My aim here is to produce the highest quality content possible while still using the iPhone as the camera/recorder. For the reasons already discussed, the use of external microphones is required. Typically a number of mikes will be placed, fed into a mixer, and then a line-level feed (usually stereo) will be sent to the sound recorder. In all ‘normal’ (i.e. not using cellphones as cameras!) video shoots, the sound is almost always recorded on a separate device, synchronized in some fashion to each of the cameras so the entire shoot is in sync. In this particular shoot, the two actors on the set were individually miked with lavalier microphones (there is a whole hysterical story on that subject, but it will have to wait until after that episode airs…) and a third, directional boom mike was used for ambient sound. The three mikes were fed into a small portable mixer/sound recorder. The stereo output (usually used for headphone monitoring – a line-level output) was fed (through a “Y” cable) to both the monitoring headphones and the input of the Mobile In device. Essentially, I just ‘piggybacked’ on top of the existing audio feed for the shoot.

This didn’t violate my POC, as one would need this same equipment – or something like it – on any professional shoot. At a minimum, one could just use a small mixer; since the iPhone is recording the sound, an external recorder is not required. I won’t attempt to further discuss all the issues in recording high-quality sound – that would take a full post (if not a book!) – but there is a massive amount of literature out there on the web. Good sound recording is an art – if possible, avail yourself of someone who knows this skill to assist you on your shoot; it will be invaluable. I’ll just mention a few pointers to complete this part of the discussion:

  • Record the most dynamic range possible without distortion (big range between soft and loud sounds). This will markedly improve the presence of your audio tracks.
  • Keep all background noise to an absolute minimum. Turn off all cellphones! (Put the iPhones that are ‘cameras’ in “airplane mode” so they won’t be disturbed by phone calls, texts or e-mails.) Turn off fans, air conditioners, refrigerators (if you are near a kitchen), etc. Take a few moments after calling ‘quiet on the set’ to sit still and really listen to your headphones to ensure you don’t hear any noise.
  • As much as possible, keep the loudness levels consistent from take to take – it will help keep your editor (or yourself…) from taking out the long knives after way too many hours trying to normalize levels between takes…
  • If you use lavalier mikes (those tiny microphones that clip onto clothing – they are available in ‘wired’ or ‘wireless’ versions) you need to listen carefully during rehearsals and actual takes for clothing rustle. That can be very distracting – you may have to stop and reposition the mike so that the housing is not touching any clothing. These mikes come with little clips that actually mount on to the cable just below the actual microphone body – thereby insulating clothing movement (rustle) from being transmitted to the sensor through the body of the microphone. Take care in mounting and test with your actor as they move – and remind them that clasping their hands to their chest in excitement (and thumping the mike) will make your sound person deaf – and ruin the audio for that shot!

Actors’ view of the camera setup for a shot, (2 iPhones, 3 DSLRs)

Storage – and the process of dumping (transferring video files from the iPhones to external storage) – is a vital part of hardware, software and procedure alike. The hardware I used is discussed here; the software and procedure are covered in the next section. Since the HD video files consume about 2.5GB for every 10 minutes of filming, even the largest-capacity iPhone (64GB) will run out of space in short order. As mentioned earlier, I used the 32GB models on this shoot, with about 20GB free space on each phone. That meant that, at a maximum, I had a little over an hour’s storage on each phone. During the two days of shooting, we shot just under 5 hours of actual footage – which amounted to a total of 116GB from the two iPhones. (Not every shot was shadowed by the iPhones: some of the close-ups and inserts could not be covered, as the iPhones would have appeared in the shot composed by the DSLR cameras.)
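As a sanity check, the back-of-envelope arithmetic (Python, using only the figures quoted in this post) works out as follows:

    GB_PER_MIN = 2.5 / 10    # ~0.25GB per minute of 1080p footage (from this post)
    FREE_GB = 20             # space cleared on each 32GB phone

    print(f"minutes per phone before a dump: {FREE_GB / GB_PER_MIN:.0f}")   # ~80

    shoot_gb = 5 * 60 * GB_PER_MIN * 2    # two phones shadowing ~5 hours of takes
    print(f"worst case for the whole shoot: {shoot_gb:.0f}GB")              # ~150
    # We actually generated 116GB, below the 150GB worst case, because not
    # every take could be shadowed by both iPhones.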

The challenge to this project was to not involve anything other than the iPhone/iPad for all factors of the production. The dumping of footage from the iPhones to external storage is one area where neither Apple nor any 3rd party developer (that I have found) offers a purely iOS solution. With the lack of removable storage, there are only two ways to move files off the iPhone: Wi-Fi or the USB cable attached to the docking connector. Wi-Fi is not a practical solution in this environment: the main reason is it’s too slow. You can find as many ‘facts’ on iPhone Wi-Fi speed as there are types of orchids in the Amazon, but my research (verified by personal tests) shows that, in real-world, practical terms, 8Mb/s is a top-end average for upload (which is what you need to transmit files FROM the phone to an external storage device). That’s only about 800KB/s of real throughput – so it would take the better part of an hour to upload one 2.5GB movie file – which represents just 10 minutes of shooting! Not to mention the issues of Wi-Fi interference, dropped connections, etc. etc.

That brings us to cabled connections. Currently, the only way to move data off of (or onto, for that matter) an iPhone is to use a computer. While the Apple Time Capsule could in theory function as a direct-to-phone data storage device, it only connects via Wi-Fi. However, the method I chose only uses the computer as a ‘connection link’ to an external hard drive, so in my view it does not break my premise of an “all iOS” project. When I get to the editing stage, I just reverse the process and pull files back from the external drive through the computer to the phone (in this case using iTunes).

I will discuss the precise technique and software used below, but suffice it to say here that I used a PC as the computer – mainly just because that is the laptop that I have. It also proves, however, that there is no issue of “Mac vs PC” as far as the computer goes. I feel this is an important point, as in many countries outside the USA and Western Europe the price premium on Apple computers is such that they are very scarce. For this project, I wanted to make sure the required elements were as widely available as possible.

The choice of external storage is important for speed and reliability’s sake. Since the USB connection from the phone to the computer is limited to v2.0 (480Mb/s theoretical), one may assume that just any USB2.0 external drive would be sufficient. That’s not actually the case, as we shall see… While the nominal link speed of USB2.0 is 480Mb/s (60MB/s raw, or roughly 40MB/s of actual payload once protocol overhead is accounted for), even that is never matched in reality. USB chipsets in the internal hub in the computer, processing power in the phone and the computer, other processes running on the computer during transfer, bus and CPU speed in the computer, and the actual disk controller and disk speed of the external storage – all these factors serve to significantly reduce transfer speed.

Probably the most important is the actual speed of the external disk. Most common portable USB2.0 disks (the small 2.5″ format) run at 5400RPM, and have disk controller chipsets that are commensurate, with actual performance in the 5-10MB/s range. This is too slow for our purposes. The best solution is to use an external RAID array of two ‘striped’ disks [RAID 0] using high performance 7200RPM SATA disks with an appropriately designed disk controller. A device such as the G-RAID Mini is a good example. If you are using a PC, you get the best performance with an eSATA connection to the drive (my laptop has a built-in eSATA connector, but PC Card adapters are available that easily support this connectivity for computers that don’t have it built in). This offers the highest performance (real-world tests show average write speeds of 115MB/s using this device). If you are using an Apple computer, opt for the FW800 connection (I’m not aware of eSATA on any Mac computer). While this limits the performance to around 70MB/s maximum, it’s still much faster than the USB2.0 interface from the phone so it’s not an issue. My experience has shown that a significant amount of ‘headroom’, in terms of speed performance, on the external drive is desirable. You just don’t need the drive to slow things down any.
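To put those throughput figures in perspective, here is a small illustrative Python snippet that converts them into transfer times for one 10-minute (2.5GB) clip. The speeds are the real-world numbers quoted in this post, not theoretical link rates – your hardware will vary:

    # Time to move one 10-minute clip (~2.5GB) at the real-world speeds
    # quoted in this post. Purely illustrative.
    CLIP_MB = 2.5 * 1024

    real_world_MB_per_s = {
        "Wi-Fi upload (iPhone)":           0.8,
        "USB2.0 portable 5400RPM drive":   7.5,
        "FW800 (Mac)":                    70.0,
        "eSATA RAID 0 (G-RAID Mini)":    115.0,
    }

    for link, speed in real_world_MB_per_s.items():
        print(f"{link:32s}{CLIP_MB / speed / 60:7.1f} minutes")
    # Wi-Fi: ~53 min; USB2 portable: ~5.7 min; FW800: ~0.6 min; eSATA: ~0.4 min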

There are other viable alternatives for external drives, particularly if one needs a drive that does not require an external power supply (which the G-RAID does, due to its performance). Keep in mind that while it’s possible to run a laptop and external drive all off battery power, you really won’t want to do this – for one, unless you are on a remote outdoor location shoot, you will have AC power – and disk writing at continuous high throughput is a battery killer! That said, a good alternative (for PC) is one of the Seagate GoFlex USB3.0 drives. I use a 1.5TB model that houses a high-performance 7200RPM drive and supports write speeds of up to 50MB/s. For the Mac, Seagate has a Thunderbolt model. Although the Thunderbolt interface is twice as fast (10Gb/s vs 5Gb/s) as USB3.0, it makes no difference in transfer speed here (these single-drive storage devices can’t approach the limits of either interface). However, there is a very good reason to go with USB3.0/eSATA/Thunderbolt instead of USB2.0 – overall performance. With the newer high-speed interfaces, the full system (hard disk controller, interface chipset, etc.) is designed for high-speed data transfer, and I have proved to myself that it DOES make a difference. It’s very hard to find a USB2.0 system that matches the performance of a USB3.0/etc. system – even on a 2.5″ single-drive subsystem.

The last thing to cover here under storage is backup. Your video footage is irreplaceable. Procedure will be covered below, but on the hardware side: provide a second external drive on the set. It’s simply imperative that you immediately back up the footage onto a second physical drive as soon as practical – NOT at the end of the day! If you have a powerful enough computer, with the correct connectivity, etc., you can actually copy the iPhone files to two drives simultaneously (the best solution), but otherwise plan on copying the files from one external drive to the backup while the next scenes are being shot (as a background task).
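As a purely illustrative sketch of that background task (in Python, with hypothetical paths – substitute your own):

    # Mirror freshly dumped clips from the primary drive to a second
    # PHYSICAL drive - a background task between takes.
    import shutil
    from pathlib import Path

    PRIMARY = Path("E:/shoot/day1")   # first external drive (hypothetical path)
    BACKUP  = Path("F:/shoot/day1")   # second physical drive (hypothetical path)

    def mirror_new_clips():
        BACKUP.mkdir(parents=True, exist_ok=True)
        for clip in PRIMARY.glob("*.mov"):
            copy = BACKUP / clip.name
            if not copy.exists():          # only copy what's new since last pass
                shutil.copy2(clip, copy)   # copy, preserving timestamps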

I’ll close with some final suggestions: while this description of hardware and process is not meant in any way to be a tutorial on cinematography, audio, etc. etc. – here is a small list (again, this is under ‘hardware’ as it concerns ‘stuff’) of useful items that will make your life easier “on the set”:

  • Proper transport cases, bags, etc. to store and carry all these bits. Organization, labeling, color-coding, etc. all helps a lot when on a set with lots of activity and other equipment.
  • Spare cables for everything! Murphy will see to it that the one item for which you have no duplicate will get bent during the shoot…
  • Plenty of power strips and extension cords.
  • Gorilla tape or camera tape (this is NOT ‘duct tape’). Find a gaffer and he/she will explain it to you…
  • Small folding table or platform (for your PC/Mac and drives) – putting high value equipment on the floor is asking for BigFoot to visit…
  • Small folding stool (appropriate for the table above), or an ‘apple box’ – crouching in front of computer while manipulating high value content files is distracting, not to mention tiring.
  • If you are shooting outside, more issues come into play. Dust is the big one. Cans of compressed air, lens tissue, camel-hair brushes, zip-lock baggies, etc. etc. – none of the items discussed in this entire post appreciate dust or dirt…
    • Cooling. Mentioned earlier, but you’ll need to keep the phone and computer as cool as practical (unless of course you are shooting in Scotland in February in which case the opposite will be true: trying to figure out how to keep things warm and dry in the middle of a wet and freezing moor will become paramount).
    • Special mention for ocean-front shoots:  corrosion is a deadly enemy of iPhones and other such equipment. Wipe down ALL equipment (with appropriate cloths and solutions) every night after the shoot. Even the salt air makes deposits on every exposed metal surface – and later on a very hard to remove scale will become apparent.
  • A final note for sunny outdoor shoots: seeing the iPhone screen is almost impossible in bright sunlight, and unlike DSLRs the iPhone does not have an optical viewfinder. Some sort of ‘sunshade’ will be required. While researching this online, I came across this little video that shows one possible solution. Obviously this would have to be modified to accommodate the audio adapter, iPro lenses, etc. shown in my project, but it will hopefully give you some ideas. (Thanks to triplelucky for this video).

Software:

As amazing as the hardware capabilities of the above system are (iPhone, supplemental lenses, audio adapters, etc.) – none of this would be possible without the sophisticated software that is now available for this platform at such low cost. The list of software that I am currently using to produce this video is purely of my own choosing – there may be other equally viable solutions for each step or process. I feel what is important is the possibility of the process, not the precise piece of kit used to accomplish the task. Obviously, as I am using the iOS platform, all the apps are “Apple iPhone/iPad compliant”. The reader who chooses an alternate platform will need to do a bit of research to find similar functionality.

As a parallel project, I am currently describing my experiences with the iPhone camera in general, as well as many of the software packages (apps) that support the iPhone still and video camera. These posts are elsewhere in this same blog location. For that reason, I will not describe in any detail the apps here. If software that is discussed or listed here is not yet in my stable of posts, please be patient – I promise that each app used in this project will be discussed in this blog at some point. I will refer the reader to this post where an initial list of apps that will be discussed is located.

Here is a short list of the apps I am currently using. I may add to this list before I complete this project! If so, I will update this and other posts appropriately.

Storyboard Composer Excellent app for building storyboards from shot or library photos, adding actors, camera motion, script, etc. Powerful.

Movie*Slate A very good slate app.

Splice Unbelievable – a full video editor for the iPhone/iPad. Yes, you can: drop movies and stills on a timeline, add multiple sound tracks and mix them, work in full HD, apply loads of video and audio efx, add transitions, burn in titles, resize, crop, etc. etc. Now that doesn’t mean that I would choose to edit my next feature on a phone…

Avid Studio  The renowned capability of Avid now stuffed into the iPad. Video, audio, transitions, etc. etc. Similar in capability to Splice (above) – I’ll have a lot more to say after these two apps get a serious test drive while editing all the footage I have shot.

iTC Calc The ultimate time code app for iDevices. I use on both iPad and iPhone.

FilmiC Pro Serious movie camera app for iPhone. Select shooting mode, resolution, 26 frame rates, in-camera slating, colorbars, multiple bitrates for each resolution, etc. etc.

Camera+ I use this as much for editing stills as shooting, biggest advantage over native iPhone camera app is you can set different part of frame for exposure and focus.

almost DSLR The closest thing to fully manual control of the iPhone camera that you can get. Takes some training, but is very powerful once you get the hang of it.

PhotoForge2 Powerful editing app. Basically Photoshop on the iPhone.

TrueDoF This one calculates true depth-of-field for a given lens, sensor size, etc. I use this to plan my range of focus once I know my shooting distance.

OptimumCS-Pro This is sort of the inverse of the above app – here you enter the depth of field you want, then OCSP tells you the shooting distance and aperture you need for that.

Juxtaposer This app lets you layer two different photos onto each other, with very controllable blending.

Phonto One of the best apps for adding titles and text to shots.

Some of the above apps are designed for still photography only, but since stills can be laid down in the video timeline, they will likely come into use during transitions, effects, title sequences, etc.

I used FilmiC Pro as the only video camera app for this project. This was based firstly on personal preference and on the capabilities it provided (the ability to lock focus, exposure and white balance was, in my opinion, critical to maintaining continuity across takes). Once I had selected a video camera app with which I was comfortable, I felt it important to use it on both iPhones – again for continuity of the content. There may be other equally capable apps for this purpose. My focus was on producing as high a quality product as possible within the means and capabilities at my disposal. The particular tools are less important than the totality of the process.

The process of dumping footage off the iPhone (transferring video files to external storage) requires some additional discussion. The required hardware has been mentioned above, now let’s dive into process and the required software. The biggest challenge is logistics: finding enough time in between takes to transfer footage. If the iPhones are the only cameras used, then in one way this is easier – you have control over the timeline in that regard. In my case, this was even more challenging, as I was ‘piggybacking’ on an existing shoot so I had to fit in with the timeline and process in place. Since professional video cameras all use removable storage, they only require a few minutes to effectively be ready to shoot again after the on-camera storage is full. But even if iPhones are the only cameras, taking long ‘time-outs’ to dump footage will hinder your production.

There are several ways to maximize the transfer speed of files off the iPhone, but the best way is to make use of time management: try to schedule dumping for normal ‘down time’ on the set (breaks, scene changes, wardrobe changes, meal breaks, etc.). In order to do this you need to have your ‘transfer station’ [computer and external drive] ready and powered up so you can take advantage of even a short break to clear files from the phone. I typically transferred only one to three files at a time, so in case we started up sooner than expected I was not stuck in the middle of a long transfer. The other advantage in my situation was that the iPhone charges while connected via the USB cable, so I was able to accomplish two things at once: replenishing battery capacity (since shooting with the Mobile In audio adapter meant the phone could not run on line power) and dumping the files to external storage.

My 2nd camerawoman, Tara, brought her Mac Air laptop for file transfer to an external USB drive, while I used a Dell PC laptop (discussed above in the hardware section). In both cases, I found that using the native OS file management (Image Capture [part of OS X] for the Mac, Windows Explorer for the PC) was hideously slow. It does work (after plugging the iPhone into the USB connector on the computer, the iPhone shows up as just another external disk; you can navigate down through a few folders and find your video files). On my PC (which BTW is a very fast machine – basically a 4-core mobile workstation that can routinely transfer files to/from external drives at over 150MB/s) the best transfer speed I could obtain with Windows Explorer amounted to needing almost an hour to transfer 10 minutes of video off the iPhone – a complete non-starter in this case. After some research, I located software from WideAngle Software called TouchCopy that solved my problem. They make versions for both Mac and PC, and it allowed transfers off the iPhone to external storage about 6x faster than Windows Explorer. My average transfer times were approximately ‘real time’ – i.e. 10 minutes of footage took about 10 minutes to transfer. There may be other similar applications out there – as mentioned earlier, I am not in the software reviewing business – once I find something that works for me I will use that, until I find something “better/faster/cheaper.”

To summarize the challenging file transfer issue:

  • Use the fastest hardware connections and drives that you can.
  • Use time management skills and basic logistics to optimize your ‘windows’ for file transfer.
  • Use supplemental software to maximize your transfer speed from phone to external storage.
  • Transfer in small chunks so you don’t hold up production.

The last bit that requires a mention is file backup. Your original footage is impossible to replace, so you need to take exquisite care with it. The first thing to do is back it up to a second external physical drive immediately after the file transfer. Typically I started this task as soon as I was done dumping files off the iPhone – it could run unsupervised during the next takes. However, one thing to consider before doing that (and this may depend on how much time you have during breaks): the relabeling of the video files. The footage is stored on your iPhone as a generically labeled .mov file, usually something like IMG_2334.mov – not a terribly insightful description of your scene/take. I never change the original label, only add to it. There is a reason… it helps to keep all the files in sequential order when starting the scene selection and editorial process later. This can be very helpful when things go a bit askew – as they always do during a shoot. For instance, if the slate is missing on a clip (you DO slate every take, correct??), having the original ‘shot order’ can really help place the orphan take into its correct sequence. In my case, this happened several times due to slate placement: since my iPhone cameras were in different locations, sometimes the slate was pointed where it was in frame for the DSLR cameras but was not visible to the iPhones.

I developed a short-hand description, taken from the slate at the head of each shot, that I appended to the original file name. This takes a few seconds (to launch QuickTime or VLC, shuttle in to the slate, pause and get the slate info), but the sooner you do this, the better. If you have time to rename the shots before the backup, then you don’t have to rename twice – or face the possibility of human error during that task. Here is a sample of one of my files after renaming: IMG_2334_Roll-A1_EP1-1_T-3.mov  This is short for Roll A1, Episode 1, Scene 1, Take 3.
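If you end up doing a lot of these, the renaming convention is easy to script. A minimal, purely illustrative Python sketch (the slate values would come from your own notes):

    # Append slate info to the original file name - never replace it.
    from pathlib import Path

    def slate_rename(clip, roll, episode, scene, take):
        """IMG_2334.mov -> IMG_2334_Roll-A1_EP1-1_T-3.mov"""
        new_name = f"{clip.stem}_Roll-{roll}_EP{episode}-{scene}_T-{take}{clip.suffix}"
        return clip.rename(clip.with_name(new_name))

    # slate_rename(Path("E:/shoot/day1/IMG_2334.mov"), "A1", 1, 1, 3)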

However you go about this, just ensure that you back up the original files quickly. The last step of course is to delete the original video files off the iPhone so you have room for more footage. To double-check this process (you NEVER want to realize you just deleted footage that was not successfully transferred!!!) I do three things:

  1. Play into the file with headphones on to ensure that I have video and audio at head, middle and end of each clip. That only takes a few seconds, but just do it.
  2. Using Finder or Explorer, get the file size directly off the still-connected iPhone and compare it to the copied file on your external drive (look at actual file size, not ‘size on disk’, as your external disk may have different sector sizes than the iPhone). If they are different, re-transfer the file.
  3. Using the ‘scrub bar’, quickly traverse the entire file using your player of choice (Quicktime, VLC, etc.) and make sure you have picture from end to end in the clip.

Then and only then, double-check exactly what you are about to delete, offer a small prayer to your production spirit of choice, and delete the file(s).
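If you would rather script the size check in step 2 than eyeball it, here is a minimal, purely illustrative Python sketch (directory paths are hypothetical):

    # Compare byte counts of clips on the (mounted) iPhone against the
    # copies on the external drive, BEFORE deleting anything.
    from pathlib import Path

    def verify_copies(phone_dir, drive_dir):
        """Return the clips whose copies do NOT match in size."""
        mismatched = []
        for clip in phone_dir.glob("*.mov"):
            copy = drive_dir / clip.name
            # st_size is the actual file size, not 'size on disk'
            if not copy.exists() or copy.stat().st_size != clip.stat().st_size:
                mismatched.append(clip.name)
        return mismatched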

Summary:

This is only the beginning! I will write more as this project moves ahead, but wanted to introduce the concept to my audience. A deep thanks to all of you who have read my past posts on various subjects, and please return for more of this journey. Your comments and appreciation provide the fuel for this blog.

Support and Contact Details:

Please visit and support the talented women who have enabled me to produce this experiment. This would not have been possible otherwise.

Tiffany Price, Writer, Producer, Actress
Lauren DeLong, Writer, Producer, Actress
Ambika Leigh, Director, Producer

The Perception of Privacy

June 5, 2012 · by parasam

Another in my series of posts on privacy in our connected world…  with a particular focus on photography and imaging

As I continue to listen and communicate with many others in our world – both ‘real’ and ‘virtual’ (although the lines are blurring more and more) – I recognize that the concept of privacy is rather elusive and hard to define. It changes all the time. It is affected by cultural norms, age, education, location and upbringing. There are differing perceptions of personal privacy vs collective privacy. Among other things, this means that heavy-handed regulatory schemes by governments will most often fail – by the very nature of a centralized entity, a one-size-must-fit-all solution will never work well in this regard.

A few items that have recently made news show just how far, and how fast, our perception of privacy is changing – and how comfortable many of us are now with a level of social sharing that would have been unthinkable just a few years ago. An article (here) explains ‘ambient video’ as a new way that many young people are ‘chatting’ using persistent video feeds. With technologies such as Skype and OoVoo that allow simultaneous video ‘group calls’ – teenagers are coming home from school, putting on the webcam and leaving it on in the background for the rest of the day. The group of connected friends are all ‘sharing’ each other’s lives, in real time, on video. If someone has a problem with homework, they just shout out to the ‘virtual room’ for help. [The implications for bandwidth usage on the backbone of networks for connecting millions of teens with simultaneous live video will be reserved for a future article!]

More and more videos are posted to YouTube, Vimeo and others now that are ‘un-edited’ – we appear, collectively, to be moving to more acceptance of a casual and ‘candid’ portrayal of our daily lives. Things like FaceTime, Skype video calls and so on make us all more comfortable with sharing not only our voices, but our visual surroundings during communication. Maybe this shouldn’t be so surprising, since that is what conversation was ‘back in the day’ when face-to-face communication was all there was…

We are surrounded by cameras today: you cannot walk anywhere in a major city (or even increasingly in small towns) without being recorded by thousands of cameras. Almost every street corner now has cameras on the light poles, every shop has cameras, people by the billions have cellphone cameras – not to mention Google (with StreetView camera cars, GoogleEarth, etc.). One of the odd things about cameras and photography in general is that our perceptions are not necessarily aligned with logic. If I walk down a busy street and look closely at someone, even if they see me looking at them, there might be either complete disregard, or at most a glance implying “I see you seeing me” – and life moves on. If I repeat the same action but take that person’s picture with a big DSLR and a 200mm lens I will almost certainly get a different reaction, usually one that implies the subject has a different perception of being ‘seen’ by a camera than by a person. If I repeat the action again with a cellphone camera, the typical reaction is somewhere in between. Logically, there is no difference: one person is seeing another; the only difference is a record in a brain, a small sensor or a bigger sensor.

Emotionally, there is a difference, and therein lies the title of this post – The Perception of Privacy. Our interpretations of reality govern our response to that reality, and these are most often colored by past history, perceptions, feelings, projections, etc. etc. Many years ago, some people had an unreasonable fear of photography, feeling that it ‘took’ something from them. In reality we know this to be a complete fallacy: a camera captures light just like a human eye (well, not quite, but you get the idea). The sense of permanence – that a moment could be frozen and looked at again – was the difference. With video, we can now record whole streams of ‘moments’ and play them back. But how different really is this from replaying an image in one’s head, whether still or moving? Depending on one’s memory, not very different at all. What is different, then? The fact that we can share these moments. Photography, for the first time, gave us a way to socialize one person’s vision of a scene with a group. It’s one thing to try to describe in words to a friend what you saw – it’s a whole different effect when you can share a picture.

Again, we need to see the logic of the objective situation: if a large group shares a visual experience (watching a street performer, for example), what is the difference between direct vision and photography? Here, the subject should feel no difference, as this is already a ‘shared visual experience’ – but if asked, almost every person would say it is different, in some way. There is still a feeling that a photograph or video is different from even a crowd of people watching the same event. Once again, we have to look at what IS different – and the answer can only be that a photo can not only be shared, but shared ‘out of time’ with others. The real ‘difference’ of a photo or video of a person or an event, then, is that it can be viewed in a different manner than ‘in the moment’ of occurrence.

As our collective technology has improved, we now can share more efficiently, in higher resolution, than in the days of campfire songs and tales. Books, newspapers, movies, photos, videos… it’s amazing to think just how much of technology (in the largest sense – not just Apple products!) has been focused on methods of improving the sharing of human thought, voice and image. We are extremely social creatures and appear to crave, at a molecular level, this activity. In many cultures today, we see a far more relaxed and tolerant attitude towards the sharing of expression and appearance (nudity / partial nudity, no makeup, candid or casual appearance in public, etc. etc.) than existed a decade ago. We are becoming more comfortable in ‘existing’ in public – whether that ‘public’ is a small group of ‘friends’ or the world at large.

One way of looking at this ‘perception of privacy’ is through the lens of a particular genre of photography: street photography. While, like most descriptions of a genre, it’s hard to pin down, this has basically evolved to mean candid shots in public – sort of ‘cinéma vérité’ in a still photo. Actually, the paparazzi are a ‘sub-group’ of this genre, with their focus typically limited to ‘people of note’ (fashion, movie, sports personalities) – whose likenesses can be sold to magazines. While this small section has undoubtedly overstepped the bounds of acceptable behavior in some cases, it should not be allowed to taint the larger genre of artistic practice.

The facts, in terms of what’s legally permissible, for street photography do vary by state and country, but for most of the USA here are the basics – and just like other perceptions surrounding photography, they may surprise some:

  • Basically, as the starting premise, anything can be photographed at any time, in any place where there is NOT a ‘reasonable expectation of privacy’.
  • This means that, similar to our judicial system where ‘innocent until proven guilty’ is the byword, in photography the assumption is that it is always permissible to take a picture, unless specifically told not to by the owner of the property on which you are standing, by posted signs, or if you are taking pictures of what would generally be accepted as ‘private locations’ – and interestingly there are far fewer of these than you might think.
  • The practice of public photography is strongly protected in our legal system under First Amendment rulings, and has been litigated thousands of times – with most of the rulings coming down in the favor of the photographer.
  • Here are some basic guidelines:  [and, I have to say this:  I am not a lawyer. This is not legal advice. This is a commentary and reporting on publicly available information. Please consult an attorney for specific advice on any legal matter].
    • Public property, in terms of photography, is “any location that offers unfettered access to the public, and where there is not a reasonable expectation of privacy”
    • This means that, in addition to technically public property (streets, sidewalks, public land, beaches, etc. etc.), malls, shops, outdoor patios of restaurants, airports, train stations, ships, etc. etc. are all ‘fair game’ for photos, unless specifically signposted to the contrary, or if the owner (or a representative such as a security guard) asks you to refrain from photography while on their private property.
    • If the photographer is standing on public property, he or she can shoot anything they can see, even if the object of their photography is on private property. This means that it is perfectly legal to stand on the sidewalk and shoot through the front window of a residence to capture people sitting on a sofa… or for those low flying GoogleEarth satellites to capture you sun-bathing in your back yard… or to shoot people while inside a car (entering the car is forbidden, that is clearly private property).
    • In many states there are specific rulings about areas within ‘public places’ that are considered “areas where one has a reasonable expectation of privacy” such as restrooms, changing rooms, and so on. One would think that common sense and basic decorum would suffice… but alas the laws had to be made…
    • And here’s an area that is potentially challenging:  photography of police officers ‘at work’ in public. It is legal. It has been consistently upheld in the courts. It is not popular with many in police work, and often photographers have been unjustifiably hassled, detained, etc. – but ‘unless a clear and obvious threat to the security of the police officer or the general public would occur due to the photography’ this is permitted in all fifty states.
    • Now, some common sense… be polite. If requested to not shoot, then don’t. Unless you feel that you have just captured the next Pulitzer (and you did it legally), then go on your way. There’s always another day, another subject.
    • It is not legal for a policeman, security guard or any other person to demand your camera, film, memory cards – or even to demand to be shown what you photographed. If they attempt to take your camera they can be prosecuted for theft.
    • One last, but very important, item:  laws are local. Don’t get yourself into a situation where you are getting up close and personal with the inside of a Ugandan jail… many foreign countries have drastically different laws on photography (and even in places where national law may permit, local police may be ignorant… and they have the keys to the cell…)  Always check first, and balance your need for the shot against your need for freedom… 🙂

What this all shows is that photography (still or moving) is accepted, even at the legal level, as a fundamental right in the US. That’s actually a very interesting premise, as not many things are specifically called out in this way. Most other practices are not prohibited, but very few are specifically allowed. For instance, there is no specific legal right to carpentry, although of course it is not prohibited. The fact that imaging, along with reporting and a few other activities, is specifically allowed points to the importance of social activities within our culture.

The public/private interface is fundamental to literally all aspects of collective life. This will be a constantly evolving process – and it is being pushed and challenged now at a rate that has never before existed in our history – mainly due to the incredible pace of technological innovation. While I have focused most of this discussion on the issues of privacy surrounding imaging, the same issues pertain to what is now called Big Data – that collection of data that describes YOU – what you do, what you like, what you buy, where you go, who you see, etc. Just as in imaging, the basic tenet of Big Data is “it’s ok unless specifically prohibited.” While that is under discussion at many levels (with potentially some changes from ‘opt out’ to ‘opt in’), many of the same issues of ‘what is private’ will continue to be open.

UltraViolet – A Report on the “Academy of UltraViolet” Seminar

May 18, 2012 · by parasam

I attended the full day seminar on the new UltraViolet technology earlier this week. UltraViolet is the recently launched “Digital Entertainment Cloud” service that allows a user to essentially ‘pay once, watch many’ across a wide range of devices, with the content being sourced from a protected cloud environment – physical media is no longer required.

While this report on the seminar is primarily intended for those on my team and in my firm that could not make the date, I will include a brief introduction to level-set my audience.

The UltraViolet Premise

The purpose is to offer a Digital Collection of consumer content (you can think of this as a “DVD for the Internet”), allowing the user to enjoy a universal viewing experience not limited by where you bought the content, the format of the content (or even whether physical or virtual), the type of device (as long as it supports a UV Player) or where the user is located [fine print: UV does allow local laws to be enforced via a geographies module, so not all features or content may be available in all territories].

I strongly recommend a visit to the UV FAQ site here – which is kept current on roughly a monthly basis. Even knowledgeable members of this audience will find useful bits there: history, features, technical details, what-ifs to cover business cases [will my UV file still work even if the vendor that sold it to me goes out of business? – the answer is yes, BTW], and many other useful bits.

For those that want a more detailed set of technical information, including the publicly available CFF (Common File Format) download specification, UV ecosystem and Role information, licensing info, etc., please visit the UV Business site here.

The UV Academy Seminar

Firstly, a thanks to both the organizers and presenters:  this seminar did not have a lot of lead time, and took place in a nice venue with breakfast and lunch provided – which helped the audience (mostly industry professionals) to digest a rather enormous helping of information in a short time. Many of the presentations were aided with excellent graphics or video which greatly enhanced the understanding of a complex subject. The full list of presenters and sponsors is here.

We began with an update on current status, noting that basically the UV rollout is still in early stages – currently “Phase 1”, where UV content is only available online in streaming formats. Essentially, legacy streaming providers that have signed up to be part of the UV ecosystem (known as LASPs – Locker Access Streaming Providers) [come on, you must expect any new geeky infrastructure to have at least 376 new acronyms… 🙂 ] will be able to stream content that the user has added to his/her “UV Cloud”. The availability, quality, etc. of the stream will be the same as you currently get from that same provider.

Phase 2, the ability to download and store locally a UV file (in CFF – Common File Format) will roll out later this summer. One of the challenges in marketing is to communicate to users that UV is a phased rollout – what you get today will become greatly enhanced in the future.

A panel discussion followed this intro, the topic being “Preparing and Planning an UltraViolet Title Launch”. This was a good look ‘under the hood’ into just how many steps are required to effect a commercial release of a film or TV episode onto the UV platform. Although there are a LOT of moving parts (and often the legal and licensing issues are greater than the technical bits!) the system has been designed to simplify as much as possible. QA and Testing is a large part of the process, and it was stressed that prior planning well in advance of release was critical to success. (Hmmm, never heard that before…)

We then heard a short dissertation on DRM (Digital Rights Management) as it exists within the UV ecosystem. This is a potentially brain-numbing topic, expertly and lightly presented by Jim Taylor, CTO of the DECE (the consortium that brought you UV). I am personally very familiar with this type of technology, and it’s always a pleasure to see a complex subject rendered into byte-sized chunks that don’t overwhelm our brains. [Although having this early in the day, when we still had significant amounts of coffee in the bloodstream, probably helped…]  The real issue here is that UV must accommodate multiple DRM schemes in order to interoperate on a truly wide array of consumer devices, from phones all the way up to web-connected Blu-ray players feeding 65″ flatscreens. Jim gave us an understanding of DRM domains, and how authentication tokens are used to allow the single license that a user has for a movie to populate multiple DRM schemas and thereby allow the user access as required. Currently 2 of the 5 anticipated DRM schemes are enabled, with testing going on for the others. [The current crop of 5 DRM technologies includes: Widevine, Marlin, OMA, PlayReady, Flash Access]
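For readers who think better in code, here is a toy illustration (in Python) of the general ‘one license, many DRM schemes’ idea described above. This is purely conceptual and in no way represents UltraViolet’s actual mechanism, data model or API:

    # Toy illustration only - NOT the real UltraViolet design.
    # One rights-locker entry fans out to whichever DRM scheme a
    # given playback device speaks, via a per-device token.
    SUPPORTED_DRM = {"Widevine", "Marlin", "OMA", "PlayReady", "Flash Access"}

    def issue_token(locker_entry, device_drm):
        """Mint a device-specific token from a single purchased right."""
        if device_drm not in SUPPORTED_DRM:
            raise ValueError("unsupported DRM scheme: " + device_drm)
        return {"title": locker_entry, "drm": device_drm}

    # The same purchase serves a Widevine phone and a PlayReady player:
    # issue_token("Movie X", "Widevine"); issue_token("Movie X", "PlayReady")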

Jason Kramer of Vital Findings (a consumer research company) gave us a great insight into the ‘mind of a real UV consumer’ with some humorous and interesting videos. We learned to never underestimate the storage capacity of a pink backpack (approximately 500GB – as in 100 DVDs); that young children like to skate on DVDs on the living room carpet (a good reason for UV, so when they wear out the DVDs mom can still download the content without buying it again…) – now come on, be honest, find me a software use case QA person that would have thought THAT one up… and on and on. It showed us that it’s really important to do this kind of research. You have NO IDEA how many ways your users will find to play with your new toy…

A panel discussion then helped us all understand the real power of metadata in the overall UV ecosystem. We are all getting a better understanding of how metadata interoperates with our content, our habits, our advertising, etc. – but seldom has a single environment been designed from the ground up to make such end-to-end use of extensive metadata of all types. Metadata in the UV universe facilitates three interdependent functions: helping the user find and enjoy content through recommendation and search functions; managing the distribution of the content and reporting back to DMRs (Digital Media Retailers) and Content Providers; and the all-important Big Data information vacuum cleaner: here’s an opportunity for actual customer libraries of content choices to be mined. To be precise, there is a huge number of business rules in place about what kind of ‘little data’ is shared by whom, to whom, and for what purpose – and this is still a very fluid area… but in general, the UV ecosystem offers the potential of a win-win Big Data scenario. The user – based on actual content in their library – can help drive recommendations with a precision lacking in other data models; while content providers and others in the supply chain can learn about the user to the extent that is either appropriate or ‘opted in’. One area that will need refinement is what plagues other ‘content providers’ that offer recommendations (Amazon, Netflix, etc.) – different family members that share an account (a feature of UltraViolet) confuse recommendation engines badly… One can easily imagine the difficulty of sorting ‘dad’ vs ‘mom’ vs ‘6yr old kid’ when all the movies in a single account holder’s library are commingled… This is an area ripe for refinement.

The next panel delved into the current perceptions of “cloud content provisioning” in general, as well as UltraViolet in particular. PWC’s Consumer Sentiment Workshops’ findings were discussed by representatives from participating studios (Fox/Warner/Universal). As might be expected, consumers have equated cloud storage of content with the two words that strike terror into the hearts of any studio executive: Free & Forever… So, just like in Washington, where any savvy politician will tell you that there is ‘no free lunch – only alternatively funded lunch’ – the UV supporters have to educate and subtly re-phrase consumers’ expectations. So ‘Free & Forever’ needs to be recast to ‘No Additional Cost & 35 Dog Years’… There are actually numerous issues here: streaming is a bit different from download, no one has really tested such a widespread multi-format cloud provisioning system that has an extended design lifetime, etc. etc. Not to mention that many of the byzantine contracts already in place for content distribution never imagined a platform such as UltraViolet, so it will take time to sort out all the ramifications of what looks simple on the surface.

The User Experience (UX, for those cognoscenti who love acronyms) received a detailed discussion from a wide-ranging panel headed by Chuck Parker, one of our new masters of the Second Screen. This is a difficult and complex topic – even the UI (User Interface) is simpler to pin down: the UI is the actual ‘face’ of the application or interface – the buttons, graphics, etc. connected via a set of instructions to the underlying application; the UX is the emotion that the user feels and walks away with WHILE and AFTER the experience of using the UI. It’s harder to measure, and harder yet to refine. It’s a discipline that involves a creative, artistic and sometimes almost mystic mix of hardware, software and ergonomic design. Color, shape, context, texture all play a part. And UV has, in one sense, an even harder task in creating a unified and ‘branded’ experience: at least companies like Apple (whose UX has attracted a cult following that most religions wish they had) have control over both hardware and software. UltraViolet, by the very nature of ‘riding on top’ of existing hardware (and even software), has only the thinnest of UIs that it can call its own. Out of this, UV still needs to craft a ubiquitous UX that will ‘brand’ itself and instill a level of consumer confidence (OK, I know where I am – this is the cool player that lets me get to all my movies no matter where/what/how/when) with the environment. Not a trivial task…

The day finished with a panel on the current marketing efforts of UltraViolet. Most of the studios were represented on the panel, with many clearly articulated plans brought forth. The big challenge of simultaneously bringing in large numbers of new users, yet communicating that UV is still very much a work in progress – and will be for several years yet – was exposed. The good news is that each marketing executive was enthusiastic about their plans to do two things: collaborate to ensure a unified message no matter which studio or content provider was marketing on behalf of UV (this is a bigger deal than many think: silos were invented by movie studios, didn’t you know?? – and it’s never easy for multi-billion dollar companies to collaborate in this highly regulated era – but in this case, since the marketing of UV can in no way be construed to be ‘price-collaborative’, it’s a greener field); and continue the effort to bring as much content into the UV system as soon as practical, which all the participants agreed was in everyone’s best interest. The current method of signing up users (typically by first purchasing physical media, such as a Blu-ray – which in turn gives a coupon that is redeemed for UV access to that same title) may well flip: in a few years, or even less, users may purchase online, and then receive a coupon to redeem at a local store for a ‘hard copy’ of the same movie on a disk, should they desire that.

In summary, a lot of information was delivered in a relatively short time, and our general attention was held well. UV has a lot of promise. It certainly has its challenges, most notably the lack of Disney and Apple at the table so far, but both those companies have had substantial changes internally since the original decision was taken to not join the UV consortium. Time will tell. The current technology appears to be supportive of the endeavor, the upcoming CFF download format will notably enhance the offering, and the number of titles (a weakness in the beginning) is growing weekly.

Watch this space:  I write frequently on changes and new technologies in the entertainment sector, and will undoubtedly have more to say on UltraViolet in the future.

NAB2012 – Comments on the National Association of Broadcasters convention

April 26, 2012 · by parasam

Here’s a report on the NAB show I attended last week in Las Vegas. For those of you who see my reports each year, welcome to this new format – it’s the first time I’m posting the report on my blog site. For the uninitiated to this convention, please read the following introductory paragraph. I previously distributed a pdf version of my report using e-mail. The blog format is more efficient, allows for easier distribution and reading on a wider variety of devices, including tablets and smartphones – and allows comments from my readers. I hope you enjoy this new format – and of course please look around while here on the site – there are a number of other articles that you might find interesting.

Intro to NAB

The National Association of Broadcasters is a trade group that advocates on behalf of the nation’s radio and television broadcasters. According to their mission statement: NAB advances the interests of their members in federal government, industry and public affairs; improves the quality and profitability of broadcasting; and encourages content and technology innovation. Each April a large convention is held (for decades now it’s been held exclusively in Las Vegas) where scientific papers are presented on new technology, as well as most of the vendors that serve this sector demonstrate their products and services on the convention floor.

It’s changed considerably over the years – originally the focus was almost exclusively radio and TV broadcast; now it has expanded to cover many aspects of production and post-production, with big software companies making an appearance in addition to manufacturers of TV transmission towers… While Sony (professional video cameras) and Grass Valley (high end professional broadcast electronics) still have large booths, now we also have Adobe, Microsoft and AutoDesk. This show is a bit like CES (the Consumer Electronics Show) – but for professionals. This is where the latest, coolest, and most up-to-date technology is found each year for this industry.

Comments and Reviews

This year I visited over 80 vendors on the convention floor, and had detailed discussions with probably 25 of those. It was a very busy 4 days… The list was specifically targeted towards my interests, and the needs of my current employer (Technicolor) – this is not a general review. All information presented here is current as of April 24, 2012 – but as always is subject to change without notice – many of the items discussed are not yet released and may undergo change.

Here is the abbreviated list of vendors that I visited:   (detailed comments on a selection of this list follows)

Audio-Technica U.S., Inc.     Panasonic System Communications Company     GoPro     Canon USA Inc.     Leader Instruments Corp.     ARRI Inc.     Glue Tools     Dashwood Cinema Solutions
Doremi Labs, Inc.     DK – Technologies     Leica Summilux-C Lenses     Forecast Consoles, Inc.     Sony Electronics Inc.     FIMS/AMWA/EBU     Advanced Media Workflow Association (AMWA)
Tektronix Inc.     Evertz     Snell     DNF CONTROLS     Miranda Technologies Inc.     Ensemble Designs     Venera Technologies     VidCheck     EEG Enterprises, Inc     Dolby Laboratories
Lynx Technik AG     Digimetrics – DCA     Wohler Technologies, Inc.     Front Porch Digital     Blackmagic Design     Telestream, Inc.     Ultimatte Corporation     Adobe Systems
Adrienne Electronics Corp.     PixelTools Corp.     Red Digital Cinema     Signiant     Da-Lite Screen Company LLC     Digital Rapids     DVS Digital Video Systems
CEITON technologies Inc.     Planar Systems, Inc.     Interra Systems     Huawei     Eizo Nanao Technologies Inc.     G-Technology by Hitachi     Ultrium LTO     NetApp     Photo Research Inc.
Cinegy GmbH     Epson America, Inc.     Createasphere     Harmonic Inc.     ATEME     Sorenson Media     Manzanita Systems     SoftNI Corporation     Envivio Inc     Verimatrix
Rovi     MOG Technologies     AmberFin     Verizon Digital Media Services     Wowza Media Systems     Elemental Technologies     Elecard     Discretix, Inc.
3D @ Home Consortium     3ality Technica     USC Digital Repository

In addition to the technical commentary, I have added a few pictures to help tell the story – hope you find all this informative and enjoyable!

On final approach to Las Vegas airport (from my hotel room)

It all started with my flight into Las Vegas from Burbank – very early Monday morning: looking forward to four days of technology, networking, meetings – and the inevitable leg-numbing walking of almost 3.2 million sq ft of convention halls, meeting rooms, etc.

Las Vegas: convention center in the middle

Zoomed in view of LVCC (Las Vegas Convention Center) - not a bad shot for 200mm lens from almost 1000 meters...

The bulk of all the exhibits were located in the four main halls, with a number of smaller venues (typically for pre-release announcements) located in meeting rooms and local hotel suites.

Vegas from hotel window (with unintended efx shot in the sky - had iPhone up against the window, caught reflection!)

The Wynn provided an easily accessible hotel location – and a good venue for these photographs!

Wynn golf course - just as many deals here as on convention floor...

Ok, now onto the show…   {please note:  to save me inserting hundreds of special characters – which is a pain in a web-based blog editor – please recognize that all trade names, company names, etc. etc. are © or ® or TM as appropriate…}

Also, while as a disclaimer I will note that I am currently employed by Technicolor, and we use a number of the products and services listed here, I have in every case attempted to be as objective as possible in my comments and evaluations. I have no commercial or other relationship with any of the vendors (other than knowing many of them well, I have been coming to NAB and pestering them for a rather long time…) so all comments are mine and mine alone. Good, bad or indifferent – blame me if I get something wrong or you disagree – that’s the beauty of this format (blog with comments at the end) – my readers can talk back!

Adobe Systems    link – the tactical news was the introduction of Creative Suite 6, with a number of new features:

  • Photoshop CS6
    • Mercury real-time graphics engine
    • 3D performance enhancements
    • 3D controls for artwork
    • enhanced integration of vector layers with raster-based Photoshop
    • better blur effects
    • new crop tool
    • better 3D shadows and reflections
    • more video tools
    • multi-tasking: background saving while working, even on TB-sized files
    • workspace migration
    • better video compositing
  • Illustrator CS6
    • full, native 64bit support
    • new image trace
    • stroke gradients
    • inline editing in panels
    • faster Gaussian Blurs
    • improvements in color, transform, type, workspaces, etc.
  • InDesign
    • create PDF forms inside ID
    • better localization for non-Latin documents (particularly for Farsi, Hindi, Punjabi, Hebrew, etc.)
    • better font management
    • grayscale preview with real pre-press WYSIWYG
  • Premiere
    • workflow improvements
    • warp stabilizer to help hand-held camera shots
    • better color corrector
    • native DSLR camera support to enhance new single-chip cameras used for video
    • and many more, including rolling shutter correction, etc.
  • not to mention Lightroom4, Muse, Acrobat, FlashPro, Dreamweaver, Edge, Fireworks, After Effects, Audition, SpeedGrade, Prelude, Encore, Bridge, Media Encoder, Proto, Ideas, Debut, Collage, Kuler and more…

In spite of that, the real news is not the enhancements to the full range of Adobe apps, but a tectonic shift in how the app suite will be priced and released… this is a bellwether shift in the industry for application pricing and licensing: the move to a true SaaS model for consumer applications – a subscription model.

For 20+ years, Adobe, along with Microsoft and others, has followed the time-honored practice of relatively high-priced applications, very rigorous activation, restrictive licensing models, etc. etc. For instance, a full seat of Adobe Master Collection is over $2,500 retail, and the upgrades run about $500 per year at least. With significant updates now occurring each year, this works out to an amortized cost of over $1,500 per year, per seat (spreading the initial purchase over two years). The new subscription model, at $50/mo, means a drop to about $600 per year for the same functionality!
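The arithmetic, spelled out in a purely illustrative Python snippet (the figures are the retail numbers quoted above, not official Adobe pricing):

    # Perpetual-license vs subscription cost per seat, per year.
    perpetual_seat  = 2500   # full Master Collection, one-time (retail figure above)
    annual_upgrade  = 500    # upgrades per year thereafter
    subscription_mo = 50     # new subscription pricing

    years = 2                # amortize the initial purchase over two years
    print((perpetual_seat + annual_upgrade * years) / years)  # 1750.0 per year
    print(subscription_mo * 12)                               # 600 per year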

This is all an acknowledgement by the software industry at large that the model has changed: we are now buying apps at $1-$5 each for iPhone, iPad, etc. – and getting good value and excellent productivity – and wondering why we are paying 100x and up for the same functionality (or less!) on a PC or Mac… the ‘golden goose’ needs an upgrade… she will still produce eggs, but not as many, not as fat…

Adrienne Electronics  link

The AEC-uBOX-2 is a neat little hardware device that takes in LTC (Linear Time Code) from a camera and turns this into machine-control TC (RS-422). This allows pro-sumer and other ‘non-broadcast’ cameras and streaming content sources to feed a video capture card as if the NLE (Non-Linear Editor – such as Final Cut) were seeing a high-end tape deck. This allows capture of the time code associated with content, easing many editorial functions.

AMWA (Advanced Media Workflow Association)  link

This group (previously known as AAF – Advanced Authoring Format) is one of the industry’s important bits of glue that helps make things ‘interoperable’ – a word that is more often than not uttered as a curse in our industry… as interoperability is as elusive as the pot of gold at the end of the rainbow… and we are missing the leprechauns to help us find it…

Led by the pied piper of Application Specifications – Brad Gilmer – this group has almost single-handedly pushed, prodded, poked and otherwise heckled many in our industry to conform to such things as AS02, AS03, etc. – which allow the over-faceted MXF specification to actually function as intended: provide an interoperable wrapper for content essence that carries technical and descriptive metadata.

Amberfin  link

The makers of the iCR – one of the best ingest devices on the market – have added even more functionality. The new things at the show: UQC (Unified Quality Control – combining automated QC with operator tools for the highest-efficiency QC workflows) and Multi-Transcode (which allows up to 8 nodes of transcoding on a single multi-core machine).

ARRI  link  

The ARRI ALEXA camera is in reality an ever-expandable image capture and processing device – with continual hardware and software upgrades that help ameliorate its stratospheric price… v7 will add speed and quality to the de-Bayer algorithms to further enhance both regular and high-speed images; offer ProRes2K; LCC (Low Contrast Curve); etc. The next release, v8, will add DNxHD 444; ProRes2K in high speed; vertical image mirroring (for Steadicam shots); post triggering (for nature photographers – allows ‘negative’ reaction time); and auto card-spanning for continuous shooting across flip/flop memory cards, so shots are not lost even when a card’s capacity is reached.

ARRI + Leica - match made in heaven...

Angenieux  link

The Optimo series of cinema camera lenses by Angenieux are of the highest quality. In particular, they offer the DP 3D package – a pair of precisely matched lenses for ease of image capture using stereo camera rigs. The tracking, focus and optical parameters are matched at the factory for virtually indistinguishable results on each camera.

Oh Angie... (Angenieux lens)

ATEME  link

The KFE transcoder – the product I am familiar with – continues to move forward, along with other efforts within the larger ATEME family. One interesting foray is a combined project with Orange Labs, France Télévisions, GlobeCast, TeamCast, Technicolor and Doremi, as well as the Télécom ParisTech and INSA-IETR university labs – to reduce the bandwidth necessary for UHDTV – or in real-world-speak: how the heck to ever get 4K to the home… currently, the bandwidth for 4K delivery is about 6Gb/s uncompressed, roughly 1000x a typical home connection… even for modern technology, this will be a hard nut to crack..
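To see where a figure like that comes from, here's the raw arithmetic (my assumptions: 4096×2160, 10-bit 4:2:2, 30 fps – change the numbers to suit your favorite flavor of 4K):

```python
# Uncompressed 4K bandwidth, back-of-envelope
w, h = 4096, 2160
bits_per_pixel = 20     # 10-bit 4:2:2 averages 20 bits per pixel
fps = 30

bps = w * h * bits_per_pixel * fps
print(f"{bps/1e9:.1f} Gb/s uncompressed")           # ~5.3 Gb/s
print(f"~{bps/6e6:.0f}x a 6 Mb/s home connection")  # ~885x
```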

On other fronts, ATEME has partnered with DTS to offer DTS Neural Surround in the transcoding/encoding workflow. The cool thing about DTS’ Neural technology is its upmix capability. It can simulate 7.1 from 5.1, or 5.1 from stereo. I have personally listened to this technology on a number of feature film clips – comparing with originally recorded soundtracks (for instance, comparing actual 7.1 with the simulated 7.1) – and it works. Really, really well. The sound stage stays intact, even on car chases and other fast movements (which often show up as annoying phase changes if not handled very carefully). This will allow better delivery of packages suited to home theatre and other environments with demanding users.

The Titan and Kyrion encoders have improved speed, capabilities, etc. as well.

Blackmagic Design  link

HyperDeck has added ProRes encoding capability; DesktopVideo will support Adobe CS6 when released; and then there’s the new Blackmagic Cinema Camera. The camera is a surprising move for a company that has never ventured this far upstream before:  to date they have a well-earned reputation for video capture cards, but now they have moved into actual image capture. The camera accepts EF lenses, has 2.5K resolution, a built-in SSD for storage, a 13-stop dynamic range, and records DNG (RAW) files, with the option to capture in ProRes or DNxHD. And it’s under $3,000 (no lens!) – it will be interesting to see how this fits into the capture ecosystem…

Canon  link

Ok, I admit, this was one of the first booths I headed for:  the chance to get some hands-on time with the new Canon EOS-1D C DSLR camera… the Cinema version of the recent EOS-1 system. It records full 4K video (4096×2160), has their log gamma as well as the Technicolor CineStyle profile, for the best color grading. 18 megapixels at 4:2:2 MJPEG 4K isn’t bad for a ‘little’ DSLR camera… and of course it’s not new, but their glass is some of the best in the business.. Canon also introduced a number of new video zoom lenses for broadcast TV cameras, as well as true Cinema cameras (EOS C500) – 4K uncompressed (RAW) output – for external recording onto that trailer of hard drives you need for that data rate… <grin>

Their relatively new (introduced at CES in January) Pixma Pro-1 12-ink printer is awesome for medium-format fine-art printing, and for those with Pentagon-type budgets, the imagePROGRAF iPF9100 is incomparable: 2400dpi native; 60″ wide roll paper (yes, 5 feet wide by up to 60 feet long…); and ok – it is $16,000  (not including monthly ink costs, which probably approach that of a small house payment…)

new Canon EOS1

when you're getting ready to drop $1,000,000 or more on cameras and lenses, you have to see what is what.. so vendors build full scale operating evaluation stages for buyers to try out hardware...

one of Canon's four lens evaluation stages...

model had about 20 huge video lenses trained on her - she's looking at this guy with a little iPhone camera wondering what the heck he's doing...

An awesome lens: 200mm @ F2.0 - at 15 ft the depth of field is less than 1 inch! The image is knife-sharp.

MonsterGlass. If you really need to reach out and touch someone, here's the 400mm F2.8 Canon - tack sharp, but it will set you back about $35,000

the Canon iPF9100 printer: 5 ft wide print size, 12 ink, beautiful color

Ceiton Technologies  link

Ceiton provides German engineering focused on sophisticated workflow tools that help firms automate their production planning and control. As we know, file-based workflows can become horrendously complex, and the mapping of these workflows onto humans, accounting systems and other business processes has been absolutely littered with failures, blood and red ink. Sony DADC, Warner, Raytheon, AT&T, GDMX, A+E and others have found their solution to actually work. While I personally have not yet worked for a company that employs their solution, I have been following them for the last 5 years, and like their technology.

They have a modular system, so you can use just the pieces you need. If you are not a workflow expert, they offer consulting services to help you. If you are familiar, and have access to in-house developers, they provide API definitions for full integration.

Cinnafilm  link

Dark Energy, Tachyon… sounds like science fiction – well, it almost is… These two products are powerful and fascinating tools: to de-noise, restore, add ‘film look’, and otherwise perform just about any texture-management function your content needs, look to Dark Energy; and if you need multiple outputs (in different frame rates) from a single input – faster than a speeding neutrino – look to Tachyon. An impressive and powerful set of applications, each one available in several formats – check their site. I also need to mention that seldom do I get as detailed, concise, eloquent and factual an explanation of complex technology as Ernie Sanchez (COO, Cinnafilm) provided – trade shows are notoriously hard on schedules, and I felt like I had a Vulcan mind-meld of info in the time available. Also thanks to Mark Pinkle of MPro Systems for the intro.

Dashwood Cinema Solutions  link

This company makes one of my two favorite 3D plug-ins for stereoscopic post-production (the other being CineForm). They have two primary tools I like (Stereo3D CAT and Stereo3D Toolbox) – the first is a rather unique stereoscopic analysis and calibration software. It lets you align your 3D rig using your Mac laptop with high precision, and during a shoot allows constant checking of parameters (disparity, parallax, etc.) so you don’t end up with expensive fixes after you have torn down your scene. Once in the edit session, the 3D Toolbox plugs into either Final Cut or After Effects, giving you powerful tools to work with your stereoscopic captures. You can manipulate convergence, depth planes, set depth-mapped subtitles, etc. etc.

Digimetrics  link

This company provides, among other products, the Aurora automated QC tool for file-based content. With the speed required of file-based workflows today, human QC of large numbers of files is completely impractical. The Aurora system is one answer to this challenge. With a wide range of test capability, this software can inspect many codecs, containers, captions, and other facets to ensure that a file meets the desired parameters. In addition to basic conformity tests, this application can perform quality checks on both video and audio content: it can detect ‘pixelation’ and other compression artifacts; loss of audio; audio level problems (CALM); ‘tape hits’ (RF dropouts on VTRs); and even run the so-called Harding analyzer tests (to meet FCC specs for photo-sensitive epilepsy).
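To give a flavor of what ‘automated QC’ means in practice – and this is purely an illustrative sketch, not Aurora’s actual method – here are two of the simplest checks such tools run, black-frame and digital-silence detection:

```python
# Illustrative sketch of two basic automated QC checks (not any vendor's
# actual implementation): black-frame and digital-silence detection.
import numpy as np

def is_black_frame(frame: np.ndarray, luma_thresh: float = 16.0) -> bool:
    # frame: HxW array of 8-bit luma; video-range black sits at code 16,
    # so flag frames whose mean luma is at or barely above that level
    return frame.mean() <= luma_thresh + 2.0

def is_silent(audio: np.ndarray, db_floor: float = -60.0) -> bool:
    # audio: float samples in [-1, 1]; silent if RMS falls below the floor
    rms = np.sqrt(np.mean(audio ** 2))
    return 20 * np.log10(max(rms, 1e-10)) < db_floor

# toy data: a flat near-black frame and one second of dead silence
print(is_black_frame(np.full((1080, 1920), 16, dtype=np.uint8)))  # True
print(is_silent(np.zeros(48000)))                                 # True
```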

Digital Rapids  link

Digital Rapids has long been one of the flagship providers of industrial-strength transcoding and encoding of content. They are continuing this tradition with enhancements to both their Stream and Kayak lines. Some of the new features include:  UltraViolet support (CFF files); Dolby Digital Plus; DTS-HD and DTS Express; and the highly flexible multi-format capabilities of the new Kayak/TranscodeManager platform. Essentially, the GUI on the Kayak/TM system lets you design a process – which is then turned into an automated workflow! I believe the namesake for this product (Kayak), aside from being a palindrome, is supposed to convey the ‘unsinkable’, ‘flexible’ and ‘adaptable’ characteristics of this little boat type.

There are only a few enterprise-class encoding/transcoding solutions in the marketplace today – DR is one of them. I’ve personally used this platform for over 8 years now, and find it a solid and useful toolset.

Dolby Laboratories  link

For a company that built its entire reputation on audio, it’s been quite a leap of faith for Dolby to significantly enter the visual space. They now have several products in the video sector:  a 3D display system for theaters; a 3D compression technology to transmit 2 full-sized images (not the usual anamorphically squeezed ‘frame compatible’ method) to the home; a precision Reference Monitor (that is simply the most accurate color monitor available today) – and now at the show they introduced a new 3D display that is auto-stereoscopic (does not require glasses).

The Dolby 3D display is based on an earlier Philips product that uses (as do all current auto-stereoscopic displays intended for home use) a lenticular film which, in conjunction with a properly prepared image on the underlying LCD panel, attempts to project a number of ‘views’ – each one offering slightly different information to each eye, thereby creating the disparity that allows the human eye-brain system to be fooled into perceiving depth. The problem with most other such systems is that the limited number of ‘views’ means the user’s eye position in relation to the screen is critical:  step a few inches to one side or the other of the ‘sweet spot’ and the 3D effect disappears, or worse, appears fuzzy and distorted.

This device uses 28 views, the most I have seen so far (others have used between 9 and 11). However, in my opinion, the 3D effect is quite muted… it’s nothing as powerful as what you get today with a high quality active or passive display (that does require glasses). I think it’s an admirable showing – and speaks to the great attraction for a 3D display that can dispense with glasses (the stand was packed the entire show!) – but this technology needs considerable work still. If anyone can eventually pull this off, Dolby is a likely contender – I have huge respect for their image processing team.
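If you’re curious what ‘views’ means mechanically: the panel’s pixel columns are parceled out among the N views, and the lens sheet steers each set toward a different position in front of the screen. Real panels use a slanted lens and a far cleverer sub-pixel mapping, but this toy sketch (entirely my own simplification) shows the basic interleave, and why more views buys more head-position tolerance:

```python
# Toy multi-view interleave for an auto-stereoscopic panel: assign each
# pixel column to one of N views in a cycle. (Real lenticular displays
# use a slanted lens sheet and a subtler sub-pixel mapping.)
import numpy as np

def interleave(views):                  # views: list of N HxW images
    n = len(views)
    h, w = views[0].shape
    panel = np.empty((h, w), dtype=views[0].dtype)
    for col in range(w):
        panel[:, col] = views[col % n][:, col]  # cycle views across columns
    return panel

views = [np.full((2, 8), i, dtype=np.uint8) for i in range(28)]  # 28 flat 'views'
print(interleave(views))   # columns step through views 0..7 at this tiny width
```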

DVS  link

This company is known for the Clipster – a high-end machine that for a while was just about the only hardware that could reliably and quickly work with digital cinema files at 4K resolution. Even though there are now other contenders, this device (and its siblings:  Venice, Pronto, etc.) is still the reference standard for high-resolution digital file-based workflows. The Clipster has ingest capabilities tailored to HDCAM-SR tape, and camera RAW footage directly from RED, ARRI, Sony etc. It can also accept Panasonic AVC-Ultra, Sony SR, Apple ProRes422/444 and Avid DNxHD formats. One of its strengths is its speed, as it utilizes purpose-built hardware for encoding.

Eizo  link

Eizo is a provider of high quality LCD monitors that are recognized as some of the most accurate displays for critical color evaluation. They have a long history of use not only in post-production, but across many industries: print, advertising, medical, air traffic control and other industrial applications. The ColorEdge family is my monitor of choice – I have been using the CG243W for the last 3 years. For the NAB show, a few new things were brought to the floor: a 3D LUT (LookUpTable) that provides the most accurate color rendition possible, including all broadcast and digital cinema color spaces – so that the exact color can be evaluated across multiple potential display systems. This device also supports DCI 2K resolution (2048×1080) for theatrical preview. In addition, they showed emulation software for mobile devices. This is very cool: you measure the mobile device (while it is displaying a specialized test pattern), then the table of results is copied into the setup of the Eizo monitor, and then you can see exactly what your video or still will look like on an iPad, Android or other device.
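Since the 3D LUT is the heart of that emulation trick, here’s a minimal sketch of what applying one actually involves – a lattice lookup with trilinear interpolation between the eight surrounding lattice points (my own illustration, not Eizo’s implementation):

```python
# Minimal 3D LUT application with trilinear interpolation
import numpy as np

def apply_3d_lut(rgb, lut):
    # rgb: 3 floats in [0,1]; lut: (N,N,N,3) lattice mapping RGB -> RGB
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0, 1) * (n - 1)
    i0 = np.floor(pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    f = pos - i0                          # fractional position in the cell
    out = np.zeros(3)
    for dr in (0, 1):                     # blend the 8 surrounding points
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[i1[0] if dr else i0[0],
                               i1[1] if dg else i0[1],
                               i1[2] if db else i0[2]]
    return out

# sanity check with an identity LUT: output should equal input
axis = np.linspace(0, 1, 17)
lut = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
print(apply_3d_lut([0.25, 0.5, 0.75], lut))   # ~[0.25 0.5  0.75]
```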

Ok, we’ve had a fair tour so far – and here all that’s moving are your eyeballs… I walked about 6 miles to gather this same info <grin>   that’s one of the big downsides of NAB – and why ‘shortest path planning’ using the little “MyNAB” app on the iPhone proved invaluable (but not perfect – there were the inevitable meetings that always seemed to be at the other end of the universe when I had only 10 minutes to get there). This year, it was hot as well – 30C (86F) most days. Inside is air-conditioned, but standing in the cab lines could cause dehydration and melting…

trade shows always have two things in common: lower back pain and low battery life..

The absolute best thing about a short time-out at NAB: a bench to sit down!

Now that we are rested… on with some more booths…

Elecard  link

This company, located (really!) in Siberia (in the city of Tomsk) brings us precision software decoding and other tools for working with compressed files. They make a large range of consumer and professional products – from SDKs (SoftwareDevelopmentKits) to simple video players to sophisticated analysis tools. I use StreamEye Studio (for professional video analysis), Converter Studio Pro (for transcoding), and Elecard Player (for playback of MPEG-2 and H.264 files). All the current versions were shown in their booth. Their tools are inexpensive for what they offer, so they make a good addition to anyone’s toolset.

Elemental Technologies  link

Elemental is a relatively new arrival on the transcoding scene – founded in 2006, and only in the last few years have they matured to the point where their products have traction in the industry. Their niche is powerful, however: they use massively scaled GPUs to offload video transcoding from the normal CPU-intensive task queue. They have also made smaller stand-alone servers that are well suited to live streaming. With nVidia as their main GPU partner, they have formed a solid and capable range of systems. While they are not as flexible as systems such as Digital Rapids, Rhozet or Telestream, they do offer speed and lower cost per stream in many cases. They are a good resource to check if you have highly predictable output requirements, and your inputs are well defined and ‘normal’.

Front Porch Digital  link

Front Porch makes, among other things, the DIVA storage system along with the SAMMA ingest system. These are high-volume, enterprise-class products that allow the ingest and storage of thousands to millions of files. The SAMMA robot is a fully automated tape encoding system that automatically loads videotape into a stack of tape decks, encodes while performing real-time QC during the ingest, then passes the encoded files to the DIVA head end for storage management. In addition they have launched (at NAB time) a service called LYNXlocal – basically a cloud storage connection appliance. It provides a seamless and simple connection to the Front Porch LYNX cloud service – at a smaller scale than a full DIVA system (which has had LYNX support built in since version 6.5).

Harmonic – Omneon and Rhozet

Both Omneon and Rhozet are now part of Harmonic – whose principal product line is hardware-based encoding/transcoding for the broadcast and real-time distribution sector. Omneon is a high-end vendor of storage specific to the broadcast and post-production industry, while Rhozet, with their Carbon and WFS transcoding lines, serves file-based workflows across a wide variety of clients.

I have personal experience with all of these companies, and in particular have worked with Omneon and Rhozet as they are extensively used in a number of our facilities.

The Omneon MediaGrid is a powerful and capable ‘smart storage’ system. It has capabilities that extend beyond just storing content – it can actually integrate directly with applications (such as FinalCut), perform transcoding, create browse proxies, manage metadata, perform automatic file migration and even host CDNs (ContentDeliveryNetworks). It was one of very, very few systems that passed rigorous scalability and capacity testing at our Digital Media Center. One of the tests was supporting HD editing (read and write) from 20 FinalCut stations simultaneously. That’s a LOT of bandwidth. And our tolerance was zero. As in no lost frames. None. At NAB Omneon announced their selection by NBC for support in the Olympic Games in London this summer, along with Carbon transcoding.
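How much bandwidth is ‘a LOT’? A quick estimate – and note the codec and bitrate here are my assumptions, not the actual test spec:

```python
# Rough sizing of the 20-station edit test (assumes ProRes 422 HQ at
# ~220 Mb/s per 1080 stream -- the actual test codec isn't stated here)
streams = 20
mbps_per_stream = 220
total_mbps = streams * mbps_per_stream * 2   # x2: reading AND writing

print(f"~{total_mbps/1000:.1f} Gb/s sustained, zero dropped frames allowed")  # ~8.8
```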

Rhozet, which came on the scene with Carbon Coder and Carbon Server (transcoding engine and transcode manager platform), has been a staple of large-scale transcoding for some time now. Originally (and still) their claim to fame was a low cost per seat while maintaining the flexibility and capability demanded of many file-based workflows. In the last few years, the ecosystem has changed:  Carbon Coder is now known as ProMedia Carbon, and the WFS (WorkFlowSystem) has replaced the Carbon Server. WFS is highly amenable to workflow automation, and is basically designed to ingest, transcode, transfer files, manage storage, notify people or machines, and many other tasks. The WFS can be interfaced by humans, scripts, API calls – and I suppose one of these days by direct neural connections…

WFS integrates with ProMedia Carbon nodes for the actual transcoding, and can also directly integrate with Rhozet’s QCS (QualityControlSystem) for live QC during the workflow, and with an enterprise Microsoft SQL database for reporting and statistical analysis.

INGRI;DAHL  link

This is the only virtual booth I visited 🙂  But Kine and Einy have a great small company that makes really hip 3D glasses. Since (see my earlier review on Dolby and other auto-stereoscopic displays) we are going to be needing 3D glasses for some time yet – why not have ones that are comfortable, stylish and stand out – in a good way? Yes, they are ‘passive’ – so they won’t work with active home 3D TVs like Panasonic, Samsung, etc. – but they do work with any passive display (Vizio, RealD, JVC, etc.) – and most importantly – at the theatre! Yes, you get ‘free’ glasses at the movies.. but consider this:  if you’ve read this far, you’ve seen a 3D movie in a theatre – with glasses that fit well and are comfortable, right…?? And… one of the biggest challenges we have today is the low light level on the screen. This is a combination of half the light being lost (since at most 50% reaches each eye, with the alternating polarization), poor screen reflectance, etc. Although it’s not a huge difference, these glasses have greater transmittance than the average super-cheap movie-theatre glasses – therefore the picture is a little bit brighter.
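The arithmetic behind that dimness complaint, with illustrative numbers (the lens transmittance figures below are my guesses, not measured values):

```python
# Why 3D looks dim: polarization halves the light before the glasses
# even get involved (all numbers illustrative)
screen_nits = 48          # typical 2D cinema luminance, ~14 fL
polarization_loss = 0.5   # at most half the light reaches each eye
cheap_lens = 0.80         # assumed transmittance, throwaway glasses
better_lens = 0.90        # assumed transmittance, higher-grade glasses

print(screen_nits * polarization_loss * cheap_lens)    # ~19 nits per eye
print(screen_nits * polarization_loss * better_lens)   # ~22 nits per eye
```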

And they offer a set of clip-on 3D glasses – for those of us (like myself) who wear glasses to see the screen in the first place – wearing a second pair over the top is a total drag. Not only uncomfortable – I look even more odd than normal… Check them out. The cost of a couple of tickets will buy you the glasses, and then you are set.

Interra Systems  link

Interra makes the Baton automated QC analyzer. This is a highly scalable and comprehensive digital file checker. With the rate of file manufacture these days, there aren’t enough eyeballs or time left on the planet to adequately QC that many files. Nor would it be affordable. With a template-driven process, and a very granular set of checks, it’s now possible for the Baton tool to grind through files very quickly and determine if they are made to spec. Some of the new features announced at NAB include: wider format support (DTS audio, MKV container, etc.); more quality checks (film grain amount, audio track layout, etc.); closed caption support; verification efficiency improvements; audio loudness analysis and correction (CALM); new verification reports and high availability clustering.

Manzanita Systems  link

Manzanita is the gold standard for MPEG-2 multiplexing, demultiplexing and file analysis. Period. Everyone else, in my professional opinion, is measured against this yardstick. Yes, there are cheaper solutions. Yes, there are more integrated solutions. But the bottom line is if you want the tightest and most foolproof wrapper for your content (as long as it’s MPEG-2 Program Stream or Transport Stream – and now they’ve just added MP4) – well you know where to go. They are a relatively small company in Southern California – with a global reach in terms of clients and reputation. The CableLabs standard is what most VOD (VideoOnDemand) is built upon, and with their muxing software (and concomitant analyzer), Manzanita-wrapped content plays on just about any set top box in existence – no mean feat.

New things:  CrossCheck – a fast and lower-cost alternative to the full-blown Transport Stream Analyzer; an MP4 multiplexer; an Adaptive Transport Stream mux (for mobile and “TV Everywhere” apps); integration of DTS audio into Manzanita apps; and a peek at a highly useful tool (prototype at the show, not a full product yet):  a transport stream ‘stitcher/editor’ – basically allowing ad insertion/replacement without re-encoding the original elementary streams. Now that has some interesting applications. If you have a thorny MPEG-2 problem, and no amount of head-scratching is helping, then ask Greg Vines at Manzanita – he’s one of our industry’s resident wizards of this format – I can’t remember an issue he has not been able to shed some light on. But buy something from them, even the really inexpensive (but useful and powerful) MPEG-ID tool for your desktop – no one’s time is free… 🙂
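If you’ve never looked inside a transport stream, the format is surprisingly approachable: fixed 188-byte packets, each starting with sync byte 0x47, with a 13-bit PID identifying which stream a packet belongs to. A minimal sanity-checker (a real analyzer checks vastly more, of course; the filename is hypothetical):

```python
# Minimal MPEG-2 Transport Stream scan: verify sync bytes and count
# packets per PID (188-byte packets, sync byte 0x47)
def scan_ts(data: bytes):
    pids = {}
    for off in range(0, len(data) - 187, 188):
        pkt = data[off:off + 188]
        if pkt[0] != 0x47:
            raise ValueError(f"lost sync at offset {off}")
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit packet identifier
        pids[pid] = pids.get(pid, 0) + 1
    return pids

with open("movie.ts", "rb") as f:               # hypothetical input file
    print(scan_ts(f.read()))                    # packet count per PID
```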

MOG Solutions / MOG Technologies  link

MOG is a Portugal-based firm (MethodsObjectsGadgets). So now, in addition to world-class sailors and great port, we have excellent-quality MXF from this country of light and history. I’ve worked with them extensively in the past, and they make a small but powerful set of SDKs and applications for processing the MXF wrapper format. The core is based on the Component Suite from MOG Solutions – a set of SDKs that allow wrapping, unwrapping, editing and viewing of MXF containers. This underlying code is integrated into a number of well-known platforms, which extends the increasingly ubiquitous reach of this standards-based enabling technology.

As a new arm, MOG Technologies was formed to offer new products differentiated from the core code provided by the parent MOG Solutions. They have introduced the mxfSPEEDRAIL to offer SD/HD-SDI ingest, file-based ingest, digital delivery and real-time playback. The recorder handles multiple resolutions and formats – SD or HD at any normal frame rate – and natively encodes to QuickTime, Avid, MXF, AVC-Intra, DNxHD, DVCProHD, XDCAM/HD and ProRes, along with many other features. It also accepts files in any of those formats.

Panasonic  link

Panasonic offers a wide array of professional video products: cameras, monitors, memory cards, mixers & switchers, etc. Probably one of the more interesting products is the 3D camcorder, the AG-3DP1 – the ‘bigger brother’ of the original AG-3DA1. The newer model has a slightly better pixel count, but the main difference is the better recording format:  true AVC-Intra instead of AVCHD, and 10-bit 4:2:2 instead of 8-bit 4:2:0. It’s almost twice as expensive, but still very cheap for a full 3D rig ($35k). This camera has numerous limitations, but IF you understand how to shoot 3D, and you don’t break the rules… (the biggest issue with this camera is that you can’t get close to your subject, since the interaxial distance is fixed) you can make some nice stereo footage at a bargain price.
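Why does a fixed interaxial keep you away from your subject? The standard parallel-rig approximation makes it obvious: on-sensor parallax grows rapidly as the subject closes in. A quick sketch (the focal length and ~60mm interaxial below are my assumed round numbers, not Panasonic’s spec):

```python
# Parallel-rig stereo approximation: on-sensor parallax for an object at
# distance z, interaxial b, focal length f, convergence set at distance c
def parallax_mm(f_mm, b_mm, c_m, z_m):
    return f_mm * b_mm * (1.0 / (c_m * 1000) - 1.0 / (z_m * 1000))

b = 60.0    # assumed fixed interaxial, mm
f = 10.0    # assumed focal length, mm

print(parallax_mm(f, b, c_m=3.0, z_m=3.0))  #  0.0  -- at convergence: on screen
print(parallax_mm(f, b, c_m=3.0, z_m=1.0))  # -0.4  -- close subject: large
                                            # crossed parallax, hard to fuse
```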

With such a wide range of products, it’s impossible to cover everything here. Check this link for all their NAB announcements if you have a particular item of interest.

Photo Research  link

Photo Research is one of the best vendors of precision spectroradiometers – a must-have device for precision optical measurements of color monitors. While calibration probes and software can calibrate individual monitors of the same type, only a purely optical instrument can correlate readings and match devices (or rather, give you the information to accomplish this) across different technologies and brands – such as matching (to the best degree possible) CCFL LCD to LED LCD to plasma. If you are serious about color, you have to have one of these. My preference is the PR-655.

RED Digital Cinema  link

Well.. now we have arrived at one of the most popular booths at the show. The lines waiting to get in to the REDray projector went around the booth.. sometimes twice.. As has often been the case in the past, RED has not done incremental – they just blow the doors off… 4K playback in 3D by real lasers, in a small box that doesn’t cost the budget of a small African country?? And a home version as well (ok, still 4K but 2D). And of course they still make an awesome set of cameras… the Epic is, well, epic?? Of course, with almost obscene resolution and frame rates, copious data rates have forced RED to vertically integrate.. you now have RED ROCKET so you can transcode in real time (what ARE you going to do with 2 x 5K streams of RED RAW data squirting out of your 3D rig?… you did remember to bring your house trailer of hard disks, correct??) Anyway, just go to their site and drool, or if you have the backing and the chops, start getting your kit together…

RED camera

RED 3D (in 3ality rig)

RED skirt/bag

couldn’t resist…

rovi – MainConcept  link

Now owned by rovi, MainConcept has been one of the earliest, and most robust, providers of software codecs and other development tools for the compression industry. Founded in 1993, when MPEG-1 was still barely more than a dream, this company has provided codecs ‘under the hood’ for a large majority of compressed video products throughout this industry. Currently they have a product line that encompasses video, audio, muxing, 3D, transcoding, streaming, GPU acceleration and other technologies as SDKs – and applications/plug-ins for transcoding, decoding, conversion and codec enhancements for popular NLE platforms.

I personally use their Reference Engine professional transcoding platform (it does just about everything, including Digital Cinema at up to 4K resolution), the codec plug-in for Adobe Premiere, and various decoder packs. The Adobe plug-in will be updated to support CS6 by the time the new version of Premiere releases…

Schneider Optics  link

A great many years ago, when such wonderful (and now a bit archaic) technologies as 4×5 sheet film were the epitome of “maximum megapixels” [actually, the word pixel hadn’t even been invented yet!], the best way to get that huge negative filled up with tack-sharp light was a Schneider “Super Angulon” lens. I still remember the brilliance of my first few shots after getting one of those lenses for my view camera. (And BTW, there is not a digital camera out there, even today, that can approach the resolution of a 4×5 negative on Ilford Delta 100 stock… when I scan those I am pulling in 320 megapixels, which, even in 16-bit grayscale, is on the order of 500MB per frame.. almost sounds like RED data rates <grin>.) At the show, Dwight Lindsey was kind enough to share a detailed explanation of their really awesome little add-on lenses for the iPhone camera: they have a fisheye and a wide angle, and are just about to release a telephoto. These high-quality optics certainly add capability to what’s already a cool little camera. (For more info on the iPhone camera, look elsewhere on my blog – I write about this consistently.)
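For the curious, the film-scan arithmetic goes like this (the 4000dpi figure matches the scanner mentioned elsewhere in this series; actual file sizes vary with cropping and TIFF compression):

```python
# Megapixels and file size for a 4x5 sheet-film scan
w_in, h_in, dpi = 4, 5, 4000
pixels = (w_in * dpi) * (h_in * dpi)

print(f"{pixels/1e6:.0f} megapixels")               # 320 MP
print(f"~{pixels * 2 / 1e6:.0f} MB, 16-bit gray")   # ~640 MB before crop/compression
```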

Signiant  link

Signiant is one of the major players in enhanced file transfer. They have a range of products dedicated to highly secure file movement over global networks. They offer a combination of high speed (much better link utilization than plain TCP) and security strong enough to pass muster with the security/IT divisions of every major studio. They offer highly capable workflow engines – it’s not just a souped-up FTP box… Management, scheduling, prioritization, least-cost routing – all the bits that a serious networking tool should possess.

With the ability to transfer over the internet, or direct point-to-point, there is real flexibility. An endpoint can be as simple as a browser (yet still totally secure), or a dedicated enterprise server with 10Gb fiber connectivity. This toolset can be an enabler for CDNs, and automatically redirect content formats as required to fulfill anything from set top boxes to mobile devices.
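A word on why ‘better utilization than TCP’ matters so much for this class of product: classic TCP throughput is capped by window size divided by round-trip time, regardless of how fat the pipe is. The numbers below are illustrative:

```python
# The classic TCP ceiling: throughput <= window / RTT
window_bytes = 64 * 1024    # default-ish window without scaling
rtt_s = 0.150               # LA <-> Europe round trip, roughly

max_bps = window_bytes * 8 / rtt_s
print(f"~{max_bps/1e6:.1f} Mb/s ceiling")  # ~3.5 Mb/s, even on a 10 Gb/s link
```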

Snell link

Formerly Snell & Wilcox, since the merger with Pro-Bel the combined firm is now just Snell. A long-time front-runner in broadcast hardware, and a recognized world leader in television conversion products, Snell has continued to adapt to the new world order – which still, BTW, uses a LOT of baseband video! The conversion to pure IP is a ways off yet… and for real day-to-day production, SDI video is often king. The combined company now offers a wide range of products in such categories as control & monitoring; routing; modular infrastructure; conversion & restoration; production & master control switching; and automation/media-management. Too many things to go into detail – but do check out Momentum – announced at NAB:  their new unified platform for managing complex workflows across multiple screens.

SoftNI  link

Subtitling/closed captions. Done well. Their products handle basically the full range of world broadcast, cable, satellite, OTT, etc. distribution where there is a need to add subtitles or closed captions. The capabilities extend to theatrical, screening rooms, editorial, contribution, DVD, etc. etc. The software handles virtually all non-Arabic languages as well (African, Asian, European, Middle Eastern).

Sony  link

With probably the largest booth at NAB, Sony just boggles the mind with the breadth and depth of their offerings. Even though the consumer side of their house is a bit wilted at the moment, the professional video business is solid and a major part of so many broadcasters, post-production and other entities that it’s sort of like part of everyone’s DNA. There’s always a bit of Sony somewhere… HDCAM-SR? DigiBeta? XDCAM? and so on.. Here’s just a minuscule fraction of what they were showing:  compare the PMW-TD300 3D camcorder to the Panasonic discussed earlier. Very similar – it’s basically a 3D XDCAM/HD camera. And they are getting their toes in the water with UHD (UltraHighDefinition) via their 4K ‘stitch’ display technology. Had to laugh though: the spec sheet reads like a fashion magazine:  “Price upon Request” – in other words, just like buying haute couture from Dior, if you have to ask, you can’t afford it…

8K x 2K display! (2 x 4K displays stitched together...)

one of the displays above... the small red box is detailed below

close-up of red outline area above.. I focused as close as I could get with the iPhone - about 2" from the screen - you can just barely see the pixels. This is one sharp image!

Tektronix  link

Tektronix has been a long-standing provider of high-end test equipment, with their scopes, waveform monitors and other tools seen in just about every engineering shop, master control and OB van I’ve ever visited… but now, as with most of the rest of our industry, software tools and file-based workflows demand new toolsets. Their Cerify platform was the first really capable automated QC analysis platform, and it is still capable today. At the show, we saw upcoming improvements for v7 – the addition of audio loudness measurement and correction (in conjunction with Dolby technology), as well as other format and speed enhancements.

Telestream link

The company that brought us FlipFactory – aka the universal transcoding machine – has spread its wings a lot lately. While FF is still with us, the new Vantage distributed workflow system is like a tinkertoy set for grownups who need to make flexible, adaptable and reliable workflows quickly. With the addition of Anystream to the Telestream fold, there are now additional enterprise products, and Telestream’s own lineup has expanded as well to encompass Pipeline, Flip4Mac, Episode, etc. etc. A few of the NAB announcements included Vantage v4.0 (CALM act compliance, new acceleration technology, better video quality); Vantage HE Server (parallel transcoding for multiscreen delivery); and enhanced operations in Europe with the opening of a German branch.

Verizon Digital Media Services  link

VDMS is a big solution to a big problem. The internet, as wonderful as it is, handles small packets really, really well. Little bits of emails, even rather big spreadsheet attachments, zip around our globe in large numbers, with the reliability and fabric of many interconnected routers that make the ‘net what it is. But this very ‘fabric of redundancy’ is not so wonderful for video.. which is a data format that needs really fast, smooth pipes, with as few stops as possible. It’s not the size of the highway (those are big enough), but rather those toll stations every 5 miles that cause the traffic jams…

So… what do you do? If you own some of the largest private bandwidth on the planet, that is NOT the internet.. then you consider providing an OTT solution for the internet itself… This has the potential to be a game-changer, as video delivery – through a true unicast system – can be offloaded from the internet (not completely of course, but enough to make a big difference) and turned into a utility service.. well, kind of like what a phone company used to be… before landlines started disappearing, analog phones checked into museums, and pretty much everyone’s talking bits…

This could be one of our largest digital supply chains. Let’s see what happens…

VidCheck  link

There are several cool things in Bristol, UK (apart from the weather) – the British Concorde first flew from Filton (a suburb of Bristol) in April of 1969 (a bit after the French version – yes, this was the forerunner of Airbus, the original distributed supply chain… and hmmm…. seems like, as usual, the Brits and the French are still arguing…) – in more recent years this area has become a hub of high tech, and some smart engineers who cut their teeth on other automated QC tools started up VidCheck. In my view, this is a “2nd generation” toolset, and it brings some very interesting capabilities to the table.

This was one of the first tools as capable of fixing errors in a file as of finding them… It handles a very wide array of formats (both containers and codecs – it’s one of the few that processes ProRes). Another unique aspect of this toolset is the “VidApps” – plug-ins for common NLEs, so QC/fix becomes part and parcel of the editorial workflow. Particularly with audio, this simplifies and speeds up corrections. NAB announcements included:  the new “VidFixer” app – designed from the ground up to integrate into workflow automation systems as well. In addition, the VidChecker software is now part of the new Amberfin UQC (UnifiedQualityControl) platform – providing the same detailed and rapid analysis, but integrated into the Amberfin platform.

Wowza  link

We can never seem to get away from multiple formats. First we had NTSC and PAL, then VHS and Beta, then different types of HD, etc. etc. etc. Now we have tablets, smartphones, laptops, Dick Tracy video wristwatches, etc. – all of which want to display video – but in a universal format? No such luck… Silverlight, HLS, HTML5, Widevine, Flash, TS… enter Wowza, which is essentially a ‘transmuxer’ (instead of a transcoder) – since usually the elementary streams are fine, it’s the wrappers that are incompatible. So basically the Wowza server skins a potato and re-wraps it in a banana skin – so that what was a Flash file can now play on an iPad… of course there’s a bit more to it… but check it out – it just might save you some migraines…
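If the potato/banana metaphor needs grounding: re-wrapping without re-encoding is the same idea as a stream copy in ffmpeg – sketched here from Python (filenames hypothetical; this illustrates the general transmux concept, not Wowza’s implementation, and it only works when the target container supports the codecs, e.g. H.264/AAC from FLV into MP4):

```python
# Transmux concept: copy the elementary streams into a new container
# without re-encoding (fast, lossless -- no pixels are touched)
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "input.flv", "-c", "copy", "output.mp4"],
    check=True,
)
```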

and wow – you’ve stuck through till the end – congratulations for your perseverance! Here’s a bit of my trip back as well, in pictures…

Sort of like life: Easy / Hard

There’s always several ways to do a workflow…

end of the day - SHOES OFF!!

Convention is ending – and believe me it takes its toll on feet, legs, backs…

Sunset on Encore

Wynn reflected in Encore

the Starship Enterprise has landed...

Walking back from a meeting at the Palazzo…

heading back to hotel...

convention over: now 100,000 people all want to leave airport at the same time - no seats. nowhere...

your roll-on also makes a good seat...

finally, Southwest flight #470 is ready for take-off...

please put your legs in the upright and locked position to prepare for take-off...

It took almost as long to get luggage in Burbank as it did to fly from Vegas... waiting... waiting...

still waiting for luggage...

The Return

someone else we know and love came back from Vegas on our plane...

homeward bound...

That’s it! See you next year just after NAB in this same space. Thanks for reading.

A few comments on the iPhone camera posts…

April 13, 2012 · by parasam

I have just posted the third of my ongoing series of discussions on iPhone camera apps (Camera Plus Pro). Thanks to all of you around the world who have taken the time and interest to read my most recent post on Camera+. To date I have had about 7,500 views from 93 countries – that is a fantastic response! Please share with your friends:  according to the developers of Camera+, they have sold about 10 million copies of their app, so there are millions more of you out there who might like to have a manual for this app (which does not come with one) – my blog serves as at least a rudimentary guide for this cool app.

It’s taken several weeks to get this next one written; hopefully the rest will come a bit faster. Apps that have a lot of filters require more testing (and a lot of image uploading – the most recent post on Camera Plus Pro has 280 images!). BTW, I know this makes the posts a bit on the large side, and increases your download times. However, since the subject matter is comparing details of color, tonal values, etc., I feel that high-resolution images are required for the reader to gain useful information, so I believe the extra time is worth it.

Although this list is contained in my intro to the iPhone camera app software, here are the apps for which I intend to post analysis and discussions:

Still Imaging Apps:

  • Camera
  • Camera+
  • Camera Plus Pro
  • almost DSLR
  • ProHDR
  • Big Lens
  • Squareready
  • PhotoForge2
  • Snapseed
  • TrueDoF
  • OptimumCS-Pro
  • Iris Photo Suite
  • Filterstorm
  • Genius Scan+
  • Juxtaposer
  • Frame X Frame
  • Phonto
  • SkipBleach
  • Monochromia
  • MagicShutter
  • Easy Release
  • Photoshop Express
  • 6×6
  • Camera!

Motion Imaging Apps:

  • Movie*Slate
  • Storyboard Composer
  • Splice
  • iTC Calc
  • FilmiC Pro
  • Camera
  • Camera Plus Pro
  • Camcorder Pro

The above apps are selected only because I use them. I am not a professional reviewer, have no relationship with any of the developers of the above apps, am not paid or otherwise motivated externally. I got started on this little mission as I love photography, science, and explaining how things work. At first, I just wanted to know what made the iPhone tick… as my article on the hardware explains, that was more of a mission than I had counted on… but fun! I then turned to the software that makes the hardware actually do something useful… and here we are.

The choice of apps is strictly personal – this is just what I have found useful to me so far. I am sure there are others that are equally as good for others – and I will leave it to those others to discuss. It’s a big world – lots of room for lots of writing… Undoubtedly I will add things from time to time, but this is a fair list to start with!

Readers like you (and so many thanks to those that have commented!) are what brings me back to the keyboard. Please keep the comments coming. If I have made errors, or confused you, please let me know so I can correct that. Blogs are live things – continually open to reshaping.

Thanks!

iPhone4S – Section 4c: Camera Plus Pro app

April 13, 2012 · by parasam

This is a similar app to Camera+ (but not made by the same developers). (This version costs $1.99 at the time of this post – $1 more than Camera+). It’s similar in design and function. The biggest differences are:

  • Ability to tag photos
  • More setup options on selections (self-timer, burst mode, resolution, time lapse, etc.)
  • More sharing options
  • Ability to add date and copyright text to photo
  • A ‘Quick Roll’ (light table type function) has 4 ‘bins’ (All, Photos, Video, Private – can be password protected)
  • Can share photos via WiFi or FTP
  • Bing search from within app
  • Separate ‘Digital Flash’ filter with 3 intensity settings
  • Variable ‘pro’ adjustments in edit mode (Brightness, Saturation, Hue, Contrast, Sharpness, Tint, Color Temperature)
  • Different filters than Camera+, including special ‘geometric distortion’ filters
  • Quick Roll design for selecting which photos to Edit, Share, Sync, Tag, etc.
  • Still Camera Functions  [NOTE: the Video Camera functions will be discussed separately later in this series when I compare video apps for the iPhone]
    • Ability to split Focus area from Exposure area
    • Can lock White Balance
    • Flash: Off/On (for the 4 & 4S); this feature changes to “Soft Flash” for iPhone 3GS and Touch 4G.
    • Front or Rear camera selection
    • Digital Zoom
    • 4 Shooting Modes: Normal/Stabilized/Self-Timer/Burst (part of the below Photo Options menu)
    • Photo options:
      • Sound On/Off
      • Zoom On/Off
      • Grid Lines On/Off
      • Geo Tags On/Off
      • SubMenu:
        • Tags:  select and add tags from list to the shot; or add a new tag
        • Settings:  a number of advanced settings for the app
          • Photos:
            • Timer (select the time delay for self-timer: 2-10 seconds in 1-second increments)
            • Burst Mode (select the number of pictures taken when in burst mode: 3-10)
            • Resolution (Original [3264×2448]; Medium [1632×1224]; Low [816×612]) – NOTE: these resolutions are for the iPhone4S; each hardware model supported by this app has a different set of resolutions determined by its sensor. Essentially it is Full, Half and Quarter resolution. The exact numbers for each model are in the manual.
            • Copyright (sets the copyright text and text color)  [note: this is a preset – the actual ‘burn in’ of the copyright notice into the image is controlled during Editing]
            • Date (toggle date display on/off; set date format; text color)
        • Videos (covered in later section)
        • Private Access Restriction (Set or Change password for the Private bin inside the Quick Roll)
        • Tags (edit, delete, add tag names here)
        • Share (setup and credentials for social sharing services are entered here):
          • Facebook
          • Twitter
          • Flickr
          • Picasa
          • YouTube (for videos)
        • Review (a link to review the app)
      • Info:
        • Some ‘adware’ is here for other apps from this vendor, and a list of FAQs, Tips, Tricks (all of which are also in the manual available for download as a pdf from here)
  • Live Filters:
    • A set of 18 filters that can be applied before taking your shot, as opposed to adding a filter after the shot during Editing.
      • BW
      • Vintage
      • Antique
      • Retro
      • Nostalgia
      • Old
      • Holga
      • Polaroid
      • Hipster
      • XPro
      • Lomo
      • Crimson
      • Sienna
      • Emerald
      • Bourbon
      • Washed
      • Arctic
      • Warm
    • A note on image quality using Live Filters. A bit more about the filters will be discussed below when we dive into the filter details, but some test shots using various Live Filters show a few interesting things:
      • The pixel resolution stays the same whether the filter is on or off (3264×2448 in the case of the iPhone4S).
      • While the Live Filter function is fully active during preview of an image, once you take the shot there is a delay of about 3 seconds while the filtering is actually applied to the image. Some moving icons on the screen notify the user. Remember that the screen is 960×640 while the full image is 3264×2448 (13 X larger!) so it takes a few seconds to filter all those additional pixels.
      • This does mean that when using Live Filter you can’t use Burst Mode (it is turned off when you turn on a Live Filter), and you can’t shoot that rapidly.
      • Although the pixel dimensions are unchanged, the size of the image file is noticeably smaller when using Live Filters than when not. This can only mean that the jpeg compression ratio is higher (same amount of input data; smaller output data; compression ratio mathematically must be higher).
      • I first noticed this when I went to email myself a full resolution image from my phone to my laptop [faster for one or two pix than syncing with iTunes] as I’m researching for this blog – the images were on average 1.7MB instead of the 2.7MB average for normal iPhone shots.
      • I tested against four other camera apps, including the native Camera app from Apple, and all of them delivered images averaging 2.7MB per image.
      • I then tested this app (Camera Plus Pro) in unfiltered mode, and the size of the output file jumps up to an average of 2.3MB per image. Not as high as most of the others, but 35% larger than the filtered files – which means a correspondingly higher compression ratio when Live Filters are used (a quick sanity check of this arithmetic appears just after this feature list). I’ll run some more objective tests during the filter analysis section below, but both in file size and visual observation, the filtered images appear more highly compressed.
      • This does not mean that a more compressed picture is inferior, or softer, etc. – it is highly dependent on subject material, lighting, etc. But, what is true is that a more highly compressed picture will tend to show artifacts more easily in difficult parts of the frame than will the same image at a lower compression ratio.
      • Just all part of my “Know Your Tools” motto…
      • Edit Functions
        • Crop
          • Freeform (variable aspect ratio)
          • Square (1:1 aspect ratio)
          • Rectangular (2:3 aspect ratio) [portrait]
          • Rectangular (3:2 aspect ratio) [landscape]
          • Rectangular (4:3 aspect ratio) [landscape]
  • Rotation
    • Flip Horizontal
    • Right
    • Left
    • Flip Vertical
  • Digital Flash [a filter that simulates flash illumination]
    • Small
    • Medium
    • Large
  • Adjust [image parameter adjustments]
    • Brightness
    • Saturation
    • Hue
    • Contrast
    • Sharpness
    • Tint
    • Color Temperature
  • Effects
    • Nostalgia – 9 ‘retro’ effects
      • Coffee
      • Retro Red
      • Vintage
      • Nostalgia
      • Retro
      • Retro Green
      • 70s
      • Antique
      • Washed
    • Special – 9 custom effects
      • XPro
      • Pop
      • Lomo
      • Holga
      • Diana
      • Polaroid
      • Rust
      • Glamorize
      • Hipster
    • Color – 9 tints
      • Black & White
      • Sepia
      • Sunset
      • Moss
      • Lucifer
      • Faded
      • Warm
      • Arctic
      • Allure
    • Artistic – 9 special filters
      • HDR
      • Fantasy
      • Vignette
      • Grunge
      • Pop Art
      • GrayScale
      • Emboss
      • Xray
      • Heat Signature
    • Distortion – 9 geometric distortion (warping) filters
      • Center Offset
      • Pixelate
      • Bulge
      • Squeeze
      • Swirl
      • Noise
      • Light Tunnel
      • Fish Eye
      • Mirror
  • Borders
    • Original (no border)
    • 9 border styles
      • Thin White
      • Rounded Black
      • Double Frame
      • White Frame
      • Polaroid
      • Stamp
      • Torn
      • Striped
      • Grainy
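
As promised above, here’s a quick sanity check of the Live Filter compression arithmetic – same pixel dimensions, smaller file, therefore a higher JPEG compression ratio (the uncompressed baseline assumes 24-bit RGB):

```python
# Compression ratios implied by the observed file sizes
w, h = 3264, 2448
raw_mb = w * h * 3 / 1e6    # uncompressed 24-bit RGB: ~24 MB

for label, file_mb in [("Live Filter", 1.7), ("unfiltered", 2.3), ("other apps", 2.7)]:
    print(f"{label}: ~{raw_mb / file_mb:.0f}:1")
# Live Filter ~14:1, unfiltered ~10:1, other apps ~9:1
```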

Camera Functions

[Note:  Since this app has a manual available for download that does a pretty fair job of describing the features and how to access and use them, I will not repeat that information here. I will discuss and comment on the features where I believe this will add value to my audience. You may want to have a copy of the manual available for clarity while reading this blog.]

The basic use and function of the camera is addressed in the manual, what I will discuss here are the Live Filters. I have run a series of tests to attempt to illustrate the use of the filters, and provide some basic analysis of each filter to help the user understand how the image will be affected by the filter choice. The resolution of the image is not reduced by the use of a Live Filter – in my case (testing with iPhone4S) the resultant images are still 3264×2448 – native resolution. There are of course the effects of the filter, which in some cases can reduce apparent sharpness, etc.

A note on my testing procedure:  In order to present a uniform set of comparison images to the reader, and have them be similar to my standard test images, the following steps were taken:

Firstly:  my standard test images that I use to analyze filters/scenes/etc for any iPhone camera app consists of two initial test images:  a technical image (calibrated color and grayscale image), and a ‘real-world’ image – a photo I shot of a woman in the foreground with a slightly out-of-focus background. The shot has a wide range of lighting, color, a large amount of skin tone for judging how a given filter changes that important parameter, and a fairly wide exposure range.

 The original source for the calibration chart was a precision 35mm slide (Kodak Q60, Ektachrome) that was scanned on a Nikon Super Coolscan 5000ED using Silverfast custom scanner software. The original image was scanned at 4000dpi, yielding a 21megapixel image sampled at 16bits per pixel. This image was subsequently reduced in gamut (from ProPhotoRGB to sRGB) and size (to match the native iPhone4S resolution of 3264×2448) and bit depth (8bits per pixel) . The image processing was performed using Photoshop CS5.5 in a fully color-calibrated workflow.

The source for the ‘real-world’ image was initially captured using a Nikon D5000 DSLR fitted with a Nikkor 200mm f/2.8 prime lens (providing an equivalent focal length of 300mm compared to full-frame 35mm – the D5000 has a 2/3-size sensor [4288×2848]). The exposure was 1/250 sec @ f5.6 using camera raw format – no compression. That camera body captures in sRGB color space, and although it outputs a 16-bit per pixel format, the sensor is really not capable of anything more than 12 bits in a practical sense. The image was processed in Photoshop CS5.5 in a similar manner as above to yield a working image of 3264×2448, 8 bits per pixel, sRGB.

These image pairs are what are used throughout my blog for analyzing filters, by importing into each camera app as a file.

For this test of Live Filters, I needed to actually shoot with the iPhone, since there is no way using this app to apply the Live Filters to a pre-existing image. To replicate the images discussed above as closely as possible, the following procedure was used:

For the calibration chart, the same source image was used (Kodak Q60), this time as a precision print in 4″x5″ size. These prints were manufactured by Kodak under rigidly controlled processes and yield a highly accurate reflective target. (Most unfortunately, with the demise of Kodak, and film/print processing in general, these are no longer available. Even with the best of storage techniques, prints will fade and become inaccurate for calibration. It will be a challenge to replace these…)  I used my iPhone4S to make the exposures under controlled lighting (special purpose full-spectrum lighting set to 5000°K).

For the ‘real-world’ image, I wanted to stay with the same image of the woman for uniformity, as it provides a good range of test values. To accomplish that (and be able to take the pictures with the iPhone) was challenging, since the original shot was impossible to duplicate in real life. I started with the same original high-resolution image (in Photoshop) in its original 16-bit, high-gamut format. I then printed that image using a Canon fine-art inkjet printer (Pixma Pro 9500 MkII), using a 16-bit driver, onto high-quality glossy photo paper at a paper size of 13″ x 19″. At a print density of 267dpi, this yielded an image of over 17 megapixels when printed. The purpose was to ensure that no subsampling of printed pixels would occur when photographed by the 8-megapixel sensor in the iPhone [Nyquist sampling theory demands a minimum of 2x sampling – 16 megapixels in this case – to ensure that]. I photographed the image with the same controlled lighting as used above for the calibration chart. I made one adjustment to each image for normalization purposes:  I mapped the highest white level in the photograph (the clipped area on the subject’s right shoulder – which was pure white in the original raw image) to just reach pure white in the iPhone image. This matched the tonal range for each shot, and made up for the fact that even a lot of light in the studio wasn’t enough to fully saturate the tiny iPhone sensor. No other adjustments of any kind were made. [This adjustment was carried out by exporting the original iPhone image to Photoshop to map the levels.]
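The Nyquist bookkeeping from that paragraph, spelled out:

```python
# Print resolution vs. the 2x Nyquist requirement for the iPhone 4S sensor
print_w_in, print_h_in, dpi = 13, 19, 267
printed_px = (print_w_in * dpi) * (print_h_in * dpi)
sensor_px = 3264 * 2448

print(f"printed: {printed_px/1e6:.1f} MP")               # ~17.6 MP
print(f"needed:  {2*sensor_px/1e6:.1f} MP (2x sensor)")  # ~16.0 MP
print(printed_px >= 2 * sensor_px)                       # True -- no subsampling
```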

While even further steps could have been taken to make the process more scientifically accurate, the purpose here is one of relative comparison, not absolute measurement, so I feel the steps taken are sufficient for this exercise.

The Live Filters:

Live Filter = BW

Live Filter = BW

The BW filter provides a monochrome adaptation of the original scene. It is a high-contrast filter; this can clearly be seen in the test chart, where columns 1-3 are solid black, as are all grayscale chips from 19-22. Likewise, on the highlight end of the scale, chips 1-3 have no differentiation. The live image shows this as well, with strong contrast throughout the scene.

Live Filter = Vintage

Live Filter = Vintage

The Vintage filter is a warming filter that adds a reddish-brown cast to the image. It increases the contrast somewhat (not nearly as much as the previous BW filter) – this can be seen in the chart in the area of columns 1-2 and rows A-J. The white and black ends of the grayscale are likewise compressed. Any cool pastel colors either turn white or a pale warm shade (look at columns 9-11). The live image shows these effects; note particularly how the man’s blue shirt and shorts change color remarkably. The increase in contrast, coupled with the warming tint, does tend to make skin tones blotchy – note the subject’s face and chest.

Live Filter = Antique

Live Filter = Antique

The Antique filter offers a large amount of desaturation, a cooling of what color remains, and an increase in contrast. Basically, only pinks and navy blues remain in the color spectrum, and the chart shows the clipping of blacks and whites. The live image shows very little saturation, only some dark blue remains, with a faint pink tinge on what was originally the yellow sign in the window.

Live Filter = Retro

Live Filter = Retro

The Retro filter attempts to recreate the look of cheap film cameras of the 1960s and 1970s. These low-quality cameras often had simple plastic lenses, light leaks due to imperfect fit of components, etc. The noticeable chromatic aberrations of the lens and other optical ‘faults’ have now seen a resurgence as a style, and that is emulated with digital filters in this and the others shown below. This particular filter shows a general warming, but with a pronounced red shift in the lowlights. This is easily observable in the grayscale strip on the chart.

Live Filter = Nostalgia

Live Filter = Nostalgia

Nostalgia offers another variation on the early low-cost film camera ‘look and feel’. As opposed to the strong red shift in the lowlights of Retro, this filter shifts the lowlights to blue. There is also an increase in saturation of both red and blue – notice in the chart that the green column, #18, hardly changes in saturation from the original, while the reds and blues show noticeable increases, particularly in the lowlights. The highlights have a general warming trend, shown in the area bounded by columns 13-19 and rows A-C. The live shot shows the strong magenta/red shift that this filter causes on skin tones.

Live Filter = Old

Live Filter = Old

The Old filter applies significant shifts to the tonal range. It’s not exactly a high-contrast filter, although that result is apparent in the ratio of the highlight brightness to the rest of the picture. There is a strong overall reduction in brightness – in the chart all differentiation is lost below chip #16. There is also desaturation; this is more obvious when studying the chart. The highlights, like many of these filter types, are warmed toward the yellow spectrum.

Live Filter = Holga

Live Filter = Holga

The Holga filter is named after the all-plastic camera of the same name, introduced in Hong Kong in 1982. A 120-format roll-film camera, its name comes from the phrase "ho gwong" – meaning 'very bright' – which the marketing people twisted into HOLGA. The actual variations show a warming in the highlights and a cooling (blue) in the lowlights. The contrast is also increased. In addition, as with many of the Camera Plus Pro filters, there is a spatial element as well as the traditional tonal and chromatic shifts: in this case a strong red tint in one corner of the frame. My tests appear to indicate that the placement of this tint (which corner) is randomized, but its actual shape is relatively consistent. Notice that in the chart the overlay is in the upper right corner, while in the live shot it moved to the lower right. There is also desaturation, noticeable in her skin as well as in the central columns of the chart.

Live Filter = Polaroid

Live Filter = Polaroid

The Polaroid filter mimics the look of one of the first 'instant gratification' cameras – the forerunner of digital instant photography. The PLC look (Polaroid Land Camera) was contrasty with crushed blacks, tended toward blue in the shadows, and had slightly yellowish highlights. This particular filter has a pronounced magenta shift in the skin tones that is not readily apparent from the chart – one of the reasons I always use these two different types of test images.

Live Filter = Hipster

Live Filter = Hipster

The Hipster filter effect is another of the digital memorials to the original Hipstamatic camera – a cheap all-plastic 35mm camera that shot square photos. Modeled on a low-cost Russian camera, it was invented by two brothers who produced only 157 units; the camera cost $8.25 when it was introduced in 1982. With a hand-molded plastic lens, this camera was another of the "Lo-Fi" group of older analog film cameras whose 'look' has once again become popular. The CameraPlusPro version shows pronounced red in the midtones and crushed blacks (see columns 1-2 in the chart and chips #18 and below), along with increased contrast and saturation. In my personal view, this look is harsher and darker than the actual Hipstamatic film look, which tended toward raised blacks (a common trait of cheap film cameras: the backs always leaked a bit of light, so a low-level 'fog' of the film base tended to raise deep blacks [areas of no light exposure in a negative] to a dull gray), a softer look (lower contrast due to raised blacks) and brighter highlights. But that's purely a personal observation; the naming of filters is arbitrary at best, which is why I like to 'look under the hood' with these detailed comparisons.

Live Filter = XPro

Live Filter = XPro

The XPro filter as manifested by the CameraPlusPro team looks very similar to their Nostalgia version, but the XPro has highlights that are more white than the yellow of Nostalgia. The term XPro comes from ‘cross-process’ – what happens when you process film in the wrong developer, for instance developing E-6 transparency film in C-41 color negative chemistry. The effects of this process are highly random, although there is a general tendency towards high contrast, unnatural colors, and staining. In this instance, the whites are crushed a bit, blacks tend blue, and contrast is raised.

Live Filter = Lomo

Live Filter = Lomo

The Lomo filter effect is designed to mimic some of the photographic style produced by the original LOMO PLC camera company of Russia (Leningrad Optical Mechanical Amalgamation) – specifically its low-cost automatic 35mm film camera. While still in production today, this and similar cameras account for only a fraction of LOMO's output – the bulk is military and medical optical systems, and those are world class… Due to the low cost of components and production methods, the LOMO camera exhibited frequent optical defects in imaging, color tints, light leaks, and other artifacts. While anathema to professional photographers, a world-wide community has sprung up that appreciates the quirky effects of this (and other so-called "Lo-Fi", or low-fidelity) cameras. Hence the Lomo filter…

This particular instance shows increased contrast and saturation, warming in the highlights, green midtones, and, like some other CameraPlusPro filters, an added spatial effect (the red streak – again randomized in location, it shows in the upper left in the chart and the lower right in the live shot). [Pardon the pilot error: the soft focus of the live shot was due to faulty autofocus on that iPhone shot – but I didn't notice it until compositing the comparison shots several days later, and didn't have the time to reset the environment and reshoot for one shot. I think the important issues can still be discerned in spite of that, but I did not want my readers to assume that soft focus was part of the filter!]

Live Filter = Crimson

Live Filter = Crimson

The Crimson filter is, well, crimson! A bit overstated for my taste, but if you need a filter to make your viewers think of “The Shining” then this one’s for you! What more can I say. Red. Lots of it.

Live Filter = Sienna

Live Filter = Sienna

The Sienna filter always makes me think of my early art school days, when my well-meaning parents thought I needed to be exposed to painting… (Burnt sienna is a well-known oil pigment, an iron oxide derivative that is reddish-brown. My art instructor said "think tree trunks".) Alas, it didn't take me (or my instructor) long to learn that painting with oils and brushes was not going to happen in this lifetime. Fortunately I discovered painting with light shortly after that, and I've been in love with the camera ever since. The Sienna as shown here is colder than the pigment – a somewhat austere brown. The brown tint is more evident in the lowlights; the whites warm up just slightly. As in many of the CameraPlusPro filters, the blacks are crushed, which creates an overall look of higher contrast even if the midtone and highlight contrast levels are unchanged (look at the grayscale in the chart). There is also an overall desaturation.

Live Filter = Emerald

Live Filter = Emerald

Emerald brings us, well, green… along with what should now be familiar:  crushed blacks, increased contrast, desaturation.

Live Filter = Bourbon

Live Filter = Bourbon

The Bourbon filter resembles the Sienna filter, but has a decidedly magenta cast in the shadows, while the upper midtones are yellowish. The lowered saturation is another common trait of the CameraPlusPro filters.

Live Filter = Washed

Live Filter = Washed

The Washed filter actually looks more like 'unwashed' print paper to me… Let me explain: before the world of digits descended on photography, during the print process (well, this applies to film as well, but the effect is much better known in printing), after developing, stopping and fixing, you need to wash the prints. Really, really well – for a long time, like 30-45 minutes under flowing water. This is necessary to wash out almost all of the residual thiosulfate fixing chemical; if you don't, your prints will age prematurely, showing bleaching and staining due to the slow annihilation of elemental silver in the emulsion by the remaining thiosulfate. The prints end up yellowed and a bit faded, in an uneven manner. In this digital approximation, the biggest difference is (as usual for this filter set) the crushed blacks. In the chemical world, just the opposite would occur, as the blacks in a photographic print have the highest accumulation of silver crystals (which block light or cover up the white paper underneath). The other attributes of this particular filter are strongly yellowed highlights, lowlights that tend to blue, increased contrast and raised saturation.

Live Filter = Arctic

Live Filter = Arctic

This Arctic filter looks cold! Unlike the true arctic landscape (which is subtle but has an amazing spectrum of colors), this filter is actually a tinted monochrome: the image is first reduced to black and white, then tinted with a cold blue. This is very clear from the chart. It's an effect.

Live Filter = Warm

Live Filter = Warm

After looking so cold in the last shot, our subject is better when Warm. Slightly increased saturation and a yellow-brown cast to the entire tonal range are the basic components of this filter.

Edit Functions

This app has 6 groups of edit functions: Crop, Rotate, Flash, Adjust, Filters and Borders. The first two are self-evident, and are more than adequately explained in the manual. The "how-to" of the remaining functions I will also leave to the manual; what will be discussed here are examples of each variable in the remaining four groups.

Flash – also known as "Digital Flash" – is a filter designed to brighten an overly dark scene. Essentially, this filter attempts to bring the image levels up to what they might have been if a flash had been used to take the photograph initially. As always, this is a 'best effort' – nothing can take the place of a correct exposure in the first place. The most frequent side effects of this type of filter are increased noise in the image (since the image was dark in the first place – and therefore already carried substantial noise due to the nature of CCD/CMOS sensors – raising the brightness level also raises the appearance of that noise) and white clipping of those areas of the picture that did receive normal, or near-normal, illumination.

This app supports 3 levels of ‘flash’ [brightness elevation] – I call this ‘shirt-sizing’ – S, M, L.  Below are 4 screen shots of the Flash filter in action: None, Small, Medium, Large.

This filter attempts to be somewhat realistic – it is not just an across-the-board brightness increase. For instance, objects that are very dark in the original scene (such as her handbag or the interior revealed by the doorway in the rear of the scene) are only increased slightly in level, while midtones and highlights are raised much more substantially.
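A simple way to approximate this behavior is a gamma-style lift, which brightens the image while pinning true black at zero, with the absolute lift peaking in the midtones. The sketch below is my own guess at the mechanism – the per-size gamma values are illustrative, not the app's actual numbers.

```python
# A rough sketch of a shadow-preserving "digital flash".
# The S/M/L gamma values are assumptions, not the app's real settings.
from PIL import Image
import numpy as np

FLASH_GAMMA = {"S": 0.85, "M": 0.70, "L": 0.55}  # assumed strengths

def digital_flash(img: Image.Image, size: str = "M") -> Image.Image:
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    # A gamma < 1 brightens while leaving 0 at 0, so near-black regions
    # (the handbag, the dark doorway) move far less than the midtones.
    out = rgb ** FLASH_GAMMA[size]
    return Image.fromarray((np.clip(out, 0, 1) * 255).astype(np.uint8))
```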

Flash: Original

Flash: Small / Medium / Large

Adjust – there are 7 sub-functions within the Adjust edit function: Brightness, Saturation, Hue, Contrast, Sharpness, Tint and Color Temperature. Each function has a slider that is initially centered; moving it left reduces the named parameter, moving it right increases it. Once moved off the zero center position, a small "x" on the upper right of the associated icon can be tapped to return the slider to the middle position, effectively turning off any changes. Examples for each of the sub-functions are shown below.
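The centered-slider convention is easy to express in code: the slider value maps to an enhancement factor where the center position is the identity (no change). A minimal sketch using Pillow's ImageEnhance, shown for brightness and saturation only – the function name and the [-1, +1] slider range are mine, not the app's API:

```python
# A sketch of centered sliders: 0 means "no change", negative reduces
# the parameter, positive increases it. Names here are hypothetical.
from PIL import Image, ImageEnhance

def adjust(img: Image.Image, brightness: float = 0.0,
           saturation: float = 0.0) -> Image.Image:
    # Map a slider in [-1, +1] to an enhancement factor in [0, 2],
    # so the centered position (0.0) yields factor 1.0 -- the identity.
    img = ImageEnhance.Brightness(img).enhance(1.0 + brightness)
    img = ImageEnhance.Color(img).enhance(1.0 + saturation)
    return img
```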

Brightness: Minimum / Original / Maximum

Saturation: Minimum / Original / Maximum

Hue: Minimum / Original / Maximum

Contrast: Minimum / Original / Maximum

Sharpness: Minimum / Original / Maximum

Tint: Minimum / Original / Maximum

Color Temperature: Cooler / Original / Warmer

Filters – There are 45 image filters in the Edit section of the app. Some of them are similar or identical in function to the filters of the same name that were discussed in the Live Filter section above. These are contained in 5 groups: Nostalgia, Special, Colorize, Artistic and Distortion. The examples below are similar in format to the presentation of the Live Filters. The source images for these comparisons are imported files (see the note at the beginning of this section for details).

Nostalgia filters:

Nostalgia filter = Coffee

Nostalgia filter = Coffee

The Coffee filter is rather well-named: it looks like your photo had weak coffee spread over it! You can see from the chart that, as usual for many of the CameraPlusPro filters, increased contrast, crushed blacks and desaturation form the base on which a subtle warm-brown cast is overlaid. The live example shows the increased contrast around her eyes, and the skin tones of both the woman and the man in the background have tended toward pale brown as opposed to the original red/yellow/pink.

Nostalgia filter = Retro Red

Nostalgia filter = Retro Red

The Retro Red filter shows increased saturation and a red tint across the board (highlights and lowlights), and leaves the contrast largely unaltered – note that the steps in the grayscale remain mostly discernible, although there is a slight blending/clipping of the top highlights. The overall brightness levels are raised from the midtones through the highlights.

Nostalgia filter = Vintage

Nostalgia filter = Vintage

The Vintage filter here in the Edit portion of the app is very similar to the filter of the same name in the Live Filter section. The overall brightness appears higher, but some of that may be due to the different process of shooting with a live filter versus applying a filter in post-production. This is more noticeable in the live shot than in the charts – a comparison of the "Vintage" filter test charts from the Live Filter section and the Edit section shows almost a dead match. This is a warming filter that adds a reddish-brown cast to the image. It increases the contrast somewhat – this can be seen in the chart in the area of columns 1-2 and rows A-J. The white and black ends of the grayscale are likewise compressed. Any cool pastel colors either turn white or a pale warm shade (look at columns 9-11). The live image shows these effects; note particularly how the man's blue shirt and shorts change color remarkably. The increase in contrast, coupled with the warming tint, does tend to make skin tones blotchy – note the subject's face and chest.

Nostalgia filter = Nostalgia

Nostalgia filter = Nostalgia

The Nostalgia filter, like Vintage above, is basically the same filter as the instance offered in the Live Filter section. The main difference is that the Live Filter version is more magenta and a bit darker than this one. Also, the cyans tend green more strongly in this version of the filter – check out columns 12-13 in the chart. Some increased contrast, pronounced yellows in the highlights and increased red/blue saturation are also evident.

Nostalgia filter = Retro

Nostalgia filter = Retro

The Retro filter, as in the version in the Live Filter section, attempts to recreate the look of cheap film cameras of the 1960s and 1970s. These low-quality cameras often had simple plastic lenses, light leaks due to imperfect fit of components, etc. The noticeable chromatic aberrations of the lens and other optical 'faults' have now seen a resurgence as a style, emulated by this and other digital filters. This particular filter shows a general warming, but with a pronounced red shift in the lowlights. This is easily observable in the grayscale strip on the chart.

Nostalgia filter = Retro Green

Nostalgia filter = Retro Green

The Retro Green filter is a bit of a twist on Retro, with some of Nostalgia thrown in (yes, filter design is a lot like cooking with spices…). The lowlights are similar to Nostalgia, with a blue cast; the highlights show the same yellows as both Retro and Nostalgia; the big difference is in the midtones, which are now strongly green.

Nostalgia filter = 70s

Nostalgia filter = 70s

The 70s filter gives us some desaturation, no change in contrast, a red shift in the midtones and lowlights, and a yellow shift in the highlights.

Nostalgia filter = Antique

Nostalgia filter = Antique

The Antique filter is similar to the Antique Live Filter, but is much lighter in terms of brightness. There is a large degree of desaturation, some increase in contrast, significant brightness increase in the highlights, and very slight color shifts at the ends of the grayscale:  yellow in the highlights, blue in the lowlights.

Nostalgia filter = Washed

Nostalgia filter = Washed

The Washed filter here in the Edit section is very different from the filter of the same name in Live Filters. The only real similarity is the strongly yellowed highlights. This filter, like many of the others we have reviewed so far, has a much lighter look (brightness levels raised), a very slight magenta shift, slightly increased contrast, enhanced blues in the lowlights and some increase in cyan in the midtones.

Special filters:

Special filter = XPro

Special filter = XPro

The XPro filter in the Edit functions has a different appearance than the filter of the same name in Live Filters. This instance of the digital emulation of a ‘cross-process’ filter is less contrasty, less magenta, and has more yellow in the highlights. The chart shows the yellows in the highlights, blues in the lowlights, and increased saturation. The live shot reveals the increased white clipping on her dress (due to increased contrast), as well as the crushed blacks (notice the detail of the folds in her leather handbag are lost).

Special filter = Pop

Special filter = Pop

The Pop filter brings the familiar basic tonal adjustments (increased contrast, with crushed whites and blacks, and an overall increase in midtone and highlight brightness levels), but this time the lowlights have a distinct red/magenta cast, with midtones and highlights tending greenish/yellow. This is particularly evident in the live shot – look at the black doorway in the original, which is now very reddish in the filtered shot.

Special filter = Lomo

Special filter = Lomo

The Lomo filter here in the Edit area is rather different from the same-named filter in Live Filters. This particular instance shows increased contrast and saturation, yellowish warming in the highlights, and, like some other CameraPlusPro filters, an added spatial effect: the red splotch. In this example the red tint falls in the same lower right corner for both the chart and the woman – if the placement is random, then that is just coincidence – but it makes it look like the lowlights in the grayscale chart are pushed hard to red. Not so; that's simply where the red tint overlay landed this time. Look at the top of her handbag in the live shot to see that the blacks are not actually shifted red. As with many other CameraPlusPro filters, the whites and blacks are crushed somewhat – you can see on her dress how the highlights are now clipped.

Special filter = Holga

Special filter = Holga

The Holga filter is one where there is a marked similarity between the Live Filter and this instance as an Edit Filter. This version is lighter overall, with a more greenish-yellow cast, particularly in the shadows. The vignette effect is stronger in this Edit filter as well.

Special filter = Diana

Special filter = Diana

The Diana filter is another 'retro camera' effect: based on, wow – surprise, the Diana camera… another of the cheap plastic cameras prevalent in the 1960s. The vignetting, light leaks, chromatic aberrations and other side effects of a $10 camera have been brought into the digital age. In a similar fashion to several of the previous 'retro' filters discussed already, you will notice crushed blacks and highlights, increased contrast, odd tints (in this case unsaturated highlights tend yellow) and increased saturation of colors – with a slight twist in this filter: even monochrome areas become tinted, so the silver pendant on her chest now takes on a greenish/yellow tint.

Special filter = Polaroid

Special filter = Polaroid

The Polaroid filter here in the Edit section resembles the effects of the same filter in Live Filters in the highlights (tends yellow with some mild clipping), but diverges in the midtones and shadows. Overall, this instance is lighter, with much less magenta shift in the skin tones. The contrast is not as high as in the Live Filter version, and the saturation is a bit lower.

Special filter = Rust

Special filter = Rust

The Rust filter is really very similar to old-style sepia printing:  this is a post-tint process to a monochrome image. In this filter, the image is first rendered to a black & white image, then colorized with a warm brown overlay. The chart clearly shows this effect.
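This 'monochrome-then-tint' recipe (shared by Rust, Sepia and Arctic below) is about the simplest filter family to express in code. A minimal sketch follows; the rust-brown tint value is eyeballed, not extracted from the app:

```python
# A sketch of the 'monochrome-then-tint' family (Rust, Sepia, Arctic):
# render to grayscale, then scale each channel toward a tint color.
# The default rust-brown RGB value is an eyeballed assumption.
from PIL import Image
import numpy as np

def tinted_mono(img: Image.Image, tint=(0.75, 0.48, 0.30)) -> Image.Image:
    gray = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
    # Multiply the single luminance plane by the tint color per channel.
    out = gray[..., None] * np.asarray(tint, dtype=np.float32)
    return Image.fromarray((np.clip(out, 0, 1) * 255).astype(np.uint8))
```

Swap the tint tuple for a warm yellow and you have Sepia; for a cold blue, Arctic.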

Special filter = Glamorize

Special filter = Glamorize

The Glamorize filter is a high-contrast effect, with considerable clipping in both the blacks and the whites. The overall color balance is mostly unchanged, with a slight increase in saturation in the midtones and lowlights. The highlights, on the other hand, are somewhat desaturated.

Special filter = Hipster

Special filter = Hipster

The Hipster filter follows the same pattern as other filters that have the same name in both the Live Filters section and the Edit section: the Edit version is usually lighter with higher brightness levels, less of a magenta cast in skin tones and lowlights, and a bit less contrast. Still, in relation to the originals, the Hipster has the typical crushed whites and blacks, raised contrast, and in this case an overall warming (red/yellow) of midtones and highlights.

Colorize filters:

Colorize filter = Black & White

Colorize filter = Black & White

The Black & White filter here is almost identical in effect to the same filter in the Live Filter section – a comparison of the chart images shows that. The live shots also render in a similar manner, with, as usual, the Edit filter being a bit lighter with slightly lower contrast. This is yet another reason to always evaluate a filter with at least two (and the more, the better) different types of source material. While digital filters offer a wealth of possibilities that optical filters never could, there are very fundamental differences in how these filters work.

At a simple level, an optical filter is far more predictable across a wide range of input images than a digital filter. The more complex a digital filter becomes (and many of the filters discussed here, which attempt to emulate a multitude of 'retro' camera effects, are quite complex), the more unexpected results are possible. A Wratten #85 warming filter, by contrast, is really very simple – an orange filter that essentially partially blocks bluish/cyan light – so its action will occur no matter what the source image is.

A filter such as Hipster, for example, attempts to mimic what is essentially a series of composited effects from a cheap analog film camera: chromatic aberration of the cheap plastic lens, spherical lens aberration, light leaks, vignetting due to incomplete coverage of the film (sensor) rectangle, focus anomalies due to imperfect alignment of the focal plane of the lens with the film plane, and so on. Trying to mimic all this with mathematics (which is what a digital filter does – it simply applies a set of algorithms to each pixel) means that it's impossible for even the most skilled visual programmer to fully predict what outputs will occur from a wide variety of inputs.
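To make that contrast concrete: the entire action of a Wratten-85-style warming filter can be approximated in a few lines of per-channel arithmetic, whereas a retro-camera emulation needs tone curves, tints, vignettes, leaks and noise all composited together. A hedged sketch of the simple case (the gain values are illustrative, not measured transmission figures):

```python
# Why an optical filter is predictable: a Wratten-85-style warming
# filter is, to a first approximation, a fixed attenuation of the blue
# channel (and a little green) applied identically to every input.
# The attenuation factors below are illustrative assumptions.
from PIL import Image
import numpy as np

def warming_85(img: Image.Image) -> Image.Image:
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32)
    rgb *= np.array([1.0, 0.93, 0.78], dtype=np.float32)  # R, G, B gains
    return Image.fromarray(np.clip(rgb, 0, 255).astype(np.uint8))
```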

Colorize filter = Sepia

Colorize filter = Sepia

The Sepia filter is very similar to the Rust filter – it’s another ‘monochrome-then-tint’ filter. This time instead of a reddish-brown tint, the color overlay is a warm yellow.

Colorize filter = Sunset

Colorize filter = Sunset

The Sunset filter brings increased brightness, crushed whites and blacks, increased contrast and an overall warming toward yellow/red. It looks like an attempt to emulate late-afternoon light.

Colorize filter = Moss

Colorize filter = Moss

The Moss filter is, well, greenish… It's a somewhat interesting filter, as most of the tinting effect is concentrated solely on monochromatic midtones. The chart clearly shows this, and the live shot demonstrates it as well: the saturated bits keep their colors, while the neutrals turn minty-green. Note his shirt and her dress; the yellow sign stays yellow, and skin tones and hair don't take on that much color.

Colorize filter = Lucifer

Colorize filter = Lucifer

The Lucifer filter is – surprise – a reddish warming look. There is an overall desaturation, followed by a magenta/red cast to midtones and lowlights. A slight decrease in contrast actually gives this filter a more faded, retro look than ‘devilish’, and in some ways I prefer this look to some of the previous filters with more ‘retro-sounding’ names.

Colorize filter = Faded

Colorize filter = Faded

The Faded filter offers a desaturated, but contrasty, look. Usually I interpret a 'faded' look to mean the kind of visual fading that light causes on a photographic print, where all the blacks and strongly saturated colors fade to a much lighter, softer tone. In this case, much of the color has faded, but the luminance is unchanged and the contrast is increased, resulting in the crushed whites and blacks common to Camera Plus Pro filter design.

Colorize filter = Warm

Colorize filter = Warm

The Warm filter is basically a “plus yellow” filter. Looking at the chart you can see that there is an across-the-board increase in yellow. That’s it.

Colorize filter = Arctic

Colorize filter = Arctic

The Arctic filter is, well, cold. Like several of the other tinted monochromatic filters (Rust, Sepia), this filter first renders the image to a monochrome version, then tints it at all levels with a cold blue color.

Colorize filter = Allure

Colorize filter = Allure

The Allure filter is similar to the Warm filter – an even application of a single color increase, in this case magenta. There is also a slight increase in contrast.

Artistic filters:

Artistic filter = HDR

Artistic filter = HDR

The HDR filter is an attempt to mimic the result of 'real' HDR (High Dynamic Range) photography. Of course, without true double (or more) exposures this is not possible, but since the 'look' that some instances of HDR processing produce shows increased contrast, saturation and so on, this filter emulates some of that. Personally, I believe that true HDR photography should be indistinguishable from a 'normal' image – except that it should correctly map a very wide range of illumination levels. A lot of "HDR" images tend to be a bit gimmicky, with excessive edge glow, false saturation, etc. While this can make an interesting special effect, I think it would better serve the imaging community if we correctly labeled those images as 'cartoon' or some other more accurate name – those filter side effects really have nothing to do with true HDR imaging. Nevertheless, to complete the description of this filter: it is actually quite color-neutral (no cast), but does add contrast, particularly edge contrast, along with significant vibrance and saturation.

Artistic filter = Fantasy

Artistic filter = Fantasy

The Fantasy filter is another across-the-board ‘color cast’ filter, this time with an increase in yellow-orange. Virtually no change in contrast, just a big shift in color balance.

Artistic filter = Vignette

Artistic filter = Vignette

The Vignette filter is a spatial filter, in that it really just changes the ‘shape’ of the image, not the overall color balance or tonal gradations. It mimics the light fall-off that was typical of early cameras whose lenses had inadequate covering power (the image rendered by the lens did not extend to the edges of the film). There is a tiny loss of brightness even in the center of the frame, but essentially this filter darkens the corners.
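Digitally, a vignette is just a radial brightness mask. A minimal sketch follows; the quadratic falloff and the strength value are my assumptions, not the app's formula:

```python
# A minimal vignette sketch: darken pixels by their normalized distance
# from the image center. Falloff shape and strength are assumptions.
from PIL import Image
import numpy as np

def vignette(img: Image.Image, strength: float = 0.6) -> Image.Image:
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    h, w = rgb.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    # Normalized radial distance: 0 at center, ~1 at the corners.
    r = np.hypot((x - w / 2) / (w / 2), (y - h / 2) / (h / 2)) / np.sqrt(2)
    mask = 1.0 - strength * r ** 2  # quadratic falloff toward corners
    out = rgb * mask[..., None]
    return Image.fromarray((np.clip(out, 0, 1) * 255).astype(np.uint8))
```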

Artistic filter = Grunge

Artistic filter = Grunge

The Grunge filter is a combination filter: both spatial and tonal. Like the 'tinted monochromatic' filters above, it first renders the image to black & white, then tints it – in this case with a grayish-yellow cast. There is also a marked decrease in contrast, along with elevated brightness levels; this is easily evident from the grayscale strip in the chart, and in the live shot you can see her handbag is now a dark gray instead of black. The spatial elements are then added: specialized vignetting to mimic frayed or over-exposed edges of a print, as well as 'scratches' and 'wrinkles' (formed by spatially localized changes in brightness and contrast). All this combines to offer the look of an old, faded, bent and generally funky print.

Artistic filter = Pop Art

Artistic filter = Pop Art

The Pop Art filter is very much a 'special effects' filter, based on the solarization technique. Solarization is in fact a rather complex and highly variable process, first noticed by Daguerre and the other pioneers of photography in the mid-1800s. The name comes from the reversal of image tone in a drastically over-exposed part of an image – in this case, pictures that included the sun in direct view. Instead of the image of the sun going pure white (on the print; pure black in the negative), the sun's image actually went back to a light gray on the negative, rendering the sun a very dark orb in the final print – one of the very first 'optical special effects' in the new field of photography. This is actually caused by halogen ions, released within the halide grain by over-exposure, diffusing to the grain surface in amounts sufficient to destroy the latent image.

In negatives, this is correctly known as the Sabattier effect, after the French photographer who published an article on it in Le Moniteur de la Photographie in 1862. The digital equivalent of this technique, as shown in this filter, uses tonal mapping computation to create high-contrast bands where the levels of the original image are 'flattened' into distinct, constant brightness bands. This is clearly seen in the grayscale strip in the chart image. It is a very distinctive look and can be visually interesting when used in a creative manner on the correct subject matter.
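The 'flattening into bands' visible in the chart is essentially posterization of the tonal range. A minimal sketch of that banding (the band count is arbitrary, and this stands in for the look only – it is not the app's actual algorithm):

```python
# A sketch of the digital take on solarization described above:
# quantize the tonal range into a handful of flat brightness bands.
# (Pillow's ImageOps.solarize inverts above a threshold instead; the
# banding below matches what the chart's grayscale strip shows.)
from PIL import Image
import numpy as np

def band_posterize(img: Image.Image, bands: int = 4) -> Image.Image:
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    # Flatten continuous levels into `bands` constant steps.
    out = np.minimum(np.floor(rgb * bands), bands - 1) / (bands - 1)
    return Image.fromarray((out * 255).astype(np.uint8))
```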

Artistic filter = Grayscale

Artistic filter = Grayscale

The Grayscale filter is just that:  the rendering of the original image into a grayscale image. The difference between this filter and the Black & White filters (in both Live Filters and this Edit section) is a much lower contrast. By comparing the grayscale strips in the original and filtered chart images, you can see there is virtually no difference. The Black & White filters noticeably increase the contrast.

Artistic filter = Emboss

Artistic filter = Emboss (40%)

Artistic filter = Emboss (100%)

The Emboss filter is another highly specialized effects filter. As can be seen from the chart image, the picture is rendered to a constant monochrome shade of gray, with only contrasting edges being represented by either an increase or decrease in brightness. This creates the appearance of a flat gray sheet that is 'stamped' or embossed with the outline of the image elements. High-contrast edges are rendered sharply; lower-contrast edges are softer in shape. Reading from left to right, a transition from dark to light is represented by a dark edge, and from light to dark by a light edge. Since each of these Edit filters has an intensity slider, the effect's strength can be 'dialed in' as desired. I have shown all the filters up to now at full strength, for illustrative purposes. Here I have also included a sample of this filter at a 40% level, since it shows just how different a look can be achieved in some cases by not using a filter at full strength.
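Emboss is classically done with a small directional convolution kernel whose offset pushes flat regions to middle gray; blending the result with the original mimics the intensity slider. A sketch under those assumptions (the specific kernel and blend approach are mine, not the app's):

```python
# A minimal emboss sketch: convolve with a directional kernel so flat
# areas land on middle gray and edges deviate light or dark, then blend
# with the original at `amount` to mimic the app's intensity slider.
from PIL import Image, ImageFilter

EMBOSS = ImageFilter.Kernel(
    size=(3, 3),
    kernel=[-1, -1, 0,
            -1,  0, 1,
             0,  1, 1],
    scale=1, offset=128)  # offset pushes flat regions to mid-gray

def emboss(img: Image.Image, amount: float = 1.0) -> Image.Image:
    embossed = img.convert("L").filter(EMBOSS).convert("RGB")
    return Image.blend(img.convert("RGB"), embossed, amount)
```

Calling emboss(img, 0.4) approximates the 40% sample shown above.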

Artistic filter = Xray

Artistic filter = Xray

The Xray filter is yet another ‘monochromatic tint’ filter, with the image first being rendered to a grayscale image, then (in this case) undergoing a complete tonal reversal (to make the image look like a negative), then finally a tint with a dark greenish-cyan color. It’s just a look (since all ‘real’ x-ray films are black and white only), but I’m certain at least one of the millions of people that have downloaded this app will find a use for it.

Artistic filter = Heat Signature

Artistic filter = Heat Signature

The Heat Signature filter is the final filter in this Artistic group. It is illustrative of a scientific imaging method whereby infrared camera images (which see only wavelengths too long for the human eye) are rendered into a visible color spectrum to help illustrate the relative temperatures of the observed object. In real scientific camera systems, the coolest temperatures are rendered blue, the hottest parts of the image in reds, and in-between temperatures in greens. Here, this mapping technique is applied against the grayscale: blacks are blue, midtones are green, highlights are red.
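That mapping is a simple false-color lookup driven by luminance. The piecewise-linear ramp below is my own approximation of the blue-green-red progression, not the app's actual lookup table:

```python
# A sketch of the false-color mapping described above: luminance drives
# a blue -> green -> red ramp. The ramp shape is an approximation.
from PIL import Image
import numpy as np

def heat_signature(img: Image.Image) -> Image.Image:
    t = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
    r = np.clip(2 * t - 1, 0, 1)   # red rises in the highlights
    g = 1 - np.abs(2 * t - 1)      # green peaks in the midtones
    b = np.clip(1 - 2 * t, 0, 1)   # blue dominates the shadows
    out = np.stack([r, g, b], axis=-1)
    return Image.fromarray((out * 255).astype(np.uint8))
```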

Distortion filters:

The geometric distortion filters are presented differently, since these are spatial filters only. There is no need, nor advantage, to using the color chart test image. I have presented each filter as a triptych, with the first image showing the control as found when the filter is opened within the app, the second image showing a manipulation of the “effects circle” (which can be moved and resized), and the third image is the resultant image after applying the filter. There are no intensity sliders on the distortion filters.

Geometric Filter: Center Offset - Initial / Targeted Area / Result

The Center Offset filter 'pulls' the image toward the center of the circle, as if the image were on an elastic rubber sheet being stretched towards the center of the control circle.

Geometric Filter: Pixelate - Initial / Targeted Area / Result

The Pixelate filter distorts the image inside the control circle by greatly enlarging the quantization factors in the affected area, causing a large 'chunking' of the picture. This renders the affected area virtually unrecognizable – the effect often used in candid video to obfuscate the identity of a subject.
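Pixelation is usually implemented as block averaging: downscale, then upscale with nearest-neighbor resampling so each block becomes one flat chunk. A sketch over the whole frame (the app confines the effect to the control circle; the masking is omitted here for brevity):

```python
# A minimal pixelate sketch: average each block of pixels into one
# flat 'chunk' by round-tripping through a lower resolution.
from PIL import Image

def pixelate(img: Image.Image, block: int = 24) -> Image.Image:
    w, h = img.size
    # Downscale so each block becomes one pixel, then scale back up
    # with nearest-neighbor to get the hard-edged chunks.
    small = img.resize((max(1, w // block), max(1, h // block)),
                       Image.BILINEAR)
    return small.resize((w, h), Image.NEAREST)
```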

Geometric Filter: Bulge - Initial / Targeted Area / Result

The Bulge filter is similar to the Center Offset, but this time the image is 'pulled into' the control circle, as if a magnifying fish-eye lens were applied to just a portion of the image.

Geometric Filter: Squeeze - Initial / Targeted Area / Result

The Squeeze filter is somewhat the opposite of the Bulge filter, with the image within the control circle being reduced in size and ‘pushed back’ visually.

Geometric Filter: Swirl - Initial / Targeted Area / Result

The Swirl filter does just that:  takes the image within the control circle and rotates it. Moving the little dot controls the amount and direction of the swirl. She needs a chiropractor after this…
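A swirl is an inverse warp: each output pixel samples from a source location rotated about the circle's center by an angle that fades with radius. A rough sketch in plain NumPy – the center, radius and strength parameters stand in for the app's draggable circle and dot:

```python
# A rough swirl sketch: rotate each pixel around the circle's center
# by an angle that falls off with radius (inverse mapping).
from PIL import Image
import numpy as np

def swirl(img: Image.Image, cx: int, cy: int,
          radius: float, strength: float = 3.0) -> Image.Image:
    rgb = np.asarray(img.convert("RGB"))
    h, w = rgb.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    # Rotation is strongest at the center and fades to zero at `radius`.
    theta = np.arctan2(dy, dx) + strength * np.clip(1 - r / radius, 0, 1)
    sx = np.clip(cx + r * np.cos(theta), 0, w - 1).astype(int)
    sy = np.clip(cy + r * np.sin(theta), 0, h - 1).astype(int)
    return Image.fromarray(rgb[sy, sx])
```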

Geometric Filter: Noise - Initial / Targeted Area / Result

The Noise filter works in a similar way to the Pixelate filter, only this time large-scale noise is introduced, rather than pixelation.

Geometric Filter: Light Tunnel - Initial / Targeted Area / Result

The Light Tunnel filter probably owes its existence to Star Trek – what part of our common culture has not been affected by that far-seeing series? Remember the 'communicator'? Flip-type cell phones, invented 30 years later, looked suspiciously like that device…

Geometric Filter: Fish Eye - Initial / Result

The Fish Eye filter mimics what a 'fish eye' lens might make the picture look like. There is no control circle on this filter – it is a fixed effect, centered on the image. In this case it's really not that strong a curvature effect; to me it looks about like what a 12mm lens (on a 35mm camera system) would produce. If you want to see just how wide a look is possible, go to Nikon's site and look for examples of their 6.5mm fisheye lens. That is wide!

Geometric Filter: Mirror - Initial / Result

The Mirror filter divides the image down the middle (vertically) and reflects the left half of the image onto the right side. There are no controls – it’s a fixed effect.

Borders:

Borders: Thin White / Rounded Black / Double Frame

Borders: White Frame / Polaroid / Stamp

Borders: Torn / Striped / Grainy

Ok, that’s it. Another iPhone camera app dissected, inspected, respected. Enjoy.
