Browsing Category post-production

DI – Disintermediation, 5 years on…

October 12, 2017 · by parasam

I wrote this original article on disintermediation more than 5 years ago (early 2012), and while most of the comments are as true today as then, I wanted to expand a bit now that time and technology have moved on.

The two newest technologies that are now mature enough to act as major fulcrums of further disintermediation on a large scale are AI and blockchain. Both have been in development for more than 5 years of course, but they have made a jump in capability, scale and applicability in the last 1-2 years that is changing the entire landscape. Artificial Intelligence (AI) – or perhaps a better term, “Augmented Intelligence” – is forever changing the man/machine interface, bringing machine learning to aid human endeavors in a manner that will never be unwound. Blockchain technology is the foundation (originally developed in the arcane mathematical world of cryptography) for digital currencies and other transactions of value.

AI

While the popular term is “AI” or Artificial Intelligence, a better description is “Deep Machine Learning”. Essentially the machine (computer, or rather a whole pile of them…) is given a problem to solve, a set of algorithms to use as a methodology, and a dataset for training. After a number of iterations and tunings, the machine usually refines its response such that the ‘problem’ can be reliably solved accurately and repeatedly. The process, as well as a recently presented theory on how the ‘deep neural networks’ of machine learning operate, is discussed in this excellent article.
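
To make the “problem / algorithm / training dataset / iterate” cycle concrete, here is a minimal sketch in Python. The scikit-learn toolkit, the small digits dataset and the hidden-layer ‘tunings’ are my own illustrative assumptions – the post doesn’t name any particular framework – but the shape of the loop is the same one described above.

```python
# A minimal sketch of the "problem / algorithm / training dataset / iterate"
# cycle described above. scikit-learn and the tiny digits dataset are
# illustrative assumptions only -- the post does not name a framework.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)                 # the "training dataset"
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

best_score, best_model = 0.0, None
for hidden in [(32,), (64,), (64, 32)]:             # the iterations and "tunings"
    model = MLPClassifier(hidden_layer_sizes=hidden,
                          max_iter=500, random_state=0)   # the "algorithm"
    model.fit(X_train, y_train)
    score = accuracy_score(y_test, model.predict(X_test))
    if score > best_score:
        best_score, best_model = score, model

print(f"best held-out accuracy: {best_score:.3f}")  # the refined "response"
```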

The applications for AI are almost unlimited. Some of the popular and original use cases are human voice recognition and pattern recognition tasks that for many years were thought to be too difficult for computers to perform with a high degree of accuracy. Pattern recognition has now improved to the point where a machine can often outperform a human, and voice recognition is now encapsulated in the Amazon ‘Echo’ device as a home appliance. Many other tasks, particularly ones where the machine assists a human (Augmented Intelligence) by presenting likely possibilities reduced from extremely large and complex datasets, will profoundly change human activity and work. Such examples include medical diagnostics (an AI system can read every journal ever written, compare that knowledge to a history taken by a medical diagnostician, and suggest likely scenarios that could include data the medical professional couldn’t possibly have the time to absorb); fact-checking news stories against many wide-ranging sources; performing financial analysis; writing contracts; etc.

It’s easy to see that many current ‘professions’ will likely be disrupted or disintermediated… corporate law, medical research, scientific testing, pharmaceutical drug trials, manufacturing quality control (AI connected to robotics), and so on. The incredible speed and storage capability of modern computational networks provides the foundation for an ever-increasing usage of AI at a continually falling price. Already apps for mobile devices can scan thousands of images, suggest keywords, group similar images into collections, etc. [EyeEm Vision].

Another area where AI is utilized is in autonomous vehicles (self-driving cars). The synthesis of hundreds of inputs from sensors, cameras, etc. is analyzed thousands of times per second in order to safely pilot the vehicle. One of the fundamental powers of AI is the continual learning that takes place: the larger the dataset – the more of a given set of experiences – the better the machine becomes at optimizing its outputs. For instance, every Tesla car gathers massive amounts of data from every drive the car takes, and continually uploads that data to the servers at the factory. The combined experience of how thousands of vehicles respond to varying road and traffic conditions is learned and then shared (downloaded) to every vehicle. So each car in the entire fleet benefits from everything learned by every car. This is impossible to replicate with individual human drivers.

The potential use cases for this new technology are almost unbounded. Some challenging issues likely can only be solved with advanced machine learning. One of these is the (today) seemingly intractable problem of updating and securing a massive IoT (Internet of Things) network. Due to the very low cost, embedded nature, lack of human interface, etc. that are characteristic of most IoT devices, it’s impossible to “patch” or otherwise update individual sensors or actuators that are discovered to have either functional or security flaws after deployment. By embedding intelligence into the connecting fabric of the network itself that links the IoT devices to nodes or computers that utilize the info, even sub-optimal devices can be ‘corrected’ by the network. Incorrect data can be normalized, and attempts at intrusion or deliberate alteration of data can be detected and mitigated.
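
As a hypothetical illustration of ‘intelligence embedded in the connecting fabric’, the sketch below shows a gateway that normalizes readings from cheap, un-patchable sensors before forwarding them upstream. The class name, valid range and rolling-median rule are invented for illustration and are not drawn from any particular product.

```python
# A hypothetical sketch of "intelligence in the connecting fabric": a gateway
# that normalizes readings from cheap, un-patchable sensors before the data
# reaches upstream consumers. Names, ranges and the rolling-median rule are
# illustrative assumptions.

from collections import deque
from statistics import median

VALID_RANGE = (-40.0, 85.0)        # plausible limits for a temperature sensor

class SensorNormalizer:
    def __init__(self, window=15, max_jump=5.0):
        self.history = deque(maxlen=window)
        self.max_jump = max_jump

    def correct(self, reading):
        low, high = VALID_RANGE
        # Discard physically impossible values outright.
        if not (low <= reading <= high):
            return median(self.history) if self.history else None
        # Smooth sudden jumps that look like a faulty or tampered sensor.
        if self.history and abs(reading - median(self.history)) > self.max_jump:
            reading = median(self.history)
        self.history.append(reading)
        return reading

normalizer = SensorNormalizer()
for raw in [21.2, 21.4, 999.0, 21.5, 35.9, 21.6]:   # 999.0 and 35.9 are suspect
    print(raw, "->", normalizer.correct(raw))
```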

Blockchain

The blockchain technology that is often discussed today, usually in the same sentence as Bitcoin or Ethereum, is a foundational platform that allows secure and traceable transactions of value. Essentially each set of transactions is a “block”, and these are distributed widely in an encrypted format for redundancy and security. These transactions are “chained” together, forming the “blockchain”. Since the ‘public ledger’ of these groups of transactions (the blockchain) is computationally infeasible to alter, the integrity of every transaction is ensured. This article explains in more detail. While the initial focus of blockchain technology has been on so-called ‘cryptocurrencies’ there are many other uses for this secure transactional technology. By using the existing internet connectivity, items of value can be securely distributed practically anywhere, to anyone.
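
A minimal Python sketch of the block-chaining idea makes the tamper-evidence property concrete: each block records a hash of the previous block, so altering any earlier transaction breaks every link after it. Real blockchains add distributed consensus and proof-of-work or proof-of-stake, which are deliberately omitted here.

```python
# A minimal sketch of the "block / chain / tamper-evidence" idea described
# above. Distribution, consensus and mining are omitted on purpose.

import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "transactions": transactions})

def verify(chain):
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                          # True

chain[0]["transactions"][0]["amount"] = 500   # tamper with the 'ledger'
print(verify(chain))                          # False -- the chain no longer links
```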

One of the most obvious instances of transfer of items of value over the internet is intellectual property: i.e. artistic works such as books, images, movies, etc. Today the wide scale distribution of all of these creative works is handled by a few ‘middlemen’ such as Amazon, iTunes, etc. This introduces two major forms of restriction: the physical bottleneck of client-server networking, where every consumer must pull from a central controlled repository; and the financial bottleneck of unitary control over distribution, with the associated profits and added expense to the consumer.

Even before blockchain, various artists have been exploring making more direct connections with their consumers, taking more control over the distribution of their art, and changing the marketing process and value chain. Interestingly the most successful (particularly in the world of music) are all women: Taylor Swift, Beyoncé, Lady Gaga. Each is now marketing on a direct-to-fan basis via social media, with followings of millions of consumers. A natural next step will be direct delivery of content to these same users via blockchain – which will have an even larger effect on the music industry than iTunes ever did.

SingularDTV is attempting the first-ever feature film to be both funded and distributed entirely on a blockchain platform. The world of decentralized distribution is upon us, and will forever change the landscape of intellectual property distribution and monetization. The full effects of this are deep and wide-ranging, and would occupy an entire post… (maybe soon).

In summation, these two notable technologies will continue the democratization of data, originally begun with the printing press, and allow even more users to access information, entertainment and items of value without the constraints of a narrow and inflexible distribution network controlled by a few.

Where Did My Images Go? [the challenge of long-term preservation of digital images]

August 13, 2016 · by parasam
Littered Memories – Photos in the Gutter (© 2016 Paul Watson, used with permission)

Image Preservation – The Early Days

After viewing the above image from fellow street photographer Paul Watson, I wanted to update an issue I’ve addressed previously: the major challenge that digital storage presents in terms of long-term archival endurance and accessibility. Back in my analog days, when still photography was a smelly endeavor in the darkroom for both developing and printing, I slowly learned about careful washing and fixing of negatives, how to make ‘museum’ archival prints (B&W), and the intricacies of dye-transfer color printing (at the time the only color print technology that offered substantial lifetimes). Prints still needed carefully restricted environments for both display and storage, but if all was done properly, a lifetime of 100 years could be expected for monochrome prints and even longer for carefully preserved negatives. Color negatives and prints were much more fragile, particularly color positive film. The emulsions were unstable, and many of the early Ektachrome slides (and motion picture films) faded rapidly after only a decade or so. A well-preserved dye-transfer print could be expected to last for almost 50 years if stored in the dark.

I served for a number of years as a consultant to the Los Angeles County Museum of Art, advising them on photographic archival practices, particularly relating to motion picture films. The Bing Theatre for many years presented a fantastic set of screenings – a rare tapestry of great movies from the past – and helped many current directors and others in the industry become better at their craft. In particular, Ron Haver (the film historian, preservationist and LACMA director with whom I worked during that time) was instrumental in supervising the restoration, screening and preservation of many films that would now be in the dust bin of history without his efforts. I learned much from him, and the principles last to this day, even in a digital world that he never experienced.

One project in particular was interesting: bringing the projection room (and associated film storage facilities) up to Los Angeles County Fire Code so we could store and screen early nitrate films from the 1920’s. [For those that don’t know, nitrate film is highly flammable, and once on fire will quite happily burn under water until all the film is consumed. It makes its own oxygen while burning…] Fire departments were not great fans of this stuff… Due to both the considerable (and expensive) challenges in projecting this type of film, as well as the continual degradation of the film stock, almost all nitrate film left has since been digitally scanned for preservation and safety. I also designed the telecine transfer bay for the only approved nitrate scanning facility in Los Angeles at the time.

What this all underscored was the considerable effort, expense and planning that is required for long term image preservation. Now, while we may think that once digitized, all our image preservation problems are over – the exact opposite is true! We have ample evidence (glass plate negatives from the 1880’s, B&W sheet film negatives from the early 1900’s) that properly stored monochrome film can easily last 100 years or more, and is as readable today as it was the day the film was exposed, with no extra knowledge or specialized machinery. B&W movie film is also just as stable as long as it is printed onto safety film base. Due to the inherent fading of so many early color emulsions, the only sure method for preservation (in the analog era) was to ‘color separate’ the negative film and print the three layers (cyan, magenta and yellow) onto three individual B&W films – the so-called “Technicolor 3-strip process”.

Digital Image Preservation

The problem with digital image preservation is not due to the inherent technology of digital conversion – done well, that can in theory yield a perfect reproduction of the original after an arbitrarily long time period. The challenge is how we store, read and write the “0s and 1s” that make up the digital image. Our computer storage and processing capability has moved so quickly over the last 40 years that almost all digital storage from more than 25 years ago is somewhere between difficult and impossible to recover today. This problem is growing worse, not better, with every succeeding year…

IBM 305 RAMAC Disk System, 1956: IBM ships the first hard drive in the RAMAC 305 system. The drive holds 5MB of data at $10,000 a megabyte.

This is a hard drive. It holds less than 0.01% of the data of the smallest iPhone today…

One of the earliest hard drives available for microcomputers, c.1980. The cost then was $350/MB; today’s cost (based on a 1TB hard drive) is $0.00004/MB, a factor of 8,750,000 times cheaper.

Paper tape digital storage as used by DEC PDP-11 minicomputers in 1975.

Paper punch card, a standard for data entry in the 1970s.

Floppy disks (from left): 8″, 5-1/4″, 3-1/2″. The standard data storage format for microcomputers in the 1980s.

As can be seen from the above examples, digital storage has changed remarkably over the last few decades. Even though today we look at multi-terabyte hard drives and SSDs (Solid State Drives) as ‘cutting edge’, will we chuckle 20 years from now when we look back at something as archaic as spinning disks or NAND flash memory? With quantum memory, holographic storage and other technologies already showing promise in the labs, it’s highly likely that even the 60TB SSDs that Samsung just announced will take their place alongside 8-inch floppy disks in a decade or so…

And these issues are actually the least of the problem (the physical storage medium). Yes, if you put your ‘digital negatives’ on a floppy disk 15 years ago and now want to read them you have a challenge at hand… but with patience and some time on eBay you could probably assemble the appropriate hardware to retrieve the data into a modern computer. The bigger issue is that of the data format: both of the drives themselves and the actual image files. The file systems – the method that was used to catalog and find the individual images stored on whatever kind of physical storage device, whether ancient hard drive or floppy disk – have changed rapidly over the years. Most early file systems are no longer supported by current OS (Operating Systems), so hooking up an old drive to a modern computer won’t work.

Even if one could find a translator from an older file system to a current one (there is very limited capability in this regard; many older file systems can literally only be read by a computer as old as the drive), that doesn’t solve the next issue: the image format itself. The issue of ‘backwards compatibility’ is one of the great Achilles’ heels of the entire IT industry. The huge push by all vendors to keep all their users relentlessly updating to the latest software, firmware and hardware is largely to avoid these same companies having to support older versions of hardware and software. This is not totally a self-serving issue (although there are significant costs and time involved in doing so) – frequently certain changes in technology just can’t support an older paradigm any longer. The earliest versions of Photoshop files, PICT, etc. are not easily opened with current applications. Anyone remember Corel Draw?? Even ‘common interchange’ formats such as TIFF and JPEG have evolved, and not every version is supported by every current image processing application.

The more proprietary and specific the image format is, the more fragile it is – in terms of archival longevity. For instance, it may seem that the best archival format would be the Camera Raw format – essentially the full original capture directly from the camera. File types such as RAW, NEF, CR2 and so on are typical. However, each of these is proprietary and typically has about a five-year life span in terms of active application support by the vendor. As camera models keep changing – more or less on a yearly cycle – the Raw formats change as well. Third-party vendors such as Adobe (with Photoshop and Lightroom) are under no obligation to support earlier Raw formats forever… and as previously discussed, the challenge of maintaining backwards compatibility grows more complex with each passing year. There will always come a time when such formats will no longer be supported by currently active image retrieval, viewing or processing software.
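
One common defensive step, given that fading Raw support, is to render proprietary Raw captures to an openly documented format while today’s tools can still read them. The sketch below assumes the third-party rawpy and imageio packages and invented folder names; it is an illustration of the idea, not a recommendation of specific settings.

```python
# A hedged sketch of one defensive step against Raw-format obsolescence:
# render proprietary Raw captures (NEF, CR2, ...) to TIFF while tools that
# understand them still exist. Assumes the third-party 'rawpy' and 'imageio'
# packages; folder names are hypothetical.

import pathlib
import imageio
import rawpy

SOURCE = pathlib.Path("archive/raw")     # hypothetical folder of Raw captures
DEST = pathlib.Path("archive/tiff")
DEST.mkdir(parents=True, exist_ok=True)

for raw_path in sorted(SOURCE.glob("*.NEF")):
    with rawpy.imread(str(raw_path)) as raw:
        rgb = raw.postprocess()          # demosaic to a plain RGB array
    out_path = DEST / (raw_path.stem + ".tiff")
    imageio.imwrite(out_path, rgb)
    print("archived", raw_path.name, "->", out_path.name)
```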

Challenges of Long-Term Digital Image Preservation

Therefore two major challenges must be resolved in order to achieve long term storage and future accessibility of digital images. The first is the physical storage medium itself, whether that is tape (such as LTO-6), hard disk, SSD, optical, etc. The second is the actual image format. Both must be usable and able to transfer images back to the operating system, device and software that is current at the time of retrieval in order for the entire exercise of archival digital storage to be successful. Unfortunately, this is highly problematic at this time. As the pace of technological advance is exponentially increasing, the continual challenge of obsolescence becomes greater every year.

Currently there is no perfect answer for this dilemma – the only solution is one of proactivity on the part of the user. One must accommodate the continuing obsolescence of physical storage mediums, file systems, operating systems and file formats by moving the image files on a regular and continual basis to current versions of all of the above. Typically this is an exercise that must be repeated every five years – at current rates of technological development. For uncompressed images, other than the cost of the move/update there is no impact on the digital image – that is one of the plus sides of digital imagery. However, many images (almost all if you are other than a professional photographer or filmmaker) are stored in a compressed format (JPG, TIFF-LZW/ZIP, MPG, MOV, WMV, etc.). Those using lossy compression (JPG and most movie formats) will experience a small degradation in quality each time they are converted or re-encoded into a new format; lossless compression such as TIFF-LZW/ZIP does not introduce this loss. The amount and type of artifacts introduced are highly variable, depending on the level of compression and many other factors. The bottom line is that after a number of re-encode cycles of a lossy-compressed file (say 10) it is quite likely that a visible difference from the original file can be seen.

Therefore, particularly for compressed files, a balance must be struck between updating often enough to avoid technical obsolescence and making the fewest number of copies over time in order to avoid image degradation. [It should be noted that potential image degradation will typically only be due to changing/updating the image file format, not moving a bit-perfect copy from one type of storage medium to another].
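
The generation-loss point can be demonstrated with a small experiment: re-encode a JPEG ten times and measure how far the pixels drift from the first decode. The sketch assumes the Pillow and NumPy packages and an existing 'original.jpg'; a bit-perfect file copy, by contrast, would show zero drift.

```python
# A small experiment illustrating the generation loss described above:
# re-encode a JPEG ten times and measure how far the pixels drift from the
# first decode. Assumes Pillow, NumPy and an existing 'original.jpg'.

import io
import numpy as np
from PIL import Image

image = Image.open("original.jpg").convert("RGB")
reference = np.asarray(image, dtype=np.float64)

for generation in range(1, 11):
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=85)    # one re-encode cycle
    buffer.seek(0)
    image = Image.open(buffer).convert("RGB")
    drift = np.abs(np.asarray(image, dtype=np.float64) - reference).mean()
    print(f"generation {generation:2d}: mean pixel drift {drift:.2f}")
```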

This process, while a bit tedious, can be automated with scripts or other similar tools, and for the casual photographer or filmmaker will not be too arduous if undertaken every five years or so. It’s another matter entirely for professionals with large libraries, or for museums, archives and anyone else with thousands or millions of image files. A lot of effort, research and thought has been applied to this problem by these professionals, as this is a large cost of both time and money – and no solution other than what’s been described above has been discovered to date. Some useful practices have been developed, both to preserve the integrity of the original images as well as reduce the time and complexity of the upgrade process.
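
A minimal sketch of that periodic “move to current media” cycle is shown below: copy the archive tree to a new drive and confirm each copy is bit-perfect with a SHA-256 digest. The source and destination paths are hypothetical, and a real migration script would also log results and handle errors.

```python
# A minimal sketch of the periodic "move to current media" cycle described
# above: copy the archive tree to a new drive and confirm each copy is
# bit-perfect. Paths are hypothetical.

import hashlib
import pathlib
import shutil

SOURCE = pathlib.Path("/archive/2016_master")        # ageing drive
DEST = pathlib.Path("/archive/2021_refresh")         # new drive

def sha256(path, chunk=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for block in iter(lambda: handle.read(chunk), b""):
            digest.update(block)
    return digest.hexdigest()

for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    dst = DEST / src.relative_to(SOURCE)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                            # preserves timestamps
    if sha256(src) != sha256(dst):
        raise RuntimeError(f"copy verification failed for {src}")

print("migration complete and verified")
```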

Methods for Successful Digital Image Archiving

A few of those processes are shared below to serve as a guide for those that are interested. Further searching will yield a large number of sites and information sources that address this challenge in detail.

  • The most important aspect of ensuring a long-term archival process that will result in the ability to retrieve your images in the future is planning. Know what you want, and how much effort you are willing to put in to achieve that.
  • While this may be a significant undertaking for professionals with very large libraries, even a few simple steps will benefit the casual user and can protect family albums for decades.
  • In addition to the steps discussed above (updating storage media, OS and file systems, and image formats) another very important aspect is “Where do I store the backup media?” Making just one copy and having it on the hard drive of your computer is not sufficient. (Think about fire, theft, complete breakdown of the computer, etc.)
    • The current ‘best practices’ recommendation is the “3-2-1” approach: Make 3 copies of the archival backup. Store in at least 2 different locations. Place at least 1 copy off-site. A simple but practical example (for a home user) would be: one copy of your image library in your computer. A 2nd copy on a backup drive that is only used for archival image storage. A 3rd copy either on another hard drive that is stored in a vault environment (fireproof data storage or equivalent) or cloud storage.
    • A note on cloud storage: while this can be convenient, be sure to check the fine print on liability, access, etc. of the cloud provider. This solution is typically feasible for up to a few terabytes; beyond that the cost can become significant, particularly when you consider storage for 10-20 years. Also, will the cloud provider be around in 20 years? What insurance do they provide in terms of buyout, bankruptcy, etc.? While storage media and file systems are not an issue with cloud storage (it is incumbent on the cloud provider to keep those updated), you are still personally responsible for the image format issue: the cloud vendor is only storing a set of binary files, and cannot guarantee that these files will be readable in 20 years.
    • Unless you have a fairly small image library, current optical media (DVD, etc.) is impractical: even double-sided DVDs only hold about 8GB of formatted data. In addition, these DVDs would need to be burned on your own computer, and the longevity of ‘burned’ DVDs is not great (compared to the pressed DVDs you purchase when you buy a movie). With DVD usage falling off noticeably this is most likely not a good long-term archival format.
    • The best current solution for off-premise archival storage is to physically store external hard drives (or SSDs) with a well known data vaulting vendor (Iron Mountain is one example). The cost is low, and since you only need access every 5 years or so the extra cost for retrieval and re-storage (after updating the storage media) is acceptable even for the casual user.
  • Another vitally important aspect of image preservation is metadata. This is the information about the images. If you don’t know what you have then future retrieval can be difficult and frustrating. In addition to the very basic metadata (file name, simple description, and a master catalog of all your images) it is highly desirable to put in place a metadata schema that can store keywords and a multitude of other information about the images. This can be invaluable to yourself or others who may want to access these images decades in the future. A full discussion of image metadata is beyond the scope of this post, but there is a wealth of information available. One notable challenge is that the embedded metadata support in the most basic (and therefore most future-proof) still image formats in use today [JPG and TIFF] is limited to standardized fields (EXIF/IPTC/XMP), so a richer catalog generally must be stored externally and cross-referenced somehow. Photoshop files on the other hand store extensive metadata and the image within the same file – but as discussed above this is not the best format for archival storage. There are techniques to cross-reference information to images: from purpose-built archival image software to a simple spreadsheet that uses the filename of the image as a key to the metadata.
  • An important reminder: the whole purpose of an archival exercise is to be able to recover the images at a future date. So test this. Don’t just assume. After putting it all in place, pull up some images from your local offline storage every 3-6 months and see that everything works. Pull one of your archival drives from off-site storage once a year and test it to be sure you can still read everything. Set up reminders in your calendar – it’s so easy to forget until you need a set of images that was accidentally deleted from your computer, only to find out your backup did not work as expected. (A minimal sketch of such a periodic check follows this list.)
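
Here is a minimal sketch of such a periodic check, driven by a simple catalog (a CSV keyed by filename, as suggested in the metadata bullet above). The file names, columns and paths are illustrative assumptions only.

```python
# A minimal sketch of the periodic retrieval test suggested above, driven by
# a simple catalog CSV keyed by filename. Names, columns and paths are
# illustrative assumptions.

import csv
import hashlib
import pathlib

ARCHIVE_ROOT = pathlib.Path("/archive/photos")       # hypothetical archive
CATALOG = ARCHIVE_ROOT / "catalog.csv"               # filename,sha256,description

def sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for block in iter(lambda: handle.read(1 << 20), b""):
            digest.update(block)
    return digest.hexdigest()

problems = 0
with open(CATALOG, newline="") as handle:
    for row in csv.DictReader(handle):
        target = ARCHIVE_ROOT / row["filename"]
        if not target.exists():
            print("MISSING :", row["filename"]); problems += 1
        elif sha256(target) != row["sha256"]:
            print("CORRUPT :", row["filename"]); problems += 1

print("catalog check complete,", problems, "problem(s) found")
```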

A final note: if you look at entities that store valuable images as their sole activity (Library of Congress, The National Archives, etc.) you will find [for still images] that the two most popular image formats are low-compression JPG and uncompressed TIFF. It’s a good place to start…

 

Digital Security in the Cloudy & Hyper-connected world…

April 5, 2015 · by parasam

Introduction

As we inch closer to the midpoint of 2015, we find ourselves in a drastically different world of both connectivity and security. Many of us switch devices throughout the day, from phone to tablet to laptop and back again. Even in corporate workplaces, the ubiquity of mobile devices has come to stay (in spite of the clamoring and frustration of many IT directors!). The efficiency and ease of use of integrated mobile and tethered devices propels many business solutions today. The various forms of cloud resources link all this together – whether personal or professional.

But this enormous change in topology has introduced very significant security implications, most of which are not really well dealt with using current tools, let alone software or devices that were ‘state of the art’ only a few years ago.

What does this mean for the user – whether personal or business? How do network admins and others that must protect their networks and systems deal with these new realities? That’s the focus of the brief discussion to follow.

No More Walls…


The pace of change in the ‘Internet’ is astounding. Even seasoned professionals who work and develop in this sector struggle to keep up. Every day when I read periodicals, news, research, feeds, etc. I discover something I didn’t know the day before. The ‘technosphere’ is actually expanding faster than our collective awareness – instead of hearing that such-and-such is being thought about, or hopefully will be invented in a few years, we are told that the app or hardware already exists and has a userbase of thousands!

One of the most fundamental changes in the last few years is the transition from ‘point-to-point’ connectivity to a ‘mesh’ connectivity. Even a single device, such as a phone or tablet, may be simultaneously connected to multiple clouds and applications – often in highly disparate geographical locations. The old tried-and-true methodology for securing servers, sessions and other IT functions was to ‘enclose’ the storage, servers and applications within one or more perimeters – then protect those ‘walled gardens’ with firewalls and other intrusion detection devices.

Now that we reach out every minute across boundaries to remotely hosted applications, storage and processes, the very concept of perimeter protection is no longer valid or functional.

Even the Washing Machine Needs Protection

Another big challenge for today’s security paradigm is the ever-growing “Internet of Things” (IoT). As more and more everyday devices become network-enabled, from thermostats to washing machines, door locks to on-shelf merchandise sensors – an entirely new set of security issues has been created. Already the M2M (Machine to Machine) communications are several orders of magnitude greater than sessions involving humans logging into machines.

This trend is set to literally explode over the next few years, with an estimated 50 billion devices being interconnected by 2020 (up from 8.7 billion in 2012). That’s nearly a 6x increase in just 8 years… The real headache behind this (from a security point of view) is the number of connections and sessions that each of these devices will generate. It doesn’t take much combinatorial math to see that literally trillions of simultaneous sessions will be occurring world-wide (and even in space… the ISS has recently completed upgrades to push 3Mbps channels to 300Mbps – a 100x increase in bandwidth – to support the massive data requirements of newer scientific experiments).

There is simply no way to put a ‘wall’ around this many sessions that are occurring in such a disparate manner. An entirely new paradigm is required to effectively secure and monitor data access and movement in this environment.

How Do You Make Bulletproof Spaghetti?


If you imagine the session connections from devices to other devices as strands of pasta in a boiling pot of water – constantly moving and changing in shape – and then wanted to encase each strand in an impermeable shield…. well you get the picture. There must be a better way… There are a number of efforts underway currently from different researchers, startups and vendors to address this situation – but there is no ‘magic bullet’ yet, nor is there even a complete consensus on what method may be best to solve this dilemma.

One way to attempt to resolve this need for secure computation is to break the problem down into the two main constituents: authentication of whom/what; and then protection of the “trust” that is given by the authentication. The first part (authentication) can be addressed with multiple-factor login methods: combinations of biometrics, one-time codes, previously registered ‘trusted devices’, etc. I’ve written on these issues here earlier. The second part: what does a person or machine have access to once authenticated – and how to protect those assets if the authentication is breached – is a much thornier problem.
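
As an illustration of the ‘one-time code’ factor mentioned above, here is a hedged sketch of an RFC 6238-style time-based code check derived from a shared secret with HMAC-SHA1. A real deployment would use a vetted library and combine this with a second factor (biometric, registered device, certificate) rather than rely on it alone.

```python
# A hedged sketch of the "one-time code" factor: an RFC 6238-style time-based
# code derived from a shared secret. The secret below is a demo value only.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, at_time=None):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at_time or time.time()) // interval)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # RFC 4226 dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

shared_secret = "JBSWY3DPEHPK3PXP"              # demo secret, base32-encoded
submitted = totp(shared_secret)                 # what the user's app would show
print("accepted" if hmac.compare_digest(submitted, totp(shared_secret)) else "rejected")
```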

In fact, from my perspective the best method involves a rather drastically different way of computing in the first place – one that would not have been possible only a few years ago. Essentially what I am suggesting is a fully virtualized environment where each session instance is ‘built’ for the duration of that session; only exposes the immediate assets required to complete the transactions associated with that session; and abstracts the ‘devices’ (whether they be humans or machines) from each other to the greatest degree possible.

While this may sound a bit complicated at first, the good news is that we are already moving in that direction, in terms of computational strategy. Most large scale cloud environments already use virtualization to a large degree, and the process of building up and tearing down virtual instances has become highly automated and very, very fast.

In addition, for some time now the industry has been moving towards thinner and more specific apps (such as found on phones and tablets) as opposed to massive thick client applications such as MS Office, SAP and other enterprise builds that fit far more readily into the old “protected perimeter” form of computing.

Furthermore (and I’m not making a point of picking on a particular vendor here, it’s just that this issue is a “fact of nature”), the Windows API model is just not secure any more. Due to the requirement of backwards compatibility – to a time when the security threats of today were not envisioned at all – many of the APIs are full of security holes. It’s a constant game of reactively patching vulnerabilities once discovered. This process cannot be sustained to support the level of future connectivity and distributed processing towards which we are moving.

Smaller, lightweight apps have fewer moving parts, and therefore by their very nature are easier to implement, virtualize, protect – and replace entirely should that be necessary. To use just an example: MS Word is a powerful ‘word processor’ – which has grown to integrate and support a rather vast range of capabilities including artwork, page layout, mailing list management/distribution, etc. etc. Every instance of this app includes all the functionality, of which 90% is unused (typically) during any one session instance.

If this “app” were broken down into many smaller “applets” that call on each other as required, and are made available to the user on the fly during the ‘session’, the entire compute environment becomes more dynamic, flexible and easier to protect.

Lowering the Threat Surface


One of the largest security challenges of a highly distributed compute environment – such as is presented by the typical hybrid cloud / world-wide / mobile device ecosystem that is rapidly becoming the norm – is the very large ‘threat surface’ that is exposed to potential hackers or other unauthorized access.

As more and more devices are interconnected – and data is interchanged and aggregated from millions of sensors, beacons and other new entities – the potential for breaches increases exponentially. It is practically impossible to proactively secure every one of these connections – or even monitor them on an individual basis. Some new form of security paradigm is required that will, by its very nature, protect and inhibit breaches of the network.

Fortunately, we do have an excellent model on which to base this new type of security mechanism: the human immune system. The ‘threat surface’ of the human body is immense, when viewed at a cellular level. The number of pathogens that continually attempt to violate the human body’s systems is vastly greater than even the number of hackers and other malevolent entities in the IT world.

The conscious human brain could not even begin to attempt to monitor and react to every threat that the hordes of bacteria, viruses and other pathogens bring against the body ecosystem. About 99% of such defensive response mechanisms are ‘automatic’ and go unnoticed by our awareness. Only when things get ‘out of control’ and the symptoms tell us that the normal defense mechanisms need assistance do we notice things like a sore throat, an ache, or in more severe cases: bleeding or chest pain. We need a similar set of layered defense mechanisms that act completely automatically against threats to deal with the sheer numbers and variations of attack vectors that are becoming endemic in today’s new hyper-connected computational fabric.

A Two-Phased Approach to Hyper-Security

Our new hyper-connected reality requires an equally robust and all-encompassing security model: Hyper-Security. In principle, an approach that combines the absolute minimal exposure of any assets, applications or connectivity with a corresponding ‘shielding’ of the session using techniques to be discussed shortly can provide an extremely secure, scalable and efficient environment.

Phase One – building user ‘sessions’ (whether that user is a machine or a human) that expose the least possible amount of threat surface while providing all the functionality required during that session – has been touched on earlier during our discussion of virtualized compute environments. The big paradigm shift here is that security is ‘built in’ to the applications, data storage structures and communications interface at a molecular level. This is similar to how the human body systems are organized, which in addition to the actual immune systems and other proactive ‘security’ entities, help naturally limit any damage caused by pathogens.

This type of architecture simply cannot be ‘baked in’ to legacy OS systems – but it’s time that many of these were moved to the shelf anyway: they are becoming more and more clumsy in the face of highly virtualized environments, not to mention the extreme amount of time/cost involved in maintaining these outdated systems. Having some kind of attachment or allegiance to an OS today is as archaic as showing a preference for a Clydesdale vs a Palomino in the world of Ferraris and Teslas… Really all that matters today is the user experience, reliability and security. How something gets done should not matter any more, even to highly technical users – any more than we need to know exactly which hormones are secreted by our islets of Langerhans (some small bits of the pancreas that produce some incredibly important things like insulin). These things must work (otherwise humans get diabetes or computers fail to process) but very few of us need to know the details.

Although the concept of this distributed, minimalistic and virtualized compute environment is simple, the details can become a bit complex – I’ll reserve further discussion for a future post.

To summarize, the security provided by this new architecture is one of prevention, limitation of damage and ease of applying proactive security measures (to be discussed next).

Phase Two – the protection of the compute sessions from either internal or external threat mechanisms – also requires a novel approach that is suited for our new ecosystems. External threats are essentially any attempt by unauthorized users (whether human, robots, extraterrestrials, etc.) to infiltrate and/or take data from a protected system. Internal threats are activities that are attempted by an authorized user – but are not authorized actions for that particular user. An example is a rogue network admin either transferring data to an unauthorized endpoint (piracy) or destruction of data.

The old-fashioned ‘perimeter defense systems’ are no longer appropriate for protection of cloud servers, mobile devices, etc. A particular example of how extensive and interconnected a single ‘session’ can be is given here:

A mobile user opens an app on their phone (say an image editing app) that is ‘free’ to the user. The user actually ‘pays’ for this ‘free’ privilege by donating a small amount of pixels (and time/focus) to some advertising. In the background, the app is providing some basic demographic info of the user, the precise physical location (in many instances), along with other data to an external “ad insertion service”.

This cloud-based service in turn aggregates the ‘avails’ (sorted by location, OS, hardware platform, app type that the user is running, etc.) and often submits these ‘avails’ [with the screen dimensions and animation capabilities] to an online auction system that bids the ‘avails’ against a pool of appropriate ads that are preloaded and ready to be served.

Typically the actual ads are not located on the same server, or even the same country, as either the ad insertion service or the auction service. It’s very common for up to half a dozen countries, clouds and other entities to participate in delivering a single ad to a mobile user.

This highly porous ad insertion system has actually become a recent favorite of hackers and con artists – even without technical breaches it’s an incredibly easy system to game: due to the speed of the transactions and the near-impossibility of monitoring them in real time, many ‘deviations’ are possible… and common.

There are a number of ingenious methods being touted right now to help solve both the actual protection of virtualized and distributed compute environments, as well as to monitor such things as intrusions, breaches and unintended data moves – all things that traditional IT tools don’t address well at all.

I am unaware of a ‘perfect’ solution yet to address either the protection or monitoring aspects, but here are a few ideas: [NOTE: some of these are my ideas, some have been taken up by vendors as a potential product/service. I don’t feel qualified enough to judge the merits of any particular commercial product at this point, nor is the focus of this article on detailed implementations but rather concepts, so I’ll refrain from getting into specific products].

  • User endpoint devices (anything from humans’ cellphones to other servers) must be pre-authenticated (using a combination of currently well-known identification methods such as MAC address, embedded token, etc.). On top of this basic trust environment, each session is authenticated with a minimum of a two-factor logon scheme (such as biometric plus PIN, certificate plus One Time Token, etc.). Once the endpoints are authenticated, a one-time-use VPN is established for each connection.
  • Endpoint devices and users are combined as ‘profiles’ that are stored as part of a security monitoring application. Each user may have more than one profile: for instance the same user may typically perform (or be allowed to perform by his/her firm’s security protocol) different actions from a cellphone as opposed to a corporate laptop. The actions that each user takes are automatically monitored / restricted. For instance, the VPNs discussed in the point above can be individually tailored to allow only certain kinds of traffic to/from certain endpoints. Actions that fall outside of the pre-established scope, or are outside a heuristic pattern for that user, can either be denied or referred for further authorization. (A small illustrative sketch follows this list.)
  • Using techniques similar to the SSL methodologies that protect and authenticate online financial transactions, different kinds of certificates can be used to permit certain kinds of ‘transactions’ (with a transaction being either access to certain data, permission to move/copy/delete data, etc.). In a sense it’s a bit like the layered security that exists within the Amazon store: it takes one level of authentication to get in and place an order, yet another level of ‘security’ to actually pay for something (you must have a valid credit card that is authenticated in real time by the clearing houses for Visa/MasterCard, etc.). For instance, a user may log into a network/application instance with a biometric on a pre-registered device (such as a fingerprint on an iPhone 6 that has been previously registered in the domain as an authenticated device). But if that user then wishes to move several terabytes of a Hollywood movie studio’s content to a remote storage site (!!) they would need to submit an additional certificate and PIN.
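
The small sketch below illustrates the per-profile action control described in the second bullet: each (user, device) profile carries a whitelist of actions and a data-volume ceiling, and anything outside it is denied or referred for further authorization. The profiles, actions and thresholds are invented for illustration.

```python
# A hypothetical sketch of per-profile action control: each (user, device)
# profile has a whitelist of actions and a daily data-volume ceiling.
# Profiles and thresholds are invented for illustration.

PROFILES = {
    ("jsmith", "corporate_laptop"): {"allowed": {"read", "copy", "delete"},
                                     "max_gb_per_day": 50},
    ("jsmith", "cellphone"):        {"allowed": {"read"},
                                     "max_gb_per_day": 2},
}

def evaluate(user, device, action, gb_moved_today):
    profile = PROFILES.get((user, device))
    if profile is None:
        return "deny"                          # unknown profile: no trust
    if action not in profile["allowed"]:
        return "refer"                         # outside scope: escalate
    if gb_moved_today > profile["max_gb_per_day"]:
        return "refer"                         # heuristic limit exceeded
    return "allow"

print(evaluate("jsmith", "cellphone", "copy", 0.1))        # refer
print(evaluate("jsmith", "corporate_laptop", "read", 3))   # allow
print(evaluate("mallory", "unknown_tablet", "read", 0))    # deny
```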

An Integrated Immune System for Data Security


The goal of a highly efficient and manageable ‘immune system’ for a hyper-connected data infrastructure is for such a system to protect against all possible threats with the least direct supervision possible. Not only is it impossible for a centralized, omniscient monitoring system to handle the incredible number of sessions that take place in even a single modern hyper-network; it’s equally difficult for a single monitoring / intrusion detection device to understand and adapt to the myriad of local contexts and ‘rules’ that define what is ‘normal’ and what is a ‘breach’.

The only practical method to accomplish the implementation of such an ‘immune system’ for large hyper-networks is to distribute the security and protection infrastructure throughout the entire network. Just as in the human body, where ‘security’ begins at the cellular level (with cell walls allowing only certain compounds to pass – depending on the type and location of each cell); each local device or application must have as part of its ‘cellular structure’ a certain amount of security.

As cells become building blocks for larger structures and eventually organs or other systems, the same ‘layering’ model can be applied to IT structures so the bulk of security actions are taken automatically at lower levels, with only issues that deviate substantially from the norm being brought to the attention of higher level and more centralized security detection and action systems.

Another issue of which to be aware: over-reporting. It’s all well and good to log certain events… but who or what is going to review millions of lines of logs if every event that deviates even slightly from some established ‘norm’ is recorded? And even then, that review will only be looking in the rear-view mirror… The human body doesn’t generate any logs at all and yet manages to more or less handle the security for 37.2 trillion cells!

That’s not to say that no logs at all should be kept – they can be very useful to help understand breaches and what can be improved in the future – but the logs should be designed with that purpose in mind and recycled as appropriate.

Summary

In this very brief overview we’ve discussed some of the challenges and possible solutions to the very different security paradigm that we now have due to the hyper-connected and diverse nature of today’s data ecosystems. As the number of ‘unmanned’ devices, sensors, beacons and other ‘things’ continues to grow exponentially, along with the fact that most of humanity will soon be connected to some degree to the ‘Internet’, the scale of the security issue is truly enormous.

A few ideas and thoughts that can lead to effective, scalable and affordable solutions have been discussed – many of these are new and works in progress but offer at least a partially viable solution as we work forward. The most important thing to take away here is an awareness of how things must change, and to keep asking questions and not assume that the security techniques that worked last year will keep you safe next year.

The Hack

December 21, 2014 · by parasam

 

It’s a sign of our current connectedness (and of the fact that few of us have the ability or desire to live under a digital rock – without an hourly fix of Facebook, Twitter, CNN, blogs, etc., we don’t feel we exist) that the title of this post needs no further explanation.

The Sony “hack” must be analyzed apart from the hyperbole of the media, politics and business ‘experts’ in order to view its various aspects with some degree of objectivity – and more importantly to learn the lessons that come with this experience.

I have watched and read endless accounts and reports on the event, from lay commentators, IT professionals, Hollywood business, foreign policy pundits, etc. – yet have not seen a concise analysis of the deeper meaning of this event relative to our current digital ecosystems.

Michael Lynton (CEO, Sony Pictures) stated on CNN’s Fareed Zakaria show today that “the malware inserted into the Sony network was so advanced and sophisticated that 90% of any companies would have been breached in the same manner as Sony Pictures.” Of course he had to take that position – since his interview was public, there was strong messaging to investors in both Sony and the various productions that it hosts.

As reported by Wired, Slate, InfoWorld and others, the hack was almost certainly initiated by the introduction of malware into the Sony network – and not particularly clever code at that. For the rogue code to execute correctly, and to have the permissions to access, transmit and then delete massive amounts of data, required the credentials of a senior network administrator – which supposedly were stolen by the hackers. The exact means by which this theft took place have not been revealed publicly. Reports on the amount of data stolen vary, but range from a few to as much as a hundred terabytes. That is a massive amount of data. To move this amount of data requires a very high bandwidth pipe – at least 1Gbps, if not higher (100TB at a sustained 1Gbps works out to roughly nine days of continuous transfer). Pipes of this size are very expensive, and normally are managed rather strictly to prioritize bandwidth. Depending on the amount of bandwidth allocated for the theft of data, the ‘dump’ must have lasted days, if not weeks.

All this means that a number of rather standard security protocols were either not in place, or not adhered to at Sony Pictures. The issue here is not Sony – I have no bone to pick with them, and in fact they have been a client of mine numerous times in the past while with different firms, and I continue to have connections with people there. This is obviously a traumatic and challenging time for everyone there. It’s the larger implications that bear analysis.

This event can be viewed through a few different lenses: political, technical, philosophical and commercial.

Political – Initially let’s examine the implications of this type of breach, data theft and data destruction without regard to the instigator. In terms of results the “who did it” is not important. Imagine instead of this event (which caused embarrassment, business disruption and economic loss only) an event in which the Light Rail switching system in Los Angeles was targeted. Multiple and simultaneous train wrecks are a highly likely result, with massive human and infrastructure damage certain. In spite of the changes that were supposed to follow on from the horrific crash some years ago in the Valley there, the installation of “collision avoidance systems” on each locomotive still has not taken place. Good intentions in politics often take decades to see fruition…

One can easily look at other bits of critical infrastructure – electrical grids, petroleum pipelines, air traffic control systems [look at London last week], telecommunications, internet routing and peering; the list goes on and on – that is inadequately protected.

Senator John McCain said today that of all the meetings in his political life, none took longer and accomplished less than those on cybersecurity. This issue is just not taken seriously. Many major hacks have occurred in the past – this one is getting serious attention from the media because the target is a media company, and because many high-profile Hollywood people have had a lot to say – which further fuels the news machine.

Now, whether North Korea instigated this or performed it on its own – both are possible, and according to the FBI the attribution is now fact – the issue of a nation-state attacking another nation’s interests is most serious, and demands a response from the US government. But regardless of the perpetrator – whether an individual criminal, a group, etc. – a much higher priority must be placed on the security of both public and private entities in our connected world.

Technical – The reporting and discussion on the methodology of this breach in particular, and ‘hacks’ in general, has ranged from the patently absurd to relatively accurate. In this case (and some other notable breaches in the last few years, such as Target), the introduction of malware into an otherwise protected (at least to some degree) system allowed access and control from an undesirable external party. While the implanting of the malware may have been a relatively simple part of the overall breach, the design of the entire process, codewriting and testing, steering and control of the malware from the external servers, as well as the data collection and retransmission clearly involved a team of knowledgeable technicians and some considerable resources. This was not a hack done by a teenager with a laptop.

On the other hand, the Sony breach was not all that sophisticated. The data made public so far indicates that the basic malware was the Trojan Destover, combined with a commercially available codeset, EldoS RawDisk, which was used for the wiping (destruction) of the Sony data. Both of these programs (and their analogues Shamoon and Jokra) have been detected in other breaches (Saudi Aramco, Aug 2012; South Korea, Mar 2013). See this link for further details. Each major breach of this sort tends to have individual code characteristics and required access credentials, with the final malware deliverable package often compiled shortly before the attack. The evidence disclosed in the Sony breach indicates that stolen senior network admin credentials were part of the package, which allowed full and unfettered access to the network.

It is highly likely that the network was repeatedly probed some time in advance of the actual breach, both to test the stolen credentials (to see how wide the access was) and to inspect for any tripwires that may have been set if the credentials had become suspect.

The real lessons to take away from the Sony event have much more to do with the structure of the Sony network, their security model, security standards and practices, and data movement monitoring. To be clear, this is not picking out Sony as a particularly bad example: unfortunately this firm’s security practices are rather the norm today; very, very few commercial networks are adequately protected or designed – even at financial companies, which one would assume have better than average security.

Without having to look at internal details, one only has to observe the reported breaches of large retail firms, banks and trading entities, government agencies, credit card clearing houses… the list goes on and on. Add to this that not all breaches are reported, and even fewer are publicly disclosed – estimates suggest that only 20-30% of network security breaches are reported. The reasons vary from loss of shareholder or customer trust, appearance of competitive weakness, not knowing what actually deserves reporting and how to classify the attempt or breach, etc. In many cases data on “cyberattacks” is reported anonymously or is gathered statistically by firms that handle security monitoring on an outsource basis. At least these aggregate numbers give a scope to the problem – and it is huge. For example, IBM’s report shows that in one year (April 2012 – April 2013) there were 73,400 attacks on a single large organization. This resulted in about 100 actual ‘security incidents’ during the year for that one company. A PWC report shows that an estimated 42 million data security incidents will have occurred during 2014 worldwide.

If this number of physical robberies were occurring the response, and general awareness, would be far higher. There is something insidious about digital crime that doesn’t attract the level of notice that physical events do. The economic loss worldwide is estimated in the hundreds of billions of dollars – with most of these proceeds ending up with organized crime, rogue nation-states and terrorist groups. Given the relative sophistication of ISIS in terms of social media, video production and other high-tech endeavours, it is highly likely that a portion of their funding comes from cybercrime.

The scope of the Sony attack, with the commensurate data loss, is part of what has made this so newsworthy. This is also the aspect of the breach that could have been mitigated rather easily – and it underscores the design / security practice faults that plague so many firms today. The following points list some of the weaknesses that contributed to the scale of this breach:

  • A single static set of credentials allowed nearly unlimited access to the entire network.
  • A lack of effective audit controls that would have brought attention to potential use of these credentials by unauthorized users.
  • A lack of multiple-factor authentication that would have made hard-coding of the credentials into the malware ineffective.
  • Insufficient monitoring of data movement: the volume of data transmitted out of the Sony network was massive, and must have impacted normal working bandwidth. It appears that large amounts of data were allowed to move unmanaged in and out of the network – an effective data-movement audit and management process would have triggered an alert (a minimal sketch of such a volume-based egress alert follows this list).
  • Massive data deletion should have required at least two distinct sets of credentials to initiate.
  • A lack of internal firewalls or ‘firestops’ that could have limited the scope of access, damage, theft and destruction.
  • A lack of understanding at the highest management levels of the vulnerability of the firm to this type of breach, with commensurate board expertise and oversight. In short, a lack of governance in this critical area. This is perhaps one of the most important, and least recognized, aspects of genuine corporate security.
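To make the egress-monitoring point above concrete, here is a minimal sketch of a rolling-window volume alert. It is illustrative only: it assumes a hypothetical polling source that reports outbound byte counts (for example from a flow collector), and the threshold and window size are arbitrary placeholders – not a description of any tooling Sony used or should have used.

```python
# Minimal sketch of a volume-based egress alert (illustrative assumptions only).
from collections import deque
import time

WINDOW_SECONDS = 3600                  # examine the last hour of outbound traffic
ALERT_THRESHOLD_BYTES = 50 * 2**30     # e.g. flag more than 50 GB leaving per hour

_samples = deque()                     # (timestamp, outbound_bytes) pairs

def record_sample(outbound_bytes: int) -> bool:
    """Record one polling sample; return True if hourly egress exceeds the threshold."""
    now = time.time()
    _samples.append((now, outbound_bytes))
    # discard samples that have aged out of the rolling window
    while _samples and _samples[0][0] < now - WINDOW_SECONDS:
        _samples.popleft()
    total = sum(b for _, b in _samples)
    if total > ALERT_THRESHOLD_BYTES:
        print(f"ALERT: {total / 2**30:.1f} GB left the network in the last hour")
        return True
    return False
```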

Philosophical – With the huge paradigm shift that the digital universe has brought to the human race, we must collectively assess and understand the impacts of security, privacy and ownership of that ephemeral yet tangible entity called 'data'. An enormous transformation is under way in which millions of people (the so-called 'knowledge workers') produce, consume, trade and enjoy nothing but data. There is not an industry untouched by this change: even very 'mechanistic' enterprises such as farming, steel mills, shipping and rail transport are now deeply intertwined with IT. Sectors such as telecoms, entertainment, finance, design, publishing, photography and so on are virtually impossible to operate without complete dependence on digital infrastructures. Medicine, aeronautics, energy generation and prospecting – the lists go on and on.

The overall concept of security has two major components: Data Integrity (ensuring that the data is not corrupted by either internal or external factors, and that the data can be trusted) and Data Security (ensuring that only authorized users have access to view, transmit, delete or perform other operations on the data). Each is critical. Integrity can be likened to disease in the human body: pathogens that break the integrity of certain cells will disrupt them and eventually cause injury or death. Security is similar to the protection that skin and other peripheral structures provide – a penetration of these boundaries leads to a compromise of the operation of the body, or in extreme cases major injury or death.
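As a concrete (if simplified) illustration of that distinction, the sketch below separates the two checks in code. The digest value, file path and user list are hypothetical placeholders; the point is only that one check asks "has the data been altered?" while the other asks "is this party allowed near the data at all?"

```python
# Illustrative only: the two checks distinguished above, with hypothetical values.
import hashlib

EXPECTED_SHA256 = "<digest recorded when the file was known to be good>"
AUTHORIZED_USERS = {"finance_admin", "audit_service"}

def data_integrity_ok(path: str) -> bool:
    """Data Integrity: has the content been corrupted or tampered with?"""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == EXPECTED_SHA256

def data_access_ok(user: str) -> bool:
    """Data Security: is this user authorized to operate on the data at all?"""
    return user in AUTHORIZED_USERS
```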

An area that poses a particular challenge is the 'connectedness' of modern data networks. The new challenge of privacy in the digital ecosystem has prompted (and will continue to prompt) many conversations, from legal to moral/ethical to practical. The "Facebook" paradigm [everything is shared with everybody unless you take efforts to limit such sharing] is really something we haven't experienced since the small towns of past generations, where everybody knew everyone's business…

Just as we have always had a criminal element in societies – those who will take, destroy, manipulate and otherwise seek self-aggrandizement at the expense of others – we now have the same manifestations in the digital ecosystem. The difference is that digital crime is vastly more efficient, less detectable, often more lucrative, and very difficult to police. The legal system is woefully outdated and outclassed by modern digital pirates – there is almost no international cooperation, and very poor understanding among most police departments and judges. The sad truth is that the overwhelming majority of cyber-criminals will get away with their crimes for as long as they care to. A number of very basic things must change in our collective societies in order to achieve, in the digital realm, the level of crime reduction we have come to expect in the physical realm.

A particular challenge is mostly educational and ethical: the belief that everything on the internet is "free" and there for the taking, without regard to the intellectual property owner's claim. Attempting to police this after the fact is largely doomed to failure – little will change until users are educated about the disruption and cost of their theft of intellectual property. This attitude has nearly destroyed the music industry worldwide, and the losses to the film and television industry amount to billions of dollars annually.

Commercial – The economic losses due to data breaches, theft, destruction, etc. are massive, and the perception of the scale of this loss is staggeringly low – even among the commercial stakeholders who are directly affected. Firms that spend massive amounts of time, money and design effort to physically protect their enterprises apply only the flimsiest of effective data security measures. Some of this is due to lack of knowledge, some to a lack of understanding of the core principles that comprise a real and effective set of data protection procedures, and a certain amount to laziness: strong security always takes some effort and time during each session with the data.

It is unfortunate, but the pain, publicity and potential legal liability of major breaches such as Sony's are seemingly necessary to drive home the point that everyone is vulnerable. It is imperative that all commercial entities, from a vegetable seller at a farmer's market who uses SnapScan all the way to global enterprises such as BP Oil, J.P. Morgan or General Motors, treat cyber crime as a continual, ongoing and very real challenge – and deal with it at the board level with the same importance given to other critical areas of governance: finance, trade secrets, commercial strategy, etc.

Many firms will say, "But we already spend a ridiculous amount on IT, including security!" I am sure Sony is saying this even today… but it's not always the amount of the spend, it's how it's spent. A great deal of cash can be wasted on pretty blinking lights and cool software that in the end is just not effective. Most of the changes required today are in methodology, practice, and actually adhering to already-adopted 'best practices'. I personally have yet to see any business, large or small, that follows its own stated security practices to the letter.

– Ed Elliott

Past articles on privacy and security may be found at these links:

Comments on SOPA and PIPA

CONTENT PROTECTION – Methods and Practices for protecting audiovisual content

Anonymity, Privacy and Security in the Connected World

Whose Data Is It Anyway?

Privacy, Security and the Virtual World…

Who owns the rain?  A discussion on accountability of what’s in the cloud…

The Perception of Privacy

Privacy: a delusion? a right? an ideal?

Privacy in our connected world… (almost an oxymoron)

NSA (No Secrets Anymore), yet another Snowden treatise, practical realities…

It’s still Snowing… (the thread on Snowden, NSA and lack of privacy continues…)

 

Why do musicians have lousy music systems?

August 18, 2012 · by parasam

[NOTE: this article is a repost of an e-mail thread started by a good friend of mine. It raised an interesting question, and I found the answers and comments fascinating and wanted to share with you. The original thread has been slightly edited for continuity and presentation here].

To begin, the original post that started this discussion:

Why do musicians have lousy hi-fis?

It’s one of life’s little mysteries, but most musicians have the crappiest stereo systems.

  by Steve Guttenberg

August 11, 2012 7:36 AM PDT

I know it doesn’t make sense, but it’s true: most musicians don’t have good hi-fis.

To be fair, most musicians don’t have hi-fis at all, because like most people musicians listen in their cars, on computers, or with cheap headphones. Musicians don’t have turntables, CD players, stereo amplifiers, and speakers. Granted, most musicians aren’t rich, so they’re more likely to invest whatever available cash they have in buying instruments. That’s understandable, but since they so rarely hear music over a decent system they’re pretty clueless about the sound of their recordings.

(Credit: Steve Guttenberg/CNET)

Musicians who are also audiophiles are rare, though I’ve met quite a few. Trumpet player Jon Faddis was definitely into it, and I found he had a great set of ears when he came to my apartment years ago to listen to his favorite Dizzy Gillespie recordings. Most musicians I’ve met at recording sessions focus on the sound of their own instrument, and how it stands out in the mix. They don’t seem all that interested in the sound of the group.

I remember a bass player at a jazz recording session who grew impatient with the time the engineer was taking to get the best possible sound from his 200-year-old acoustic bass. After ten minutes the bassist asked the engineer to plug into a pickup on his instrument, so he wouldn't take up any more time setting up the microphone. The engineer wasn't thrilled with the idea, because he would then just have the generic sound of a pickup rather than the gorgeous sound of the instrument. I was amazed: the man probably paid $100,000 for his bass, and he didn't care if its true sound was recorded or not. His performance was what mattered.

From what I’ve seen, musicians listen differently from everyone else. They focus on how well the music is being played, the structure of the music, and the production. The quality of the sound? Not so much!

Some musicians have home studios, but very few of today’s home (or professional) studios sound good in the audiophile sense. Studios use big pro monitor speakers designed to be hyperanalytical, so you hear all of even the most subtle details in the sound. That’s the top requirement, but listening for pleasure is not the same as monitoring. That’s not just my opinion — very, very few audiophiles use studio monitors at home. It’s not their large size or four-figure price tags that stop them, as most high-end audiophile speakers are bigger and more expensive. No, studio monitor sound has little appeal for the cognoscenti because pro speakers don’t sound good.

I have seen the big Bowers & Wilkins, Energy, ProAc, and Wilson audiophile speakers used by mastering engineers, so it does work the other way around. Audiophile speakers can be used as monitors, but I can’t name one pro monitor that has found widespread acceptance in the audiophile world.

Like I said, musicians rarely listen over any sort of decent hi-fi, and that might be part of the reason they make so few great-sounding records. They don’t know what they’re missing.

——–

Now, in order, the original comment and replies:  [due to the authors of these e-mails being located in USA, Sweden, UK, etc. not all of the timestamps line up, but the messages are in order]

From: Tom McMahon
Sent: Saturday, August 11, 2012 6:09 PM
To: Mikael Reichel; ‘Per Sjofors’; John Watkinson
Subject: Why do musicians have lousy hi-fis?

I agree to some of this, have same observations.

But I don't agree with the broad use of "most musicians", as it may be interpreted to mean the majority. Neither of us can know this. Neil Young evidently cares a lot.

However, the statement "pro speakers do not sound good" is a subjective statement. It's like saying distilled water (i.e. 100% H2O) doesn't taste good. Possibly many think so, but distilled water is the purest form of water, and by definition anything less pure is not pure water – whether you like it or not.

The water is the messenger and shooting it for delivering the truth is not productive.

If audiophiles don't like to hear the truth, it sort of deflates them.

A friend, a singer/songwriter with fifteen CDs behind her in the rock/blues genre, on a rare occasion when I got her to listen to her own stuff over a pair of Earo speakers, commented on the detail and realism. I then suggested that her forthcoming CD should be mastered over these speakers; she replied, "I don't dare".

Best/Mike

——-

From: John Watkinson
Sent: Sun 8/12/2012 6:46 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Hello All,

If a pro loudspeaker reproduces the input waveform and an audiophool [ed.note: letting this possible mis-spelling stand, in case it’s intended…] speaker also does, then why do they sound different?

We know the reasons, which are that practically no loudspeakers are accurate enough.  We have specialist speakers that fail in different ways.

The reason musicians are perceived to have lousy hi-fis may be that practically everyone does. The resultant imprinting means that my crap speaker is correct and your crap speaker is wrong, whereas in fact they are all crap.

Our author doesn’t seem to know any of this, so he is just wasting our time.

Furthermore I know plenty of musicians with good ears and good hi-fi.

Best,

John

——-

From: Mikael Reichel
Sent: Sun 8/12/2012 12:58 PM
To: John Watkinson; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Andrew is a really nice guy.

He has a talent for selecting demo material, and his TAD speakers sound quite good. But they are passive and also use bass-reflex. This more or less puts the attainable quality level against a brick wall. Add the soft dome tweeter and I am a bit surprised at Mr. Jones' choices; dome tweeters are fundamentally flawed designs.

One logical result of making "new" drivers is to skip ferrite magnets, because they become a size and weight thief and also limit mechanical freedom for the design engineer. You almost automatically get higher sensitivity by using neodymium. But this is also a myth, as little is done to address the fundamental mismatch of the driver to the air itself. I would guess Andrew has had the good sense to go with neodymium magnets.

To deliver affordable speakers is a matter of having a strong brand to begin with, one that allows for volumes so that clients will buy without listening first. This then allows for direct delivery, removing the importer and retail links from the chain. Typically only about 25% of the MSRP reaches the manufacturer. Remove the manufacturing cost and you realize it's a numbers game.

This is exactly what is going on in the "audio" business today. The classical sales structures are being torn down. A very large number of speaker manufacturers are going to disappear because they don't have the brand and volumes to sell over the web. To survive, new ways of reaching clients have to be invented. A true paradigm shift.

TAD has been the motor to provide this brand recognition and consumers are marketed to believe that they can get almost $80 performance from a less than $1 speaker, which is naïve.

If the speakers can be made active with DSP, they can be made to sound unbelievably good.  This is the real snapshot of the future.

/Mike

—-

From: John Watkinson
Sent: Sun 8/12/2012 11:13 PM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Hello All,

Mike is right. The combination of dome tweeter, bass reflex and passive crossover is a recipe for failure. But our journalist friend doesn’t know. I wonder what he does know?

Best,

John

——

From: Ed Elliott
Sent: Mon 8/13/2012 7:02 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; John Watkinson
Subject: Why do musicians have lousy hi-fis?

Hi Mike,

Well, this must be answered at several levels. Firstly the author has erred in two major, but unfortunately not at all uncommon ways:  the linguistic construction of “most <fill_in_the_blank>” is inaccurate and unscientific at the best of times, and all too often a device for aligning some margin of factuality to a desired hypothesis; the other issue is the very basis of the premise raised is left undefined in the article – what is “a good hi-fi system”?

Forgoing for the moment the gaps in logic and ontological reasoning that may be applied to the world of aural perception, the author does raise a most interesting question – one that, had it been pursued in a different manner, would have made for a far more interesting article. Setting aside issues of cost or availability (a total red herring today – quality components have never been more affordable): WHY don't 'most' musicians apparently care to have 'better' sound systems? There is no argument that many musicians DO have excellent systems, at all levels of affordability, and appreciate the aural experience provided. However – and I personally have spent many decades closely connected to the professional audio industry, musicians in general, and the larger post-production community – I do agree that, based purely on anecdotal observation, many talented musicians do not in fact attach much importance to the expense or quality of their 'retail playback equipment.' The same of course is not true of their instruments or any equipment they deem necessary to express their music.

The answer I believe is most interesting: I believe that good musicians simply don't need a high quality audio system in order to hear music. The same synaptic wiring and neural connectedness – the stuff that really is the "application layer" in the brain – means that this group of people actually 'hears' differently. Hearing, just like seeing, is almost 90% a neurological activity. Beyond the very basic mechanical issues of sound capture, focus, filtering and conversion from pressure waves to chemico-electrical impulses (provided by the ears, ear canal, eardrum and cochlea), all the rest of 'hearing' is provided by the 'brain software.'

To cut to the chase: this group of people already has a highly refined 'sample set' of musical notes, harmonies, melodies, rhythms, etc. in their brains, and needs very little external stimulation in order to 'fire off' those internalized 'letters and words' of musical sound. Just as an inexperienced reader may 'read' individual words while a highly competent and experienced reader digests entire sentences as a single optic-with-meaning element, so a lay person may actually need a 'better' sound system in order to 'hear' the same things that a trained musician would hear.

That is not to say that musicians don't hear – and appreciate – external acoustic reality: just try playing a bit out of tune, lag a few milliseconds on a lead guitar riff, or not express the same voice as the others in the brass section, and you will get a quick lesson in just how acute their hearing is. It's just tuned to different things. Once a composed song is under way, the merest hint of a well-known chord progression 'fires off' that experience in the musician's brain software – so they 'hear' it as it was intended. The harmonic distortion, the lack of coherent imaging, the flappy bass – all those 'noise elements' are filtered out by their brains – they already know what it's supposed to sound like.

If you realize that someone like Anne-Sophie Mutter has most likely played over 100,000 hours of violin already in her life, and look at what this has done to her brain in terms of listening (forgoing for the moment the musculo-skeletal reprogramming that has turned her body into as much of a musical instrument as the Stradivarius) – you can see that there is not a single passage of classical stringed or piano music that is not already etched into her neural fabric at almost an atomic level.

With this level of ‘programming’ it just doesn’t take a lot of external stimulation in order for the brain to start ‘playing the music.’ Going at this issue from a different but orthogonal point of view:  a study of how hearing impaired people ‘hear’ music is also revealing – as well as the other side of that equation: those that have damaged or uncommon neural software for hearing. People in this realm include autistics (who often have an extreme sensitivity to sound); stroke victims; head trauma victims, etc. A study here shows that the ‘brain software’ is far more of an issue in terms of quality of hearing than the mechanics or objective scientific ‘quality’ (perhaps an oxymoron) of the acoustic pressure waves provided to the human ear.

Evelyn Glennie – profoundly deaf – is a master percussionist (we just saw her play at the Opening Ceremonies) who has adapted 'hearing' to an astounding physical sense of vibration, including through her feet (she mostly plays barefoot for this reason). I would strongly encourage reading the three short and highly informative letters she published on hearing, disabilities and professional music. Evelyn does not need, nor can she appreciate, DACs with only 0.0001% THD and sub-millisecond time-domain accuracy – but there is no question whatsoever that this woman hears music!

This may have been a bit of a round-about answer to the issues of why ‘most musicians’ may have what the author perceives to be ‘sub-optimal’ hi-fi systems – but I believe it more fully answers the larger question of aural perception. I for instance completely appreciate (to the limits of my ability as a listener – which are professional but not ‘golden ears’) the accuracy, imaging and clarity of high end sound systems (most of which are way beyond my budget for personal consumption); but the lack of such does not get in the way of my personal enjoyment of many musicians’ work – even if played back from my iPod. Maybe I have trained my own brain software just a little bit…

In closing, I would like to take an analogy from the still photographer's world: this group, amateurs and professionals alike, puts an almost unbelievable level of importance on its kit – with various bits of hardware (and now software) taking either the blame or the glory for the quality (or lack thereof) of their images. My personal observation is that the eye/brain behind the viewfinder is responsible for about 93% of both the successes and failures of a given image to match the desired state. I think a very similar situation exists today in both 'audiophile' and 'professional audio' circles – it would be a welcome change to discuss facts, not fancy.

Warmest regards,

Ed

——-

From: John Watkinson
Sent: Mon 8/13/2012 12:50 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Hello All,

I think Ed has hit the nail on the head. It is generally true that people hear what they ought to hear and see what they ought to see, not what is actually there. It is not restricted to musicians, but they have refined it for music.

The consequences are that transistor radios and portable cassette recorders, which sound like strangled cats, were popular, as iPods with their MP3 distortion are today. Likewise in photography: the Brownie and the Instamatic were popular, yet the realism or quality of the snaps was in the viewer's mind. Most people are content to watch television sets that are grossly misadjusted, and they don't see spelling mistakes.

I would go a little further than Ed’s erudite analysis and say that most people not only see and hear what they ought to, but they also think what they ought to, even if it defies logic. People in groups reach consensus, even if it is wrong, because the person who is right suffers peer pressure to conform else risk being ejected from the group. This is where urban myths that have no basis in physics come from. The result is that most decisions are emotive and science or physics will be ignored. Why else do 40 percent of Americans believe in Creation? I look forward to having problems with groups because it confirms that my ability to use logic is undiminished. Was it Wittgenstein who said what most people think doesn’t amount to much?

Marketing, that modern cancer, leaps on this human failing by playing on emotions to sell things. It follows that cars do not need advanced technology if they can be sold by draping them with scantily-clad women. Modern cars are still primitive because the technical requirements are distorted downwards by emotion. In contrast, Ed's accurate observation that photographers obsess about their kit, as do audiophiles, illustrates that technical requirements can also be distorted upwards by emotion.

Marketing also preys on people to convince them that success depends on having all the right accessories and clothing for the endeavour. Look at all the stuff that sportsmen wear.

Whilst it would be nice for hi-fi magazines to discuss facts instead of fancy, I don’t see it happening as it gets in the way of the marketing.

Best,

John

——

From: Ed Elliott
Sent: Monday, August 13, 2012 8:11 PM
To: Mikael Reichel; Per Sjofors; Tom McMahon; John Watkinson
Subject: Why do musicians have lousy hi-fis?

Hi John, Mike, et al

Love your further comments, but I'm afraid that "marketing, that modern cancer" is a bit older than we would all like to think… one example that comes to mind is about 400-odd years old now – and actually represents one of the most powerful and enduring 'brands' ever to be promoted in Western culture: Shakespeare. Never mind that allusions to and adaptations of his plays have permeated our culture for hundreds of years – even 'in the time', Shakespeare created, bonded with and nurtured his customer base. Now this was admittedly marketing in a purer sense (you actually got something for your money) – but nonetheless repeat business was just as much of an issue then as now. Understanding his audience, knowing that both tragedy and comedy were required to build the dramatic tension that would bring crowds back for more; recognizing the capabilities and understanding of his audience so that they were stimulated but not puzzled, entertained but not insulted – there was a mastery of marketing there beyond just the tights, skulls and iambic pentameter.

Unfortunately I do agree that with time, marketing hype has diverged substantially from the underlying product to the point that often they don’t share the same solar system… And what’s worse is that now in many cases the marketing department actually runs product development in many large corporations… and I love your comments on ‘stuff sportsmen wear’ – to copy my earlier analogy on photography, if I was to pack up all the things that the latest photo consumer magazine and camera shop said I needed to have to take a picture I would need a band of Sherpas…

Now there is potentially a bit of light ahead: the almost certain demise of most printed magazines (along with newspapers, etc.) is creating a tumultuous landscape that won't stabilize right away. This means that whichever entities remain and survive to publish information no longer have to conform to selling X pages of ads to keep the magazine alive (and hence pander to marketing, etc.). There are two very interesting facts about digital publishing that to date have mostly been ignored (and IMHO they are the root cause of digital mags being so poorly constructed and read – those who think they can convert all their print readers to e-zine subscriptions need to check out the multi-year retention stats; they are abysmal).

#1 is people read digital material in a very different way than paper. (The details must wait for another thread – too much now). Bottom line is that real information (aka CONTENT) is what keeps readership. Splash and video might get some hits, but the fickle-factor is astronomical in online reading – if you don’t give your reader useful facts or real entertainment they won’t stay.

#2 is that, if done correctly, digital publishing can be very effective, beautiful, evocative and compelling at a very low cost. There simply isn’t the need for massive ad dollars any more. So the type of information that you all are sharing here can be distributed much more widely than ever before. I do believe there is a window of opportunity for getting real info out in front of a large audience, to start chipping away at this Himalayan pile of stink that defines so much of (fill in the blank: audio, tv, cars, vitamins, anti-aging creams, etc.)

Ok, off to answer some e-mails from that dwindling supply of real importance: paying clients!

Many thanks

Ed

——–

From: John Watkinson
Sent: Tue 8/14/2012 12:57 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Dear Ed,

This is starting to be interesting. I take your point about Shakespeare being marketed, but if we want to go back even further, we have to look at religion as the oldest use of marketing. It’s actually remarkable that the religions managed to prosper to the point of being degenerate when they had no tangible product at all. Invent a problem, namely evil, and then sell a solution, namely a god. It’s a protection racket. Give us money so we can build cathedrals and you go to heaven. It makes oxygen free speaker cables seem fairly innocuous. At least the hi-fi industry doesn’t threaten you with torture. If you  read anything about the evolution of self-replicating viruses, suddenly you see why the Pope is opposed to contraception.

I read an interesting book about Chartres cathedral, in which it was pointed out that the engineering skills and the underlying science needed to make the place stand up (it’s more air than stone) had stemmed from a curiosity about the laws of nature that would ultimately lead to the conclusion that there was no Creation and no evidence for a god, that the earth goes round the sun and that virgin birth is due to people living in poverty sharing bathwater.

If you look at the achievements of hi-fi and religion in comparison to the achievements of science and engineering, the results are glaring. The first two have made no progress in decades, because they are based on marketing and have nothing to offer. Prayer didn’t defeat Hitler, but radar, supercharging and decryption may have played a small part.

Your comments about printed magazines and newspapers are pertinent. These are marketing tools and as a result the copy is seldom of any great merit, as Steve Guttenberg continues to demonstrate in his own way. Actually the same is true for television. People think the screensaver was a computer invention. Actually it's not – it's what television broadcasts between commercial breaks.

So yes, you are right that digital/internet publishing is in the process of pulling the rug on traditional media. Television is over. I don’t have one and I don’t miss the dumbed-down crap and the waste of time. Himalayan pile of stink is a wonderful and evocative term!

Actually services like eBay are changing the world as well. I hardly ever buy anything new if I can get one someone doesn’t want on eBay. It’s good for the vendor, for me and the environment.

In a sense the present slump/recession has been good in some ways. Certainly it has eroded people's faith in politicians and bankers, and the shortage of ready cash has led many to question consumerism.

Once you stop being a consumer, reverse the spiral and decide to tread lightly on the earth, the need to earn lots of money goes away. My carbon neutral house has zero energy bills and my  policy of buying old things and repairing them means I have all the gadgets I need, but without the cost. The time liberated by not needing to earn lots of money allows me to make things I can’t buy, like decent loudspeakers. It means I never buy pre-prepared food because I’m not short of time. Instead I can buy decent ingredients and know what I’m eating.

One of the experiences I treasure due to reversing the spiral was turning up at a gas station in Luxembourg. There must have been a couple of million dollars' worth of pretentious cars filling up – BMW, Lexus, Mercedes, the lot. And they all turned to stare at my old Jaguar when I turned up. It was something they couldn't have, because they were too busy running on the treadmill to run a car that needs some fixing.

Best,

John

——

From: Ed Elliott
Sent: Wed 8/15/2012 1:01 PM
To: Mikael Reichel; Per Sjofors; Tom McMahon; John Watkinson
Subject: Why do musicians have lousy hi-fis?

Hi John,

Yes, I’m finding this part of my inbox so much more interesting than the chatterings of well-intentioned (but boring) missives; and of course the ubiquitous efforts of (who else!) the current transformation of tele-marketers into spam producers… I never knew that so many of my body parts needed either extending, flattening, bulking up, slimming down, etc. etc!

Ahh! Religion… yes, got that one right the first time. I actually find that there’s a more nefarious aspect to organized religion: to act as a proxy for nation-states that couldn’t get away with the murder, land grabs, misogyny, physical torture and mutilation if these practices were “state sponsored” as opposed to “expressions of religious freedom.”  Always makes me think of that Bob Dylan song, “With God on Our Side…”

On to marketing in television… and TV in general… I actually turned mine on the other day (well, since I don't have a 'real' TV – but I do have the cable box, as I use that for high speed internet – I turned on the little tuner in my laptop so I could watch the Olympics in HD; the bandwidth of the NBC streaming left something to be desired) – and as usual found that the production quality and techniques used in the adverts mostly exceed the filler… The message, well, that went the way of all adverts: straight back out of my head into the ether… What I want to know – and this is a better trick than almost anything – is how the advertisers ever got convinced that watching this drivel actually affects what people buy. Or am I just an odd-bod who is not swayed by hype, mesmerizing disinformation [if I buy those sunglasses I'll get Giselle to come home with me…], or downright charlatanry?

And yeah for fixing things and older cars… I bought my last car in 1991 and have found no reason [or desire] to replace it. And since (thank g-d) it was “pre-computer” it is still ‘fixable’ with things like screwdrivers and spanners… I think another issue in general is that our cultures have lost the understanding of ‘preventative maintenance’ – a lot of what ends up in the rubbish bin is due to lack of care while it was alive..

Which brings me back to a final point:  I do like quality, high tech equipment, when it does something useful and fulfills a purpose. But I see a disappointing tendency with one of the prime vendors in this sector:  Apple. I am currently (in my blog) writing about the use of iPhones as a practical camera system for HD cinemaphotography – with all of the issues and compromises well understood! Turns out that two of the fundamental design decisions by Apple are at the core of limiting the broader adoption of this platform (I describe how to work around this, but it’s challenging):  the lack of a removable battery and removable storage.

While there are obvious advantages to those decisions in terms of reliability and industrial design, it can't be ignored that the lack of both of those features certainly militates towards a user 'upgrading' at regular intervals (since they can't swap out a battery or add more storage). And now they have migrated this 'sealed design' to the laptops… the new Mac Air is for all practical purposes unrepairable (again, no removable battery, the screen is glued into the aluminium case, and all sub-assemblies are wave-soldered to the main board). The construction of even the Mac Pro is moving in that direction.

So my trusty Dell laptop, with all of its warts, is still appreciated for its many little screws and flaps… when a bit breaks, I can take it apart and actually change out just a Bluetooth receiver, or upgrade memory, or even swap the cpu. Makes me feel a little less redundant in this throw-away world.

I’ll leave you with this:

Jay Leno driving down Olive Ave. last Sunday in his recently restored 1909 Doble & White steam car. At 103 years old, this car would qualify for all current California “low emissions” and “fuel efficiency” standards…

(snapped from my iPhone)

Here is the link to Jay’s videos on the restoration process.

Enjoy!

Ed

iPhone Cinemaphotography – A Proof of Concept (Part 1)

August 3, 2012 · by parasam

I'm introducing a concept that I hope some of my readers may find interesting: the production of an HD video that is entirely built using only the iPhone (and/or iPad). Everything from storyboard to all photography, editing, sound, titles and credits, graphics and special effects, etc. – and final distribution – can now be performed on a "cellphone." I'll show you how. Most of the attention given to the new crop of highly capable 'cellphone cameras', such as those in the iPhone and certain Android phones, has been focused on still photography. While motion photography (video) is certainly well-known, it has not received the same attention and detail – nor the number of apps – as its single-image sibling.

While I am using a single platform with which I am familiar (iOS on the iPhone/iPad), this concept can, I believe, be realized on the Android class of devices as well. I have not (nor do I intend to) research that possibility – I'll leave that for others who are more familiar with that platform. The purpose is to show that such a feat CAN be done – and hopefully done reasonably well. It's only been a few years since the production of HD video was strictly in the realm of serious professionals, with budgets of hundreds of thousands of dollars or more. While there are of course many compromises – and I don't for a minute pretend that the range of possible shots or quality will come anywhere near what a high quality DSLR, RED, Arri or other professional video camera can produce – I do know that a full HD (1080p) video can now be totally produced on a low-cost mobile platform.

This POC (Proof Of Concept) is intended as more than just a lark or a geeky way to eat some spare time:  the real purpose is to bring awareness that the previous bar of high cost cinemaphotography/editing/distribution has been virtually eliminated. This paves the way for creative individuals almost anywhere in the world to express themselves in a way that was heretofore impossible. Outside of America and Western Europe both budgets and skilled operator/engineers are in far lower supply. But there are just as many people who have a good story to tell in South Africa, Nigeria, Uruguay, Aruba, Nepal, Palestine, Montenegro and many other places as there are in France, Canada or the USA. The internet has now connected all of us – information is being democratized in a huge way. Of course there are still the ‘firewalls’ of North Korea, China and a few others – but the human thirst for knowledge, not to mention the unbelievable cleverness and endurance of 13-year-old boys and girls in figuring out ‘holes in the wall’ shows us that these last bastions of stolidity are doomed to fall in short order.

With Apple and other manufacturers doing their best to leave nary a potential customer anywhere in the world ‘out in the cold’, the availability, both in real terms and affordability, is almost ubiquitous. With apps now costing typically a few dollars (it’s almost insane – the Avid editor for iOS is $5; the Avid Media Composer software for PC/Mac is $2,500) an entire production / post-production platform can be assembled for under $1,000. This exercise is about what’s possible, not what is the easiest, most capable, etc. Yes, there are many limitations. Yes, some things will take a lot longer. But what you CAN do is just nothing short of amazing. That’s the story I’m going to share with you.

A note to my readers:  None of the hardware or software used in this exercise was provided by any vendor. I have no commercial relationship with any vendor, manufacturer or distributor. Choices I have made or examples I use in this post are based purely on my own preference. I am not a professional reviewer, and have made no attempt to exhaustively research every possible solution for the hardware or software that I felt was required to produce this video. All of the hardware and software used in this exercise is currently commercially available – any reasonably competent user should be able to reproduce this process.

Before I get into detail on hardware or software, I need to remind you that the most important part of any video is the story. Just having a low-cost, relatively high quality platform on which to tell your 'story' won't help if you don't have something compelling to say – and the people/places/things in front of the lens to say it. We have all seen that vast amounts of money and technical talent mean nothing in the face of a lousy script or poor production values – just look over some of the (unfortunately many) Hollywood bombs… I'm the first one to admit that motion picture storytelling is not my strong point. I'm an engineer by training and my personal passion is still photography – telling a story with a single image. So… in order to bring this idea to fruition, I needed help. After some thought, I decided that 'piggybacking' on an existing production was the most feasible way to realize this idea: basically adding a few iPhone cameras to a shoot where I could take advantage of the existing set, actors, lighting, direction, etc. For me, this was the only practical way to make it happen in a relatively short time frame.

I was lucky enough to know a very talented director, Ambika Leigh, who was receptive and supportive of my idea. After we discussed my general idea of ‘piggybacking’ she kindly identified a potential shoot. After initial discussions with the producers, the green light for the project was given. The details of the process will come in future posts, but what I can say now (the project is an upcoming series that is not released yet – so be patient! It will be worth the wait) is that without the support and willingness of these three incredible women (Ambika Leigh, director; Tiffany Price & Lauren DeLong, producers/actors/writers) this project would not have moved forward with the speed, professionalism and just plain fun that it has. At a very high level, the series brings us into the clever and humorous world of the “Craft Ladies” – a couple of friends that, well, like to craft – and drink wine.

"Craft Ladies is the story of Karen and Jane, best friends forever, who love to
craft…they just aren’t any good at it. Over the years Karen and Jane’s lives
have taken slightly different paths but their love of crafting (and wine)
remains strong. Tune in in September to watch these ladies fulfill their
dream…a craft show to call their own. You won’t find Martha Stewart here,
this is crafting Craft Ladies style. Craft Up Nice Things!”

Please check out their links for further updates and details on the 'real thing':

www.facebook.com/CraftUpNiceThings
www.twitter.com/#!/2craftladies
www.CraftUpNiceThings.com

I am solely responsible for the iPhone portion of your program – so all errors, technical gaffs, editorial bloops and other stumbles are mine. As said, this is a proof of concept – not the next Spielberg epic… My intention is to follow – as closely as my expertise and the available iOS technology will allow – the editorial decisions, effects, titles, etc. that end up on the ‘real show’. To this end I will be necessarily lagging a bit in my production, as I have to review the assembled and edited footage first. However, I will make every effort to have my iPhone version of this series ready for distribution shortly after the real version launches. Currently this is planned for some time in September.

For the iPhone shoot, two iPhone4S devices were used. I need to thank my capable 2nd camerawoman – Tara Lacarna – for her endurance, professionalism and support over two very long days of shooting! In addition to her new career as an iPhonographer (ha!) she is a highly capable engineer, musician and creative spirit. While more detail will be provided later in this post, I would also like to thank Niki Mustain of Schneider Optics for her time (and the efforts of others at this company) in helping me get the best possible performance from the “iPro” supplementary lenses that I used on portions of the shoot.

Before getting down to the technical details of equipment and procedure, I’ll lay out the environment in which I shot the video. Of course, this can vary widely, and therefore the exact technique used, as well as some hardware, may have to change and adapt as required. In this case the entire shoot was indoors using two sets. Professional lighting was provided (3200°K) for the principal photography (which used various high-end DSLR cameras with cinema lenses). I had to work around the available camera positions for the two iPhone cameras, so my shots will not be the same as were used in principal photography. Most shots were locked off with both iPhones on tripods; there were some camera moves and a few handheld shots. The first set of episodes was filmed over two days (two very, very long days!!) and resulted in about 116GB of video material from the two iPhones. In addition to Ambika, Tiffany, Lauren and Tara there was a dedicated and professional crew of camera operators, gaffers, grips, etc. (with many functions often performed by just one person – this was after all about quality not quantity – not to mention the lack of a 7-figure Hollywood budget!). A full list of credits will be in a later post.

Aside from the technical challenges; the basic job of getting lines and emotion on camera; taking enough camera angles, close-ups, inserts and so on to ensure raw material for editorial continuity; and just plain endurance (San Fernando Valley, middle of summer, had to close all windows and turn off all fans and A/C for each shot due to noise, a pile of people on a small set, hot lights… you get the picture…) – the single most important ingredient was laughter. And there was lots of it!! At one time or another, we had to stop down for several minutes until one or the other of us stopped laughing so hard that we couldn’t hold a camera, say a line or direct the next sequence. That alone should prompt you to check this series out – these women are just plain hilarious.

Hardware:

As mentioned previously, two iPhone4S cameras were used. Each one was the 32GB model. Since shooting video generates large files, most user data was temporarily deleted off each phone (easy to restore later with a sync using iTunes). Approximately 20GB free space was made available on each phone. If one was going to use an iPhone for a significant amount of video photography the 64GB version would probably be useful. The down side is that (unless you are shooting very short events) you will still have to download several times a day to an external storage device or computer – and the more you have to download the longer that takes! As in any process, good advance planning is critical. In my case with this shoot, I needed to coordinate ‘dumping times’ with the rest of the shoot:  there was a tight schedule and the production would not wait for me to finish dumping data off the phones. The DSLR cameras use removable memory cards, so it only takes a few minutes to swap cards, then those cameras are ready to roll again. I’ll discuss the logistics of dumping files from the phones in more detail in the software section below. If one was going to attempt long takes with insufficient break time to fully dump the phone before needing to shoot again, the best solution would be to have two iPhones for each camera position, so that one phone could be transferring data while the other one was filming.

In order to provide more visual control, as well as interest, a set of external adapter lenses (the “iPro” system by Schneider Optics) was used on various shots. A total of three different lenses are available: telephoto, wide-angle and a fisheye. A detailed post on these lenses – and adaptor lenses in general – is here. For now, you can visit their site for further detail. These lenses attach to a custom shell that is affixed to the iPhone. The lenses are easily interchanged with a bayonet mounting system. Another vital feature of the iPro shell for the phone is the provision for tripod mounting – a must for serious cinemaphotography – especially with the telephoto lens which magnifies camera movement. Each phone was fitted with one of the iPro shells to facilitate tripod mounting. This also made each phone available for attaching one of the lenses as required for the shot.

iPro “Fisheye” lens

iPro “Wide Angle” lens

iPro “Telephoto” lens

Another hardware requirement is power:  shooting video kills batteries just about faster than any other activity on the iPhone. You are using most of the highest power consuming parts of the phone – all at the same time:  the camera sensor, the display, the processor, and high bandwidth memory writing. A fully charged iPhone won’t even last two hours shooting video, so one must run the phone on external power, or plan the shoot for frequent (and lengthy!) recharge sessions. Bring plenty of extra cables, spare chargers, extension cords, etc. – it’s very cheap insurance to keep the phones running. Damage to cables while on a shoot is almost a guaranteed experience – don’t let that ruin your session.

A particular challenge that I had was a lack of a ‘feed through’ docking connector on the Line6 “Mobile In” audio adapter (more on this below). This meant that while I was using this high quality audio input adapter I was forced to run on battery, since I could not plug in the Mobile In device and the power cable at the same time to the docking connector on the bottom of the phone. I’m not aware of a “Y” adapter for iPhone docking connectors, but that would have really helped. It took a lot of juggling to keep that phone charged enough to keep shooting. On several shots, I had to forgo the high quality audio as I had insufficient power remaining and had to plug in to the charger.

As can be seen, the lack of both removable storage and a removable battery are significant challenges for using the iPhone in cinemaphotography. This can be managed, but it’s a critical point that requires careful attention. Another point to keep in mind is heat. Continual use of the phone as a video camera definitely heats up the phone. While neither phone ever overheated to the point where it became an issue, one should be aware of this fact. If one was shooting outside, it may be helpful to (if possible) shade the phone(s) from direct sunlight as much as practical. However, do not put the iPhones in the ice bucket to keep them cool…

Gitzo tripod with fluid head attached

Close-up of fluid head

Tripods are a must for any real video work: camera judder and shake is very distracting to the viewer, and is impossible to remove with any current iPhone app. Even with serious desktop horsepower (there is a rather good toolset in Adobe After Effects for helping to remove camera shake) it takes a lot of time, skill and computing power – far better to avoid it in the first place whenever possible. Since 'locked off' shots are not as interesting, it's worth getting fluid heads for your tripods so you can pan and tilt smoothly. A good high quality tripod is also well worth the investment: flimsy ones will bend and shake. While the iPhone is very light – and this may tempt one to go with a very lightweight tripod – this will work against you if you want to make any camera tilts or pans. The very light weight of the phone actually causes problems here: it's hard to smoothly move a camera that has almost no mass, so a rigid and sturdy tripod helps. You will need considerable practice to get used to the feel of your particular fluid head, get the tension settings just right, etc., in order to effect the smoothest camera movements. Remember this is a very small sensor, and the best results will be obtained with slow and even pans and tilts.

For certain situations, miniature tripods or dollies can be very useful, but they don’t take the place of a normal tripod. I used a tiny tripod for one shot, and experimented with the Pico Dolly (sort of a miniature skateboard that holds a small camera) although did not actually use for a finished shot. This is where the small size and light weight of the iPhone can be a plus: you can hang it and place it in locations that would be difficult to impossible with a normal camera. Like anything else though, don’t get too creative and gimmicky:  the job of the camera is to record the story, not call attention to itself or technology. If a trick or a gadget can help you visually tell the story – then it’s useful. Otherwise stick with the basics.

Another useful trick I discovered that helped stabilize my hand-held shots:  my tripod (as many do) has a removable center post on which the fluid head is mounted (that in turn holds the camera). By removing the entire camera/fluid-head/center-post assembly I was able to hold the camera with far greater accuracy and stability. The added weight of the central post and fluid head, while not much – maybe 500 grams – certainly added stability to those shots.

Tripod showing center shaft extended before removal.

Center shaft removed for “hand-held” use

If you are planning on any camera moves while on the tripod (pans or tilts), it is imperative that the tripod be leveled first – and rechecked every time you move it or dismount the phone. Nothing worse than watching a camera pan move uphill as you traverse from left to right… A small circular spirit level is the perfect accessory. While I have seen very small circular levels actually attached to tripod heads, I find them too small for real accuracy. I prefer a small removable device that I can place on top of the phone itself (which then accounts for all the hardware up to and including the shell) that can affect alignment. The one I use is 25mm (1″) in diameter.

I touched on the external audio input adapter earlier while discussing power for the iPhones; I'll detail it now. For any serious video photography you must use external microphones: the one in the phone itself – although amazingly sensitive – has many drawbacks. It is single channel, where the iPhone hardware (and several of the better video camera apps) is capable of recording stereo; you can't focus the sensitivity of the microphone; and, most importantly, the mike is on the front of the phone at the bottom – pointing away from where your lens is aimed!

While it is possible to plug a microphone into the combination headphone/microphone connector on the top of the phone, there are a number of drawbacks. The first is it’s still a mono input – only 1 channel of sound. The next is the audio quality is not that great. This input was designed for telephone conversation headpiece use, so extended frequency response, low noise and reduced harmonic distortion were not part of the design parameters. Far better audio quality is available on the digital docking connector on the bottom of the phone. That said, there are very few devices actually on the market today (that I have been able to locate) that will function in the environment of video cinemaphotography, particularly if one is using the iPro shell and tripod mounting the iPhone. Many of the devices treat the iPhone as just an audio device (the phone actually snaps into several of the units, making it impossible to use as a camera); with others the mechanical design is not compatible with either the iPro case or tripod mounting. Others offer only a single channel input (these are mostly designed for guitar input so budding Hendrix types can strum into GarageBand). The only unit I was able to find that met all of my requirements (stereo line input, high audio quality, mechanically did not interfere with tripod or the iPro case) was a unit “Mobile In” manufactured by Line6. Even this device is primarily a guitar input unit, but it does have a line in stereo connector that works very well. In order to use the hardware, you must download and install their free app (and it’s on the fat side, about 55MB) which contains a huge amount of guitar effects. Totally useless for the line input – but it won’t work without it. So just install it and forget about it. You never need to open the MobilePOD app in order to use the line input connector. As discussed above in the section on power, the only major drawback is that once this device is plugged in you can’t run your phone off external power. Really need to find that “Y” adapter for the docking connector..

“Mobile In” audio input adapter attached.

Now you may ask, why do I need a line input connector when I'm using microphones? My aim here is to produce the highest quality content possible while still using the iPhone as the camera/recorder. For the reasons already discussed above, the use of external microphones is required. Typically a number of mikes will be placed, fed into a mixer, and then a line level feed (usually stereo) will be fed to the sound recorder. In all 'normal' video shoots (i.e. not using cellphones as cameras!), the sound is almost always recorded on a separate device and synchronized in some fashion to each of the cameras so the entire shoot is in sync. In this particular shoot, the two actors on the set were individually miked with lavalier microphones (there is a whole hysterical story on that subject, but it will have to wait until after that episode airs…) and a third, directional boom mike was used for ambient sound. The three mikes were fed into a small portable mixer/sound recorder. The stereo output (usually used for headphone monitoring – a line level output) was fed through a "Y" cable to both the monitoring headphones and the input of the Mobile In device. Essentially, I just 'piggybacked' on top of the existing audio feed for the shoot.

This didn’t violate my POC – as one would need this same equipment – or something like it – on any professional shoot. At a minimum, one could just use a small mixer, obviously if the iPhone was recording the sound an external recorder is not required. I won’t attempt to further discuss all the issues in recording high quality sound – that would take a full post (if not a book!) – but there is a massive amount of literature out there on the web if one looks. Good sound recording is an art – if possible avail yourself of someone who knows this skill to assist you on your shoot – it will be invaluable. I’ll just mention a few pointers to complete this part of the discussion:

  • Record the most dynamic range possible without distortion (big range between soft and loud sounds). This will markedly improve the presence of your audio tracks.
  • Keep all background noise to an absolute minimum. Turn off all cellphones! (Put the iPhones that are 'cameras' in airplane mode so they won't be disturbed by phone calls, texts or e-mails.) Turn off fans, air conditioners, refrigerators (if you are near a kitchen), etc. Take a few moments after calling 'quiet on the set' to sit still and really listen to your headphones to ensure you don't hear any noise.
  • As much as possible, keep the loudness levels consistent from take to take – it will help keep your editor (or yourself…) from taking out the long knives after way too many hours trying to normalize levels between takes…
  • If you use lavalier mikes (those tiny microphones that clip onto clothing – they are available in ‘wired’ or ‘wireless’ versions) you need to listen carefully during rehearsals and actual takes for clothing rustle. That can be very distracting – you may have to stop and reposition the mike so that the housing is not touching any clothing. These mikes come with little clips that actually mount on to the cable just below the actual microphone body – thereby insulating clothing movement (rustle) from being transmitted to the sensor through the body of the microphone. Take care in mounting and test with your actor as they move – and remind them that clasping their hands to their chest in excitement (and thumping the mike) will make your sound person deaf – and ruin the audio for that shot!

Actors' view of the camera setup for a shot (2 iPhones, 3 DSLRs)

Storage, and the process of dumping (transferring video files from the iPhones to external storage), involves hardware, software and procedure. The hardware I used is discussed here; the software and procedure are covered in the next section. Since the HD video files consume about 2.5GB for every 10 minutes of filming, even the largest capacity iPhone (64GB) will run out of space in short order. As mentioned earlier, I used the 32GB models on this shoot, with about 20GB free space on each phone. That meant that, at a maximum, I had a little over an hour's storage on each phone. During the two days of shooting, we shot just under 5 hours of actual footage – which amounted to a total of 116GB from the two iPhones. (Not every shot was shadowed by the iPhones: some of the close-ups and inserts could not be covered by the iPhones, as they would have been in the shot composed by the DSLR cameras.)
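
For those who like to plan capacity in advance, here's a quick back-of-the-envelope helper (Python, purely illustrative – it just codifies the ~2.5GB per 10 minutes figure above; the free-space values are examples, not gospel):

```python
# Back-of-the-envelope shooting capacity, using the ~2.5GB per 10 minutes
# figure above. Free-space values are examples -- use what your phones report.
GB_PER_10_MIN = 2.5

def shooting_minutes(free_gb):
    """Approximate minutes of HD footage that fit in `free_gb` of free space."""
    return free_gb / GB_PER_10_MIN * 10

for free_gb in (20, 28, 55):
    print(f"{free_gb}GB free -> ~{shooting_minutes(free_gb):.0f} minutes of footage")
```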

The challenge of this project was to use nothing other than the iPhone/iPad for every aspect of the production. The dumping of footage from the iPhones to external storage is one area where neither Apple nor any 3rd-party developer (that I have found) offers a purely iOS solution. With the lack of removable storage, there are only two ways to move files off the iPhone: Wi-Fi, or the USB cable attached to the docking connector. Wi-Fi is not a practical solution in this environment: the main reason is that it's too slow. You can find as many 'facts' on iPhone Wi-Fi speed as there are types of orchids in the Amazon, but my research (verified by personal tests) shows that, in a real-world and practical manner, 8Mb/s is a top-end average for upload (which is what you need to transmit files FROM the phone to an external storage device). That's only about 1MB/s – so a single 2.5GB movie file, which is just 10 minutes of shooting, would take the better part of an hour to move, and sustained real-world rates are usually far lower. Not to mention the issues of Wi-Fi interference, dropped connections, etc.
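
If you want to play with the arithmetic yourself, here's a tiny sketch (Python; the throughput numbers are rough, hedged averages – the ~4MB/s entry corresponds to the roughly 'real time' USB transfers described later, and your own measurements will differ):

```python
# Transfer-time arithmetic for a single 2.5GB clip (10 minutes of footage).
# Throughput figures are rough real-world averages -- measure your own.
CLIP_GB = 2.5

def transfer_minutes(clip_gb, mb_per_s):
    """Minutes needed to move `clip_gb` gigabytes at a sustained rate in MB/s."""
    return clip_gb * 1000 / mb_per_s / 60

print(f"Wi-Fi upload (~1 MB/s):          {transfer_minutes(CLIP_GB, 1):.0f} min per clip")
print(f"Optimized USB2.0 copy (~4 MB/s): {transfer_minutes(CLIP_GB, 4):.0f} min per clip")
```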

That brings us to cabled connections. Currently, the only way to move data off of (or onto, for that matter) an iPhone is to use a computer. While the Apple Time Capsule could in theory function as a direct-to-phone storage device, it only connects via Wi-Fi. However, the method I chose only uses the computer as a 'connection link' to an external hard drive, so in my view it does not break my premise of an "all iOS" project. When I get to the editing stage, I just reverse the process and pull files back from the external drive through the computer to the phone (in this case using iTunes).

I will discuss the precise technique and software used below, but suffice it to say here that I used a PC as the computer – mainly just because that is the laptop I have. It also proves, however, that there is no issue of "Mac vs PC" as far as the computer goes. I feel this is an important point, as in many countries outside the USA and Western Europe the price premium on Apple computers is such that they are very scarce. For this project, I wanted to make sure the required elements were as widely available as possible.

The choice of external storage is important for both speed and reliability. Since the USB connection from the phone to the computer is limited to v2.0 (480Mb/s theoretical), one might assume that just any USB2.0 external drive would be sufficient. That's not actually the case, as we shall see…  While the 480Mb/s link speed of USB2.0 works out to 60MB/s on paper, that is never matched in reality. USB chipsets in the internal hub of the computer, processing power in the phone and the computer, other processes running on the computer during transfer, bus and CPU speed, and the actual disk controller and disk speed of the external storage – all these factors serve to significantly reduce transfer speed.

Probably the most important is the actual speed of the external disk. Most common portable USB2.0 disks (the small 2.5″ format) run at 5400RPM and have commensurate disk controller chipsets, with actual performance in the 5-10MB/s range. This is too slow for our purposes. The best solution is to use an external RAID array of two 'striped' disks [RAID 0] using high performance 7200RPM SATA disks with an appropriately designed disk controller. A device such as the G-RAID Mini is a good example. If you are using a PC, you get the best performance with an eSATA connection to the drive (my laptop has a built-in eSATA connector, but PC Card adapters are available that easily add this connectivity for computers that don't have it built in). This offers the highest performance: real-world tests show average write speeds of 115MB/s using this device. If you are using an Apple computer, opt for the FW800 connection (I'm not aware of eSATA on any Mac computer). While this limits performance to around 70MB/s maximum, it's still much faster than the USB2.0 interface from the phone, so it's not an issue. I have found that having a significant amount of speed 'headroom' on the external drive is desirable – you just don't need the drive slowing things down.
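
Don't take the label on the drive at face value – measure it. Here's a minimal sketch (Python; the test path is hypothetical, point it at your own external volume) that writes a large test file, forces it to disk, and reports sustained MB/s:

```python
import os, time

def sustained_write_mb_s(path, total_mb=1024, chunk_mb=8):
    """Write `total_mb` of data to `path`, force it to disk, and return MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())      # make sure the data actually hits the platters
    elapsed = time.time() - start
    os.remove(path)               # clean up the test file
    return total_mb / elapsed

# Point this at your external volume (the path below is purely illustrative):
# print(f"{sustained_write_mb_s('E:/speedtest.bin'):.0f} MB/s sustained")
```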

There are other viable alternatives for external drives, particularly if one needs a drive that does not require an external power supply (which the G-RAID does, due to its performance). Keep in mind that while it's possible to run a laptop and external drive entirely off battery power, you really won't want to do this: for one, unless you are on a remote outdoor location shoot you will have AC power – and disk writing at continuous high throughput is a battery killer! That said, a good alternative (for PC) is one of the Seagate GoFlex USB3.0 drives. I use a 1.5TB model that houses a high-performance 7200RPM drive and supports up to 50MB/s write speeds. For the Mac, Seagate has a Thunderbolt model. Although the Thunderbolt interface is twice as fast as USB3.0 (10Gb/s vs 5Gb/s), that makes no difference in transfer speed here – these single-drive storage devices can't approach the maximum speed of either interface. However, there is a very good reason to go with USB3.0/eSATA/Thunderbolt instead of USB2.0: overall performance. With the newer high-speed interfaces, the full system (hard disk controller, interface chipset, etc.) is designed for high-speed data transfer, and I have proved to myself that it DOES make a difference. It's very hard to find a USB2.0 system that matches the performance of a USB3.0 (or similar) system – even with a 2.5″ single-drive subsystem.

The last thing to cover here under storage is backup. Your video footage is irreplaceable. Procedure will be covered below, but on the hardware side, provide a second external drive on the set. It's simply imperative that you back up the footage onto a second physical drive as soon as practical – NOT at the end of the day! If you have a powerful enough computer, with the correct connectivity, you can actually copy the iPhone files to two drives simultaneously (the best solution); otherwise, plan on copying the files from one external drive to the backup while the next scenes are being shot (as a background task).
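
If your machine and connections can keep up, the 'copy to two drives simultaneously' approach can be as simple as running two copies in parallel – a rough sketch (Python, with hypothetical drive paths; this is just the idea, not production code):

```python
import pathlib, shutil, threading

def copy_clips(src_dir, dest_dir):
    """Copy every .mov clip in src_dir into dest_dir (created if needed)."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for clip in sorted(pathlib.Path(src_dir).glob("*.mov")):
        shutil.copy2(clip, dest / clip.name)   # copy2 preserves timestamps

# Hypothetical mount points -- substitute your own dump folder and drives.
src = "D:/iphone_dump/day1"
drives = ["E:/footage/day1", "F:/footage_backup/day1"]

threads = [threading.Thread(target=copy_clips, args=(src, d)) for d in drives]
for t in threads: t.start()
for t in threads: t.join()
```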

I'll close with a final suggestion: while this description of hardware and process is not meant in any way to be a tutorial on cinematography, audio, etc. – here is a small list (again, this is under 'hardware' as it concerns 'stuff') of useful items that will make your life easier "on the set":

  • Proper transport cases, bags, etc. to store and carry all these bits. Organization, labeling, color-coding, etc. all helps a lot when on a set with lots of activity and other equipment.
  • Spare cables for everything! Murphy will see to it that the one item for which you have no duplicate will get bent during the shoot…
  • Plenty of power strips and extension cords.
  • Gorilla tape or camera tape (this is NOT ‘duct tape’). Find a gaffer and he/she will explain it to you…
  • Small folding table or platform (for your PC/Mac and drives) – putting high value equipment on the floor is asking for BigFoot to visit…
  • Small folding stool (appropriate for the table above), or an ‘apple box’ – crouching in front of computer while manipulating high value content files is distracting, not to mention tiring.
  • If you are shooting outside, more issues come into play. Dust is the big one. Cans of compressed air, lens tissue, camel-hair brushes, zip-lock baggies, etc. etc. – none of the items discussed in this entire post appreciate dust or dirt…
    • Cooling. Mentioned earlier, but you’ll need to keep the phone and computer as cool as practical (unless of course you are shooting in Scotland in February in which case the opposite will be true: trying to figure out how to keep things warm and dry in the middle of a wet and freezing moor will become paramount).
    • Special mention for ocean-front shoots:  corrosion is a deadly enemy of iPhones and other such equipment. Wipe down ALL equipment (with appropriate cloths and solutions) every night after the shoot. Even the salt air makes deposits on every exposed metal surface – and later on a very hard to remove scale will become apparent.
  • A final note for sunny outdoor shoots: seeing the iPhone screen is almost impossible in bright sunlight, and unlike DSLRs the iPhone does not have an optical viewfinder. Some sort of ‘sunshade’ will be required. While researching this online, I came across this little video that shows one possible solution. Obviously this would have to be modified to accommodate the audio adapter, iPro lenses, etc. shown in my project, but it will hopefully give you some ideas. (Thanks to triplelucky for this video).

Software:

As amazing as the hardware capabilities of the above system are (iPhone, supplemental lenses, audio adapters, etc.), none of this would be possible without the sophisticated software that is now available for this platform at such low cost. The list of software that I am currently using to produce this video is purely my own choosing – there may be other equally viable solutions for each step or process. What I feel is important is the possibility of the process, not the precise piece of kit used to accomplish the task. Obviously, as I am using the iOS platform, all the apps are "Apple iPhone/iPad compliant". The reader who chooses an alternate platform will need to do a bit of research to find similar functionality.

As a parallel project, I am currently describing my experiences with the iPhone camera in general, as well as many of the software packages (apps) that support the iPhone still and video camera. Those posts are elsewhere on this blog. For that reason, I will not describe the apps in any detail here. If software that is discussed or listed here is not yet covered in my stable of posts, please be patient – I promise that each app used in this project will be discussed on this blog at some point. I will refer the reader to this post, where an initial list of apps to be discussed is located.

Here is a short list of the apps I am currently using. I may add to this list before I complete this project! If so, I will update this and other posts appropriately.

Storyboard Composer Excellent app for building storyboards from shot or library photos, adding actors, camera motion, script, etc. Powerful.

Movie*Slate A very good slate app.

Splice Unbelievable – a full video editor for the iPhone/iPad. Yes, you can: drop movies and stills on a timeline, add multiple sound tracks and mix them, work in full HD, apply loads of video and audio efx, add transitions, burn in titles, resize, crop, etc. Now that doesn't mean I would choose to edit my next feature on a phone…

Avid Studio  The renowned capability of Avid now stuffed into the iPad. Video, audio, transitions, etc. etc. Similar in capability to Splice (above) – I’ll have a lot more to say after these two apps get a serious test drive while editing all the footage I have shot.

iTC Calc The ultimate time code app for iDevices. I use on both iPad and iPhone.

FilmiC Pro Serious movie camera app for iPhone. Select shooting mode, resolution, 26 frame rates, in-camera slating, colorbars, multiple bitrates for each resolution, etc. etc.

Camera+ I use this as much for editing stills as for shooting; its biggest advantage over the native iPhone camera app is that you can set different parts of the frame for exposure and focus.

almost DSLR is the closest thing to fully manual control of iPhone camera you can get. Takes some training, but is very powerful once you get the hang of it.

PhotoForge2 Powerful editing app. Basically Photoshop on the iPhone.

TrueDoF This one calculates true depth-of-field for a given lens, sensor size, etc. I use this to plan my range of focus once I know my shooting distance.

OptimumCS-Pro This is sort of the inverse of the above app – here you enter the depth of field you want, and OCSP tells you the shooting distance and aperture you need for that.

Juxtaposer This app lets you layer two different photos onto each other, with very controllable blending.

Phonto One of the best apps for adding titles and text to shots.

Some of the above apps are designed for still photography only, but since stills can be laid down in the video timeline, they will likely come into use during transitions, effects, title sequences, etc.

I used FilmiC Pro as the only video camera app for this project. This was based firstly on personal preference and on the capabilities it provided (the ability to lock focus, exposure and white balance was, in my opinion, critical to maintaining continuity across takes). Once I had selected a video camera app with which I was comfortable, I felt it important to use it on both iPhones – again for continuity of the content. There may be other equally capable apps for this purpose. My focus was on producing as high a quality product as possible within the means and capabilities at my disposal. The particular tools are less important than the totality of the process.

The process of dumping footage off the iPhone (transferring video files to external storage) requires some additional discussion. The required hardware has been covered above; now let's dive into process and the required software. The biggest challenge is logistics: finding enough time between takes to transfer footage. If the iPhones are the only cameras used, then in one way this is easier – you have control over the timeline in that regard. In my case it was even more challenging, as I was 'piggybacking' on an existing shoot, so I had to fit in with the timeline and process in place. Since professional video cameras all use removable storage, they only need a few minutes to be ready to shoot again after the on-camera storage is full. But even if iPhones are the only cameras, taking long 'time-outs' to dump footage will hinder your production.

There are several ways to maximize the transfer speed of files off the iPhone, but the best way is to make use of time management: try to schedule dumping for normal 'down time' on the set (breaks, scene changes, wardrobe changes, meal breaks, etc.). In order to do this you need to have your 'transfer station' [computer and external drive] ready and powered up so you can take advantage of even a short break to clear files from the phone. I typically transferred only one to three files at a time, so that if we started up sooner than expected I was not stuck in the middle of a long transfer. The other advantage in my situation was that the iPhone charges while connected via USB cable, so I was able to accomplish two things at once: replenishing battery capacity (remember, with the Mobile In audio adapter attached I could not shoot on line power) and dumping the files to external storage.

My 2nd camerawoman, Tara, brought her MacBook Air for file transfer to an external USB drive; I used a Dell PC laptop (discussed above in the hardware section). In both cases, I found that using the native OS file management (Image Capture [part of the OS] for the Mac, Windows Explorer for the PC) was hideously slow. It does work: after plugging the iPhone into the USB connector on the computer, the iPhone shows up as just another external disk, and you can navigate down through a few folders and find your video files. On my PC (which BTW is a very fast machine – basically a 4-core mobile workstation that can routinely transfer files to/from external drives at over 150MB/s), the best transfer speed I could obtain with Windows Explorer amounted to needing almost an hour to transfer 10 minutes of video off the iPhone – a complete non-starter in this case. After some research, I located software from Wide Angle Software called TouchCopy that solved my problem. They make versions for both Mac and PC, and it allowed transfer off the iPhone to external storage about 6x faster than Windows Explorer. My average transfer times were approximately 'real time' – i.e. 10 minutes of footage took about 10 minutes to transfer. There may be other similar applications out there – as mentioned earlier, I am not in the software reviewing business; once I find something that works for me I will use that, until I find something "better/faster/cheaper."

To summarize the challenging file transfer issue:

  • Use the fastest hardware connections and drives that you can.
  • Use time management skills and basic logistics to optimize your ‘windows’ for file transfer.
  • Use supplemental software to maximize your transfer speed from phone to external storage.
  • Transfer in small chunks so you don’t hold up production.

The last bit that requires a mention is file backup. Your original footage is impossible to replace, so you need to take exquisite care with it. The first thing to do is back it up to a second external physical drive immediately after the file transfer. Typically I started this task as soon as I was done dumping files off the iPhone – it could run unsupervised during the next takes. However, one thing to consider before doing that (and this may depend on how much time you have during breaks) is the relabeling of the video files. The footage is stored on your iPhone as a generically labeled .mov file, usually something like IMG_2334.mov – not a terribly insightful description of your scene/take. I never change the original label, only add to it. There is a reason: it helps to keep all the files in sequential order when starting the scene selection and editorial process later. This can be very helpful when things go a bit skew – as they always do during a shoot. For instance, if the slate is missing on a clip (you DO slate every take, correct??), having the original 'shot order' can really help place the orphan take into its correct sequence. In my case this happened several times due to slate placement: since my iPhone cameras were in different locations, sometimes the slate was pointed where it was in frame for the DSLR cameras but not visible to the iPhones.

I developed a short-hand description, taken from the slate at the head of each shot, that I appended to the original file name. This only takes a few seconds (launch QuickTime or VLC, shuttle in to the slate, pause and read the slate info), but the sooner you do it, the better. If you have time to rename the shots before the backup, then you don't have to rename twice – or face the possibility of human error during that task. Here is a sample of one of my files after renaming: IMG_2334_Roll-A1_EP1-1_T-3.mov – short for Roll A1, Episode 1, Scene 1, Take 3.
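
If you'd rather not retype names by hand (and risk typos at 11pm on set), a tiny helper along these lines can append the slate info while preserving the original name. This is just a hypothetical sketch of my naming convention, not a tool I actually used on the shoot:

```python
import pathlib

def append_slate_info(clip_path, roll, episode, scene, take):
    """Append slate info to the original name, e.g.
    IMG_2334.mov -> IMG_2334_Roll-A1_EP1-1_T-3.mov (original name preserved)."""
    p = pathlib.Path(clip_path)
    new_name = f"{p.stem}_Roll-{roll}_EP{episode}-{scene}_T-{take}{p.suffix}"
    return p.rename(p.with_name(new_name))

# Example (hypothetical path) -- run after reading the slate in QuickTime/VLC:
# append_slate_info("E:/footage/day1/IMG_2334.mov", "A1", 1, 1, 3)
```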

However you go about this, just ensure that you back up the original files quickly. The last step of course is to delete the original video files off the iPhone so you have room for more footage. To double-check this process (you NEVER want to realize you just deleted footage that was not successfully transferred!!!) I do three things:

  1. Play into the file with headphones on to ensure that I have video and audio at head, middle and end of each clip. That only takes a few seconds, but just do it.
  2. Using Finder or Explorer, get the file size directly off the still-connected iPhone and compare it to the copied file on your external drive (look at actual file size, not 'size on disk', as your external disk may have different sector sizes than the iPhone). If they are different, re-transfer the file. This check is easy to automate – see the sketch below.
  3. Using the ‘scrub bar’, quickly traverse the entire file using your player of choice (Quicktime, VLC, etc.) and make sure you have picture from end to end in the clip.

Then and only then, double-check exactly what you are about to delete, offer a small prayer to your production spirit of choice, and delete the file(s).
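
Check #2 above (and a stronger checksum variant of it) is simple to script. A minimal sketch, assuming the iPhone is still mounted as a disk and using hypothetical paths – nothing gets deleted until this returns True:

```python
import hashlib, os

def md5(path, block=8 * 1024 * 1024):
    """MD5 of a file, read in large blocks so multi-GB clips don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block), b""):
            h.update(chunk)
    return h.hexdigest()

def safe_to_delete(original, copy):
    """True only if byte count AND checksum match between source and copy."""
    if os.path.getsize(original) != os.path.getsize(copy):
        return False
    return md5(original) == md5(copy)

# Hypothetical paths: the clip on the still-mounted iPhone vs. the external copy
# print(safe_to_delete("G:/DCIM/100APPLE/IMG_2334.mov", "E:/footage/day1/IMG_2334.mov"))
```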

Summary:

This is only the beginning! I will write more as this project moves ahead, but wanted to introduce the concept to my audience. A deep thanks to all of you who have read my past posts on various subjects, and please return for more of this journey. Your comments and appreciation provide the fuel for this blog.

Support and Contact Details:

Please visit and support the talented women who have enabled me to produce this experiment. This would not have been possible otherwise.

Tiffany Price, Writer, Producer, Actress
Lauren DeLong, Writer, Producer, Actress
Ambika Leigh, Director, Producer

UltraViolet – A Report on the “Academy of UltraViolet” Seminar

May 18, 2012 · by parasam

I attended the full day seminar on the new UltraViolet technology earlier this week. UltraViolet is the recently launched “Digital Entertainment Cloud” service that allows a user to essentially ‘pay once, watch many’ across a wide range of devices, with the content being sourced from a protected cloud environment – physical media is no longer required.

While this report on the seminar is primarily intended for those on my team and in my firm that could not make the date, I will include a brief introduction to level-set my audience.

The UltraViolet Premise

The purpose is to offer a Digital Collection of consumer content (you can think of this as a "DVD for the Internet"), allowing the user to enjoy a universal viewing experience not limited by where the content was bought, the format of the content (or even whether it is physical or virtual), the type of device (as long as it supports a UV player), or where the user is located [fine print: UV does allow local laws to be enforced via a geographies module, so not all features or content may be available in all territories].

I strongly recommend a visit to the UV FAQ site here – which is kept current on roughly a monthly basis. Even knowledgeable members of this audience will find useful bits there: history, features, technical details, what-ifs to cover business cases (will my UV file still work even if the vendor that sold it to me goes out of business? The answer is yes, BTW), and many other useful bits.

For those who want a more detailed set of technical information, including the publicly available CFF (Common File Format) download specification, UV ecosystem and Role information, licensing info, etc., please visit the UV Business site here.

The UV Academy Seminar

Firstly, a thanks to both the organizers and presenters:  this seminar did not have a lot of lead time, and took place in a nice venue with breakfast and lunch provided – which helped the audience (mostly industry professionals) to digest a rather enormous helping of information in a short time. Many of the presentations were aided with excellent graphics or video which greatly enhanced the understanding of a complex subject. The full list of presenters and sponsors is here.

We began with an update on current status, noting that the UV rollout is basically still in its early stages – currently "Phase 1", where UV content is only available online in streaming formats. Essentially, legacy streaming providers that have signed up to be part of the UV ecosystem (known as LASPs – Locker Access Streaming Providers) [come on, you must expect any new geeky infrastructure to have at least 376 new acronyms… 🙂 ] will be able to stream content that the user has added to his/her "UV Cloud". The availability, quality, etc. of the stream will be the same as you currently get from that same provider.

Phase 2, the ability to download and store locally a UV file (in CFF – Common File Format), will roll out later this summer. One of the challenges in marketing is to communicate to users that UV is a phased rollout – what you get today will be greatly enhanced in the future.

A panel discussion followed this intro, the topic being “Preparing and Planning an UltraViolet Title Launch”. This was a good look ‘under the hood’ into just how many steps are required to effect a commercial release of a film or TV episode onto the UV platform. Although there are a LOT of moving parts (and often the legal and licensing issues are greater than the technical bits!) the system has been designed to simplify as much as possible. QA and Testing is a large part of the process, and it was stressed that prior planning well in advance of release was critical to success. (Hmmm, never heard that before…)

We then heard a short dissertation on DRM (Digital Rights Management) as it exists within the UV ecosystem. This is a potentially brain-numbing topic, expertly and lightly presented by Jim Taylor, CTO of the DECE (the consortium that brought you UV). I am personally very familiar with this type of technology, and it's always a pleasure to see a complex subject rendered into byte-sized chunks that don't overwhelm our brains. [Although having this early in the day, when we still had significant amounts of coffee in the bloodstream, probably helped…]  The real issue here is that UV must accommodate multiple DRM schemes in order to interoperate on a truly wide array of consumer devices, from phones all the way up to web-connected Blu-ray players feeding 65″ flatscreens. Jim gave us an understanding of DRM domains, and how authentication tokens are used to allow the single license a user has for a movie to populate multiple DRM schemas and thereby give the user access as required. Currently 2 of the 5 anticipated DRM schemes are enabled, with testing ongoing for the others. [The current crop of 5 DRM technologies includes Widevine, Marlin, OMA, PlayReady and Flash Access.]

Jason Kramer of Vital Findings (a consumer research company) gave us great insight into the 'mind of a real UV consumer' with some humorous and interesting videos. We learned never to underestimate the storage capacity of a pink backpack (approximately 500GB – as in 100 DVDs); that young children like to skate on DVDs on the living room carpet (a good reason for UV: when they wear out the DVDs, mom can still download the content without buying it again) – now come on, be honest, find me a software use-case QA person who would have thought THAT one up… and on and on. It showed us that it's really important to do this kind of research. You have NO IDEA how many ways your users will find to play with your new toy…

A panel discussion then helped us all understand the real power of metadata in the overall UV ecosystem. We are all getting a better understanding of how metadata interoperates with our content, our habits, our advertising, etc. – but seldom has a single environment been designed from the ground up to make such end-to-end use of extensive metadata of all types. Metadata in the UV universe facilitates three interdependent functions: helping the user find and enjoy content through recommendation and search; managing the distribution of the content and reporting back to DMRs (Digital Media Retailers) and Content Providers; and the all-important Big Data information vacuum cleaner – here's an opportunity for actual customer libraries of content choices to be mined. To be precise, there is a huge number of business rules in place about what kind of 'little data' is shared by whom, to whom, and for what purpose – and this is still a very fluid area… but in general, the UV ecosystem offers the potential of a win-win Big Data scenario. The user – based on the actual content in their library – can help drive recommendations with a precision lacking in other data models, while content providers and others in the supply chain can learn about the user to the extent that is either appropriate or 'opted in'. One area that will need refinement is something that plagues other content providers offering recommendations (Amazon, Netflix, etc.): different family members sharing an account (a feature of UltraViolet) confuse recommendation engines badly… One can easily imagine the difficulty of sorting 'dad' vs 'mom' vs '6yr old kid' when all the movies in a single account holder's library are commingled… This is an area ripe for refinement.

The next panel delved into current perceptions of "cloud content provisioning" in general, as well as UltraViolet in particular. Findings from PwC's Consumer Sentiment Workshops were discussed by representatives from participating studios (Fox/Warner/Universal). As might be expected, consumers have equated cloud storage of content with the two words that strike terror into the hearts of any studio executive: Free & Forever… So, just as in Washington, where any savvy politician will tell you that there is 'no free lunch – only alternatively funded lunch', the UV supporters have to educate and subtly re-phrase consumers' expectations. So 'Free & Forever' needs to be recast as 'No Additional Cost & 35 Dog Years'… There are actually numerous issues here: streaming is a bit different from download; no one has really tested such a widespread multi-format cloud provisioning system with an extended design lifetime; etc. Not to mention that many of the byzantine contracts already in place for content distribution never imagined a platform such as UltraViolet, so it will take time to sort out all the ramifications of what looks simple on the surface.

The User Experience (UX, for those cognoscenti who love acronyms) received a detailed discussion from a wide-ranging panel headed by Chuck Parker, one of our new masters of the Second Screen. This is a difficult and complex topic – even more so than the UI (User Interface). The UI is the actual 'face' of the application: the buttons, graphics, etc. connected via a set of instructions to the underlying application; the UX is the emotion that the user feels and walks away with WHILE and AFTER the experience of using the UI. It's harder to measure, and harder yet to refine. It's a discipline that involves a creative, artistic and sometimes almost mystic mix of hardware, software and ergonomic design. Color, shape, context, texture all play a part. And UV has, in one sense, an even harder task in creating a unified and 'branded' experience: at least companies like Apple (whose UX has attracted a cult following that most religions wish they had) have control over both hardware and software. UltraViolet, by the very nature of 'riding on top' of existing hardware (and even software), has only the thinnest of UIs it can call its own. Out of this, UV still needs to craft a ubiquitous UX that will 'brand' itself and instill a level of consumer confidence (OK, I know where I am – this is the cool player that lets me get to all my movies no matter where/what/how/when) in the environment. Not a trivial task…

The day finished with a panel on the current marketing efforts of UltraViolet. Most of the studios were represented on the panel, with many clearly articulated plans brought forth. The large challenge of simultaneously bringing in large numbers of new users, yet communicating that UV is still very much a work in progress – and will be for several years yet – was laid out. The good news is that each marketing executive was enthusiastic about two things. First, collaborating to ensure a unified message no matter which studio or content provider is marketing on behalf of UV (this is a bigger deal than many think: silos were invented by movie studios, didn't you know? – and it's never easy for multi-billion dollar companies to collaborate in this highly regulated era – but in this case, since the marketing of UV can in no way be construed as 'price-collaborative', it's a greener field). Second, all the participants agreed that a continued effort to bring as much content into the UV system as soon as practical is in everyone's best interest. The current method of signing up users (typically by first purchasing physical media, such as a Blu-ray – which in turn gives a coupon that is redeemed for UV access to that same title) may well flip: in a few years, or even less, users may purchase online and then receive a coupon to redeem at a local store for a 'hard copy' of the same movie on disk, should they desire that.

In summary, a lot of information was delivered in a relatively short time, and our general attention was held well. UV has a lot of promise. It certainly has its challenges, most notably the lack of Disney and Apple at the table so far, but both those companies have had substantial changes internally since the original decision was taken to not join the UV consortium. Time will tell. The current technology appears to be supportive of the endeavor, the upcoming CFF download format will notably enhance the offering, and the number of titles (a weakness in the beginning) is growing weekly.

Watch this space:  I write frequently on changes and new technologies in the entertainment sector, and will undoubtedly have more to say on UltraViolet in the future.

NAB2012 – Comments on the National Association of Broadcasters convention

April 26, 2012 · by parasam

Here’s a report on the NAB show I attended last week in Las Vegas. For those of you who see my reports each year, welcome to this new format – it’s the first time I’m posting the report on my blog site. For the uninitiated to this convention, please read the following introductory paragraph. I previously distributed a pdf version of my report using e-mail. The blog format is more efficient, allows for easier distribution and reading on a wider variety of devices, including tablets and smartphones – and allows comments from my readers. I hope you enjoy this new format – and of course please look around while here on the site – there are a number of other articles that you might find interesting.

Intro to NAB

The National Association of Broadcasters is a trade group that advocates on behalf of the nation’s radio and television broadcasters. According to their mission statement: NAB advances the interests of their members in federal government, industry and public affairs; improves the quality and profitability of broadcasting; and encourages content and technology innovation. Each April a large convention is held (for decades now it’s been held exclusively in Las Vegas) where scientific papers are presented on new technology, as well as most of the vendors that serve this sector demonstrate their products and services on the convention floor.

It's changed considerably over the years – originally the focus was almost exclusively radio and TV broadcast; now it has expanded to cover many aspects of production and post-production, with big software companies having a presence in addition to manufacturers of TV transmission towers… While Sony (professional video cameras) and Grass Valley (high-end professional broadcast electronics) still have large booths, now we also have Adobe, Microsoft and Autodesk. This show is a bit like CES (the Consumer Electronics Show) – but for professionals. This is where the latest, coolest, most up-to-date technology for this industry is found each year.

Comments and Reviews

This year I visited over 80 vendors on the convention floor, and had detailed discussions with probably 25 of those. It was a very busy 4 days… The list was specifically targeted towards my interests, and the needs of my current employer (Technicolor) – this is not a general review. All information presented here is current as of April 24, 2012 – but as always is subject to change without notice – many of the items discussed are not yet released and may undergo change.

Here is the abbreviated list of vendors that I visited:   (detailed comments on a selection of this list follows)

Audio-Technica U.S., Inc.; Panasonic System Communications Company; GoPro; Canon USA Inc.; Leader Instruments Corp.; ARRI Inc.; Glue Tools; Dashwood Cinema Solutions; Doremi Labs, Inc.; DK-Technologies; Leica Summilux-C Lenses; Forecast Consoles, Inc.; Sony Electronics Inc.; FIMS/AMWA/EBU; Advanced Media Workflow Association (AMWA); Tektronix Inc.; Evertz; Snell; DNF CONTROLS; Miranda Technologies Inc.; Ensemble Designs; Venera Technologies; VidCheck; EEG Enterprises, Inc.; Dolby Laboratories; Lynx Technik AG; Digimetrics – DCA; Wohler Technologies, Inc.; Front Porch Digital; Blackmagic Design; Telestream, Inc.; Ultimatte Corporation; Adobe Systems; Adrienne Electronics Corp.; PixelTools Corp.; Red Digital Cinema; Signiant; Da-Lite Screen Company LLC; Digital Rapids; DVS Digital Video Systems; CEITON technologies Inc.; Planar Systems, Inc.; Interra Systems; Huawei; Eizo Nanao Technologies Inc.; G-Technology by Hitachi; Ultrium LTO; NetApp; Photo Research Inc.; Cinegy GmbH; Epson America, Inc.; Createasphere; Harmonic Inc.; ATEME; Sorenson Media; Manzanita Systems; SoftNI Corporation; Envivio Inc.; Verimatrix; Rovi; MOG Technologies; AmberFin; Verizon Digital Media Services; Wowza Media Systems; Elemental Technologies; Elecard; Discretix, Inc.; 3D @ Home Consortium; 3ality Technica; USC Digital Repository.

In addition to the technical commentary, I have added a few pictures to help tell the story – hope you find all this informative and enjoyable!

On final approach to Las Vegas airport (from my hotel room)

It all started with my flight into Las Vegas from Burbank – very early Monday morning: looking forward to four days of technology, networking, meetings – and the inevitable leg-numbing walking of almost 3.2 million sq ft of convention halls, meeting rooms, etc.

Las Vegas: convention center in the middle

Zoomed in view of LVCC (Las Vegas Convention Center) - not a bad shot for 200mm lens from almost 1000 meters...

The bulk of all the exhibits were located in the four main halls, with a number of smaller venues (typically for pre-release announcements) located in meeting rooms and local hotel suites.

Vegas from hotel window (with unintended efx shot in the sky - had iPhone up against the window, caught reflection!)

The Wynn provided an easily accessible hotel location – and a good venue for these photographs!

Wynn golf course - just as many deals here as on convention floor...

Ok, now onto the show…   {please note:  to save me inserting hundreds of special characters – which is a pain in a web-based blog editor – please recognize that all trade names, company names, etc. etc. are © or ® or TM as appropriate…}

Also, as a disclaimer: I am currently employed by Technicolor, and we use a number of the products and services listed here, but I have in every case attempted to be as objective as possible in my comments and evaluations. I have no commercial or other relationship with any of the vendors (other than knowing many of them well – I have been coming to NAB and pestering them for a rather long time…), so all comments are mine and mine alone. Good, bad or indifferent – blame me if I get something wrong or you disagree. That's the beauty of this format (a blog with comments at the end) – my readers can talk back!

Adobe Systems    link – the tactical news was the introduction of Creative Suite 6, with a number of new features:

  • Photoshop CS6
    • Mercury real-time graphics engine
    • 3D performance enhancements
    • 3D controls for artwork
    • enhanced integration of vector layers with raster-based Photoshop
    • better blur effects
    • new crop tool
    • better 3D shadows and reflections
    • more video tools
    • multi-tasking: background saving while working, even on TB-sized files
    • workspace migration
    • better video compositing
  • Illustrator CS6
    • full, native 64bit support
    • new image trace
    • stroke gradients
    • inline editing in panels
    • faster Gaussian Blurs
    • improvements in color, transform, type, workspaces, etc.
  • InDesign
    • create PDF forms inside ID
    • better localization for non-Arabic documents (particularly for Farsi, Hindi, Punjabi, Hebrew, etc.)
    • better font management
    • grayscale preview with real pre-press WYSIWYG
  • Premiere
    • workflow improvements
    • warp stabilizer to help hand-held camera shots
    • better color corrector
    • native DSLR camera support to enhance new single-chip cameras used for video
    • and many more, including rolling shutter correction, etc.
  • not to mention Lightroom 4, Muse, Acrobat, Flash Pro, Dreamweaver, Edge, Fireworks, After Effects, Audition, SpeedGrade, Prelude, Encore, Bridge, Media Encoder, Proto, Ideas, Debut, Collage, Kuler and more…

In spite of that, the real news is not the enhancements to the full range of Adobe apps, but a tectonic shift in how the app suite will be priced and released… this is a bellwether shift in the industry for application pricing and licensing: the move to a true SaaS, subscription-based model for consumer applications.

For 20+ years, Adobe, along with Microsoft and others, has followed the time-honored practice of relatively high-priced applications, very rigorous activation, restrictive licensing models, etc. For instance, a full seat of the Adobe Master Collection is over $2,500 retail, and upgrades run at least $500 per year. With significant updates now occurring every year, this works out to an amortized cost of over $1,500 per year, per seat. The new subscription model, at $50/mo, means a drop to about $600 per year for the same functionality!
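
(The amortized figure obviously depends on how often you assume a full repurchase; here's a quick illustrative calculation, where the two-year amortization period is my assumption, not Adobe's:)

```python
# Illustrative only: prices are the rough figures above; the amortization
# period is my assumption (a full repurchase roughly every two years).
retail_seat    = 2500          # Master Collection, approx. retail
yearly_upgrade = 500           # approx. upgrade cost per year
amortize_years = 2

perpetual_per_year    = retail_seat / amortize_years + yearly_upgrade   # ~$1,750
subscription_per_year = 50 * 12                                         # $600

print(f"Perpetual:    ~${perpetual_per_year:,.0f}/yr")
print(f"Subscription: ~${subscription_per_year:,.0f}/yr")
```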

This is all an acknowledgement by the software industry at large that the model has changed: we are now buying apps at $1-$5 each for iPhone, iPad, etc. – getting good value and excellent productivity – and wondering why we are paying 100x and up for the same functionality (or less!) on a PC or Mac… the 'golden goose' needs an upgrade… she will still produce eggs, but not as many, and not as fat…

Adrienne Electronics  link

The AEC-uBOX-2 is a neat little hardware device that takes in LTC (Linear Time Code) from a camera and turns it into machine-control TC (RS-422). This allows pro-sumer and other 'non-broadcast' cameras and streaming content sources to feed a video capture card as if the NLE (Non-Linear Editor – such as Final Cut) were seeing a high-end tape deck. This allows capture of the time code associated with the content, easing many editorial functions.

AMWA (Advanced Media Workflow Association)  link

This group (previously known as AAF – Advanced Authoring Format) is one of the industry's important bits of glue that helps make things 'interoperable' – a word that is more often than not uttered as a curse in our industry, as interoperability is as elusive as the pot of gold at the end of the rainbow… and we are missing the leprechauns to help us find it.

Led by the pied piper of Application Specifications – Brad Gilmer – this group has almost single-handedly pushed, prodded, poked and otherwise heckled many in our industry to conform to such things as AS02, AS03, etc. – which allow the over-faceted MXF specification to actually function as intended: provide an interoperable wrapper for content essence that carries technical and descriptive metadata.

Amberfin  link

The makers of the iCR – one of the best ingest devices on the market – have added even more functionality. The new things at the show: UQC (Unified Quality Control – a combination of automated QC and operator tools, allowing combined QC for the highest efficiency workflows) and Multi-Transcode (which allows up to 8 nodes of transcoding on a single multi-core machine).

ARRI  link  

The ARRI ALEXA camera is in reality an ever-expandable image capture and processing device – with continual hardware and software upgrades that help ameliorate its stratospheric price… v7 will add speed and quality to the de-Bayer algorithms to further enhance both regular and high-speed images; offer ProRes 2K; LCC (Low Contrast Curve); etc. The next release, v8, will add DNxHD 444; ProRes 2K in high speed; vertical image mirroring (for Steadicam shots); post triggering (for nature photographers – allowing 'negative' reaction time); and auto card-spanning for continuous shooting across flip/flop memory cards, so no shots are lost even when a card's capacity is reached.

ARRI + Leica - match made in heaven...

Angenieux  link

The Optimo series of cinema camera lenses by Angenieux is of the highest quality. In particular, they have the DP 3D package – a pair of precisely matched lenses for ease of image capture using stereo camera rigs. The tracking, focus and optical parameters are matched at the factory for virtually indistinguishable results from each camera.

Oh Angie... (Angenieux lens)

ATEME  link

The KFE transcoder – the product I am familiar with – continues to move forward, along with other efforts within the larger ATEME family. One interesting foray is a combined project with Orange Labs, France Télévisions, GlobeCast, TeamCast, Technicolor and Doremi, as well as the Télécom ParisTech and INSA-IETR university labs, to reduce the bandwidth necessary for UHDTV – or in real-world-speak: how the heck to ever get 4K to the home… currently, the bandwidth for 4K delivery is 6Gb/s uncompressed, or about 1000x higher than a typical home connection… even for modern technology, this will be a hard nut to crack.
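
(For the curious, the 6Gb/s figure is roughly what uncompressed UHD works out to under common assumptions – here 3840×2160, 8-bit 4:2:0 at 60fps, which is my assumption for the arithmetic, not a stated ATEME spec:)

```python
# Back-of-the-envelope uncompressed UHD bandwidth.
# Assumptions (mine, for the arithmetic): 3840x2160, 8-bit 4:2:0, 60 fps.
width, height  = 3840, 2160
bits_per_pixel = 12            # 8-bit 4:2:0 averages 12 bits per pixel
fps            = 60

gbps = width * height * bits_per_pixel * fps / 1e9
print(f"~{gbps:.1f} Gb/s uncompressed")                      # ~6 Gb/s
print(f"~{gbps * 1000 / 6:.0f}x a typical 6 Mb/s home link") # ~1000x
```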

On other fronts, ATEME has partnered with DTS to offer DTS Neural Surround in the transcoding/encoding workflow. The cool thing about DTS' Neural Surround technology is its upmix capability: it can simulate 7.1 from 5.1, or 5.1 from stereo. I have personally listened to this technology on a number of feature film clips – comparing with the originally recorded soundtracks (for instance, comparing actual 7.1 with the simulated 7.1) – and it works. Really, really well. The sound stage stays intact, even on car chases and other fast movements (which often show up as annoying phase changes if not handled very carefully). This will allow better delivery of packages suited to home theatre and other environments with demanding users.

The Titan and Kyrion encoders have improved speed, capabilities, etc. as well.

Blackmagic Design  link

HyperDeck has added ProRes encoding capability; DesktopVideo will support Adobe CS6 when released; and then there's the new Blackmagic Cinema Camera. The camera is a bit of a surprising move for a company that has never moved this far upstream in capture before: to date they have a well-earned reputation for video capture cards, but now they have moved to actual image capture. The camera accepts EF lenses, has 2.5K resolution, a built-in SSD for storage, a 13-stop dynamic range, DNG (RAW) files, as well as the option to capture in ProRes or DNxHD. And it's under $3,000 (no lens!) – it will be interesting to see how this fits into the capture ecosystem…

Canon  link

OK, I admit, this was one of the first booths I headed for: the chance to get some hands-on time with the new Canon EOS-1D C DSLR camera… the Cinema version of the recent EOS-1 system. It records full 4K video (4096×2160), and has their log gamma as well as the Technicolor CineStyle profile for the best color grading. 18 megapixels at 4:2:2 MJPEG 4K isn't bad for a 'little' DSLR camera… and of course it's not new, but their glass is some of the best in the business. Canon also introduced a number of new video zoom lenses for broadcast TV cameras, as well as a true Cinema camera (EOS C500) with 4K uncompressed (RAW) output – for external recording onto that trailer of hard drives you need for that data rate… <grin>

Their relatively new (introduced at CES in January) Pixma Pro-1 12-ink printer is awesome for medium-format fine-art printing, and for those with Pentagon-type budgets, the imagePROGRAF iPF9100 is incomparable: 2400dpi native; 60″ wide roll paper (yes, 5 feet wide by up to 60 feet long…); and OK – it is $16,000 (not including monthly ink costs, which probably approach that of a small house payment…)

new Canon EOS1

when you're getting ready to drop a $1,000,000 or more on cameras and lenses, you have to see what is what.. so vendors build full scale operating evaluation stages for buyers to try out hardware...

one of Canon's four lens evaluation stages...

model had about 20 huge video lenses trained on her - she's looking at this guy with a little iPhone camera wondering what the heck he's doing...

An awesome lens: 200mm @ F2.0 - at 15 ft the depth of field is less than 1 inch! The image is knife-sharp.

MonsterGlass. If you really need to reach out and touch someone, here's the 400mm F2.8 Canon - tack sharp, but it will set you back about $35,000

the Canon iPF9100 printer: 5 ft wide print size, 12 ink, beautiful color

Ceiton Technologies  link

Ceiton provides German engineering focused on sophisticated workflow tools that help firms automate their production planning and control. As we know, file-based workflows can become horrendously complex, and the mapping of these workflows onto humans, accounting systems and other business processes has been absolutely littered with failures, blood and red ink. Sony DADC, Warner, Raytheon, AT&T, GDMX, A+E and others have found their solution to actually work. While I personally have not worked for a company that employs their solution, I have been following them for the last 5 years, and like their technology.

They have a modular system, so you can use just the pieces you need. If you are not a workflow expert, they offer consulting services to help you. If you are familiar, and have access to in-house developers, they provide API definitions for full integration.

Cinnafilm  link

Dark Energy, Tachyon… sounds like science fiction – well, it almost is… These two products are powerful and fascinating tools: Dark Energy de-noises, restores, adds 'film look', and otherwise performs just about any texture-management function your content needs; and if you need multiple outputs (in different frame rates) from a single input – faster than a speeding neutrino – look to Tachyon. An impressive and powerful set of applications, each available in several formats – check their site. I also need to mention that seldom do I get as detailed, concise, eloquent and factual an explanation of complex technology as Ernie Sanchez (COO, Cinnafilm) provided – trade shows are notoriously hard on schedules, and I felt like I got a Vulcan mind-meld of info in the time I had available. Also thanks to Mark Pinkle of MPro Systems for the intro.

Dashwood Cinema Solutions  link

This company makes one of my two favorite 3D plug-in suites for stereoscopic post-production (the other being CineForm). They have two primary tools I like (Stereo3D CAT and Stereo3D Toolbox). The first is a rather unique stereoscopic analysis and calibration package: it lets you align your 3D rig with high precision using your Mac laptop, and during a shoot allows constant checking of parameters (disparity, parallax, etc.) so you don't end up with expensive fixes after you have torn down your scene. Once in the edit session, Stereo3D Toolbox plugs into either Final Cut or After Effects, giving you powerful tools to work with your stereoscopic captures. You can manipulate convergence and depth planes, set depth-mapped subtitles, etc.

Digimetrics  link

This company provides, among other products, the Aurora automated QC tool for file-based content. With the speed required of file-based workflows today, human QC of large numbers of files is completely impractical. The Aurora system is one answer to this challenge. With a wide range of test capability, this software can inspect many codecs, containers, captions, and other facets to ensure that the file meets the desired parameters. In addition to basic conformity tests, this application can perform quality checks on both video and audio content. This can detect ‘pixelation’ and other compression artifacts; loss of audio and audio level checks (CALM); find ‘tape hits’ (RF dropouts on VTRs), and even run the so-called Harding analyzer tests (to meet FCC specs for photo-sensitive epilepsy).

Digital Rapids  link

Digital Rapids has long been one of the flagship providers of industrial-strength transcoding and encoding of content. They are continuing that tradition with enhancements to both their Stream and Kayak lines. Some of the new features include UltraViolet support (CFF files); Dolby Digital Plus; DTS-HD and DTS Express; and the highly flexible multi-format capabilities of the new Kayak/Transcode Manager platform. Essentially, the GUI on the Kayak/TM system lets you design a process – which is then turned into an automated workflow! I believe the namesake for this product (Kayak), aside from being a palindrome, is supposed to convey the 'unsinkable', 'flexible' and 'adaptable' characteristics of that little boat.

There are only a few enterprise-class encoding/transcoding solutions in the marketplace today – DR is one of them. I’ve personally used this platform for over 8 years now, and find it a solid and useful toolset.

Dolby Laboratories  link

For a company that built its entire reputation on audio, it's been quite a leap of faith for Dolby to enter the visual space in a significant way. They now have several products in the video sector: a 3D display system for theaters; a 3D compression technology to transmit 2 full-sized images (not the usual anamorphically squeezed 'frame compatible' method) to the home; a precision Reference Monitor (simply the most accurate color monitor available today) – and now at the show they introduced a new 3D display that is auto-stereoscopic (does not require glasses).

The Dolby 3D display is based on an earlier Philips product that uses (as do all current auto-stereoscopic displays intended for home use) a lenticular film which, in conjunction with a properly prepared image on the underlying LCD panel, attempts to project a number of 'views' – each one offering slightly different information to each eye, thereby creating the disparity that allows the human eye-brain system to be fooled into perceiving depth. The problem with most other such systems is that the limited number of 'views' means the user's eye position relative to the screen is critical: step a few inches to either side of the 'sweet spot' and the 3D effect disappears – or worse, appears fuzzy and distorted.

This device uses 28 views, the most I have seen so far (others have used between 9 and 11). However, in my opinion, the 3D effect is quite muted… it’s nowhere near as powerful as what you get today with a high quality active or passive display (which does require glasses). I think it’s an admirable showing – and it speaks to the great attraction of a 3D display that can dispense with glasses (the stand was packed the entire show!) – but this technology needs considerable work still. If anyone can eventually pull this off, Dolby is a likely contender – I have huge respect for their image processing team.
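
For the curious, here is a much-simplified sketch of the column interleaving behind any multi-view lenticular panel – real products slant the lens sheet and interleave at the subpixel level, so treat this as a cartoon of the idea, not Dolby’s or Philips’ method:

```python
# Simplified multi-view interleave for a lenticular panel (conceptual only):
# column x of the panel carries a slice of view (x mod N), and the lens sheet
# steers each of the N column groups toward a different viewing angle.
import numpy as np

def interleave_views(views: list) -> np.ndarray:
    """views: N images of identical shape (H, W, 3). Returns one panel image
    where column x is taken from view (x mod N)."""
    n = len(views)
    h, w, c = views[0].shape
    panel = np.empty((h, w, c), dtype=views[0].dtype)
    for x in range(w):
        panel[:, x, :] = views[x % n][:, x, :]
    return panel

# With only N discrete views spread over, say, a 30-degree fan, each viewing
# zone is roughly (30 / N) degrees wide -- which is why a small N makes the
# sweet spot so unforgiving, and why 28 views is a genuine improvement.
```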

DVS  link

This company is known for the Clipster – a high end machine that for a while was just about the only hardware that could reliably and quickly work with digital cinema files at 4K resolution. Even though there are now other contenders, this device and its siblings (Venice, Pronto, etc.) are still the reference standard for high resolution digital file-based workflows. The Clipster has ingest capabilities tailored to HDCAM-SR tape and camera RAW footage directly from RED, ARRI, Sony, etc. It can also accept Panasonic AVC-Ultra, Sony SR, Apple ProRes 422/444 and Avid DNxHD formats. One of its strengths is its speed, as it utilizes purpose-built hardware for encoding.

Eizo  link

Eizo is a provider of high quality LCD monitors that are recognized as some of the most accurate displays for critical color evaluation. They have a long history of use not only in post-production, but across many industries: print, advertising, medical, air traffic control and other industrial applications. The ColorEdge family is my monitor of choice – I have been using the CG243W for the last 3 years. For the NAB show, a few new things were brought to the floor: a 3D LUT (LookUpTable) that provides the most accurate color rendition possible, including all broadcast and digital cinema color spaces – so that the exact color can be evaluated across multiple potential display systems. This device also supports DCI 2K resolution (2048×1080) for theatrical preview. In addition, they showed emulation software for mobile devices. This is very cool: you measure the mobile device (while it is displaying a specialized test pattern), then the table of results is copied into the setup of the Eizo monitor, and then you can see exactly what your video or still will look like on an iPad, Android or other device.
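
Since 3D LUTs do the heavy lifting in that color pipeline, here is a minimal sketch of what applying one involves – plain trilinear interpolation over a lattice, written generically and not as Eizo’s implementation:

```python
# Minimal sketch of applying a 3D LUT to one RGB value (trilinear interpolation).
# Generic illustration only -- not Eizo's internal implementation.
import numpy as np

def apply_3d_lut(rgb, lut):
    """rgb: (r, g, b) in 0..1.  lut: array of shape (N, N, N, 3) mapping
    input lattice points to output RGB."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                      # fractional position inside the cell
    out = np.zeros(3)
    # Blend the 8 corners of the surrounding cube.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                corner = lut[(lo[0], hi[0])[dr],
                             (lo[1], hi[1])[dg],
                             (lo[2], hi[2])[db]]
                weight = ((f[0] if dr else 1 - f[0]) *
                          (f[1] if dg else 1 - f[1]) *
                          (f[2] if db else 1 - f[2]))
                out += weight * corner
    return out

# An identity LUT returns the input unchanged:
# identity = np.stack(np.meshgrid(*[np.linspace(0, 1, 17)] * 3,
#                                 indexing="ij"), axis=-1)
# apply_3d_lut((0.25, 0.5, 0.75), identity)   # ~ (0.25, 0.5, 0.75)
```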

Ok, we’ve had a fair tour so far – and here all that’s moving are your eyeballs… I walked about 6 miles to gather this same info <grin> – that’s one of the big downsides of NAB, and why ‘shortest path planning’ using the little “MyNAB” app on the iPhone proved invaluable (but not perfect – there were the inevitable meetings that always seemed to be at the other end of the universe when I had only 10 minutes to get there). This year it was hot as well – 30C (86F) most days. Inside is air-conditioned, but standing in the cab lines could cause dehydration and melting…

trade shows always have two things in common: lower back pain and low battery life..

The absolute best thing about a short time-out at NAB: a bench to sit down!

Now that we are rested… on with some more booths…

Elecard  link

This company, located (really!) in Siberia, in the city of Tomsk, brings us precision software decoding and other tools for working with compressed files. They make a large range of consumer and professional products – from SDKs (SoftwareDevelopmentKits) to simple video players to sophisticated analysis tools. I use StreamEye Studio (for professional video analysis), Converter Studio Pro (for transcoding), and Elecard Player (for playback of MPEG-2 and H.264 files). All the current versions were shown in their booth. Their tools are inexpensive for what they offer, so they make a good addition to anyone’s toolset.

Elemental Technologies  link

Elemental is a relatively new arrival on the transcoding scene: founded in 2006, and only in the last few years have their products matured to the point where they have real traction in the industry. Their niche is powerful, however – they make use of massively scaled GPUs to offload video transcoding from the normal CPU-intensive task queue. They have also made smaller stand-alone servers that are well suited to live streaming. With nVidia as their main GPU partner, they have formed a solid and capable range of systems. While they are not as flexible as systems such as Digital Rapids, Rhozet or Telestream, they do offer speed and lower cost per stream in many cases. They are a good resource to check if you have highly predictable output requirements, and your inputs are well defined and ‘normal’.

Front Porch Digital  link

Front Porch makes, among other things, the DIVA storage system along with the SAMMA ingest system. These are high volume, enterprise class products that allow the ingest and storage of thousands to millions of files. The SAMMA robot is a fully automated tape encoding system that automatically loads videotape into a stack of tape decks, encodes while performing real-time QC during the ingest, then passes the encoded files to the DIVA head end for storage management. In addition they have launched (at NAB time) a service called LYNXlocal – basically a cloud storage connection appliance. It essentially provides a seamless and simple connection to the Front Porch LYNX cloud service, at a smaller scale than a full DIVA system (which has had LYNX support built in since version 6.5).

Harmonic – Omneon and Rhozet

Both Omneon and Rhozet are now part of Harmonic – whose principal product line is hardware-based encoding/transcoding for the broadcast and real-time distribution sector. Omneon is a high end vendor of storage specific to the broadcast and post-production industry, while Rhozet, with their Carbon and WFS transcoding lines, serves file-based workflows across a wide variety of clients.

I have personal experience with all of these companies, and in particular have worked with Omneon and Rhozet as they are extensively used in a number of our facilities.

The Omneon MediaGrid is a powerful and capable ‘smart storage’ system. It has capabilities that extend beyond just storing content – it can actually integrate directly with applications (such as FinalCut), perform transcoding, create browse proxies, manage metadata, perform automatic file migration and even host CDNs (ContentDistributionNetworks). It was one of very, very few systems that passed rigorous scalability and capacity testing at our Digital Media Center. One of the tests was supporting simultaneous HD editing (read and write) from 20 FinalCut stations. That’s a LOT of bandwidth. And our tolerance was zero. As in no lost frames. None. At NAB Omneon announced their selection by NBC for support of the Olympic Games in London this summer, along with Carbon transcoding.
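
Some back-of-envelope arithmetic shows why that test was brutal – assuming something in the neighborhood of ProRes 422 HQ at ~220 Mbit/s per 1080 stream (an illustrative figure, not the actual test spec):

```python
# Back-of-envelope for why 20 simultaneous HD edit clients is "a LOT":
# assume ~220 Mbit/s per 1080 stream (illustrative ProRes-HQ-class figure).
streams = 20
mbps_per_stream = 220
read_gbps = streams * mbps_per_stream / 1000     # ~4.4 Gbit/s of reads alone
total_gbps = read_gbps * 2                       # editing reads AND writes
print(f"aggregate ~{total_gbps:.1f} Gbit/s sustained, with zero dropped frames")
```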

Rhozet, which came on the scene with the Carbon Coder and Carbon Server (transcoding engine and transcode manager platform), has been a staple of large scale transcoding for some time now. Originally (and still) their claim to fame was a low cost per seat while maintaining the flexibility and capability demanded by many file-based workflows. In the last few years the ecosystem has changed: Carbon Coder is now known as ProMedia Carbon, and the WFS (WorkFlowSystem) has replaced the Carbon Server. WFS is highly amenable to workflow automation, and is basically designed to ingest, transcode, transfer files, manage storage, notify people or machines, and perform many other tasks. The WFS can be interfaced by humans, scripts, API calls – and I suppose one of these days by direct neural connections…

WFS integrates with ProMedia Carbon nodes for the actual transcoding, and can also directly integrate with Rhozet’s QCS (QualityControlSystem) for live QC during the workflow, and with Microsoft Enterprise SQL Database for reporting and statistical analysis.

INGRI;DAHL  link

This is the only virtual booth I visited 🙂  But Kine and Einy have a great small company that makes really hip 3D glasses. Since (see my earlier comments on Dolby and other auto-stereoscopic displays) we are going to be needing 3D glasses for some time yet – why not have ones that are comfortable, stylish and stand out – in a good way? Yes, they are ‘passive’ – so they won’t work with active home 3D TVs like Panasonic, Samsung, etc. – but they do work with any passive display (Vizio, RealD, JVC, etc.) – and most importantly – at the theatre! Yes, you get ‘free’ glasses at the movies… but consider this: if you’ve read this far, you’ve seen a 3D movie in a theatre – with glasses that fit well and are comfortable, right…?? And… one of the biggest challenges we have today is the low light level on the screen. This is a combination of half the light being lost (since only 50% at most reaches each eye, with the alternating polarization), poor screen reflectance, etc. Although it’s not a huge difference, these glasses have greater transmittance than the average super-cheap movie-theatre glasses – so the picture is a little bit brighter.
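
A little arithmetic (with purely illustrative percentages, not measured values for these or any other glasses) shows why transmittance matters so much when you start from half the light:

```python
# Why 3D in the theatre looks dim, and why lens transmittance matters.
# All percentages are illustrative, not measured values for any product.
screen_luminance = 1.0             # normalize 2D screen brightness to 1.0
polarization_split = 0.5           # each eye gets at most half the light
glasses = {"cheap giveaway": 0.80, "better filter": 0.90}   # assumed transmittance

for name, t in glasses.items():
    to_eye = screen_luminance * polarization_split * t
    print(f"{name}: ~{to_eye:.0%} of 2D brightness reaches each eye")
# Even a ~10% bump in transmittance is noticeable when you start from ~40%.
```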

And they offer a set of clip-on 3D glasses – for those of us (like myself) who wear glasses to see the screen in the first place – wearing a second pair over the top is a total drag. Not only uncomfortable – I look even more odd than normal… Check them out. The cost of a couple of tickets will buy you the glasses, and then you are set.

Interra Systems  link

Interra makes the Baton automated QC analyzer. This is a highly scalable and comprehensive digital file checker. With the rate of file manufacture these days, there aren’t enough eyeballs or time left on the planet to adequately QC that many files. Nor would it be affordable. With a template-driven process, and a very granular set of checks, it’s now possible for the Baton tool to grind through files very quickly and determine if they are made to spec. Some of the new features announced at NAB include: wider format support (DTS audio, MKV container, etc.); more quality checks (film grain amount, audio track layout, etc.); closed caption support; verification efficiency improvements; audio loudness analysis and correction (CALM); new verification reports and high availability clustering.

Manzanita Systems  link

Manzanita is the gold standard for MPEG-2 multiplexing, demultiplexing and file analysis. Period. Everyone else, in my professional opinion, is measured against this yardstick. Yes, there are cheaper solutions. Yes, there are more integrated solutions. But the bottom line is if you want the tightest and most foolproof wrapper for your content (as long as it’s MPEG-2 Program Stream or Transport Stream – and now they’ve just added MP4) – well you know where to go. They are a relatively small company in Southern California – with a global reach in terms of clients and reputation. The CableLabs standard is what most VOD (VideoOnDemand) is built upon, and with their muxing software (and concomitant analyzer), Manzanita-wrapped content plays on just about any set top box in existence – no mean feat.

New things:  CrossCheck – a fast and lower cost alternative to the full-blown Transport Stream Analyzer; MP4 multiplexer; Adaptive Transport Stream Mux (for mobile and “TV Everywhere” apps); integration of DTS audio into Manzanita apps; and a peek at a highly useful tool (prototype at the show, not a full product yet):  a transport stream ‘stitcher/editor’ – basically allows ad insertion/replacement without re-encoding the original elementary streams. Now that has some interesting applications. If you have a thorny MPEG-2 problem, and no amount of head-scratching is helping, then ask Greg Vines at Manzanita – he’s one of our resident wizards of this format – I can’t remember an issue he has not been able to shed some light on. But buy something from them, even the really inexpensive (but useful and powerful) MPEG-ID tool for your desktop – no one’s time is free… 🙂
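
For anyone who has never peeked inside a transport stream, here is roughly what these muxers and analyzers juggle at the lowest level – a minimal sketch, nothing to do with Manzanita’s actual code:

```python
# Minimal look at the plumbing a TS mux/analyzer deals with:
# fixed 188-byte packets, a 0x47 sync byte, and a 13-bit PID per packet.
from collections import Counter

PACKET_SIZE = 188
SYNC_BYTE = 0x47

def count_pids(path: str) -> Counter:
    """Count packets per PID in an MPEG-2 transport stream file."""
    pids = Counter()
    with open(path, "rb") as f:
        while True:
            pkt = f.read(PACKET_SIZE)
            if len(pkt) < PACKET_SIZE:
                break
            if pkt[0] != SYNC_BYTE:
                raise ValueError("lost sync -- not aligned to packet boundary")
            pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit packet identifier
            pids[pid] += 1
    return pids

# for pid, n in count_pids("example.ts").most_common():
#     print(f"PID 0x{pid:04X}: {n} packets")
```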

MOG Solutions / MOG Technologies  link

MOG is a Portugal-based firm (MethodsObjectsGadgets). So now, in addition to world-class sailors and great port, we have excellent quality MXF from this country of light and history. I’ve worked with them extensively in the past, and they make a small but powerful set of SDKs and applications for processing the MXF wrapper format. The core is based on the Component Suite from MOG Solutions – a set of SDKs that allow wrapping, unwrapping, editing and viewing of MXF containers. This underlying code is integrated into a number of well-known platforms, which explains the growing, near-ubiquitous reach of this standards-based enabling technology.

As a new arm, MOG Technologies was formed to offer products differentiated from the core code provided by the parent MOG Solutions. They have introduced mxfSPEEDRAIL to offer SD/HD-SDI ingest, file-based ingest, digital delivery and real time playback. The recorder handles multiple resolutions and multiple formats – SD or HD at any normal frame rate – and natively encodes to QuickTime, Avid, MXF, AVC-Intra, DNxHD, DVCProHD, XDCAM/HD and ProRes, among many other features. It also accepts files in any of those formats.

Panasonic  link

Panasonic offers a wide array of professional video products: cameras, monitors, memory cards, mixers & switchers, etc. Probably one of the more interesting products is the 3D camcorder, the AG-3DP1. This is the ‘bigger brother’ of the original AG-3DA1. The newer model has a slightly higher pixel count, but the main difference is the better recording format: true AVC-Intra instead of AVCHD, and 10-bit 4:2:2 instead of 8-bit 4:2:0. It’s almost twice as expensive, but still very cheap for a full 3D rig ($35k). This camera has numerous limitations, but IF you understand how to shoot 3D, and you don’t break the rules (the biggest issue with this camera is that you can’t get close to your subject, since the interaxial distance is fixed), you can make some nice stereo footage at a bargain price.
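
Why does a fixed interaxial keep you away from your subject? A rough sketch of parallel-rig geometry tells the story – the focal length, sensor width and 60 mm interaxial below are assumptions chosen for illustration, not the 3DP1’s published optics:

```python
# Illustrative parallel-rig geometry (not the AG-3DP1's actual specs).
# A point at distance Z lands with disparity d = f * b / Z on the sensor; the
# usable depth budget is the disparity spread between nearest and farthest
# objects, and a common comfort rule keeps that spread to ~2% of frame width.
def disparity_percent(focal_mm, interaxial_mm, near_mm, sensor_width_mm,
                      far_mm=float("inf")):
    spread_mm = focal_mm * interaxial_mm * (1.0 / near_mm - 1.0 / far_mm)
    return 100.0 * spread_mm / sensor_width_mm

f, b, sensor = 10.0, 60.0, 5.0      # assumed: 10 mm lens, 60 mm interaxial,
                                    # ~1/3-inch-class sensor width
for near_m in (1, 2, 4, 6, 10):
    pct = disparity_percent(f, b, near_m * 1000.0, sensor)
    print(f"nearest subject at {near_m:>2} m -> disparity spread ~{pct:.1f}% of frame")
# With these numbers you must stay roughly 6 m back to keep the spread near a
# ~2% budget -- hence "you can't get close".
```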

With such a wide range of products, it’s impossible to cover everything here. Check this link for all their NAB announcements if you have a particular item of interest.

Photo Research  link

This is one of the best vendors of precision spectrometers – a must-have class of device for precision optical measurement of color monitors. While calibration probes and software can calibrate individual monitors of the same type, only a purely optical spectrometer can correlate readings and match devices (or rather give you the information to accomplish this) across different technologies and brands – such as matching, to the best degree possible, CCFL LCD to LED LCD to Plasma. If you are serious about color, you have to have one of these. My preference is the PR-655.
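
Here’s a tiny sketch of the kind of cross-technology correlation such a device enables – take each display’s measured white point (CIE XYZ) and compare the chromaticity coordinates; the XYZ numbers below are invented purely for illustration:

```python
# Compare white points across display technologies via CIE chromaticity.
# The XYZ measurements below are made-up illustration values.
def xy_from_XYZ(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s

displays = {
    "CCFL LCD": (95.0, 100.0, 108.0),
    "LED LCD":  (94.2, 100.0, 106.1),
    "Plasma":   (96.1, 100.0, 110.3),
}
ref_x, ref_y = xy_from_XYZ(*displays["CCFL LCD"])
for name, XYZ in displays.items():
    x, y = xy_from_XYZ(*XYZ)
    print(f"{name:9s} x={x:.4f} y={y:.4f} "
          f"(dx={x - ref_x:+.4f}, dy={y - ref_y:+.4f})")
```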

RED Digital Cinema  link

Well… now we have arrived at one of the most popular booths at the show. The lines waiting to get in to see the REDray projector went around the booth… sometimes twice. As has often been the case in the past, RED has not done incremental – they just blow the doors off… 4K playback in 3D, by real lasers, in a small box that doesn’t cost the budget of a small African country?? And a home version as well (ok, still 4K but 2D). And of course they still make an awesome set of cameras… the Epic is, well, epic?? Of course, with almost obscene resolution and frame rates, copious data rates have forced – or rather allowed – RED to vertically integrate: you now have RED ROCKET so you can transcode in real time (what ARE you going to do with 2 x 5K streams of RED RAW data squirting out of your 3D rig… you did remember to bring your house trailer of hard disks, correct??). Anyway, just go to their site and drool, or if you have backing and the chops, start getting your kit together…
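
For a sense of scale, some rough arithmetic on that 2 x 5K firehose – the raster, bit depth and compression ratio below are ballpark assumptions, not RED’s published specs:

```python
# Ballpark only: 5K Bayer frames at 24 fps, times two eyes, with an assumed
# wavelet compression ratio. None of these are official RED figures.
width, height = 5120, 2700          # assumed 5K raster
bytes_per_photosite = 2             # 16-bit container for the Bayer samples
fps, eyes = 24, 2
compression = 8                     # assumed REDCODE-style ratio

uncompressed_mb_s = width * height * bytes_per_photosite * fps / 1e6
pair_gb_per_hour = uncompressed_mb_s * eyes / compression * 3600 / 1000
print(f"~{uncompressed_mb_s:.0f} MB/s per eye uncompressed")
print(f"~{pair_gb_per_hour:.0f} GB per shooting hour for the stereo pair")
# Roughly 660 MB/s per eye before compression, and on the order of 600 GB per
# hour for the pair even after it -- hence the trailer of hard disks.
```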

RED camera

RED 3D (in 3ality rig)

RED skirt/bag

couldn’t resist…

rovi – MainConcept  link

Now owned by rovi, MainConcept has been one of the earliest, and most robust, providers of software codecs and other development tools for the compression industry. Founded in 1993, when MPEG-1 was still barely more than a dream, this company has provided codecs ‘under the hood’ for a large majority of compressed video products throughout this industry. Currently they have a product line that encompasses video, audio, muxing, 3D, transcoding, streaming, GPU acceleration and other technologies as SDKs – and applications/plug-ins for transcoding, decoding, conversion and codec enhancements for popular NLE platforms.

I personally use their Reference Engine professional transcoding platform (it does just about everything, including Digital Cinema at up to 4K resolution), the codec plug-in for Adobe Premiere, and various decoder packs. The plug-in for Adobe will be updated (to support CS6) by the time the new version of Premiere is released…

Schneider Optics  link

A great many years ago, when such wonderful (and now a bit archaic) technologies as 4×5 sheet film were the epitome of “maximum megapixels” [actually, the word pixel hadn’t even been invented yet!], the best way to get that huge negative filled up with tack-sharp light was a Schneider “Super Angulon” lens. I still remember the brilliance of my first few shots after getting one of those lenses for my view camera. (And BTW, there is not a digital camera out there, even today, that can approach the resolution of a 4×5 negative on Ilford Delta 100 stock… when I scan those I am pulling in 320 megapixels, which, even in grayscale (16-bit), is about 500MB per frame – almost sounds like RED data rates <grin>.) At the show, Dwight Lindsey was kind enough to share a detailed explanation of their really awesome little add-on lenses for the iPhone camera: they have a fisheye and a wide angle, and are just about to release a telephoto. These high quality optics certainly add capability to what’s already a cool little camera. (For more info on the iPhone camera, look elsewhere on my blog – I write about this consistently.)

Signiant  link

Signiant is one of the major players in enhanced file transfer. They have a range of products dedicated to highly secure file movement over global networks. They offer a combination of high speed (much better utilization than plain TCP) as well as security strong enough to pass muster with the security/IT divisions of every major studio. They also offer highly capable workflow engines – it’s not just a souped-up FTP box… Management, scheduling, prioritization, least-cost routing – all the bits that a serious networking tool should possess.

With the ability to transfer over the internet, or direct point-to-point, there is real flexibility. An endpoint can be as simple as a browser (yet still totally secure), or a dedicated enterprise server with 10Gb fiber connectivity. This toolset can be an enabler for CDNs, automatically delivering content in the formats required for anything from set top boxes to mobile devices.
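
The “better utilization than plain TCP” claim is easy to appreciate with a little arithmetic: a single classic TCP connection can’t push more than its window size per round trip, no matter how fat the pipe is.

```python
# Why plain single-stream TCP struggles on long-haul links: throughput is
# bounded by window size / round-trip time, regardless of pipe capacity.
def max_tcp_mbps(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

window = 64 * 1024            # a classic 64 KB window (no window scaling)
for rtt in (10, 80, 200):     # LAN, transcontinental, intercontinental (ms)
    print(f"RTT {rtt:3d} ms -> at most ~{max_tcp_mbps(window, rtt):6.1f} Mbit/s")
# A US-Europe RTT in the 100-200 ms range caps a 64 KB window at a few Mbit/s --
# which is why accelerated-transfer products (UDP-based or heavily parallelized)
# exist in the first place.
```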

Snell link

Formerly Snell & Wilcox, the combined firm has been known simply as Snell since the merger with Pro-Bel. A long-time front-runner in broadcast hardware, and a recognized world leader in television conversion products, Snell has continued to adapt to the new world order – which, BTW, still uses a LOT of baseband video! The conversion to pure IP is a ways off yet… and for real day-to-day production, SDI video is often king. The combined company now offers a wide range of products in such categories as control & monitoring; routing; modular infrastructure; conversion & restoration; production & master control switching; and automation/media-management. Too many things to go into in detail – but do check out Momentum, announced at NAB: their new unified platform for managing complex workflows across multiple screens.

SoftNI  link

Subtitling/closed captions. Done well. Their products handle basically the full range of world broadcast, cable, satellite, OTT, etc. distribution where there is a need to add subtitles or closed captions. The capabilities extend to theatrical, screening rooms, editorial, contribution, DVD, etc. etc. The software handles virtually all non-Arabic languages as well (African, Asian, European, Middle Eastern).

Sony  link

With probably the largest booth at NAB, Sony just boggles the mind with the breadth and depth of their offerings. Even though the consumer side of their house is a bit wilted at the moment, the professional video business is solid and such a major part of so many broadcasters, post-production houses and other entities that it’s sort of like part of everyone’s DNA. There’s always a bit of Sony somewhere… HDCAM-SR? DigiBeta? XDCAM? And so on. Here’s just a minuscule fraction of what they were showing: compare the PMW-TD300 3D camcorder to the Panasonic discussed earlier. Very similar. It’s basically a 3D XDCAM/HD. And they are dipping their toes in the water with UHD (UltraHighDefinition) with their 4K ‘stitch’ display technology. Had to laugh though: the spec sheet reads like a fashion magazine – “Price upon Request” – in other words, just like buying haute couture from Dior, if you have to ask, you can’t afford it…

8K x 2K display! (2 x 4K displays stitched together...)

one of the displays above... the small red box is detailed below

close-up of red outline area above.. I focused as close as I could get with the iPhone - about 2" from the screen - you can just barely see the pixels. This is one sharp image!

Tektronix  link

Tektronix has been a long-standing provider of high end test equipment, with their scopes, waveform monitors and other tools found in just about every engineering shop, master control room and OB van I’ve ever set foot in… but now, as with most of the rest of our industry, software tools and file-based workflows demand new toolsets. Their Cerify platform was the first really capable automated QC analysis platform, and it remains capable today. At the show we saw upcoming improvements for v7 – the addition of audio loudness measurement and correction (in conjunction with Dolby technology), as well as other format and speed enhancements.

Telestream link

The company that brought us FlipFactory – aka the universal transcoding machine – has spread its wings a lot lately. While FF is still with us, the new Vantage distributed workflow system is like a tinkertoy set for grownups who need to make flexible, adaptable and reliable workflows quickly. With the addition of Anystream to the Telestream fold, there are now additional enterprise products, and Telestream’s own lineup has expanded as well to encompass Pipeline, Flip4Mac, Episode, etc. etc. A few of the NAB announcements included Vantage v4.0 (CALM act compliance, new acceleration technology, better video quality); Vantage HE Server (parallel transcoding for multiscreen delivery); and enhanced operations in Europe with the opening of a German branch.

Verizon Digital Media Services  link

VDMS is a big solution to a big problem. The internet, as wonderful as it is, handles small packets really, really well. Little bits of emails, even rather big spreadsheet attachments, zip around our globe in large numbers, with the reliability and fabric of many interconnected routers that make the ‘net what it is. But this very ‘fabric of redundancy’ is not so wonderful for video.. which is a data format that needs really fast, smooth pipes, with as few stops as possible. It’s not the size of the highway (those are big enough), but rather those toll stations every 5 miles that cause the traffic jams…

So… what do you do? If you own some of the largest private bandwidth on the planet, that is NOT the internet.. then you consider providing an OTT solution for the internet itself… This has the potential to be a game-changer, as video delivery – through a true unicast system – can be offloaded from the internet (not completely of course, but enough to make a big difference) and turned into a utility service.. well, kind of like what a phone company used to be… before landlines started disappearing, analog phones checked into museums, and pretty much everyone’s talking bits…

This could be one of our largest digital supply chains. Let’s see what happens…

VidCheck  link

There are several cool things in Bristol, UK (apart from the weather) – the British Concorde first flew from Filton (a suburb of Bristol) in April of 1969, a bit after the French version. (Yes, this was the forerunner of Airbus – the original distributed supply chain… and hmmm… it seems, as usual, the Brits and the French are still arguing…) In more recent years this area has become a hub of high-tech, and some smart engineers who cut their teeth on other automated QC tools started up VidCheck. In my view this is a “2nd generation” toolset, and it brings some very interesting capabilities to the table.

This was one of the first tools as capable of fixing errors in a file as of simply finding them… It handles a very wide array of formats (both containers and codecs – it’s one of the few that processes ProRes). Another unique aspect of this toolset is the “VidApps” – plug-ins for common NLEs – so QC/fix becomes part and parcel of the editorial workflow. Particularly with audio, this simplifies and speeds up corrections. NAB announcements included the new “VidFixer” app, which is designed from the ground up to integrate into workflow automation systems as well. In addition, the VidChecker software is now part of the new Amberfin UQC (UnifiedQualityControl) platform – providing the same detailed and rapid analysis, but integrated into the Amberfin platform.

Wowza  link

We can never seem to get away from multiple formats. First we had NTSC and PAL, then VHS and Beta, then different types of HD, etc. etc. Now we have tablets, smartphones, laptops, Dick Tracy video wristwatches, etc. – all of which want to display video – but in a universal format? No such luck… Silverlight, HLS, HTML5, WideVine, Flash, TS… enter Wowza, which is essentially a ‘transmuxer’ (instead of a transcoder) – since usually the elementary streams are fine, it’s the wrappers that are incompatible. So basically the Wowza server skins a potato and re-wraps it in a banana skin – so that what was a Flash file can now play on an iPad… of course there’s a bit more to it… but check it out – it just might save you some migraines…
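
The “same potato, different skin” trick is just a stream-copy remux. As a stand-in illustration (this is not how Wowza is implemented – it repackages live streams on the fly for the various delivery formats), ffmpeg’s stream copy shows the principle:

```python
# Rewrap the elementary streams into a new container without re-encoding.
# Uses ffmpeg's stream copy purely as an illustration of transmuxing.
import subprocess

def remux(src: str, dst: str) -> None:
    """Rewrap src into the container implied by dst's extension, no re-encode."""
    subprocess.run(["ffmpeg", "-i", src, "-c", "copy", dst], check=True)

# remux("mezzanine.ts", "mezzanine.mp4")   # seconds, not hours -- no transcode
```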

And wow – you’ve stuck with it till the end – congratulations on your perseverance! Here’s a bit of my trip back as well, in pictures…

Sort of like life: Easy / Hard

There’s always several ways to do a workflow…

end of the day - SHOES OFF!!

Convention is ending – and believe me it takes its toll on feet, legs, backs…

Sunset on Encore

Wynn reflected in Encore

the Starship Enterprise has landed...

Walking back from a meeting at the Palazzo…

heading back to hotel...

convention over: now 100,000 people all want to leave airport at the same time - no seats. nowhere...

your roll-on also makes a good seat...

finally, Southwest flight #470 is ready for take-off...

please put your legs in the upright and locked position to prepare for take-off...

It took almost as long to get luggage in Burbank as it did to fly from Vegas... waiting... waiting...

still waiting for luggage...

The Return

someone else we know and love came back from Vegas on our plane...

homeward bound...

That’s it! See you next year just after NAB in this same space. Thanks for reading.

DI – Disintermediation

February 28, 2012 · by parasam

Disintermediation – a term you should come to know. Essentially it means “to remove the intermediary.” This has always been a disruptive process in cultures – whether affecting religion, law, education or technology. In religion, one well-known example was the rise of the Lutheran church, when its adherents felt the ‘intermediary’ structure of pope, cardinals, bishops, etc. in the Catholic Church was no longer necessary for the common man to connect to their belief in God.

Higher education used to be exclusive to those that could afford to attend brick-and-mortar campuses; now we have iTunesU, distance learning and a host of alternative learning environments open to virtually anyone with the time and focus to consume the knowledge.

Questions of law were traditionally handled by barristers, attorneys and others – ensconced in a system of high cost and slow delivery of service. Today we have storefront paralegal offices, online access to many governmental and private legal services and a plethora of inexpensive software for preparation of common legal forms.

Each of these areas of practice fought long and hard to preserve the intermediary. We were told that our souls might be lost without the guidance of trained priests, that we might lose everything we owned if we prepared legal documents without assistance, and that only a trained professor could teach us anything.

We, collectively, begged to differ. And slowly, with much bloodshed, sweat and tears, we succeeded in emancipating ourselves from the yoke of enforced intermediation. However, like many true tools, knowledge is often represented as a sharp two-edged sword. Going it alone has consequences. There is no substitute for the compassion and experience of a spiritual advisor, no matter his or her title. There are many areas of law where the specialized knowledge of legal codes, not to mention the oratorical skills of an experienced courtroom jouster, is essential to victory. The guidance and explanation of one who has mastered a subject – not to mention the special skills of teaching, of helping a student reach comprehension – are many times critical to the learning process.

Now we are confronting a new locus of disintermediation: the provisioning of access to the ‘cloud’ of entertainment and information. The internet – in its largest sense – is in a new stage of democratization. The traditional providers of access (telcos, cable TV, satellite) are in a fight for their collective lives – and they are losing. Attempting to hold onto an outmoded business model is simply ‘dead man walking’ philosophy. At most you can walk more slowly – but you will reach the hangman’s noose regardless.

This will not be an overnight change, but we have already seen just how quickly ‘internet time’ moves. The velocity of change is on an exponential upwards curve, and nothing in our recent past has given us any reason to doubt this will alter anytime soon.

There are a number of factors fueling this: the explosion in the number of content creators; the desire of traditional content creators (studios, episodic TV) to sell their content to as wide an audience as rapidly as possible; and the high cost and oft-perceived low value-add of traditional NSPs (Network Service Providers – telco, cable, etc.).

One of the biggest reasons that has helped to propel this change in consumer behavior is knowledge:  as little as ten years ago the average media consumer associated the ‘channel’ and the ‘content’ as the same thing. Movies or news on TV came out the wall on a cord (no matter what fed the cord) – or movies could be seen on a plastic disk that you rented/bought. The concept of separation of content from channel did not exist.

Today, even the average 6-year-old understands that he or she can watch SpongeBob on almost anything that has a screen and a speaker. The content is what matters, how it gets there and on what device it is consumed just doesn’t matter much. Not a great thing for makers of networks or devices…

Fortunately the other side of the sword cuts too… while the traditional models of telco service provisioning are either antiquated or obsolete (the time/distance tariff model, high cost for low functionality, etc.), the opportunity for new business models does exist. What is disruptive for one set of structures is opportunistic for others.

Once content is ‘unbound’ from its traditional channels, a new world of complexity sets in:  metadata about the content gains importance. Is it SD or HD? What’s the aspect ratio: 4:3 or 16:9? What codec was used (for instance if it was Flash I can’t see it on my iPad), etc. etc.
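
To give a flavor of that complexity, here is a small sketch of the metadata one version of one title might carry – the field names are mine, not any particular standard:

```python
# A sketch of the metadata burden that appears once content is unbound from a
# single delivery channel. Field names are illustrative, not a standard schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContentVersion:
    title: str
    resolution: str                 # "SD", "720p", "1080p", ...
    aspect_ratio: str               # "4:3" or "16:9"
    container: str                  # "MP4", "TS", "MKV", ...
    video_codec: str                # "H.264", "Flash/VP6" (won't play on an iPad), ...
    audio_codec: str
    drm: Optional[str] = None       # rights/licensing system, if any
    territories: List[str] = field(default_factory=list)

# One 'movie' quickly becomes dozens of ContentVersion records -- one per device
# class, bitrate tier, caption language and rights window -- and someone has to
# manage, search and reconcile all of them.
```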

Finding the content, or the version of it that you want, can be challenging. Licensing and other DRM (Digital Rights Management) issues add to the confusion. If voice communication (aka the telephone) is stripped out of its network and becomes just an ‘app’ (for instance, Skype), who sells and supports the app? If all “private networks” (telco, cable, satellite) become essentially data pipes only, what pricing models can be offered that will attract consumers yet allow such companies to run profitably? There is a growing tendency toward “un-bundling” and other measures of transparency – for instance, overseas mobile operators are backing away from cellphone handset subsidies. This is due in large part to the prevalence of prepaid phone contracts in those regions, for which no subsidized phones can be provided. It has the knock-on effect of reducing iPhone sales and increasing the penetration of Android (and other) less expensive phone hardware. In the last year or so, for instance, the sales of iPhones have fallen considerably in Greece, Spain, Portugal and Italy… Hmmm, wonder why??

All of these questions will assume larger and larger importance in the near future. Current modalities are either failing outright or becoming marginalized. We have heard the moniker “Content is King” – and it’s still true, much to the chagrin of many network providers. When one is thirsty, you pay for water, not the pipes that it arrives in…

Here’s another anecdotal piece that helps to demonstrate that you cannot underestimate the importance of content ownership: as is well known, VFX (Visual Effects) are now the ‘star’ of most movies. A decade ago actors carried a movie; now it’s the effects… Do the research. Look at 2011 box office stats. Try to find a movie in the top 20 grossing that did NOT have significant special effects… Now here’s the important bit: one would think that the firms that specialize in creating such fantastic imagery would be wildly successful… NOT. It’s very, very expensive to create this stuff. It takes ungodly amounts of processing power, many really clever humans, and ridiculous amounts of time. Rango just won the Oscar… and in spite of the insanely powerful computers we have today, it took TWO YEARS to animate this movie!

The bottom line is that the ONLY special effects firms that are in business today, or are remotely profitable, are the ones connected to studios or consortiums that themselves own the content on which this magic is applied.

Content is water at the top of the hill. The consumers are at the bottom with their little digital buckets out, waiting to be filled. They just don’t care which path the water runs down the hill… but they DO care that it runs quickly, without damming up, and without someone trying to siphon off  ‘their’ water…

This is not all settled. Many battles will be won and lost before the outcome of the ‘war’ is known. New strategies, new generals, new covert forces will be deployed.

Stay tuned.

VERA (Vision Electronic Recording Apparatus) – one of the first video tape recorders

February 26, 2012 · by parasam

Today we take for granted the ability to watch HDTV on large screens in our homes, see stunning digital cinema at super high resolution in the theatre, and stream high quality video to our smartphones, tablets and computers. It wasn’t always this way…

One of the first attempts at recording and playing video content (with audio as well) – all previous video distribution was completely live, in real time – was a British system: the Vision Electronic Recording Apparatus (VERA). Development of this device was begun by the BBC in 1952, under project manager Dr. Peter Axon.

All previous recording technology of that time was used for audio, and of course only analog techniques were available then. At a high level, the recorders used open-reel magnetic tape that passed over a fixed record/playback head. Although there were several formats for audio recording in use by the 1950s – common professional machines moved the tape at 15ips (inches per second) to record frequencies up to 16kHz – there were no video recording devices.

Video signals, even monochrome, require a much higher bandwidth. The relatively low-resolution cameras of that day (405 TV lines) needed about 3MHz to record the fine detail in the picture. This is roughly 200x the maximum frequency recorded by the audio tape machines of that time. Essentially, in order to record a signal on magnetic tape using stationary heads, either the head gap must be made smaller, or the tape must move faster.

Limitations in materials science in the 1950s, as well as other complex issues of recording head design, essentially dictated that the tape needed to move much faster to record video. For the VERA system, the BBC used 52cm (20”) reels of magnetic tape that moved past the stationary record/playback head at 5.08 meters per second (16.7 ft. per sec.!) – more than 13x faster than audio tape, quite a mechanical achievement for that time.
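
The arithmetic behind that choice is simple: the wavelength laid down on the tape is the tape speed divided by the signal frequency, and a 1950s head simply could not resolve sub-micron wavelengths.

```python
# Recorded wavelength on tape = tape speed / signal frequency. At audio tape
# speed, 3 MHz video would leave a wavelength far too fine for a 1950s head
# gap -- so the tape had to move much faster.
def wavelength_um(tape_speed_m_s: float, freq_hz: float) -> float:
    return tape_speed_m_s / freq_hz * 1e6

audio_speed = 15 * 0.0254          # 15 ips in metres/second (~0.38 m/s)
print(f"16 kHz audio at 15 ips : {wavelength_um(audio_speed, 16e3):7.1f} um")
print(f"3 MHz video at 15 ips  : {wavelength_um(audio_speed, 3e6):7.3f} um")
print(f"3 MHz video on VERA    : {wavelength_um(5.08, 3e6):7.3f} um")
# ~24 um for audio, an impractical ~0.13 um at audio speed, but ~1.7 um at
# VERA's 5.08 m/s -- within reach of 1950s head technology.
```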

VERA was capable of recording about 15 minutes (i.e. roughly 4,570 meters of tape) of 405-line black-and-white video per reel, and the picture tended to wobble because the synchronizing pulses that keep the picture stable were not recorded accurately enough.

In order to cope with 625-line PAL or SECAM colour transmissions VERA would likely have required an even faster, and possibly unfeasible, tape speed.

Development began in 1952, but VERA was not perfected until 1958, by which time it had already been rendered obsolete by the Ampex quadruplex video recording system. This used 5cm (2”) wide tape running at a speed of 38cm/s (15ips). The rapid tape-to-head speed was achieved by spinning the heads rapidly on a drum – the approach used, with variations, in virtually all video tape systems ever since, as well as in DAT.

The BBC scrapped VERA and quickly adopted the Ampex system. It has been suggested that the BBC only continued to develop VERA as a bargaining tool, so it would be offered some of the first Ampex machines produced in unstated exchange for abandoning further work on a potential rival.

The only VERA recordings that survive are film telerecordings of the original demonstration. Even if some of the original tape had survived, there would be no way of playing it back today. Rather, the film kinescopes were transferred, using modern film scanning technology, to digital files, one of which is reproduced here.
