Quality AND Quantity… or how can I have my cake and eat it too??

January 27, 2012 · by parasam

The relatively new ecosystem of large-scale distribution of digital content (movies, tv, music, books, periodicals) has brought many challenges – one of the largest being how in the world do we create and distribute so much of this stuff and keep the quality high?

Well… often we don’t… sad to say, most of the digital content that is made available online (and here I mean this in the largest sense:  cable, satellite, telco as well as the internet) is of lower quality than what we enjoyed in the past with purely physical distribution. Compared to an HD Blu-ray movie, a fine photographic print, a hard-back book made by a good lithographer, a CD of music – the current crop of highly compressed video, music, etc. is but a reasonable approximation.

In many cases, the new digital versions are getting really, really good – for the cost and bandwidth used to distribute them. The convenience, low cost and flexibility of the digital distribution model has been overwhelmingly adopted by the world today – to the extent that huge physical distribution companies (such as Netflix – who only a few years ago ONLY distributed movies via DVD) now state that “We expect DVD subscribers to decline every quarter, forever.”

The bottom line is that the products delivered by digital distribution technology have been deemed “good enough” by the world at large. Yes, there is (fortunately) a continual quest for higher quality, and an appreciation of that by the consumer:  nobody wants to go back to VHS quality of video – we all want HD (or as close to that as we can get). But the convenience, flexibility and low cost of streaming and downloaded material offsets, in most cases, the drop in quality compared to what can be delivered physically.

One only has to look at print circulation figures for newspapers and magazines to see how rapidly this change is taking place. Like it or not, this genie is way out of the bottle and is not going back. Distributors of books, newspapers and magazines have perhaps the most challenging adaptation period ahead for two reasons:

#1:  “Digital Print” can be effectively distributed at a very high quality to smartphones, tablets and laptops/PCs due to the high quality displays in use today on these devices and the relatively small data requirements to send print and smallish photos. This means that there is almost no differentiation experienced by the consumer, in terms of quality, when they move from physical to virtual consumption.

#2: With attention to the full range of possibilities of “Digital Print” media, including hyperlinking, embedded videos and graphics that are impossible (either technically or financially) to create in physical print, interactive elements, etc. – the digital versions are often more compelling than the physical versions of the same content.

The content creation and distribution model for print is moving very rapidly: it is very likely that within 5 years the vast majority of all contemporaneous print media (newspapers, magazines, popular fiction, reports, scientific journals, etc.) will be available only in digital format.

In addition to the convenience and immediacy of digital print, there are other issues to consider:  newsprint requires lots of trees, water, energy, fuel to transport, etc. – all adding to the financial and ecological cost of physical distribution. The cost of manufacturing all our digital devices, and the energy to run them, is a factor in the balance, but that cost is far lower. Every morning’s paper is eventually thrown out, burned, etc. – while yesterday’s digital news is just written over in memory…

Music, movies, television, games, etc. are more difficult to distribute digitally than physically – for the equivalent quality. That is the crucial difference. High quality video and music takes a LOT of bandwidth, and even today that is still expensive. Outside of the USA, Western Europe and certain areas of AsiaPacific, high bandwidth either does not exist or is prohibitively expensive.

To offset this issue, all current video and audio content is highly compressed to save bandwidth, lower costs, and decrease transmission time. This compression lowers the quality of the product. Tremendous efforts have been successfully made over the years to drive the quality up and either keep the bandwidth the same, or even lower it. This has allowed us to experience tv on our cell phones now, a reasonable level of HD in the home via cable/satellite/telco, and thousands of songs in a postage-stamp sized device that can clip to your shirt pocket.
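
To put rough numbers on that trade-off, here is a quick back-of-the-envelope calculation (in Python; the delivered bitrates are approximate, typical figures for this era rather than exact specifications):

    # Uncompressed 1080p video: 1920 x 1080 pixels, 24 bits per pixel, 30 frames per second
    width, height, bits_per_pixel, fps = 1920, 1080, 24, 30
    uncompressed_mbps = width * height * bits_per_pixel * fps / 1e6   # about 1,493 Mbit/s

    # Approximate delivered video bitrates (illustrative, not exact)
    delivered_mbps = {"Blu-ray": 25, "cable/satellite HD": 12, "internet HD stream": 5}

    for channel, rate in delivered_mbps.items():
        print(f"{channel}: roughly {uncompressed_mbps / rate:.0f}:1 compression")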

We assume that this trajectory will only keep improving, with eventually the quality of video and audio getting close to what physical media can offer today.

So what is the point of this observation? That a truly enormous amount of data is now being produced and distributed digitally – more than most ever envisioned. The explosion of digital consumer devices, and even more importantly the complete dependence by all business, governmental and military functions in the world on computers and interconnected data, has pushed data consumption to levels that are truly astounding.

An article in the Huffington Post in April 2011 estimated the data consumption of the world at 10 zettabytes per year. A zettabyte is a million petabytes. A petabyte is a million gigabytes. Ok, you know that your iPhone holds a few gigabytes… now you get the picture. And this amount of data is escalating rapidly…

Now we are getting to the crux of this post:  how in the world is this amount of data checked for accuracy and quality? As was shown in my last post, the consequences of bad data can be devastating, or at the least just annoying if your movie has bad pictures or distorted sound. It may seem obvious once you see the very large amounts of data created and moved every year (or to put it another way:  over 300,000 gigabytes per second is the current rate of data consumption on the planet Earth) – most of the data (whether financial data, movies, music, etc.) is simply not checked at all.
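
For the curious, the arithmetic behind that per-second figure, taking the 10 zettabytes-per-year estimate above at face value, looks like this:

    ZETTABYTE = 1e21                      # bytes
    GIGABYTE = 1e9                        # bytes
    SECONDS_PER_YEAR = 365 * 24 * 3600

    annual_consumption = 10 * ZETTABYTE   # the estimate cited above
    gb_per_second = annual_consumption / SECONDS_PER_YEAR / GIGABYTE
    print(f"{gb_per_second:,.0f} GB every second")   # roughly 317,000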

We rely – perhaps more than we should – on the accuracy of the digital systems themselves to propagate this data correctly. Most of the data that is turned loose into our digital world had some level of quality check at some point – an article was proof-read before publishing; a movie was put through a quality control (QC) process; photos were examined, etc. However, the very nature of our fragmented and differentiated digital distribution system requires constant and frequent transformation of data.

Whether a movie was transcoded (converted from one digital format to another); financial data moved from one database to another; music encoded from a CD to an MP3 file – all these data transformations are often done automatically and then sent on their way – with no further checks on accuracy. This is not completely due to a blatant disregard for quality – it’s just the laws of physics:  there is simply no way to check this much data!

This is an insidious problem – no one sat back 20 years ago and said, “We’re going to build a digital universe that will basically become unmanageable in terms of quality control.” But here we are…

On the whole, the system works amazingly well. The amount of QC done on the front end (when the data is first created), coupled with the relatively high accuracy of digital systems, has allowed the ecosystem to work rather well – on the average. The issue is that individual errors really do matter – see my last post for some notable examples. The National Academy of Sciences reported in 2006 that the total error rate for dispensation of prescription medicine that caused injury or death in the United States was 0.5% – now that seems small until you do the math. In 2006 there were 300 million people in the US, so there were roughly 1.5 million people affected by these errors. If you were affected by a data error that caused injury or death you might look a bit differently on the issue of quality control in digital systems.

So the issue boils down to this:  is it possible to have a system (the ‘internet’ for lack of a better description – although here I really mean the conglomerated digital communications system for all forms of digital data) that can offer BOTH high quantities of data as well as ensure a reasonably high level of quality?

The short answer is yes, this is possible. It takes effort, attention and good process. I’ve coined a term for this – TS2 – (you can’t be a tekkie if you don’t invent acronyms… 🙂) meaning “Trusted Source / Trusted System”. At the highest level, if a data source (movie, article, financial data, etc.) is tested to a level of certainty before being introduced to a digital distribution system AND all of the workflows, transformative processes, etc. themselves are tested, then it can be mathematically proven that the end distributed result will have a high level of accuracy – with no further QC required.
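
To make the idea a little more concrete, here is a deliberately tiny Python sketch of the TS2 principle. The function names and the use of SHA-256 checksums are my own illustration, not a description of any particular production system: verify once at the trusted source, then verify only the integrity of each hand-off through the trusted system, rather than re-inspecting the content itself at every step.

    import hashlib

    def digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Trusted Source: the master arrives with a checksum produced at the original QC step.
    def ingest(master: bytes, qc_digest: str) -> bytes:
        if digest(master) != qc_digest:
            raise ValueError("master failed source verification - reject before distribution")
        return master

    # Trusted System: each hand-off between (separately tested) processes is integrity-checked,
    # so no full content QC needs to be repeated downstream.
    def handoff(payload: bytes, sender_digest: str) -> bytes:
        if digest(payload) != sender_digest:
            raise ValueError("transfer corrupted the payload - halt the pipeline")
        return payload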

This is not a trivial issue, as the hard part is testing and control of the system elements that touch the data as it travels to its destination. This cannot always be accomplished, but in many cases it is possible, and a well-managed approach can greatly increase the reliability of the delivered content. A rigorous ‘change control’ process must be introduced, with any subsequent changes to the system being well tested in a lab environment before being turned loose in the ‘real world.’ Some examples in the previous post show what happens if this is not done….

So, some food for thought… or to put it another way, it is possible to eat your digital cake!

… but to really screw it up you need a computer…

January 26, 2012 · by parasam

We’ve all heard this one, but I wanted to share a few ‘horror stories’ with you as a prequel to a blog I will post shortly on the incredible challenges that our new world of digital content puts to us in terms of quality control of content. Today we are swimming in an ocean of content:  movies, books, music, financial information, e-mail, etc. We collectively put an alarming level of trust in all this digital information – we mostly just assume it’s correct. But what if it is not?

The following ‘disasters’ are all true. Completely. They were originally compiled by Andrew Brandt of the IDG News Service in October 2008 as a commentary on the importance of good QA (Quality Assurance) teams in the IT industry.

<begin article>

Stupid QA tricks: Colossal testing oversights

What do you get when you add the human propensity to screw stuff up to the building of large-scale IT systems? What the military calls the force-multiplier effect — and the need for a cadre of top-notch QA engineers.

After all, if left unchecked, one person’s slip of the mouse can quickly turn into weeks of lost work, months of missing e-mails, or, in the worst cases, whole companies going bankrupt. And with IT infused in every aspect of business, doesn’t it pay to take quality assurance seriously?

Let’s face it. Everybody makes mistakes. Users, managers, admins – no one is immune to the colossally stupid IT miscue now and again. But when a fat-fingered script or a poor security practice goes unnoticed all the way through development and into production, the unsung heroes of IT, the QA engineers, take a very embarrassing center stage. It may seem cliché, but your IT development chain is only as strong as its weakest link. You better hope that weakest link is your QA team, as these five colossal testing oversights attest.

Code “typo” hides high risk of credit derivative

Testing oversight: Bug in financial risk assessment code

Consequence: Institutional investors are led to believe high-risk credit derivatives are highly desirable AAA-rated investments.

Here’s the kind of story we’re not hearing much about these days despite our present economic turmoil.

According to a report published in May 2008 in the Financial Times, Moody’s inadvertently overrated about $4 billion worth of debt instruments known as CPDOs (constant proportion debt obligations), due to a bug in its software. The company, which rates a wide variety of government bonds and obligation debts, underplayed the level of risk to investors as a result of the bug, a glitch that may have contributed to substantial investment losses among today’s reeling financial institutions. CPDOs were sold to large institutional investors beginning in 2006, during the height of the financial bubble, with promises of high returns — nearly 10 times those of prime European mortgage-backed bonds — at very little risk.

Internal Moody’s documents reviewed by reporters from the Financial Times, however, indicated that senior staff at Moody’s were aware in February 2007 that a glitch in some computer models rated CPDOs as much as 3.5 levels higher in the Moody’s metric than they should have been. As a result, Moody’s advertised CPDOs as significantly less risky than they actually were until the ratings were corrected in early 2008.

Institutional investors typically rely on ratings from at least two companies before they put significant money into a new financial product. Standard & Poor’s had previously rated CPDOs with its highest AAA rating, and stood by its evaluation. Moody’s AAA rating provided the critical second rating that spurred investors to begin purchasing CPDOs. But other bond-ratings firms didn’t rate CPDO transactions as highly; John Schiavetta, head of global structured credit at Derivative Fitch in New York, was quoted in the Financial Times in April 2007, saying, “We think the first generation of CPDO transactions are overrated.”

Among the U.S.-based financial institutions that put together CPDO portfolios, trying to cash in on what, in late 2006, seemed to be a gold rush in investments, were Lehman Brothers, Merrill Lynch, and J.P. Morgan. When first reported this past May, the Financial Times story described the bug in Moody’s rating system as “nothing more than a mathematical typo — a small glitch in a line of computer code.” But this glitch may have contributed in some measure to the disastrous financial situation all around us.

It’s kind of hard to come up with a snarky one-liner for a foul-up like that.

Testing tip: When testing something as critical as this, run commonsense trials: Throw variations of data at the formula, and make sure you get the expected result each time. You also have to audit your code periodically with an outside firm, to ensure that a vested insider hasn’t “accidentally” inserted a mathematical error that nets the insider millions. There’s no indication that such an inside job happened in this case, but such a scenario isn’t so far-fetched that it’s beyond the realm of possibility.

Sorry, Mr. Smith, you have cancer. Oh, you’re not Mr. Smith?

Testing oversight: Mismatched contact information in insurer’s customer database

Consequence: Blue Cross/Blue Shield sends 202,000 printed letters containing patient information and Social Security numbers to the wrong patients.

Of course, it sounded like a good idea at the time: Georgia’s largest health insurance company, with 3.1 million members, designed a system that would send patients information about how each visit was covered by their insurance. The EOB (explanation of benefits) letters would provide sensitive patient information, including payment and coverage details, as well as the name of the doctor or medical facility visited and the patient’s insurance ID number.

Most insurance companies send out EOBs after people receive medical treatment or visit a doctor, but the Georgia Blue Cross/Blue Shield system so muddled up its medical data management functionality that its members were sent other members’ sensitive patient information. According to The Atlanta Journal-Constitution, registered nurse Rhonda Bloschock, who is covered by Blue Cross/Blue Shield, received an envelope containing EOB letters for nine different people. Georgia State Insurance Commissioner John Oxendine described the gaffe to WALB news as “the worst breach of healthcare privacy I’ve seen in my 14 years in office.”

As for the roughly 6 percent of Georgia Blue Cross/Blue Shield customers who were affected, I’m sure they will be heartened by the statement provided by spokeswoman Cindy Sanders, who described the event as an isolated incident that “will not impact future EOB mailings.” It’s a mantra Georgia Blue Cross/Blue Shield customers can keep repeating to themselves for years as they constantly check their credit reports for signs of identity theft.

Testing tip: Merging databases is always tricky business, so it’s important to run a number of tests using a large sample set to ensure fields don’t get muddled together. The data set you use for testing should be large enough to stress the system as a normal database would, and the test data should be formatted in such a way to make it painfully obvious if anything is out of place. Never use the production database as your test set.

Where free shipping really, really isn’t free

Testing oversight: Widespread glitches in Web site upgrade

Consequence: Clothier J. Crew suffers huge financial losses and widespread customer dissatisfaction in wake of “upgrade” that blocks and fouls up customer orders for a month.

On June 28, 2008, engineers took down the Web site for clothes retailer J. Crew for 24 hours to perform an upgrade. In terms of the results of this effort, one might argue that the site did not in fact come back online for several weeks, even though it was still serving pages.

The company’s 10-Q filing summarized the problems: “During the second quarter of fiscal 2008 we implemented certain direct channel systems upgrades which impacted our ability to capture, process, ship and service customer orders.” That’s because the upgrade essentially prevented site visitors from doing anything other than look at photos of fashionable clothes.

Among the problems reported by customers was this whopper: A man who ordered some polo shirts received, instead, three child-size shirts and a bill for $44.97 for the shirts, plus $9,208.50 for shipping. And before you ask, no, they weren’t hand-delivered by a princess in an enchanted coach.

As a result, the company temporarily shut down e-mail marketing campaigns designed to drive business to the Web site. It also had to offer discounts, refunds, and other concessions to customers who couldn’t correct orders conducted online or who received partial or wrong orders.

But the biggest story is how the Web site upgrade affected the company’s bottom line. In a conference call with investors in August, CFO James Scully said, “The direct system upgrades did impact our second-quarter results more than we had anticipated and will also impact our third-quarter and fiscal-year results,” according to a transcript of the call.

Ouch.

Testing tip: When your company’s bottom line depends on the availability of your Web site, there’s no excuse for not running a thorough internal trial to probe the functionality of the entire site before you throw that update live to the world. Bring everyone on the Web team into the office, buy a bunch of pizzas, and tell them to click absolutely everything. And keep full backups of your old site’s front and back end, just in case you do somehow push a broken site update live and need to revert to save your company from unmitigated disaster.

Department of Corrections database inadvertently steps into the “user-generated” generation

Testing oversight: Trusted anonymous access to government database

Consequence: Database queries in URLs permit anyone with passing knowledge of SQL to pull down full personal information of anyone affiliated with the Oklahoma Department of Corrections, including prisoners, guards, and officers.

Anyone who’s ever been an employee of the Oklahoma prison system or an unwilling guest of the state now has an additional issue to worry about: identity theft. Thanks to a poorly programmed Web page designed to provide access to the Sexual and Violent Offender Registry, Web visitors were able to gain complete access to the entire Department of Corrections database.

Among the data stored in the database were names, addresses, Social Security numbers, medical histories, and e-mail addresses. But the problem was far worse than that: Anyone who knew how to craft SQL queries could have actually added information to the database.

Got an annoying neighbor who mows his lawn too early on a Sunday? How about a roommate who plays his music too loud, late into the night? Annoying ex-boyfriend or ex-girlfriend? Why not add them to the Sexual and Violent Offender Registry and watch them get rejected from jobs and be dragged off to the pokey after a routine traffic stop?

To add insult to injury, when Alex Papadimoulis, editor of dailywtf.com, alerted Oklahoma corrections officials about the security problem, they fixed it immediately — by making the SQL query case-sensitive.

So instead of adding “social_security_number” to the query string that retrieves that bit of information, it only worked if you used “Social_security_number.” Genius, huh? Nobody would ever have thought of that.

The database-on-a-Web-site issue is only a slice of the problems Oklahoma’s Department of Corrections faces when it comes to IT. An audit of the department published at the end of 2007 explains that the OMS (Offender Management System) is on the brink of collapse. “The current software is so out of date that it cannot reside on newer computer equipment and is maintained on an antiquated hardware platform that is becoming increasingly difficult to repair. A recent malfunction of this server took OMS down for over a full day while replacement parts were located. If this hardware ultimately fails, the agency will lose its most vital technology resource in the day-to-day management of the offender population.”

Testing tip: When you’re building an interface to a database that contains the sensitive data of hundreds or thousands of people, there’s no excuse for taking the least-expensive-coder route. Coding security into a Web application takes a programmer with practical experience. In this case, that didn’t happen. The money you spend on a secure site architecture at the beginning may save you from major embarrassment later, after some kid breaks your security model in five minutes. Remember, “security through obscurity” provides no security at all.

Busted big-time — by the bank

Testing oversight: Contact fields transposed during financial database migration

Consequence: Financial services firm sends detailed “secret” savings and charge card records made for mistresses to customers’ wives.

It’s hard to get away with an affair when the bank won’t play along. That’s what some high-roller clients of an unnamed financial services firm learned when the firm sent statements containing full details of account holders’ assets to their home addresses.

Although that might not sound like a recipe for disaster, this particular firm — which requires a $10 million minimum deposit to open an account — is in the business of providing, shall we say, a financial masquerade for those who wish to sock away cash they don’t want certain members of their marriage to know about. Customers who desire this kind of service typically had one (somewhat abridged) statement mailed home, and another, more detailed (read: incriminating) statement mailed to another address.

When the firm instituted a major upgrade to its customer-facing portal, however, a database migration error slipped through the cracks. The customer’s home address was used for the full, unabridged account statements. The nature and character of the discussions between account holder and spouse regarding charges for hotel rooms, expensive jewelry, flowers, and dinners are left as an exercise for the imagination. According to a source inside the company, the firm lost a number of wealthy clients and nearly $450 million in managed assets as a result of the flub. But the real winners in this case, apparently, were the divorce lawyers.

Testing tip: In this case, it seems like the engineers who designed the upgrade didn’t fully understand the ramifications of what they were doing. The bank executives who maintain this house of cards were ultimately at fault. Communicate the intricacies of your customers’ business relationships to your site designers and follow through with continuous oversight to ensure clients’ dirty laundry, err, sensitive data stays out of public view.

<end article>

While the above examples are certainly large and adversely affected many people’s lives and finances, these are a very, very small tip of a really, really large iceberg. Our digital world is the modern-day Titanic, steaming ahead while we party to the tunes in our earbuds and believe the 3D we see is real…

We humans like to believe what we see, what we’re told, what we feel. Most of us are trusting of the information we receive every day with little consideration of its accuracy. Only when something doesn’t work, or some calamity occurs due to incorrect data, do we stop to ask ourselves, “Is that right?”

I’ll conclude these thoughts in my next posting, but I will leave you with a clue that will provide security and sanity in the face of potential digital uncertainty:  use common sense.

CONTENT PROTECTION – Methods and Practices for protecting audiovisual content

January 25, 2012 · by parasam

[Note:  I originally wrote this in early 2009 as an introduction to the landscape of content protection. The audience at that time consisted of content owners and producers (studios, etc.) who had (and have!) concern over illegal reproduction and distribution of their copyrighted material – i.e. piracy. With this issue only becoming bigger, and as a follow-up to my recent article on proposed piracy legislation (SOPA-PIPA), I felt it timely to reprint this here. Although a few small technical details have been added to the ecosystem, essentially the primer is as accurate and germane today as it was 3 years ago. While this is somewhat technical I believe that it will be of interest to this wider audience.]

What is Content Protection?

  • The term ‘Copy Protection’ is often used to describe the technical aspect of Content Protection.
  • Copy Protection is a limiting and often inaccurate term, as technical forms of Content Protection often include more aspects than just limiting or prohibiting copies of content.
  • Other forms of Technical Content Protection include:
    • Display Protection
      • Restrictions on type, resolution, etc. of display devices
    • Transmission Protection
      • Restrictions on retransmission or forwarding of content
    • Fingerprinting, Watermarks, etc.
      • Forensic marks that allow tracing of versions of content

Content Protection is the enforcement of DRM

  • Digital Rights Management (DRM)
    • A more accurate term would be ‘Content Rights Management’ (CRM) as this describes what is actually being managed [the word digital is now so overused that we see digital shoes (with LEDs), digital batteries, etc.]
    • Simply put, DRM is a set of policies that describe how content may be used in alignment with contractual agreements to ensure content owners a fair return on their investment in creating and distributing their content.
    • These policies can be enforced by legal, social and technical means.
      • Legal enforcement is almost always ex post facto
        • Civil and criminal penalties brought against parties suspected of violating DRM policies
        • Typically used in circumstances involving significant financial losses, due to time and costs involved
        • Is the most reactive and never prevents policy misuse in the first place
      • Social enforcement is a complex array of measures that will be discussed later in this article
      • Technical enforcement is what most of us think about when we mention ‘Content Protection’ or ‘Copy Protection’
        • This is often a very proactive form of rights enforcement, as it can prevent misuse in the first place
        • It has costs, both in terms of actual cost of implementation and often a “social cost” in terms of customer alienation
        • Many forms of technical enforcement are perceived by customers as unfairly limiting their ‘fair use’ of content they have legally obtained

Technical Content Protection

  • To be effective, must have these attributes:
    • DRM policies must be well defined and be expressible with rules or formulas that are mechanically reproducible (see the small sketch after this list)
    • Implementations should match the environment in terms of complexity, cost, reliability and lifespan
      • Protecting Digital Cinema content is a different process than protecting a low-resolution streaming internet file
      • The costs of these techniques should be included in mastering or distribution, as consumers see no “value” in content protection – it is not a ‘feature’ they will pay for
      • There are challenges in the disparate environments in which content is transmitted and viewed
        • CE (Consumer Electronics) has a very different viewpoint (and price point) on content protection than the PC industry
    • A balance is required in terms of the level of effectiveness vs. cost and perceived “hassle factor”
      • A “layered defense” and the concept of using technical content protection as a significant “speed bump” as opposed to a “Berlin Wall” will be most efficient
      • A combination of all three content protection methods (legal, social and technical) will ultimately provide the best overall protection at a realistic cost
      • The goal should not be to prohibit any possible breach of DRM policy, but rather to maximize the ROI to the content owner/distributor at an acceptable cost
      • All technical content protection methods will eventually fail
        • As general technology and computational power moves forward, techniques that were “unbreakable” a few years in the past will be defeated in the future
      • The technical protection mechanisms and algorithms are highly asymmetrical in terms of “cat & mouse” – i.e. there are a few hundred developers and potentially millions or tens of millions of users working to defeat these systems
    • The methods employed should work across international boundaries and should to the greatest degree possible be agnostic to language, culture, custom and other localization issues
    • Any particular deployment of a content protection system (usually a combination of protected content and a licensed playback mechanism) must be persistent, particularly in the consumer electronics environment
      • For example, users will expect DVDs to play correctly in both PCs and DVD player appliances for many years to come
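
As a toy illustration of the first attribute in the list above (policies expressed as mechanically reproducible rules), a usage policy can be reduced to a small data structure and a check function. The fields shown here are hypothetical and are not taken from any real DRM specification:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class UsagePolicy:
        copies_allowed: int                 # 0 = no copies ever
        max_resolution: int                 # e.g. 1080 (vertical lines)
        expires: Optional[date] = None      # None = perpetual license

    def copy_permitted(policy: UsagePolicy, copies_made: int, today: date) -> bool:
        if policy.expires is not None and today > policy.expires:
            return False
        return copies_made < policy.copies_allowed

    movie_policy = UsagePolicy(copies_allowed=1, max_resolution=1080)
    print(copy_permitted(movie_policy, copies_made=0, today=date.today()))   # True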

Challenges for Technical Content Protection

  • Ubiquitous use
    • Users desire to consume content on a variety of devices of their choosing
      • “Anytime, anywhere, anyhow”
    • New technologies often outpace Rights Management policies
      • Example:  a DVD region-coded for North America cannot be played in Europe, but the same content can be purchased via iTunes, downloaded to an iPod and played anywhere in the world
    • How to define “home use” in the face of Wi-Fi, Wi-Max, IPsec tunneling to remote servers, etc.
  • Persistent Use
    • Technical schemes must continue to work long after original deployment
    • In the CE (Consumer Electronics) environment older technology seldom dies; it is “handed down” the economic ladder. Just as DC-3 airplanes are still routinely hauling cargo in South America and Alaska some 50 years after the end of their design lifetime, VHS and DVD players will be expected to work decades from now
    • Particular care must be taken with some newer schemes that are contemplating the need for a network connection – that may be very difficult to make persistent
  • Adaptable Use
    • This is one of the more difficult technical issues to overcome simply
    • The basic premise is the user legally purchases content, then desires to consume it personally across a large inventory of playback devices
      • TV
      • PC
      • iPod
      • Cell phone
      • Portable DVD/BD player
      • Networked DVD/BD player in the home
    • How do both Rights Management policies and technical content protection handle this use case?
    • This is a currently evolving area and will require adaptation by both content owners, content distributors as well as content protection designers and device manufacturers
    • What will the future bring?
      • One protection scheme for enforcing “home network use” analyzes the “hop time” [how long it takes a packet to get to a destination] – a long hop time assumes an “out of home” destination and this use would be disallowed (a minimal sketch of this check follows this list). How does this stop users in a peer-to-peer wireless environment who are close together (in a plane, at a party)?
      • DVD region codes were an interesting discussion when players were installed in the ISS (International Space Station)
      • A UK company (Techtronics) “de-regionalized” a Sony unit…
      • Technologies such as MOST (Media Oriented Systems Transport) – the new network system for vehicles
      • Sophisticated retransmission systems – such as SlingBox
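
Here is a minimal sketch of the “hop time” check mentioned in the list above, using a single TCP connection round trip as the proxy for packet travel time. The 7 ms threshold echoes the round-trip limit commonly cited for DTCP-IP localization, but treat both the threshold and the measurement method here as illustrative assumptions rather than a real implementation:

    import socket
    import time

    RTT_LIMIT_MS = 7   # assumed "in-home" round-trip limit (illustrative)

    def round_trip_ms(host: str, port: int = 80) -> float:
        """Approximate the hop time with one TCP connection round trip."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        return (time.perf_counter() - start) * 1000

    def looks_like_home_network(host: str) -> bool:
        # A long round trip is taken as evidence the sink device is outside the home -
        # which is exactly why two laptops sitting side by side on a plane can still pass.
        return round_trip_ms(host) <= RTT_LIMIT_MS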

Technical Content Protection Methods

  • Content protection schemes may be divided into several classes
    • Copy Protection – mechanisms to prevent or selectively restrict the ability of a user to make copies of the content
    • Display Protection – mechanisms to control various parameters of how content may be displayed
    • Transmission Protection – mechanisms to prevent or selectively restrict the ability of a user to retransmit content, or copy content that has been received from a transmission that is licensed for viewing but not recording
  • Legacy analog methods
    • APS (Analog Protection System) often known by its original developer name (Macrovision). Also known as Copyguard. This is a copy protection scheme primarily targeted at preventing VHS tape copies from VHS or DVD original content.
    • CGMS-A (Copy Generation Management System – Analog) is a copy protection scheme for analog television signals. It is in use by certain tv broadcasts, PVRs, DVRs, DVD players/recorders, D-VHS, STBs, Blu-ray and recent versions of TiVo. 2 bits in the VBI (Vertical Blanking Interval) carry CCI (Copy Control Information) that signals to the downstream device what it can copy (a small decoding sketch follows this list):
      • 00    CopyFreely  (unlimited copies allowed)
      • 01    CopyNoMore  (one copy made already, no more allowed)
      • 10    CopyOnce  (one copy allowed)
      • 11    CopyNever  (no copies allowed)
  • Current digital methods
    • CGMS-D (Copy Generation Management System – Digital). Basically the digital form of CGMS-A with the CCI bits inserted into the digital bitstream in defined locations instead of using analog vertical blanking real estate.
    • DTCP (Digital Transmission Content Protection) is designed for the “digital home” environment. This scheme links technologies such as BD/DVD player/recorders, SD/HD televisions, PCs, portable media players, etc. with encrypted channels to enforce Rights Management policies. Also known as “5C” for the 5 founding companies.
    • AACS (Advanced Access Content System), the copy protection scheme used by Blu-ray (BD) and other digital content distribution mechanisms. This is a sophisticated encryption and key management system.
    • HDCP (High-bandwidth Digital Content Protection) is really a form of display protection, although that use implies a form of copy protection as well. This technology restricts certain formats or resolutions from being displayed on non-compliant devices. Typically protected HD digital signals will only be routed to compliant display devices, not to recordable output ports. In this use case, only analog signals would be available at output ports.
    • Patronus – various copy protection schemes targeted at the DVD market:  anti-rip (for both burned and replicated disks) and CSS (Content Scramble System) for DTO (Download To Own)
    • CPRM (Content Protection for Recordable Media), a technology for protecting content on recordable DVDs and SD memory cards
    • CPPM (Content Protection for Pre-recorded Media), a technology for protecting content on DVD audio and other pre-recorded disks
    • CPSA (Content Protection Systems Architecture) which defines an overall framework for integration of many of the above systems
    • CPXM (Content Protection for eXtended Media) An extension of CPRM to other forms of media, most often SD memory cards and similar devices. Allows licensed content to be consumed by many devices that can load the SD card (or other storage medium)
    • CMLA (Content Management License Administration), a consortium of Intel, Nokia, Panasonic and Samsung that administers and provides key management for mobile handsets and other devices that employ the OMA (Open Mobile Alliance) spec, allowing the distribution of protected content to mobile devices.
    • DTLA (Digital Transmission Licensing Administrator) provides the administration and key management for DTCP.
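
The CGMS copy-control signaling above reduces to just two bits; a hypothetical compliant recorder might interpret them along these lines (a sketch only, not code from any licensed implementation):

    CCI_STATES = {
        0b00: "CopyFreely (unlimited copies allowed)",
        0b01: "CopyNoMore (one copy already made, no more allowed)",
        0b10: "CopyOnce (one copy allowed)",
        0b11: "CopyNever (no copies allowed)",
    }

    def recording_allowed(cci: int) -> bool:
        """Decide whether a compliant device may record content carrying this CCI value."""
        if cci == 0b00:
            return True
        if cci == 0b10:
            return True    # allowed, but the copy itself must be re-marked CopyNoMore (0b01)
        return False       # CopyNoMore and CopyNever

    for value, meaning in CCI_STATES.items():
        print(f"{value:02b}  {meaning}: record = {recording_allowed(value)}")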

Home Networking – the DTCP model

  • As one of the most deployed content protection systems, a further explanation of the DTCP environment:
    • DTCP works in conjunction with other content protection technologies to provide an unbroken chain of encrypted content from content provider to the display device
    • Each piece has its own key management system and protects a portion of the distribution chain:
      • CA (Conditional Access) – cable/satellite/telco
      • DTCP – the PVR/DVR/DVD recorder
      • CPRM – recordable disks, personal computer
      • HDCP – display device

DTCP and Transmission Protection

  • One important feature of DTCP is the enabling of the so-called “Broadcast Flag”
    • Accepted by the FCC as an “approved technology”, the CCI information embedded in the DTV (Digital Television) signal is used by DTCP-compliant devices to regulate the use of digitally broadcast content
    • The technology will allow free-to-air digital broadcast for live viewing while at the same time prohibiting recording or retransmission of the digital signal.

DTCP and the future

  • A number of recent extensions to the original DTCP standard have been published:
    • The original DTCP standard was designed for the first digital interface implemented on set top boxes: FireWire (1394a).
    • The original standard has now been extended to 7 new types of interfaces:
      • USB
      • MOST
      • Bluetooth
      • i.Link & IDB1394 (FireWire for cars)
      • IP
      • AL (Additional Localization)
        • New restrictions to ensure all DTCP devices are in one home
      • WirelessHD

DTCP Summary

  • With probably the largest installed base of devices, DTCP is the backbone of most “home digital network content protection” schemes in use today.
    • As DTCP only protects data transmission interfaces, the other ‘partners’ (CA, CSS, CPRM, CPPM, HDCP) are all required to provide the end-to-end protection from content source to the display screen.
    • The extensions that govern IP and WirelessHD in particular allow the protection of HD content in the home.
    • The underlying design principles of DTCP are not limited by bandwidth or resolution; improved future implementations will undoubtedly keep pace with advances in content and display technology.

Underlying mechanisms that enable Technical Content Protection

  • All forms of digital content protection consist of two parts:
    • Some form of encryption, so that the content is unusable unless it is decoded before display, copying or retransmission
    • A repeatable and reliable method for decrypting the content for allowed use in the presence of a suitable key – the presence of which is assumed to be equivalent to a license to perform the allowed actions
    • The encryption part of the process uses well-known and proven methods from the cryptographic community that are appropriate for this task:
    • The cipher (encryption algorithm) must be robust, reliable, persistent, immutable and easily implemented
    • The encryption/decryption process must be fast
      • At a minimum must support real-time crypto at any required resolution to allow for broadcast and playback
      • Ideally should allow for significantly faster than real-time encryption to maximize the efficiency of production and distribution entities that must handle large amounts of content quickly
    • All encryption techniques use a process that can be simplified to the following:
      • Content [C] and a key [K] are inputs to an encryption process [E], which produces encrypted content [CE]
    • In a similar but inverse action, decryption uses a process:
    • Encrypted content [CE], and a key [K] are inputs to a decryption process [D], which produces a replica of the original content [C]

    • Encryption methods
      • This is a huge science in and of itself. Leaving the high mathematics behind, a form of cipher known as a symmetrical cipher is best suited for encryption of large amounts of audiovisual content.
      • It is fast, secure and can be implemented in hardware or software.
      • Many forms of symmetrical ciphers exist; the most common is a block cipher known as AES, which is currently used in 3 variants (cipher strengths):  AES128, AES192 and AES256
      • AES (Advanced Encryption Standard) is approved by NIST (the National Institute of Standards and Technology) for military, government and civilian use. The 128-bit variant is more than secure enough for protecting audiovisual content, and the encryption meets the speed requirements for video.
    • Keys
      • Symmetrical block ciphers (such as AES128) use the principle of a “shared secret key”
      • The challenge is how to create and manage keys that can be kept secret while being used to encrypt and decrypt content in many places with devices as diverse as DVD players, PCs, set top boxes, etc.
      • In practice, this is an enormously complex process, but this has been solved and implemented in a number of different DRM environments including all DTCP-compliant devices, most content application software available on PCs, etc.
      • It is possible to revoke keys (that is, deny their future ability to decode content) if the implementation allows for that. This makes it possible for known compromised keys to no longer be able to decrypt content.
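
A minimal sketch of the encrypt/decrypt flow described above (content [C] plus key [K] producing encrypted content [CE], and back again), using AES-128 in CTR mode from the third-party Python ‘cryptography’ package. The package choice and the bare-bones key handling are my own assumptions; real systems wrap this in licensed key-management hardware and protocols:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)      # [K] the 128-bit shared secret
    nonce = os.urandom(16)    # per-item counter block, carried alongside the content

    def encrypt(content: bytes) -> bytes:               # the process [E]
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return enc.update(content) + enc.finalize()     # [CE]

    def decrypt(ce: bytes) -> bytes:                    # the process [D]
        dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
        return dec.update(ce) + dec.finalize()          # replica of [C]

    frame = b"one GOP of compressed video..."
    assert decrypt(encrypt(frame)) == frame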

Forensics

  • Forensic science (often shortened to forensics) is the application of a broad spectrum of sciences to answer questions of interest to a legal system.
    • Although technically not a form of Content Protection, the technologies associated with forensics in relation to audiovisual content (watermarking, fingerprinting, hashing, etc.) are vitally important as tools to support Legal Content Protection.
    • Without the verification and proof that Content Forensics can offer, it would be impossible to bring civil or criminal charges against parties suspected of subverting DRM agreements.
  • Watermarking
    • A method of embedding a permanent mark into content such that this mark, if recovered from content in the field, is proof that the content is either the original content or a copy of that content.
    • There are two forms of watermark:
      • Visible Watermarking, often known as a “bug” or a “burn-in”
        • This is frequently used by tv broadcasters to define ownership and copyright on material
        • Also used on screeners and other preview material where the visual detraction is secondary to rendering the content unsuitable for general use or possible resale.
        • Is subject to compromise due to:
          • Since it is visible, the presence of a watermark is known
          • Can be covered or removed without evidence of this action
      • Invisible Watermarking
        • The watermark can be patterns, glyphs or other visual information that can be recognized when looked for
        • Various visual techniques are used to render the watermarks “invisible” to the end user when watching or listening to content for entertainment.
        • Since the exact type, placement, timing and other information on embedding the watermark is known by the watermarking authority, this information is used during forensic recovery to assist in reading the embedded watermarks.
        • Frequently many versions of a watermark are used on a single source item of content, in order to narrow the distribution channel represented by a given watermark.
        • Challenges to invisible watermarking
          • Users attempting to subvert invisible watermarks have become very sophisticated and a number of attacks are now common against embedded watermarks.
          • A high quality watermarking method must offer the following capabilities:
            • Robustness against attacks such as geometric rotation, random bending, cropping, contrast alteration, time compression/expansion and re-encoding using different codecs or bit rates.
            • Robustness against the “analog hole” is also a requirement of a high quality watermark. (The “analog hole” is a hole in the security chain that could be broken by taking a new video of the playback of the original content, such as a camcorder in a theater).
            • Security of the watermark against competent attacks such as image region detection, collusion (parallel comparison and averaging of watermarked materials) and repetition detection.
        • Invisible watermarking must be “invisible”
          • The watermark must not degrade the image nor be easily detectable by eye (if one is not looking for it)
          • Various algorithms are commonly used to select geometric areas of certain frames that are better suited than others to “hide” watermarks. In addition, “tube” or “sliding” techniques can be applied to move the watermark in subsequent frames as an object in the frame moves. This lessens the chance for visual detection.
  • Fingerprinting
    • As opposed to watermarking, fingerprinting makes no prior “marks” to the source content, but rather measures the source content in a very precise way that allows subsequent comparison to forensically prove that the content is identical.
    • Both video and audio can be fingerprinted, but video is of more use and is more common. Audio is easily manipulated, and sufficient changes can be made to “break” a fingerprint comparison without rendering the audio unusable.
    • The video fingerprint files are quite small, and can be stored in databases and used for monitoring of internet sites, broadcasts, DVDs, etc.
  • Hashing
    • In this context, cryptographic hash functions have been explored as a form of “digital fingerprint”
    • This is different from the “content fingerprinting” discussed in the previous section: a hash value is a purely numerical value derived via formula from an analysis of all the bits in a digital file.
    • If the hash values of two files are the same, the files are (for all practical purposes) identical.
    • Hashing turns out to be unreliable for use as a forensic tool in this context:
      • A change of just a few bits in an entire file (such as trimming 1 second off the runtime of a movie) will cause a different hash value to be computed.
        • Essentially the same content can have multiple hash values, therefore the hash cannot be used as forensic evidence.
        • Content fingerprinting or watermarking are superior techniques in this regard.
    • Cryptographic hashes have great value in the underlying mechanisms of technical content protection; they are just not suitable as an alternative to watermarking or fingerprinting. They are used, for example:
      • As checksums to detect accidental corruption of critical information (encrypted keys, master key blocks, etc.)
      • As part of the technology that allows “digital signatures”, a method of ensuring data has not been changed.
      • As a part of MACs (Message Authentication Codes) used to verify exchanges of privileged data.
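
A tiny experiment with Python’s standard hashlib shows why a cryptographic hash cannot stand in for a content fingerprint: flip a single bit and the digest changes completely, even though a viewer would call the two files “the same movie”.

    import hashlib

    movie = bytearray(b"...stand-in for many gigabytes of audiovisual content...")
    original = hashlib.sha256(movie).hexdigest()

    movie[0] ^= 0x01                     # change one bit (a one-second trim would change far more)
    altered = hashlib.sha256(movie).hexdigest()

    print(original == altered)           # False: the two digests have nothing useful in common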

Social Content Protection

  • Of the three forms of Rights Management enforcement (Legal, Social, Technical) this is probably the least recognized but if applied properly, the most effective form of enforcement
    • All the forms of Content Protection discussed overlap with each other to some extent
      • Forensics, a part of Technical protection, is what allows Legal protection to work; it gives the basis for claims.
      • Legal protection, in the form of original agreements, precedes all other forms, as Rights cannot be enforced until they are accurately described and agreed upon.
      • Social content protection is an aggregate of methods such as business policies, release strategies, pricing and distribution strategies and similar practices.

Back to the future… what is the goal of content protection?

    • It’s really to protect the future revenues of proprietary content – to achieve the projected ROI on this asset
    • Ultimately, the most efficient method (or combination of methods) will demonstrate simplicity, low cost, wide user acceptance, ease of deployment and maintenance, and robustness in the face of current and future attempts at subversion.
    • The solution will be heterogeneous and will differentiate across various needs and groups – there is no “one size fits all” in terms of content protection.
    • Recognize the differences in content to be protected
      • Much content is ephemeral; it does not hold value for long
        • Newscasts, commercials, user-contributed content that is topical in nature, etc.
        • This content can be weakly protected, or left unprotected
      • Some content has a long lifespan and is deserving of strong protection
        • Feature movies, music, books, works of art, etc.
        • Even in this category, there will be differentiation:
          • The bottom line is that assets with a high net worth demand a higher level of protection
    • Recognize that effective content protection is a shared responsibility
    • It cannot be universally accomplished at the point of origin
    • Effective content protection involves content creators, owners, distributors (physical and online), package design, hardware and software designers and manufacturers, etc.
    • Each step must integrate successfully or a “break in the chain” can occur, which can be exploited by those that wish to subvert content protection.
    • Understand that most users see content protection as a “negative” – the various forms of social and technical content protection are perceived as “roadblocks” to the user’s enjoyment of content.
    • Purchasing a DVD while overseas on vacation and finding it will not play in their home DVD player;
    • Discovering that they have purchased the same content 3 or 4 times in order to play in various devices in their home, car, person (VHS, DVD, Blu-ray, iPod, Zune)
    • Purchasing a Blu-ray movie, playing it back in the user’s laptop (since they don’t have a stand-alone BD player and the laptop has a BD drive), finding it plays on the laptop screen but when connected via DVI to their large LCD display nothing is visible, and no error message is displayed [in this case HDCP content protection has disallowed the digital output from the laptop, but the user thinks either their laptop or monitor is broken]
    • One of the least successful attributes of technical content protection is notifying users when content copying/display/retransmission is disallowed.
    • Understand the history and philosophy of content protection in order to get the best worldview on the full ecosystem of this issue
      • The social dilemma is this:  in the past, all content was free as we had only an oral tradition. There was no recording, the only “cost” was that of moving your eyes and ears to where the content was being created (play, song, speech).
      • In order to share content across a wider audience (and to experience content in its original form, as opposed to how uncle Harry described what he heard…) books were invented. This allowed distribution across distance, time and language. The cost of producing was borne by the user (sale of books).
      • Eventually the concept of copyright was formed, a radical idea at the time, as it enriched content owners as well as distributors. The original reason for copyrights was to protect content creators/owners from unscrupulous distributors, not end users.
      • Similar protections were later applied to artwork, music, films, photographs, software and even inventions (in the form of patents).
      • Current patent law protects original inventions for 20 years, copyrights by individual authors survive for the life of the author plus 70 years, “works for hire” [just about all music and movies today] are protected for 120 years from creation.
      • Both patents and copyrights have no value except in the face of enforcement.
      • The IPP (Intellectual Property Protection) business has grown into a multi-billion dollar industry

Social Content Protection – New Ideas

  • The scale of the problem may not be accurately stated
  • Current “losses” claimed by content owners (whether the content is software, film, books or music, the issue is identical) assume every pirated or “use out of license” occurrence would have produced the equivalent income of a copy sold at retail.
  • This is unrealistic when a majority of the world’s population has insufficient earning power to purchase content at first-world prices. For example, Indonesia, a country with high rates of DVD piracy, has an average per capita income of US$150 per month. Given the choice of a $15 legitimate DVD or a $1 pirated copy, the vast majority will either do without or purchase an illegal copy.
  • With burgeoning markets in India, China and other non-European countries, a reconsideration of content protection is in order.
  • Even in North America and Western Europe “casual piracy” has become endemic due to high bandwidth pipes, fast PCs, and file sharing networks. These technologies will not go away; they will only get better.
  • A different solution is required – a mix of concepts, business strategies and technology that together will provide a realistic ROI without an excessive cost.
  • Old models that are not working must be retired.
  • New “Social Content Protection” schemes to consider:
    • Differential pricing based on affordability (price localization)
    • Differential pricing based on package (multi-level packaging)
      • Top tier DVD has full clamshell, insert, bonus material
      • Low tier has basic DVD only, no bonus material, paper slipcover
    • Differential pricing based on resolution (for online)
      • Top tier is 16:9 @ 1920×1080, 5.1, etc.
      • Lower tier is 16:9 @ 720×408, stereo, etc.
    • The bottom line is to pair strong technical protection with pricing matched to what users can actually afford, so that legally purchasing content offers less resistance than looking for alternatives
  • Most “alternatively supplied” content is of inferior quality; this can become a marketing advantage.
  • Although file-sharing networks and other technological ‘work-arounds’ exist today, they can be cumbersome and require a certain level of skill; many users will opt away from them if a more attractive option is presented.
  • The current economic situation will be exploited; it only remains to be seen whether that is by “alternative distributors” (aka Blackbeard) or by clever legitimate content owners and distributors.
  • The evolving industry practice of “Day and Date” releasing is another useful tactic.
  • As traditional DVD sales continue to flatten, careful consideration of alternatives to ensure an increase in legal sales will be necessary.
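To make the tiered-pricing ideas above concrete, below is a small, purely hypothetical sketch (in Python) of how package or resolution tiers might be combined with price localization. The tier names, prices and affordability factors are invented for illustration and are not drawn from any real catalog.

# Hypothetical sketch of differential pricing: package/resolution tiers scaled
# by a local affordability factor. All names and numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    base_price_usd: float  # reference price in a high-income market

TIERS = [
    Tier("premium", 19.99),   # e.g. full clamshell + bonus material, or 1080p / 5.1
    Tier("standard", 9.99),   # e.g. disc only, paper slipcover, or 720p / stereo
    Tier("basic", 2.99),      # e.g. low-resolution stream, stereo audio
]

# Illustrative affordability index: 1.0 = reference-market purchasing power.
AFFORDABILITY = {"US": 1.00, "DE": 0.95, "IN": 0.10, "ID": 0.08}

def local_price(tier: Tier, country: str) -> float:
    """Scale the reference price by local affordability, with a small price floor."""
    factor = AFFORDABILITY.get(country, 1.0)
    return round(max(tier.base_price_usd * factor, 0.99), 2)

for tier in TIERS:
    print(tier.name, {c: local_price(tier, c) for c in AFFORDABILITY})

The point of the sketch is simply that price becomes a function of both the package offered and the buyer’s local earning power, rather than a single worldwide figure.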

The Physiology of 3D Viewing

January 23, 2012 · by parasam

[Note:  this blog was originally written as an internal commentary for peers in the post-production community who work with 3-D, but the issues raised are of interest to a wider community, so I have posted it here in its original format]

This is intended as a very brief commentary on “how humans see what we commonly call 3-D.” The purpose is to help those of us who work with this kind of service gain a better understanding of some of the issues surrounding the technique, and why it may fail for certain viewers. As a service provider, it can only help us to better comprehend possible feedback from customers, viewers, QC operators, etc., as it relates to an inability or difficulty in resolving depth perception as used in theatrical and home viewing of “3-D.”

This is a topic that could fill a large library, and is littered with contradictions, opinions, misconceptions, differing points of view and just plain garbage. I am most likely painting a large bullseye on my back by even starting a blog on this topic… but having a hard head and even thicker skin I will carry on… At the least, maybe this will provoke some commentary, contrary opinion, correction and will further enhance our collective understanding.

This particular article will focus solely on the varying physiological factors that can reduce or prevent a person’s ability to effectively perceive “3-D” – I will make no attempt here to discuss how we actually perceive it. That will be a topic for another post – once I figure out how to reduce that issue to a blog-sized comment.

[Data below is from the National Institute of Health, THX and various vendors of 3-D display devices]

What we commonly call “3-D” is actually nothing more than “2-D plus depth” – a true “3-D” experience would require holography or some similar technology in order to reproduce a continual change of point of view as the viewer moved in space relative to the object (screen). We use the practice of stereoscopy to simulate depth, and furthermore utilize a ‘side-effect’ of our binocular vision that allows us this depth simulation.

The HVS (Human Visual System) did not evolve in order for us to see Avatar in 3-D… our binocular visual alignment served a much more fundamental task:  how to find food in a forest (our habitat of choice a few years back), and how to avoid being someone else’s dinner at the same time. There is insufficient time or space to delve into this here, but suffice it to say that binocular vision with an interocular distance of 6.5 cm (approx. 2.5”) allows one to see around leaves in a forested environment – thereby more clearly seeing things at a distance that might make a good meal – or helping one avoid the same fate.

A side effect of this stereoscopic vision is the perception of depth. By adjusting the images presented to each eye to simulate the offsets that track our normal viewing experience, we can trick the brain into believing certain objects are closer to or farther away from us. For this effect to work properly, the viewer needs a relatively healthy visual system, and there are other factors as well that can reduce or prevent the “3-D experience” from being appreciated.
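As a rough illustration of the geometry involved (this sketch is my own addition and assumes an idealized viewer with a 6.5 cm interocular distance), the perceived distance of an object depends on the horizontal offset, or parallax, between its left-eye and right-eye images on the screen:

# Minimal sketch of stereoscopic depth geometry (idealized viewer assumed).
# Positive parallax (right-eye image to the right of the left-eye image) places
# an object behind the screen plane; negative parallax places it in front.

EYE_SEPARATION_M = 0.065  # ~6.5 cm interocular distance

def perceived_distance(screen_distance_m: float, parallax_m: float) -> float:
    """Viewer-to-object distance implied by similar triangles."""
    if parallax_m >= EYE_SEPARATION_M:
        return float("inf")  # parallax at or beyond eye separation reads as 'infinitely far'
    return screen_distance_m * EYE_SEPARATION_M / (EYE_SEPARATION_M - parallax_m)

# With a screen 3 m away:
for p in (-0.02, 0.0, 0.02, 0.05):
    print(f"parallax {p * 100:+.0f} cm -> perceived at {perceived_distance(3.0, p):.2f} m")
# Zero parallax sits on the screen plane; small offsets push objects in front of
# or behind it -- which is all the "depth" the stereoscopic trick provides.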

It may seem obvious, but blindness in one eye negates the 3-D experience. Approximately 2% of the world population is either blind in one eye, or has sufficiently impaired vision in one eye to prevent stereoscopic vision from working. Advanced Macular Degeneration (a retinal disease) also significantly impairs 3-D vision, and another 1% of the population (<60 yrs old) has this condition. Cataracts negatively interfere with stereo vision as well; approximately 9% of the population (<60 yrs old) is affected by this disease to a degree that impacts 3-D vision. Red-Green color blindness also affects depth perception in a stereoscopic environment (this effect holds true even for non-anaglyphic presentations, for reasons that are not fully understood at this time) – and that affects another 7-10% of the population.

Even after statistically combining these results to allow for the fact that some viewers will have more than one of the disorders listed above, approximately 18% of the population is physiologically unable, to some degree, to correctly perceive stereoscopic presentations. In addition to the purely mechanical issues with the visual system listed above, there are further issues that affect the neurological connection to the brain, the brain’s perception of the visual data presented, or other interfering neurological factors. Conditions such as epilepsy, vertigo, cognitive processing difficulty, a certain percentage of autistic spectrum disorders, diabetes and others can all reduce or eliminate the possibility of stereoscopic viewing, or make it unenjoyable.
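For readers curious how those individual percentages combine into the ~18% figure, the back-of-envelope calculation below (my own, not taken from the cited sources) assumes the conditions occur independently in the population:

# Rough combination of the prevalence figures quoted above, assuming the
# conditions are statistically independent (an assumption, not a given).

conditions = {
    "blindness / severe impairment in one eye": 0.02,
    "advanced macular degeneration (<60 yrs)": 0.01,
    "cataracts affecting stereo vision (<60 yrs)": 0.09,
    "red-green color blindness (low end of 7-10%)": 0.07,
}

p_unaffected = 1.0
for p in conditions.values():
    p_unaffected *= (1.0 - p)  # probability of having none of the conditions

p_affected = 1.0 - p_unaffected
print(f"Estimated share of viewers with at least one condition: {p_affected:.1%}")
# ~17.9% with these inputs; using the 10% figure for color blindness raises it
# to about 20.5%, bracketing the ~18% quoted above.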

The bottom line is that somewhere between 15% and 30% of viewers will, for a multitude of reasons, have a reduced capability for the 3-D experience. That of course leaves many millions of viewers (and their dollars) available for the consumption of this service and the associated products. So this should not be seen as a deterrent to the continuing proliferation of 3-D content and the services that distribute it, but rather as one more knowledge point to have in hand when working in this field.

Technology and the Art of Storytelling

January 23, 2012 · by parasam

Why should we talk about this, particularly in relation to content distribution? Isn’t most of the art performed within production and creative services?

I would argue that as much creativity, craft and artistic design goes into preserving and re-creating the intention of the original theatrical story across the plethora of devices and transmission paths as was used in the original post-production process. At the end of the day, the goal of the content creator is to provoke a set of responses within the human brain, excited by stimulus to the eyes and ears. (Currently our movies have made little use of smell, taste and touch… maybe that is next, after 3-D becomes old hat??)

The field of Human Perception Design has recognized that the eye/ear/brain interface is rather easily fooled. If this were not the case, all modern compression schemes would fail to provide an equivalent experience to the observer relative to the original uncompressed material. While it is not technically feasible to match the viewing experience of the theatre with that of an iPod, it is possible to simulate enough of the original experience that consuming content in this form does not stand in the way of the storytelling.

In many ways, the theatrical viewing experience is more tolerant of errors, and is certainly less difficult to produce for, than mobile devices or internet-connected televisions at low bandwidths. Theatrical viewing is a closed system, with very high bandwidth, no distractions (such as stray light or other noise), and an immersive screen size (the field of vision is fully occupied). Even an HD TV in the home must deal with external unbalanced light sources, an imperfect acoustical environment, issues with the dynamic range of both video and audio, and other parameters that can reduce the effectiveness of the storytelling process. This makes any imperfections more noticeable, since these issues have typically already removed all the “buffer” between following the story and having the viewing experience interrupted by distractions (such as noticeable artifacts in the picture or sound).

Even though it is far less likely to happen in the theatre, a momentary visual artifact (say blockiness in the picture, or a one-frame freeze) will not usually break the concentration of the viewer, as they are immersed in the dark room / big screen / loud sound chamber – there is so much “presence” of the story surrounding one that this ‘mass of experience’ carries one through these momentary distractions. The same level of error in a mobile or home viewing device will often interrupt the viewing experience – i.e. the distraction is noticed to the point where, even for a moment, the viewer’s concentration breaks from the story to the error.

When one adds in all the issues that confront the Media Services process (low bandwidth, restricted color gamut of both codecs and delivery devices, visual errors due to compression artifacts, etc.), it is easy to see that extraordinary measures must often be brought to bear during the content delivery activity in order to preserve the story.

Typical challenges that affect content in this context are:  conversions from interlaced to progressive; frame rate conversions; resolution changes; codec changes; bit rate constraints; video and audio dynamic range compression; aspect ratio reformatting; audio channel downmixing; etc. There is often more than one way to resolve a given issue, and other design parameters, such as cost, time efficiency and facility capacity, must be factored in.
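As a concrete (and purely illustrative) example of such a conversion chain, the sketch below strings several of these operations together in a single ffmpeg transcode driven from Python. The specific filters, rates, bit rates and file names are assumptions chosen for illustration, not a recommended recipe.

# Illustrative transcode chain: deinterlace, rescale, change frame rate,
# constrain bit rate, downmix 5.1 to stereo and apply mild audio compression.
# Assumes ffmpeg is installed; settings and file names are hypothetical.

import subprocess

def transcode_for_mobile(src: str, dst: str) -> None:
    cmd = [
        "ffmpeg", "-i", src,
        "-vf", "yadif,scale=1280:720",   # interlaced -> progressive, resolution change
        "-r", "25",                      # frame rate conversion
        "-c:v", "libx264",
        "-b:v", "2500k", "-maxrate", "3000k", "-bufsize", "6000k",  # bit rate constraints
        "-ac", "2",                      # downmix to stereo
        "-af", "acompressor",            # audio dynamic-range compression
        "-c:a", "aac", "-b:a", "128k",
        dst,
    ]
    subprocess.run(cmd, check=True)

# transcode_for_mobile("master_1080i.mxf", "mobile_720p.mp4")  # hypothetical file names

Each of these steps is a point where the story can be degraded, which is why the choices are made with the target device and bandwidth in mind.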

Technology should, for the most part, be invisible and simply support the storytelling process. Just as a white pole stuck in a dune at White Sands park is almost invisible at noon were it not for the shadow it throws, the shadow of technology should be all that is visible – just enough to outline and focus the viewer on the story.

Comments on SOPA and PIPA

January 23, 2012 · by parasam

The Stop Online Piracy Act (SOPA) and Protect Intellectual Property Act (PIPA) have received much attention recently. As is often the case with large-scale debate on proposed legislation, the facts and underlying issues can be obscured by emotion and shallow sound-bites. The issues are real but the current proposals to solve the problem are reactive in nature and do not fully address the fundamental challenge.

[Disclaimer:  I currently am employed by Technicolor, a major post-production firm that derives substantial income from the motion-picture industry and associated content owners / distributors. These entities, as well as my employer itself, experience tangible losses from piracy and other methods of intellectual property theft. However, the comments that follow are my personal opinions and do not reflect in any way the position of my employer or any other firm with which I do business.]

For those that need a brief introduction to these two bills that are currently in legislative process:  both bills are similar, and – if enacted – would allow enforcement of the following actions to reduce piracy of goods and services offered via the internet, primarily from off-shore companies.

  1. In one way or another, US-based Internet Service Providers (ISPs) would be required to block the links to any foreign-based server entity that had been identified as infringing on copyrighted material.
  2. Payment providers, advertisers and search engines would be required to cease doing business with foreign-based server sites that infringed on copyrighted material.

The intent behind this legislation is to block access to the sites for US-based consumers, and to remove or substantially reduce the economic returns that could be generated from US-based consumers on behalf of the offending web sites.

For further details on the bills, with some fairly objective comments on both the pros and cons of the bills, check this link. [I have no endorsement of this site, just found it to be reasonable and factual when compared with the wording of the bills themselves.]

The issues surrounding “piracy” (aka theft of intellectual or physical property) are complex. The practice of piracy has been with us since inter-cultural commerce began, with the first documented case being the exploits of the Sea Peoples who threatened the Aegean and Mediterranean seas in the 14th century BC.

[Image:  Capture of Blackbeard]

With the historical definition of piracy constrained to theft ‘on the high seas’ – i.e. areas of ocean that are international, or beyond the jurisdiction of any one nation-state – the extension of the term ‘piracy’ to describe theft based within the international ocean of the internet is entirely appropriate.

While the SOPA and PIPA bills are focused on ‘virtual’ property (movies, software, games and other forms of property that can be downloaded from the internet), modern piracy also affects many physical goods, from oil and other raw materials seized by Somali pirates off the east coast of Africa to stolen or counterfeit perfume, clothing and other tangibles offered for sale over the internet. The worst form of piracy today is human kidnapping on the high seas for ransom:  more than 1,100 people were kidnapped by pirates in 2010, and over 300 people were being held hostage for ransom at the time of this article (Jan 2012). The larger issue of piracy is of major international concern, and will require proactive and persistent efforts to mitigate this threat.

While the solutions brought forward by these two bills are well-intentioned, they are reactive in nature and fall short of a practical solution. In addition, they suffer from the same heavy-handed methods that often accompany legislative attempts to modify human behavior. Without regard to any of the underlying issues, and taking no sides in terms of this commentary, governmental attempts to legislate alcohol and drug consumption, reproductive behavior and cohabitation lifestyles have all been either outright failures or fraught with difficulty and have produced little or none of the desired results.

Each side in this current debate has exaggerated both the risks and rewards of the proposed legislation. From the content owner’s side the statements of financial losses are overblown and are in fact very difficult to quantify. One of the most erroneous bases for financial computation of losses is the assumption that every pirated transaction would have been money that the studio or other content owner would have received if the content had been legally purchased. This is not supported by fact. Unfortunately many pirated transactions are motivated by cost (either very low or free) – if the user had to pay for the content they simply would choose not to purchase. It is very difficult to assess the amount of pirated transactions, although many attempts are made to quantify this value.

What certainly can be said is that real losses do occur and they are substantial. However, it would better serve both the content owners, and those that desire to assist these rightsholders, to pursue a more conservative and accurate assessment of losses. To achieve a practical solution to the challenge of Intellectual Property (IP) theft, it must be treated as a business use case, with the moral aspects of the issue set aside. The history of humanity is littered with the carcasses of failed attempts to legislate morality. Judgments of behavior do not generate cash; collection of revenue is the only mechanism that actually puts money in the bank.

Any action in commerce has a financial cost. In order to make an informed choice on the efficacy of a proposed action, the cost must be known, as well as the potential profit or loss. If a retail store wants to reduce the assumed losses due to shoplifting, the cost of the losses must be known as well as the cost of additional security measures in order to make a rational decision on what to spend to resolve the problem. If the cost of securing the merchandise is higher than the losses, then it makes no sense to embark on additional measures.

Overstating the amount of losses due to piracy could appear to justify expensive measures to counteract the theft – yet, if implemented, those measures may in fact only add to the overall financial loss. In addition, costs to implement security are real, while unearned revenue is potential, not actual.
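A toy calculation illustrates the point; every figure below is hypothetical and chosen only to show the shape of the decision:

# Hypothetical cost/benefit test for an additional security measure.
estimated_annual_loss = 400_000    # conservative, honest estimate of losses
expected_reduction = 0.30          # fraction of those losses the measure would prevent
annual_cost_of_measure = 150_000   # cost to implement and operate the measure

recovered = estimated_annual_loss * expected_reduction
net_benefit = recovered - annual_cost_of_measure
print(f"recovered: {recovered:,.0f}  net benefit: {net_benefit:,.0f}")
# Here the measure recovers 120,000 but costs 150,000 -- a net loss of 30,000.
# Overstating the loss estimate would make the same measure appear justified.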

On the side of the detractors of the SOPA and PIPA legislation, the claims of disruption to the fabric of the internet, as well as of potential security breaches if link blocking were enabled, are also overstated. As an example, China currently practices large-scale link blocking, DNS (Domain Name Server) re-routing and other technical practices that are similar in many respects to the solutions proposed in these Acts – and none of this has broken the internet, even internally within China.

The real issue here is that these methods don’t work well. The very nature of the internet (a highly redundant, robust and reliable fabric of connectivity) works against attempts to thwart connections from a client to a server. We have seen many recent attempts by governments to restrict internet connectivity to users within China, the Arab states, Libya, etc – and all have essentially failed.

For both sides of this discussion, a more appropriate direction for legislation, funding and focus of energy is to treat this issue for what it factually is:  a criminal activity that requires mitigation from the public sector through police and judicial efforts, and from the private sector through specific and proven security measures. Again, the analogy of current practices in retail merchandising may be useful:  the various technologies of RFID scanners at store exits, barcoded ‘return authorization’ tags and other measures have proven to substantially reduce property and financial loss without unduly penalizing the majority of honest consumers.

Coupled with specific laws and the policy of prosecuting all shoplifters, this two-pronged approach (from both the public and private sectors) has made substantial inroads against merchandise loss in the retail industry.

Content protection is a complex issue and cannot be solved with just one or two simple acts, no matter how much that may be desired. In addition, the actual financial threat posed by piracy of movies and other content must be honestly addressed:  it is sometimes convenient to point to perceived losses from piracy rather than to other causes – for instance, poor returns simply because no one liked the movie, or distribution costs that are higher than ideal.

A part of the overall landscape of content protection is to look at both the demand side and the supply side of the equation. Both the SOPA and PIPA proposals address only the supply side – they attempt to reduce access to, or disrupt payment for, the supply of assets. Most consumers make purchase choices based on a cost/benefit model, even if unconsciously so:  therefore, at first glance, the attractiveness of downloading a movie for ‘free’ as opposed to paying $5-$25 for the content is high.

However, there are a number of mitigating factors that make the choice more complex:

  • Quality of the product
  • Ease of use (for both getting and playing the content)
  • Ease of re-use or sharing the content
  • Flexibility of devices on which the content may be consumed
  • Potential of consequences for use of pirated material

With careful attention to the above factors (and more), it is possible for legal content to become potentially more attractive than pirated content, at least for a percentage of consumers. It is impossible to prevent piracy from occurring – the most that is reasonable to expect is a reduction to the point where the financial losses are tolerable. This is the same tactic taken with retail merchandise security – a cost/benefit analysis helps determine the appropriate level of security cost in relation to the losses.

In terms of the factors listed above:

  • Legal commercial content is almost always of substantially higher quality than pirated content, raising the attractiveness of the product.
  • For most consumers (i.e. excluding teenage geeks that have endless time and patience!) a properly designed portal or other download experience CAN be much easier to operate than linking to a pirate site, determining which files to download, uncompressing, etc. etc. Unfortunately, many commercial sites are not well designed, and often are as frustrating to operate as some pirate sites. Attention to this issue is very important, as this is a low cost method to retain legal customers.
  • Depending on the rights purchased, and whether the content was streamed or downloaded, the re-use or legal sharing of purchased content (i.e. within the home or on mobile devices owned by the content purchaser) should ideally be straightforward. Again, this is often not the case, and again motivates consumers to potentially consider pirated material as it is often easier to consume on multiple devices and share with others. This is a very big issue and is only beginning to be substantially addressed by such technologies as UltraViolet, Keychest and others. Another issue that often complicates this factor is the enormously complex and inconsistent legal rights to copyrighted material. Music, books, movies, etc. all have highly divergent rules that govern the distribution and sale of the material. The level of complexity and cost of administering these rights, and the resultant inequities in availability make pirated material much more available and attractive than it should be.
  • With the recent explosion of types of devices available to consume digital content (whether books, movies, tv, music, newspapers, etc.) the consumer rightly desires a seamless consumption model across the devices of their choice. This is often not provided legally, or is available only at significant cost. This is yet another area that can be addressed by content owners and distributors to lower the attractiveness of pirated material.
  • The issue of consequences for end-users that may be held accountable for downloading and consumption of pirated material is complex and fraught with potential backlash to content owners that attempt enforcement in this area. Several recent cases within the music industry have shown that the adverse publicity garnered by content owners suing end users has had a high cost and is generally perceived to be counter-productive. The bulk of legal enforcement at this time is concentrated on the providers of pirated material all through the supply chain, as opposed to the final consumer. This is also a more efficient use of resources, as the effort to identify and legally prosecute potentially millions of consumers of pirated material would be impractical compared to degrading the supply chain itself – often operated by a few hundreds of individuals. There have been recent attempts by some governments and ISPs to monitor and identify the connections from an end consumer to a known pirate site and then mete out some level of punishment for this practice. This usually takes the form of multiple warnings to a user followed by some degradation or interruption of their internet service. There are several factors that complicate the enforcement of this type of policy:
    • This action potentially comes up against privacy concerns, and the level and invasiveness of monitoring of a user’s habits and what they download vary greatly by country and culture.
    • Many so-called ‘pirate’ sites offer a mix of legally obtained material, illegally obtained material, and storage for user-generated content. It is usually impossible to precisely determine which of these content types a user has actually downloaded, so the risk is high that a user could be punished for perfectly innocent behavior.
    • It is too easy for a pirate site to keep one (or several) steps ahead of this kind of enforcement activity by changing names, IP addresses and other obfuscating tactics.

In summary, it should be understood that piracy of copyrighted material is a real and serious threat to the financial well-being of content producers throughout the world. What is called for to mitigate this threat is a combined approach that is rational, efficient and affordable. Emotional rhetoric and draconian measures will not solve the problem, but only exacerbate tensions and divert resources from the real problem. A parallel approach of improving the rights management, distribution methodology and security measures associated with legal content – aided by consistent application of law and streamlined judicial and police procedure world-wide – is the most effective method for reducing the trafficking of stolen intellectual property.

Education of the consumer will also help. Although, as stated earlier, one cannot legislate morality – and in the ‘privacy’ of the consumer’s internet connection many will take all they can get for ‘free’ – it cannot hurt to repeatedly describe the knock-on effects of large scale piracy on the content creation sector. The bottom line is that the costs of producing high quality entertainment are significant, and without sufficient financial return this cannot be sustained. The music industry is a prime example of this:  more labels and music studios have gone out of business than remain in business today – as measured from 1970 to 2011. While it is true that the lowered bar of cost due to modern technology has allowed many to ‘self-produce’ it is also true that some of the great recording studios that have gone out of business due to decreased demand and funding have cost us – and future generations – the unique sound that was only possible in those physical rooms. These intangible costs can be very high.

One last fact that should be added to the public awareness concerning online piracy:  the majority of these sites today are either run by or funded by organized criminal cartels. For instance, in Mexico the production and sale of counterfeit DVDs is used primarily as a method of laundering drug money, in addition to the profitable nature of the business itself (since no revenues are returned to the studios whose content is being duplicated). The fact that the subscription fees for one’s online pirate site of choice are very likely funding human trafficking, sexual slavery, drug distribution and other criminal activity on a large scale should not be ignored. Everyone is free to make a choice. The industry, and governments collectively, need to provide thoughtful, useful and practical measures to help consumers make the right choice.
