
Digital Security in the Cloudy & Hyper-connected world…

April 5, 2015 · by parasam

Introduction

As we inch closer to the midpoint of 2015, we find ourselves in a drastically different world of both connectivity and security. Many of us switch devices throughout the day, from phone to tablet to laptop and back again. Even in corporate workplaces, the ubiquity of mobile devices has come to stay (in spite of the clamoring and frustration of many IT directors!). The efficiency and ease of use of integrated mobile and tethered devices propels many business solutions today. The various forms of cloud resources link all this together – whether personal or professional.

But this enormous change in topology has introduced very significant security implications, most of which are not really well dealt with using current tools, let alone software or devices that were ‘state of the art’ only a few years ago.

What does this mean for the user – whether personal or business? How do network admins and others that must protect their networks and systems deal with these new realities? That’s the focus of the brief discussion to follow.

No More Walls…


The pace of change in the ‘Internet’ is astounding. Even seasoned professionals who work and develop in this sector struggle to keep up. Every day when I read periodicals, news, research, feeds, etc. I discover something I didn’t know the day before. The ‘technosphere’ is actually expanding faster than our collective awareness – instead of hearing that such-and-such is being thought about, or hopefully will be invented in a few years, we are told that the app or hardware already exists and has a userbase of thousands!

One of the most fundamental changes in the last few years is the transition from ‘point-to-point’ connectivity to a ‘mesh’ connectivity. Even a single device, such as a phone or tablet, may be simultaneously connected to multiple clouds and applications – often in highly disparate geographical locations. The old tried-and-true methodology for securing servers, sessions and other IT functions was to ‘enclose’ the storage, servers and applications within one or more perimeters – then protect those ‘walled gardens’ with firewalls and other intrusion detection devices.

Now that we reach out every minute, across boundaries, to remotely hosted applications, storage and processes, the very concept of perimeter protection is no longer valid or functional.

Even the Washing Machine Needs Protection

Another big challenge for today’s security paradigm is the ever-growing “Internet of Things” (IoT). As more and more everyday devices become network-enabled, from thermostats to washing machines, door locks to on-shelf merchandise sensors – an entirely new set of security issues has been created. Already the M2M (Machine to Machine) communications are several orders of magnitude greater than sessions involving humans logging into machines.

This trend is set to literally explode over the next few years, with an estimated 50 billion devices being interconnected by 2020 (up from 8.7 billion in 2012). That’s nearly a 6x increase in just 8 years… The real headache behind this (from a security point of view) is the number of connections and sessions that each of these devices will generate. It doesn’t take much combinatorial math to see that literally trillions of simultaneous sessions will be occurring world-wide (and even in space… the ISS has recently completed upgrades to push 3Mbps channels to 300Mbps – a 100x increase in bandwidth – to support the massive data requirements of newer scientific experiments).
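
To put some rough numbers on that claim (the device counts are the estimates quoted above; the sessions-per-device figure is purely an assumption for illustration), the arithmetic looks something like this:

```python
# Back-of-the-envelope session math (illustrative assumptions only).
devices_2012 = 8.7e9      # estimated connected devices in 2012
devices_2020 = 50e9       # estimated connected devices by 2020

print(f"Growth 2012 -> 2020: {devices_2020 / devices_2012:.1f}x")   # ~5.7x

# Assume each device holds a few dozen concurrent sessions (apps, telemetry,
# M2M chatter) -- an assumption for illustration, not a measured figure.
sessions_per_device = 40
print(f"Concurrent sessions worldwide: {devices_2020 * sessions_per_device:.1e}")  # ~2e12
```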

There is simply no way to put a ‘wall’ around this many sessions that are occurring in such a disparate manner. An entirely new paradigm is required to effectively secure and monitor data access and movement in this environment.

How Do You Make Bulletproof Spaghetti?


If you imagine the session connections from devices to other devices as strands of pasta in a boiling pot of water – constantly moving and changing in shape – and then wanted to encase each strand in an impermeable shield…. well you get the picture. There must be a better way… There are a number of efforts underway currently from different researchers, startups and vendors to address this situation – but there is no ‘magic bullet’ yet, nor is there even a complete consensus on what method may be best to solve this dilemma.

One way to attempt to resolve this need for secure computation is to break the problem down into the two main constituents: authentication of whom/what; and then protection of the “trust” that is given by the authentication. The first part (authentication) can be addressed with multiple-factor login methods: combinations of biometrics, one-time codes, previously registered ‘trusted devices’, etc. I’ve written on these issues here earlier. The second part: what does a person or machine have access to once authenticated – and how to protect those assets if the authentication is breached – is a much thornier problem.
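
To make the ‘one-time code’ factor concrete, here is a minimal sketch of a time-based one-time password (TOTP) check along the lines of RFC 6238. The shared secret and the 30-second step are typical assumptions, not a description of any particular vendor’s product:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, at=None, step=30, digits=6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window=1, step=30) -> bool:
    """Accept codes from the current step +/- `window` steps to allow for clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * step, step), submitted)
               for i in range(-window, window + 1))

# The server stores `secret` per user at enrollment; the phone app derives the same code.
secret = b"per-user-shared-secret-registered-at-enrollment"
print(verify(secret, totp(secret)))   # True
```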

In fact, from my perspective the best method involves a rather drastically different way of computing in the first place – one that would not have been possible only a few years ago. Essentially what I am suggesting is a fully virtualized environment where each session instance is ‘built’ for the duration of that session; only exposes the immediate assets required to complete the transactions associated with that session; and abstracts the ‘devices’ (whether they be humans or machines) from each other to the greatest degree possible.

While this may sound a bit complicated at first, the good news is that we are already moving in that direction, in terms of computational strategy. Most large scale cloud environments already use virtualization to a large degree, and the process of building up and tearing down virtual instances has become highly automated and very, very fast.
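
As a minimal sketch of that ‘build up, use, tear down’ pattern (the session object and its fields are hypothetical placeholders – a real implementation would drive a hypervisor or container orchestrator):

```python
import uuid
from contextlib import contextmanager

@contextmanager
def ephemeral_session(user_id: str, requested_assets: list):
    """Build a short-lived, least-privilege session instance, then destroy it."""
    session_id = uuid.uuid4().hex
    instance = {
        "id": session_id,
        "user": user_id,
        "assets": list(requested_assets),             # expose only what was asked for
        "network": f"one-time-vpn-{session_id[:8]}",  # placeholder for a per-session tunnel
    }
    try:
        yield instance            # the session exists only inside this block
    finally:
        instance.clear()          # tear down: nothing persists after the session ends

with ephemeral_session("alice", ["report-q2.docx"]) as s:
    print(s["network"])           # all work happens here, scoped to this one session
```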

In addition, for some time now the industry has been moving towards thinner and more specific apps (such as those found on phones and tablets), as opposed to massive thick-client applications such as MS Office, SAP and other enterprise builds that fit far more readily into the old “protected perimeter” form of computing.

Moreover (and I’m not making a point of picking on a particular vendor here – this issue is simply a “fact of nature”), the Windows API model is just not secure any more. Due to the requirement of backwards compatibility – to a time when the security threats of today were not envisioned at all – many of the APIs are full of security holes. It’s a constant game of reactively patching vulnerabilities once they are discovered. This process cannot be sustained to support the level of future connectivity and distributed processing towards which we are moving.

Smaller, lightweight apps have fewer moving parts, and therefore by their very nature are easier to implement, virtualize, protect – and replace entirely should that be necessary. To take just one example: MS Word is a powerful ‘word processor’ which has grown to integrate and support a rather vast range of capabilities, including artwork, page layout, mailing list management/distribution and more. Every instance of this app includes all of that functionality, typically 90% of which goes unused during any one session.

If this “app” were broken down into many smaller “applets” that called on each other as required, and that were made available to the user on the fly during the ‘session’, the entire compute environment would become more dynamic, flexible and easier to protect.
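
A small sketch of that ‘applets on demand’ idea follows; the applet module names are hypothetical, but the lazy-loading mechanism (importlib) is standard Python:

```python
import importlib

# Hypothetical applet registry: capability name -> module path.
APPLETS = {
    "spellcheck": "applets.spellcheck",
    "mail_merge": "applets.mail_merge",
    "page_layout": "applets.page_layout",
}

_loaded = {}

def applet(name: str):
    """Load an applet only when the session actually needs it."""
    if name not in _loaded:
        _loaded[name] = importlib.import_module(APPLETS[name])   # fetched on first use
    return _loaded[name]

# applet("spellcheck").run(document)   # hypothetical call: only this applet is ever loaded,
# keeping the exposed threat surface (and the code that must be patched) small.
```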

Lowering the Threat Surface


One of the largest security challenges of a highly distributed compute environment – such as is presented by the typical hybrid cloud / world-wide / mobile device ecosystem that is rapidly becoming the norm – is the very large ‘threat surface’ that is exposed to potential hackers or other unauthorized access.

As more and more devices are interconnected – and data is interchanged and aggregated from millions of sensors, beacons and other new entities, the potential for breaches is increased exponentially. It is mathematically impossible to proactively secure every one of these connections – or even monitor them on an individual basis. Some new form of security paradigm is required that will, by its very nature, protect and inhibit breaches of the network.

Fortunately, we do have an excellent model on which to base this new type of security mechanism: the human immune system. The ‘threat surface’ of the human body is immense when viewed at a cellular level. The number of pathogens that continually attempt to violate the human body’s systems is vastly greater than even the number of hackers and other malevolent entities in the IT world.

The conscious human brain could not even begin to monitor and react to every threat that the hordes of bacteria, viruses and other pathogens bring against the body’s ecosystem. About 99% of such defensive responses are ‘automatic’ and go unnoticed by our awareness. Only when things get ‘out of control’ and the normal defense mechanisms need assistance do the symptoms tell us so: a sore throat, an ache, or in more severe cases bleeding or chest pain. To deal with the sheer number and variety of attack vectors becoming endemic in today’s hyper-connected computational fabric, we need a similar set of layered defense mechanisms that act completely automatically against threats.

A Two-Phased Approach to Hyper-Security

Our new hyper-connected reality requires an equally robust and all-encompassing security model: Hyper-Security. In principle, an approach that combines the absolute minimal exposure of any assets, applications or connectivity with a corresponding ‘shielding’ of the session using techniques to be discussed shortly can provide an extremely secure, scalable and efficient environment.

Phase One – building user ‘sessions’ (whether that user is a machine or a human) that expose the least possible threat surface while providing all the functionality required during that session – has been touched on earlier in our discussion of virtualized compute environments. The big paradigm shift here is that security is ‘built in’ to the applications, data storage structures and communications interfaces at a molecular level. This is similar to how the human body is organized: in addition to the immune system and other proactive ‘security’ entities, the structure itself naturally limits any damage caused by pathogens.

This type of architecture simply cannot be ‘baked into’ legacy OS systems – but it’s time many of these were moved to the shelf anyway: they are becoming more and more clumsy in the face of highly virtualized environments, not to mention the extreme time and cost of maintaining these outdated systems. Having some kind of attachment or allegiance to an OS today is as archaic as showing a preference for a Clydesdale vs a Palomino in the world of Ferraris and Teslas… Really, all that matters today is the user experience, reliability and security. How something gets done should not matter any more, even to highly technical users, any more than knowing exactly which hormones are secreted by our islets of Langerhans (some small bits of the pancreas that produce some incredibly important things, like insulin). These things must work (otherwise humans get diabetes or computers fail to process) but very few of us need to know the details.

Although the concept of this distributed, minimalistic and virtualized compute environment is simple, the details can become a bit complex – I’ll reserve further discussion for a future post.

To summarize, the security provided by this new architecture is one of prevention, limitation of damage and ease of applying proactive security measures (to be discussed next).

Phase Two – the protection of the compute sessions from either internal or external threat mechanisms – also requires a novel approach that is suited for our new ecosystems. External threats are essentially any attempt by unauthorized users (whether human, robots, extraterrestrials, etc.) to infiltrate and/or take data from a protected system. Internal threats are activities that are attempted by an authorized user – but are not authorized actions for that particular user. An example is a rogue network admin either transferring data to an unauthorized endpoint (piracy) or destruction of data.

The old-fashioned ‘perimeter defense systems’ are no longer appropriate for protection of cloud servers, mobile devices, etc. A particular example of how extensive and interconnected a single ‘session’ can be is given here:

A mobile user opens an app on their phone (say an image editing app) that is ‘free’ to the user. The user actually ‘pays’ for this ‘free’ privilege by donating a small amount of pixels (and time/focus) to some advertising. In the background, the app is providing some basic demographic info of the user, the precise physical location (in many instances), along with other data to an external “ad insertion service”.

This cloud-based service in turn aggregates the ‘avails’ (sorted by location, OS, hardware platform, app type that the user is running, etc.) and often submits these ‘avails’ [with the screen dimensions and animation capabilities] to an online auction system that bids the ‘avails’ against a pool of appropriate ads that are preloaded and ready to be served.

Typically the actual ads are not located on the same server, or even the same country, as either the ad insertion service or the auction service. It’s very common for up to half a dozen countries, clouds and other entities to participate in delivering a single ad to a mobile user.

This highly porous ad insertion system has become a recent favorite of hackers and con artists – even without technical breaches it’s an incredibly easy system to game. Due to the speed of the transactions and the near impossibility of monitoring them in real time, many ‘deviations’ are possible… and common.

There are a number of ingenious methods being touted right now to help solve both the actual protection of virtualized and distributed compute environments, as well as to monitor such things as intrusions, breaches and unintended data moves – all things that traditional IT tools don’t address well at all.

I am unaware of a ‘perfect’ solution yet to address either the protection or monitoring aspects, but here are a few ideas: [NOTE: some of these are my ideas, some have been taken up by vendors as a potential product/service. I don’t feel qualified enough to judge the merits of any particular commercial product at this point, nor is the focus of this article on detailed implementations but rather concepts, so I’ll refrain from getting into specific products].

  • User endpoint devices (anything from humans’ cellphones to other servers) must be pre-authenticated (using a combination of currently well-known identification methods such as MAC address, embedded token, etc.). On top of this basic trust environment, each session is authenticated with a minimum of a two-factor logon scheme (such as biometric plus PIN, certificate plus One Time Token, etc.). Once the endpoints are authenticated, a one-time-use VPN is established for each connection.
  • Endpoint devices and users are combined as ‘profiles’ that are stored as part of a security monitoring application. Each user may have more than one profile: for instance the same user may typically perform (or be allowed to perform by his/her firm’s security protocol) different actions from a cellphone as opposed to a corporate laptop. The actions that each user takes are automatically monitored / restricted. For example, the VPNs discussed in the point above can be individually tailored to allow only certain kinds of traffic to/from certain endpoints. Actions that fall outside of the pre-established scope, or are outside a heuristic pattern for that user, can either be denied or referred for further authorization (a small sketch of this pattern follows this list).
  • Using techniques similar to the SSL methodologies that protect and authenticate online financial transactions, different kinds of certificates can be used to permit certain kinds of ‘transactions’ (with a transaction being either access to certain data, permission to move/copy/delete data, etc.) In a sense it’s a bit like the layered security that exists within the Amazon store: it takes one level of authentication to get in and place an order, yet another level of ‘security’ to actually pay for something (you must have a valid credit card that is authenticated in real time by the clearing houses for Visa/MasterCard, etc.). For instance, a user may log into a network/application instance with a biometric on a pre-registered device (such as a fingerprint on an iPhone6 that has been previously registered in the domain as an authenticated device). But if that user then wishes to move several terabytes of a Hollywood movie studio’s content to a remote storage site (!!), they would need to submit an additional certificate and PIN.
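
A minimal sketch of the profile-based authorization pattern described in the second bullet above; the profiles, actions and thresholds are illustrative assumptions rather than a reference to any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    user: str
    device: str                        # e.g. "corporate-laptop" vs "personal-phone"
    allowed_actions: set = field(default_factory=set)
    max_transfer_gb: float = 1.0       # heuristic ceiling for this profile

def authorize(profile: Profile, action: str, transfer_gb: float = 0.0) -> str:
    """Return 'allow', 'deny', or 'escalate' (i.e. require an extra certificate/PIN)."""
    if action not in profile.allowed_actions:
        return "deny"
    if transfer_gb > profile.max_transfer_gb:
        return "escalate"              # within scope, but outside the usual pattern
    return "allow"

laptop = Profile("alice", "corporate-laptop", {"read", "copy"}, max_transfer_gb=5.0)
print(authorize(laptop, "copy", transfer_gb=2.0))      # allow
print(authorize(laptop, "copy", transfer_gb=4000.0))   # escalate -> extra credential
print(authorize(laptop, "delete"))                     # deny
```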

An Integrated Immune System for Data Security

[Image: virus in blood, stylised scanning electron micrograph]

The goal of a highly efficient and manageable ‘immune system’ for a hyper-connected data infrastructure is for such a system to protect against all possible threats with the least direct supervision possible. Not only is it impossible for a centralized, omniscient monitoring system to handle the incredible number of sessions that take place in even a single modern hyper-network; it’s equally difficult for a single monitoring / intrusion detection device to understand and adapt to the myriad local contexts and ‘rules’ that define what is ‘normal’ and what is a ‘breach’.

The only practical method of implementing such an ‘immune system’ for large hyper-networks is to distribute the security and protection infrastructure throughout the entire network. Just as in the human body, where ‘security’ begins at the cellular level (with cell walls allowing only certain compounds to pass, depending on the type and location of each cell), each local device or application must have a certain amount of security as part of its ‘cellular structure’.

As cells become building blocks for larger structures and eventually organs or other systems, the same ‘layering’ model can be applied to IT structures so the bulk of security actions are taken automatically at lower levels, with only issues that deviate substantially from the norm being brought to the attention of higher level and more centralized security detection and action systems.
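
In code, the layering idea might look like the toy escalation chain below: most events are absorbed silently at the lowest layer, and only significant deviations bubble upward. The layer names and thresholds are assumptions for illustration only:

```python
# Each layer absorbs what it can and passes only genuine anomalies upward,
# mimicking cell -> tissue -> organ -> conscious-attention escalation.
LAYERS = ["device", "application", "site", "central-soc"]

def handle(event_severity: float, thresholds=(0.2, 0.5, 0.8)) -> str:
    """Return the first layer able to absorb an event of this severity (0.0 - 1.0)."""
    for layer, limit in zip(LAYERS, thresholds):
        if event_severity <= limit:
            return f"absorbed at the {layer} layer (no human attention needed)"
    return "escalated to the central security operations centre for human review"

print(handle(0.1))    # routine noise, handled locally and automatically
print(handle(0.6))    # unusual, handled at the site level
print(handle(0.95))   # rare: the only kind of event a human ever sees
```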

Another issue to be aware of: over-reporting. It’s all well and good to log certain events… but who or what is going to review millions of lines of logs if every event that deviates even slightly from some established ‘norm’ is recorded? And even then, that review will only be looking in the rear-view mirror. The human body doesn’t generate any logs at all, and yet manages to more or less handle the security for 37.2 trillion cells!

That’s not to say that no logs should be kept at all – they can be very useful for understanding breaches and what can be improved in the future – but the logs should be designed with that purpose in mind and recycled as appropriate.
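
One way to keep logging purposeful rather than exhaustive is to record only events above a severity floor and to recycle entries on a fixed schedule; the floor and retention period below are arbitrary illustrations, not recommendations:

```python
import time
from collections import deque

SEVERITY_FLOOR = 0.7                   # only genuine deviations get logged at all
RETENTION_SECONDS = 90 * 24 * 3600     # recycle after ~90 days (arbitrary choice)

log = deque()

def record(event: str, severity: float) -> None:
    if severity < SEVERITY_FLOOR:
        return                                         # handled automatically, never logged
    log.append((time.time(), severity, event))
    cutoff = time.time() - RETENTION_SECONDS
    while log and log[0][0] < cutoff:                  # keep the log fit for post-incident
        log.popleft()                                  # review, not as an endless archive
```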

Summary

In this very brief overview we’ve discussed some of the challenges and possible solutions to the very different security paradigm that we now have due to the hyper-connected and diverse nature of today’s data ecosystems. As the number of ‘unmanned’ devices, sensors, beacons and other ‘things’ continues to grow exponentially, along with the fact that most of humanity will soon be connected to some degree to the ‘Internet’, the scale of the security issue is truly enormous.

A few ideas and thoughts that can lead to effective, scalable and affordable solutions have been discussed – many of these are new and works in progress but offer at least a partially viable solution as we work forward. The most important thing to take away here is an awareness of how things must change, and to keep asking questions and not assume that the security techniques that worked last year will keep you safe next year.

The Hack

December 21, 2014 · by parasam


It’s a sign of our current connectedness (and of the inability or unwillingness of most of us to live under a digital rock – without an hourly fix of Facebook, Twitter, CNN, blogs, etc., we don’t feel we exist) that the title of this post needs no further explanation.

The Sony “hack” must be analyzed apart from the hyperbole of the media, political and business ‘experts’ in order to view its various aspects with some objectivity – and, more importantly, to learn the lessons that come with this experience.

I have watched and read endless accounts and reports on the event, from lay commentators, IT professionals, Hollywood business, foreign policy pundits, etc. – yet have not seen a concise analysis of the deeper meaning of this event relative to our current digital ecosystems.

Michael Lynton (CEO, Sony Pictures) stated on CNN’s Fareed Zakaria show today that “the malware inserted into the Sony network was so advanced and sophisticated that 90% of any companies would have been breached in the same manner as Sony Pictures.” Of course he had to take that position – while his interview was public, the real message was aimed at investors in both Sony and the various productions that it hosts.

As reported by Wired, Slate, InfoWorld and others, the hack was almost certainly initiated by the introduction of malware into the Sony network – and not particularly clever code at that. For the rogue code to execute correctly, and to have the permissions to access, transmit and then delete massive amounts of data, it required the credentials of a senior network administrator – which were supposedly stolen by the hackers. The exact means by which this theft took place have not been revealed publicly. Reports on the amount of data stolen vary, but range from a few terabytes to as much as a hundred. That is a massive amount of data. Moving this much data requires a very high-bandwidth pipe – at least 1 Gbps, if not higher. Pipes of this size are very expensive, and are normally managed rather strictly to prioritize bandwidth. Depending on the amount of bandwidth allocated for the theft, the ‘dump’ must have lasted days, if not weeks.
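
A quick back-of-the-envelope calculation shows just how long such a dump would take (the 100 TB figure is the upper end of the reported range; the link speeds are assumptions):

```python
# How long does it take to move 100 TB out of a network?
terabytes = 100
bits_total = terabytes * 1e12 * 8              # ~8e14 bits

for gbps in (1, 10):
    days = bits_total / (gbps * 1e9) / 86400
    print(f"{terabytes} TB over a saturated {gbps} Gbps link: ~{days:.1f} days")

# ~9.3 days at 1 Gbps, ~0.9 days at 10 Gbps -- and a careful exfiltration would use
# only a fraction of the pipe to avoid notice, stretching this into weeks.
```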

All this means that a number of rather standard security protocols were either not in place, or not adhered to at Sony Pictures. The issue here is not Sony – I have no bone to pick with them, and in fact they have been a client of mine numerous times in the past while with different firms, and I continue to have connections with people there. This is obviously a traumatic and challenging time for everyone there. It’s the larger implications that bear analysis.

This event can be viewed through a few different lenses: political, technical, philosophical and commercial.

Political – Initially let’s examine the implications of this type of breach, data theft and data destruction without regard to the instigator. In terms of results the “who did it” is not important. Imagine instead of this event (which caused embarrassment, business disruption and economic loss only) an event in which the Light Rail switching system in Los Angeles was targeted. Multiple and simultaneous train wrecks are a highly likely result, with massive human and infrastructure damage certain. In spite of the changes that were supposed to follow on from the horrific crash some years ago in the Valley there, the installation of “collision avoidance systems” on each locomotive still has not taken place. Good intentions in politics often take decades to see fruition…

One can easily point to other critical infrastructure – electrical grids, petroleum pipelines, air traffic control systems [look at London last week], telecommunications, internet routing and peering; the list goes on and on – that is inadequately protected.

Senator John McCain said today that of all the meetings in his political life, none have taken longer and accomplished less than those on cybersecurity. This issue is just not taken seriously. Many major hacks have occurred in the past – this one is getting serious attention from the media because the target is a media company and many high-profile Hollywood people have had a lot to say – which further fuels the news machine.

Now, whether North Korea instigated this attack or performed it on its own – both possible, and according to the FBI now established as fact – the issue of a nation-state attacking other national interests is most serious, and demands a response from the US government. But regardless of the perpetrator – whether an individual criminal, a group, etc. – a much higher priority must be placed on the security of both public and private entities in our connected world.

Technical – The reporting and discussion on the methodology of this breach in particular, and ‘hacks’ in general, has ranged from the patently absurd to the relatively accurate. In this case (and in some other notable breaches of the last few years, such as Target), the introduction of malware into an otherwise protected (at least to some degree) system allowed access and control from an undesirable external party. While implanting the malware may have been a relatively simple part of the overall breach, the design of the entire process – code writing and testing, steering and control of the malware from the external servers, as well as the data collection and retransmission – clearly involved a team of knowledgeable technicians and considerable resources. This was not a hack done by a teenager with a laptop.

On the other hand, the Sony breach was not all that sophisticated. The data made public so far indicates that the basic malware was the Trojan Destover, combined with a commercially available codeset, EldoS RawDisk, which was used for the wiping (destruction) of the Sony data. Both of these programs (and their relatives Shamoon and Jokra) have been detected in other breaches (Saudi Aramco, Aug 2012; South Korea, Mar 2013). See this link for further details. Each major breach of this sort tends to have individual code characteristics, along with the required access credentials, with the final malware package often compiled shortly before the attack. The evidence disclosed in the Sony breach indicates that stolen senior network admin credentials were part of the package, which allowed full and unfettered access to the network.

It is highly likely that the network was repeatedly probed some time in advance of the actual breach, both to test the stolen credentials (to see how wide the access was) and to inspect for any tripwires that may have been set if the credentials had become suspect.

The real lessons to take away from the Sony event have much more to do with the structure of the Sony network, its security model, security standards and practices, and data-movement monitoring. To be clear, this is not singling out Sony as a particularly bad example: unfortunately this firm’s security practices are rather the norm today. Very, very few commercial networks are adequately protected or designed – even those of financial companies, which one would assume have better-than-average security.

Without having to look at internal details, one only has to observe the reported breaches of large retail firms, banks and trading entities, government agencies, credit card clearing houses… the list goes on and on. Add to this that not all breaches are reported, and even fewer are publicly disclosed – estimates suggest that only 20–30% of network security breaches are ever reported. The reasons vary: loss of shareholder or customer trust, the appearance of competitive weakness, not knowing what actually deserves reporting or how to classify the attempt or breach, and so on. In many cases data on “cyberattacks” is reported anonymously or is gathered statistically by firms that handle security monitoring on an outsource basis. At least these aggregate numbers give a scope to the problem – and it is huge. For example, IBM’s report shows that in one year (April 2012 – April 2013) there were 73,400 attacks on a single large organization, resulting in about 100 actual ‘security incidents’ for that one company over the year. A PWC report estimates that 42 million data security incidents will have occurred during 2014 worldwide.

If this number of physical robberies were occurring, the response – and general awareness – would be far higher. There is something insidious about digital crime that doesn’t attract the level of notice that physical events do. The economic loss worldwide is estimated in the hundreds of billions of dollars – with most of these proceeds ending up with organized crime, rogue nation-states and terrorist groups. Given the relative sophistication of ISIS in terms of social media, video production and other high-tech endeavours, it is highly likely that a portion of their funding comes from cybercrime.

The scope of the Sony attack, with the commensurate data loss, is part of what has made this so newsworthy. This is also the aspect of the breach that could have been mitigated rather easily – and it underscores the design and security-practice faults that plague so many firms today. The following points list some of the weaknesses that contributed to the scale of this breach:

  • A single static set of credentials allowed nearly unlimited access to the entire network.
  • A lack of effective audit controls that would have brought attention to potential use of these credentials by unauthorized users.
  • A lack of multiple-factor authentication that would have made hard-coding of the credentials into the malware ineffective.
  • Insufficient data-move monitoring: the volume of data transmitted out of the Sony network was massive, and had to impact normal working bandwidth. It appears that large amounts of data are allowed to move unmanaged in and out of the network – again, an effective data-move audit / management process would have triggered an alert (a small sketch of this idea follows the list).
  • Massive data deletion should have required at least two distinct sets of credentials to initiate.
  • A lack of internal firewalls or ‘firestops’ that could have limited the scope of access, damage, theft and destruction.
  • A lack of understanding at the highest management levels of the firm’s vulnerability to this type of breach, and a corresponding lack of board expertise and oversight. In short, a lack of governance in this critical area – perhaps one of the most important, and least recognized, aspects of genuine corporate security.
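
A minimal sketch of the data-move audit mentioned in the fourth bullet: track outbound volume per credential and flag anything that departs sharply from that credential’s baseline. The baseline and alert multiplier are illustrative assumptions:

```python
from collections import defaultdict

baseline_bytes = defaultdict(lambda: 5 * 2**30)   # assume ~5 GiB/day is "normal" per credential
observed_bytes = defaultdict(int)
ALERT_MULTIPLIER = 10                             # alert at 10x the baseline (arbitrary)

def record_transfer(credential: str, nbytes: int) -> None:
    observed_bytes[credential] += nbytes
    if observed_bytes[credential] > ALERT_MULTIPLIER * baseline_bytes[credential]:
        raise RuntimeError(
            f"ALERT: {credential} has moved {observed_bytes[credential] / 2**40:.2f} TiB "
            "today -- hold further transfers and require a second credential")

record_transfer("net-admin-01", 3 * 2**30)        # well within the norm, no alert
# record_transfer("net-admin-01", 200 * 2**40)    # would trip the alert immediately
```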

Philosophical – With the huge paradigm shift that the digital universe has brought to the human race, we must collectively assess and understand the impacts of security, privacy and ownership of that ephemeral yet tangible entity called ‘data’. An enormous transformation is under way in which millions of people (the so-called ‘knowledge workers’) produce, consume, trade and enjoy nothing but data. There is not an industry that is untouched by this new methodology: even very ‘mechanistic’ enterprises such as farming, steel mills, shipping and train transportation are deeply intertwined with IT now. Sectors such as telecoms, entertainment, finance, design, publishing, photography and so on are virtually impossible to implement without complete dependence on digital infrastructures. Medicine, aeronautics, energy generation and prospecting – the lists go on and on.

The overall concept of security has two major components: Data Integrity (ensuring that the data is not corrupted by either internal or external factors, and that the data can be trusted) and Data Security (ensuring that only authorized users have access to view, transmit, delete or perform other operations on the data). Each is critical. Integrity can be likened to disease in the human body: pathogens that break the integrity of certain cells will disrupt and eventually cause injury or death. Security is similar to the protection that skin and other peripheral structures provide – a penetration of these boundaries leads to a compromise of the operation of the body, or in extreme cases major injury or death.

An area of particular challenge is the ‘connectedness’ of modern data networks. The new problem of privacy in the digital ecosystem has prompted (and will continue to prompt) many conversations, from legal to moral/ethical to practical. The “Facebook” paradigm [everything is shared with everybody unless you take efforts to limit such sharing] is really something we haven’t experienced since the small towns of past generations, where everybody knew everyone’s business…

Just as we have always had a criminal element in societies – those that will take, destroy, manipulate and otherwise seek self-aggrandizement at the expense of others – we now have the same manifestations in the digital ecosystem. Only digi-crime is vastly more efficient, less detectable, often more lucrative, and very difficult to police. The legal system is woefully outdated and outclassed by modern digital pirates – there is almost no international cooperation, very poor understanding by most police departments or judges, etc. etc. The sad truth is that 99% of cyber-criminals will get away with their crimes for as long as they want to. A number of very basic things must change in our collective societies in order to achieve the level of crime reduction that we see in modern cultures in the physical realm.

A further challenge is mostly educational/ethical: the notion that everything on the internet is “free” and there for the taking, without regard to the intellectual property owner’s claim. Attempting to police this after the fact is doomed to failure (at least 80% of the time); nothing will change until users are educated about the disruption and effects of their theft of intellectual property. This attitude has almost destroyed the music industry world-wide, and the losses to the film and television industry amount to billions of dollars annually.

Commercial – The economic losses due to data breaches, theft, destruction, etc. are massive, and the perception of the level of this loss is staggeringly low – even among the commercial stakeholders who are directly affected. Firms that spend massive amounts of time, money and design effort to physically protect their enterprises apply only the flimsiest of effective data security measures. Some of this is due to lack of knowledge, some to lack of understanding of the core principles that comprise a real and effective set of procedures for data protection, and a certain amount to laziness: strong security always takes some effort and time during each session with the data.

It is unfortunate, but the level of pain, publicity and potential legal liability of major breaches such as Sony’s is seemingly necessary to raise awareness that everyone is vulnerable. It is imperative that all commercial entities, from a vegetable seller at a farmer’s market who uses SnapScan all the way to global enterprises such as BP Oil, J.P. Morgan or General Motors, treat cyber crime as a continual, ongoing and very real challenge – and deal with it at the board level with the same importance given to other critical areas of governance: finance, trade secrets, commercial strategy, etc.

Many firms will say, “But we already spend a ridiculous amount on IT, including security!” I am sure that Sony is saying this even today… but it’s not always the amount of the spend, it’s how it’s done. A great deal of cash can be wasted on pretty blinking lights and cool software that in the end is just not effective. Most of the changes required today are in methodology, practice, and actually adhering to already adopted ‘best practices’. I personally have yet to see any business, large or small, that follows the stated security practices set up in that particular firm to the letter.

– Ed Elliott

Past articles on privacy and security may be found at these links:

Comments on SOPA and PIPA

CONTENT PROTECTION – Methods and Practices for protecting audiovisual content

Anonymity, Privacy and Security in the Connected World

Whose Data Is It Anyway?

Privacy, Security and the Virtual World…

Who owns the rain?  A discussion on accountability of what’s in the cloud…

The Perception of Privacy

Privacy: a delusion? a right? an ideal?

Privacy in our connected world… (almost an oxymoron)

NSA (No Secrets Anymore), yet another Snowden treatise, practical realities…

It’s still Snowing… (the thread on Snowden, NSA and lack of privacy continues…)


CONTENT PROTECTION – Methods and Practices for protecting audiovisual content

January 25, 2012 · by parasam

[Note: I originally wrote this in early 2009 as an introduction to the landscape of content protection. The audience at that time consisted of content owners and producers (studios, etc.) who had (and have!) concerns over illegal reproduction and distribution of their copyrighted material – i.e. piracy. With this issue only becoming bigger, and as a follow-up to my recent article on proposed piracy legislation (SOPA/PIPA), I felt it timely to reprint it here. Although a few small technical details have been added to the ecosystem, the primer is essentially as accurate and germane today as it was three years ago. While this is somewhat technical, I believe it will be of interest to this wider audience.]

What is Content Protection?

  • The term ‘Copy Protection’ is often used to describe the technical aspect of Content Protection.
  • Copy Protection is a limiting and often inaccurate term, as technical forms of Content Protection often include more aspects than just limiting or prohibiting copies of content.
  • Other forms of Technical Content Protection include:
    • Display Protection
      • Restrictions on type, resolution, etc. of display devices
    • Transmission Protection
      • Restrictions on retransmission or forwarding of content
    • Fingerprinting, Watermarks, etc.
      • Forensic marks to allow tracing of versions of content

Content Protection is the enforcement of DRM

  • Digital Rights Management (DRM)
    • A more accurate term would be ‘Content Rights Management’ (CRM), as this describes what is actually being managed [the word digital is now so overused that we see digital shoes (with LEDs), digital batteries, etc.]
    • Simply put, DRM is a set of policies that describe how content may be used in alignment with contractual agreements to ensure content owners a fair return on their investment in creating and distributing their content.
    • These policies can be enforced by legal, social and technical means.
      • Legal enforcement is almost always ex post facto
        • Civil and criminal penalties brought against parties suspected of violating DRM policies
        • Typically used in circumstances involving significant financial losses, due to time and costs involved
        • Is the most reactive and never prevents policy misuse in the first place
    • Social enforcement is a complex array of measures that will be discussed later in this article
    • Technical enforcement is what most of us think about when we mention ‘Content Protection’ or ‘Copy Protection’
      • This is often a very proactive form of rights enforcement, as it can prevent misuse in the first place
      • It has costs, both in terms of actual cost of implementation and often a “social cost” in terms of customer alienation
      • Many forms of technical enforcement are perceived by customers as unfairly limiting their ‘fair use’ of content they have legally obtained

Technical Content Protection

  • To be effective, must have these attributes:
    • DRM policies must be well defined and be expressible with rules or formulas that are mechanically reproducible
    • Implementations should match the environment in terms of complexity, cost, reliability and lifespan
      • Protecting Digital Cinema content is a different process than protecting a low-resolution streaming internet file
      • The costs of these techniques should be included in mastering or distribution, as consumers see no “value” in content protection – it is not a ‘feature’ they will pay for
      • There are challenges in the disparate environments in which content is transmitted and viewed
        • CE (Consumer Electronics) has a very different viewpoint (and price point) on content protection than the PC industry
    • A balance is required in terms of the level of effectiveness vs. cost and perceived “hassle factor”
      • A “layered defense” and the concept of using technical content protection as a significant “speed bump” as opposed to a “Berlin Wall” will be most efficient
      • A combination of all three content protection methods (legal, social and technical) will ultimately provide the best overall protection at a realistic cost
      • The goal should not be to prohibit any possible breach of DRM policy, but rather to maximize the ROI to the content owner/distributor at an acceptable cost
      • All technical content protection methods will eventually fail
        • As general technology and computational power moves forward, techniques that were “unbreakable” a few years in the past will be defeated in the future
      • The technical protection mechanisms and algorithms are highly asymmetrical in terms of “cat & mouse” – i.e. there are a few hundred developers and potentially millions or tens of millions of users working to defeat these systems
    • The methods employed should work across international boundaries and should to the greatest degree possible be agnostic to language, culture, custom and other localization issues
    • Any particular deployment of a content protection system (usually a combination of protected content and a licensed playback mechanism) must be persistent, particularly in the consumer electronics environment
      • For example, users will expect DVDs to play correctly in both PCs and DVD player appliances for many years to come

Challenges for Technical Content Protection

  • Ubiquitous use
    • Users desire to consume content on a variety of devices of their choosing
      • “Anytime, anywhere, anyhow”
    • New technologies often outpace Rights Management policies
      • Example: a DVD region-coded for North America cannot be played in Europe, but the same content can be purchased via iTunes, downloaded to an iPod and played anywhere in the world
    • How to define “home use” in the face of Wi-Fi, Wi-Max, ipsec tunneling to remote servers, etc.
  • Persistent Use
    • Technical schemes must continue to work long after original deployment
    • In the CE (Consumer Electronics) environment older technology seldom dies; it is “handed down” the economic ladder. Just as DC-3 airplanes are still routinely hauling cargo in South America and Alaska some 50 years after the end of their design lifetime, VHS and DVD players will be expected to work decades from now
    • Particular care must be taken with some newer schemes that are contemplating the need for a network connection – that may be very difficult to make persistent
  • Adaptable Use
    • This is one of the more difficult technical issues to overcome simply
    • The basic premise is the user legally purchases content, then desires to consume it personally across a large inventory of playback devices
      • TV
      • PC
      • iPod
      • Cell phone
      • Portable DVD/BD player
      • Networked DVD/BD player in the home
    • How do both Rights Management policies and technical content protection handle this use case?
    • This is a currently evolving area and will require adaptation by both content owners, content distributors as well as content protection designers and device manufacturers
    • What will the future bring?
      • One protection scheme for enforcing “home network use” analyzes the “hop time” [how long it takes a packet to get to a destination] – a long hop time assumes an “out of home” destination and this use would be disallowed. How does this stop users in a peer-to-peer wireless environment that are close together (in a plane, at a party?)
      • DVD region codes were an interesting discussion when players were installed in the ISS (International Space Station)
      • A UK company (Techtronics) “de-regionalized” a Sony unit…
      • Technologies such as MOST (Media Oriented Systems Transport) – the new network system for vehicles
      • Sophisticated retransmission systems – such as SlingBox

Technical Content Protection Methods

  • Content protection schemes may be divided into several classes
    • Copy Protection – mechanisms to prevent or selectively restrict the ability of a user to make copies of the content
    • Display Protection – mechanisms to control various parameters of how content may be displayed
    • Transmission Protection – mechanisms to prevent or selectively restrict the ability of a user to retransmit content, or copy content that has been received from a transmission that is licensed for viewing but not recording
  • Legacy analog methods
    • APS (Analog Protection System) often known by its original developer name (Macrovision). Also known as Copyguard. This is a copy protection scheme primarily targeted at preventing VHS tape copies from VHS or DVD original content.
    • CGMS-A (Copy Generation Management System – Analog) is a copy protection scheme for analog television signals. It is in use by certain TV broadcasts, PVRs, DVRs, DVD players/recorders, D-VHS, STBs, Blu-ray and recent versions of TiVo. Two bits in the VBI (Vertical Blanking Interval) carry CCI (Copy Control Information) that signals to the downstream device what it can copy (a toy decoding sketch follows this list):
      • 00    CopyFreely  (unlimited copies allowed)
      • 01    CopyNoMore  (one copy made already, no more allowed)
      • 10    CopyOnce  (one copy allowed)
      • 11    CopyNever  (no copies allowed)
  • Current digital methods
    • CGMS-D (Copy Generation Management System – Digital). Basically the digital form of CGMS-A with the CCI bits inserted into the digital bitstream in defined locations instead of using analog vertical blanking real estate.
    • DTCP (Digital Transmission Content Protection) is designed for the “digital home” environment. This scheme links technologies such as BD/DVD player/recorders, SD/HD televisions, PCs, portable media players, etc. with encrypted channels to enforce Rights Management policies. Also known as “5C” for the 5 founding companies.
    • AACS (Advanced Access Content System), the copy protection scheme used by Blu-ray (BD) and other digital content distribution mechanisms. This is a sophisticated encryption and key management system.
    • HDCP (High-bandwidth Digital Content Protection) is really a form of display protection, although that use implies a form of copy protection as well. This technology restricts certain formats or resolutions from being displayed on non-compliant devices. Typically protected HD digital signals will only be routed to compliant display devices, not to recordable output ports. In this use case, only analog signals would be available at output ports.
    • Patronus – various copy protection schemes targeted at the DVD market:  anti-rip (for both burned and replicated disks) and CSS (Content Scramble System) for DTO (Download To Own)
    • CPRM (Content Protection for Recordable Media), a technology for protecting content on recordable DVDs and SD memory cards
    • CPPM (Content Protection for Pre-recorded Media), a technology for protecting content on DVD audio and other pre-recorded disks
    • CPSA (Content Protection Systems Architecture) which defines an overall framework for integration of many of the above systems
    • CPXM (Content Protection for eXtended Media) An extension of CPRM to other forms of media, most often SD memory cards and similar devices. Allows licensed content to be consumed by many devices that can load the SD card (or other storage medium)
    • CMLA (Content Management License Administration), a consortium of Intel, Nokia, Panasonic and Samsung that administers and provides key management for mobile handsets and other devices that employ the OMA (Open Mobile Alliance) spec, allowing the distribution of protected content to mobile devices.
    • DTLA (Digital Transmission Licensing Administrator) provides the administration and key management for DTCP.
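
As promised above, a toy decoder for the two CGMS CCI bits – shown purely to illustrate the signalling; real devices read these bits from the VBI or from defined locations in the digital bitstream:

```python
# Toy decoder for the 2-bit CCI (Copy Control Information) values listed earlier.
CCI = {
    0b00: "CopyFreely  (unlimited copies allowed)",
    0b01: "CopyNoMore  (one copy made already, no more allowed)",
    0b10: "CopyOnce    (one copy allowed)",
    0b11: "CopyNever   (no copies allowed)",
}

def copy_permission(cci_bits: int) -> str:
    return CCI[cci_bits & 0b11]

print(copy_permission(0b10))   # CopyOnce
```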

Home Networking – the DTCP model

  • As one of the most deployed content protection systems, a further explanation of the DTCP environment:
    • DTCP works in conjunction with other content protection technologies to provide an unbroken chain of encrypted content from content provider to the display device
    • Each piece has its own key management system and protects a portion of the distribution chain:
      • CA (Conditional Access) – cable/satellite/telco
      • DTCP – the PVR/DVR/DVD recorder
      • CPRM – recordable disks, personal computer
      • HDCP – display device

DTCP and Transmission Protection

  • One important feature of DTCP is the enabling of the so called “Broadcast Flag”
    • Accepted by the FCC as an “approved technology”, the CCI information embedded in the DTV (Digital Television) signal is used by DTCP-compliant devices to regulate the use of digitally broadcast content
    • The technology will allow free-to-air digital broadcast for live viewing while at the same time prohibit recording or retransmission of the digital signal.

DTCP and the future

  • A number of recent extensions to the original DTCP standard have been published:
    • The original DTCP standard was designed for the first digital interface implemented on set top boxes: FireWire (1394a).
    • The original standard has now been extended to 7 new types of interfaces:
      • USB
      • MOST
      • Bluetooth
      • i.Link & IDB1394 (FireWire for cars)
      • IP
      • AL (Additional Localization)
        • New restrictions to ensure all DTCP devices are in one home
      • WirelessHD

DTCP Summary

  • With probably the largest installed base of devices, DTCP is the backbone of most “home digital network content protection” schemes in use today.
    • As DTCP only protects data transmission interfaces, the other ‘partners’ (CA, CSS, CPRM, CPPM, HDCP) are all required to provide the end-to-end protection from content source to the display screen.
    • The extensions that govern IP and WirelessHD in particular allow the protection of HD content in the home.
    • The underlying design principles of DTCP are not limited by bandwidth or resolution; improved future implementations will undoubtedly keep pace with advances in content and display technology.

Underlying mechanisms that enable Technical Content Protection

  • All forms of digital content protection are comprised of two parts:
    • Some form of encryption of content in order that the content is unusable without a method of decoding the content before display, copying or retransmission
    • A repeatable and reliable method for decrypting the content for allowed use in the presence of a suitable key – the presence of which is assumed to be equivalent to a license to perform the allowed actions
    • The encryption part of the process uses well-known and proven methods from the cryptographic community that are appropriate for this task:
    • The cipher (encryption algorithm) must be robust, reliable, persistent, immutable and easily implemented
    • The encryption/decryption process must be fast
      • At a minimum must support real-time crypto at any required resolution to allow for broadcast and playback
      • Ideally should allow for significantly faster than real-time encryption to maximize the efficiency of production and distribution entities that must handle large amounts of content quickly
    • All encryption techniques use a process that can be simplified to the following:
      • Content [C] and a key [K] are inputs to an encryption process [E], which produces encrypted content [CE]
    • In a similar but inverse action, decryption uses a process:
      • Encrypted content [CE] and a key [K] are inputs to a decryption process [D], which produces a replica of the original content [C]

    • Encryption methods
      • This is a huge science in and of itself. Leaving the high mathematics behind, a form of cipher known as a symmetrical cipher is best suited for encryption of large amounts of audiovisual content.
      • It is fast, secure and can be implemented in hardware or software.
      • Many forms of symmetrical ciphers exist; the most common is a block cipher known as AES, which is currently used in 3 variants (cipher strengths): AES128, AES192 and AES256
      • AES (Advanced Encryption Standard) is approved by NIST (the National Institute of Standards and Technology) for military, government and civilian use. The 128-bit variant is more than secure enough for protecting audiovisual content, and the encryption meets the speed requirements for video (a short sketch of the encrypt/decrypt process follows this list).
    • Keys
      • Symmetrical block ciphers (such as AES128) use the principle of a “shared secret key”
      • The challenge is how to create and manage keys that can be kept secret while being used to encrypt and decrypt content in many places with devices as diverse as DVD players, PCs, set top boxes, etc.
      • In practice, this is an enormously complex process, but this has been solved and implemented in a number of different DRM environments including all DTCP-compliant devices, most content application software available on PCs, etc.
      • It is possible to revoke keys (that is, deny their future ability to decode content) if the implementation allows for that. This makes it possible for known compromised keys to no longer be able to decrypt content.
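
The E/D process described above can be sketched in a few lines using AES-128 in GCM mode via the third-party `cryptography` package. The content here is a placeholder byte string; in a real system the key [K] would be created and distributed by the licensing infrastructure described above:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

K = AESGCM.generate_key(bit_length=128)          # the shared secret key [K]
C = b"audiovisual content payload ..."           # the content [C] (placeholder bytes)

nonce = os.urandom(12)                           # per-message nonce, stored alongside CE
CE = AESGCM(K).encrypt(nonce, C, None)           # encryption process [E]: (C, K) -> CE
assert AESGCM(K).decrypt(nonce, CE, None) == C   # decryption process [D]: (CE, K) -> C
```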

Forensics

  • Forensic science (often shortened to forensics) is the application of a broad spectrum of sciences to answer questions of interest to a legal system.
    • Although technically not a form of Content Protection, the technologies associated with forensics in relation to audiovisual content (watermarking, fingerprinting, hashing, etc.) are vitally important as tools to support Legal Content Protection.
    • Without the verification and proof that Content Forensics can offer, it would be impossible to bring civil or criminal charges against parties suspected of subverting DRM agreements.
  • Watermarking
    • A method of embedding a permanent mark into content such that this mark, if recovered from content in the field, is proof that the content is either the original content or a copy of that content.
    • There are two forms of watermark:
      • Visible Watermarking, often known as a “bug” or a “burn-in”
        • This is frequently used by tv broadcasters to define ownership and copyright on material
        • Also used on screeners and other preview material where the visual detraction is secondary to rendering the content unsuitable for general use or possible resale.
        • Is subject to compromise due to:
          • Since it is visible, the presence of a watermark is known
          • Can be covered or removed without evidence of this action
      • Invisible Watermarking
        • The watermark can be patterns, glyphs or other visual information that can be recognized when looked for
        • Various visual techniques are used to render the watermarks “invisible” to the end user when watching or listening to content for entertainment.
        • Since the exact type, placement, timing and other information on embedding the watermark is known by the watermarking authority, this information is used during forensic recovery to assist in reading the embedded watermarks.
        • Frequently many versions of a watermark are used on a single source item of content, in order to narrow the distribution channel represented by a given watermark.
        • Challenges to invisible watermarking
          • Users attempting to subvert invisible watermarks have become very sophisticated, and a number of attacks are now common against embedded watermarks.
          • A high quality watermarking method must offer the following capabilities:
            • Robustness against attacks such as geometric rotation, random bending, cropping, contrast alteration, time compression/expansion and re-encoding using different codecs or bit rates.
            • Robustness against the “analog hole” is also a requirement of a high quality watermark. (The “analog hole” is a break in the security chain created by capturing a new recording of the playback of the original content, such as a camcorder in a theater.)
            • Security of the watermark against competent attacks such as image region detection, collusion (parallel comparison and averaging of watermarked materials) and repetition detection.
          • Invisible watermarking must be “invisible”
            • The watermark must not degrade the image nor be easily detectable by eye (if one is not looking for it)
            • Various algorithms are commonly used to select geometric areas of certain frames that are better suited than others to “hide” watermarks. In addition, “tube” or “sliding” techniques can be applied to move the watermark in subsequent frames as an object in the frame moves. This lessens the chance of visual detection.
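
    To make the embed/recover mechanics concrete, here is a deliberately naive sketch of an “invisible” mark: payload bits hidden in the least significant bit of the blue channel of a frame’s pixels. This is a toy for illustration only (it assumes Pillow is installed and uses hypothetical frame files); it would fail every attack listed above – re-encoding alone destroys it – whereas commercial watermarking schemes are engineered specifically for that robustness.

        # Toy least-significant-bit (LSB) watermark -- illustration only, not robust.
        # Assumes: pip install Pillow; "frame.png" is a hypothetical video frame.
        from PIL import Image

        def embed(frame_path, bits, out_path):
            img = Image.open(frame_path).convert("RGB")
            px = img.load()
            for i, bit in enumerate(bits):
                x, y = i % img.width, i // img.width
                r, g, b = px[x, y]
                px[x, y] = (r, g, (b & ~1) | bit)   # overwrite the blue channel's LSB
            img.save(out_path, "PNG")               # must stay lossless or the mark is lost

        def recover(frame_path, n_bits):
            img = Image.open(frame_path).convert("RGB")
            px = img.load()
            return [px[i % img.width, i // img.width][2] & 1 for i in range(n_bits)]

        # embed("frame.png", [1, 0, 1, 1, 0, 0, 1, 0], "frame_marked.png")
        # print(recover("frame_marked.png", 8))    # -> [1, 0, 1, 1, 0, 0, 1, 0]
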
  • Fingerprinting
    • As opposed to watermarking, fingerprinting adds no marks to the source content beforehand; instead, it measures the source content in a very precise way that allows subsequent comparison to forensically prove that the content is identical.
    • Both video and audio can be fingerprinted, but video is of more use and is more common. Audio is easily manipulated, and sufficient changes can be made to “break” a fingerprint comparison without rendering the audio unusable.
    • The video fingerprint files are quite small, and can be stored in databases and used for monitoring of internet sites, broadcasts, DVDs, etc.
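
    A toy sketch of the idea (not any commercial fingerprinting algorithm): reduce a frame to a tiny “average hash” signature, store it, and later compare signatures by Hamming distance. It assumes Pillow is installed and uses hypothetical file names; real systems sample many frames and are engineered to survive re-encoding, cropping and similar manipulation.

        # Toy content fingerprint: 8x8 "average hash" of a single frame.
        # Assumes: pip install Pillow; file names below are hypothetical.
        from PIL import Image

        def average_hash(path, size=8):
            img = Image.open(path).convert("L").resize((size, size))  # grayscale, downscale
            pixels = list(img.getdata())
            avg = sum(pixels) / len(pixels)
            # one bit per pixel: brighter or darker than the frame's average
            return "".join("1" if p > avg else "0" for p in pixels)

        def distance(fp_a, fp_b):
            # Hamming distance: a small distance suggests the same underlying content
            return sum(a != b for a, b in zip(fp_a, fp_b))

        # fp_reference = average_hash("frame_from_master.png")
        # fp_suspect   = average_hash("frame_from_suspect_copy.png")
        # print(distance(fp_reference, fp_suspect))   # near 0 -> very likely the same content
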
  • Hashing
    • In this context, cryptographic hash functions have been explored as a form of “digital fingerprint”
    • This is different from the “content fingerprinting” discussed in the previous section: a hash value is a purely numerical value derived, via a formula, from all of the bits in a digital file.
    • If the hash values of two files are the same, the files are (for all practical purposes) identical.
    • Hashing turns out to be unreliable for use as a forensic tool in this context:
      • A change of just a few bits in an entire file (such as trimming 1 second off the runtime of a movie) will cause a different hash value to be computed.
      • Essentially the same content can therefore have multiple hash values, so the hash cannot be used as forensic evidence.
      • Content fingerprinting or watermarking are superior techniques in this regard.
    • Cryptographic hashes have great value in the underlying mechanisms of technical content protection; they are just not suitable as a replacement for watermarking or fingerprinting. For example, they are used:
      • As checksums to detect accidental corruption of critical information (encrypted keys, master key blocks, etc.)
      • As part of the technology that enables “digital signatures”, a method of ensuring data has not been changed.
      • As a part of MACs (Message Authentication Codes) used to verify exchanges of privileged data.
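
    The two roles described above are easy to demonstrate: trimming even a single byte produces a completely different digest (which is why a hash fails as a content fingerprint), while identical digests prove a bit-for-bit copy (which is exactly what a checksum needs). A minimal sketch using Python's standard hashlib, with stand-in bytes for real content:

        # Cryptographic hash as checksum vs. content fingerprint
        import hashlib

        original = b"...the entire byte stream of a movie file..."   # stand-in for real content
        trimmed  = original[:-1]             # "essentially the same" content, one byte shorter

        print(hashlib.sha256(original).hexdigest())
        print(hashlib.sha256(trimmed).hexdigest())    # bears no resemblance to the first digest

        # Identical digests, by contrast, confirm two files are bit-for-bit identical --
        # useful for verifying key blocks, signatures and other critical data.
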

Social Content Protection

  • Of the three forms of Rights Management enforcement (Legal, Social, Technical), this is probably the least recognized but, if applied properly, the most effective form of enforcement
    • All the forms of Content Protection discussed overlap with each other to some extent
      • Forensics, a part of Technical protection, is what allows Legal protection to work; it provides the basis for claims.
      • Legal protection, in the form of original agreements, precedes all other forms, as Rights cannot be enforced until they are accurately described and agreed upon.
      • Social content protection is an aggregate of methods such as business policies, release strategies, pricing and distribution strategies and similar practices.

    Back to the future… what is the goal of content protection?

    • It’s really to protect the future revenues of proprietary content – to achieve the projected ROI on this asset
    • Ultimately, the most efficient method (or combination of methods) will demonstrate simplicity, low cost, wide user acceptance, ease of deployment and maintenance, and robustness in the face of current and future attempts at subversion.
    • The solution will be heterogeneous and will differentiate across various needs and groups – there is no “one size fits all” in terms of content protection.
    • Recognize the differences in content to be protected
      • Much content is ephemeral; it does not hold value for long
        • Newscasts, commercials, user-contributed content that is topical in nature, etc.
        • This content can be weakly protected, or left unprotected
      • Some content has a long lifespan and is deserving of strong protection
        • Feature movies, music, books, works of art, etc.
        • Even in this category, there will be differentiation:
          • The bottom line: assets that have a high net worth demand a higher level of protection
    • Recognize that effective content protection is a shared responsibility
      • It cannot be universally accomplished at the point of origin
      • Effective content protection involves content creators, owners, distributors (physical and online), package design, hardware and software designers and manufacturers, etc.
      • Each step must integrate successfully or a “break in the chain” can occur, which can be exploited by those that wish to subvert content protection.
    • Understand that most users see content protection as a “negative” – the various forms of social or technical content protection are perceived as “roadblocks” to the user’s enjoyment of content. Common examples:
      • Purchasing a DVD while overseas on vacation and finding it will not play in their home DVD player
      • Discovering that they have purchased the same content 3 or 4 times in order to play it on the various devices in their home, car or pocket (VHS, DVD, Blu-ray, iPod, Zune)
      • Purchasing a Blu-ray movie, playing it back on a laptop (since they don’t have a stand-alone BD player and the laptop has a BD drive), and finding it plays on the laptop screen, but when connected via DVI to their large LCD display nothing is visible and no error message is shown [in this case HDCP content protection has disallowed the digital output from the laptop, but the user thinks either their laptop or monitor is broken]
    • One of the least successful attributes of technical content protection is notifying users when content copying/display/retransmission is disallowed.
    • Understand the history and philosophy of content protection in order to get the best worldview on the full ecosystem of this issue
      • The social dilemma is this: in the past, all content was free, as we had only an oral tradition. There was no recording; the only “cost” was that of moving your eyes and ears to where the content was being created (play, song, speech).
      • In order to share content across a wider audience (and to experience content in its original form, as opposed to how uncle Harry described what he heard…) books were invented. This allowed distribution across distance, time and language. The cost of production was borne by the user (through the sale of books).
      • Eventually the concept of copyright was formed, a radical idea at the time, as it enriched content owners as well as distributors. The original reason for copyrights was to protect content creators/owners from unscrupulous distributors, not end users.
      • Similar protections were later applied to artwork, music, films, photographs, software and even inventions (in the form of patents).
      • Current patent law protects original inventions for 20 years; copyrights by individual authors survive for the life of the author plus 70 years; “works for hire” [just about all music and movies today] are protected for 95 years from publication or 120 years from creation, whichever expires first.
      • Both patents and copyrights have no value except in the face of enforcement.
      • The IPP (Intellectual Property Protection) business has grown into a multi-billion-dollar industry

Social Content Protection – New Ideas

  • The scale of the problem may not be accurately stated
  • Current “losses” claimed by content owners (whether for software, film, books or music, the issue is identical) assume that every pirated or “out of license” use would have produced the same income as a copy of the content sold at retail.
  • This is unrealistic when a majority of the world’s population has insufficient earning power to purchase content at first-world prices. For example, Indonesia, a country with high rates of DVD piracy, has an average per capita income of US$150 per month. Given the choice of a $15 legitimate DVD or a $1 pirated copy, the vast majority will either do without or purchase an illegal copy.
  • With burgeoning markets in India, China and other non-European countries, a reconsideration of content protection is in order.
  • Even in North America and Western Europe, “casual piracy” has become endemic due to high-bandwidth pipes, fast PCs and file-sharing networks. These technologies will not go away; they will only get better.
  • A different solution is required – a mix of concepts, business strategies and technology that together will provide a realistic ROI without an excessive cost.
  • Old models that are not working must be retired.
  • New “Social Content Protection” schemes to consider:
    • Differential pricing based on affordability (price localization)
    • Differential pricing based on package (multi-level packaging)
      • Top tier DVD has full clamshell, insert, bonus material
      • Low tier has basic DVD only, no bonus material, paper slipcover
    • Differential pricing based on resolution (for online)
      • Top tier is 16:9 @ 1920×1080, 5.1, etc.
      • Lower tier is 16:9 @ 720×408, stereo, etc.
    • The bottom line: pair strong technical protection with pricing tiers that match users’ economic thresholds, so that legally purchasing content presents less resistance than looking for alternatives
  • Most “alternatively supplied” content is of inferior quality; this can become a marketing advantage.
  • Although file-sharing networks and other technological ‘work-arounds’ exist today, they can be cumbersome and require a certain level of skill; many users will opt away from them if a more attractive option is presented.
  • The current economic situation will be exploited, it only remains to be seen whether that is by “alternative distributors” (aka Blackbeard) or by clever legitimate content owners and distributors.
  • The evolving industry practice of “Day and Date” releasing is another useful tactic.
  • As traditional DVD sales continue to flatten, careful consideration of alternatives to ensure an increase in legal sales will be necessary.