
Digital Security in the Cloudy & Hyper-connected world…

April 5, 2015 · by parasam

Introduction

As we inch closer to the midpoint of 2015, we find ourselves in a drastically different world of both connectivity and security. Many of us switch devices throughout the day, from phone to tablet to laptop and back again. Even in corporate workplaces, ubiquitous mobile devices are here to stay (in spite of the clamoring and frustration of many IT directors!). The efficiency and ease of use of integrated mobile and tethered devices propels many business solutions today. The various forms of cloud resources link all this together – whether personal or professional.

But this enormous change in topology has introduced very significant security implications, most of which are not well addressed by current tools, let alone by software or devices that were ‘state of the art’ only a few years ago.

What does this mean for the user – whether personal or business? How do network admins and others that must protect their networks and systems deal with these new realities? That’s the focus of the brief discussion to follow.

No More Walls…


The pace of change in the ‘Internet’ is astounding. Even seasoned professionals who work and develop in this sector struggle to keep up. Every day when I read periodicals, news, research, feeds, etc., I discover something I didn’t know the day before. The ‘technosphere’ is actually expanding faster than our collective awareness – instead of hearing that such-and-such is being thought about, or hopefully will be invented in a few years, we are told that the app or hardware already exists and has a userbase of thousands!

One of the most fundamental changes in the last few years is the transition from ‘point-to-point’ connectivity to a ‘mesh’ connectivity. Even a single device, such as a phone or tablet, may be simultaneously connected to multiple clouds and applications – often in highly disparate geographical locations. The old tried-and-true methodology for securing servers, sessions and other IT functions was to ‘enclose’ the storage, servers and applications within one or more perimeters – then protect those ‘walled gardens’ with firewalls and other intrusion detection devices.

Now that we reach out every minute, across boundaries, to remotely hosted applications, storage and processes, the very concept of perimeter protection is no longer valid or functional.

Even the Washing Machine Needs Protection

Another big challenge for today’s security paradigm is the ever-growing “Internet of Things” (IoT). As more and more everyday devices become network-enabled, from thermostats to washing machines, door locks to on-shelf merchandise sensors – an entirely new set of security issues has been created. Already the M2M (Machine to Machine) communications are several orders of magnitude greater than sessions involving humans logging into machines.

This trend is set to literally explode over the next few years, with an estimated 50 billion devices interconnected by 2020 (up from 8.7 billion in 2012) – nearly a 6x increase in just 8 years… The real headache behind this (from a security point of view) is the number of connections and sessions that each of these devices will generate. It doesn’t take much combinatorial math to see that literally trillions of simultaneous sessions will be occurring world-wide (and even in space… the ISS has recently completed upgrades to push 3Mbps channels to 300Mbps – a 100x increase in bandwidth – to support the massive data requirements of newer scientific experiments).

There is simply no way to put a ‘wall’ around this many sessions that are occurring in such a disparate manner. An entirely new paradigm is required to effectively secure and monitor data access and movement in this environment.

How Do You Make Bulletproof Spaghetti?


If you imagine the session connections from devices to other devices as strands of pasta in a boiling pot of water – constantly moving and changing in shape – and then wanted to encase each strand in an impermeable shield… well, you get the picture. There must be a better way. A number of efforts are currently underway from different researchers, startups and vendors to address this situation – but there is no ‘magic bullet’ yet, nor is there even a complete consensus on which method may best solve this dilemma.

One way to attempt to resolve this need for secure computation is to break the problem down into the two main constituents: authentication of whom/what; and then protection of the “trust” that is given by the authentication. The first part (authentication) can be addressed with multiple-factor login methods: combinations of biometrics, one-time codes, previously registered ‘trusted devices’, etc. I’ve written on these issues here earlier. The second part: what does a person or machine have access to once authenticated – and how to protect those assets if the authentication is breached – is a much thornier problem.

In fact, from my perspective the best method involves a rather drastically different way of computing in the first place – one that would not have been possible only a few years ago. Essentially what I am suggesting is a fully virtualized environment where each session instance is ‘built’ for the duration of that session; only exposes the immediate assets required to complete the transactions associated with that session; and abstracts the ‘devices’ (whether they be humans or machines) from each other to the greatest degree possible.

While this may sound a bit complicated at first, the good news is that we are already moving in that direction, in terms of computational strategy. Most large scale cloud environments already use virtualization to a large degree, and the process of building up and tearing down virtual instances has become highly automated and very, very fast.
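As a rough illustration of this idea (the names and structures here are hypothetical, not any vendor’s API), a per-session instance might expose only the intersection of what the user requested and what policy allows, and tear everything down the moment the session ends:

```python
import secrets
from contextlib import contextmanager

class SessionInstance:
    """Hypothetical per-session environment: it contains only the assets
    this user asked for AND is allowed to see -- nothing else exists in it."""
    def __init__(self, user, requested, policy):
        self.assets = set(requested) & policy.get(user, set())
        self.token = secrets.token_hex(16)   # one-time session credential

    def can_access(self, asset):
        return asset in self.assets

@contextmanager
def ephemeral_session(user, requested, policy):
    session = SessionInstance(user, requested, policy)
    try:
        yield session
    finally:
        session.assets.clear()   # teardown: the session leaves nothing behind
```

An asset that is never placed into the session simply cannot be reached from it – the session instance, not a perimeter firewall, is the unit of protection.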

In addition, for some time now the industry has been moving towards thinner and more specific apps (such as found on phones and tablets) as opposed to massive thick client applications such as MS Office, SAP and other enterprise builds that fit far more readily into the old “protected perimeter” form of computing.

In addition (and I’m not picking on a particular vendor here – this issue is a ‘fact of nature’), the Windows API model is simply no longer secure. Due to the requirement of backwards compatibility – with a time when the security threats of today were not envisioned at all – many of the APIs are full of security holes. It’s a constant game of reactively patching vulnerabilities once they are discovered. This process cannot be sustained to support the level of future connectivity and distributed processing towards which we are moving.

Smaller, lightweight apps have fewer moving parts, and therefore by their very nature are easier to implement, virtualize, protect – and replace entirely should that be necessary. To use just an example: MS Word is a powerful ‘word processor’ – which has grown to integrate and support a rather vast range of capabilities including artwork, page layout, mailing list management/distribution, etc. etc. Every instance of this app includes all the functionality, of which 90% is unused (typically) during any one session instance.

If this “app” was broken down into many smaller “applets” that called on each other as required, and were made available to the user on the fly during the ‘session’ the entire compute environment becomes more dynamic, flexible and easier to protect.
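A toy sketch of that ‘applet’ idea (the registry and applet names are invented for illustration): capabilities are registered as small, separately constructible units, and only the ones a session actually invokes ever get built:

```python
class AppletLoader:
    """Build small 'applets' lazily: an applet a session never calls is
    never constructed, shrinking both footprint and threat surface."""
    def __init__(self, factories):
        self.factories = factories   # name -> zero-argument constructor
        self.loaded = {}

    def get(self, name):
        if name not in self.loaded:
            # unknown applet names raise KeyError rather than silently loading
            self.loaded[name] = self.factories[name]()
        return self.loaded[name]

# Invented example: a 'word processor' split into independent applets
factories = {
    "spellcheck":  lambda: "spellcheck-applet",
    "mail_merge":  lambda: "mail-merge-applet",
    "page_layout": lambda: "page-layout-applet",
}
```

A session that only edits text never loads the mail-merge or page-layout code at all – the 90% of unused functionality is simply absent from the running instance.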

Lowering the Threat Surface


One of the largest security challenges of a highly distributed compute environment – such as is presented by the typical hybrid cloud / world-wide / mobile device ecosystem that is rapidly becoming the norm – is the very large ‘threat surface’ that is exposed to potential hackers or other unauthorized access.

As more and more devices are interconnected – and data is interchanged and aggregated from millions of sensors, beacons and other new entities, the potential for breaches is increased exponentially. It is mathematically impossible to proactively secure every one of these connections – or even monitor them on an individual basis. Some new form of security paradigm is required that will, by its very nature, protect and inhibit breaches of the network.

Fortunately, we do have an excellent model on which to base this new type of security mechanism: the human immune system. The ‘threat surface’ of the human body is immense when viewed at a cellular level. The number of pathogens that continually attempt to violate the human body’s systems is vastly greater than even the number of hackers and other malevolent entities in the IT world.

The conscious human brain could not even begin to monitor and react to every threat that the hordes of bacteria, viruses and other pathogens bring against the body’s ecosystem. About 99% of these defensive responses are ‘automatic’ and go unnoticed by our awareness. Only when things get ‘out of control’ – when the symptoms tell us that the normal defense mechanisms need assistance – do we notice a sore throat, an ache, or in more severe cases bleeding or chest pain. To deal with the sheer number and variety of attack vectors that are becoming endemic in today’s hyper-connected computational fabric, we need a similar set of layered defense mechanisms that act completely automatically against threats.

A Two-Phased Approach to Hyper-Security

Our new hyper-connected reality requires an equally robust and all-encompassing security model: Hyper-Security. In principle, an approach that combines the absolute minimal exposure of any assets, applications or connectivity with a corresponding ‘shielding’ of the session using techniques to be discussed shortly can provide an extremely secure, scalable and efficient environment.

Phase One – building user ‘sessions’ (whether that user is a machine or a human) that expose the least possible threat surface while providing all the functionality required during that session – has been touched on earlier in our discussion of virtualized compute environments. The big paradigm shift here is that security is ‘built in’ to the applications, data storage structures and communications interfaces at a molecular level. This is similar to the way the human body is organized: its very structure, in addition to the immune system and other proactive ‘security’ entities, naturally limits any damage caused by pathogens.

This type of architecture simply cannot be ‘baked in’ to legacy operating systems – but it’s time many of those were moved to the shelf anyway: they are becoming more and more clumsy in the face of highly virtualized environments, not to mention the extreme time and cost of maintaining these outdated systems. Having an attachment or allegiance to an OS today is as archaic as showing a preference for a Clydesdale vs a Palomino in a world of Ferraris and Teslas… Really, all that matters today is the user experience, reliability and security. How something gets done should not matter any more, even to highly technical users, any more than knowing exactly which hormones are secreted by our Islets of Langerhans (some small bits of the pancreas that produce incredibly important things like insulin). These things must work (otherwise humans get diabetes or computers fail to process) but very few of us need to know the details.

Although the concept of this distributed, minimalistic and virtualized compute environment is simple, the details can become a bit complex – I’ll reserve further discussion for a future post.

To summarize, the security provided by this new architecture is one of prevention, limitation of damage and ease of applying proactive security measures (to be discussed next).

Phase Two – the protection of the compute sessions from either internal or external threat mechanisms – also requires a novel approach suited to our new ecosystems. External threats are essentially any attempt by unauthorized users (whether humans, robots, extraterrestrials, etc.) to infiltrate and/or take data from a protected system. Internal threats are actions attempted by an authorized user that are not authorized for that particular user. An example is a rogue network admin either transferring data to an unauthorized endpoint (piracy) or destroying data.

The old-fashioned ‘perimeter defense systems’ are no longer appropriate for protection of cloud servers, mobile devices, etc. A particular example of how extensive and interconnected a single ‘session’ can be is given here:

A mobile user opens an app on their phone (say an image editing app) that is ‘free’ to the user. The user actually ‘pays’ for this ‘free’ privilege by donating a small amount of pixels (and time/focus) to some advertising. In the background, the app is providing some basic demographic info of the user, the precise physical location (in many instances), along with other data to an external “ad insertion service”.

This cloud-based service in turn aggregates the ‘avails’ (sorted by location, OS, hardware platform, app type that the user is running, etc.) and often submits these ‘avails’ [with the screen dimensions and animation capabilities] to an online auction system that bids the ‘avails’ against a pool of appropriate ads that are preloaded and ready to be served.

Typically the actual ads are not located on the same server, or even the same country, as either the ad insertion service or the auction service. It’s very common for up to half a dozen countries, clouds and other entities to participate in delivering a single ad to a mobile user.

This highly porous ad insertion system has actually become a recent favorite of hackers and con artists – even without technical breaches it’s an incredibly easy system to game. Due to the speed of the transactions and the near-impossibility of monitoring them in real time, many ‘deviations’ are possible… and common.

There are a number of ingenious methods being touted right now to help solve both the actual protection of virtualized and distributed compute environments, as well as to monitor such things as intrusions, breaches and unintended data moves – all things that traditional IT tools don’t address well at all.

I am unaware of a ‘perfect’ solution yet to address either the protection or monitoring aspects, but here are a few ideas: [NOTE: some of these are my ideas, some have been taken up by vendors as a potential product/service. I don’t feel qualified enough to judge the merits of any particular commercial product at this point, nor is the focus of this article on detailed implementations but rather concepts, so I’ll refrain from getting into specific products].

  • User endpoint devices (anything from humans’ cellphones to other servers) must be pre-authenticated (using a combination of currently well-known identification methods such as MAC address, embedded token, etc.). On top of this basic trust environment, each session is authenticated with a minimum of a two-factor logon scheme (such as biometric plus PIN, certificate plus one-time token, etc.). Once the endpoints are authenticated, a one-time-use VPN is established for each connection.
  • Endpoint devices and users are combined as ‘profiles’ that are stored as part of a security monitoring application. Each user may have more than one profile: for instance the same user may typically perform (or be allowed to perform by his/her firm’s security protocol) different actions from a cellphone as opposed to a corporate laptop. The actions that each user takes are automatically monitored / restricted. For instance, the VPNs discussed in the point above can be individually tailored to allow only certain kinds of traffic to/from certain endpoints. Actions that fall outside of the pre-established scope, or are outside a heuristic pattern for that user, can either be denied or referred for further authorization.
  • Using techniques similar to the SSL methodologies that protect and authenticate online financial transactions, different kinds of certificates can be used to permit certain kinds of ‘transactions’ (with a transaction being access to certain data, permission to move/copy/delete data, etc.) In a sense it’s a bit like the layered security that exists within the Amazon store: it takes one level of authentication to get in and place an order, yet another level of ‘security’ to actually pay for something (you must have a valid credit card that is authenticated in real time by the clearing houses for Visa/MasterCard, etc.). For instance, a user may log into a network/application instance with a biometric on a pre-registered device (such as a fingerprint on an iPhone6 that has been previously registered in the domain as an authenticated device). But if that user then wishes to move several terabytes of a Hollywood movie studio’s content to a remote storage site (!!), they would need to submit an additional certificate and PIN.
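The three bullets above can be condensed into a single (highly simplified, entirely hypothetical) authorization check: device pre-authentication first, then a two-factor minimum, then a per-profile action policy with escalation for high-risk operations:

```python
REGISTERED_DEVICES = {"mac-aa:bb:cc:dd:ee:ff"}   # pre-authenticated endpoints

PROFILES = {                                      # (user, device type) -> allowed actions
    ("alice", "phone"):  {"read"},
    ("alice", "laptop"): {"read", "write"},
}

HIGH_RISK = {"bulk_transfer", "delete_archive"}   # require an extra certificate

def authorize(user, device_id, device_type, factors, action, extra_cert=False):
    """Return 'allow', 'deny', or 'refer' (send for further authorization)."""
    if device_id not in REGISTERED_DEVICES:
        return "deny"                             # device was never registered
    if len(factors) < 2:
        return "deny"                             # two-factor minimum
    allowed = PROFILES.get((user, device_type), set())
    if action in HIGH_RISK:
        # e.g. moving terabytes to an off-site store needs a certificate + PIN
        return "allow" if extra_cert else "refer"
    return "allow" if action in allowed else "refer"
```

Note that out-of-profile actions are referred rather than flatly denied – matching the idea above that deviations from a user’s heuristic pattern can be escalated for further authorization instead of silently blocked.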

An Integrated Immune System for Data Security

[Image: virus in blood – stylised scanning electron micrograph]

The goal of a highly efficient and manageable ‘immune system’ for a hyper-connected data infrastructure is to protect against all possible threats with the least direct supervision possible. Not only is it impossible for a centralized, omniscient monitoring system to handle the incredible number of sessions that take place in even a single modern hyper-network; it is equally difficult for a single monitoring / intrusion detection device to understand and adapt to the myriad local contexts and ‘rules’ that define what is ‘normal’ and what is a ‘breach’.

The only practical method to accomplish the implementation of such an ‘immune system’ for large hyper-networks is to distribute the security and protection infrastructure throughout the entire network. Just as in the human body, where ‘security’ begins at the cellular level (with cell walls allowing only certain compounds to pass – depending on the type and location of each cell); each local device or application must have as part of its ‘cellular structure’ a certain amount of security.

As cells become building blocks for larger structures and eventually organs or other systems, the same ‘layering’ model can be applied to IT structures so the bulk of security actions are taken automatically at lower levels, with only issues that deviate substantially from the norm being brought to the attention of higher level and more centralized security detection and action systems.
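One way to picture that layering (the severity scale and thresholds are invented for illustration): each layer absorbs everything below its limit, and only the rare, severe deviation climbs toward a central security system:

```python
def handling_layer(severity, thresholds=(2, 5, 8)):
    """Return which layer handles an event: 0 = local 'cell', rising to
    len(thresholds) = central system. Most events never leave layer 0."""
    for layer, limit in enumerate(thresholds):
        if severity <= limit:
            return layer
    return len(thresholds)

def triage(events, thresholds=(2, 5, 8)):
    """Count how many events each layer absorbs."""
    counts = [0] * (len(thresholds) + 1)
    for severity in events:
        counts[handling_layer(severity, thresholds)] += 1
    return counts
```

The point of the sketch is the shape of the result: the overwhelming majority of events are resolved automatically at the lowest layer, just as most pathogens never reach conscious awareness.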

Another issue to be aware of: over-reporting. It’s all well and good to log certain events… but who or what is going to review millions of lines of logs if every event that deviates even slightly from some established ‘norm’ is recorded? And even then, that review will only be looking in the rearview mirror… The human body doesn’t generate any logs at all and yet manages to more or less handle the security for 37.2 trillion cells!

That’s not to say that no logs should be kept – they can be very useful for understanding breaches and what can be improved in the future – but the logs should be designed with that purpose in mind and recycled as appropriate.

Summary

In this very brief overview we’ve discussed some of the challenges and possible solutions to the very different security paradigm that we now have due to the hyper-connected and diverse nature of today’s data ecosystems. As the number of ‘unmanned’ devices, sensors, beacons and other ‘things’ continues to grow exponentially, along with the fact that most of humanity will soon be connected to some degree to the ‘Internet’, the scale of the security issue is truly enormous.

A few ideas and thoughts that can lead to effective, scalable and affordable solutions have been discussed – many of these are new and works in progress but offer at least a partially viable solution as we work forward. The most important thing to take away here is an awareness of how things must change, and to keep asking questions and not assume that the security techniques that worked last year will keep you safe next year.

CONTENT PROTECTION – Methods and Practices for protecting audiovisual content

January 25, 2012 · by parasam

[Note: I originally wrote this in early 2009 as an introduction to the landscape of content protection. The audience at that time consisted of content owners and producers (studios, etc.) who had (and have!) concerns over illegal reproduction and distribution of their copyrighted material – i.e. piracy. With this issue only becoming bigger, and as a follow-up to my recent article on proposed piracy legislation (SOPA/PIPA), I felt it timely to reprint this here. Although a few small technical details have been added to the ecosystem, the primer is essentially as accurate and germane today as it was 3 years ago. While this is somewhat technical, I believe it will be of interest to this wider audience.]

What is Content Protection?

  • The term ‘Copy Protection’ is often used to describe the technical aspect of Content Protection.
  • Copy Protection is a limiting and often inaccurate term, as technical forms of Content Protection often include more aspects than just limiting or prohibiting copies of content.
  • Other forms of Technical Content Protection include:
    • Display Protection
      • Restrictions on type, resolution, etc. of display devices
    • Transmission Protection
      • Restrictions on retransmission or forwarding of content
    • Fingerprinting, Watermarks, etc.
      • Forensic marks to allow tracing of versions of content

Content Protection is the enforcement of DRM

  • Digital Rights Management (DRM)
    • A more accurate term would be ‘Content Rights Management’ (CRM), as this describes what is actually being managed [the word ‘digital’ is now so overused that we see digital shoes (with LEDs), digital batteries, etc.]
    • Simply put, DRM is a set of policies that describe how content may be used in alignment with contractual agreements to ensure content owners a fair return on their investment in creating and distributing their content.
    • These policies can be enforced by legal, social and technical means.
      • Legal enforcement is almost always ex post facto
        • Civil and criminal penalties brought against parties suspected of violating DRM policies
        • Typically used in circumstances involving significant financial losses, due to the time and costs involved
        • The most reactive method – it never prevents policy misuse in the first place
    • Social enforcement is a complex array of measures that will be discussed later in this article
    • Technical enforcement is what most of us think about when we mention ‘Content Protection’ or ‘Copy Protection’
      • This is often a very proactive form of rights enforcement, as it can prevent misuse in the first place
      • It has costs, both in terms of actual cost of implementation and often a “social cost” in terms of customer alienation
      • Many forms of technical enforcement are perceived by customers as unfairly limiting their ‘fair use’ of content they have legally obtained

Technical Content Protection

  • To be effective, must have these attributes:
    • DRM policies must be well defined and be expressible with rules or formulas that are mechanically reproducible
    • Implementations should match the environment in terms of complexity, cost, reliability and lifespan
      • Protecting Digital Cinema content is a different process than protecting a low-resolution streaming internet file
      • The costs of these techniques should be included in mastering or distribution, as consumers see no “value” in content protection – it is not a ‘feature’ they will pay for
      • There are challenges in the disparate environments in which content is transmitted and viewed
        • CE (Consumer Electronics) has a very different viewpoint (and price point) on content protection than the PC industry
    • A balance is required in terms of the level of effectiveness vs. cost and perceived “hassle factor”
      • A “layered defense” and the concept of using technical content protection as a significant “speed bump” as opposed to a “Berlin Wall” will be most efficient
      • A combination of all three content protection methods (legal, social and technical) will ultimately provide the best overall protection at a realistic cost
      • The goal should not be to prohibit any possible breach of DRM policy, but rather to maximize the ROI to the content owner/distributor at an acceptable cost
      • All technical content protection methods will eventually fail
        • As general technology and computational power moves forward, techniques that were “unbreakable” a few years in the past will be defeated in the future
      • The technical protection mechanisms and algorithms are highly asymmetrical in terms of “cat & mouse” – i.e. there are a few hundred developers and potentially millions or tens of millions of users working to defeat these systems
    • The methods employed should work across international boundaries and should to the greatest degree possible be agnostic to language, culture, custom and other localization issues
    • Any particular deployment of a content protection system (usually a combination of protected content and a licensed playback mechanism) must be persistent, particularly in the consumer electronics environment
      • For example, users will expect DVDs to play correctly in both PCs and DVD player appliances for many years to come

Challenges for Technical Content Protection

  • Ubiquitous use
    • Users desire to consume content on a variety of devices of their choosing
      • “Anytime, anywhere, anyhow”
    • New technologies often outpace Rights Management policies
      • Example: a DVD region-coded for North America cannot be played in Europe, but the same content can be purchased via iTunes, downloaded to an iPod and played anywhere in the world
    • How to define “home use” in the face of Wi-Fi, WiMAX, IPsec tunneling to remote servers, etc.
  • Persistent Use
    • Technical schemes must continue to work long after original deployment
    • In the CE (Consumer Electronics) environment older technology seldom dies; it is “handed down” the economic ladder. Just as DC-3 airplanes are still routinely hauling cargo in South America and Alaska some 50 years after the end of their design lifetime, VHS and DVD players will be expected to work decades from now
    • Particular care must be taken with some newer schemes that are contemplating the need for a network connection – that may be very difficult to make persistent
  • Adaptable Use
    • This is one of the more difficult technical issues to overcome simply
    • The basic premise is the user legally purchases content, then desires to consume it personally across a large inventory of playback devices
      • TV
      • PC
      • iPod
      • Cell phone
      • Portable DVD/BD player
      • Networked DVD/BD player in the home
    • How do both Rights Management policies and technical content protection handle this use case?
    • This is a currently evolving area and will require adaptation by both content owners, content distributors as well as content protection designers and device manufacturers
    • What will the future bring?
      • One protection scheme for enforcing “home network use” analyzes the “hop time” [how long it takes a packet to reach a destination] – a long hop time implies an “out of home” destination, and such use would be disallowed. But how does this stop users in a peer-to-peer wireless environment who are close together (on a plane, at a party)?
      • DVD region codes were an interesting discussion when players were installed in the ISS (International Space Station)
      • A UK company (Techtronics) “de-regionalized” a Sony unit…
      • Technologies such as MOST (Media Oriented Systems Transport) – the new network system for vehicles
      • Sophisticated retransmission systems – such as SlingBox
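The “hop time” scheme in the first bullet above is trivial to sketch – and the sketch also exposes the flaw noted there (the 7 ms budget is a made-up number, not any standard’s value):

```python
IN_HOME_RTT_BUDGET_MS = 7.0   # hypothetical cutoff for an "in-home" link

def looks_in_home(measured_rtt_ms, budget_ms=IN_HOME_RTT_BUDGET_MS):
    """Allow playback only if the round-trip time suggests a local network."""
    return measured_rtt_ms <= budget_ms
```

Two strangers sitting a few rows apart on a plane will measure a round-trip time well under any plausible budget, so the check passes even though they are plainly not in the same “home”.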

Technical Content Protection Methods

  • Content protection schemes may be divided into several classes
    • Copy Protection – mechanisms to prevent or selectively restrict the ability of a user to make copies of the content
    • Display Protection – mechanisms to control various parameters of how content may be displayed
    • Transmission Protection – mechanisms to prevent or selectively restrict the ability of a user to retransmit content, or copy content that has been received from a transmission that is licensed for viewing but not recording
  • Legacy analog methods
    • APS (Analog Protection System) often known by its original developer name (Macrovision). Also known as Copyguard. This is a copy protection scheme primarily targeted at preventing VHS tape copies from VHS or DVD original content.
    • CGMS-A (Copy Generation Management System – Analog) is a copy protection scheme for analog television signals. It is used by certain TV broadcasts, PVRs, DVRs, DVD players/recorders, D-VHS, STBs, Blu-ray and recent versions of TiVo. 2 bits in the VBI (Vertical Blanking Interval) carry CCI (Copy Control Information) that signals to the downstream device what it can copy:
      • 00    CopyFreely  (unlimited copies allowed)
      • 01    CopyNoMore  (one copy made already, no more allowed)
      • 10    CopyOnce  (one copy allowed)
      • 11    CopyNever  (no copies allowed)
  • Current digital methods
    • CGMS-D (Copy Generation Management System – Digital). Basically the digital form of CGMS-A with the CCI bits inserted into the digital bitstream in defined locations instead of using analog vertical blanking real estate.
    • DTCP (Digital Transmission Content Protection) is designed for the “digital home” environment. This scheme links technologies such as BD/DVD player/recorders, SD/HD televisions, PCs, portable media players, etc. with encrypted channels to enforce Rights Management policies. Also known as “5C” for the 5 founding companies.
    • AACS (Advanced Access Content System), the copy protection scheme used by Blu-ray (BD) and other digital content distribution mechanisms. This is a sophisticated encryption and key management system.
    • HDCP (High-bandwidth Digital Content Protection) is really a form of display protection, although that use implies a form of copy protection as well. This technology restricts certain formats or resolutions from being displayed on non-compliant devices. Typically protected HD digital signals will only be routed to compliant display devices, not to recordable output ports. In this use case, only analog signals would be available at output ports.
    • Patronus – various copy protection schemes targeted at the DVD market:  anti-rip (for both burned and replicated disks) and CSS (Content Scramble System) for DTO (Download To Own)
    • CPRM (Content Protection for Recordable Media), a technology for protecting content on recordable DVDs and SD memory cards
    • CPPM (Content Protection for Pre-recorded Media), a technology for protecting content on DVD audio and other pre-recorded disks
    • CPSA (Content Protection Systems Architecture) which defines an overall framework for integration of many of the above systems
    • CPXM (Content Protection for eXtended Media) An extension of CPRM to other forms of media, most often SD memory cards and similar devices. Allows licensed content to be consumed by many devices that can load the SD card (or other storage medium)
    • CMLA (Content Management License Administration), a consortium of Intel, Nokia, Panasonic and Samsung that administers and provides key management for mobile handsets and other devices that employ the OMA (Open Mobile Alliance) spec, allowing the distribution of protected content to mobile devices.
    • DTLA (Digital Transmission Licensing Administrator) provides the administration and key management for DTCP.
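To make the CCI signaling described above concrete, here is a minimal sketch in Python (hypothetical names, not any real device's firmware) of how a downstream recorder might interpret the 2-bit CCI values listed under CGMS-A:

```python
# Hypothetical sketch (not any real device's firmware) of how a compliant
# recorder might interpret the 2-bit CCI values carried in CGMS-A/CGMS-D.
CCI_STATES = {
    0b00: "CopyFreely",   # unlimited copies allowed
    0b01: "CopyNoMore",   # one copy made already, no more allowed
    0b10: "CopyOnce",     # one copy allowed
    0b11: "CopyNever",    # no copies allowed
}

def may_copy(cci: int) -> bool:
    """Return True if a compliant recorder may make a copy right now."""
    return CCI_STATES[cci & 0b11] in ("CopyFreely", "CopyOnce")

def state_after_copy(cci: int) -> int:
    """CopyOnce content is re-marked CopyNoMore once a copy is made."""
    return 0b01 if cci == 0b10 else cci
```

Note how the generational control works: a compliant device that copies CopyOnce content re-marks the copy CopyNoMore, so second-generation copies are refused.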

Home Networking – the DTCP model

  • As one of the most widely deployed content protection systems, DTCP merits a further explanation of its environment:
    • DTCP works in conjunction with other content protection technologies to provide an unbroken chain of encrypted content from content provider to the display device
    • Each piece has its own key management system and protects a portion of the distribution chain:
      • CA (Conditional Access) – cable/satellite/telco
      • DTCP – the PVR/DVR/DVD recorder
      • CPRM – recordable disks, personal computer
      • HDCP – display device

DTCP and Transmission Protection

  • One important feature of DTCP is the enabling of the so-called “Broadcast Flag”
    • Accepted by the FCC as an “approved technology”, the CCI information embedded in the DTV (Digital Television) signal is used by DTCP-compliant devices to regulate the use of digitally broadcast content
    • The technology allows free-to-air digital broadcast for live viewing while prohibiting recording or retransmission of the digital signal.

DTCP and the future

  • A number of recent extensions to the original DTCP standard have been published:
    • The original DTCP standard was designed for the first digital interface implemented on set top boxes: FireWire (1394a).
    • The original standard has now been extended to 7 new types of interfaces:
      • USB
      • MOST
      • Bluetooth
      • i.Link & IDB1394 (FireWire for cars)
      • IP
      • AL (Additional Localization)
        • New restrictions to ensure all DTCP devices are in 1 home
      • WirelessHD

DTCP Summary

  • With probably the largest installed base of devices, DTCP is the backbone of most “home digital network content protection” schemes in use today.
    • As DTCP only protects data transmission interfaces, the other ‘partners’ (CA, CSS, CPRM, CPPM, HDCP) are all required to provide the end-to-end protection from content source to the display screen.
    • The extensions that govern IP and WirelessHD in particular allow the protection of HD content in the home.
    • The underlying design principles of DTCP are not limited by bandwidth or resolution; improved future implementations will undoubtedly keep pace with advances in content and display technology.

Underlying mechanisms that enable Technical Content Protection

  • All forms of digital content protection are comprised of two parts:
    • Some form of encryption, so that the content is unusable without a method of decoding it before display, copying or retransmission
    • A repeatable and reliable method for decrypting the content for allowed use in the presence of a suitable key – the presence of which is assumed to be equivalent to a license to perform the allowed actions
    • The encryption part of the process uses well-known and proven methods from the cryptographic community that are appropriate for this task:
    • The cipher (encryption algorithm) must be robust, reliable, persistent, immutable and easily implemented
    • The encryption/decryption process must be fast
      • At a minimum, the system must support real-time crypto at any required resolution to allow for broadcast and playback
      • Ideally should allow for significantly faster than real-time encryption to maximize the efficiency of production and distribution entities that must handle large amounts of content quickly
    • All encryption techniques use a process that can be simplified to the following:
      • Content [C] and a key [K] are inputs to an encryption process [E], which produces encrypted content [CE]
    • In a similar but inverse action, decryption uses a process:
      • Encrypted content [CE] and a key [K] are inputs to a decryption process [D], which produces a replica of the original content [C]
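As a toy illustration of the [C] + [K] → [CE] model above, here is a Python sketch using a hash-based XOR keystream. This is emphatically NOT a real cipher and offers no actual security; it exists only to show the symmetry between the encryption process [E] and the decryption process [D] under a shared key (real systems use AES, as described below):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 of key + counter, repeated. NOT a real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(content: bytes, key: bytes) -> bytes:
    """E: content [C] and key [K] in, encrypted content [CE] out."""
    return bytes(c ^ k for c, k in zip(content, keystream(key, len(content))))

# D is the inverse process; with an XOR stream it is the same operation.
decrypt = encrypt

C = b"some audiovisual content"
K = b"shared secret key"
CE = encrypt(C, K)
assert CE != C                  # content is unusable without the key
assert decrypt(CE, K) == C      # the key recovers a replica of [C]
```

Because decryption is the mirror of encryption under the same shared key, key secrecy and key management dominate the design of real DRM systems, as the “Keys” section below discusses.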

    • Encryption methods
      • This is a huge science in and of itself. Leaving the high mathematics behind, a form of cipher known as a symmetrical cipher is best suited for encryption of large amounts of audiovisual content.
      • It is fast, secure and can be implemented in hardware or software.
      • Many forms of symmetrical cipher exist; the most common is a block cipher known as AES, currently used in 3 variants (cipher strengths): AES128, AES192 and AES256
      • AES (Advanced Encryption Standard) is approved by NIST (the National Institute of Standards and Technology) for military, government and civilian use. The 128-bit variant is more than secure enough for protecting audiovisual content, and the encryption meets the speed requirements for video.
    • Keys
      • Symmetrical block ciphers (such as AES128) use the principle of a “shared secret key”
      • The challenge is how to create and manage keys that can be kept secret while being used to encrypt and decrypt content in many places with devices as diverse as DVD players, PCs, set top boxes, etc.
      • In practice, this is an enormously complex process, but this has been solved and implemented in a number of different DRM environments including all DTCP-compliant devices, most content application software available on PCs, etc.
      • It is possible to revoke keys (that is, deny their future ability to decode content) if the implementation allows for that. This makes it possible for known compromised keys to no longer be able to decrypt content.
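The revocation idea can be reduced to a very small sketch (hypothetical names; real systems such as AACS use far more sophisticated key-block structures than a flat list):

```python
# Hypothetical sketch: a compliant player consults a revocation list
# before using a device key to decrypt content. Entries are invented.
REVOKED_KEY_IDS = {"device-0042", "device-0913"}

def can_decrypt(device_key_id: str) -> bool:
    """A compromised (revoked) key is denied future decryption."""
    return device_key_id not in REVOKED_KEY_IDS
```

In practice the revocation data is typically distributed along with new content or firmware updates, so known-compromised keys stop working as new releases arrive.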

Forensics

  • Forensic science (often shortened to forensics) is the application of a broad spectrum of sciences to answer questions of interest to a legal system.
    • Although technically not a form of Content Protection, the technologies associated with forensics in relation to audiovisual content (watermarking, fingerprinting, hashing, etc.) are vitally important as tools to support Legal Content Protection.
    • Without the verification and proof that Content Forensics can offer, it would be impossible to bring civil or criminal charges against parties suspected of subverting DRM agreements.
  • Watermarking
    • A method of embedding a permanent mark into content such that this mark, if recovered from content in the field, is proof that the content is either the original content or a copy of that content.
    • There are two forms of watermark:
      • Visible Watermarking, often known as a “bug” or a “burn-in”
        • This is frequently used by tv broadcasters to define ownership and copyright on material
        • Also used on screeners and other preview material where the visual detraction is secondary to rendering the content unsuitable for general use or possible resale.
        • Is subject to compromise due to:
          • Since it is visible, the presence of a watermark is known
          • Can be covered or removed without evidence of this action
      • Invisible Watermarking
        • The watermark can be patterns, glyphs or other visual information that can be recognized when looked for
        • Various visual techniques are used to render the watermarks “invisible” to the end user when watching or listening to content for entertainment.
        • Since the exact type, placement, timing and other information on embedding the watermark is known by the watermarking authority, this information is used during forensic recovery to assist in reading the embedded watermarks.
        • Frequently many versions of a watermark are used on a single source item of content, in order to narrow the distribution channel represented by a given watermark.
        • Challenges to invisible watermarking
          • Users attempting to subvert invisible watermarks have become very sophisticated and a number of attacks are now common against embedded watermarks.
          • A high-quality watermarking method must offer the following capabilities:
            • Robustness against attacks such as geometric rotation, random bending, cropping, contrast alteration, time compression/expansion and re-encoding using different codecs or bit rates.
            • Robustness against the “analog hole” is also a requirement of a high-quality watermark. (The “analog hole” is a hole in the security chain that could be broken by taking a new video of the playback of the original content, such as a camcorder in a theater.)
            • Security of the watermark against competent attacks such as image region detection, collusion (parallel comparison and averaging of watermarked materials) and repetition detection.
          • Invisible watermarking must be “invisible”
            • The watermark must not degrade the image nor be easily detectable by eye (if one is not looking for it)
            • Various algorithms are commonly used to select geometric areas of certain frames that are better suited than others to “hide” watermarks. In addition, “tube” or “sliding” techniques can be applied to move the watermark in subsequent frames as an object in the frame moves. This lessens the chance of visual detection.
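To illustrate the embedding principle (and only the principle; this is a toy, not any production algorithm), here is a least-significant-bit watermark on hypothetical 8-bit luma samples, sketched in Python:

```python
# Toy invisible watermark (illustration only, not a production algorithm):
# embed one watermark bit in the least-significant bit of each 8-bit
# luma sample. Real schemes use robust transform-domain techniques.
def embed(pixels, bits):
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite LSB with watermark bit
    return out

def extract(pixels, n_bits):
    return [p & 1 for p in pixels[:n_bits]]

frame = [200, 201, 90, 88, 150, 151]     # hypothetical luma samples
mark = [1, 0, 1, 1]
marked = embed(frame, mark)
assert extract(marked, 4) == mark
# The mark is visually negligible: each sample changes by at most 1 of 255.
assert all(abs(a - b) <= 1 for a, b in zip(frame, marked))
```

This also shows why such a naive mark fails the robustness tests listed above: re-encoding or contrast changes destroy the LSBs, which is why production watermarks embed redundantly in perceptually robust features instead.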
  • Fingerprinting
    • As opposed to watermarking, fingerprinting makes no prior “marks” to the source content, but rather measures the source content in a very precise way that allows subsequent comparison to forensically prove that the content is identical.
    • Both video and audio can be fingerprinted, but video is of more use and is more common. Audio is easily manipulated, and sufficient changes can be made to “break” a fingerprint comparison without rendering the audio unusable.
    • The video fingerprint files are quite small, and can be stored in databases and used for monitoring of internet sites, broadcasts, DVDs, etc.
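A toy sketch of the fingerprinting idea in Python (real systems extract robust perceptual features; this only shows why a fingerprint can survive small changes that would break a hash):

```python
# Toy fingerprint: coarse block averages of a signal, compared with a
# tolerance so small perturbations still match. Real fingerprinting
# uses robust perceptual features; this only illustrates the idea.
def fingerprint(samples, blocks=4):
    n = len(samples) // blocks
    return [sum(samples[i * n:(i + 1) * n]) / n for i in range(blocks)]

def matches(fp_a, fp_b, tolerance=5.0):
    return all(abs(a - b) < tolerance for a, b in zip(fp_a, fp_b))

original = [10, 12, 11, 50, 52, 51, 90, 91, 92, 30, 31, 29]
reencoded = [s + 1 for s in original]      # slight quality loss on re-encode
assert matches(fingerprint(original), fingerprint(reencoded))
```

Mild re-encoding or level shifts change a cryptographic hash completely (see the Hashing section) but move these coarse features only slightly, which is why fingerprint comparison is done against a tolerance rather than for exact equality.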
  • Hashing
    • In this context, cryptographic hash functions have been explored as a form of “digital fingerprint”
    • This is different from the “content fingerprinting” discussed in the previous section: a hash value is a purely numerical value derived via formula from an analysis of all the bits in a digital file.
    • If the hash values of two files are the same, the files are identical.
    • Hashing turns out to be unreliable for use as a forensic tool in this context:
      • A change of just a few bits in an entire file (such as trimming 1 second off the runtime of a movie) will cause a different hash value to be computed.
        • Essentially the same content can have multiple hash values, therefore the hash cannot be used as forensic evidence.
        • Content fingerprinting or watermarking are superior techniques in this regard.
    • Cryptographic hashes do have great value in the underlying mechanisms of technical content protection; they are just not suitable as an alternative to watermarking or fingerprinting. They are used:
      • As checksums to guard against accidental data corruption of critical information (encrypted keys, master key blocks, etc.)
      • As part of the technology that allows “digital signatures”, a method of ensuring data has not been changed.
      • As a part of MACs (Message Authentication Codes) used to verify exchanges of privileged data.
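Both sides of this point, the fragility that makes hashes unsuitable as content fingerprints and the bit-exactness that makes them ideal checksums, can be seen in a few lines of Python using only the standard library:

```python
import hashlib

movie = b"\x00\x01\x02" * 1000        # stand-in for a large media file
trimmed = movie[:-3]                  # "trim one second off the runtime"

h_full = hashlib.sha256(movie).hexdigest()
h_trim = hashlib.sha256(trimmed).hexdigest()
assert h_full != h_trim   # essentially the same content, different hash

# As a checksum, that same sensitivity is exactly the desired property:
assert hashlib.sha256(movie).hexdigest() == h_full   # bit-exact copy verifies
```

Any change at all produces a different digest, so a matching hash proves a file is a bit-exact copy, while a trivially edited copy of the same content escapes detection entirely.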

Social Content Protection

  • Of the three forms of Rights Management enforcement (Legal, Social, Technical), this is probably the least recognized but, if applied properly, the most effective form of enforcement
    • All the forms of Content Protection discussed overlap with each other to some extent
      • Forensics, a part of Technical protection, is what allows Legal protection to work; it provides the basis for claims.
      • Legal protection, in the form of original agreements, precedes all other forms, as Rights cannot be enforced until they are accurately described and agreed upon.
      • Social content protection is an aggregate of methods such as business policies, release strategies, pricing and distribution strategies and similar practices.

    Back to the future… what is the goal of content protection?

    • It’s really to protect the future revenues of proprietary content – to achieve the projected ROI on this asset
    • Ultimately, the most efficient method (or combination of methods) will demonstrate simplicity, low cost, wide user acceptance, ease of deployment and maintenance, and robustness in the face of current and future attempts at subversion.
    • The solution will be heterogeneous and will differentiate across various needs and groups – there is no “one size fits all” in terms of content protection.
    • Recognize the differences in content to be protected
      • Much content is ephemeral; it does not hold value for long
        • Newscasts, commercials, user-contributed content that is topical in nature, etc.
        • This content can be weakly protected, or left unprotected
      • Some content has a long lifespan and is deserving of strong protection
        • Feature movies, music, books, works of art, etc.
        • Even in this category, there will be differentiation:
          • Bottom line is assets that have a high net worth demand a higher level of protection
    • Recognize that effective content protection is a shared responsibility
    • It cannot be universally accomplished at the point of origin
    • Effective content protection involves content creators, owners, distributors (physical and online), package design, hardware and software designers and manufacturers, etc.
    • Each step must integrate successfully or a “break in the chain” can occur, which can be exploited by those that wish to subvert content protection.
    • Understand that most users see content protection as a “negative” – the various forms of social or technical content protection are perceived as “roadblocks” to the user’s enjoyment of content. For example:
    • Purchasing a DVD while overseas on vacation and finding it will not play in their home DVD player;
    • Discovering that they have purchased the same content 3 or 4 times in order to play in various devices in their home, car, person (VHS, DVD, Blu-ray, iPod, Zune)
    • Purchasing a Blu-ray movie, playing it back on the user’s laptop (since they don’t have a stand-alone BD player and the laptop has a BD drive), and finding that it plays on the laptop screen, but when connected via DVI to their large LCD display nothing is visible and no error message is shown. [In this case HDCP content protection has disallowed the digital output from the laptop, but the user thinks either their laptop or monitor is broken.]
    • One of the least successful attributes of technical content protection is notifying users when content copying/display/retransmission is disallowed.
    • Understand the history and philosophy of content protection in order to get the best worldview on the full ecosystem of this issue
      • The social dilemma is this:  in the past, all content was free as we had only an oral tradition. There was no recording, the only “cost” was that of moving your eyes and ears to where the content was being created (play, song, speech).
      • In order to share content across a wider audience (and to experience content in its original form, as opposed to how uncle Harry described what he heard…) books were invented. This allowed distribution across distance, time and language. The cost of producing was borne by the user (sale of books).
      • Eventually the concept of copyright was formed, a radical idea at the time, as it enriched content owners as well as distributors. The original reason for copyrights was to protect content creators/owners from unscrupulous distributors, not end users.
      • Similar protections were later applied to artwork, music, films, photographs, software and even inventions (in the form of patents).
      • Current patent law protects original inventions for 20 years, copyrights by individual authors survive for the life of the author plus 70 years, “works for hire” [just about all music and movies today] are protected for 120 years from creation.
      • Both patents and copyrights have no value except in the face of enforcement.
      • The IPP (Intellectual Property Protection) business has grown into a multi-billion dollar industry

Social Content Protection – New Ideas

  • The scale of the problem may not be accurately stated
  • Current “losses” claimed by content owners (whether software, film, books or music, the issue is identical) assume every pirated or “use out of license” occurrence should have produced the equivalent income as if a copy of the content had been sold at retail.
  • This is unrealistic with a majority of the world’s population having insufficient earning power to purchase content at 1st world prices. For example, Indonesia, a country with high rates of DVD piracy, has an average per capita income of US$150 per month. Given the choice of a $15 legitimate DVD or a $1 pirated copy the vast majority will either do without or purchase an illegal copy.
  • With burgeoning markets in India, China and other non-European countries, a reconsideration of content protection is in order.
  • Even in North America and Western Europe, “casual piracy” has become endemic due to high-bandwidth pipes, fast PCs, and file sharing networks. These technologies will not go away; they will only get better.
  • A different solution is required – a mix of concepts, business strategies and technology that together will provide a realistic ROI without an excessive cost.
  • Old models that are not working must be retired.
  • New “Social Content Protection” schemes to consider:
    • Differential pricing based on affordability (price localization)
    • Differential pricing based on package (multi-level packaging)
      • Top tier DVD has full clamshell, insert, bonus material
      • Low tier has basic DVD only, no bonus material, paper slipcover
    • Differential pricing based on resolution (for online)
      • Top tier is 16:9 @ 1920×1080, 5.1, etc.
      • Lower tier is 16:9 @ 720×408, stereo, etc.
    • The bottom line: content needs strong technical protection matched with variable economic thresholds that meet user thresholds, so that users find less resistance in legally purchasing content than in looking for alternatives
  • Most “alternatively supplied” content is of inferior quality; this can become a marketing advantage.
  • Although file-sharing networks and other technological ‘work-arounds’ exist today, they can be cumbersome and require a certain level of skill; many users will opt away from them if a more attractive option is presented.
  • The current economic situation will be exploited, it only remains to be seen whether that is by “alternative distributors” (aka Blackbeard) or by clever legitimate content owners and distributors.
  • The evolving industry practice of “Day and Date” releasing is another useful tactic.
  • As traditional DVD sales continue to flatten, careful consideration of alternatives to ensure an increase in legal sales will be necessary.

Comments on SOPA and PIPA

January 23, 2012 · by parasam

The Stop Online Piracy Act (SOPA) and Protect Intellectual Property Act (PIPA) have received much attention recently. As is often the case with large-scale debate on proposed legislation, the facts and underlying issues can be obscured by emotion and shallow sound-bites. The issues are real but the current proposals to solve the problem are reactive in nature and do not fully address the fundamental challenge.

[Disclaimer:  I currently am employed by Technicolor, a major post-production firm that derives substantial income from the motion-picture industry and associated content owners / distributors. These entities, as well as my employer itself, experience tangible losses from piracy and other methods of intellectual property theft. However, the comments that follow are my personal opinions and do not reflect in any way the position of my employer or any other firm with which I do business.]

For those that need a brief introduction to these two bills that are currently in legislative process:  both bills are similar, and – if enacted – would allow enforcement of the following actions to reduce piracy of goods and services offered via the internet, primarily from off-shore companies.

  1. In one way or another, US-based Internet Service Providers (ISPs) would be required to block the links to any foreign-based server entity that had been identified as infringing on copyrighted material.
  2. Payment providers, advertisers and search engines would be required to cease doing business with foreign-based server sites that infringed on copyrighted material.

The intent behind this legislation is to block access to the sites for US-based consumers, and to remove or substantially reduce the economic returns that could be generated from US-based consumers on behalf of the offending web sites.

For further details on the bills, with some fairly objective comments on both the pros and cons of the bills, check this link. [I have no endorsement of this site, just found it to be reasonable and factual when compared with the wording of the bills themselves.]

The issues surrounding “piracy” (aka theft of intellectual or physical property) are complex. The practice of piracy has been with us since inter-cultural commerce began, with the first documented case being the exploits of the Sea Peoples who threatened the Aegean and Mediterranean seas in the 14th century BC.

Capture of Blackbeard

With the historical definition of piracy constrained to theft ‘on the high seas’ – i.e. areas of ocean that are international, or beyond the jurisdiction of any one nation-state – the extension of the term ‘piracy’ to describe theft based within the international ocean of the internet is entirely appropriate.

While the SOPA and PIPA bills are focused on ‘virtual’ property (movies, software, games and other forms of property that can be downloaded from the internet), modern piracy also affects many physical goods, from oil and other raw materials seized by Somali pirates off the east coast of Africa to stolen or counterfeit perfume, clothing and other tangibles offered for sale over the internet. The worst form of piracy today takes the form of human kidnapping on the high seas for ransom. More than 1,100 people were kidnapped by pirates in 2010, with over 300 people currently being held hostage for ransom by pirates at the time of this article (Jan 2012). The larger issue of piracy is of major international concern, and will require proactive and persistent efforts to mitigate this threat.

While the solutions brought forward by these two bills are well-intentioned, they are reactive in nature and fall short of a practical solution. In addition, they suffer from the same heavy-handed methods that often accompany legislative attempts to modify human behavior. Without regard to any of the underlying issues, and taking no sides in terms of this commentary, governmental attempts to legislate alcohol and drug consumption, reproductive behavior and cohabitation lifestyles have all been either outright failures or fraught with difficulty and have produced little or none of the desired results.

Each side in this current debate has exaggerated both the risks and rewards of the proposed legislation. From the content owner’s side the statements of financial losses are overblown and are in fact very difficult to quantify. One of the most erroneous bases for financial computation of losses is the assumption that every pirated transaction would have been money that the studio or other content owner would have received if the content had been legally purchased. This is not supported by fact. Unfortunately many pirated transactions are motivated by cost (either very low or free) – if the user had to pay for the content they simply would choose not to purchase. It is very difficult to assess the amount of pirated transactions, although many attempts are made to quantify this value.

What certainly can be said is that real losses do occur and they are substantial. However, it would better serve both the content owners, and those that desire to assist these rightsholders, to pursue a more conservative and accurate assessment of losses. To achieve a practical solution to the challenge of Intellectual Property (IP) theft, this must be treated as a business use case, setting aside the moral aspects of the issue. The history of humanity is littered with the carcasses of failed attempts to legislate morality. Judgments of behavior do not generate cash; collection of revenue is the only mechanism that factually puts money in the bank.

Any action in commerce has a financial cost. In order to make an informed choice on the efficacy of a proposed action, the cost must be known, as well as the potential profit or loss. If a retail store wants to reduce the assumed losses due to shoplifting, the cost of the losses must be known as well as the cost of additional security measures in order to make a rational decision on what to spend to resolve the problem. If the cost of securing the merchandise is higher than the losses, then it makes no sense to embark on additional measures.

Overstating the amount of losses due to piracy could appear to justify expensive measures to counteract this theft – if implemented the results may in fact only add to the overall financial loss. In addition, costs to implement security are real, while unearned revenue is potential, not actual.

On the side of the detractors of the SOPA and PIPA legislation, the claims of disruption to the fabric of the internet, as well as potential security breaches if link blocking were enabled, are also overstated. As an example, China currently practices large-scale link blocking, DNS (Domain Name Server) re-routing and other technical practices that are similar in many respects to the proposed technical solutions of the proposed Acts – and none of this has broken the internet, even internally within China.

The real issue here is that these methods don’t work well. The very nature of the internet (a highly redundant, robust and reliable fabric of connectivity) works against attempts to thwart connections from a client to a server. We have seen many recent attempts by governments to restrict internet connectivity to users within China, the Arab states, Libya, etc – and all have essentially failed.

For both sides of this discussion, a more appropriate direction for legislation, funding and focus of energy is to treat this issue for what it is factually: a criminal activity that requires mitigation from the public sector through police and judicial efforts, and from the private sector through specific and proven security measures. Again, the analogy of current practices in retail merchandising may be useful: the various technologies of RFID scanners at store exits, barcoded ‘return authorization tags’ and other measures have proven to substantially reduce property and financial loss without unduly penalizing the majority of honest consumers.

Coupled with specific laws and the policy of prosecuting all shoplifters this two-pronged approach (from both public and private sector) has made substantial inroads to merchandise loss in the retail industry.

Content protection is a complex issue and cannot be solved with just one or two simple acts, no matter how much that may be desired. In addition, the actual financial threat posed by piracy of movies and other content must be honestly addressed: it is sometimes convenient to point to perceived losses due to piracy rather than other reasons – for instance, poor returns simply because no one liked the movie… or distribution costs that are higher than ideal, etc.

A part of the overall landscape of content protection is to look at both the demand side as well as the supply side of the equation. Both the SOPA and PIPA proposals only address the supply side – they attempt to reduce access to, or disrupt payment for – the supply of assets. Most consumers make purchase choices based on a cost/benefit model, even if unconsciously so:  therefore at first glance, the attractiveness of downloading a movie for ‘free’ as opposed to paying $5-$25 for the content is high.

However, there are a number of mitigating factors that make the choice more complex:

  • Quality of the product
  • Ease of use (for both getting and playing the content)
  • Ease of re-use or sharing the content
  • Flexibility of devices on which the content may be consumed
  • Potential of consequences for use of pirated material

With careful attention to the above factors (and more), it is possible for legal content to become potentially more attractive than pirated content, at least for a percentage of consumers. It is impossible to prevent piracy from occurring – the most that is reasonable to expect is a reduction to the point where the financial losses are tolerable. This is the same tactic taken with retail merchandise security – a cost/benefit analysis helps determine the appropriate level of security cost in relation to the losses.

In terms of the factors listed above:

  • Legal commercial content is almost always of substantially higher quality than pirated content, raising the attractiveness of the product.
  • For most consumers (i.e. excluding teenage geeks who have endless time and patience!) a properly designed portal or other download experience CAN be much easier to operate than linking to a pirate site, determining which files to download, uncompressing, etc. Unfortunately, many commercial sites are not well designed, and are often as frustrating to operate as some pirate sites. Attention to this issue is very important, as this is a low-cost method to retain legal customers.
  • Depending on the rights purchased, and whether the content was streamed or downloaded, the re-use or legal sharing of purchased content (i.e. within the home or on mobile devices owned by the content purchaser) should ideally be straightforward. Again, this is often not the case, and again motivates consumers to potentially consider pirated material as it is often easier to consume on multiple devices and share with others. This is a very big issue and is only beginning to be substantially addressed by such technologies as UltraViolet, Keychest and others. Another issue that often complicates this factor is the enormously complex and inconsistent legal rights to copyrighted material. Music, books, movies, etc. all have highly divergent rules that govern the distribution and sale of the material. The level of complexity and cost of administering these rights, and the resultant inequities in availability, make pirated material much more available and attractive than it should be.
  • With the recent explosion of types of devices available to consume digital content (whether books, movies, tv, music, newspapers, etc.) the consumer rightly desires a seamless consumption model across the devices of their choice. This is often not provided legally, or is available only at significant cost. This is yet another area that can be addressed by content owners and distributors to lower the attractiveness of pirated material.
  • The issue of consequences for end users who download and consume pirated material is complex and fraught with potential backlash for content owners that attempt enforcement in this area. Several recent cases in the music industry have shown that the adverse publicity garnered by content owners suing end users carries a high cost and is generally perceived as counter-productive. The bulk of legal enforcement at this time is therefore concentrated on the providers of pirated material throughout the supply chain, rather than on the final consumer. This is also a more efficient use of resources: identifying and prosecuting potentially millions of consumers of pirated material would be impractical compared to degrading the supply chain itself, which is often operated by a few hundred individuals. There have been recent attempts by some governments and ISPs to monitor and identify connections from end consumers to known pirate sites and then mete out some level of punishment for this practice. This usually takes the form of multiple warnings to a user followed by some degradation or interruption of their internet service. Several factors complicate the enforcement of this type of policy:
    • This action potentially comes up against privacy concerns, and the level and invasiveness of monitoring of a user’s habits and what they download vary greatly by country and culture.
    • Many so-called ‘pirate’ sites offer a mix of legally obtained material, illegally obtained material, and storage for user-generated content. It is usually impossible to determine precisely which of these content types a user has actually downloaded, so the risk is high that a user could be punished for perfectly innocent behavior.
    • It is too easy for a pirate site to stay one (or several) steps ahead of this kind of enforcement by changing names, IP addresses, and other obfuscating tactics.
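
The graduated-response scheme described above (multiple warnings, then degradation or interruption of service) can be sketched as a simple escalation policy. This is purely illustrative: the class name, thresholds, and actions are hypothetical assumptions, not any ISP's actual implementation.

```python
# Hypothetical sketch of a "graduated response" policy: an ISP issues
# warnings for flagged downloads, then degrades or suspends service
# after repeated offenses. Thresholds and actions are illustrative only.

class GraduatedResponse:
    WARN_LIMIT = 3        # warnings issued before service is degraded
    DEGRADE_LIMIT = 5     # flagged events before service is interrupted

    def __init__(self):
        self.flags = {}   # subscriber id -> count of flagged downloads

    def record_flag(self, subscriber):
        """Record one flagged download and return the resulting action."""
        count = self.flags.get(subscriber, 0) + 1
        self.flags[subscriber] = count
        if count <= self.WARN_LIMIT:
            return "warn"       # e.g. email or in-browser notice
        elif count <= self.DEGRADE_LIMIT:
            return "degrade"    # e.g. throttle bandwidth
        return "suspend"        # interrupt internet service
```

Note that this sketch also exposes the weaknesses discussed above: it keys entirely on "flagged" connections, so any misidentification of an innocent download (or a site changing its address) feeds directly into the punishment counter.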

In summary, it should be understood that piracy of copyrighted material is a real and serious threat to the financial well-being of content producers throughout the world. What is called for to mitigate this threat is a combined approach that is rational, efficient and affordable. Emotional rhetoric and draconian measures will not solve the problem, but only exacerbate tensions and divert resources from the real problem. A parallel approach of improving the rights management, distribution methodology and security measures associated with legal content – aided by consistent application of law and streamlined judicial and police procedure world-wide – is the most effective method for reducing the trafficking of stolen intellectual property.

Education of the consumer will also help. Although, as stated earlier, one cannot legislate morality – and in the ‘privacy’ of the consumer’s internet connection many will take all they can get for ‘free’ – it cannot hurt to repeatedly describe the knock-on effects of large-scale piracy on the content creation sector. The bottom line is that the costs of producing high-quality entertainment are significant, and without sufficient financial return this cannot be sustained. The music industry is a prime example: measured from 1970 to 2011, more labels and music studios have gone out of business than remain in business today. While it is true that the lower cost barrier of modern technology has allowed many to ‘self-produce’, it is also true that the great recording studios lost to decreased demand and funding have cost us – and future generations – the unique sound that was only possible in those physical rooms. These intangible costs can be very high.

One last fact should be added to the public awareness concerning online piracy:  the majority of these sites today are either run by or funded by organized criminal cartels. For instance, in Mexico the production and sale of counterfeit DVDs is used primarily as a method of laundering drug money, in addition to the profitable nature of the business itself (since no revenues are returned to the studios whose content is being duplicated). The fact that the subscription fees for the online pirate site of choice are very likely funding human trafficking, sexual slavery, drug distribution and other criminal activity on a large scale should not be ignored. Everyone is free to make a choice. The industry, and collective governments, need to provide thoughtful, useful and practical measures to help consumers make the right choice.
