Browsing Category technology

Ubiquitous Computational Fabric (UCF)

March 6, 2012 · by parasam

Ok, not every day do I get to coin a new term, but I think this is a good description for what I see coming. The latest thing in the news is “the PC is dead, long live the tablet…” Actually, all forms of current ‘computers’ – whether they are desktops, laptops, ultrabooks, tablets, smartphones, etc. – have a life expectancy just short of butter on pavement on a warm afternoon.

We have left the Model “T” days, to use an automotive analogy – where one had to be a trained mechanic to even think about driving a car – and moved on just a little bit.

Ford Model “T” (1910)

We are now at the equivalent of the Model “A” – a slight improvement.

Ford Model “A” (1931)

The user is still expected to understand things like:  OS (Operating Systems), storage, apps, networking, WiFi security modes, printer drivers, etc. etc. The general expectation is that the user conform his or her behavior to the capabilities of the machine, not the other way around. Things we sort of take for granted – without question –
are really archaic. Typing into keyboards as the primary interface. Dealing with a file system – or more likely the frustration that goes along with dealing with incompatible filing systems… Mac vs PC… To use the automobile for one more analogy:  think how frustrating it would be to have to go to different gas stations depending on the type of car you had… because the nozzle on the gas pump would only fit certain cars!

A few “computational systems” today have actually achieved ‘user friendly’ status – but only with a very limited feature set, and this took many, many years to get there:  the telephone is one good example. A 2 yr old can operate it without a manual. It works more or less the same anywhere in the world. In general, it is a highly reliable system. In terms of raw computational power, the world-wide telephone system is one of the most powerful computers on the planet. It has more raw bandwidth than the current ‘internet’ (not well utilized, but that’s a different issue).

We are now seeing “computers” embedded into a wide variety of items, from cars to planes to trains. Even our appliances have built-in touch screens. We are starting to have to redefine the term ‘computer’ – the edges are getting very fuzzy. Embedded sensors  are finding their way into clothing (from inventory control tags in department stores to LED fabric in some cutting edge fashions); pets (tracking chips); credit cards (so-called smart cards); the atmosphere (disposable sensors on small parachutes are dropped by plane or shot from mortars to gather weather data remotely); roads (this is what powers those great traffic maps) and on and on.

It is actually getting hard to find a piece of matter that is not connected in some way to some computing device. The power is more and more becoming ‘the cloud.’ Our way of interacting with computational power is changing as well:  we used to be ‘session based’ – we would sit down at a desktop computer and switch gears (and usually employ a number of well chosen expletives) to get the computer up and running, connected to a printer and the network, then proceed to input our problems and get results.

Now we are an ‘always on’ culture. We just pick up the smartphone and ask Siri “where the heck is…” and expect an answer – and get torqued when she doesn’t know or is out of touch with her cloud. Just as we expect a dial tone to always be there when we pick up the phone, we now expect the same from our ‘computers.’ The annoyance of waiting for a PC to boot up is one of several factors users cite for their attraction to tablets.

Another big change is the type of connectivity that we desire and expect. The telephone analogy points to an anachronistic form of communication: point-to-point. Although, with enough patience or the backup of extra software, you can speak with several people at once, the basic model of the phone system is one-to-one. The cloud model, Google, blogs, YouTube, Facebook, Twitter etc. has changed all that. We now expect to be part of the crowd. Instead of one-to-one we now want many-to-many.

Instead of a single thread joining one user to another, we now live in a fabric of highly interwoven connectivity.

When we look ahead – and by this I mean ten years or so – we will see the extension of trends that are already well underway. Essentially the ‘computer’ will disappear – in all of its current forms. Yes, there will still be ‘portals’ where queries can be put to the cloud for answers; documents will still be written, photographs will still be manipulated, etc. – but the mechanisms will be more ‘appliance like’ – typically these portals will act like the handsets of today’s cellphone network – where 99% of the horsepower is in the backoffice and attached network.

This is what I mean by Ubiquitous Computational Fabric (UCF). It’s going to be an ‘always on’, ‘always there’ environment. The distinction of a separate ‘computer’ will disappear. Our clothing, our cars, our stoves, our roads, even our bodies will be ‘plugged in’ to the background of the cloud system.

There are already small pills you can swallow that have video cameras – your GI tract is video-ed and sent to your doctor while the pill moves through your body. No longer is an expensive and invasive endoscopy required. Of course today this is primitive, but in a decade we’ll swallow a ‘diagnostic’ pill along with our vitamins and many data points of our internal health will be automatically uploaded.

As you get ready to leave the bar, you’ll likely have to pop a little pill (required to be offered free of charge by the bar) that will measure your blood alcohol level and transmit approval to your car before it will start. Really. Research on this, and the accompanying legislation, is under way now.

The military is already experimenting with shirts that have a mesh of small wires embedded in the fabric. When a soldier is shot, the severing of the wires will pinpoint the wound location and automatically transmit this information to the medic.

Today, we have very expensive motion tracking suits that are used in computer animation to make fantasy movies.

Soon, little sensors will be embedded into normal sports clothing and all of an athlete’s motions will be recorded accurately for later study – or injury prevention. One of the most difficult computational problems today – requiring the use of the planet’s most massive supercomputers – is weather prediction. The savings in human life and property damage (from hurricanes, tornadoes, tsunamis, earthquakes, etc.) can be staggering. One of the biggest problems is data input. We will see a massive improvement here with small intelligent sensors being dropped into formative storms to help determine if they will become dangerous. The same with undersea sensors, fault line sensors, etc.

The real winners of tomorrow’s business profits will be those companies that realize this is where the money will flow. Materials science, boring but crucial, will allow for economic dispersal of smart sensors. Really clever data transmission techniques are needed to funnel the amount of collected information through oftentimes narrow pipes and difficult environments. ‘Spread-spectrum computing’ will be required to minimize energy usage and provide the truly reliable and available fabric that is needed. Continual understanding of human-factors design will be needed to allow the operation of these highly complex systems in an intuitive fashion.

We are at an exciting time:  to use the auto one more time – there were early Ford engineers who could visualize Ferraris – even though the materials at the time could not support their vision. We need to support those people, those visionaries, those dreamers – for they will provide the expertise and plans to help us realize what is next. We have only scratched the surface of what’s possible.

DI – Disintermediation

February 28, 2012 · by parasam

Disintermediation – a term you should come to know. Essentially this means “to remove the intermediary.” This has always been a disruptive process in cultures – whether affecting religion, law, education or technology. In religion, one well-known example was the rise of the Lutheran church, whose adherents felt the ‘intermediary’ process of pope, cardinals, bishops, etc. of the Catholic form was no longer necessary for the common man to connect to their belief in God.

Higher education used to be exclusive to those that could afford to attend brick-and-mortar campuses; now we have iTunesU, distance learning and a host of alternative learning environments open to virtually anyone with the time and focus to consume the knowledge.

Bringing questions of law was traditionally handled by barristers, attorneys and others – ensconced in a system of high cost and slow delivery of service. Today we have storefront paralegal offices, online access to many governmental and private legal services and a plethora of inexpensive software for preparation of common legal forms.

Each of these areas of practice fought long and hard to preserve the intermediary. We were told that our souls might be lost without the guidance of trained priests, that we might lose everything we owned if we prepared legal documents without assistance, and that only a trained professor could teach us anything.

We, collectively, begged to differ. And we slowly, with much bloodshed, sweat and tears, succeeded in emancipating ourselves from the yoke of enforced intermediation. However, like many true tools, knowledge is often represented as a sharp two-edged sword. Going it alone has consequences. There is no substitute for the compassion and experience of a spiritual advisor, no matter his or her title. There are many areas of law where the specialized knowledge of legal codes, not to mention the oratorical skills of an experienced courtroom jouster, are essential to victory. The guidance and explanation of one who has mastered the knowledge of a subject – not to mention the special skills of teaching, of helping a student to reach comprehension – are many times critical to the learning process.

Now we are confronting a new locus of disintermediation:  the provisioning of access to the ‘cloud’ of entertainment and information. The internet – in its largest sense – is in a new stage of democratization. The traditional providers of access (telcos, cable tv, satellite) are in a fight for their collective lives – and they are losing. Attempting to hold onto an outmoded business model is simply ‘dead man walking’ philosophy. At the most you can walk more slowly – but you will reach the hangman’s noose regardless.

This will not be an overnight change, but we have already seen just how quickly ‘internet time’ moves. The velocity of change is on an exponential upwards curve, and nothing in our recent past has given us any reason to doubt this will alter anytime soon.

There are a number of factors that are fueling this:  the explosion of the number of content creators; the desire of traditional content creators (studios, episodic tv) to sell their content to as wide an audience as rapidly as possible; the high cost and oft-perceived low value-add of traditional NSPs (Network Service Providers – telco, cable, etc.)

One of the biggest reasons that has helped to propel this change in consumer behavior is knowledge:  as little as ten years ago the average media consumer regarded the ‘channel’ and the ‘content’ as the same thing. Movies or news on TV came out of the wall on a cord (no matter what fed the cord) – or movies could be seen on a plastic disk that you rented/bought. The concept of separation of content from channel did not exist.

Today, even the average 6-year-old understands that he or she can watch SpongeBob on almost anything that has a screen and a speaker. The content is what matters, how it gets there and on what device it is consumed just doesn’t matter much. Not a great thing for makers of networks or devices…

Fortunately the other side of the sword does exist… while the traditional models of telco provisioning of services are either antiquated or obsolete (the time/distance model of tariffs, high cost for low functionality, etc.), the opportunity for new business models does exist. What is disruptive for one set of structures is opportunistic for others.

Once content is ‘unbound’ from its traditional channels, a new world of complexity sets in:  metadata about the content gains importance. Is it SD or HD? What’s the aspect ratio: 4:3 or 16:9? What codec was used (for instance if it was Flash I can’t see it on my iPad), etc. etc.

Finding the content, or the version of it that you want, can be challenging. Licensing and other DRM (Digital Rights Management) issues add to the confusion. If voice communication (aka the telephone) is stripped out of its network and now becomes an ‘app’ (for instance like Skype), who sells/supports the app? If all “private networks” (telco, cable, satellite) become essentially data pipes only, what pricing models can be offered that will attract consumers yet allow such companies to run profitably? There is a growing tendency for “un-bundling” and other measures of transparency – for instance overseas mobile phone operators are backing away from cellphone handset subsidies. This was due in large part to the prevalence of prepaid phone contracts in these regions – for which no subsidized phones can be provided. This has the knock-on effect of reducing iPhone sales and increasing Android (and other) less expensive phone hardware penetration. For instance, in the last year or so the sales of iPhones have fallen considerably in Greece, Spain, Portugal and Italy… Hmmm, wonder why??

All of these questions will assume larger and larger importance in the near future. Current modalities are either failing outright or becoming marginalized. We have heard the moniker “Content is King” – and it’s still true, much to the chagrin of many network providers. When you’re thirsty, you pay for the water, not the pipes that it arrives in…

Here’s another anecdotal piece that helps to demonstrate that you cannot underestimate the importance of content ownership:  as is well known, VFX (Visual Effects) are now the ‘star’ of most movies. A decade ago, actors carried a movie, now it’s the effects… Do the research. Look at 2011 box office stats. Try to find a movie that was in the top 20 grossing that did NOT have significant special effects… Now here’s the important bit:  one would think that the firms that specialize in creating such fantastic imagery would be wildly successful… NOT. It’s very, very expensive to create this stuff. It takes ungodly amounts of processing power, many really clever humans, and ridiculous amounts of time. Rango just won the Oscar… and in spite of the insanely powerful computers we have today, it took TWO YEARS to animate this movie!

The bottom line is that the ONLY special effects firms that are in business today or are remotely profitable, are the ones connected to either studios or consortiums that themselves own the content on which this magic is applied.

Content is water at the top of the hill. The consumers are at the bottom with their little digital buckets out, waiting to be filled. They just don’t care which path the water runs down the hill… but they DO care that it runs quickly, without damming up, and without someone trying to siphon off  ‘their’ water…

This is not all settled. Many battles will be won and lost before the outcome of the ‘war’ is known. New strategies, new generals, new covert forces will be deployed.

Stay tuned.

VERA (Vision Electronic Recording Apparatus) – one of the first video tape recorders

February 26, 2012 · by parasam

Today we take for granted the ability to watch HDTV on large screens in our homes, see stunning digital cinema at super high resolution in the theatre, and stream high quality video to our smartphones, tablets and computers. It wasn’t always this way…

One of the first attempts at recording and playing video content (with audio as well) – all previous video distribution was completely live in real time – was a British system:  the Vision Electronic Recording Apparatus (VERA). The development of this device began in 1952 by the BBC, under project manager Dr. Peter Axon.

All previous recording technology of that time was used for audio. Of course, only analog techniques were available then. At a high level, the recorders used open-reel magnetic tape that passed over a fixed record/playback head. Although there were several formats for audio recording in use by the 1950’s – common ones used professionally moved the tape at 15ips (inches per second) to record frequencies up to 16kHz – there were no video recording devices.

Video signals, even monochrome, require a much higher bandwidth. The relatively low-resolution cameras of that day (405 tv lines vertically) needed about 3MHz to record the fine detail in the picture. This is roughly 200x the maximum frequency recorded by audio tape machines of that time. Essentially, in order to record a signal on magnetic tape using stationary heads, either the head gap must be made smaller, or the tape must move faster.

Limitations in materials science in the 1950s, as well as other complex issues of recording-head design, essentially dictated that the tape needed to move much faster to record video. For the VERA system, the BBC used 52cm (20”) reels of magnetic tape that moved past the stationary record/playback head at 5.08 meters per second (16.7 ft. per sec.!) That’s roughly 13x faster than the professional audio tape speed above – quite a mechanical achievement for that time.

VERA was capable of recording about 15 minutes (i.e. about 4,572 meters of tape) of 405-line black-and-white video per reel, and the picture tended to wobble because the synchronizing pulses that keep the picture stable were not recorded accurately enough.
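As a quick sanity check on the figures above, here is a small back-of-the-envelope sketch in Python; all of the numbers are the ones quoted in the text, and the audio comparison uses the 15 ips professional speed mentioned earlier:

```python
# Back-of-the-envelope check of the VERA figures quoted above.

AUDIO_BANDWIDTH_HZ = 16_000          # professional audio recording, up to ~16 kHz
VIDEO_BANDWIDTH_HZ = 3_000_000       # 405-line monochrome video, ~3 MHz

AUDIO_TAPE_SPEED_MPS = 15 * 0.0254   # 15 ips expressed in metres per second (~0.381 m/s)
VERA_TAPE_SPEED_MPS = 5.08           # VERA's linear tape speed

bandwidth_ratio = VIDEO_BANDWIDTH_HZ / AUDIO_BANDWIDTH_HZ
speed_ratio = VERA_TAPE_SPEED_MPS / AUDIO_TAPE_SPEED_MPS
reel_length_m = VERA_TAPE_SPEED_MPS * 15 * 60    # 15 minutes of recording time

print(f"video/audio bandwidth ratio: ~{bandwidth_ratio:.0f}x")  # ~188x ("roughly 200x")
print(f"VERA/audio tape speed ratio: ~{speed_ratio:.0f}x")      # ~13x
print(f"tape used in 15 minutes:     {reel_length_m:.0f} m")    # 4572 m
```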

In order to cope with 625-line PAL or SECAM colour transmissions VERA would likely have required an even faster, and possibly unfeasible, tape speed.

Development began in 1952, but VERA was not perfected until 1958, by which time it had already been rendered obsolete by the Ampex quadruplex video recording system. This used 5cm (2”) wide tapes running at a speed of 38cm/s (15ips). The rapid tape-to-head speed was achieved by spinning the heads rapidly on a drum: a system used, with variations, on all video tape systems ever since, as well as DAT.

The BBC scrapped VERA and quickly adopted the Ampex system. It has been suggested that the BBC only continued to develop VERA as a bargaining tool, so it would be offered some of the first Ampex machines produced in unstated exchange for abandoning further work on a potential rival.

The only VERA recordings that survive are film telerecordings of the original demonstration. Even if some of the original tape had survived there would be no way of playing it back today. Rather the film kinescopes were transferred using modern film scanning technology to a digital file, one of which is reproduced here.

Whose Data Is It Anyway?

February 17, 2012 · by parasam

A trending issue, with much recent activity in the headlines, is the thorny topic of what I will call our ‘digital shadow’. By this I mean collectively all the data that represents our real self in the virtual world. This digital shadow is comprised of both explicit data (e-mails you send, web pages you browse, movies/music you stream, etc.) and implicit data (the time of day you visited a web page, how long you spent viewing that page, the location of your cellphone throughout the day, etc.).

Every time you move through the virtual world, you leave a shadow. Some call this your digital footprint. The size of this footprint or shadow is much, much larger than most realize. An example, with something as simple as a single corporate e-mail sent to a colleague at another company:

Your original e-mail may have been a few paragraphs of text (5kB) and a two page Word document (45kB) for a nominal size of 50kB. When you press Send this is cached in your computer, then copied to your firm’s e-mail server. It is copied again, at least twice, before it even leaves your company: once to the shadow backup service (just about all e-mail backup systems today run a live parallel backup to avoid losing any mail), and again to your firm’s data retention archive – mandated by Sarbanes-Oxley, FRCP (Federal Rules of Civil Procedure), etc.

The message then begins its journey across the internet to the recipient. After leaving the actual e-mail server the message must traverse your corporation’s firewall. Each message is typically inspected for outgoing viruses and potentially attachment type or other parameters set by your company’s communications policy. In order to do this, the message is held in memory for a short time.

The e-mail then finally begins its trip on the WAN (Wide Area Network) – which is actually many miles of fiber optic cable with a number of routers to link the segments – that is what the internet is, physically. (Ok, it might be copper, or a microwave, but basically it’s a bunch of pipes and pumps that squirt traffic to where it’s supposed to end up).

A typical international e-mail will pass through at least 30 routers, each one of which holds the message in its internal memory for a while, until that message moves out of the queue. This is known as ‘store and forward’ technology. Eventually the message gets to the recipient firm, and goes through the same steps as when it first left – albeit in reverse order, finally arriving at the recipient’s desktop, now occupying memory on their laptop.

While it’s true that several of the ‘way-stations’ erase the message after sending it on its way to make room for the next batch of messages, there is an average memory utilization for traffic that is quite large. A modern router must have many GB of RAM to process high volume traffic.

Considering all of the copies, it’s not unlikely for an average e-mail to be copied over 50 times from origin to destination. If even 10% of those copies are held more or less permanently (this is a source of much arguing between legal departments and IT departments – data retention policies are difficult to define), this means that your original 50kB e-mail now requires 250kB of storage. Ok, not much – until you realize that (per the stats published by the Radicati Group in 2010) approximately 294 billion e-mails are sent EACH DAY. Do the math…
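The math is easy to sketch. The snippet below simply multiplies out the figures quoted above (50 kB per message, roughly 50 copies made in transit, 10% of copies retained, 294 billion messages per day); the retention rate is, as noted, a debatable assumption:

```python
# Rough estimate of the daily storage footprint of retained e-mail copies,
# using the figures quoted in the post.

MESSAGE_SIZE_KB = 50         # a few paragraphs of text plus a two-page attachment
COPIES_PER_MESSAGE = 50      # caches, archives, firewalls, routers, backups...
RETENTION_RATE = 0.10        # fraction of copies kept more or less permanently
MESSAGES_PER_DAY = 294e9     # Radicati Group estimate, 2010

retained_per_message_kb = MESSAGE_SIZE_KB * COPIES_PER_MESSAGE * RETENTION_RATE
retained_per_day_kb = retained_per_message_kb * MESSAGES_PER_DAY
retained_per_day_pb = retained_per_day_kb / 1e12   # kB -> PB (decimal units)

print(f"retained per message: {retained_per_message_kb:.0f} kB")  # 250 kB
print(f"retained per day:     ~{retained_per_day_pb:.0f} PB")     # ~74 PB every day
```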

Now here is where life gets interesting… the e-mail itself is ‘explicit data’, but many other aspects (call it metadata) of the mail, known as ‘implicit data’ are also stored, or at least counted and accumulated.

Unless you fully encrypt your e-mails (becoming more common, but still only practiced by a small fraction of 1% of users) anyone along the way can potentially read or copy your message. While, due to the sheer volume, no one without reason would target an individual message, what is often collected is implicit information:  how many mails a day does a user or group of users send? Where do they go? Is there a typical group of recipients, etc. Often times this implicit information is fair game even if the explicit data cannot be legally examined.

Many law enforcement agencies are permitted to examine header information (implicit data) without a warrant, while actually ‘reading’ the e-mail would require a search warrant. At a higher level, sophisticated analysis using neural networks is performed by agencies such as the NSA, CSE, MI5, and so on. They monitor traffic patterns – who is chatting to whom, in what groups, how often – and then collate these traffic patterns against real-world activities, looking for correlation.
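As a toy illustration of that kind of traffic analysis – purely a sketch of the idea, not anyone’s actual tooling, and with invented addresses – the snippet below builds the “who mails whom, how often” picture from header metadata alone, without ever reading a message body:

```python
from collections import Counter

# Hypothetical header metadata: (sender, recipients, timestamp) -- no message bodies.
mail_log = [
    ("alice@a.example", ["bob@b.example", "carol@c.example"], "2012-02-17T09:14"),
    ("alice@a.example", ["bob@b.example"], "2012-02-17T11:02"),
    ("dave@d.example",  ["bob@b.example"], "2012-02-17T11:05"),
]

# Count sender -> recipient pairs: the traffic pattern, i.e. pure implicit data.
pair_counts = Counter(
    (sender, recipient)
    for sender, recipients, _timestamp in mail_log
    for recipient in recipients
)

for (sender, recipient), count in pair_counts.most_common():
    print(f"{sender} -> {recipient}: {count} message(s)")
```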

All of this just from looking at what happened to a single e-mail as it moved…

Now add in the history of web pages visited, online purchases, visits to social sites, posts to Facebook, Twitter, Pinterest, LinkedIn, etc. etc. Many people feel that they maintain a degree of privacy by using different e-mail addresses or different ‘personalities’ for different activities. In the past, this may have helped, but today little is gained by this attempt at obfuscation – mainly due to a technique known as orthogonal data mining.

Basically this means drilling into data from various ‘viewpoints’ and collating data that at first glance would appear disparate. For instance, different social sites may be visited by what appear to be different users (with different usernames) – until a study of the ‘implicit data’ [the IP address of the client computer] shows that it is the same…

Each web session a user conducts with a web site transmits a lot of implicit data:  time and duration of visit, pages visited, cross-links visited, ip address of the client, e-mail address and other ‘cookie’ information contained on the client computer, etc.

The real power of this kind of data mining comes from combining data from multiple web sites that are visited by a user. One can see that seemingly innocuous searches for medical conditions, coupled with subsequent visits to “Web MD” or other such sites could be assembled into a profile that may transmit more information to an online ad agency than the user may desire.
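A minimal sketch of that correlation step, using entirely invented data (and obviously a crude stand-in for real data-mining pipelines): two different sites see two different usernames, but the shared implicit datum – the client IP address – stitches the sessions into a single profile:

```python
from collections import defaultdict

# Hypothetical session logs from two unrelated sites (invented usernames and IPs).
site_a_sessions = [
    {"user": "gadget_fan_99", "ip": "203.0.113.7", "page": "/forums/phones"},
]
site_b_sessions = [
    {"user": "jsmith1970", "ip": "203.0.113.7", "page": "/conditions/diabetes"},
]

# Key every session by the implicit datum (client IP) rather than the username.
profiles = defaultdict(lambda: {"aliases": set(), "pages": []})
for site, sessions in (("site_a", site_a_sessions), ("site_b", site_b_sessions)):
    for session in sessions:
        profile = profiles[session["ip"]]
        profile["aliases"].add(f'{site}:{session["user"]}')
        profile["pages"].append(f'{site}:{session["page"]}')

for ip, profile in profiles.items():
    print(ip, "->", sorted(profile["aliases"]), profile["pages"])
```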

Or how about the fact that Facebook (to use one example) offers an API (programmatic interface) to developers that can be used to trawl the massive database on people (otherwise known as Facebook) for virtually anything that is posted as ‘public’. Since that privacy permission state is the default (unless a user has chosen specifically to restrict it) – and now with the new Facebook Timeline becoming mandatory in the user interface – it is very easy for an automatic program to interrogate the Facebook archives for the personal history of anyone who has public postings – in chronological order.

Better keep all your stories straight… a prospective employer can now zoom right to your timeline and see if what you posted personally matches your resume… Like most things, there are two sides to all of this:  what propels this profiling is targeted advertising. While some of us may hate the concept, as long as goods and service vendors feel that advertising helps them sell – and targeted ads sell more effectively at lower cost – then we all benefit. These wonderful services that we call online apps are not free. The programmers, the servers, the electricity, the equipment all costs a LOT of money – someone has to pay for it.

Being willing to have some screen real estate used for ads is actually pretty cheap for most users. However, the flip side can be troubling. It is well known that certain governments routinely collect data from Facebook, Twitter and other sites on their citizens – probably not for these same citizens’ good health and peace of mind… Abusive spouses have tracked and injured their mates by using Foursquare and other location services, including GPS monitoring of mobile phones.

In general we collectively need to come to grips with the management of our ‘digital shadows.’ We cannot blindly give de facto ownership of our implicit or explicit data to others. In most cases today, companies take this data without telling the user, give or sell it without notice, and the user has little or no say in the matter.

What only a few years ago was an expensive process (sophisticated data mining) has now become a low cost commodity. With Google’s recent change in privacy policy, they have essentially come out as the world’s largest data mining aggregator. You can read details here, but now any visit to any part of the Google-verse is shared with ALL other bits of that ecosystem. And you can’t opt out. You can limit certain things, but even that is suspect:  in many cases users have found that data that was supposed to be deleted, or marked as private, in fact is not. Some companies (not necessarily Google) have been found to still have photos online years after being specifically served with take-down notices.

And these issues are not just relegated to PC’s on your desk… the proliferation of powerful mobile devices running location-based apps has become an advertiser’s dream… and sometimes a user’s nightmare…

No matter what is said or thought by users at this point, the ‘digital genie’ is long out of the bottle and she’s not going back in… our data, our digital shadow, is out there and is growing every day. The only choice left is for us collectively, as a world culture, to accept this and deal with it. As often is the case, technology outstrips law and social norms in terms of speed of adoption. Most attempts at any sort of unified legal regulation on the ‘internet’ have failed miserably.

That doesn’t mean it should not happen – but such regulation must be sensible, uniformly enforceable, equitable and fairly applied, with the same sort of due process, ability for appeal and redress, etc. that is available in the ‘real world.’

The first steps toward a more equitable and transparent ‘shadow world’ would be a universal recognition that data about a person belongs to that person, not to whomever collected it. There are innumerable precedents for this in the ‘real world’, where a person’s words, music, art, etc. can be copyrighted and protected from unauthorized use. Of course there are exceptions (the ‘fair use’ policy, legitimate journalistic reporting, photography in public, etc.) but these exceptions are defined, and often refined through judicial process.

One such idea is presented here; whether it will gain traction is uncertain, but at least some thought is being directed toward this important issue.

[shortly after first posting this I came across another article so germane to this topic I am including the link here – another interesting story on data mining and targeted advertising]

Second Screen observations during the Super Bowl

February 6, 2012 · by parasam

This is a short note on some tests I ran yesterday during the Super Bowl with “2nd screen” devices – tablets and smartphones that feed auxiliary content – usually synced in some fashion with the main TV content being watched on the “1st screen.”

An excellent longer review of a number of apps is located here (by Chuck Parker). My observations are more generic and cover the issues associated with the infrastructure required to adequately support a 2nd screen experience for millions of simultaneous viewers – as happened yesterday during the game.

First, a brief note on my test setup:  A dedicated wireless router was attached to a 100Mb/s internet connection. 1st screen was HDTV fed from cable. 2nd screen devices were 2 iPhones and 2 iPads, connected to the router via WiFi. iPhone 4 and 4S, iPad 1 & 2. A laptop was also connected to the router, but was only used for checking connection speed and some network statistics.

Speedtest.net was used to verify internet connection speeds at the beginning of the game, and every 15 minutes thereafter. Actual download speeds averaged 87Mb/s over the game duration, upload averaged 4Mb/s. WiFi was first checked for local channel usage, then a static channel was selected on the router that had the least other local traffic. The SSID was not broadcast (to avoid anyone else attempting to log in and potentially affecting throughput – even though security was enabled). Pre-game testing ensured that all mobile devices were reliably connected to the router.

The Speedtest.net Mobile app was installed and used on all mobile devices to verify WiFi speed. Each device reported an average of 7Mb/s download and 4Mb/s upload.
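Periodic throughput sampling of this sort can be approximated with a few lines of Python; this is only a rough stand-in (it times the download of a fixed test file with the standard library rather than calling the Speedtest.net service, and the test-file URL is a placeholder you would need to replace):

```python
import time
import urllib.request

TEST_FILE_URL = "https://example.com/10MB.bin"    # placeholder: any large, fixed-size file
INTERVAL_SECONDS = 15 * 60                        # sample every 15 minutes

def measure_download_mbps(url: str) -> float:
    """Time one download and return the observed throughput in megabits per second."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        data = response.read()
    elapsed = time.monotonic() - start
    return (len(data) * 8 / 1e6) / elapsed

while True:
    try:
        mbps = measure_download_mbps(TEST_FILE_URL)
        print(f"{time.strftime('%H:%M:%S')}  download: {mbps:.1f} Mb/s")
    except OSError as err:
        print(f"{time.strftime('%H:%M:%S')}  measurement failed: {err}")
    time.sleep(INTERVAL_SECONDS)
```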

Each iDevice was running iOS 5.0.1; internal memory was either 32 or 64GB.

I tested NFL2011 (even though it supported the 2012 game, the name had not been updated…); NBC Sports; CBS Sports; Shazam; PrePlay; Tapcast. There were of course more, but that was as much multi-tasking as I felt my brain could handle!

I also tested ‘live streaming’ of the game from NBC and NFL websites. The NFL did not work at all:  they streamed only Silverlight which is incompatible with iDevices… The NBC feed worked fine, but was delayed a full minute – which made for an odd comparison when looking at 1st screen. The timing was obvious when comparing the running clock in the scorebox at bottom of screen…

In general, the biggest issue was app instability and (assumed) network congestion. All of the apps experienced some degree of freezing, delays, crashes, etc. The NFL app was the most stable, Shazam and TapCast were the most unreliable. TapCast in particular crashed repeatedly, and even when running would often lose its place, returning the user to the main menu where one had to re-select the game and start over.

While I have no way of proving this, it felt like the scale of the communications may have affected the performance. It’s one thing to test an app in a lab, it’s another thing entirely to do projected load testing on your backoffice capacity to support 1,000,000 instances of your app trying to interact with your servers simultaneously during the Super Bowl…

On one of the iPads I attempted to ‘multi-task’ the apps so I could switch back and forth… NOT. Even on the iPad 2, with more horsepower, this just didn’t work well at all. Most of the apps either crashed outright, or got lost and basically it was as if I had just started the app – I had to start over. Thread preservation just didn’t work. I don’t know enough about iOS and app development to understand why, but the bottom line is that the current state of a running app was not preserved.

I won’t comment here on the individual features of the apps, other than to say there was a wide range of styles, graphic excellence, etc. On the whole, I was impressed across the board with the NFL app:  it was the most robust, had good, simple and intuitive graphics – I didn’t feel like I needed a legend to understand what was going on.

I must offer a disclaimer:  I am not a big football fan, so certain subtleties of game description may have gone unnoticed, but in another sense I think this made for an objective review by a casual observer.

My summary is that this new sector of content consumption (2nd screen) is here to stay, can be compelling and drive viewership, and has all sorts of advertising possibilities. An interesting commentary on the social aspects of 2nd screen, ads and such can be found here. The infrastructure and the apps themselves need to be more robust, or viewers will get frustrated. On the whole, a good start – and perhaps a bit of ‘trial by fire’ in terms of massive use of 2nd screen apps.

I can’t wait to see how good this gets by next year…

Anonymity, Privacy and Security in the Connected World

February 3, 2012 · by parasam

Anonymity:  the state of lacking individual characteristics, distinction or recognizability.

Privacy:  the quality or state of being apart from observation, freedom from unauthorized intrusion.

Security:  defending the state of a person or property against harm or theft.

The dichotomy of privacy versus social participation is at the root of many discussions recently concerning the internet, with technology often shouldering the blame for perceived faults on both sides. This issue has actually been with us for many thousands of years – it is well documented in ancient Greece (with the Stoics daring to live ‘in public’ – sharing their most private issues and actions:  probably the long forerunner of Facebook…); continuing up until our current time with the social media phenomenon.

This is a pervasive and important issue that sets apart cultures, practices and personality. At the macro-cultural level we have societies such as North Korea on one side – a largely secretive country where there is little transparency; and on the other side perhaps Sweden or the Netherlands – where a more homogeneous, stable and socialistic culture is rather open.

We have all experienced the dualistic nature of the small village where ‘everyone knows everybody’s business’ as compared to the ‘big city’ where the general feeling of anonymity pervades. There are pros and cons to both sides:  the village can feel smothering, yet there is often a level of support and community that is lacking in the ‘city’.  A large urban center has a degree of privacy and freedom for individual expression – yet can feel cold and uncaring.

We enjoy the benefits of our recent social connectedness – Facebook, Twitter, etc. – yet at the same time fear the invasiveness of highly targeted advertising, online stalking, threats to our younger children on the web, etc. There is really nothing new about this social dilemma on the internet – it’s just a new territory for the same old conundrum. We collectively have to work out the ground rules for this new era.

Just as we have moved on from open caves and tents to houses with locked doors behind gated communities, we have moved our ‘valuables’ into encrypted files on our computers and depend on secure and reliable mechanisms for internet banking and shopping.

The challenge for all of us that seek to adapt to this ‘new world order’ is multi-faceted. We need to understand what our implicit expectations of anonymity, privacy and security are. We also need to know what we can explicitly do to actually align our reality to these expectations, should we care to do so.

Firstly, we should realize that a profound and fundamental paradigm shift has occurred with the wide-spread adoption of the internet as our ‘collective information cloud.’ Since the birth of the internet approximately 40 years ago, we have seen a gradual expansion of the connectedness and capability of this vehicle for information exchange. It is an exponential growth, both in physical reality and philosophical impact.

Arthur C. Clarke’s observation that “Any sufficiently advanced technology is indistinguishable from magic” has never been more true… going back thousands of years in philosophy and metaphysics we see the term “akashic records” [Sanskrit word] used to describe “the compendium of all human knowledge.” Other terminology such as “master library”, “universal supercomputer”, “the Book of Knowledge”, and so on have been used by various groups to describe this assumed interconnected fabric of the sum of human knowledge and experience.

If one was to take an iPad connected to the ‘cloud’ and time travel back even a few hundred years, this would be magic indeed. In fact, you would likely be burned as a witch… people have always resisted change, and fear what they don’t understand – weather forecasting and using a voice recognition program (Siri??) to ask and receive answers from the ‘cloud’ would have seriously freaked most observers…

Since we humans do seem to handle gradual adaption, albeit with some resistance and grumbling, we have allowed the ‘internet’ to insidiously invade our daily lives until most of us only realize how dependent we are on this when it goes away. Separation of a teenage girl from her iPhone is a near-death experience… and when Blackberry had a network outage, the business costs were in the millions of dollars.

As ubiquitous computing and persistent connectivity become the norm the world over, this interdependence on the cloud will grow even more. And this is true everywhere, not just in USA and Western Europe. Yes, it’s true that bandwidth, computational horsepower, etc. are far lower in Africa, Latin America, etc. – but – the use of connectivity, cellphones and other small computational devices has exploded everywhere. The per-capita use of cellphones is higher in Africa than in the United States…

Rose Shuman, an enterprising young woman in Santa Monica, formed Question Box, a non-profit company that uses a simple closed-circuit box with a button, mike and speaker to link rural farmers and others in Africa and India to a central office in larger towns that actually have internet access, thereby extending the ‘cloud’ to even the poorest communities with no direct online connectivity. Many other such ‘low-tech’ extensions of the cloud are popping up every day, serving to more fully interconnect a large portion of humanity.

Now that this has occurred we are faced with the same issues in the cloud that we have here on the ground:  how to manage our expectations of privacy, etc.

Two of the most basic exchanges within any society are requests for information and payment for goods or services. In the ‘good old days’ information requests were either performed by reading the newspaper or asking directions at the petrol station; payments were handled by the exchange of cash.

Both of these transactions had the following qualities:  a high level of anonymity, a large degree of privacy, and good security (as long as you didn’t lose your wallet).

Nowadays, every request for information on Google is sold to online advertisers who continually build a detailed dossier on your digital life – reducing your anonymity substantially; you give up a substantial amount of privacy by participation in social sites such as FaceBook; and it’s easier than ever to ‘follow the money’ with credit-card or PayPal transactions being reported to central clearing houses.

With massive ‘data mining’ techniques – such as orthogonal comparison, rule induction and neural networks – certain data warehouse firms are able to extract and match facets of data from highly disparate sources and assemble an uncannily accurate composite of any single person’s habits, likes and travels.  Coupled with facial recognition algorithms, gps/WiFi tracking, the re-use of locational information submitted by users and so on, if one has the desire and access, it is possible to track a single person on a continual basis, and understand their likes for food and services, their political affiliation, their sexual, religious and other group preferences, their income, tax status, ownership of homes and vehicles, etc. etc.

The more that a person participates in social applications, and the more that they share on these apps, the less privacy they have. One of the side effects of the cloud is that it never forgets… in ‘real life’ we tend to forget most of what is told to us on a daily basis, it’s a clever information reduction technique that the human brain uses to avoid overload. It’s just not important to remember that Martha told us in passing last week that she stopped at the dry cleaner… but that fact is forever burnt into the cloud’s memory, since we paid for the transaction with our credit card, and while waiting for the shirts to be brought up from the back we were on our phone Googling something – and Google never forgets where you were or what you asked for when you asked…

These ‘digital bread crumbs’ all are assembled on a continual basis to build various profiles of you, with the hope that someone will pay for them. And they do.

So… what can a person do? And perhaps more importantly, what does a person want to do – in regards to managing their anonymity, privacy and security?

While one can take a ‘bunker mentality’ approach to reducing one’s exposure to such losses of privacy, this takes considerable time, focus and energy. Obviously, if one chooses not to use the internet, then substantial reductions in the potential loss of privacy from online techniques occur. Using cash for every transaction can avoid tracking by credit card use. Not partaking in online shopping increases your security, etc.

However, even this brute-force approach does not completely remove the threats to your privacy and security:  you still have to get cash from somewhere, either an ATM or the bank – so at least those transactions are still logged. Facial recognition software and omniscient surveillance will note your presence even if you don’t use FourSquare or a cellphone with GPS.

And most of us would find this form of existence terribly inconvenient. What is reasonable then to expect from our participation in the modern world which includes the cloud? How much anonymity is rightfully ours? What level of security and privacy should be afforded every citizen without that person having to take extraordinary precautions?

The answers of course are in process. This discussion is part of that – hopefully it will motivate discussion and action that will spur onwards the process of reaching a socially acceptable equilibrium of function and personal protection. The law of unintended consequences is very, very powerful in the cloud. Ask any woman who has been stalked and perhaps injured by an ex-husband that tracked her via cellphone or some of the other techniques discussed above…

An interesting side note:  at virtually every battered women’s center in the US now, the very first thing they do is take the woman’s cellphone away and physically remove the battery. It’s the only way to turn it off totally. Sad but true.

There is not going to be a single, simple solution for all of this. The ‘data collection genie’ is so far out of the bottle that it will be impossible on a practical basis to rewind this, and in many cases one would not want to. Nothing is for free, only alternatively funded. So in order to get the usefulness many of us find by using a search engine, a location-based query response for goods or services, etc. – the “cost” of that service is often borne by targeted advertising. In many cases the user is ok with that.

Perhaps the best solution set will be increased transparency on the use of the data collected. In theory, the fact that the government of Egypt maintains massive datasets on internet users and members of particular social applications is not a problem… but the use that the military police makes of that data can be rather harmful to some of their citizens…

We in the US have already seen efforts made in this direction, with privacy policies being either voluntarily adhered to, or mandated, in many sectors. Just as physical laws of behavior have been socially built and accepted for the common good, so does this need to occur in the cloud.

Rules for parking of cars make sense, with fines for parking in areas that obstruct traffic. Breaking into a bank and stealing money will incur punishment – something that is almost universal anywhere in the world, with relative alignment in the degree of the penalty. Today, even blatant internet crime is highly variable in terms of punishment or penalty. With fewer than 20% of the 196 countries in the world having any unified set of laws for enforcement of criminal activity on the internet, this is a challenging situation.

Today, the truth is that to ensure any reliable degree of anonymity, privacy and security of one’s self in the cloud you must take proactive steps at an individual level. This requires time, awareness, knowledge and energy. Hopefully this situation will improve, with certain levels of implicit expectations coming to the norm.

… but to really screw it up you need a computer…

January 26, 2012 · by parasam

We’ve all heard this one, but I wanted to share a few ‘horror stories’ with you as a prequel to a blog I will post shortly on the incredible challenges that our new world of digital content puts to us in terms of quality control of content. Today we are swimming in an ocean of content:  movies, books, music, financial information, e-mail, etc. We collectively put an alarming level of trust in all this digital information – we mostly just assume it’s correct. But what if it is not?

The following ‘disasters’ are all true. Completely. They were originally compiled by Andrew Brandt of the IDG News Service in October 2008 as a commentary on the importance of good QA (Quality Assurance) teams in the IT industry.

<begin article>

Stupid QA tricks: Colossal testing oversights

What do you get when you add the human propensity to screw stuff up to the building of large-scale IT systems? What the military calls the force-multiplier effect — and the need for a cadre of top-notch QA engineers.

After all, if left unchecked, one person’s slip of the mouse can quickly turn into weeks of lost work, months of missing e-mails, or, in the worst cases, whole companies going bankrupt. And with IT infused in every aspect of business, doesn’t it pay to take quality assurance seriously?

Let’s face it. Everybody makes mistakes. Users, managers, admins – no one is immune to the colossally stupid IT miscue now and again. But when a fat-fingered script or a poor security practice goes unnoticed all the way through development and into production, the unsung heroes of IT, the QA engineers, take a very embarrassing center stage. It may seem cliché, but your IT development chain is only as strong as its weakest link. You better hope that weakest link isn’t your QA team, as these five colossal testing oversights attest.

Code “typo” hides high risk of credit derivative

Testing oversight: Bug in financial risk assessment code

Consequence: Institutional investors are led to believe high-risk credit derivatives are highly desirable AAA-rated investments.

Here’s the kind of story we’re not hearing much about these days despite our present economic turmoil.

According to a report published in May 2008 in the Financial Times, Moody’s inadvertently overrated about $4 billion worth of debt instruments known as CPDOs (constant proportion debt obligations), due to a bug in its software. The company, which rates a wide variety of government bonds and obligation debts, underplayed the level of risk to investors as a result of the bug, a glitch that may have contributed to substantial investment losses among today’s reeling financial institutions. CPDOs were sold to large institutional investors beginning in 2006, during the height of the financial bubble, with promises of high returns — nearly 10 times those of prime European mortgage-backed bonds — at very little risk.

Internal Moody’s documents reviewed by reporters from the Financial Times, however, indicated that senior staff at Moody’s were aware in February 2007 that a glitch in some computer models rated CPDOs as much as 3.5 levels higher in the Moody’s metric than they should have been. As a result, Moody’s advertised CPDOs as significantly less risky than they actually were until the ratings were corrected in early 2008.

Institutional investors typically rely on ratings from at least two companies before they put significant money into a new financial product. Standard & Poor’s had previously rated CPDOs with its highest AAA rating, and stood by its evaluation. Moody’s AAA rating provided the critical second rating that spurred investors to begin purchasing CPDOs. But other bond-ratings firms didn’t rate CPDO transactions as highly; John Schiavetta, head of global structured credit at Derivative Fitch in New York, was quoted in the Financial Times in April 2007, saying, “We think the first generation of CPDO transactions are overrated.”

Among the U.S.-based financial institutions that put together CPDO portfolios, trying to cash in on what, in late 2006, seemed to be a gold rush in investments, were Lehman Brothers, Merrill Lynch, and J.P. Morgan. When first reported this past May, the Financial Times story described the bug in Moody’s rating system as “nothing more than a mathematical typo — a small glitch in a line of computer code.” But this glitch may have contributed in some measure to the disastrous financial situation all around us.

It’s kind of hard to come up with a snarky one-liner for a foul-up like that.

Testing tip: When testing something as critical as this, run commonsense trials: Throw variations of data at the formula, and make sure you get the expected result each time. You also have to audit your code periodically with an outside firm, to ensure that a vested insider hasn’t “accidentally” inserted a mathematical error that nets the insider millions. There’s no indication that such an inside job happened in this case, but such a scenario isn’t so far-fetched that it’s beyond the realm of possibility.
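In code, a “commonsense trial” can be as simple as asserting properties that must always hold. The sketch below exercises a deliberately simplified, made-up rating function (not Moody’s actual model) and checks one obvious invariant: a higher default probability must never earn a better rating:

```python
# Toy rating function: maps an estimated default probability to a letter grade.
# Purely illustrative -- not any real agency's model.
GRADES = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]
THRESHOLDS = [0.001, 0.003, 0.01, 0.03, 0.08, 0.15]

def rate(default_probability: float) -> str:
    for grade, limit in zip(GRADES, THRESHOLDS):
        if default_probability <= limit:
            return grade
    return GRADES[-1]

def test_rating_is_monotonic() -> None:
    """Throw a sweep of inputs at the formula: more risk must never mean a better grade."""
    probabilities = [i / 1000 for i in range(0, 200)]
    ranks = [GRADES.index(rate(p)) for p in probabilities]
    assert ranks == sorted(ranks), "rating improved as risk increased"

test_rating_is_monotonic()
print("monotonicity check passed")
```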

Sorry, Mr. Smith, you have cancer. Oh, you’re not Mr. Smith?

Testing oversight: Mismatched contact information in insurer’s customer database

Consequence: Blue Cross/Blue Shield sends 202,000 printed letters containing patient information and Social Security numbers to the wrong patients.

Of course, it sounded like a good idea at the time: Georgia’s largest health insurance company, with 3.1 million members, designed a system that would send patients information about how each visit was covered by their insurance. The EOB (explanation of benefits) letters would provide sensitive patient information, including payment and coverage details, as well as the name of the doctor or medical facility visited and the patient’s insurance ID number.

Most insurance companies send out EOBs after people receive medical treatment or visit a doctor, but the Georgia Blue Cross/Blue Shield system so muddled up its medical data management functionality that its members were sent other members’ sensitive patient information. According to The Atlanta Journal-Constitution, registered nurse Rhonda Bloschock, who is covered by Blue Cross/Blue Shield, received an envelope containing EOB letters for nine different people. Georgia State Insurance Commissioner John Oxendine described the gaffe to WALB news as “the worst breach of healthcare privacy I’ve seen in my 14 years in office.”

As for the roughly 6 percent of Georgia Blue Cross/Blue Shield customers who were affected, I’m sure they will be heartened by the statement provided by spokeswoman Cindy Sanders, who described the event as an isolated incident that “will not impact future EOB mailings.” It’s a mantra Georgia Blue Cross/Blue Shield customers can keep repeating to themselves for years as they constantly check their credit reports for signs of identity theft.

Testing tip: Merging databases is always tricky business, so it’s important to run a number of tests using a large sample set to ensure fields don’t get muddled together. The data set you use for testing should be large enough to stress the system as a normal database would, and the test data should be formatted in such a way to make it painfully obvious if anything is out of place. Never use the production database as your test set.
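One way to act on that tip is to generate synthetic records whose every field is self-describing, so that any cross-member mix-up after the merge is unmissable. A small sketch along those lines (the schema is invented, not the insurer’s):

```python
# Build an obviously-synthetic test set: every field embeds the member id,
# so a misaligned merge shows up immediately.
def make_test_members(count: int) -> list[dict]:
    return [
        {
            "member_id": f"TEST-{i:06d}",
            "name":      f"NAME-{i:06d}",
            "address":   f"ADDRESS-{i:06d}",
            "ssn":       f"SSN-{i:06d}",
        }
        for i in range(count)
    ]

def check_alignment(records: list[dict]) -> None:
    """Every field of every record must still carry that record's own id suffix."""
    for record in records:
        suffix = record["member_id"].split("-")[1]
        for field in ("name", "address", "ssn"):
            assert record[field].endswith(suffix), f"field mix-up in {record['member_id']}"

members = make_test_members(100_000)       # large enough to stress the pipeline
# merged = run_merge_under_test(members)   # hypothetical: the system being validated
check_alignment(members)                   # would be run against the merged output
print("alignment check passed")
```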

Where free shipping really, really isn’t free

Testing oversight: Widespread glitches in Web site upgrade

Consequence: Clothier J. Crew suffers huge financial losses and widespread customer dissatisfaction in wake of “upgrade” that blocks and fouls up customer orders for a month.

On June 28, 2008, engineers took down the Web site for clothes retailer J. Crew for 24 hours to perform an upgrade. In terms of the results of this effort, one might argue that the site did not in fact come back online for several weeks, even though it was still serving pages.

The company’s 10-Q filing summarized the problems: “During the second quarter of fiscal 2008 we implemented certain direct channel systems upgrades which impacted our ability to capture, process, ship and service customer orders.” That’s because the upgrade essentially prevented site visitors from doing anything other than look at photos of fashionable clothes.

Among the problems reported by customers was this whopper: A man who ordered some polo shirts received, instead, three child-size shirts and a bill for $44.97 for the shirts, plus $9,208.50 for shipping. And before you ask, no, they weren’t hand-delivered by a princess in an enchanted coach.

As a result, the company temporarily shut down e-mail marketing campaigns designed to drive business to the Web site. It also had to offer discounts, refunds, and other concessions to customers who couldn’t correct orders conducted online or who received partial or wrong orders.

But the biggest story is how the Web site upgrade affected the company’s bottom line. In a conference call with investors in August, CFO James Scully said, “The direct system upgrades did impact our second-quarter results more than we had anticipated and will also impact our third-quarter and fiscal-year results,” according to a transcript of the call.

Ouch.

Testing tip: When your company’s bottom line depends on the availability of your Web site, there’s no excuse for not running a thorough internal trial to probe the functionality of the entire site before you throw that update live to the world. Bring everyone on the Web team into the office, buy a bunch of pizzas, and tell them to click absolutely everything. And keep full backups of your old site’s front and back end, just in case you do somehow push a broken site update live and need to revert to save your company from unmitigated disaster.
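
The pizza-fueled click-everything session is still the gold standard, but even a crude automated smoke test against a staging copy of the site will catch a broken checkout before the world does. A minimal sketch in Python – the staging host and the list of critical paths are hypothetical, and it assumes the third-party requests library:

    # Pre-launch smoke test against a staging copy of the site (illustrative only).
    import requests

    STAGING = "https://staging.example.com"   # hypothetical staging host
    CRITICAL_PATHS = ["/", "/mens/polos", "/cart", "/checkout", "/account/orders"]

    def smoke_test():
        failures = []
        for path in CRITICAL_PATHS:
            resp = requests.get(STAGING + path, timeout=10)
            # Crude checks: the page must load and must not render an error template.
            if resp.status_code != 200 or "error" in resp.text.lower():
                failures.append((path, resp.status_code))
        return failures

    if __name__ == "__main__":
        problems = smoke_test()
        if problems:
            raise SystemExit(f"Do NOT go live – smoke test failures: {problems}")
        print("Smoke test passed; now run the full click-everything session.")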

Department of Corrections database inadvertently steps into the “user-generated” generation

Testing oversight: Trusted anonymous access to government database

Consequence: Database queries in URLs permit anyone with passing knowledge of SQL to pull down full personal information of anyone affiliated with the Oklahoma Department of Corrections, including prisoners, guards, and officers.

Anyone who’s ever been an employee of the Oklahoma prison system or an unwilling guest of the state now has an additional issue to worry about: identity theft. Thanks to a poorly programmed Web page designed to provide access to the Sexual and Violent Offender Registry, Web visitors were able to gain complete access to the entire Department of Corrections database.

Among the data stored in the database were names, addresses, Social Security numbers, medical histories, and e-mail addresses. But the problem was far worse than that: Anyone who knew how to craft SQL queries could have actually added information to the database.

Got an annoying neighbor who mows his lawn too early on a Sunday? How about a roommate who plays his music too loud, late into the night? Annoying ex-boyfriend or ex-girlfriend? Why not add them to the Sexual and Violent Offender Registry and watch them get rejected from jobs and be dragged off to the pokey after a routine traffic stop?

To add insult to injury, when Alex Papadimoulis, editor of dailywtf.com, alerted Oklahoma corrections officials about the security problem, they fixed it immediately — by making the SQL query case-sensitive.

So instead of adding “social_security_number” to the query string that retrieves that bit of information, it only worked if you used “Social_security_number.” Genius, huh? Nobody would ever have thought of that.

The database-on-a-Web-site issue is only a slice of the problems Oklahoma’s Department of Corrections faces when it comes to IT. An audit of the department published at the end of 2007 explains that the OMS (Offender Management System) is on the brink of collapse. “The current software is so out of date that it cannot reside on newer computer equipment and is maintained on an antiquated hardware platform that is becoming increasingly difficult to repair. A recent malfunction of this server took OMS down for over a full day while replacement parts were located. If this hardware ultimately fails, the agency will lose its most vital technology resource in the day-to-day management of the offender population.”

Testing tip: When you’re building an interface to a database that contains the sensitive data of hundreds or thousands of people, there’s no excuse for taking the least-expensive-coder route. Coding security into a Web application takes a programmer with practical experience. In this case, that didn’t happen. The money you spend on a secure site architecture at the beginning may save you from major embarrassment later, after some kid breaks your security model in five minutes. Remember, “security through obscurity” provides no security at all.
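
For the curious, the root of the Oklahoma fiasco is the difference between pasting user input directly into SQL text and letting the database driver bind it as data. A minimal illustration in Python using the built-in sqlite3 module (the table and its contents are invented for the example):

    # String-built SQL versus a parameterized query (illustrative only).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE offenders (name TEXT, offense TEXT)")
    conn.execute("INSERT INTO offenders VALUES ('John Doe', 'example')")

    user_input = "xyz' OR '1'='1"   # the sort of thing an attacker puts in a URL

    # DANGEROUS: user input is pasted straight into the SQL text.
    unsafe_sql = f"SELECT * FROM offenders WHERE name = '{user_input}'"
    print(conn.execute(unsafe_sql).fetchall())   # returns every row -- injection succeeds

    # SAFER: the driver binds the value, so the input is treated as data, not SQL.
    safe_rows = conn.execute(
        "SELECT * FROM offenders WHERE name = ?", (user_input,)
    ).fetchall()
    print(safe_rows)   # returns nothing -- no offender by that name exists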

Busted big-time — by the bank

Testing oversight: Contact fields transposed during financial database migration

Consequence: Financial services firm sends detailed “secret” savings and charge card records made for mistresses to customers’ wives.

It’s hard to get away with an affair when the bank won’t play along. That’s what some high-roller clients of an unnamed financial services firm learned when the firm sent statements containing full details of account holders’ assets to their home addresses.

Although that might not sound like a recipe for disaster, this particular firm — which requires a $10 million minimum deposit to open an account — is in the business of providing, shall we say, a financial masquerade for those who wish to sock away cash they don’t want certain members of their marriage to know about. Customers who desire this kind of service typically had one (somewhat abridged) statement mailed home, and another, more detailed (read: incriminating) statement mailed to another address.

When the firm instituted a major upgrade to its customer-facing portal, however, a database migration error slipped through the cracks. The customer’s home address was used for the full, unabridged account statements. The nature and character of the discussions between account holder and spouse regarding charges for hotel rooms, expensive jewelry, flowers, and dinners are left as an exercise for the imagination. According to a source inside the company, the firm lost a number of wealthy clients and nearly $450 million in managed assets as a result of the flub. But the real winners in this case, apparently, were the divorce lawyers.

Testing tip: In this case, it seems the engineers who designed the upgrade didn’t fully understand the ramifications of what they were doing, but the bank executives who maintain this house of cards were ultimately at fault. Communicate the intricacies of your customers’ business relationships to your site designers, and follow through with continuous oversight to keep clients’ dirty laundry, err, sensitive data out of public view.
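
Here is a hedged sketch of the kind of post-migration check that might have caught this, in Python with SQLite and an invented schema: for every account flagged for dual statements, the detailed statement must never be routed to the home address on file.

    # Post-migration check on statement routing (hypothetical schema, sample data).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, home_address TEXT, dual_statement INTEGER);
    CREATE TABLE statement_routing (account_id INTEGER, statement_type TEXT, mail_to TEXT);
    INSERT INTO accounts VALUES (1, '12 Maple Dr', 1);
    INSERT INTO statement_routing VALUES (1, 'abridged', '12 Maple Dr');
    INSERT INTO statement_routing VALUES (1, 'detailed', 'PO Box 990');
    """)

    # Any detailed statement routed to the home address of a dual-statement account
    # is exactly the failure mode described above.
    violations = conn.execute("""
        SELECT a.id FROM accounts a
        JOIN statement_routing r ON r.account_id = a.id
        WHERE a.dual_statement = 1
          AND r.statement_type = 'detailed'
          AND r.mail_to = a.home_address
    """).fetchall()

    assert not violations, f"Detailed statements routed to home addresses: {violations}"
    print("Statement routing intact.")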

<end article>

While the above examples are certainly large and adversely affected many people’s lives and finances, these are a very, very small tip of a really, really large iceberg. Our digital world is the modern-day Titanic, steaming ahead while we party to the tunes in our earbuds and believe the 3D we see is real…

We humans like to believe what we see, what we’re told, what we feel. Most of us are trusting of the information we receive every day with little consideration of its accuracy. Only when something doesn’t work, or some calamity occurs due to incorrect data, do we stop to ask ourselves, “Is that right?”

I’ll conclude these thoughts in my next posting, but I will leave you with a clue that will provide security and sanity in the face of potential digital uncertainty:  use common sense.

Technology and the Art of Storytelling

January 23, 2012 · by parasam

Why should we talk about this, particularly in relation to content distribution? Isn’t most of the art performed within production and creative services?

I would argue that as much creativity, craft and artistic design goes into preserving and re-creating the intention of the original theatrical story across the plethora of devices and transmission paths as was used in the original post-production process. At the end of the day the goal of the content creator is to provoke a set of responses within the human brain, excited by stimulus to the eyes and ears. (Currently our movies have made little use of smell, taste and touch… maybe that is next after 3-D becomes old hat?)

The field of Human Perception Design has recognized that the eye/ear/brain interface is rather easily fooled. If this were not the case, then all modern compression schemes would fail to provide an equivalent experience to the observer in relation to the original uncompressed material. While it is not technically feasible to match the viewing experience of the theatre with that of an iPod, it is possible to simulate enough of the original experience that consuming content in this form does not stand in the way of the storytelling.

In many ways, the theatrical viewing experience is more tolerant of errors, and is certainly less difficult to produce for, than mobile devices or internet-connected televisions at low bandwidths. Theatrical viewing is a closed system, with very high bandwidth, no distractions (such as stray light or other noise), and an immersive screen size (the field of vision fully occupied). Even an HD tv in the home must deal with unbalanced external light sources, an imperfect acoustical environment, issues with the dynamic range of both video and audio, and other parameters that can reduce the effectiveness of the storytelling process. This makes any imperfections more noticeable, since the issues already mentioned have typically removed all the “buffer” between following the story and having the viewing experience interrupted by distractions (such as noticeable artifacts in the picture or sound).

Even though it is far less likely to happen in the theatre, a momentary visual artifact (say blockiness in the picture, or a one-frame freeze) will not usually break the concentration of the viewer, as they are immersed in the dark room / big screen / loud sound chamber – there is so much “presence” of the story surrounding one that this ‘mass of experience’ carries one through these momentary distractions. The same level of error in a mobile or home viewing device will often interrupt the viewing experience – i.e. the distraction is noticed to the point where, even for a moment, the viewer’s concentration breaks from the story to the error.

When one adds in all the issues present in the Media Services process (low bandwidth, the restricted color gamut of both codecs and delivery devices, visual errors due to compression artifacts, etc.), it is easy to see that extraordinary measures must often be brought to bear during content delivery in order to preserve the story.

Typical challenges that affect content in this context are:  conversions from interlaced to progressive; frame rate conversions; resolution changes; codec changes; bit rate constraints; video and audio dynamic range compression; aspect ratio reformatting; audio channel downmixing; etc. There is often more than one way to resolve each issue, and other design parameters must be factored in, such as cost, time efficiency, facility capacity, etc.
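
To make one of those challenges concrete, here is a rough sketch of a 5.1-to-stereo audio downmix in Python/numpy, using the common ITU-style coefficients (center and surrounds folded in at roughly -3 dB). A real pipeline adds limiting, loudness management and codec-specific metadata; this shows only the core arithmetic:

    # 5.1-to-stereo downmix sketch (core arithmetic only).
    import numpy as np

    def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs, include_lfe=False):
        """Each argument is a numpy array of samples for one channel."""
        k = 1 / np.sqrt(2)              # ~0.707, i.e. -3 dB
        lfe = k * LFE if include_lfe else 0.0
        left = L + k * C + k * Ls + lfe
        right = R + k * C + k * Rs + lfe
        # Scale back into [-1, 1] if the sum would clip.
        peak = max(np.max(np.abs(left)), np.max(np.abs(right)), 1.0)
        return left / peak, right / peak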

Technology should, for the most part, be invisible and simply support the storytelling process. Just as a white pole stuck in a dune at White Sands park would be almost invisible at noon were it not for the shadow it throws, the shadow of technology should be all that is visible – just enough to outline and focus the viewer on the story.

Comments on SOPA and PIPA

January 23, 2012 · by parasam

The Stop Online Piracy Act (SOPA) and Protect Intellectual Property Act (PIPA) have received much attention recently. As is often the case with large-scale debate on proposed legislation, the facts and underlying issues can be obscured by emotion and shallow sound-bites. The issues are real but the current proposals to solve the problem are reactive in nature and do not fully address the fundamental challenge.

[Disclaimer:  I currently am employed by Technicolor, a major post-production firm that derives substantial income from the motion-picture industry and associated content owners / distributors. These entities, as well as my employer itself, experience tangible losses from piracy and other methods of intellectual property theft. However, the comments that follow are my personal opinions and do not reflect in any way the position of my employer or any other firm with which I do business.]

For those that need a brief introduction to these two bills that are currently in legislative process:  both bills are similar, and – if enacted – would allow enforcement of the following actions to reduce piracy of goods and services offered via the internet, primarily from off-shore companies.

  1. In one way or another, US-based Internet Service Providers (ISPs) would be required to block the links to any foreign-based server entity that had been identified as infringing on copyrighted material.
  2. Payment providers, advertisers and search engines would be required to cease doing business with foreign-based server sites that infringed on copyrighted material.

The intent behind this legislation is to block access to the sites for US-based consumers, and to remove or substantially reduce the economic returns that could be generated from US-based consumers on behalf of the offending web sites.

For further details on the bills, with some fairly objective comments on both the pros and cons of the bills, check this link. [I have no endorsement of this site, just found it to be reasonable and factual when compared with the wording of the bills themselves.]

The issues surrounding “piracy” (aka theft of intellectual or physical property) are complex. The practice of piracy has been with us since inter-cultural commerce began, with the first documented case being the exploits of the Sea Peoples who threatened the Aegean and Mediterranean seas in the 14th century BC.

Capture of Blackbeard

With the historical definition of piracy constrained to theft ‘on the high seas’ – i.e. areas of ocean that are international, or beyond the jurisdiction of any one nation-state – the extension of the term ‘piracy’ to describe theft based within the international ocean of the internet is entirely appropriate.

While the SOPA and PIPA bills are focused on ‘virtual’ property (movies, software, games and other forms of property that can be downloaded from the internet), modern piracy also affects many physical goods, from oil and other raw materials seized by Somali pirates off the east coast of Africa to stolen or counterfeit perfume, clothing and other tangibles offered for sale over the internet. The worst form of piracy today is human kidnapping on the high seas for ransom: more than 1,100 people were kidnapped by pirates in 2010, and over 300 people were being held hostage for ransom at the time of this article (Jan 2012). The larger issue of piracy is of major international concern, and will require proactive and persistent efforts to mitigate this threat.

While the solutions brought forward by these two bills are well-intentioned, they are reactive in nature and fall short of a practical solution. In addition, they suffer from the same heavy-handed methods that often accompany legislative attempts to modify human behavior. Without regard to any of the underlying issues, and taking no sides in terms of this commentary, governmental attempts to legislate alcohol and drug consumption, reproductive behavior and cohabitation lifestyles have all been either outright failures or fraught with difficulty and have produced little or none of the desired results.

Each side in this current debate has exaggerated both the risks and rewards of the proposed legislation. From the content owners’ side, the statements of financial losses are overblown and are in fact very difficult to quantify. One of the most erroneous bases for computing losses is the assumption that every pirated transaction represents money the studio or other content owner would have received had the content been legally purchased. This is not supported by fact. Unfortunately, many pirated transactions are motivated by cost (either very low or free) – if users had to pay for the content, they would simply choose not to purchase it at all. It is very difficult to assess the number of pirated transactions, although many attempts are made to quantify this value.

What certainly can be said is that real losses do occur and they are substantial. However, it would better serve both the content owners, and those that desire to assist these rightsholders, to pursue a more conservative and accurate assessment of losses. To achieve a practical solution to the challenge of Intellectual Property (IP) theft, it must be treated as a business use case, setting aside the moral aspects of the issue. The history of humanity is littered with the carcasses of failed attempts to legislate morality. Judgments of behavior do not generate cash; collection of revenue is the only mechanism that actually puts money in the bank.

Any action in commerce has a financial cost. In order to make an informed choice on the efficacy of a proposed action, the cost must be known, as well as the potential profit or loss. If a retail store wants to reduce the assumed losses due to shoplifting, the cost of the losses must be known as well as the cost of additional security measures in order to make a rational decision on what to spend to resolve the problem. If the cost of securing the merchandise is higher than the losses, then it makes no sense to embark on additional measures.

Overstating the amount of losses due to piracy could appear to justify expensive measures to counteract this theft – measures which, if implemented, may in fact only add to the overall financial loss. In addition, costs to implement security are real, while unearned revenue is potential, not actual.

On the side of the detractors of the SOPA and PIPA legislation, the claims of disruption to the fabric of the internet, as well as of potential security breaches if link blocking were enabled, are also overstated. As an example, China currently practices large-scale link blocking, DNS (Domain Name Server) re-routing and other technical practices that are similar in many respects to the technical solutions of the proposed Acts – and none of this has broken the internet, even internally within China.

The real issue here is that these methods don’t work well. The very nature of the internet (a highly redundant, robust and reliable fabric of connectivity) works against attempts to thwart connections from a client to a server. We have seen many recent attempts by governments to restrict internet connectivity to users within China, the Arab states, Libya, etc – and all have essentially failed.
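
As a small illustration of why resolver-level blocking is so leaky: if a local ISP’s DNS simply refuses to answer for a blocked name, a determined client can ask a different resolver. A hedged sketch using the third-party dnspython package (the domain and resolver address are only examples):

    # Querying an alternate resolver directly, bypassing the ISP's DNS.
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)   # ignore the system's resolver settings
    resolver.nameservers = ["9.9.9.9"]                  # any public resolver outside the ISP

    answer = resolver.resolve("example.com", "A")       # dnspython >= 2.0; older versions use .query()
    for record in answer:
        print(record.address)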

For both sides of this discussion, a more appropriate direction for legislation, funding and focus of energy is to treat this issue for what it factually is:  a criminal activity that requires mitigation from the public sector through police and judicial efforts, and from the private sector through specific and proven security measures. Again, the analogy of current practices in retail merchandising may be useful:  the various technologies of RFID scanners at store exits, barcoded ‘return authorization tags’ and other measures have proven to substantially reduce property and financial loss without unduly penalizing the majority of honest consumers.

Coupled with specific laws and the policy of prosecuting all shoplifters, this two-pronged approach (from both the public and private sectors) has made substantial inroads against merchandise loss in the retail industry.

Content protection is a complex issue and cannot be solved with just one or two simple acts, no matter how much that may be desired. In addition, the actual financial threat posed by piracy of movies and other content must be honestly addressed: it is sometimes convenient to point to perceived losses from piracy rather than to other causes – for instance, poor returns simply because no one liked the movie, or distribution costs that are higher than ideal.

A part of the overall landscape of content protection is to look at both the demand side as well as the supply side of the equation. Both the SOPA and PIPA proposals only address the supply side – they attempt to reduce access to, or disrupt payment for, the supply of assets. Most consumers make purchase choices based on a cost/benefit model, even if unconsciously so:  therefore at first glance, the attractiveness of downloading a movie for ‘free’ as opposed to paying $5-$25 for the content is high.

However, there are a number of mitigating factors that make the choice more complex:

  • Quality of the product
  • Ease of use (for both getting and playing the content)
  • Ease of re-use or sharing the content
  • Flexibility of devices on which the content may be consumed
  • Potential of consequences for use of pirated material

With careful attention to the above factors (and more), it is possible for legal content to become potentially more attractive than pirated content, at least for a percentage of consumers. It is impossible to prevent piracy from occurring – the most that is reasonable to expect is a reduction to the point where the financial losses are tolerable. This is the same tactic taken with retail merchandise security – a cost/benefit analysis helps determine the appropriate level of security cost in relation to the losses.

In terms of the factors listed above:

  • Legal commercial content is almost always of substantially higher quality than pirated content, raising the attractiveness of the product.
  • For most consumers (i.e. excluding teenage geeks that have endless time and patience!) a properly designed portal or other download experience CAN be much easier to operate than linking to a pirate site, determining which files to download, uncompressing, etc. Unfortunately, many commercial sites are not well designed, and often are as frustrating to operate as some pirate sites. Attention to this issue is very important, as this is a low cost method to retain legal customers.
  • Depending on the rights purchased, and whether the content was streamed or downloaded, the re-use or legal sharing of purchased content (i.e. within the home or on mobile devices owned by the content purchaser) should ideally be straightforward. Again, this is often not the case, and again motivates consumers to potentially consider pirated material as it is often easier to consume on multiple devices and share with others. This is a very big issue and is only beginning to be substantially addressed by such technologies as UltraViolet, Keychest and others. Another issue that often complicates this factor is the enormously complex and inconsistent legal rights to copyrighted material. Music, books, movies, etc. all have highly divergent rules that govern the distribution and sale of the material. The level of complexity and cost of administering these rights, and the resultant inequities in availability, make pirated material much more available and attractive than it should be.
  • With the recent explosion of types of devices available to consume digital content (whether books, movies, tv, music, newspapers, etc.) the consumer rightly desires a seamless consumption model across the devices of their choice. This is often not provided legally, or is available only at significant cost. This is yet another area that can be addressed by content owners and distributors to lower the attractiveness of pirated material.
  • The issue of consequences for end-users that may be held accountable for downloading and consumption of pirated material is complex and fraught with potential backlash to content owners that attempt enforcement in this area. Several recent cases within the music industry have shown that the adverse publicity garnered by content owners suing end users has had a high cost and is generally perceived to be counter-productive. The bulk of legal enforcement at this time is concentrated on the providers of pirated material all through the supply chain, as opposed to the final consumer. This is also a more efficient use of resources, as the effort to identify and legally prosecute potentially millions of consumers of pirated material would be impractical compared to degrading the supply chain itself – often operated by a few hundreds of individuals. There have been recent attempts by some governments and ISPs to monitor and identify the connections from an end consumer to a known pirate site and then mete out some level of punishment for this practice. This usually takes the form of multiple warnings to a user followed by some degradation or interruption of their internet service. There are several factors that complicate the enforcement of this type of policy:
    • This action potentially comes up against privacy concerns, and the level and invasiveness of monitoring of a user’s habits and what they download vary greatly by country and culture.
    • Many so-called ‘pirate’ sites offer a mix of both legally obtained material, illegally obtained material, and storage for user generated content. It is usually impossible to precisely determine which of these content types a user has actually downloaded, so the risk is high that a user could be punished for a perfectly innocent behavior.
    • It is too easy for a pirate site to keep one (or several) steps ahead of this kind of enforcement activity with changing names, ip addresses, and other obfuscating tactics.

In summary, it should be understood that piracy of copyrighted material is a real and serious threat to the financial well-being of content producers throughout the world. What is called for to mitigate this threat is a combined approach that is rational, efficient and affordable. Emotional rhetoric and draconian measures will not solve the problem, but only exacerbate tensions and divert resources from the real problem. A parallel approach of improving the rights management, distribution methodology and security measures associated with legal content – aided by consistent application of law and streamlined judicial and police procedure world-wide – is the most effective method for reducing the trafficking of stolen intellectual property.

Education of the consumer will also help. Although, as stated earlier, one cannot legislate morality – and in the ‘privacy’ of the consumer’s internet connection many will take all they can get for ‘free’ – it cannot hurt to repeatedly describe the knock-on effects of large scale piracy on the content creation sector. The bottom line is that the costs of producing high quality entertainment are significant, and without sufficient financial return this cannot be sustained. The music industry is a prime example of this:  more labels and music studios have gone out of business than remain in business today – as measured from 1970 to 2011. While it is true that the lowered bar of cost due to modern technology has allowed many to ‘self-produce’ it is also true that some of the great recording studios that have gone out of business due to decreased demand and funding have cost us – and future generations – the unique sound that was only possible in those physical rooms. These intangible costs can be very high.

One last fact that should be added to the public awareness concerning online piracy:  the majority of these sites today are either run by or funded by organized criminal cartels. For instance, in Mexico the production and sale of counterfeit DVDs is used primarily as a method of laundering drug money, in addition to the profitable nature of the business itself (since no revenues are returned to the studios whose content is being duplicated). The fact that the subscription fees for the online pirate site of choice are very likely funding human trafficking, sexual slavery, drug distribution and other criminal activity on a large scale should not be ignored. Everyone is free to make a choice. The industry, and collective governments, need to provide thoughtful, useful and practical measures to help consumers make the right choice.
