Parasam


Archive For February, 2012

DI – Disintermediation

February 28, 2012 · by parasam

Disintermediation – a term you should come to know. Essentially it means “to remove the intermediary.” This has always been a disruptive process in cultures – whether in religion, law, education or technology. In religion, one well-known example was the rise of the Lutheran church, whose followers felt the ‘intermediary’ hierarchy of pope, cardinals, bishops, etc. in the Catholic tradition was no longer necessary for the common man to connect with his belief in God.

Higher education used to be exclusive to those that could afford to attend brick-and-mortar campuses; now we have iTunesU, distance learning and a host of alternative learning environments open to virtually anyone with the time and focus to consume the knowledge.

Questions of law were traditionally brought through barristers, attorneys and others – ensconced in a system of high cost and slow delivery of service. Today we have storefront paralegal offices, online access to many governmental and private legal services, and a plethora of inexpensive software for preparing common legal forms.

Each of these areas of practice fought long and hard to preserve the intermediary. We were told that our souls might be lost without the guidance of trained priests, that we might lose everything we owned if we prepared legal documents without assistance, and that only a trained professor could teach us anything.

We, collectively, begged to differ. And slowly, with much bloodshed, sweat and tears, we succeeded in emancipating ourselves from the yoke of enforced intermediation. However, like many true tools, knowledge is a sharp two-edged sword. Going it alone has consequences. There is no substitute for the compassion and experience of a spiritual advisor, no matter his or her title. There are many areas of law where the specialized knowledge of legal codes, not to mention the oratorical skills of an experienced courtroom jouster, are essential to victory. The guidance and explanation of one who has mastered a subject – not to mention the special skills of teaching, of helping a student reach comprehension – are many times critical to the learning process.

Now we are confronting a new locus of disintermediation: the provisioning of access to the ‘cloud’ of entertainment and information. The internet – in its largest sense – is in a new stage of democratization. The traditional providers of access (telcos, cable TV, satellite) are in a fight for their collective lives – and they are losing. Attempting to hold onto an outmoded business model is simply ‘dead man walking’ philosophy. At most you can walk more slowly – but you will reach the hangman’s noose regardless.

This will not be an overnight change, but we have already seen just how quickly ‘internet time’ moves. The velocity of change is on an exponential upwards curve, and nothing in our recent past has given us any reason to doubt this will alter anytime soon.

There are a number of factors fueling this: the explosion in the number of content creators; the desire of traditional content creators (studios, episodic TV) to sell their content to as wide an audience as rapidly as possible; and the high cost and oft-perceived low value-add of traditional NSPs (Network Service Providers – telco, cable, etc.).

One of the biggest reasons this change in consumer behavior has taken hold is knowledge: as little as ten years ago, the average media consumer regarded the ‘channel’ and the ‘content’ as the same thing. Movies or news on TV came out of the wall on a cord (no matter what fed the cord) – or movies could be seen on a plastic disk that you rented/bought. The concept of separating content from channel did not exist.

Today, even the average 6-year-old understands that he or she can watch SpongeBob on almost anything that has a screen and a speaker. The content is what matters; how it gets there, and on what device it is consumed, just doesn’t matter much. Not a great thing for makers of networks or devices…

Fortunately the other side of the sword does exist… while the traditional models of telco provisioning of services are either antiquated or obsolete (the time/distance model of tariffs, high cost for low functionality, etc.), the opportunity for new business models does exist. What is disruptive for one set of structures is opportunistic for others.

Once content is ‘unbound’ from its traditional channels, a new world of complexity sets in:  metadata about the content gains importance. Is it SD or HD? What’s the aspect ratio: 4:3 or 16:9? What codec was used (for instance if it was Flash I can’t see it on my iPad), etc. etc.
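A minimal sketch of such an asset-metadata record, with a crude compatibility check. The field names and values here are purely illustrative – they are not drawn from any particular metadata standard:

```python
# A sketch of the kind of technical metadata that matters once content is
# "unbound" from its delivery channel.  Fields and values are illustrative
# only, not from any real standard.
asset_metadata = {
    "title": "Example Feature",
    "resolution": "HD",       # SD or HD
    "aspect_ratio": "16:9",   # vs. the older 4:3
    "codec": "H.264",         # a Flash-encoded asset won't play on an iPad
    "drm": "none",
}

def playable_on_ipad(meta):
    """Crude compatibility check: the iPad (circa 2012) had no Flash support."""
    return meta["codec"] not in ("Flash", "VP6")

print(playable_on_ipad(asset_metadata))
```

Once a catalog of content carries records like this, finding "the version of it that you want" becomes a query over metadata rather than a choice of channel.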

Finding the content, or the version of it that you want, can be challenging. Licensing and other DRM (Digital Rights Management) issues add to the confusion. If voice communication (aka the telephone) is stripped out of its network and becomes an ‘app’ (like Skype, for instance), who sells/supports the app? If all “private networks” (telco, cable, satellite) become essentially data pipes only, what pricing models can be offered that will attract consumers yet allow such companies to run profitably? There is a growing tendency toward “un-bundling” and other measures of transparency – for instance, overseas mobile phone operators are backing away from cellphone handset subsidies. This is due in large part to the prevalence of prepaid phone contracts in these regions – for which no subsidized phones can be provided. It has the knock-on effect of reducing iPhone sales and increasing the penetration of Android (and other) less expensive phone hardware. For instance, in the last year or so the sales of iPhones have fallen considerably in Greece, Spain, Portugal and Italy… Hmmm, wonder why??

All of these questions will assume larger and larger importance in the near future. Current modalities are either failing outright or becoming marginalized. We have heard the moniker “Content is King” – and it’s still true, much to the chagrin of many network providers. When one is thirsty, you pay for water, not the pipes that it arrives in…

Here’s another anecdotal piece that helps to demonstrate that you cannot overstate the importance of content ownership: as is well known, VFX (visual effects) are now the ‘star’ of most movies. A decade ago, actors carried a movie; now it’s the effects… Do the research. Look at 2011 box office stats. Try to find a movie in the top 20 grossing that did NOT have significant special effects… Now here’s the important bit: one would think that the firms that specialize in creating such fantastic imagery would be wildly successful… NOT. It’s very, very expensive to create this stuff. It takes ungodly amounts of processing power, many really clever humans, and ridiculous amounts of time. Rango just won the Oscar… and in spite of the insanely powerful computers we have today, it took TWO YEARS to animate this movie!

The bottom line is that the ONLY special effects firms that are in business today, or are remotely profitable, are the ones connected to either studios or consortiums that themselves own the content on which this magic is applied.

Content is water at the top of the hill. The consumers are at the bottom with their little digital buckets out, waiting to be filled. They just don’t care which path the water runs down the hill… but they DO care that it runs quickly, without damming up, and without someone trying to siphon off ‘their’ water…

This is not all settled. Many battles will be won and lost before the outcome of the ‘war’ is known. New strategies, new generals, new covert forces will be deployed.

Stay tuned.

Blog Site Design Update

February 26, 2012 · by parasam

The site for this blog has been redesigned. It now supports additional features, including search, a more graphical selection pane, drop-down category selection and more. In addition, when the blog is viewed on an iPad a different ‘app-like’ interface is presented that is more appropriate to the “tap & swipe” navigation of a tablet. A slimmed-down, mostly text version is presented to smartphone users. The aim is to present the blog posts in a clean, uncluttered manner no matter what device is used to view the site.

Please comment to this post with any bugs or suggestions. Many thanks for reading!

VERA (Vision Electronic Recording Apparatus) – one of the first video tape recorders

February 26, 2012 · by parasam

Today we take for granted the ability to watch HDTV on large screens in our homes, see stunning digital cinema at super high resolution in the theatre, and stream high quality video to our smartphones, tablets and computers. It wasn’t always this way…

One of the first attempts at recording and playing back video content (with audio as well) was a British system: the Vision Electronic Recording Apparatus (VERA). Until then, all video distribution was completely live, in real time. Development of this device began in 1952 at the BBC, under project manager Dr. Peter Axon.

All recording technology of that time was designed for audio, and of course only analog techniques were available. At a high level, the recorders used open-reel magnetic tape that passed over a fixed record/playback head. Although there were several formats for audio recording in use by the 1950s – common professional ones moved the tape at 15 ips (inches per second) to record frequencies up to 16 kHz – there were no video recording devices.

Video signals, even monochrome, require much higher bandwidth. The relatively low-resolution cameras of that day (405 tv lines) needed about 3 MHz to record the fine detail in the picture – roughly 200x the maximum frequency recorded by the audio tape machines of that time. Essentially, to record a higher-bandwidth signal on magnetic tape using stationary heads, either the head gap must be made smaller, or the tape must move faster.

Limitations in materials science in the 1950s, as well as other complex issues of recording head design, essentially dictated that the tape needed to move much faster to record video. For the VERA system, the BBC used 52cm (20”) reels of magnetic tape that moved past the stationary record/playback head at 5.08 meters per second (16.7 ft. per sec.!) That’s about 13x faster than audio tape (200 ips vs. 15 ips) – quite a mechanical achievement for that time.

VERA was capable of recording about 15 minutes (i.e. 4572 meters of tape) of 405-line black-and-white video per reel, and the picture tended to wobble because the synchronizing pulses that keep the picture stable were not recorded accurately enough.
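A quick back-of-the-envelope check of these figures, using only the speeds and bandwidths quoted above:

```python
# Back-of-the-envelope check of the VERA tape-transport figures.
# Inputs: professional audio machines at 15 ips, VERA at 5.08 m/s,
# ~16 kHz audio bandwidth, ~3 MHz for 405-line monochrome video.

INCH = 0.0254  # metres

audio_speed = 15 * INCH          # 15 ips ≈ 0.381 m/s
vera_speed = 5.08                # m/s, i.e. 200 ips

speed_ratio = vera_speed / audio_speed

# The shortest wavelength a stationary head can resolve is roughly its gap
# width (wavelength on tape = speed / frequency), so for a fixed gap the
# maximum recordable frequency scales linearly with tape speed.
bandwidth_ratio = 3e6 / 16e3     # video vs. audio bandwidth

# Tape consumed by one 15-minute reel:
reel_length = vera_speed * 15 * 60   # metres

print(f"speed ratio:     {speed_ratio:.1f}x")
print(f"bandwidth ratio: {bandwidth_ratio:.0f}x")
print(f"reel length:     {reel_length:.0f} m")
```

Note the arithmetic: the ~190x bandwidth gap could not be closed by tape speed alone (only ~13x), which is why head-gap improvements – and ultimately Ampex’s spinning-head drum – were needed.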

In order to cope with 625-line PAL or SECAM colour transmissions VERA would likely have required an even faster, and possibly unfeasible, tape speed.

Development began in 1952, but VERA was not perfected until 1958, by which time it had already been rendered obsolete by the Ampex quadruplex video recording system. This used 5cm (2”) wide tapes running at a speed of 38cm/s (15ips). The rapid tape-to-head speed was achieved by spinning the heads rapidly on a drum: the system used, with variations, on all video tape systems ever since, as well as DAT.

The BBC scrapped VERA and quickly adopted the Ampex system. It has been suggested that the BBC only continued to develop VERA as a bargaining tool, so that it would be offered some of the first Ampex machines produced in exchange for abandoning further work on a potential rival.

The only VERA recordings that survive are film telerecordings of the original demonstration. Even if some of the original tape had survived there would be no way of playing it back today. Rather the film kinescopes were transferred using modern film scanning technology to a digital file, one of which is reproduced here.

Streetphotography vs Studio Photography

February 21, 2012 · by parasam

A short discussion of style…

In the dawn of the photographic age, studio photography was the only viable method for photographing people – the equipment was not at all portable, both the subject and the camera needed to be very still to accommodate the long exposure times required of early film emulsions, and lots of light was needed.

In this regard, I even include ‘outdoors’ as a studio setting – essentially a studio without walls, and using the sun as the major light source. The style was very formal, posed and often artificial looking (in terms of emotion).

As we move forward a hundred years, we now have lightweight digital capture and amazing quality under the most challenging lighting conditions possible. However, for the high end of ‘people photography’ – fashion editorial, commercial, wedding and portraiture – the use of the ‘studio’ setting is still most common. Even if the ‘studio’ is again a non-traditional room – such as outdoors, or a runway at a fashion show – the style of the photo is highly staged. A modern editorial fashion shoot, even at the beach, is often indiscernible from a movie set. You have set designers, set dressers, stylists, makeup, wardrobe, camera assistants, gaffers, grips, directors, producers, etc. etc.

And that’s only for the image capture. Then this rather large pile of pixels heads off to the wonderful world of Photoshop… where many times the end product bears faint resemblance to original image capture. Men and women alike have had the equivalent of facelifts, liposuction, limb extensions, hair removal and all sorts of operations that would be rather unpleasant if performed medically…

The whole “Photoshopped” issue has been dissected extensively in the media and will not be revisited here, except to make this stylistic point: even at a subtle level, this form of ‘retouching’ is picked up by most people – often with the accompanying statement, “I could never look like that…”

While there is absolutely nothing wrong with idealism, fantasy and other imaging styles that are not accurate reflections of reality, there does appear to be a certain ‘stylistic fatigue’ happening. With the recognition that most of the money spent on women’s clothing, for example, comes from ‘real people’ – as opposed to those fortunate enough to measure their ability to purchase dresses by walk-in closet size rather than credit limit – clothing marketers are, in some cases, turning to a more casual style.

Even so-called ‘catalog’ photography – where the subject is always isolated from any surroundings, often looking as if suspended on the page (due to shooting on a white seamless paper background in the studio) – is being subjected to change.

We like things that feel a bit more authentic (whether they are or not is of course not the issue!), more ‘real’, more personal. Whether we are looking at a woman modeling a pair of shoes or a guy getting out of a pickup, there is today a stylistic hunger for ‘approachable’ subject matter. Could this person in the photo be me? Advertisers certainly want the target’s mind to work this way!!

Getting back to style and technology: the advent of 35mm film cameras with relatively fast lenses helped revolutionize the possibility of ‘candid photography’. One no longer needs tripods, massive lights – or a subject who stands still. While the change from film to digital imaging has not changed things that much in terms of the actual art and practice of candid photography, it certainly has made the entire workflow easier, faster and less expensive.

Today, there is a respected genre of photography, “streetphotography.” Like any other distinctive taxonomy, the edges are blurred. What if the photo was taken inside a building? and so on… that is essentially unimportant, it’s the style that is under discussion:  the look/feel of a candid, unposed moment that was randomly captured. This ‘lack of preparedness’ by the subject (whether factual or not) helps tell a story of ‘everyman/everywoman’ – thereby aligning the essence of the photo with the viewer. People consuming these images relate more closely to the subjects, or at least see them as ‘more real’ compared to high glamor studio shots.

The actual art of streetphotography is one of the most challenging, as the photographer has little control over many of the things a studio photographer is used to taking for granted. Your subjects don’t often stand where you like them to, there are many visual distractions around and behind them – making composition a continual effort. Lighting, and the resultant exposure requirements, can vary widely. Color temperature, reflected light causing color shifts and other issues conspire against a good photograph.

People, cars, etc. will often invade the shot as the shutter is clicked. Ideally the subject is either unaware of the photographer (these days that is easier, as people on cell phones – which seems to account for 97% of the population – are mostly oblivious to their surroundings), or reacts only after the shutter falls.

There is a social side to this as well. With virtually every person on the planet now having a cellphone/camera combination, we as a culture have grown much more accepting of being photographed. Everyone knows that even if you are not specifically aware of them, just about any populated place on earth has surveillance cameras – and we are constantly background subjects in other people’s vacation/tourist snaps.

While no one is happy with those paparazzi who overstep both normative behavior and common sense, having one’s picture taken anyplace perceived as a ‘public venue’ is pretty much accepted and generally a non-issue.

I have personally been shooting this style of photography for decades now, and can remember fewer than a handful of incidents where someone objected to being photographed. In those cases, their requests were always honored.

Probably the most difficult aspect of quality streetphotography is previsualization. Just like any other form of imaging, the best results are obtained if you have the completed picture in your mind’s eye before you snap the shutter. The internal mental speed at which this visualization must take place is faster in this genre than just about any other – you see an opportunity taking place, you may have from 1 to 5 seconds to prepare everything (composition, camera placement, focus, lighting, angle, etc.) and get the shot. This requires a lot of instinctual behavior, and a rock-solid knowledge of your equipment. Just like an accomplished guitarist just ‘feels the music’ and their fingers automatically place themselves on the fretboard – a fluid and experienced photographer can run the camera by feel alone. It’s one of the primary reasons I still prefer somewhat “old-fashioned” clunky DSLR cameras with physical buttons – there is no time for futzing with touch screen menus in this type of photography.

Exposure needs to be set ahead of time, as does lens selection (or focal length setting on a zoom lens). Ideally all you should need to do to react to an upcoming photo opportunity is to compose and focus, and even focus can be preset. It’s always better to let a subject ‘walk into focus’ as opposed to chasing them trying to simultaneously focus, compose and trip the shutter.

The most important bit, however, is the art of seeing. Most of us look but don’t see. The innate punch of a good photograph is the communication of the photographer’s “sight” to the audience. Be still. Wait. Watch. There are new opportunities every second. In streetphotography most will be missed – it’s the nature of things. Wrong position, wrong light, a person walking in front of you, etc. But perseverance furthers – patience and practice will produce shots that could never be duplicated in a studio.

Studio photography is a wonderful genre, and nothing in this discussion should take away from that – rather this is hopefully an exploration of a type of imaging that has been peripheral but is now becoming more accepted and used in different areas.

To revisit equipment for a moment: although I have made an argument for the use of “button oriented” DSLR cameras, like anything, the art is in the photographer, not the equipment. Unfortunately, in this hobby/profession, issues around equipment as the arbiter of quality/success abound perhaps more than in any other. My personal opinion is that, with very few exceptions, camera equipment plays a small part in the overall end result of the photograph. Someone with skill and experience and vision can make wonderful images with a cheap cellphone, while the latest monster 40-megapixel handheld howitzer of a camera won’t help someone who doesn’t take the time to learn the craft. I have been experimenting with the iPhone a lot recently, and am pleasantly surprised by what one can do with this device. I’ll be posting another blog shortly focused exclusively on cellphone cameras, the iPhone in particular.

In closing, here are a few examples of how streetphotography is moving more and more into the commercial world. I happened to get the latest Nordstrom catalog in my mailbox today, and noticed a shot on the inside flyleaf that I recognized: I had taken a ‘street shot’ at this same location recently (Le Petit Four on Sunset Plaza – Hollywood).

Here is the catalog shot:

Revved-up classics The fun here is making old favorites new. (and how cool is that phone attachment?)

And here is the shot I took a few months ago (closeup of the right-hand table in the shot above):

Lunch... he's casual with highlight-seamed 3/4 shorts, plenty of ink, gold aviator glasses and earring; she's matched the top with striped skirt, small clutch and straw wedges with pewter metallic tops.

Here’s one more example from the same catalog (another ‘street style’ shot):

Bright on bright Two is great, three's even better (count your bag, too)

and here is a shot of mine in the same style – (unfortunately no one was paying me that day for bag/shoes/dress product shots!):

strapless ruched panel dress, high black suede pumps, black patent strap bag - sophisticated cool look for running to the next shop...


How ‘where we are’ affects ‘what we see’…

February 17, 2012 · by parasam

I won’t often be reposting other blogs here in their entirety, but this is such a good example of a topic on which I will be posting shortly I wanted to share this with you. “Contextual awareness” has been proven in many instances to color our perception, whether this is visual, auditory, smell, taste, etc.

Here’s the story:  (thanks to Josh Armour for his post that first caught my attention)

Care for another ‘urban legend’? This one has been verified as true by a couple of sources.
A man sat at a metro station in Washington DC and started to play the violin; it was a cold January morning. He played six Bach pieces for about 45 minutes. During that time, since it was rush hour, it was calculated that 1,100 people went through the station, most of them on their way to work.
Three minutes went by, and a middle-aged man noticed there was a musician playing. He slowed his pace, stopped for a few seconds, and then hurried on to meet his schedule.
A minute later, the violinist received his first dollar tip: a woman threw the money in the till and, without stopping, continued to walk.
A few minutes later, someone leaned against the wall to listen to him, but the man looked at his watch and started to walk again. Clearly he was late for work.
The one who paid the most attention was a 3-year-old boy. His mother dragged him along, hurried, but the kid stopped to look at the violinist. Finally, the mother pushed hard, and the child continued to walk, turning his head all the time. This action was repeated by several other children. All the parents, without exception, forced them to move on.
In the 45 minutes the musician played, only 6 people stopped and stayed for a while. About 20 gave him money, but continued to walk at their normal pace. He collected $32. When he finished playing and silence took over, no one noticed. No one applauded, nor was there any recognition.
No one knew this, but the violinist was Joshua Bell, one of the most talented musicians in the world. He had just played one of the most intricate pieces ever written, on a violin worth $3.5 million.
Two days before his playing in the subway, Joshua Bell had sold out a theater in Boston where the seats averaged $100.
This is a real story. Joshua Bell playing incognito in the metro station was organized by the Washington Post as part of a social experiment about perception, taste, and the priorities of people. The question was: in a commonplace environment, at an inappropriate hour, do we perceive beauty? Do we stop to appreciate it? Do we recognize talent in an unexpected context?
One of the possible conclusions from this experience could be:
If we do not have a moment to stop and listen to one of the best musicians in the world playing some of the best music ever written, how many other things are we missing?
Thanks to +Kyle Salewski for providing the actual video link here: Stop and Hear the Music
+Christine Jacinta Cabalo points out that Joshua Bell has this story on his website: http://www.joshuabell.com/news/pulitzer-prize-winning-washington-post-feature
http://www.snopes.com/music/artists/bell.asp

Whose Data Is It Anyway?

February 17, 2012 · by parasam

A trending issue, with much recent activity in the headlines, is the thorny topic of what I will call our ‘digital shadow’. By this I mean collectively all the data that represents our real self in the virtual world. This digital shadow is comprised of both explicit data (e-mails you send, web pages you browse, movies/music you stream, etc.) and implicit data (the time of day you visited a web page, how long you spent viewing that page, the location of your cellphone throughout the day, etc.).

Every time you move through the virtual world, you leave a shadow. Some call this your digital footprint. The size of this footprint or shadow is much, much larger than most realize. An example, with something as simple as a single corporate e-mail sent to a colleague at another company:

Your original e-mail may have been a few paragraphs of text (5kB) and a two page Word document (45kB) for a nominal size of 50kB. When you press Send this is cached in your computer, then copied to your firm’s e-mail server. It is copied again, at least twice, before it even leaves your company: once to the shadow backup service (just about all e-mail backup systems today run a live parallel backup to avoid losing any mail), and again to your firm’s data retention archive – mandated by Sarbanes-Oxley, FRCP (Federal Rules of Civil Procedure), etc.

The message then begins its journey across the internet to the recipient. After leaving the actual e-mail server, the message must traverse your corporation’s firewall. Each message is typically inspected for outgoing viruses, and potentially for attachment type or other parameters set by your company’s communications policy. In order to do this, the message is held in memory for a short time.

The e-mail then finally begins its trip on the WAN (Wide Area Network) – which is actually many miles of fiber optic cable with a number of routers to link the segments – that is what the internet is, physically. (Ok, it might be copper, or a microwave, but basically it’s a bunch of pipes and pumps that squirt traffic to where it’s supposed to end up).

A typical international e-mail will pass through at least 30 routers, each one of which holds the message in its internal memory for a while, until that message moves out of the queue. This is known as ‘store and forward’ technology. Eventually the message gets to the recipient firm, and goes through the same steps as when it first left – albeit in reverse order, finally arriving at the recipient’s desktop, now occupying memory on their laptop.
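The store-and-forward journey described above can be sketched as a simple hop list. The hop names and retention flags below are invented for illustration, not taken from any real network:

```python
# Toy model of a corporate e-mail's store-and-forward journey.
# Every hop stores a copy at least briefly; some keep it more or less forever
# (backups, retention archives).  All names and flags are illustrative.

# (hop name, keeps_permanent_copy)
path = (
    [("sender-laptop", True), ("corp-mail-server", True),
     ("shadow-backup", True), ("retention-archive", True),
     ("corp-firewall", False)]
    + [(f"router-{i}", False) for i in range(1, 31)]  # ~30 transit routers
    + [("dest-firewall", False), ("dest-mail-server", True),
       ("dest-backup", True), ("recipient-laptop", True)]
)

transient_copies = sum(1 for _, kept in path if not kept)
permanent_copies = sum(1 for _, kept in path if kept)

print(f"{len(path)} hops: {permanent_copies} permanent copies, "
      f"{transient_copies} transient copies")
```

Even this simplified model shows dozens of places a single message touches; a real path adds load balancers, spam filters and content-inspection appliances on top.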

While it’s true that several of the ‘way-stations’ erase the message after sending it on its way, to make room for the next batch of messages, the average amount of traffic held in memory at any moment is quite large. A modern router must have many GB of RAM to process high-volume traffic.

Considering all of the copies, it’s not unlikely for an average e-mail to be copied over 50 times from origin to destination. If even 10% of those copies are held more or less permanently (this is a source of much arguing between legal departments and IT departments – data retention policies are difficult to define), this means that your original 50kB e-mail now requires 250kB of storage. Ok, not much – until you realize that (per the stats published by the Radicati Group in 2010) approximately 294 billion e-mails are sent EACH DAY. Do the math…
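"Do the math" works out roughly as follows; every figure here is an order-of-magnitude assumption taken from the text, not a measurement:

```python
# Rough storage footprint of global e-mail, using the figures above.

email_size_kb = 50           # text plus a small attachment
copies_in_transit = 50       # caches, backups, archives, router queues
retained_fraction = 0.10     # share of copies kept more or less permanently

storage_per_email_kb = email_size_kb * copies_in_transit * retained_fraction
# = 250 kB retained per message, as in the text

emails_per_day = 294e9       # Radicati Group estimate, 2010

total_bytes_per_day = storage_per_email_kb * 1000 * emails_per_day
total_pb_per_day = total_bytes_per_day / 1e15

print(f"{storage_per_email_kb:.0f} kB retained per message")
print(f"~{total_pb_per_day:.0f} PB of retained e-mail data per day")
```

On these assumptions the world retains tens of petabytes of e-mail data every single day – before counting web traffic, media or social posts.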

Now here is where life gets interesting… the e-mail itself is ‘explicit data’, but many other aspects (call it metadata) of the mail, known as ‘implicit data’ are also stored, or at least counted and accumulated.

Unless you fully encrypt your e-mails (becoming more common, but still practiced by only a small fraction of 1% of users), anyone along the way can potentially read or copy your message. While, due to the sheer volume, no one without reason would target an individual message, what is often collected is implicit information: how many mails a day does a user or group of users send? Where do they go? Is there a typical group of recipients? Often this implicit information is fair game even if the explicit data cannot be legally examined.

Many law enforcement agencies are permitted to examine header information (implicit data) without a warrant, while actually ‘reading’ the e-mail would require a search warrant. At a higher level, agencies such as the NSA, CSE and MI5 perform sophisticated analysis using neural networks. They monitor traffic patterns – who is chatting with whom, in what groups, how often – and then collate these traffic patterns against real-world activities, looking for correlation.

All of this just from looking at what happened to a single e-mail as it moved…

Now add in the history of web pages visited, online purchases, visits to social sites, posts to Facebook, Twitter, Pinterest, LinkedIn, etc. etc. Many people feel that they maintain a degree of privacy by using different e-mail addresses or different ‘personalities’ for different activities. In the past, this may have helped, but today little is gained by this attempt at obfuscation – mainly due to a technique known as orthogonal data mining.

Basically this means drilling into data from various ‘viewpoints’ and collating data that at first glance would appear disparate. For instance, different social sites may be visited by what appear to be different users (with different usernames) – until a piece of ‘implicit data’ [the IP address of the client computer] is seen to be the same…
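A minimal sketch of that pivot: instead of grouping session records by username, group them by a shared piece of implicit data (here, the client IP). All records below are invented for illustration:

```python
# Sketch of 'orthogonal data mining': linking apparently unrelated
# usernames through shared implicit data (the client IP address).
# The session records below are invented for illustration.
from collections import defaultdict

session_logs = [
    {"site": "forum-a", "user": "gadget_fan", "ip": "203.0.113.7"},
    {"site": "shop-b",  "user": "j.smith",    "ip": "203.0.113.7"},
    {"site": "forum-a", "user": "anon42",     "ip": "198.51.100.9"},
]

# Pivot on the IP address instead of the username.
by_ip = defaultdict(set)
for rec in session_logs:
    by_ip[rec["ip"]].add(rec["user"])

# Any IP seen with more than one username links those identities.
for ip, users in by_ip.items():
    if len(users) > 1:
        print(f"{ip} links identities: {sorted(users)}")
```

Two ‘different’ people collapse into one the moment any shared facet – IP, cookie, payment card, device fingerprint – appears in both datasets.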

Each web session a user conducts with a web site transmits a lot of implicit data:  time and duration of visit, pages visited, cross-links visited, ip address of the client, e-mail address and other ‘cookie’ information contained on the client computer, etc.

The real power of this kind of data mining comes from combining data from multiple web sites that are visited by a user. One can see that seemingly innocuous searches for medical conditions, coupled with subsequent visits to “Web MD” or other such sites could be assembled into a profile that may transmit more information to an online ad agency than the user may desire.

Or consider the fact that Facebook (to use one example) offers an API (programmatic interface) that developers can use to troll the massive database on people (otherwise known as Facebook) for virtually anything that is posted as ‘public’. Since that privacy setting is the default (unless a user has specifically chosen to restrict it) – and now with the new Facebook Timeline becoming mandatory in the user interface – it is very easy for an automated program to interrogate the Facebook archives for the personal history of anyone with public postings, in chronological order.

Better keep all your stories straight… a prospective employer can now zoom right to your timeline and see whether what you posted personally matches your resume… Like most things, there are two sides to all of this: what propels this profiling is targeted advertising. While some of us may hate the concept, as long as goods and service vendors feel that advertising helps them sell – and targeted ads sell more effectively at lower cost – then we all benefit. These wonderful services that we call online apps are not free. The programmers, the servers, the electricity, the equipment all cost a LOT of money – someone has to pay for it.

Being willing to have some screen real estate used for ads is actually pretty cheap for most users. However, the flip side can be troubling. It is well known that certain governments routinely collect data from Facebook, Twitter and other sites on their citizens – probably not for these same citizens’ good health and peace of mind… Abusive spouses have tracked and injured their mates by using Foursquare and other location services, including GPS monitoring of mobile phones.

In general we collectively need to come to grips with the management of our ‘digital shadows.’ We cannot blindly give de facto ownership of our implicit or explicit data to others. In most cases today, companies take this data without telling the user, give or sell it without notice, and the user has little or no say in the matter.

What only a few years ago was an expensive process (sophisticated data mining) has now become a low cost commodity. With Google’s recent change in privacy policy, they have essentially come out as the world’s largest data mining aggregator. You can read details here, but now any visit to any part of the Google-verse is shared with ALL other bits of that ecosystem. And you can’t opt out. You can limit certain things, but even that is suspect:  in many cases users have found that data that was supposed to be deleted, or marked as private, in fact is not. Some companies (not necessarily Google) have been found to still have photos online years after being specifically served with take-down notices.

And these issues are not just relegated to the PCs on your desk… the proliferation of powerful mobile devices running location-based apps has become an advertiser’s dream… and sometimes a user’s nightmare…

No matter what is said or thought by users at this point, the ‘digital genie’ is long out of the bottle and she’s not going back in… our data, our digital shadow, is out there and is growing every day. The only choice left is for us collectively, as a world culture, to accept this and deal with it. As often is the case, technology outstrips law and social norms in terms of speed of adoption. Most attempts at any sort of unified legal regulation on the ‘internet’ have failed miserably.

But that doesn’t mean this should not happen, but such regulation must be sensible, uniformly enforceable, equitable and fairly applied – with the same sort of due process, ability for appeal and redress, etc. that is available in the ‘real world.’

The first steps toward a more equitable and transparent ‘shadow world’ would be a universal recognition that data about a person belongs to that person, not to whomever collected it. There are innumerable precedents for this in the ‘real world’, where a person’s words, music, art, etc. can be copyrighted and protected from unauthorized use. Of course there are exceptions (the ‘fair use’ policy, legitimate journalistic reporting, photography in public, etc.) but these exceptions are defined, and often refined through judicial process.

One such idea is presented here; whether it will gain traction is uncertain, but at least some thought is being directed towards this important issue.

[shortly after first posting this I came across another article so germane to this topic I am including the link here – another interesting story on data mining and targeted advertising]

Second Screen observations during the Super Bowl

February 6, 2012 · by parasam

This is a short note on some tests I ran yesterday during the Super Bowl with “2nd screen” devices – tablets and smartphones that feed auxiliary content, usually synced in some fashion, alongside the main TV content being watched (“1st screen”).

An excellent longer review of a number of apps is located here (by Chuck Parker). My observations are more generic and cover the issues associated with the infrastructure required to adequately support a 2nd screen experience for millions of simultaneous viewers – as happened yesterday during the game.

First, a brief note on my test setup:  A dedicated wireless router was attached to a 100Mb/s internet connection. 1st screen was HDTV fed from cable. 2nd screen devices were 2 iPhones and 2 iPads, connected to the router via WiFi. iPhone 4 and 4S, iPad 1 & 2. A laptop was also connected to the router, but was only used for checking connection speed and some network statistics.

Speedtest.net was used to verify internet connection speeds at the beginning of the game, and every 15 minutes thereafter. Actual download speeds averaged 87Mb/s over the game duration, upload averaged 4Mb/s. WiFi was first checked for local channel usage, then a static channel was selected on the router that had the least other local traffic. The SSID was not broadcast (to avoid anyone else attempting to log in and potentially affecting throughput – even though security was enabled). Pre-game testing ensured that all mobile devices were reliably connected to the router.

The Speedtest.net Mobile app was installed and used on all mobile devices to verify WiFi speed. Each device reported an average of 7Mb/s download and 4Mb/s upload.

Each iDevice was running iOS v5.0.1; internal memory was either 32 or 64GB.

I tested NFL2011 (even though it supported the 2012 game, the name was not updated…); NBC Sports; CBS Sports; Shazam; PrePlay; and Tapcast. There were of course more, but that was as much multi-tasking as I felt my brain could handle!

I also tested ‘live streaming’ of the game from the NBC and NFL websites. The NFL stream did not work at all: they streamed only Silverlight, which is incompatible with iDevices… The NBC feed worked fine, but was delayed a full minute – which made for an odd comparison with the 1st screen. The delay was obvious when comparing the running clock in the scorebox at the bottom of the screen…

In general, the biggest issue was app instability and (assumed) network congestion. All of the apps experienced some degree of freezing, delays, crashes, etc. The NFL app was the most stable; Shazam and TapCast were the most unreliable. TapCast in particular crashed repeatedly, and even when running would often lose its place, returning the user to the main menu, where one had to re-select the game and start over.

While I have no way of proving this, it felt like the scale of the communications may have affected the performance. It’s one thing to test an app in a lab, it’s another thing entirely to do projected load testing on your backoffice capacity to support 1,000,000 instances of your app trying to interact with your servers simultaneously during Super Bowl…

On one of the iPads I attempted to ‘multi-task’ the apps so I could switch back and forth… NOT. Even on the iPad 2, with more horsepower, this just didn’t work well at all. Most of the apps either crashed outright or got lost; basically it was as if I had just started the app and had to start over. Thread preservation just didn’t work. I don’t know enough about iOS and app development to understand why, but the bottom line is that the current state of a running app was not preserved.

I won’t comment here on the individual features of the apps, other than to say there was a wide range of styles, graphic excellence, etc. On the whole, I was impressed across the board with the NFL app:  it was the most robust, had good, simple and intuitive graphics – I didn’t feel like I needed a legend to understand what was going on.

I must offer a disclaimer:  I am not a big football fan, so certain subtleties of game description may have gone unnoticed, but in another sense I think this made for an objective review by a casual observer.

My summary: this new sector of content consumption (2nd screen) is here to stay, can be compelling, can drive viewership, and has all sorts of advertising possibilities. An interesting commentary on the social aspects of 2nd screen, ads and such can be found here. The infrastructure and the apps themselves need to be more robust, or viewers will get frustrated. On the whole, a good start – and perhaps a bit of ‘trial by fire’ in terms of massive use of 2nd screen apps.

I can’t wait to see how good this gets by next year…

Anonymity, Privacy and Security in the Connected World

February 3, 2012 · by parasam

Anonymity:  the state of lacking individual characteristics, distinction or recognizability.

Privacy:  the quality or state of being apart from observation, freedom from unauthorized intrusion.

Security:  defending the state of a person or property against harm or theft.

The dichotomy of privacy versus social participation is at the root of many recent discussions concerning the internet, with technology often shouldering the blame for perceived faults on both sides. This issue has actually been with us for many thousands of years – it is well documented in ancient Greece (with the Stoics daring to live ‘in public’, sharing their most private issues and actions: probably the long forerunner of Facebook…), continuing up to our current time with the social media phenomenon.

This is a pervasive and important issue that sets apart cultures, practices and personality. At the macro-cultural level we have societies such as North Korea on one side – a largely secretive country where there is little transparency; and on the other side perhaps Sweden or the Netherlands – where a more homogeneous, stable and socialistic culture is rather open.

We have all experienced the dualistic nature of the small village where ‘everyone knows everybody’s business’ as compared to the ‘big city’ where the general feeling of anonymity pervades. There are pros and cons to both sides:  the village can feel smothering, yet there is often a level of support and community that is lacking in the ‘city’.  A large urban center has a degree of privacy and freedom for individual expression – yet can feel cold and uncaring.

We enjoy the benefits of our recent social connectedness – Facebook, Twitter, etc. – yet at the same time fear the invasiveness of highly targeted advertising, online stalking, threats to our younger children on the web, etc. There is really nothing new about this social dilemma on the internet – it’s just a new territory for the same old conundrum. We collectively have to work out the ground rules for this new era.

Just as we have moved on from open caves and tents to houses with locked doors behind gated communities, we have moved our ‘valuables’ into encrypted files on our computers and depend on secure and reliable mechanisms for internet banking and shopping.

The challenge for all of us that seek to adapt to this ‘new world order’ is multi-faceted. We need to understand what our implicit expectations of anonymity, privacy and security are. We also need to know what we can explicitly do to actually align our reality to these expectations, should we care to do so.

Firstly, we should realize that a profound and fundamental paradigm shift has occurred with the widespread adoption of the internet as our ‘collective information cloud.’ Since the birth of the internet approximately 40 years ago, we have seen a gradual expansion of the connectedness and capability of this vehicle for information exchange. It is an exponential growth, both in physical reality and philosophical impact.

Arthur C. Clarke’s observation that “Any sufficiently advanced technology is indistinguishable from magic” has never been more true. Going back thousands of years in philosophy and metaphysics we see the term “akashic records” [a Sanskrit term] used to describe “the compendium of all human knowledge.” Other terminology such as “master library”, “universal supercomputer”, “the Book of Knowledge”, and so on has been used by various groups to describe this assumed interconnected fabric of the sum of human knowledge and experience.

If one were to take an iPad connected to the ‘cloud’ and time travel back even a few hundred years, this would be magic indeed. In fact, you would likely be burned as a witch… people have always resisted change and feared what they don’t understand – weather forecasting, or using a voice recognition program (Siri?) to ask and receive answers from the ‘cloud’, would have seriously freaked out most observers…

Since we humans do seem to handle gradual adaptation, albeit with some resistance and grumbling, we have allowed the ‘internet’ to insidiously invade our daily lives, until most of us only realize how dependent we are when it goes away. Separation of a teenage girl from her iPhone is a near-death experience… and when Blackberry had a network outage, the business costs were in the millions of dollars.

As ubiquitous computing and persistent connectivity become the norm the world over, this dependence on the cloud will grow even more. And this is true everywhere, not just in the USA and Western Europe. Yes, bandwidth, computational horsepower, etc. are far lower in Africa, Latin America and elsewhere – but the use of connectivity, cellphones and other small computational devices has exploded everywhere. The per-capita use of cellphones is higher in Africa than in the United States…

Rose Shuman, an enterprising young woman in Santa Monica, formed Question Box, a non-profit company that uses a simple closed-circuit box with a button, mike and speaker to link rural farmers and others in Africa and India to a central office in larger towns that actually have internet access, thereby extending the ‘cloud’ to even the poorest communities with no direct online connectivity. Many other such ‘low-tech’ extensions of the cloud are popping up every day, serving to more fully interconnect a large portion of humanity.

Now that this has occurred we are faced with the same issues in the cloud that we have here on the ground:  how to manage our expectations of privacy, etc.

Two of the most basic exchanges within any society are requests for information and payment for goods or services. In the ‘good old days’, information requests were performed by reading the newspaper or asking directions at the petrol station; payments were handled by the exchange of cash.

Both of these transactions had the following qualities:  a high level of anonymity, a large degree of privacy, and good security (as long as you didn’t lose your wallet).

Nowadays, every request for information on Google is sold to online advertisers who continually build a detailed dossier on your digital life, reducing your anonymity substantially; you give up a substantial amount of privacy by participating in social sites such as Facebook; and it’s easier than ever to ‘follow the money’, with credit-card or PayPal transactions being reported to central clearing houses.

With massive ‘data mining’ techniques – such as orthogonal comparison, rule induction and neural networks – certain data warehouse firms are able to extract and match facets of data from highly disparate sources and assemble an uncannily accurate composite of any single person’s habits, likes and travels. Coupled with facial recognition algorithms, GPS/WiFi tracking, the re-use of locational information submitted by users and so on, if one has the desire and access it is possible to track a single person on a continual basis, and understand their preferences in food and services, their political affiliation, their sexual, religious and other group preferences, their income, tax status, ownership of homes and vehicles, and so on.
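The composite-building step can be sketched very simply: once identities have been matched across sources (as in the IP-pivot example earlier), the remaining work is merging facets from each source into one dossier. All data below is invented for illustration:

```python
# Sketch of assembling a composite profile from disparate sources
# after identity matching. All fragments below are invented.

profile_fragments = [
    {"source": "search-engine", "likes": ["hiking boots"]},
    {"source": "social-site",   "location": "Portland"},
    {"source": "payments",      "purchases": ["trail map"]},
]

composite = {}
for frag in profile_fragments:
    for key, value in frag.items():
        if key == "source":
            continue              # provenance, not a profile facet
        if isinstance(value, list):
            composite.setdefault(key, []).extend(value)  # merge list facets
        else:
            composite[key] = value                       # keep scalar facets
print(composite)
```

Three unrelated collection points – a search box, a social post, a card swipe – become a single record, which is the dossier the text describes.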

The more that a person participates in social applications, and the more they share on these apps, the less privacy they have. One of the side effects of the cloud is that it never forgets… In ‘real life’ we tend to forget most of what is told to us on a daily basis – a clever information-reduction technique the human brain uses to avoid overload. It’s just not important to remember that Martha told us in passing last week that she stopped at the dry cleaner… but that fact is forever burnt into the cloud’s memory, since we paid for the transaction with our credit card, and while waiting for the shirts to be brought up from the back we were on our phone Googling something – and Google never forgets where you were or what you asked for when you asked…

These ‘digital bread crumbs’ are assembled on a continual basis to build various profiles of you, with the hope that someone will pay for them. And they do.

So… what can a person do? And perhaps more importantly, what does a person want to do – in regards to managing their anonymity, privacy and security?

While one can take a ‘bunker mentality’ approach to reducing one’s exposure to such losses of privacy, this takes considerable time, focus and energy. Obviously, if one chooses not to use the internet, substantial reductions in the potential loss of privacy from online techniques occur. Using cash for every transaction avoids tracking by credit card; not partaking in online shopping increases your security; and so on.

However, even this brute-force approach does not completely remove the threats to your privacy and security:  you still have to get cash from somewhere, either an ATM or the bank – so at least those transactions are still logged. Facial recognition software and omniscient surveillance will note your presence even if you don’t use FourSquare or a cellphone with GPS.

And most of us would find this form of existence terribly inconvenient. What is reasonable then to expect from our participation in the modern world which includes the cloud? How much anonymity is rightfully ours? What level of security and privacy should be afforded every citizen without that person having to take extraordinary precautions?

The answers, of course, are in process. This discussion is part of that – hopefully it will motivate discussion and action that spur the process of reaching a socially acceptable equilibrium of function and personal protection. The law of unintended consequences is very, very powerful in the cloud. Ask any woman who has been stalked, and perhaps injured, by an ex-husband who tracked her via cellphone or some of the other techniques discussed above…

An interesting side note: at virtually every battered women’s center in the US, the very first thing they now do is take her cellphone away and physically remove the battery. It’s the only way to turn it off totally. Sad but true.

There is not going to be a single, simple solution for all of this. The ‘data collection genie’ is so far out of the bottle that it will be impossible, on a practical basis, to rewind this – and in many cases one would not want to. Nothing is free, only alternatively funded. So in order to get the usefulness many of us find in a search engine, a location-based query for goods or services, etc., the “cost” of that service is often borne by targeted advertising. In many cases the user is ok with that.

Perhaps the best solution set will be increased transparency on the use of the data collected. In theory, the fact that the government of Egypt maintains massive datasets on internet users and members of particular social applications is not a problem… but the use that the military police makes of that data can be rather harmful to some of their citizens…

We in the US have already seen efforts made in this direction, with privacy policies being either voluntarily adhered to, or mandated, in many sectors. Just as physical laws of behavior have been socially built and accepted for the common good, so does this need to occur in the cloud.

Rules for parking cars make sense, with fines for parking in areas that obstruct traffic. Breaking into a bank and stealing money will incur punishment – almost universally, anywhere in the world, with relative alignment in the degree of the penalty. Today, even blatant internet crime is highly variable in terms of punishment or penalty. With less than 20% of the world’s 196 countries having any unified set of laws for enforcement of criminal activity on the internet, this is a challenging situation.

Today, the truth is that to ensure any reliable degree of anonymity, privacy and security of one’s self in the cloud you must take proactive steps at an individual level. This requires time, awareness, knowledge and energy. Hopefully this situation will improve, with certain levels of implicit expectations coming to the norm.
