
DI – Disintermediation, 5 years on…

October 12, 2017 · by parasam

I wrote this original article on disintermediation more than 5 years ago (early 2012), and while most of the comments are as true today as then, I wanted to expand a bit now that time and technology have moved on.

The two newest technologies now mature enough to act as major fulcrums of further large-scale disintermediation are AI and blockchain. Both have of course been in development for more than 5 years, but they have made a jump in capability, scale and applicability in the last 1-2 years that is changing the entire landscape. Artificial Intelligence (AI) – or perhaps a better term, “Augmented Intelligence” – is changing forever the man/machine interface, bringing machine learning to aid human endeavors in a manner that will never be untwined. Blockchain technology (originally developed in the arcane mathematical world of cryptography) is the foundation for digital currencies and other transactions of value.

AI

While the popular term is “AI” or Artificial Intelligence, a better description is “Deep Machine Learning”. Essentially the machine (computer, or rather a whole pile of them…) is given a problem to solve, a set of algorithms to use as a methodology, and a dataset for training. After a number of iterations and tunings, the machine usually refines its response such that the ‘problem’ can be reliably solved accurately and repeatedly. The process, as well as a recently presented theory on how the ‘deep neural networks’ of machine learning operate, is discussed in this excellent article.
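To make that loop concrete, here is a minimal sketch in Python – illustrative only, using a toy logistic-regression ‘learner’ on synthetic data as a stand-in for the far larger deep neural networks discussed here:

```python
# Toy version of the train/tune/evaluate loop described above:
# a problem (classify points), an algorithm (logistic regression
# trained by gradient descent), and a dataset for training.
import numpy as np

rng = np.random.default_rng(0)

# Training dataset: 2-D points, label = 1 when x + y > 1
X = rng.uniform(0, 1, size=(500, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.5  # learning rate: one of the "tuning" knobs

for epoch in range(200):             # repeated iterations...
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))     # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)  # gradient of the log-loss
    grad_b = (p - y).mean()
    w -= lr * grad_w                 # ...refine the model
    b -= lr * grad_b

accuracy = (((X @ w + b) > 0) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.1%}")
```

The same three ingredients appear at every scale: a problem (the labels), an algorithm, and a training dataset iterated over until the answers become reliable and repeatable.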

The applications for AI are almost unlimited. Some of the popular and original use cases are human voice recognition and pattern recognition tasks that for many years were thought to be too difficult for computers to perform with a high degree of accuracy. Pattern recognition has now improved to the point where a machine can often outperform a human, and voice recognition is now encapsulated in the Amazon ‘Echo’ device as a home appliance. Many other tasks, particularly ones where the machine assists a human (Augmented Intelligence) by presenting likely possibilities reduced from extremely large and complex datasets, will profoundly change human activity and work. Such examples include medical diagnostics (an AI system can read every journal ever written, compare that against a history taken by a medical diagnostician, and suggest likely scenarios that could include data the medical professional couldn’t possibly have the time to absorb); fact-checking news stories against many wide-ranging sources; performing financial analysis; writing contracts; etc.

It’s easy to see that many current ‘professions’ will likely be disrupted or disintermediated… corporate law, medical research, scientific testing, pharmaceutical drug trials, manufacturing quality control (AI connected to robotics), and so on. The incredible speed and storage capability of modern computational networks provide the foundation for ever-increasing usage of AI at a continually falling price. Already apps for mobile devices can scan thousands of images, suggest keywords, group similar images into collections, etc. [EyeEm Vision].

Another area where AI is utilized is in autonomous vehicles (self-driving cars). Hundreds of inputs from sensors, cameras, etc. are synthesized and analyzed thousands of times per second in order to safely pilot the vehicle. One of the fundamental powers of AI is the continual learning that takes place. The larger the dataset – the more experience of a given kind – the better the machine becomes at optimizing its outputs. For instance, every Tesla car gathers massive amounts of data from every drive the car takes, and continually uploads that data to the servers at the factory. The combined experience of how thousands of vehicles respond to varying road and traffic conditions is learned and then shared (downloaded) to every vehicle. So each car in the entire fleet benefits from everything learned by every car. This is impossible to replicate with individual human drivers.

The potential use cases for this new technology are almost unbounded. Some challenging issues likely can only be solved with advanced machine learning. One of these is the (today) seemingly intractable problem of updating and securing a massive IoT (Internet of Things) network. Due to the very low cost, embedded nature, lack of human interface, etc. that characterize most IoT devices, it’s effectively impossible to “patch” or otherwise update individual sensors or actuators that are discovered to have either functional or security flaws after deployment. By embedding intelligence into the connecting fabric of the network itself that links the IoT devices to nodes or computers that utilize the info, even sub-optimal devices can be ‘corrected’ by the network. Incorrect data can be normalized, and attempts at intrusion or deliberate altering of data can be detected and mitigated.
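As a hedged illustration of that last idea – the network fabric correcting a flawed device it cannot patch – here is a toy Python sketch. The class name and thresholds are invented for the example; a real deployment would use a learned model rather than simple statistics:

```python
# Illustrative sketch: a network-layer "guard" that normalizes
# suspect readings from an unpatchable IoT sensor before passing
# them upstream.
from collections import deque
from statistics import median

class SensorGuard:
    def __init__(self, window=100, tolerance=5.0):
        self.history = deque(maxlen=window)  # recent trusted readings
        self.tolerance = tolerance           # allowed deviation band

    def filter(self, reading: float) -> float:
        if len(self.history) < 10:           # warm-up: accept as-is
            self.history.append(reading)
            return reading
        center = median(self.history)
        if abs(reading - center) > self.tolerance:
            # Out-of-band value: likely a faulty or tampered sensor.
            # Substitute the recent median and log for investigation.
            print(f"anomaly: {reading} replaced with {center}")
            return center
        self.history.append(reading)
        return reading

guard = SensorGuard()
for value in [20.1, 20.3, 19.9, 20.2] * 5 + [87.0, 20.0]:
    clean = guard.filter(value)   # 87.0 is caught and normalized
```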

Blockchain

The blockchain technology that is often discussed today, usually in the same sentence as Bitcoin or Ethereum, is a foundational platform that allows secure and traceable transactions of value. Essentially each set of transactions is a “block”, and these are distributed widely in an encrypted format for redundancy and security. These transactions are “chained” together, forming the “blockchain”. Since the ‘public ledger’ of these groups of transactions (the blockchain) is practically impossible to alter, the integrity of every transaction is ensured. This article explains in more detail. While the initial focus of blockchain technology has been on so-called ‘cryptocurrencies’, there are many other uses for this secure transactional technology. By using existing internet connectivity, items of value can be securely distributed practically anywhere, to anyone.
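A minimal sketch of that chaining mechanism in Python (illustrative only – real systems add consensus such as proof-of-work, digital signatures, and wide peer-to-peer replication):

```python
# Minimal sketch of the "chain" in blockchain: each block commits to
# the previous block's hash, so altering any past transaction breaks
# every later link.
import hashlib
import json
import time

def block_hash(block):
    payload = {k: block[k] for k in ("timestamp", "transactions", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(["alice pays bob 5"], chain[-1]["hash"]))
chain.append(make_block(["bob pays carol 2"], chain[-1]["hash"]))

# Verification: recompute each hash and check every link.
for prev, block in zip(chain, chain[1:]):
    assert block["prev_hash"] == block_hash(prev) == prev["hash"]
```

Because each block commits to the hash of the one before it, altering any historical transaction changes that block’s hash and visibly breaks every later link – which is what makes the distributed ledger so hard to falsify.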

One of the most obvious instances of transfer of items of value over the internet is intellectual property: i.e. artistic works such as books, images, movies, etc. Today the wide scale distribution of all of these creative works is handled by a few ‘middlemen’ such as Amazon, iTunes, etc. This introduces two major forms of restriction: the physical bottleneck of client-server networking, where every consumer must pull from a central controlled repository; and the financial bottleneck of unitary control over distribution, with the associated profits and added expense to the consumer.

Even before blockchain, various artists have been exploring making more direct connections with their consumers, taking more control over the distribution of their art, and changing the marketing process and value chain. Interestingly the most successful (particularly in the world of music) are all women: Taylor Swift, Beyoncé, Lady Gaga. Each is now marketing on a direct-to-fan basis via social media, with followings of millions of consumers. A natural next step will be direct delivery of content to these same users via blockchain – which will have an even larger effect on the music industry than iTunes ever did.

SingularDTV is attempting the first-ever feature film to be both funded and distributed entirely on a blockchain platform. The world of decentralized distribution is upon us, and will forever change the landscape of intellectual property distribution and monetization. The full effects of this are deep and wide-ranging, and would occupy an entire post… (maybe soon).

In summation, these two notable technologies will continue the democratization of data, originally begun with the printing press, and allow even more users to access information, entertainment and items of value without the constraints of a narrow and inflexible distribution network controlled by a few.

A Historical Moment: The Sylmar Earthquake of 1971 (Los Angeles, CA)

August 14, 2016 · by parasam

A little over 45-1/2 years ago, at a few seconds past 6:00AM on Feb. 9, 1971, I was jolted out of bed by a massive earthquake in Los Angeles. Or more accurately, the bed moved so far sideways that I fell on the floor… Perhaps a good thing, as the bookshelves over my bed promptly dumped all the books, and shelves, onto the bed I had recently occupied. Other than the Kern County earthquake in 1952, this was the first major quake in California since the calamitous 1906 disaster in San Francisco. Although I went on to experience two more severe earthquakes in California (Loma Prieta / San Francisco in 1989; and Northridge / Los Angeles in 1994), this was the first in my lifetime. As a high school senior, already accepted to an engineering college where I would study physics – including geophysics (the study of earthquakes, among other things) – I knew instantly what was happening. The force and sound still amazed me: they were so much greater than I could have imagined.

At 6.6 on the Richter Scale, this was a massive, but not apocalyptic, event. The 1906 quake measured 7.8, the later Loma Prieta was 7.1 and the Northridge was 6.7 – however the ‘shaking index’ of this quake on the Mercalli Intensity Scale (a measure of the actual movement perceived and damage caused by an earthquake) was “XI Extreme”, only one step from the end of the scale, which is labelled “Total Destruction of Everything”. In comparison, both the Loma Prieta (1989) and Northridge (1994) quakes measured “IX Violent” on the Mercalli Scale, two steps below this quake (the Sylmar Earthquake, 1971). The historical San Francisco earthquake of 1906 measured the same (XI Extreme) in four locations just to the north of San Francisco, but the city itself only felt “X Extreme” shaking intensity on the Mercalli Scale. Remember that most of the damage in the SF quake was from the subsequent fires, not the earthquake itself.

Bottom line: the 1971 Sylmar quake in Los Angeles produced the most destructive power of any earthquake in California since the Fort Tejon quake in 1857 (Richter 7.9). The quake technically lasted for 12 seconds – it felt like a lot more than that! – and caused $553 million in damage [in 1971 dollars; that would be about $3.28 billion in 2016 dollars]. The recently completed freeway interchange in the north San Fernando Valley was destroyed, and took 2 years to rebuild – only to collapse again in the 1994 Northridge quake… seems the structural engineers keep learning after the fact…

Unless you have lived through a massive earthquake such as this, you simply cannot grasp the intensity of such an event. Words, even pictures, just fail. The noise is beyond incredible. The takeoff roll of a 747 aircraft is a whisper in comparison; the sight of the houses across the street rising and falling as if on a wave 20 feet high is beyond comprehension. I will never take the word ‘stability’ for granted again. Ever. We take for granted that the earth under our feet is a constant. It doesn’t move. Or it’s not supposed to… the disorientation is extreme.

As soon as I had determined that my family was safe, and our home, although damaged, was not in immediate danger (and I had turned off gas and electricity), I got in my truck and headed off to where I had heard on the radio the epicenter of damage was: the northern San Fernando Valley. The quake occurred at 6AM; I arrived at the destroyed freeway interchange (for those that know LA, the I-5/Hiway 14 interchange) at about 10AM, when the below images were taken. No police, fire or other emergency personnel had arrived yet. It was surreal – a few other curious humans like myself wandering around, and absolute quiet. One of the busiest freeways in Los Angeles was empty. The only sound was an occasional crow. The real major calamity (the collapse of the Olive View and Veterans Hospitals, which ended up with a death toll of 62) was several miles to the south of my location. There were cracks in the ground several feet wide and many feet deep. The Sylmar Converter Station (a major component of the LA basin electrical power grid) was totaled, with large transformers lying helter-skelter on their sides. I was reminded of H. G. Wells’ “War of the Worlds”, with a strange and previously unknown landscape in front of me.

Although I had already been shooting images for almost 10 years (starting with a Kodak Brownie), the pictures below were probably my first real entry into photojournalism and street photography. Taken with a plastic (both lens and body) Kodak Instamatic 100 that exposed a proprietary 26mm-square film (Kodacolor-X, ASA 64), the resulting prints are not of the best quality. While I may still discover the negatives someday on a long-forgotten shelf, all I have at present are the prints from 1971. I’ve scanned, restored and processed them to recover as much of the original integrity as possible, but there is no retouching.

I’ve also included scans of some newspapers in LA from the first days after the quake (found those folded up and stored – reading some of the ads from that period was almost as interesting as the major stories…)

Destroyed overpass and roadway on the I-5.

Severe damage to McMahan’s Furniture in north San Fernando.

Roadway torn asunder by the force of the earthquake.

Fallen transformer at the Sylmar Power station.

Damaged office building, north San Fernando.

Lateral deformation of the ground near the Sylmar Power station, close to the epicenter of the earthquake.

Fallen roadway on the I-5.

Observers looking at the earthquake damage to the I-5 a few hours after the initial event.

Overpass on the I-5 / Hiway 14 interchange showing separation and subsiding of the roadway.

Massive split in the roadway of the I-5, looking north.

Destroyed overpass on the I-5 freeway.

Brick walls destroyed in suburban San Fernando Valley.

The I-5 / Hiway 14 interchange was still in the final stages of construction when the earthquake hit. This image is deceptive as most of the damage was at the top of the frame. However the roadway that leads in from the lower right is split just after the overpass (detail in another image).

Fallen overpass on the I-5 freeway. A few seconds after this image was taken a major aftershock occurred. I ran really, really fast from under the remaining bridge elements…

I-5 / Hiway 14 freeway interchange damage

LA Times, Feb 10 1971, page 1

The Valley News, Feb 9 1971, page 1

The Valley News, Feb 9 1971, page 2

Kodak Instamatic 100, released in 1963 with a price of $16 at the time.

Where Did My Images Go? [the challenge of long-term preservation of digital images]

August 13, 2016 · by parasam
Littered Memories – Photos in the Gutter (© 2016 Paul Watson, used with permission)

Image Preservation – The Early Days

After viewing the above image from fellow streetphotographer Paul Watson, I wanted to update an issue I’ve addressed previously: the major challenge that digital storage presents in terms of long-term archival endurance and accessibility. Back in my analog days, when still photography was a smelly endeavor in the darkroom for both developing and printing, I slowly learned about careful washing and fixing of negatives, how to make ‘museum’ archival prints (B&W), and the intricacies of dye-transfer color printing (at the time the only color print technology that offered substantial lifetimes). Prints still needed carefully restricted environments for both display and storage, but if all was done properly, a lifetime of 100 years could be expected for monochrome prints and even longer for carefully preserved negatives. Color negatives and prints were much more fragile, particularly color positive film. The emulsions were unstable, and many of the early Ektachrome slides (and motion picture films) faded rapidly after only a decade or so. A well-preserved dye-transfer print could be expected to last for almost 50 years if stored in the dark.

I served for a number of years as a consultant to the Los Angeles County Museum of Art, advising them on photographic archival practices, particularly relating to motion picture films. For many years the Bing Theatre presented a fantastic set of screenings that offered a rare tapestry of great movies from the past – and helped many current directors and others in the industry become better at their craft. In particular, Ron Haver (the film historian, preservationist and LACMA director with whom I worked during that time) was instrumental in supervising the restoration, screening and preservation of many films that would now be in the dust bin of history without his efforts. I learned much from him, and the principles remain with me to this day, even in a digital world he never experienced.

One project in particular was interesting: bringing the projection room (and associated film storage facilities) up to Los Angeles County Fire Code so we could store and screen early nitrate films from the 1920s. [For those that don’t know, nitrate film is highly flammable, and once on fire will quite happily burn under water until all the film is consumed. It makes its own oxygen while burning…] Fire departments were not great fans of this stuff… Due to both the large (and expensive) challenges in projecting this type of film, as well as the continual degradation of the film stock, almost all remaining nitrate film has since been digitally scanned for preservation and safety. I also designed the telecine transfer bay for the only approved nitrate scanning facility in Los Angeles at that time.

What this all underscored was the considerable effort, expense and planning required for long-term image preservation. Now, while we may think that once digitized, all our image preservation problems are over – the exact opposite is true! We have ample evidence (glass plate negatives from the 1880s, B&W sheet film negatives from the early 1900s) that properly stored monochrome film can easily last 100 years or more, and is as readable today as the day the film was exposed, with no extra knowledge or specialized machinery. B&W movie film is just as stable, as long as it is printed onto a safety film base. Due to the inherent fading of so many early color emulsions, the only sure method for preservation (in the analog era) was to ‘color separate’ the negative film and print the three layers (cyan, magenta and yellow) onto three individual B&W films – the so-called “Technicolor 3-stripe process”.

Digital Image Preservation

The problem with digital image preservation is not the inherent technology of digital conversion – done well, that can yield a perfect reproduction of the original after a theoretically infinite time period. The challenge is how we store, read and write the “0s and 1s” that make up the digital image. Our computer storage and processing capability has moved so quickly over the last 40 years that almost all digital storage from more than 25 years ago is somewhere between difficult and impossible to recover today. This problem is growing worse, not better, with every succeeding year…

IBM 305 RAMAC Disk System 1956: IBM ships the first hard drive in the RAMAC 305 system. The drive holds 5MB of data at $10,000 a megabyte.

This is a hard drive. It holds less than 0.01% of the data of the smallest iPhone today…

One of the earliest hard drives available for microcomputers, c.1980. The cost then was $350/MB, today’s cost (based on 1TB hard drive) is $0.00004/MB or a factor of 8,750,000 times cheaper.

Paper tape digital storage as used by DEC PDP-11 minicomputers in 1975.

Paper punch card, a standard for data entry in the 1970s.

Floppy disks: (from left) 8in; 5-1/4″; 3-1/2″. The standard data storage format for microcomputers in the 1980s.

As can be seen from the above examples, digital storage has changed remarkably over the last few decades. Even though today we look at multi-terabyte hard drives and SSDs (Solid State Drives) as ‘cutting edge’, will we chuckle 20 years from now when we look back at something as archaic as spinning disks or NAND flash memory? With quantum memory, holographic storage and other technologies already showing promise in the labs, it’s highly likely that even the 60TB SSDs recently announced will take their place alongside 8-inch floppy disks in a decade or so…

And these issues are actually the least of the problem (the physical storage medium). Yes, if you put your ‘digital negatives’ on a floppy disk 15 years ago and now want to read them you have a challenge at hand… but with patience and some time on eBay you could probably assemble the appropriate hardware to retrieve the data into a modern computer. The bigger issue is that of the data format: both of the drives themselves and the actual image files. The file systems – the method that was used to catalog and find the individual images stored on whatever kind of physical storage device, whether ancient hard drive or floppy disk – have changed rapidly over the years. Most early file systems are no longer supported by current OS (Operating Systems), so hooking up an old drive to a modern computer won’t work.

Even if one could find a translator from an older file system to a current one (there is very limited capability in this regard; many older file systems can literally only be read by a computer as old as the drive), that doesn’t solve the next issue: the image format itself. The issue of ‘backwards compatibility’ is one of the great Achilles heels of the entire IT industry. The huge push by all vendors to keep their users relentlessly updating to the latest software, firmware and hardware is largely to avoid having to support older versions of hardware and software. This is not a totally self-serving issue (although there are significant costs and time involved in doing so) – frequently certain changes in technology just can’t support an older paradigm any longer. The earliest versions of Photoshop files, PICT, etc. are not easily opened with current applications. Anyone remember Corel Draw?? Even ‘common interchange’ formats such as TIFF and JPEG have evolved, and not every version is supported by every current image processing application.

The more proprietary and specific the image format, the more fragile it is in terms of archival longevity. For instance, it may seem that the best archival format would be the Camera Raw format – essentially the full original capture directly from the camera. File types such as NEF, CR2 and other raw variants are typical. However, each of these is proprietary and typically has about a 5-year life span in terms of active application support by the vendor. As camera models keep changing – more or less on a yearly cycle – the Raw formats change as well. Third-party vendors, such as Adobe with Photoshop, are under no obligation to support earlier Raw formats forever… and as previously discussed, the challenge of maintaining backwards compatibility grows more complex with each passing year. There will always come a time when such formats will no longer be supported by currently active image retrieval, viewing or processing software.

Challenges of Long-Term Digital Image Preservation

Therefore two major challenges must be resolved in order to achieve long term storage and future accessibility of digital images. The first is the physical storage medium itself, whether that is tape (such as LTO-6), hard disk, SSD, optical, etc. The second is the actual image format. Both must be usable and able to transfer images back to the operating system, device and software that is current at the time of retrieval in order for the entire exercise of archival digital storage to be successful. Unfortunately, this is highly problematic at this time. As the pace of technological advance is exponentially increasing, the continual challenge of obsolescence becomes greater every year.

Currently there is no perfect answer for this dilemma – the only solution is proactivity on the part of the user. One must accommodate the continuing obsolescence of physical storage media, file systems, operating systems and file formats by moving the image files, on a regular and continual basis, to current versions of all of the above. Typically this is an exercise that must be repeated every five years – at current rates of technological development. For uncompressed images, other than the cost of the move/update there is no impact on the digital image – that is one of the plus sides of digital imagery. However, many images (almost all, if you are anything other than a professional photographer or filmmaker) are stored in a compressed format (JPG, TIFF-LZW/ZIP, MPG, MOV, WMV, etc.). These images/movies will experience a small degradation in quality each time they are re-encoded into a new format. The amount and type of artifacts introduced are highly variable, depending on the level of compression and many other factors. The bottom line is that after a number of re-encoding cycles of a compressed file (say 10) it is quite likely that a visible difference from the original file can be seen.

Therefore, particularly for compressed files, a balance must be struck between updating often enough to avoid technical obsolescence and making the fewest number of copies over time in order to avoid image degradation. [It should be noted that potential image degradation will typically only be due to changing/updating the image file format, not moving a bit-perfect copy from one type of storage medium to another].
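This generation loss is easy to demonstrate for yourself. A hedged sketch, assuming the Pillow imaging library and a hypothetical input file named original.jpg:

```python
# Illustrative sketch of 'generation loss': re-encoding a JPEG ten
# times accumulates compression artifacts, while a bit-perfect file
# copy never does. Assumes the Pillow library; "original.jpg" is a
# hypothetical input file.
import io
from PIL import Image

img = Image.open("original.jpg").convert("RGB")

for generation in range(10):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)  # each save re-compresses
    buf.seek(0)
    img = Image.open(buf).convert("RGB")      # decode the new generation

img.save("generation_10.jpg", quality=85)
# Compare original.jpg with generation_10.jpg: artifacts concentrate
# around sharp edges and smooth gradients.
```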

This process, while a bit tedious, can be automated with scripts or other similar tools, and for the casual photographer or filmmaker will not be too arduous if undertaken every five years or so. It’s another matter entirely for professionals with large libraries, or for museums, archives and anyone else with thousands or millions of image files. A lot of effort, research and thought has been applied to this problem by these professionals, as this is a large cost of both time and money – and no solution other than what’s been described above has been discovered to date. Some useful practices have been developed, both to preserve the integrity of the original images as well as reduce the time and complexity of the upgrade process.

Methods for Successful Digital Image Archiving

A few of those processes are shared below to serve as a guide for those that are interested. Further search will yield a large amount of sites and information that addresses this challenge in detail.

  • The most important aspect of ensuring a long-term archival process that will result in the ability to retrieve your images in the future is planning. Know what you want, and how much effort you are willing to put in to achieve that.
  • While this may be a significant undertaking for professionals with very large libraries, even a few simple steps will benefit the casual user and can protect family albums for decades.
  • In addition to the steps discussed above (updating storage media, OS and file systems, and image formats) another very important aspect is “Where do I store the backup media?” Making just one copy and having it on the hard drive of your computer is not sufficient. (Think about fire, theft, complete breakdown of the computer, etc.)
    • The current ‘best practices’ recommendation is the “3-2-1” approach: Make 3 copies of the archival backup. Store in at least 2 different locations. Place at least 1 copy off-site. A simple but practical example (for a home user) would be: one copy of your image library on your computer. A 2nd copy on a backup drive that is only used for archival image storage. A 3rd copy either on another hard drive that is stored in a vault environment (fireproof data storage or equivalent) or cloud storage.
    • A note on cloud storage: while this can be convenient, be sure to check the fine print on liability, access, etc. of the cloud provider. This solution is typically feasible for up to a few terabytes; beyond that the cost can become significant, particularly when you consider storage for 10-20 years. Also, will the cloud provider be around in 20 years? What insurance do they provide in terms of buyout, bankruptcy, etc.? While storage media and file systems are not an issue with cloud storage (it is incumbent on the cloud provider to keep those updated), you are still personally responsible for the image format issue: the cloud vendor is only storing a set of binary files, and cannot guarantee that these files will be readable in 20 years.
    • Unless you have a fairly small image library, current optical media (DVD, etc.) is impractical: even double-sided DVDs only hold about 8GB of formatted data. In addition, as one would need to burn these DVDs in your computer, the longevity of ‘burned’ DVDs is not great (compared to the pressed DVDs you purchase when you buy a movie). With DVD usage falling off noticeably, this is most likely not a good long-term archival format.
    • The best current solution for off-premise archival storage is to physically store external hard drives (or SSDs) with a well known data vaulting vendor (Iron Mountain is one example). The cost is low, and since you only need access every 5 years or so the extra cost for retrieval and re-storage (after updating the storage media) is acceptable even for the casual user.
  • Another vitally important aspect of image preservation is metadata. This is the information about the images. If you don’t know what you have, then future retrieval can be difficult and frustrating. In addition to the very basic metadata (file name, simple description, and a master catalog of all your images) it is highly desirable to put in place a metadata schema that can store keywords and a multitude of other information about the images. This can be invaluable to yourself or others who may want to access these images decades in the future. A full discussion of image metadata is beyond the scope of this post, but there is a wealth of information available. One notable challenge is that while the most basic (and therefore future-proof) still image formats in use today [JPG and TIFF] can carry embedded metadata (EXIF/IPTC/XMP tags), not every application reads or preserves those tags consistently, so important metadata should also be stored externally and cross-referenced somehow. Photoshop files, on the other hand, store rich metadata and the image within the same file – but as discussed above this is not the best format for archival storage. There are techniques to cross-reference information to images: from purpose-built archival image software to a simple spreadsheet that uses the filename of the image as a key to the metadata.
  • An important reminder: the whole purpose of an archival exercise is to be able to recover the images at a future date. So test this. Don’t just assume. After putting it all in place, pull up some images from your local offline storage every 3-6 months and see that everything works. Pull one of your archival drives from off-site storage once a year and test it to be sure you can still read everything. Set up reminders in your calendar – it’s so easy to forget until you need a set of images that was accidentally deleted from your computer, and then find out your backup did not work as expected. (A minimal integrity-check script is sketched after this list.)
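For the integrity check mentioned in the last bullet, here is a hedged sketch using only the Python standard library; the file names and paths are hypothetical examples:

```python
# Sketch of an archive integrity check: record a SHA-256 manifest
# when an archive copy is made, then re-run it against any copy
# (local, vaulted, or cloud-restored) to prove the bits survived.
import hashlib
import json
import sys
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1MB chunks
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str) -> dict:
    base = Path(root)
    return {str(p.relative_to(base)): sha256(p)
            for p in sorted(base.rglob("*")) if p.is_file()}

def verify(root: str, manifest_file: str) -> None:
    manifest = json.loads(Path(manifest_file).read_text())
    current = build_manifest(root)
    bad = [name for name, digest in manifest.items()
           if current.get(name) != digest]
    for name in bad:
        print("FAIL:", name)   # missing or corrupted file
    print(f"{len(manifest) - len(bad)} of {len(manifest)} files verified")

# Usage (hypothetical paths):
#   python archive_check.py create /mnt/archive manifest.json
#   python archive_check.py verify /mnt/archive manifest.json
if __name__ == "__main__":
    mode, root, mfile = sys.argv[1], sys.argv[2], sys.argv[3]
    if mode == "create":
        Path(mfile).write_text(json.dumps(build_manifest(root), indent=2))
    else:
        verify(root, mfile)
```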

A final note:  if you look at entities that store valuable images as their sole activity (Library of Congress, The National Archives, etc.) you will find [for still images] that the two most popular image formats are low-compression JPG and uncompressed TIFF. It’s a good place to start…

 

An Interview with Dr. Vivienne Ming: Digital Disruptor, Scientist, Educator, AI Wizard…

June 27, 2016 · by parasam

During the recent Consumer Goods Forum global summit here in Cape Town, I had the opportunity to briefly chat with Vivienne about some of the issues confronting the digital disruption of this industry sector. [The original transcript has been edited for clarity and space.]

Named one of 10 Women to Watch in Tech in 2013 by Inc. Magazine, Vivienne Ming is a theoretical neuroscientist, technologist and entrepreneur. She co-founded Socos, where machine learning and cognitive neuroscience combine to maximize students’ life outcomes. Vivienne is a visiting scholar at UC Berkeley’s Redwood Center for Theoretical Neuroscience, where she pursues her research in neuroprosthetics. In her free time, Vivienne has developed a predictive model of diabetes to better manage the glucose levels of her diabetic son and systems to predict manic episodes in bipolar sufferers. She sits on the boards of StartOut, The Palm Center, Emozia, and the Bay Area Rainbow Daycamp, and is an advisor to Credit Suisse, Cornerstone Capital, and BayesImpact. Dr. Ming also speaks frequently on issues of LGBT inclusion and gender in technology. Vivienne lives in Berkeley, CA, with her wife (and co-founder) and their two children.

Every once in a while I have the opportunity to discuss wide-ranging topics with an intellect that stimulates, is passionate and really cares about the bigger picture. Those opportunities are rarer than one would think. Although set in a somewhat unexpected venue (the elite innards of consumer capitalism), her observations on the inescapable disruption brought by the new wave of modern technologies are prescient and thoughtful. – Ed

Ed: In a continent where there is a large focus on putting people to work, how do you see the challenges and disruptions resulting from AI, robotics, IoT, VR and other technologies playing out? These technologies, as did other disruptive technologies before them, tend to replace human workers with machine processes.

Vivienne:  There is almost no domain in which artificial intelligence (AI), machine learning and automation will not have a profound and positive impact. Medicine, farming, transportation, etc. will all benefit. There will be a huge impact on human potential, and human work will change. I think this is inevitable, that we are well on the way to this AI-enabled future. The economic incentives to push in this direction are far too strong. But we need social institutions to keep pace with this change.

We need to be building people in as sophisticated a way as we are building our technology infrastructure. There is today a large and significant business sector in educational technology: Microsoft, Apple, Google, Facebook all have serious interest. But this current focus really is just an amplifier for existing paradigms, helping hyper-competitive moms over-prep their kids for standardized testing… which predicts nothing at all about anyone’s actual life outcome.

Whether you get into Princeton vs Brown, or don’t get into MIT, is not really going to affect your life track all that much. Whereas the transformation that comes from making even a more modest, but broad-scale difference in lives, is huge. Let’s take right here: South Africa is probably one of the perfect examples, maybe along with India, of a region in which to make a difference.

Because of the history, we have a society where there is a starting point of a pretty dramatic inequality of education and preparedness. But, you have an infrastructure. That same history did leave you with a strong infrastructure. Change a child’s life in Denmark, for example, and you probably haven’t made that enormous an impact. You do it in Haiti and the best you might hope for is they might move to somewhere that they might live out a fruitful and more productive life. While it may sound judgmental on Haiti it’s just a fact right now: there’s only so much that one can achieve there as there is so little infrastructure. But you do that here in South Africa, in the townships, or in the slums of Mumbai – and you can have a profound difference in that person’s life. This is because there is an infrastructure to capture that life and do something with it.

In terms of educational technology, doubling down on traditional approaches with AI, bringing computational aids into the classroom, using algorithms to better prepare students for testing… we have not found, either in the literature or in our own research with a 122-million-person database, that this makes any difference to one’s life outcome.

People that do this, that go to great colleges, do often have productive and creative lives… but not for those reasons. All of their life results are outgrowths of latent qualities. General cognitive ability, our metacognition and problem solving, our creativity, our emotion regulation, our mindset: these are the things that we find are actually predictive of one’s life outcome.

These qualities are hard to teach. We tend to absorb and learn them from a lifetime of modeling others, of human observation and interaction. So I tend to take a very human perspective on technology. What is the minimum, as someone that builds technology – AI in particular – that can deliver those qualities into people’s lives? If we want to really be effective with this technology, then it must be simple. Simple to deploy and simple to use. Currently, a text-based system is appropriate. Today we use SMS – although it’s a hugely regressive system that is expensive. To reach 1 million kids each year via SMS costs about $5 million per year. To reach that same number of kids using WhatsApp or a similar platform costs about $40 per year. The difference is obscene… The one technology (SMS) that has the farthest reach around the world is severely dis-incentivized… but we’re doing it anyway!

When I’m building a fancy AI, there’s no reason to pair that with an elaborate user interface, there’s no reason I need to force you to buy our testing solution that will collect tons of data, etc. It can, and should, be the simplest interface possible.

Let me give an example with the following narrative:  I pick up my daughter each day after school (she’s five) and she immediately starts sharing with me via pictures. That’s how she interacts. She sits with her friends and draws pictures. The first thing she does is show me what she’s drawn that day. I snap a photo to share with her grandmother, and at the same time I cc: MUSE (the AI system we’ve built). The image comes to us, our deep neural network starts analyzing it.

Then I go pick up my son. Much like me, he likes to talk. He loves to tell stories. We can’t upload audio via SMS (prohibitively expensive) but it’s easily done with an app. Hit a button, record 30 sec of his story, or grab a few minutes of us talking to each other. Again, that is captured by the deep neural networks within MUSE and analyzed. Some of this AI could be done with ‘off the shelf’ applications such as those available from Google, IBM, etc. Still very sophisticated software, but it’s out there.

The problem with this method is that data about a little kid is now outside our protected system. That’s a problem. In some countries, India or China for example, parents are so desperate for their children to improve in education that they will do almost anything, but in the US everyone’s suspicious. The only sure-fire way to kill a company is to do something bad with data in health or education. So MUSE is entirely self-contained.

Once we have the data and the analysis, we combine that with a system that asks a single question each day. The text-based question and answer is the only requirement (from a participating parent using our system); the image and audio are optional. What the system is actually doing is predicting every parent’s answer to these thousands of questions, every day. This is a machine learning technology known as ‘active learning’. We came up with our own variant, and when it does its predictions, it then says, “If I knew the true answer to one question, which one would provide the biggest information gain?”

This can be interpreted in different ways: Shannon information (for the very wonky people reading this), or which one question will cause the most other questions to be generated. So we ask that one question. The system can select the single most informative question to ask that day. We then do a very immodest thing: predicting these kids’ life outcomes. But that is problematic. Not only our research, but others as well, have shown almost unequivocally that sharing this information produces a negative outcome. Turns out that the best thing, we believe, that can be done with this data is to use it to ask a predictive question for the advancement of the child’s learning.
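[Ed. note: for the technically curious, below is a minimal sketch in Python of the kind of selection step Vivienne describes – ask the question whose predicted answer the model is least certain about, i.e. the one with the largest expected information gain. The questions and probabilities are invented stand-ins; MUSE’s actual variant is proprietary.]

```python
# Toy "active learning" selection: given a model's predicted
# probability that a parent answers "yes" to each candidate question,
# ask the one with maximum entropy (the model's most uncertain guess,
# hence the biggest expected information gain).
import math

def entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Hypothetical predictions from the learned model:
predicted = {
    "Did you read together today?": 0.95,
    "Did your child draw a picture?": 0.50,   # model is unsure -> ask this
    "Did your child tell you a story?": 0.80,
}

question = max(predicted, key=lambda q: entropy(predicted[q]))
print("ask today:", question)
```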

That can lead to a positive transformation of potential outcome. Prior systems have shown that just the daily reminder to a parent to perform a specific activity with their child is beneficial – in our case, with all this data and analysis, our system can ask the one question that we predict will be the most valuable thing for your child on that day.

Now that prediction incorporates more than you might think. The first most important thing: that the parent actually does it. That’s easy to determine: either they did or they didn’t. So we need methods to engage with the parent. The second thing is to attempt to determine how effective our predictive model is for that child. Remember, we’re not literally predicting how long a child will live – we’re predicting how ‘gritty’ they will be (as in the research from Angela Duckworth), do they have more of a growth or fixed mindset (Carol Dweck and others), what will be their working memory span, etc.

Turns out there are dozens and dozens of constructs that people have directly shown are strongly predictive of life outcomes. Our aim is to maximize these qualities in the kids that make use of our system. In terms of our predictive models, think of this more in an actuarial sense: on average, given everything we know about a particular kid, he is more likely to live 5 years longer, she is more likely to go 2 years further in their education, etc. The important thing, our goal, is for none of it to come true, no matter how positive the prediction is. We believe it can always be more. Everyone can be amazing. This may sound like a line, but quite frankly if you don’t believe that you shouldn’t be an educator.

Unfortunately, the education system is full of people that think people can’t change and that it’s not worth the effort… what would South Africa be if this level of positivity was embedded in the education system? I’m sure that it’s not lost on the population here (for instance all the young [mostly African] people serving this event) what their opportunities could be if they could really join the creative class. Unfortunately there are political and policy issues that come into play here, it’s not just a machine learning issue. But I can say the difference would be dramatic.

We did the following analysis in the United States:  if we simply took the kind of things we do with MUSE, and were able to scale that to every kid in the USA (50 million) and had started this 25 years ago, what would the net effect on the US economy be? We didn’t do broad strokes and wishful thinking – we modeled based on actual research (do this little proactive change when kids are young, then observe that in 25 years they are earning 25% more and have better health outcomes, etc.). We took that actual research, and modeled it out, region by region in the USA; demographics, everything. We found that after 25 years we would have added somewhere between $1.3-1.8 trillion to the US economy. That’s huge.

The challenge is how do you scale that out to really large numbers of kids, particularly in India, China, Africa, etc.? That’s where technology comes in.

Who’s invested in a kid’s life? We use the generic term ‘caregiver’ – because in many kid’s lives there isn’t a parent, or only one parent, a grandparent, a foster parent, etc. At any given moment in a kid’s life, hopefully there are at least two pivotal people: a caregiver and a teacher. Instead of trying to replace them with an AI, what if we empower them? What if we gave them a superpower? That’s the spirit of what we’re trying to do.

MUSE comprises eight completely independent, highly sophisticated machine learning systems, along with integration, data, analytics and interface layers. These systems analyze the images and the audio, produce the questions, and make the predictions. We use what’s termed a ‘deep reinforcement learning model’ – a very similar concept, at least at a high level, to Google’s “AlphaGo” AI system. This type of system can learn to play highly complex games (Go, video games, etc.) – a fundamentally different type of intelligence than IBM’s older chess-playing programs. This new type of AI actually learns how to play the game itself, as opposed to selecting procedures that have already been programmed into it.

With MUSE, essentially we are designing activities for parents to do with their children that are unique to that child and designed to provide maximum learning stimulus at that point in time. We are also designing a similar structure for teachers to do with their students, for students to do with themselves as they get older. In a similar fashion, we are involved in the workplace: the same system that can help parents get the most out of their kids, that can help teachers get the most out of their students, can also help managers get the most out of their employees. I’ve done a lot of work in labor economics, talent management, etc. – managing people is hard and most aren’t good at it.

Our approach tends to be, “Do a good job and I’ll give you a bonus, do a bad job and I’ll fire you.” We try to be more human than that, but – certainly if you’ve ever been involved in sales – that’s the game! In the TED talk that was just published we showed that methodology was actually a negative predictor of outcomes. In the workplace, your best people are not incentivized by these archaic methodologies, but are rather endogenously motivated. In fact, the research shows that the more you artificially incentivize workers, the more poorly they perform, at least in the medium to long term.

Wow! That is directly in contradiction to how we structure our businesses, our educational systems, our societies in general. It’s really hard to gain these insights if you can’t do a deep analysis of 200,000 salespeople, or 100,000 software developers like we were able to do. Ultimately the massive database of 122 million people that we built at Gild allows a scale of research and analysis that is unprecedented. That scale, and the capability of deep machine learning allows us to factor tens of thousands of variables as a basis for our predictive engines.

I just love this space – combining human potential with the capabilities of artificial intelligence. I’ve never built a totally autonomous system. Everything I’ve built is about helping a parent, helping a doctor, helping a teacher, helping a manager do better. This may come from my one remaining academic interest: cognitive neural prosthetics [a versatile method for assisting paralyzed patients and patients with amputations, recording the cognitive state of the subject rather than signals strictly related to motor execution or sensation]. Do I believe that literally jamming things in your brain can make you smarter? Unambiguously yes! I accept we won’t be doing this tomorrow… there aren’t that many volunteers for elective brain surgery… but it’s coming, with amazing technologies such as neural dust, being developed at Lawrence Berkeley Labs / UC Berkeley, as well as ECoG, which is nearly ubiquitous but is currently only used during brain surgery for epilepsy.

What can be done with ECoG is amazing! I can tell what you’re subvocalizing, what you’re looking at, I can track your decision process, your emotional state, etc. Now, that is scary, and it should be. We should never shy away from that – but the potential is awesome.

Part of my response to the general ‘AI conundrum’ is, “Let’s beat them to the punch – why wait for them [AI machines] to become super-intelligent – why don’t we do it?” But then this becomes a human story as well. Is intelligence a commodity? Is it a function of how much I can buy? Or is it a human right, like a vaccine? I don’t think these things will ever become ubiquitous or completely free, but whoever gets there first – much like with profound nootropics [accelerated cognitive development] and other intelligence-building technologies – will enjoy a huge first-mover advantage. This could be a singularity moment: where potentially we have a small population of super-intelligent people. What happens after that?

I know we started from a not-simple question, but at least a very immediate and realized one: what are the human implications of these sorts of technologies today – technologies that I think, 20 or 30 years from now, will fundamentally change the definition of what it means to be human? That’s not a very long time period. But ultimately that’s the thing – technology changes fast. Nowadays people say that technology is changing faster than culture. We need cultural institutions to, if not keep up with the pace of change of technology, change and adapt at a much, much faster pace. We simply cannot accept that these things will figure themselves out over the next 20 years… I mean, 20 years is how long it takes to grow a new person – and then it will be too late.

It’s like the ice melt in Antarctica: ignoring the problem is leading to potentially catastrophic consequences. The same is true of AI development – this could be catastrophic for Africa, even for America or Europe. But the potential for a good outcome is so enormous – if we react in time. This isn’t the same story as climate change; it isn’t a huge cost just to keep it from being cataclysmic. What I’m saying here is these costs (for human-integrated AI) pay off, they pay back. We’re talking about a much better world. The hard part is getting people to think that it’s worth investing in other peoples’ kids. That’s a bit of an ugly reality, but it’s the truth.

My approach has been: if we can scale out some sophisticated AIs and deliver them in ways that even if not truly free, but can be done at low enough costs that this can be done philanthropically, then that’s what we’ll do.

Ed:  I really appreciate your comments. You went a good way to defining what you meant by ‘Augmented Intelligence’. I had a sense of what you meant by that but this was a most informative journey.

Vivienne:  Thank you. It’s interesting – 10 years ago if you’d asked me about cognitive neural prosthetics, cybernetics, cyborgs… I would have said it’s 50 years away. So now I’ve trimmed more than 10 years off that estimate. Back then, as an academic, I thought, “Ok, what can I do today?” I don’t have access to a brain directly, can I leverage technology somehow to achieve indirect access to a brain? What could we do with Google Glass? What could we do with inferential technologies online? I know I’m not the only person that’s had an idea like this before. My very first startup, we were thinking of “Google Now” long before Google Now came along. The vision was even more aggressive:

“You’re walking down the street, and the system remembers that you read an article 40 days ago about a restaurant and it really piqued your interest. How did it know that? Because it’s modeling you. It’s effectively simulating you. Your emotional responses. It’s reading the article at the same time you’re reading the article. It’s tracking your responses, but it’s also simulating you, like a true cognitive system. [An aside: I’ll echo what many others have said – that IBM’s Cognitive Computing is not really cognitive computing… but such a thing does really exist].

So I’m walking down the street, and the system pings me and says, “You know that restaurant you were interested in? There’s an open table right now, it’s three blocks away, your calendar’s clear and I’ve just made a reservation for you.” Because the system knew, it didn’t need to ask, that you’d say yes.

Now, that’s really ambitious, especially since I was thinking about this ten years ago, but it’s not ambitious in the sense that it can be done, it’s more ambitious about the infrastructure. Where do you get the data from? What kind of processing can you do? I think the infrastructure problem is becoming less and less of one today and that’s where we are seeing many changes.

You brought up the issue of a “Marketplace of Things” [n.b. Ed and Vivienne had a short exchange leading in to this interview regarding IoT and the perspective that localized data/intelligence exchange would dramatically lower bandwidth requirements for upstream delivery, lower system latency, and provide superior results.] and brought up the issue of bandwidth: wouldn’t it be better if every light bulb, every camera, every microphone locally processed information, and then only sent off things that were actually interesting, informative? And didn’t just send it off to a single server, but offered it every day on an “InfoNYSE”: “I’ve got some interesting emotion data on three users in this room, anyone interested in that?”

These transactions won’t necessarily be traditional monetary transactions, possibly just data transactions. “I will trade this information for some data about your users’ interests”, or for future data about how your users responded to this information that I’m providing.

As much as I think the word ‘futurist’ is a bit overused or diffuse, I do admit to thinking about the future and what’s possible. I’ve got a room full of independent processing units that are all talking to each other… I’ve kind of got a brain in that room. I’m actually pretty skeptical of ‘general AI’ as traditionally defined. You know, like I’m going to sit down and have a conversation with this AI entity. [laughs] I think we’ll know that we’ve achieved true general AI when this entity no longer introspects, when it no longer understands its own actions – i.e. when it becomes like us.

I do think general artificial intelligence is possible, but it's going to be kind of like a whole building 'turning on' – it won't be having a conversation with us, it will be much more like our brain. I like to use this metaphor: "Our brains are like a wildly dysfunctional democracy with all of these circuits voting for different outcomes, but it's an unequal democracy, as the votes carry different weights." But remember that we only get to see a tiny segment of those votes: only a very small portion of that process ever comes to our conscious awareness. We do a much better job of explaining the 'votes' post hoc, and of just making things happen, than of actually explaining them in the moment.

Another metaphor I use is from the movie "Inside Out": except instead of a bunch of cutesy emotions embodied, imagine a room full of really crotchety old economists that hate each other and hold wildly differing opinions.

Ed: “Oh, you mean the Fed!”

Vivienne: "Yes! Imagine the Fed of your head." This is actually not a bad model of our cognitive process. In many ways we show a lot of near optimality, near perfect rationality, in our decision making – once you understand all the inputs to our decision making process. And yet we can wildly fluctuate between different decisions. The classic example is a betting game: if people bet and the outcome is revealed as a win, they will play again. If they bet and the outcome is revealed as a loss, they will still play again. But if they bet and the outcome is not revealed – they will be less likely to play again.

Which at one level is irrational, but we hold these weird and competing ideas in our heads and these votes take place on a regular basis. Modeling cognition gets really complex – but if you really want to understand people, that's the way to do it.

This may have been a long and somewhat belabored answer to your original question regarding augmented intelligence, but the heart of it for me all started with, “What could we do if we really understood someone?” I wanted it to be, “I really understand you because I’m in your brain.” But, lacking that immediate capability, what can I infer about someone, and then what can I feed back to them to make them better?

Now, “better” may be a pretty loose and broad definition, but I’m comfortable with that if I can make people “grittier”, if I can improve their working memory span, if I can improve their ability to regulate their own emotions – not turn their emotions off, not pump them full of Ritalin, but to be aware of how their emotions impact their decision making. That leaves the person free to decide what to do with them. And that’s a world I would be pretty happy with.

I would surely disagree with how a great many people use their lives, even so empowered, but this is a point of faith for me. I'm a hard-numbers scientist, not a religious person, yet I hold this one article of faith: if we could improve these qualities for everyone, the world would be a better place.

Ed: Going back to the conference that we're both attending (Consumer Goods), how can this idea of augmented intelligence, and what I would call an 'intelligent surface' of our total environment (whether enabled by IoT, social media feedback, Google Now, etc.), help turn the consumer ecosystem on its end and truly make it 'consumer-centric'? By that I mean consumers actually being in control of what goods and services are invented for, let alone sold to, us. Why should firms waste time making and selling us stuff that we don't want or need, or stuff that is bad for us?

Vivienne:  There are a couple of different ideas that come to me. One is something I often recommend in regards to talent. While your question pertains to external customers in regards to retailers/suppliers, an analogy can be drawn to the internal interaction between employees and the firms for which they work: "Companies need to stop trying to align their employees with their business; they need to figure out how to align their business with their employees."

This doesn't mean that their business becomes some quixotic thing that is malleable and changeable; you do have a business that produces goods or services. For instance, let's say your business is Campbell's Soup – you produce food and ship it around the world. But why does this matter to Ann, Shaniqua, or any other of your employees? While this may sound a bit 'self-helpy' or 'business-guru', it's actually a big part of my philosophy. Think about the things I've said about education: let's do this crazy thing and think about what the true outcome we're after is. I want happy, healthy, productive people – and society will reap the benefits. That is my lone definition of education. Anything else is just details.

I’m telling you I can predict those three things. Therefore any decision I make, right here in the moment, I can align against those three goals. So… should I teach this person some concept in geometry right now? Or how should I teach that concept? How does that align with those three goals?

"My four-year-old is still not reading, should I panic?" How does that align with those three goals? For a child like that, is it predictive of those three things? For some kids, that might be problematic, and it might be time for some kind of intervention. For others, it turns out it's not predictive at all. I didn't do well in high school. What I didn't do there, I did in spades in college… and then flunked out completely. After a big gap in my life I went back to college and did my entire undergraduate degree in one year – with perfect scores. Same person, same place, same things. It wasn't what I was doing (to borrow a phrase from someone else) – it was why I was doing it.

Figuring out why it suddenly mattered to me at that time came down to realizing that it all coalesced around the idea of maximizing human potential. Suddenly it had purpose; I was doing things for a reason.

So now we're talking about doing this inside of companies, with their employees. Figuring out why your company matters to this employee. You want them to be productive – bonuses aren't the way to do it. Pay them enough that they feel valued, and then figure out why this is important to them. And true enough, for some people that reason might be money – but for others it's not.

So what does that mean for our consumer relationship? My big fear is that when CEOs or CMOs hear this (human perception modeling, etc. as is used in AI development) they think, “Oh, let’s figure out why people will buy our products!” When I hear about ‘brain hacks’ I don’t think of sales or marketing, I worry about the food scientists figuring out the perfect ‘sweet spot’ of sodium, fat and carbohydrates in order to make this food maximally addictive (in a soft sense). I’m not talking about that kind of alignment. I’m saying, “What is your long term goal?”

Every one of those people on stage (at the Consumer Goods Forum) made some very impassioned speeches about how it’s about the health of consumers, their well-being, the good of society, it’s about jobs, etc. It’s shocking how bad a reputation those same firms have, at least in the USA, along those same dimensions – if that’s what they truly care about. And yet their response to the above statement is, “Gosh, we need a better branding campaign!”

Well… no, you firms are probably not nearly as aligned around those positive outcomes as you think you are. I believe that you feel that way, and that you feel wronged by our assumption that you are not acting that way. I do a tremendous amount of work and advising in the area of discrimination, in human capital. You know, bias, discrimination… it's not done by villains, it's done by humans.

Ed: I think what's difficult is that for true authenticity to be evident, to really act in an authentic manner, one must be able to be self-aware. It's rare to find that brutal self-analysis, self-questioning, self-awareness. You have pointed out that many business leaders truly believe their hype, their marketing positions – whether there is any real accuracy in those positions or not.

Vivienne:  I just wrote an op-ed for the Financial Times, "The Neuro-Economics of Inequality" (not its actual title, but it's the way I think about the issue). What happens when someone learns – really, legitimately, rationally learns – that their hard work will not pay off? Not the way that, for example, it will for the American white kid down the street. So why bother? Even for a woman: so I've got a fancy degree, a great college education; I'm going to have to work twice as hard as the man to get the same pay, the same reward… and even then I'm never going to make it to the C-suite anyway. If I actually do get there, I'm going to have to be "that" kind of executive once I'm there… I'd rather just be a mom.

These people are not opting out, they are making rational decisions. You talk to economists… we went through this and did the research. We could prove the ‘cost of being named José in the tech industry’, the ‘cost of being black on Wall St.’ – this completely changes some of these equations when you take that into account. So, bringing this back to consumers, I don’t have ready answers for it as I’m a bit dismissive of it. “Consumerism” – that’s a bad word, isn’t it?

While I'm not sure of the resonance of this thought: what if you could take the idea that I'm talking about – these big predictions, Bayesian models that are giving you probability distributions over this potential consumer's outcomes. Not ten minutes from now – or rather, ten minutes from now is only part of what I'm talking about. We're integrating across the probability distribution of all potential life outcomes from something as minor as "they ate your bag of potato chips."

I'm willing to bet that if you had to 'own', in some sense – morally if nothing else – the consequences of knowing the short-term benefit (a nice little hedonic short-term increase in happiness), the mid-term effect (a decrease in eudaimonic happiness), the long-term decrease in liver function and so forth… your outlook might be different. If you're (brand X), that's just an externality. So I think there are some legitimate criticisms: why talk a fancy game when it's really just corporate responsibility?

Yes, optimizing your supply chain, reducing food waste is nice, but it’s really just because you spent money moving food around the world, some of which got wasted – you want to cut back on that. Beyond that, my observation as an outsider to this sector is that it’s about corporate responsibility, and by that I mean the marketing practices. If you really want to put your heart where your mouth is then take ownership of the long term outcomes. Think about what it means for a nine-year old to eat potato chips. Certain ‘health food’ enterprises have made a lot of money out of this idea, providing a healthy site in which to shop. Certainly, in comparison to a corner store in a disenfranchised neighborhood in the US they are a wildly healthy choice, but even these health shops have an entire aisle dedicated to potato chips. They’re just organic potato chips that cost three times as much. I buy them every now and then. I’m a firm believer in eating well, just eating with some degree of moderation.

That would be my approach. My approach in talent, in health, in education and a variety of domains in policy-making has been: let's leverage some amazing technology to make these seemingly miraculous predictions (which they're not – they are really not even predictions but actuarial distributions). But these still inform us.

Right now, with this consumer, we’re balancing a number of things: revenue, sustainability, even the somewhat morbid sustainability of our consumer base; we’re balancing our brand. What’s the one action we could take right now as an organization in respect to this person that could maximize all of those things? Given their history, it’s hard to believe that it’s going to be something more than revenue, or at least something that’s going to actually cost them. If I actually believed they would be willing to take this kind of technology and apply it in a truly positive way – I’d just give it to them.

I mean, what a phenomenal human good it would be if some rather simple machine learning could help them actually have a really different paradigm of 'consuming'. What if every brand could become your best friend, and do what's in your best interest – albeit as seen from the brand-owner's perspective? Yeah, it's pretty hopeful to think that could actually happen… but do I think it could happen?

That's what we're hoping for in some of our mental health work. By being able to make these predictions we're not just hoping to intervene on behalf of the sufferer, but to enlist trusted confidants as well. The way I often put it is: I would love it if our system could be everything your best friend is, but even more vigilant. What would your best friend do if they recognized the early signs of a manic episode coming on? Can we deliver that two weeks earlier and never miss the signals?

Going back, I just don’t see where big consumer companies own that responsibility. But let me pull back to my ‘Marketplace of Things’ idea. There’s a crucial aspect here: that of agents. I can have my own proxy, my own agent that can represent me. In that context, then these consumer companies can serve their own goals. I think they do have some goal in me being alive, so they can continue to earn out my customer lifetime value as a function of my lifetime. They have some value attached to me spending money in certain ways that are more sustainable, that are better for their infrastructure, etc.

I think in all those areas they could take the kinds of methodologies I'm describing and apply them in a kind of AI/machine learning. On my side, if I'm proxied by my own agent – well then we can just negotiate. My agent's goal is really to model out my health, happiness and productivity. It's constantly seeking to maximize those in the near, medium and long term. So, it walks into a room and says, "All right, let's have a negotiation." Clearly, this can't be done by people, as it all needs to happen nearly instantaneously.

I don’t think the cost of these solutions will drop low enough that we’ll literally be putting them into bags of potato chips. Firstly we must imagine changes in the infrastructure. Part of paying for shelf space in a supermarket won’t be just paying for the physical shelf space, it will be paying for putting your own agents in place on that shelf space. They’ll be relatively low cost, but probably not as disposable as something you could build into the packaging of potato chips. But simply by visiting that location, I pick up all the nutrition information I need, I can solicit information from the store about other people that are shopping (here I mean that my proxy can do all this). Then that whole system can negotiate this out, and come up with recommendations.
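[Ed: to make the agent idea concrete, here is a toy sketch of one round of such a negotiation – a shelf agent posting offers and a consumer proxy scoring them against near- and long-term wellbeing weights. All names, weights and numbers are invented for illustration; this is not Vivienne's implementation.]

```python
# Toy "Marketplace of Things" round: a consumer proxy evaluates shelf offers.
# Every offer, weight and score below is invented for illustration.
offers = [
    {"item": "organic chips", "price": 4.5, "hedonic": 0.7, "long_term": -0.2},
    {"item": "fruit + nuts",  "price": 3.0, "hedonic": 0.5, "long_term":  0.6},
    {"item": "regular chips", "price": 1.5, "hedonic": 0.8, "long_term": -0.5},
]

# The proxy's priorities: long-term wellbeing dominates, price is a mild penalty.
WEIGHTS = {"hedonic": 0.3, "long_term": 0.6, "price": -0.1}

def proxy_score(offer: dict) -> float:
    """Utility of an offer to this consumer, as modeled by their agent."""
    return (WEIGHTS["hedonic"] * offer["hedonic"]
            + WEIGHTS["long_term"] * offer["long_term"]
            + WEIGHTS["price"] * offer["price"])

best = max(offers, key=proxy_score)
print(f"proxy recommends: {best['item']} (score {proxy_score(best):+.2f})")
```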

To me, it may seem like my phone or earpiece is simply suggesting, “How about this, how about that?” While not everyone is this way, I’m one of those people who actually enjoys going to the supermarket, feeling how it’s interacting with me in the moment. That’s something my agent can take into account as well. This becomes a story that I find more interesting. Maybe this is a set of combined interactions that takes into account various foods manufacturers, retailers – and my agent.

Today, I’m totally outside this process – I don’t get to play a role. The things I like, I just cross my fingers and hope they are in stock when I am in the store. The price that I pay: I have no participation in that whatsoever (other than choosing to purchase or not).

Another example:  Kate from Facebook [in an earlier panel discussion] was telling us that Facebook gives a discount to advertisers for ads that are ‘stickier’ – that people want to see and spend more time looking at. What if I was willing to watch less enjoyable ads – if FB will share the revenue with me?

None of these are totally novel ideas, but none of them will ever come to realization if one of the fundamental sides to this negotiation never gets to participate. I’m always getting proxied by someone else. I don’t have to think that Facebook or Google are bad companies, or that Larry Page or Mark Zuckerberg are bad people for me to think that they don’t necessarily have my best interests at heart.

That would change the dynamic. But I sense that some people in the audience would see that as a loss of control, and most of them are hyper risk-averse.

Ed:  As a final thought or question, in terms of the participation between consumer and producer/retailer that you have discussed, it occurs to me that perhaps one avenue that may be attractive to these companies would be along the lines of market research.  Most new products or services are developed unilaterally, with perhaps some degree of ‘traditional market research’ where small focus groups are used for feedback. From the number of expensive flops in the marketplace it appears that this methodology is fraught with error. Could these methodologies of AI, of probability prediction, of agent communication, be brought to bear on this issue?

Vivienne:  Interesting… that brings up many new ideas. One thing that we did in the past – we're not doing it now – was to listen in on students conversing with each other online. We actually learned the material they were studying directly from the students themselves. For example, start with a system that knows nothing about biology; it learns biology from the students talking amongst themselves – including wrong ideas about biology. What we found was that when we trained the system to predict the grades that the students would receive – even after new students entered the class, with new material and new professors – we knew after one week what grade they would get at the end of the semester. We knew with greater and greater accuracy each week which questions they would get right or wrong on the final exam. Our goal in the exercise was to end all standardized testing. I mean, if we know how they are going to score on the test, why ever have a test?

Part of our aim there was to simulate the outcome of a lecture. There's some similarity to what you're discussing (producing consumer goods). Lectures are costly to develop, you get one chance to deploy each one per semester or quarter, feedback is limited, etc. You would really like to know ahead of time whether a lecture was going to be useful. Before we pivoted away from this more academic aspect of education into this life-outcomes type of work, we were wondering if we could give feedback on the effectiveness of a given lecture before the lecture was given.

Hey, these five students are not going to understand any of your lecture as it’s currently presented. Either they are going to need something different, or you can explore including something else, some alternative metaphors, in your discussion.

Yes, I think it's intriguingly possible to run this sort of very disruptive market research. Certainly in my domain I'm already talking about this: I'm asking one question each day, and can predict everyone's answers to thousands of questions. That's rather profound, and quite efficient. What if you had a relationship with a meaningful sample of your customers on Facebook and you could ask each of them one question a day, just like I described with my educational work? Essentially you would have a deep, insightful, rolling model of your customers all the time.

You could make predictions against this model community for future products – basic simulations of those types of experiences. I agree, this could be very appealing to these firms.
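[Ed: the "ask one question, predict thousands" claim has a familiar mathematical shape: if people's answers lie near a low-dimensional manifold, a few observed answers per person are enough to fill in the rest. Below is a minimal sketch using generic low-rank matrix completion on synthetic data – the method and every parameter are illustrative assumptions on my part, not Socos' actual models.]

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_questions, rank = 200, 1000, 8

# Synthetic ground truth: everyone's answers live on a low-dimensional manifold.
U_true = rng.normal(size=(n_users, rank))
Q_true = rng.normal(size=(n_questions, rank))
answers = U_true @ Q_true.T

# Each user answers only ~2% of the questions ("one question a day").
mask = rng.random((n_users, n_questions)) < 0.02
observed = np.where(mask, answers, 0.0)

# Alternating least squares over the observed entries only.
U = rng.normal(scale=0.1, size=(n_users, rank))
Q = rng.normal(scale=0.1, size=(n_questions, rank))
lam = 0.1
for _ in range(15):
    for i in range(n_users):
        seen = mask[i]
        if seen.any():
            A = Q[seen].T @ Q[seen] + lam * np.eye(rank)
            U[i] = np.linalg.solve(A, Q[seen].T @ observed[i, seen])
    for j in range(n_questions):
        seen = mask[:, j]
        if seen.any():
            A = U[seen].T @ U[seen] + lam * np.eye(rank)
            Q[j] = np.linalg.solve(A, U[seen].T @ observed[seen, j])

pred = U @ Q.T
rmse = np.sqrt(np.mean((pred[~mask] - answers[~mask]) ** 2))
print(f"RMSE on never-asked questions: {rmse:.3f}")
```

With enough users, each new answer tightens the model for everyone – which is why, in principle, one question a day can predict thousands.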

A Digital Disruptor: An Interview with Michael Fertik

June 27, 2016 · by parasam

During the recent Consumer Goods Forum global summit here in Cape Town, I had the opportunity to briefly chat with Michael about some of the issues confronting the digital disruption of this industry sector. [The original transcript has been edited for clarity and space.]

Michael Fertik founded Reputation.com with the belief that people and businesses have the right to control and protect their online reputation and privacy. A futurist, Michael is credited with pioneering the field of online reputation management (ORM) and lauded as the world’s leading cyberthinker in digital privacy and reputation. Michael was most recently named Entrepreneur of the Year by TechAmerica, an annual award given by the technology industry trade group to an individual they feel embodies the entrepreneurial spirit that made the U.S. technology sector a global leader.

He is a member of the World Economic Forum Agenda Council on the Future of the Internet, a recipient of the World Economic Forum Technology Pioneer 2011 Award and through his leadership, the Forum named Reputation.com a Global Growth Company in 2012.

Fertik is an industry commentator with guest columns in Harvard Business Review, Reuters, Inc.com and Newsweek. Named a LinkedIn Influencer, he regularly blogs on current events as well as developments in entrepreneurship and technology. Fertik frequently appears on national and international television and radio, including the BBC, Good Morning America, Today Show, Dr. Phil, CBS Early Show, CNN, Fox, Bloomberg, and MSNBC. He is the co-author of two books, Wild West 2.0 (2010), and New York Times best seller, The Reputation Economy (2015).

Fertik founded his first Internet company while at Harvard College. He received his JD from Harvard Law School.

Ed: As we move into a hyper-connected world, where consumers are tracked almost constantly, and now passively through our interactions with an IoT-enabled universe: how do we consumers maintain some level of control and privacy over the data we provide to vendors and other data banks?

Michael:  Yes, passive sharing is actually the lion's share of data gathering today, and will remain so in the future. I think the question of privacy can be broadly broken down into two areas. One is privacy against the government and the other is privacy against 'the other guy'.

One might call this "Big Brother" (governments) and "Little Brother" (commercial or private interests). The question of invasion of privacy by Big Brother is valid, useful and something we should care about in many parts of the world. While I, as an American, don't worry overly about the US government's surveillance actions (I believe that the US is out to get 'Jihadi John', not you or me), I do believe that many other governments' interest in their citizens is not as benign.

If you are in much of the world, the panopticon of visibility – from one side of the one-way mirror to the other side, where most of us sit – is something to think and care about. We are under surveillance by Big Brother (governments) all the time. The surveillance tools are so good, and digital technology makes it possible for so much of our data to be easily surveilled by governments, that I think that battle is already lost.

What is done with that data, and how it is used is important: I believe that this access and usage should be regulated by the rule of law, and that only activities that could prove to be extremely adverse to our personal and national interests should be actively monitored and pursued.

When it comes to "Little Brother" I worry a lot. I don't want my private life, my frailties, my strengths, my interests… surveilled by guys I don't know. The basic 'bargain' of the internet is a Faustian one: they will give you something free to use, and in exchange will collect your data without your knowledge or permission for a purpose you can never know. Actually, they will collect your data without your permission and sell it to someone else for a purpose that you can never know!

I think that encryption technologies that help prevent and mitigate those activities are good and I support that. I believe that companies that promise not to do that and actually mean it, that provide real transparency, are welcome and should be supported.

I think this problem is solvable. It’s a problem that begins with technology but is also solvable by technology. I think this issue is more quickly and efficiently solvable by technology than through regulation – which is always behind the curve and slow to react. In the USA privacy is regarded as a benefit, not an absolute right; while in most of Europe it’s a constitutionally guaranteed right, on the same level as dignity. We have elements of privacy in American constitutional law that are recognized, but also massive exceptions – leading to a patchwork of protection in the USA as far as privacy goes. Remember, the constitutional protections for privacy in the USA are directed to the government, not very much towards privacy from other commercial interests or private interests. In this regard I think we have much to learn from other countries.

Interestingly, I think you can rely on incompetence as a relatively effective deterrent against public sector 'snooping', to some degree – as so much of government is behind the curve technically. The combination of regulation, bureaucracy, lack of cohesion and general lack of applied technical knowledge all serve to slow the capability of governments to effectively mass-surveil their populations.

However, in the commercial sector, the opposite is true. The speed, accuracy, reach and skill of private corporations, groups and individuals is awesome. For the last ten years this (individual privacy and awareness/ownership of one’s data) has been my main professional interest… and I am constantly surprised by how people can get screwed in new ways on the internet.

Ed:  Just as in branding, where many consumers actually pay a lot for clothing that, in addition to being a T-shirt, prominently advertises the brand name of the manufacturer – with no recompense for the consumer – is there any way for digital consumers to 'own' and have some degree of control over the use of the data they provide just through their interactions? Or are consumers forever to be relegated to the short end of the stick, giving up their data for free?

Michael:  I have mapped out, as have others, how the consumer can become the 'verb' of the sentence instead of what they currently are: the 'object' of the sentence. The biggest lie of the internet is that "You" matter… You are the object of the sentence, the butt of the joke. You (or the digital representation of you) is what we (the internet owners/puppeteers) buy and sell. There is nothing about the internet that needs to be this way. This is not a technical or practical requirement of this ecosystem. If we could today ask the grandfathers of the internet how this came to be, they would likely say that one of the areas in which they didn't succeed was adding an authentication layer on top of the operational layer of the internet. And what I mean here is not what some may assume: providing access control credentials in order to use the network.

Ed:  Isn’t attribution another way of saying this? That the data provided (whether a comment or purchasing / browsing data) is attributable to a certain individual?

Michael:  Perhaps “provenance” is closer to what I mean. As an example, let’s say you buy some coffee online. The fact that you bought coffee; that you’re interested in coffee; the fact that you spend money, with a certain credit card, at a certain date and time; etc. are all things that you, the consumer, should have control over – in terms of knowing which 3rd parties may make use of this data and for what purpose. The consumer should be able to ‘barter’ this valuable information for some type of benefit – and I don’t think that means getting ‘better targeted ads!’ That explanation is a pernicious lie that is put forward by those that have only their own financial gain at heart.

What I am for is “a knowing exchange” between both parties, with at least some form of compensation for both parties in the deal. That is a libertarian principle, of which I am a staunch supporter. Perhaps users can accumulate something like ‘frequent flyer miles’ whereby the accumulated data of their online habits can be exchanged for some product or service of value to the user – as a balance against the certain value of the data that is provided to the data mining firms.

Ed:  Wouldn’t this “knowing exchange” also provide more accuracy in the provided data? As opposed to passively or surreptitiously collected data?

Michael:  Absolutely. With a knowing and willing provider, not only is the data collection process more transparent, but if an anomaly is detected (such as a significant change in consumer behavior), this can be questioned and corrected if the data was in error. A lot of noise is produced in the current one-sided data collection model and much time and expense is required to normalize the information.

Ed:  I'd like to move to a different subject and gain your perspective as one who is intimately connected to this current process of digital disruption. The confluence of AI, robotics, automation, IoT, VR, AR and other technologies that are literally exploding into practical usage has a tendency, as did other disruptive technologies before them, to supplant human workers with non-human processes. Here in Africa (and today we are speaking from Cape Town, South Africa) we have massive unemployment – between 25% and 50% of working-age young people in particular. How do you see this disruption affecting this problem, and can new jobs, new forms of work, be created by this sea change?

Michael:  The short answer is no. I think this is a one-way ratchet. That may change in a hundred years' time, but in the next 20-50 years, I don't see it. Many, many current jobs will be replaced by machines, and that is a fact we must deal with. I think there will be jobs for people that are educated. This makes education much, much more important in the future than it's ever been to date – and it's huge enough already. I'm not saying that only Ph.D.'s will have work, but working at all in this disrupted society will require a reasonable level of technical skill.

We are headed towards an irrecoverable loss of unskilled labor jobs. Full stop. For example, we have over a million professional drivers in the USA – virtually all of these jobs are headed for extinction as autonomous vehicles, including taxis and trucks, start replacing human drivers in the next decade. These jobs will never come back.

I do think you have a set of saving graces in the developing world that may slow down this effect in the short term: the cost of human labor is so low that in many places it will remain cheaper than technology for some time; corruption is often a bigger impediment to job growth than technology; and trade restrictions and unfair practices are also a huge limiting factor. But none of this will stem the inevitable tide of permanent disruption of the current jobs market.

And this doesn’t just affect the poor and unskilled workers in developing economies: many white collar jobs are at high risk in the USA and Western Europe:  financial analysts, basic lawyers, medical technicians, stock traders, etc.

I’m very bullish on the future in general, but we must be prepared to accommodate these interstitial times, and the very real effects that will result. The good news is that, for the developing world in particular, a person that has even rudimentary computer skills or other machine-interface skills will find work for some time to come – as this truly transformative disruption of so many job markets will not happen overnight.

The Digital Disruption of the Consumer Goods Ecosystem

June 27, 2016 · by parasam

 


I attended the Global Consumer Goods Forum here in Cape Town, where about 800 delegates from around the world came to Africa – for the first time in the 60-year history of this consortium – to discuss and share experience from both the manufacturer and retailer sides of this industry.

My focus, as an emerging-technologies observer, was on how various technologies (including IoT and mobile devices) and changing user behaviour are reshaping this industry sector. Many speakers addressed this issue, covering different aspects. Mark Curtis, Co-Founder of Fjord (the design/innovation group of Accenture), opened the sessions with an interesting and thought-provoking presentation on how brands (and associated firms) will have to adapt in this time of exponential change.

Re-introducing concepts such as 'wearables' and 'nearables' – as devices and the network move ever more proximate to ourselves – Mark pointed out that the space for branding is ever shrinking. (Exactly how much logo space is available on a smart watch?) In addition, as so many functions and interactions in the digital age are becoming atomized and simplified, the very concept of a branded experience is changing remarkably.

As the external form factor of many of our digital interactions is becoming sublimated into the very fabric of our lives (and soon, into the fabric of our clothes…) the external ‘brand’ that you could touch and see is disappearing or becoming commoditized. Even in the world of smartphones the ecosystem is rapidly changing: the notion of separate apps (which have some brand recognition attached) for disparate functions will soon disappear. Although the respective manufacturers may not love to hear this, the notion of whether the phone is Apple or Samsung will in the end not be as important as the functionality that these devices enable.

The user won’t really care whether an app is called Vanilla or Chocolate, but rather that the combination of hardware and software will enable the user to listen to their music, in the order they want, when and where they want. Period. Or to automatically glean info from their IoT-enabled home and present the shopping list when they are in the store.

The experience is what now requires branding. Uber, AirBnB, Spotify, Amazon, etc. are all examples of something more than either a product or a service.

Christophe Beck, EVP of EcoLab, explained how the much-hyped “Internet of Things” (IoT) is moving into real and actionable functions. In this case, the large-scale deployment of sensors feeds their real-time analytic processes to provide feedback and control mechanisms to on-site engineers, creating rapid improvements in process control and quality.

The enormously important use of predictive analytics was further underscored by José González-Hurtado of IRI. The power of huge data farms, along with today's massive computational availability, can extrapolate meanings and indicators that were economically impossible to obtain only a few years ago. Discussions on the food supply chain – including sustainability, transparency, trackability, food safety, health and other factors – dominated many of the presentations. The CEOs of Campbell Soup and Whole Foods covered how their respective firms are leveraging IoT, analytics (where one gets useful information from BigData) and social networks to integrate more effectively with their customers and provide the level of information and transparency that most consumers want today regarding their food.

The panel on “Digital Disruptors” was particularly fascinating: Vivienne Ming (Founder of Socos Learning) showed us how AI (Artificial Intelligence, and more importantly Augmented Intelligence) can and will make enormous impacts within the consumer goods spectrum; Michael Fertik (Founder of Reputation.com) shared the impacts of how digital privacy (or the lack thereof…), security and data ownership are changing the way that customers interact with retailers and suppliers; and Kate Sayer (Head of Consumer Goods Strategy for Facebook) discussed the rapidly changing engagement model that consumer goods suppliers must adopt in relating to their customers.


While all of the participants acknowledged that "digital disruption" is here to stay, the understanding and implementation of this technology varies widely throughout the supply chain. My prime take-away is that the manufacture, distribution, sale and consumption of end-user physical goods lag considerably behind purely digital goods and services. When questioned privately, many industry leaders accepted that they are playing "catch-up" with their purely digital counterparts.

There are a number of reasons for this, not all of which are within the control of individual manufacturers or retailers: governmental and international regulations are even further behind the curve than commercial entities in terms of rapid and encompassing adoption of digital technology; industrial process control took decades to move from purely human/mechanical control to in-house closed-loop computer control/feedback systems – the switch to a more open IoT framework must be closely balanced with a profound need for security, reliability and accountability.

As we have seen with general information on the internet, accuracy and accountability have taken a far back seat to perceived efficiency, features and ‘wow-factor’. This is ok for salacious news, music and cool new apps that one can always abandon if they don’t deliver what they promise; it’s another thing entirely when your food, drink or clothing doesn’t deliver on the promises made…

Given this, it’s most likely that more rapid adoption of IoT and other forms of ‘disruptive digital technology’ will occur in the retail sector than the manufacturing sector – and this is probably a good thing. But one thing is sure: this genie is way out of the bottle, and our collective lives will never be the same. The process of finding, buying and consuming both virtual and physical goods is changing forever.

IoT (Internet of Things): A Short Series of Observations [pt 7]: A Snapshot of an IoT-connected World in 2021

May 19, 2016 · by parasam

What Might a Snapshot of a Fully Integrated IoT World Look Like 5 Years from Now?

As we’ve seen on our short journey through the IoT landscape in these posts, the ecosystem of IoT has been under development for some time. A number of factors are accelerating the deployment, and the reality of a large-scale implementation is now upon us. Since 5 years is a forward-looking time frame that is within reason, both in terms of likely technology availability and deployment capabilities, I’ve chosen that to frame the following set of examples. While the exact scenarios may not play out precisely as envisioned, the general technology will be very close to this visualization.

The Setting

Since IoT will be international in scope, and will be deployed from 5th Avenue in mid-town Manhattan to the dusty farmlands of Namibia, more than one example place setting must be considered for this exercise. In order to convey as accurate and potentially realistic a snapshot as possible, I’m picking three real-world locations for our time-travel discussion.

  • San Francisco, CA – USA.  A dense and congested urban location, very forward thinking in terms of civic and business adoption of IT. With an upscale and sophisticated population, the cutting edge of IoT can be examined against such an environment.
  • Kigali, Rwanda – Africa.  An urban center in an equatorial African nation. With the entire country of Rwanda having a GDP of only 2% of San Francisco's, it's a useful comparison of how a relatively modern, urban center in Africa will implement IoT. In relative terms, the local population is literate, skilled and connected [Rwanda has a 70% literacy rate; Kigali is reputed to be one of the most IT-centric cities in Africa; and internet connectivity is 26% nationally (substantially higher in Kigali)].
  • Tihi, a remote farming village in the Malwa region of the central Indian state of Madhya Pradesh.  This is a small village of about 2,500 people that mostly grows soybeans, wheat and maize. With an average income of $1.12 per day, this is an extremely poor region of central rural India. The village is however 'on the map' due to the installation in 2002 of an ICT kiosk (named e-Choupal, taken from the term "choupal" meaning 'village square' or 'gathering place'), which for the first time brought internet connectivity to this previously disconnected town. IoT will be implemented here, and it will be instructive to see Tihi 5 years on…

General Assumptions

Crystal-ball gazing is always an inexact science, but errors can be reduced by basing the starting point on a reasonable sense of reality, and by erring on the side of conservatism and caution in projecting the rollout of nascent technologies – some of which deploy faster than assumed, others much more slowly. Some very respectable consulting firms in 1995 reported that cellphones would remain fringe devices, and expected only 1 million cellphones to be in use by the year 2000. In the USA alone, more than 100 million subscribers were online by that year…

I personally was one of the fewer than 40,000 users in the entire USA in 1984, when cellphones were only a few months old. As I drove on the freeways of Los Angeles talking on a handset (the same size as a landline's, connected via coil cord to a box the size of a lunch pail), other drivers would stare and mouth "WTF??" But it aided my productivity enormously as I sat through massive traffic jams on my 1.5-hour commute each way between home and work. I was able to speak to east coast customers, understand what technical issues would greet me once I arrived at work, etc. I personally couldn't understand why we didn't have 100 million subscribers by 1995… this was a transformative technology.

Here are the baseline assumptions from which the following forward-looking scenarios will be developed:

  • There are currently about the same number of deployed IoT devices as people on the planet: 6.8 billion. The number of deployed devices is expected to exceed that of the human population by the end of this year. Approximately 10 billion more devices are expected to be deployed each year over the next 5 years, on average.
  • The overall bandwidth of the world-wide internet will grow at approximately 25% per year over the next 5 years. The current overall traffic is a bit over 1 zettabyte per year [1 zettabyte = 1 million petabytes; 1 petabyte = 1 million gigabytes]. That compounds to about 3 zettabytes by 2021 (the arithmetic sketch after this list checks these projections). From another perspective: it took 27 years to reach the 1 zettabyte level; in 5 more years the traffic will triple!
  • Broadband data connectivity in general (wired + wireless) is currently available to about 46% of the world's population, and is increasing by roughly 5% per year. Wireless connectivity growth is expected to accelerate, but even conservatively, about 60% of the world's population will have internet access within 5 years.
  • The cost of both computation and storage is still falling, more or less in line with Moore’s law. Hosted computation and storage is essentially available for free (or close to it) for small users (a few GB of storage, basic cloud computations). This means that a soy farmer in Tihi, once connected to the ‘net, can essentially leverage all the storage and compute resources needed to run their farm at only the cost of connectivity.
  • Advertising (the second most trafficked content on the internet after porn) will keep increasing in volume, cleverness, economic productivity and reach. As much as many may be annoyed by this, the massive infrastructure must be fed… and it’s either ads or subscription services. Take your pick. And with all the new-found time, and profits, from an IoT enabled life, maybe one just has to buy something from Amazon? (Can’t wait to see how soon they can get a drone out to Tihi to deliver seeds…)
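For readers who want to check the compounding behind those bullets, the back-of-the-envelope arithmetic is below (the growth rates are simply the assumptions stated above, nothing more):

```python
# Internet traffic: ~1 ZB/yr today, growing ~25% per year for 5 years.
traffic_zb = 1.0 * 1.25 ** 5
print(f"traffic in 2021: ~{traffic_zb:.2f} ZB/yr")       # ~3.05 -> "about 3 zettabytes"

# Deployed IoT devices: 6.8 billion today, plus ~10 billion per year on average.
devices_bn = 6.8 + 5 * 10
print(f"devices in 2021: ~{devices_bn:.0f} billion")

# Connectivity: 46% of the population today, growing ~5% per year (relative).
online_pct = 46 * 1.05 ** 5
print(f"population online in 2021: ~{online_pct:.0f}%")  # ~59% -> "about 60%"
```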

The Snapshots

San Francisco  We'll drop in on the life of Samantha C for a few hours of her day in the spring of 2021 to see how IoT interacts with her life. Sam is a single professional who lives in a flat in the Noe Valley district. She works for a financial firm downtown that specializes in pricing and trading 'information commodities' – an outgrowth of online advertising, now fueled by the enormous amount of data that IoT and other networks generate.

San Francisco and the Golden Gate Bridge

Sam's alarm app is programmed to wake her between 4:45 and 5:15 AM, based on sleep-pattern info received from the wrist band she put on before retiring the night before. (The financial day starts very early, but she's done by 3 PM.) As soon as the app plays the waking melody, the flat's environment is signaled that she is waking. Lighting and temperature are adjusted and the espresso machine is turned on to preheat. A screen in the dressing area displays the weather prediction to aid in clothing selection. After breakfast she simply walks out the front door; the flat environment automatically turns off lights and heat, checks the perimeter and arms the security system. A status signal is sent to her smartphone. As San Francisco has one of the best public transport networks in the nation, only a few blocks' walk is needed before boarding an electric bus that takes her almost to her office.
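A sketch of how such a morning chain might be wired together: a tiny publish/subscribe bus where the alarm app is just one more publisher. Every topic and device name below is invented for illustration; no particular home-automation product is implied.

```python
from collections import defaultdict

class HomeBus:
    """Minimal publish/subscribe hub for the flat's devices."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.handlers[topic]:
            handler(event)

bus = HomeBus()
bus.subscribe("resident/waking", lambda e: print(f"{e['time']} lights: warm-up"))
bus.subscribe("resident/waking", lambda e: print(f"{e['time']} heat: 21 C"))
bus.subscribe("resident/waking", lambda e: print(f"{e['time']} espresso: preheat"))
bus.subscribe("resident/left", lambda e: print(f"{e['time']} security: armed, status -> phone"))

# The alarm app picks the wake moment from the wristband's sleep-phase data,
# then everything downstream reacts to the same event.
bus.publish("resident/waking", {"time": "04:52"})
bus.publish("resident/left", {"time": "05:40"})
```

The point of the design is that no device needs to know about any other: the alarm publishes one event, and the lights, heat and espresso machine each subscribe independently.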


Traffic, which as late as 2018 was often a nightmare during rush hours, has markedly improved since the implementation in 2019 of a complete ban on private vehicles in the downtown and financial districts. Only autonomous vehicles, taxis/Ubers, small delivery vehicles and city vehicles are allowed. There is no longer any street parking required, so existing streets can carry more traffic. Small ‘smart cars’ quickly ferry people from local BART stations and other public transport terminals in and out of the congestion zone very efficiently. All vehicles operating in the downtown area must carry a TSD (Traffic Signalling Device), an IoT sensor and transmitter package that updates the master traffic system every 5 seconds with vehicle position, speed, etc.
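One can guess at what a 5-second TSD heartbeat might carry on the wire; the field names and JSON encoding below are assumptions for illustration, not a published specification.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TSDReport:
    """One position update from a vehicle's Traffic Signalling Device."""
    vehicle_id: str
    lat: float
    lon: float
    speed_kmh: float
    heading_deg: float
    timestamp: float

def heartbeat(report: TSDReport) -> str:
    # Serialize the update for the master traffic system; sent every 5 seconds.
    return json.dumps(asdict(report))

print(heartbeat(TSDReport("SF-TAXI-0042", 37.7909, -122.4017, 28.5, 145.0, time.time())))
```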


As Samantha enters her office building, her phone acquires the local WiFi signal (but she’s never been out of range, SF now has blanket coverage in the entire city). As her phone logs onto the building network, her entry is noted in her office, and all of her systems are put on standby. The combination of picolocation, enabled through GPS, proximity sensors and WiFi hub triangulation – along with a ‘call and response’ security app on her phone – automatically unlocks the office door as she enters just before 6AM (traders get in before the general office staff). As she enters her area within the office environment the task lighting is adjusted and the IT systems move from standby to authentication mode. Even with the systems described above, a further authentication step of a fingerprint and a voice response to a random question (one of a small number that Sam has preprogrammed into the security algorithm) is required in order to open the trading applications.
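The entry sequence stacks several weak signals into one strong gate. A hypothetical sketch follows; the two-of-three rule, function names and challenge data are all my own assumptions.

```python
import random

# Questions Sam preprogrammed into the security algorithm (values made up).
CHALLENGES = {"first concert?": "warehouse 1999", "childhood street?": "elm"}

def picolocation_ok(gps: bool, proximity: bool, wifi_triangulation: bool) -> bool:
    # Require at least two of three independent position fixes to agree.
    return sum([gps, proximity, wifi_triangulation]) >= 2

def trading_unlock(fingerprint_ok: bool, question: str, spoken_answer: str) -> bool:
    # Fingerprint plus the voice answer to one random preprogrammed question.
    return fingerprint_ok and CHALLENGES[question] == spoken_answer.lower()

if picolocation_ok(gps=True, proximity=True, wifi_triangulation=False):
    print("door unlocked; IT systems -> authentication mode")
    q = random.choice(list(CHALLENGES))   # random challenge question
    answer = CHALLENGES[q]                # simulate Sam answering correctly
    print("trading apps:", "open" if trading_unlock(True, q, answer) else "locked")
```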

San Francisco skyline

The information pricing and trading firm for which Sam works is an economic outgrowth of the massive amount of data that IoT has created over the last 5 years. The firm aggregates raw data, curates and rates it, packages the data into various types and timeframes, etc. Almost all this ‘grunt work’ is performed by AI systems: there are no clerks, engineers, financial analysts or other support staff as would have been required even a few years ago. The bulk of Sam’s work is performed with spoken voice commands to the avatars that are the front end to the AI systems that do the crunching. Her avatars have heuristically learned over time her particular mannerisms, inflections of voice, etc. and can mostly intuit the difference between a statement and question just based on cadence and tonal value of her voice.

This firm is representative of many modern information brokerage service providers: with a staff of only 15 people they trade data drawn from over 5 billion distinct data sources every day, averaging a trade volume of $10 million per day. The clients range across advertising, utilities, manufacturing, traffic systems, agriculture, logistics and many more. Some of the clients are themselves other 'info-brokers' that further repackage the data for their own clients; others are direct consumers of the data. The data from IoT sensors is most often already aggregated to some extent by the time Sam's firm gains access to it, but some of the data is fed directly to their harvesting networks – which often sit on top of the functional networks for which the IoT systems were initially designed. A whole new economic model has been born, where the cost of implementing large IoT networks is partially funded by the resale of the data to firms like Samantha's.

Transportation Network

We’ll leave Sam in San Francisco as she walks down Bush Street for lunch, still not quite used to the absence of noise and diesel smoke of delivery trucks, congested traffic and honking taxis. The relative quiet, disturbed only by the white noise emitters of the electric vehicles (only electrics are allowed in the congestion area in SF), allows her to hear people, gulls and wind – a city achieving equilibrium through advanced technology.


Kigali  This relatively modern city might surprise those who think of Rwanda only as "The Land of a Thousand Hills", with primeval forests inhabited by chimpanzees and bush people. For this snapshot, we'll visit Sebahive D, a senior manager in public transport for the city of Kigali (the capital of Rwanda). He has worked for the city his entire professional life, and is enthusiastic about the changes that are occurring as a result of the significant deployment of IoT throughout the city over the last few years. As his name means "Bringer of Good Fortune", Sebahive is well positioned to help enable an improved transport environment for the Rwandans living in Kigali.

Kigali – this is also Africa…

Even though Kigali is a very modern city by African standards, with a skyline that belies a city of just over a million people in a country that has been 'reborn' in many ways since the horrific times of 1994, many challenges remain. One of the largest is common to much of Africa: reliable urban transport. Very few people own private cars (there were only 21,000 cars in the entire country as of 2012, the latest year for which accurate figures were available), so the vast majority of people depend on public transport. The minibus taxi is the most common mode of transport, accounting for over 50% of all public transport vehicles in the country. Historically, they operated in a rather haphazard manner, with no schedules and flexible routes. Typically the taxis would just drive on routes that had proved over time to offer many passengers, hooting to attract riders and stopping wherever and whenever the driver decided. Roadworthiness, the presence of a driving license and other such basics were often optional…

Kigali city center on the hill

We'll join Sebahive as he prepares his staff for a meeting with Toyota, which has come to Kigali to present information on its new line of "e-Quantum" minibus taxis. These vehicles are gas/electric hybrids, with many of the same features that fully autonomous vehicles currently in use in Japan possess. The infrastructure, roads, IT networks and other basic requirements in Kigali (and most of the rest of Africa) are insufficient to support fully autonomous vehicles at this time. However, a 'semi-autonomous' mode has been developed, using sophisticated on-board computers supplemented by an array of inexpensive IoT devices on roads, bus stops, buildings, etc. This "SA" (Semi-Autonomous) mode, as differentiated from an "FA" (Fully-Autonomous) mode, acts a bit like an autopilot or a very clever 'cruise control'. When activated, the vehicle will maintain the speed at which it was travelling when switched on, and will use sensors on the exterior of the minibus, as well as data received from roadside sensors, to keep the vehicle in its lane and not too close to other vehicles. The driver is still required to steer, and tapping the brake will immediately give full control back to the vehicle operator.


Rather than the oft-hazardous manner of ‘taxi-hailing’ – which basically means stepping out into traffic and waving or whistling – many small IoT sensor/actuators (that are solar powered) are mounted on light poles, bus stop structures, sides of buildings, etc. Pressing the button on the device transmits a taxi request via WiFi/WiMax to the taxi signalling network, which in turn notifies any close taxis of a passenger waiting, and the location is displayed on the dashboard mapping display. A red LED is also illuminated on the transmitter so the passenger waiting knows the request has been sent. When the taxi is close (each taxi is constantly tracked using a combo IoT sensor/transceiver device) the LED turns green to notify the passenger to look for the nearby taxi.
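The hail point reduces to a tiny state machine: a press transmits a request and lights the red LED; the approach of a nearby tracked taxi flips it green. A toy version follows, with an invented message format and network layer.

```python
from enum import Enum, auto

class LED(Enum):
    OFF = auto()
    RED = auto()     # request transmitted, waiting
    GREEN = auto()   # assigned taxi is close

class TaxiNetwork:
    def broadcast(self, msg: dict) -> None:
        # In reality this would notify nearby tracked taxis over WiFi/WiMax.
        print(f"uplink: {msg}")

class HailPoint:
    """Solar-powered button mounted on a light pole or bus stop."""
    def __init__(self, stop_id: str):
        self.stop_id = stop_id
        self.led = LED.OFF

    def press(self, network: TaxiNetwork) -> None:
        network.broadcast({"type": "hail", "stop": self.stop_id})
        self.led = LED.RED

    def on_taxi_near(self) -> None:
        self.led = LED.GREEN

stop = HailPoint("KGL-lightpole-118")
stop.press(TaxiNetwork())
print(stop.led)      # LED.RED while waiting
stop.on_taxi_near()
print(stop.led)      # LED.GREEN when the taxi approaches
```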

The relatively good IT networks in Kigali make the taxi signalling network possible. One of the fortuitous aspects of local geography (the city is essentially built on four large hills) is that a very good wireless network was easy to establish, thanks to the overlooking locations. Although he is encouraged by the possibility of a safer and more modern fleet of taxis, Sebahive is experienced enough to wonder about the many challenges that just living in Africa offers… power outages, the occasional torrential rains, vandalism of public infrastructure, etc. Although there are only about 2,500 minibus taxis in the entire country, it often seems like most of them are in the suburb of Kacyiru, Gasabo district (where the presidential palace and most of the ministries, including Sebahive's office, are located) at rush hour. An IoT solution that keeps taxis, motorcycles (the single most common conveyance in Rwanda), pedestrians and very old diesel lorries from turning a roadway with lanes into an impenetrable morass of… everything… has yet to be invented!

IT Center in suburban Kigali

Another aspect of technology, assisted by IoT, that is making life simpler, safer and more efficient is cellphone-based payment systems. With almost everyone having a smartphone today, and even the most unschooled having learned how to purchase airtime, electricity and other basic utilities and enter those credits into a phone or smart meter, the need to pay cash for transport services is fast disappearing. Variations on SnapScan, NFC technology, etc. all offer rapid and mobile payment methods in taxis or buses, speeding up transactions and reducing the opportunity for street theft. One of the many things in Sebahive’s brief is the continual push to get more and more retail establishments to offer the sale of transport coupons (just like airtime or electricity) that can be loaded into a user’s cellphone app.

IoT in Africa is a blend of modern technology with age-old customs, with a healthy dose of reality dropped in…


Tihi  Ravi Sham C. is a soybean farmer in one of the poorest areas of rural India, the small village of Tihi in the central Indian state of Madhya Pradesh. However, he's a sanchalak (lead farmer) with considerable IT experience relative to his environment, having used a computer and online services since 2004 – some 17 years now. Ravi started his involvement with ITC's "e-Choupal" service back then, and was able for the first time to gain knowledge of world-wide commodity prices, rather than be at the mercy of the often unscrupulous middlemen who ran the "mandis" (physical marketplaces) in rural India. These traders would unfairly pay as little as possible to the farmers, who had no knowledge of the final selling price of their crops. The long-standing cultural, caste and other barriers to free trade in India did not help the situation either.

Indian farmers tilling the earth in Tihi

Although the first decade of internet connectivity greatly improved the lives and profitability of Ravi and the other farmers in his group area, the last few years (from 2019 onwards) have seen a huge jump in productivity. The initial period was one of knowledge enhancement: becoming aware of the supply chain, learning pricing and distribution costs, getting good weather forecasting, etc. The actual farming practice, however, wasn't much changed from a hundred years ago. With electricity in scarce supply, almost no motorized vehicles or farm equipment, and light basically supplied by the sun, real advances toward modern farming were not easily feasible.

As India is making a massive investment in IoT, particularly in the manufacturing and supply chain sectors, an updated version of the “e-Choupal” was delivered to Ravi’s village. The original ‘gathering place’ was basically a computer that communicated over antiquated phone lines at very low speed and mostly supported text transmissions. The new “super-Choupal” is a small shipping container that houses several computers, a small server array with storage and a set of powerful WiFi/WiMax hubs. Connectivity is provided by a combination of the BBNL (Bharat Broadband Network Limited) service supported by the Indian national government, which provides fiber connectivity to many rural areas throughout the country, and a ‘Super WiFi’ service using Microsoft White Spaces technology (essentially identifying and taking advantage of unused portions of the RF spectrum in particular locations [so-called “white spaces”]) to link the super-Choupal IT container with the edge of the fiber network.

Power for the container comes from a large solar array on its roof, supplemented by fuel cells. As an outgrowth of Intelligent Energy’s deal with India to provide backup power to most of the country’s rural off-grid cell towers (replacing expensive diesel generators), there has been a marked increase in the availability of hydrogen as a fuel-cell energy source. The fuel is delivered as a replaceable cartridge, minimizing transport and safety concerns. Since the super-Choupal now serves as a micro datacenter, Ravi spends more of his time running the IT hub, training other farmers and maintaining/expanding the IoT network than farming. Along with the container hub, hundreds of small soil and weather sensors have been deployed across the surrounding village farms, giving accurate information on where and when to irrigate (a minimal sketch of such a decision follows below). In addition, the local boreholes are now monitored for toxic levels of chemical and other pollutants. The power supplies that run the container also provide free electricity for locals to charge their cellphones.
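To make the irrigation guidance concrete, here is a minimal sketch of how a hub like the super-Choupal might turn raw soil-moisture readings into a list of fields needing water. All names, thresholds and the reading format are illustrative assumptions, not the actual e-Choupal software.

```python
# A minimal sketch of an irrigation decision derived from village soil sensors.
# Field names, thresholds and the reading format are assumptions for illustration.

from statistics import mean

# Hypothetical readings: one dict per sensor report over the village wireless mesh
readings = [
    {"field": "ravi-plot-3", "moisture_pct": 14.2, "temp_c": 31.5},
    {"field": "ravi-plot-3", "moisture_pct": 15.1, "temp_c": 30.9},
    {"field": "ravi-plot-7", "moisture_pct": 27.8, "temp_c": 30.2},
]

MOISTURE_THRESHOLD_PCT = 18.0  # assumed crop-specific trigger level

def fields_needing_irrigation(readings, threshold=MOISTURE_THRESHOLD_PCT):
    """Average each field's soil-moisture readings; flag fields below threshold."""
    by_field = {}
    for r in readings:
        by_field.setdefault(r["field"], []).append(r["moisture_pct"])
    return [f for f, vals in by_field.items() if mean(vals) < threshold]

print(fields_needing_irrigation(readings))  # -> ['ravi-plot-3']
```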

As each farmer harvests their crops, the soybeans, maize, etc. are bagged and then tagged with small passive IoT devices that indicate the exact type of product, amount, date packed, agreed-upon selling price and tracking information. This becomes the starting point for the supply chain, and the goods can be monitored all the way from local distribution to eventual overseas markets. The farmers can now essentially sell online, and receive electronic payment as soon as the product arrives at the local distribution hub. The lost days of each farmer physically transporting their goods to the “mandi” – and getting ripped off by greedy middlemen – are now in the past. A cooperative collection scheme sends trucks around to each village center, where the IoT-tagged crops are scanned and loaded, with each farmer immediately seeing a receipt for goods on their cellphone. The cost of the trucking is apportioned by weight and distance and billed against the proceeds of the sale of the crops. The distributor can see in almost real time where each truck is, and estimate with confidence how much grain can be promised per day from the hub.
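As an illustration of the tag contents just described, the following sketch models the kind of record a crop tag might carry – product type, weight, packing date, agreed price and origin. The field names, the JSON encoding and the tag-ID scheme are assumptions for illustration; a real passive tag would use a far more compact binary format.

```python
# A sketch of the record a passive crop tag might carry, based on the fields
# named above. Encoding and ID scheme are illustrative assumptions.

import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class CropTag:
    tag_id: str            # unique ID encoded on the physical tag (assumed format)
    product: str           # e.g. "soybean"
    weight_kg: float
    packed_on: str         # ISO date, e.g. "2021-03-14"
    agreed_price_inr: float
    origin_village: str    # starting point of the supply chain

    def to_json(self) -> str:
        return json.dumps(asdict(self))

bag = CropTag("IN-MP-TIHI-000123", "soybean", 50.0,
              date.today().isoformat(), 1950.0, "Tihi")
print(bag.to_json())
```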

The combination of improved farming techniques, knowledge, fair pay for the crops and rapid payment has more than tripled the incomes of Ravi and his fellow farmers over the past two years. While this may seem like a drop in the bucket of international wealth (an increase from $1.12 per day to $3.50 per day is hard to appreciate by first-world standards), the difference on the ground is huge. There are over a billion Ravis in India…


This concludes the series on Internet of Things – a continually evolving story. The full series is available as a downloadable PDF here. Queries may be directed to ed@parasam.com

References:

Inside the Tech Revolution That Could Be Rwanda’s Future

Rwanda information

Republic of Rwanda – Ministry of Infrastructure Final Report on Transport Sector

IoT in rural India

Among India’s Rural Poor Farming Community, Technology Is the Great Equalizer

ITC eChoupal Initiative

India’s Soybean Farmers Join the Global Village

A Development Chronology of Tihi

Connecting Rural India : This Is How Global Internet Companies Plan To Disrupt

Bharat Broadband Network

Intelligent Energy’s Fuel Cells

IoT (Internet of Things): A Short Series of Observations [pt 6]: The Disruptive Power of IoT

May 19, 2016 · by parasam

Tsunamis, Volcanoes, Cellphones and Wireless Broadband – Disruptive Elements

Natural disruptions change the environment, and although a new equilibrium is achieved, nothing is quite the same. In recent history, the advent of the cellphone changed most of humanity, allowing a level of communication and cohesion that was never before possible. Following on that is the ever-increasing availability of sufficient wireless bandwidth to enable powerful distributed computing. We must not lose sight of the fact that with smartphones, we all now walk around with mobile computers that happen to also make phone calls… For comparison, an Apple iPhone 6 is roughly 120 million times more capable (in terms of total memory and CPU clock speed) than the computers that sent men to the moon less than 50 years ago.


The long-term disruptive effect of IoT in the next decade will eclipse any other technological revolution in history. Period. To be more precise, the combination of technologies that will encompass IoT will form the juggernaut that propels this massive disruption. These include IoT itself (the devices and directly interconnecting network fabric), AI (Artificial Intelligence), VR/AR (Virtual Reality / Augmented Reality) and DCT (Distributed Cloud Technology). Each of these technologies is rapidly maturing on its own, but they are more or less interdependent, and will collectively construct a layer of intelligence, awareness and responsiveness that will forever change how humans interact with the physical world and each other.


The number of interconnected electronic devices is expected to outnumber the total population of the planet within a year, and to exceed the number of people by over 10:1 within 7 years. In mysticism and philosophy we used to think the Akashic Records (the complete record of all human thought and emotion, recorded on some astral plane) were science fiction or the result of ingesting controlled substances… now we know this as Google… With virtually every moment of our lives recorded on Instagram, Facebook, etc., and the capacity of both storage and processing making possible the search and derivation of new data from all these memories, a new form of life is developing. Terminology such as avatars, virtual presence and CAL (Computer Aided Living) is fast becoming part of our normal lexicon.

One of the most enduring tests of when a certain technology has thoroughly disrupted an existing paradigm is that of expectation. An example: if a person is blindfolded and taken to an unknown location, then put in a dark room and the blindfold removed, what happens? That person will almost immediately start feeling along the wall about 1.5 meters off the floor for a small protrusion, and on finding it will push or flick it, with the complete expectation that light will result. The expectation of electricity, infrastructure, light switches, lamps, etc. has become so ingrained that this action will occur for a person of any language or culture, unless they live in one of the very few isolated communities left off the grid.

IoT as the Most Powerful Disruptor

Cellphones, now available to 97% of humanity, along with wireless broadband connectivity (46% of the world has such connectivity today), are two of the most recent major disruptive elements in technology. All businesses have had to adapt, and entirely new processes and economies have resulted. The changes that have resulted from just these two things will pale in comparison to what the IoT ecosystem will cause. There are multiple reasons for this:

  • The passive nature of IoT may be the single largest formative factor in large-scale disruption. All previous technologies have required active choice on the part of the user: pick up your phone, type on your computer, turn on your stereo, press a light switch, etc. With IoT, your presence alone, or the interaction of inanimate objects (such as freight, plants, buildings, etc.), will generate data and create new information objects that can be searched, acted upon, etc.
  • The ubiquitous nature of IoT will be such that virtually every person and thing that exists will interact in some way with at least a small portion of the IoT ecosphere. In a highly connected urban center, the penetration of IoT will be so dense that all activity of people and things will reverberate in the IoT universe as well. To take a quick sample of what is very likely within one year in a city such as Johannesburg, Berlin or New York: a device density approaching 50 per square meter.
  • The almost ‘perfect storm’ of a number of collaborative technologies, including IoT itself, that build on each other and exponentially increase each other’s capabilities. The proliferation of low-latency, high-bandwidth network fabric; the availability of HPC (High Performance Computing) on a massive and economical scale (as provided by Amazon, Google, Microsoft, etc.); the development of truly spectacular applications in AI, VR and AR; and the diffusion of compute power into the network itself (DCT – Distributed Cloud Technology) together build almost a chain reaction of performance.
  • Hyperconnectivity – the aspect of massively interconnected data stores, compute farms, sensor fabrics, etc. The re-use of data will explode, and data will most likely become a commodity – perhaps a new economic entity where large blocks of particular types of data are traded much as wheat futures are today on a commodity exchange. An example: a large array of temperature, humidity and soil-water-tension sensors is installed by a farming collective in order to better manage its irrigation process. That data, as well as being used locally, is uploaded to the farming corporation’s data center to be processed as part of its larger business activity. Very likely that data, perhaps anonymized to some degree, will be ‘sold’ to any other data consumer that wants weather and soil data for that area (a minimal sketch of such an anonymization step follows this list). The number of times this data will be repackaged and reused will multiply to the point that it will be impossible to track with absolute precision.
  • Adding to the notion of ‘passive engagement’ discussed above is the ingredient of ‘implied consent’ that will add millions of data points every hour to the collective ‘infosphere’ abstracted from the actual device layer of IoT. For instance, soon, when you enter your car (autonomous or human-driven), the vehicle will automatically connect to the traffic management network in your region. This will not be optional; it will be a requirement just like having a license to drive, or the car having working safety features such as airbags and brakes. Your location, speed, etc. will become part of the collective data fabric of the transport sector. Your electricity usage will be monitored by the smart meter that links your home to the grid, and your consumption, on a moment-to-moment basis, will be transmitted to the electric utility… and to whomever is buying that data onward from the utility.
  • The privacy and security aspects of this massively shared amount of data have been discussed previously, but should be understood here to add to the disruptive nature of this technology. Whatever fragments of perception of privacy one had to date must be retired along with kerosene lanterns, horse-drawn buggies and steam engines. Perhaps someday we will go to ‘privacy museums’ which will depict situations and tableaus of times past where one could move, speak and interact with no one else knowing…
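As a concrete illustration of the data-commodity bullet above, here is a minimal sketch of the anonymization step a farming collective might apply before selling sensor data onward. The field names and the coarsening rule (rounding coordinates to roughly 1 km) are assumptions, not any particular vendor’s practice.

```python
# A sketch of anonymizing a farm sensor record before resale as a commodity.
# Field names and the coarsening rule are illustrative assumptions.

def anonymize(record: dict) -> dict:
    """Drop owner-identifying fields and coarsen location before resale."""
    return {
        "lat": round(record["lat"], 2),   # 0.01 degrees is roughly 1 km
        "lon": round(record["lon"], 2),
        "soil_moisture_pct": record["soil_moisture_pct"],
        "temp_c": record["temp_c"],
        "humidity_pct": record["humidity_pct"],
        # deliberately omitted: farm_id, owner, exact sensor coordinates
    }

raw = {"farm_id": "coop-0042", "owner": "Example Farming Collective",
       "lat": -26.20410, "lon": 28.04731,
       "soil_moisture_pct": 21.3, "temp_c": 24.8, "humidity_pct": 61.0}
print(anonymize(raw))
```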

The Results and Beneficiaries of the IoT Disruption

As with each technological sea change before it, the world will adapt. The earth won’t stop turning on its axis, and the masses won’t storm the castles (well, not unless their tech stops working just when they expect it to…). Ten years from now, once we have come to appreciate, expect and benefit from the reduced friction of living in a truly connected and hyperaware universe, we will wonder how we ever got along in the prehistoric age. Even now, as phone booths have almost completely disappeared from the urban landscape, we can hardly imagine life before cellphones.

Yes, the introductory phase, as with many earlier technologies, will be plagued with frustrations, disappointments, failures and other speed bumps on the way to a robust deployment. As this technology, in its largest sense, will have the most profound effect on humanity in general, we must expect a long implementation timeframe. Many moral, ethical, legal and regulatory issues must be confronted, and this always takes much, much longer than the underlying technology itself to resolve. Due to the implications of privacy, data ownership, etc. – on such a massive scale – entirely new constructs of both law and economics will be born.

In terms of economic benefit, the good news is that this technology is far too diffuse and varied for any small group of firms to control, patent or otherwise exercise significant ‘walled garden’ control over. While there is much posturing right now from the large industrial firms that will likely manufacture IoT devices, and from the Big Four of IT (Google, Amazon, Microsoft, Facebook), none of these will be able to put a wall around IoT. Partially due to the very international nature of IoT, the ubiquity and breadth of sensor/actuator types, and the highly diffuse use and reuse of data, IoT will rapidly become a commodity.

We will certainly need standards, regulations and other constructs in order for the myriad of players to effectively communicate and interact without undue friction, but this has been true of railroads, telephones, highways, shipping, etc. for centuries. The beneficiaries will therefore be spread out massively over time. All humans will benefit in some manner, as will most businesses of almost any type. Ten years on, many small businesses may never directly make a specific investment in IoT, but this technology will be embedded in everything they do, from ordering stock to transport and sales.

Like other major innovations before it, IoT will ultimately become just part of the fabric of life for humanity. The challenge is right now, during the formative years, to match the physical technology with concomitant economic, legal and ethical guidelines so that this technology is implemented in the best possible way for all.


The final section of this post “A Snapshot of an IoT-connected World in 2021” may be found here.

IoT (Internet of Things): A Short Series of Observations [pt 5]: IoT from the Business Point of View

May 19, 2016 · by parasam

IoT from the Business Perspective

While much of the current reporting on IoT describes how life will change for the end user / consumer once IoT matures and many of the features and functions that IoT can enable have deployed, the other side of the coin is equally compelling. The business of IoT can be broken down into roughly three areas: the design, manufacture and sales of the IoT technology; the generalized service providers that will implement and operate this technology for their business partners; and the ‘end user’ firms that will actually use this technology to enhance their business – whether that be in transportation, technology, food, clothing, medicine or a myriad of other sectors.

The manufacture, installation and operation of billions of IoT devices will be expensive in its totality. The only reason this will happen is that overall a net positive cash flow will result. Business is not charity, and no matter how ‘cool’ some new technology is perceived to be, no one is going to roll this out for the bragging rights. Even at this nascent stage the potential results of this technology are recognized by many different areas of commerce as such a powerful fulcrum that there is a large appetite for IoT. The driving force for the entire industry is the understanding of how goods and services can be made and delivered with increased efficiency, better value and lower friction.


As the whole notion of IoT matures, several aspects of this technology that must be present initially for IoT to succeed (such as an intelligent network, as discussed in prior posts in this series) will benefit other areas of the general IT ecosystem, even those not directly involved with IoT. Distributed and powerful networks will enhance ‘normal’ computational work, reduce loads on centralized data centers and in general provide a lower latency and improved experience for all users. The concept of increased contextual awareness that IoT technology brings can benefit many current applications and processes.

Even though many of today’s sophisticated supply chains are largely automated and otherwise interwoven with IT, many still have significant silos of ‘darkness’ where either there is no information, or processes must be performed by humans. For example, the logistics of importing furniture from Indonesia are rife with handoffs, instructions and commercial transactions that are verbal or at best handwritten notes. The fax is still ‘high technology’ in many pieces of this supply chain, and exactly what ends up in any given container – or even exactly which ship it’s on – is still often a matter of guesswork. IoT tags that are part of the original order (a retailer in Los Angeles wants 12 bookcases) can be encoded locally in Indonesia and delivered to the craftsperson, who attaches one to each completed bookcase. The items can then be tracked during the entire journey, providing everyone involved – local truckers, dockworkers, customs officials, freight security, aggregation and consignment through truck and rail in the US, etc. – with greater ease and efficiency of operations.
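A hypothetical sketch of the chain-of-custody idea in that example: each handoff appends a scan event to the item’s history as the tag is read. The event fields, handler names and locations are illustrative assumptions, not a real logistics API.

```python
# A sketch of chain-of-custody tracking: one scan event appended per handoff.
# Event fields and locations are assumptions for illustration.

from datetime import datetime, timezone

def record_scan(history: list, location: str, handler: str, status: str) -> None:
    """Append one scan event as the tagged item passes through a handoff."""
    history.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "location": location,
        "handler": handler,
        "status": status,
    })

bookcase_history = []  # starts when the tag is attached in Indonesia
record_scan(bookcase_history, "Jepara workshop", "craftsperson", "tagged")
record_scan(bookcase_history, "Semarang port", "dockworker", "loaded-container")
record_scan(bookcase_history, "Port of Los Angeles", "customs", "cleared")
for event in bookcase_history:
    print(event)
```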

As IoT is in its infancy at this stage, it’s interesting to note that the largest amount of traction is in the logistics and supply chain parts of commerce. The perceived functionality of IoT is so high, and the risk from early-adopter malfunction relatively low, that many supply chain entities are jumping on board, even with some half-baked technology. As mentioned in an earlier article, temperature variation during transport is the single highest risk factor in the international delivery of wine. IoT can easily provide end-to-end monitoring of the temperature (and vibration) of every case of wine at an acceptable cost. The identification of suspect cases, and the attribution of liability to the carriers, will improve quality, lower losses and drive reforms in delivery firms seeking to avoid future liability for spoiled wine.
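A minimal sketch of that end-to-end temperature check: scan a case’s temperature log for excursions outside a safe band. The threshold values and log format are assumptions; a real system would also track vibration and attribute each excursion to a specific carrier leg.

```python
# A sketch of flagging temperature excursions for a case of wine in transit.
# The safe band and log format are illustrative assumptions.

SAFE_RANGE_C = (10.0, 20.0)  # assumed safe transport band for wine

def excursions(temp_log, low=SAFE_RANGE_C[0], high=SAFE_RANGE_C[1]):
    """Return the (hour, temp) samples that fall outside the safe band."""
    return [(h, t) for h, t in temp_log if t < low or t > high]

# (hour-offset, degrees C) samples from one case's tag during shipment
log = [(0, 14.1), (6, 15.0), (12, 23.4), (18, 24.0), (24, 16.2)]
bad = excursions(log)
if bad:
    worst = max(t for _, t in bad)
    print(f"case suspect: {len(bad)} out-of-range samples, worst {worst} C")
```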

As with many ‘buzzwords’ in the IT industry, it will be incumbent on each company to determine how IoT fits (or does not) within that firm’s product or service offerings. This technology is still in the very early stages of significant implementation, and many regulatory, legal, ethical and commercial aspects of how IoT will interact within the larger existing ecosystems of business, finance and law have yet to be worked out. Early adoption has advantages but also risk and increased costs. Rational evaluation and clear analysis will, as always, be the best way forward.

The next section of this post “The Disruptive Power of IoT” may be found here.

IoT (Internet of Things): A Short Series of Observations [pt 4]: IoT from the Consumer’s Point of View

May 19, 2016 · by parasam

Functional IoT from the Consumer’s Perspective

The single largest difference between this technology and most others that have come before – along with the requisite hype, news coverage, discussion and confusion – is that almost without exception the user won’t have to do anything to participate in this ‘new world’ of IoT. All previous major technical innovations have required either purchasing a new gadget, or making some active, conscious choice to participate in some way. Examples include getting a smartphone, a computer, a digital camera, a CD player, etc. Even if sometimes the user makes an implicit choice to embrace a new technology (such as a digital camera instead of a film camera) there is still an explicit act of bringing this different thing into their lives.

With IoT, almost every interaction with this ecosystem will be passive – i.e. it will not involve conscious choice by the consumer. While the benefits of IoT will directly affect the user, and in many cases will be part of other interactions with the technosphere (home automation, autonomous cars, smartphone apps, etc.), the IoT aspect stays in the background. The various sensors, actuators and network intelligence that make all this work may never directly enter a user’s awareness. The fabric of one’s daily life will simply become more responsive, more intelligent and more contextually aware.

During the adoption phase, while the intelligence and accuracy of sensors, actuators and the software that interprets their data are still maturing, we can expect hiccups. Some of these will be laughable, some frustrating – and some downright dangerous. Good controls, security and common sense will need to prevail to ensure that this new technology is implemented correctly. Real-time location information can be reassuring to a parent whose young children are walking to school – and yet if that data is not protected, or is hacked, it can inform others who may have far darker intentions in mind. We will collectively experience ‘double-booked’ parking spaces (where smart parking technology sometimes gets it wrong), refrigerators that order vodka instead of milk when product tracking goes haywire, and so on. The challenge is that the consumer will have far less knowledge, or information, about what went wrong and whom to contact to sort it out.

When your weather app is consistently wrong, you can contact the app vendor; if the underlying data is wrong, the app maker can approach the weather data provider. But when a liter of vodka shows up in your shopping delivery instead of a liter of milk, is it the sensor in the fridge, the data transmission, an incorrectly coded tag on the last liter of milk consumed, the back-office software in the data collection center, or the picking algorithm in the online shopping store? The number of possible points of malfunction in the IoT universe is simply enormous, and considerable effort will be required to find the root cause of each error.

A big part of a successful rollout of IoT will be a very sophisticated fault-analysis layer that extends across the entire ecosystem. This again is why the IoT network itself must be so intelligent for things to work correctly. For data to be believed by upstream analysis, correctly integrated into a knowledge-based ecosystem and correctly acted upon, a high degree of contextual awareness and a ‘range of acceptable data/outcomes’ must be built into the overall network. When anomalies show up, the fault-detection layer must intervene. Over time, the heuristic learning capability of many network elements may actually be able to correct for bad data, but at minimum, suspect data must be flagged and not blindly acted upon.
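A toy sketch of such a ‘range of acceptable data’ check: a network node validates each reading against plausible physical bounds and flags, rather than silently drops, anything suspect. The bounds and field names are assumptions for illustration.

```python
# A sketch of a plausibility check in a fault-detection layer: readings outside
# physically plausible bounds are flagged as suspect, never silently corrected.
# Bounds and field names are illustrative assumptions.

PLAUSIBLE = {"temp_c": (-40.0, 60.0), "humidity_pct": (0.0, 100.0)}

def validate(reading: dict) -> dict:
    """Mark a reading 'suspect' if any field falls outside plausible bounds."""
    flags = [k for k, (lo, hi) in PLAUSIBLE.items()
             if k in reading and not lo <= reading[k] <= hi]
    return {**reading, "suspect": bool(flags), "flags": flags}

print(validate({"sensor": "w-17", "temp_c": 131.0, "humidity_pct": 55.0}))
# -> flagged as suspect; downstream systems can ignore or investigate it
```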

A big deal was recently made over Viv (from the creators of Siri) managing to correctly order and deliver a pizza via voice recognition and AI (Artificial Intelligence). This type of interaction will fast become the norm in an IoT-enabled universe. Not all of the perceived functionality will be purely IoT – in many cases the data that IoT provides will supplement more traditional inputs (voice, keyboard, thumbpresses, fingerswipes, etc.). The combined data, along with a wealth of contextual knowledge (location, time of day, temperature, etc.), sophisticated algorithms, AI computation and the capability of low-latency, ultra-high-speed networks and compute nodes, will all work together to manifest the desired outcome of an apparently smart surrounding.

The Parallel Universes of IoT Communities

As IoT technology rolls out over the next few years, different cultures and countries, with different priorities and capabilities, will implement these devices and networks in various ways. While the sophistication of a hyperfunctional autonomous BMW driving you to a shop, finding parking and parking itself without any human intervention may be the experience of a user in Munich, a farmer in rural Asia may use a low-complexity smartphone app to read the data from small sensors in local wells to confirm that heavy metals have not polluted the water. If the water is not up to standard, the app may then (with a very low-bandwidth burst of data, as sketched below) inform the regional network that attention is required, and discover where suitable drinking water is available nearby.
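To illustrate what such a low-bandwidth burst might look like, here is a sketch that packs a water-quality alert into eight bytes instead of verbose text. The field layout (well ID, contaminant code, level in parts per billion) is an illustrative assumption, not an established protocol.

```python
# A sketch of a compact water-quality alert for very low-bandwidth links.
# The 8-byte layout and the contaminant code table are illustrative assumptions.

import struct

def pack_alert(well_id: int, contaminant_code: int, level_ppb: int) -> bytes:
    """Pack an alert into 8 bytes: uint32 well, uint16 contaminant, uint16 level."""
    return struct.pack(">IHH", well_id, contaminant_code, level_ppb)

def unpack_alert(payload: bytes):
    """Recover (well_id, contaminant_code, level_ppb) at the regional hub."""
    return struct.unpack(">IHH", payload)

msg = pack_alert(well_id=90412, contaminant_code=3, level_ppb=27)  # 3 = arsenic (assumed)
print(len(msg), unpack_alert(msg))  # 8 bytes on the wire
```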

Over time, the data collected by individual communities will aggregate and provide a continual improvement of knowledge of environment, migration of humans and animals, overall health patterns and many other data points that today often must be proactively gathered by human volunteers. It will take time, and continual work on data grooming, but the quantity and quality of profoundly useful data will increase many-fold during the next decade.

One area of critical usefulness where IoT, along with AI and considerable cleverness in data mining and analysis, can potentially save many lives and economic costs is in the detection of, and early reaction to, medical pandemics. As we have recently seen with bird flu, Ebola and other diseases, rapid transportation systems combined with delayed incubation times can pose a considerable risk to large groups of humanity. Since (in theory) all airline travel, and much train and boat travel, is identifiable and trackable, the transmission vectors of potential carriers could be quickly analyzed if localized data in a particular area began to suggest a medical threat. The early signs of trouble often appear in areas of low data awareness and generation (bird and chicken deaths in rural Asia, for example) – but as IoT improves overall contextual awareness of the environment, such initially unrelated occurrences can be monitored.

The importance and viability of the IoT market in developing economies should not be underestimated: several major firms that specialize in such forecasts (Morgan Stanley, Forbes, Gartner, etc.) predict that roughly a third of all sales in the IoT sector will come from emerging economies. The ‘perfect storm’ of relatively low-cost devices, the continual increase in wireless connectivity and the proliferation of relatively inexpensive but powerful compute nodes (smartphones, intelligent network nodes, etc.) can easily be implemented in areas that just five years ago were thought unreachable by modern technology.

The next section of this post “IoT from the Business Point of View” may be found here.
