
Take Control of your Phone

September 28, 2020 · by parasam

A great deal has been written about the addictive qualities of the smartphone, its intrusion into our daily lives, and the double-edged sword of “free” apps. I won’t repeat any of that here; rather, I’ll offer a short set of solutions to make your phone work for you, instead of for the platforms, ad agencies and data resellers that all too often have made your attention the product.

If you have not seen it, the movie “The Social Dilemma” is a good summary of the issues: https://www.netflix.com/za/title/81254224

The core of the situation is that our phones (and to a lesser extent our tablets and computers) have become a tool for a relatively few large firms to command and hold our attention, which is then used to present the ads that fuel that economic ecosystem. You may have heard terms such as “data is the new oil”, “your data is for sale”, etc. These aphorisms miss the point: what is for sale is your attention; the underlying data about your behavior, and about what is likely to hold your attention, is merely the mechanism.

The software that grabs, and then holds, our attention consists of two main aspects: the User Experience / User Interface (UX/UI) of the device itself (iPhone, Android, etc.), and the design of individual apps (particularly social media such as Facebook, Instagram, Twitter, etc.).

This post only deals with the former: the things one can easily do to reduce the actions, noise, and other programmatic functions of your phone that are designed to trigger a response (to pick up your phone and interact).

I have used the Apple ecosystem (iOS) as the example here, mainly because it is well-known and consistent, while there are a large number of variations on the Android OS, with each hardware manufacturer often tweaking it a bit. However, the principles are exactly the same, and in most cases my suggestions can be duplicated there.

Notifications

This is the animal you need to tame: the blinking, dinging and buzzing that says “Look At Me”; the little red badges that induce the anxiety of FOMO (Fear Of Missing Out)…

To a lesser extent, the layout and organization of the apps on your phone, along with a few other settings, also subtly shape how often you pick the phone up.

Using iOS as the example, open Settings and tap on Notifications. You will see a list of all your apps. Turn off ALL your notifications [the switch that says Allow Notifications]. Perhaps as a deterrent, you cannot switch them all off at once; you must turn each one off individually. I recommend this procedure anyway, as you won’t miss one this way. Turning certain app notifications back on then becomes a conscious decision.

When it comes to turning a notification back on, think hard about what you absolutely have to see without first allowing yourself to be in control: When do you want to check? What do you want to check? Why do you want to check? I recommend only turning notifications on for apps that tell you that people want to connect with you, not things (such as social media, news sites, etc.). For example, in that category here is the list of what is turned on in my phone:

  • Phone
  • FaceTime
  • Messages
  • Signal (an encrypted messaging app)
  • WhatsApp

That’s all! In addition, for the few apps that you do allow to draw your attention, you can modify the behavior of the notification to further lower the level of disturbance. Once the Allow Notifications switch is turned on, the choices listed under Alerts are Lock Screen (which allows the Notification to appear even if your phone is asleep); Notification Center (showing the Alert there); and Banners (which show up at the top of your screen when you are looking at another app).

As a suggestion and example, for WhatsApp I have Lock Screen and Notification Center turned on, but Banners turned off. Here in South Africa WhatsApp is the primary means of text communication, so I do depend on seeing that even on my Lock Screen to know when another person is trying to reach me. But it’s not so vital that my attention needs to be dragged away from answering an e-mail with a banner interrupting me that someone wants to chat on WhatsApp.

If you turn on Banners, I suggest you always use Temporary, as this makes the Banner go away after a few seconds. Otherwise you must further divert your attention to manually dismiss the Banner.

The next group of alert behaviors consists of two switches: Sounds and Badges. Again, be sparing in your use of sound, as it can be quite distracting. I only have Sounds turned on for the Phone app; everything else I can see the next time I look at my phone. Badges are insidious: that little red circle with a count of everything you haven’t yet given your attention. Once you are in an app you will see what you haven’t dealt with anyway, so turn Badges off!

The last section (Options) has one important setting: Show Previews. This has three possibilities: Always (the default setting), When Unlocked, and Never. It shows the first few lines of the message (text, WhatsApp, etc.) – and if Always is selected then, even on your lock screen (for those apps that you have set to alert you there), messages that may be private are displayed for anyone who can see your phone. I set this to either When Unlocked or Never, depending on the app. The remaining setting (Notification Grouping) is fine left on Automatic.

You will notice I have not allowed notifications for e-mail, even though this can be from people. It is far too disturbing and unnecessary to receive alerts for every e-mail.

There are a few apps that are not “people oriented” for which I do allow notifications: mainly security. Here is my list as an example:

  • Buzzer (neighborhood security app)
  • Earthquake
  • Find iPhone
  • LoadShed CT (we have lots of load shedding here in Cape Town)
  • Reminders
  • Weather (for severe weather alerts only)
  • Waze (so I know when to leave for a planned trip to an appointment)

There are three last things that will help tame your phone.

  1. Only put task-oriented apps on the Home Page (Reminders, Calendar, Settings, etc.). Put all other apps on additional pages. Put ALL apps inside folders – this not only helps with organization, it also requires at least three actions to access a social media app such as Facebook: 1) swipe to 2nd page; 2) open folder; 3) select app.
  2. Set the Home Page to monochrome (far less disturbing and distracting). On iPhone this is done by going to Settings/General/Accessibility, scrolling all the way to the bottom of the list and tapping Accessibility Shortcut. Choose Color Filters. Exit Settings. Triple-clicking the Home button will then switch from normal colored icons to gray-scale icons. Try it out…
  3. Lastly, turn on Night Shift. This is in Settings/Display & Brightness. This warms up the color temperature of the screen in dark surroundings, normally in evening and nighttime. You may find that you want to move the slider to the left a bit (I find the default middle setting too orange), but it really does reduce the ‘blue light syndrome’ associated with sleep disturbance.

Hope this is of use.

DI – Disintermediation, 5 years on…

October 12, 2017 · by parasam

I wrote this original article on disintermediation more than 5 years ago (early 2012), and while most of the comments are as true today as then, I wanted to expand a bit now that time and technology have moved on.

The two newest technologies that are now mature enough to act as major fulcrums of further disintermediation on a large scale are AI and blockchain. Both have been in development for more than 5 years of course, but they have made a jump in capability, scale and applicability in the last 1-2 years that is changing the entire landscape. Artificial Intelligence (AI) – or perhaps a better term, “Augmented Intelligence” – is changing forever the man/machine interface, bringing machine learning to aid human endeavors in a manner that will never be untwined. Blockchain technology is the foundation (originally developed in the arcane mathematical world of cryptography) for digital currencies and other transactions of value.

AI

While the popular term is “AI” or Artificial Intelligence, a better description is “Deep Machine Learning”. Essentially the machine (computer, or rather a whole pile of them…) is given a problem to solve, a set of algorithms to use as a methodology, and a dataset for training. After a number of iterations and tunings, the machine usually refines its response such that the ‘problem’ can be reliably solved accurately and repeatedly. The process, as well as a recently presented theory on how the ‘deep neural networks’ of machine learning operate, is discussed in this excellent article.
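
To make that loop concrete, here is a toy sketch (an illustration only, not any production system): the ‘problem’ is fitting a line, the algorithm is gradient descent, and the training dataset is a handful of points. The machine iterates and tunes its parameters until it reliably solves the problem.

```python
# Toy "machine learning" loop: problem + algorithm + training data + iteration.
# (Illustrative sketch only; real systems use many layers and huge datasets.)
import random

data = [(x, 2 * x + 1) for x in range(20)]   # training set for the rule y = 2x + 1
w, b = random.random(), random.random()      # the model's tunable parameters
lr = 0.001                                   # learning rate (a tuning knob)

for epoch in range(2000):                    # repeated iterations refine the answer
    for x, y in data:
        err = (w * x + b) - y                # how wrong is the current guess?
        w -= lr * err * x                    # nudge parameters along the gradient
        b -= lr * err

print(f"learned: y = {w:.2f}x + {b:.2f}")    # converges toward y = 2.00x + 1.00
```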

The applications for AI are almost unlimited. Some of the popular and original use cases are human voice recognition and pattern recognition tasks that for many years were thought to be too difficult for computers to perform with a high degree of accuracy. Pattern recognition has now improved to the point where a machine can often outperform a human, and voice recognition is now encapsulated in the Amazon ‘Echo’ device as a home appliance. Many other tasks, particularly ones where the machine assists a human (Augmented Intelligence) by presenting likely possibilities reduced from extremely large and complex datasets, will profoundly change human activity and work. Such examples include medical diagnostics (an AI system can read every journal ever written, compare that knowledge to a history taken by a medical diagnostician, and suggest likely scenarios that could include data the medical professional couldn’t possibly have the time to absorb); fact-checking news stories against many wide-ranging sources; performing financial analysis; writing contracts; etc.

It’s easy to see that many current ‘professions’ will likely be disrupted or disintermediated… corporate law, medical research, scientific testing, pharmaceutical drug trials, manufacturing quality control (AI connected to robotics), and so on. The incredible speed and storage capability of modern computational networks provides the foundation for an ever-increasing usage of AI at a continually falling price. Already apps for mobile devices can scan thousands of images, suggest keywords, group similar images into collections, etc. [EyeEm Vision].

Another area where AI is utilized is in autonomous vehicles (self-driving cars). Hundreds of inputs from sensors, cameras, etc. are synthesized and analyzed thousands of times per second in order to safely pilot the vehicle. One of the fundamental powers of AI is the continual learning that takes place: the larger the dataset and the broader the set of experiences, the better the machine becomes at optimizing its outputs. For instance, every Tesla car gathers massive amounts of data from every drive the car takes, and continually uploads that data to the servers at the factory. The combined experience of how thousands of vehicles respond to varying road and traffic conditions is learned and then shared (downloaded) to every vehicle. So each car in the entire fleet benefits from everything learned by every car. This is impossible to replicate with individual human drivers.

The potential use cases for this new technology are almost unbounded. Some challenging issues likely can only be solved with advanced machine learning. One of these is the (today) seemingly intractable problem of updating and securing a massive IoT (Internet of Things) network. Due to the very low cost, embedded nature, lack of human interface, etc. that characterize most IoT devices, it’s impossible to “patch” or otherwise update individual sensors or actuators that are discovered to have either functional or security flaws after deployment. By embedding intelligence into the connecting fabric of the network itself – the fabric that links the IoT devices to the nodes or computers that utilize the info – even sub-optimal devices can be ‘corrected’ by the network. Incorrect data can be normalized, and attempts at intrusion or deliberate altering of data can be detected and mediated.

Blockchain

The blockchain technology that is often discussed today, usually in the same sentence as Bitcoin or Ethereum, is a foundational platform that allows secure and traceable transactions of value. Essentially each set of transactions is a “block”, and these are distributed widely in an encrypted format for redundancy and security. These transactions are “chained” together, forming the “blockchain”. Since the ‘public ledger’ of these groups of transactions (the blockchain) is effectively impossible to alter, the security of every transaction is ensured. This article explains in more detail. While the initial focus of blockchain technology has been on so-called ‘cryptocurrencies’, there are many other uses for this secure transactional technology. By using existing internet connectivity, items of value can be securely distributed practically anywhere, to anyone.
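
To illustrate just the ‘chaining’ idea, here is a minimal Python sketch (a toy, not how Bitcoin or Ethereum actually serialize blocks): each block carries the hash of its predecessor, so altering any earlier transaction breaks every later link in the chain.

```python
# Toy hash-chained ledger: tampering with any block invalidates all later ones.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"prev": "0" * 64, "transactions": ["genesis"]}]
for txs in (["alice pays bob 5"], ["bob pays carol 2"]):
    chain.append({"prev": block_hash(chain[-1]), "transactions": txs})

# Verification: recompute every link; any upstream alteration is detected.
for prev, cur in zip(chain, chain[1:]):
    assert cur["prev"] == block_hash(prev), "chain has been altered!"
print("chain verified:", len(chain), "blocks")
```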

One of the most obvious instances of transfer of items of value over the internet is intellectual property: i.e. artistic works such as books, images, movies, etc. Today the wide scale distribution of all of these creative works is handled by a few ‘middlemen’ such as Amazon, iTunes, etc. This introduces two major forms of restriction: the physical bottleneck of client-server networking, where every consumer must pull from a central controlled repository; and the financial bottleneck of unitary control over distribution, with the associated profits and added expense to the consumer.

Even before blockchain, various artists had been exploring making more direct connections with their consumers, taking more control over the distribution of their art, and changing the marketing process and value chain. Interestingly the most successful (particularly in the world of music) are all women: Taylor Swift, Beyoncé, Lady Gaga. Each is now marketing on a direct-to-fan basis via social media, with followings of millions of consumers. A natural next step will be direct delivery of content to these same users via blockchain – which will have an even larger effect on the music industry than iTunes ever did.

SingularDTV is attempting the first ever feature film to be both funded and distributed entirely on a blockchain platform. The world of decentralized distribution is upon us, and will forever change the landscape of intellectual property distribution and monetization. The full effects of this are deep and wide-ranging, and would occupy an entire post… (maybe soon).

In summation, these two notable technologies will continue the democratization of data, originally begun with the printing press, and allow even more users to access information, entertainment and items of value without the constraints of a narrow and inflexible distribution network controlled by a few.

Objective Photography is an Oxymoron (all photos lie…)

August 18, 2016 · by parasam

There is no such thing as an objective photograph

A recent article in the Wall Street Journal (here) entitled “When Pictures Are Too Perfect” prompted this post. The premise of the article is that too much ‘manipulation’ (i.e. Photoshopping) is present in many of today’s images, particularly in photojournalism and photo contests. There is evidently an arbitrary standard (one that no one appears able to objectively define) which posits that essentially only an image ‘straight out of the camera’ is ‘honest’ or acceptable – particularly if one is a photojournalist or is entering an image into some form of competition. Examples are given, such as Harry Fisch having a top prize from National Geographic (for the image “Preparing the Prayers at the Ganges”) taken away because he digitally removed an extraneous plastic bag from an unimportant area of the image. Steve McCurry, best known for his iconic “Afghan Girl” photo on the cover of National Geographic magazine in 1985, was accused of digital manipulation of some images shot in 1983 in Bangladesh and India.

On the whole, I find this absurd and the logic behind such attempts at defining an ‘objective photograph’ fatally flawed. From a purely scientific point of view, there is absolutely no such thing as an ‘objective’ photograph – for a host of reasons. All photographs lie, permanently and absolutely. The only distinction is by how much, and in how many areas.

The First Lie: Framing

The very nature of photography, from the earliest days until now, has at its core an essential feature: the frame. Only a certain amount of what can be seen by the photographer can be captured as an image. There are four edges to every photograph. Whether the final ‘edges’ presented to the viewer are due to the limitations of the camera/film/image sensor, or to cropping during the editing process, is immaterial. The initial choice of frame is made by the photographer, in concert with the camera in use, which presents physical limitations that cannot be exceeded. The choice of frame is completely subjective: it is the eye/brain/intuition of the photographer that decides in the moment where to point the camera and what to include in the frame. Is pivoting the camera a few degrees to the left to avoid an unsightly telephone pole “unwarranted digital manipulation”? Most news editors and photo contest judges would probably say no. But what if the exact same result is obtained by cropping the image during editing? Already we start to see disagreement in the literature.

If Mr. Fisch had simply walked over and picked up the offending plastic bag before exposing the image, he would likely be the deserved recipient of his 1st place prize from National Geographic, but as he removed the bag during editing his photograph was disqualified. By this same logic, consider when Leonardo da Vinci painted the “Mona Lisa”: there is a balustrade with two columns behind her. There is perfect symmetry in the placement of Lisa Gherardini (the presumed model) between the columns, which helps frame the subject. Painting takes time; it is likely that a bird would land from time to time on the balustrade. Was Leonardo supposed to include the bird or not? Did he ‘manipulate’ the image by only including the parts of the scene that were important to the composition? Would any editor or judge dare ask him today, were that possible?

“So-So Happy!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

“So-So Happy… NOT!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

A combination example of framing and depth-of-field. One photographer is standing 6 ft further away (from my camera position) than the other, but the foreshortening of the 200mm telephoto appears to depict ‘dueling photographers’. [©2012 Ed Elliott / Clearlight Imagery]

The Second Lie: The Lens

No photograph can occur without a lens. Every lens has certain irrefutable properties, focal length and maximum aperture being the most important. Each of these parameters imparts a vital, and subjective, aspect to the image subsequently captured. Since the ‘lingua franca’ of focal length is the ubiquitous 35mm camera, we can generalize here: 50mm is the so-called ‘normal’ lens; 35mm is considered ‘wide angle’, 24mm ‘very wide angle’ and 10mm a ‘fisheye’. Going in the other direction, 85mm is often considered a ‘portrait’ lens (slight close-up), 105mm a medium ‘telephoto’, 200mm a ‘telephoto’ and anything beyond is for sports or space exploration. As focal length increases, the depth of field shrinks: wide angle lenses tend to bring the entire field of view into sharp focus, while telephotos blur out everything except what the photographer has selected as the prime focus point.
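
The numbers behind that claim are easy to check with the standard thin-lens depth-of-field approximations. A sketch (assuming a 0.03mm circle of confusion, the conventional 35mm-format figure, and a hypothetical subject 5 meters away at f/4):

```python
# Depth of field vs focal length, using standard thin-lens approximations.
# c = circle of confusion (0.03mm is the conventional 35mm-format value).
def depth_of_field_mm(f, N, s, c=0.03):
    H = f * f / (N * c) + f                        # hyperfocal distance (mm)
    near = s * (H - f) / (H + s - 2 * f)           # near limit of sharp focus
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return far - near

for focal in (24, 50, 200):                        # wide, normal, telephoto
    dof = depth_of_field_mm(f=focal, N=4, s=5000)  # subject at 5 m, f/4
    print(f"{focal}mm: depth of field = {dof / 1000:.2f} m")
# 24mm: sharp all the way to infinity; 200mm: only ~0.14 m is in focus
```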

Normal lens [©2016 Ed Elliott / Clearlight Imagery]

Telephoto lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

Wide Angle lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

FishEye lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) – curvature and edge distortions are normal for such an extreme angle-of-view lens   [©2016 Ed Elliott / Clearlight Imagery]

In addition, each lens type distorts the field of view noticeably: wide angle lenses tend to exaggerate the distance between foreground and background, making the closer objects in the frame look larger than they actually are, and making distant objects even smaller. Telephoto lenses have the opposite effect, foreshortening the image and ‘flattening’ the resulting picture. For example, in a long telephoto shot of a tree on a ridge backlit by the moon, both the tree and the moon can be tack sharp, and the moon appears to sit directly behind the tree even though it is 239,000 miles away.

The other major ‘subjective’ quality of any lens is the aperture chosen by the photographer. Commonly known as the “f-stop”, this is the ratio of the focal length of the lens divided by the diameter of the ‘entrance pupil’ (the size of the hole that the aperture diaphragm is set to on a given capture). The maximum aperture (the largest ‘hole’ that can be set by the photographer) depends on the diameter of the lens itself, in relation to the focal length. For example, with a ‘normal’ 50mm lens, if the lens is 25mm in diameter then the maximum aperture is f/2 (50/25). Larger apertures (lower f-stop ratios) require larger lenses, and are correspondingly heavier, more expensive, and more difficult to use. One can see that an f/2 lens for a 50mm focal length is not that huge; to obtain the same f/2 ratio for a 200mm telephoto would require a lens at least 100mm (4in) in diameter – making such a device huge, heavy and obscenely expensive. As a quick comparison (Nikon lenses, full frame, prime lens, priced from B&H Photo – discount photo equipment supplier), a 50mm f/2.8 lens costs $300, while the same lens in f/1.2 costs $700. A 400mm telephoto in f/5.6 would be $2,200, while an identical focal length with a maximum aperture of f/2.8 will set you back a little over $12,000.
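
The arithmetic is worth seeing in one place: the entrance pupil is simply the focal length divided by the f-number, which is exactly why fast telephotos balloon in size and cost. A quick sketch:

```python
# Entrance pupil = focal length / f-number: why fast telephotos get huge.
for focal, fstop in [(50, 2.0), (200, 2.0), (400, 2.8), (400, 5.6)]:
    pupil = focal / fstop
    print(f"{focal}mm at f/{fstop}: entrance pupil = {pupil:.0f}mm")
# 50mm at f/2 needs only a 25mm pupil; 200mm at f/2 needs 100mm (4in) of glass
```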

Exaggeration of object size with wide angle lens: farther objects appear much smaller than in ‘reality’. [©2011 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image as a result of long telephoto lens (f/8, 400mm lens) – the crane is hundreds of feet closer to the camera than the dark buildings behind, but looks like they are directly adjacent. [©2013 Ed Elliott / Clearlight Imagery]

Depth of field with shallow aperture (f/2.4) – in this case even with a wide angle lens the background is out of focus due to the large distance between the foreground and the background (in this case the Hudson River separated the two…) [©2013 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image with a long telephoto lens. The ship is almost 1/4 mile further away than the green roadway sign, yet appears to be directly behind it… (f/4, 400mm) [©2013 Ed Elliott / Clearlight Imagery]

Wide angle lens (14-24mm zoom lens, set at 16mm – f/2.8) [©2012 Ed Elliott / Clearlight Imagery]

Shallow depth of field due to large aperture on telephoto lens (f/4 – 200mm lens on full-frame 35mm DSLR) [©2012 Ed Elliott / Clearlight Imagery]

Wide angle shot, demonstrating sharp focus from foreground to the background. Also exaggeration of perspective makes the bow of the vessel appear much taller than the stern. [©2013 Ed Elliott / Clearlight Imagery]

The bottom line is that the choice of lens and aperture is a controlling element of the photographer (or her pocketbook) – and has a huge effect on the image taken with that lens and setting. None of these choices can be deemed to be either ‘analog’ or ‘digital’ manipulation of the image during editing, but they have arguably a greater effect on the outcome, message, impact and tenor of the photograph than anything that can be done subsequently in the darkroom (whether chemical or digital).

The Third Lie: Shutter Speed

Every exposure is a product of two factors: Light × Time. The amount of light that strikes a negative (or digital sensor) is governed solely by the selected aperture (and possibly by any additional filters placed in front of the lens); the duration for which the light is allowed to impinge on the negative is set by the shutter speed. While the main purpose of setting the shutter speed is to produce the correct exposure once the aperture has been selected (to avoid either under- or over-exposing the image), there is a huge secondary effect of shutter speed on any motion of either the camera or objects in the frame. Fast shutter speeds (faster than 1/125th of a second with a normal lens) will essentially freeze any motion, while slow shutter speeds will result in ‘shake’, ‘blur’ and other motion artifacts. While some of these can be just annoying, in the hands of a skilled photographer motion artifacts tell a story. And likewise a ‘freeze-frame’ (from a very fast shutter speed) can distort reality in the other direction, giving the observer a point of view that the human eye could never glimpse in reality. The hours-long time exposure of star trails or the suspended-animation shot of a bullet about to pierce a balloon are both ‘manipulations’ of reality – but they take place as the image is formed, not in the darkroom. The subjective experience of a football distorted as the kicker’s foot impacts it – locked in time by a shutter speed of 1/2000th second – is very different to the same shot of the kicker at 1/15th second where his leg is a blurry arc against a sharp background of grass. Two entirely different stories, just from shutter speed choice.
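
The Light × Time trade is plain arithmetic: the light admitted is proportional to 1/N², so exposure is proportional to t/N², and every stop you close the aperture must be paid for by doubling the shutter time. A sketch, starting from a hypothetical metered exposure of f/8 at 1/125 sec:

```python
# Equivalent exposures: t / N^2 is held constant while the f-stop changes.
base_N, base_t = 8.0, 1 / 125                 # hypothetical metered exposure
for N in (2.8, 4.0, 5.6, 8.0, 11.0):
    t = base_t * (N / base_N) ** 2            # shutter time giving equal exposure
    print(f"f/{N}: about 1/{round(1 / t)} sec")
# f/2.8 -> ~1/1000 sec (frozen kick); f/11 -> ~1/66 sec (blurry leg)
```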

Fast shutter speed to stop action [©2013 Ed Elliott / Clearlight Imagery]

Combination of two effects: fast shutter speed to stop motion (but not too fast, slight blurring of left foot imparts motion) – and shallow depth of field to render background soft-focus (f/4, 200mm lens) [©2013 Ed Elliott / Clearlight Imagery]

High shutter speed to freeze the motion. 1/2000 sec. [©2012 Ed Elliott / Clearlight Imagery]

Fast shutter speed to provide clarity and freeze the motion. 1/800 sec @ f/8 [©2012 Ed Elliott / Clearlight Imagery]

Although shooting hand-held without a tripod, I wanted as fine-grained a result as possible, so took advantage of the stillness of the subjects and a convenient wall on which to place the camera. 2 sec exposure with ISO 500 at f/8 to keep the depth of field. [©2012 Ed Elliott / Clearlight Imagery]

The Fourth Lie: Film (or Sensor) Sensitivity [ISO]

As if Pinocchio’s nose hasn’t grown long enough already, we have yet another ‘distortion’ of reality that every image contains as a basic building block: that of film/sensor sensitivity. While we have discussed exposure as a product of Light Intensity × Time of Exposure, one further parameter remains. A so-called ‘correct’ exposure is one that has a balance of tonal values, and (more or less) represents the tonal values of the scene that was photographed. This means essentially that blacks, shadows, mid-tones, highlights and whites are all apparent and distinct in the resulting photograph, and the contrast values are more or less in line with those of the original scene. The sensitivity of the film (or digital sensor) is critical in this regard. Very sensitive film will allow a correct image with a lower exposure (either a smaller aperture, a faster shutter speed, or both), while a ‘slow’ [insensitive] film will require the opposite.
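
Sensitivity simply adds a third variable to the same arithmetic: to hold the image brightness constant, t × ISO / N² must stay (roughly) constant. A sketch, using a hypothetical twilight scene metered at ISO 100, f/4, 1/8 sec:

```python
# The exposure triangle: trading sensitivity (ISO) for shutter speed.
def equivalent_shutter(base_t, base_iso, base_N, iso, N):
    return base_t * (base_iso / iso) * (N / base_N) ** 2

for iso in (100, 800, 6400):                       # slow film .. fast sensor
    t = equivalent_shutter(1 / 8, 100, 4.0, iso, 4.0)
    print(f"ISO {iso}: 1/{round(1 / t)} sec at f/4")
# ISO 6400 buys a hand-holdable 1/512 sec, at the cost of grain/noise
```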

A high ISO was necessary to capture the image during late twilight. In addition a slow shutter speed was used – 1/15 sec with ISO of 6400. [©2011 Ed Elliott / Clearlight Imagery]

Low ISO (50) to achieve relatively fine grain and best possible resolution (this was a cellphone shot). [©2015 Ed Elliott / Clearlight Imagery]

Cellphone image at dusk, resulting in ISO 800 with 1/15 sec exposure. Taken from a parking garage, the highlight on the palm is from car headlights. [©2012 Ed Elliott / Clearlight Imagery]

Night photography often requires very high ISO values and slow shutter speeds. The resulting grain can provide texture as opposed to being a detriment to the shot. [©2012 Ed Elliott / Clearlight Imagery]

Fine grain achieved with low ISO of 50. [©2012 Ed Elliott / Clearlight Imagery]

Slow ISO setting for high resolution, minimal grain (ISO 50) [©2012 Ed Elliott / Clearlight Imagery]

Sometimes you frame the shot and do the best you can with the other parameters – and it works. Cellphone image at night meant slow shutter speed (1/15 sec) and lots of grain with ISO 800 – but the resultant grain and blurring did not detract from the result. [©2012 Ed Elliott / Clearlight Imagery]

A corollary to film sensitivity is grain (in film) or noise (in digital sensors). If you desire a fine-grained, super sharp negative, then you must use a slow film. If you need a fast film that can produce an acceptable image in low light without a flash, say for photojournalism or surveillance work, then you must accept grain the size of rice in some cases… Life is all about compromise. Again, the final outcome is subjective, and totally within the control of the accomplished photographer, but this exists completely outside the darkroom (or Photoshop). Two identical scenes shot with widely disparate ISO films (or sensor settings) will give very different results. A slow ISO will produce a very sharp, super-realistic image, while a very fast ISO will be grainy, somewhat fuzzy, and can tend towards surrealism if pushed to an extreme. [technical note: the arithmetic portion of the ISO rating is the same as the older ASA rating scale; I use the current nomenclature]

Editing: White Lies, Black Lies, Duotone and Technicolor…

In my personal work as a streetphotographer (my gallery is here) I tell ‘white lies’ all the time in editing. By that I mean the small adjustments to focus, color balance, contrast, highlight and shadow balance, etc. This is a highly personal and subjective process. I learned from master photographers (including Ansel Adams), books, and much trial and even more error… to pre-visualize my shots, and mentally place the components of the image on the Zone Scale as accurately as possible with the equipment and lighting on hand. This discipline was most helpful in university, when I had no money – every shot cost, both in film and in developing ingredients. I would often choose between beer and film… film always won… fewer friends, more images… not quite sure about that choice, but I was fascinated with imagery. While pre-visualization is, I feel, an important magic that can make the difference between an ok image and a great one, it’s not an easy process to follow in candid streetphotography, where the window between recognizing a potential shot and the chance to grab it is often 1-2 seconds.

This results, quite frequently, in things in the image not being where I imagined them in terms of composition, lighting, color balance, etc. So enter my ‘white lies’. I used to accomplish this in the darkroom with push/pull development, and significant tweaking during printing (burning, dodging, different choices of contrast printing papers, etc.). Now I use Photoshop. I’m not particularly an Adobe disciple, but I started with this program in 1989 with version 0.87 (known as part of Barneyscan, on my Mac Classic) and we’ve kind of grown up together… I just haven’t bothered to learn another program. It does what I need; I’m sure I only know about 20% of its current capabilities, but that’s enough for my requirements.

The other extreme that can be accomplished by Photoshop experts (and I use the term generically here) is the ‘black lie’. This is where one puts Oprah’s head on someone else’s body, performs ‘digital liposuction’ to the extent that Lena Dunham and Adele both scream “enough!”, and many celebrities find their faces applied to actors and scenes (typically in North Hollywood) where they have never been, nor would want to be… There’s actually a great novel by the late Michael Crichton [Rising Sun, 1992] that contains a detailed subplot about digital photomanipulation of video imagery. At that time it took a supercomputer to accomplish the detailed and sophisticated retouching of long video sequences – today tools such as Photoshop and After Effects could accomplish this on a desktop workstation in a matter of hours.

"Duotone" technique [background masked and converted to monochrome to focus the viewer on the foreground image]

“Duotone” technique [background masked and converted to monochrome to focus the viewer on the foreground image] [©2016 Ed Elliott / Clearlight Imagery]

A technique I frequently use is Duotone – and even here I am being technically inaccurate. What I mean by this is separating the object of interest from the background by masking the subject and turning the rest of the image into black and white. The juxtaposition of a color subject against a monochrome background helps isolate and focus the viewer’s attention on the subject. Frequently in streetphotography the opportunity to place the subject against a non-intrusive background doesn’t exist, so this technique is quite effective in ‘turning down’ the importance of the often busy and distracting surrounds. [Technically the term duotone is used for printing the entire image in gradations of only two colors]. Is this ‘manipulation’? Yes. Does it materially detract from, or alter the intent of, the original image that I pre-visualized in my head? No. I firmly stand behind this point of view, that all photographs “lie” to one extent or another, and any tool that the photographer has at his or her hand to generate a final image that is in accordance with the original intent is fair game. What matters is the act of conveying the vision of the photographer to the brain of the viewer. Period.
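
For the curious, the core of the effect is a single masked composite. A minimal sketch using Pillow (the filenames are hypothetical, and in practice I build the subject mask by hand, which is most of the work):

```python
# "Duotone" effect: color subject composited over a monochrome background.
from PIL import Image

photo = Image.open("street.jpg").convert("RGB")      # hypothetical source image
mask = Image.open("subject_mask.png").convert("L")   # white = subject, black = rest

mono = photo.convert("L").convert("RGB")             # B&W version of the whole frame
result = Image.composite(photo, mono, mask)          # color where mask is white
result.save("duotone.jpg")
```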

The ‘photograph’ is just the medium that transports that image. At the end of the day, a photo is a conglomeration of pixels (either printed or glowing) that transmit photons to the human visual system, and ultimately end up in the visual cortex in the back of the human brain. That is where we actually “see”.

Early photography (and motion pictures) was only available in black & white. When color photography first came along, the colors were not ‘natural’. As emulsions improved things got better, but even so there was a marked deviation from ‘natural’ that was actually ‘designed in’ by Kodak and other film manufacturers. The saturation and color mapping of Kodachrome did not match reality, but it did satisfy a public that equated punchy colors with a ‘good color photo’ and made those vacation memories happy ones… and therefore sold more film. The more subdued, and realistic, Ektachrome came along as professional photographers pushed for choice (and quite frankly an easier and more open developing process – Kodachrome could only be processed by licensed labs and was notoriously difficult to process well). The downside of early Ektachrome emulsions was the unfortunate instability of the dye layers in color transparency film – leading to rapid fading of both slides and movies.

As one who has worked in film preservation and restoration for decades, I found it interesting that an early color process (the Technicolor 3-stripe method), originally designed just to get vibrant colors on the movie screen in the 1930s, had a resurgence in film preservation. It turned out that so many of the early Ektachrome films from the 1950s and 1960s experienced rapid fading that significant restoration efforts were necessary to salvage some important movies. The only way at that time (before economical digital scanning of movies was possible) was – after restoration of the color negative – to use the Technicolor process to make 3 separate black & white films representing the cyan, magenta and yellow dye layers. Then, someday in the future, the 3 negatives could be optically combined and printed back onto color film for viewing.
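
Digitally, the same separate-and-recombine idea is almost trivial. A sketch with Pillow (red/green/blue records standing in for the three dye-layer separations; filenames are hypothetical):

```python
# Three-strip idea in miniature: store color as three B&W records, recombine later.
from PIL import Image

frame = Image.open("restored_frame.png").convert("RGB")
r, g, b = frame.split()                          # three monochrome "separations"
r.save("sep_r.png"); g.save("sep_g.png"); b.save("sep_b.png")

# Decades later: recombine the separations into a color frame.
seps = [Image.open(name) for name in ("sep_r.png", "sep_g.png", "sep_b.png")]
Image.merge("RGB", seps).save("recombined_frame.png")
```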

There is No Objective Truth in Photography (or Painting, Music…)

All photography is an illusion. Using a lens, a photo-sensitive element of some sort, and a box to restrict the image to only the light coming through the lens, a photograph is a rendering of what is before the lens. Nothing more. Distorted and limited by the photographer’s choice of point of view, lens, aperture, shutter speed, film/sensor and so on, the resultant image – if correctly executed – reflects at most the inner vision of the photographer’s mind and perception of the original scene. Every photograph has a story (some more boring than others).

One of the great challenges of photography (and possibly one of the reasons that until quite recently this art form was not taken seriously) is that on first glance many photos appear to be just a ‘copy of reality’ – and therefore contain no inherent artistic value. Nothing could be further from the truth. It’s just that the ‘art’ hides in plain sight… The root of the problem that engendered this post is our collective, subjective, and inaccurate view that photographs are ‘truthful’ and accurately represent the reality that was before the lens. We naively assume that photos can be trusted, that they show us the only possible view of reality. It’s time to grow up, to accept that photography, just like all other art forms, is a product of the artist, first and foremost.

Even the unassuming mom who is taking snapshots of her kids is making choices – whether she knows it or not – about each of the parameters already discussed. Since most snapshot (or cellphone) cameras have wide angle lenses, the ‘huge nose’ effect of close-up pics of babies and youngsters (which will haunt these innocent children forever on Facebook and Instagram – data never dies…) is just an objective artifact of lens choice and distance to subject. Somewhere along the line our moral compass went out of whack when we started drawing highly artificial lines around ‘acceptable editorial behavior’ and so on. An entirely different discussion – worthy of a separate post – can be had about the photographer’s (or publisher’s) intention in sharing an image. If a deliberate attempt is made to misrepresent the scene – for financial gain, the allocation of justice, a change in power, etc. – that is an issue. But the same issue exists whether the medium that transports such a distortion is the written word, an audio recording, a painting or a 3D holograph. It is illogical to apply a set of standards or restrictions to one art form and not another, just to attempt to rein in inadvertent or deliberate distortions in a story that may be deduced from the art by an observer.

To use another common example, we have all seen many photos of a full moon rising behind a skyline, trees on a ridge, etc. – typically with a really large moon – and most observers just appreciate the image, the impact, the feeling. Even some rudimentary science, and a bit of experience with photography, reveals that most such images are a composite, with a moon image enlarged and layered in behind the foreground. The moon is simply never that large in relation to the rest of the image. In many cases I have seen, the lighting of the rest of the scene clearly shows that the foreground was shot at a different time of night than the moon (a full moon on the horizon only occurs at dusk). I have also seen many full moons in photographs at astronomically impossible locations in the sky, given the longitude and latitude of the foreground that is shown in the image.

An example of "Moon on Steroids"... The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it's obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.

An example of “Moon on Steroids”… The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it’s obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.

Why is it that such an esteemed, and talented, photographer as Steve McCurry is chastised for removing some distracting bits of an image – which in no way detracted from the ‘story’ of the image – and yet I dare say that no one in their right mind would criticize Leonardo da Vinci for including a physically impossible background (the almost mythological mountains and seas) in his rendition of Lisa Gherardini for his painting of “Mona Lisa”? As someone who has worked in the film/video/audio industry for my entire professional life, I can tell you with absolute certainty that no modern audio recording – from Adele to Ziggy Marley – is released that has not been ‘digitally altered’ in some fashion. Period. It is just an absolute in today’s production environment to ‘clean up’ every track, every mix, every completed master – removing unwanted echoes, noise, coughs, burps, and other audio equivalents of Mr. Fisch’s plastic bag… and no one, ever, has complained about this or accused the artists of being ‘dishonest’.

This double standard needs to be put to rest permanently. It reflects poorly on those who take this position, demonstrating their lack of technical knowledge and a narrow perception of the art form of photography, and furthermore gives power to those whose only interest is to malign others and detract from the powerful impact that a great image can create. If ignorant observers can really believe that an airplane in an image as depicted is ‘real’ (for the airplane to be of such a size in relation to the tunnel and ladders it would have to be flying at a massively illegal low altitude in that location) then such observers must take responsibility. Does the knowledge that this placement of the plane is ‘not real’ detract from the photo? Does the contraposition of ‘stillness vs movement’ (concrete and steel silo vs rapidly moving aircraft) create a visually stimulating image? Is it important whether that occurred ‘in reality’ or not? Would an observer judge it differently if this was a painting or a sketch instead of a photograph?

I love the art and science of photography. I am daily enamored with the images that talented and creative people all over the world share, whether camera originals, composites, pure fiction created in the ‘darkroom’, or some combination of all. This is a wondrous art form, and must be supported at all costs. It’s not easy; it takes dedication, effort, skill, perseverance, money, time and love – just as any art form does. I would hope that we could move the conversation to what matters: ‘truth in advertising’. In a photo contest, nothing, repeat nothing, should matter except the image itself. Just like painting, sculpture, music, ceramics, dance, etc., the observed ‘art’ should be judged only on the merits of the entity itself, without subjective expectations or philosophical distortions. If an image is used to reinforce a particular ‘story’ – whether for ethical, legal or news purposes – then both the words and the images must be authentic. Authentic does not mean ‘un-retouched’; it means that there is no ‘black lie’ in what is conveyed.

To summarize, let’s stop believing that photographs are ‘real’ – but let’s start accepting the art, craftsmanship, effort and focus that this medium brings to all of us. Let’s apply a common frame of reference to all forms of art, whether they be painting, writing, photography, music, etc. – terms of authenticity and purpose. Would we chide Escher for attempting to fool us with visual cues of an impossible reality?

Where Did My Images Go? [the challenge of long-term preservation of digital images]

August 13, 2016 · by parasam

Littered Memories – Photos in the Gutter (© 2016 Paul Watson, used with permission)

Image Preservation – The Early Days

After viewing the above image from fellow streetphotographer Paul Watson, I wanted to update an issue I’ve addressed previously: the major challenge that digital storage presents in terms of long-term archival endurance and accessibility. Back in my analog days, when still photography was a smelly endeavor in the darkroom for both developing and printing, I slowly learned about careful washing and fixing of negatives, how to make ‘museum’ archival prints (B&W), and the intricacies of dye-transfer color printing (at the time the only color print technology that offered substantial lifetimes). Prints still needed carefully restricted environments for both display and storage, but if all was done properly, a lifetime of 100 years could be expected for monochrome prints and even longer for carefully preserved negatives. Color negatives and prints were much more fragile, particularly color positive film. The emulsions were unstable, and many of the early Ektachrome slides (and motion picture films) faded rapidly after only a decade or so. A well-preserved dye-transfer print could be expected to last for almost 50 years if stored in the dark.

I served for a number of years as a consultant to the Los Angeles County Museum of Art, advising them on photographic archival practices, particularly relating to motion picture films. For many years the Bing Theatre presented a fantastic set of screenings that offered a rare tapestry of great movies from the past – and helped many current directors and others in the industry become better at their craft. In particular, Ron Haver (the film historian, preservationist and LACMA director with whom I worked during that time) was instrumental in supervising the restoration, screening and preservation of many films that would now be in the dust bin of history without his efforts. I learned much from him, and the principles last to this day, even in a digital world that he never experienced.

One project in particular was interesting: bringing the projection room (and associated film storage facilities) up to Los Angeles County Fire Code so we could store and screen early nitrate films from the 1920s. [For those that don’t know, nitrate film is highly flammable, and once on fire will quite happily burn under water until all the film is consumed. It makes its own oxygen while burning…] Fire departments were not great fans of this stuff… Due both to the large (and expensive) challenges of projecting this type of film and to the continual degradation of the film stock, almost all remaining nitrate film has since been digitally scanned for preservation and safety. I also designed the telecine transfer bay for the only approved nitrate scanning facility in Los Angeles at that time.

What this all underscored was the considerable effort, expense and planning required for long-term image preservation. Now, while we may think that once digitized, all our image preservation problems are over – the exact opposite is true! We have ample evidence (glass plate negatives from the 1880s, B&W sheet film negatives from the early 1900s) that properly stored monochrome film can easily last 100 years or more, and is as readable today as the day the film was exposed, with no extra knowledge or specialized machinery. B&W movie film is also just as stable, as long as it is printed onto safety film base. Due to the inherent fading of so many early color emulsions, the only sure method for preservation (in the analog era) was to ‘color separate’ the negative film and print the three layers (cyan, magenta and yellow) onto three individual B&W films – the so-called “Technicolor 3-stripe process”.

Digital Image Preservation

The problem with digital image preservation is not the inherent technology of digital conversion – done well, that can yield a perfect reproduction of the original after a theoretically infinite time period. The challenge is how we store, read and write the “0s and 1s” that make up the digital image. Our computer storage and processing capability has moved so quickly over the last 40 years that almost all digital storage from more than 25 years ago is somewhere between difficult and impossible to recover today. This problem is growing worse, not better, with every succeeding year…

IBM 305 RAMAC Disk System 1956: IBM ships the first hard drive in the RAMAC 305 system. The drive holds 5MB of data at $10,000 a megabyte.

This is a hard drive. It holds less than .01% of the data of the smallest iPhone today…

One of the earliest hard drives available for microcomputers, c.1980. The cost then was $350/MB, today’s cost (based on 1TB hard drive) is $0.00004/MB or a factor of 8,750,000 times cheaper.

Paper tape digital storage as used by DEC PDP-11 minicomputers in 1975.

Paper punch card, a standard for data entry in the 1970s.

Floppy disks: (from left) 8in; 5-1/4″; 3-1/2″. The standard data storage format for microcomputers in the 1980s.

As can be seen from the above examples, digital storage has changed remarkably over the last few decades. Even though today we look at multi-terabyte hard drives and SSDs (Solid State Drives) as ‘cutting edge’, will we chuckle 20 years from now when we look back at something as archaic as spinning disks or NAND flash memory? With quantum memory, holographic storage and other technologies already showing promise in the labs, it’s highly likely that even the 60TB SSD disks that Samsung just announced will take their place alongside 8-inch floppy disks in a decade or so…

And the physical storage medium is actually the least of the problem. Yes, if you put your ‘digital negatives’ on a floppy disk 15 years ago and now want to read them, you have a challenge at hand… but with patience and some time on eBay you could probably assemble the appropriate hardware to retrieve the data onto a modern computer. The bigger issue is that of the data format: both of the drives themselves and of the actual image files. The file systems – the method used to catalog and find the individual images stored on whatever kind of physical storage device, whether ancient hard drive or floppy disk – have changed rapidly over the years. Most early file systems are no longer supported by current OSs (Operating Systems), so hooking up an old drive to a modern computer won’t work.

Even if one could find a translator from an older file system to a current one (only a very limited capability exists here; many older file systems can literally only be read by a computer as old as the drive), that doesn’t solve the next issue: the image format itself. Backwards compatibility is one of the great Achilles’ heels of the entire IT industry. The huge push by vendors to keep users relentlessly updating to the latest software, firmware and hardware exists largely so that those same companies can avoid supporting older versions. This is not entirely self-serving (there are significant costs and time involved in such support) – frequently a change in technology simply can’t support an older paradigm any longer. The earliest versions of Photoshop files, PICT, etc. are not easily opened with current applications. Anyone remember Corel Draw? Even ‘common interchange’ formats such as TIFF and JPEG have evolved, and not every version is supported by every current image processing application.

The more proprietary and specific an image format is, the more fragile it is in terms of archival longevity. For instance, it may seem that the best archival format would be the camera raw format – essentially the full original capture directly from the camera. File types such as RAW, NEF, CR2 and so on are typical. However, each of these is proprietary and typically enjoys only about a five-year span of active application support from its vendor. As camera models keep changing – more or less on a yearly cycle – the raw formats change as well. Third-party vendors, such as Adobe with Photoshop, are under no obligation to support earlier raw formats forever… and as previously discussed, the challenge of maintaining backwards compatibility grows more complex with each passing year. There will always come a time when such formats are no longer supported by currently active image retrieval, viewing or processing software.

Challenges of Long-Term Digital Image Preservation

Therefore two major challenges must be resolved in order to achieve long-term storage and future accessibility of digital images. The first is the physical storage medium itself, whether that is tape (such as LTO-6), hard disk, SSD, optical, etc. The second is the actual image format. Both must remain readable and able to deliver the images to the operating system, device and software that is current at the time of retrieval in order for the entire exercise of archival digital storage to be successful. Unfortunately, this is highly problematic at present: with the pace of technological advance increasing exponentially, the challenge of obsolescence grows greater every year.

Currently there is no perfect answer to this dilemma – the only solution is proactivity on the part of the user. One must accommodate the continuing obsolescence of physical storage media, file systems, operating systems and file formats by migrating the image files, on a regular and continual basis, to current versions of all of the above. At present rates of technological development this is an exercise that must be repeated roughly every five years. For uncompressed images there is no impact on the image itself beyond the cost of the move/update – that is one of the plus sides of digital imagery – and the same holds for losslessly compressed files (TIFF-LZW/ZIP). However, many images and movies (almost all, if you are other than a professional photographer or filmmaker) are stored in lossy compressed formats (JPG, MPG, MOV, WMV, etc.). These files experience a small degradation in quality each time they are decoded and re-encoded into a newer format. The amount and type of artifacts introduced is highly variable, depending on the level of compression and many other factors, but the bottom line is that after a number of re-encoding cycles of a lossy file (say 10) it is quite likely that a visible difference from the original can be seen.

Therefore, particularly for lossy-compressed files, a balance must be struck between updating often enough to avoid technical obsolescence and re-encoding as few times as possible in order to avoid image degradation. [It should be noted that image degradation typically arises only from changing/updating the image file format, not from moving a bit-perfect copy from one type of storage medium to another].
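To make the ‘bit-perfect copy’ idea concrete, here is a minimal sketch in Python of the kind of migration script one might use: it copies an archive onto new storage media and then verifies that every copy is bit-identical by comparing SHA-256 checksums. The folder paths are hypothetical placeholders, not a recommendation – adapt everything to your own setup.

```python
import hashlib
import shutil
from pathlib import Path

SOURCE = Path("/archive/master")     # hypothetical: the aging archival drive
DEST = Path("/archive/new_media")    # hypothetical: the freshly prepared replacement

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def migrate_and_verify() -> None:
    """Copy every file to the new medium, then prove each copy is bit-perfect."""
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        dst = DEST / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)            # copies file contents and timestamps
        if sha256(src) != sha256(dst):    # a bit-perfect copy hashes identically
            raise IOError(f"Copy verification failed: {src}")

if __name__ == "__main__":
    migrate_and_verify()
    print("Migration complete; all copies verified bit-perfect.")
```

Because the files are only copied, never decoded and re-saved, even lossy JPGs pass through this kind of migration with zero quality loss.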

This process, while a bit tedious, can be automated with scripts such as the one sketched above, and for the casual photographer or filmmaker it will not be too arduous if undertaken every five years or so. It’s another matter entirely for professionals with large libraries, or for museums, archives and anyone else with thousands or millions of image files. A great deal of effort, research and thought has been applied to this problem by those professionals, as it represents a large cost in both time and money – and no solution other than what’s been described above has been discovered to date. Some useful practices have been developed, both to preserve the integrity of the original images and to reduce the time and complexity of the upgrade process.

Methods for Successful Digital Image Archiving

A few of those practices are shared below to serve as a guide for those who are interested. Further searching will yield a large number of sites and resources that address this challenge in detail.

  • The most important aspect of a long-term archival process – one that will let you retrieve your images decades from now – is planning. Know what you want, and how much effort you are willing to put in to achieve it.
  • While this may be a significant undertaking for professionals with very large libraries, even a few simple steps will benefit the casual user and can protect family albums for decades.
  • In addition to the steps discussed above (updating storage media, OS and file systems, and image formats), another very important question is “Where do I store the backup media?” Making just one copy and keeping it on the hard drive of your computer is not sufficient. (Think about fire, theft, complete breakdown of the computer, etc.)
    • The current ‘best practices’ recommendation is the “3-2-1” approach: make 3 copies of the archival backup, store them on at least 2 different types of storage media, and place at least 1 copy off-site. A simple but practical example for a home user: one copy of your image library on your computer; a 2nd copy on a backup drive used only for archival image storage; a 3rd copy either on another hard drive stored in a vault environment (fireproof data storage or equivalent) or in cloud storage.
    • A note on cloud storage: while this can be convenient, be sure to check the fine print on liability, access, etc. from the cloud provider. This solution is typically feasible for up to a few terabytes; beyond that the cost can become significant, particularly when you consider storage for 10–20 years. Also, will the cloud provider be around in 20 years? What assurance do they provide in the event of a buyout, bankruptcy, etc.? And while storage media and file systems cease to be your problem with cloud storage (it is incumbent on the cloud provider to keep those updated), you remain personally responsible for the image format issue: the cloud vendor is only storing a set of binary files and cannot guarantee that those files will be readable in 20 years.
    • Unless you have a fairly small image library, current optical media (DVD, etc.) is impractical: even double-sided DVDs hold only about 8–9GB of formatted data. In addition, the longevity of DVDs ‘burned’ in your computer is not great (compared to the pressed DVDs you receive when you buy a movie). With DVD usage falling off noticeably, this is most likely not a good long-term archival format.
    • The best current solution for off-premise archival storage is to physically store external hard drives (or SSDs) with a well known data vaulting vendor (Iron Mountain is one example). The cost is low, and since you only need access every 5 years or so the extra cost for retrieval and re-storage (after updating the storage media) is acceptable even for the casual user.
  • Another vitally important aspect of image preservation is metadata – the information about the images. If you don’t know what you have, future retrieval can be difficult and frustrating. In addition to the very basic metadata (file name, a simple description, and a master catalog of all your images) it is highly desirable to put in place a metadata schema that can store keywords and a multitude of other information about the images. This can be invaluable to you, or to others who may want to access these images decades in the future. A full discussion of image metadata is beyond the scope of this post, but there is a wealth of information available. One notable point: while the most basic (and therefore most future-proof) still image formats in use today [JPG and TIFF] can carry a limited amount of embedded metadata (EXIF, IPTC, XMP), richer catalog information must be stored externally and cross-referenced somehow. Photoshop files, on the other hand, store extensive metadata and the image within the same file – but as discussed above this is not the best format for archival storage. There are techniques for cross-referencing information to images, from purpose-built archival image software to a simple spreadsheet that uses the filename of the image as a key to the metadata; a minimal sketch of the latter approach follows this list.
  • An important reminder: the whole purpose of an archival exercise is to be able to recover the images at a future date. So test this. Don’t just assume. After putting it all in place, pull up some images from your local offline storage every 3–6 months and see that everything works. Pull one of your archival drives from off-site storage once a year and test it to be sure you can still read everything. Set up reminders in your calendar – it’s so easy to forget until you need a set of images that was accidentally deleted from your computer, only to find out that your backup did not work as expected.
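Below is the sketch referred to in the list above: a simple CSV manifest, keyed by filename, that pairs each image with a checksum and free-form metadata fields, plus a verification routine for the periodic tests just described. The file locations and field names are illustrative assumptions only, not a standard.

```python
import csv
import hashlib
from datetime import date
from pathlib import Path

ARCHIVE = Path("/archive/master")      # hypothetical archive root
MANIFEST = ARCHIVE / "manifest.csv"    # the external, filename-keyed catalog

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest() -> None:
    """Write one metadata row per image: the cross-reference lives outside the files."""
    with MANIFEST.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "sha256", "catalogued", "description", "keywords"])
        for img in sorted(ARCHIVE.rglob("*")):
            if img.suffix.lower() in {".jpg", ".jpeg", ".tif", ".tiff"}:
                writer.writerow([str(img.relative_to(ARCHIVE)), sha256(img),
                                 date.today().isoformat(), "", ""])

def verify_manifest() -> list:
    """Re-hash every catalogued file; return the names of any missing or corrupted."""
    failures = []
    with MANIFEST.open(newline="") as f:
        for row in csv.DictReader(f):
            path = ARCHIVE / row["filename"]
            if not path.exists() or sha256(path) != row["sha256"]:
                failures.append(row["filename"])
    return failures
```

Run build_manifest() once when the archive is created (filling in descriptions and keywords by hand or from your cataloging tool), then schedule verify_manifest() as the 3–6 month test: an empty result means every file is still present and bit-identical to the day it was catalogued.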

A final note: if you look at entities whose sole activity is storing valuable images (the Library of Congress, The National Archives, etc.) you will find that [for still images] the two most popular image formats are low-compression JPG and uncompressed TIFF. It’s a good place to start…
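For those who want to adopt that starting point, the short sketch below uses the Pillow imaging library to batch-convert images to uncompressed TIFF (Pillow writes TIFF without compression unless a compression option is given). Note that proprietary camera raw files generally need a vendor tool or dedicated raw converter first, and the folder paths here are, again, hypothetical.

```python
from pathlib import Path
from PIL import Image  # Pillow: pip install Pillow

SOURCE = Path("/photos/incoming")    # hypothetical: folder of JPG originals
ARCHIVE = Path("/archive/master")    # hypothetical: archival destination

def archive_as_tiff(src: Path) -> None:
    """Write an archival copy of one image as uncompressed TIFF."""
    out = ARCHIVE / (src.stem + ".tif")
    with Image.open(src) as img:
        img.save(out, format="TIFF")  # no 'compression' option = uncompressed TIFF

if __name__ == "__main__":
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    for image in sorted(SOURCE.glob("*.jpg")):
        archive_as_tiff(image)
```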


An Interview with Dr. Vivienne Ming: Digital Disruptor, Scientist, Educator, AI Wizard…

June 27, 2016 · by parasam

During the recent Consumer Goods Forum global summit here in Cape Town, I had the opportunity to briefly chat with Vivienne about some of the issues confronting the digital disruption of this industry sector. [The original transcript has been edited for clarity and space.]

Named one of 10 Women to Watch in Tech in 2013 by Inc. Magazine, Vivienne Ming is a theoretical neuroscientist, technologist and entrepreneur. She co-founded Socos, where machine learning and cognitive neuroscience combine to maximize students’ life outcomes. Vivienne is a visiting scholar at UC Berkeley’s Redwood Center for Theoretical Neuroscience, where she pursues her research in neuroprosthetics. In her free time, Vivienne has developed a predictive model of diabetes to better manage the glucose levels of her diabetic son, and systems to predict manic episodes in bipolar sufferers. She sits on the boards of StartOut, The Palm Center, Emozia, and the Bay Area Rainbow Daycamp, and is an advisor to Credit Suisse, Cornerstone Capital, and BayesImpact. Dr. Ming also speaks frequently on issues of LGBT inclusion and gender in technology. Vivienne lives in Berkeley, CA, with her wife (and co-founder) and their two children.

Every once in a while I have the opportunity to discuss wide-ranging topics with an intellect that stimulates, is passionate and really cares about the bigger picture. Those opportunities are rarer than one would think. Although set in a somewhat unexpected venue (the elite innards of consumer capitalism), her observations on the inescapable disruption that the new wave of modern technologies will bring are prescient and thoughtful. – Ed

Ed: In a continent where there is a large focus on putting people to work, how do you see the challenges and disruptions resulting from AI, robotics, IoT, VR and other technologies playing out? These technologies, as did other disruptive technologies before them, tend to replace human workers with machine processes.

Vivienne:  There is almost no domain in which artificial intelligence (AI), machine learning and automation will not have a profound and positive impact. Medicine, farming, transportation, etc. will all benefit. There will be a huge impact on human potential, and human work will change. I think this is inevitable, that we are well on the way to this AI-enabled future. The economic incentives to push in this direction are far too strong. But we need social institutions to keep pace with this change.

We need to be building people in as sophisticated a way as we are building our technology infrastructure. There is today a large and significant business sector in educational technology: Microsoft, Apple, Google and Facebook all have serious interests here. But the current focus really is just an amplifier for existing paradigms, helping hyper-competitive moms over-prep their kids for standardized testing… which predicts nothing at all about anyone’s actual life outcome.

Whether you got into Princeton versus Brown, or didn’t get into MIT, is not really going to affect your life track all that much. Whereas the transformation that comes from making even a more modest, but broad-scale, difference in lives is huge. Let’s take right here: South Africa is probably one of the perfect examples, maybe along with India, of a region in which to make a difference.

Because of the history, we have a society here with a starting point of pretty dramatic inequality of education and preparedness. But you have an infrastructure – that same history did leave a strong infrastructure. Change a child’s life in Denmark, for example, and you probably haven’t made that enormous an impact. Do it in Haiti and the best you might hope for is that they move somewhere they can live out a fruitful and more productive life. While that may sound judgmental toward Haiti, it’s just a fact right now: there’s only so much one can achieve there because there is so little infrastructure. But do it here in South Africa, in the townships, or in the slums of Mumbai, and you can make a profound difference in that person’s life – because there is an infrastructure to capture that life and do something with it.

In terms of educational technology – doubling down on traditional approaches with AI, bringing computational aids into the classroom, using algorithms to better prepare students for testing – we have not found, either in the literature or in our own research with a 122-million-person database, that any of this makes a difference to one’s life outcome.

People that do this, that go to great colleges, do often have productive and creative lives… but not for those reasons. All of those life results are outgrowths of latent qualities. General cognitive ability, our metacognitive problem solving, our creativity, our emotional regulation, our mindset: these are the things we find are actually predictive of one’s life outcome.

These qualities are hard to teach. We tend to absorb and learn them from a lifetime of modeling others, of human observation and interaction. So I tend to take a very human perspective on technology: what is the minimum I, as someone who builds technology – AI in particular – can build that delivers those qualities into people’s lives? If we want to really be effective with this technology, then it must be simple. Simple to deploy and simple to use. Currently, a text-based system is appropriate. Today we use SMS – although it’s a hugely regressive system that is expensive: to reach 1 million kids each year via SMS costs about $5 million per year, while reaching that same number of kids using WhatsApp or a similar platform costs about $40 per year. The difference is obscene… The one technology (SMS) that has the farthest reach around the world is severely dis-incentivized… but we’re doing it anyway!

When I’m building a fancy AI, there’s no reason to pair that with an elaborate user interface, there’s no reason I need to force you to buy our testing solution that will collect tons of data, etc. It can, and should, be the simplest interface possible.

Let me give an example with the following narrative:  I pick up my daughter each day after school (she’s five) and she immediately starts sharing with me via pictures. That’s how she interacts. She sits with her friends and draws pictures. The first thing she does is show me what she’s drawn that day. I snap a photo to share with her grandmother, and at the same time I cc: MUSE (the AI system we’ve built). The image comes to us, our deep neural network starts analyzing it.

Then I go pick up my son. Much like me, he likes to talk. He loves to tell stories. We can’t upload audio via SMS (prohibitively expensive) but it’s easily done with an app. Hit a button, record 30 seconds of his story, or grab a few minutes of us talking to each other. Again, that is captured by the deep neural networks within MUSE and analyzed. Some of this AI could be done with ‘off the shelf’ applications such as those available from Google, IBM, etc. Still very sophisticated software, but it’s out there.

The problem with this method is that data about a little kid is now outside our protected system. That’s a problem. In some countries, India or China for example, parents are so desperate for their children to improve in education that they will do almost anything, but in the US everyone’s suspicious. The only sure-fire way to kill a company is to do something bad with data in health or education. So MUSE is entirely self-contained.

Once we have the data and the analysis, we combine that with a system that asks a single question each day. The text-based question and answer is the only requirement (from a participating parent using our system); the image and audio are optional. What the system is actually doing is predicting every parent’s answer to thousands of these questions, every day. This is a machine learning technique known as ‘active learning’. We came up with our own variant: when the system makes its predictions, it then asks, “If I knew the true answer to one question, which one would provide the biggest information gain?”

This can be interpreted in different ways: Shannon information (for the very wonky people reading this), or which one question will cause the most other questions to be resolved. So we ask that one question – the system can select the single most informative question to ask that day. We then do a very immodest thing: predicting these kids’ life outcomes. But sharing that directly is problematic. Not only our research, but others’ as well, has shown almost unequivocally that sharing this information produces a negative outcome. It turns out that the best thing, we believe, that can be done with this data is to use it to ask a predictive question for the advancement of the child’s learning.
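[Ed: for the technically curious, the selection step Vivienne describes can be sketched in a few lines. The version below uses maximum entropy over predicted yes/no answers – the simplest stand-in for ‘biggest information gain’ – and the questions and probabilities are invented purely for illustration.]

```python
import math

# Hypothetical: the model's predicted probability that this parent
# answers "yes" to each candidate question today.
predicted_p_yes = {
    "Did your child retell a story today?": 0.93,
    "Did your child count past ten today?": 0.51,   # model is least sure here
    "Did your child draw a picture today?": 0.88,
}

def entropy(p: float) -> float:
    """Shannon entropy (in bits) of a yes/no answer with P(yes) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Ask the one question whose true answer would carry the most information,
# i.e. the one the model is most uncertain about.
question = max(predicted_p_yes, key=lambda q: entropy(predicted_p_yes[q]))
print(question)  # -> "Did your child count past ten today?"
```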

That can lead to a positive transformation of potential outcome. Prior systems have shown that just the daily reminder to a parent to perform a specific activity with their child is beneficial – in our case, with all this data and analysis, our system can ask the one question that we predict will be the most valuable thing for your child on that day.

Now that prediction incorporates more than you might think. The first and most important thing: that the parent actually does it. That’s easy to determine: either they did or they didn’t. So we need methods to engage the parent. The second is to determine how effective our predictive model is for that child. Remember, we’re not literally predicting how long a child will live – we’re predicting how ‘gritty’ they will be (as in the research of Angela Duckworth), whether they have more of a growth or fixed mindset (Carol Dweck and others), what their working memory span will be, etc.

Turns out there are dozens and dozens of constructs that people have directly shown are strongly predictive of life outcomes. Our aim is to maximize these qualities in the kids that make use of our system. In terms of our predictive models, think of this more in an actuarial sense: on average, given everything we know about a particular kid, he is more likely to live 5 years longer, she is more likely to go 2 years further in their education, etc. The important thing, our goal, is for none of it to come true, no matter how positive the prediction is. We believe it can always be more. Everyone can be amazing. This may sound like a line, but quite frankly if you don’t believe that you shouldn’t be an educator.

Unfortunately, the education system is full of people that think people can’t change and that it’s not worth the effort… what would South Africa be if this level of positivity was embedded in the education system? I’m sure that it’s not lost on the population here (for instance all the young [mostly African] people serving this event) what their opportunities could be if they could really join the creative class. Unfortunately there are political and policy issues that come into play here, it’s not just a machine learning issue. But I can say the difference would be dramatic.

We did the following analysis in the United States: if we simply took the kind of things we do with MUSE, scaled that to every kid in the USA (50 million), and had started 25 years ago, what would the net effect on the US economy be? We didn’t do broad strokes and wishful thinking – we modeled based on actual research (make this small proactive change when kids are young, then observe that 25 years later they are earning 25% more and have better health outcomes, etc.). We took that actual research and modeled it out, region by region in the USA; demographics, everything. We found that after 25 years we would have added somewhere between $1.3 and $1.8 trillion to the US economy. That’s huge.

The challenge is how do you scale that out to really large numbers of kids, particularly in India, China, Africa, etc.? That’s where technology comes in.

Who’s invested in a kid’s life? We use the generic term ‘caregiver’ – because in many kids’ lives there isn’t a parent, or there is only one parent, a grandparent, a foster parent, etc. At any given moment in a kid’s life, hopefully there are at least two pivotal people: a caregiver and a teacher. Instead of trying to replace them with an AI, what if we empower them? What if we gave them a superpower? That’s the spirit of what we’re trying to do.

MUSE is comprised of eight completely independent highly sophisticated machine learning systems, along with integration, data, analytics and interface layers. These systems are analyzing the images, the audio, producing the questions, making the predictions. We use what’s termed a ‘deep reinforcement learning model’ – a very similar concept, at least at a high level, to Google’s “Alpha Go” AI system. This type of system can learn to play highly complex games (Go, video games, etc.) – a fundamentally different type of intelligence than IBM’s older chess-playing programs. This new type of AI actually learns how to play the game itself, as opposed to selecting procedures that have already been programmed into it.

With MUSE, essentially we are designing activities for parents to do with their children that are unique to that child and designed to provide maximum learning stimulus at that point in time. We are also designing a similar structure for teachers to do with their students, for students to do with themselves as they get older. In a similar fashion, we are involved in the workplace: the same system that can help parents get the most out of their kids, that can help teachers get the most out of their students, can also help managers get the most out of their employees. I’ve done a lot of work in labor economics, talent management, etc. – managing people is hard and most aren’t good at it.

Our approach tends to be, “Do a good job and I’ll give you a bonus, do a bad job and I’ll fire you.” We try to be more human than that, but – certainly if you’ve ever been involved in sales – that’s the game! In the TED talk that was just published we showed that methodology was actually a negative predictor of outcomes. In the workplace, your best people are not incentivized by these archaic methodologies, but are rather endogenously motivated. In fact, the research shows that the more you artificially incentivize workers, the more poorly they perform, at least in the medium to long term.

Wow! That is directly in contradiction to how we structure our businesses, our educational systems, our societies in general. It’s really hard to gain these insights if you can’t do a deep analysis of 200,000 salespeople, or 100,000 software developers like we were able to do. Ultimately the massive database of 122 million people that we built at Gild allows a scale of research and analysis that is unprecedented. That scale, and the capability of deep machine learning allows us to factor tens of thousands of variables as a basis for our predictive engines.

I just love this space – combining human potential with the capabilities of artificial intelligence. I’ve never built a totally autonomous system. Everything I’ve built is about helping a parent, helping a doctor, helping a teacher, helping a manager do better. This may come from my one remaining academic interest: cognitive neural prosthetics [a versatile method for assisting paralyzed patients and patients with amputations, recording the cognitive state of the subject rather than signals strictly related to motor execution or sensation]. Do I believe that literally jamming things in your brain can make you smarter? Unambiguously yes! I accept we won’t be doing this tomorrow… there aren’t that many volunteers for elective brain surgery… but amazing technologies are emerging, such as the neural dust being developed in the Berkeley labs, as well as ECoG, which is nearly ubiquitous but is currently used only during brain surgery for epilepsy.

What can be done with ECoG is amazing! I can tell what you’re subvocalizing, what you’re looking at, I can track your decision process, your emotional state, etc. Now, that is scary, and it should be. We should never shy away from that – but the potential is awesome.

Part of my response to the general ‘AI conundrum’ is, “Let’s beat them to the punch – why wait for them [AI machines] to become super-intelligent – why don’t we do it?” But then this becomes a human story as well. Is intelligence a commodity? Is it a function of how much I can buy? Or is it a human right, like a vaccine? I don’t think these things will ever become ubiquitous or completely free, but whoever gets there first – much as with powerful nootropics [cognitive enhancement] and other intelligence-building technologies – will enjoy a huge first-mover advantage. This could be a singularity moment: one where we potentially have a small population of super-intelligent people. What happens after that?

I know we started from a not-simple question, but at least a very immediate and real one: what are the human implications of these sorts of technologies today – technologies which, I think, 20 or 30 years from now will fundamentally change the definition of what it means to be human? That’s not a very long time period. But ultimately that’s the thing – technology changes fast. Nowadays people say that technology is changing faster than culture. We need cultural institutions to, if not keep up with the pace of change of technology, change and adapt at a much, much faster pace. We simply cannot accept that these things will figure themselves out over the next 20 years… I mean, 20 years is how long it takes to grow a new person – and by then it will be too late.

It’s like the ice melt in Antarctica: ignoring the problem leads to potentially catastrophic consequences. The same is true of AI development – this could be catastrophic for Africa, even for America or Europe. But the potential for a good outcome is so enormous – if we react in time. And this isn’t the same story as climate change, where a huge cost buys only the avoidance of cataclysm. What I’m saying here is that these costs (for human-integrated AI) pay off, they pay back. We’re talking about a much better world. The hard part is getting people to think that it’s worth investing in other peoples’ kids. That’s a bit of an ugly reality, but it’s the truth.

My approach has been: if we can scale out some sophisticated AIs and deliver them in ways that even if not truly free, but can be done at low enough costs that this can be done philanthropically, then that’s what we’ll do.

Ed:  I really appreciate your comments. You went a good way to defining what you meant by ‘Augmented Intelligence’. I had a sense of what you meant by that but this was a most informative journey.

Vivienne:  Thank you. It’s interesting – 10 years ago if you’d asked me about cognitive neural prosthetics, cybernetics, cyborgs… I would have said it’s 50 years away. So now I’ve trimmed more than 10 years off that estimate. Back then, as an academic, I thought, “Ok, what can I do today?” I don’t have access to a brain directly, can I leverage technology somehow to achieve indirect access to a brain? What could we do with Google Glass? What could we do with inferential technologies online? I know I’m not the only person that’s had an idea like this before. My very first startup, we were thinking of “Google Now” long before Google Now came along. The vision was even more aggressive:

“You’re walking down the street, and the system remembers that you read an article 40 days ago about a restaurant and it really piqued your interest. How did it know that? Because it’s modeling you. It’s effectively simulating you. Your emotional responses. It’s reading the article at the same time you’re reading the article. It’s tracking your responses, but it’s also simulating you, like a true cognitive system. [An aside: I’ll echo what many others have said – that IBM’s Cognitive Computing is not really cognitive computing… but such a thing does really exist].

So I’m walking down the street, and the system pings me and says, “You know that restaurant you were interested in? There’s an open table right now, it’s three blocks away, your calendar’s clear and I’ve just made a reservation for you.” Because the system knew, it didn’t need to ask, that you’d say yes.

Now, that’s really ambitious, especially since I was thinking about this ten years ago – but not ambitious in the sense of whether it can be done; it’s more ambitious in terms of the infrastructure. Where do you get the data from? What kind of processing can you do? I think the infrastructure problem is becoming less and less of one today, and that’s where we are seeing many changes.

You brought up the issue of a “Marketplace of Things” [n.b. Ed and Vivienne had a short exchange leading in to this interview regarding IoT and the perspective that localized data/intelligence exchange would dramatically lower bandwidth requirements for upstream delivery, lower system latency, and provide superior results.] and the issue of bandwidth: wouldn’t it be better if every light bulb, every camera, every microphone locally processed information, and then only sent off things that were actually interesting or informative? And didn’t just send it off to a single server, but offered it on an exchange – an ‘InfoNYSE’, trading every day: “I’ve got some interesting emotion data on three users in this room, anyone interested in that?”

These transactions won’t necessarily be traditional monetary transactions, possibly just data transactions. “I will trade this information for some data about your users’ interests”, or for future data about how your users responded to this information that I’m providing.

As much as I think the word ‘futurist’ is a bit overused or diffuse, I do admit to thinking about the future and what’s possible. I’ve got a room full of independent processing units that are all talking to each other… I’ve kind of got a brain in that room. I’m actually pretty skeptical of ‘general AI’ as traditionally defined. You know, like I’m going to sit down and have a conversation with this AI entity. [laughs] I think we’ll know that we’ve achieved true general AI when this entity no longer introspects, when it no longer understands its own actions – i.e. when it becomes like us.

I do think general artificial intelligence is possible, but it’s going to be kind of like a whole building ‘turning on’ – it won’t be having a conversation with us; it will be much more like our brain. I like to use this metaphor: “Our brains are like a wildly dysfunctional democracy with all of these circuits voting for different outcomes, but it’s an unequal democracy, as the votes carry different weights.” But remember that we only get to see a tiny segment of those votes: only a very small portion of that process ever comes to our conscious awareness. We do a much better job of post-hoc explaining the ‘votes’ and just making things happen than of actually explaining them in the moment.

Another metaphor I use is from the movie “Inside Out”:  except instead of a bunch of cutesy emotions embodied, imagine a room full of really crotchety old economists that hate each other and hold wildly differing opinions.

Ed: “Oh, you mean the Fed!”

Vivienne: “Yes! Imagine the Fed of your head.” This is actually not a bad model of our cognitive process. In many ways we show a lot of near-optimality, near-perfect rationality in our decision making, once you understand all the inputs to our decision-making process. And yet we can fluctuate wildly between different decisions. The classic example is a betting game: if people bet and learn they won, they will play again; if they bet and learn they lost, they will still play again; but if they bet and the outcome is not revealed, they are less likely to play again.

Which at one level is irrational, but we hold these weird and competing ideas in our head and these votes take place on a regular basis. It gets really complex: modeling cognition. But if you really want to understand people, that’s the way to do it.

This may have been a long and somewhat belabored answer to your original question regarding augmented intelligence, but the heart of it for me all started with, “What could we do if we really understood someone?” I wanted it to be, “I really understand you because I’m in your brain.” But, lacking that immediate capability, what can I infer about someone, and then what can I feed back to them to make them better?

Now, “better” may be a pretty loose and broad definition, but I’m comfortable with that if I can make people “grittier”, if I can improve their working memory span, if I can improve their ability to regulate their own emotions – not turn their emotions off, not pump them full of Ritalin, but to be aware of how their emotions impact their decision making. That leaves the person free to decide what to do with them. And that’s a world I would be pretty happy with.

I would surely disagree with how a great many people use their lives, even so empowered, but it’s a point of faith for me: I’m a hard numbers scientist, not a religious person, but there’s one point of faith. If we could improve these qualities for everyone, the world would be a better place.

Ed: Going back to the conference that we’re both attending (Consumer Goods), how can this idea of augmented intelligence, and what I would call an ‘intelligent surface’ covering our total environment (whether enabled by IoT, social media feedback, Google Now, etc.), help turn the consumer ecosystem on its end and make it truly ‘consumer-centric’? By that I mean consumers actually being in control of what goods and services are invented, let alone sold, to us. Why should firms waste time making and selling us stuff that we don’t want or need, or stuff that is bad for us?

Vivienne:  There’s a couple of different ideas that come to me. One is something I often recommend in regards to talent. While your question pertains to external customers in regards to retailers/suppliers, an analogy can be drawn to the internal interaction between employees and the firms for which they work: “Companies need to stop trying to align their employees with their business; they need to figure out how to align their business with their employees.”

This doesn’t mean that their business becomes some quixotic thing that is malleable and changeable; you do have a business that produces goods or services. For instance, let’s say your business is Campbell’s Soup – you produce food and ship it around the world. But why does this matter to Ann, Shaniqua, any other of your employees? While this may sound a bit ‘self-helpy’ or ‘business-guru’ it’s actually a big part of my philosophy: Think about the things I’ve said about education: Let’s do this crazy thing – think about what the true outcome we’re after is. I want happy, healthy, productive people – and society will reap the benefits. That is my lone definition of education. Anything else is just details.

I’m telling you I can predict those three things. Therefore any decision I make, right here in the moment, I can align against those three goals. So… should I teach this person some concept in geometry right now? Or how should I teach that concept? How does that align with those three goals?

“My four year old is still not reading, should I panic?” How does that align with those three goals? For a child like that, is that predictive of those three things? For some kids, that might be problematic, and it might be time for some kind of intervention. For others, turns out it’s not predictive at all. I didn’t do well in high school. What I didn’t do there, I did in spades in college… and then flunked completely out. After a big gap in my life I went back to college and did my entire undergraduate degree in one year – with perfect scores. Same person, same place, same things. It wasn’t what I was doing (to borrow a phrase from someone else) – it was why I was doing it.

So figuring out why it suddenly mattered to me at that time was me figuring out that it coalesced around the idea of maximizing human potential. Suddenly it had purpose, I was doing things for a reason.

So now we’re talking about doing this inside of companies, with their employees. Figuring out why your company matters to this employee. You want them to be productive – bonuses aren’t the way to do it. Pay them enough that they feel valued, and then figure out why this is important to them. And true enough, for some people that reason might be money – but to others not.

So what does that mean for our consumer relationship? My big fear is that when CEOs or CMOs hear this (human perception modeling, etc. as is used in AI development) they think, “Oh, let’s figure out why people will buy our products!” When I hear about ‘brain hacks’ I don’t think of sales or marketing, I worry about the food scientists figuring out the perfect ‘sweet spot’ of sodium, fat and carbohydrates in order to make this food maximally addictive (in a soft sense). I’m not talking about that kind of alignment. I’m saying, “What is your long term goal?”

Every one of those people on stage (at the Consumer Goods Forum) made some very impassioned speeches about how it’s about the health of consumers, their well-being, the good of society, it’s about jobs, etc. It’s shocking how bad a reputation those same firms have, at least in the USA, along those same dimensions – if that’s what they truly care about. And yet their response to the above statement is, “Gosh, we need a better branding campaign!”

Well… no, you firms are probably not nearly as aligned around those positive outcomes as you think you are; I believe that you feel that way, and that you feel abused by our assumption that you are not acting that way. I do a tremendous amount of work and advising in the area of discrimination, in human capital. You know, bias, discrimination… it’s not done by villains, it’s done by humans.

Ed: I think what’s difficult is that for true authenticity to be evident, to really act in an authentic manner, one must be able to be self-aware. It’s rare to find that brutal self-analysis, self-questioning, self-awareness. You have pointed out that many business leaders truly believe their hype, their marketing positions – whether or not there is any real accuracy in those positions.

Vivienne:  I just wrote an op-ed for the Financial Times, “The Neuro-Economics of Inequality” (not its actual title, but it’s the way I think about the issue). What happens when someone learns – really, legitimately, rationally learns – that their hard work will not pay off? Not the way it will, for example, for the American white kid down the street. So why bother? Even for a woman: so I’ve got a fancy degree, a great college education; I’m going to have to work twice as hard as the man to get the same pay, the same reward… and even then I’m never going to make it to the C-suite anyway. If I actually do get there, I’m going to have to be “that” kind of executive once I’m there… I’d rather just be a mom.

These people are not opting out, they are making rational decisions. You talk to economists… we went through this and did the research. We could prove the ‘cost of being named José in the tech industry’, the ‘cost of being black on Wall St.’ – this completely changes some of these equations when you take that into account. So, bringing this back to consumers, I don’t have ready answers for it as I’m a bit dismissive of it. “Consumerism” – that’s a bad word, isn’t it?

While I’m not sure of the resonance of this thought, what if you could take the idea I’m talking about – these big predictions, Bayesian models that give you probability distributions over a potential consumer’s outcomes? Not just ten minutes from now – or rather, ten minutes from now is only part of what I’m talking about. We’re integrating across the probability distribution of all potential life outcomes from something as minor as “they ate your bag of potato chips.”

I’m willing to bet that if you had to ‘own’, in some sense – morally if nothing else – the consequences of knowing the short-term benefit (a nice little hedonic increase in happiness), the mid-term cost (a decrease in eudaimonic happiness), the long-term decrease in liver function and so forth… your outlook might be different. If you’re (brand X), that’s just an externality. So I think there are some legitimate criticisms: why talk a fancy game when it’s just corporate responsibility?

Yes, optimizing your supply chain and reducing food waste are nice, but really that’s just because you spent money moving food around the world, some of which got wasted – you want to cut back on that. Beyond that, my observation as an outsider to this sector is that it’s about corporate responsibility, and by that I mean the marketing practices. If you really want to put your money where your mouth is, then take ownership of the long-term outcomes. Think about what it means for a nine-year-old to eat potato chips. Certain ‘health food’ enterprises have made a lot of money out of this idea, providing a healthy place in which to shop. Certainly, in comparison to a corner store in a disenfranchised neighborhood in the US they are a wildly healthy choice, but even these health shops have an entire aisle dedicated to potato chips. They’re just organic potato chips that cost three times as much. I buy them every now and then. I’m a firm believer in eating well, just eating with some degree of moderation.

That would be my approach. My approach in talent, in health, in education and a variety of domains in policy-making has been let’s leverage some amazing technology to make these seemingly miraculous predictions (which they’re not, they are really not even predictions but actuarial distributions). But these still inform us.

Right now, with this consumer, we’re balancing a number of things: revenue, sustainability, even the somewhat morbid sustainability of our consumer base; we’re balancing our brand. What’s the one action we could take right now as an organization in respect to this person that could maximize all of those things? Given their history, it’s hard to believe that it’s going to be something more than revenue, or at least something that’s going to actually cost them. If I actually believed they would be willing to take this kind of technology and apply it in a truly positive way – I’d just give it to them.

I mean, what a phenomenal human good it would be if some rather simple machine learning could help them offer a really different paradigm of ‘consuming’. What if every brand could become your best friend, and do what’s in your best interest – albeit as seen from the brand-owner’s perspective? Yeah, it’s pretty hopeful to think that could actually happen – but do I think it could happen?

That’s what we’re hoping for in some of our mental health work. By being able to make these predictions we’re hoping to intervene not just on behalf of the sufferer, but through trusted confidants as well. The way I often put it is: I would love it if our system could be everything your best friend is, but even more vigilant. What would your best friend do if they recognized the early signs of a manic episode coming on? Can we deliver that two weeks earlier and never miss the signals?

Going back, I just don’t see where big consumer companies own that responsibility. But let me pull back to my ‘Marketplace of Things’ idea. There’s a crucial aspect here: that of agents. I can have my own proxy, my own agent that can represent me. In that context, then these consumer companies can serve their own goals. I think they do have some goal in me being alive, so they can continue to earn out my customer lifetime value as a function of my lifetime. They have some value attached to me spending money in certain ways that are more sustainable, that are better for their infrastructure, etc.

I think in all those areas they could take the kinds of methodologies I’m describing and apply them in a kind of AI/machine learning. On my side, if I’m proxied by my own agent – well then we can just negotiate. My agent’s goal is really to model my health, happiness and productivity. It’s constantly seeking to maximize those in the near, medium and long term. So, it walks into a room and says, “All right, let’s have a negotiation.” Clearly, this can’t be done by people, as it all needs to happen nearly instantaneously.

I don’t think the cost of these solutions will drop low enough that we’ll literally be putting them into bags of potato chips. Firstly we must imagine changes in the infrastructure. Part of paying for shelf space in a supermarket won’t be just paying for the physical shelf space, it will be paying for putting your own agents in place on that shelf space. They’ll be relatively low cost, but probably not as disposable as something you could build into the packaging of potato chips. But simply by visiting that location, I pick up all the nutrition information I need, I can solicit information from the store about other people that are shopping (here I mean that my proxy can do all this). Then that whole system can negotiate this out, and come up with recommendations.

To me, it may seem like my phone or earpiece is simply suggesting, “How about this, how about that?” While not everyone is this way, I’m one of those people who actually enjoys going to the supermarket, feeling how it’s interacting with me in the moment. That’s something my agent can take into account as well. This becomes a story that I find more interesting. Maybe this is a set of combined interactions that takes into account various foods manufacturers, retailers – and my agent.

Today, I’m totally outside this process – I don’t get to play a role. The things I like, I just cross my fingers and hope they are in stock when I am in the store. The price that I pay: I have no participation in that whatsoever (other than choosing to purchase or not).

Another example: Kate from Facebook [in an earlier panel discussion] was telling us that Facebook gives a discount to advertisers for ads that are ‘stickier’ – that people want to see and spend more time looking at. What if I were willing to watch less enjoyable ads – if FB would share the revenue with me?

None of these are totally novel ideas, but none of them will ever come to realization if one of the fundamental sides to this negotiation never gets to participate. I’m always getting proxied by someone else. I don’t have to think that Facebook or Google are bad companies, or that Larry Page or Mark Zuckerberg are bad people for me to think that they don’t necessarily have my best interests at heart.

That would change the dynamic. But I sense that some people in the audience would see that as a loss of control, and most of them are hyper risk-averse.

Ed:  As a final thought or question, in terms of the participation between consumer and producer/retailer that you have discussed, it occurs to me that perhaps one avenue that may be attractive to these companies would be along the lines of market research.  Most new products or services are developed unilaterally, with perhaps some degree of ‘traditional market research’ where small focus groups are used for feedback. From the number of expensive flops in the marketplace it appears that this methodology is fraught with error. Could these methodologies of AI, of probability prediction, of agent communication, be brought to bear on this issue?

Vivienne:  Interesting… that brings up many new ideas. One thing that we did in the past (we’re not doing it now) was listening in on students conversing with each other online. We actually learned the material they were studying directly from the students themselves. For example, start with a system that knows nothing about biology: it learns biology from the students talking amongst themselves – including wrong ideas about biology. What we found was that when we trained the system to predict the grades the students would receive – after new students entered the class, with new material and new professors – we knew after one week what grade they would get at the end of the semester. We knew with greater and greater accuracy each week which questions they would get right or wrong on the final exam. Our goal in the exercise was to end all standardized testing. I mean, if we know how they are going to score on the test, why ever have a test?

Part of our aim there was to simulate the outcome of a lecture. There’s some similarity to what you’re discussing (producing consumer goods). Lectures are costly to develop; you get one chance to deploy each one per semester or quarter, with limited feedback, etc. You would really like to know ahead of time if a lecture is going to be useful. Before we pivoted away from this more academic aspect of education into this life-outcomes type of work, we were wondering if we could give feedback on the effectiveness of a given lecture before the lecture was given.

Hey, these five students are not going to understand any of your lecture as it’s currently presented. Either they are going to need something different, or you can explore including something else, some alternative metaphors, in your discussion.

Yes, I think it’s intriguing – and very possible – to run this sort of highly disruptive market research. Certainly in my domain I’m already talking about this: I’m asking one question each day, and can predict everyone’s answers to thousands of questions. That’s rather profound, and quite efficient. What if you had a relationship with a meaningful sample of your customers on Facebook and you could ask each of them one question a day, just as I described with my educational work? Essentially you would have a deep, insightful, rolling model of your customers all the time.

You could make predictions against this model community for future products, some basic simulations for those type of experiences. I agree, this could be very appealing to these firms.

A Digital Disruptor: An Interview with Michael Fertik

June 27, 2016 · by parasam

During the recent Consumer Goods Forum global summit here in Cape Town, I had the opportunity to briefly chat with Michael about some of the issues confronting the digital disruption of this industry sector. [The original transcript has been edited for clarity and space.]

Michael Fertik founded Reputation.com with the belief that people and businesses have the right to control and protect their online reputation and privacy. A futurist, Michael is credited with pioneering the field of online reputation management (ORM) and lauded as the world’s leading cyberthinker in digital privacy and reputation. Michael was most recently named Entrepreneur of the Year by TechAmerica, an annual award given by the technology industry trade group to an individual they feel embodies the entrepreneurial spirit that made the U.S. technology sector a global leader.

He is a member of the World Economic Forum Agenda Council on the Future of the Internet, a recipient of the World Economic Forum Technology Pioneer 2011 Award and through his leadership, the Forum named Reputation.com a Global Growth Company in 2012.

Fertik is an industry commentator with guest columns in Harvard Business Review, Reuters, Inc.com and Newsweek. Named a LinkedIn Influencer, he regularly blogs on current events as well as developments in entrepreneurship and technology. Fertik frequently appears on national and international television and radio, including the BBC, Good Morning America, Today Show, Dr. Phil, CBS Early Show, CNN, Fox, Bloomberg, and MSNBC. He is the co-author of two books, Wild West 2.0 (2010), and New York Times best seller, The Reputation Economy (2015).

Fertik founded his first Internet company while at Harvard College. He received his JD from Harvard Law School.

Ed: As we move into a hyper-connected world, where consumers are tracked almost constantly, and now passively through our interactions with an IoT-enabled universe: how do we consumers maintain some level of control and privacy over the data we provide to vendors and other data banks?

Michael:  Yes, passive sharing is actually the lion’s share of data gathering today, and will continue in the future. I think the question of privacy can be broadly broken down into two areas. One is privacy against the government and the other is privacy against ‘the other guy’.

One might call this “Big Brother” (governments) and “Little Brother” (commercial or private interests). The question of invasion of privacy by Big Brother is valid, useful and something we should care about in many parts of the world. While I, as an American, don’t worry overly about the US government’s surveillance actions (I believe that the US is out to get ‘Jihadi John’, not you or me), I do believe that many other governments’ interest in their citizens is not as benign.

I think for much of the world, worrying about the panopticon of visibility – from one side of the one-way mirror to the other side, where most of us sit – is something to think and care about. We are under surveillance by Big Brother (governments) all the time. The surveillance tools are so good, and digital technology makes so much of our data so easily surveilled by governments, that I think that battle is already lost.

What is done with that data, and how it is used is important: I believe that this access and usage should be regulated by the rule of law, and that only activities that could prove to be extremely adverse to our personal and national interests should be actively monitored and pursued.

When it comes to “Little Brother” I worry a lot. I don’t want my private life, my frailties, my strengths, my interests.. surveilled by guys I don’t know. The basic ‘bargain’ of the internet is a Faustian one: they will give you something free to use and in exchange will collect your data without your knowledge or permission for a purpose you can never know. Actually, they will collect your data without your permission and sell it to someone else for a purpose that you can never know!

I think that encryption technologies that help prevent and mitigate those activities are good and I support that. I believe that companies that promise not to do that and actually mean it, that provide real transparency, are welcome and should be supported.

I think this problem is solvable. It’s a problem that begins with technology but is also solvable by technology. I think this issue is more quickly and efficiently solvable by technology than through regulation – which is always behind the curve and slow to react. In the USA privacy is regarded as a benefit, not an absolute right; while in most of Europe it’s a constitutionally guaranteed right, on the same level as dignity. We have elements of privacy in American constitutional law that are recognized, but also massive exceptions – leading to a patchwork of protection in the USA as far as privacy goes. Remember, the constitutional protections for privacy in the USA are directed to the government, not very much towards privacy from other commercial interests or private interests. In this regard I think we have much to learn from other countries.

Interestingly, I think you can rely on incompetence as a relatively effective deterrent against public sector ‘snooping’, to some degree – so much of government is behind the curve technically. The combination of regulation, bureaucracy, lack of cohesion and a general lack of applied technical knowledge all serve to slow governments’ capability to effectively surveil their populations en masse.

However, in the commercial sector, the opposite is true. The speed, accuracy, reach and skill of private corporations, groups and individuals is awesome. For the last ten years this (individual privacy and awareness/ownership of one’s data) has been my main professional interest… and I am constantly surprised by how people can get screwed in new ways on the internet.

Ed:  Just as in branding – where many consumers actually pay a lot for clothing that, in addition to being a T-shirt, prominently advertises the brand name of the manufacturer, with no recompense for the consumer – is there any way for digital consumers to ‘own’ and have some degree of control over the use of the data they provide just through their interactions? Or are consumers forever relegated to the short end of the stick, giving up their data for free?

Michael:  I have mapped out, as have others, how the consumer can become the ‘verb’ of the sentence instead of what they currently are: the ‘object’ of the sentence. The biggest lie of the internet is that “You” matter… You are the object of the sentence, the butt of the joke. You (or the digital representation of you) are what we (the internet owners/puppeteers) buy and sell. There is nothing about the internet that needs to be this way; it is not a technical or practical requirement of this ecosystem. If we could ask the grandfathers of the internet today how this came to be, they would likely say that one of the areas in which they didn’t succeed was adding an authentication layer on top of the operational layer of the internet. And what I mean here is not what some may assume: providing access control credentials in order to use the network.

Ed:  Isn’t attribution another way of saying this? That the data provided (whether a comment or purchasing / browsing data) is attributable to a certain individual?

Michael:  Perhaps “provenance” is closer to what I mean. As an example, let’s say you buy some coffee online. The fact that you bought coffee; that you’re interested in coffee; the fact that you spend money, with a certain credit card, at a certain date and time; etc. are all things that you, the consumer, should have control over – in terms of knowing which 3rd parties may make use of this data and for what purpose. The consumer should be able to ‘barter’ this valuable information for some type of benefit – and I don’t think that means getting ‘better targeted ads!’ That explanation is a pernicious lie that is put forward by those that have only their own financial gain at heart.

What I am for is “a knowing exchange” between both parties, with at least some form of compensation for both parties in the deal. That is a libertarian principle, of which I am a staunch supporter. Perhaps users can accumulate something like ‘frequent flyer miles’ whereby the accumulated data of their online habits can be exchanged for some product or service of value to the user – as a balance against the certain value of the data that is provided to the data mining firms.

Ed:  Wouldn’t this “knowing exchange” also provide more accuracy in the provided data? As opposed to passively or surreptitiously collected data?

Michael:  Absolutely. With a knowing and willing provider, not only is the data collection process more transparent, but if an anomaly is detected (such as a significant change in consumer behavior), this can be questioned and corrected if the data was in error. A lot of noise is produced in the current one-sided data collection model and much time and expense is required to normalize the information.

Ed:  I’d like to move to a different subject and gain your perspective as one who is intimately connected to this current process of digital disruption. The confluence of AI, robotics, automation, IoT, VR, AR and other technologies that are literally exploding into practical usage has a tendency, as did other disruptive technologies before it, to supplant human workers with non-human processes. Here in Africa (and today we are speaking from Cape Town, South Africa) we have massive unemployment – between 25% and 50% among working-age young people in particular. How do you see this disruption affecting this problem, and can new jobs, new forms of work, be created by this sea change?

Michael:  The short answer is No. I think this is a one-way ratchet. I’m not saying that it may not change in a hundred years’ time, but in the next 20-50 years, I don’t see it. Many, many current jobs will be replaced by machines, and that is a fact we must deal with. I think there will be jobs for people that are educated. This makes education much, much more important in the future than it has ever been to date – and it is already hugely important. I’m not saying that only Ph.D.’s will have work, but working at all in this disrupted society will require a reasonable level of technical skill.

We are headed towards an irrecoverable loss of unskilled labor jobs. Full stop. For example, we have over a million professional drivers in the USA – virtually all of these jobs are headed for extinction as autonomous vehicles, including taxis and trucks, start replacing human drivers in the next decade. These jobs will never come back.

I do think you have a set of saving graces in the developing world that may slow down this effect in the short term: the cost of human labor is so low that in many places it will remain cheaper than technology for some time; corruption is often a bigger impediment to job growth than technology; and trade restrictions and unfair practices are also huge limiting factors. But none of this will stem the inevitable tide of permanent disruption of the current jobs market.

And this doesn’t just affect the poor and unskilled workers in developing economies: many white collar jobs are at high risk in the USA and Western Europe – financial analysts, basic lawyers, medical technicians, stock traders, etc.

I’m very bullish on the future in general, but we must be prepared to accommodate these interstitial times, and the very real effects that will result. The good news is that, for the developing world in particular, a person that has even rudimentary computer skills or other machine-interface skills will find work for some time to come – as this truly transformative disruption of so many job markets will not happen overnight.

The Digital Disruption of the Consumer Goods Ecosystem

June 27, 2016 · by parasam

 


I attended the current Global Consumer Goods Forum here in Cape Town, where about 800 delegates from around the world came to Africa – for the first time in the 60-year history of this consortium – to discuss and share experience from both the manufacturer and retailer sides of this industry.

My focus, as an emerging technologies observer, was on how various technologies including IoT, mobile devices and user behaviour are changing this industry sector. Many speakers addressed this issue, covering different aspects. Mark Curtis, Co-Founder of Fjord (Design/Innovation group of Accenture) opened the sessions with an interesting and thought-provoking presentation on how brands (and associated firms) will have to adapt in this time of exponential change.

Re-introducing concepts such as ‘wearables’ and ‘nearables’ – as devices and the network move ever more proximate to ourselves – Mark pointed out that the space for branding is ever shrinking. (Exactly how much logo space is available on a smart watch?) In addition, as so many functions and interactions in the digital age are becoming atomized and simplified, the very concept of a branded experience is changing remarkably.

As the external form factor of many of our digital interactions is becoming sublimated into the very fabric of our lives (and soon, into the fabric of our clothes…) the external ‘brand’ that you could touch and see is disappearing or becoming commoditized. Even in the world of smartphones the ecosystem is rapidly changing: the notion of separate apps (which have some brand recognition attached) for disparate functions will soon disappear. Although the respective manufacturers may not love to hear this, the notion of whether the phone is Apple or Samsung will in the end not be as important as the functionality that these devices enable.

The user won’t really care whether an app is called Vanilla or Chocolate, but rather that the combination of hardware and software will enable the user to listen to their music, in the order they want, when and where they want. Period. Or to automatically glean info from their IoT-enabled home and present the shopping list when they are in the store.

The experience is what now requires branding. Uber, AirBnB, Spotify, Amazon, etc. are all examples of something more than either a product or a service.

Christophe Beck, EVP of EcoLab, explained how the much-hyped “Internet of Things” (IoT) is moving into real and actionable functions. In this case, the large-scale deployment of sensors feeds their real-time analytic processes to provide feedback and control mechanisms to on-site engineers, creating rapid improvements in process control and quality.

The enormous importance of predictive analytics was further underscored by José González-Hurtado of IRI. The power of huge data farms, along with today’s massive computational availability, can extrapolate meaning and indicators in ways that were economically impossible only a few years ago. Discussions on the food supply chain – including sustainability, transparency, trackability, food safety and health – dominated many of the presentations. The CEOs of Campbell Soup and Whole Foods covered how their respective firms are leveraging IoT, analytics (where one gets useful information from BigData) and social networks to integrate more effectively with their customers and provide the level of information and transparency that most consumers want today regarding their food.

The panel on “Digital Disruptors” was particularly fascinating: Vivienne Ming (Founder of Socos Learning) showed us how AI (Artificial Intelligence, and more importantly Augmented Intelligence) can and will make enormous impacts within the consumer goods spectrum; Michael Fertik (Founder of Reputation.com) shared the impacts of how digital privacy (or the lack thereof…), security and data ownership are changing the way that customers interact with retailers and suppliers; and Kate Sayer (Head of Consumer Goods Strategy for Facebook) discussed the rapidly changing engagement model that consumer goods suppliers must adopt in relating to their customers.


While all of the participants acknowledged that “digital disruption” is here to stay, understanding and implementation of this technology vary widely throughout the supply chain. My prime take-away is that the manufacture, distribution, sales and consumption of end-user physical goods lag considerably behind purely digital goods and services. When questioned privately, many industry leaders accepted that they are playing “catch-up” with their purely digital counterparts.

There are a number of reasons for this, not all of which are within the control of individual manufacturers or retailers: governmental and international regulations are even further behind the curve than commercial entities in terms of rapid and encompassing adoption of digital technology; industrial process control took decades to move from purely human/mechanical control to in-house closed-loop computer control/feedback systems – the switch to a more open IoT framework must be closely balanced with a profound need for security, reliability and accountability.

As we have seen with general information on the internet, accuracy and accountability have taken a far back seat to perceived efficiency, features and ‘wow-factor’. This is ok for salacious news, music and cool new apps that one can always abandon if they don’t deliver what they promise; it’s another thing entirely when your food, drink or clothing doesn’t deliver on the promises made…

Given this, it’s most likely that more rapid adoption of IoT and other forms of ‘disruptive digital technology’ will occur in the retail sector than the manufacturing sector – and this is probably a good thing. But one thing is sure: this genie is way out of the bottle, and our collective lives will never be the same. The process of finding, buying and consuming both virtual and physical goods is changing forever.

IoT (Internet of Things): A Short Series of Observations [pt 7]: A Snapshot of an IoT-connected World in 2021

May 19, 2016 · by parasam

What Might a Snapshot of a Fully Integrated IoT World Look Like 5 Years from Now?

As we’ve seen on our short journey through the IoT landscape in these posts, the ecosystem of IoT has been under development for some time. A number of factors are accelerating the deployment, and the reality of a large-scale implementation is now upon us. Since 5 years is a forward-looking time frame that is within reason, both in terms of likely technology availability and deployment capabilities, I’ve chosen that to frame the following set of examples. While the exact scenarios may not play out precisely as envisioned, the general technology will be very close to this visualization.

The Setting

Since IoT will be international in scope, and will be deployed from 5th Avenue in mid-town Manhattan to the dusty farmlands of Namibia, more than one example place setting must be considered for this exercise. In order to convey as accurate and potentially realistic a snapshot as possible, I’m picking three real-world locations for our time-travel discussion.

  • San Francisco, CA – USA.  A dense and congested urban location, very forward thinking in terms of civic and business adoption of IT. With an upscale and sophisticated population, the cutting edge of IoT can be examined against such an environment.
  • Kigali, Rwanda – Africa.  An urban center in an equatorial African nation. With the entire country of Rwanda having a GDP of only 2% of San Francisco’s, it’s a useful comparison of how a relatively modern, urban center in Africa will implement IoT. In relative terms, the local population is literate, skilled and connected [a 70% literacy rate; Kigali is reputed to be one of the most IT-centric cities in Africa; and internet connectivity is 26% nationally (substantially higher in Kigali)].
  • Tihi, a remote farming village in the Malwa area of the central Indian state of Madhya Pradesh.  This is a small village of about 2,500 people that mostly grows soybeans, wheat, maize and so on. With an average income of $1.12 per year, this is an extremely poor region of central rural India. This little village is however ‘on the map’ due to the installation in 2002 of an ICT kiosk (named e-Choupal, from the term “choupal”, meaning ‘village square’ or ‘gathering place’), which for the first time brought internet connectivity to this previously disconnected town. IoT will be implemented here, and it will be instructive to see Tihi 5 years on…

General Assumptions

Crystal ball gazing is always an inexact science, but errors can be reduced by basing the starting point on a reasonable sense of reality, and attempting to err on the side of conservatism and caution in projecting the rollout of nascent technologies – some of which deploy faster than assumed, others much more slowly. Some very respectable consulting firms in 1995 reported that cellphones would remain a fringe device and only expected 1 million cellphones to be in use by the year 2000. In the USA alone, more than 100 million subscribers were online by that year…

I personally was one of the less than 40,000 users in the entire USA in 1984 when cellphones were only a few months old. As I drove on the freeways of Los Angeles talking on a handset (the same size as a landline, connected via coilcord to a box the size of a lunch pail) other drivers would stare and mouth “WTF??” But it aided my productivity enormously, as I sat through massive traffic jams on my 1.5 hr commute each way from home to work. I was able to speak to east coast customers, understand what technical issues would greet me once I arrived at work, etc. I personally couldn’t understand why we didn’t have 100 million subscribers by 1995… this was a transformative technology.

Here are the baseline assumptions from which the following forward-looking scenarios will be developed:

  • There are currently about the same number of deployed IoT devices as people on the planet: 6.8 billion. The number of deployed devices is expected to exceed that of the human population by the end of this year. Approximately 10 billion more devices are expected to be deployed each year over the next 5 years, on average.
  • The overall bandwidth of the world-wide internet will grow at approximately 25% per year over the next 5 years. The current overall traffic is a bit over 1 zettabyte per year [1 zettabyte = 1 million petabytes; 1 petabyte = 1 million gigabytes]. That translates to about 3 zettabytes by 2021 (a quick compounding check appears just after this list). From another perspective, it took 27 years to reach the 1 zettabyte level; in 5 more years the traffic will triple!
  • Broadband data connectivity in general (wired + wireless) is currently available to about 46% of the world’s population, and is increasing by roughly 5% per year. The wireless connectivity is expected to increase in rate, but even being conservative about 60% of the world’s population will have internet access within 5 years.
  • The cost of both computation and storage is still falling, more or less in line with Moore’s law. Hosted computation and storage is essentially available for free (or close to it) for small users (a few GB of storage, basic cloud computations). This means that a soy farmer in Tihi, once connected to the ‘net, can essentially leverage all the storage and compute resources needed to run their farm at only the cost of connectivity.
  • Advertising (the second most trafficked content on the internet after porn) will keep increasing in volume, cleverness, economic productivity and reach. As much as many may be annoyed by this, the massive infrastructure must be fed… and it’s either ads or subscription services. Take your pick. And with all the new-found time, and profits, from an IoT enabled life, maybe one just has to buy something from Amazon? (Can’t wait to see how soon they can get a drone out to Tihi to deliver seeds…)
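As a quick sanity check on the bandwidth assumption above: compounding 1 zettabyte at 25% per year for 5 years does indeed land very close to 3 zettabytes. A minimal sketch in Python, using only the assumed figures from the list (not measurements):

    base_zb = 1.0      # current annual traffic, in zettabytes (assumption above)
    growth = 0.25      # assumed annual growth rate
    years = 5

    traffic = base_zb * (1 + growth) ** years
    print(f"Projected annual traffic in {years} years: {traffic:.2f} ZB")
    # -> about 3.05 ZB, i.e. roughly triple today's volume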

The Snapshots

San Francisco  We’ll drop in on Samantha C for a few hours of her day in the spring of 2021 to see how IoT interacts with her life. Sam is a single professional who lives in a flat in the Noe Valley district. She works for a financial firm downtown that specializes in pricing and trading ‘information commodities’ – an outgrowth of online advertising, now fueled by the enormous amount of data that IoT and other networks generate.

[Image: San Francisco and the Golden Gate Bridge]

Sam’s alarm app is programmed to wake her between 4:45 and 5:15AM, based on sleep-pattern info received from the wrist band she put on before retiring the night before. (The financial day starts very early, but she’s done by 3PM.) As soon as the app plays the waking melody, the flat’s environment is signaled that she is waking. Lighting and temperature are adjusted and the espresso machine is turned on to preheat. A screen in the dressing area displays the weather prediction to aid in clothing selection. After breakfast she simply walks out the front door; the flat environment automatically turns off lights and heat, checks the perimeter and arms the security system. A status signal is sent to her smartphone. As San Francisco has one of the best public transport networks in the nation, only a few blocks’ walk is needed before boarding an electric bus that takes her almost to her office.
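A sketch of how the flat’s ‘waking’ and ‘away’ events might be wired together, using the publish/subscribe pattern (MQTT) already common in home automation. All topic names, device handles and the hub address here are hypothetical, chosen only to illustrate the event-driven flow described above:

    import json
    import paho.mqtt.client as mqtt  # widely used home-automation messaging library

    BROKER = "homehub.local"  # hypothetical address of the flat's local hub

    def on_message(client, userdata, msg):
        event = json.loads(msg.payload)
        if msg.topic == "flat/presence" and event.get("state") == "waking":
            # The alarm app has signalled a wake event: prepare the flat.
            client.publish("flat/lighting/scene", "morning")
            client.publish("flat/hvac/setpoint", "21.5")
            client.publish("flat/kitchen/espresso", "preheat")
        elif msg.topic == "flat/presence" and event.get("state") == "away":
            # The front door has closed: shut down, arm security, notify the phone.
            client.publish("flat/lighting/scene", "off")
            client.publish("flat/security/arm", "full")
            client.publish("phone/samantha/status", "flat secured")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER)
    client.subscribe("flat/presence")
    client.loop_forever()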


Traffic, which as late as 2018 was often a nightmare during rush hours, has markedly improved since the implementation in 2019 of a complete ban on private vehicles in the downtown and financial districts. Only autonomous vehicles, taxis/Ubers, small delivery vehicles and city vehicles are allowed. Street parking is no longer required, so existing streets can carry more traffic. Small ‘smart cars’ quickly ferry people from local BART stations and other public transport terminals in and out of the congestion zone very efficiently. All vehicles operating in the downtown area must carry a TSD (Traffic Signalling Device), an IoT sensor and transmitter package that updates the master traffic system every 5 seconds with vehicle position, speed, etc.
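The TSD is essentially a telemetry heartbeat. A minimal sketch of that loop, assuming a hypothetical HTTP endpoint for the master traffic system (the URL and field names are illustrative, not a real specification):

    import time
    import requests  # standard Python HTTP client

    TRAFFIC_API = "https://traffic.sf.example/v1/tsd"  # hypothetical endpoint
    VEHICLE_ID = "TSD-4471"

    def read_gps():
        # Placeholder for the onboard GPS/speed sensor read.
        return {"lat": 37.7914, "lon": -122.4016, "speed_kph": 22.5, "heading": 95}

    while True:
        payload = {"vehicle_id": VEHICLE_ID, "timestamp": time.time(), **read_gps()}
        try:
            requests.post(TRAFFIC_API, json=payload, timeout=2)
        except requests.RequestException:
            pass  # drop this beat rather than stall; the next update is 5 s away
        time.sleep(5)  # the mandated 5-second update interval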


As Samantha enters her office building, her phone acquires the local WiFi signal (not that she’s ever been out of range – SF now has blanket coverage across the entire city). As her phone logs onto the building network, her entry is noted in her office, and all of her systems are put on standby. The combination of picolocation – enabled through GPS, proximity sensors and WiFi hub triangulation – along with a ‘call and response’ security app on her phone automatically unlocks the office door as she enters just before 6AM (traders get in before the general office staff). As she enters her area within the office environment, the task lighting is adjusted and the IT systems move from standby to authentication mode. Even with the systems described above, a further authentication step of a fingerprint and a voice response to a random question (one of a small number that Sam has preprogrammed into the security algorithm) is required to open the trading applications.
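That final step is a classic challenge-response layered on top of biometrics. A toy sketch of the random-question check, assuming (my assumption, not a described implementation) that Sam’s preprogrammed answers are stored as salted hashes rather than plain text; the names and questions are purely illustrative:

    import hashlib, os, secrets

    def hash_answer(answer, salt):
        # Normalize, then hash with a salt so answers are never stored in the clear.
        return hashlib.sha256(salt + answer.strip().lower().encode()).digest()

    salt = os.urandom(16)
    challenges = {
        "first concert attended": hash_answer("the kinks", salt),
        "childhood street name": hash_answer("alvarado", salt),
    }

    def verify_spoken_answer(question, transcribed):
        # 'transcribed' would come from the speech-to-text front end.
        return secrets.compare_digest(challenges[question],
                                      hash_answer(transcribed, salt))

    print("Security prompt:", secrets.choice(list(challenges)))
    # The trading apps open only if the fingerprint AND this check both pass.
    print(verify_spoken_answer("first concert attended", "The Kinks"))  # -> True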

[Image: San Francisco skyline]

The information pricing and trading firm for which Sam works is an economic outgrowth of the massive amount of data that IoT has created over the last 5 years. The firm aggregates raw data; curates, rates and packages it into various types and timeframes; and so on. Almost all of this ‘grunt work’ is performed by AI systems: there are no clerks, engineers, financial analysts or other support staff as would have been required even a few years ago. The bulk of Sam’s work is performed with spoken voice commands to the avatars that are the front end to the AI systems that do the crunching. Her avatars have heuristically learned her particular mannerisms, vocal inflections, etc. over time, and can mostly intuit the difference between a statement and a question based just on the cadence and tonal value of her voice.

This firm is representative of many modern information brokerage service providers: with a staff of only 15 people they trade data drawn from over 5 billion distinct data sources every day, averaging a trade volume of $10 million per day. The clients range across advertising, utilities, manufacturing, traffic systems, agriculture, logistics and many more. Some of the clients are themselves other ‘info-brokers’ that further repackage the data for their own clients; others are direct consumers of the data. The data from IoT sensors is most often already aggregated to some extent by the time Sam’s firm gains access to it, but some of the data is fed directly to their harvesting networks – which often sit on top of the functional networks for which the IoT systems were initially designed. A whole new economic model has been born, in which the cost of implementing large IoT networks is partially funded by the resale of the data to firms like Samantha’s.

[Image: Transportation Network]

We’ll leave Sam in San Francisco as she walks down Bush Street for lunch, still not quite used to the absence of noise and diesel smoke of delivery trucks, congested traffic and honking taxis. The relative quiet, disturbed only by the white noise emitters of the electric vehicles (only electrics are allowed in the congestion area in SF), allows her to hear people, gulls and wind – a city achieving equilibrium through advanced technology.

.

Kigali  This relatively modern city in Rwanda might surprise those who think of Rwanda only as “The Land of a Thousand Hills”, with primeval forests inhabited by chimpanzees and bush people. For this snapshot, we’ll visit Sebahive D, a senior manager in public transport working for the city of Kigali (the capital of Rwanda). He has worked for the city his entire professional life, and is enthusiastic about the changes that are occurring as a result of the significant deployment of IoT throughout the city over the last few years. As his name means “Bringer of Good Fortune”, Sebahive is well positioned to help enable an improved transport environment for the Rwandans living in Kigali.

[Image: Kigali – this is also Africa…]

Even though Kigali is a very modern city by African standards, with a skyline that belies a city of just over a million people in a country that has been ‘reborn’ in many ways since the horrific times of 1994, many challenges remain. One of the largest is common to much of Africa: reliable urban transport. Very few people own private cars (there were only 21,000 cars in the entire country as of 2012, the latest year for which accurate figures were available), so the vast majority of people depend on public transport. The minibus taxi is the most common mode of transport, accounting for over 50% of all public transport vehicles in the country. Historically, they operated in a rather haphazard manner, with no schedules and flexible routes. Typically the taxis would just drive on routes that had proved over time to offer many passengers, hooting to attract riders and stopping wherever and whenever the driver decided. Roadworthiness, the presence of a driving license and other such basic requirements were often optional…

[Image: Kigali city center on the hill]

We’ll join Sebahive as he prepares his staff for a meeting with Toyota, which has come to Kigali to present information on its new line of “e-Quantum” minibus taxis. These vehicles are gas/electric hybrid units, with many of the same features that fully autonomous vehicles currently in use in Japan possess. The infrastructure, roads, IT networks and other basic requirements in Kigali (and most of the rest of Africa) are insufficient to support fully autonomous vehicles at this time. However, a ‘semi-autonomous’ mode has been developed, using sophisticated on-board computers supplemented by an array of inexpensive IoT devices on roads, bus stops, buildings, etc. This “SA” (Semi-Autonomous) mode, as differentiated from an “FA” (Fully-Autonomous) mode, acts a bit like an autopilot or a very clever ‘cruise control’. When activated, the vehicle will maintain the speed at which it was travelling when switched on, and will use sensors on the exterior of the minibus, as well as data received from roadside sensors, to keep the vehicle in its lane and not too close to other vehicles. The driver is still required to steer, and tapping the brake will immediately give full control back to the vehicle operator.


Rather than the oft-hazardous manner of ‘taxi-hailing’ – which basically means stepping out into traffic and waving or whistling – many small, solar-powered IoT sensor/actuators are mounted on light poles, bus stop structures, the sides of buildings, etc. Pressing the button on the device transmits a taxi request via WiFi/WiMax to the taxi signalling network, which in turn notifies any nearby taxis of a waiting passenger, and the location is displayed on the dashboard mapping display. A red LED is also illuminated on the transmitter so the waiting passenger knows the request has been sent. When the taxi is close (each taxi is constantly tracked using a combination IoT sensor/transceiver device), the LED turns green to notify the passenger to look for the nearby taxi.
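The hail point is, in effect, a small request/acknowledge state machine. A sketch that mirrors the red-LED/green-LED flow described above; the class, threshold and network interface are all invented for illustration (the real signalling protocol isn’t specified here):

    import math

    class HailPoint:
        # Solar-powered kerbside hail button: red = request sent, green = taxi near.
        NEAR_KM = 0.3  # hypothetical 'taxi is close' threshold

        def __init__(self, point_id, lat, lon, network):
            self.point_id, self.lat, self.lon = point_id, lat, lon
            self.network = network  # uplink to the taxi signalling network
            self.led = "off"

        def button_pressed(self):
            self.network.broadcast_request(self.point_id, self.lat, self.lon)
            self.led = "red"  # confirms to the passenger that the request went out

        def on_taxi_position(self, taxi_lat, taxi_lon):
            # Called as the network streams the assigned taxi's tracker updates.
            if self._distance_km(taxi_lat, taxi_lon) <= self.NEAR_KM:
                self.led = "green"  # tell the passenger to look for the taxi

        def _distance_km(self, lat, lon):
            # Equirectangular approximation; accurate enough at city scale.
            dx = math.radians(lon - self.lon) * math.cos(math.radians(self.lat))
            dy = math.radians(lat - self.lat)
            return 6371 * math.hypot(dx, dy)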

The relatively good IT networks in Kigali make the taxi signalling network possible. One of the fortuitous aspects of local geography (the city is essentially built on four large hills) is that a very good wireless network was easy to establish from overlooking locations. Although he is encouraged by the possibility of a safer and more modern fleet of taxis, Sebahive is experienced enough to wonder about the many challenges that just living in Africa offers… power outages, the occasional torrential rains, vandalism of public infrastructure, etc. Although there are only about 2,500 minibus taxis in the entire country, it often seems like most of them are, at rush hour, in the suburb of Kacyiru in Gasabo district, where the presidential palace and most of the ministries (including Sebahive’s office) are located. An IoT solution that keeps taxis, motorcycles (the single most common conveyance in Rwanda), pedestrians and very old diesel lorries from turning a roadway with lanes into an impenetrable morass of… everything… has yet to be invented!

[Image: IT Center in suburban Kigali]

Another aspect of technology, assisted by IoT, that is making life simpler, safer and more efficient is cellphone-based payment systems. With almost everyone having a smartphone today, and even the most unschooled having learned how to purchase airtime, electricity and other basic utilities and enter those credits into a phone or smart meter, the need to pay cash for transport services is fast disappearing. Variations on SnapScan, NFC technology, etc. all offer rapid and mobile payment methods in taxis or buses, speeding up transactions and reducing the opportunity for street theft. One of the many things in Sebahive’s brief is the continual push to get more and more retail establishments to offer the sale of transport coupons (just like airtime or electricity) that can be loaded into a user’s cellphone app.

IoT in Africa is a blend of modern technology with age-old customs, with a healthy dose of reality dropped in…

.

Tihi  Ravi Sham C. is a soybean farmer in one of the poorest areas of rural India, the small village of Tihi in the central Indian state of Madhya Pradesh. However, he’s a sanchalak (lead farmer) with considerable IT experience relative to his environment, having used a computer and online services since 2004 – some 17 years now. Ravi started his involvement with ITC’s “e-Choupal” service back then, and was able for the first time to gain knowledge of world-wide commodity prices rather than be at the mercy of the often unscrupulous middlemen that ran the “mandis” (physical marketplaces) in rural India. These traders would unfairly pay as little as possible to the farmers, who had no knowledge of the final selling price of their crops. The long-standing cultural, caste and other barriers to free trade in India did not help the situation either.

[Image: Indian farmers tilling the earth in Tihi]

Although the first decade of internet connectivity greatly improved life and profitability for Ravi (and the other farmers in his group area), the last few years (from 2019 onwards) have seen a huge jump in productivity. The initial period was one of knowledge enhancement: becoming aware of the supply chain, learning pricing and distribution costs, getting good weather forecasting, etc. The actual farming practice, however, wasn’t much changed from a hundred years ago. With electricity in scarce supply, almost no motorized vehicles or farm equipment, and light basically supplied by the sun, real advances toward modern farming were not easily feasible.

As India is making a massive investment in IoT, particularly in the manufacturing and supply chain sectors, an updated version of the “e-Choupal” was delivered to Ravi’s village. The original ‘gathering place’ was basically a computer that communicated over antiquated phone lines at very low speed and mostly supported text transmissions. The new “super-Choupal” is a small shipping container housing several computers, a small server array with storage, and a set of powerful WiFi/WiMax hubs. Connectivity is provided by a combination of the BBNL (Bharat Broadband Network Limited) service supported by the Indian national government, which provides fiber connectivity to many rural areas throughout the country, and a ‘Super WiFi’ service using Microsoft White Spaces technology (essentially identifying and taking advantage of unused portions of the RF spectrum in particular locations [so-called “white spaces”]) to link the super-Choupal IT container with the edge of the fiber network.

Power for the container is a combination of a large solar array on the top of the container supplemented by fuel cells. As an outgrowth of Intelligent Energy’s deal with India to provide backup power to most of the country’s rural off-grid cell towers (replacing expensive diesel generators), there has been a marked increase in availability of hydrogen as a fuel cell source. The fuel is delivered as a replaceable cartridge, minimizing transport and safety concerns. Since the super-Choupal now serves as a micro datacenter, Ravi spends more of his time running the IT hub, training other farmers and maintaining/expanding the IoT network than farming. Along with the container hub, hundreds of small soil and weather sensors have been deployed to all the surrounding village farms, giving accurate information on where and when to irrigate, etc. In addition, the local boreholes are now monitored for toxic levels of chemical and other pollutants. The power supplies that run the container also provide free electricity for locals to charge their cellphones, etc.

As each farmer harvests their crops, the soybeans, maize, etc. are bagged and then tagged with small passive IoT devices that indicate the exact type of product, amount, date packed, agreed-upon selling price and tracking information. This becomes the starting point for the supply chain, and the goods can be monitored all the way from local distribution to eventual overseas markets. The farmers can now essentially sell online, and receive electronic payment as soon as the product arrives at the local distribution hub. The lost days of each farmer physically transporting their goods to the “mandi” – and getting ripped off by greedy middlemen – are now in the past. A cooperative collection scheme sends trucks around to each village center, where the IoT-tagged crops are scanned and loaded, with each farmer immediately seeing a receipt for goods on their cellphone. The cost of the trucking is apportioned by weight and distance and billed against the proceeds of the sale of the crops. The distributor can see in almost real time where each truck is, and estimate with confidence how much grain can be promised per day from the hub.
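The trucking-cost split described above is simple arithmetic: each farmer’s share is proportional to weight carried times distance hauled. A minimal sketch, with made-up consignment figures:

    # Apportion one collection run's cost by weight x distance, per farmer.
    consignments = [
        # (farmer, weight_kg, km from village pickup to the distribution hub)
        ("Ravi", 480, 38),
        ("Anand", 350, 38),
        ("Prakash", 610, 52),
    ]
    trip_cost = 2400.0  # hypothetical total cost of the truck run

    total_kg_km = sum(w * d for _, w, d in consignments)
    for farmer, w, d in consignments:
        share = trip_cost * (w * d) / total_kg_km
        print(f"{farmer}: {share:.2f} billed against crop proceeds")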

The combination of improved farming techniques, knowledge, fair pay for crops and rapid payment has more than tripled the incomes of Ravi and his fellow farmers over the past two years. While this may seem like a small drop in the bucket of international wealth (an increase from $1.12 to $3.50 per year is hard to appreciate by first-world standards), the difference on the ground is huge. There are over 1 billion Ravis in India…

 

This concludes the series on Internet of Things – a continually evolving story. The full series is available as a downloadable PDF here. Queries may be directed to ed@parasam.com

References:

Inside the Tech Revolution That Could Be Rwanda’s Future

Rwanda information

Republic of Rwanda – Ministry of Infrastructure Final Report on Transport Sector

IoT in rural India

Among India’s Rural Poor Farming Community, Technology Is the Great Equalizer

ITC eChoupal Initiative

India’s Soybean Farmers Join the Global Village

a development chronology of tihi

Connecting Rural India : This Is How Global Internet Companies Plan To Disrupt

Bharat Broadband Network

Intelligent Energy’s Fuel Cells

IoT (Internet of Things): A Short Series of Observations [pt 6]: The Disruptive Power of IoT

May 19, 2016 · by parasam

Tsunamis, Volcanoes, Cellphones and Wireless Broadband – Disruptive Elements

Natural disruptions change the environment, and although a new equilibrium is achieved, nothing is quite the same. In recent history, the advent of the cellphone changed most of humanity, allowing a level of communication and cohesion that was never before possible. Following on that is the ever-increasing availability of sufficient wireless bandwidth to enable powerful distributed computing. We must not lose sight of the fact that with smartphones we all now walk around with mobile computers that happen to also make phone calls… And for comparison, an Apple iPhone 6 is roughly 120 million times more capable (in terms of total memory and CPU clock speed) than the computers that sent men to the moon less than 50 years ago.


The long-term disruptive effect of IoT in the next decade will eclipse any other technological revolution in history. Period. To be more precise, the combination of technologies that will encompass IoT will form the juggernaut that propels this massive disruption. These include IoT itself (the devices and directly interconnecting network fabric), AI (Artificial Intelligence), VR/AR (Virtual Reality / Augmented Reality) and DCT (Distributed Cloud Technology). Each of these technologies is rapidly maturing on its own, but they are more or less interdependent, and will collectively construct a layer of intelligence, awareness and responsiveness that will forever change how humans interact with the physical world and each other.


The number of electronic devices that are all interconnected is expected to outnumber the total population of the planet within a year, and to exceed the number of people by over 10:1 within 7 years. In mysticism and philosophy we used to think the term Akashic Records (the complete record of all human thought and emotion, recorded on an astral plane of some sort) was science fiction or the result of the ingestion of controlled substances… now we know this as Google… With virtually every moment of our lives recorded on Instagram, Facebook, etc. and the capacity of both storage and processing making possible the search and derivation of new data from all these memories, a new form of life is developing. Terminology such as avatars, virtual presence, CAL (Computer Aided Living), etc. are fast becoming part of our normal lexicon.

One of the most enduring tests of when a technology has thoroughly disrupted an existing paradigm is that of expectation. An example: if a person is blindfolded and taken to an unknown location, then put in a dark room and the blindfold removed, what happens? That person will almost immediately start feeling along the wall, about 1.5 meters off the floor, for a small protrusion, and on finding it will push or flick it, with the complete expectation that light will result. The expectation of electricity, infrastructure, light switches and lamps has become so ingrained that this action will occur for a person of any language or culture, unless they live in one of the very few isolated communities left off the grid.

IoT as the Most Powerful Disruptor

Cellphones, now available to 97% of humanity, along with wireless broadband connectivity (46% of the world has such connectivity today) are two of the most recent major disruptive elements in technology. All businesses have had to adapt, and entirely new processes and economies have resulted. The changes that have resulted from just these two things will pale in comparison to what the IoT ecosystem will cause. There are multiple reasons for this:

  • The passive nature of IoT may be the single largest formative factor in large scale disruption. All previous technologies have required active choice on the part of the user: pick up your phone, type on your computer, turn on your stereo, press a light switch, etc. With IoT just your presence, or the interaction of inanimate objects (such as freight, plants, buildings, etc.) will generate data and create new information objects that can be searched, acted upon, etc.
  • The ubiquitous nature of IoT will be such that virtually every person and thing that exists will interact in some way with at least a small portion of the IoT ecosphere. In a highly connected urban center, the penetration of IoT will be so dense that all activity of people and things will reverberate in the IoT universe as well. To take a quick sample of what will be very likely within one year, in a city such as Johannesburg, Berlin or New York: a density approaching 50 devices per square meter.
  • The almost ‘perfect storm’ of a number of collaborative technologies, including IoT itself, that all build on each other and will exponentially increase the collective capability of each technology individually. The proliferation of low-latency, high bandwidth network fabric; the availability of HPC (High Performance Computing) on a massive and economical scale (as provided by Amazon, Google, Microsoft, etc.); the development of truly spectacular applications in AI, VR, AR; and the diffusion of compute power into the network itself (DCT – Distributed Cloud Technology) all build almost a chain-reaction of performance.
  • Hyperconnectivity – the aspect of massively interconnected data stores, compute farms, sensor fabrics, etc. The re-use of data will explode, and data will most likely become a commodity – perhaps a new economic entity where large blocks of particular types of data are traded much the same as wheat futures are today on a commodity exchange. An example: a large array of temperature, humidity and soil water tension sensors is installed by a farming collective in order to better manage their irrigation process. That data, as well as being used locally, is uploaded to the farming corporation’s data center to be processed as part of their larger business activity. Very likely, that data, perhaps anonymized to some degree, will be ‘sold’ to any other data consumer that wants weather and soil data for that area (a sketch of this kind of aggregation step appears just after this list). The number of times this data will be repackaged and reused will multiply to the point that it will be impossible to track with absolute precision.
  • Adding to the notion of ‘passive engagement’ discussed above is the ingredient of ‘implied consent’ that will add millions of data points every hour to the collective ‘infosphere’ abstracted from the actual device layer of IoT. For instance, soon when you enter your car, autonomous or human-driven, the vehicle will automatically connect to the traffic management network in your region. This will not be optional; it will be a requirement just like having a license to drive, or the car having working safety features such as airbags and brakes. Your location, speed, etc. will become part of the collective data fabric of the transport sector. Your electricity usage will be monitored by the smart meter that links your home to the grid, and your consumption, on a moment-to-moment basis, will be transmitted to the electric utility… and to whomever is buying that data onward from the utility.
  • The privacy and security aspects of this massively shared amount of data have been discussed previously, but should be understood here to add to the disruptive nature of this technology. Whatever fragments of perception of privacy one had to date must be retired along with kerosene lanterns, horse-drawn buggies and steam engines. Perhaps someday we will go to ‘privacy museums’ which will depict situations and tableaus of times past where one could move, speak and interact with no one else knowing…
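To make the farm-sensor example in the list above concrete, here is a minimal sketch of the kind of anonymization and aggregation step a data broker might apply before resale: owner identity is dropped, locations are coarsened to a grid cell, and readings are averaged per hour. All field names and values are invented for illustration:

    from collections import defaultdict
    from statistics import mean

    raw_readings = [
        # (owner_id, lat, lon, hour, soil_moisture_pct) -- raw farm telemetry
        ("farm-017", -33.9142, 18.6322, 9, 31.2),
        ("farm-017", -33.9139, 18.6330, 9, 29.8),
        ("farm-052", -33.9101, 18.6401, 9, 27.5),
    ]

    def anonymize(readings, grid=0.01):
        # Drop the owner id; snap location to a coarse grid; average per hour.
        buckets = defaultdict(list)
        for _owner, lat, lon, hour, value in readings:
            cell = (round(lat / grid) * grid, round(lon / grid) * grid)
            buckets[(cell, hour)].append(value)
        return {key: mean(vals) for key, vals in buckets.items()}

    for (cell, hour), avg in anonymize(raw_readings).items():
        print(f"cell {cell}, {hour}:00 -> average soil moisture {avg:.1f}%")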

The Results and Beneficiaries of the IoT Disruption

As with each technological sea change before it, the world will adapt. The earth won’t stop turning on its axis, and the masses won’t storm the castles (well, not unless their tech stops working once they’ve come to expect it to…). Ten years from now, as we have come to appreciate, expect and benefit from the reduced friction of living in a truly connected and hyperaware universe, we will wonder how we got along in the prehistoric age. Even now, as phone booths have almost completely disappeared from the urban landscape in so many cities, we can hardly imagine life before cellphones.

Yes, the introductory phase, as with many earlier technologies, will be plagued with frustrations, disappointments, failures and other speed bumps on the way to a robust deployment. As this technology, in its largest sense, will have the most profound effect on humanity in general, we must expect a long implementation timeframe. Many moral, ethical, legal and regulatory issues must be confronted, and this always takes much, much longer than the underlying technology itself to resolve. Due to the implications of privacy, data ownership, etc. – on such a massive scale – entirely new constructs of both law and economics will be born.

In terms of economic benefit, the good news is that this technology is far too diffuse and varied for any small group of firms to control, patent or otherwise exercise significant ‘walled garden’ control over. While there is much posturing right now from large industrial firms that will likely manufacture IoT devices, and from the Big Four of IT (Google, Amazon, Microsoft, Facebook), none of these will be able to put a wall around IoT. Due in part to the very international manner of IoT, the ubiquity and breadth of sensor/actuator types, and the highly diffuse use and reuse of data, IoT will rapidly become a commodity.

We will certainly need standards, regulations and other constructs in order for the myriad of players to effectively communicate and interact without undue friction, but this has been true of railroads, telephones, highways, shipping, etc. for centuries. Therefore the beneficiaries will be spread out massively over time. All humans will benefit in some manner, as will most businesses of almost any type. Ten years on, many small businesses may not ever directly make a specific investment in IoT, but this technology will be embedded in everything they do; from ordering stock, transport, sales, etc.

Like other major innovations before it, IoT will ultimately become just part of the fabric of life for humanity. The challenge is right now, during the formative years, to attempt to match the physical technology with concomitant economic, legal and ethical guidelines so that this technology is implemented in the best possible way for all.

 

The final section of this post “A Snapshot of an IoT-connected World in 2021” may be found here.

IoT (Internet of Things): A Short Series of Observations [pt 5]: IoT from the Business Point of View

May 19, 2016 · by parasam

IoT from the Business Perspective

While much of the current reporting on IoT describes how life will change for the end user / consumer once IoT matures and many of the features and functions that IoT can enable have deployed, the other side of the coin is equally compelling. The business of IoT can be broken down into roughly three areas: the design, manufacture and sales of the IoT technology; the generalized service providers that will implement and operate this technology for their business partners; and the ‘end user’ firms that will actually use this technology to enhance their business – whether that be in transportation, technology, food, clothing, medicine or a myriad of other sectors.

The manufacture, installation and operation of billions of IoT devices will be expensive in its totality. The only reason this will happen is that overall a net positive cash flow will result. Business is not charity, and no matter how ‘cool’ some new technology is perceived to be, no one is going to roll this out for the bragging rights. Even at this nascent stage the potential results of this technology are recognized by many different areas of commerce as such a powerful fulcrum that there is a large appetite for IoT. The driving force for the entire industry is the understanding of how goods and services can be made and delivered with increased efficiency, better value and lower friction.


As the whole notion of IoT matures, several aspects of this technology that must be present initially for IoT to succeed (such as an intelligent network, as discussed in prior posts in this series) will benefit other areas of the general IT ecosystem, even those not directly involved with IoT. Distributed and powerful networks will enhance ‘normal’ computational work, reduce loads on centralized data centers and in general provide a lower latency and improved experience for all users. The concept of increased contextual awareness that IoT technology brings can benefit many current applications and processes.

Even though many of today’s sophisticated supply chains have large portions that are automated and otherwise interwoven with a degree of IT, many still have significant silos of ‘darkness’ where either there is no information, or processes must be performed by humans. For example, the logistics of importing furniture from Indonesia are rife with handoffs, instructions and commercial transactions that are verbal or, at best, handwritten notes. The fax is still ‘high technology’ in many pieces of this supply chain, and exactly what ends up in any given container – and even exactly which ship it’s on – is still often a matter of guesswork. IoT tags that are part of the original order (a retailer in Los Angeles wants 12 bookcases) can be encoded locally in Indonesia and delivered to the craftsperson, who will attach one to each completed bookcase. The items can then be tracked during the entire journey, providing everyone involved (local truckers, dockworkers, customs officials, freight security, aggregation and consignment through truck and rail in the US, etc.) with greater ease and efficiency of operations.

As IoT is in its infancy at this stage, it’s interesting to note that the largest amount of traction is in the logistics and supply chain segments of commerce. The perceived functionality of IoT is so high, and the risk from early-adopter malfunction relatively low, that many supply chain entities are jumping on board, even with some half-baked technology. As mentioned in an earlier article, temperature variation during transport is the single highest risk factor in the international delivery of wine. IoT can easily provide end-to-end monitoring of the temperature (and vibration) of every case of wine at an acceptable cost. The identification of suspect cases, and the attribution of liability to the carriers, will improve quality, lower losses and lead to reforms and changes where necessary in delivery firms seeking to avoid future liability for spoiled wine.
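Flagging suspect cases from an end-to-end temperature log is a straightforward threshold scan. A minimal sketch with invented tolerance values (a real carrier contract would define the actual limits and liability terms):

    from collections import defaultdict

    SAFE_RANGE_C = (7.0, 18.0)  # hypothetical contractual tolerance, in Celsius
    MAX_EXCURSIONS = 3          # brief spikes tolerated; sustained ones flagged

    def flag_suspect(case_id, log, carrier_by_leg):
        # log: list of (leg, temp_c) samples from the case's in-transit sensor.
        excursions = defaultdict(int)
        for leg, temp in log:
            if not SAFE_RANGE_C[0] <= temp <= SAFE_RANGE_C[1]:
                excursions[leg] += 1
        liable = [carrier_by_leg[leg] for leg, count in excursions.items()
                  if count > MAX_EXCURSIONS]
        return {"case": case_id, "suspect": bool(liable), "liable_carriers": liable}

    log = [("sea", 12.1), ("sea", 24.0), ("sea", 24.5), ("sea", 25.2),
           ("sea", 24.8), ("rail", 13.0)]
    print(flag_suspect("W-2041", log, {"sea": "OceanCo", "rail": "RailCo"}))
    # -> flags the sea leg: four readings above tolerance, liability attributed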

As with many ‘buzzwords’ in the IT industry, it will be incumbent on each company to determine how IoT fits (or does not) within that firm’s product or service offerings. This technology is still in the very early stages of significant implementation, and many regulatory, legal, ethical and commercial aspects of how IoT will interact within the larger existing ecosystems of business, finance and law have yet to be worked out. Early adoption has advantages but also risk and increased costs. Rational evaluation and clear analysis will, as always, be the best way forward.

The next section of this post “The Disruptive Power of IoT” may be found here.
