DI – Disintermediation, 5 years on…

October 12, 2017 · by parasam

I wrote this original article on disintermediation more than 5 years ago (early 2012), and while most of the comments are as true today as then, I wanted to expand a bit now that time and technology have moved on.

The two newest technologies that are now mature enough to act as major fulcrums of further disintermediation on a large scale are AI and blockchain. Both of these have been in development for more than 5 years of course, but these technologies have made a jump in capability, scale and applicability in the last 1-2 years that is changing the entire landscape. Artificial Intelligence (AI) – or perhaps a better term “Augmented Intelligence” – is changing forever the man/machine interface, bringing machine learning to aid human endeavors in a manner that will never be untwined. Blockchain technology is the foundation (originally developed in the arcane mathematical world of cryptography) for digital currencies and other transactions of value.

AI

While the popular term is “AI” or Artificial Intelligence, a better description is “Deep Machine Learning”. Essentially the machine (computer, or rather a whole pile of them…) is given a problem to solve, a set of algorithms to use as a methodology, and a dataset for training. After a number of iterations and tunings, the machine usually refines its response such that the ‘problem’ can be solved accurately and repeatably. The process, as well as a recently presented theory on how the ‘deep neural networks’ of machine learning operate, is discussed in this excellent article.
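
As a concrete illustration of the “problem, algorithm, training data, iterations” loop described above, here is a minimal sketch in Python. The task (recovering a noisy linear mapping), the learning rate and the iteration count are purely illustrative assumptions, not a description of any specific system mentioned in this post.

```python
# A minimal sketch of the training loop: problem + algorithm + training data,
# refined over many iterations. Plain NumPy, toy problem only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))                 # training dataset: 256 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=256)   # the 'problem': recover the hidden mapping

w = np.zeros(3)                               # the model starts knowing nothing
lr = 0.1                                      # a tuning parameter (learning rate)

for step in range(200):                       # iterations: predict, measure error, adjust
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)          # gradient of the mean-squared error
    w -= lr * grad

print(w)  # after enough iterations w approaches true_w: the problem is solved reliably
```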

The applications for AI are almost unlimited. Some of the popular and original use cases are human voice recognition and pattern recognition tasks that for many years were thought to be too difficult for computers to perform with a high degree of accuracy. Pattern recognition has now improved to the point where a machine can often outperform a human, and voice recognition is now encapsulated in the Amazon ‘Echo’ device as a home appliance. Many other tasks, particularly ones where the machine assists a human (Augmented Intelligence) by presenting likely possibilities reduced from extremely large and complex datasets, will profoundly change human activity and work. Such examples include medical diagnostics (an AI system can read every journal ever written, compare it to a history taken by a medical diagnostician, and suggest likely scenarios that could include data the medical professional couldn’t possibly have the time to absorb); fact-checking news stories against many wide-ranging sources; performing financial analysis; writing contracts; etc.

It’s easy to see that many current ‘professions’ will likely be disrupted or disintermediated… corporate law, medical research, scientific testing, pharmaceutical drug trials, manufacturing quality control (AI connected to robotics), and so on. The incredible speed and storage capability of modern computational networks provide the foundation for an ever-increasing usage of AI at a continually falling price. Already, apps for mobile devices can scan thousands of images, suggest keywords, flag collections of similar images, etc. [EyeEm Vision].

Another area where AI is utilized is in autonomous vehicles (self-driving cars). Hundreds of inputs from sensors, cameras, etc. are synthesized and analyzed thousands of times per second in order to safely pilot the vehicle. One of the fundamental powers of AI is the continual learning that takes place. The larger the dataset – the more of a given set of experiences it contains – the better the machine becomes at optimizing its outputs. For instance, every Tesla car gathers massive amounts of data from every drive the car takes, and continually uploads that data to the servers at the factory. The combined experience of how thousands of vehicles respond to varying road and traffic conditions is learned and then shared (downloaded) to every vehicle. So each car in the entire fleet benefits from everything learned by every car. This is impossible to replicate with individual human drivers.

The potential use cases for this new technology are almost unbounded. Some challenging issues likely can only be solved with advanced machine learning. One of these is the (today) seemingly intractable problem of updating and securing a massive IoT (Internet of Things) network. Due to the very low cost, embedded nature, lack of human interface, etc. that are characteristic of most IoT devices, it’s impossible to “patch” or otherwise update individual sensors or actuators that are discovered to have either functional or security flaws after deployment. By embedding intelligence into the connecting fabric of the network itself – the fabric that links the IoT devices to the nodes or computers that utilize their data – even sub-optimal devices can be ‘corrected’ by the network: incorrect data can be normalized, and attempts at intrusion or deliberate altering of data can be detected and mitigated.
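
A minimal sketch of that “intelligence in the network fabric” idea: a gateway that screens readings from sensors it cannot patch, normalizing obvious outliers and flagging possible tampering. The rolling window and the 3-sigma threshold are illustrative assumptions rather than a reference to any particular IoT platform.

```python
# A gateway-side filter that 'corrects' readings from IoT sensors it cannot update.
from collections import deque
import statistics

class GatewayFilter:
    def __init__(self, window=100, k=3.0):
        self.history = deque(maxlen=window)   # recent readings from this sensor
        self.k = k                            # how many std-devs counts as anomalous

    def ingest(self, reading):
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            std = statistics.pstdev(self.history) or 1e-9
            if abs(reading - mean) > self.k * std:
                # Possible faulty sensor or tampered value: normalize it instead of
                # forwarding it unchanged, and flag it for review upstream.
                self.history.append(mean)
                return mean, True
        self.history.append(reading)
        return reading, False
```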

Blockchain

The blockchain technology that is often discussed today, usually in the same sentence as Bitcoin or Ethereum, is a foundational platform that allows secure and traceable transactions of value. Essentially each set of transactions is a “block”, and these are distributed widely in an encrypted format for redundancy and security. These transactions are “chained” together, forming the “blockchain”. Since the ‘public ledger’ of these groups of transactions (the blockchain) is effectively impossible to alter, the security of every transaction is ensured. This article explains in more detail. While the initial focus of blockchain technology has been on so-called ‘cryptocurrencies’, there are many other uses for this secure transactional technology. By using the existing internet connectivity, items of value can be securely distributed practically anywhere, to anyone.
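
A minimal sketch of the chaining principle, assuming nothing beyond Python’s standard library: each block stores the hash of the previous block, so altering any earlier transaction invalidates every later hash. This illustrates only the hash-linking idea, not the full Bitcoin or Ethereum protocols (no proof-of-work, signatures or networking).

```python
import hashlib, json, time

def make_block(transactions, prev_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,   # the 'set of transactions' in this block
        "prev_hash": prev_hash,         # link to the previous block = the 'chain'
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
block1 = make_block(["alice -> bob: 10"], prev_hash=genesis["hash"])
# Tampering with genesis now breaks the link stored in block1, which every
# node holding a copy of the ledger can detect.
```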

One of the most obvious instances of transferring items of value over the internet is intellectual property: i.e. artistic works such as books, images, movies, etc. Today the wide-scale distribution of all of these creative works is handled by a few ‘middlemen’ such as Amazon, iTunes, etc. This introduces two major forms of restriction: the physical bottleneck of client-server networking, where every consumer must pull from a centrally controlled repository; and the financial bottleneck of unitary control over distribution, with the associated profits and added expense to the consumer.

Even before blockchain, various artists have been exploring making more direct connections with their consumers, taking more control over the distribution of their art, and changing the marketing process and value chain. Interestingly, the most successful (particularly in the world of music) are all women: Taylor Swift, Beyoncé, Lady Gaga. Each is now marketing on a direct-to-fan basis via social media, with followings of millions of consumers. A natural next step will be direct delivery of content to these same users via blockchain – which will have an even larger effect on the music industry than iTunes ever did.

SingularDTV is attempting the first-ever feature film to be both funded and distributed entirely on a blockchain platform. The world of decentralized distribution is upon us, and will forever change the landscape of intellectual property distribution and monetization. The full effects of this are deep and wide-ranging, and would occupy an entire post… (maybe soon).

In summation, these two notable technologies will continue the democratization of data, originally begun with the printing press, and allow even more users to access information, entertainment and items of value without the constraints of a narrow and inflexible distribution network controlled by a few.

Objective Photography is an Oxymoron (all photos lie…)

August 18, 2016 · by parasam

There is no such thing as an objective photograph

A recent article in the Wall Street Journal (here) entitled “When Pictures Are Too Perfect” prompted this post. The premise of the article is that too much ‘manipulation’ (i.e. Photoshopping) is present in many of today’s images, particularly in photojournalism and photo contests. There is evidently an arbitrary standard (that no one seems able to objectively define) which posits that essentially only an image ‘straight out of the camera’ is ‘honest’ or acceptable – particularly if one is a photojournalist or is entering an image into some form of competition. Examples are given, such as Harry Fisch having a top prize from National Geographic (for the image “Preparing the Prayers at the Ganges”) taken away because he digitally removed an extraneous plastic bag from an unimportant area of the image. Steve McCurry, best known for his iconic “Afghan Girl” photo on the cover of National Geographic magazine in 1985, was accused of digital manipulation of some images shot in 1983 in Bangladesh and India.

On the whole, I find this absurd and the logic behind such attempts at defining an ‘objective photograph’ fatally flawed. From a purely scientific point of view, there is absolutely no such thing as an ‘objective’ photograph – for a host of reasons. All photographs lie, permanently and absolutely. The only distinction is by how much, and in how many areas.

The First Lie: Framing

The very nature of photography, from the earliest days until now, has at its core an essential feature: the frame. Only a certain amount of what can be seen by the photographer can be captured as an image. There are four edges to every photograph. Whether the final ‘edges’ presented to the viewer are due to the limitations of the camera/film/image sensor, or cropping during the editing process, is immaterial. The initial choice of frame is made by the photographer, in concert with the camera in use, which presents physical limitations that cannot be exceeded. The choice of frame is completely subjective: it is the eye/brain/intuition of the photographer that decides in the moment where to point the camera, what to include in the frame. Is pivoting the camera a few degrees to the left to avoid an unsightly telephone pole “unwarranted digital manipulation”? Most news editors and photo contest judges would probably say no. But what if the exact same result is obtained by cropping the image during editing? Already we start to see disagreement in the literature.

If Mr. Fisch had simply walked over and picked up the offending plastic bag before exposing the image, he would likely be the deserved recipient of his 1st place prize from National Geographic; but as he removed the bag during editing, his photograph was disqualified. By this same logic, consider Leonardo da Vinci’s “Mona Lisa”: there is a balustrade with two columns behind her. There is perfect symmetry in the placement of Lisa Gherardini (the presumed model) between the columns, which helps frame the subject. Painting takes time; it is likely that a bird would land on the balustrade from time to time. Was Leonardo supposed to include the bird or not? Did he ‘manipulate’ the image by only including the parts of the scene that were important to the composition? Would any editor or judge dare ask him today, if that were possible?

“So-So Happy!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

“So-So Happy… NOT!” Framing and large depth-of-field was necessary in this shot to tell the story: Background and facial expression of the subject both needed to be in focus. [©2012 Ed Elliott / Clearlight Imagery]

A combination example of framing and depth-of-field. One photographer is standing 6 ft further away (from my camera position) than the other, but the foreshortening of the 200mm telephoto appears to depict ‘dueling photographers’. [©2012 Ed Elliott / Clearlight Imagery]

The Second Lie: The Lens

No photograph can occur without a lens. Every lens has certain irrefutable properties: focal length and maximum aperture being the most important. Each of these parameters imparts a vital, and subjective, aspect to the image subsequently captured. Since the ‘lingua franca’ of focal length is the ubiquitous 35mm camera, we can generalize here: 50mm being the so-called ‘normal’ lens; 35mm is considered ‘wide angle’, 24mm ‘very wide angle’ and 10mm a ‘fisheye’. Going in the other direction, 85mm is often considered a ‘portrait’ lens (slight close-up), 105mm a medium ‘telephoto’, 200mm a ‘telephoto’ and anything beyond is for sports or space exploration. Each focal length takes in a wider or narrower view of the scene, and longer focal lengths progressively shorten the depth of field. Wide angle lenses tend to bring the entire field of view into sharp focus, while telephotos blur out everything except what the photographer has selected as the prime focus point.
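
A back-of-the-envelope sketch of that focal length / depth-of-field relationship, using the standard hyperfocal approximations. The 0.03 mm circle of confusion is the usual full-frame assumption, and the subject distance and aperture below are illustrative.

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    # Hyperfocal distance and near/far limits of acceptable sharpness (thin-lens approximation)
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    far = subject_mm * (H - focal_mm) / (H - subject_mm) if subject_mm < H else float("inf")
    return near / 1000.0, far / 1000.0   # metres

for focal in (24, 50, 200):              # wide angle, normal, telephoto
    near, far = depth_of_field(focal, f_number=4, subject_mm=3000)   # subject at 3 m, f/4
    print(f"{focal}mm: sharp from {near:.2f} m to {far:.2f} m")
# 24mm keeps several metres in focus; 200mm leaves only a few centimetres sharp.
```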

Normal lens [©2016 Ed Elliott / Clearlight Imagery]

Telephoto lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

Wide Angle lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) [©2016 Ed Elliott / Clearlight Imagery]

FishEye lens (for illustrative purpose only, please ignore the soft focus at edges caused by lens adapter) – curvature and edge distortions are normal for such an extreme angle-of-view lens   [©2016 Ed Elliott / Clearlight Imagery]

In addition, each lens type distorts the field of view noticeably: wide angle lenses tend to exaggerate the distance between foreground and background, making the closer objects in the frame look larger than they actually are, and making distant objects even smaller. Telephoto lenses have the opposite effect, foreshortening the image and ‘flattening’ the resulting picture. For example, in a long telephoto shot of a tree on a ridge backlit by the moon, both the tree and the moon can be tack sharp, and the moon appears to sit directly behind the tree, even though it is 239,000 miles away.

The other major ‘subjective’ quality of any lens is the aperture chosen by the photographer. Commonly known as the “f-stop”, this is the ratio of the focal length of the lens divided by the diameter of the ‘entrance pupil’ (the size of the hole that the aperture diaphragm is set to on a given capture). The maximum aperture (the largest ‘hole’ that can be set by the photographer) depends on the diameter of the lens itself, in relation to the focal length. For example, with a ‘normal’ 50mm lens, if the lens is 25mm in diameter then the maximum aperture is f/2 (50/25). Larger apertures (lower f-stop ratios) require larger lenses, and are correspondingly heavier, more difficult to use and more expensive. One can see that an f/2 lens for a 50mm focal length is not that huge; to obtain the same f/2 ratio for a 200mm telephoto would require a lens that is at least 100mm (4 in) in diameter – making such a device huge, heavy and obscenely expensive. As a quick comparison (Nikon lenses, full frame, prime lens, priced from B&H Photo – a discount photo equipment supplier): a 50mm f/2.8 lens costs $300, while the same lens in f/1.2 costs $700. A 400mm telephoto in f/5.6 would be $2,200, while an identical focal length with a maximum aperture of f/2.8 will set you back a little over $12,000.
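
A quick arithmetic check of the ratio described above: the entrance pupil must be the focal length divided by the f-number across, which is why fast telephotos balloon in size and price. The values below simply mirror the examples in the text.

```python
def pupil_diameter_mm(focal_mm, f_number):
    # f-number = focal length / entrance pupil diameter, so the pupil is focal / f-number
    return focal_mm / f_number

print(pupil_diameter_mm(50, 2))      # 25.0 mm  -> a modest front element
print(pupil_diameter_mm(200, 2))     # 100.0 mm -> roughly 4 inches of glass
print(pupil_diameter_mm(400, 2.8))   # ~142.9 mm -> the $12,000 class of lens
```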

Exaggeration of object size with wide angle lens: farther objects appear much smaller than in ‘reality’. [©2011 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image as a result of long telephoto lens (f/8, 400mm lens) – the crane is hundreds of feet closer to the camera than the dark buildings behind, but looks like they are directly adjacent. [©2013 Ed Elliott / Clearlight Imagery]

Depth of field with shallow aperture (f/2.4) – in this case even with a wide angle lens the background is out of focus due to the large distance between the foreground and the background (in this case the Hudson River separated the two…) [©2013 Ed Elliott / Clearlight Imagery]

Flattening and foreshortening of the image with a long telephoto lens. The ship is almost 1/4 mile further away than the green roadway sign, yet appears to be directly behind it… (f/4, 400mm) [©2013 Ed Elliott / Clearlight Imagery]

Wide angle lens (14-24mm zoom lens, set at 16mm – f/2.8) [©2012 Ed Elliott / Clearlight Imagery]

Shallow depth of field due to large aperture on telephoto lens (f/4 – 200mm lens on full-frame 35mm DSLR) [©2012 Ed Elliott / Clearlight Imagery]

Wide angle shot, demonstrating sharp focus from foreground to the background. Also exaggeration of perspective makes the bow of the vessel appear much taller than the stern. [©2013 Ed Elliott / Clearlight Imagery]

The bottom line is that the choice of lens and aperture is a controlling element of the photographer (or her pocketbook) – and has a huge effect on the image taken with that lens and setting. None of these choices can be deemed to be either ‘analog’ or ‘digital’ manipulation of the image during editing, but they have arguably a greater effect on the outcome, message, impact and tenor of the photograph than anything that can be done subsequently in the darkroom (whether chemical or digital).

The Third Lie: Shutter Speed

Every exposure is a product of two factors: Light X Time. The amount of light that strikes a negative (or digital sensor) is governed solely by the selected aperture (and possibly by any additional filters placed in front of the lens); the duration for which the light is allowed to impinge on the negative is set by the shutter speed. While the main purpose of setting the shutter speed is to produce the correct exposure once the aperture has been selected (to avoid either under- or over-exposing the image), there is a huge secondary effect of shutter speed on any motion of either the camera or objects in the frame. Fast shutter speeds (over 1/125th of a second with a normal lens) will essentially freeze any motion, while slow shutter speeds will result in ‘shake’, ‘blur’ and other motion artifacts. While some of these can be just annoying, in the hands of a skilled photographer motion artifacts tell a story. And likewise a ‘freeze-frame’ (from a very fast shutter speed) can distort reality in the other direction, giving the observer a point of view that the human eye could never glimpse in reality. The hours-long time exposure of star trails or the suspended-animation shot of a bullet about to pierce a balloon are both ‘manipulations’ of reality – but they take place as the image is formed, not in the darkroom. The subjective experience of a football distorted as the kicker’s foot impacts it – locked in time by a shutter speed of 1/2000th second – is very different from the same shot of the kicker at 1/15th second, where his leg is a blurry arc against a sharp background of grass. Two entirely different stories, just from shutter speed choice.
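
A small sketch of the Light X Time trade, using the standard exposure-value relation EV = log2(N²/t). The specific aperture/shutter pairs are illustrative: they admit (nearly) the same total light, yet render motion completely differently.

```python
from math import log2

def exposure_value(f_number, shutter_s):
    # EV = log2(N^2 / t): equal EV means equal total exposure on the negative/sensor
    return log2(f_number ** 2 / shutter_s)

print(exposure_value(2.8, 1/2000))   # ~13.9 EV: wide open, motion frozen
print(exposure_value(8.0, 1/250))    # ~14.0 EV: same light, more depth of field
print(exposure_value(8.0, 1/15))     # ~9.9 EV: about 4 stops more light, motion blurred
```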

Fast shutter speed to stop action [©2013 Ed Elliott / Clearlight Imagery]

Combination of two effects: fast shutter speed to stop motion (but not too fast, slight blurring of left foot imparts motion) – and shallow depth of field to render background soft-focus (f/4, 200mm lens) [©2013 Ed Elliott / Clearlight Imagery]

High shutter speed to freeze the motion. 1/2000 sec. [©2012 Ed Elliott / Clearlight Imagery]

Fast shutter speed to provide clarity and freeze the motion. 1/800 sec @ f/8 [©2012 Ed Elliott / Clearlight Imagery]

Although shot without a tripod, I wanted as fine-grained a result as possible, so I took advantage of the stillness of the subjects and a convenient wall on which to rest the camera. 2 sec exposure with ISO 500 at f/8 to keep the depth of field. [©2012 Ed Elliott / Clearlight Imagery]

The Fourth Lie: Film (or Sensor) Sensitivity [ISO]

As if Pinocchio’s nose hasn’t grown long enough already, we have yet another ‘distortion’ of reality that every image contains as a basic building block: that of film/sensor sensitivity. While we have discussed exposure as a product of Light Intensity X Time of Exposure, one further parameter remains. A so-called ‘correct’ exposure is one that has a balance of tonal values, and (more or less) represents the tonal values of the scene that was photographed. This means essentially that blacks, shadows, mid-tones, highlights and whites are all apparent and distinct in the resulting photograph, and the contrast values are more or less in line with that of the original scene. The sensitivity of the film (or digital sensor) is critical in this regard. Very sensitive film will allow a correct image with a lower exposure (either a smaller aperture, faster shutter speed, or both), while a ‘slow’ [insensitive] film will require the opposite.
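
Sensitivity completes the exposure triangle: each doubling of ISO halves the shutter time (or closes the aperture one stop) needed for the same tonal result. A small sketch, with an illustrative starting exposure rather than a measured scene:

```python
from math import log2

def equivalent_shutter(base_shutter_s, base_iso, new_iso):
    # Each stop of extra sensitivity halves the required exposure time
    stops = log2(new_iso / base_iso)          # e.g. ISO 100 -> 6400 is 6 stops
    return base_shutter_s / (2 ** stops)

print(equivalent_shutter(1/2, 100, 6400))     # 0.5 s at ISO 100 ~ 1/128 s at ISO 6400
```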

A high ISO was necessary to capture the image during late twilight. In addition a slow shutter speed was used – 1/15 sec with ISO of 6400. [©2011 Ed Elliott / Clearlight Imagery]

Low ISO (50) to achieve relatively fine grain and best possible resolution (this was a cellphone shot). [©2015 Ed Elliott / Clearlight Imagery]

Cellphone image at dusk, resulting in ISO 800 with 1/15 sec exposure. Taken from a parking garage, the highlight on the palm is from car headlights. [©2012 Ed Elliott / Clearlight Imagery]

Night photography often requires very high ISO values and slow shutter speeds. The resulting grain can provide texture as opposed to being a detriment to the shot. [©2012 Ed Elliott / Clearlight Imagery]

Fine grain achieved with low ISO of 50. [©2012 Ed Elliott / Clearlight Imagery]

Slow ISO setting for high resolution, minimal grain (ISO 50) [©2012 Ed Elliott / Clearlight Imagery]

Sometimes you frame the shot and do the best you can with the other parameters – and it works. Cellphone image at night meant slow shutter speed (1/15 sec) and lots of grain with ISO 800 – but the resultant grain and blurring did not detract from the result. [©2012 Ed Elliott / Clearlight Imagery]

A corollary to film sensitivity is grain (in film) or noise (in digital sensors). If you desire a fine-grained, super sharp negative, then you must use a slow film. If you need a fast film that can produce an acceptable image in low light without a flash, say for photojournalism or surveillance work, then you must use a fast film and accept grain the size of rice in some cases… Life is all about compromise. Again, the final outcome is subjective, and totally within the control of the accomplished photographer, but this exists completely outside the darkroom (or Photoshop). Two identical scenes shot with widely disparate ISO films (or sensor settings) will give very different results. A slow ISO will produce a very sharp, super-realistic image, while a very fast ISO will be grainy, somewhat fuzzy, and can tend towards surrealism if pushed to an extreme. [technical note: the arithmetic portion of the ISO rating is the same as the older ASA rating scale; I use the current nomenclature]

Editing: White Lies, Black Lies, Duotone and Technicolor…

In my personal work as a streetphotographer (my gallery is here) I tell ‘white lies’ all the time in editing. By that I mean the small adjustments to focus, color balance, contrast, highlight and shadow balance, etc. This is a highly personal and subjective experience. I learned from master photographers (including Ansel Adams), books and much trial and even more error… to pre-visualize my shots, and mentally place the components of the image on the Zone Scale as accurately as possible with the equipment and lighting on hand. This process was most helpful when in university with no money – every shot cost, both in film and developing ingredients. I would often choose between beer and film… film always won… fewer friends, more images… not quite sure about that choice, but I was fascinated with imagery. While pre-visualization is, I feel, an important magic and can make the difference between an ok image and a great one – it’s not an easy process to follow in candid streetphotography, where the recognition of a potential shot and the chance to grab it is often 1-2 seconds.

This results, quite frequently, in things in the image not being where I imagined them in terms of composition, lighting, color balance, etc. So enter my ‘white lies’. I used to accomplish this in the darkroom with push/pull of developing, and significant tweaking during printing (burning, dodging, different choices of contrast printing papers, etc.). Now I use Photoshop. I’m not particularly an Adobe disciple, but I started with this program in 1989 with version 0.87 (known as part of Barneyscan, on my Mac Classic) and we’ve kind of grown up together… I just haven’t bothered to learn another program. It does what I need; I’m sure that I only know about 20% of its current capabilities, but that’s enough for my requirements.

The other extreme that can be accomplished by Photoshop experts (and I use the term generically here) is the ‘black lie’. This is where one puts Oprah’s head on someone else’s body, performs ‘digital liposuction’ to the extent that Lena Dunham and Adele both scream “enough!”, and many celebrities find their faces applied to actors and scenes (typically in North Hollywood) where they have never been, nor would want to be… There’s actually a great novel by the late Michael Crichton [Rising Sun, 1992] that contains a detailed subplot about digital photomanipulation of video imagery. At that time, it took a supercomputer to accomplish the detailed and sophisticated retouching of long video sequences – today tools such as Photoshop and After Effects could accomplish this on a desktop workstation in a matter of hours.

"Duotone" technique [background masked and converted to monochrome to focus the viewer on the foreground image]

“Duotone” technique [background masked and converted to monochrome to focus the viewer on the foreground image] [©2016 Ed Elliott / Clearlight Imagery]

A technique I frequently use is Duotone – and even here I am being technically inaccurate. What I mean by this is separating the object of interest from the background by masking the subject and turning the rest of the image into black and white. The juxtaposition of a color subject against a monochrome background helps isolate and focus the viewer’s attention on the subject. Frequently in streetphotography the opportunity to place the subject against a non-intrusive background doesn’t exist, so this technique is quite effective in ‘turning down’ the importance of the often busy and distracting surrounds. [Technically the term duotone is used for printing the entire image in gradations of only two colors]. Is this ‘manipulation’? Yes. Does it materially detract from, or alter the intent of, the original image that I pre-visualized in my head? No. I firmly stand behind this point of view, that all photographs “lie” to one extent or another, and any tool that the photographer has at his or her hand to generate a final image that is in accordance with the original intent is fair game. What matters is the act of conveying the vision of the photographer to the brain of the viewer. Period.
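
For readers curious how the selective-color effect is built, here is a minimal sketch: keep the masked subject in color and render everything else monochrome. The filenames and the mask itself (white = subject, black = background) are assumptions; in practice the mask is painted by hand in an editor such as Photoshop.

```python
from PIL import Image
import numpy as np

# Load the photograph and a same-sized grayscale mask (white = subject, black = background)
photo = np.asarray(Image.open("street_shot.jpg").convert("RGB"), dtype=float)
mask = np.asarray(Image.open("subject_mask.png").convert("L"), dtype=float) / 255.0

# Monochrome version of the whole frame
gray = photo.mean(axis=2, keepdims=True).repeat(3, axis=2)

# Keep color where the mask is white, grayscale where it is black
blend = mask[..., None] * photo + (1 - mask[..., None]) * gray

Image.fromarray(blend.astype(np.uint8)).save("duotone_style.jpg")
```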

The ‘photograph’ is just the medium that transports that image. At the end of the day, a photo is a conglomeration of pixels (either printed or glowing) that transmit photons to the human visual system, and ultimately end up in the visual cortex in the back of the human brain. That is where we actually “see”.

Early photographs (and motion picture films) were only available in black & white. When color photography first came along, the colors were not ‘natural’. As emulsions improved things got better, but even so there was a marked deviation from ‘natural’ that was actually ‘designed in’ by Kodak and other film manufacturers. The saturation and color mapping of Kodachrome did not match reality, but it did satisfy the public that equated punchy colors with a ‘good color photo’ and made those vacation memories happy ones… and therefore sold more film. The more subdued, and realistic, Ektachrome came along as professional photographers pushed for choice (and, quite frankly, an easier and more open developing process – Kodachrome could only be processed by licensed labs and was notoriously difficult to process well). The downside of early Ektachrome emulsions was the unfortunate instability of the dye layers in color transparency film – leading to rapid fading of both slides and movies.

As one who has worked in film preservation and restoration for decades, it was interesting to note that an early color process (the Technicolor 3-strip method), originally designed just to get vibrant colors on the movie screen in the 1930s, had a resurgence in film preservation. It turned out that so many of the early Ektachrome films from the 1950s and 1960s experienced rapid fading that significant restoration efforts were necessary to salvage some important movies. The only way at that time (before economical digital scanning of movies was possible) was to – after restoration of the color negative – use the Technicolor process to make 3 separate black & white films that represented the cyan, magenta and yellow dye layers. Then someday in the future the 3 negatives could be optically combined and printed back onto color film for viewing.

There is No Objective Truth in Photography (or Painting, Music…)

All photography is an illusion. Using a lens, a photo-sensitive element of some sort, and a box to restrict the image to only the light coming through the lens, a photograph is a rendering of what is before the lens. Nothing more. Distorted and limited by the photographer’s choice of point of view, lens, aperture, shutter speed, film/sensor and so on, the resultant image – if correctly executed – reflects at most the inner vision of the photographer’s mind and perception of the original scene. Every photograph has a story (some more boring than others).

One of the great challenges of photography (and possibly one of the reasons that until quite recently this art form was not taken seriously) is that at first glance many photos appear to be just a ‘copy of reality’ – and therefore contain no inherent artistic value. Nothing could be further from the truth. It’s just that the ‘art’ hides in plain sight… It is our collective, subjective, and inaccurate view that photographs are ‘truthful’ and accurately represent the reality that was before the lens that is the root of the problem that engendered this post. We naively assume that photos can be trusted, that they show us the only possible view of reality. It’s time to grow up, to accept that photography, just like all other art forms, is a product of the artist, first and foremost.

Even the unassuming mom who is taking snapshots of her kids is making choices – whether she knows it or not – about each of the parameters already discussed. Since most snapshot (or cellphone) cameras have wide angle lenses, the ‘huge nose’ effect of close-up pics of babies and youngsters (that will haunt these innocent children forever on Facebook and Instagram – data never dies…) is just an objective artifact of lens choice and distance to subject. Somewhere along the line our moral compass got out of whack when we started drawing highly artificial lines around ‘acceptable editorial behavior’ and so on. An entirely different discussion – which is worthy of a separate post – can be had in terms of the photographer’s (or publisher’s) intention in sharing an image. If a deliberate attempt is made to misrepresent the scene – for financial gain, allocation of justice, change in power, etc. – that is an issue. But the same issue exists whether the medium that transports such a distortion is the written word, an audio recording, a painting or a 3D holograph. It is illogical to apply a set of standards or restrictions to one art form and not another, just to attempt to rein in inadvertent or deliberate distortions in a story that may be deduced from the art by an observer.

To use another common example, we have all seen many photos of a full moon rising behind a skyline, trees on a ridge, etc. – typically with a really large moon – and most observers just appreciate the image, the impact, the feeling. Even some rudimentary science, and a bit of experience with photography, reveals that most such images are composites, with a moon image enlarged and layered in behind the foreground. The moon is simply never that large in relation to the rest of the image. In many cases I have seen, the lighting of the rest of the scene clearly shows that the foreground was shot at a different time of night than the moon (a full moon on the horizon only occurs at dusk). I have also seen many full moons in photographs that are at astronomically impossible locations in the sky, given the longitude and latitude of the foreground that is shown in the image.

An example of “Moon on Steroids”… The actual size of the moon (31 minutes of arc) is about the same size as your thumbnail if you extend your arm fully. In this picture it’s obvious (look at the grasses on the ground) that the tree is approximately 10 ft tall. In reality, the moon would be nestled in between a couple of the smaller branches.
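
A quick sanity check of the caption above: the moon subtends roughly 31 minutes of arc, so its image on a full-frame sensor (24 mm tall) is approximately the focal length times that angle. The focal lengths below are illustrative.

```python
from math import radians, tan

MOON_RAD = radians(31 / 60)                    # ~0.009 rad angular diameter

for focal_mm in (50, 200, 400):
    size_mm = focal_mm * tan(MOON_RAD)         # image size of the moon on the sensor
    print(focal_mm, "mm lens:", round(size_mm, 2), "mm on sensor,",
          round(100 * size_mm / 24, 1), "% of frame height")
# 50mm: ~2% of the frame; even 400mm gives only ~15% -- hence the composites.
```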

Why is it that such an esteemed, and talented, photographer as Steve McCurry is chastised for removing some distracting bits of an image – which in no way detracted from the ‘story’ of the image – and yet I dare say that no one in their right mind would criticize Leonardo da Vinci for including a physically impossible background (the almost mythological mountains and seas) in his rendition of Lisa Gherardini for his painting of “Mona Lisa”? As someone who has worked in the film/video/audio industry for my entire professional life, I can tell you with absolute certainty that no modern audio recording – from Adele to Ziggy Marley – is released that is not ‘digitally altered’ in some fashion. Period. It is just an absolute in today’s production environment to ‘clean up’ every track, every mix, every completed master – removing unwanted echoes, noise, coughs, burps, and other audio equivalents of Mr. Fisch’s plastic bag… and no one, ever, has complained about this or accused the artists of being ‘dishonest’.

This double standard needs to be put to rest permanently. It reflects poorly on those who take this position, demonstrating their lack of technical knowledge and a narrow perception of the art form of photography, and furthermore gives power to those whose only interest is to malign others and detract from the powerful impact that a great image can create. If ignorant observers can really believe that an airplane in an image as depicted is ‘real’ (for the airplane to be of such a size in relation to the tunnel and ladders it would have to be flying at an illegally low altitude in that location), then such observers must take responsibility. Does the knowledge that this placement of the plane is ‘not real’ detract from the photo? Does the contraposition of ‘stillness vs movement’ (concrete and steel silo vs rapidly moving aircraft) create a visually stimulating image? Is it important whether that occurred ‘in reality’ or not? Would an observer judge it differently if this were a painting or a sketch instead of a photograph?

I love the art and science of photography. I am daily enamored with the images that talented and creative people all over the world share, whether camera originals, composites, pure fiction created in the ‘darkroom’, or some combination of all. This is a wondrous art form, and must be supported at all costs. It’s not easy; it takes dedication, effort, skill, perseverance, money, time and love – just as any art form does. I would hope that we could move the conversation to what matters: ‘truth in advertising’. In a photo contest, nothing, repeat nothing, should matter except the image itself. Just like painting, sculpture, music, ceramics, dance, etc., the observed ‘art’ should be judged only by the merits of the work itself, without subjective expectations or philosophical distortions. If an image is used to reinforce a particular ‘story’ – whether for ethical, legal or news purposes – then both the words and the images must be authentic. Authentic does not mean ‘un-retouched’; it does mean that there is no ‘black lie’ in what is conveyed.

To summarize, let’s stop believing that photographs are ‘real’ – but let’s start accepting the art, craftsmanship, effort and focus that this medium brings to all of us. Let’s apply a common frame of reference to all forms of art, whether they be painting, writing, photography, music, etc. – terms of authenticity and purpose. Would we chide Escher for attempting to fool us with visual cues of an impossible reality?

An Interview with Dr. Vivienne Ming: Digital Disruptor, Scientist, Educator, AI Wizard…

June 27, 2016 · by parasam

During the recent Consumer Goods Forum global summit here in Cape Town, I had the opportunity to briefly chat with Vivienne about some of the issues confronting the digital disruption of this industry sector. [The original transcript has been edited for clarity and space.]

Named one of 10 Women to Watch in Tech in 2013 by Inc. Magazine, Vivienne Ming is a theoretical neuroscientist, technologist and entrepreneur. She co-founded Socos, where machine learning and cognitive neuroscience combine to maximize students’ life outcomes. Vivienne is a visiting scholar at UC Berkeley’s Redwood Center for Theoretical Neuroscience, where she pursues her research in neuroprosthetics. In her free time, Vivienne has developed a predictive model of diabetes to better manage the glucose levels of her diabetic son and systems to predict manic episodes in bipolar sufferers. She sits on the boards of StartOut, The Palm Center, Emozia, and the Bay Area Rainbow Daycamp, and is an advisor to Credit Suisse, Cornerstone Capital, and BayesImpact. Dr. Ming also speaks frequently on issues of LGBT inclusion and gender in technology. Vivienne lives in Berkeley, CA, with her wife (and co-founder) and their two children.

Every once in a while I have the opportunity to discuss wide-ranging topics with an intellect that stimulates, is passionate and really cares about the bigger picture. Those opportunities are rarer than one would think. Although set in a somewhat unexpected venue (the elite innards of consumer capitalism), her observations on the inescapable disruption that the new wave of modern technologies will bring are prescient and thoughtful. – Ed

Ed: In a continent where there is a large focus on putting people to work, how do you see the challenges and disruptions resulting from AI, robotics, IoT, VR and other technologies playing out? These technologies, as did other disruptive technologies before them, tend to replace human workers with machine processes.

Vivienne:  There is almost no domain in which artificial intelligence (AI), machine learning and automation will not have a profound and positive impact. Medicine, farming, transportation, etc. will all benefit. There will be a huge impact on human potential, and human work will change. I think this is inevitable, that we are well on the way to this AI-enabled future. The economic incentives to push in this direction are far too strong. But we need social institutions to keep pace with this change.

We need to be building people in as a sophisticated way as we are building our technology infrastructure. There is today a large and significant business sector in educational technology: Microsoft, Apple, Google, Facebook all have serious interest. But this current focus really is just an amplifier for existing paradigms, helping hyper-competitive moms over-prep their kids for standardized testing… which predicts nothing at all about anyone’s actual life outcome.

Whether you got into Princeton rather than Brown, or didn’t get into MIT, is not really going to affect your life track all that much. Whereas the transformation that comes from making even a more modest, but broad-scale, difference in lives is huge. Let’s take right here: South Africa is probably one of the perfect examples, maybe along with India, of a region in which to make a difference.

Because of the history, we have a society where there is a starting point of pretty dramatic inequality in education and preparedness. But you have an infrastructure. That same history did leave you with a strong infrastructure. Change a child’s life in Denmark, for example, and you probably haven’t made that enormous an impact. Do it in Haiti and the best you might hope for is that they move somewhere where they can live out a fruitful and more productive life. While that may sound judgmental toward Haiti, it’s just a fact right now: there’s only so much that one can achieve there because there is so little infrastructure. But do that here in South Africa, in the townships, or in the slums of Mumbai – and you can make a profound difference in that person’s life. This is because there is an infrastructure to capture that life and do something with it.

In terms of educational technology – doubling down on traditional approaches with AI, bringing computational aids into the classroom, using algorithms to better prepare students for testing – we have not found, either in the literature or in our own research with a 122-million-person database, that this makes any difference to one’s life outcome.

People that do this, that go to great colleges, do often have productive and creative lives… but not for those reasons. All of their life results are outgrowths of latent qualities. General cognitive ability, our metacognitive problem solving, our creativity, our emotion regulation, our mindset: these are the things that we find are actually predictive of one’s life outcome.

These qualities are hard to teach. We tend to absorb and learn them over a lifetime of modeling others, of human observation and interaction. So I tend to take a very human perspective on technology: what is the minimum, as someone that builds technology – AI in particular – that can deliver those qualities into people’s lives? If we want to really be effective with this technology, then it must be simple. Simple to deploy and simple to use. Currently, a text-based system is appropriate. Today we use SMS – although it’s a hugely regressive system that is expensive. To reach 1 million kids each year it costs about $5 million per year. To reach that same number of kids using WhatsApp or a similar platform costs about $40 per year. The difference is obscene… The one technology (SMS) that has the farthest reach around the world is severely disincentivized… but we’re doing it anyway!

When I’m building a fancy AI, there’s no reason to pair that with an elaborate user interface, and there’s no reason I need to force you to buy our testing solution that will collect tons of data, etc. It can, and should, be the simplest interface possible.

Let me give an example with the following narrative: I pick up my daughter each day after school (she’s five) and she immediately starts sharing with me via pictures. That’s how she interacts. She sits with her friends and draws pictures. The first thing she does is show me what she’s drawn that day. I snap a photo to share with her grandmother, and at the same time I cc: MUSE (the AI system we’ve built). The image comes to us, and our deep neural network starts analyzing it.

Then I go pick up my son. Much like me, he likes to talk. He loves to tell stories. We can’t upload audio via SMS (prohibitively expensive), but it’s easily done with an app. Hit a button, record 30 sec of his story, or grab a few minutes of us talking to each other. Again, that is captured by the deep neural networks within MUSE and analyzed. Some of this AI could be done with ‘off the shelf’ applications such as those available from Google, IBM, etc. Still very sophisticated software, but it’s out there.

The problem with this method is that data about a little kid is now outside our protected system. That’s a problem. In some countries, India or China for example, parents are so desperate for their children to improve in education that they will do almost anything, but in the US everyone’s suspicious. The only sure-fire way to kill a company is to do something bad with data in health or education. So MUSE is entirely self-contained.

Once we have the data and the analysis, we combine that with a system that asks a single question each day. The text-based question and answer is the only requirement (from a participating parent using our system); the image and audio are optional. What the system is actually doing is predicting every parent’s answer to these thousands of questions, every day. This is a machine learning technique known as ‘active learning’. We came up with our own variant, and when it does its predictions, it then says, “If I knew the true answer to one question, which one would provide the biggest information gain?”

This can be interpreted in different ways: Shannon information (for the very wonky people reading this), or which one question will cause the most other questions to be generated. So we ask that one question. The system can select the single most informative question to ask that day. We then do a very immodest thing: predicting these kids’ life outcomes. But that is problematic. Not only our research, but that of others as well, has shown almost unequivocally that sharing this information produces a negative outcome. It turns out that the best thing, we believe, that can be done with this data is to use it to ask a predictive question for the advancement of the child’s learning.
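
A greatly simplified sketch of that “one most informative question” idea: given the model’s predicted answer distributions for a few candidate questions, pick the one whose answer it is least able to predict (highest entropy), a common active-learning stand-in for expected information gain. The questions and probabilities are invented for illustration; the actual MUSE system is, of course, far more involved.

```python
from math import log2

def entropy(probs):
    # Shannon entropy in bits: how uncertain the model is about the answer
    return -sum(p * log2(p) for p in probs if p > 0)

predicted_answers = {
    "Did you read together today?":     [0.95, 0.05],   # model is nearly certain of the answer
    "Did your child retell the story?": [0.55, 0.45],   # model is quite unsure
    "Did your child draw a picture?":   [0.75, 0.25],   # somewhere in between
}

best_question = max(predicted_answers, key=lambda q: entropy(predicted_answers[q]))
print(best_question)   # the answer the model is least able to predict teaches it the most
```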

That can lead to a positive transformation of potential outcome. Prior systems have shown that just the daily reminder to a parent to perform a specific activity with their child is beneficial – in our case, with all this data and analysis, our system can ask the one question that we predict will be the most valuable thing for your child on that day.

Now that prediction incorporates more than you might think. The first most important thing: that the parent actually does it. That’s easy to determine: either they did or they didn’t. So we need methods to engage with the parent. The second thing is to attempt to determine how effective our predictive model is for that child. Remember, we’re not literally predicting how long a child will live – we’re predicting how ‘gritty’ they will be (as in the research from Angela Duckworth), do they have more of a growth or fixed mindset (Carol Dweck and others), what will be their working memory span, etc.

Turns out there are dozens and dozens of constructs that people have directly shown are strongly predictive of life outcomes. Our aim is to maximize these qualities in the kids that make use of our system. In terms of our predictive models, think of this more in an actuarial sense: on average, given everything we know about a particular kid, he is more likely to live 5 years longer, she is more likely to go 2 years further in their education, etc. The important thing, our goal, is for none of it to come true, no matter how positive the prediction is. We believe it can always be more. Everyone can be amazing. This may sound like a line, but quite frankly if you don’t believe that you shouldn’t be an educator.

Unfortunately, the education system is full of people that think people can’t change and that it’s not worth the effort… what would South Africa be if this level of positivity was embedded in the education system? I’m sure that it’s not lost on the population here (for instance all the young [mostly African] people serving this event) what their opportunities could be if they could really join the creative class. Unfortunately there are political and policy issues that come into play here, it’s not just a machine learning issue. But I can say the difference would be dramatic.

We did the following analysis in the United States:  if we simply took the kind of things we do with MUSE, and were able to scale that to every kid in the USA (50 million) and had started this 25 years ago, what would the net effect on the US economy be? We didn’t do broad strokes and wishful thinking – we modeled based on actual research (do this little proactive change when kids are young, then observe that in 25 years they are earning 25% more and have better health outcomes, etc.). We took that actual research, and modeled it out, region by region in the USA; demographics, everything. We found that after 25 years we would have added somewhere between $1.3-1.8 trillion to the US economy. That’s huge.

The challenge is how do you scale that out to really large numbers of kids, particularly in India, China, Africa, etc.? That’s where technology comes in.

Who’s invested in a kid’s life? We use the generic term ‘caregiver’ – because in many kids’ lives there isn’t a parent, or there is only one parent, a grandparent, a foster parent, etc. At any given moment in a kid’s life, hopefully there are at least two pivotal people: a caregiver and a teacher. Instead of trying to replace them with an AI, what if we empower them? What if we gave them a superpower? That’s the spirit of what we’re trying to do.

MUSE comprises eight completely independent, highly sophisticated machine learning systems, along with integration, data, analytics and interface layers. These systems analyze the images and the audio, produce the questions, and make the predictions. We use what’s termed a ‘deep reinforcement learning model’ – a very similar concept, at least at a high level, to Google’s “AlphaGo” AI system. This type of system can learn to play highly complex games (Go, video games, etc.) – a fundamentally different type of intelligence from IBM’s older chess-playing programs. This new type of AI actually learns how to play the game itself, as opposed to selecting procedures that have already been programmed into it.

With MUSE, essentially we are designing activities for parents to do with their children that are unique to that child and designed to provide maximum learning stimulus at that point in time. We are also designing a similar structure for teachers to do with their students, for students to do with themselves as they get older. In a similar fashion, we are involved in the workplace: the same system that can help parents get the most out of their kids, that can help teachers get the most out of their students, can also help managers get the most out of their employees. I’ve done a lot of work in labor economics, talent management, etc. – managing people is hard and most aren’t good at it.

Our approach tends to be, “Do a good job and I’ll give you a bonus; do a bad job and I’ll fire you.” We try to be more human than that, but – certainly if you’ve ever been involved in sales – that’s the game! In the TED talk that was just published, we showed that this methodology is actually a negative predictor of outcomes. In the workplace, your best people are not incentivized by these archaic methods, but are rather endogenously motivated. In fact, the research shows that the more you artificially incentivize workers, the more poorly they perform, at least in the medium to long term.

Wow! That is directly in contradiction to how we structure our businesses, our educational systems, our societies in general. It's really hard to gain these insights if you can't do a deep analysis of 200,000 salespeople, or 100,000 software developers, like we were able to do. Ultimately the massive database of 122 million people that we built at Gild allows a scale of research and analysis that is unprecedented. That scale, and the capability of deep machine learning, allow us to factor tens of thousands of variables as a basis for our predictive engines.

I just love this space – combining human potential with the capabilities of artificial intelligence. I've never built a totally autonomous system. Everything I've built is about helping a parent, helping a doctor, helping a teacher, helping a manager do better. This may come from my one remaining academic interest: cognitive neural prosthetics. [A versatile method for assisting paralyzed patients and patients with amputations by recording the cognitive state of the subject, rather than signals strictly related to motor execution or sensation.] Do I believe that literally jamming things in your brain can make you smarter? Unambiguously yes! I accept we won't be doing this tomorrow… there aren't that many volunteers for elective brain surgery… but amazing technologies are coming, such as neural dust (being developed at UC Berkeley) and ECoG, which is well established but currently used almost exclusively during brain surgery for epilepsy.

What can be done with ECoG is amazing! I can tell what you’re subvocalizing, what you’re looking at, I can track your decision process, your emotional state, etc. Now, that is scary, and it should be. We should never shy away from that – but the potential is awesome.

Part of my response to the general 'AI conundrum' is, "Let's beat them to the punch – why wait for them [AI machines] to become super-intelligent – why don't we do it?" But then this becomes a human story as well. Is intelligence a commodity? Is it a function of how much I can buy? Or is it a human right, like a vaccine? I don't think these things will ever become ubiquitous or completely free, but whoever gets there first, much like with profound eutrophics [accelerated development] and other intelligence-building technologies, will enjoy a huge first-mover advantage. This could be a singularity moment: where potentially we have a small population of super-intelligent people. What happens after that?

I know we started from a not-simple question, but at least a very immediate and realized one: what are the human implications of these sorts of technologies today – technologies that I think, 20 or 30 years from now, will fundamentally change the definition of what it means to be human? That's not a very long time period. But ultimately that's the thing – technology changes fast. Nowadays people say that technology is changing faster than culture. We need cultural institutions to, if not keep up with the pace of change of technology, change and adapt at a much, much faster pace. We simply cannot accept that these things will figure themselves out over the next 20 years… I mean, 20 years is how long it takes to grow a new person – and then it will be too late.

It's like the ice melting in Antarctica: ignoring the problem is leading to potentially catastrophic consequences. The same is true of AI development – this could be catastrophic for Africa, even for America or Europe. But the potential for a good outcome is so enormous – if we react in time. This isn't the same story as climate change, where the cost is huge just to keep things from being cataclysmic. What I'm saying here is that these costs (for human-integrated AI) pay off, they pay back. We're talking about a much better world. The hard part is getting people to think that it's worth investing in other people's kids. That's a bit of an ugly reality, but it's the truth.

My approach has been: if we can scale out some sophisticated AIs and deliver them in ways that, even if not truly free, cost little enough that it can be done philanthropically, then that's what we'll do.

Ed:  I really appreciate your comments. You went a good way to defining what you meant by ‘Augmented Intelligence’. I had a sense of what you meant by that but this was a most informative journey.

Vivienne:  Thank you. It’s interesting – 10 years ago if you’d asked me about cognitive neural prosthetics, cybernetics, cyborgs… I would have said it’s 50 years away. So now I’ve trimmed more than 10 years off that estimate. Back then, as an academic, I thought, “Ok, what can I do today?” I don’t have access to a brain directly, can I leverage technology somehow to achieve indirect access to a brain? What could we do with Google Glass? What could we do with inferential technologies online? I know I’m not the only person that’s had an idea like this before. My very first startup, we were thinking of “Google Now” long before Google Now came along. The vision was even more aggressive:

“You’re walking down the street, and the system remembers that you read an article 40 days ago about a restaurant and it really piqued your interest. How did it know that? Because it’s modeling you. It’s effectively simulating you. Your emotional responses. It’s reading the article at the same time you’re reading the article. It’s tracking your responses, but it’s also simulating you, like a true cognitive system. [An aside: I’ll echo what many others have said – that IBM’s Cognitive Computing is not really cognitive computing… but such a thing does really exist].

So I’m walking down the street, and the system pings me and says, “You know that restaurant you were interested in? There’s an open table right now, it’s three blocks away, your calendar’s clear and I’ve just made a reservation for you.” Because the system knew, it didn’t need to ask, that you’d say yes.

Now, that's really ambitious, especially since I was thinking about this ten years ago. But the ambition isn't about whether it can be done; it's about the infrastructure. Where do you get the data from? What kind of processing can you do? I think the infrastructure problem is becoming less and less of one today, and that's where we are seeing many changes.

You brought up the issue of a "Marketplace of Things" [n.b. Ed and Vivienne had a short exchange leading in to this interview regarding IoT and the perspective that localized data/intelligence exchange would dramatically lower bandwidth requirements for upstream delivery, lower system latency, and provide superior results.] and raised the question of bandwidth: wouldn't it be better if every light bulb, every camera, every microphone locally processed information, and then only sent off things that were actually interesting or informative? And didn't just send it off to a single server, but instead offered it every day on the "InfoNYSE": "I've got some interesting emotion data on three users in this room, anyone interested in that?"

These transactions won’t necessarily be traditional monetary transactions, possibly just data transactions. “I will trade this information for some data about your users’ interests”, or for future data about how your users responded to this information that I’m providing.
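[Ed. note: as a purely hypothetical sketch of the 'marketplace' idea above – devices posting offers and trading data for data rather than money – here is a toy in-memory exchange in Python. All of the device names, message fields and matching rules are invented for illustration only.]

```python
# Toy "marketplace of things": devices post offers, other agents match on what they can trade.
from dataclasses import dataclass, field

@dataclass
class Offer:
    seller: str
    description: str      # what data the seller holds
    wants: str            # what the seller asks for in exchange

@dataclass
class Marketplace:
    offers: list = field(default_factory=list)

    def post(self, offer: Offer):
        self.offers.append(offer)

    def match(self, has: str, interested_in: str):
        """Return offers that mention what we want and ask for something we hold."""
        return [o for o in self.offers
                if interested_in in o.description and o.wants in has]

market = Marketplace()
market.post(Offer("camera-3", "emotion data on three users in this room",
                  wants="user interest profile"))
market.post(Offer("lightbulb-7", "occupancy pattern for aisle 5",
                  wants="foot-traffic forecast"))

for deal in market.match(has="user interest profile", interested_in="emotion data"):
    print(f"{deal.seller} will trade '{deal.description}' for '{deal.wants}'")
```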

As much as I think the word ‘futurist’ is a bit overused or diffuse, I do admit to thinking about the future and what’s possible. I’ve got a room full of independent processing units that are all talking to each other… I’ve kind of got a brain in that room. I’m actually pretty skeptical of ‘general AI’ as traditionally defined. You know, like I’m going to sit down and have a conversation with this AI entity. [laughs] I think we’ll know that we’ve achieved true general AI when this entity no longer introspects, when it no longer understands its own actions – i.e. when it becomes like us.

I do think general artificial intelligence is possible, but it's going to be kind of like a whole building 'turning on' – it won't be having a conversation with us; it will be much more like our brain. I like to use this metaphor: "Our brains are like a wildly dysfunctional democracy, with all of these circuits voting for different outcomes – but it's an unequal democracy, as the votes carry different weights." But remember that we only get to see a tiny segment of those votes: only a very small portion of that process ever comes to our conscious awareness. We do a much better job of explaining the 'votes' post hoc, and of just making things happen, than of actually explaining them in the moment.

Another metaphor I use is from the movie “Inside Out”:  except instead of a bunch of cutesy emotions embodied, imagine a room full of really crotchety old economists that hate each other and hold wildly differing opinions.

Ed: “Oh, you mean the Fed!”

Vivienne: "Yes! Imagine the Fed of your head." This is actually not a bad model of our cognitive process. In many ways we show a lot of near-optimality, near-perfect rationality in our decision making, once you understand all the inputs to our decision-making process. And yet we can wildly fluctuate between different decisions. The classic is a betting game: if people bet and learn that they won, they will play again. If they bet and learn that they lost, they will also play again. But if they bet and the outcome isn't revealed, they are less likely to play again.

Which at one level is irrational, but we hold these weird and competing ideas in our head and these votes take place on a regular basis. It gets really complex: modeling cognition. But if you really want to understand people, that’s the way to do it.

This may have been a long and somewhat belabored answer to your original question regarding augmented intelligence, but the heart of it for me all started with, “What could we do if we really understood someone?” I wanted it to be, “I really understand you because I’m in your brain.” But, lacking that immediate capability, what can I infer about someone, and then what can I feed back to them to make them better?

Now, “better” may be a pretty loose and broad definition, but I’m comfortable with that if I can make people “grittier”, if I can improve their working memory span, if I can improve their ability to regulate their own emotions – not turn their emotions off, not pump them full of Ritalin, but to be aware of how their emotions impact their decision making. That leaves the person free to decide what to do with them. And that’s a world I would be pretty happy with.

I would surely disagree with how a great many people use their lives, even so empowered, but it’s a point of faith for me: I’m a hard numbers scientist, not a religious person, but there’s one point of faith. If we could improve these qualities for everyone, the world would be a better place.

Ed: Going back to the conference that we're both attending (Consumer Goods), how can this idea of augmented intelligence, and what I would call an 'intelligent surface' across our total environment (whether enabled by IoT, social media feedback, Google Now, etc.), help turn the consumer ecosystem on its head and truly make it 'consumer-centric'? By that I mean consumers actually being in control of what goods and services are invented, let alone sold, to us. Why should firms waste time making and selling us stuff that we don't want or need, or stuff that is bad for us?

Vivienne:  There are a couple of different ideas that come to me. One is something I often recommend in regard to talent. While your question pertains to external customers in regard to retailers/suppliers, an analogy can be drawn to the internal interaction between employees and the firms for which they work: "Companies need to stop trying to align their employees with their business; they need to figure out how to align their business with their employees."

This doesn't mean that their business becomes some quixotic thing that is malleable and changeable; you do have a business that produces goods or services. For instance, let's say your business is Campbell's Soup – you produce food and ship it around the world. But why does this matter to Ann, Shaniqua, or any other of your employees? While this may sound a bit 'self-helpy' or 'business-guru', it's actually a big part of my philosophy. Think about the things I've said about education: let's do this crazy thing – think about what the true outcome we're after is. I want happy, healthy, productive people – and society will reap the benefits. That is my lone definition of education. Anything else is just details.

I’m telling you I can predict those three things. Therefore any decision I make, right here in the moment, I can align against those three goals. So… should I teach this person some concept in geometry right now? Or how should I teach that concept? How does that align with those three goals?

"My four-year-old is still not reading – should I panic?" How does that align with those three goals? For a child like that, is it predictive of those three things? For some kids, that might be problematic, and it might be time for some kind of intervention. For others, it turns out it's not predictive at all. I didn't do well in high school. What I didn't do there, I did in spades in college… and then flunked out completely. After a big gap in my life I went back to college and did my entire undergraduate degree in one year – with perfect scores. Same person, same place, same things. It wasn't what I was doing (to borrow a phrase from someone else) – it was why I was doing it.

Figuring out why it suddenly mattered to me at that time meant realizing that it all coalesced around the idea of maximizing human potential. Suddenly it had purpose; I was doing things for a reason.

So now we’re talking about doing this inside of companies, with their employees. Figuring out why your company matters to this employee. You want them to be productive – bonuses aren’t the way to do it. Pay them enough that they feel valued, and then figure out why this is important to them. And true enough, for some people that reason might be money – but to others not.

So what does that mean for our consumer relationship? My big fear is that when CEOs or CMOs hear this (human perception modeling, etc. as is used in AI development) they think, “Oh, let’s figure out why people will buy our products!” When I hear about ‘brain hacks’ I don’t think of sales or marketing, I worry about the food scientists figuring out the perfect ‘sweet spot’ of sodium, fat and carbohydrates in order to make this food maximally addictive (in a soft sense). I’m not talking about that kind of alignment. I’m saying, “What is your long term goal?”

Every one of those people on stage (at the Consumer Goods Forum) made some very impassioned speeches about how it’s about the health of consumers, their well-being, the good of society, it’s about jobs, etc. It’s shocking how bad a reputation those same firms have, at least in the USA, along those same dimensions – if that’s what they truly care about. And yet their response to the above statement is, “Gosh, we need a better branding campaign!”

Well… no, you firms are probably not nearly as aligned around those positive outcomes as you think you are; I believe that you feel that way, and that you feel abused by our assumption that you are not (acting that way). I do a tremendous amount of work and advising in the area of discrimination in human capital. You know, bias, discrimination… it's not done by villains, it's done by humans.

Ed: I think what's difficult is that for true authenticity to be evident, to really act in an authentic manner, one must be able to be self-aware. It's rare to find that brutal self-analysis, self-questioning, self-awareness. You have pointed out that many business leaders truly believe their hype, their marketing positions – whether or not there is any real accuracy in those positions.

Vivienne:  I just wrote an op-ed for the Financial Times, "The Neuro-Economics of Inequality" (not its actual title, but it's the way I think about the issue). What happens when someone learns – really, legitimately, rationally learns – that their hard work will not pay off? Not the way that, for example, it will for the American white kid down the street. So why bother? Even for a woman: so I've got a fancy degree, a great college education; I'm going to have to work twice as hard as the man to get the same pay, the same reward… and even then I'm never going to make it to the C-suite anyway. If I actually do get there, I'm going to have to be "that" kind of executive once I'm there… I'd rather just be a mom.

These people are not opting out, they are making rational decisions. You talk to economists… we went through this and did the research. We could prove the ‘cost of being named José in the tech industry’, the ‘cost of being black on Wall St.’ – this completely changes some of these equations when you take that into account. So, bringing this back to consumers, I don’t have ready answers for it as I’m a bit dismissive of it. “Consumerism” – that’s a bad word, isn’t it?

While I'm not sure of the resonance of this thought, what if you could take the idea that I'm talking about – these big predictions, Bayesian models that are giving you probability distributions over this potential consumer's outcomes? Not ten minutes from now – or rather, ten minutes from now is only part of what I'm talking about. We're integrating across the probability distribution of all potential life outcomes, from something as minor as "they ate your bag of potato chips."

I'm willing to bet that if you had to 'own', in some sense – at least morally, if nothing else – the consequences: knowing the short-term effect (a nice little hedonic bump in happiness), the mid-term effect (a decrease in eudaimonic happiness), the long-term effect (a decline in liver function), and so forth… your outlook might be different. If you're (brand X), that's just an externality. So I think there are some legitimate criticisms: why talk a fancy game when it's really just corporate responsibility?
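[Ed. note: the interview doesn't describe the actual models, so as a minimal illustration of "a probability distribution over outcomes" that sharpens as small behaviours are observed, here is a toy Bayesian (Beta-Binomial) update in Python. The prior, the evidence and the outcome being predicted are all hypothetical.]

```python
# Toy Bayesian update: turn a stream of small observations into a distribution
# over a longer-term outcome, rather than a single yes/no prediction.

def beta_binomial_update(prior_a, prior_b, successes, trials):
    """Update a Beta(prior_a, prior_b) belief with binomially distributed evidence."""
    return prior_a + successes, prior_b + (trials - successes)

# Weak prior belief about the probability of some long-term outcome: Beta(2, 2).
a, b = 2, 2

# Hypothetical evidence: 30 observed snack choices, 21 of them the less healthy option.
a, b = beta_binomial_update(a, b, successes=21, trials=30)

mean = a / (a + b)
print(f"posterior is Beta({a}, {b}); mean outcome probability ≈ {mean:.2f}")
```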

Yes, optimizing your supply chain, reducing food waste is nice, but it’s really just because you spent money moving food around the world, some of which got wasted – you want to cut back on that. Beyond that, my observation as an outsider to this sector is that it’s about corporate responsibility, and by that I mean the marketing practices. If you really want to put your heart where your mouth is then take ownership of the long term outcomes. Think about what it means for a nine-year old to eat potato chips. Certain ‘health food’ enterprises have made a lot of money out of this idea, providing a healthy site in which to shop. Certainly, in comparison to a corner store in a disenfranchised neighborhood in the US they are a wildly healthy choice, but even these health shops have an entire aisle dedicated to potato chips. They’re just organic potato chips that cost three times as much. I buy them every now and then. I’m a firm believer in eating well, just eating with some degree of moderation.

That would be my approach. My approach in talent, in health, in education and a variety of domains in policy-making has been let’s leverage some amazing technology to make these seemingly miraculous predictions (which they’re not, they are really not even predictions but actuarial distributions). But these still inform us.

Right now, with this consumer, we’re balancing a number of things: revenue, sustainability, even the somewhat morbid sustainability of our consumer base; we’re balancing our brand. What’s the one action we could take right now as an organization in respect to this person that could maximize all of those things? Given their history, it’s hard to believe that it’s going to be something more than revenue, or at least something that’s going to actually cost them. If I actually believed they would be willing to take this kind of technology and apply it in a truly positive way – I’d just give it to them.

I mean, what a phenomenal human good it would be if some rather simple machine learning could help them actually have a really different paradigm of 'consuming'. What if every brand could become your best friend, and do what's in your best interest – albeit as seen from the brand-owner's perspective? Yeah, it's pretty hopeful to think that could actually happen – but do I think it could happen?

That’s what we’re hoping for in some of our mental health work. By being able to make these predictions we’re not just hoping to intervene on behalf of the sufferer, but trusted confidants as well. The way I often put it is: I would love it if our system could be everything your best friend is, but even more vigilant. What would your best friend do if they recognized the early signs of a manic episode coming on? Can we deliver that two weeks earlier and never miss the signals?

Going back, I just don’t see where big consumer companies own that responsibility. But let me pull back to my ‘Marketplace of Things’ idea. There’s a crucial aspect here: that of agents. I can have my own proxy, my own agent that can represent me. In that context, then these consumer companies can serve their own goals. I think they do have some goal in me being alive, so they can continue to earn out my customer lifetime value as a function of my lifetime. They have some value attached to me spending money in certain ways that are more sustainable, that are better for their infrastructure, etc.

I think in all those areas they could take the kinds of methodologies I'm describing and apply them in a kind of AI/machine learning. On my side, if I'm proxied by my own agent – well, then we can just negotiate. My agent's goal is really to model out my health, happiness and productivity. It's constantly seeking to maximize those in the near, medium and long term. So it walks into a room and says, "All right, let's have a negotiation." Clearly, this can't be done by people, as it all needs to happen nearly instantaneously.

I don’t think the cost of these solutions will drop low enough that we’ll literally be putting them into bags of potato chips. Firstly we must imagine changes in the infrastructure. Part of paying for shelf space in a supermarket won’t be just paying for the physical shelf space, it will be paying for putting your own agents in place on that shelf space. They’ll be relatively low cost, but probably not as disposable as something you could build into the packaging of potato chips. But simply by visiting that location, I pick up all the nutrition information I need, I can solicit information from the store about other people that are shopping (here I mean that my proxy can do all this). Then that whole system can negotiate this out, and come up with recommendations.

To me, it may seem like my phone or earpiece is simply suggesting, "How about this, how about that?" While not everyone is this way, I'm one of those people who actually enjoys going to the supermarket, feeling how it's interacting with me in the moment. That's something my agent can take into account as well. This becomes a story that I find more interesting. Maybe this is a set of combined interactions that takes into account various food manufacturers, retailers – and my agent.

Today, I’m totally outside this process – I don’t get to play a role. The things I like, I just cross my fingers and hope they are in stock when I am in the store. The price that I pay: I have no participation in that whatsoever (other than choosing to purchase or not).

Another example:  Kate from Facebook [in an earlier panel discussion] was telling us that Facebook gives a discount to advertisers for ads that are 'stickier' – that people want to see and spend more time looking at. What if I were willing to watch less enjoyable ads – if FB would share the revenue with me?

None of these are totally novel ideas, but none of them will ever come to realization if one of the fundamental sides to this negotiation never gets to participate. I’m always getting proxied by someone else. I don’t have to think that Facebook or Google are bad companies, or that Larry Page or Mark Zuckerberg are bad people for me to think that they don’t necessarily have my best interests at heart.

That would change the dynamic. But I sense that some people in the audience would see that as a loss of control, and most of them are hyper risk-averse.

Ed:  As a final thought or question, in terms of the participation between consumer and producer/retailer that you have discussed, it occurs to me that perhaps one avenue that may be attractive to these companies would be along the lines of market research.  Most new products or services are developed unilaterally, with perhaps some degree of ‘traditional market research’ where small focus groups are used for feedback. From the number of expensive flops in the marketplace it appears that this methodology is fraught with error. Could these methodologies of AI, of probability prediction, of agent communication, be brought to bear on this issue?

Vivienne:  Interesting… that brings up many new ideas. One thing that we did in the past – we're not doing it now, but we could – was listen in to students conversing with each other online. We actually learned the material they were studying directly from the students themselves. For example, start with a system that knows nothing about biology; it learns biology from the students talking amongst themselves – including wrong ideas about biology. What we found, when we trained the system to predict the grades the students would receive – even after new students entered the class, with new material and new professors – was that we knew after one week what grade they would get at the end of the semester. We knew with greater and greater accuracy each week what questions they would get right or wrong on the final exam. Our goal in the exercise was to end all standardized testing. I mean, if we know how they are going to score on the test, why ever have a test?

Part of our aim there was to simulate the outcome of a lecture. There's some similarity to what you're discussing (producing consumer goods). Lectures are costly to develop; you get one chance to deploy each one per semester or quarter, with limited feedback, etc. You would really like to know ahead of time whether a lecture was going to be useful. Before we pivoted away from this more academic aspect of education into this life-outcomes type of work, we were wondering if we could give feedback on the effectiveness of a given lecture before the lecture was given.

Hey, these five students are not going to understand any of your lecture as it’s currently presented. Either they are going to need something different, or you can explore including something else, some alternative metaphors, in your discussion.

Yes, I think it's intriguing – and very possible – to run this sort of very disruptive market research. Certainly in my domain I'm already talking about this: I'm asking one question each day, and can predict everyone's answers to thousands of questions. That's rather profound, and quite efficient. What if you had a relationship with a meaningful sample of your customers on Facebook and you could ask each of them one question a day, just like I described with my educational work? Essentially you would have a deep, insightful rolling model of your customers all the time.

You could make predictions against this model community for future products, some basic simulations for those type of experiences. I agree, this could be very appealing to these firms.
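[Ed. note: the interview doesn't say how the one-question-a-day predictions are made. One standard technique with the same flavour – inferring someone's unobserved answers from their few observed ones plus everyone else's – is low-rank matrix completion, sketched below on synthetic data. All of the numbers here are made up for illustration.]

```python
# Toy matrix completion: recover a person x question answer matrix from ~20% of its entries,
# assuming answers are driven by a small number of latent traits (low rank).
import numpy as np

rng = np.random.default_rng(0)
n_people, n_questions, rank = 300, 60, 2

# Simulated "true" answers with low-rank structure.
truth = rng.normal(size=(n_people, rank)) @ rng.normal(size=(rank, n_questions))

# Each person answers only about a fifth of the questions.
mask = rng.random(truth.shape) < 0.20

# Iterative hard-impute: alternate a rank-k SVD fit with re-inserting the known answers.
estimate = np.where(mask, truth, 0.0)
for _ in range(100):
    U, s, Vt = np.linalg.svd(estimate, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    estimate = np.where(mask, truth, low_rank)   # keep observed answers, refill the rest

err = np.sqrt(np.mean((low_rank[~mask] - truth[~mask]) ** 2))
print(f"RMSE on never-observed answers: {err:.2f} (answer std ≈ {truth.std():.2f})")
```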

A Digital Disruptor: An Interview with Michael Fertik

June 27, 2016 · by parasam

During the recent Consumer Goods Forum global summit here in Cape Town, I had the opportunity to briefly chat with Michael about some of the issues confronting the digital disruption of this industry sector. [The original transcript has been edited for clarity and space.]

Michael Fertik founded Reputation.com with the belief that people and businesses have the right to control and protect their online reputation and privacy. A futurist, Michael is credited with pioneering the field of online reputation management (ORM) and lauded as the world’s leading cyberthinker in digital privacy and reputation. Michael was most recently named Entrepreneur of the Year by TechAmerica, an annual award given by the technology industry trade group to an individual they feel embodies the entrepreneurial spirit that made the U.S. technology sector a global leader.

He is a member of the World Economic Forum Agenda Council on the Future of the Internet, a recipient of the World Economic Forum Technology Pioneer 2011 Award and through his leadership, the Forum named Reputation.com a Global Growth Company in 2012.

Fertik is an industry commentator with guest columns in Harvard Business Review, Reuters, Inc.com and Newsweek. Named a LinkedIn Influencer, he regularly blogs on current events as well as developments in entrepreneurship and technology. Fertik frequently appears on national and international television and radio, including the BBC, Good Morning America, Today Show, Dr. Phil, CBS Early Show, CNN, Fox, Bloomberg, and MSNBC. He is the co-author of two books, Wild West 2.0 (2010), and New York Times best seller, The Reputation Economy (2015).

Fertik founded his first Internet company while at Harvard College. He received his JD from Harvard Law School.

Ed: As we move into a hyper-connected world, where consumers are tracked almost constantly, and now passively through our interactions with an IoT-enabled universe: how do we consumers maintain some level of control and privacy over the data we provide to vendors and other data banks?

Michael:  Yes, passive sharing is actually the lion’s share of data gathering today, and will continue in the future. I think the question of privacy can be broadly broken down into two areas. One is privacy against the government and the other is privacy against ‘the other guy’.

One might call this "Big Brother" (governments) and "Little Brother" (commercial or private interests). The question of invasion of privacy by Big Brother is valid, useful and something we should care about in many parts of the world. While I, as an American, don't worry overly about the US government's surveillance actions (I believe that the US is out to get 'Jihadi John', not you or me), I do believe that many other governments' interest in their citizens is not as benign.

I think that if you are in much of the world, the panopticon of visibility – from one side of the one-way mirror to the other side, where most of us sit – is something to think and care about. We are under surveillance by Big Brother (governments) all the time. The surveillance tools are so good, and digital technology makes it possible to have so much of our data easily surveilled by governments, that I think that battle is already lost.

What is done with that data, and how it is used is important: I believe that this access and usage should be regulated by the rule of law, and that only activities that could prove to be extremely adverse to our personal and national interests should be actively monitored and pursued.

When it comes to “Little Brother” I worry a lot. I don’t want my private life, my frailties, my strengths, my interests.. surveilled by guys I don’t know. The basic ‘bargain’ of the internet is a Faustian one: they will give you something free to use and in exchange will collect your data without your knowledge or permission for a purpose you can never know. Actually, they will collect your data without your permission and sell it to someone else for a purpose that you can never know!

I think that encryption technologies that help prevent and mitigate those activities are good and I support that. I believe that companies that promise not to do that and actually mean it, that provide real transparency, are welcome and should be supported.

I think this problem is solvable. It’s a problem that begins with technology but is also solvable by technology. I think this issue is more quickly and efficiently solvable by technology than through regulation – which is always behind the curve and slow to react. In the USA privacy is regarded as a benefit, not an absolute right; while in most of Europe it’s a constitutionally guaranteed right, on the same level as dignity. We have elements of privacy in American constitutional law that are recognized, but also massive exceptions – leading to a patchwork of protection in the USA as far as privacy goes. Remember, the constitutional protections for privacy in the USA are directed to the government, not very much towards privacy from other commercial interests or private interests. In this regard I think we have much to learn from other countries.

Interestingly, I think you can rely on incompetence as a relatively effective deterrent against public-sector 'snooping', to some degree – as so much of government is behind the curve technically. The combination of regulation, bureaucracy, lack of cohesion and a general lack of applied technical knowledge all serve to slow the capability of governments to effectively mass-surveil their populations.

However, in the commercial sector, the opposite is true. The speed, accuracy, reach and skill of private corporations, groups and individuals is awesome. For the last ten years this (individual privacy and awareness/ownership of one’s data) has been my main professional interest… and I am constantly surprised by how people can get screwed in new ways on the internet.

Ed:  Just as in branding – where many consumers actually pay a lot for clothing that, in addition to being a T-shirt, prominently advertises the brand name of the manufacturer, with no recompense for the consumer – is there any way for digital consumers to 'own' and have some degree of control over the use of the data they provide just through their interactions? Or are consumers forever to be relegated to the short end of the stick, giving up their data for free?

Michael:  I, as well as others, have mapped out how the consumer can become the 'verb' of the sentence instead of what they currently are: the 'object' of the sentence. The biggest lie of the internet is that "You" matter… You are the object of the sentence, the butt of the joke. You (or the digital representation of you) is what we (the internet owners/puppeteers) buy and sell. There is nothing about the internet that needs to be this way. This is not a technical or practical requirement of this ecosystem. If we could today ask the grandfathers of the internet how this came to be, they would likely say that one of the areas in which they didn't succeed was adding an authentication layer on top of the operational layer of the internet. And what I mean here is not what some may assume: providing access control credentials in order to use the network.

Ed:  Isn’t attribution another way of saying this? That the data provided (whether a comment or purchasing / browsing data) is attributable to a certain individual?

Michael:  Perhaps “provenance” is closer to what I mean. As an example, let’s say you buy some coffee online. The fact that you bought coffee; that you’re interested in coffee; the fact that you spend money, with a certain credit card, at a certain date and time; etc. are all things that you, the consumer, should have control over – in terms of knowing which 3rd parties may make use of this data and for what purpose. The consumer should be able to ‘barter’ this valuable information for some type of benefit – and I don’t think that means getting ‘better targeted ads!’ That explanation is a pernicious lie that is put forward by those that have only their own financial gain at heart.

What I am for is “a knowing exchange” between both parties, with at least some form of compensation for both parties in the deal. That is a libertarian principle, of which I am a staunch supporter. Perhaps users can accumulate something like ‘frequent flyer miles’ whereby the accumulated data of their online habits can be exchanged for some product or service of value to the user – as a balance against the certain value of the data that is provided to the data mining firms.

Ed:  Wouldn’t this “knowing exchange” also provide more accuracy in the provided data? As opposed to passively or surreptitiously collected data?

Michael:  Absolutely. With a knowing and willing provider, not only is the data collection process more transparent, but if an anomaly is detected (such as a significant change in consumer behavior), this can be questioned and corrected if the data was in error. A lot of noise is produced in the current one-sided data collection model and much time and expense is required to normalize the information.

Ed:  I'd like to move to a different subject and gain your perspective as one who is intimately connected to this current process of digital disruption. The confluence of AI, robotics, automation, IoT, VR, AR and other technologies that are literally exploding into practical usage has a tendency, as did other disruptive technologies before them, to supplant human workers with non-human processes. Here in Africa (and today we are speaking from Cape Town, South Africa) we have massive unemployment – between 25% and 50% among working-age young people in particular. How do you see this disruption affecting this problem, and can new jobs, new forms of work, be created by this sea change?

Michael:  The short answer is no. I think this is a one-way ratchet. I'm not saying that won't change in a hundred years' time, but in the next 20-50 years, I don't see it. Many, many current jobs will be replaced by machines, and that is a fact we must deal with. I think there will be jobs for people who are educated. This makes education much, much more important in the future than it's ever been to date – which is huge enough. I'm not saying that only Ph.D.'s will have work, but to work at all in this disrupted society will require a reasonable level of technical skill.

We are headed towards an irrecoverable loss of unskilled labor jobs. Full stop. For example, we have over a million professional drivers in the USA – virtually all of these jobs are headed for extinction as autonomous vehicles, including taxis and trucks, start replacing human drivers in the next decade. These jobs will never come back.

I do think the developing world has a set of saving graces that may slow down this effect in the short term: the cost of human labor is so low that in many places it will be cheaper than technology for some time; corruption is often a bigger impediment to job growth than technology; and trade restrictions and unfair practices are also a huge limiting factor. But none of this will stem the inevitable tide of permanent disruption of the current jobs market.

And this doesn’t just affect the poor and unskilled workers in developing economies: many white collar jobs are at high risk in the USA and Western Europe:  financial analysts, basic lawyers, medical technicians, stock traders, etc.

I’m very bullish on the future in general, but we must be prepared to accommodate these interstitial times, and the very real effects that will result. The good news is that, for the developing world in particular, a person that has even rudimentary computer skills or other machine-interface skills will find work for some time to come – as this truly transformative disruption of so many job markets will not happen overnight.

Science – a different perspective on scientists and what they measure…

November 4, 2015 · by parasam

Scientists are humans too…

If you think that most scientists are boring old men with failed wardrobes and hair-challenged noggins.. and speak an alternative language that disengages most other humans… then you’re only partially correct… The world of scientists is now populated by an incredible array of fascinating people – many of whom are crazy-smart but also share humor and a variety of interests. I’ll introduce you to a few of them here – so the next time you hear of the latest ‘scientific discovery’ you might have a different mental picture of ‘scientist’.

Dr. Zeray Alemseged


Paleoanthropologist and chair and senior curator of Anthropology at the California Academy of Sciences

Zeresenay Alemseged is an Ethiopian paleoanthropologist who studies the origins of humanity in the Ethiopian desert, focusing on the emergence of childhood and tool use. His most exciting find was the 3.3-million-year-old bones of Selam, a 3-year-old girl from the species Australopithecus afarensis.

He speaks five languages.

 

 

 

Dr. Quyen Nguyen

Doctor and professor of surgery and director of the Facial Nerve Clinic at the University of California, San Diego.

Quyen Nguyen is developing ways to guide surgeons during tumor removal surgery by using fluorescent compounds to make tumor cells — and just tumor cells — glow during surgery, which helps surgeons perform successful operations and get more of the cancer out of the body. She’s using similar methods to label nerves during surgery to help doctors avoid accidentally injuring them.

She is fluent in French and Vietnamese.

 

 

Dr. Tali Sharot


Faculty member of the Department of Cognitive, Perceptual & Brain Sciences, University College London

Tali Sharot is an Israeli who studies the neuroscience of emotion, social interaction, decision making, and memory. Specifically her lab studies how our experience of emotion impacts how we think and behave on a daily basis, and when we suffer from mental illnesses like depression and anxiety.

She’s a descendant of Karl Marx.

 

 

 

Dr. Michelle Khine


Biomedical engineer, professor at UC Irvine, and co-Founder at Shrink Nanotechnologies

Michelle Khine uses Shrinky Dinks — a favorite childhood toy that shrinks when you bake it in the oven — to build microfluidic chips to create affordable tests for diseases in developing countries.

She set a world record speed of 38.4 mph for a human-powered vehicle as a mechanical engineering grad student at UC Berkeley in 2000.

 

 

 

Dr. Nina Tandon


Electrical and biomedical engineer at Columbia’s Laboratory for Stem Cells and Tissue Engineering; Adjunct professor of Electrical Engineering at the Cooper Union

Nina Tandon uses electrical signals and environmental manipulations to grow artificial tissues for transplants and other therapies. For example, she worked on an electronic nose used to “smell” lung cancer and now she’s working on growing artificial hearts and bones.

In her spare time, the Ted Fellow does yoga, runs, backpacks, and she likes to bake and do metalsmithing. Her nickname is “Dr. Frankenstein.”

 

 

Dr. Lisa Randall


Physicist and professor

Lisa Randall is considered to be one of the nation’s foremost theoretical physicists, with an expertise in particle physics and cosmology. The math whiz from Queens is best known for her models of particle physics and study of extra dimensions.

She wrote the lyrics to an opera that premiered in Paris and has an eclectic taste in movies.

 

 

 

Dr. Maria Spiropulu


Experimental particle physicist and professor of physics at Caltech

Maria Spiropulu develops experiments to search for dark matter and other theories that go beyond the Standard Model, which describes how the particles we know of interact. Her work is helping to fill in holes and deficiencies in that model. She works with data from the Large Hadron Collider.

She’s the great-grandchild of Enrico Fermi in Ph.D lineage — which means her graduate adviser’s adviser’s adviser was the great Enrico Fermi who played a key role in the development of basic physics.

 

 

 

Dr. Jean-Baptiste Michel


Mathematician, engineer, and researcher

The French-Mauritian Jean-Baptiste Michel is a mathematician and engineer who’s interested in analyzing large volumes of quantitative data to better understand our world.

For example, he studied the evolution of human language and culture by analyzing millions of digitized books. He also used math to understand the evolution of disease-causing cells, violence during conflicts, and the way language and culture change with time.

He likes “Modern Family” and “Parks and Recreation,” he listens to the Black Keys and Feist, and his favorite restaurant in New York City is Kyo Ya.

 

 

Dr. John Dabiri


Professor of aeronautics and bioengineering

John Dabiri studies biological fluid mechanics and wind energy — specifically how animals like jellyfish use water to move around. He also developed a mathematical model for placing wind turbines at an optimal distance from each other based on data from how fish schools move together in the water.

He won a MacArthur “genius grant” in 2010.

 

 

 

Isaac Kinde


As a graduate student (M.D./Ph.D.) at Johns Hopkins, Isaac Kinde (Ethiopian/Eritrean) is working on improving the accuracy of genetic sequencing so that it can be used to diagnose cancer at an early stage in a simple, noninvasive manner.

In 2007 he worked with Bert Vogelstein, who just won the $3 million Breakthrough Prize in Life Sciences.

He’s an avid biker, coffee drinker and occasional video game player.

 

 

 

Dr. Franziska Michor


Franziska Michor received her PhD from Harvard’s Department of Organismic and Evolutionary Biology in 2005, followed by work at Dana-Farber Cancer Institute in Boston, then was assistant professor at Sloan-Kettering Cancer Center in New York City. In 2010, she moved to the Dana-Farber Cancer Institute and Harvard School of Public Health. Her lab investigates the evolutionary dynamics of cancer.

Both Franziska and her sister Johanna, who has a PhD in Mathematics, are licensed to drive 18-wheelers in Austria.

 

 

 

 

Heather Knight


Heather Knight (Ph.D. student at Carnegie Mellon) loves robots — and she wants you to love them too. She founded Marilyn Monrobot, which creates socially intelligent robot performances and sensor-based electronic art. Her robotic installations have been featured at the Smithsonian-Cooper Hewitt Design Museum, LACMA, and PopTech.

In her graduate work she studies human-robot interaction, personal robots, theatrical robot performances, and designs behavior systems.

When she’s not building robots, she likes salsa dancing, karaoke, traveling, and film festivals.

 

 

Clio Cresswell


The Australian author of “Mathematics and Sex,” Clio Cresswell uses math to understand how humans should find their partners. She came up with what she calls the “12 Bonk Rule,” which means that singles have a greater chance of finding their perfect partner after they date 12 people.

If she's not at her desk working her brain, you'll find her at the gym, either bench-pressing her body weight or hanging upside down from the gym rings.

 

 

 

Dr. Cheska Burleson


Cheska Burleson (PhD in Chemical Oceanography) focused her research on the production of toxins by algae blooms commonly known as “red tide.” She also evaluated the medicinal potential of these algal species, finding extracts that show promise as anti-malarial drugs and in combating various strains of staphylococcus, including MRSA—the most resistant and devastating form of the bacteria.

Cheska entered college at seventeen with enough credits to be classified as a junior. She also spent her formative years as a competitive figure skater.

 

 

 

 

Bobak Ferdowsi


Systems engineer and flight director for the Mars Curiosity rover

Bobak Ferdowsi (an Iranian-American) gained international fame when the Mars Curiosity rover landed on the surface of Mars in August 2012. Since then, the space hunk has become an Internet sensation, gaining more than 50,000 followers on Twitter, multiple wedding proposals from women, and the unofficial title of NASA's sexy "Mohawk Guy."

He's best known for his stars-and-stripes mohawk (which he debuted for the Curiosity landing), but changes his hairstyle frequently.

He is a major Star Trek fan.

 

Ariel Garten


CEO and head of research at InteraXon

By tracking brain activity, the Canadian Ariel Garten creates products to improve people’s cognition and reduce stress. Her company just debuted Muse, a brain-sensing headband that shows your brain’s activity on a smartphone or tablet. The goal is to eventually let you control devices with your mind.

She runs her own fashion design business and has opened fashion week runways with shirts featuring brainwaves.

 

 

 

Amy Mainzer


Astrophysicist and deputy project scientist for the Wide-field Infrared Survey Explorer (WISE) at NASA's Jet Propulsion Laboratory

Amy Mainzer built the sensor for the Spitzer Space Telescope, a NASA infrared telescope that’s been going strong for the last 10 years in deep space. Now she’s the head scientist for a new-ish telescope, using it to search the sky for asteroids and comets.

Her job is to better understand potentially hazardous asteroids, including how many there are as well as their orbits, sizes, and compositions.

Her idea of a good time is to do roller disco in a Star Trek costume.

 

 

Dr. Aditi Shankardass


Pediatric neurologist, Boston Children's Hospital, Harvard Medical School (her British pedigree includes a B.Sc. in physiology from King's College London, an M.Sc. in neurological science from University College London, and a Ph.D. in cognitive neuroscience from the University of Sheffield).

Aditi Shankardass is a renowned pediatric neurologist who uses real-time brain recordings to accurately diagnose children with developmental disorders and the underlying neurological causes of dyslexia.

Her father is a lawyer who represents celebrities. She enjoys dancing, acting, and painting.

 

 

 

Albert Mach


Bio-engineer and senior scientist at Integrated Plasmonics Corporation

As a graduate student Albert Mach designed tiny chips that can separate out cells from fluids and perform tests using blood, pleural effusions, and urine to detect cancers and monitor them over time.

He loved playing with LEGOs as a kid.

 

 

 

Now that we’ve had a look at ‘modern scientists’ it is only appropriate to look at a bit of ‘modern science’…

STEM (Science, Technology, Engineering, Math) is not a job – it's a way of life. You can't turn it off… you don't leave that understanding and outlook at the office when you leave (not that any of the above brainiacs stop working on a problem just because it's 5 PM). One of the fundamental aspects of science is measurement: it is absolutely integral to every sector of scientific endeavor. In order to measure, and to communicate those measurements to other scientists, a set of metrics is required – one that is shared and agreed upon by all. We are familiar with the meter, the litre, the hectare and the kilogram. But there are other, less well-known measurements – one of which I will share with you here.

What do humans, beer, sub-atomic collisions, the cosmos and the ‘broad side of a barn’ have in common? Let’s find out!

Units of measure are extremely important in any branch of science, but some of the most fundamental units are those of length (1 dimension), area (2 dimensions) and volume (3 dimensions). In human terms, we are familiar with the meter, the square meter and the litre. (For those lonely Americans – practically the only culture that is not metric – think of yards, square yards and quarts). I want to introduce you to a few different measurements.

There are three grand 'scales' of measurement in our observable universe: the very small (as in subatomic); the human scale; and the very large (as in cosmology). Even if you are not science-minded in detail, you will have heard of the angstrom [10⁻¹⁰ m (one ten-billionth of a meter)] – used to measure things like atoms; and the light-year [about 9 trillion kilometers (or about 6 trillion miles)] – used to measure galaxies.

the broad side of a barn


The actual unit of measure I want to share is called the "Hubble-barn" (a unit of volume), but a bit of background is needed. The barn is a serious unit of measure (of area) used by nuclear physicists. One barn is equal to 1.0 × 10⁻²⁸ m². That's about the size of the cross-sectional area of an atomic nucleus. The term was coined by the physicists running those incredibly large machines (atom smashers) that help us understand subatomic function and structure by basically creating head-on collisions between particles travelling in opposite directions at very high speeds. It is very, very difficult! And the colloquial expression "you can't hit the broad side of a barn" led to the name for this unit of measure…

subatomic collision


Now, since even scientists have a sense of humor – while still understanding rigor and analogy – they further derived two more units of measure based on the barn: the outhouse (smaller than a barn) [1.0 × 10⁻⁶ barns]; and the shed (smaller than an outhouse) [1.0 × 10⁻²⁴ barns].

So now we have resolved part of the aforementioned "Hubble-barn" unit of measure. Remember that a unit of volume requires three dimensions, and we have now established two of those with a unit of area (the barn). A third dimension (a length, which is a one-dimensional quantity) is needed…

Large Hadron Collider

The predecessor to Hubble-barn is the Barn-megaparsec. A parsec is equal to about 3.26 light years (31 trillion kilometers / 19 trillion miles). Its name is derived from the distance at which one astronomical unit subtends an angle of one arcsecond. [Don’t stress if you don’t get those geeky details, the parsec basically makes it easy for astronomers to calculate distances directly from telescope observations].

A megaparsec is one million parsecs. As in a bit over 3 million light years. A really, really long way… this type of unit of measure is what astronomers and astrophysicists use to measure the size and distance of galaxies and entire chunks of the universe.

 

The bottom line is that if you multiply a really small unit (a barn) by a really big unit (a megaparsec), you get something rather normal. In this case about 3 ml – roughly two-thirds of a teaspoon. So here is where we get to the fun (ok, a bit geeky) aspect of scientific measurements. The next time your cake recipe calls for a teaspoon of sugar, just ask for a generous barn-megaparsec of sugar…

The Hubble length


Since for our final scientific observation we need a unit of measure quite a bit bigger than a teaspoon, we turn to the Hubble length (a cosmological yardstick obtained by dividing the speed of light by the Hubble constant). It's about 4,228 million parsecs [roughly 13.8 billion light years]. As you can see, by using a larger length than the megaparsec we now get a larger unit of volumetric measure. Once the high mathematics is complete, we find that one Hubble-barn comes to about 13 litres – call it a couple dozen pints… of beer. So two physicists go into a bar… and order a Hubble-barn of ale (and invite the rest of the department to help drink it)…
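If you'd like to check the arithmetic yourself, here's a quick back-of-the-envelope script (constants rounded to textbook values for the barn, the parsec and the Hubble length; nothing else is assumed):

```python
# Back-of-the-envelope check of the whimsical volume units above (rounded constants).
BARN = 1.0e-28                       # square metres
PARSEC = 3.0857e16                   # metres
MEGAPARSEC = 1.0e6 * PARSEC
HUBBLE_LENGTH = 4.228e9 * PARSEC     # ~13.8 billion light years, in metres

barn_megaparsec = BARN * MEGAPARSEC  # volume = area x length, in cubic metres
hubble_barn = BARN * HUBBLE_LENGTH

print(f"barn-megaparsec ≈ {barn_megaparsec * 1e6:.1f} mL")   # ≈ 3.1 mL, about 2/3 teaspoon
print(f"Hubble-barn     ≈ {hubble_barn * 1e3:.1f} L")        # ≈ 13 L, a couple dozen pints
```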

See… there IS a practical use for science!

Pint of Beer


Hubble telescope


 

The Certainty of Uncertainty… a simple look at the fascinating and fundamental world of Quantum Physics

June 7, 2015 · by parasam

Recently a number of articles have been posted, all reporting on a new set of experiments that attempt to shed light on one of the most fundamental postulates of quantum physics: the apparently contradictory nature of physical essence. Is 'stuff' a wave or a particle? Or is it both at the same time? Or, even more perplexing (and apparently validated by a number of experiments, including this most recent one): the very observation of the essence determines whether it manifests as a wave or a particle.

Yes, this can sound weird – and even really smart scientists struggled with how deeply it challenges our everyday intuitions: Einstein famously derided quantum entanglement as "spooky action at a distance"; Niels Bohr (one of the pioneers of quantum theory) said, "If quantum mechanics hasn't profoundly shocked you, you haven't understood it yet."

If you are not immediately familiar with the duality paradox of objects in quantum physics (the so-called wave/particle paradox) – then I’ll share here a very brief introduction, followed by some links to articles that are well worth reading before continuing with this post, as the basic premise and current theories really will help you understand the further comments and alternate theory that I will offer. The authors of these referenced articles are much smarter than me on the details and mathematics of quantum physics – and yet do not (in these articles) use high mathematics, etc – I found the presentations clear and easy to comprehend.

Essentially, back when the physics of very small things was being explored by the great minds of Einstein, Niels Bohr, Werner Heisenberg and others, various properties of matter were derived. One of the principles of quantum mechanics – the so-called ‘Heisenberg Uncertainty Principle’ (after Heisenberg, who first introduced it in 1927) – is that you cannot know precisely both the position and the momentum of any given particle at the same time.

The more precisely you know one property (say, position) the less precisely you can know the other property (in this case, momentum). In our everyday macro world, we “know” that a policeman can know both our position (300 meters past the intersection of Grand Ave. and 3rd St.) and our momentum (87 km/h) when his radar gun observed us in a 60km/h zone… and as much as we may not like the result, we understand and believe the science behind it.

In the very, very small world of quantum objects (where an atom is a really big thing…) things do not behave as one would think. The Uncertainty Principle is the beginning of ‘strangeness’, and it only gets weirder from there. As the study of quantum mechanics and other areas of subatomic physics progressed, scientists began to understand that matter apparently could behave as either a wave or a particle.

This is like saying that a bit of matter could be both a stream of water and a piece of ice – at the same time! And if this does not stretch your mental process into enough of a knot… the mathematics shows that everything (all matter in the known universe) actually possesses this property of “superposition” [the wave/particle duality], and that the act of observing the object is what determines whether the bit of matter ‘becomes’ a wave or a particle.

John Wheeler, with his now-famous “delayed choice” thought experiment [Gedankenexperiment] of 1978, proposed a variant of the double-slit test for a quantum object in which the setup can be changed after the object is already in flight. Essentially you fire a quantum object (a tiny bit of matter) at two parallel slits in a solid panel. If the object is a particle (like a ball of ice) then it can only go through one slit or the other – it can’t stay a particle and go through both at the same time. If however the quantum object is a wave (such as a light wave) then it can go through both slits at the same time. Observation of the result tells us whether the quantum object was a wave or a particle. How? If the object was a wave, then we will see an interference pattern on the other side of the slits (since the wave, passing through both slits, recombines on the other side and exhibits areas of ‘interference’ – where some of the wave combines constructively and some combines destructively – and these alternating ‘crests and valleys’ are easily recognized).


However, if the quantum object is a particle then no interference will be observed, as the object could only pass through one slit. Here’s the kicker: assume that we can open or close one of the slits very quickly – AFTER the quantum object (say a photon) has been fired at the slits. This is part of the ‘observation’. What if the decision to open or close one of the slits is made after the particle has committed to passing through either one slit or the other (as a particle)? Let’s say that initially (when the photon was fired) only one slit was open, but that we subsequently opened the other slit. If the particle was ‘really’ a particle then we should see no interference pattern – the expected behavior. But… the weirdness of quantum physics says that if we open the second slit (thereby setting up the stage to observe ‘double slit interference’) – that is exactly what we will observe!
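To make the “interference vs. no interference” distinction a little more concrete, here is a toy numerical sketch (purely illustrative: it ignores the single-slit diffraction envelope and every real-world complication, and the wavelength and slit spacing are arbitrary values chosen for the example). With both slits open, the textbook wave result is an intensity proportional to the cosine-squared of half the phase difference between the two paths; with one slit open there are no alternating crests and valleys:

```python
import numpy as np

wavelength = 500e-9        # 500 nm light, in meters (assumed value)
slit_separation = 50e-6    # 50 micron slit spacing (assumed value)
angles = np.linspace(-0.02, 0.02, 9)   # a few small detection angles, in radians

# phase difference between the two slit-to-screen paths at each angle
delta = 2 * np.pi * slit_separation * np.sin(angles) / wavelength

two_slit = np.cos(delta / 2) ** 2            # both slits open: crests and valleys
one_slit = np.full_like(angles, 0.5)         # one slit open: no interference structure

for a, i2, i1 in zip(angles, two_slit, one_slit):
    print(f"angle {a:+.3f} rad   two-slit {i2:.2f}   one-slit {i1:.2f}")
```

The alternating values in the two-slit column are the ‘crests and valleys’; the flat one-slit column is what a particle-like (no interference) result looks like.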

And this is precisely what Alain Aspect and his colleagues demonstrated experimentally in 2007 (in France). Those experiments were performed with photons. The most recent experiments proved for the first time that the wave/particle duality applies to massive objects such as entire atoms (remember, in the quantum world an atom is really, really huge). Andrew Truscott (with colleagues at the Australian National University in late 2014) repeated the same experiment in principle, but with helium atoms. (They whacked the atoms with laser pulses instead of feeding them through physical slits, but the same principle of ‘single vs double slit’ testing/observation applied.)

So, to summarize, we have a universe where everything can be (actually IS) two things at once – and only our observation of something decides what it turns out to be…

To be clear, this “observation” is not necessarily human observation (indeed it’s a bit hard to see atoms with a human eyeball..) but rather the ‘entanglement’ [to use the precise physics term] between the object under observation and the surrounding elements. For instance, the presence of one slit or two – in the above experiment – IS the ‘measurement’.

Now, back to the ‘paradox’ or ‘conundrum’ of the wave/particle duality. If we try – as a thought experiment – to understand what is happening, there are really only two possible conclusions: either a chunk of matter (a quantum object) is two things at once, or there is some unknown communications mechanism that would allow messaging faster than the speed of light (which would violate all current laws of physics). To explain this: one possibility is that the object, having passed through the apparatus, somehow sends a message back to its earlier self: “Hey, I got through the experiment by going through only one slit, so I must be a particle, so please behave like a particle when you approach the slits…” Until recently, the only other possibility was this vexing ‘duality’, where the best explanation scientists could come up with was that quantum objects appeared to behave as waves… unless you looked at them, and then they behaved like particles!

Yes, quantum physics is stranger than an acid trip…

A few years ago another wizard scientist (in this case a chemist looking at quantum mechanics to better understand what really goes on with chemical reactions) [Prof. Bill Poirier at Texas Tech] came up with a new theory: that of Many Interacting Worlds (MIW). Instead of having to believe that things are two things at once, or apparently communicate over great distances at speeds faster than light, Prof. Poirier postulated that very small particles from other universes ‘seep through’ into ours and interact with our own universe.

Now before you think that I, and these other esteemed scientists, have gone off the deep end completely – exhaustive peer review and many recalculations of the mathematics have shown that his theory does not violate ANY current laws of quantum mechanics, physics or general relativity. There is no ‘fuzziness’ in his theory: the apparent ‘indeterminate’ position of a particle (as observed in our world) is actually the observed phenomenon of the interaction between an ‘our world’ particle and an ‘other world’ particle.

Essentially Poirier is theorizing exactly what mystics have been telling us for a very long time: that there are parallel universes! Now, to be accurate, we are not completely certain (how can we be, in what we now know is an Uncertain World?) that this theory is the only correct one. All that has been shown at this point is that the MIW theory is at least as valid as the better-established “wave/particle duality” theory.


Now, to clarify the “parallel universe” theory with a diagram: the black dots are little quantum particles in our universe, the white dots are similar particles in a parallel universe. [Remember that at the quantum level of the infinitesimally small there is mostly empty space, so there is no issue with little things ‘seeping in’ from an alternate universe.] These particles are in slightly different positions (notice the separations between the black dots and white dots). It’s this ‘positional uncertainty’ – caused by the presence of particles from two universes very close to each other – that is the root of the apparent inability to measure position and momentum exactly in our universe.

Ok, time for a short break before your brain explodes into several alternate universes… Below is a list of the links to the theories and articles discussed so far. I encourage a brief internet diversion – each of these is informative and even if not fully grasped the bones of the matter will be helpful in understanding what follows.

  • Do atoms going through double slit know they are being observed?
  • Strange behavior of quantum particles
  • Reality doesn’t exist until we measure it
  • Future events determine what happens in the past???
  • Parallel worlds exist and interact with ours

Ok, you are now fortified with knowledge, understanding – or possibly in need of a strong dose of a good single malt whiskey…

What I’d like to introduce is an alternative to (or perhaps an extension of) Prof. Poirier’s theory of parallel universes – which, BTW, I have no issue with, but which I don’t think necessarily explains all the issues surrounding the current ‘wave/particle duality’.

In the MIW (Many Interacting Worlds) notion, the property that is commingled with our universe is “position” – and yet there is the equally important property of “momentum” that should be considered. If the property of Position is no longer sacred, should not Momentum be more thoroughly investigated?

A discussion on momentum will also provide some interesting insight on some of the vexing issues that Einstein first brought to our attention: the idea of time, the theory of ‘space-time’ as a construct, and the mathematical knowledge that ‘time’ can run forwards or backwards – intimating that time travel is possible.

First we must understand that momentum, as a property, has two somewhat divergent qualities depending on whether one is discussing the everyday ‘Newtonian’ world of baseballs and automobiles (two objects that are commonly used in examples in physics classes), or the quantum world of incredibly small and strange things. In the ‘normal’ world, we all learned that Momentum = Mass x Velocity. The classic equation p=mv explains most everything, from why it’s very hard for an ocean liner to stop or turn quickly once moving, all the way to the slightly odd, but correct, fact that any object with no velocity (at complete rest) has no momentum. (To be fully accurate, this means no velocity relative to the observer – Einstein’s relativity and all that…)

However, in the wacky and weird world of quantum mechanics all quantum objects always have both position and momentum. As discussed already, we can’t accurately know both at the same time – but that does not mean the properties don’t exist with precise values. The new theory mentioned above (the Many Interacting Worlds) is primarily concerned with ‘alternate universe particles’ interacting, or entangling, in the spatial domain (position) with particles in our universe.

What happens if we look at this same idea – but from a ‘momentum’ point of view? Firstly, we need to better understand the concept of time vs. momentum. Time is totally a human construct – that actually does not exist at the quantum level! And, if one carefully thinks about it, even at the macroscopic level time is an artifice, not a reality.

There is only now. Again, philosophers and mystics have been trying to tell us this for a very long time. If you look deep into a watch what you will see is the potential energy of a spring, through gears and cogwheels, causing movement of some bits of metal. That’s all. The watch has no concept of ‘time’. Even a digital watch is really the same: a little crystal is vibrating due to battery power, and some small integrated circuits are counting the vibrations and moving a dial or blinking numbers. Again, nothing more than a physical expression of momentum in an ordered way. What we humans extrapolate from that: the concept of time; past; future – is arbitrary and related only to our internal subjective understanding of things.

Going back to the diagram of black and white dots discussed above:


Let’s assume for a moment that the black and white dots represent slight changes in momentum rather than position. This raises several interesting possibilities: 1) that there is an alternate universe, identical in EVERY respect to ours, except that with a slightly different momentum things would be occurring either slightly ahead (‘in the future’) or slightly behind (‘in the past’); or 2) that our own universe exists in multiple momentum states at the same time – with only ‘observation’ deciding which version we experience.

One of the things that this could explain is the seemingly bizarre ability of some to ‘predict the future’, or to ‘see into the past’. If these ‘alternate’ universes (based on momentum) actually exist, then it is not all that hard to accept that some could observe them, in addition to the one where we most all commonly hang out.

Since most of quantum theory is based around probability, it appears likely that the highest-probability observable ‘alternate momentum’ events will be ones that are closely related in the value of momentum to the particle that is currently under observation in our universe. But that does not preclude the possibility of observation of particles that are much more removed in terms of momentum (i.e. potentially further in the past or future).

I personally do not possess the knowledge of the high mathematics necessary to prove this to the same degree that Bill Poirier and other scientists have done with the positional theorems – but I invite that exercise from anyone who has such skills. As a thought experiment, it seems as valid as anything else that has been proposed to date.

So now, as was so eloquently said at the end of each episode of the “Twilight Zone”: we now return you to your regular program and station. Only perhaps slightly more confused… but also with some cracks in the rigid worldview of macroscopic explanations.

The Patriot Act – upcoming expiry of Section 215 and other unpatriotic rules…

April 18, 2015 · by parasam

Section215

On June 1, less than 45 days from now, a number of sections of the Patriot Act expire. The administration and a large section of our national security apparatus, including the Pentagon, Homeland Security, etc. are strongly pushing for extended renewal of these sections without modification.

While this may on the surface seem like something we should do (we need all the security we can get in these times of terrorism, Chinese/North Korean/WhoKnows hacks, etc. – right?) – the reality is significantly different. Many of the Sections of the Patriot Act (including ones that are already in force and do not expire for many years to come) are insidious, give almost unlimited and unprecedented surveillance powers to our government (and by the way any private contractors who the government hires to help them with this task), and are mostly without functional oversight or accountability.

Details of the particular sections up for renewal may be found in this article, and for a humorous and allegorical take on Section 215 (the so-called “Library Records” provision) I highly recommend this John Oliver video. While the full “Patriot Act” is huge, and covers an exhaustingly broad scope of activities by the government (meaning its various security agencies, including but not limited to the CIA, FBI, NSA, Joint Military Intelligence Services, etc.), the sections of particular interest in terms of digital security pertaining to communications are the following:

  • Section 201, 202 – Ability to intercept communications (phone, e-mail, internet, etc.)
  • Section 206 – roving wiretap (ability to wiretap all locations that a person may have visited or communicated from for up to a year).
  • Section 215 – the so-called “Library Records” provision, basically allowing the government (NSA) to bulk-collect communications from virtually everyone and store them for later ‘research’ to see whether any terrorist or other activity deemed to be in violation of national security interests has occurred.
  • Section 216 – pen register / trap and trace (the ability to collect metadata and/or actual telephone conversations – metadata does not require a specific warrant, recording content of conversations does).
  • Section 217 – computer communications interception (ability to monitor a user’s web activity, communications, etc.)
  • Section 225 – Immunity from prosecution for compliance with wiretaps or other surveillance activity (essentially protects police departments, private contractors, or anyone else that the government instructs/hires to assist them in surveillance).
  • Section 702 – Surveillance of ‘foreigners’ located abroad (in principle this should restrict surveillance to foreign nationals outside of the US at the time of such action, but there is much gray area concerning exactly who is a ‘foreigner’ – for instance, is the foreign-born wife of a US citizen a “foreigner”, and if so, are communications between the wife and the husband allowed?)

Why is this Act so problematic?

As with many things in life, the “law of unintended consequences” can often overshadow the original problem. In this case, the original rationale – wanting to get all the info possible about persons or groups that may be planning terrorist activities against the USA – was potentially noble, but the unprecedented powers and lack of accountability provided for by the Patriot Act have the potential (and in fact have already been shown) to scuttle many individual freedoms that form the basis of our society.

Without regard to the methods or justification for his actions, the revelations provided by Ed Snowden’s leaks of the current and past practices of the NSA are highly informative. This issue is now public, and cannot be ‘un-known’. What is clearly documented is that the NSA (and other entities, as has since come to light) has extended surveillance on millions of US citizens living within the domestic US to a far greater extent than even the original authors of the Patriot Act envisioned. [This was revealed in multiple TV interviews recently.]

The next major issue is that of ‘data creep’ – that such data, once collected, almost always gets replicated into other databases, etc. and never really goes away. In theory, to take one of the Sections (702), data retention even for ‘actionable surveillance of foreign nationals’ is limited to one year, and inadvertent collection of surveillance data on US nationals, or even a foreign national that has travelled within the borders of the USA is supposed to be deleted immediately. But absolutely no instruction or methodology is given on how to do this, nor are any controls put in place to ensure compliance, nor are any audit powers given to any other governmental agency.

As we have seen in past discussions regarding data retention and deletion with the big social media firms (Facebook, Google, Twitter, etc.), it’s very difficult to actually delete data permanently. Firstly, in spite of what appears to be an easy step, actually deleting your data from Facebook is incredibly hard to do (what appears to be easy is just the deactivation of your account; permanently deleting data is a whole different exercise). On top of that, all these firms (and the NSA is no different) make backups of all their server data for protection and business continuity. One would have to search and compare every past backup to ensure your data was also deleted from those.

And even the backups have backups… it’s considered an IT ‘best practice’ to back up critical information across different geographical locations in case of disaster. You can see the scope of this problem… and once you understand that the NSA for example will under certain circumstances make chunks of data available to other law enforcement agencies, how does one then ensure compliance across all these agencies that data deletion occurs properly? (Simple answer: it’s realistically impossible).

So What Do We Do About This?

The good news is that most of these issues are not terribly difficult to fix… but the hard part will be changing the mindset of many in our government who feel that they should have the power to do anything they want in total secrecy with no accountability. The “fix” is to basically limit the scope and power of the data collection, provide far greater transparency about both the methods and actual type of data being collected, and have powerful audit and compliance methods in place that have teeth.

The entire process needs to be stood on its end – with the goal being to minimize surveillance to the greatest extent possible, and to retain as little data as possible, with very restrictive rules about retention, sharing, etc. For instance, if data is shared with another agency, it should ‘self-expire’ (there are technical ways to do this) after a certain amount of time, unless it has been determined that this data is now admissible evidence in a criminal trial – in which case the expiry can be revoked by a court order.
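As a purely hypothetical sketch of the ‘self-expire’ idea (not any agency’s actual system – the class, fields and retention period below are invented for illustration): each shared record carries an expiry timestamp that is honored on every read, and the expiry can only be lifted by recording an explicit court order.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class SharedRecord:
    payload: bytes
    shared_at: datetime
    retention: timedelta = timedelta(days=180)   # hypothetical default retention window
    court_order_id: Optional[str] = None         # expiry can only be revoked by court order

    def is_expired(self, now: Optional[datetime] = None) -> bool:
        if self.court_order_id:                  # admissible evidence: retention lifted
            return False
        now = now or datetime.now(timezone.utc)
        return now >= self.shared_at + self.retention

    def read(self) -> bytes:
        if self.is_expired():
            raise PermissionError("record has self-expired and must be purged")
        return self.payload

# A record shared 200 days ago, with no court order attached, can no longer be read.
old = SharedRecord(b"...", shared_at=datetime.now(timezone.utc) - timedelta(days=200))
print(old.is_expired())   # True
```

The point is not the code itself but the shape of the control: expiry is enforced at the point of use, and the only override is an auditable, court-ordered exception.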


The irony is that even the NSA has admitted that there is no way they can possibly search through all the data they have already collected, in the sense of a general keyword search across everything. They could of course look for a particular person-name or place-name, but if that is all they needed, they could have collected surveillance data only for those parameters in the first place, instead of bulk-collecting from American citizens living in the USA…

While they won’t give details, reasonable assumptions can be drawn from public filings and statements, as well as purchase information from storage vendors… and the NSA alone can be assumed to have many hundreds of exabytes of data stored. Given that 1 exabyte = 1,024 petabytes (and each petabyte = 1,024 terabytes), this is an incredible amount of data. To put it another way, it’s hundreds of billions of gigabytes… and remember that your ‘fattest’ iPhone holds 128GB.
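To put some rough numbers on that, here is a back-of-envelope sketch (the “300 exabytes” figure is only an assumption standing in for “many hundreds of exabytes”, not a reported number):

```python
GB_PER_TB = 1024
TB_PER_PB = 1024
PB_PER_EB = 1024

gb_per_eb = GB_PER_TB * TB_PER_PB * PB_PER_EB   # ~1.07 billion GB per exabyte
assumed_store_eb = 300                           # hypothetical "hundreds of exabytes"

total_gb = assumed_store_eb * gb_per_eb
print(f"{total_gb:,} GB")                        # ~322 billion GB
print(f"{total_gb // 128:,} maxed-out 128GB iPhones")   # ~2.5 billion phones
```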

It’s a mindset of ‘scoop up all the data we can, while we can, just in case someday we might want to do something with it…’  This is why, if we care about our individual freedom of expression and liberty at all, we must protest against the blind renewal of these deeply flawed laws and regulations such as the Patriot Act.

This discussion is entering the public domain more and more – it’s making the news – but it takes action, not just talk. Make a noise. Write to your congressional representatives. Let them know this is an urgent issue and that they will be held accountable at election time for their position on this renewal. If the renewal is not granted, then – and typically only then – will the players be forced to sit down and have the honest discussion that should have happened years ago.

Data Security – An Overview for Executive Board members [Part 6: Cloud Computing]

March 21, 2015 · by parasam

Introduction

In the last part of this series on Data Security we’ll tackle the subject of the Cloud – the relatively ‘new kid on the block’. Actually “cloud computing” has been around for a long time, but the concept and naming were ‘repackaged’ along the way. The early foundation for the Cloud was ARPANET (1969), followed by several major milestones: Salesforce.com – the first major enterprise app available on the web (1999); Amazon Web Services (2002); Amazon EC2/S3 (2006); Web 2.0 (2009). The major impediment to mass use of the cloud (aka remote data centers) was the lack of cheap bandwidth. In parallel with the arrival of massive bandwidth [mainly in the US, western Europe and the Pacific Rim (APAC)], the development of fast and reliable web-based apps allowed concepts such as SaaS (Software as a Service) and other ‘remote’ applications to become viable. The initial use of the terms “Cloud Computing” and “Cloud Storage” was intended to describe (in a buzzword fashion) the rather boring subject of remote data centers, hosted applications, storage farms, etc. in a way that mass consumers and small business could grasp.

Unfortunately this backfired in some corporate circles, leading to a fragmented and slow adoption of some of the potential power of cloud computing. Part of this was PR and communication; another part was the fact that the security of early (and unfortunately still many of today’s!) cloud centers was not very good. Concerns over security of assets, management and other access and control issues led many organizations – particularly media firms – to shun ‘clouds’ for some time, as they feared (perhaps rightly so, early on) that their assets could be compromised or pirated. With generally poor communication about cloud architecture, and the difference between ‘public clouds’ and ‘private clouds’ not being effectively explained, widespread confusion existed for several years concerning this new technology.

Like most things in the IT world, very little was actually “new” – but incremental change is often perceived as boring so the marketing and hype around gradual improvements in remote data center capability, connectivity and features tended to portray these upgrades as a new and exciting (and perhaps untested!) entity.

The Data Security Model

Cloud Computing

To understand the security aspects of the Cloud, and more importantly to recognize the strategic concepts that apply to this computational model, it is important to know what a Cloud is, and what it is not. Essentially a cloud is a set of devices that host services in a remote location in relation to the user. Like most things in the IT world, there is a spectrum of capability and complexity… little clouds and bigger clouds… For instance, a single remote server that hosts some external storage that is capable of being reached by your cellphone can be considered a ‘cloud’… and on the other extreme the massive amount of servers and software that is known as AWS (Amazon Web Services) is also a cloud.

In both cases, and everything in between, all of the same issues we have discussed so far apply to the local cloud environment: Access Control, Network Security, Application Security and Data Compartmentalization. Every cloud provider must ensure that all these bits are correctly implemented – and it’s up to the due diligence of any cloud user/subscriber to verify that in fact these are in place. In addition to all these security features, there are some unique elements that must be considered in cloud computing.

The two main additional elements, in terms of security for both the cloud environment itself and the ‘user’ (whether that be an individual user or an entire organization that utilizes cloud services), are Cloud Access Control and Data Transport Control. Since the cloud is by its very nature remote from the user, a WAN (Wide Area Network) connection is almost always used to connect users to the cloud. This type of access is more difficult to secure and, as has been shown repeatedly by recent history, is susceptible to compromise. Even if the access control is fully effective, thereby allowing only authorized users to enter and perform operations, the problem of unauthorized data transport remains: again, recent reports demonstrate that ‘inside jobs’ – or cases where a user’s credentials are compromised or stolen – can result in data being moved or deleted by someone with legitimate access, often with serious results.

An extra layer of security protocols and procedures is necessary to ensure that data transport or editing operations are appropriate and authorized.

  • Cloud Access Control (CAC)  –  Since the cloud environment (from an external user perspective) is ‘anonymous and external’ [i.e. neither the cloud nor the user can directly authenticate the other, nor is either contained within physical proximity], the possibility of unauthorized access (by a user) or spoofing (misdirecting a user to a cloud site other than the one to which they intended to connect) is much greater. Both users and cloud sites must take extra precautions to ensure these scenarios do not take place.
    • Two-factor authentication is even more important in a cloud environment than in a ‘normal’ network. The best form of such authentication is where one of the factors is a Real Time Key (RTK). Essentially this means that one of the key factors is either generated in real time (or revealed in real time) and this shared knowledge between the user and the cloud is used to help authenticate the session.
    • One common example of RTK is where a short code is transmitted (often as a text message to the user’s cellphone) after the user completes the first part of a sign-on process using a username/password or some other single-factor login procedure.
    • Another form of RTK could be a shared random key, where the user device is usually a small ‘fob’ that displays a random number that changes every 30 seconds. The cloud site contains a random number generator seeded identically to the user’s fob, so the two numbers will match within the same 30-second window (a simplified sketch of this rolling-code idea appears after this list).
    • Either of these methods secures against both unauthorized user access (as is obvious) and protects the user against spoofing (in the first method, the required prior knowledge of the user’s cellphone number by the cloud site; in the second the requirement of matching random number generators).
    • With small variations to the above procedures, such authentication can apply to M2M (Machine to Machine) sessions as well.
  • Data Transport Control (DTC) – Probably the most difficult aspect of security to control is unauthorized movement, copying, deletion, etc. of data that is stored in a cloud environment. It’s the reason that even up until today many Hollywood studios prohibit their most valuable motion picture assets from being stored or edited in public cloud environments. Whether from external ‘hackers’ or internal network admins or others who have gone ‘rogue’ – protection must be provided to assets in a cloud even from users/machines that have been authenticated.
    • One method is to encrypt the assets, whereby the users that can effect the movement, etc. of an asset do not have the decryption key, so even if data is copied it will be useless without the decryption key. However, this does not protect against deletion or other edit functions that could disrupt normal business. There are also times where the encryption/decryption process would add complexity and reduce efficiency of a workflow.
    • Another method (offered by several commercial ‘managed transport’ applications) is a strict set of controls over which endpoints can receive or send data to/from a cloud. With the correct process controls in place (requiring for example that the defined lists of approved endpoints cannot be changed on the fly, and requires at least two different users to collectively authenticate updates to the endpoint list), a very secure set of transport privileges can be set up.
    • Tightly integrating ACL (Access Control List) actions against users and a set of rules can again reduce the possibility of rogue operations. For instance, the deletion of more than 5 assets within a given time period by a single user would trigger an authentication request against a second user – this would prevent a single user from carrying out wholesale data destruction. You might lose a few assets, but not hundreds or thousands (a sketch of this kind of rate-limit rule also follows below).
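Below is a simplified sketch of the rolling-code form of RTK mentioned above. It is an illustration of the idea only – production systems use the standardized TOTP algorithm and hardened secret storage, not this toy – but it shows how both sides can derive the same short code from a pre-shared secret and the current 30-second window without anything being transmitted:

```python
import hashlib
import hmac
import time
from typing import Optional

def rolling_code(shared_secret: bytes, at: Optional[float] = None, window: int = 30) -> str:
    """Derive a 6-digit code from the shared secret and the current 30-second window."""
    counter = int((at if at is not None else time.time()) // window)
    digest = hmac.new(shared_secret, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# The user's fob and the cloud site compute the same code independently.
secret = b"provisioned-out-of-band"     # hypothetical pre-shared secret
print(rolling_code(secret) == rolling_code(secret))   # True within the same window
```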
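And here is a hypothetical sketch of the deletion rate-limit rule described in the last bullet (the class name, threshold and window are invented purely to show the shape of the control): deletions are counted per user over a sliding window, and crossing the threshold blocks further deletions until a second approver is supplied.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class DeletionGuard:
    """Require a second approver once a user deletes too many assets too quickly."""
    def __init__(self, max_deletes: int = 5, window_seconds: int = 3600):
        self.max_deletes = max_deletes
        self.window = window_seconds
        self.history = defaultdict(deque)   # user -> timestamps of recent deletions

    def may_delete(self, user: str, second_approver: Optional[str] = None) -> bool:
        now = time.time()
        recent = self.history[user]
        while recent and now - recent[0] > self.window:
            recent.popleft()                # drop deletions that fell outside the window
        if len(recent) >= self.max_deletes and second_approver is None:
            return False                    # escalate: a second user must approve
        recent.append(now)
        return True

guard = DeletionGuard()
print([guard.may_delete("alice") for _ in range(7)])   # five True, then False, False
```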

One can see that the art of protection here is really in the strategy and process controls that are set up – the technical bits just carry out the strategies. There are always compromises, and the precise set of protocols that will work for one organization will be different for another: security is absolutely not a ‘one size fits all’ concept. There is also no such thing as ‘total security’ – even if one wanted to sacrifice a lot of usability. The best practices serve to reduce the probability of a serious breach or other kind of damage to an acceptable level.

Summary

In this final section on Data Security we’ve discussed the special aspects of cloud security at a high level. As cloud computing becomes more and more integral to almost every business today, it’s vital to consider the security of such entities. As the efficiency, ubiquity and cost-saving features of cloud computing continue to rise, many times a user is not even consciously aware that some of the functionality they enjoy during a session is being provided by one or more cloud sites. To further add to the complexity (and potentially reduce security) of cloud computing in general, many clouds talk to other clouds… and the user may have no knowledge or control over these ‘extended sessions’. One example (that is currently the subject of large-scale security issues) is mobile advertising placement.

When a user launches one of their ‘free’ apps on their smartphone, the little ads that often appear at the bottom are not placed there by the app maker, rather that real estate is ‘leased’ to the highest bidder at that moment. The first ‘connection’ to the app is often an aggregator who resells the ‘ad space’ on the apps to one or more agencies that put this space up for bidding. Factors such as the user’s location, the model of phone, the app being used, etc. all factor in the price and type of ad being accepted. The ads themselves are often further aggregated by mobile ad agencies or clearing houses, many of which are scattered around the globe. With the speed of transactions and the number of layered firms involved, it’s almost impossible to know exactly how many companies have a finger in the pie of the app being used at that moment.

As can be seen from this brief introduction to Data Security, the topic can become complex in the details, but actually rather simple at a high level. It takes a clear set of guidelines, a good set of strategies – and the discipline to carry out the rules that are finally adopted.

Further enquires on this subject can be directed to the author at ed@exegesis9.com

 

 

Data Security – An Overview for Executive Board members [Part 5: Data Compartmentalization]

March 20, 2015 · by parasam

Introduction

This part of the series will discuss Data Compartmentalization – the rational separation of devices, data, applications and communications access from each other and external entities. This strategy is paramount in the design of a good Data Security model. While it’s often overlooked, particularly in small business (large firms tend to have more experienced IT managers who have been exposed to this), the biggest issue with Compartmentalization is keeping it in place over time, or not fully implementing it correctly in the first place. While not difficult per se, a serious amount of thought must be given to the layout and design of all the parts of the Data Structure of a firm if the balance between Security and Usability is to be attained in regards to Compartmentalization.

The concept of Data Compartmentalization (allow me to use the acronym DC in the rest of this post, both for ease of the author in writing and you the reader!) implies a separating element, i.e. a wall or other structure. Just as the watertight compartments in a submarine can keep the boat from sinking if one area is damaged (and the doors are closed!!) a ‘data-wall’ can isolate a breached area without allowing the enterprise at large to be exposed to the risk. DC is not only a good idea in terms of security, but also integrity and general business reliability. For instance, it’s considered good practice to feed different compartments with mains power from different distribution panels. So if one panel experiences a fault not everything goes black at once. A mis-configured server that is running amok and choking that segment of a network can easily be isolated to prevent a system-wide ‘data storm’ – an event that will raise the hairs on any seasoned network admin… a mis-configured DNS server can be a bear to fix!

In this section we’ll take a look at different forms of DC, and how each is appropriate to general access, network communications, applications servers, storage and so on. As in past sections, the important aspect to take away is the overall strategy, and the intestinal fortitude to ensure that the best practices are built, then followed, for the full life span of the organization.

The Data Security Model

Data Compartmentalization

DC (Data Compartmentalization) is essentially the practice of grouping and isolating IT functions to improve security, performance, reliability and ease of maintenance. While most of our focus here will be on the security aspects of this practice, it’s helpful to know that basic performance is often increased (by reducing traffic on a LAN [Local Area Network] sub-net), reliability is improved since there are fewer moving parts within one particular area, and it’s easier to patch or otherwise maintain a group of devices when they are already grouped and isolated from other areas. It is not at all uncommon for a configuration change or software update to malfunction, and sometimes this can wreak havoc on the rest of the devices that are interconnected on that portion of a network. If all hell breaks loose, you can quickly ‘throw the switch’ and separate the malfunctioning group from the rest of the enterprise – provided this Compartmentalization has first been set up.

Again, from a policy or management point of view, it’s not important to understand all the details of programming firewalls or other devices that act as the ‘walls’ that separate these compartments (believe me, the arcane and tedious methodology required even today to correctly set up PIX firewalls for example gives even the most seasoned network admins severe headaches!). The fundamental concept is that it’s possible and highly desirable to break things down into groups, and then separate these groups at a practical level.

For a bit of perspective, let’s look at how DC operates in conjunction with each of the subjects that we’ve discussed so far: Access Control, Network Security Control and Application Security. In terms of Access Control the principle of DC would enforce that logins to one logical group (or domain or whatever logical division is appropriate to the organization) would restrict access to only the devices within that group. Now once again we run up against the age-old conundrum of Security vs. Usability – and here the oft-desired SSO (Single Sign On) feature is often at odds with a best practice of DC. How does one manage that effectively?

There are several methods: either a user is asked for additional authentication when wanting to cross a ‘boundary’ – or a more sophisticated SSO policy/implementation is put in place, where a request from a user to ‘cross a boundary’ is fed to an authentication server that automatically validates the request against the user’s permissions and allows the user to travel farther in cyberspace. As mentioned earlier in the section on Network Security, there is a definite tradeoff on this type of design, because rogue or stolen credentials could then be used to access very wide areas of an organization’s data structure. There are ways to deal with this, mostly in terms of monitoring controls that match the behavior of users with their past behavior, and a very granular set of ACL (Access Control List) permissions that are very specific about who can do what and where. There is no perfect answer to this balance between ideal security and friction-free access throughout a network. But in each organization the serious questions need to be asked, and a rational policy hammered out – with an understanding of the risks of whatever compromise is chosen.
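As a small, hypothetical sketch of the second approach (the usernames, zones and rules below are invented purely for illustration, not a real product’s API): an authentication service checks a boundary-crossing request against the user’s ACL, and separately flags crossings that fall outside that user’s normal pattern so they can be monitored more closely.

```python
# Hypothetical ACL: which compartments ("zones") each user may reach via SSO.
ACL = {
    "jsmith": {"finance", "hr"},
    "adevops": {"web-dmz", "app-servers"},
}

# Hypothetical baseline of zones each user normally works in, used for anomaly flagging.
USUAL_ZONES = {
    "jsmith": {"finance"},
    "adevops": {"web-dmz", "app-servers"},
}

def authorize_crossing(user: str, target_zone: str) -> tuple:
    """Return (allowed, flag_for_review) for an SSO boundary-crossing request."""
    allowed = target_zone in ACL.get(user, set())
    unusual = allowed and target_zone not in USUAL_ZONES.get(user, set())
    return allowed, unusual

print(authorize_crossing("jsmith", "hr"))            # (True, True)  - allowed, but unusual
print(authorize_crossing("jsmith", "app-servers"))   # (False, False) - denied outright
```

The design choice being illustrated is exactly the compromise discussed above: credentials alone do not grant frictionless travel everywhere; the ACL decides, and behavioral monitoring watches the crossings that are technically permitted.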

Moving on to the concept of DC for Network Security, a similar set of challenges and possible solutions presents itself as those raised above for Access Control. While one may think that from a pure usability standpoint everything in the organization’s data structure should be connected to everything else, this is neither practical nor reliable, let alone secure. One of the largest challenges to effective DC for large and mature networks is that these typically have grown over years, and not always in a fully designed manner: often things have expanded organically, with bits stuck on here and there as immediate tactical needs arose. The actual underpinnings of many networks of major corporations, governments and military sites are usually byzantine, rather disorganized and not fully documented. The topology of a given network or group of networks also has a direct effect on how DC can be implemented: that is why the best designs are those where DC is taken as a design principle from day one. There is no one ‘correct’ way to do this: the best answer for any given organization is highly dependent on the type of organization, the amount and type of data being moved around and/or stored, and how much interconnection is required. For instance, a bank will have very different requirements than a news organization.

Application Security can only be enhanced by judicious use of compartmentalization. For instance, web servers, an inherently public-facing application, should be isolated from an organization’s database, e-mail and authentication servers. One should also remember that the basic concepts of DC can be applied no matter how small or large an organization is: even a small business can easily separate public-facing apps from secure internal financial systems, etc. with a few small routers/firewalls. These devices are so inexpensive these days that there is almost no rationale for not implementing these simple safeguards.

One can see that the important concept of DC can be applied to virtually any area of the Data Security model:  while the details of achieving the balance between what to compartmentalize and how to monitor/control the data movement between areas will vary from organization to organization, the basic methodology is simple and provides an important foundation for a secure computational environment.

Summary

In this section we’ve reviewed how Data Compartmentalization is a cornerstone of a sound data structure, aiding not only security but performance and reliability as well. The division of an extended and complex IT ecosystem into ‘blocks’ allows for flexibility and ease of maintenance, and greatly contributes to the ability to contain a breach should one occur (and one inevitably will!). One of the greatest mistakes any organization can make is to assume “it won’t happen to me.” Breaches are astoundingly commonplace, and many are undetected or go unreported even if discovered. For many reasons, including loss of customer or investor confidence, potential financial losses, lack of understanding, etc., the majority of breaches that occur within commercial organizations are not publicly reported. Usually we find out only when the breach is sufficient in scope that it must be reported. And the track record for non-commercial institutions is even worse: NGOs, charities, research institutions, etc. often don’t even know of a breach unless something really big goes wrong.

The last part of this series will discuss Cloud Computing: as a relatively new ‘feature’ of the IT landscape, the particular risks and challenges to a good security model warrant a focused effort. The move to using some aspect of the Cloud is becoming prevalent very quickly among all levels of organizations: from the massive scale of iTunes down to an individual user or small business backing up their smartphones to the Cloud.

Part 6 of this series is located here.

 

Data Security – An Overview for Executive Board Members [Part 4: Application Security]

March 19, 2015 · by parasam

Introduction

In this section we’ll move on from Network Security (discussed in the last part) to the topic of Application Security. So far we’ve covered issues surrounding the security of basic access to devices (Access Control) and networks (Network Security); now we’ll look at an oft-overlooked aspect of a good Data Security model: how applications behave in regards to a security strategy. Once we have access to computers, smartphones, tablets, etc., and then have privileges to connect to other devices through networks, it’s like being inside a shop without doing any shopping. Rather useless…

All of our functional interaction with data is through one or more applications: e-mail, messaging, social media, database, maps, VoIP (internet telephony), editing and sharing images, financial and planning – the list is endless. Very few modern applications work completely in a vacuum, i.e. perform their function with absolutely no connection to the “outside world”. The connections that applications make can be as benign as accessing completely ‘local’ data – such as a photo-editing app requiring access to your photo library on the same computer on which the app is running – or can reach out to the entire public internet, such as Facebook.

The security implications of these interactions between applications and other apps, stored data, websites, etc. etc. are the area of discussion for the rest of this section.

The Data Security Model

Application Security

Trying to think of an app that is completely self-contained is actually quite an exercise. A simple calculator and the utility app that switches on the LED “flash” (to function as a flashlight) are the only two apps on my phone that are completely ‘stand-alone’. Every other app (some 200 in my case) connects in some way to external data (even if on the phone itself), the local network, the web, etc. Each one of these ‘connections’ carries with it a security risk. Remember that hackers are like rainwater: even the tiniest little hole in your roof or wall will allow water into your home.

While you may think that your cellphone camera app is a very secure thing – after all, you are only taking pix with your phone and storing those images directly on your phone, and we are not discussing uploading these images or sharing them in any way (yet)… remember that little message that pops up when you first install an app such as that? Where it asks permission to access ‘your photos’? (Different apps may ask for permission for different things… and this only applies to phones and tablets: laptops and desktops never seem to ask at all – they just connect to whatever they want to!)

I’ll give you an example of how ‘security holes’ can contribute to a weakness in your overall Data Security model: We’ll use the smartphone as an example platform. Your camera has access to photos. Now you’ve installed Facebook as well, and in addition to the Facebook app itself you’ve installed the FB “Platform” (which supports 3rd party FB apps) and a few FB apps, including some that allow you to share photos online. FB apps in general are notoriously ‘leaky’ (poorly written in terms of security, and some even deliberately do things with your data that they do not disclose). A very common user behavior on a phone is to switch apps without fully closing them. If FB is running in the background all installed FB apps are running as well. Each time you take a photo, these images are stored in the Camera Roll which is now shared with the FB apps – which can access and share these images without your knowledge. So the next time you see celebrity pix of things we really don’t need to see any more of… now you know one way this can easily happen.

The extent to which apps ‘share’ data is far greater than is usually recognized. This is particularly true in larger firms that often have distributed databases, etc. Some other examples of particularly ‘porous’ applications are: POS (Point Of Sale) systems, social media applications (corporate integration with Twitter, Facebook, etc. can be highly vulnerable), mobile advertising backoffice systems, applications that aggregate and transfer data to/from cloud accounts and many more. (Cloud Computing is a case unto itself, in terms of security issues, and will be discussed as the final section in this series.)

There are often very subtle ways in which data is shared from a system or systems. Some of these appear very innocuous, but a determined hacker can make use of even small bits of data, which can be linked with other bits to eventually provide enough information to make a breach possible. One example: many apps (including operating systems themselves – whether Apple, Android, Windows, etc.) send ‘diagnostic’ data to the vendor. Usually this is described as ‘anonymous’ and gives the user the feeling that it’s ok to allow: firstly, personal information is not transmitted; secondly, the data is supposedly only going to the vendor’s website for data collection – usually to study application crashes.

However, it’s not that hard to ‘spoof’ the server address to which the data is being sent, and seemingly innocent data being sent can often include either the ip address or MAC address of the device – which can be very useful in the future to a hacker that may attempt to compromise that device. The internal state of many ‘software switches’ is also revealed – which can tell a hacker whether some patches have been installed or not. Even if the area revealed by the app dump is not directly useful, a hacker that sees ‘stale’ settings (showing that this machine has not been updated/patched recently) may assume that other areas of the same machine are also not patched, and can use discovered vulnerabilities to attempt to compromise the security of that device.

The important thing to take away from this discussion is not the technical details (that is what you have IT staff for), but rather to ensure that protocols are in place to constantly keep ALL devices (including routers and other devices that are not ‘computers’ in the literal sense) updated and patched as new security vulnerabilities are published. An audit program should be in place to check this, and the resulting logs need to actually be studied, not just filed! You do not want to be having a meeting at some future date where you find out that a patch that could have prevented a data breach remained uninstalled for a year… which BTW is extraordinarily common.

The ongoing maintenance of a large and extended data system (such as many companies have) is a significant effort. It is as important as the initial design and deployment of the systems themselves. There are well-known methodologies for doing this correctly that provide a high level of both security and stability for the applications and technical business process in general. It’s just that often they are not universally applied without exception. And it’s those little ‘exceptions’ that can bite you in the rear – fatally.

A good rule of thumb is that every time you launch an application, that app is ‘talking’ to at least ten other apps, OS processes, data stores, etc. Since the average user has dozens of apps open and running simultaneously, you can see that most user environments are highly interconnected and potentially porous. The real truth is that as a collective society, we are lucky that there are not enough really good hackers to go around: the amount of potential vulnerabilities vastly outnumbers those who would take advantage of them!

If you really want to look at the extremes of this ‘cat and mouse’ game, do some deep reading on the biggest ‘hack’ of all time: the NSA penetration of massive amounts of US citizens’ data on the one side, and the procedures that Ed Snowden took in communicating with Laura Poitras and Glenn Greenwald (the journalists who first connected with Snowden) on the other. Ed Snowden, more than just about anyone, knew how to use computers effectively and not be breached. His process was fairly elaborate – but not all that difficult – and he managed to instruct both Laura and Glenn in how to set up the necessary security on their computers so that reliable and totally secret communications could take place.

Another very important issue to be aware of, particularly in this age of combined mobile and corporate computing with thousands of interconnected devices and applications: breaches WILL occur. It’s how you discover and react to them that is often the difference between a relatively minor loss and a CNN-exposé-level event… The bywords one should remember are: Containment, Awareness, Response and Remediation. Any good Data Security protocol must include practices that are just as effective against M2M (Machine to Machine) actions as against things performed by human actors. So constant monitoring software should be in place to see whether unusual numbers of connections, file transfers, etc. are taking place – even from one server to another. I know it’s an example I’ve used repeatedly in this series (I can’t help it – it’s such a textbook case of how not to do things!), but the Sony hack revealed that a truly massive amount of data (as in many, many terabytes) was transferred from supposedly ‘highly secure’ servers/storage farms to repositories outside the USA. Someone or something should have been notified that very sustained transfers of this magnitude were occurring, so at least some admin could check and see what was using all this bandwidth. Both of the most common corporate file transfer applications (Aspera and Signiant) have built-in management tools that can report on what’s going where… so this was not a case of something that needed to be built – it’s a case of using what’s already provided, correctly.
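The details will depend on whatever transfer and monitoring tools are in use, but a minimal sketch of the underlying idea – alert when sustained outbound volume from a source exceeds a baseline within a time window – might look like the following. The class name, the 50 GB/hour threshold and the addresses are assumptions for illustration only, not any vendor’s actual reporting interface:

```python
import time
from collections import defaultdict

class TransferMonitor:
    """Alert when a source moves more data than its hourly baseline allows."""
    def __init__(self, baseline_bytes_per_hour: int = 50 * 2**30):   # assumed 50 GB/hour
        self.baseline = baseline_bytes_per_hour
        self.window = 3600
        self.events = defaultdict(list)   # source -> [(timestamp, bytes_moved), ...]

    def record(self, source: str, destination: str, nbytes: int) -> None:
        now = time.time()
        history = self.events[source] + [(now, nbytes)]
        # keep only the last hour of activity for this source
        self.events[source] = [(t, b) for t, b in history if now - t <= self.window]
        moved = sum(b for _, b in self.events[source])
        if moved > self.baseline:
            print(f"ALERT: {source} moved {moved / 2**30:.1f} GB in the last hour "
                  f"(latest destination: {destination})")

monitor = TransferMonitor()
monitor.record("asset-store-01", "203.0.113.50", 80 * 2**30)   # an 80 GB burst triggers the alert
```

In a real deployment the alert would of course go to a SIEM or an on-call admin rather than to the console; the point is simply that sustained, out-of-pattern movement is cheap to detect if anyone bothers to look.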

Many, if not most, applications can be ‘locked down’ to some extent – the amount and degree of communication can be controlled to help reduce vulnerabilities. Sometimes this is not directly possible within the application, but it’s certainly possible if the correct environment for the apps is designed and implemented appropriately. For example, a given database engine may not have the level of granular controls to effectively limit interactions for your firm’s use case. If that application (and possibly others of similar function) are run on a group of application servers that are isolated from the rest of the network with a small firewall, the firewall settings can be used to very easily and effectively limit precisely which other devices these servers can reach, what kind of data they may send/receive, the time of day when they can be accessed, etc. etc. Again, most of good security is in the overall concept and design, as even excellent implementation of a poor design will not be effective.

Summary

Applications are what actually give us function in the data world, but they must be carefully installed, monitored and controlled in order to obtain the best security and reliability. We’ve reviewed a number of common scenarios that demonstrate how easily data can be compromised by unintended communication to/from your applications. Applications are vital to every aspect of work in the Data world, but can, and do, ‘leak’ data to many unintended areas – or provide an unintended bridge to sensitive data.

The next section will discuss Data Compartmentalization. This is the area of the Data Security model that is least understood and practiced. It’s a bit like the watertight compartments in a submarine: if one area is flooded, closing the communicating doors allows the rest of the boat to be saved from disaster. A big problem (again, not technical but procedural) is that in many organizations, even where a good initial design segregated operations into ‘compartments’, it doesn’t take very long at all for “cables to be thrown over the fence”, thereby bypassing the very protections that were put in place. Often this is done for expediency, to fix a problem, or because some new app needs to be brought online quickly and taking the time to install things properly, with all the firewall rule changes, is waived. These business practices are where good governance, proper supervision, and continually asking the right questions are vital.

Part 5 of this series is located here.
