Parasam


A Digital Disruptor: An Interview with Michael Fertik

June 27, 2016 · by parasam

During the recent Consumer Goods Forum global summit here in Cape Town, I had the opportunity to briefly chat with Michael about some of the issues confronting the digital disruption of this industry sector. [The original transcript has been edited for clarity and space.]

Michael Fertik founded Reputation.com with the belief that people and businesses have the right to control and protect their online reputation and privacy. A futurist, Michael is credited with pioneering the field of online reputation management (ORM) and lauded as the world’s leading cyberthinker in digital privacy and reputation. Michael was most recently named Entrepreneur of the Year by TechAmerica, an annual award given by the technology industry trade group to an individual they feel embodies the entrepreneurial spirit that made the U.S. technology sector a global leader.

He is a member of the World Economic Forum Agenda Council on the Future of the Internet, a recipient of the World Economic Forum Technology Pioneer 2011 Award and through his leadership, the Forum named Reputation.com a Global Growth Company in 2012.

Fertik is an industry commentator with guest columns in Harvard Business Review, Reuters, Inc.com and Newsweek. Named a LinkedIn Influencer, he regularly blogs on current events as well as developments in entrepreneurship and technology. Fertik frequently appears on national and international television and radio, including the BBC, Good Morning America, Today Show, Dr. Phil, CBS Early Show, CNN, Fox, Bloomberg, and MSNBC. He is the co-author of two books, Wild West 2.0 (2010), and New York Times best seller, The Reputation Economy (2015).

Fertik founded his first Internet company while at Harvard College. He received his JD from Harvard Law School.

Ed: As we move into a hyper-connected world, where consumers are tracked almost constantly, and now passively through our interactions with an IoT-enabled universe: how do we consumers maintain some level of control and privacy over the data we provide to vendors and other data banks?

Michael:  Yes, passive sharing is actually the lion’s share of data gathering today, and will continue in the future. I think the question of privacy can be broadly broken down into two areas. One is privacy against the government and the other is privacy against ‘the other guy’.

One might call this “Big Brother” (governments) and “Little Brother” (commercial or private interests). The question of invasion of privacy by Big Brother is valid, useful and something we should care about in many parts of the world. While I, as an American, don’t worry overly about the US government’s surveillance actions (I believe that the US is out to get ‘Jihadi John’, not you or me), I do believe that many other governments’ interest in their citizens is not as benign.

I think that for much of the world, the panopticon – governments on one side of the one-way mirror, most of us on the other – is something to think and care about. We are under surveillance by Big Brother all the time. The surveillance tools are so good, and digital technology makes so much of our data so easily surveilled by governments, that I think that battle is already lost.

What is done with that data, and how it is used is important: I believe that this access and usage should be regulated by the rule of law, and that only activities that could prove to be extremely adverse to our personal and national interests should be actively monitored and pursued.

When it comes to “Little Brother” I worry a lot. I don’t want my private life, my frailties, my strengths, my interests… surveilled by guys I don’t know. The basic ‘bargain’ of the internet is a Faustian one: they will give you something free to use and in exchange will collect your data without your knowledge or permission for a purpose you can never know. Actually, they will collect your data without your permission and sell it to someone else for a purpose that you can never know!

I think that encryption technologies that help prevent and mitigate those activities are good and I support that. I believe that companies that promise not to do that and actually mean it, that provide real transparency, are welcome and should be supported.

I think this problem is solvable. It’s a problem that begins with technology but is also solvable by technology. I think this issue is more quickly and efficiently solvable by technology than through regulation – which is always behind the curve and slow to react. In the USA privacy is regarded as a benefit, not an absolute right; while in most of Europe it’s a constitutionally guaranteed right, on the same level as dignity. We have elements of privacy in American constitutional law that are recognized, but also massive exceptions – leading to a patchwork of protection in the USA as far as privacy goes. Remember, the constitutional protections for privacy in the USA are directed to the government, not very much towards privacy from other commercial interests or private interests. In this regard I think we have much to learn from other countries.

Interestingly, I think you can rely on incompetence as a relatively effective deterrent against public sector ‘snooping’ to some degree, as so much of government is behind the curve technically. The combination of regulation, bureaucracy, lack of cohesion and a general lack of applied technical knowledge all serve to slow the capability of governments to effectively surveil their populations at scale.

However, in the commercial sector, the opposite is true. The speed, accuracy, reach and skill of private corporations, groups and individuals is awesome. For the last ten years this (individual privacy and awareness/ownership of one’s data) has been my main professional interest… and I am constantly surprised by how people can get screwed in new ways on the internet.

Ed:  Just as in branding – where many consumers pay a premium for clothing that, in addition to being a T-shirt, prominently advertises the manufacturer’s brand name, with no recompense for the consumer – is there any way for digital consumers to ‘own’ and exercise some degree of control over the use of the data they generate just through their interactions? Or are consumers forever relegated to the short end of the stick, giving up their data for free?

Michael:  I have mapped out, as others have, how the consumer can become the ‘verb’ of the sentence instead of what they currently are, the ‘object’ of the sentence. The biggest lie of the internet is that “You” matter… You are the object of the sentence, the butt of the joke. You (or the digital representation of you) are what we (the internet owners/puppeteers) buy and sell. There is nothing about the internet that needs to be this way. This is not a technical or practical requirement of this ecosystem. If we could today ask the grandfathers of the internet how this came to be, they would likely say that one of the areas in which they didn’t succeed was adding an authentication layer on top of the operational layer of the internet. And what I mean here is not what some may assume: providing access control credentials in order to use the network.

Ed:  Isn’t attribution another way of saying this? That the data provided (whether a comment or purchasing / browsing data) is attributable to a certain individual?

Michael:  Perhaps “provenance” is closer to what I mean. As an example, let’s say you buy some coffee online. The fact that you bought coffee; that you’re interested in coffee; the fact that you spend money, with a certain credit card, at a certain date and time; etc. are all things that you, the consumer, should have control over – in terms of knowing which 3rd parties may make use of this data and for what purpose. The consumer should be able to ‘barter’ this valuable information for some type of benefit – and I don’t think that means getting ‘better targeted ads!’ That explanation is a pernicious lie that is put forward by those that have only their own financial gain at heart.

What I am for is “a knowing exchange” between both parties, with at least some form of compensation for both parties in the deal. That is a libertarian principle, of which I am a staunch supporter. Perhaps users can accumulate something like ‘frequent flyer miles’ whereby the accumulated data of their online habits can be exchanged for some product or service of value to the user – as a balance against the certain value of the data that is provided to the data mining firms.

Ed:  Wouldn’t this “knowing exchange” also provide more accuracy in the provided data? As opposed to passively or surreptitiously collected data?

Michael:  Absolutely. With a knowing and willing provider, not only is the data collection process more transparent, but if an anomaly is detected (such as a significant change in consumer behavior), this can be questioned and corrected if the data was in error. A lot of noise is produced in the current one-sided data collection model and much time and expense is required to normalize the information.

Ed:  I’d like to move to a different subject and gain your perspective as one who is intimately connected to this current process of digital disruption. The confluence of AI, robotics, automation, IoT, VR, AR and other technologies that are literally exploding into practical usage has a tendency, as did other disruptive technologies before them, to supplant human workers with non-human processes. Here in Africa (and today we are speaking from Cape Town, South Africa) we have massive unemployment – affecting between 25% and 50% of working-age young people in particular. How do you see this disruption affecting this problem, and can new jobs, new forms of work be created by this sea change?

Michael:  The short answer is No. I think this is a one-way ratchet. I’m not saying that in a hundred years’ time that may change, but in the next 20-50 years, I don’t see it. Many, many current jobs will be replaced by machines, and that will be a fact we must deal with. I think there will be jobs for people that are educated. This makes education much, much more important in the future than it’s even been to date – which is huge enough. I’m not saying that only Ph.D.’s will have work, but to work at all in this disrupted society will require a reasonable level of technical skill.

We are headed towards an irrecoverable loss of unskilled labor jobs. Full stop. For example, we have over a million professional drivers in the USA – virtually all of these jobs are headed for extinction as autonomous vehicles, including taxis and trucks, start replacing human drivers in the next decade. These jobs will never come back.

I do think you have a saving set of graces in the developing world, that may slow down this effect in the short term: the cost of human labor is so low that in many places this will be cheaper than technology for some time; the fact that corruption is often a bigger impediment to job growth than technology; and trade restrictions and unfair practices are also such a huge limiting factor. But none of this will stem the inevitable tide of permanent disruption of the current jobs market.

And this doesn’t just affect the poor and unskilled workers in developing economies: many white collar jobs are at high risk in the USA and Western Europe:  financial analysts, basic lawyers, medical technicians, stock traders, etc.

I’m very bullish on the future in general, but we must be prepared to accommodate these interstitial times, and the very real effects that will result. The good news is that, for the developing world in particular, a person that has even rudimentary computer skills or other machine-interface skills will find work for some time to come – as this truly transformative disruption of so many job markets will not happen overnight.

IoT (Internet of Things): A Short Series of Observations [pt 7]: A Snapshot of an IoT-connected World in 2021

May 19, 2016 · by parasam

What Might a Snapshot of a Fully Integrated IoT World Look Like 5 Years from Now?

As we’ve seen on our short journey through the IoT landscape in these posts, the ecosystem of IoT has been under development for some time. A number of factors are accelerating the deployment, and the reality of a large-scale implementation is now upon us. Since 5 years is a forward-looking time frame that is within reason, both in terms of likely technology availability and deployment capabilities, I’ve chosen that to frame the following set of examples. While the exact scenarios may not play out precisely as envisioned, the general technology will be very close to this visualization.

The Setting

Since IoT will be international in scope, and will be deployed from 5th Avenue in mid-town Manhattan to the dusty farmlands of Namibia, more than one example place setting must be considered for this exercise. In order to convey as accurate and potentially realistic a snapshot as possible, I’m picking three real-world locations for our time-travel discussion.

  • San Francisco, CA – USA.  A dense and congested urban location, very forward thinking in terms of civic and business adoption of IT. With an upscale and sophisticated population, the cutting edge of IoT can be examined against such an environment.
  • Kigali, Rwanda – Africa.  An urban center in an equatorial African nation. With the entire country of Rwanda having a GDP of only 2% of San Francisco’s, it’s a useful comparison of how a relatively modern, urban center in Africa will implement IoT. In relative terms, the local population is literate, skilled and connected [70% national literacy rate; Kigali is reputed to be one of the most IT-centric cities in Africa; and internet connectivity stands at 26% nationally (substantially higher in Kigali)].
  • Tihi, a remote farming village in the Malwa area of the central Indian state of Madhya Pradesh.  This is a small village of about 2,500 people that mostly grows soybeans, wheat, maize and so on. With an average income of $1.12 per year, this is an extremely poor region of central rural India. This little village is however ‘on the map’ due to the installation in 2002 of an ICT kiosk (named e-Choupal, taken from the term “choupal” meaning ‘village square’ or ‘gathering place’) which for the first time allowed internet connectivity to this previously disconnected town. IoT will be implemented here, and it will be instructive to see Tihi 5 years on…

General Assumptions

Crystal ball gazing is always an inexact science, but errors can be reduced by basing the starting point on a reasonable sense of reality, and attempting to err on the side of conservatism and caution in projecting the rollout of nascent technologies – some of which deploy faster than assumed, others much more slowly. Some very respectable consulting firms in 1995 reported that cellphones would remain a fringe device and only expected 1 million cellphones to be in use by the year 2000. In the USA alone, more than 100 million subscribers were online by that year…

I personally was one of the less than 40,000 users in the entire USA in 1984 when cellphones were only a few months old. As I drove on the freeways of Los Angeles talking on a handset (the same size as a landline, connected via coilcord to a box the size of a lunch pail) other drivers would stare and mouth “WTF??” But it aided my productivity enormously, as I sat through massive traffic jams on my 1.5 hr commute each way from home to work. I was able to speak to east coast customers, understand what technical issues would greet me once I arrived at work, etc. I personally couldn’t understand why we didn’t have 100 million subscribers by 1995… this was a transformative technology.

Here are the baseline assumptions from which the following forward-looking scenarios will be developed:

  • There are currently about the same number of deployed IoT devices as people on the planet: 6.8 billion. The number of deployed devices is expected to exceed that of the human population by the end of this year. Approximately 10 billion more devices are expected to be deployed each year over the next 5 years, on average.
  • The overall bandwidth of the world-wide internet will grow at approximately 25% per year over the next 5 years. The current overall traffic is a bit over 1 zettabyte per year [1 zettabyte = 1 million petabytes; 1 petabyte = 1 million gigabytes]. That translates to about 3 zettabytes by 2021. From another perspective, it took 27 years to reach the 1 zettabyte level; in 5 more years the traffic will triple!
  • Broadband data connectivity in general (wired + wireless) is currently available to about 46% of the world’s population, and is increasing by roughly 5% per year. Wireless connectivity is expected to grow even faster, but even a conservative estimate puts about 60% of the world’s population on the internet within 5 years.
  • The cost of both computation and storage is still falling, more or less in line with Moore’s law. Hosted computation and storage is essentially available for free (or close to it) for small users (a few GB of storage, basic cloud computations). This means that a soy farmer in Tihi, once connected to the ‘net, can essentially leverage all the storage and compute resources needed to run their farm at only the cost of connectivity.
  • Advertising (the second most trafficked content on the internet after porn) will keep increasing in volume, cleverness, economic productivity and reach. As much as many may be annoyed by this, the massive infrastructure must be fed… and it’s either ads or subscription services. Take your pick. And with all the new-found time, and profits, from an IoT enabled life, maybe one just has to buy something from Amazon? (Can’t wait to see how soon they can get a drone out to Tihi to deliver seeds…)
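The traffic projection in these assumptions is simple compound growth. A quick sketch (in Python, using the post’s own figures as inputs) shows how roughly 1 zettabyte/year becomes about 3 zettabytes in five years at 25% annual growth:

```python
# Compound-growth projection of global internet traffic, using the
# assumptions above: ~1 zettabyte/year now, growing ~25% per year.
def project_traffic(current_zb, annual_growth, years):
    """Compound the current yearly traffic volume forward by `years` years."""
    return current_zb * (1 + annual_growth) ** years

print(round(project_traffic(1.0, 0.25, 5), 2))  # → 3.05, i.e. roughly triple
```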

The Snapshots

San Francisco  We’ll drop in on the life of Samantha C for a few hours of her day in the spring of 2021 to see how IoT interacts with her life. Sam is a single professional who lives in the Noe Valley district in a flat. She works for a financial firm downtown that specializes in pricing and trading ‘information commodities’ – an outgrowth of online advertising now fueled by the enormous amount of data that IoT and other networks generate.

San Francisco and the Golden Gate Bridge

Sam’s alarm app is programmed to wake her between 4:45 – 5:15AM, based on sleep pattern info received from the wrist band she put on before retiring the night before. (The financial day starts very early, but she’s done by 3PM). As soon as the app plays the waking melody, the flat’s environment is signaled that she is waking. Lighting and temperature are adjusted and the espresso machine is turned on to preheat. A screen in the dressing area displays the weather prediction to aid in clothing selection. After breakfast she simply walks out the front door; the flat environment automatically turns off lights and heat, checks the perimeter and arms the security system. A status signal is sent to her smartphone. As San Francisco has one of the best public transport networks in the nation, only a few blocks’ walk is needed before boarding an electric bus that takes her almost to her office.
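The wake and depart sequences above are classic event-driven home automation. A minimal sketch, in which the device names and scene values are purely illustrative assumptions (not any real home-automation API):

```python
# Hypothetical "morning" and "departure" scenes for Sam's flat.
# Device names and values are invented for illustration.
def on_wake(home):
    """Apply the morning scene when the alarm app signals a wake event."""
    home["lights"] = "morning_dim"
    home["thermostat_c"] = 21
    home["espresso_machine"] = "preheat"
    return home

def on_depart(home):
    """Lock down the flat when the resident walks out the front door."""
    home.update(lights="off", thermostat_c=16, security="armed")
    return home

flat = {"lights": "off", "thermostat_c": 16, "security": "disarmed"}
print(on_wake(dict(flat)))
print(on_depart(dict(flat)))
```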

Traffic, which as late as 2018 was often a nightmare during rush hours, has markedly improved since the implementation in 2019 of a complete ban on private vehicles in the downtown and financial districts. Only autonomous vehicles, taxis/Ubers, small delivery vehicles and city vehicles are allowed. There is no longer any street parking required, so existing streets can carry more traffic. Small ‘smart cars’ quickly ferry people from local BART stations and other public transport terminals in and out of the congestion zone very efficiently. All vehicles operating in the downtown area must carry a TSD (Traffic Signalling Device), an IoT sensor and transmitter package that updates the master traffic system every 5 seconds with vehicle position, speed, etc.
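One way to picture the TSD’s five-second update is as a small serialized telemetry message. The field names and message shape below are assumptions for illustration, not a real city traffic protocol:

```python
# Sketch of a periodic TSD (Traffic Signalling Device) position report.
# All field names and the vehicle ID scheme are hypothetical.
import json
import time

def tsd_report(vehicle_id, lat, lon, speed_kmh):
    """Serialize one update for the master traffic system."""
    return json.dumps({
        "vehicle_id": vehicle_id,
        "lat": lat,
        "lon": lon,
        "speed_kmh": speed_kmh,
        "ts": int(time.time()),  # when this sample was taken
    })

msg = tsd_report("SF-TAXI-0042", 37.7907, -122.4011, 28.5)
print(msg)
```

In a real deployment each device would send such a report every 5 seconds over the city network; here we only show the payload.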

As Samantha enters her office building, her phone acquires the local WiFi signal (but she’s never been out of range, SF now has blanket coverage in the entire city). As her phone logs onto the building network, her entry is noted in her office, and all of her systems are put on standby. The combination of picolocation, enabled through GPS, proximity sensors and WiFi hub triangulation – along with a ‘call and response’ security app on her phone – automatically unlocks the office door as she enters just before 6AM (traders get in before the general office staff). As she enters her area within the office environment the task lighting is adjusted and the IT systems move from standby to authentication mode. Even with the systems described above, a further authentication step of a fingerprint and a voice response to a random question (one of a small number that Sam has preprogrammed into the security algorithm) is required in order to open the trading applications.
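The layered access control Sam passes through can be sketched roughly as below. Every factor (building presence via picolocation, fingerprint, randomized challenge question) is a simplification of the scheme described, and the questions and answers are invented:

```python
# Toy multi-factor check: proximity + fingerprint + random challenge question.
# The question/answer store and factor names are hypothetical.
import random

PREPROGRAMMED = {"first pet?": "otis", "childhood street?": "elm"}

def authenticate(in_building, fingerprint_ok, answer_fn):
    """Require every factor; fail closed if any layer is missing."""
    if not (in_building and fingerprint_ok):
        return False
    question = random.choice(list(PREPROGRAMMED))  # random challenge
    return answer_fn(question) == PREPROGRAMMED[question]

# All factors pass (the lambda "knows" the preprogrammed answers):
print(authenticate(True, True, lambda q: PREPROGRAMMED[q]))  # → True
```

The key design point is that the trading applications open only when every layer succeeds; any single failure denies access.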

San Francisco skyline

The information pricing and trading firm for which Sam works is an economic outgrowth of the massive amount of data that IoT has created over the last 5 years. The firm aggregates raw data, curates and rates it, packages the data into various types and timeframes, etc. Almost all this ‘grunt work’ is performed by AI systems: there are no clerks, engineers, financial analysts or other support staff as would have been required even a few years ago. The bulk of Sam’s work is performed with spoken voice commands to the avatars that are the front end to the AI systems that do the crunching. Her avatars have heuristically learned over time her particular mannerisms, inflections of voice, etc. and can mostly intuit the difference between a statement and question just based on cadence and tonal value of her voice.

This firm is representative of many modern information brokerage service providers: with a staff of only 15 people they trade data based on over 5 billion distinct data sources every day, averaging a trade volume of $10 million per day. The clients range from advertising, utilities, manufacturing, traffic systems, agriculture, logistics and many more. Some of the clients are themselves other ‘info-brokers’ that further repackage the data for their own clients; others are direct consumers of the data. The data from IoT sensors is most often already aggregated to some extent by the time Sam’s firm gains access to it, but some of the data is directly fed to their harvesting networks – which often sit on top of the functional networks for which the IoT systems were initially designed. A whole new economic model has been born in which the costs of implementing large IoT networks are partially funded by the resale of the data to firms like Samantha’s.

Transportation Network

We’ll leave Sam in San Francisco as she walks down Bush Street for lunch, still not quite used to the absence of noise and diesel smoke of delivery trucks, congested traffic and honking taxis. The relative quiet, disturbed only by the white noise emitters of the electric vehicles (only electrics are allowed in the congestion area in SF), allows her to hear people, gulls and wind – a city achieving equilibrium through advanced technology.

Kigali  This relatively modern city in Rwanda might surprise some who think of Rwanda as “The Land of a Thousand Hills,” with primeval forests inhabited by chimpanzees and bush people. For this snapshot, we’ll visit Sebahive D, a senior manager working for the city of Kigali (the capital of Rwanda) in public transport. He has worked for the city his entire professional life, and is enthusiastic about the changes that are occurring as a result of the significant deployment of IoT throughout the city over the last few years. As his name means “Bringer of Good Fortune,” Sebahive is well positioned to help enable an improved transport environment for the Rwandans living in Kigali.

Kigali – this is also Africa…

Even though Kigali is a very modern city by African standards, with a skyline that belies a city of just over a million people in a country that has been ‘reborn’ in many ways since the horrific times of 1994, many challenges remain. One of the largest is common to much of Africa: reliable urban transport. Very few people own private cars (there were only 21,000 cars in the entire country as of 2012, the latest year for which accurate figures were available), so the vast majority of people depend on public transport. The minibus taxi is the most common mode of transport, accounting for over 50% of all public transport vehicles in the country. Historically, they operated in a rather haphazard manner, with no schedules and flexible routes. Typically the taxis would simply drive on routes that had proved over time to offer many passengers, hooting to attract riders and stopping wherever and whenever the driver decided. Roadworthiness, a driving license and other such basics were often optional…

Kigali city center on the hill

We’ll join Sebahive as he prepares his staff for a meeting with Toyota, who has come to Kigali to present information on their new line of “e-Quantum” minibus taxis. These vehicles are gas/electric hybrid powered units, with many of the same features that fully autonomous vehicles currently in use in Japan possess. The infrastructure, roads, IT networks and other basic requirements are insufficient in Kigali (and most of the rest of Africa) to support fully autonomous vehicles at this time. However, a ‘semi-autonomous’ mode has been developed, using sophisticated on-board computers supplemented by an array of inexpensive IoT devices on roads, bus stops, buildings, etc. This “SA” (Semi-Autonomous) mode, as differentiated from a “FA” (Fully-Autonomous) mode, acts a bit like an auto-pilot or a very clever ‘cruise control’. When activated, the vehicle will maintain the speed at which it was travelling when switched on, and will use sensors on the exterior of the minibus as well as data received from roadside sensors to keep the vehicle in its lane and not too close to other vehicles. The driver is still required to steer, and tapping the brake will immediately give full control back to the vehicle operator.

Rather than the oft-hazardous manner of ‘taxi-hailing’ – which basically means stepping out into traffic and waving or whistling – many small IoT sensor/actuators (that are solar powered) are mounted on light poles, bus stop structures, sides of buildings, etc. Pressing the button on the device transmits a taxi request via WiFi/WiMax to the taxi signalling network, which in turn notifies any close taxis of a passenger waiting, and the location is displayed on the dashboard mapping display. A red LED is also illuminated on the transmitter so the passenger waiting knows the request has been sent. When the taxi is close (each taxi is constantly tracked using a combo IoT sensor/transceiver device) the LED turns green to notify the passenger to look for the nearby taxi.
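The hailing device’s LED behaviour amounts to a tiny state machine. The sketch below is illustrative only; in particular the 200 m “nearby” threshold is an assumption, not something specified by the system described:

```python
# Street-side taxi-hailing device: off -> red (request sent) -> green (taxi near).
# The 200 m proximity threshold is a made-up illustrative value.
NEARBY_M = 200

def led_state(requested, taxi_distance_m=None):
    """Return the LED colour shown on the hailing device."""
    if not requested:
        return "off"
    if taxi_distance_m is not None and taxi_distance_m <= NEARBY_M:
        return "green"   # taxi is close: look for it
    return "red"         # request sent, taxi still en route

print(led_state(True, 850))   # → red
print(led_state(True, 120))   # → green
```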

The relatively good IT networks in Kigali make the taxi signalling network possible. One of the fortuitous aspects of local geography (the city is essentially built on four large hills) is that a very good wireless network was easy to establish due to overlooking locations. Although he is encouraged by the possibility of a safer and more modern fleet of taxis, Sebahive is experienced enough to wonder about the many challenges that just living in Africa offers… power outages, the occasional torrential rains, vandalism of public infrastructure, etc. Although there are only about 2,500 minibus taxis in the entire country, it often seems like most of them are in the suburb of Kacyiru, Gasebo district (where the presidential palace and most of the ministries, including Sebahive’s office, are located) at rush hour. An IoT solution that keeps taxis, motorcycles (the single most common conveyance in Rwanda), pedestrians and very old diesel lorries from turning a roadway with lanes into an impenetrable morass of… everything… has yet to be invented!

IT Center in suburban Kigali

Another aspect of technology, assisted by IoT, that is making life simpler, safer and more efficient is cellphone-based payment systems. With almost everyone having a smartphone today, and even the most unschooled having learned how to purchase airtime, electricity and other basic utilities and enter those credits into a phone or smart meter, the need to pay cash for transport services is fast disappearing. Variations on SnapScan, NFC technology, etc. all offer rapid and mobile payment methods in taxis or buses, speeding up transactions and reducing the opportunity for street theft. One of the many things in Sebahive’s brief is the continual push to get more and more retail establishments to offer the sale of transport coupons (just like airtime or electricity) that can be loaded into a user’s cellphone app.

IoT in Africa is a blend of modern technology with age-old customs, with a healthy dose of reality dropped in…

Tihi  Ravi Sham C. is a soybean farmer in one of the poorest areas of rural India, a small village named Tihi in the central Indian state of Madhya Pradesh. However, he’s a sanchalak (lead farmer) with considerable IT experience relative to his environment, having been using a computer and online services since 2004, some 17 years now. Ravi started his involvement with ITC’s “e-Choupal” service back then, and was able for the first time to gain knowledge of world-wide commodity prices rather than be at the mercy of the often unscrupulous middlemen that ran the “mandis” (physical marketplaces) in rural India. These traders would unfairly pay as little as possible to the farmers, who had no knowledge of the final selling price of their crops. The long-standing cultural, caste and other barriers to free trade in India also did not help the situation.

Indian farmers tilling the earth in Tihi

Although the first decade of internet connectivity greatly improved life and profitability for Ravi (and the other farmers in his area), the last few years (from 2019 onwards) have seen a huge jump in productivity. The initial period was one of knowledge enhancement: becoming aware of the supply chain, learning pricing and distribution costs, being able to get good weather forecasting, etc. The actual farming practice, however, wasn’t much changed from a hundred years ago. With electricity in scarce supply, almost no motorized vehicles or farm equipment, light basically supplied by the sun and so on, real advances toward modern farming were not easily feasible.

As India is making a massive investment in IoT, particularly in the manufacturing and supply chain sectors, an updated version of the e-Choupal was delivered to Ravi's village. The original 'gathering place' was basically a computer that communicated over antiquated phone lines at very low speed and mostly supported text transmission. The new "super-Choupal" is a small shipping container housing several computers, a small server array with storage, and a set of powerful WiFi/WiMax hubs. Connectivity is provided by a combination of the BBNL (Bharat Broadband Network Limited) service supported by the Indian national government, which has brought fiber to many rural areas throughout the country, and a 'Super WiFi' service using Microsoft White Spaces technology (essentially identifying and exploiting unused portions of the RF spectrum in particular locations [so-called "white spaces"]) to link the super-Choupal IT container with the edge of the fiber network.

Power for the container is a combination of a large solar array on the top of the container supplemented by fuel cells. As an outgrowth of Intelligent Energy’s deal with India to provide backup power to most of the country’s rural off-grid cell towers (replacing expensive diesel generators), there has been a marked increase in availability of hydrogen as a fuel cell source. The fuel is delivered as a replaceable cartridge, minimizing transport and safety concerns. Since the super-Choupal now serves as a micro datacenter, Ravi spends more of his time running the IT hub, training other farmers and maintaining/expanding the IoT network than farming. Along with the container hub, hundreds of small soil and weather sensors have been deployed to all the surrounding village farms, giving accurate information on where and when to irrigate, etc. In addition, the local boreholes are now monitored for toxic levels of chemical and other pollutants. The power supplies that run the container also provide free electricity for locals to charge their cellphones, etc.

As each farmer harvests their crops, the soybeans, maize, etc. are bagged and then tagged with small passive IoT devices that record the exact type of product, amount, date packed, agreed-upon selling price and tracking information. This becomes the starting point for the supply chain, and can be monitored all the way from local distribution to eventual overseas markets. The farmers can now essentially sell online, and receive electronic payment as soon as the product arrives at the local distribution hub. The lost days of each farmer physically transporting their goods to the "mandi" – and getting ripped off by greedy middlemen – are now in the past. A cooperative collection scheme sends trucks around to each village center, where the IoT-tagged crops are scanned and loaded, with each farmer immediately seeing a receipt for goods on their cellphone. The cost of the trucking is apportioned by weight and distance and billed against the proceeds of the sale of the crops. The distributor can see in near real time where each truck is, and reliably estimate how much grain can be promised per day from each hub.
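The apportionment step can be sketched in a few lines; the farmer names, weights, distances and total cost below are invented for illustration, not drawn from any real e-Choupal system:

```python
# Sketch: splitting a shared trucking bill across farmers in
# proportion to weight x distance, as described above.
# All names and figures are illustrative.

def apportion_cost(total_cost, loads):
    """loads: list of (farmer, weight_kg, distance_km) tuples."""
    shares = {farmer: weight * dist for farmer, weight, dist in loads}
    total = sum(shares.values())
    # Each farmer pays their weight-distance fraction of the total cost.
    return {farmer: round(total_cost * s / total, 2)
            for farmer, s in shares.items()}

bills = apportion_cost(900.0, [
    ("Ravi", 400, 30),   # 400 kg carried 30 km
    ("Anil", 200, 30),
    ("Sita", 300, 40),
])
```

Each bill is then simply deducted from that farmer's crop proceeds at the distribution hub.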

The combination of improved farming techniques, knowledge, fair prices for the crops and rapid payment has more than tripled the incomes of Ravi and his fellow farmers over the past two years. While this may seem like a small drop in the bucket of international wealth (an increase from $1.12 to $3.50 per day is hard to appreciate by first-world standards), the difference on the ground is huge. There are over a billion Ravis in India…

 

This concludes the series on Internet of Things – a continually evolving story. The full series is available as a downloadable PDF here. Queries may be directed to ed@parasam.com

References:

Inside the Tech Revolution That Could Be Rwanda’s Future

Rwanda information

Republic of Rwanda – Ministry of Infrastructure Final Report on Transport Sector

IoT in rural India

Among India’s Rural Poor Farming Community, Technology Is the Great Equalizer

ITC eChoupal Initiative

India’s Soybean Farmers Join the Global Village

a development chronology of tihi

Connecting Rural India : This Is How Global Internet Companies Plan To Disrupt

Bharat Broadband Network

Intelligent Energy’s Fuel Cells

IoT (Internet of Things): A Short Series of Observations [pt 5]: IoT from the Business Point of View

May 19, 2016 · by parasam

IoT from the Business Perspective

While much of the current reporting on IoT describes how life will change for the end user / consumer once IoT matures and many of the features and functions that IoT can enable have deployed, the other side of the coin is equally compelling. The business of IoT can be broken down into roughly three areas: the design, manufacture and sales of the IoT technology; the generalized service providers that will implement and operate this technology for their business partners; and the ‘end user’ firms that will actually use this technology to enhance their business – whether that be in transportation, technology, food, clothing, medicine or a myriad of other sectors.

The manufacture, installation and operation of billions of IoT devices will be expensive in its totality. This will only happen because, overall, a net positive cash flow will result. Business is not charity, and no matter how 'cool' some new technology is perceived to be, no one is going to roll this out for the bragging rights. Even at this nascent stage, so many areas of commerce recognize the potential of this technology as a powerful fulcrum that there is a large appetite for IoT. The driving force for the entire industry is the understanding of how goods and services can be made and delivered with increased efficiency, better value and lower friction.


As the whole notion of IoT matures, several aspects of this technology that must be present initially for IoT to succeed (such as an intelligent network, as discussed in prior posts in this series) will benefit other areas of the general IT ecosystem, even those not directly involved with IoT. Distributed and powerful networks will enhance ‘normal’ computational work, reduce loads on centralized data centers and in general provide a lower latency and improved experience for all users. The concept of increased contextual awareness that IoT technology brings can benefit many current applications and processes.

Even though many of today's sophisticated supply chains are largely automated and otherwise interwoven with IT, many still have significant silos of 'darkness' where either there is no information, or processes must be performed by humans. For example, the logistics of importing furniture from Indonesia is rife with handoffs, instructions and commercial transactions that are verbal or at best handwritten notes. The fax is still 'high technology' in many pieces of this supply chain, and exactly what ends up in any given container, or even which ship it's on, is often a matter of guesswork. IoT tags that are part of the original order (a retailer in Los Angeles wants 12 bookcases) can be encoded locally in Indonesia and delivered to the craftsperson, who attaches one to each completed bookcase. The items can then be tracked during the entire journey, giving everyone involved greater ease and efficiency of operations (local truckers, dockworkers, customs officials, freight security, aggregation and consignment through truck and rail in the US, etc.).

As IoT is still in its infancy, it's interesting to note that the greatest traction is in the logistics and supply chain parts of commerce. The perceived functionality of IoT is so high, and the risk from early-adopter malfunction relatively low, that many supply chain entities are jumping on board, even with some half-baked technology. As mentioned in an earlier article, temperature variation during transport is the single highest risk factor for the delivery of wine internationally. IoT can easily provide end-to-end monitoring of the temperature (and vibration) of every case of wine at an acceptable cost. The identification of suspect cases, and the attribution of liability to the carriers, will improve quality, lower losses and drive reforms in delivery firms seeking to avoid future liability for spoiled wine.

As with many ‘buzzwords’ in the IT industry, it will be incumbent on each company to determine how IoT fits (or does not) within that firm’s product or service offerings. This technology is still in the very early stages of significant implementation, and many regulatory, legal, ethical and commercial aspects of how IoT will interact within the larger existing ecosystems of business, finance and law have yet to be worked out. Early adoption has advantages but also risk and increased costs. Rational evaluation and clear analysis will, as always, be the best way forward.

The next section of this post “The Disruptive Power of IoT” may be found here.

IoT (Internet of Things): A Short Series of Observations [pt 4]: IoT from the Consumer’s Point of View

May 19, 2016 · by parasam

Functional IoT from the Consumer’s Perspective

The single largest difference between this technology and most others that have come before – along with the requisite hype, news coverage, discussion and confusion – is that almost without exception the user won’t have to do anything to participate in this ‘new world’ of IoT. All previous major technical innovations have required either purchasing a new gadget, or making some active, conscious choice to participate in some way. Examples include getting a smartphone, a computer, a digital camera, a CD player, etc. Even if sometimes the user makes an implicit choice to embrace a new technology (such as a digital camera instead of a film camera) there is still an explicit act of bringing this different thing into their lives.

With IoT, almost every interaction with this ecosystem will be passive – i.e. it will not involve conscious choice by the consumer. While the effects and benefits of IoT technology will directly affect the user, and in many cases will be part of other interactions with the technosphere (home automation, autonomous cars, smartphone apps, etc.), the IoT aspect is in the background. The various sensors, actuators and network intelligence that make all this work may never directly be part of a user's awareness. The fabric of one's daily life will simply become more responsive, more intelligent and more contextually aware.

During the adoption phase, while the intelligence and accuracy of sensors, actuators and the software that interprets their data are still maturing, we can expect hiccups. Some of these will be laughable, some frustrating – and some downright dangerous. Good controls, security and common sense will need to prevail to ensure that this new technology is implemented correctly. Real-time location information can be reassuring to a parent whose young children are walking to school – yet if that data is not protected, or is hacked, it can inform others with far darker intentions. We will collectively experience 'double-booked' parking spaces (where smart parking technology sometimes gets it wrong), refrigerators that order vodka instead of milk when product tracking goes haywire, and so on. The challenge is that the consumer will have far less knowledge, or information, about what went wrong and whom to contact to sort it out.

When your weather app is consistently wrong, you can contact the app vendor, or, if the data itself is wrong, the app maker can approach the weather data provider. When a liter of vodka shows up in your shopping delivery instead of a liter of milk, is it the sensor in the fridge, the data transmission, an incorrectly coded tag on the last liter of milk consumed, the back-office software in the data collection center, or the picking algorithm in the online shopping store? The number of possible points of malfunction in the IoT universe is simply enormous, and considerable effort will be required to trace the root cause of each error.

A big part of a successful rollout of IoT will be a very sophisticated fault-analysis layer that extends across the entire ecosystem. This again is a reason why the IoT network itself must be so intelligent for things to work correctly. For data to be believed by upstream analysis, correctly integrated into a knowledge-based ecosystem, and acted upon correctly, a high degree of contextual awareness and a 'range of acceptable data/outcomes' must be built into the overall network. When anomalies show up, the fault-detection layer must intervene. Over time, the heuristic learning capability of many network elements may be able to actually correct for bad data, but at the very least suspect data must be flagged and not blindly acted upon.
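A minimal sketch of such a fault-detection layer, assuming a simple table of acceptable ranges per sensor type (the sensor names and limits here are illustrative, not from any real system):

```python
# Each reading is checked against a contextual "range of acceptable
# data" before it is consumed; out-of-range values are flagged as
# suspect rather than blindly acted upon.

ACCEPTABLE = {
    "fridge_temp_c": (0.0, 8.0),       # illustrative limits
    "soil_moisture_pct": (0.0, 100.0),
}

def triage(sensor, value):
    lo, hi = ACCEPTABLE[sensor]
    if lo <= value <= hi:
        return {"sensor": sensor, "value": value, "status": "ok"}
    # Suspect data is flagged for the fault layer, never acted upon.
    return {"sensor": sensor, "value": value, "status": "suspect"}
```

A real deployment would make the ranges contextual (time of day, device history) rather than static, but the gating principle is the same.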

A big deal was recently made over the next incarnation of Siri (Viv) managing to correctly order and deliver a pizza via voice recognition technology and AI (Artificial Intelligence). This type of interaction will fast become the norm in an IoT-enabled universe. Not all of the perceived functionality will be purely IoT – in many cases the data that IoT can provide will supplement other more traditional data inputs (voice, keyboard, thumbpress, fingerswipes, etc.). The combined data, along with a wealth of contextual knowledge (location, time of day, temperature, etc) and sophisticated algorithms, AI computation and the capability of low-latency ultra-high-speed networks and compute nodes will all work together to manifest the desired outcome of an apparently smart surrounding.

The Parallel Universes of IoT Communities

As the IoT technology rolls out during the next few years, different cultures and countries with different priorities and capabilities will implement these devices and networks in various ways. While the sophistication of a hyperfunctional BMW autonomous car driving you to a shop, finding and parking all without any human intervention may be the experience of a user in Munich, a farmer in rural Asia may use a low complexity app on their smartphone to read the data in some small sensors in local wells to determine that heavy metals have not polluted the water. If in fact the water is not up to standards, the app may then (with a very low bandwidth burst of data) inform the regional network that attention is required, and discover where nearby suitable drinking water is available.

Over time, the data collected by individual communities will aggregate and provide a continual improvement of knowledge of environment, migration of humans and animals, overall health patterns and many other data points that today often must be proactively gathered by human volunteers. It will take time, and continual work on data grooming, but the quantity and quality of profoundly useful data will increase many-fold during the next decade.

One area of critical usefulness where IoT, along with AI and considerable cleverness in data mining and analysis, can potentially save many lives and economic costs is in the detection of and early reaction to medical pandemics. As we have recently seen with bird flu, Ebola and other diseases, rapid transportation systems combined with delayed incubation times can pose a considerable risk to large groups of humanity. Since (in theory) all airline travel, and much train/boat travel, is identifiable and trackable, the transmission vectors of potential carriers could be quickly analyzed if localized data in a particular area began to suggest a medical threat. The early signs of trouble often appear in areas of low data awareness and generation (bird and chicken deaths in rural Asia, for example) – but as IoT improves overall contextual awareness of the environment, such initially unrelated occurrences can be monitored.

The importance and viability of the IoT market in developing economies cannot be overstated: several major firms that specialize in such predictions (Morgan Stanley, Forbes, Gartner, etc.) expect roughly a third of all sales in the IoT sector to come from emerging economies. The 'perfect storm' of relatively low-cost devices, continually increasing wireless connectivity and the proliferation of relatively inexpensive but powerful compute nodes (smartphones, intelligent network nodes, etc.) can easily reach areas that just five years ago were thought impenetrable by modern technology.

The next section of this post “IoT from the Business Point of View” may be found here.

IoT (Internet of Things): A Short Series of Observations [pt 3]: Security & Privacy

May 19, 2016 · by parasam

Past readers of my articles will notice that I have a particular interest in the duality of Security and Privacy within the universe of the Internet. IoT is no exception… In the case of IoT, the bottom line is that for wide-spread acceptance, functionality and a profitable outcome the entire system must be perceived as secure and trustworthy. If data cannot be trusted, if incorrect actions are taken, if the security of individuals and groups is reduced as a result of this technology there will be significant resistance.

Security

A number of security factors have been discussed in the prior posts in relation to sensors, actuators and the infrastructure/network that connects and supports these devices. To summarize, many devices do not, or likely will not, provide sufficient security built into the devices themselves. Once installed, it will typically be unreasonable or impossible to upgrade or alter the security functionality of IoT devices. Issues that plague IoT devices include: lack of a security layer in the design; poor protocols; hard-coded passwords; lack of – or poorly implemented – encryption; and lack of best-practice authentication and access control.


From a larger perspective, the following issues surrounding security must be addressed in order for a robust IoT implementation to succeed:

  • Security as part of the overall design of individual sensors/actuators as well as the complete system.
  • The economic factor in security: how much security for how much cost is appropriate for a particular device? For instance, a temperature sensor used in logistics will have very different requirements than an implanted sensor in a human pacemaker.
  • Usability: just as in current endpoints and applications, a balance between ease of use and appropriate security must be achieved.
  • Adherence to recognized security 'best practices', protocols and standards. Just as IPsec exists for general IP networks, work is under discussion for an "IoTsec" – and if such a standard comes into existence it will be incumbent on manufacturers to accommodate it.
  • How functional security processes (authentication, access control, encryption of data) will be implemented within various IoT schemas and implementations.
  • As vulnerabilities are discovered, or new security practices are deemed necessary to implement, how can these be implemented in a large field of installed devices?
  • How will IoT adapt to the continual change of security regulations, laws and business requirements over time?
  • How will various IoT implementations deal with 'cross-border' issues (where data from IoT sensors is consumed or accessed by entities in different geographic locations, with different laws and practices concerning data security)?
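As one illustration of the authentication point above, a per-device key and an HMAC over each payload would let the network layer verify that a reading really came from the claimed sensor. This is a sketch of one common approach, not a prescribed IoT standard; the device ID, key and payload format are invented:

```python
# Sketch: HMAC-based authentication of raw sensor data.
# Keys would be provisioned at manufacture; values here are illustrative.
import hmac
import hashlib

DEVICE_KEYS = {"temp-001": b"per-device-secret"}

def sign(device_id, payload: bytes) -> str:
    """Sensor side: tag the payload with an HMAC-SHA256 over its key."""
    return hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).hexdigest()

def verify(device_id, payload: bytes, tag: str) -> bool:
    """Network side: constant-time check before the reading is trusted."""
    return hmac.compare_digest(sign(device_id, payload), tag)

msg = b"temp-001|2016-05-19T10:00Z|4.2C"
tag = sign("temp-001", msg)
```

A substituted or tampered reading then fails verification, addressing the "how is this raw data authenticated?" question, at least for devices with enough compute budget to run a hash.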

Privacy

The issue of privacy is particularly challenging in the IoT universe, mainly due to both the ubiquity and the passivity of these devices. Even with mobile apps, which tend to reduce privacy in many ways, the user has some degree of control, since an interface is usually provided where a measure of privacy can be managed. Most IoT devices are passive, in the sense that no direct interaction with humans occurs. But the ubiquity and pervasiveness of the sensors, along with the capability of data aggregation, can provide a huge amount of information that may reduce the user's privacy remarkably.


As an example, let’s examine the use case of a person waking up then ‘driving’ to work (in their autonomous car) with a stop on the way for a coffee:

  • The alarm in their smartphone wakes the user; as it detects sleep patterns through movement and machine learning, it transmits that info to a database, registering among other things the time the user awoke.
  • The NEST thermostat adjusts the environment, as it has learned the user is now awake. That info as well is shared.
  • Various motion and light sensors throughout the home detect the presence and movement of the user, and to some degree transmit that information.
  • The security system is armed as the user leaves the home, indicating a lack of presence.
  • The autonomous car wakes up and a pre-existing program “take me to work, but stop at Starbucks on Main Road for a coffee first” is selected. The user’s location is transmitted to a number of databases, some personalized, some more anonymous (traffic management systems for example) – and the requirement for a parking space near the desired location is sent. Once a suitable parking space is reserved (through the smart parking system) a reservation is placed on the space (usually indicated by a lamp as well as signalling any other vehicle that they cannot park there).
  • The coffee house recognizes the presence of a repeat customer via the geotagging of the user’s cellphone as it acquires the WiFi signal when entering the coffee shop. The user is registered onto the local wireless network, and the user’s normal order is displayed on their cell for confirmation. A single click starts the order and the app signals the user when their coffee and pastry are ready. The payment is automatically deducted at pickup using NFC technology. The payment info is now known by financial networks, again indicating the location of the user and the time.
  • The user signals their vehicle as they leave the coffee shop, the parking space allocation system is notified that the space will be available within 2 minutes, and the user enters the car and proceeds to be driven to work.
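To illustrate why this scenario matters, the sketch below merges those separate database events by user and sorts them by time. Every record and timestamp is invented; the point is that each event is innocuous alone, yet together they form a detailed movement profile:

```python
# Sketch: aggregating per-database IoT events into a user timeline.
# Event tuples are (time, source database, user, description) and are
# entirely illustrative.

events = [
    ("06:30", "alarm_db",    "user42", "woke up"),
    ("06:31", "nest_db",     "user42", "home heating adjusted"),
    ("07:05", "security_db", "user42", "left home, alarm armed"),
    ("07:20", "parking_db",  "user42", "parked near Main Road"),
    ("07:22", "coffee_wifi", "user42", "entered coffee shop"),
    ("07:24", "payment_db",  "user42", "NFC payment at coffee shop"),
]

def profile(user, streams):
    """Merge events for one user across sources, ordered by time."""
    return sorted((t, src, what) for t, src, u, what in streams if u == user)

timeline = profile("user42", events)
```

No single database holds the whole story, but anyone who can join them reconstructs the user's entire morning.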

It is clear that with almost no direct interaction with the surrounding ecosystem many details of the user’s daily life are constantly revealed to a large and distributed number of databases. As the world of IoT increases and matures, very little notification will ever be provided to an individual user about how many databases receive information from a sensor or set of sensors. In a similar manner, instructions to an actuator that is empirically tied to a particular user can reflect data about that user, and again the user has no control over the proliferation of that data.

As time goes on, and new 'back-office' functionality is added to increase the usefulness of IoT data to either the user or the provider, it is likely that additional third-party service providers will acquire access to this data. Many of these will use cloud functionality, with interconnections to other clouds and service providers that are very distant from the user, both in location and in regulatory environment. The diffusion will rapidly reach the point where a user has no realistic idea of who has access to which data produced by the IoT devices in their environment.

For the first time, we collectively must deal with a new paradigm: a pervasive and ubiquitous environment that generates massive data about all our activities over which we essentially have no control. If we thought that the concept of privacy – as we knew it 10 or 20 years ago – was pretty much dead, IoT will make absolutely certain that this idea is dead, buried and forgotten… More than anything else, the birth of substantial IoT will spark a set of conversations about what is an acceptable concept of privacy in the “Internet of Everything” age…

One cannot wish this technology away – it’s coming and nothing will stop it. At some level, the combination of drivers that will keep enabling the IoT ecosystem (desire for an increased ‘feature-set of life’ from users; and increased knowledge and efficiency from product and service providers) will remain much higher than any resistance to the overall technology. However, the widespread adoption, trust and usefulness will be greatly impacted if a wide-spread perception grows that IoT is invasive, reduces the overall sense of privacy, and is thought of as ‘big brother’ in small packages.

The scale of the IoT penetration into our lives is also larger than any previous technology in human history – with the number of connected devices poised to outnumber the total population of the planet by a factor of more than 10:1 within the next seven years. Even those users that believe they are not interacting with the Internet will be passively ‘connected’ every day of their lives in some way. This level of unavoidable interaction with the ‘web’ will shortly become the norm for most of humanity – and affect those in developing economies as well as the most technologically advanced areas. Due to the low cost and high degree of perceived value of the technology, the proliferation of IoT into currently less-advanced populations will likely exceed that of the cellphone.

While it is beyond the scope of this post to discuss the larger issue of privacy in the connected world in detail, it must be recognized that the explosive growth of IoT at present will forever change our notion of privacy in every aspect of our lives. This will have psychological, social, political and economic results that are not fully known, but will be a sea change in humanity’s process.

The next section of this post “IoT from a Consumer’s Point of View” may be found here.

References:

Rethinking Network Security for IoT

Five Challenges of IoT

 

IoT (Internet of Things): A Short Series of Observations [pt 2]: Sensors, Actuators & Infrastructure

May 19, 2016 · by parasam

The Trinity of Functional IoT

As the name implies, the "Things" that comprise an IoT universe must be connected in order for this ecosystem to operate. This networking interconnection is actually the magic that will allow a fully successful implementation of IoT technology. In addition, it's important to realize that this network will often operate bi-directionally, with the "Things" at the edge of the network acting either as input devices (sensors) or output devices (actuators).

Input (Sensors)

The variety, complexity and capability of input sensors in the IoT universe is almost without limit. Almost anything that can be measured in some way will spawn an IoT sensor to communicate that data to something else. In many cases, sensors may be very simple, measuring only a single parameter. In other cases, a combined sensor package may measure many parameters, providing a complete environmental 'picture' as a dataset. For instance, a simple sensor may just measure temperature, with a use case being an embedded sensor in a case of wine before transport. The data is measured once every hour and stored in memory onboard the sensor, then 'read' upon arrival at the retail point to ensure that acceptable maximums or minimums were not exceeded. Thermal extremes are the single largest external loss factor in the transport of wine worldwide, so this is not a trivial matter.
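The wine use case can be sketched as follows: one stored sample per hour, read out and checked against min/max limits when the case arrives. The thresholds and readings are illustrative assumptions, not actual wine-transport specifications:

```python
# Sketch: arrival-time inspection of an hourly temperature log read
# from an embedded wine-case sensor. Limits are illustrative.

MIN_C, MAX_C = 8.0, 18.0

def inspect(log):
    """log: hourly temperature samples read from the sensor on arrival."""
    breaches = [(hour, t) for hour, t in enumerate(log)
                if not MIN_C <= t <= MAX_C]
    return {"accepted": not breaches, "breaches": breaches}

# One hot excursion in hour 3 of the (invented) journey:
report = inspect([12.0, 13.5, 14.0, 21.0, 16.0])
```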


On the other hand, a small package – the size of a pack of cigarettes – attached to a vehicle can measure speed, acceleration, location, distance traveled from waypoints, temperature, humidity, relative light levels (to indicate degree of daylight), etc. If in addition the sensor package is connected to the vehicle computer, a myriad of engine and other component data can be collected. All this data can be either transmitted live, or more likely, stored in a sample manner and then ‘burst-transmitted’ on a regular basis when a good communications link is available.
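The sample-then-burst pattern described here might look like the following sketch, with the communications link simulated by a flag (class and variable names are invented for illustration):

```python
# Sketch: readings are sampled into local storage and flushed in one
# burst whenever a good link is available.

class BurstSensor:
    def __init__(self):
        self.buffer = []

    def sample(self, reading):
        self.buffer.append(reading)   # store locally between link windows

    def flush(self, link_up: bool):
        if not link_up:
            return []                 # keep buffering until the link returns
        sent, self.buffer = self.buffer, []
        return sent                   # one burst transmission

s = BurstSensor()
for r in (61.0, 63.2, 60.8):          # e.g. vehicle speed samples
    s.sample(r)
```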

An IoT sensor has, at a minimum, the following components: actual sensor element, internal processing, data formation, transmission or storage. More complex sensors may contain both storage and data transmission, multiple transmission methodologies, preprocessing and data aggregation, etc. At this time, the push for most vendors is to get sensors manufactured and deployed in the field to gain market share and increase sales in the IoT sector. Long term thought to security, compatibility, data standards, etc. is often not addressed. Since the scale of IoT sensor deployment is anticipated to exceed the physical deployment of any other technology in the history of humanity, new paradigms will evolve to enable this rollout in an effective manner.

While the large-scale deployment of billions of sensors will bring many new benefits to our technological landscape, and undoubtedly improve many real-world issues such as health care, environmental safety, efficiency of resource utilization, traffic management, etc., this huge injection of edge devices will also collectively present one of the greatest security threats ever experienced in the IT landscape. Due to a current lack of standards, the rush to market, and a lack of understanding of even the security model that IoT presents, most sensors do not have security embedded as a fundamental design principle.


There are additional challenges to even the basic functionality, let alone security, of IoT sensors: that of updating, authenticating and validating such devices or the data that they produce. If a million small inexpensive temperature sensors are deployed by a logistics firm, there is no way to individually upgrade these devices should either a significant security flaw be discovered, or if the device itself is found to operate inaccurately. For example, let’s just say that a firmware programming error in such a sensor results in erroneous readings being collected once the sensor has been continuously exposed to an ambient temperature of -25C or below for more than 6 hours. This may not have been considered in a design lab in California, but once the sensors are being used in northern Sweden the issue is discovered. In a normal situation, the vendor would release a firmware update patch, the IT department would roll this out, and all is fixed… not an option in the world of tiny, cheap, non-upgradable IoT devices…
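Since the sensors themselves cannot be patched, one partial remedy is to compensate in the network layer: quarantine any readings taken after the known fault condition (more than 6 continuous hours at or below -25C in the example above) has been triggered. A sketch with invented data:

```python
# Sketch: network-layer quarantine of readings from a sensor with a
# known, unpatchable firmware fault. The -25C / 6-hour figures come
# from the hypothetical bug described above.

def quarantine(hourly_temps, threshold=-25.0, max_hours=6):
    """Return indices of readings taken after the fault condition
    (> max_hours continuous hours at or below threshold) is triggered."""
    run, suspect = 0, []
    for i, t in enumerate(hourly_temps):
        run = run + 1 if t <= threshold else 0
        if run > max_hours:
            suspect.append(i)   # reading produced under the fault condition
    return suspect

# Eight hours at -26C, a warm hour, then cold again:
bad = quarantine([-26] * 8 + [-10, -26])
```

This doesn't fix the sensor, but it keeps known-bad data out of upstream analysis, which is often the best available option for cheap, non-upgradable devices.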

Many (read: most, as of the time of this article) sensors have little or no real security, authentication or data-encryption functionality. If logistics firms are subject to penalties for delivering goods to retailers that have exceeded the prescribed temperature min/max levels, some firm somewhere may be enticed to substitute readings from a set of sensors that were kept in a more appropriate temperature environment – so how is the raw temperature data authenticated? What about sensors attached to a human pacemaker, reporting back biomedical information that is personally identifiable? Is a robust encryption scheme applied (as would be required by US HIPAA regulations)?

There is another issue that will come back to haunt us collectively in a few years: that of vendor obsolescence. Whether a given manufacturer goes out of business, deprecates their support of a particular line of IoT sensors, or leaves the market for another reason, ‘orphaned’ devices will soon become a reality in the IoT universe. While one may think that “Oh well, I’ll just have to replace these sensors with new ones” is the answer, that will not always be an easy answer. What about sensors that are embedded deep within industrial machines, aircraft, motorcars, etc.? These could be very expensive or practically impossible to easily replace, particularly on a large scale. And to further challenge this line of thought, what if a proprietary communications scheme was used by a certain sensor manufacturer that was not escrowed before the firm went out of business? Then we are faced with a very abrupt ‘darkening’ of thousands or even millions of sensor devices.

All of the above variables should be considered before a firm embarks on a large-scale rollout of IoT sensor technology. Not all of the issues have immediate solutions, some of the challenges can be ameliorated in the network layer (to be discussed later in this post), and some can be resolved by making an appropriate choice of vendor or device up front.

Output (Actuators)

Actuators may be stand-alone (i.e. just an output device), or may be combined with an IoT input sensor. An example might be an intelligent light bulb designed for outdoor night lighting – where the sensor detects that the ambient light has fallen to a predetermined (and possibly externally programmable) level and, in addition to reporting this data upstream, directly triggers the actuator (the light bulb itself) to turn on. In many cases an actuator, in addition to acting on data sent to it over an IoT network, will report back with additional data as well, so in some sense contains both a sensor and an actuator. An example, again using a light bulb: the bulb turns on only when specifically instructed by external data, but if the light element fails, the bulb informs the network that the device is no longer capable of producing light – even though it is still receiving data. A robustly designed network would also require light bulb actuators to issue an occasional ‘heartbeat’, so that if the bulb unit fails completely, the network will know this and report the failure.
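The heartbeat idea can be sketched as a small monitor that the network layer might run; the device names and timeout below are illustrative, not from any real product:

```python
class HeartbeatMonitor:
    """Track periodic 'heartbeat' messages from actuators and report
    any device whose heartbeat is overdue as failed."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_seen: dict[str, float] = {}

    def heartbeat(self, device_id: str, now: float) -> None:
        # Record the most recent heartbeat time for this device.
        self.last_seen[device_id] = now

    def failed_devices(self, now: float) -> list[str]:
        # Any device silent for longer than the timeout is presumed dead.
        return [
            dev for dev, t in self.last_seen.items()
            if now - t > self.timeout_s
        ]

monitor = HeartbeatMonitor(timeout_s=60.0)
monitor.heartbeat("bulb-1", now=0.0)
monitor.heartbeat("bulb-2", now=0.0)
monitor.heartbeat("bulb-1", now=55.0)   # bulb-1 keeps reporting

# At t=90s, bulb-2 has been silent for 90s -- past the 60s timeout.
assert monitor.failed_devices(now=90.0) == ["bulb-2"]
```

The design trade-off is bandwidth: a heartbeat every minute from millions of bulbs is itself a traffic source, which is one more reason aggregation in the network layer (discussed below) matters.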


The issue of security was discussed concerning input sensors above, but it also applies to output actuators. In fact, the security and certainty that surrounds an IoT actuator is often more immediately important than that of a sensor. A compromised sensor will result in bad or missing data, which can still be accommodated within the network or computational schema that uses this data. An actuator that has been compromised or ‘hacked’ can directly affect the physical world or a portion of a network, and so can cause immediate harm. Imagine a set of actuators controlling piping valves in a high-pressure gas pipeline installation: if some valves were suddenly closed while others were opened, a ‘hammer’ effect could easily cause a rupture, with potentially disastrous results. It is imperative that at such high-risk points a strong, multilayered set of security protocols is in place.

This issue, along with other reliability issues, will likely delay the deployment of many IoT implementations until adequate testing and use case experience demonstrates that current ‘closed-system’ industrial control networks can be safely replaced with a more open IoT structure. Another area where IoT will require much testing and rigorous analysis will be in vehicles, particularly autonomous cars. The safety of life and property will become highly dependent on the interactions of both sensors and actuators.

Other issues and vulnerabilities that affect input sensors are applicable to actuators as well: updating firmware, vendor obsolescence and a functional set of standards. Just as in the world of sensors, many of the shortcomings of individual actuators must be handled by the network layer in order for the entire system to demonstrate the required degree of robustness.

Network & Infrastructure

While sensors and actuators are the elements of IoT that receive the most attention, and are in fact the devices that form the edge of the IoT ecosystem, the more invisible network and associated infrastructure is absolutely vital for this technology to function. In fact, the overall infrastructure is more important, and carries a greater responsibility for the overall functionality of IoT, than either sensors or actuators. Although the initial demonstrations and implementations of IoT technology currently use traditional IP networks, this must change. The current model of remote users (or machines) connecting to other remote users, data centers or cloud combinations cannot scale to the degree required for a successful large-scale implementation of IoT.


In addition, a functional IoT network/infrastructure must contain elements that are not present in today’s information networks, providing many levels of distributed processing, data aggregation and other functions. Some of the reasons driving these new requirements for the IoT network layer have been discussed above; in general, the infrastructure must compensate for the limitations of both sensors and actuators as they age in place over time. The single largest reason the network layer will be responsible for the bulk of security, upgrading/adaptation, dealing with obsolescence, etc. is that the network is dynamic, and can be continually adjusted and tuned to the ongoing requirements of the sensors, actuators and the data centers/users where the IoT information is processed or consumed.

The reference to ‘infrastructure’ in addition to ‘network’ is for a very good reason: in order for IoT to function well on a long-term basis, substantial ingredients beyond just a simple network of connectivity are required. There are three main drivers of this additional requirement: data reduction & aggregation, security & reliability, and adaptation/support of IoT edge devices that no longer function optimally.

Data Reduction & Aggregation

The amount of data that will be generated and/or consumed by billions of sensors and actuators is gargantuan. According to one of the most recent Cisco VNI forecasts, global internet traffic will exceed 1.3 zettabytes by the end of this year. 1 zettabyte = 1 million petabytes, with 1 petabyte = 1 million gigabytes… to give some idea of the scale of current traffic. And this is with IoT barely beginning to show up on the global data transmission landscape. If we take even a conservative estimate of 10 billion IoT devices added to the global network each year between now and 2020, and assume that on average each edge device transmits/receives only 1 KB/s (one kilobyte per second), the math follows: roughly 30GB per device per year X 10 billion devices = 300 exabytes of new added data per year – at a minimum.
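That back-of-envelope calculation can be checked directly (assuming each device averages roughly one kilobyte per second; the per-device rate is of course illustrative):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600          # ~31.5 million seconds
BYTES_PER_SEC_PER_DEVICE = 1_000            # assume ~1 KB/s per edge device
NEW_DEVICES_PER_YEAR = 10_000_000_000       # 10 billion devices added per year

# Per-device annual volume, in gigabytes (1 GB = 1e9 bytes here).
per_device_gb = BYTES_PER_SEC_PER_DEVICE * SECONDS_PER_YEAR / 1e9

# Fleet-wide annual volume, converted from GB to exabytes (1 EB = 1e9 GB).
total_eb = per_device_gb * NEW_DEVICES_PER_YEAR / 1e9

print(round(per_device_gb, 1))   # ~31.5 GB per device per year
print(round(total_eb))           # ~315 EB of new traffic per year
```

So the "300 exabytes per year" figure in the text holds up, and that is before any retransmissions, protocol overhead, or devices chattier than 1 KB/s.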

While this may not seem like a huge increase (about a 25% annual increase in overall data traffic worldwide), a number of factors make this far more burdensome to current network topologies than may first be apparent. The current global network system supports basically three types of traffic: streaming content (music, videos, etc.) that emanates from a small number of CDNs (Content Distribution Networks) and feeds millions of subscribers; database queries and responses (Google searches, credit card authorizations, financial transactions and the like); and ad hoc bi-directional data moves (business documents, publications, research and discovery, etc.). The first of these (streaming) is inherently unidirectional, and specialized CDNs have been built to accommodate this traffic, with many peering routers moving it off the ‘general highway’ onto dedicated routes so that users experience the low latency they have come to expect. The second type, queries and responses, consists of very small data packets that hit a large purpose-designed data center, which can process the query very quickly and respond, again with a very small data load. The third type, which has the broadest range of data types, is often not required to have near-instantaneous delivery or response; the user is less worried about a few seconds’ delay on the upload of a scientific paper or the download of a commercial contract. A delay of more than 2 seconds after a Google search is submitted, however, is seen as frustrating…

Now, enter the world of IoT sensors and actuators onto this already crowded IT infrastructure. The data detected and transmitted by sensors will very often be time-sensitive. For instance, the position of an autonomous vehicle must be updated every 100 msec or the safety of that vehicle and others around it can be affected. If Amazon succeeds in getting delivery drones licensed, we will have tens of thousands of these critters flying around our heavily congested urban areas – again requiring critically frequent updates of positional data, among other parameters. Latency rapidly becomes the problem, even more than bandwidth… and the internet, in its glorious redundant design, holds the ultimate delivery of the packet as its prime law – not how long delivery takes or how many packets can be delivered. Remember, the initial design of the Internet (basically unchanged for almost 50 years now) was a redundant mesh of connectivity built so that the then-huge bandwidth of 300 bits per second (a teletype machine, basically) could reach its destination even if a nuclear attack wiped out major nodes on the network.

The current design of data center connectivity (even for such giants as Amazon Web Services, Google Compute, Microsoft Azure) is a star network. This has one (or a small cluster) of large datacenters at the center of the ‘cloud’, with all the users attached like spokes on a wheel at the edge. As the number of users grows, the challenge is to keep raising the capacity of the ‘gateways’ into the actual computational/storage center of the cloud. It’s very expensive to duplicate data centers, and doing so brings additional data transmission costs, as all the data centers (of a given vendor) must constantly be synchronized. Essentially, the star model of central reception, processing and then sending data back to the edge fails at the scale and latency required for IoT to succeed.

One possible solution to avoid this congestion at the center of the network is to push some computation to the edge, and reduce the amount of data that is required to be acted upon at the center. This can be accomplished in several ways, but a general model will deal with both data aggregation (whereby data from individual sensors is combined where this is possible) and data reduction (where data flows from individual sensors can be either compressed, ignored in some cases or modified). A few use cases will illustrate these points:

  • Data Aggregation: assume a vendor has embedded small, low cost transpiration sensors in the soil of rows of grape plants in a wine farm. A given plot may have 50 rows each 100 meters long. With sensors embedded every 5 meters, 1,000 sensors will be generating data. Rather than push all that individual data up to a data center (or even to a local server at the wine farm), an intelligent network could aggregate the data and report that, on average, the soil needs or does not need watering. There is a 1000:1 reduction in network traffic up to the center…
  • Data Reduction:  using the same example, if one desired a somewhat more granular sensing of the wine plot, the intelligent network could examine the data from each row, and with a predetermined min/max data range, transmit the data upstream only for those sensors that were out of range. This may effectively reduce the data from 1,000 sensors to perhaps a few dozen.
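The two techniques above can be sketched in a few lines of edge-node logic; the sensor counts, readings and thresholds are invented for illustration:

```python
from statistics import mean

# Simulated soil-moisture readings (percent) from the wine-farm plot:
# 50 rows x 20 sensors per row = 1,000 sensors.
readings = {f"row{r:02d}-s{s:02d}": 32.0 + (r * s) % 7
            for r in range(50) for s in range(20)}

# Data aggregation: one number goes upstream instead of 1,000.
plot_average = mean(readings.values())
needs_water = plot_average < 30.0          # watering threshold is illustrative

# Data reduction: forward only the sensors outside a preset band.
LOW, HIGH = 31.0, 36.0
out_of_range = {sid: v for sid, v in readings.items()
                if not (LOW <= v <= HIGH)}

print(len(readings), "raw readings ->", 1, "aggregate +",
      len(out_of_range), "exceptions forwarded")
```

Either way, the upstream traffic shrinks by orders of magnitude – but the decision logic now lives in the network node, which is exactly the new paradigm discussed next.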

Both of these techniques require both distributed compute and storage capabilities to exist within the network itself. This is a new paradigm for networks, which up to this time have been quite stupid in reality. Other than passive network hubs/combiners, and active switches (which are very limited, although extremely fast, in their analytical capabilities), current networks are just ribbons of glass or copper. With the current ability of putting substantial compute and storage power in a very small package that uses very little power (look at smart watches), small ‘nodes of intelligence’ could be embedded into modern networks and literally change the entire fabric of connectivity as we know it.

Further details on how this type of intelligent network could be designed and implemented will be a subject of a future post, but here it’s enough to demonstrate that some sort of ‘smart fabric’ of connectivity will be required to effectively deploy IoT on the enormous scale that is envisioned.

Security & Reliability

The next area in which the infrastructure/network that interconnects IoT will be critical to its success is the overall security, reliability and trustworthiness of the data that is both received from and transmitted to edge devices: sensors and actuators. Not only do the data from sensors, and the instructions to actuators, need to be accurate and protected; the upstream data centers and other entities to which IoT networks are attached must be protected as well. IoT edge devices, due to their limited capabilities and oft-overlooked security features, can provide easy attack surfaces for the entire network. Typical perimeter defense mechanisms (firewalls, intrusion detection devices) will not work in the IoT universe, for several reasons. Mostly this is because IoT devices are often deployed within a network, not just at the outer perimeter. Also, the types of attacks will be very different from what most intrusion detection systems trigger on now.

As was touched on earlier in this series, most IoT sensors do not have strong security mechanisms built into the devices themselves. In addition, given the issue of vulnerabilities discovered after deployment, it’s somewhere between difficult and impossible to upgrade large numbers of IoT sensors in place. Many sensors are not even designed for bi-directional traffic, so even if a patch were written, the sensor could not receive it, let alone install it. This boils down to the IoT infrastructure/network bearing the brunt of the burden of security for the overall IoT ecosystem.

There are a number of possible solutions that can be implemented in an IoT network environment to enhance security and reliability, one such example is outlined in this paper. Essentially the network must be intelligent enough to compensate for the ‘dumbness’ of the IoT devices, whether sensors or actuators. One of the trickiest bits will be to secure ‘device to device’ communications. As some IoT devices will directly communicate to other nearby IoT devices through a proprietary communications channel and not necessarily the ‘network’, there is the opportunity for unsecured traffic, etc. to exist.

An example could be a home automation system: light sensors may communicate directly to outlets or lamps using the Zigbee protocol and never (directly) communicate over a standard IP network. The issues of out-of-date devices, compromised devices, etc. are not handled (at this time) by the Zigbee protocol, so no protection can be offered. Potentially, such vulnerabilities could turn an access point in the larger network into a threat surface. The network must ‘understand’ what it is connected to, even if it is a small subsystem (instead of single devices), and provide the same degree of supervision and protection to these isolated subsystems as is possible with single devices.

It rapidly becomes apparent that for the network to implement such functions, a high degree of ‘contextual awareness’ and heuristic intelligence is required. With the plethora of devices, types of functions, etc., it won’t be possible to develop, maintain and implement a centrally based ‘rules engine’ to handle this very complex set of tasks. A collective effort will be required from the IoT community to assist in developing and maintaining the knowledge set for the required AI to be ‘baked in’ to the network. While this is, at first, a considerable challenge, the payoff will be huge in many more ways than just IoT devices working better and being more secure: the large-scale development of a truly contextually aware and intelligent network will change the “Internet” forever.

Adaptation & Support

In a similar manner to providing security and reliability, the network must take on the burden of adapting to obsolete devices, broken devices, and monitoring devices for out-of-expected-range behavior. Since the network is dynamic, and (as postulated above) will come to have significant computational capability baked in to the network itself, only the network is positioned to effectively monitor and adapt to the more static (and hugely deployed) set of sensors and actuators.

As in security scenarios, context is vital, and each set of installed sensors/actuators must have a ‘profile’ registered with the network along with the actual device. For instance, a temperature sensor could in theory report back any remotely reasonable reading (let’s say -50C to +60C – that covers Antarctica to Baghdad), but if the sensor is installed in a home refrigerator the range of expected results is far narrower. So as a home appliance vendor turns out units with IoT devices on board that will connect to the network at large, a profile must also be supplied to the network indicating the expected range of behavior. The same is true for actuators: for an outdoor walkway light that is tacitly assumed to turn on once in the evening and off again in the morning, the network should assume something is wrong if signals come through that would have the light flashing on and off every 10 seconds.
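A minimal sketch of such a profile check, using the temperature example from the text (the profile names and ranges are illustrative):

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    """Expected-behavior profile registered with the network
    alongside the physical device."""
    min_value: float
    max_value: float

PROFILES = {
    "outdoor-temp": DeviceProfile(-50.0, 60.0),   # Antarctica to Baghdad
    "fridge-temp":  DeviceProfile(-5.0, 15.0),    # home refrigerator
}

def check_reading(profile_name: str, value: float) -> bool:
    """Return True if the reading is plausible for this device's context."""
    p = PROFILES[profile_name]
    return p.min_value <= value <= p.max_value

# 22C is a plausible outdoor reading but a red flag inside a fridge:
assert check_reading("outdoor-temp", 22.0)
assert not check_reading("fridge-temp", 22.0)
```

The same pattern extends to actuators, where the profile would bound command rates (e.g. maximum on/off cycles per hour) rather than sensor values.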

One of the things that the network will end up doing is ‘deprecating’ some sensors and actuators – whether they report continually erroneous information or have externally been determined to be no longer worthy of listening to… Even so, the challenge will be continual: not every vendor will announce end-of-life for every sensor or actuator; not every scenario can be envisioned ahead of time. The law of unintended consequences of a world that is largely controlled by embedded and unseen interconnected devices will be interesting indeed…

The next section of this post “Security & Privacy” may be found here.

References:

The Zettabyte Era – Trends and Analysis

 

The Patriot Act – upcoming expiry of Section 215 and other unpatriotic rules…

April 18, 2015 · by parasam


On June 1, less than 45 days from now, a number of sections of the Patriot Act expire. The administration and a large section of our national security apparatus, including the Pentagon, Homeland Security, etc. are strongly pushing for extended renewal of these sections without modification.

While this may on the surface seem like something we should do (we need all the security we can get in these times of terrorism, Chinese/North Korean/WhoKnows hacks, etc. – right?) – the reality is significantly different. Many of the Sections of the Patriot Act (including ones that are already in force and do not expire for many years to come) are insidious, give almost unlimited and unprecedented surveillance powers to our government (and by the way any private contractors who the government hires to help them with this task), and are mostly without functional oversight or accountability.

Details of the particular sections up for renewal may be found in this article, and for a humorous and allegorical take on Section 215 (the so-called “Library Records” provision) I highly recommend this John Oliver video. While the full “Patriot Act” is huge, and covers an exhaustingly broad scope of activities that allow the government (meaning its various security agencies, including but not limited to: CIA, FBI, NSA, Joint Military Intelligence Services, etc. etc.) the sections that are of particular interest in terms of digital security pertaining to communications are the following:

  • Section 201, 202 – Ability to intercept communications (phone, e-mail, internet, etc.)
  • Section 206 – roving wiretap (ability to wiretap all locations that a person may have visited or communicated from for up to a year).
  • Section 215 – the so-called “Library Records” provision, basically allowing the government (NSA) to bulk collect communications from virtually everyone and store them for later ‘research’ to see if any terrorist or other activity deemed to be in violation of National Security interests.
  • Section 216 – pen register / trap and trace (the ability to collect metadata and/or actual telephone conversations – metadata does not require a specific warrant, recording content of conversations does).
  • Section 217 – computer communications interception (ability to monitor a user’s web activity, communications, etc.)
  • Section 225 – Immunity from prosecution for compliance with wiretaps or other surveillance activity (essentially protects police departments, private contractors, or anyone else that the government instructs/hires to assist them in surveillance).
  • Section 702 – Surveillance of ‘foreigners’ located abroad (in principle this should restrict surveillance to foreign nationals who are outside the US at the time of such action, but there is much gray area concerning exactly who is a ‘foreigner’ – for instance, is the foreign-born wife of a US citizen a “foreigner,” and if so, are communications between the wife and the husband allowed?)

Why is this Act so problematic?

As with many things in life, the “law of unintended consequences” can often overshadow the original problem. In this case, the original rationale of wanting to get all the info possible about persons or groups that may be planning terrorist activities against the USA was potentially noble, but the unprecedented powers and lack of accountability provided for by the Patriot Act has the potential (and in fact has already been proven) to scuttle many individual freedoms that form the basis for our society.

Without regard to the methods or justification for his actions, the revelations provided by Ed Snowden’s leaks of the current and past practices of the NSA are highly informative. This issue is now public, and cannot be ‘un-known’. What is clearly documented is that the NSA (and, as has since come to light, other entities) have extended surveillance of millions of US citizens living within the domestic US far beyond what even the original authors of the Patriot Act envisioned. [This was revealed in multiple recent TV interviews.]

The next major issue is that of ‘data creep’ – that such data, once collected, almost always gets replicated into other databases, etc. and never really goes away. In theory, to take one of the Sections (702), data retention even for ‘actionable surveillance of foreign nationals’ is limited to one year, and inadvertent collection of surveillance data on US nationals, or even a foreign national that has travelled within the borders of the USA is supposed to be deleted immediately. But absolutely no instruction or methodology is given on how to do this, nor are any controls put in place to ensure compliance, nor are any audit powers given to any other governmental agency.

As we have seen in past discussions regarding data retention and deletion with the big social media firms (Facebook, Google, Twitter, etc.) it’s very difficult to actually delete data permanently. Firstly, in spite of what appears to be an easy step, actually deleting your data from Facebook is incredibly hard to do (what appears to be easy is just the inactivation of your account, permanently deleting data is a whole different exercise). On top of that, all these firms (and the NSA is no different) make backups of all their server data for protection and business continuity. One would have to search and compare every past backup to ensure your data was also deleted from those.

And even the backups have backups… it’s considered an IT ‘best practice’ to back up critical information across different geographical locations in case of disaster. You can see the scope of this problem… and once you understand that the NSA for example will under certain circumstances make chunks of data available to other law enforcement agencies, how does one then ensure compliance across all these agencies that data deletion occurs properly? (Simple answer: it’s realistically impossible).

So What Do We Do About This?

The good news is that most of these issues are not terribly difficult to fix… but the hard part will be changing the mindset of many in our government who feel that they should have the power to do anything they want in total secrecy with no accountability. The “fix” is to basically limit the scope and power of the data collection, provide far greater transparency about both the methods and actual type of data being collected, and have powerful audit and compliance methods in place that have teeth.

The entire process needs to be stood on its end – with the goal being to minimize surveillance to the greatest extent possible, and to retain as little data as possible, with very restrictive rules about retention, sharing, etc. For instance, if data is shared with another agency, it should ‘self-expire’ (there are technical ways to do this) after a certain amount of time, unless it has been determined that this data is now admissible evidence in a criminal trial – in which case the expiry can be revoked by a court order.
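As a sketch of how shared data might ‘self-expire’ at the application layer (a real system would need cryptographic enforcement – for example, destroying the decryption keys at expiry – since a pure policy check can be bypassed; the class and record below are hypothetical):

```python
import time

class ExpiringRecord:
    """Wrap shared data with a hard expiry; reads after the deadline
    fail unless a court-ordered extension has been applied."""

    def __init__(self, data: bytes, ttl_seconds: float):
        self.data = data
        self.expires_at = time.monotonic() + ttl_seconds
        self.extended = False

    def extend_by_court_order(self, extra_seconds: float) -> None:
        # The one sanctioned way to revoke expiry, per the text above.
        self.expires_at += extra_seconds
        self.extended = True

    def read(self) -> bytes:
        if time.monotonic() > self.expires_at:
            raise PermissionError("record expired; access denied")
        return self.data

record = ExpiringRecord(b"call metadata", ttl_seconds=0.05)
assert record.read() == b"call metadata"   # readable within the TTL
time.sleep(0.1)
expired = False
try:
    record.read()                          # now past the TTL
except PermissionError:
    expired = True
assert expired
```

The hard part, as the backup discussion below makes clear, is ensuring every copy of the record – including backups of backups – honors the same expiry.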

fisainfographic3_blog_0

The irony is that even the NSA has admitted that there is no way they can possibly search through all the data they have collected already – in terms of a general search-terms action. They could of course look for a particular person-name or place-name, but if this is all they needed they could have originally only collected surveillance data for those parameters instead of the bulk of American citizens living in the USA…

While they won’t give details, reasonable assumptions can be drawn from public filings and statements, as well as purchase information from storage vendors… and the NSA alone can be assumed to have many hundreds of exabytes of data stored. Given that 1 exabyte = 1,024 petabytes (each of which in turn = 1,024 terabytes), this is an incredible amount of data. To put it another way, it’s hundreds of billions of gigabytes… and remember that your ‘fattest’ iPhone holds 128GB.

It’s a mindset of ‘scoop up all the data we can, while we can, just in case someday we might want to do something with it…’  This is why, if we care about our individual freedom of expression and liberty at all, we must protest against the blind renewal of these deeply flawed laws and regulations such as the Patriot Act.

This discussion is entering the public domain more and more – it’s making the news but it takes action not just talk. Make a noise. Write to your congressional representatives. Let them know this is an urgent issue and that they will be held accountable at election time for their position on this renewal. If the renewal is not granted, then – and typically only then – will the players be forced to sit down and have the honest discussion that should have happened years ago.

Data Security – An Overview for Executive Board members [Part 1: Introduction & Concepts]

March 16, 2015 · by parasam

Introduction

This post is a synthesis of a number of conversations and discussions concerning security practices for the digital aspect of organizations. These dialogs were initially with board members and executive-level personnel, but the focus of this discussion is equally useful to small business owners or anyone that is a stakeholder in an organization that uses data or other digital tools in their business: which today means just about everyone!

The point of view is high level and deliberately as non-technical as possible – not because those at this level lack technical competence, but to reach as broad an audience as possible – and, as will be seen, the biggest issues are not actually that technical in the first place, but rather are issues of strategy, principle, process and the oft-misunderstood ‘features’ of the digital side of any business. The points discussed are equally applicable to firms that exist primarily ‘online’ (with essentially no physical presence to the consumers or participants in their organization) and to organizations that exist mainly as ‘bricks and mortar’ companies (using IT as a ‘back office’ function to support their physical business).

In addition, these principles are relevant to virtually any organization, not just commercial business: educational institutions, NGO’s, government entities, charities, medical practices, research institutions, ecosystem monitoring agencies and so on. There is almost no organization on earth today that doesn’t use ‘data’ in some form. Within the next ten years, the transformation will be almost complete: there won’t be ANY organizations that won’t be based, at their core, on some form of IT. From databases to communication to information sharing to commercial transactions, almost every aspect of any firm will be entrenched in a digital model.

The Concept of Security

The overall concept of security has two major components: Data Integrity and Data Security. Data Integrity is the aspect of ensuring that data is not corrupted by either internal or external factors, and that the data can be trusted. Data Security is the aspect of ensuring that only authorized users have access to view, transmit, delete or perform other operations on the data. Each is critical. A loss of Integrity can be likened to disease in the human body: pathogens that break the integrity of certain cells disrupt and eventually cause injury or death. Security is similar to the protection that skin and other peripheral structures provide – a penetration of these boundaries leads to a compromise of the operation of the body, or in extreme cases major injury or death.

While Data Integrity is mostly enforced by technical means (backup, comparison, hash algorithms, etc.), Data Security is an amalgam of human factors, process controls, strategic concepts, technical measures (everything from encryption to virus protection to intrusion detection) and – the most subtle, but potentially most dangerous to a good security model – the fact that the very features of a digital ecosystem that make it so useful can also make it highly vulnerable. The rest of this discussion will focus on Data Security, and in particular those factors that are not overtly ‘technical’, as there are countless articles on the technical side of Data Security. [A very important aspect of Data Integrity – BCDR (Business Continuity and Disaster Recovery) – will be the topic of an upcoming post; it’s such an important part of any organization’s basic “Digital Foundation”.]
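As a concrete example of the hash-algorithm technique mentioned above, a cryptographic digest acts as a tamper-evident fingerprint of a document: change even one character and the digest no longer matches. The document contents here are, of course, invented:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used as a tamper-evident fingerprint of the data."""
    return hashlib.sha256(data).hexdigest()

original = b"Q3 financial results: revenue up 4.2%"
stored_digest = fingerprint(original)

# Later, integrity is verified by recomputing and comparing the digest.
assert fingerprint(original) == stored_digest

# A single altered character produces a completely different digest.
corrupted = b"Q3 financial results: revenue up 9.2%"
assert fingerprint(corrupted) != stored_digest
```

This detects corruption but does not prevent it – which is why Integrity measures are always paired with the Security controls discussed next.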

The Non-Technical Aspects of Data Security

The very nature of ‘digital data’ is an absolute boon to organizations in so many ways: communication, design, finance, sales, online business – the list is endless. The fantastic toolsets we now have, in terms of high-powered smartphones and tablets coupled with sophisticated software ‘apps’, have put modern business in the hands of almost anyone. All of this rests on the core of any digital system: the concept of binary values. Every piece of e-mail, data, bank account details or digital photograph is ultimately a series of digital values: either a 1 or a 0. This is the difference between the older analog systems (many shades of gray) and digital (black or white, only 2 values). This core concept of digital systems makes copying, transmission, etc. of data very easy and very fast. A particular block of digital data, when copied with no errors, is absolutely indistinguishable from the ‘original’. While in most cases this is what makes the whole digital world work as well as it does, it also creates a built-in security threat. Once a copy is made, if it is appropriated by an unauthorized user it’s as if the original was taken. The many thousands of e-mails that were stolen and then released by the hackers who compromised the Sony Pictures data networks are a classic example of this…

While there are both technical methods and process controls that can mitigate this risk, it’s imperative that business owners / stakeholders understand that the very nature of a digital system has a built-in risk to data ‘leakage’. Only with this knowledge can adequate controls be put in place to prevent data loss or unauthorized use. Another side to digital systems, particularly communication systems (such as e-mail and social media), is how many of the software applications are designed and constructed. Many of these, mostly social media types, have uninhibited data sharing as the ‘normal’ way the software works – with the user having to take extra effort to limit the amount of sharing allowed.

An area that is a particular challenge is the ‘connectedness’ of modern data networks. The new challenge of privacy in the digital ecosystem has prompted (and will continue to) many conversations, from legal to moral/ethical to practical. The “Facebook” paradigm [everything is shared with everybody unless you take efforts to limit such sharing] is really something we haven’t experienced since small towns in past generations where everybody knew everyone’s business…

While social media is fast becoming an important aspect of many firms’ marketing, customer service and PR efforts, these systems must be designed rather carefully in order to isolate the ‘data sharing’ platforms from the internal business and financial systems of a company. It is surprisingly easy for inadvertent ‘connections’ to be made between what should be private business data and the more public social media facet of a business. Even if a direct connection is not made between, say, the internal company e-mail address book and an external Facebook account (a practice that unfortunately I have witnessed on more than one occasion!), the inappropriate positioning of a firm’s Twitter client on the same sub-network as its e-mail servers is a hacker’s dream: it will usually take a clever hacker only minutes to ‘hop the fence’ and gain access to the e-mail server if they are able to compromise the Twitter account.
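
The sub-network mistake described above is easy to check for mechanically. A minimal Python sketch (host names, addresses and the subnet are all invented for illustration) that flags any public-facing client placed inside an internal server subnet:

```python
import ipaddress

# Hypothetical network plan: internal servers vs public-facing social clients
internal_subnet = ipaddress.ip_network("10.1.2.0/24")
internal_servers = {"mail-server": "10.1.2.10"}
social_clients = {"twitter-client": "10.1.2.77"}  # wrongly placed!

for name, addr in social_clients.items():
    if ipaddress.ip_address(addr) in internal_subnet:
        # A compromised social media client on this subnet is one short
        # 'fence hop' away from the mail server.
        print(f"WARNING: {name} ({addr}) sits inside internal subnet {internal_subnet}")
```

A real audit would pull the address plan from the network inventory rather than a hard-coded dictionary, but the principle – public-facing clients must live outside internal subnets – is the same.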

Many of the most important issues surrounding good Data Security are not technical, but rather principles and practices of good security. Since human beings are ultimately significant actors in the chain of entities that handle data, these humans need guidance and effective protocols just as the computers need well-designed software that protects the underlying data. Consider:

  • Access controls – from basic passwords to sophisticated biometric parameters such as fingerprints or retina scans.
  • Network security controls – for instance, requiring at least two network administrators to collectively authorize large data transfers or deletions (which would have prevented most of the Sony Pictures data theft/destruction).
  • Compartmentalization of data – the practice of controlling both storage and access to different parts of a firm’s digital assets in separate digital repositories.
  • The newcomer on the block: cloud computing – essentially just remote data centers that host storage, applications or even entire IT platforms for companies.

All of these are areas governed by very human philosophies and governance issues that are merely implemented with technology.
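
The ‘two administrators’ control mentioned above can be sketched in a few lines of Python (the names and size threshold are invented for illustration – a real system would tie this to its authentication and change-management workflow):

```python
# A minimal sketch of the 'two-person rule' for destructive operations:
# a large deletion proceeds only when two distinct administrators
# have independently approved it.
APPROVAL_THRESHOLD_GB = 50  # illustrative threshold

def authorize_bulk_delete(size_gb, approvals):
    """Allow a bulk deletion only with two distinct admin approvals."""
    if size_gb < APPROVAL_THRESHOLD_GB:
        return True                 # small deletions need no co-signer
    return len(set(approvals)) >= 2  # two *different* admins must approve

print(authorize_bulk_delete(200, {"admin_alice"}))               # False
print(authorize_bulk_delete(200, {"admin_alice", "admin_bob"}))  # True
```

The design point is that no single stolen credential – however senior – can by itself trigger a mass deletion.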

Summary

In Part 1 of this post we have discussed the concepts and basic practices of digital security, and covered an overview of Data Security. The next part will discuss in further detail a few of the most useful parts of the Data Security model, and offer some practical solutions for good governance in these areas.

Part 2 of this series is located here.

The Hack

December 21, 2014 · by parasam

 

It’s a sign of our current connectedness (and of the inability – or lack of desire – of most of us to live under a digital rock: without an hourly fix of Facebook, Twitter, CNN, blogs, etc., we don’t feel we exist) that the title of this post needs no further explanation.

The Sony “hack” must be analyzed apart from the hyperbole of the media, politics and business ‘experts’ to put the various aspects in some form of objectivity – and more importantly to learn the lessons that come with this experience.

I have watched and read endless accounts and reports on the event, from lay commentators, IT professionals, Hollywood business, foreign policy pundits, etc. – yet have not seen a concise analysis of the deeper meaning of this event relative to our current digital ecosystems.

Michael Lynton (CEO, Sony Pictures) stated on CNN’s Fareed Zakaria show today that “the malware inserted into the Sony network was so advanced and sophisticated that 90% of any companies would have been breached in the same manner as Sony Pictures.” Of course he had to take that position – since the interview was public, it carried a strong message to investors in both Sony and the various productions that it hosts.

As reported by Wired, Slate, InfoWorld and others, the hack was almost certainly initiated by the introduction of malware into the Sony network – and not particularly clever code at that. For the rogue code to execute correctly, and to have the permissions to access, transmit and then delete massive amounts of data, required the credentials of a senior network administrator – which supposedly were stolen by the hackers. The exact means by which this theft took place have not been revealed publicly. Reports on the amount of data stolen vary, but range from a few terabytes to as much as a hundred. That is a massive amount of data. Moving this much data requires a very high-bandwidth pipe – at least 1 Gbps, if not higher. Pipes of this size are very expensive, and are normally managed rather strictly to prioritize bandwidth. Depending on the amount of bandwidth available for the theft, the ‘dump’ must have lasted days, if not weeks.
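
A quick back-of-envelope calculation (the figures are illustrative, not Sony’s actual numbers) shows why the ‘dump’ had to run for days or weeks:

```python
# How long does it take to move 100 TB out of a network over a 1 Gbps link?
data_tb = 100       # assumed volume at the high end of reports
link_gbps = 1.0     # assumed link capacity

bits = data_tb * 1e12 * 8            # terabytes -> bits
seconds = bits / (link_gbps * 1e9)   # at full line rate

print(f"{seconds / 86400:.1f} days at 100% of the pipe")      # ~9.3 days
print(f"{seconds / 0.2 / 86400:.0f} days at 20% of the pipe")  # ~46 days
```

Even saturating the entire link around the clock – which would be glaringly obvious on any bandwidth monitor – the transfer takes over a week; using a stealthier fraction of the pipe stretches it to well over a month.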

All this means that a number of rather standard security protocols were either not in place, or not adhered to, at Sony Pictures. The issue here is not Sony – I have no bone to pick with them; in fact they have been a client of mine numerous times in the past, while I was with different firms, and I continue to have connections with people there. This is obviously a traumatic and challenging time for everyone there. It’s the larger implications that bear analysis.

This event can be viewed through a few different lenses: political, technical, philosophical and commercial.

Political – Initially let’s examine the implications of this type of breach, data theft and data destruction without regard to the instigator. In terms of results the “who did it” is not important. Imagine instead of this event (which caused embarrassment, business disruption and economic loss only) an event in which the Light Rail switching system in Los Angeles was targeted. Multiple and simultaneous train wrecks are a highly likely result, with massive human and infrastructure damage certain. In spite of the changes that were supposed to follow on from the horrific crash some years ago in the Valley there, the installation of “collision avoidance systems” on each locomotive still has not taken place. Good intentions in politics often take decades to see fruition…

One can easily look at other bits of infrastructure – electrical grids, petroleum pipelines, air traffic control systems (look at London last week), telecommunications, internet routing and peering – the list of critical infrastructure that is inadequately protected goes on and on.

Senator John McCain said today that of all the meetings in his political life, none took longer and accomplished less than those on cybersecurity. This issue is simply not taken seriously. Many major hacks have occurred in the past – this one is getting serious attention from the media because the target is a media company, and because many high-profile Hollywood people have had a lot to say – which further fuels the news machine.

Now, whether North Korea instigated this attack or performed it on its own – both are possible, and according to the FBI the attribution is now fact – the issue of a nation-state attacking another nation’s interests is most serious, and demands a response from the US government. But regardless of the perpetrator – whether an individual criminal, a group or a state – a much higher priority must be placed on the security of both public and private entities in our connected world.

Technical – The reporting and discussion on the methodology of this breach in particular, and ‘hacks’ in general, has ranged from the patently absurd to the relatively accurate. In this case (and in some other notable breaches of the last few years, such as Target), the introduction of malware into an otherwise protected (at least to some degree) system allowed access and control by an undesirable external party. While implanting the malware may have been a relatively simple part of the overall breach, the design of the entire process – code writing and testing, steering and control of the malware from the external servers, as well as the data collection and retransmission – clearly involved a team of knowledgeable technicians and considerable resources. This was not a hack done by a teenager with a laptop.

On the other hand, the Sony breach was not all that sophisticated. The data made public so far indicates that the basic malware was the Trojan Destover, combined with a commercially available codeset, EldoS RawDisk, which was used for the wiping (destruction) of the Sony data. Both of these programs (and their analogues Shamoon and Jokra) have been detected in other breaches (Saudi Aramco, Aug 2012; South Korea, Mar 2013). See this link for further details. Each major breach of this sort tends to have individual code characteristics, along with the required access credentials, with the final malware deliverable package often compiled shortly before the attack. The evidence disclosed in the Sony breach indicates that stolen senior network admin credentials were part of the package, which allowed full and unfettered access to the network.

It is highly likely that the network was repeatedly probed for some time in advance of the actual breach, both to test the stolen credentials (to see how wide the access was) and to inspect for any tripwires that might have been set if the credentials had become suspect.

The real lessons to take away from the Sony event have much more to do with the structure of the Sony network: its security model, security standards and practices, and data movement monitoring. To be clear, this is not singling out Sony as a particularly bad example – unfortunately this firm’s security practices are rather the norm today. Very, very few commercial networks are adequately protected or designed, even at financial companies, which one would assume have better-than-average security.

Without having to look at internal details, one only has to observe the reported breaches of large retail firms, banks and trading entities, government agencies, credit card clearing houses… the list goes on and on. Add to this that not all breaches are reported, and even fewer are publicly disclosed – estimates suggest only 20-30% of network security breaches are ever reported. The reasons vary: loss of shareholder or customer trust, the appearance of competitive weakness, not knowing what actually deserves reporting or how to classify the attempt or breach, and so on. In many cases data on “cyberattacks” is reported anonymously, or is gathered statistically by firms that handle security monitoring on an outsourced basis. At least these aggregate numbers give a scope to the problem – and it is huge. For example, IBM’s report for one year (April 2012 – April 2013) shows 73,400 attacks on a single large organization, resulting in about 100 actual ‘security incidents’ during the year for that one company. A PWC report estimates that 42 million data security incidents will have occurred during 2014 worldwide.

If this amount of physical robberies were occurring to firms the response, and general awareness, would be far higher. There is something insidious about digital crime that doesn’t attract the level of notice that physical events do. The economic loss worldwide is estimated in the hundreds of billions of dollars – with most of these proceeds ending up in organized crime, rogue nation-states and terrorist groups. Given the relative sophistication of ISIS in terms of social media, video production and other high-tech endeavours, it is highly likely that a portion of their funding comes from cybercrime.

The scope of the Sony attack, with the commensurate data loss, is part of what has made this so newsworthy. This is also the aspect of the breach that could have been mitigated rather easily – and it underscores the design and security-practice faults that plague so many firms today. The following points list some of the weaknesses that contributed to the scale of this breach:

  • A single static set of credentials allowed nearly unlimited access to the entire network.
  • A lack of effective audit controls that would have brought attention to potential use of these credentials by unauthorized users.
  • A lack of multiple-factor authentication that would have made hard-coding of the credentials into the malware ineffective.
  • Insufficient data-move monitoring: the volume of data transmitted out of the Sony network was massive, and had to impact normal working bandwidth. It appears that large amounts of data are allowed to move unmanaged in and out of the network – again, an effective data-move audit / management process would have triggered an alert.
  • Massive data deletion should have required at least two distinct sets of credentials to initiate.
  • A lack of internal firewalls or ‘firestops’ that could have limited the scope of access, damage, theft and destruction.
  • A lack of understanding at the highest management levels of the vulnerability of the firm to this type of breach, with commensurate board expertise and oversight. In short, a lack of governance in this critical area. This is perhaps one of the most important, and least recognized, aspects of genuine corporate security.
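
The data-move monitoring point is the easiest to sketch in code. A toy Python illustration (baseline and multiplier are invented) of an hourly egress check – a real deployment would learn the baseline from traffic history and raise alerts through the operations stack:

```python
# Toy egress monitor: flag any hour whose outbound volume far exceeds
# the normal baseline for this network.
BASELINE_GB_PER_HOUR = 40  # assumed 'normal' outbound volume
ALERT_MULTIPLIER = 5       # alert when egress exceeds 5x baseline

def egress_alert(gb_this_hour):
    """Return True when this hour's outbound traffic warrants an alert."""
    return gb_this_hour > BASELINE_GB_PER_HOUR * ALERT_MULTIPLIER

print(egress_alert(35))   # False - normal traffic
print(egress_alert(900))  # True  - possible bulk exfiltration under way
```

Even a check this crude, running continuously, would have flagged a multi-terabyte dump long before it completed.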

Philosophical – With the huge paradigm shift that the digital universe has brought to the human race, we must collectively assess and understand the impacts of security, privacy and ownership of that ephemeral yet tangible entity called ‘data’. An enormous transformation is under way, in which millions of people (the so-called ‘knowledge workers’) produce, consume, trade and enjoy nothing but data. There is not an industry that is untouched by this new methodology: even very ‘mechanistic’ enterprises such as farming, steel mills, shipping and train transportation are deeply intertwined with IT now. Sectors such as telecoms, entertainment, finance, design, publishing, photography and so on are virtually impossible to operate without complete dependence on digital infrastructures. Medicine, aeronautics, energy generation and prospecting – the lists go on and on.

The overall concept of security has two major components: Data Integrity (ensuring that the data is not corrupted by either internal or external factors, and that the data can be trusted); and Data Security (ensuring that only authorized users have access to view, transmit, delete or perform other operations on the data). Each is critical. Integrity can be likened to disease in the human body: pathogens that break the integrity of certain cells will disrupt and eventually cause injury or death. Security is similar to the protection that skin and other peripheral structures provide: a penetration of these boundaries leads to a compromise of the operation of the body or, in extreme cases, major injury or death.
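
The Data Integrity half can be made concrete with a keyed hash (HMAC): the recipient recomputes a tag over the data, and any corruption or tampering breaks the match. A minimal Python sketch (the key and message are invented for illustration – real keys must never be hard-coded):

```python
import hashlib
import hmac

key = b"shared-secret-key"  # illustrative only
message = b"Transfer ZAR 10,000 to account 12345"

# Sender computes an integrity tag over the data.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Recipient recomputes the tag; any change to the data breaks the match.
tampered = b"Transfer ZAR 90,000 to account 12345"
print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest()))   # True
print(hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest()))  # False
```

Data Security, by contrast, is about who may compute or see the data at all – the ‘skin’ of access controls, firewalls and credentials surrounding this kind of integrity check.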

Just as we have always had a criminal element in societies – those that will take, destroy, manipulate and otherwise seek self-aggrandizement at the expense of others – we now have the same manifestations in the digital ecosystem. Only digi-crime is vastly more efficient, less detectable, often more lucrative, and very difficult to police. The legal system is woefully outdated and outclassed by modern digital pirates – there is almost no international cooperation, very poor understanding by most police departments or judges, etc. etc. The sad truth is that 99% of cyber-criminals will get away with their crimes for as long as they want to. A number of very basic things must change in our collective societies in order to achieve the level of crime reduction that we see in modern cultures in the physical realm.

A particular challenge is mostly educational/ethical: the notion that everything on the internet is “free” and there for the taking, without regard to the intellectual property owner’s claim. Attempting to police this after the fact is doomed to failure (at least 80% of the time) – nothing will change until users are educated about the disruption and effects of their theft of intellectual property. This attitude has almost destroyed the music industry world-wide, and the losses to the film and television industry amount to billions of dollars annually.

Commercial – The economic losses due to data breaches, theft, destruction, etc. are massive, and the perception of the level of this loss is staggeringly low – even among the commercial stakeholders who are directly affected. Firms that spend massive amounts of time, money and design effort to physically protect their enterprises apply only the flimsiest of effective data security measures. Some of this is due to lack of knowledge, some to lack of understanding of the core principles that comprise a real and effective set of procedures for data protection, and a certain amount to laziness: strong security always takes some effort and time during each session with the data.

It is unfortunate, but the pain, publicity and potential legal liability of major breaches such as Sony’s are seemingly necessary to raise awareness that everyone is vulnerable. It is imperative that all commercial entities, from a vegetable seller at a farmer’s market who uses SnapScan all the way to global enterprises such as BP Oil, J.P. Morgan or General Motors, treat cyber crime as a continual, ongoing and very real challenge – and deal with it at the board level with the same importance given to other critical areas of governance: finance, trade secrets, commercial strategy, etc.

Many firms will say, “But we already spend a ridiculous amount on IT, including security!” I am sure that Sony is saying this even today… but it’s not always the amount of the spend, it’s how it’s done. A great deal of cash can be wasted on pretty blinking lights and cool software that in the end is just not effective. Most of the changes required today are in methodology, practice, and actually adhering to already adopted ‘best practices’. I personally have yet to see any business, large or small, that follows the stated security practices set up in that particular firm to the letter.

– Ed Elliott

Past articles on privacy and security may be found at these links:

Comments on SOPA and PIPA

CONTENT PROTECTION – Methods and Practices for protecting audiovisual content

Anonymity, Privacy and Security in the Connected World

Whose Data Is It Anyway?

Privacy, Security and the Virtual World…

Who owns the rain?  A discussion on accountability of what’s in the cloud…

The Perception of Privacy

Privacy: a delusion? a right? an ideal?

Privacy in our connected world… (almost an oxymoron)

NSA (No Secrets Anymore), yet another Snowden treatise, practical realities…

It’s still Snowing… (the thread on Snowden, NSA and lack of privacy continues…)

 

Privacy in our connected world… (almost an oxymoron)

February 4, 2014 · by parasam

Yesterday I wrote on the “ideal” of privacy in our modern world – this morning I read some further information related to this topic (acknowledgement to Robert Cringely as the jumping-off point for this post). If one wants to invest the time, money or both – there are ways to keep your data safe. Really, really safe. The first is the digital equivalent of a Swiss bank account – and yes, it’s also offered by the Swiss – deep inside a mountain bunker – away from the prying eyes of NSA, MI6 and other inquisitive types. Article is here. The other method is a new encryption method that basically offers ‘red herrings’ to would-be password hackers: let them think they have cracked your password, but feed them fake data instead of your real stuff – described here.
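
The second method – often called ‘honey encryption’ – can be caricatured in a few lines: a wrong password doesn’t fail noisily, it yields plausible fake data, so the attacker can’t tell when a guess has succeeded. This deliberately simplified Python sketch (passwords and account data are invented; real honey encryption uses carefully designed encodings, not a lookup table) just shows the behaviour:

```python
import hashlib

REAL_PASSWORD = "correct horse"  # illustrative secret
REAL_SECRET = "acct 4821, balance R1,204,118"
DECOYS = ["acct 1193, balance R842", "acct 7730, balance R15,002"]

def decrypt(password):
    """Right password -> real data; any other password -> a plausible decoy."""
    if password == REAL_PASSWORD:
        return REAL_SECRET
    # Deterministically pick a decoy so repeated wrong guesses look consistent.
    idx = int(hashlib.sha256(password.encode()).hexdigest(), 16) % len(DECOYS)
    return DECOYS[idx]

print(decrypt("correct horse") == REAL_SECRET)  # True
print(decrypt("letmein") in DECOYS)             # True - attacker sees 'success'
```

The attacker running a password-cracking tool gets a believable answer for every guess – and no signal as to which one is real.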

Now either of these ‘methods’ requires the user to take proactive steps, and spend time/money. The unfortunate, but real, truth of today’s digital environment is that you – and only you – must take responsibility for the security and integrity of your data. The more security you desire, the more effort you must expend. No one will do it for you (for free) – and those that offer… well, you get the idea. A long time ago one could live in a village and not lock your front door… not any more.

However, before spiraling down a depressive slope of digital angst – there are some facets to consider:  even though it is highly likely (as in actually positively for certain…) that far more of your private life is exposed and stored in the BigData bunkers of Walmart, Amazon, ClearChannel, Facebook or some government… so are the details of a billion other users… There is anonymity in the sheer volume of data. The important thing to remember is that if you really become a ‘person of interest’ – to some intelligence agency, a particularly zealous advertiser, etc. – almost nothing can stop the accumulation of information about you and your activities. However, most people don’t fit this profile. You’re just a drop of water in a very large digital ocean. Relax and float on the waves…

Understanding helps: nothing is free. Ever. So if you come to know that the ‘price’ you pay for the ‘free’ Google search engine that solves your trivia questions, settles arguments amongst your children, or allows you to complete your next research project in a fraction of the time that would otherwise be necessary is the ‘donation’ of some information about what you search for, when, how often, etc. – then maybe you can see this as fair payment. After all, the data centers that power the Google search ‘engine’ are ginormous, hugely expensive to build and massively expensive to run – they tend to be located close to power generating sources as the amount of electricity they consume is so large. Ultimately someone has to pay the power bill…
