
IoT (Internet of Things): A Short Series of Observations [pt 5]: IoT from the Business Point of View

May 19, 2016 · by parasam

IoT from the Business Perspective

While much of the current reporting on IoT describes how life will change for the end user / consumer once IoT matures and the features and functions it enables have been deployed, the other side of the coin is equally compelling. The business of IoT can be broken down into roughly three areas: the design, manufacture and sale of the IoT technology itself; the generalized service providers that will implement and operate this technology for their business partners; and the ‘end user’ firms that will actually use this technology to enhance their business – whether that be in transportation, technology, food, clothing, medicine or a myriad of other sectors.

The manufacture, installation and operation of billions of IoT devices will, in total, be expensive. The only reason this will happen is that a net positive cash flow will result overall. Business is not charity, and no matter how ‘cool’ some new technology is perceived to be, no one is going to roll this out for the bragging rights. Even at this nascent stage, many different areas of commerce recognize this technology as such a powerful fulcrum that there is a large appetite for IoT. The driving force for the entire industry is the understanding of how goods and services can be made and delivered with increased efficiency, better value and lower friction.


As the whole notion of IoT matures, several aspects of this technology that must be present initially for IoT to succeed (such as an intelligent network, as discussed in prior posts in this series) will benefit other areas of the general IT ecosystem, even those not directly involved with IoT. Distributed and powerful networks will enhance ‘normal’ computational work, reduce loads on centralized data centers and in general provide a lower latency and improved experience for all users. The concept of increased contextual awareness that IoT technology brings can benefit many current applications and processes.

Even though many of today’s sophisticated supply chains have large portions that are automated and otherwise interwoven with a degree of IT, many still have significant silos of ‘darkness’ where either there is no information, or processes must be performed by humans. For example, the logistics of importing furniture from Indonesia is rife with handoffs, instructions, commercial transactions and so on that are verbal or at best handwritten notes. The fax is still ‘high technology’ in many pieces of this supply chain, and exactly what ends up in any given container, and even exactly which ship it’s on, is still often a matter of guesswork. IoT tags that are part of the original order (a retailer in Los Angeles wants 12 bookcases) can be encoded locally in Indonesia and delivered to the craftsperson, who will attach one to each completed bookcase. The items can then be tracked during the entire journey, providing everyone involved (local truckers, dockworkers, customs officials, freight security, aggregation and consignment through truck and rail in the US, etc.) with greater ease and efficiency of operations.
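
As an illustration only, here is a minimal sketch of what such a tag-and-track record might look like in code; the field names, the record_scan helper and the sample values are hypothetical, not drawn from any actual logistics system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ScanEvent:
    location: str      # e.g. "Port of Jakarta"
    handler: str       # e.g. "dockworker", "customs", "rail consignment"
    timestamp: str     # ISO-8601 UTC time of the scan

@dataclass
class ShipmentTag:
    tag_id: str              # encoded locally and attached by the craftsperson
    order_ref: str           # e.g. the Los Angeles retailer's order number
    item: str                # e.g. "bookcase"
    events: List[ScanEvent] = field(default_factory=list)

    def record_scan(self, location: str, handler: str) -> None:
        """Append a scan event at each handoff along the journey."""
        self.events.append(ScanEvent(location, handler,
                                     datetime.now(timezone.utc).isoformat()))

# Hypothetical usage: one tag per bookcase in the order of 12
tag = ShipmentTag(tag_id="ID-000123", order_ref="LA-RETAIL-4471", item="bookcase")
tag.record_scan("Workshop, Jepara, Indonesia", "craftsperson")
tag.record_scan("Port of Jakarta", "customs")
tag.record_scan("Port of Los Angeles", "freight security")
print(len(tag.events), "handoffs recorded for", tag.item)
```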

As IoT is still in its infancy, it’s interesting to note that the greatest traction is in the logistics and supply chain parts of commerce. The perceived functionality of IoT is so high, with relatively low risk from early-adopter malfunction, that many supply chain entities are jumping on board, even with some half-baked technology. As was mentioned in an earlier article, temperature variations during transport are the single highest risk factor for the delivery of wine internationally. IoT can easily provide end-to-end monitoring of the temperature (and vibration) for every case of wine at an acceptable cost. The identification of suspect cases, and the attribution of liability to the carriers, will improve quality, lower losses and lead to reforms and changes where necessary in delivery firms to avoid future liability for spoiled wine.

As with many ‘buzzwords’ in the IT industry, it will be incumbent on each company to determine how IoT fits (or does not) within that firm’s product or service offerings. This technology is still in the very early stages of significant implementation, and many regulatory, legal, ethical and commercial aspects of how IoT will interact within the larger existing ecosystems of business, finance and law have yet to be worked out. Early adoption has advantages but also risk and increased costs. Rational evaluation and clear analysis will, as always, be the best way forward.

The next section of this post “The Disruptive Power of IoT” may be found here.

IoT (Internet of Things): A Short Series of Observations [pt 4]: IoT from the Consumer’s Point of View

May 19, 2016 · by parasam

Functional IoT from the Consumer’s Perspective

The single largest difference between this technology and most others that have come before – along with the requisite hype, news coverage, discussion and confusion – is that almost without exception the user won’t have to do anything to participate in this ‘new world’ of IoT. All previous major technical innovations have required either purchasing a new gadget, or making some active, conscious choice to participate in some way. Examples include getting a smartphone, a computer, a digital camera, a CD player, etc. Even when the user makes an implicit choice to embrace a new technology (such as a digital camera instead of a film camera), there is still an explicit act of bringing this different thing into their lives.

With IoT, almost every interaction with this ecosystem will be passive – i.e. it will not involve conscious choice by the consumer. While the benefits of IoT technology will directly affect the user, and in many cases will be part of other interactions with the technosphere (home automation, autonomous cars, smartphone apps, etc.), the IoT aspect is in the background. The various sensors, actuators and network intelligence that make all this work may never directly be part of a user’s awareness. The fabric of one’s daily life simply will become more responsive, more intelligent and more contextually aware.

During the adoption phase, while the intelligence, interaction and accuracy of sensors, actuators and the software that interprets their data are still maturing, we can expect hiccups. Some of these will be laughable, some frustrating – and some downright dangerous. Good controls, security and common sense will need to prevail to ensure that this new technology is implemented correctly. Real-time location information can be reassuring to a parent whose young children are walking to school – and yet if that data is not protected or is hacked, it can provide information to others who may have far darker intentions in mind. We will collectively experience ‘double-booked’ parking spaces (where smart parking technology gets it wrong sometimes), refrigerators that order vodka instead of milk when the product tracking goes haywire, and so on. The challenge will be that the consumer will have far less knowledge, or information, about what went wrong and whom to contact to sort it out.

When your weather app is consistently wrong, you can contact the app vendor, or if the data itself is wrong, the app maker can approach the weather data provider service. When a liter of vodka shows up in your shopping delivery instead of a liter of milk, is it the sensor in the fridge, the data transmission, an incorrectly coded tag on the last liter of milk consumed, the back-office software in the data collection center, or the picking algorithm in the online shopping store? The number of possible points of malfunction is simply enormous in the IoT universe, and considerable effort will be required to ascertain the root cause of each failure.

A big part of a successful rollout of IoT will be a very sophisticated fault analysis layer that extends across the entire ecosystem. This again is a reason why the network of IoT itself must be so intelligent for things to work correctly. In order for data to be believed by upstream analysis, correctly integrated into a knowledge-based ecosystem, and acted upon appropriately, a high degree of contextual awareness and a ‘range of acceptable data/outcomes’ must be built into the overall network of IoT. When anomalies show up, the fault detection layer must intervene. Over time, the heuristic learning capability of many network elements may be able to actually correct for the bad data, but at a minimum, data that is suspect must be flagged and not blindly acted upon.
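
A minimal sketch of that ‘flag it, don’t blindly act on it’ idea, assuming a hypothetical acceptable range attached to each reading (the device names and thresholds below are illustrative only):

```python
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    value: float
    low: float    # lower bound of the acceptable range for this reading
    high: float   # upper bound of the acceptable range

def triage(reading: Reading) -> str:
    """Pass in-range data onward; anything else is flagged, never acted upon."""
    if reading.low <= reading.value <= reading.high:
        return "act"
    return "flag"   # hand off to the fault-detection layer instead of acting

# Hypothetical example: a home smart meter expected to report 0-15 kW
print(triage(Reading("meter-0042", 3.2, 0.0, 15.0)))    # act
print(triage(Reading("meter-0042", 310.0, 0.0, 15.0)))  # flag (suspect data)
```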

A big deal was recently made over the next incarnation of Siri (Viv) managing to correctly order and deliver a pizza via voice recognition technology and AI (Artificial Intelligence). This type of interaction will fast become the norm in an IoT-enabled universe. Not all of the perceived functionality will be purely IoT – in many cases the data that IoT can provide will supplement other, more traditional data inputs (voice, keyboard, thumb presses, finger swipes, etc.). The combined data, along with a wealth of contextual knowledge (location, time of day, temperature, etc.), sophisticated algorithms, AI computation and the capability of low-latency, ultra-high-speed networks and compute nodes, will all work together to manifest the desired outcome of an apparently smart surrounding.

The Parallel Universes of IoT Communities

As the IoT technology rolls out during the next few years, different cultures and countries with different priorities and capabilities will implement these devices and networks in various ways. While a user in Munich may experience a hyperfunctional autonomous BMW that drives them to a shop, finds a parking space and parks itself without any human intervention, a farmer in rural Asia may use a low-complexity app on their smartphone to read the data from small sensors in local wells to determine that heavy metals have not polluted the water. If in fact the water is not up to standard, the app may then (with a very low-bandwidth burst of data) inform the regional network that attention is required, and discover where suitable drinking water is available nearby.

Over time, the data collected by individual communities will aggregate and provide a continual improvement of knowledge of environment, migration of humans and animals, overall health patterns and many other data points that today often must be proactively gathered by human volunteers. It will take time, and continual work on data grooming, but the quantity and quality of profoundly useful data will increase many-fold during the next decade.

One area of critical usefulness where IoT, along with AI and considerable cleverness in data mining and analysis, can potentially save many lives and economic costs is in the detection of and early reaction to medical pandemics. As we have recently seen with bird flu, Ebola and other diseases, rapid transportation systems along with delayed incubation times can pose a considerable risk for large groups of humanity. Since (in theory) all airline travel, and much train/boat travel, is identifiable and trackable, the transmission vectors of potential carriers could be quickly analyzed if localized data in a particular area began to suggest a medical threat. The early signs of trouble are often in areas of low data awareness and generation (bird and chicken deaths in rural areas of Asia, for example) – but as IoT improves the overall contextual awareness of the environment, initially unrelated occurrences can be monitored.

The importance and viability of the IoT market in developing economies should not be underestimated: several major firms that specialize in such predictions (Morgan Stanley, Forbes, Gartner, etc.) predict that roughly a third of all sales in the IoT sector will come from emerging economies. The ‘perfect storm’ of relatively low-cost devices, the continual increase in wireless connectivity and the proliferation of relatively inexpensive but powerful compute nodes (smartphones, intelligent network nodes, etc.) means this technology can easily be deployed in areas that just five years ago were thought impenetrable by modern technology.

The next section of this post “IoT from the Business Point of View” may be found here.

IoT (Internet of Things): A Short Series of Observations [pt 3]: Security & Privacy

May 19, 2016 · by parasam

Past readers of my articles will notice that I have a particular interest in the duality of Security and Privacy within the universe of the Internet. IoT is no exception… In the case of IoT, the bottom line is that for widespread acceptance, functionality and a profitable outcome, the entire system must be perceived as secure and trustworthy. If data cannot be trusted, if incorrect actions are taken, or if the security of individuals and groups is reduced as a result of this technology, there will be significant resistance.

Security

A number of security factors have been discussed in the prior posts in relation to sensors, actuators and the infrastructure/network that connects and supports these devices. To summarize, many devices do not, or likely will not, provide sufficient security built into the devices themselves. Once installed, it will typically be impractical or impossible to upgrade or alter the security functionality of IoT devices. Some of the issues that plague IoT devices are: lack of a security layer in the design; poor protocols; hard-coded passwords; lack of – or poorly implemented – encryption; lack of best-practice authentication and access control, etc.


From a larger perspective, the following issues surrounding security must be addressed in order for a robust IoT implementation to succeed:

  • Security as part of the overall design of individual sensors/actuators as well as the complete system.
  • The economic factor in security: how much security for how much cost is appropriate for a particular device? For instance, a temperature sensor used in logistics will have very different requirements than an implanted sensor in a human pacemaker.
  • Usability: just as in current endpoints and applications, a balance between ease of use and appropriate security must be achieved.
  • Adherence to recognized security ‘best practices’, protocols and standards. Just as IPsec exists for general IP networks, work is under discussion for an “IoTsec” equivalent – and if such a standard comes into existence it will be incumbent on manufacturers to accommodate it.
  • How functional security processes (authentication, access control, encryption of data) will be implemented within various IoT schemas and implementations.
  • As vulnerabilities are discovered, or new security practices are deemed necessary to implement, how can these be implemented in a large field of installed devices?
  • How will IoT adapt to the continual change of security regulations, laws and business requirements over time?
  • How will various IoT implementations deal with ‘cross-border’ issues (where data from IoT sensors is consumed or accessed by entities that are in different geographic locations, with different laws and practices concerning data security)?

Privacy

The issue of privacy is particularly challenging in the IoT universe, mainly due to both the ubiquity and the passivity of these devices. Even with mobile apps, which often tend to reduce privacy in many ways, the user has some degree of control, as an interface is usually provided where a measure of privacy control can be exercised. Most IoT devices are passive, in the sense that no direct interaction with humans occurs. But the ubiquity and pervasiveness of the sensors, along with the capability of data aggregation, can provide a huge amount of information that may reduce the user’s privacy remarkably.


As an example, let’s examine the use case of a person waking up then ‘driving’ to work (in their autonomous car) with a stop on the way for a coffee:

  • The alarm in their smartphone wakes the user; as it detects sleep patterns through movement and machine learning, it transmits that info to a database, registering among other things the time the user awoke.
  • The NEST thermostat adjusts the environment, as it has learned the user is now awake. That info as well is shared.
  • Various motion and light sensors throughout the home detect the presence and movement of the user, and to some degree transmit that information.
  • The security system is armed as the user leaves the home, indicating a lack of presence.
  • The autonomous car wakes up and a pre-existing program “take me to work, but stop at Starbucks on Main Road for a coffee first” is selected. The user’s location is transmitted to a number of databases, some personalized, some more anonymous (traffic management systems for example) – and the requirement for a parking space near the desired location is sent. Once a suitable parking space is reserved (through the smart parking system) a reservation is placed on the space (usually indicated by a lamp as well as signalling any other vehicle that they cannot park there).
  • The coffee house recognizes the presence of a repeat customer via the geotagging of the user’s cellphone as it acquires the WiFi signal when entering the coffee shop. The user is registered onto the local wireless network, and the user’s normal order is displayed on their cell for confirmation. A single click starts the order and the app signals the user when their coffee and pastry are ready. The payment is automatically deducted at pickup using NFC technology. The payment info is now known by financial networks, again indicating the location of the user and the time.
  • The user signals their vehicle as they leave the coffee shop, the parking space allocation system is notified that the space will be available within 2 minutes, and the user enters the car and proceeds to be driven to work.

It is clear that with almost no direct interaction with the surrounding ecosystem many details of the user’s daily life are constantly revealed to a large and distributed number of databases. As the world of IoT increases and matures, very little notification will ever be provided to an individual user about how many databases receive information from a sensor or set of sensors. In a similar manner, instructions to an actuator that is empirically tied to a particular user can reflect data about that user, and again the user has no control over the proliferation of that data.

As time goes on, and new ‘back-office’ functionality is added to increase the usefulness of IoT data to either the user or the provider, it is most likely that additional third-party service providers will acquire access to this data. Many of these will use cloud functionality, with interconnections to other clouds and service providers that are very distant, both in location and in regulatory environment, from the user. The level of diffusion will rapidly approach complete ambiguity, in the sense that a user will have no real idea of who has access to the data that IoT devices within their environment provide.

For the first time, we collectively must deal with a new paradigm: a pervasive and ubiquitous environment that generates massive data about all our activities over which we essentially have no control. If we thought that the concept of privacy – as we knew it 10 or 20 years ago – was pretty much dead, IoT will make absolutely certain that this idea is dead, buried and forgotten… More than anything else, the birth of substantial IoT will spark a set of conversations about what is an acceptable concept of privacy in the “Internet of Everything” age…

One cannot wish this technology away – it’s coming and nothing will stop it. At some level, the combination of drivers that will keep enabling the IoT ecosystem (the desire for an increased ‘feature-set of life’ from users, and increased knowledge and efficiency from product and service providers) will remain much stronger than any resistance to the overall technology. However, widespread adoption, trust and usefulness will be greatly impacted if a perception grows that IoT is invasive, reduces the overall sense of privacy, and amounts to ‘big brother’ in small packages.

The scale of the IoT penetration into our lives is also larger than any previous technology in human history – with the number of connected devices poised to outnumber the total population of the planet by a factor of more than 10:1 within the next seven years. Even those users that believe they are not interacting with the Internet will be passively ‘connected’ every day of their lives in some way. This level of unavoidable interaction with the ‘web’ will shortly become the norm for most of humanity – and affect those in developing economies as well as the most technologically advanced areas. Due to the low cost and high degree of perceived value of the technology, the proliferation of IoT into currently less-advanced populations will likely exceed that of the cellphone.

While it is beyond the scope of this post to discuss the larger issue of privacy in the connected world in detail, it must be recognized that the explosive growth of IoT at present will forever change our notion of privacy in every aspect of our lives. This will have psychological, social, political and economic results that are not fully known, but will be a sea change in humanity’s process.

The next section of this post “IoT from a Consumer’s Point of View” may be found here.

References:

Rethinking Network Security for IoT

Five Challenges of IoT

IoT (Internet of Things): A Short Series of Observations [pt 2]: Sensors, Actuators & Infrastructure

May 19, 2016 · by parasam

The Trinity of Functional IoT

As the name implies, the “Things” that comprise an IoT universe must be connected in order for this ecosystem to operate. This networking interconnection is actually the magic that will allow a fully successful implementation of the IoT technology. In addition, it’s important to realize that this network will often perform in a bi-directional manner, with the “Things” at the edge of the network acting either as input devices (sensors) or output devices (actuators).

Input (Sensors)

The variety, complexity and capability of input sensors in the IoT universe is almost without limit. Almost anything that can be measured in some way will spawn an IoT sensor to communicate that data to something else. In many cases, sensors may be very simple, measuring only a single parameter. In other cases, a combined sensor package may measure many parameters, providing a complete environmental ‘picture’ as a dataset. For instance, a simple sensor may just measure temperature, and a use case might be an embedded sensor in a case of wine before transport. The data is measured once every hour and stored in memory onboard the sensor, then ‘read’ upon arrival at the retail point to ensure that maximums or minimums of acceptability were not exceeded. Thermal extremes are the single largest external loss factor in the transport of wine worldwide, so this is not a trivial matter.
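
A tiny sketch of the wine-case example above: hourly readings stored in the sensor’s onboard memory are read back at the retail point and checked against acceptable limits. The limits, values and helper name are assumptions for illustration, not actual industry thresholds:

```python
# Check a sensor's stored hourly log against acceptable temperature limits.
ACCEPT_MIN_C = 10.0   # assumed lower bound for wine in transit
ACCEPT_MAX_C = 25.0   # assumed upper bound

def check_shipment(hourly_temps_c: list[float]) -> bool:
    """Return True if every stored hourly reading stayed within limits."""
    return all(ACCEPT_MIN_C <= t <= ACCEPT_MAX_C for t in hourly_temps_c)

# Readings dumped from the sensor's onboard memory on arrival (hypothetical)
log = [18.5, 19.0, 21.2, 26.4, 22.0]
print("within limits" if check_shipment(log) else "limits exceeded in transit")
```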


On the other hand, a small package – the size of a pack of cigarettes – attached to a vehicle can measure speed, acceleration, location, distance traveled from waypoints, temperature, humidity, relative light levels (to indicate degree of daylight), etc. If in addition the sensor package is connected to the vehicle computer, a myriad of engine and other component data can be collected. All this data can either be transmitted live or, more likely, stored as periodic samples and then ‘burst-transmitted’ on a regular basis when a good communications link is available.

An IoT sensor has, at a minimum, the following components: the actual sensing element, internal processing, data formation, and transmission or storage. More complex sensors may contain both storage and data transmission, multiple transmission methodologies, preprocessing and data aggregation, etc. At this time, the push for most vendors is to get sensors manufactured and deployed in the field to gain market share and increase sales in the IoT sector. Long-term consideration of security, compatibility, data standards, etc. is often not addressed. Since the scale of IoT sensor deployment is anticipated to exceed the physical deployment of any other technology in the history of humanity, new paradigms will evolve to enable this rollout in an effective manner.
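
To make that minimal component list concrete, here is a schematic sketch of a sensor pipeline (sensing element, internal processing, data formation, and transmission or storage). The class and method names are invented for illustration:

```python
import json, random, time

class MinimalSensor:
    def __init__(self, sensor_id: str):
        self.sensor_id = sensor_id
        self.buffer = []                       # onboard storage

    def sense(self) -> float:
        """Stand-in for the physical sensing element."""
        return 20.0 + random.uniform(-2, 2)    # e.g. a temperature in deg C

    def process(self, raw: float) -> float:
        """Internal processing, e.g. rounding / calibration."""
        return round(raw, 1)

    def form_record(self, value: float) -> str:
        """Data formation: package the value with an id and timestamp."""
        return json.dumps({"id": self.sensor_id, "t": time.time(), "value": value})

    def store_or_transmit(self, record: str, link_up: bool) -> None:
        """Transmit when a link is available; otherwise buffer onboard."""
        if link_up:
            print("transmit:", record)         # stand-in for a radio/network send
        else:
            self.buffer.append(record)

s = MinimalSensor("temp-042")
s.store_or_transmit(s.form_record(s.process(s.sense())), link_up=False)
print(len(s.buffer), "record(s) buffered awaiting a link")
```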

While the large-scale deployment of billions of sensors will bring many new benefits to our technological landscape, and undoubtedly improve many real-world issues such as health care, environmental safety, efficiency of resource utilization, traffic management, etc., this huge injection of edge devices will also collectively pose one of the greatest security threats ever experienced in the IT landscape. Due to the current lack of standards, the rush to market, and a limited understanding of even the security model that IoT presents, most sensors do not have security embedded as a fundamental design principle.


There are additional challenges to even the basic functionality, let alone the security, of IoT sensors: those of updating, authenticating and validating such devices or the data that they produce. If a million small, inexpensive temperature sensors are deployed by a logistics firm, there is no way to individually upgrade these devices should either a significant security flaw be discovered, or the device itself be found to operate inaccurately. For example, let’s say that a firmware programming error in such a sensor results in erroneous readings being collected once the sensor has been continuously exposed to an ambient temperature of -25C or below for more than 6 hours. This may not have been considered in a design lab in California, but once the sensors are being used in northern Sweden the issue is discovered. In a normal situation, the vendor would release a firmware update patch, the IT department would roll this out, and all is fixed… not an option in the world of tiny, cheap, non-upgradable IoT devices…

Many (read: most, as of the time of this article) sensors have little or no real security, authentication or data-encryption functionality. If logistics firms are subject to penalties for delivering goods to retailers that have exceeded the prescribed temperature min/max levels, some firm somewhere may be enticed to substitute readings from a set of sensors that were kept in a more appropriate temperature environment – how is this raw temperature data authenticated? What about sensors that are attached to a human pacemaker, reporting back biomedical information that is personally identifiable? Is a robust encryption scheme applied (as would be required by US HIPAA regulations)?

There is another issue that will come back to haunt us collectively in a few years: that of vendor obsolescence. Whether a given manufacturer goes out of business, deprecates their support of a particular line of IoT sensors, or leaves the market for another reason, ‘orphaned’ devices will soon become a reality in the IoT universe. While one may think that “Oh well, I’ll just have to replace these sensors with new ones” is the answer, that will not always be an easy answer. What about sensors that are embedded deep within industrial machines, aircraft, motorcars, etc.? These could be very expensive or practically impossible to easily replace, particularly on a large scale. And to further challenge this line of thought, what if a proprietary communications scheme was used by a certain sensor manufacturer that was not escrowed before the firm went out of business? Then we are faced with a very abrupt ‘darkening’ of thousands or even millions of sensor devices.

All of the above variables should be considered before a firm embarks on a large-scale rollout of IoT sensor technology. Not all of the issues have immediate solutions, some of the challenges can be ameliorated in the network layer (to be discussed later in this post), and some can be resolved by making an appropriate choice of vendor or device up front.

Output (Actuators)

Actuators may be stand-alone (i.e. just an output device), or may be combined with an IoT input sensor. An example might be an intelligent light bulb designed for night lighting outdoors – where the sensor detects that the ambient light has fallen to a predetermined level (that may be externally programmable), and in addition to reporting this data upstream also directly triggers the actuator (the light bulb itself) to turn on. In many cases an actuator, in addition to acting on data sent to it over an IoT network, will report back with additional data as well, so in some sense may contain both a sensor as well as an actuator. An example, again using a light bulb: the light bulb turns on only when specifically instructed by external data, but if the light element fails, the bulb will inform the network that this device is no longer capable of producing light – even though it’s receiving data. A robustly designed network would also require the use of light bulb actuators that issue an occasional ‘heartbeat’ so if the bulb unit fails completely, the network will know this and report the failure.
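
A small sketch of the ‘heartbeat’ idea above: the network records the last heartbeat seen from each bulb and reports any unit that has gone silent. The interval, tolerance and device names are assumptions, not part of any real product:

```python
import time

HEARTBEAT_INTERVAL_S = 60          # bulbs are assumed to ping once a minute
MISSED_BEATS_ALLOWED = 3           # tolerate a few lost messages

last_seen: dict[str, float] = {}   # device id -> time of last heartbeat

def on_heartbeat(device_id: str) -> None:
    last_seen[device_id] = time.time()

def failed_devices(now: float) -> list[str]:
    """Devices silent for longer than the allowed number of missed beats."""
    cutoff = HEARTBEAT_INTERVAL_S * MISSED_BEATS_ALLOWED
    return [d for d, t in last_seen.items() if now - t > cutoff]

on_heartbeat("porch-light-01")
on_heartbeat("walkway-light-02")
# ... later, the network sweeps for silent units and reports them upstream
print("report as failed:", failed_devices(time.time() + 10 * 60))
```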


The issue of security was discussed concerning input sensors above, but it also applies to output actuators. In fact, the security and certainty that surround an IoT actuator are often more immediately important than for a sensor. A compromised sensor will result in bad or missing data, which can still be accommodated within the network or computational schema that uses this data. An actuator that has been compromised or ‘hacked’ can directly affect either the physical world or a portion of a network, so it can cause immediate harm. Imagine a set of actuators that control piping valves in a high-pressure gas pipeline installation: if some valves were suddenly closed while others were opened, a ‘hammer’ effect could easily cause a rupture, with potentially disastrous results. It is imperative that at high-risk points a strong and multilayered set of security protocols is in place.

This issue, along with other reliability issues, will likely delay the deployment of many IoT implementations until adequate testing and use case experience demonstrates that current ‘closed-system’ industrial control networks can be safely replaced with a more open IoT structure. Another area where IoT will require much testing and rigorous analysis will be in vehicles, particularly autonomous cars. The safety of life and property will become highly dependent on the interactions of both sensors and actuators.

Other issues and vulnerabilities that affect input sensors are applicable to actuators as well: updating firmware, vendor obsolescence and a functional set of standards. Just as in the world of sensors, many of the shortcomings of individual actuators must be handled by the network layer in order for the entire system to demonstrate the required degree of robustness.

Network & Infrastructure

While sensors and actuators are the elements of IoT that receive the most attention, and are in fact the devices that form the edge of the IoT ecosystem, the more invisible network and associated infrastructure is absolutely vital for this technology to function. In fact, the overall infrastructure is more important, and carries a greater responsibility for the overall functionality of IoT, than either sensors or actuators. Although the initial demonstrations and implementations of IoT technology currently use traditional IP networks, this must change. The current model of remote users (or machines) connecting to other remote users, data centers or cloud combinations cannot scale to the degree required for a large-scale, successful implementation of IoT.


In addition, a functional IoT network/infrastructure must contain elements that are not present in today’s information networks, and provide many levels of distributed processing, data aggregation and other functions. Some of the reasons that drive these new requirements for the IoT network layer have been discussed above; in general, the infrastructure must make up for the shortcomings and limitations of both sensors and actuators as they age in place over time. The single largest reason that the network layer will be responsible for the bulk of security, upgrading/adaptation, dealing with obsolescence, etc. is that the network is dynamic and can be continually adjusted and tuned to the ongoing requirements of the sensors, actuators and the data centers/users where the IoT information is processed or consumed.

The reference to ‘infrastructure’ in addition to ‘network’ is for a very good reason: in order for IoT to function well on a long-term basis, substantial ingredients beyond just a simple network of connectivity are required. There are three main drivers of this additional requirement: data reduction & aggregation, security & reliability, and adaptation/support of IoT edge devices that no longer function optimally.

Data Reduction & Aggregation

The amount of data that will be generated and/or consumed by billions of sensors and actuators is gargantuan. According to one of the most recent Cisco VNI forecasts, global internet traffic will exceed 1.3 zettabytes by the end of this year. 1 zettabyte = 1 million petabytes, and 1 petabyte = 1 million gigabytes… which gives some idea of the scale of current traffic. And this is with IoT barely beginning to show up on the global data transmission landscape. If we take even a conservative estimate of 10 billion IoT devices added to the global network each year between now and 2020, and we assume that on average each edge device transmits/receives only 1 kB/s (one kilobyte per second), the math follows: roughly 30GB per device per year X 10 billion devices = about 300 exabytes of newly added data per year – at a minimum.
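
The arithmetic behind that estimate, as a quick check (the per-device rate is the conservative assumption above, not a measured figure):

```python
# Back-of-envelope check of the figures above.
SECONDS_PER_YEAR = 365 * 24 * 3600          # ~31.5 million seconds
BYTES_PER_SECOND = 1_000                    # assumed 1 kB/s per edge device
DEVICES_ADDED_PER_YEAR = 10_000_000_000     # 10 billion new devices per year

per_device_gb = BYTES_PER_SECOND * SECONDS_PER_YEAR / 1e9
total_eb = per_device_gb * DEVICES_ADDED_PER_YEAR / 1e9   # 1 EB = 1e9 GB

print(f"per device: ~{per_device_gb:.0f} GB/year")        # ~32 GB/year
print(f"all new devices: ~{total_eb:.0f} EB/year")        # ~315 EB/year
print(f"vs 1.3 ZB of current traffic: ~{total_eb/1300:.0%} increase")  # ~24%
```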

While this may not seem like a huge increase (about a 25% annual increase in overall data traffic worldwide), there are a number of factors that make this much more burdensome to current network topologies than may first be apparent. The current global network system supports basically three types of traffic: streaming content (music, videos, etc.) that emanates from a small number of CDNs (Content Distribution Networks) and feeds millions of subscribers; database queries and responses (Google searches, credit card authorizations, financial transactions and the like); and ad hoc bi-directional data moves (business documents, publications, research and discovery, etc.). The first of these (streaming) is inherently unidirectional, and specialized CDNs have been built to accommodate this traffic, with many peering routers moving it off the ‘general highway’ onto dedicated routes so that users experience the low latency they have come to expect. The second type, queries and responses, consists of typically very small data packets that hit a large, purpose-designed data center which can process the query very quickly and respond, again with a very small data load. The third type, which has the broadest range of data types, is often not required to have a near-instantaneous delivery or response; the user is less worried about a few seconds of delay on the upload of a scientific paper or the download of a commercial contract, whereas a delay of more than 2 seconds after a Google search is submitted is seen as frustrating…

Now, enter the world of IoT sensors and actuators onto this already crowded IT infrastructure. The type of data that is detected and transmitted by sensors will very often be time-sensitive. For instance, the position of an autonomous vehicle must be updated every 100 ms or the safety of that vehicle and others around it can be affected. If Amazon succeeds in getting delivery drones licensed, we will have tens of thousands of these critters flying around our heavily congested urban areas – again requiring critically frequent updates of positional data, among other parameters. Latency rapidly becomes the problem even more than bandwidth… and the internet, in its glorious redundant design, holds the ultimate delivery of the packet as the prime law, not how long it takes or how many packets can ultimately be delivered. Remember, the initial design of the Internet (which is basically unchanged after almost 50 years) was a redundant mesh of connectivity to allow the ‘huge’ bandwidth of 300 bits per second (basically a teletype machine) to reach its destination even in the face of a nuclear attack wiping out some major nodes on the network.

The current design of data center connectivity (even such monsters such as Amazon Web Services, Google Compute, Microsoft Azure) is a star network. This has one (or a small cluster) of large datacenters in the center of the ‘cloud’, with all the users attached like spokes on a wheel at the edge. As the number of users grows, the challenge is to keep raising the capacity of the ‘gateways’ into the actual computational/storage center of the cloud. It’s very expensive to duplicate data centers, and doing so brings additional data transmission costs as all the data centers (of a given vendor) must constantly be synchronized. Essentially, the star model of central reception, processing and then sending data back to edge fails at the scale and required latency for IoT to succeed.

One possible solution to avoid this congestion at the center of the network is to push some computation to the edge, and reduce the amount of data that is required to be acted upon at the center. This can be accomplished in several ways, but a general model will deal with both data aggregation (whereby data from individual sensors is combined where this is possible) and data reduction (where data flows from individual sensors can be compressed, ignored in some cases, or modified). A few use cases will illustrate these points, and a short code sketch of both techniques follows the list:

  • Data Aggregation: assume a vendor has embedded small, low-cost transpiration sensors in the soil of rows of grape plants on a wine farm. A given plot may have 50 rows, each 100 meters long. With sensors embedded every 5 meters, 1,000 sensors will be generating data. Rather than push all that individual data up to a data center (or even to a local server at the wine farm), an intelligent network could aggregate the data and report that, on average, the soil needs or does not need watering. There is a 1000:1 reduction in network traffic up to the center…
  • Data Reduction: using the same example, if one desired a somewhat more granular sensing of the wine plot, the intelligent network could examine the data from each row and, with a predetermined min/max data range, transmit the data upstream only for those sensors that were out of range. This may effectively reduce the data from 1,000 sensors to perhaps a few dozen.
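
Here is the promised sketch of both techniques from the vineyard example, with assumed moisture values and thresholds purely for illustration:

```python
# Aggregate 1,000 soil-moisture readings into one average, or forward only the
# out-of-range readings. Values and thresholds are illustrative assumptions.
import random

readings = {f"row{r:02d}-pos{p:02d}": random.uniform(0.20, 0.45)
            for r in range(50) for p in range(20)}      # 1,000 sensors

# Aggregation: one summary value goes upstream instead of 1,000 readings
average_moisture = sum(readings.values()) / len(readings)

# Reduction: forward only the sensors outside the acceptable band
MIN_OK, MAX_OK = 0.25, 0.40                             # assumed moisture band
out_of_range = {k: v for k, v in readings.items() if not MIN_OK <= v <= MAX_OK}

print(f"aggregated: 1 value (avg {average_moisture:.2f}) instead of {len(readings)}")
print(f"reduced: {len(out_of_range)} out-of-range readings forwarded upstream")
```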

Both of these techniques require distributed compute and storage capabilities within the network itself. This is a new paradigm for networks, which up to this time have been quite stupid in reality. Other than passive network hubs/combiners and active switches (which are very limited, although extremely fast, in their analytical capabilities), current networks are just ribbons of glass or copper. With the current ability to put substantial compute and storage power in a very small package that uses very little power (look at smart watches), small ‘nodes of intelligence’ could be embedded into modern networks and literally change the entire fabric of connectivity as we know it.

Further details on how this type of intelligent network could be designed and implemented will be a subject of a future post, but here it’s enough to demonstrate that some sort of ‘smart fabric’ of connectivity will be required to effectively deploy IoT on the enormous scale that is envisioned.

Security & Reliability

The next area in which the infrastructure/network that interconnects IoT will be critical to its success is the overall security, reliability and trustworthiness of the data that is both received from and transmitted to edge devices: sensors and actuators. Not only does the data from sensors, and the instructions to actuators, need to be accurate and protected; the upstream data centers and other entities to which IoT networks are attached must be protected as well. IoT edge devices, due to their limited capabilities and oft-overlooked security features, can provide easy attack surfaces for the entire network. Typical perimeter defense mechanisms (firewalls, intrusion detection devices) will not work, for several reasons, in the IoT universe. Mostly this is because IoT devices are often deployed within a network, not just at the outer perimeter. Also, the types of attacks will be very different from what most IDS trigger on now.

As was touched on earlier in this series, most IoT sensors do not have strong security mechanisms built in to the devices themselves. In addition, with the issues of vulnerabilities discovered after deployment, it’s somewhere between difficult and impossible to upgrade large numbers of IoT sensors in place. Many times the sensors are not even designed for bi-directional traffic, so even if a patch was designed, and the sensor somehow could install it, the patch could not be received by the sensor. This boils down to the IoT infrastructure/network bearing the brunt of the burden of security for the overall IoT ecosystem.

There are a number of possible solutions that can be implemented in an IoT network environment to enhance security and reliability; one such example is outlined in this paper. Essentially the network must be intelligent enough to compensate for the ‘dumbness’ of the IoT devices, whether sensors or actuators. One of the trickiest bits will be to secure ‘device to device’ communications. As some IoT devices will communicate directly with other nearby IoT devices through a proprietary communications channel, and not necessarily over the ‘network’, there is the opportunity for unsecured traffic to exist.

An example could be a home automation system: light sensors may communicate directly with outlets or lamps using the Zigbee protocol and never (directly) communicate over a standard IP network. The issues of out-of-date devices, compromised devices, etc. are not handled (at this time) by the Zigbee protocol, so no protection can be offered. Potentially, such vulnerabilities could turn this subsystem into an access point into the larger network – a new threat surface. The network must ‘understand’ what it is connected to, even if it is a small subsystem (instead of single devices), and provide the same degree of supervision and protection to these isolated subsystems as is possible with single devices.

It rapidly becomes apparent that for the network to implement such functions, a high degree of ‘contextual awareness’ and heuristic intelligence is required. With the plethora of devices, types of functions, etc. it won’t be possible to develop, maintain and implement a centrally based ‘rules engine’ to handle this very complex set of tasks. A collective effort will be required from the IoT community to assist in developing and maintaining the knowledge set for the required AI to be ‘baked in’ to the network. While this is, at first, a considerable challenge, the payoff will be huge in many more ways than just IoT devices working better and being more secure: the large-scale development of a truly contextually aware and intelligent network will change the “Internet” forever.

Adaptation & Support

In a similar manner to providing security and reliability, the network must take on the burden of adapting to obsolete devices and broken devices, and of monitoring devices for out-of-expected-range behavior. Since the network is dynamic, and (as postulated above) will come to have significant computational capability baked in, only the network is positioned to effectively monitor and adapt to the more static (and hugely deployed) set of sensors and actuators.

As in security scenarios, context is vital, and each set of installed sensors/actuators must have a ‘profile’ registered with the network along with the actual device. For instance, a temperature sensor could in theory report back a reading of anything remotely reasonable (let’s say -50C to +60C – that covers Antarctica to Baghdad), but if the sensor is installed in a home refrigerator the range of expected results would be far narrower. So as a home appliance vendor turns out units that have IoT devices on board that will connect to the network at large, a profile must also be supplied to the network to indicate the expected range of behavior. The same is true for actuators: for an outdoor walkway light that is expected to turn on once in the evening and off again in the morning, the network should assume something is wrong if signals come through that would have the light flashing on and off every 10 seconds.
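
A sketch of that ‘profile registered alongside the device’ idea, in which the network checks readings against the device’s expected range and eventually deprecates units that repeatedly fall outside it. The profile contents and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    expected_min: float          # e.g. 0 deg C for a fridge temperature sensor
    expected_max: float          # e.g. 8 deg C
    max_violations: int = 5      # deprecate after this many bad reports
    violations: int = 0
    deprecated: bool = False

registry: dict[str, DeviceProfile] = {
    "fridge-temp-07": DeviceProfile(expected_min=0.0, expected_max=8.0),
}

def check_reading(device_id: str, value: float) -> str:
    profile = registry[device_id]
    if profile.deprecated:
        return "ignored (deprecated device)"
    if profile.expected_min <= value <= profile.expected_max:
        return "accepted"
    profile.violations += 1
    if profile.violations >= profile.max_violations:
        profile.deprecated = True              # stop listening to this unit
    return "flagged"

for reading in [4.0, 3.8, 55.0, 57.0, 58.0, 60.0, 61.0, 4.1]:
    print(reading, "->", check_reading("fridge-temp-07", reading))
```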

One of the things that the network will end up doing is ‘deprecating’ some sensors and actuators – whether they report continually erroneous information or have externally been determined to be no longer worthy of listening to… Even so, the challenge will be continual: not every vendor will announce end-of-life for every sensor or actuator; not every scenario can be envisioned ahead of time. The law of unintended consequences of a world that is largely controlled by embedded and unseen interconnected devices will be interesting indeed…

The next section of this post “Security & Privacy” may be found here.

References:

The Zettabyte Era – Trends and Analysis

IoT (Internet of Things): A Short Series of Observations [pt 1]: Introduction & Definition

May 19, 2016 · by parasam

Intro

As one of the latest buzzwords of things technical permeates our collective consciousness to a greater degree, it’s useful to better understand this technology by observing and discussing the various facets of IoT. Like many nascent technologies, IoT has been around for some time (depending on who you ask, and what your definition is, the term IoT showed up around 1999), but the real explosion of both the technology and large-scale awareness has come over the last five years. Like the term ‘cloud’, the meaning is often diffuse and inexact: one must define the use and application to better understand how this technology can provide value.

As the technology of IoT is maturing and beginning to be rolled out in larger and larger scale deployments, the impact of IoT will be felt by all of us, whether or not we directly think we are ‘using’ IoT. Understanding the strengths and weaknesses of IoT across different aspects will be critical to understanding the effects and usefulness (or potential threats) posed by this technology. In this short series of posts, I’ll be examining IoT features across the following areas:  1) basic definition & scope; 2) the Trinity of functional IoT: sensors/actuators/infrastructure; 3) security & privacy; 4) the consumer pov [bottom up view]; 5) the business pov [top down]; 6) the disruption that IoT will cause in both personal & business process; and 7) what an IoT-enabled world will look like (realistically) in 2021 (5 years on).


Definition

The term “Internet of Things” can potentially encompass a vast array of objects and technologies. Essentially it means a collection of non-human entities that are connected to one or more networks and communicate with other non-human entities. This is to differentiate IoT from the ‘normal’ Internet, where humans connect either to each other or to information repositories (aka Google) to send or receive information, make purchases, perform tasks, etc. The range of activities and objects that can be encompassed by “IoT” is huge, and some may argue that certain activities fall outside their definition of IoT. This has been a common issue with the term “cloud” – and I don’t see this confusion going away anytime soon. One must clarify how the term applies in a given discussion or risk misunderstanding.

Probably the biggest distinction of where the ‘edge’ of the IoT universe is, in relation to other information network activities, is one of scope, scale and functionality. Even then the definition is not absolute. I’ll give a few examples:

A small sensor that measures temperature and humidity that is capable of connecting to the Internet and transmitting that data is a classical example of an IoT device. It is usually physically small, relatively simple in both design and function, and can potentially exist in a large scale.

An Internet router – a large switch that directs traffic over the Internet, but also communicates with other such switches and uploads data for later analysis – is usually not thought of as part of the IoT universe, even though it is not human and does connect to other non-human entities over a network. These devices are usually (and I would argue correctly) defined as part of the overall infrastructure that supports IoT, but not as IoT devices themselves. However… IoT can’t exist without them, so they can’t be ignored, even in a discussion of IoT.

Now let’s take the example of a current high-end vehicle. At a more macro level, the entire car can be seen as an IoT device, communicating to other vehicles, mapping algorithms, security applications, traffic monitoring applications, maintenance and support applications, etc. At a localized micro level, the ‘vehicle’ is an entire hub with its own internal network, with many IoT devices embedded within the vehicle itself (GPS sensor, speed sensor, temperature, tire pressure, accelerometers, oil pressure, voice communications, data display, ambient light sensors, fuel delivery sensors, etc etc etc.) So it’s partially a point of view…

The other thing to keep in mind is that often we tend to think of IoT devices as “Input Devices”, or sensors. Equally at home in the IoT universe however are “Output Devices”, or actuators. They can be very simple, such as a light switch (that is actuated by either a local sensor of ambient light, or a remote command from a mobile device, etc.) Actuators can be somewhat more complicated, such as the set of solenoids, motor controls, etc. that comprise an IoT-connected washing machine (which among other activities may use a weight sensor to determine the actual amount of soiled clothes in order to use exactly the amount of water and detergent that is appropriate; predict the amount of electricity that will be used for a wash cycle, measure incoming water pressure and temperature and accommodate that in its process, etc.) At a macro level, an autonomous vehicle could be thought of as both an ‘actuator’ and a ‘sensor’ within a large network of traffic – again the point of view often determines the definition.

Scope

The potential range and pervasiveness of IoT devices is almost beyond imagination. Depending on your news source, the number of IoT devices estimated to be actively deployed by 2020 ranges between 25 and 50 billion. What happens by 2030 – only 14 years from now? Given that most pundits were horribly wrong back in 1995 about how many cellphones would be actively deployed by 2010 (the same ~15-year predictive window) – most observing that maybe 1 million cellphones would be active by that time, whereas the actual number turned out to be almost a billion – it’s not unlikely that a trillion IoT devices will be deployed by 2030. That’s a very large number… and it has some serious implications that will be discussed in later articles on this topic. For instance, just how do you update a trillion devices? The very fabric of connectivity will change in the face of this number of devices that all want to talk to something.

The number of cellphones is already set to exceed the population of our planet within a year (there are currently 6.88 billion cellphones, and 7.01 billion humans – as of April 2016). With IoT devices set to outnumber all existing Internet devices by a factor of more than 1,000, an entirely new paradigm will come into existence to support this level of connectivity. Other issues surrounding such a massive scope will need to be addressed: power (even if individual devices use very little power, a trillion of them – at current power consumption levels – will be unsupportable); errors and outdated devices that must be accommodated at a truly Herculean scale; the sheer volume of data created, which will have to be managed differently than it is today; etc.
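
To give a rough sense of the power point above, using an assumed average draw (the 0.5 W figure is an illustrative guess at today’s always-on connected devices, not a number from this post), the totals add up quickly:

```python
# Back-of-envelope: why a trillion devices at today's power draw is a problem.
DEVICES = 1_000_000_000_000      # one trillion deployed devices (figure from the post)
AVG_WATTS_PER_DEVICE = 0.5       # assumed average continuous draw per device

total_gw = DEVICES * AVG_WATTS_PER_DEVICE / 1e9      # continuous load in GW
annual_twh = total_gw * 8760 / 1000                  # GW x hours/year -> TWh

print(f"continuous load: ~{total_gw:,.0f} GW")       # ~500 GW
print(f"annual energy: ~{annual_twh:,.0f} TWh/year") # ~4,380 TWh/year
```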


The next section of this post “The Trinity of functional IoT: Sensors, Actuators & Infrastructure” may be found here.

References:

The Internet of Things – An Overview

Number of Internet Users / Devices

Science – a different perspective on scientists and what they measure…

November 4, 2015 · by parasam

Scientists are humans too…

If you think that most scientists are boring old men with failed wardrobes and hair-challenged noggins… and speak an alternative language that disengages most other humans… then you’re only partially correct… The world of scientists is now populated by an incredible array of fascinating people – many of whom are crazy-smart but also share humor and a variety of interests. I’ll introduce you to a few of them here – so the next time you hear of the latest ‘scientific discovery’ you might have a different mental picture of ‘scientist’.

Dr. Zeray Alemseged

Paleoanthropologist and chair and senior curator of Anthropology at the California Academy of Sciences

Zeresenay Alemseged is an Ethiopian paleoanthropologist who studies the origins of humanity in the Ethiopian desert, focusing on the emergence of childhood and tool use. His most exciting find was the 3.3-million-year-old bones of Selam, a 3-year-old girl from the species Australopithecus afarensis.

He speaks five languages.

Dr. Quyen Nguyen

Doctor and professor of surgery and director of the Facial Nerve Clinic at the University of California, San Diego.

Quyen Nguyen is developing ways to guide surgeons during tumor removal surgery by using fluorescent compounds to make tumor cells — and just tumor cells — glow during surgery, which helps surgeons perform successful operations and get more of the cancer out of the body. She’s using similar methods to label nerves during surgery to help doctors avoid accidentally injuring them.

She is fluent in French and Vietnamese.

Dr. Tali Sharot

Faculty member of the Department of Cognitive, Perceptual & Brain Sciences

Tali Sharot is an Israeli who studies the neuroscience of emotion, social interaction, decision making, and memory. Specifically her lab studies how our experience of emotion impacts how we think and behave on a daily basis, and when we suffer from mental illnesses like depression and anxiety.

She’s a descendant of Karl Marx.


Dr. Michelle Khine

Biomedical engineer, professor at UC Irvine, and co-Founder at Shrink Nanotechnologies

Michelle Khine uses Shrinky Dinks — a favorite childhood toy that shrinks when you bake it in the oven — to build microfluidic chips to create affordable tests for diseases in developing countries.

She set a world record speed of 38.4 mph for a human-powered vehicle as a mechanical engineering grad student at UC Berkeley in 2000.


Dr. Nina Tandon

Electrical and biomedical engineer at Columbia’s Laboratory for Stem Cells and Tissue Engineering; Adjunct professor of Electrical Engineering at the Cooper Union

Nina Tandon uses electrical signals and environmental manipulations to grow artificial tissues for transplants and other therapies. For example, she worked on an electronic nose used to “smell” lung cancer and now she’s working on growing artificial hearts and bones.

In her spare time, the TED Fellow does yoga, runs, backpacks, bakes and does metalsmithing. Her nickname is “Dr. Frankenstein.”


Dr. Lisa Randall

Physicist and professor

Lisa Randall is considered to be one of the nation’s foremost theoretical physicists, with an expertise in particle physics and cosmology. The math whiz from Queens is best known for her models of particle physics and study of extra dimensions.

She wrote the lyrics to an opera that premiered in Paris and has an eclectic taste in movies.


Dr. Maria Spiropulu

Experimental particle physicist and professor of physics at Caltech

Maria Spiropulu develops experiments to search for dark matter and other theories that go beyond the Standard Model, which describes how the particles we know of interact. Her work is helping to fill in holes and deficiencies in that model. She works with data from the Large Hadron Collider.

She’s the great-grandchild of Enrico Fermi in Ph.D lineage — which means her graduate adviser’s adviser’s adviser was the great Enrico Fermi who played a key role in the development of basic physics.


Dr. Jean-Baptiste Michel

Mathematician, engineer, and researcher

The French-Mauritian Jean-Baptiste Michel is a mathematician and engineer who’s interested in analyzing large volumes of quantitative data to better understand our world.

For example, he studied the evolution of human language and culture by analyzing millions of digitized books. He also used math to understand the evolution of disease-causing cells, violence during conflicts, and the way language and culture change with time.

He likes “Modern Family” and “Parks and Recreation,” he listens to the Black Keys and Feist, and his favorite restaurant in New York City is Kyo Ya.


Dr. John Dabiri

Professor of aeronautics and bioengineering

John Dabiri studies biological fluid mechanics and wind energy — specifically how animals like jellyfish use water to move around. He also developed a mathematical model for placing wind turbines at an optimal distance from each other based on data from how fish schools move together in the water.

He won a MacArthur “genius grant” in 2010.


Isaac Kinde

As a graduate student (M.D./Ph.D.) at Johns Hopkins, Isaac Kinde (Ethiopian/Eritrean) is working on improving the accuracy of genetic sequencing so that it can be used to diagnose cancer at an early stage in a simple, noninvasive manner.

In 2007 he worked with Bert Vogelstein, who just won the $3 million Breakthrough Prize in Life Sciences.

He’s an avid biker, coffee drinker and occasional video game player.


Dr. Franziska Michor

Franziska Michor received her PhD from Harvard’s Department of Organismic and Evolutionary Biology in 2005, followed by work at Dana-Farber Cancer Institute in Boston, then was assistant professor at Sloan-Kettering Cancer Center in New York City. In 2010, she moved to the Dana-Farber Cancer Institute and Harvard School of Public Health. Her lab investigates the evolutionary dynamics of cancer.

Both Franziska and her sister Johanna, who has a PhD in Mathematics, are licensed to drive 18-wheelers in Austria.


Heather Knight

Heather Knight (Ph.D. student at Carnegie Mellon) loves robots — and she wants you to love them too. She founded Marilyn Monrobot, which creates socially intelligent robot performances and sensor-based electronic art. Her robotic installations have been featured at the Smithsonian-Cooper Hewitt Design Museum, LACMA, and PopTech.

In her graduate work she studies human-robot interaction, personal robots, theatrical robot performances, and designs behavior systems.

When she’s not building robots, she likes salsa dancing, karaoke, traveling, and film festivals.


Clio Cresswell

The Australian author of “Mathematics and Sex,” Clio Cresswell uses math to understand how humans should find their partners. She came up with what she calls the “12 Bonk Rule,” which means that singles have a greater chance of finding their perfect partner after they date 12 people.

If she’s not at her desk working her brain, you’ll find her at the gym either bench-pressing her body weight or hanging upside down from the gym rings.


Dr. Cheska Burleson

Cheska Burleson (PhD in Chemical Oceanography) focused her research on the production of toxins by algae blooms commonly known as “red tide.” She also evaluated the medicinal potential of these algal species, finding extracts that show promise as anti-malarial drugs and in combating various strains of staphylococcus, including MRSA—the most resistant and devastating form of the bacteria.

Cheska entered college at seventeen with enough credits to be classified as a junior. She also spent her formative years as a competitive figure skater.


Bobak Ferdowsi

Systems engineer and flight director for the Mars Curiosity rover

Bobak Ferdowsi (an Iranian-American) gained international fame when the Mars Curiosity rover landed on the surface of Mars in August 2012. Since then, the space hunk has become an Internet sensation, gaining more than 50,000 followers on Twitter, multiple wedding proposals from women, and the unofficial title of NASA’s sexy “Mohawk Guy.”

He’s best known for his stars-and-stripes mohawk (which he debuted for the Curiosity landing), but he changes his hairstyle frequently.

He is a major Star Trek fan.

Ariel Garten

CEO and head of research at InteraXon

By tracking brain activity, the Canadian Ariel Garten creates products to improve people’s cognition and reduce stress. Her company just debuted Muse, a brain-sensing headband that shows your brain’s activity on a smartphone or tablet. The goal is to eventually let you control devices with your mind.

She runs her own fashion design business and has opened fashion week runways with shirts featuring brainwaves.


Amy Mainzer

Astrophysicist and deputy project scientist for the Wide-field Infrared Survey Explorer (WISE) at NASA’s Jet Propulsion Laboratory.

Amy Mainzer built the sensor for the Spitzer Space Telescope, a NASA infrared telescope that’s been going strong for the last 10 years in deep space. Now she’s the head scientist for a new-ish telescope, using it to search the sky for asteroids and comets.

Her job is to better understand potentially hazardous asteroids, including how many there are as well as their orbits, sizes, and compositions.

Her idea of a good time is to do roller disco in a Star Trek costume.


Dr. Aditi Shankardass

Pediatric neurologist, Boston Children’s Hospital, Harvard Medical School (her British pedigree includes a B.Sc. in physiology from King’s College London, an M.Sc. in neurological science from University College London, and a Ph.D. in cognitive neuroscience from the University of Sheffield).

Aditi Shankardass is a renowned pediatric neurologist who uses real-time brain recordings to accurately diagnose children with developmental disorders and the underlying neurological causes of dyslexia.

Her father is a lawyer who represents celebrities. She enjoys dancing, acting, and painting.


Albert Mach

Bio-engineer and senior scientist at Integrated Plasmonics Corporation

As a graduate student Albert Mach designed tiny chips that can separate out cells from fluids and perform tests using blood, pleural effusions, and urine to detect cancers and monitor them over time.

He loved playing with LEGOs as a kid.


Now that we’ve had a look at ‘modern scientists’ it is only appropriate to look at a bit of ‘modern science’…

STEM (Science, Technology, Engineering, Math) is not a job – it’s a way of life. You can’t turn it off… you don’t leave that understanding and outlook at the office when you go home (not that any of the above brainiacs stop working on a problem just because it’s 5PM). One of the fundamental aspects of science is measurement: it is absolutely integral to every sector of scientific endeavor. In order to measure, and to communicate those measurements to other scientists, a set of units is required – one that is shared and agreed upon by all. We are familiar with the meter, the litre, the hectare and the kilogram. But there are other, less well-known measurements – one of which I will share with you here.

What do humans, beer, sub-atomic collisions, the cosmos and the ‘broad side of a barn’ have in common? Let’s find out!

Units of measure are extremely important in any branch of science, but some of the most fundamental units are those of length (1 dimension), area (2 dimensions) and volume (3 dimensions). In human terms, we are familiar with the meter, the square meter and the litre. (For those lonely Americans – practically the only culture that is not metric – think of yards, square yards and quarts). I want to introduce you to a few different measurements.

There are three grand ‘scales’ of measurement in our observable universe: the very small (as in subatomic), the human scale, and the very large (as in cosmology). Even if you are not science-minded in detail, you will have heard of the angstrom [10^-10 m, one ten-billionth of a meter] – used to measure things like atoms; and the light-year [about 9.5 trillion kilometers, or roughly 6 trillion miles] – used to measure galaxies.

the broad side of a barn

The actual unit of measure I want to share is called the “Hubble-barn” (a unit of volume), but a bit of background is needed. The barn is a serious unit of area used by nuclear physicists. One barn is equal to 1.0 x 10^-28 m² – about the cross-sectional area of an atomic nucleus. The term was coined by physicists running the incredibly large machines (atom smashers) that help us understand subatomic structure and function by essentially creating head-on collisions between particles travelling in opposite directions at very high speeds. It is very, very difficult! And the colloquial expression “you can’t hit the broad side of a barn” led to the name for this unit of measure…

subatomic collision

Now, since even scientists have a sense of humor (but understand rigor and analogy), they further derived two more units of measure based on the barn: the outhouse (smaller than a barn) [1.0 x 10^-6 barns], and the shed (smaller than an outhouse) [1.0 x 10^-24 barns].

So now we have resolved part of the aforementioned “Hubble-barn” unit of measure. Remember that a unit of volume requires three dimensions, and we have now established two of them with a unit of area (the barn). A third dimension (length, a one-dimensional quantity) is still needed…

Large Hadron Collider

The predecessor to Hubble-barn is the Barn-megaparsec. A parsec is equal to about 3.26 light years (31 trillion kilometers / 19 trillion miles). Its name is derived from the distance at which one astronomical unit subtends an angle of one arcsecond. [Don’t stress if you don’t get those geeky details, the parsec basically makes it easy for astronomers to calculate distances directly from telescope observations].

A megaparsec is one million parsecs. As in a bit over 3 million light years. A really, really long way… this type of unit of measure is what astronomers and astrophysicists use to measure the size and distance of galaxies and entire chunks of the universe.

 

The bottom line is that if you multiply a really small unit (a barn) by a really big unit (a megaparsec), you get something rather ordinary – in this case about 3 ml, roughly two-thirds of a teaspoon. So here is where we get to the fun (ok, a bit geeky) aspect of scientific measurements. The next time your cake recipe calls for a teaspoon of vanilla, just ask for a barn-megaparsec and a half…

The Hubble length

Since for our final scientific observation we need a unit of measure quite a bit bigger than a teaspoon, we turn to the Hubble length (roughly the radius of the observable universe, derived by dividing the speed of light by the Hubble constant). It’s about 4,228 million parsecs [13.8 billion light years]. As you can see, by using a much larger length than the megaparsec we get a much larger unit of volume. Once the high mathematics is complete, we find that one Hubble-barn works out to roughly 13 litres – call it a couple of dozen pints… of beer. So two physicists go into a bar… and order a Hubble-barn of ale… (one round covers the whole table).
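For anyone who wants to check the arithmetic, here is a quick sketch in Python using the figures quoted above (1 barn = 1.0 x 10^-28 m², 1 parsec ≈ 3.0857 x 10^16 m, Hubble length ≈ 4,228 million parsecs):

```python
barn = 1.0e-28                        # m^2, as defined above
parsec = 3.0857e16                    # m
megaparsec = 1.0e6 * parsec           # m
hubble_length = 4.228e9 * parsec      # ~4,228 million parsecs, in metres

barn_megaparsec = barn * megaparsec   # m^3
hubble_barn = barn * hubble_length    # m^3

print(f"barn-megaparsec = {barn_megaparsec * 1e6:.2f} ml")   # ~3.09 ml, about 2/3 of a teaspoon
print(f"Hubble-barn     = {hubble_barn * 1e3:.1f} litres")   # ~13 litres, a generous round of beer
```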

See… there IS a practical use for science!

Pint of Beer

Hubble telescope

 

The Certainty of Uncertainty… a simple look at the fascinating and fundamental world of Quantum Physics

June 7, 2015 · by parasam

Recently a number of articles have been posted, all reporting on a new set of experiments that attempt to shed light on one of the most fundamental postulates of quantum physics: the apparently contradictory nature of physical matter. Is ‘stuff’ a Wave or a Particle? Or is it both at the same time? Or, even more perplexing (and apparently validated by a number of experiments, including this most recent one): does the very act of observation determine whether it manifests as a wave or a particle?

Yes, this can sound weird – and even really smart scientists struggled with the challenge it poses to our normal intuitions: Einstein famously derided the related quantum phenomenon of entanglement as “spooky action at a distance”; Niels Bohr (one of the pioneers of quantum theory) said “if quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet.”

If you are not immediately familiar with the duality paradox of objects in quantum physics (the so-called wave/particle paradox) – then I’ll share here a very brief introduction, followed by some links to articles that are well worth reading before continuing with this post, as the basic premise and current theories really will help you understand the further comments and alternate theory that I will offer. The authors of these referenced articles are much smarter than me on the details and mathematics of quantum physics – and yet do not (in these articles) use high mathematics, etc – I found the presentations clear and easy to comprehend.

Essentially, back when the physics of very small things was being explored by the great minds of Einstein, Niels Bohr, Werner Heisenberg and others, various properties of matter were derived. One of the principles of quantum mechanics – the so-called Heisenberg Uncertainty Principle (after Heisenberg, who first introduced it in 1927) – is that you cannot know precisely both the position and the momentum of any given particle at the same time.

The more precisely you know one property (say, position) the less precisely you can know the other property (in this case, momentum). In our everyday macro world, we “know” that a policeman can know both our position (300 meters past the intersection of Grand Ave. and 3rd St.) and our momentum (87 km/h) when his radar gun observed us in a 60 km/h zone… and as much as we may not like the result, we understand and believe the science behind it.
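A standard textbook-style comparison shows why the radar gun gets away with it. The uncertainty relation Δx·Δp ≥ ħ/2 puts a floor of ħ/(2·m·Δx) on the velocity uncertainty; the masses and position uncertainties below are illustrative values only:

```python
hbar = 1.0546e-34   # reduced Planck constant, J*s

def min_velocity_uncertainty(mass_kg: float, delta_x_m: float) -> float:
    """Smallest velocity uncertainty allowed by delta_x * delta_p >= hbar / 2."""
    return hbar / (2.0 * mass_kg * delta_x_m)

# A 1500 kg car located to within a micrometre: the quantum bound is absurdly tiny.
print(min_velocity_uncertainty(1500.0, 1e-6))      # ~3.5e-32 m/s
# An electron confined to an atom-sized region (~1 angstrom): the bound is enormous.
print(min_velocity_uncertainty(9.11e-31, 1e-10))   # ~5.8e5 m/s (hundreds of km/s)
```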

In the very, very small world of quantum objects (where an atom is a really big thing…) things do not behave as one would think. The Uncertainty Principle is the beginning of ‘strangeness’, and it only gets weirder from there. As the further study of quantum mechanics, quantum physics and other areas of subatomic knowledge grew, scientists began to understand that matter apparently could behave as either a wave or a particle.

This is like saying that a bit of matter could be either a stream of water or a piece of ice – at the same time! And if this does not stretch your mental process into enough of a knot… the mathematics proves that everything (all matter in the known universe) actually possesses this property of “superposition” [the wave/particle duality] and that the act of observing the object is what determines whether the bit of matter ‘becomes’ a wave or a particle.

John Wheeler, in 1978, proposed his now-famous “delayed choice” thought experiment [Gedankenexperiment]: a variation of the double-slit test applied to a single quantum object. Essentially you fire a quantum object (a tiny bit of matter) at two parallel slits in a solid panel. If the object is a particle (like a ball of ice) then it can only go through one slit or the other – it can’t stay a particle and go through both at the same time. If, however, the quantum object is a wave (such as a light wave) then it can go through both slits at the same time. Observation of the result tells us whether the quantum object was a wave or a particle. How? If the object was a wave, then we will see an interference pattern on the other side of the slits (since the wave, passing through both slits, recombines on the far side and exhibits ‘interference’ – some parts of the wave combine constructively and some destructively – and these alternating crests and valleys are easily recognized).
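As a rough numerical sketch of the two outcomes, the idealized two-slit fringe pattern follows cos²(π·d·x / (λ·L)); the wavelength, slit separation and screen distance below are arbitrary illustrative values, and the single-slit case is simplified to a featureless level:

```python
import math

wavelength = 500e-9    # m, green light (illustrative)
slit_sep = 50e-6       # m between the two slits
screen_dist = 1.0      # m from the slits to the screen

def two_slit_intensity(x_m: float) -> float:
    """Relative intensity with both slits open: alternating bright and dark fringes."""
    phase = math.pi * slit_sep * x_m / (wavelength * screen_dist)
    return math.cos(phase) ** 2

def one_slit_intensity(x_m: float) -> float:
    """With a single idealized slit there is no fringe structure at all."""
    return 0.5

for mm in range(0, 31, 5):
    x = mm * 1e-3
    print(f"x = {mm:2d} mm   both slits: {two_slit_intensity(x):.2f}   one slit: {one_slit_intensity(x):.2f}")
```

With both slits open the printout alternates between bright (1.00) and dark (0.00) every 5 mm; with one slit open it stays flat – the signature difference the experiment looks for.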


However, if the quantum object is a particle then no interference will be observed, as the object could only pass through one slit. Here’s the kicker: assume that we can open or close one of the slits very quickly – AFTER the quantum object (say a photon) has been fired at the slits. This is part of the ‘observation’. What if the decision to open or close one of the slits is made after the particle has committed to passing through either one slit or the other (as a particle)? Let’s say that initially (when the photon was fired) only one slit was open, but that subsequently we opened the other slit. If the particle was ‘really’ a particle then we should see no interference pattern – the expected behavior. But… the weirdness of quantum physics says that if we open the second slit (thereby setting the stage to observe double-slit interference) – that is exactly what we will observe!

And this is precisely what Alain Aspect and his colleagues demonstrated experimentally in 2007 (in France). Those experiments were performed with photons. The most recent experiments showed, for the first time, that this delayed-choice behavior also applies to massive objects such as entire atoms (remember, in the quantum world an atom is really, really huge). Andrew Truscott (with colleagues at the Australian National University in late 2014) repeated the same experiment in principle but with helium atoms. (They whacked the atoms with laser pulses instead of feeding them through physical slits, but the same principle of ‘single vs double slit’ testing and observation applied.)

So, to summarize, we have a universe where everything can be (actually IS) two things at once – and only our observation of something decides what it turns out to be…

To be clear, this “observation” is not necessarily human observation (indeed it’s a bit hard to see atoms with a human eyeball..) but rather the ‘entanglement’ [to use the precise physics term] between the object under observation and the surrounding elements. For instance, the presence of one slit or two – in the above experiment – IS the ‘measurement’.

Now, back to the ‘paradox’ or ‘conundrum’ of the wave/particle duality. If we try – as a thought experiment – to understand what is happening, there are really only two possible conclusions: either a chunk of matter (a quantum object) is two things at once, or there is some unknown communications mechanism that would allow messaging at speeds faster than light (which would violate all current laws of physics). To explain this: one possibility is that the object that has already passed through a slit somehow signals back, “Hey, I got through the experiment by going through only one slit, so I must be a particle – please behave like a particle when you approach the slits…” Until recently, the only other possibility was this vexing ‘duality’, where the best scientists could come up with was that quantum objects appear to behave as waves… unless you look at them, in which case they behave like particles!

Yes, quantum physics is stranger than an acid trip…

A few years ago another wizard scientist (in this case a chemist using quantum mechanics to better understand what really goes on in chemical reactions) [Prof. Bill Poirier at Texas Tech] came up with a new theory: Many Interacting Worlds (MIW). Instead of having to believe that things are two things at once, or that they communicate over great distances at speeds faster than light, Prof. Poirier postulated that very small particles from other universes ‘seep through’ into ours and interact with our own universe.

Now, before you think that I, and these other esteemed scientists, have gone off the deep end completely – exhaustive peer review and many recalculations of the mathematics have shown that his theory does not violate ANY current laws of quantum mechanics, physics or general relativity. There is no ‘fuzziness’ in his theory: the apparently ‘indeterminate’ position (as observed in our world) of a particle is actually the observed phenomenon of the interaction between an ‘our world’ particle and an ‘other world’ particle.

Essentially Poirier is theorizing exactly what mystics have been telling us for a very long time: that there are parallel universes! Now, to be accurate, we are not completely certain (how can we be in what we now know is an Uncertain World) that this theory is the only correct one. All that has been proven at this point is that the MIW theory is at least as valid as the more well-established “wave/particle duality” theory.


Now, to clarify the “parallel universe” theory with a diagram: The black dots are little quantum particles in our universe, the white dots are similar particles in a parallel universe. [Remember at the quantum level of the infinitesimally small there is mostly empty space, so there is no issue with little things ‘seeping in’ from an alternate universe] These particles are in slightly different positions (notice the separations between the black dot and white dot). It’s this ‘positional uncertainty’ caused by the presence of particles from two universes very close to each other that is the root of the apparent inability to measure the position and momentum exactly in our  universe.

Ok, time for a short break before your brain explodes into several alternate universes… Below is a list of the links to the theories and articles discussed so far. I encourage a brief internet diversion – each of these is informative and even if not fully grasped the bones of the matter will be helpful in understanding what follows.

Do atoms going through double slit know they are being observed?
Strange behavior of quantum particles
Reality doesn’t exist until we measure it
Future events determine what happens in the past???
Parallel worlds exist and interact with ours

Ok, you are now fortified with knowledge, understanding – or possibly in need of a strong dose of a good single malt whiskey…

What I’d like to introduce is an alternative to (or perhaps an extension of) Prof. Poirier’s theory of parallel universes – which, BTW, I have no issue with; I just don’t think it necessarily explains all the issues surrounding the current ‘wave/particle duality’.

In the MIW (Many Interacting Worlds) notion, the property that is commingled with our universe is “position” – and yet there is the equally important property of “momentum” that should be considered. If the property of Position is no longer sacred, should not Momentum be more thoroughly investigated?

A discussion on momentum will also provide some interesting insight on some of the vexing issues that Einstein first brought to our attention: the idea of time, the theory of ‘space-time’ as a construct, and the mathematical knowledge that ‘time’ can run forwards or backwards – intimating that time travel is possible.

First we must understand that momentum, as a property, has two somewhat divergent qualities depending on whether one is discussing the everyday ‘Newtonian’ world of baseballs and automobiles (two objects that are commonly used in examples in physics classes); or the quantum world of incredibly small and strange things. In the ‘normal’ world, we all learned that Momentum = Mass x Velocity. The classic equation p=mv explains most everything from why it’s very hard for an ocean liner to stop or turn quickly once moving, all the way to the slightly odd, but correct, fact that any object with no velocity (at complete rest) has no momentum. (to be fully accurate, this  means no relative velocity to the observer – Einstein’s relativity and all that…)

However, in the wacky and weird world of quantum mechanics all quantum objects always have both position and momentum. As discussed already, we can’t accurately know both at the same time – but that does not mean the properties don’t exist with precise values. The new theory mentioned above (the Many Interacting Worlds) is primarily concerned with ‘alternate universe particles’ interacting, or entangling, in the spatial domain (position) with particles in our universe.

What happens if we look at this same idea – but from a ‘momentum’ point of view? Firstly, we need to better understand the concept of time vs. momentum. Time is totally a human construct – that actually does not exist at the quantum level! And, if one carefully thinks about it, even at the macroscopic level time is an artifice, not a reality.

There is only now. Again, philosophers and mystics have been trying to tell us this for a very long time. If you look deep into a watch what you will see is the potential energy of a spring, through gears and cogwheels, causing movement of some bits of metal. That’s all. The watch has no concept of ‘time’. Even a digital watch is really the same: a little crystal is vibrating due to battery power, and some small integrated circuits are counting the vibrations and moving a dial or blinking numbers. Again, nothing more than a physical expression of momentum in an ordered way. What we humans extrapolate from that: the concept of time; past; future – is arbitrary and related only to our internal subjective understanding of things.

Going back to the diagram (below)


Let’s assume for a moment that the black and white dots represent slight differences in momentum rather than position. This raises several interesting possibilities: 1) an alternate universe, identical in EVERY respect to ours, except that with a slightly different momentum things would be occurring either slightly ahead (‘in the future’) or slightly behind (‘in the past’); or 2) our own universe exists in multiple momentum states at the same time – with only ‘observation’ deciding which version we experience.

One of the things that this could explain is the seemingly bizarre ability of some to ‘predict the future’, or to ‘see into the past’. If these ‘alternate’ universes (based on momentum) actually exist, then it is not all that hard to accept that some could observe them, in addition to the one where we most all commonly hang out.

Since most of quantum theory is based around probability, it appears likely that the most probable observable ‘alternate momentum’ events will be ones that are closely related in momentum to the particle currently under observation in our universe. But that does not preclude the possibility of observing particles that are much further removed in terms of momentum (i.e. potentially further in the past or future).

I personally do not possess the knowledge of the higher mathematics necessary to prove this to the same degree that Bill Poirier and other scientists have done with the positional theorems – but I invite that exercise if anyone has such skills. As a thought experiment, it seems as valid as anything else that has been proposed to date.

So now, as was so eloquently said at the end of each episode of the “Twilight Zone”: we now return you to your regular program and station. Only perhaps slightly more confused… but also with some cracks in the rigid worldview of macroscopic explanations.

The Patriot Act – upcoming expiry of Section 215 and other unpatriotic rules…

April 18, 2015 · by parasam

Section215

On June 1, less than 45 days from now, a number of sections of the Patriot Act expire. The administration and a large section of our national security apparatus, including the Pentagon, Homeland Security, etc. are strongly pushing for extended renewal of these sections without modification.

While this may on the surface seem like something we should do (we need all the security we can get in these times of terrorism, Chinese/North Korean/WhoKnows hacks, etc. – right?) – the reality is significantly different. Many of the Sections of the Patriot Act (including ones that are already in force and do not expire for many years to come) are insidious, give almost unlimited and unprecedented surveillance powers to our government (and by the way any private contractors who the government hires to help them with this task), and are mostly without functional oversight or accountability.

Details of the particular sections up for renewal may be found in this article, and for a humorous and allegorical take on Section 215 (the so-called “Library Records” provision) I highly recommend this John Oliver video. While the full Patriot Act is huge, and covers an exhaustingly broad scope of activities permitted to the government (meaning its various security agencies, including but not limited to the CIA, FBI, NSA, Joint Military Intelligence Services, etc.), the sections of particular interest in terms of digital communications security are the following:

  • Section 201, 202 – Ability to intercept communications (phone, e-mail, internet, etc.)
  • Section 206 – roving wiretap (ability to wiretap all locations that a person may have visited or communicated from for up to a year).
  • Section 215 – the so-called “Library Records” provision, basically allowing the government (NSA) to bulk-collect communications records from virtually everyone and store them for later ‘research’ to see whether any terrorist or other activity deemed to be in violation of national security interests can be identified.
  • Section 216 – pen register / trap and trace (the ability to collect metadata and/or actual telephone conversations – metadata does not require a specific warrant, recording content of conversations does).
  • Section 217 – computer communications interception (ability to monitor a user’s web activity, communications, etc.)
  • Section 225 – Immunity from prosecution for compliance with wiretaps or other surveillance activity (essentially protects police departments, private contractors, or anyone else that the government instructs/hires to assist them in surveillance).
  • Section 702 – Surveillance of ‘foreigners’ located abroad (in principle this should restrict surveillance to foreign nationals outside the US at the time of such action, but there is much gray area concerning exactly who is a ‘foreigner’ – for instance, is the foreign-born wife of a US citizen a “foreigner”, and if so, are communications between the wife and the husband fair game?).

Why is this Act so problematic?

KeyholePeeper

As with many things in life, the “law of unintended consequences” can often overshadow the original problem. In this case, the original rationale – wanting to get all the information possible about persons or groups that may be planning terrorist activities against the USA – was arguably noble, but the unprecedented powers and lack of accountability provided for by the Patriot Act have the potential (and in some cases have already been shown) to erode many of the individual freedoms that form the basis of our society.

Without regard to the methods or justification for his actions, the revelations provided by Ed Snowden’s leaks of the current and past practices of the NSA are highly informative. This issue is now public, and cannot be ‘un-known’. What is clearly documented is that the NSA (and, as has since come to light, other entities) has extended surveillance over millions of US citizens living within the domestic US to a far greater extent than even the original authors of the Patriot Act envisioned. [This was revealed in multiple TV interviews recently.]

The next major issue is that of ‘data creep’ – that such data, once collected, almost always gets replicated into other databases, etc. and never really goes away. In theory, to take one of the Sections (702), data retention even for ‘actionable surveillance of foreign nationals’ is limited to one year, and inadvertent collection of surveillance data on US nationals, or even a foreign national that has travelled within the borders of the USA is supposed to be deleted immediately. But absolutely no instruction or methodology is given on how to do this, nor are any controls put in place to ensure compliance, nor are any audit powers given to any other governmental agency.

As we have seen in past discussions regarding data retention and deletion with the big social media firms (Facebook, Google, Twitter, etc.) it’s very difficult to actually delete data permanently. Firstly, in spite of what appears to be an easy step, actually deleting your data from Facebook is incredibly hard to do (what appears to be easy is just the inactivation of your account, permanently deleting data is a whole different exercise). On top of that, all these firms (and the NSA is no different) make backups of all their server data for protection and business continuity. One would have to search and compare every past backup to ensure your data was also deleted from those.

And even the backups have backups… it’s considered an IT ‘best practice’ to back up critical information across different geographical locations in case of disaster. You can see the scope of this problem… and once you understand that the NSA for example will under certain circumstances make chunks of data available to other law enforcement agencies, how does one then ensure compliance across all these agencies that data deletion occurs properly? (Simple answer: it’s realistically impossible).

So What Do We Do About This?

The good news is that most of these issues are not terribly difficult to fix… but the hard part will be changing the mindset of many in our government who feel that they should have the power to do anything they want in total secrecy with no accountability. The “fix” is to basically limit the scope and power of the data collection, provide far greater transparency about both the methods and actual type of data being collected, and have powerful audit and compliance methods in place that have teeth.

The entire process needs to be stood on its end – with the goal being to minimize surveillance to the greatest extent possible, and to retain as little data as possible, with very restrictive rules about retention, sharing, etc. For instance, if data is shared with another agency, it should ‘self-expire’ (there are technical ways to do this) after a certain amount of time, unless it has been determined that this data is now admissible evidence in a criminal trial – in which case the expiry can be revoked by a court order.
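One hedged sketch of such a ‘technical way’ is crypto-shredding: every shared record is encrypted under its own key, the key lives in a key service with an expiry date, and destroying the key renders every copy of the data – including backups – unreadable. The class and names below are hypothetical and assume the Python cryptography package is installed:

```python
import time
from cryptography.fernet import Fernet

class ExpiringKeyStore:
    """Holds one encryption key per shared record, with a hard expiry."""

    def __init__(self):
        self._keys = {}   # record_id -> (key, expiry_epoch)

    def issue(self, record_id, ttl_seconds):
        key = Fernet.generate_key()
        self._keys[record_id] = (key, time.time() + ttl_seconds)
        return key

    def fetch(self, record_id):
        key, expiry = self._keys[record_id]
        if time.time() > expiry:
            del self._keys[record_id]          # expired: shred the key
            raise KeyError(f"{record_id}: key expired, data is now unrecoverable")
        return key

store = ExpiringKeyStore()
key = store.issue("case-42", ttl_seconds=3600)              # a court order could extend the TTL instead
ciphertext = Fernet(key).encrypt(b"record shared with another agency")
print(Fernet(store.fetch("case-42")).decrypt(ciphertext))   # works only while the key is alive
```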

fisainfographic3_blog_0

The irony is that even the NSA has admitted that there is no way it can possibly search through all the data it has already collected – in the sense of a general keyword search. It could of course look for a particular person or place name, but if that were all that was needed it could have collected surveillance data only for those parameters, instead of bulk-collecting from most American citizens living in the USA…

While they won’t give details, reasonable assumptions can be drawn from public filings and statements, as well as purchase information from storage vendors… and the NSA alone can be assumed to have many hundreds of exabytes of data stored. Given that 1 exabyte = 1,024 petabytes (each of which = 1,024 terabytes), this is an incredible amount of data. To put it another way, it’s hundreds of billions of gigabytes… and remember that your ‘fattest’ iPhone holds 128GB.
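The arithmetic behind that comparison, with the holdings figure itself left as a pure assumption:

```python
GB_PER_EB = 1024 ** 3      # 1 EB = 1,024 PB = 1,048,576 TB, roughly 1.07 billion GB
stored_eb = 300            # assumed holdings in exabytes, for illustration only
iphone_gb = 128

total_gb = stored_eb * GB_PER_EB
print(f"{total_gb:.2e} GB in total")                  # ~3.2e11 GB: hundreds of billions of gigabytes
print(f"{total_gb / iphone_gb:.2e} iPhones' worth")   # ~2.5 billion 128GB iPhones
```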

It’s a mindset of ‘scoop up all the data we can, while we can, just in case someday we might want to do something with it…’  This is why, if we care about our individual freedom of expression and liberty at all, we must protest against the blind renewal of these deeply flawed laws and regulations such as the Patriot Act.

This discussion is entering the public domain more and more – it’s making the news but it takes action not just talk. Make a noise. Write to your congressional representatives. Let them know this is an urgent issue and that they will be held accountable at election time for their position on this renewal. If the renewal is not granted, then – and typically only then – will the players be forced to sit down and have the honest discussion that should have happened years ago.

Shadow IT, Big Brother & The Holding Company, Thousand-Armed Management…

April 9, 2015 · by parasam

This article was inspired by reading about a challenge facing many organizations and their IT departments: that of “Shadow IT”. This is essentially the use of software by employees that is not formally approved or managed by the IT department. Often this is done quite innocently, as an expedient way to accomplish the task at hand when the perceived ‘correct’ software tool for the job is unavailable, hard to use or otherwise presents friction to the user.

A classic example, and in fact the instigating action for the article I read (here) is DropBox. This ubiquitous cloud storage service is so ‘friction-free’ to set up and use that many users opt for this app as a quick  means to store documents for easy retrieval as they move from place to place and device to device during the course of their day/week at work. The issues of security, backup, data integrity and so on usually never occur to them.

The Hidden Dangers

The dangers of ad-hoc solutions to a user’s need to do something (whether it’s to store, edit, send, etc.) are often not immediately apparent. Some of the issues that come up are: lack of security for company documents; lack of version control when documents are stored multiple times in various places; potential compromise of corporate network security (oftentimes users will reuse their corporate login credentials for DropBox – DropBox is not that difficult to hack, and once a set of credentials is discovered that works for one site, a hacker will then try other sites…); and a general diffusion of IT management policies and practices.

The unfortunate dialectic that often follows from the discovery of this practice is one of opposing sides: IT sees the user as the ‘bad guy’ and tries to enforce a totalitarian solution; the user feels discriminated against and gets frustrated that the tools they perceive they need are not provided. All this leads to a continual ‘cat and mouse’ game in which users feel even more ‘reason’ to utilize stealth IT solutions, while IT management feels it has no choice but to police users and invoke ever more draconian rules to prevent any action that is not ‘approved’.

Everyone Needs Awareness

A more cooperative solution can be found if both ‘sides’ (IT management and Users) get enlightened about the issues from both points of view. IT needs to accept that many of the toolsets often provided are ungainly, cumbersome, or otherwise hard to use – or don’t adequately address the needs of users; while users need to understand the security and management risks that Shadow IT solutions pose.

One of the biggest philosophical challenges is that most firms place IT somewhere near the top of the pyramid, with edicts on what to use and how to behave coming from a ‘top-down’ philosophy. A far more effective approach is to place IT at the ‘bottom of the stack’ – with IT truly being in a supportive role, literally acting as a foundation and glue for the actions of users. If the needs of the users are taken as real (within reason) and a concerted effort is taken to address those in a creative manner a much higher degree of conformance will follow.

Education of users is also paramount – many times existing software solutions are available within a corporate toolset but either are unknown to a user, or the easiest way to accomplish a task is not shown to the user. This paradigm (enlightened users acting with a common goal in cooperation with IT management) is actually a great model for other aspects of work life as well…

Big Brother & The Holding Company

BigBrother

Achieving the correct balance between user ‘freedom’ and the perceived need for IT management to monitor and control absolutely everything that is ‘data’ is a bigger challenge than it first appears. I’ve entitled this section to include “The Holding Company” for a more specific reason than just the alliteration… most organizations, whether your local Seven-Eleven or the NSA, not only like to observe (and record) all the goings-on of their employees (or, in the case of the NSA, basically every human and/or machine they can find…) but to hold on to this data, well, pretty much forever.

This ‘holding’ in and of itself raises some interesting philosophical questions… for instance, is it legal/ethical for a firm to continue to keep records pertaining to an employee that is no longer working for the firm? And if so, for how long? Under what conditions, or what subjects would some data be deemed necessary to keep longer than other data?

And BTW, if anyone still believes that old e-mails just aren’t that big a deal, please ask Amy Pascal (Sony Pictures exec…) if she wishes some of her past e-mails had never become public (thanks to the Hack of Armageddon). Perhaps one better way to handle this balance (privacy vs perceived necessity) is somewhat like a pre-nup: hammer out the details before the marriage… In the case of employee/employer, if data policies were clearly laid out in advance, with reason and rationale, better IT behavior – and fewer disgruntled employees later – would be far more likely.

From a user’s or employee’s perspective, here’s a (potentially embarrassing) scenario: during the course of normal business the user expresses frustration with a vendor to another employee of the current firm; a few years later said user leaves and goes to work for that vendor, having long forgotten about the momentary frustration and its perhaps less-than-wonderful expression. The original firm (probably some manager who had to explain why a good employee left) reviews e-mails still on file, finds this ‘gem’, and anonymously forwards it to the vendor… now the employer of the user… ouch!

If it could be proven, probably a black eye (or worse) for the original employer, but these things can be almost impossible to nail down to the degree of certainty required in our legal system, and the damage has already been done.

On the other hand, an audit trail of content moves by an employee of a major motion picture company that has experienced piracy could potentially help plug a leak that was costing the firm huge financial losses and also lead to the  appropriate actions being taken against the perpetrator.

The real issue here is good policy and governance, and then applying these polices uniformly across the board.

Thousand-Armed Management


The 1000-Armed Buddha (Avalokiteśvara) is traditionally understood as a deity of benevolent compassion – but with all-seeing, all-hearing and all-reaching attributes. That is exactly what is required today for sound and secure IT management across our new hyper-connected reality. With the concept of perimeters and ‘walled gardens’ fallen by the wayside – along with hardware firewalls, antiquated OS’s and other roadkill left behind by interconnected clouds and the multiple mobile devices ‘attached’ to each user – an entirely new paradigm is required for administration.

Closing the circle back to our introduction: in this new world the attractiveness and utility of so-called ‘Shadow IT’ is even more pervasive – and harder to monitor and control – than before. In the old world order, where desktops were all controlled on a corporate LAN, it was easier to monitor or block access to entities such as DropBox and other cloud apps that users often found fit their needs better than the tools provided by the local IT toolsets. It’s much more difficult to do this when a user is on an airplane, logged in to the ‘net via GoGo at 10,000 meters, using cloud apps located in 12 different countries simultaneously.

The Buddha Avalokiteśvara is also known for promoting teaching as one of the greatest ‘positive actions’ that one can take – (I’ll save a post on how our current culture values teachers vs stockbrokers for another time…). The most powerful tool any IT manager can utilize is education and sharing of knowledge in an effective manner. Informed users will generally make better decisions – and at the least will have a better understanding of IT policies and procedures.

Future posts on this general topic will delve a bit further into some of the discrete methods that can be utilized to effect this ‘1000-armed management’ – here it’s enough to introduce the concepts and the need for a radically new way of providing the balance of security and usability required today.

Digital Security in the Cloudy & Hyper-connected world…

April 5, 2015 · by parasam

Introduction

As we inch closer to the midpoint of 2015, we find ourselves in a drastically different world of both connectivity and security. Many of us switch devices throughout the day, from phone to tablet to laptop and back again. Even in corporate workplaces, the ubiquity of mobile devices has come to stay (in spite of the clamoring and frustration of many IT directors!). The efficiency and ease of use of integrated mobile and tethered devices propels many business solutions today. The various forms of cloud resources link all this together – whether personal or professional.

But this enormous change in topology has introduced very significant security implications, most of which are not really well dealt with using current tools, let alone software or devices that were ‘state of the art’ only a few years ago.

What does this mean for the user – whether personal or business? How do network admins and others that must protect their networks and systems deal with these new realities? That’s the focus of the brief discussion to follow.

No More Walls…

NoWalls

The pace of change in the ‘Internet’ is astounding. Even seasoned professionals who work and develop in this sector struggle to keep up. Every day when I read periodicals, news, research, feeds, etc. I discover something I didn’t know the day before. The ‘technosphere’ is actually expanding faster than our collective awareness – instead of hearing that such-and-such is being thought about, or hopefully will be invented in a few years, we are told that the app or hardware already exists and has a userbase of thousands!

One of the most fundamental changes in the last few years is the transition from ‘point-to-point’ connectivity to a ‘mesh’ connectivity. Even a single device, such as a phone or tablet, may be simultaneously connected to multiple clouds and applications – often in highly disparate geographical locations. The old tried-and-true methodology for securing servers, sessions and other IT functions was to ‘enclose’ the storage, servers and applications within one or more perimeters – then protect those ‘walled gardens’ with firewalls and other intrusion detection devices.

Now that we reach out every minute across boundaries to remotely hosted applications, storage and processes the very concept of perimeter protection is no longer valid nor functional.

Even the Washing Machine Needs Protection

Another big challenge for today’s security paradigm is the ever-growing “Internet of Things” (IoT). As more and more everyday devices become network-enabled, from thermostats to washing machines, door locks to on-shelf merchandise sensors – an entirely new set of security issues has been created. Already the M2M (Machine to Machine) communications are several orders of magnitude greater than sessions involving humans logging into machines.

This trend is set to literally explode over the next few years, with an estimated 50 billion devices being interconnected by 2020 (up from 8.7 billion in 2012). That’s a 6x increase in just 8 years… The real headache behind this (from a security point of view) is the amount of connections and sessions that each of these devices will generate. It doesn’t take much combinatorial math to see that literally trillions of simultaneous sessions will be occurring world-wide (and even in space… the ISS has recently completed upgrades to push 3Mbps channels to 300Mbps – a 100x increase in bandwidth – to support the massive data requirements of newer scientific experiments).

There is simply no way to put a ‘wall’ around this many sessions that are occurring in such a disparate manner. An entirely new paradigm is required to effectively secure and monitor data access and movement in this environment.

How Do You Make Bulletproof Spaghetti?

spaghetti

If you imagine the session connections from devices to other devices as strands of pasta in a boiling pot of water – constantly moving and changing in shape – and then wanted to encase each strand in an impermeable shield…. well you get the picture. There must be a better way… There are a number of efforts underway currently from different researchers, startups and vendors to address this situation – but there is no ‘magic bullet’ yet, nor is there even a complete consensus on what method may be best to solve this dilemma.

One way to attempt to resolve this need for secure computation is to break the problem down into its two main constituents: authentication of who/what is connecting; and then protection of the “trust” that is granted by that authentication. The first part (authentication) can be addressed with multiple-factor login methods: combinations of biometrics, one-time codes, previously registered ‘trusted devices’, etc. I’ve written on these issues here earlier. The second part – what a person or machine has access to once authenticated, and how to protect those assets if the authentication is breached – is a much thornier problem.
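As one small, hedged illustration of the ‘one-time codes’ factor mentioned above, here is a minimal time-based one-time password (TOTP, per RFC 6238) generator using only the Python standard library; the shared secret is a made-up example value:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # both sides derive the same counter
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The server and the user's authenticator app both hold this secret; the code changes every 30 s.
print(totp("JBSWY3DPEHPK3PXP"))
```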

In fact, from my perspective the best method involves a rather drastically different way of computing in the first place – one that would not have been possible only a few years ago. Essentially what I am suggesting is a fully virtualized environment where each session instance is ‘built’ for the duration of that session; only exposes the immediate assets required to complete the transactions associated with that session; and abstracts the ‘devices’ (whether they be humans or machines) from each other to the greatest degree possible.

While this may sound a bit complicated at first, the good news is that we are already moving in that direction, in terms of computational strategy. Most large scale cloud environments already use virtualization to a large degree, and the process of building up and tearing down virtual instances has become highly automated and very, very fast.
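A minimal sketch of that ‘build up, use, tear down’ pattern, assuming Docker is available on the host; the image name, command and hardening flags are illustrative placeholders rather than a complete policy:

```python
import subprocess
import uuid

def run_session(image: str, command: list) -> None:
    """Run one user session inside a throwaway, locked-down container."""
    name = f"session-{uuid.uuid4().hex[:12]}"
    subprocess.run(
        [
            "docker", "run",
            "--rm",                # tear the instance down the moment the session ends
            "--name", name,
            "--read-only",         # no writable root filesystem to tamper with
            "--network", "none",   # expose no network unless this session truly needs it
            "--cap-drop", "ALL",   # drop all Linux capabilities
            image,
        ] + command,
        check=True,
    )

# e.g. run_session("alpine:3.19", ["echo", "hello from an ephemeral session"])
```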

In addition, for some time now the industry has been moving towards thinner and more specific apps (such as found on phones and tablets) as opposed to massive thick client applications such as MS Office, SAP and other enterprise builds that fit far more readily into the old “protected perimeter” form of computing.

In addition (and I’m not making a point of picking on a particular vendor here, it’s just that this issue is a “fact of nature”) the Windows API model is just not secure any more. Due to the requirement of backwards compatibility – to a time where the security threats of today were not envisioned at all – many of the APIs are full of security holes. It’s a constant game of reactively patching vulnerabilities once discovered. This process cannot be sustained to support the level of future connectivity and distributed processing towards which we are moving.

Smaller, lightweight apps have fewer moving parts, and therefore by their very nature are easier to implement, virtualize, protect – and replace entirely should that be necessary. To use just an example: MS Word is a powerful ‘word processor’ – which has grown to integrate and support a rather vast range of capabilities including artwork, page layout, mailing list management/distribution, etc. etc. Every instance of this app includes all the functionality, of which 90% is unused (typically) during any one session instance.

If this “app” were broken down into many smaller “applets” that called on each other as required, and were made available to the user on the fly during the ‘session’, the entire compute environment would become more dynamic, flexible and easier to protect.
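A toy sketch of that applet idea: capabilities are registered as independent factories and only constructed when a session actually calls for them. All names here are hypothetical; a real implementation might load separate modules (e.g. via importlib) or separate services instead:

```python
from typing import Callable, Dict

def _make_spellcheck() -> Callable[[str], str]:
    return lambda text: text                     # stand-in for a real spell-check applet

def _make_mail_merge() -> Callable[[str, list], list]:
    return lambda template, rows: [template.format(**row) for row in rows]

FACTORIES: Dict[str, Callable[[], object]] = {
    "spellcheck": _make_spellcheck,
    "mail_merge": _make_mail_merge,
}

_loaded: Dict[str, object] = {}

def get_applet(name: str) -> object:
    """Build an applet on first use; applets a session never asks for are never exposed."""
    if name not in _loaded:
        _loaded[name] = FACTORIES[name]()
    return _loaded[name]

# A session that only edits text never touches the mail-merge applet at all.
print(get_applet("mail_merge")("Dear {name}", [{"name": "Ada"}, {"name": "Alan"}]))
```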

Lowering the Threat Surface

Immune-System

One of the largest security challenges of a highly distributed compute environment – such as is presented by the typical hybrid cloud / world-wide / mobile device ecosystem that is rapidly becoming the norm – is the very large ‘threat surface’ that is exposed to potential hackers or other unauthorized access.

As more and more devices are interconnected – and data is interchanged and aggregated from millions of sensors, beacons and other new entities, the potential for breaches is increased exponentially. It is mathematically impossible to proactively secure every one of these connections – or even monitor them on an individual basis. Some new form of security paradigm is required that will, by its very nature, protect and inhibit breaches of the network.

Fortunately, we do have an excellent model on which to base this new type of security mechanism: the human immune system. The ‘threat surface’ of the human body is immense, when viewed at a cellular level. The number of pathogens that continually attempt to violate the human body systems are vastly greater than even the number of hackers and other malevolent entities in the IT world.

The conscious human brain could not even begin to attempt to monitor and react to every threat that the hordes of bacteria, viruses and other pathogens bring against the body ecosystem. About 99% of such defensive response mechanisms are ‘automatic’ and go unnoticed by our awareness. Only when things get ‘out of control’ and the symptoms tell us that the normal defense mechanisms need assistance do we notice things like a sore throat, an ache, or in more severe cases: bleeding or chest pain. We need a similar set of layered defense mechanisms that act completely automatically against threats to deal with the sheer numbers and variations of attack vectors that are becoming endemic in today’s new hyper-connected computational fabric.

A Two-Phased Approach to Hyper-Security

Our new hyper-connected reality requires an equally robust and all-encompassing security model: Hyper-Security. In principle, an approach that combines the absolute minimal exposure of any assets, applications or connectivity with a corresponding ‘shielding’ of the session using techniques to be discussed shortly can provide an extremely secure, scalable and efficient environment.

Phase One – building user ‘sessions’ (whether that user is a machine or a human) that expose the least possible amount of threat surface while providing all the functionality required during that session – has been touched on earlier during our discussion of virtualized compute environments. The big paradigm shift here is that security is ‘built in’ to the applications, data storage structures and communications interface at a molecular level. This is similar to how the human body systems are organized, which in addition to the actual immune systems and other proactive ‘security’ entities, help naturally limit any damage caused by pathogens.

This type of architecture simply cannot be retrofitted into legacy OS systems – but it’s time many of these were moved to the shelf anyway: they are becoming more and more clumsy in the face of highly virtualized environments, not to mention the extreme time and cost of maintaining these outdated systems. Having an attachment or allegiance to an OS today is as archaic as preferring a Clydesdale over a Palomino in the world of Ferraris and Teslas… All that really matters today is the user experience, reliability and security. How something gets done should not matter any more – even to highly technical users – than knowing exactly which hormones are secreted by our islets of Langerhans (small bits of the pancreas that produce incredibly important things like insulin). These things must work (otherwise humans get diabetes or computers fail to process), but very few of us need to know the details.

Although the concept of this distributed, minimalistic and virtualized compute environment is simple, the details can become a bit complex – I’ll reserve further discussion for a future post.

To summarize, the security provided by this new architecture is one of prevention, limitation of damage and ease of applying proactive security measures (to be discussed next).

Phase Two – the protection of compute sessions from either internal or external threats – also requires a novel approach suited to our new ecosystems. External threats are essentially any attempt by unauthorized users (whether human, robot, extraterrestrial, etc.) to infiltrate a protected system and/or take data from it. Internal threats are activities attempted by an authorized user that are not authorized actions for that particular user – for example, a rogue network admin transferring data to an unauthorized endpoint (piracy) or destroying data.

The old-fashioned ‘perimeter defense’ systems are no longer appropriate for protecting cloud servers, mobile devices and the like. A particular example of just how extensive and interconnected a single ‘session’ can be is given here:

A mobile user opens an app on their phone (say an image editing app) that is ‘free’ to the user. The user actually ‘pays’ for this ‘free’ privilege by donating a small amount of screen space (and time/focus) to advertising. In the background, the app provides basic demographic information about the user, their precise physical location (in many instances), and other data to an external “ad insertion service”.

This cloud-based service in turn aggregates the ‘avails’ (sorted by location, OS, hardware platform, the type of app the user is running, etc.) and often submits them [along with screen dimensions and animation capabilities] to an online auction system that bids each ‘avail’ against a pool of appropriate ads that are preloaded and ready to be served.

Typically the actual ads are not located on the same server, or even in the same country, as either the ad insertion service or the auction service. It’s very common for up to half a dozen countries, clouds and other entities to participate in delivering a single ad to a mobile user.
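To make that chain a little more concrete, here is a deliberately simplified sketch of the hops a single ad request might take. Every service name, field and URL below is a hypothetical stand-in; a real exchange involves far more metadata, parties and latency constraints:

```python
# Simplified, hypothetical sketch of the ad-insertion chain described above.
# Each "service" stands in for a separate company, cloud and often country.

def app_builds_avail(user):
    # The "free" app packages basic demographics, location and screen info.
    return {"os": user["os"], "location": user["location"],
            "screen": user["screen"], "app_type": "photo_editor"}

def ad_insertion_service(avail):
    # Aggregates avails and forwards them to a real-time auction.
    return auction_service(avail)

def auction_service(avail):
    # Bids the avail against a pool of preloaded ads; returns the winner's URL.
    bids = [("https://ads.example-cdn.net/creative/123", 0.42),
            ("https://other-ads.example.org/creative/987", 0.38)]
    return max(bids, key=lambda bid: bid[1])[0]

def ad_server(creative_url):
    # The creative itself is usually hosted by yet another party.
    return f"<img src='{creative_url}'>"

user = {"os": "Android", "location": "ZA", "screen": "1080x1920"}
ad_markup = ad_server(ad_insertion_service(app_builds_avail(user)))
# Four parties touched a single ad request - and each hop is a potential gap.
```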

This highly porous ad insertion system has become a recent favorite of hackers and fraudsters. Even without a technical breach it is an incredibly easy system to game: because of the speed of the transactions and the near-impossibility of monitoring them in real time, many ‘deviations’ are possible… and common.

There are a number of ingenious methods being touted right now to address both the actual protection of virtualized and distributed compute environments and the monitoring of such things as intrusions, breaches and unintended data movement – all things that traditional IT tools don’t address well at all.

I am not yet aware of a ‘perfect’ solution to either the protection or the monitoring aspects, but here are a few ideas. [NOTE: some of these are my own ideas; some have been taken up by vendors as potential products/services. I don’t feel qualified to judge the merits of any particular commercial product at this point, and the focus of this article is on concepts rather than detailed implementations, so I’ll refrain from discussing specific products.]

  • User endpoint devices (anything from a human’s cellphone to another server) must be pre-authenticated (using a combination of currently well-known identification methods such as MAC address, embedded token, etc.). On top of this basic trust environment, each session is authenticated with at least a two-factor logon scheme (such as biometric plus PIN, or certificate plus one-time token). Once the endpoints are authenticated, a one-time-use VPN is established for each connection.
  • Endpoint devices and users are combined into ‘profiles’ that are stored as part of a security monitoring application. Each user may have more than one profile: for instance, the same user may typically perform (or be allowed by his/her firm’s security protocol to perform) different actions from a cellphone than from a corporate laptop. The actions each user takes are automatically monitored and restricted. For instance, the VPNs discussed in the point above can be individually tailored to allow only certain kinds of traffic to and from certain endpoints. Actions that fall outside the pre-established scope, or outside a heuristic pattern for that user, can either be denied or referred for further authorization.
  • Using techniques similar to the SSL methodologies that protect and authenticate online financial transactions, different kinds of certificates can be used to permit certain kinds of ‘transactions’ (a transaction being access to certain data, permission to move/copy/delete data, etc.). In a sense it’s a bit like the layered security within the Amazon store: it takes one level of authentication to get in and place an order, and another level of ‘security’ to actually pay for something (you must have a valid credit card that is authenticated in real time by the clearing houses for Visa/MasterCard, etc.). For instance, a user may log into a network/application instance with a biometric on a pre-registered device (such as a fingerprint on an iPhone 6 previously registered in the domain as an authenticated device). But if that user then wishes to move several terabytes of a Hollywood studio’s content to a remote storage site (!!), they would need to submit an additional certificate and PIN. A minimal sketch of how these layers might fit together follows this list.
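The sketch below is only an illustration of the layering described above – device pre-authentication, a second factor per session, and an extra credential for high-risk ‘transactions’. The device registry, profile names, size threshold and policy table are all assumptions made for the example, not any vendor’s implementation:

```python
# Hypothetical sketch of layered session + transaction authorization.
# The device registry, factors and thresholds are illustrative assumptions.

REGISTERED_DEVICES = {"a1:b2:c3:d4:e5:f6"}          # pre-authenticated endpoints
HIGH_RISK_BYTES = 100 * 1024**3                     # e.g. bulk moves over ~100 GB

def open_session(device_id, fingerprint_ok, pin_ok):
    # Layer 1: the device itself must already be trusted.
    if device_id not in REGISTERED_DEVICES:
        raise PermissionError("unregistered device")
    # Layer 2: two-factor logon (e.g. biometric + PIN) for this session only.
    if not (fingerprint_ok and pin_ok):
        raise PermissionError("two-factor authentication failed")
    # In a real system a one-time-use VPN/tunnel would be established here.
    return {"device": device_id, "profile": "corporate_laptop"}

def authorize_action(session, action, size_bytes=0, extra_cert=None):
    # Layer 3: actions outside the profile's normal scope need more credentials.
    allowed = {"corporate_laptop": {"read", "edit", "move"}}
    if action not in allowed[session["profile"]]:
        return "denied"
    if action == "move" and size_bytes > HIGH_RISK_BYTES and extra_cert is None:
        return "referred: additional certificate + PIN required"
    return "allowed"

session = open_session("a1:b2:c3:d4:e5:f6", fingerprint_ok=True, pin_ok=True)
print(authorize_action(session, "move", size_bytes=2 * 1024**4))  # terabytes -> referred
```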

An Integrated Immune System for Data Security

The goal of a highly efficient and manageable ‘immune system’ for a hyper-connected data infrastructure is to protect against all possible threats with the least direct supervision possible. Not only is it impossible for a centralized, omniscient monitoring system to handle the incredible number of sessions that take place in even a single modern hyper-network; it is equally difficult for a single monitoring / intrusion-detection device to understand and adapt to the myriad local contexts and ‘rules’ that define what is ‘normal’ and what is a ‘breach’.

The only practical way to implement such an ‘immune system’ for large hyper-networks is to distribute the security and protection infrastructure throughout the entire network. Just as in the human body, where ‘security’ begins at the cellular level (with cell membranes allowing only certain compounds to pass, depending on the type and location of each cell), each local device or application must have a certain amount of security built into its ‘cellular structure’.

As cells become building blocks for larger structures and eventually organs or other systems, the same ‘layering’ model can be applied to IT structures: the bulk of security actions are taken automatically at lower levels, with only issues that deviate substantially from the norm being brought to the attention of higher-level, more centralized security detection and response systems.
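One way to picture that layering in code: each ‘cell’ handles routine deviations itself and only escalates significant anomalies upward. The anomaly score and escalation threshold below are purely illustrative assumptions, not a description of any real product:

```python
# Illustrative sketch of layered, mostly-automatic anomaly handling.
# The anomaly scores and escalation threshold are assumptions for the example.

ESCALATION_THRESHOLD = 0.8   # only significant deviations reach the central system

def local_agent_handles(event, anomaly_score):
    """Runs on (or near) each device/application - the 'cell membrane'."""
    if anomaly_score < ESCALATION_THRESHOLD:
        # Handled automatically and silently, like the immune system's routine
        # work: block, quarantine, or simply drop the request.
        return {"action": "handled_locally", "event": event}
    # Only the rare, serious deviation is passed up to the next layer.
    return {"action": "escalate", "event": event, "score": anomaly_score}

events = [("odd_login_time", 0.3), ("bulk_export_to_unknown_host", 0.95)]
for name, score in events:
    result = local_agent_handles(name, score)
    if result["action"] == "escalate":
        print("central security review needed:", result)
```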

Another issue to be aware of: over-reporting. It’s all well and good to log certain events… but who or what is going to review millions of lines of logs if every event that deviates even slightly from some established ‘norm’ is recorded? And even then, that review will only be looking in the rear-view mirror. The human body doesn’t generate any logs at all and yet manages to more or less handle the security of 37.2 trillion cells!

That’s not to say that no logs at all should be kept – they can be very useful in understanding breaches and improving defenses in the future – but the logs should be designed with that purpose in mind and recycled as appropriate.
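As one small, concrete example of ‘designed and recycled’ logging (the file name, size cap and retention count below are assumptions for illustration), the idea is to record only escalated events and let the log files roll over automatically:

```python
# Sketch: log only escalated security events, and recycle the files automatically.
# File name, size cap and retention count are illustrative assumptions.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler("security_escalations.log",
                              maxBytes=10 * 1024 * 1024,  # ~10 MB per file
                              backupCount=5)              # keep 5 old files, then recycle
logger = logging.getLogger("immune_system")
logger.setLevel(logging.WARNING)   # routine, locally-handled events never reach the log
logger.addHandler(handler)

logger.warning("escalated: bulk_export_to_unknown_host score=0.95")
```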

Summary

In this very brief overview we’ve discussed some of the challenges, and possible solutions, presented by the very different security paradigm that the hyper-connected and diverse nature of today’s data ecosystems demands. As the number of ‘unmanned’ devices, sensors, beacons and other ‘things’ continues to grow exponentially – and as most of humanity becomes connected to the ‘Internet’ to some degree – the scale of the security issue is truly enormous.

A few ideas and thoughts that can lead to effective, scalable and affordable solutions have been discussed – many of them new and still works in progress, but offering at least a partially viable path forward. The most important thing to take away is an awareness of how things must change: keep asking questions, and don’t assume that the security techniques that worked last year will keep you safe next year.
