The Trinity of Functional IoT
As the name implies, the “Things” that comprise an IoT universe must be connected in order for the ecosystem to operate. This network interconnection is what makes a fully successful implementation of IoT technology possible. It is also important to realize that this network will often operate bi-directionally, with the “Things” at the edge of the network acting either as input devices (sensors) or output devices (actuators).
Input (Sensors)
The variety, complexity and capability of input sensors in the IoT universe are almost without limit. Almost anything that can be measured in some way will spawn an IoT sensor to communicate that data to something else. In many cases, sensors may be very simple, measuring only a single parameter. In other cases, a combined sensor package may measure many parameters, providing a complete environmental ‘picture’ as a dataset. For instance, a simple sensor may measure only temperature, and a use case might be a sensor embedded in a case of wine before transport. The data is measured once every hour and stored in memory onboard the sensor, then ‘read’ upon arrival at the retail point to ensure that acceptable maximums or minimums were not exceeded. Thermal extremes are the single largest external loss factor in the worldwide transport of wine, so this is not a trivial matter.

On the other hand, a small package – the size of a pack of cigarettes – attached to a vehicle can measure speed, acceleration, location, distance traveled from waypoints, temperature, humidity, relative light levels (to indicate degree of daylight), etc. If the sensor package is also connected to the vehicle’s computer, a myriad of engine and other component data can be collected. All this data can either be transmitted live or, more likely, sampled and stored, then ‘burst-transmitted’ on a regular basis when a good communications link is available.
An IoT sensor has, at a minimum, the following components: the actual sensing element, internal processing, data formation, and transmission or storage. More complex sensors may contain both storage and data transmission, multiple transmission methodologies, preprocessing and data aggregation, etc. At this time, the push for most vendors is to get sensors manufactured and deployed in the field to gain market share and increase sales in the IoT sector. Long-term thought about security, compatibility, data standards, etc. is often not given. Since the scale of IoT sensor deployment is anticipated to exceed the physical deployment of any other technology in the history of humanity, new paradigms will have to evolve to enable this rollout in an effective manner.
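To make that minimal component list concrete, the sketch below models the sense, process, format, and store/transmit pipeline in Python. It is purely illustrative: the class names, the buffer size and the read-out scenario are assumptions, not any vendor’s design.

```python
from dataclasses import dataclass
from time import time

@dataclass
class SensorReading:
    """One sample, as a minimal sensor might format it for storage or transmission."""
    device_id: str    # factory-assigned identifier
    timestamp: float  # epoch seconds when the sample was taken
    value: float      # the single measured parameter, e.g. degrees C

class TemperatureSensor:
    """Minimal pipeline: sense -> internal processing -> data formation -> storage."""
    def __init__(self, device_id: str, storage_limit: int = 1024):
        self.device_id = device_id
        self.storage_limit = storage_limit
        self.buffer = []  # onboard memory, 'read' later at the retail point

    def sample(self, raw_value: float) -> None:
        # 'internal processing' may be as little as smoothing or unit conversion
        reading = SensorReading(self.device_id, time(), round(raw_value, 1))
        if len(self.buffer) < self.storage_limit:
            self.buffer.append(reading)

    def read_out(self) -> list:
        # non-destructive designs exist too; here read-out empties the buffer
        readings, self.buffer = self.buffer, []
        return readings
```

A more capable sensor would layer transmission, preprocessing or aggregation on top of this same skeleton.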
While the large-scale deployment of billions of sensors will bring many new benefits to our technological landscape, and will undoubtedly improve many real-world concerns such as health care, environmental safety, efficiency of resource utilization, traffic management, etc., this huge injection of edge devices will also collectively pose one of the greatest security threats ever experienced in the IT landscape. Due to the current lack of standards, the rush to market, and an incomplete understanding of even the security model that IoT presents, most sensors do not have security embedded as a fundamental design principle.

There are additional challenges to even the basic functionality, let alone security, of IoT sensors: those of updating, authenticating and validating such devices and the data that they produce. If a million small, inexpensive temperature sensors are deployed by a logistics firm, there is no way to individually upgrade these devices should either a significant security flaw be discovered or the device itself be found to operate inaccurately. For example, say a firmware programming error in such a sensor results in erroneous readings once the sensor has been continuously exposed to an ambient temperature of -25°C or below for more than 6 hours. This may not have been considered in a design lab in California, but once the sensors are being used in northern Sweden the issue is discovered. In a normal situation, the vendor would release a firmware patch, the IT department would roll it out, and all would be fixed… not an option in the world of tiny, cheap, non-upgradable IoT devices.
Many (read: most, as of the time of this article) sensors have little or no real security, authentication or data-encryption functionality. If logistics firms are subject to penalties for delivering goods to retailers that have exceeded the prescribed temperature min/max levels, some firm somewhere may be enticed to substitute readings from a set of sensors that were kept in a more appropriate temperature environment – how is this raw temperature data authenticated? What about sensors attached to a human pacemaker, reporting back biomedical information that is personally identifiable? Is a robust encryption scheme applied (as would be required by USA HIPAA regulations)?
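One common answer to the authentication question is a per-device secret key and a message authentication code, so that substituted or altered readings fail verification upstream. The sketch below uses Python’s standard hmac module; the key, field names and substitution scenario are assumptions for illustration, not a description of any shipping product.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device secret provisioned at manufacture"  # hypothetical

def sign_reading(reading: dict, key: bytes = DEVICE_KEY) -> dict:
    """Attach an HMAC tag so upstream systems can verify origin and integrity."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "tag": tag}

def verify_reading(message: dict, key: bytes = DEVICE_KEY) -> bool:
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])  # constant-time compare

msg = sign_reading({"device_id": "T-001", "ts": 1700000000, "temp_c": 11.5})
assert verify_reading(msg)            # authentic reading passes
msg["reading"]["temp_c"] = 9.0        # a 'more convenient' substituted value...
assert not verify_reading(msg)        # ...fails verification upstream
```

Key management, not the MAC itself, is the hard part: a million cheap sensors sharing one key is exactly the kind of shortcut the rush to market encourages.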
There is another issue that will come back to haunt us collectively in a few years: vendor obsolescence. Whether a given manufacturer goes out of business, deprecates its support of a particular line of IoT sensors, or leaves the market for another reason, ‘orphaned’ devices will soon become a reality in the IoT universe. One may think “Oh well, I’ll just have to replace these sensors with new ones” is the answer, but that will not always be easy. What about sensors that are embedded deep within industrial machines, aircraft, motorcars, etc.? These could be very expensive or practically impossible to replace, particularly on a large scale. And to further challenge this line of thought, what if a sensor manufacturer used a proprietary communications scheme that was never escrowed before the firm went out of business? Then we are faced with a very abrupt ‘darkening’ of thousands or even millions of sensor devices.
All of the above variables should be considered before a firm embarks on a large-scale rollout of IoT sensor technology. Not all of the issues have immediate solutions; some of the challenges can be ameliorated in the network layer (discussed later in this post), and some can be resolved by making an appropriate choice of vendor or device up front.
Output (Actuators)
Actuators may be stand-alone (i.e. just an output device) or may be combined with an IoT input sensor. An example might be an intelligent light bulb designed for outdoor night lighting, where the sensor detects that the ambient light has fallen to a predetermined level (which may be externally programmable) and, in addition to reporting this data upstream, directly triggers the actuator (the light bulb itself) to turn on. In many cases an actuator, in addition to acting on data sent to it over an IoT network, will report back with additional data as well, and so in some sense contains both a sensor and an actuator. An example, again using a light bulb: the bulb turns on only when specifically instructed by external data, but if the light element fails, the bulb informs the network that the device is no longer capable of producing light – even though it is still receiving data. A robustly designed network would also require light bulb actuators to issue an occasional ‘heartbeat’, so that if the bulb unit fails completely, the network will know this and report the failure.
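A heartbeat scheme of this kind is simple to model. The sketch below is a network-side monitor that flags any actuator that goes silent; the 60-second interval and the grace factor are assumed values, not part of any standard.

```python
import time

HEARTBEAT_INTERVAL = 60  # seconds between 'I am alive' messages (assumed)
GRACE = 2.5              # missed intervals tolerated before declaring failure

class HeartbeatMonitor:
    """Network-side tracking of last-seen times for a set of actuators."""
    def __init__(self):
        self.last_seen = {}  # device_id -> epoch seconds of last heartbeat

    def beat(self, device_id: str) -> None:
        self.last_seen[device_id] = time.time()

    def dead_devices(self) -> list:
        cutoff = time.time() - HEARTBEAT_INTERVAL * GRACE
        return [dev for dev, seen in self.last_seen.items() if seen < cutoff]

monitor = HeartbeatMonitor()
monitor.beat("bulb-porch-1")
# If "bulb-porch-1" then goes silent, roughly 150 seconds later:
# monitor.dead_devices() -> ["bulb-porch-1"]
```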

The issue of security was discussed above concerning input sensors, but it applies to output actuators as well. In fact, the security and certainty surrounding an IoT actuator is often more immediately important than that of a sensor. A compromised sensor will result in bad or missing data, which can still be accommodated within the network or computational schema that uses that data. An actuator that has been compromised or ‘hacked’ can directly affect either the physical world or a portion of a network, and so can cause immediate harm. Imagine a set of actuators that control piping valves in a high-pressure gas pipeline installation: if some valves were suddenly closed while others were opened, a ‘hammer’ effect could easily cause a rupture, with potentially disastrous results. It is imperative that at such high-risk points a strong and multilayered set of security protocols be in place.
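One layer of such a multilayered defense can live in the actuator itself: a local interlock that refuses physically implausible command sequences even if the commands arrive properly authenticated. The sketch below is hypothetical; the 30-second minimum between valve movements is an assumed process limit, not a real pipeline specification.

```python
import time

class ValveInterlock:
    """Local sanity layer: reject command patterns that could water-hammer the line."""
    MIN_SECONDS_BETWEEN_MOVES = 30.0  # assumed mechanical/process limit

    def __init__(self):
        self.position_open = False
        self.last_move = 0.0  # epoch seconds of the last accepted state change

    def command(self, want_open: bool) -> bool:
        """Return True if the command is executed, False if rejected."""
        now = time.time()
        if want_open == self.position_open:
            return True  # no state change requested; always safe
        if now - self.last_move < self.MIN_SECONDS_BETWEEN_MOVES:
            return False  # too fast: reject and let the network raise an alarm
        self.position_open = want_open
        self.last_move = now
        return True
```

Command authentication (as in the HMAC sketch earlier) and network-level anomaly detection would sit above a guard like this; no single layer is sufficient.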
This issue, along with other reliability issues, will likely delay the deployment of many IoT implementations until adequate testing and use-case experience demonstrates that current ‘closed-system’ industrial control networks can be safely replaced with a more open IoT structure. Another area where IoT will require much testing and rigorous analysis is vehicles, particularly autonomous cars. The safety of life and property will become highly dependent on the interactions of both sensors and actuators.
Other issues and vulnerabilities that affect input sensors are applicable to actuators as well: updating firmware, vendor obsolescence and a functional set of standards. Just as in the world of sensors, many of the shortcomings of individual actuators must be handled by the network layer in order for the entire system to demonstrate the required degree of robustness.
Network & Infrastructure
While sensors and actuators are the elements of IoT that receive the most attention, and are in fact the devices that form the edge of the IoT ecosystem, the far less visible network and associated infrastructure is absolutely vital for this technology to function. In fact, the overall infrastructure is more important, and carries a greater responsibility for the overall functionality of IoT, than either sensors or actuators. Although the initial demonstrations and implementations of IoT technology currently run over traditional IP networks, this must change. The current model of remote users (or machines) connecting to other remote users, data centers or cloud combinations cannot scale to the degree required for a large-scale, successful implementation of IoT.

In addition, a functional IoT network/infrastructure must contain elements that are not present in today’s information networks, providing many levels of distributed processing, data aggregation and other functions. Some of the reasons driving these new requirements for the IoT network layer have been discussed above; in general, the infrastructure must make up for the shortcomings and limitations of both sensors and actuators as they age in place over time. The single largest reason the network layer will be responsible for the bulk of security, upgrading/adaptation, dealing with obsolescence, etc. is that the network is dynamic: it can be continually adjusted and tuned to the ongoing requirements of the sensors, actuators and the data centers/users where the IoT information is processed or consumed.
The reference to ‘infrastructure’ in addition to ‘network’ is for a very good reason: in order for IoT to function well on a long-term basis, substantial ingredients beyond just a simple network of connectivity are required. There are three main drivers of this additional requirement: data reduction & aggregation, security & reliability, and adaptation/support of IoT edge devices that no longer function optimally.
Data Reduction & Aggregation
The amount of data that will be generated and/or consumed by billions of sensors and actuators is gargantuan. According to one of the most recent Cisco VNI forecasts, global internet traffic will exceed 1.3 zettabytes by the end of this year. 1 zettabyte = 1 million petabytes, and 1 petabyte = 1 million gigabytes… to give some idea of the scale of current traffic. And this is with IoT barely beginning to show up on the global data transmission landscape. If we take even a conservative estimate of 10 billion IoT devices being added to the global network each year between now and 2020, and we assume that on average each edge device transmits/receives only 1 KB/s (one kilobyte per second), the math follows: roughly 30 GB per device per year × 10 billion devices = 300 exabytes of newly added data per year – at a minimum.
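That back-of-the-envelope calculation is reproduced below as a quick sanity check; the 1 KB/s average and the 10-billion-devices-per-year figure are the assumptions stated above.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600    # 31,536,000
BYTES_PER_SEC = 1_000                 # 1 KB/s average per edge device (assumed)
NEW_DEVICES_PER_YEAR = 10_000_000_000

per_device = BYTES_PER_SEC * SECONDS_PER_YEAR   # ~3.15e10 bytes
total = per_device * NEW_DEVICES_PER_YEAR       # ~3.15e20 bytes

print(f"{per_device / 1e9:.1f} GB per device per year")    # ~31.5 GB (the ~30 GB above)
print(f"{total / 1e18:.0f} exabytes of new traffic/year")  # ~315 EB (the ~300 EB above)
```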
While this may not seem like a huge increase (about a 25% annual increase in overall data traffic worldwide), a number of factors make it much more burdensome to current network topologies than may first be apparent. The current global network system supports basically three types of traffic: streaming content (music, video, etc.) that emanates from a small number of CDNs (Content Distribution Networks) and feeds millions of subscribers; database queries and responses (Google searches, credit card authorizations, financial transactions and the like); and ad hoc bi-directional data moves (business documents, publications, research and discovery, etc.). The first of these (streaming) is inherently unidirectional, and specialized CDNs have been built to accommodate it, with many peering routers moving this traffic off the ‘general highway’ onto dedicated routes so that users experience the low latency they have come to expect. The second type, queries and responses, typically consists of very small data packets that hit a large, purpose-designed data center, which can process the query very quickly and respond, again with a very small data load. The third type, which has the broadest range of data types, is often not required to have near-instantaneous delivery or response; the user is less worried about a few seconds of delay on the upload of a scientific paper or the download of a commercial contract. A delay of more than 2 seconds after a Google search is submitted, on the other hand, is seen as frustrating…
Now, enter the world of IoT sensors and actuators onto this already crowded IT infrastructure. The data detected and transmitted by sensors will very often be time-sensitive. For instance, the position of an autonomous vehicle must be updated every 100 ms or the safety of that vehicle and others around it can be affected. If Amazon succeeds in getting delivery drones licensed, we will have tens of thousands of these critters flying around our heavily congested urban areas – again requiring critically frequent updates of positional data, among other parameters. Latency rapidly becomes the problem even more than bandwidth… and the internet, in its gloriously redundant design, holds the ultimate delivery of the packet as its prime law – not how long delivery takes or how many packets can be delivered. Remember, the initial design of the Internet (which is basically unchanged after almost 50 years) was a redundant mesh of connectivity built so that the huge bandwidth of 300 bits per second (a teletype machine, basically) could reach its destination even if a nuclear attack wiped out major nodes on the network.
The current design of data center connectivity (even for such giants as Amazon Web Services, Google Compute, Microsoft Azure) is a star network. This has one large data center (or a small cluster of them) at the center of the ‘cloud’, with all the users attached like spokes on a wheel at the edge. As the number of users grows, the challenge is to keep raising the capacity of the ‘gateways’ into the actual computational/storage center of the cloud. It is very expensive to duplicate data centers, and doing so brings additional data transmission costs, as all the data centers (of a given vendor) must constantly be synchronized. Essentially, the star model of central reception, processing, and sending data back to the edge fails at the scale and latency required for IoT to succeed.
One possible solution to avoid this congestion at the center of the network is to push some computation to the edge, reducing the amount of data that must be acted upon at the center. This can be accomplished in several ways, but a general model will deal with both data aggregation (whereby data from individual sensors is combined where possible) and data reduction (whereby data flows from individual sensors can be compressed, ignored in some cases, or modified). A few use cases illustrate these points, with a code sketch following the list:
- Data Aggregation: assume a vendor has embedded small, low-cost transpiration sensors in the soil of rows of grape plants on a wine farm. A given plot may have 50 rows, each 100 meters long. With sensors embedded every 5 meters, 1,000 sensors will be generating data. Rather than push all that individual data up to a data center (or even to a local server at the wine farm), an intelligent network could aggregate the data and report that, on average, the soil does or does not need watering. That is a 1000:1 reduction in network traffic up to the center…
- Data Reduction: using the same example, if one desired a somewhat more granular sensing of the wine plot, the intelligent network could examine the data from each row, and with a predetermined min/max data range, transmit the data upstream only for those sensors that were out of range. This may effectively reduce the data from 1,000 sensors to perhaps a few dozen.
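Both techniques fit in a few lines once a compute node exists near the edge. The sketch below runs over simulated soil-moisture data; the 1,000-sensor layout matches the example above, while the moisture values and thresholds are invented for illustration.

```python
import random
from statistics import mean

random.seed(1)
# hypothetical soil-moisture readings (%) from 50 rows x 20 sensors = 1,000 sensors
readings = {(row, pos): random.gauss(35.0, 3.0)
            for row in range(50) for pos in range(20)}

# Aggregation: 1,000 values collapse to one field-level answer, a 1000:1 reduction
field_avg = mean(readings.values())
needs_water = field_avg < 33.0  # assumed agronomic threshold

# Reduction: forward only the sensors outside a predetermined min/max band
LOW, HIGH = 28.0, 42.0
exceptions = {k: round(v, 1) for k, v in readings.items() if not LOW <= v <= HIGH}

print(f"aggregate: {field_avg:.1f}% soil moisture, water needed: {needs_water}")
print(f"reduction: {len(exceptions)} of {len(readings)} readings forwarded upstream")
```

In the aggregation path a single number travels upstream; in the reduction path only the exceptions do – typically a couple of dozen out of a thousand, in line with the example above.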
Both of these techniques require distributed compute and storage capabilities to exist within the network itself. This is a new paradigm for networks, which up to this time have been, in reality, quite stupid. Other than passive network hubs/combiners and active switches (which are extremely fast but very limited in their analytical capabilities), current networks are just ribbons of glass or copper. Given the current ability to put substantial compute and storage power in a very small, low-power package (look at smart watches), small ‘nodes of intelligence’ could be embedded into modern networks and literally change the entire fabric of connectivity as we know it.
Further details on how this type of intelligent network could be designed and implemented will be a subject of a future post, but here it’s enough to demonstrate that some sort of ‘smart fabric’ of connectivity will be required to effectively deploy IoT on the enormous scale that is envisioned.
Security & Reliability
The next area in which the infrastructure/network that interconnects IoT will be critical to its success is the overall security, reliability and trustworthiness of the data that is both received from and transmitted to edge devices: sensors and actuators. Not only do the data from sensors, and the instructions to actuators, need to be accurate and protected; the upstream data centers and other entities to which IoT networks are attached must be protected as well. IoT edge devices, due to their limited capabilities and oft-overlooked security features, can provide easy attack surfaces for the entire network. Typical perimeter defense mechanisms (firewalls, intrusion detection systems) will not work in the IoT universe, for several reasons. Mostly this is because IoT devices are often deployed within a network, not just at its outer perimeter. Also, the types of attacks will be very different from what most IDSs trigger on now.
As was touched on earlier in this series, most IoT sensors do not have strong security mechanisms built into the devices themselves. In addition, given the issue of vulnerabilities discovered after deployment, it is somewhere between difficult and impossible to upgrade large numbers of IoT sensors in place. Many sensors are not even designed for bi-directional traffic, so even if a patch were developed, the sensor could not receive it, let alone install it. This boils down to the IoT infrastructure/network bearing the brunt of the security burden for the overall IoT ecosystem.
There are a number of possible solutions that can be implemented in an IoT network environment to enhance security and reliability; one such example is outlined in this paper. Essentially, the network must be intelligent enough to compensate for the ‘dumbness’ of the IoT devices, whether sensors or actuators. One of the trickiest bits will be securing ‘device to device’ communications. Since some IoT devices communicate directly with other nearby IoT devices through a proprietary communications channel, and not necessarily over the ‘network’, there is the opportunity for unsecured traffic to exist.
An example is a home automation system: light sensors may communicate directly with outlets or lamps using the Zigbee protocol and never (directly) communicate over a normal IP network. The issues of out-of-date devices, compromised devices, etc. are not handled (at this time) by the Zigbee protocol, so no protection can be offered there. Potentially, such vulnerabilities could offer an access point into the larger network as a threat surface. The network must ‘understand’ what it is connected to, even if that is a small subsystem rather than a single device, and provide the same degree of supervision and protection to these isolated subsystems as is possible with single devices.
It rapidly becomes apparent that for the network to implement such functions, a high degree of ‘contextual awareness’ and heuristic intelligence is required. With the plethora of devices, types of functions, etc., it won’t be possible to develop, maintain and implement a centrally based ‘rules engine’ to handle this very complex set of tasks. A collective effort will be required from the IoT community to assist in developing and maintaining the knowledge set for the required AI to be ‘baked in’ to the network. While this is at first a considerable challenge, the payoff will be huge in many more ways than just IoT devices working better and being more secure: the large-scale development of a truly contextually aware and intelligent network will change the “Internet” forever.
Adaptation & Support
In a similar manner to providing security and reliability, the network must take on the burden of adapting to obsolete devices, working around broken devices, and monitoring devices for out-of-expected-range behavior. Since the network is dynamic, and (as postulated above) will come to have significant computational capability baked into itself, only the network is positioned to effectively monitor and adapt to the more static (and hugely deployed) set of sensors and actuators.
As in security scenarios, context is vital, and each set of installed sensors/actuators must have a ‘profile’ installed in the network along with the actual device. For instance, a temperature sensor could in theory report back any remotely reasonable reading (say -50°C to +60°C – that covers Antarctica to Baghdad), but if the sensor is installed in a home refrigerator, the range of expected results is far narrower. So as a home appliance vendor turns out units with IoT devices on board that will connect to the network at large, a profile must also be supplied to the network to indicate the expected range of behavior. The same is true for actuators: for an outdoor walkway light that is tacitly assumed to turn on once in the evening and off again in the morning, the network should assume something is wrong if signals come through that would have the light flashing on and off every 10 seconds.
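A device profile of this kind can be very small. The sketch below pairs an expected-range envelope with a plausibility check; the field names and the refrigerator limits are assumptions chosen to mirror the example above.

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    """Expected-behavior envelope supplied to the network alongside the device."""
    kind: str
    min_value: float
    max_value: float
    max_events_per_hour: int  # e.g. how often a walkway light may plausibly switch

# The sensing element could physically report -50..+60 C, but in context:
FRIDGE = DeviceProfile("fridge-temp", min_value=-2.0, max_value=12.0,
                       max_events_per_hour=60)

def plausible(profile: DeviceProfile, value: float) -> bool:
    """The network flags, rather than blindly forwards, out-of-profile readings."""
    return profile.min_value <= value <= profile.max_value

print(plausible(FRIDGE, 4.5))   # True  -- normal refrigerator interior
print(plausible(FRIDGE, 38.0))  # False -- sensor fault, or a door left wide open
```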
One of the things the network will end up doing is ‘deprecating’ some sensors and actuators – whether because they report continually erroneous information or because they have been externally determined to be no longer worth listening to… Even so, the challenge will be continual: not every vendor will announce end-of-life for every sensor or actuator, and not every scenario can be envisioned ahead of time. The law of unintended consequences in a world largely controlled by embedded and unseen interconnected devices will be interesting indeed…
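One possible deprecation policy, sketched below, quarantines a device after a run of consecutive out-of-profile readings. The strike threshold is an arbitrary assumption; a real network would weigh many more signals.

```python
class DeprecationTracker:
    """Stop trusting a device after N consecutive out-of-profile readings."""
    STRIKES = 100  # assumed threshold; tune per device class

    def __init__(self):
        self.strikes = {}        # device_id -> consecutive bad readings
        self.quarantined = set()

    def record(self, device_id: str, in_profile: bool) -> None:
        if in_profile:
            self.strikes[device_id] = 0  # any good reading resets the count
            return
        self.strikes[device_id] = self.strikes.get(device_id, 0) + 1
        if self.strikes[device_id] >= self.STRIKES:
            # keep logging the device, but exclude it from decision-making
            self.quarantined.add(device_id)
```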
The next section of this post “Security & Privacy” may be found here.
References:
Cisco, “The Zettabyte Era – Trends and Analysis”