Parasam

Menu

  • design
  • fashion
  • history
  • philosophy
  • photography
  • post-production
    • Content Protection
    • Quality Control
  • science
  • security
  • technology
    • 2nd screen
    • IoT
  • Uncategorized
  • Enter your email address to follow this blog and receive notifications of new posts by email.

  • Recent Posts

    • Take Control of your Phone
    • DI – Disintermediation, 5 years on…
    • Objective Photography is an Oxymoron (all photos lie…)
    • A Historical Moment: The Sylmar Earthquake of 1971 (Los Angeles, CA)
    • Where Did My Images Go? [the challenge of long-term preservation of digital images]
  • Archives

    • September 2020
    • October 2017
    • August 2016
    • June 2016
    • May 2016
    • November 2015
    • June 2015
    • April 2015
    • March 2015
    • December 2014
    • February 2014
    • September 2012
    • August 2012
    • June 2012
    • May 2012
    • April 2012
    • March 2012
    • February 2012
    • January 2012
  • Categories

    • 2nd screen
    • Content Protection
    • design
    • fashion
    • history
    • IoT
    • philosophy
    • photography
    • post-production
    • Quality Control
    • science
    • security
    • technology
    • Uncategorized
  • Meta

    • Register
    • Log in
    • Entries feed
    • Comments feed
    • WordPress.com

Browsing tag: sensors

IoT (Internet of Things): A Short Series of Observations [pt 2]: Sensors, Actuators & Infrastructure

May 19, 2016 · by parasam

The Trinity of Functional IoT

As the name implies, the "Things" that make up an IoT universe must be connected in order for the ecosystem to operate. This networking interconnection is the magic that will allow a fully successful implementation of IoT technology. It's also important to realize that this network will often operate bi-directionally, with the "Things" at the edge of the network acting either as input devices (sensors) or output devices (actuators).

Input (Sensors)

The variety, complexity and capability of input sensors in the IoT universe is almost without limit. Almost anything that can be measured in some way will spawn an IoT sensor to communicate that data to something else. In many cases, sensors may be very simple, measuring only a single parameter. In other cases, a combined sensor package may measure many parameters, providing a complete environmental 'picture' as a dataset. For instance, a simple sensor may just measure temperature, and a use case might be a sensor embedded in a case of wine before transport. The data is measured once every hour and stored in memory onboard the sensor, then 'read' upon arrival at the retail point to ensure that acceptable maximums or minimums were not exceeded. Thermal extremes are the single largest external loss factor in the worldwide transport of wine, so this is not a trivial matter.
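To make the shipment use case concrete, here is a minimal Python sketch of the idea (the acceptability limits and the simulated readings are illustrative assumptions, not any vendor's specification):

```python
import random

# Hypothetical acceptability window for wine in transit (degrees C).
TEMP_MIN_C = 5.0
TEMP_MAX_C = 25.0


def log_shipment(hours: int) -> list[float]:
    """Simulate one reading per hour, stored in the sensor's onboard memory."""
    return [round(random.uniform(2.0, 30.0), 1) for _ in range(hours)]


def check_on_arrival(readings: list[float]) -> dict:
    """'Read' the sensor at the retail point and flag any thermal excursions."""
    excursions = [t for t in readings if t < TEMP_MIN_C or t > TEMP_MAX_C]
    return {
        "samples": len(readings),
        "min_c": min(readings),
        "max_c": max(readings),
        "excursions": len(excursions),
        "accepted": not excursions,
    }


if __name__ == "__main__":
    log = log_shipment(hours=72)   # a three-day transit
    print(check_on_arrival(log))
```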


On the other hand, a small package – the size of a pack of cigarettes – attached to a vehicle can measure speed, acceleration, location, distance traveled from waypoints, temperature, humidity, relative light levels (to indicate degree of daylight), etc. If, in addition, the sensor package is connected to the vehicle's computer, a myriad of engine and other component data can be collected. All this data can either be transmitted live or, more likely, sampled and stored, then 'burst-transmitted' on a regular basis when a good communications link is available.
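A rough sketch of that store-and-burst pattern might look like the following (the sampling function and link check are placeholders, purely for illustration):

```python
import time
from collections import deque

# Onboard sample store; the oldest samples are dropped first if memory fills.
buffer = deque(maxlen=10_000)


def sample_vehicle() -> dict:
    """Placeholder for reading speed, location, temperature, humidity, etc."""
    return {"ts": time.time(), "speed_kph": 72.4, "temp_c": 31.0}


def link_available() -> bool:
    """Placeholder for checking whether a usable communications link exists."""
    return False


def transmit(batch: list) -> None:
    """Placeholder for the actual upload to the collection point."""
    print(f"uploading {len(batch)} samples")


def tick() -> None:
    """Called on every sampling interval: store locally, burst-send when possible."""
    buffer.append(sample_vehicle())
    if link_available():
        batch = list(buffer)
        buffer.clear()
        transmit(batch)
```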

An IoT sensor has, at a minimum, the following components: the actual sensor element, internal processing, data formation, and transmission or storage. More complex sensors may contain both storage and data transmission, multiple transmission methodologies, preprocessing and data aggregation, etc. At this time, the push for most vendors is to get sensors manufactured and deployed in the field to gain market share and increase sales in the IoT sector. Long-term considerations such as security, compatibility and data standards are often not addressed. Since the scale of IoT sensor deployment is anticipated to exceed the physical deployment of any other technology in the history of humanity, new paradigms will have to evolve to enable this rollout in an effective manner.

While the large-scale deployment of billions of sensors will bring many new benefits to our technological landscape, and will undoubtedly improve many real-world concerns such as health care, environmental safety, efficiency of resource utilization and traffic management, this huge injection of edge devices will also collectively present one of the greatest security threats ever experienced in the IT landscape. Due to the current lack of standards, the rush to market, and a limited understanding of even the security model that IoT presents, most sensors do not have security embedded as a fundamental design principle.


There are additional challenges to even the basic functionality, let alone the security, of IoT sensors: those of updating, authenticating and validating such devices and the data that they produce. If a million small, inexpensive temperature sensors are deployed by a logistics firm, there is no way to individually upgrade these devices should a significant security flaw be discovered or the device itself be found to operate inaccurately. For example, suppose a firmware programming error in such a sensor results in erroneous readings being collected once the sensor has been continuously exposed to an ambient temperature of -25C or below for more than 6 hours. This may not have been considered in a design lab in California, but once the sensors are being used in northern Sweden the issue is discovered. In a normal situation, the vendor would release a firmware update patch, the IT department would roll it out, and all would be fixed… not an option in the world of tiny, cheap, non-upgradable IoT devices…

Many (read: most, as of the time of this article) sensors have little or no real security, authentication or data-encryption functionality. If logistics firms are subject to penalties for delivering goods to retailers that have exceeded the prescribed temperature min/max levels, some firm somewhere may be enticed to substitute readings from a set of sensors that were kept in a more appropriate temperature environment – how is this raw temperature data authenticated? What about sensors attached to a human pacemaker, reporting back biomedical information that is personally identifiable? Is a robust encryption scheme applied (as would be required by US HIPAA regulations)?
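One common way to give such raw readings a degree of authenticity is for the sensor to sign each record with a keyed hash (HMAC) using a per-device secret provisioned at manufacture. The sketch below is an assumption about how that could look, not a description of any shipping device:

```python
import hashlib
import hmac
import json

# Hypothetical per-device secret, provisioned at manufacture and shared with the
# logistics firm's collection system (key management is its own large topic).
DEVICE_KEY = b"per-device secret"


def sign_reading(device_id: str, ts: int, temp_c: float) -> dict:
    """Sensor side: attach an HMAC so a reading cannot be silently substituted."""
    payload = json.dumps({"id": device_id, "ts": ts, "temp_c": temp_c}, sort_keys=True)
    tag = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}


def verify_reading(record: dict) -> bool:
    """Receiving side: recompute the HMAC with the same key and compare."""
    expected = hmac.new(DEVICE_KEY, record["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])


if __name__ == "__main__":
    rec = sign_reading("case-0042", ts=1463616000, temp_c=22.5)
    print(verify_reading(rec))                                # True
    rec["payload"] = rec["payload"].replace("22.5", "12.5")   # tampered reading
    print(verify_reading(rec))                                # False
```

Note that this only protects the data against substitution in transit; it does nothing about a sensor that was itself kept in a friendlier environment, which is why physical attestation and tamper evidence matter as well.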

There is another issue that will come back to haunt us collectively in a few years: vendor obsolescence. Whether a given manufacturer goes out of business, deprecates its support for a particular line of IoT sensors, or leaves the market for another reason, 'orphaned' devices will soon become a reality in the IoT universe. One may think the answer is simply "Oh well, I'll just replace these sensors with new ones," but that will not always be easy. What about sensors that are embedded deep within industrial machines, aircraft, motorcars, etc.? These could be very expensive or practically impossible to replace, particularly at large scale. And to push this line of thought further: what if a certain sensor manufacturer used a proprietary communications scheme that was not escrowed before the firm went out of business? Then we are faced with a very abrupt 'darkening' of thousands or even millions of sensor devices.

All of the above variables should be considered before a firm embarks on a large-scale rollout of IoT sensor technology. Not all of the issues have immediate solutions; some of the challenges can be ameliorated in the network layer (discussed later in this post), and some can be resolved by making an appropriate choice of vendor or device up front.

Output (Actuators)

Actuators may be stand-alone (i.e. just an output device), or may be combined with an IoT input sensor. An example might be an intelligent light bulb designed for outdoor night lighting: the sensor detects that the ambient light has fallen to a predetermined (and possibly externally programmable) level and, in addition to reporting this data upstream, directly triggers the actuator (the light bulb itself) to turn on. In many cases an actuator, in addition to acting on data sent to it over an IoT network, will also report back with additional data, so in some sense it contains both a sensor and an actuator. An example, again using a light bulb: the bulb turns on only when specifically instructed by external data, but if the light element fails, the bulb will inform the network that the device is no longer capable of producing light, even though it is still receiving data. A robustly designed network would also require light bulb actuators to issue an occasional 'heartbeat', so that if the bulb unit fails completely the network will know this and report the failure.
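A minimal sketch of that heartbeat supervision, from the network's point of view (the interval and tolerance values are illustrative assumptions):

```python
import time

HEARTBEAT_INTERVAL_S = 60      # how often a healthy bulb is expected to report
MISSED_BEATS_ALLOWED = 3       # tolerate a little packet loss before alarming

last_seen: dict[str, float] = {}   # device id -> timestamp of last heartbeat


def on_heartbeat(device_id: str) -> None:
    """Record that a device checked in."""
    last_seen[device_id] = time.time()


def failed_devices() -> list[str]:
    """Devices that have gone silent for longer than the allowed window."""
    cutoff = time.time() - HEARTBEAT_INTERVAL_S * MISSED_BEATS_ALLOWED
    return [dev for dev, ts in last_seen.items() if ts < cutoff]
```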


The issue of security was discussed above concerning input sensors, but it also applies to output actuators. In fact, the security and certainty surrounding an IoT actuator is often more immediately important than for a sensor. A compromised sensor will result in bad or missing data, which can still be accommodated within the network or computational schema that uses the data. An actuator that has been compromised or 'hacked' can directly affect the physical world or a portion of a network, and so can cause immediate harm. Imagine a set of actuators that control piping valves in a high-pressure gas pipeline installation… if some valves were suddenly closed while others were opened, a 'hammer' effect could easily cause a rupture, with potentially disastrous results. It is imperative that at such high-risk points a strong, multilayered set of security protocols is in place.

This issue, along with other reliability issues, will likely delay the deployment of many IoT implementations until adequate testing and use case experience demonstrates that current ‘closed-system’ industrial control networks can be safely replaced with a more open IoT structure. Another area where IoT will require much testing and rigorous analysis will be in vehicles, particularly autonomous cars. The safety of life and property will become highly dependent on the interactions of both sensors and actuators.

Other issues and vulnerabilities that affect input sensors are applicable to actuators as well: updating firmware, vendor obsolescence and a functional set of standards. Just as in the world of sensors, many of the shortcomings of individual actuators must be handled by the network layer in order for the entire system to demonstrate the required degree of robustness.

Network & Infrastructure

While sensors and actuators are the elements of IoT that receive the most attention, and are in fact the devices that form the edge of the IoT ecosystem, the more invisible network and associated infrastructure is absolutely vital for this technology to function. In fact, the overall infrastructure is more important, and carries a greater responsibility for the overall functionality of IoT, than either sensors or actuators. Although the initial demonstration and implementation of IoT technology currently uses traditional IP networks, this must change. The current model of remote users (or machines) connecting to other remote users, data centers or cloud combinations cannot scale to the degree required for a large-scale, successful implementation of IoT.


In addition, a functional IoT network/infrastructure must contain elements that are not present in today's information networks, providing many levels of distributed processing, data aggregation and other functions. Some of the reasons driving these new requirements for the IoT network layer have been discussed above; in general, the infrastructure must make up for the shortcomings and limitations of both sensors and actuators as they age in place over time. The single largest reason the network layer will be responsible for the bulk of security, upgrading/adaptation, dealing with obsolescence, etc. is that the network is dynamic and can be continually adjusted and tuned to the ongoing requirements of the sensors, actuators and the data centers/users where the IoT information is processed or consumed.

The reference to ‘infrastructure’ in addition to ‘network’ is for a very good reason: in order for IoT to function well on a long-term basis, substantial ingredients beyond just a simple network of connectivity are required. There are three main drivers of this additional requirement: data reduction & aggregation, security & reliability, and adaptation/support of IoT edge devices that no longer function optimally.

Data Reduction & Aggregation

The amount of data that will be generated and/or consumed by billions of sensors and actuators is gargantuan. According to one of the most recent Cisco VNI forecasts, global internet traffic will exceed 1.3 zettabytes by the end of this year. 1 zettabyte = 1 million petabytes, and 1 petabyte = 1 million gigabytes, which gives some idea of the scale of current traffic. And this is with IoT barely beginning to show up on the global data transmission landscape. If we take even a conservative estimate of 10 billion IoT devices added to the global network each year between now and 2020, and we assume that on average each edge device transmits/receives only about 1 kilobyte per second (roughly 30 GB per device per year), the math follows: 30 GB per device per year × 10 billion devices = 300 exabytes of new data added per year, at a minimum.
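For clarity, here is that arithmetic worked out explicitly (the per-device rate and device count are the assumptions stated above):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365        # ~31.5 million seconds
BYTES_PER_SECOND = 1_000                     # ~1 KB/s per edge device (assumption)
DEVICES_ADDED_PER_YEAR = 10_000_000_000      # 10 billion new devices (assumption)

gb_per_device_year = BYTES_PER_SECOND * SECONDS_PER_YEAR / 1e9          # ~31.5 GB
eb_added_per_year = gb_per_device_year * DEVICES_ADDED_PER_YEAR / 1e9   # 1 EB = 1e9 GB

print(f"{gb_per_device_year:.1f} GB per device per year")    # ~31.5
print(f"{eb_added_per_year:.0f} EB of new traffic per year")  # ~315
```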

While this may not seem like a huge increase (about a 25% annual increase in overall data traffic worldwide), there are a number of factors that make this much more burdensome to current network topologies than may first be apparent. The current global network system supports basically three types of traffic: streaming content (music, videos, etc.) that emanates from a small number of CDNs (Content Distribution Networks) and feeds millions of subscribers; database queries and responses (Google searches, credit card authorizations, financial transactions and the like); and ad hoc bi-directional data moves (business documents, publications, research and discovery, etc.). The first of these (streaming) is inherently unidirectional, and specialized CDNs have been built to accommodate this traffic, with many peering routers moving it off the 'general highway' onto dedicated routes so that users experience the low latency they have come to expect. The second type, queries and responses, typically consists of very small data packets that hit a large, purpose-designed data center which can process the query very quickly and respond, again with a very small data load. The third type, which has the broadest range of data types, is often not required to have near-instantaneous delivery or response; the user is less worried about a few seconds' delay on the upload of a scientific paper or the download of a commercial contract. (A delay of more than 2 seconds after a Google search is submitted, however, is seen as frustrating…)

Now, enter the world of IoT sensors and actuators onto this already crowded IT infrastructure. The type of data that is detected and transmitted by sensors will very often be time-sensitive. For instance, the position of an autonomous vehicle must be updated every 100 ms or the safety of that vehicle and others around it can be affected. If Amazon succeeds in getting delivery drones licensed, we will have tens of thousands of these critters flying around our heavily congested urban areas, again requiring critically frequent updates of positional data, among other parameters. Latency rapidly becomes the problem, even more than bandwidth… and the internet, in its glorious redundant design, holds the ultimate delivery of the packet as the prime law, not how long delivery takes or how many packets can ultimately be delivered. Remember, the initial design of the Internet (basically unchanged for almost 50 years now) was a redundant mesh of connectivity that would allow the huge bandwidth of 300 bits per second (basically a teletype machine) to reach its destination even in the face of a nuclear attack wiping out major nodes on the network.

The current design of data center connectivity (even for such giants as Amazon Web Services, Google Compute, Microsoft Azure) is a star network. This has one (or a small cluster) of large data centers at the center of the 'cloud', with all the users attached like spokes on a wheel at the edge. As the number of users grows, the challenge is to keep raising the capacity of the 'gateways' into the actual computational/storage center of the cloud. It's very expensive to duplicate data centers, and doing so brings additional data transmission costs, as all the data centers (of a given vendor) must constantly be synchronized. Essentially, the star model of central reception, processing and then sending data back to the edge fails at the scale and latency required for IoT to succeed.

One possible solution to avoid this congestion at the center of the network is to push some computation to the edge, reducing the amount of data that must be acted upon at the center. This can be accomplished in several ways, but a general model will deal with both data aggregation (whereby data from individual sensors is combined where possible) and data reduction (where data flows from individual sensors can be compressed, ignored in some cases, or modified). A few use cases will illustrate these points (a short sketch of both techniques follows the list):

  • Data Aggregation: assume a vendor has embedded small, low-cost transpiration sensors in the soil of rows of grape vines on a wine farm. A given plot may have 50 rows, each 100 meters long. With sensors embedded every 5 meters, 1,000 sensors will be generating data. Rather than push all that individual data up to a data center (or even to a local server at the wine farm), an intelligent network could aggregate the data and report that, on average, the soil does or does not need watering. That is a 1000:1 reduction in network traffic up to the center…
  • Data Reduction: using the same example, if one desired somewhat more granular sensing of the plot, the intelligent network could examine the data from each row and, given a predetermined min/max data range, transmit upstream only the data from those sensors that were out of range. This may effectively reduce the data from 1,000 sensors to perhaps a few dozen.
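A compact sketch of both techniques, as they might run on an in-network node near the vineyard (the field names and acceptable moisture range are illustrative assumptions):

```python
# Readings arrive as (row, position_m, soil_moisture_pct) tuples from 1,000 sensors.
MOISTURE_MIN = 20.0    # assumed acceptable range for this plot
MOISTURE_MAX = 45.0


def aggregate(readings: list[tuple[int, int, float]]) -> dict:
    """Data aggregation: reduce 1,000 readings to one plot-level summary."""
    mean = sum(m for _, _, m in readings) / len(readings)
    return {"count": len(readings), "mean_moisture": round(mean, 1),
            "needs_water": mean < MOISTURE_MIN}


def out_of_range(readings: list[tuple[int, int, float]]) -> list[tuple[int, int, float]]:
    """Data reduction: forward upstream only the sensors that are out of range."""
    return [r for r in readings if not (MOISTURE_MIN <= r[2] <= MOISTURE_MAX)]
```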

Both of these techniques require distributed compute and storage capabilities to exist within the network itself. This is a new paradigm for networks, which up to now have in reality been quite dumb. Other than passive network hubs/combiners and active switches (which are extremely fast but very limited in their analytical capabilities), current networks are just ribbons of glass or copper. With our current ability to put substantial compute and storage power into a very small, low-power package (look at smart watches), small 'nodes of intelligence' could be embedded into modern networks and literally change the entire fabric of connectivity as we know it.

Further details on how this type of intelligent network could be designed and implemented will be the subject of a future post; here it is enough to note that some sort of 'smart fabric' of connectivity will be required to deploy IoT effectively on the enormous scale that is envisioned.

Security & Reliability

The next area in which the infrastructure/network that interconnects IoT will be critical to its success is the overall security, reliability and trustworthiness of the data received from, and transmitted to, edge devices: sensors and actuators. Not only does the data from sensors, and the instructions to actuators, need to be accurate and protected; the upstream data centers and other entities to which IoT networks are attached must be protected as well. IoT edge devices, due to their limited capabilities and oft-overlooked security features, can provide easy attack surfaces for the entire network. Typical perimeter defense mechanisms (firewalls, intrusion detection systems) will not work in the IoT universe, for several reasons. Mostly this is because IoT devices are often deployed within a network, not just at the outer perimeter. Also, the types of attacks will be very different from what most IDS trigger on now.

As was touched on earlier in this series, most IoT sensors do not have strong security mechanisms built into the devices themselves. In addition, given the vulnerabilities discovered after deployment, it's somewhere between difficult and impossible to upgrade large numbers of IoT sensors in place. Many sensors are not even designed for bi-directional traffic, so even if a patch were developed, the sensor could never receive or install it. This boils down to the IoT infrastructure/network bearing the brunt of the security burden for the overall IoT ecosystem.
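One way an intelligent network layer can compensate is to apply a 'virtual patch': filtering or flagging traffic from devices with known, unfixable flaws before it reaches the data center. The sketch below is hypothetical and reuses the earlier example of a sensor model that misreports after six continuous hours below -25C:

```python
from typing import Optional


def network_filter(reading: dict, hours_below_minus_25: float) -> Optional[dict]:
    """
    Hypothetical network-side 'virtual patch' for a sensor model known to
    misreport after 6+ continuous hours below -25 C. Rather than upgrading
    firmware in the field (often impossible), an in-network node drops or
    flags the affected readings before they reach the data center.
    """
    if hours_below_minus_25 >= 6.0:
        return None                          # discard: reading is known to be unreliable
    reading["validated_by_network"] = True   # mark as having passed the compensating rule
    return reading
```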

There are a number of possible solutions that can be implemented in an IoT network environment to enhance security and reliability; one such example is outlined in this paper. Essentially the network must be intelligent enough to compensate for the 'dumbness' of the IoT devices, whether sensors or actuators. One of the trickiest bits will be securing 'device to device' communications. Since some IoT devices will communicate directly with other nearby IoT devices through a proprietary communications channel, and not necessarily over the 'network', there is the opportunity for unsecured traffic to exist.

An example could be a home automation system: light sensors may communicate directly with outlets or lamps using the Zigbee protocol and never (directly) communicate over a normal IP network. Issues such as out-of-date or compromised devices are not handled (at this time) by the Zigbee protocol, so no protection can be offered there, and such vulnerabilities could become an access point, and threat surface, into the larger network. The network must 'understand' what it is connected to, even if that is a small subsystem rather than individual devices, and provide the same degree of supervision and protection to these isolated subsystems as is possible with single devices.

It rapidly becomes apparent that for the network to implement such functions, a high degree of 'contextual awareness' and heuristic intelligence is required. With the plethora of devices, types of functions, etc., it won't be possible to develop, maintain and implement a centrally based 'rules engine' to handle this very complex set of tasks. A collective effort will be required from the IoT community to assist in developing and maintaining the knowledge set for the required AI to be 'baked in' to the network. While this is at first a considerable challenge, the payoff will be huge in many more ways than just IoT devices working better and being more secure: the large-scale development of a truly contextually aware and intelligent network will change the "Internet" forever.

Adaptation & Support

In a similar manner to providing security and reliability, the network must take on the burden of adapting to obsolete and broken devices, and of monitoring devices for out-of-expected-range behavior. Since the network is dynamic, and (as postulated above) will come to have significant computational capability baked into it, only the network is positioned to effectively monitor and adapt to the more static (and hugely deployed) set of sensors and actuators.

As in the security scenarios, context is vital, and each set of installed sensors/actuators must have a 'profile' registered with the network along with the actual device. For instance, a temperature sensor could in theory report back any remotely reasonable reading (say -50C to +60C, covering Antarctica to Baghdad), but if the sensor is installed in a home refrigerator the range of expected results is far narrower. So as a home appliance vendor turns out units with onboard IoT devices that will connect to the network at large, a profile must also be supplied to the network indicating the expected range of behavior. The same is true for actuators: an outdoor walkway light that is tacitly expected to turn on once in the evening and off again in the morning should be treated as suspect if signals come through that would have it flashing on and off every 10 seconds.
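Such a profile could be as simple as a declared expected range plus a minimum event interval that the network checks before trusting a reading or forwarding a command; the sketch below is an illustration of the idea, not a proposed standard:

```python
from dataclasses import dataclass


@dataclass
class DeviceProfile:
    """Expected-behaviour profile supplied by the vendor alongside the device."""
    min_value: float
    max_value: float
    min_interval_s: float      # e.g. a walkway light should not toggle every 10 s


PROFILES = {
    "fridge-temp-sensor": DeviceProfile(min_value=-5.0, max_value=15.0, min_interval_s=60),
    "walkway-light": DeviceProfile(min_value=0, max_value=1, min_interval_s=3600),
}


def plausible(device_type: str, value: float, seconds_since_last_event: float) -> bool:
    """Return True only if the event fits the declared profile for this device type."""
    p = PROFILES[device_type]
    in_range = p.min_value <= value <= p.max_value
    not_too_frequent = seconds_since_last_event >= p.min_interval_s
    return in_range and not_too_frequent
```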

One of the things the network will end up doing is 'deprecating' some sensors and actuators, whether because they continually report erroneous information or because they have been externally determined to be no longer worth listening to… Even so, the challenge will be continual: not every vendor will announce end-of-life for every sensor or actuator, and not every scenario can be envisioned ahead of time. The law of unintended consequences, in a world largely controlled by embedded and unseen interconnected devices, will be interesting indeed…

The next section of this post “Security & Privacy” may be found here.

References:

The Zettabyte Era – Trends and Analysis

 

Ubiquitous Computational Fabric (UCF)

March 6, 2012 · by parasam

Ok, not every day do I get to coin a new term, but I think this is a good description of what I see coming. The latest thing in the news is "the PC is dead, long live the tablet…" Actually, all forms of current 'computers', whether desktops, laptops, ultrabooks, tablets, smartphones, etc., have a life expectancy just short of butter on pavement on a warm afternoon.

We have left the Model “T” days, to use an automotive analogy – where one had to be a trained mechanic to even think about driving a car – and moved on just a little bit.

Ford Model “T” (1910)

We are now at the equivalent of the Model “A” – a slight improvement.

Ford Model “A” (1931)

The user is still expected to understand things like: OS (operating systems), storage, apps, networking, WiFi security modes, printer drivers, etc. The general expectation is that the user conform his or her behavior to the capabilities of the machine, not the other way around. Things we sort of take for granted – without question – are really archaic: typing into keyboards as the primary interface; dealing with a file system – or more likely the frustration that goes along with dealing with incompatible filing systems… Mac vs PC… To use the automobile for one more analogy: think how frustrating it would be to have to go to different gas stations depending on the type of car you had… because the nozzle on the gas pump would only fit certain cars!

A few "computational systems" today have actually achieved 'user-friendly' status – but only with a very limited feature set, and it took many, many years to get there: the telephone is one good example. A two-year-old can operate it without a manual. It works more or less the same anywhere in the world. In general, it is a highly reliable system. In terms of raw computational power, the worldwide telephone system is one of the most powerful computers on the planet. It has more raw bandwidth than the current 'internet' (not well utilized, but that's a different issue).

We are now seeing "computers" embedded into a wide variety of items, from cars to planes to trains. Even our appliances have built-in touch screens. We are starting to have to redefine the term 'computer' – the edges are getting very fuzzy. Embedded sensors are finding their way into clothing (from inventory-control tags in department stores to LED fabric in some cutting-edge fashions); pets (tracking chips); credit cards (so-called smart cards); the atmosphere (disposable sensors on small parachutes, dropped by plane or shot from mortars to gather weather data remotely); roads (this is what powers those great traffic maps); and on and on.

It is actually getting hard to find a piece of matter that is not connected in some way to some computing device. The power is more and more becoming 'the cloud.' Our way of interacting with computational power is changing as well: we used to be 'session based' – we would sit down at a desktop computer and switch gears (usually employing a number of well-chosen expletives) to get the computer up and running, connected to a printer and the network, then proceed to input our problems and get results.

Now we are an 'always on' culture. We just pick up the smartphone and ask Siri "where the heck is…" and expect an answer – and get torqued when she doesn't know or is out of touch with her cloud. Just as we expect a dial tone to always be there when we pick up the phone, we now expect the same from our 'computers.' The annoyance of waiting for a PC to boot up is one of several factors users report for their attraction to tablets.

Another big change is the type of connectivity we desire and expect. The telephone analogy points to an anachronistic form of communication: point-to-point. Although, with enough patience or the help of extra software, you can speak with several people at once, the basic model of the phone system is one-to-one. The cloud model (Google, blogs, YouTube, Facebook, Twitter, etc.) has changed all that. We now expect to be part of the crowd. Instead of one-to-one we now want many-to-many.

Instead of a single thread joining one user to another, we now live in a fabric of highly interwoven connectivity.

When we look ahead – and by this I mean ten years or so – we will see the extension of trends that are already well underway. Essentially the 'computer' will disappear in all of its current forms. Yes, there will still be 'portals' where queries can be put to the cloud for answers; documents will still be written, photographs will still be manipulated, etc. – but the mechanisms will be more 'appliance-like'. Typically these portals will act like the handsets of today's cellphone network, where 99% of the horsepower is in the back office and attached network.

This is what I mean by Ubiquitous Computational Fabric (UCF). It’s going to be an ‘always on’, ‘always there’ environment. The distinction of a separate ‘computer’ will disappear. Our clothing, our cars, our stoves, our roads, even our bodies will be ‘plugged in’ to the background of the cloud system.

There are already small pills you can swallow that contain video cameras – your GI tract is videoed and sent to your doctor as the pill moves through your body. No longer is an expensive and invasive endoscopy required. Of course today this is primitive, but in a decade we'll swallow a 'diagnostic' pill along with our vitamins and many data points of our internal health will be automatically uploaded.

As you get ready to leave the bar, you’ll likely have to pop a little pill (required to be offered free of charge by the bar) that will measure your blood alcohol level and transmit approval to your car before it will start. Really. Research on this, and the accompanying legislation, is under way now.

The military is already experimenting with shirts that have a mesh of small wires embedded in the fabric. When a soldier is shot, the severing of the wires will pinpoint the wound location and automatically transmit this information to the medic.

Today, we have very expensive motion tracking suits that are used in computer animation to make fantasy movies.

Soon, little sensors will be embedded into normal sports clothing and all of an athlete's motions will be recorded accurately for later study – or injury prevention.

One of the most difficult computational problems today – requiring the use of the planet's most massive supercomputers – is weather prediction. The savings in human life and property damage (from hurricanes, tornadoes, tsunamis, earthquakes, etc.) can be staggering. One of the biggest problems is data input. We will see a massive improvement here with small intelligent sensors being dropped into formative storms to help determine if they will become dangerous. The same goes for undersea sensors, fault line sensors, etc.

The real winners of tomorrow's business profits will be the companies that realize this is where the money will flow. Materials science, boring but crucial, will allow the economic dispersal of smart sensors. Really clever data transmission techniques are needed to funnel the volume of collected information through often narrow pipes and difficult environments. 'Spread-spectrum computing' will be required to minimize energy usage and provide the truly reliable and available fabric that is needed. A continual understanding of human-factor design will be needed to allow these highly complex systems to be operated in an intuitive fashion.

We are at an exciting time: to use the auto one more time, there were early Ford engineers who could visualize Ferraris, even though the materials at the time could not support their vision. We need to support those people, those visionaries, those dreamers – for they will provide the expertise and plans to help us realize what is next. We have only scratched the surface of what's possible.
