Parasam

Browsing Category: science

IoT (Internet of Things): A Short Series of Observations [pt 3]: Security & Privacy

May 19, 2016 · by parasam

Past readers of my articles will notice that I have a particular interest in the duality of Security and Privacy within the universe of the Internet. IoT is no exception. In the case of IoT, the bottom line is that for widespread acceptance, functionality and a profitable outcome, the entire system must be perceived as secure and trustworthy. If data cannot be trusted, if incorrect actions are taken, or if the security of individuals and groups is reduced as a result of this technology, there will be significant resistance.

Security

A number of security factors were discussed in the prior posts in relation to sensors, actuators and the infrastructure/network that connects and supports these devices. To summarize: many devices do not, and likely will not, provide sufficient security built into the devices themselves. Once installed, it is typically unreasonable or impossible to upgrade or alter the security functionality of IoT devices. Some of the issues that plague IoT devices are: lack of a security layer in the design; poor protocols; hard-coded passwords; missing or poorly implemented encryption; and lack of best-practice authentication and access control.


From a larger perspective, the following issues surrounding security must be addressed in order for a robust IoT implementation to succeed:

  • Security as part of the overall design of individual sensors/actuators as well as the complete system.
  • The economic factor in security: how much security for how much cost is appropriate for a particular device? For instance, a temperature sensor used in logistics will have very different requirements than an implanted sensor in a human pacemaker.
  • Usability: just as in current endpoints and applications, a balance between ease of use and appropriate security must be achieved.
  • Adherence to recognized security ‘best practices’, protocols and standards. Just as IPsec exists for general IP networks, work is under discussion for an “IoTsec” equivalent – and if such a standard comes into existence it will be incumbent on manufacturers to accommodate it.
  • How functional security processes (authentication, access control, encryption of data) will be implemented within various IoT schemas and implementations.
  • As vulnerabilities are discovered, or new security practices are deemed necessary to implement, how can these be implemented in a large field of installed devices?
  • How will IoT adapt to the continual change of security regulations, laws and business requirements over time?
  • How will various IoT implementations deal with ‘cross-border’ issues (where data from IoT sensors is consumed or accessed by entities in different geographic locations, with different laws and practices concerning data security)?

Privacy

The issue of privacy is particularly challenging in the IoT universe, mainly due to both the ubiquity and the passivity of these devices. Even with mobile apps, which often tend to reduce privacy, the user has some degree of control, since an interface is usually provided through which a measure of privacy control can be exercised. Most IoT devices are passive, in the sense that no direct interaction with humans occurs. But the ubiquity and pervasiveness of the sensors, along with the capability of data aggregation, can provide a huge amount of information that may reduce the user’s privacy remarkably.


As an example, let’s examine the use case of a person waking up then ‘driving’ to work (in their autonomous car) with a stop on the way for a coffee:

  • The alarm in their smartphone wakes the user and – since it detects sleep patterns through movement and machine learning – transmits that information to a database, registering among other things the time the user awoke.
  • The NEST thermostat adjusts the environment, as it has learned the user is now awake. That info as well is shared.
  • Various motion and light sensors throughout the home detect the presence and movement of the user, and to some degree transmit that information.
  • The security system is armed as the user leaves the home, indicating a lack of presence.
  • The autonomous car wakes up and a pre-existing program “take me to work, but stop at Starbucks on Main Road for a coffee first” is selected. The user’s location is transmitted to a number of databases, some personalized, some more anonymous (traffic management systems for example) – and the requirement for a parking space near the desired location is sent. Once a suitable parking space is reserved (through the smart parking system) a reservation is placed on the space (usually indicated by a lamp as well as signalling any other vehicle that they cannot park there).
  • The coffee house recognizes the presence of a repeat customer via the geotagging of the user’s cellphone as it acquires the WiFi signal when entering the coffee shop. The user is registered onto the local wireless network, and the user’s normal order is displayed on their cell for confirmation. A single click starts the order and the app signals the user when their coffee and pastry are ready. The payment is automatically deducted at pickup using NFC technology. The payment info is now known by financial networks, again indicating the location of the user and the time.
  • The user signals their vehicle as they leave the coffee shop, the parking space allocation system is notified that the space will be available within 2 minutes, and the user enters the car and proceeds to be driven to work.

It is clear that with almost no direct interaction with the surrounding ecosystem many details of the user’s daily life are constantly revealed to a large and distributed number of databases. As the world of IoT increases and matures, very little notification will ever be provided to an individual user about how many databases receive information from a sensor or set of sensors. In a similar manner, instructions to an actuator that is empirically tied to a particular user can reflect data about that user, and again the user has no control over the proliferation of that data.

As time goes on, and new ‘back-office’ functionality is added to increase the usefulness of IoT data to either the user or the provider, it is most likely that additional third-party service providers will acquire access to this data. Many of these will use cloud functionality, with interconnections to other clouds and service providers that are very distant from the user, both in location and in regulatory environment. The level of diffusion will rapidly approach complete ambiguity in terms of a user having any idea of who has access to what data the IoT devices within their environment provide.

For the first time, we collectively must deal with a new paradigm: a pervasive and ubiquitous environment that generates massive data about all our activities over which we essentially have no control. If we thought that the concept of privacy – as we knew it 10 or 20 years ago – was pretty much dead, IoT will make absolutely certain that this idea is dead, buried and forgotten… More than anything else, the birth of substantial IoT will spark a set of conversations about what is an acceptable concept of privacy in the “Internet of Everything” age…

One cannot wish this technology away – it’s coming and nothing will stop it. At some level, the combination of drivers that will keep enabling the IoT ecosystem (the desire for an increased ‘feature-set of life’ from users, and increased knowledge and efficiency from product and service providers) will remain much stronger than any resistance to the overall technology. However, widespread adoption, trust and usefulness will be greatly impacted if a perception grows that IoT is invasive, reduces the overall sense of privacy, and is thought of as ‘big brother’ in small packages.

The scale of the IoT penetration into our lives is also larger than any previous technology in human history – with the number of connected devices poised to outnumber the total population of the planet by a factor of more than 10:1 within the next seven years. Even those users that believe they are not interacting with the Internet will be passively ‘connected’ every day of their lives in some way. This level of unavoidable interaction with the ‘web’ will shortly become the norm for most of humanity – and affect those in developing economies as well as the most technologically advanced areas. Due to the low cost and high degree of perceived value of the technology, the proliferation of IoT into currently less-advanced populations will likely exceed that of the cellphone.

While it is beyond the scope of this post to discuss the larger issue of privacy in the connected world in detail, it must be recognized that the explosive growth of IoT at present will forever change our notion of privacy in every aspect of our lives. This will have psychological, social, political and economic results that are not fully known, but will be a sea change in humanity’s process.

The next section of this post “IoT from a Consumer’s Point of View” may be found here.

References:

Rethinking Network Security for IoT

Five Challenges of IoT

 

IoT (Internet of Things): A Short Series of Observations [pt 2]: Sensors, Actuators & Infrastructure

May 19, 2016 · by parasam

The Trinity of Functional IoT

As the name implies, the “Things” that comprise an IoT universe must be connected in order for this ecosystem to operate. This networking interconnection is actually the magic that will allow a fully successful implementation of the IoT technology. In addition, it’s important to realize that this network will often perform in a bi-directional manner, with the “Things” at the edge of the network acting either as input devices (sensors) or output devices (actuators).

Input (Sensors)

The variety, complexity and capability of input sensors in the IoT universe is almost without limit. Almost anything that can be measured in some way will spawn an IoT sensor to communicate that data to something else. In many cases, sensors may be very simple, measuring only a single parameter. In other cases, a combined sensor package may measure many parameters, providing a complete environmental ‘picture’ as a dataset. For instance, a simple sensor may just measure temperature, and a use case might be an embedded sensor in a case of wine before transport. The data is measured once every hour and stored in memory onboard the sensor, then ‘read’ upon arrival at the retail point to ensure that maximums or minimums of acceptability were not exceeded. Thermal extremes are the single largest external loss factor in the transport of wine worldwide, so this is not a trivial matter.


On the other hand, a small package – the size of a pack of cigarettes – attached to a vehicle can measure speed, acceleration, location, distance traveled from waypoints, temperature, humidity, relative light levels (to indicate degree of daylight), etc. If in addition the sensor package is connected to the vehicle computer, a myriad of engine and other component data can be collected. All this data can be either transmitted live, or more likely, stored in a sample manner and then ‘burst-transmitted’ on a regular basis when a good communications link is available.

An IoT sensor has, at a minimum, the following components: the actual sensor element, internal processing, data formation, and transmission or storage. More complex sensors may contain both storage and data transmission, multiple transmission methodologies, preprocessing and data aggregation, etc. At this time, the push for most vendors is to get sensors manufactured and deployed in the field to gain market share and increase sales in the IoT sector. Long-term thought about security, compatibility, data standards and the like is often missing. Since the scale of IoT sensor deployment is anticipated to exceed the physical deployment of any other technology in the history of humanity, new paradigms will evolve to enable this rollout in an effective manner.

While the large-scale deployment of billions of sensors will bring many new benefits to our technological landscape – and will undoubtedly improve many real-world issues such as health care, environmental safety, efficiency of resource utilization and traffic management – this huge injection of edge devices will also collectively present one of the greatest security threats ever experienced in the IT landscape. Due to a current lack of standards, the rush to market, and a limited understanding of even the security model that IoT presents, most sensors do not have security embedded as a fundamental design principle.


There are additional challenges to even the basic functionality, let alone security, of IoT sensors: that of updating, authenticating and validating such devices or the data that they produce. If a million small inexpensive temperature sensors are deployed by a logistics firm, there is no way to individually upgrade these devices should either a significant security flaw be discovered, or if the device itself is found to operate inaccurately. For example, let’s just say that a firmware programming error in such a sensor results in erroneous readings being collected once the sensor has been continuously exposed to an ambient temperature of -25C or below for more than 6 hours. This may not have been considered in a design lab in California, but once the sensors are being used in northern Sweden the issue is discovered. In a normal situation, the vendor would release a firmware update patch, the IT department would roll this out, and all is fixed… not an option in the world of tiny, cheap, non-upgradable IoT devices…

Many (read: most, as of the time of this article) sensors have little or no real security, authentication or data-encryption functionality. If logistics firms are subject to penalties for delivering goods to retailers that have exceeded the prescribed temperature min/max levels, some firm somewhere may be enticed to substitute readings from a set of sensors that were kept in a more appropriate temperature environment – how is this raw temperature data authenticated? What about sensors that are attached to a human pacemaker, reporting back biomedical information that is personally identifiable? Is a robust encryption scheme applied (as would be required by the USA’s HIPAA regulations)?
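
To make the authentication question concrete, here is a minimal sketch (my illustration, not something from the original post or any particular product) of how a logistics sensor could sign each temperature reading with a pre-shared key, so that substituted or tampered readings fail verification on the back end. The device ID, field names and key handling are invented for the example; real deployments would need per-device keys and proper key management.

```python
import hmac
import hashlib
import json
import time

# Hypothetical pre-shared key, provisioned into the sensor at manufacture (assumption for the sketch).
DEVICE_KEY = b"example-per-device-secret"

def sign_reading(device_id, temperature_c, key=DEVICE_KEY):
    """Package a reading with a timestamp and an HMAC-SHA256 tag."""
    payload = {
        "device_id": device_id,
        "temperature_c": temperature_c,
        "timestamp": int(time.time()),
    }
    message = json.dumps(payload, sort_keys=True).encode()
    payload["tag"] = hmac.new(key, message, hashlib.sha256).hexdigest()
    return payload

def verify_reading(reading, key=DEVICE_KEY):
    """Recompute the tag on the receiving side and compare in constant time."""
    body = {k: v for k, v in reading.items() if k != "tag"}
    message = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(reading.get("tag", ""), expected)

reading = sign_reading("crate-0042", -3.5)
print(verify_reading(reading))       # True
reading["temperature_c"] = 12.0      # tamper with the data...
print(verify_reading(reading))       # False
```

Symmetric keys are only one option (and key distribution at IoT scale is its own problem), but even this much would make silently substituting “comfortable” temperature logs detectable.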

There is another issue that will come back to haunt us collectively in a few years: that of vendor obsolescence. Whether a given manufacturer goes out of business, deprecates their support of a particular line of IoT sensors, or leaves the market for another reason, ‘orphaned’ devices will soon become a reality in the IoT universe. While one may think that “Oh well, I’ll just have to replace these sensors with new ones” is the answer, that will not always be an easy answer. What about sensors that are embedded deep within industrial machines, aircraft, motorcars, etc.? These could be very expensive or practically impossible to easily replace, particularly on a large scale. And to further challenge this line of thought, what if a proprietary communications scheme was used by a certain sensor manufacturer that was not escrowed before the firm went out of business? Then we are faced with a very abrupt ‘darkening’ of thousands or even millions of sensor devices.

All of the above variables should be considered before a firm embarks on a large-scale rollout of IoT sensor technology. Not all of the issues have immediate solutions, some of the challenges can be ameliorated in the network layer (to be discussed later in this post), and some can be resolved by making an appropriate choice of vendor or device up front.

Output (Actuators)

Actuators may be stand-alone (i.e. just an output device), or may be combined with an IoT input sensor. An example might be an intelligent light bulb designed for night lighting outdoors – where the sensor detects that the ambient light has fallen to a predetermined level (that may be externally programmable), and in addition to reporting this data upstream also directly triggers the actuator (the light bulb itself) to turn on. In many cases an actuator, in addition to acting on data sent to it over an IoT network, will report back with additional data as well, so in some sense may contain both a sensor as well as an actuator. An example, again using a light bulb: the light bulb turns on only when specifically instructed by external data, but if the light element fails, the bulb will inform the network that this device is no longer capable of producing light – even though it’s receiving data. A robustly designed network would also require the use of light bulb actuators that issue an occasional ‘heartbeat’ so if the bulb unit fails completely, the network will know this and report the failure.
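
As a rough illustration of the ‘heartbeat’ idea (my sketch, not the author’s design – the timeout, device names and timestamps are arbitrary assumptions), a supervising node on the network could simply track the last time each actuator checked in and flag any device that has gone silent:

```python
import time

# Assumed policy for the sketch: a device silent for more than 90 seconds is presumed failed.
HEARTBEAT_TIMEOUT_S = 90

last_seen = {}  # device_id -> timestamp of the most recent heartbeat

def record_heartbeat(device_id, now=None):
    """Call whenever a heartbeat message arrives from an actuator."""
    last_seen[device_id] = time.time() if now is None else now

def silent_devices(now=None):
    """Return devices whose heartbeat is overdue and should be reported upstream as failed."""
    now = time.time() if now is None else now
    return [dev for dev, ts in last_seen.items() if now - ts > HEARTBEAT_TIMEOUT_S]

# Example with explicit timestamps: two bulbs check in, then only one keeps reporting.
record_heartbeat("walkway-bulb-1", now=0)
record_heartbeat("walkway-bulb-2", now=0)
record_heartbeat("walkway-bulb-1", now=120)
print(silent_devices(now=125))   # ['walkway-bulb-2']
```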


The issue of security was discussed above concerning input sensors, but it applies to output actuators as well. In fact, the security and certainty surrounding an IoT actuator are often more immediately important than those of a sensor. A compromised sensor will result in bad or missing data, which can still be accommodated within the network or computational schema that uses this data. An actuator that has been compromised or ‘hacked’ can directly affect either the physical world or a portion of a network, and so can cause immediate harm. Imagine a set of actuators that control piping valves in a high-pressure gas pipeline installation: if some valves were suddenly closed while others were opened, a ‘hammer’ effect could easily cause a rupture, with potentially disastrous results. It is imperative that at such high-risk points a strong and multilayered set of security protocols is in place.

This issue, along with other reliability issues, will likely delay the deployment of many IoT implementations until adequate testing and use case experience demonstrates that current ‘closed-system’ industrial control networks can be safely replaced with a more open IoT structure. Another area where IoT will require much testing and rigorous analysis will be in vehicles, particularly autonomous cars. The safety of life and property will become highly dependent on the interactions of both sensors and actuators.

Other issues and vulnerabilities that affect input sensors are applicable to actuators as well: updating firmware, vendor obsolescence and a functional set of standards. Just as in the world of sensors, many of the shortcomings of individual actuators must be handled by the network layer in order for the entire system to demonstrate the required degree of robustness.

Network & Infrastructure

While sensors and actuators are the elements of IoT that receive the most attention, and are in fact the devices that form the edge of the IoT ecosystem, the more invisible network and associated infrastructure is absolutely vital for this technology to function. In fact, the overall infrastructure is more important, and carries a greater responsibility for the overall functionality of IoT, than either sensors or actuators. Although the initial demonstrations and implementations of IoT technology currently use traditional IP networks, this must change. The current model of remote users (or machines) connecting to other remote users, data centers or cloud combinations cannot scale to the degree required for a large-scale, successful implementation of IoT.


In addition, a functional IoT network/infrastructure must contain elements that are not present in today’s information networks, and provide many levels of distributed processing, data aggregation and other functions. Some of the reasons that drive these new requirements for the IoT network layer have been discussed above; in general, the infrastructure must make up for the shortcomings and limitations of both sensors and actuators as they age in place over time. The single largest reason that the network layer will be responsible for the bulk of security, upgrading/adaptation, dealing with obsolescence, etc. is that the network is dynamic and can be continually adjusted and tuned to the ongoing requirements of the sensors, actuators and the data centers/users where the IoT information is processed or consumed.

The reference to ‘infrastructure’ in addition to ‘network’ is for a very good reason: in order for IoT to function well on a long-term basis, substantial ingredients beyond just a simple network of connectivity are required. There are three main drivers of this additional requirement: data reduction & aggregation, security & reliability, and adaptation/support of IoT edge devices that no longer function optimally.

Data Reduction & Aggregation

The amount of data that will be generated and/or consumed by billions of sensors and actuators is gargantuan. According to one of the most recent Cisco VNI forecasts, global internet traffic will exceed 1.3 zettabytes by the end of this year. 1 zettabyte = 1 million petabytes, and 1 petabyte = 1 million gigabytes… to give some idea of the scale of current traffic. And this is with IoT barely beginning to show up on the global data transmission landscape. If we take even a conservative estimate of 10 billion IoT devices added to the global network each year between now and 2020, and we assume that on average each edge device transmits/receives only 1 kilobyte per second, the math follows: roughly 30 GB per device per year × 10 billion devices ≈ 300 exabytes of newly added data per year – at a minimum.
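
For readers who want to check the back-of-envelope numbers, here is the same estimate worked out explicitly (a sketch using round figures; the 1 kilobyte-per-second average and the 10-billion-devices-per-year figure are the assumptions, not measurements):

```python
# Rough annual data volume from newly added IoT devices, using round numbers.
bytes_per_second = 1_000                     # assume ~1 kilobyte/s average per device
seconds_per_year = 365 * 24 * 3600           # ~31.5 million seconds
devices_added_per_year = 10_000_000_000      # assume 10 billion new devices per year

per_device_per_year = bytes_per_second * seconds_per_year        # ~3.15e10 bytes
total_per_year = per_device_per_year * devices_added_per_year    # ~3.15e20 bytes

print(f"{per_device_per_year / 1e9:.1f} GB per device per year")        # ~31.5 GB
print(f"{total_per_year / 1e18:.0f} exabytes of new traffic per year")  # ~315 EB
```

Which lands in the same ballpark as the ~300 exabyte figure quoted above.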

While this may not seem like a huge increase (about a 25% annual increase in overall data traffic worldwide), there are a number of factors that make this much more burdensome to current network topologies than may first be apparent. The current global network system supports basically three types of traffic: streaming content (music, videos, etc.) that emanates from a small number of CDNs (Content Distribution Networks) and feeds millions of subscribers; database queries and responses (Google searches, credit card authorizations, financial transactions and the like); and ad hoc bi-directional data moves (business documents, publications, research and discovery, etc.). The first of these (streaming) is inherently unidirectional, and specialized CDNs have been built to accommodate this traffic, with many peering routers moving it off the ‘general highway’ onto dedicated routes so that users experience the low latency they have come to expect. The second type of traffic, queries and responses, is typically very small data packets that hit a large purpose-designed data center which can process the query very quickly and respond, again with a very small data load. The third type, which has the broadest range of data types, is often not required to have near-instantaneous delivery or response; the user is less worried about a few seconds’ delay on the upload of a scientific paper or the download of a commercial contract. (A delay of more than 2 seconds after a Google search is submitted, on the other hand, is seen as frustrating…)

Now, enter the world of IoT sensors and actuators onto this already crowded IT infrastructure. The type of data that is detected and transmitted by sensors will very often be time-sensitive. For instance, the position of an autonomous vehicle must be updated every 100 ms or the safety of that vehicle and others around it can be affected. If Amazon succeeds in getting delivery drones licensed, we will have tens of thousands of these critters flying around our heavily congested urban areas – again requiring critically frequent updates of positional data, among other parameters. Latency rapidly becomes the problem even more than bandwidth… and the Internet, in its glorious redundant design, holds the ultimate delivery of the packet as its prime law – not how long delivery takes, nor how many packets can ultimately be delivered. Remember, the initial design of the Internet (which is basically unchanged after almost 50 years) was a redundant mesh of connectivity, built to allow the ‘huge’ bandwidth of 300 bits per second (basically a teletype machine) to reach its destination even in the face of a nuclear attack wiping out major nodes on the network.

The current design of data center connectivity (even for such giants as Amazon Web Services, Google Compute, Microsoft Azure) is a star network. This has one large datacenter (or a small cluster of them) at the center of the ‘cloud’, with all the users attached like spokes on a wheel at the edge. As the number of users grows, the challenge is to keep raising the capacity of the ‘gateways’ into the actual computational/storage center of the cloud. It’s very expensive to duplicate data centers, and doing so brings additional data transmission costs, as all the data centers (of a given vendor) must constantly be synchronized. Essentially, the star model of central reception, processing and then sending data back to the edge fails at the scale and latency required for IoT to succeed.

One possible solution to avoid this congestion at the center of the network is to push some computation to the edge, and reduce the amount of data that is required to be acted upon at the center. This can be accomplished in several ways, but a general model will deal with both data aggregation (whereby data from individual sensors is combined where this is possible) and data reduction (where data flows from individual sensors can be either compressed, ignored in some cases or modified). A few use cases will illustrate these points:

  • Data Aggregation: assume a vendor has embedded small, low cost transpiration sensors in the soil of rows of grape plants in a wine farm. A given plot may have 50 rows each 100 meters long. With sensors embedded every 5 meters, 1,000 sensors will be generating data. Rather than push all that individual data up to a data center (or even to a local server at the wine farm), an intelligent network could aggregate the data and report that, on average, the soil needs or does not need watering. There is a 1000:1 reduction in network traffic up to the center…
  • Data Reduction: using the same example, if one desired a somewhat more granular sensing of the wine plot, the intelligent network could examine the data from each row and, with a predetermined min/max data range, transmit data upstream only for those sensors that were out of range. This may effectively reduce the data from 1,000 sensors to perhaps a few dozen. (A minimal sketch of both techniques appears just after this list.)
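
Here is a minimal sketch of both ideas (my illustration only – the row layout, moisture values and thresholds are invented for the example): an edge node either collapses a plot’s readings to a single average, or forwards only the sensors that fall outside an expected range.

```python
# Hypothetical soil-moisture readings (percent), keyed by (row, position) within one plot.
readings = {
    ("row-01", 0): 41.2,
    ("row-01", 5): 40.8,
    ("row-02", 0): 39.9,
    ("row-02", 5): 23.4,   # one dry spot
}

# Data aggregation: the whole plot reports a single number upstream.
def aggregate(values):
    return sum(values.values()) / len(values)

# Data reduction: only out-of-range sensors are reported individually.
def out_of_range(values, low=30.0, high=55.0):
    return {sensor: v for sensor, v in values.items() if v < low or v > high}

print(f"plot average: {aggregate(readings):.1f}%")   # one value instead of a thousand
print(out_of_range(readings))                        # {('row-02', 5): 23.4}
```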

Both of these techniques require both distributed compute and storage capabilities to exist within the network itself. This is a new paradigm for networks, which up to this time have been quite stupid in reality. Other than passive network hubs/combiners, and active switches (which are very limited, although extremely fast, in their analytical capabilities), current networks are just ribbons of glass or copper. With the current ability of putting substantial compute and storage power in a very small package that uses very little power (look at smart watches), small ‘nodes of intelligence’ could be embedded into modern networks and literally change the entire fabric of connectivity as we know it.

Further details on how this type of intelligent network could be designed and implemented will be a subject of a future post, but here it’s enough to demonstrate that some sort of ‘smart fabric’ of connectivity will be required to effectively deploy IoT on the enormous scale that is envisioned.

Security & Reliability

The next area in which the infrastructure/network that interconnects IoT will be critical to its success is the overall security, reliability and trustworthiness of the data that is both received from and transmitted to edge devices: sensors and actuators. Not only does the data from sensors, and the instructions to actuators, need to be accurate and protected; the upstream data centers and other entities to which IoT networks are attached must be protected as well. IoT edge devices, due to their limited capabilities and oft-overlooked security features, can provide easy attack surfaces for the entire network. Typical perimeter defense mechanisms (firewalls, intrusion detection systems) will not work in the IoT universe, for several reasons. Mostly this is because IoT devices are often deployed within a network, not just at the outer perimeter. Also, the types of attacks will be very different from what most IDS trigger on now.

As was touched on earlier in this series, most IoT sensors do not have strong security mechanisms built into the devices themselves. In addition, given the issue of vulnerabilities discovered after deployment, it’s somewhere between difficult and impossible to upgrade large numbers of IoT sensors in place. Many times the sensors are not even designed for bi-directional traffic, so even if a patch were designed, the sensor could not receive it, let alone install it. This boils down to the IoT infrastructure/network bearing the brunt of the burden of security for the overall IoT ecosystem.

There are a number of possible solutions that can be implemented in an IoT network environment to enhance security and reliability, one such example is outlined in this paper. Essentially the network must be intelligent enough to compensate for the ‘dumbness’ of the IoT devices, whether sensors or actuators. One of the trickiest bits will be to secure ‘device to device’ communications. As some IoT devices will directly communicate to other nearby IoT devices through a proprietary communications channel and not necessarily the ‘network’, there is the opportunity for unsecured traffic, etc. to exist.

An example could be a home automation system: light sensors may communicate directly with outlets or lamps using the Zigbee protocol and never (directly) communicate over a normal IP network. The issues of out-of-date devices, compromised devices, etc. are not handled (at this time) by the Zigbee protocol, so no protection can be offered. Potentially, such vulnerabilities could turn the subsystem into an access point – a threat surface – for the larger network. The network must ‘understand’ what it is connected to, even if it is a small subsystem (instead of single devices), and provide the same degree of supervision and protection to these isolated subsystems as is possible with single devices.

It rapidly becomes apparent that for the network to implement such functions, a high degree of ‘contextual awareness’ and heuristic intelligence is required. With the plethora of devices, types of functions, etc., it won’t be possible to develop, maintain and implement a centrally based ‘rules engine’ to handle this very complex set of tasks. A collective effort will be required from the IoT community to assist in developing and maintaining the knowledge set for the required AI to be ‘baked in’ to the network. While this is, at first, a considerable challenge, the payoff will be huge in many more ways than just IoT devices working better and being more secure: the large-scale development of a truly contextually aware and intelligent network will change the “Internet” forever.

Adaptation & Support

In a similar manner to providing security and reliability, the network must take on the burden of adapting to obsolete devices, broken devices, and monitoring devices for out-of-expected-range behavior. Since the network is dynamic, and (as postulated above) will come to have significant computational capability baked in to the network itself, only the network is positioned to effectively monitor and adapt to the more static (and hugely deployed) set of sensors and actuators.

As in security scenarios, context is vital and each set of installed sensors/actuators must have a ‘profile’ installed to the network along with the actual device. For instance, a temperature sensor could in theory report back a reading of anything remotely reasonable (let’s say -50C to +60C – that covers Antarctica to Baghdad) but if the temp sensor is installed in a home refrigerator the range of expected results would be far more narrow. So as a home appliance vendor turns out units that have IoT devices on board that will connect to the network at large, a profile must also be supplied to the network to indicate the expected range of behavior. The same is true for actuators: an outdoor light for a walkway that tacitly assumes it will turn on once in the evening and off again in the morning should assume something is wrong if signals come through that would have the light flashing on and off every 10 seconds.
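
As a rough sketch of such a ‘profile’ (again my own illustration; the device classes and numeric ranges are assumptions, not a real registry), the network could hold an expected-range entry per device class and flag readings that fall outside it:

```python
# Hypothetical expected-range profiles, registered alongside the devices themselves.
PROFILES = {
    "outdoor-temp-sensor": {"min": -50.0, "max": 60.0},   # Antarctica to Baghdad
    "fridge-temp-sensor":  {"min": -5.0,  "max": 15.0},   # far narrower in context
}

def check_reading(device_class, value):
    """Classify a reading against the device's registered profile."""
    profile = PROFILES.get(device_class)
    if profile is None:
        return "unknown device class - no profile registered"
    if profile["min"] <= value <= profile["max"]:
        return "ok"
    return "out of expected range - flag for review or deprecation"

print(check_reading("fridge-temp-sensor", 4.0))    # ok
print(check_reading("fridge-temp-sensor", 38.0))   # out of expected range - flag ...
```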

One of the things that the network will end up doing is ‘deprecating’ some sensors and actuators – whether they report continually erroneous information or have externally been determined to be no longer worthy of listening to… Even so, the challenge will be continual: not every vendor will announce end-of-life for every sensor or actuator; not every scenario can be envisioned ahead of time. The law of unintended consequences of a world that is largely controlled by embedded and unseen interconnected devices will be interesting indeed…

The next section of this post “Security & Privacy” may be found here.

References:

The Zettabyte Era – Trends and Analysis

 

IoT (Internet of Things): A Short Series of Observations [pt 1]: Introduction & Definition

May 19, 2016 · by parasam

Intro

As one of the latest buzzwords of things technical permeates our collective consciousness to a greater degree, it’s useful to better understand this technology by observing and discussing the various facets of IoT. Like many nascent technologies, IoT has been around for some time (depending on who you ask, and what your definition is, the term IoT showed up around 1999) but the real explosion of both the technology and large-scale awareness was over the last five years. Like the term ‘cloud’ – the meaning is often diffuse and inexact: one must define the use and application to better understand how this technology can provide value.

As the technology of IoT is maturing and beginning to be rolled out in larger and larger scale deployments, the impact of IoT will be felt by all of us, whether or not we directly think we are ‘using’ IoT. Understanding the strengths and weaknesses of IoT across different aspects will be critical to understanding the effects and usefulness (or potential threats) posed by this technology. In this short series of posts, I’ll be examining IoT features across the following areas:  1) basic definition & scope; 2) the Trinity of functional IoT: sensors/actuators/infrastructure; 3) security & privacy; 4) the consumer pov [bottom up view]; 5) the business pov [top down]; 6) the disruption that IoT will cause in both personal & business process; and 7) what an IoT-enabled world will look like (realistically) in 2021 (5 years on).


Definition

The term “Internet of Things” can potentially encompass a vast array of objects and technologies. Essentially this means a collection of non-human entities that are connected to one or more networks and communicate to other non-human entities. This is to differentiate IoT from the ‘normal’ Internet where humans connect to either each other or information repositories (aka Google) to send or receive information, make purchases, perform tasks, etc. The range of activities and objects that can be encompassed by “IoT” is huge, and some may argue that certain activities fall outside their definition of IoT. This has been a common issue with the term “cloud” – and I don’t see this confusion going away anytime soon. One must clarify how the term applies in a given discussion or risk uncertainty of understanding.

Probably the biggest distinction of where the ‘edge’ of the IoT universe is, in relation to other information network activities, is one of scope, scale and functionality. Even then the definition is not absolute. I’ll give a few examples:

A small sensor that measures temperature and humidity that is capable of connecting to the Internet and transmitting that data is a classical example of an IoT device. It is usually physically small, relatively simple in both design and function, and can potentially exist in a large scale.

An Internet router – a large switch that directs traffic over the Internet – but also communicates with other such switches and uploads data for later analysis is usually not thought of as part of the IoT universe, even though it is not human, and does connect to other non-human entities over a network. These devices are usually (and I would argue correctly) defined as part of the overall infrastructure that supports IoT, but not an IoT device itself. However… IoT can’t exist without them, so they can’t be ignored, even in a discussion of IoT.

Now let’s take the example of a current high-end vehicle. At a more macro level, the entire car can be seen as an IoT device, communicating to other vehicles, mapping algorithms, security applications, traffic monitoring applications, maintenance and support applications, etc. At a localized micro level, the ‘vehicle’ is an entire hub with its own internal network, with many IoT devices embedded within the vehicle itself (GPS sensor, speed sensor, temperature, tire pressure, accelerometers, oil pressure, voice communications, data display, ambient light sensors, fuel delivery sensors, etc etc etc.) So it’s partially a point of view…

The other thing to keep in mind is that often we tend to think of IoT devices as “Input Devices”, or sensors. Equally at home in the IoT universe however are “Output Devices”, or actuators. They can be very simple, such as a light switch (that is actuated by either a local sensor of ambient light, or a remote command from a mobile device, etc.) Actuators can be somewhat more complicated, such as the set of solenoids, motor controls, etc. that comprise an IoT-connected washing machine (which among other activities may use a weight sensor to determine the actual amount of soiled clothes in order to use exactly the amount of water and detergent that is appropriate; predict the amount of electricity that will be used for a wash cycle, measure incoming water pressure and temperature and accommodate that in its process, etc.) At a macro level, an autonomous vehicle could be thought of as both an ‘actuator’ and a ‘sensor’ within a large network of traffic – again the point of view often determines the definition.

Scope

The potential range and pervasiveness of IoT devices is almost beyond imagination. Depending on your news source, the estimated number of IoT devices that will be actively deployed by 2020 ranges between 25 and 50 billion. What happens by 2030 – only 14 years from now? Given that most pundits were horribly wrong back in 1995 about how many cellphones would be actively deployed by 2010 (the same ~15-year predictive window) – most guessing that maybe 1 million cellphones would be active by then, whereas the actual number turned out to be almost a billion – it’s not unlikely that a trillion IoT devices will be deployed by 2030. That’s a very large number… and it has some serious implications that will be discussed in later articles on this topic. For instance, just how do you update a trillion devices? The very fabric of connectivity will change in the face of this number of devices that all want to talk to something.

The number of cellphones is already set to exceed the population of our planet within a year (there are currently 6.88 billion cellphones and 7.01 billion humans – as of April 2016). With IoT devices set to outnumber all existing Internet devices by a factor of more than 1,000, an entirely new paradigm will come into existence to support this level of connectivity. Other issues surrounding such a massive scope will need to be addressed: power (even if individual devices use very little power, a trillion of them – at current power consumption levels – will be unsupportable); errors and outdated devices that must be accommodated at a truly Herculean scale; the sheer volume of data created, which will have to be managed differently than it is today; and so on.


The next section of this post “The Trinity of functional IoT: Sensors, Actuators & Infrastructure” may be found here.

References:

The Internet of Things – An Overview

Number of Internet Users / Devices

 

 

Science – a different perspective on scientists and what they measure…

November 4, 2015 · by parasam

Scientists are humans too…

If you think that most scientists are boring old men with failed wardrobes and hair-challenged noggins… and speak an alternative language that disengages most other humans… then you’re only partially correct… The world of scientists is now populated by an incredible array of fascinating people – many of whom are crazy-smart but also share humor and a variety of interests. I’ll introduce you to a few of them here – so the next time you hear of the latest ‘scientific discovery’ you might have a different mental picture of ‘scientist’.

Dr. Zeray Alemseged

Paleoanthropologist and chair and senior curator of Anthropology at the California Academy of Sciences

Zeresenay Alemseged is an Ethiopian paleoanthropologist who studies the origins of humanity in the Ethiopian desert, focusing on the emergence of childhood and tool use. His most exciting find was the 3.3-million-year-old bones of Selam, a 3-year-old girl from the species Australopithecus afarensis.

He speaks five languages.

 

 

 

Dr. Quyen Nguyen

Doctor and professor of surgery and director of the Facial Nerve Clinic at the University of California, San Diego.

Quyen Nguyen is developing ways to guide surgeons during tumor removal surgery by using fluorescent compounds to make tumor cells — and just tumor cells — glow during surgery, which helps surgeons perform successful operations and get more of the cancer out of the body. She’s using similar methods to label nerves during surgery to help doctors avoid accidentally injuring them.

She is fluent in French and Vietnamese.

 

 

Dr. Tali Sharot

Faculty member of the Department of Cognitive, Perceptual & Brain Sciences

Tali Sharot is an Israeli who studies the neuroscience of emotion, social interaction, decision making, and memory. Specifically her lab studies how our experience of emotion impacts how we think and behave on a daily basis, and when we suffer from mental illnesses like depression and anxiety.

She’s a descendant of Karl Marx.

 

 

 

Dr. Michelle Khine

Biomedical engineer, professor at UC Irvine, and co-Founder at Shrink Nanotechnologies

Michelle Khine uses Shrinky Dinks — a favorite childhood toy that shrinks when you bake it in the oven — to build microfluidic chips to create affordable tests for diseases in developing countries.

She set a world record speed of 38.4 mph for a human-powered vehicle as a mechanical engineering grad student at UC Berkeley in 2000.

 

 

 

Dr. Nina Tandon

Electrical and biomedical engineer at Columbia’s Laboratory for Stem Cells and Tissue Engineering; Adjunct professor of Electrical Engineering at the Cooper Union

Nina Tandon uses electrical signals and environmental manipulations to grow artificial tissues for transplants and other therapies. For example, she worked on an electronic nose used to “smell” lung cancer and now she’s working on growing artificial hearts and bones.

In her spare time, the TED Fellow does yoga, runs, backpacks, and likes to bake and do metalsmithing. Her nickname is “Dr. Frankenstein.”

 

 

Dr. Lisa Randall

Physicist and professor

Lisa Randall is considered to be one of the nation’s foremost theoretical physicists, with an expertise in particle physics and cosmology. The math whiz from Queens is best known for her models of particle physics and study of extra dimensions.

She wrote the lyrics to an opera that premiered in Paris and has an eclectic taste in movies.

 

 

 

Dr. Maria Spiropulu

Experimental particle physicist and professor of physics at Caltech

Maria Spiropulu develops experiments to search for dark matter and other theories that go beyond the Standard Model, which describes how the particles we know of interact. Her work is helping to fill in holes and deficiencies in that model. She works with data from the Large Hadron Collider.

She’s the great-grandchild of Enrico Fermi in Ph.D. lineage — which means her graduate adviser’s adviser’s adviser was the great Enrico Fermi, who played a key role in the development of fundamental physics.

 

 

 

Dr. Jean-Baptiste Michel

Mathematician, engineer, and researcher

The French-Mauritian Jean-Baptiste Michel is a mathematician and engineer who’s interested in analyzing large volumes of quantitative data to better understand our world.

For example, he studied the evolution of human language and culture by analyzing millions of digitized books. He also used math to understand the evolution of disease-causing cells, violence during conflicts, and the way language and culture change with time.

He likes “Modern Family” and “Parks and Recreation,” he listens to the Black Keys and Feist, and his favorite restaurant in New York City is Kyo Ya.

 

 

Dr. John Dabiri

Professor of aeronautics and bioengineering

John Dabiri studies biological fluid mechanics and wind energy — specifically how animals like jellyfish use water to move around. He also developed a mathematical model for placing wind turbines at an optimal distance from each other based on data from how fish schools move together in the water.

He won a MacArthur “genius grant” in 2010.

 

 

 

Isaac Kinde

As a graduate student (M.D./Ph.D.) at Johns Hopkins, Isaac Kinde (Ethiopian/Eritrean) is working on improving the accuracy of genetic sequencing so that it can be used to diagnose cancer at an early stage in a simple, noninvasive manner.

In 2007 he worked with Bert Vogelstein, who just won the $3 million Breakthrough Prize in Life Sciences.

He’s an avid biker, coffee drinker and occasional video game player.

 

 

 

Dr. Franziska Michor

Franziska Michor received her PhD from Harvard’s Department of Organismic and Evolutionary Biology in 2005, followed by work at Dana-Farber Cancer Institute in Boston, then was assistant professor at Sloan-Kettering Cancer Center in New York City. In 2010, she moved to the Dana-Farber Cancer Institute and Harvard School of Public Health. Her lab investigates the evolutionary dynamics of cancer.

Both Franziska and her sister Johanna, who has a PhD in Mathematics, are licensed to drive 18-wheelers in Austria.

 

 

 

 

Heather Knight

Heather Knight (Ph.D. student at Carnegie Mellon) loves robots — and she wants you to love them too. She founded Marilyn Monrobot, which creates socially intelligent robot performances and sensor-based electronic art. Her robotic installations have been featured at the Smithsonian-Cooper Hewitt Design Museum, LACMA, and PopTech.

In her graduate work she studies human-robot interaction, personal robots, theatrical robot performances, and designs behavior systems.

When she’s not building robots, she likes salsa dancing, karaoke, traveling, and film festivals.

 

 

Clio Cresswell

The Australian author of “Mathematics and Sex,” Clio Cresswell uses math to understand how humans should find their partners. She came up with what she calls the “12 Bonk Rule,” which means that singles have a greater chance of finding their perfect partner after they date 12 people.

If she’s not at her desk working her brain, you’ll find her at the gym, either bench-pressing her body weight or hanging upside down from the gym rings.

 

 

 

Dr. Cheska Burleson

Cheska Burleson (PhD in Chemical Oceanography) focused her research on the production of toxins by algae blooms commonly known as “red tide.” She also evaluated the medicinal potential of these algal species, finding extracts that show promise as anti-malarial drugs and in combating various strains of staphylococcus, including MRSA—the most resistant and devastating form of the bacteria.

Cheska entered college at seventeen with enough credits to be classified as a junior. She also spent her formative years as a competitive figure skater.

 

 

 

 

Bobak Ferdowsi

Systems engineer and flight director for the Mars Curiosity rover

Bobak Ferdowsi (an Iranian-American) gained international fame when the Mars Curiosity rover landed on the surface of Mars in August 2012. Since then, the space hunk has become an Internet sensation, gaining more than 50,000 followers on Twitter, multiple wedding proposals from women, and the unofficial title of NASA’s sexy “Mohawk Guy.”

He’s best known for his stars-and-stripes mohawk (which he debuted for the Curiosity landing), but he changes his hairstyle frequently.

He is a major Star Trek fan.

 

Ariel Garten

CEO and head of research at InteraXon

By tracking brain activity, the Canadian Ariel Garten creates products to improve people’s cognition and reduce stress. Her company just debuted Muse, a brain-sensing headband that shows your brain’s activity on a smartphone or tablet. The goal is to eventually let you control devices with your mind.

She runs her own fashion design business and has opened fashion week runways with shirts featuring brainwaves.

 

 

 

Amy Mainzer

Astrophysicist and deputy project scientist for the Wide-field Infrared Survey Explorer (WISE) at NASA’s Jet Propulsion Laboratory.

Amy Mainzer built the sensor for the Spitzer Space Telescope, a NASA infrared telescope that’s been going strong for the last 10 years in deep space. Now she’s the head scientist for a new-ish telescope, using it to search the sky for asteroids and comets.

Her job is to better understand potentially hazardous asteroids, including how many there are as well as their orbits, sizes, and compositions.

Her idea of a good time is to do roller disco in a Star Trek costume.

 

 

Dr. Aditi Shankardass

Pediatric neurologist, Boston Children’s Hospital, Harvard Medical School (her British pedigree includes a B.Sc. in physiology from King’s College London, an M.Sc. in neurological science from University College London, and a Ph.D. in cognitive neuroscience from the University of Sheffield).

Aditi Shankardass is a renowned pediatric neurologist who uses real-time brain recordings to accurately diagnose children with developmental disorders and the underlying neurological causes of dyslexia.

Her father is a lawyer who represents celebrities. She enjoys dancing, acting, and painting.

 

 

 

Albert Mach

Bio-engineer and senior scientist at Integrated Plasmonics Corporation

As a graduate student Albert Mach designed tiny chips that can separate out cells from fluids and perform tests using blood, pleural effusions, and urine to detect cancers and monitor them over time.

He loved playing with LEGOs as a kid.

 

 

 

Now that we’ve had a look at ‘modern scientists’ it is only appropriate to look at a bit of ‘modern science’…

STEM (Science, Technology, Engineering, Math) is not a job – it’s a way of life. You can’t turn it off… you don’t leave that understanding and outlook at the office when you go home (not that any of the above brainiacs stop working on a problem just because it’s 5 PM). One of the fundamental aspects of science is measurement: it is absolutely integral to every sector of scientific endeavor. In order to measure, and to communicate those measurements to other scientists, a set of metrics is required – one that is shared and agreed upon by all. We are familiar with the meter, the litre, the hectare and the kilogram. But there are other, less well-known measurements – one of which I will share with you here.

What do humans, beer, sub-atomic collisions, the cosmos and the ‘broad side of a barn’ have in common? Let’s find out!

Units of measure are extremely important in any branch of science, but some of the most fundamental units are those of length (1 dimension), area (2 dimensions) and volume (3 dimensions). In human terms, we are familiar with the meter, the square meter and the litre. (For those lonely Americans – practically the only culture that is not metric – think of yards, square yards and quarts). I want to introduce you to a few different measurements.

There are three grand ‘scales’ of measurement in our observable universe: the very small (as in subatomic); the human scale; and the very large (as in cosmology). Even if you are not science-minded in detail, you will have heard of the angstrom [10⁻¹⁰ m (one ten-billionth of a meter)] – used to measure things like atoms; and the light-year [about 9.5 trillion kilometers (roughly 6 trillion miles)] – used to measure galaxies.

the broad side of a barn


The actual unit of measure I want to share is called the “Hubble-barn” (a unit of volume), but a bit of background is needed. The barn is a serious unit of measure (of area) used by nuclear physicists. One barn is equal to 1.0 × 10⁻²⁸ m². That’s about the size of the cross-sectional area of an atomic nucleus. The term was coined by the physicists running the incredibly large machines (atom smashers) that help us understand subatomic function and structure by basically creating head-on collisions between particles travelling in opposite directions at very high speeds. It is very, very difficult! And the colloquial expression “you can’t hit the broad side of a barn” led to the name for this unit of measure…

subatomic collision


Now, since even scientists have a sense of humor, but understand rigor and analogy, they further derived two more units of measure based on the barn: the outhouse (smaller than a barn) [1.0 × 10⁻⁶ barns]; and the shed (smaller than an outhouse) [1.0 × 10⁻²⁴ barns].

So now we have resolved part of the aforementioned “Hubble-barn” unit of measure. Remember that a unit of volume requires three dimensions, and now we have established two of those with a unit of area (the barn). A third dimension (length, a one-dimensional quantity) is needed…

Large Hadron Collider

The predecessor to Hubble-barn is the Barn-megaparsec. A parsec is equal to about 3.26 light years (31 trillion kilometers / 19 trillion miles). Its name is derived from the distance at which one astronomical unit subtends an angle of one arcsecond. [Don’t stress if you don’t get those geeky details, the parsec basically makes it easy for astronomers to calculate distances directly from telescope observations].

A megaparsec is one million parsecs. As in a bit over 3 million light years. A really, really long way… this type of unit of measure is what astronomers and astrophysicists use to measure the size and distance of galaxies and entire chunks of the universe.

 

The bottom line is that if you multiply a really small unit (a barn) by a really big unit (a megaparsec), you get something rather normal. In this case about 3 ml – roughly two-thirds of a teaspoon. So here is where we get to the fun (ok, a bit geeky) aspect of scientific measurements: the next time your cake recipe calls for a teaspoon of vanilla, just ask for a barn-megaparsec and a half of it…

The Hubble length

For our final scientific observation we need something a good deal larger than a few millilitres, so we turn to the Hubble length (the radius of the entire observable universe, derived by dividing the speed of light by the Hubble constant). It’s about 4,228 million parsecs [roughly 13.8 billion light years]. As you can see, by using a much larger length than the megaparsec we get a much larger unit of volumetric measure. Once the arithmetic is done, we find that one Hubble-barn works out to roughly 13 litres – call it a couple of dozen pints of beer. So two physicists go into a bar… and order a Hubble-barn of ale for the whole table…
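If you want to check the arithmetic yourself (and see why the barn-megaparsec lands in the kitchen and the Hubble-barn at the pub), here is a minimal Python sketch – the constants are standard values, and the Hubble length uses the 13.8-billion-light-year figure quoted above:

```python
# Multiplying an area (barn) by a length (megaparsec or Hubble length) gives a volume.
BARN = 1.0e-28                        # m^2, roughly the cross-section of a heavy nucleus
PARSEC = 3.0857e16                    # m
MEGAPARSEC = 1.0e6 * PARSEC           # m
LIGHT_YEAR = 9.4607e15                # m
HUBBLE_LENGTH = 13.8e9 * LIGHT_YEAR   # m, the figure quoted above

def volume_litres(area_m2: float, length_m: float) -> float:
    """Area x length = volume in cubic metres; 1 m^3 = 1000 litres."""
    return area_m2 * length_m * 1000.0

print(f"barn-megaparsec ~ {volume_litres(BARN, MEGAPARSEC) * 1000:.1f} ml")    # ~3.1 ml
print(f"Hubble-barn     ~ {volume_litres(BARN, HUBBLE_LENGTH):.1f} litres")    # ~13 litres
```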

See… there IS a practical use for science!

Pint of Beer

Hubble telescope

 

The Certainty of Uncertainty… a simple look at the fascinating and fundamental world of Quantum Physics

June 7, 2015 · by parasam

Recently a number of articles have been posted all reporting on a new set of experiments that attempt to shed light on one of the most fundamental postulates of quantum physics: the apparent contradictory nature of physical essence. Is ‘stuff’ a Wave or a Particle? Or is it both at the same time? Or even more perplexing (and apparently validated by a number of experiments, including this most recent one): the very observation of the essence determines whether it manifests as a wave or a particle.

Yes, this can sound weird – and even really smart scientists struggled with how much it offends everyday intuition: Einstein famously derided the non-local implications of quantum theory as “spooky action at a distance”; Niels Bohr (one of the pioneers of quantum theory) said “if quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet.”

If you are not already familiar with the duality paradox in quantum physics (the so-called wave/particle paradox), I’ll share a very brief introduction here, followed by some links to articles that are well worth reading before continuing with this post – the basic premise and current theories really will help you understand the further comments and the alternate theory that I will offer. The authors of the referenced articles know far more than I do about the details and mathematics of quantum physics, yet these pieces avoid heavy math – I found the presentations clear and easy to comprehend.

Essentially, back when the physics of very small things was being explored by the great minds of Einstein, Niels Bohr, Werner Heisenberg and others, various properties of matter were derived. One of the principles of quantum mechanics – the so-called ‘Heisenberg Uncertainty Principle’ (after Heisenberg, who introduced it in 1927) – is that you cannot know precisely both the position and the momentum of any given particle at the same time.

The more precisely you know one property (say, position) the less precisely can you know the other property (in this case, momentum). In our everyday macro world, we “know” that a policeman can know both our position (300 meters past the intersection of Grand Ave. and 3rd St.) and our momentum (87 km/h) when his radar gun observed us in a 60km/h zone… and as much as we may not like the result, we understand and believe the science behind it.
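To get a feel for why this limit matters for electrons but is completely invisible to radar guns, here is a back-of-the-envelope sketch (Python, standard constants; the confinement distances are just illustrative choices):

```python
# Heisenberg limit: dx * dp >= hbar/2, so dp_min = hbar / (2 * dx), and dv = dp / m.
HBAR = 1.054571817e-34   # J*s, reduced Planck constant

def min_velocity_uncertainty(mass_kg: float, position_uncertainty_m: float) -> float:
    """Smallest velocity spread allowed once the position is pinned down to dx."""
    return HBAR / (2.0 * mass_kg * position_uncertainty_m)

# An electron (9.1e-31 kg) confined to roughly one atom (about 1 angstrom):
print(min_velocity_uncertainty(9.109e-31, 1e-10))   # ~5.8e5 m/s - enormous

# A 1500 kg car whose position the radar gun pins down to within 1 m:
print(min_velocity_uncertainty(1500.0, 1.0))        # ~3.5e-38 m/s - utterly negligible
```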

In the very, very small world of quantum objects (where an atom is a really big thing…) things do not behave as one would think. The Uncertainty Principle is the beginning of the ‘strangeness’, and it only gets weirder from there. As the study of quantum mechanics and other areas of subatomic physics progressed, scientists began to understand that matter apparently could behave as either a wave or a particle.

This is like saying that a bit of matter could be either a stream of water or a piece of ice – at the same time! And if this does not stretch your mental process into enough of a knot… the mathematics proves that everything (all matter in the known universe) actually possesses this property of “superposition” [the wave/particle duality] and that the act of observing the object is what determines whether the bit of matter ‘becomes’ a wave or a particle.

John Wheeler, in 1978 with his now-famous “delayed choice” thought experiment [Gedankenexperiment], proposed a twist on the classic double-slit test as applied to a quantum object. Essentially you fire a quantum object (a tiny bit of matter) at two parallel slits in a solid panel. If the object is a particle (like a ball of ice) then it can only go through one slit or the other – it can’t stay a particle and go through both at the same time. If however the quantum object is a wave (such as a light wave) then it can go through both slits at the same time. Observation of the result tells us whether the quantum object was a wave or a particle. How? If the object was a wave, then we will see an interference pattern on the far side of the slits: the wave, passing through both slits, recombines and exhibits areas of ‘interference’ – where parts of the wave combine constructively and parts combine destructively – and these alternating ‘crests and valleys’ are easily recognized.
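For readers who prefer to see the fringes rather than imagine them, here is a minimal numerical sketch (Python with numpy; an idealised far-field pattern that ignores the single-slit envelope – the slit spacing, wavelength and screen distance are just convenient example values):

```python
import numpy as np

wavelength = 500e-9       # m (green light)
slit_separation = 20e-6   # m
screen_distance = 1.0     # m
x = np.linspace(-0.05, 0.05, 1001)   # positions across the screen, in metres

# Two slits open: the waves from each slit recombine and interfere,
# giving alternating bright and dark fringes (I ~ cos^2).
phase = np.pi * slit_separation * x / (wavelength * screen_distance)
two_slit_intensity = np.cos(phase) ** 2

# One slit open: nothing to interfere with - just a smooth patch of light.
one_slit_intensity = np.full_like(x, 0.5)

print("two slits, centre of screen:", two_slit_intensity[500])   # bright fringe (1.0)
print("two slits, darkest point:   ", two_slit_intensity.min())  # dark fringe (~0.0)
print("one slit, anywhere:         ", one_slit_intensity[500])   # no fringes at all
```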


However, if the quantum object is a particle then no interference will be observed, as the object could only pass through one slit. Here’s the kicker: assume that we can open or close one of the slits very quickly – AFTER the quantum object (say a photon) has been fired at the slits. This choice of apparatus is itself part of the ‘observation’. Suppose that when the photon was fired only one slit was open – so, as a particle, it should already have ‘committed’ to passing through that single slit – but that while it is in flight we open the second slit. If the photon was ‘really’ a particle all along, then we should see no interference pattern – the expected behavior. But… the weirdness of quantum physics says that by opening the second slit (thereby setting the stage to observe ‘double-slit interference’) that is exactly what we will observe!

And this is precisely what Alain Aspect and his colleagues proved experimentally in 2007 (in France). These experiments were performed with photons. The most recent experiments proved for the first time that the wave/particle duality applies to massive objects such as entire atoms (remember in the quantum world an atom is really, really huge). Andrew Truscott (with colleagues at the Australian National University in late 2014) repeated the same experiment in principle but with helium atoms. (They whacked the atoms with laser pulses instead of feeding them through slits but the same principle of ‘single vs double slit’ testing/observation was valid).

So, to summarize, we have a universe where everything can be (actually IS) two things at once – and only our observation of something decides what it turns out to be…

To be clear, this “observation” is not necessarily human observation (indeed it’s a bit hard to see atoms with a human eyeball..) but rather the ‘entanglement’ [to use the precise physics term] between the object under observation and the surrounding elements. For instance, the presence of one slit or two – in the above experiment – IS the ‘measurement’.

Now, back to the ‘paradox’ or ‘conundrum’ of the wave/particle duality. If we try – as a thought experiment – to understand what is happening, there are really only two possible conclusions: either a chunk of matter (a quantum object) is two things at once; or there is some unknown communications mechanism which would allow messaging at speeds faster than the speed of light (which would violate all current laws of physics). To explain the second option: somehow the state of the apparatus would have to signal the approaching object – “Hey, only one slit is open, so you must be a particle; please behave like one when you reach the slits…” – and that message would have to travel impossibly fast. Until recently the only other possibility was this vexing ‘duality’, where the best the scientists could come up with was that quantum objects appeared to behave as waves… unless you looked at them, and then they behaved like particles!

Yes, quantum physics is stranger than an acid trip…

A few years ago another wizard scientist (in this case a chemist using quantum mechanics to better understand what really goes on in chemical reactions) [Prof. Bill Poirier at Texas Tech] came up with a new theory: that of Many Interacting Worlds (MIW). Instead of having to believe that things were two things at once, or communicated over great distances at speeds faster than light, Prof. Poirier postulated that very small particles from other universes ‘seep through’ into, and interact with, our own universe.

Now before you think that I, and these other esteemed scientists, have gone off the deep end completely – exhaustive peer review and many recalculations of the mathematics have shown that his theory does not violate ANY current laws of quantum mechanics, physics or general relativity. There is no ‘fuzziness’ in his theory: the apparent ‘indeterminate’ position of a particle (as observed in our world) is actually the observed phenomenon of the interaction between an ‘our world’ particle and an ‘other world’ particle.

Essentially Poirier is theorizing exactly what mystics have been telling us for a very long time: that there are parallel universes! Now, to be accurate, we are not completely certain (how can we be in what we now know is an Uncertain World) that this theory is the only correct one. All that has been proven at this point is that the MIW theory is at least as valid as the more well-established “wave/particle duality” theory.


Now, to clarify the “parallel universe” theory with a diagram: The black dots are little quantum particles in our universe, the white dots are similar particles in a parallel universe. [Remember at the quantum level of the infinitesimally small there is mostly empty space, so there is no issue with little things ‘seeping in’ from an alternate universe] These particles are in slightly different positions (notice the separations between the black dot and white dot). It’s this ‘positional uncertainty’ caused by the presence of particles from two universes very close to each other that is the root of the apparent inability to measure the position and momentum exactly in our  universe.

Ok, time for a short break before your brain explodes into several alternate universes… Below is a list of the links to the theories and articles discussed so far. I encourage a brief internet diversion – each of these is informative and even if not fully grasped the bones of the matter will be helpful in understanding what follows.

Do atoms going through double slit know they are being observed?
Strange behavior of quantum particles
Reality doesn’t exist until we measure it
Future events determine what happens in the past???
Parallel worlds exist and interact with ours

Ok, you are now fortified with knowledge, understanding – or possibly in need of a strong dose of a good single malt whiskey…

What I’d like to introduce is an alternative to (or perhaps an extension of) Prof. Poirier’s theory of parallel universes – which BTW I have no issue with, though I don’t think it necessarily explains all the issues surrounding the current ‘wave/particle duality’.

In the MIW (Many Interacting Worlds) notion, the property that is commingled with our universe is “position” – and yet there is the equally important property of “momentum” that should be considered. If the property of Position is no longer sacred, should not Momentum be more thoroughly investigated?

A discussion on momentum will also provide some interesting insight on some of the vexing issues that Einstein first brought to our attention: the idea of time, the theory of ‘space-time’ as a construct, and the mathematical knowledge that ‘time’ can run forwards or backwards – intimating that time travel is possible.

First we must understand that momentum, as a property, has two somewhat divergent qualities depending on whether one is discussing the everyday ‘Newtonian’ world of baseballs and automobiles (two objects commonly used in physics-class examples), or the quantum world of incredibly small and strange things. In the ‘normal’ world, we all learned that Momentum = Mass x Velocity. The classic equation p = mv explains most everything, from why it’s very hard for an ocean liner to stop or turn quickly once moving, all the way to the slightly odd, but correct, fact that any object with no velocity (at complete rest) has no momentum. (To be fully accurate, this means no velocity relative to the observer – Einstein’s relativity and all that…)

However, in the wacky and weird world of quantum mechanics all quantum objects always have both position and momentum. As discussed already, we can’t accurately know both at the same time – but that does not mean the properties don’t exist with precise values. The new theory mentioned above (the Many Interacting Worlds) is primarily concerned with ‘alternate universe particles’ interacting, or entangling, in the spatial domain (position) with particles in our universe.

What happens if we look at this same idea – but from a ‘momentum’ point of view? Firstly, we need to better understand the concept of time vs. momentum. Time is totally a human construct – that actually does not exist at the quantum level! And, if one carefully thinks about it, even at the macroscopic level time is an artifice, not a reality.

There is only now. Again, philosophers and mystics have been trying to tell us this for a very long time. If you look deep into a watch what you will see is the potential energy of a spring, through gears and cogwheels, causing movement of some bits of metal. That’s all. The watch has no concept of ‘time’. Even a digital watch is really the same: a little crystal is vibrating due to battery power, and some small integrated circuits are counting the vibrations and moving a dial or blinking numbers. Again, nothing more than a physical expression of momentum in an ordered way. What we humans extrapolate from that: the concept of time; past; future – is arbitrary and related only to our internal subjective understanding of things.

Going back to the diagram (below)


Let’s assume for a moment that the black and white dots represent slight differences in momentum rather than position. This raises several interesting possibilities: 1) that there is an alternate universe, identical in EVERY respect to ours except for a slightly different momentum, in which things are occurring either slightly ahead (‘in the future’) or slightly behind (‘in the past’) of our own; or 2) that our own universe exists in multiple momentum states at the same time – with only ‘observation’ deciding which version we experience.

One of the things that this could explain is the seemingly bizarre ability of some to ‘predict the future’, or to ‘see into the past’. If these ‘alternate’ universes (based on momentum) actually exist, then it is not all that hard to accept that some could observe them, in addition to the one where we most all commonly hang out.

Since most of quantum theory is based around probability, it appears likely that the most readily observable ‘alternate momentum’ events will be those whose momentum values are closest to that of the particle currently under observation in our universe. But that does not preclude the possibility of observing particles that are much further removed in terms of momentum (i.e. potentially further in the past or future).

I personally do not possess the knowledge of the higher mathematics necessary to prove this to the same degree that Bill Poirier and other scientists have done with the positional theorems – but I invite that exercise if anyone has such skills. As a thought experiment, it seems as valid as anything else that has been proposed to date.

So now, as was so eloquently said at the end of each episode of the “Twilight Zone”: we now return you to your regular program and station. Only perhaps slightly more confused… but also with some cracks in the rigid worldview of macroscopic explanations.

Lens Adaptors for iPhone4S: technical details, issues and usage

August 31, 2012 · by parasam

[Note: before moving on with this post, a comment on stupid spell-checkers… my blog writer (and even Microsoft Word!) insists that “adaptor” is a mis-spelling. Not so… an “adaptor” is a device that adapts one thing to another that would otherwise be incompatible, while an “adapter” is a person who adapts to new situations or environments… I’ve seen countless instances of mis-use… I fear that even educated users are deferring to software, assuming that it’s always correct. The number of flat-out wrong instances of both spelling and grammar in most major software applications is actually scary…]

Okay, now for the good stuff… While the iPhone (in all models) is a fantastic camera for a cellphone, it does have many limitations, some of which have been discussed in previous articles in this blog. The one we’ll address today is the fixed field-of-view (FOV) of the camera lens. Most users are familiar with the 35mm SLR (Single Lens Reflex) – or, if you are young enough to never have used film, the DSLR (Digital SLR) – and so have at least an acquaintance with the relative FOV of different focal length lenses. As a quick review, the so-called “normal” lens for a 35mm sensor size is a 50mm focal length. Anything less than that is termed a “wide angle” lens, anything greater than that is termed a “telephoto” lens. This is a somewhat loose description, and at very small focal lengths (which lead to a very wide angle of view) the terminology changes to a “fisheye” lens. For a more detailed explanation of focal length and other issues please see my original post on the iPhone4S camera “Basic Overview” here.

Overview

The lens that is part of the iPhone4S camera system is a fixed aperture / fixed focal length lens. The aperture is set at f2.4 while the 35mm equivalent focal length of the lens is 32mm – a moderately wide angle lens. The FOV (Field of View) for this lens is 62° [for still photos], 46° [for video]. {Note: since the video mode of 1920×1080 pixels is smaller than the sensor size used for still photos (3264×2448) the angle of view changes with the focal length held constant} The fixed FOV (i.e. not a zoom lens) affects composition of the image, as well as depth of field. A quick note: yes the iPhone (and most other cellphone cameras) have a “zoom” function, but this is a so-called “digital zoom” which is achieved by cropping and magnifying a small portion of the original image as captured on the sensor. This produces a poor quality image that has low resolution, and is avoided for any serious photography. A true zoom lens (sometimes called ‘optical zoom’) achieves this function by mechanically changing the focal length – something that is impossible to engineer for a cellphone. As a rule of thumb, the smaller the focal length, the greater the depth of field (the areas of the image that are in focus, in relation to the distance from the lens); and the greater the field of view (how much of the total scene fits into the captured image).
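If you want to see how focal length maps to angle of view, the relationship for a rectilinear lens is simply 2·atan(frame dimension / 2×focal length). Here is a minimal Python sketch using the 35mm-equivalent figures quoted above (the published 62°/46° iPhone numbers are measured on the actual sensor and depend on which frame dimension is used, so this simplified calculation won’t match them exactly):

```python
import math

def field_of_view_deg(focal_length_mm: float, frame_dimension_mm: float) -> float:
    """Angle of view of a rectilinear lens: 2 * atan(d / 2f)."""
    return math.degrees(2.0 * math.atan(frame_dimension_mm / (2.0 * focal_length_mm)))

FULL_FRAME_WIDTH = 36.0   # mm, the width of a 35mm film frame

print(field_of_view_deg(32, FULL_FRAME_WIDTH))   # ~59 deg - the stock iPhone4S lens (equivalent)
print(field_of_view_deg(50, FULL_FRAME_WIDTH))   # ~40 deg - a "normal" lens
print(field_of_view_deg(19, FULL_FRAME_WIDTH))   # ~87 deg - the wide-angle adaptor discussed below
print(field_of_view_deg(64, FULL_FRAME_WIDTH))   # ~31 deg - the 2x telephoto adaptor (32mm x 2)
```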

In order to add some variety to the compositional choices afforded by the fixed iPhone lens, the only option is to fit external adaptor lenses to the iPhone. There are several manufacturers that offer these, using a variety of mechanical devices to mount the lens. There are two basic divisions of adaptor type: those that provide external lenses and the mounting hardware; and those that provide a mechanical adaptor to use commonly available 35mm lenses with the iPhone. One example of an adaptor for 35mm lenses is here, while an example of lens+mount is here.

I personally don’t find a use for adapting 35mm lenses to the iPhone:  if I am going to deal with the bulk of a full sized lens then I will always choose to attach a real camera body and take advantage of the resolution and control that a full purpose-built camera provides. Not everyone may share this sentiment, and for those that find this useful there are several adaptors available. I do shoot a lot with the iPhone, and found that I did really want to have a relatively small and lightweight set of adaptor lenses to offer more choice in framing an image. I researched the several vendors offering such devices, and for my personal use I chose the iPro lens system manufactured by Schneider Optics. I made this choice based on two primary factors:  I had prior experience with lenses made by Schneider (their unparalleled Super Angulon wide angle for my view camera), and the precision, quality and versatility of the iPro system. This is a personal choice – ultimately any user will find what works for them – but the principles discussed here will apply to any external adaptor lens. As I have mentioned in previous posts, I am not a professional reviewer, have no relationship with any hardware or software vendor (other than the support offered as an end user), and have no commercial interest in any product I mention in this blog. I pick what I like, then write about it.

I do want to point out, however, that once I started using the iPro lenses and had some questions, I received a great deal of time and assistance from the staff at Schneider Optics, particularly Niki Mustain. I would like to thank her and all the staff who so generously answered my incessant questions, and did a fair amount of additional research and testing prompted by some of my observations. They kindly made available an internal report on iPro lens performance and its interactions with the iPhone camera (some of these issues are discussed below). When and if they make that public (likely as an application note on their website) I will update this blog with a comment pointing to it; in the meantime they have allowed me to use some of their comments on the general technology and limitations of any adaptor lens system as background for this post.

Technical specs on the iPro lens adaptor system

This particular system offers three different adaptor lenses (they can be purchased individually or as a set): Wide Angle, Telephoto and Fisheye. Here are the basic specifications:

As can be seen from the above details, the Telephoto is a 2X magnification, doubling the focal length and halving the FOV (Field of View). The Wide Angle changes the stock medium wide-angle view of the iPhone to a “very wide” wide angle (19mm equivalent – about the widest FOV provided by most variable focal length** 35mm lenses). The Fisheye offers what I would consider a ‘medium’ fisheye look, with a 12mm equivalent focal length. With fisheye lenses generally accepted as having focal lengths of 18mm or less, this falls about midway between 6mm*** and 18mm.

**There is a difference between “variable focal length” and “zoom” lenses, although most people use the terms interchangeably, unaware of the distinction between the two. A variable focal length lens allows a continuous change of focal length, but once the new focal length is established, the image must be refocused. A true zoom lens will maintain focus throughout the entire range of focal lengths allowed by the lens design. Obviously a true zoom lens is more difficult (and therefore more costly) to manufacture. Typically, zoom lenses are larger and heavier than variable focal length lenses. It is also more difficult to create such a lens with a wide aperture (low f/stop number). To give an example, you can purchase a reasonable 70-200mm zoom lens for about $200 (with a maximum aperture of f5.6); a high quality zoom lens of the same range (70-200mm) that opens up to f2.8 will run about $2,500.

Another thing to keep in mind is that most ‘variable focal length’ lenses are not advertised as such; they are often marketed as zoom lenses, but careful testing will show that accurate focus is not maintained throughout the full range of focal lengths. Not surprising, as this is a difficult optical feat to do well, which is why high quality zoom lenses cost so much. Really good HD video or cinematography zoom lenses with an extremely wide range (often used for sports television – for example the Canon DigiSuper 80 with a zoom range of 8.8 to 710mm) can cost upwards of $163,000. Warning: playing with one of these for a few days will produce depression and optical frustration once you return to ‘normal’ inexpensive zoom lenses… A good lens is simply the most important factor in getting a great image. Period.

*** The extreme wide end of fisheye lenses is held by the Nikkor 6mm/f2.8 which is a masterpiece of engineering. With an almost insane 220° FOV, this is the widest lens for 35mm cameras of which I am aware. You won’t find this in your local camera shop however, only a few hundred were ever made – during the 1970s – 1980s. The last time one went on auction (in the UK in April 2012) it sold for just over $160,000. The objective lens is a bit over 236mm (9.25″) in diameter! Here are a few pix of this awesome lens:

actual image taken with 6mm f2.8 Nikkor fisheye

Ok, back to reality (both size and wallet-wise…)

Here are some images of my iPro lenses to give the reader a better idea of the devices which we’ll be discussing further:

The 3-lens iPro kit fully assembled for carrying/storage.

An ‘exploded view’ of all the bits that make up the 3-lens iPro system.

The Fisheye, Telephoto and Wide Angle iPro lenses.

Front view of iPro case mounted to iPhone4S, showing the attached tripod adaptor.

Rear view of the iPro case mounted on iPhone4S.

Close-up of the bayonet lens mounting feature of the iPro case.

2X Telephoto mounted on iPhone.

WideAngle lens mounted on iPhone.

Fisheye lens mounted on iPhone.

Basic use of the iPro lens system

The essential parts of the iPro lens system are the case, which allows precision alignment of the lens with the iPhone camera, and the detachable lens elements themselves. As we will discuss below, the precision and accuracy of mounting an external adaptor lens is crucial to good optical performance. It may seem trivial, but the material and case design are an important part of the overall performance of this adaptor lens system. Due to the necessary rigidity of the case material, once it is installed on the iPhone it is not the easiest to remove… I missed this important part of the instructions provided: you must attach the tripod adaptor to the case body to provide the additional leverage needed to slightly flex the case for removal. (The hole in the rear of the case that shows the Apple logo is actually a critical design element: that is where you push with a finger of your opposite hand while flexing the case in order to pop the phone out of the case.)

In addition to providing the necessary means for taking the iPhone out of the case if you should need to (and you really won’t: I found that this case works just fine as an everyday shell for the phone, protecting the edges, insulating the metallic sideband to avoid the infamous ‘hand soaking up microwaves dropped call iPhone effect’, and is slim enough that it fits perfectly in my belt-mounted carrying case), the tripod mounting screw provides a very important improvement for iPhonography: stability. Even if you don’t use any of the adaptor lenses, the ability to affix the phone to a tripod (or even a small mono-pod) is a boon to getting better photographs with the iPhone. Rather than bore you with various laws of physics and optic science, just know that the smaller the sensor, the more a resultant image is affected by camera movement. The simple truth is that the very small sensor size of the iPhone camera, coupled with the light weight and small case size of the phone, means that most users unconsciously jiggle the camera a lot when taking an image. This is the single greatest reason for lack of sharpness in iPhone images. To compound things, the smaller the sensor size, the less sensitive it is for gathering light, which means that often, in virtually anything but direct sunlight, the iPhone is shooting at relatively slow shutter speeds, which only exaggerates camera movement.

Since the EXIF data (camera image metadata) is collected with each shot, you can see afterwards what shutter speed was used by the iPhone on each of your shots. The range of shutter speeds on the iPhone4S is from 1/15 sec to 1/2000 sec. Any shutter speed slower than 1/250 sec will show some blurring if the camera moves at all during the shot. So, whenever possible, brace your phone against a rigid object when shooting, particularly in partial shade or darker surroundings. Since often a suitable fence post, lamp pole or other object is not right where you need it for your shot, the ability to use some form of tripod will often provide a superior result for your image.
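If you’re curious which of your shots were at risk, the shutter speed is right there in the EXIF data. Here is a minimal Python sketch (assuming the Pillow library is installed; “IMG_0123.JPG” is just a placeholder filename) that pulls the exposure time out of a JPEG and flags anything slower than 1/250 sec:

```python
from PIL import Image

# _getexif() is Pillow's JPEG helper that returns the merged EXIF tags as a dict
# keyed by numeric tag id; 33434 is ExposureTime, 34855 is ISOSpeedRatings.
exif = Image.open("IMG_0123.JPG")._getexif() or {}

exposure = exif.get(33434)   # seconds, e.g. 0.004 for 1/250 s
iso = exif.get(34855)

print(f"shutter: {exposure} s, ISO: {iso}")
if exposure is not None and float(exposure) > 1.0 / 250.0:
    print("Slower than 1/250 sec - brace the phone or use a tripod next time.")
```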

The adaptor lenses themselves twist into the case with a simple bayonet mount. As usual with any fine optics, take care to avoid dropping, scratching or otherwise damaging the delicate optical surfaces of the lenses. The telephoto lens will most benefit from tripod use (when possible), as the narrower the angle of view, the more pronounced camera shake is on the image. On the other hand, the fisheye lens can be handheld for most work with no visible impairment. A note on use of the fisheye lens:  the FOV is so wide that it’s easy for your hand to end up in the image… take some care and practice with how you hold the phone when using this lens.

Optical issues with adaptor lenses, including the iPro lens system

After using the adaptor lenses for a short time, I found several impairments in the images taken. Essentially the artifacts result in a lack of sharpness towards the edge of the image, and color fringing of certain objects near the edge of the frame. I went on to perform extensive tests of each of the lenses and then forwarded my concerns to the staff at Schneider Optics. To my pleasure, they were open to my concerns, and performed a number of tests in their own lab as well. While I will discuss the details below, the bottom line is that both the iPro team and I agree that external adaptor lenses are not a perfect science, particularly with the iPhone. We must remember, for all the fantastic capabilities that this device exhibits… it’s a bloody cellphone! I have every confidence that Schneider (and probably other vendors as well) have made every effort, within the scope of practicality and budget for such lenses, to minimize the side-effects. I have found that the actual optical precision of the iPro lenses (as measured for such things as MTF [Modulation Transfer Function – an objective measurement of the resolving capability of a lens system], illumination fall-off, chromatic and geometric aberrations, optical alignment and contrast ratio) is excellent – particularly for lenses that are really quite inexpensive relative to their quality.

The real issue lies with the iPhone camera system itself: Apple never designed this camera to interoperate with external adaptor lenses. One cannot fault the original manufacturer for attempting to produce a piece of hardware that offers good performance at a reasonable price within a self-contained system. The iPhone designers have treated the totality of the hardware and software of the camera system as a fixed and closed universe. This is typical of the way that Apple designs both their hardware and software. There are both pros and cons to this philosophy:  the strong advantage is the ability to blend design characteristics of both hardware and software to mutually complement each other in the effort to meet design criteria with a time/cost budget; the disadvantage is the lack of easy adaptability in many cases for external hardware or software to easily interoperate with Apple products. For example, the software development guidelines for Apple devices are the most stringent in the entire industry. You work within the framework provided, or you don’t get approval for your app. Every app intended for any iDevice must be submitted to Apple directly for testing and approval. This is virtually unique in the entire computer/cellphone industry. (I’m obviously not talking about the gray area of ‘jailbroken’ phones and software).

The way in which this design philosophy shows up in relation to external adaptor lenses is this: the iPhone camera is an amazingly good camera for its size, cost and weight, but it was never designed to be complementary to external lenses. Certain design choices that are not evident when images are taken with the native camera show up, sometimes rather glaringly, when external lenses are coupled with the iPhone camera. One might say that latent issues in the lens and sensor design are significantly amplified by external adaptor lenses. This issue is endemic to any external lens, not just the iPro lenses I am discussing here. Each one will of course have its own unique ‘fingerprint’ of interaction with the iPhone camera, but the general issues discussed will be the same.

As usual, I bring all this up to share with my readers the best information I can find or develop in the pursuit of what’s realistically possible with this great little camera. The better we know the capabilities and limitations of our tools, the better able we are to make the images we want. I have taken some great shots with these adaptor lenses that would have been impossible to create any other way. I can live with the distortions introduced as a compromise to get the kind of shot that I want. The more aware I am of what the issues are, the better I can try (while composing a shot) to minimize the visibility of some of these artifacts.

To get started, here are some example shots:

[Note:  all shots are unretouched from the iPhone camera, the only adjustment is resizing to fit the constraints of this blog format]

iPhone4 Normal (no adaptor lens)

iPhone4 WideAngle adaptor lens

iPhone4 Fisheye adaptor lens

iPhone4 Telephoto adaptor lens

iPhone4S #1 Normal

iPhone4S #1 WideAngle

iPhone4S #1 Fisheye

iPhone4S #1 Telephoto

iPhone4S #2 Normal

iPhone4S #2 WideAngle

iPhone4S #2 Fisheye

iPhone4S #2 Telephoto

The above shots were taken to test one of the first potential causes for the artifacts in the images: the softening towards the edges as well as the color fringing of bright areas near the edge of the image (chromatic aberration). A big potential issue with externally mounted adaptor lenses for the iPhone is lens alignment. The iPhone lens is physically aligned to the sensor as part of the entire camera assembly. This unitary assembly is then inserted into the case during final manufacture of the device. Since Apple never considered the use of external adaptor lenses, no effort was made to ensure perfect alignment of the camera assembly into the case. As can be seen from my blog on the iPhone hardware (showing detailed images of an iPhone torn apart), the camera assembly is simply pressed into place – there is no precision mechanical lock to align the optical axis of the camera with the case. In addition, the actual camera lens is protected by being installed behind a clear plastic window that is part of the outer case itself.

What this means is that if the camera assembly is tilted even very slightly it will produce a “tilt-shift” de-focus effect when coupled with an external lens:  the center of the image will be in focus, but both edges will be out of focus. One side will actually be focused a bit behind the sensor plane, the other side will be focused a bit in front of the sensor plane.

The above diagram represents an extreme example, but you can see that if the lens is tilted in relation to the image sensor plane, the plane of focus changes. Objects at the edge of the frame will no longer be in focus, while objects in the center of the frame will remain in focus.

In order to rule this out as a cause in my tests, I used three separate iPhones (one iPhone4 and two iPhone4S models). While not a large sample statistically, it did provide some confidence that the issues I was observing were not related to a single iPhone. You can see from the examples above that all of the adaptor lens shots exhibit some degree of the two artifacts (defocused edges and chromatic aberration). So further investigation was required in order to attempt to understand the root cause of these distortions.

Since the first set of test shots was not overly ‘scientific’ (back yard), I was advised by the staff at Schneider that a brick wall was a good test subject. It was easy to visualize the truth of this, so I went off in search of a large public test chart (brick wall…)

WideAngle taken from 15 ft.

Fisheye taken from 15 ft.

Telephoto taken from 15 ft.

To add some control to the shots, and reduce potential errors of camera movement that may affect sharpness in the image, the above and all subsequent test shots were taken while the iPhone was mounted on a stable tripod. In addition, each shot was taken from exactly the same camera position (in the above shots, 15 feet from the wall). Two things stood out here: 1) there was a lack of visible chromatic aberration [I think likely due to the flat lighting on the wall and lack of high contrast edges, which typically enhance that form of artifact]; and 2) the soft focus artifact is more pronounced on the left and right sides as opposed to the top and bottom edges. [More on why I think this may occur later in this article].

WideAngle, 8 ft.

Fisheye, 8 ft.

WideAngle, 30 ft.

Fisheye, 30 ft.

Telephoto, 30 ft.

WideAngle, 50 ft.

Fisheye, 50 ft.

Telephoto, 150 ft.

Telephoto, 150 ft.

Telephoto, 500 ft.

The above set of images represented the next test series of shots. Here, various distances to the “test chart” [this time I needed even a larger ‘chart’ so had to find a 3-story brick building…] were used in order to see what effect that may have on the resultant image. A few ‘real world’ images were shot using just the telephoto at long distances – here the large distance from camera to subject, using a telephoto lens, would normally result in a completely ‘flat’ image with everything in the same focal plane. Once again, we continue to see soft focus and chromatic aberrations at the edges.

Normal (no adaptor), auto-focus

Normal, Selective Focus

WideAngle, Auto Focus

WideAngle, Selective Focus

Fisheye, Auto Focus

Fisheye, Selective Focus

Telephoto, Auto Focus

Telephoto, Selective Focus

This last set of test shots was suggested by the Schneider staff, based on some tests they ran and subsequent discussions. One theory is that there is a difference in how the iPhone camera internal software (firmware + OS kernel software – not anything a camera app developer has access to) handles auto-focus vs selective-focus. Selective focus is where the user can select the focus area, usually with a little square that can be moved to different parts of the image. In all the above tests, the selective focus area was set to the center of the image. In theory, since my test images were flat and all at the same distance from the camera, there should have been no difference between auto-focus and selective-focus, no matter which lens was used. Careful examination of the above images shows an inconsistent result: the fisheye showed no difference between the two focus modes, the normal and telephoto looked better with selective focus, while the wideangle looked best when auto focus was applied.

The internal test report I received from Schneider pointed out another potential anomaly, one I have not yet had time to attempt to reproduce: using selective focus off-center in the image. This usage appeared to generate results that would be unexpected in normal photographic work: the area of selective focus was sharp, most of the rest of the image was a bit softer, but a mirror image position of the original selective focus region was once again sharp on the opposite side of the image. This does seem to clearly point to some image-enhancement algorithms behaving in an unexpected fashion.

The issue of auto-focus methods is a bit beyond the scope of this article, but considerable research suggests that the most likely methodology used in the iPhone camera is passive detection (that much is certain – there is no range finder on an iPhone!) driving an adjustment of the lens barrel or of a lens element. A large number of vendors support this form of auto-focus (and here I mean ‘not manual focus’, since there is no mechanical focus ring on cellphones): the ‘auto-focus’ can either be entirely automatic [as I use the term “auto-focus” in my tests above] or selective-area auto-focus, where the user indicates a region of the image on which the auto-focus is concentrated. One of the most advanced methods is MEMS (Micro-Electro-Mechanical Systems), which moves a single optical element within the lens barrel; another popular method is the ‘voice-coil’ micro-motor, which moves the entire lens barrel to effect focus.

With the advances brought to bear with iOS5, including face area recognition (the camera attempts to recognize faces in the image and focus on those when in full auto-focus mode), it is apparent that significant image recognition and processing are being done at the kernel level, before any camera app ‘gets its hands on’ the camera controls. The bottom line is that the passive-detection and image-processing algorithms may well be affected by the presence of an external adaptor lens – something the iPhone software never expects. Another way to put this is that the internal software of the camera is likely ‘tuned’ to the lens that is part of the camera assembly, and a significant change to the optical pattern drawn on the sensor (now that a telephoto adaptor is attached) alters the behavior of the focusing algorithm in an unexpected manner, producing the artifacts we see in the examples.

This issue is not at all unknown in engineering and quality control: a holistically designed system where all of the variables are thought to be known can be significantly degraded when even one element is externally modified without knowledge of the full scope of design parameters. This often occurs with after-market additions or changes to automobiles. One simple example: if you change the tire size (radius, not width) the speedometer is no longer accurate – the entire system of the car, including wheel and tire diameter, was part of the calculus for determining how many turns of the axle per minute (all the speedometer mechanism actually measures) are required to indicate a given speed in kph (or mph) on the instrument panel.
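A quick sketch of that tire example (Python; the radii and speeds are purely illustrative numbers):

```python
def actual_speed_kph(indicated_kph: float, stock_tire_radius_m: float, new_tire_radius_m: float) -> float:
    """The speedometer counts axle turns calibrated for the stock rolling circumference,
    so the true speed scales with the ratio of the new circumference to the old one."""
    return indicated_kph * (new_tire_radius_m / stock_tire_radius_m)

# Fit tires 5% larger in radius than the car was calibrated for:
print(actual_speed_kph(100.0, 0.30, 0.315))   # speedometer reads 100 km/h, true speed ~105 km/h
```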

Another factor that may have a material effect on the focus and observed chromatic aberration is the lens design itself, and how an external adaptor lens may interact with the native design. Simple lenses are often portions of a sphere, so called “spherical lenses.”  Such a lens suffers from significant optical aberrations, as not all of the light rays that are focused by a spherical lens converge to a single point (producing a lack of sharp focus). Also, such lenses bend different colors of light differently, leading to chromatic aberrations (where one sees color fringing, usually blue/purple on one side of a high contrast object and green/yellow on the opposite side). Most high quality modern camera lenses are either aspherical (specially modified shapes that deviate away from a perfect spheroid shape) or groups of elements, some of which may be spherical and others aspherical. Several examples are shown below:

We know from published literature that the lens used in the iPhone4S is a 5 element lens system with at least several aspherical elements. A diagram released by Apple is shown below:

iPhone4 lens system [top] and iPhone4S lens system [bottom]
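As a side note on the “lenses bend different colors differently” point above, here is a small numeric illustration (Python) of how much the focus of a simple single-element lens shifts with color. The glass data are the standard published refractive indices for BK7; the 50mm surface radii are just a convenient example, and a real multi-element design like the iPhone’s corrects for most of this:

```python
def focal_length_mm(n: float, r1_mm: float, r2_mm: float) -> float:
    """Thin-lens 'lensmaker' equation: 1/f = (n - 1) * (1/R1 - 1/R2)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

# BK7 glass at the standard Fraunhofer lines (blue F, yellow d, red C):
for label, n in (("blue  486nm", 1.5224), ("yellow 588nm", 1.5168), ("red   656nm", 1.5143)):
    print(label, round(focal_length_mm(n, 50.0, -50.0), 2), "mm")

# The blue focus lands ~0.75 mm closer than the red for this simple lens -
# that spread is exactly what shows up as color fringing at high-contrast edges.
```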

Again, as described earlier, the iPhone camera system was designed as a unitary system, with factors from the lens system, the individual lens elements, the sensor, firmware and kernel software all becoming known variables in a highly complex opto-electronic equation. The introduction of an external adaptor array of additional elements can produce unplanned effects. All in all, the various vendors of such adaptor lenses, including iPro, have done a good job in dealing with many unknowns. Apple is a highly secretive manufacturer, and does not publish much information. Attempts to gain further technical knowledge are very difficult; at some point one invariably comes up against Apple’s draconian NDAs (Non-Disclosure Agreements), which carry penalties large enough to deter even the most aggressive seekers of information. Even the accumulation of knowledge that I have acquired over the past year while writing about the iPhone has been slow and tedious, and has taken a tremendous amount of research and ‘fact comparison.’

As a final example, using a more real-world subject, here are a few camera images and screen shots that demonstrate the challenge if one attempts to correct, using post-production techniques, some of the errors introduced by such a lens adaptor:

Original image, unretouched but annotated.

The original image shows significant chromatic aberrations (color fringing) around the reflections in the shop window, grout lines in the brickwork on the pavement, and on the left side of the man’s shirt.

Editing using Photoshop ‘CameraRaw’ to attempt to correct the chromatic aberrations.

Using the Photoshop Camera Raw module, it is possible to manually correct for color fringing shifts… but this affects the entire image. So a fix for the edges causes a new set of errors in the middle of the image.
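To see why that global behavior is inherent to the technique, note that lateral chromatic aberration correction is essentially a slight re-scaling of the red and blue channels relative to green, applied to the whole frame at once. Here is a rough illustrative sketch (Python with Pillow; the filename and the scale factors are made-up example values, and real raw converters do this far more carefully):

```python
from PIL import Image

def scale_channel(channel: Image.Image, factor: float) -> Image.Image:
    """Rescale a single channel about the image centre, then crop back to the original size."""
    w, h = channel.size
    scaled = channel.resize((round(w * factor), round(h * factor)))
    left, top = (scaled.width - w) // 2, (scaled.height - h) // 2
    return scaled.crop((left, top, left + w, top + h))

img = Image.open("fringed_example.jpg").convert("RGB")   # placeholder filename
r, g, b = img.split()

# Nudge red and blue by different (tiny) amounts relative to green; the right
# factors depend entirely on the lens, and the same shift is applied everywhere,
# which is why fixing the edges can introduce new fringes nearer the centre.
corrected = Image.merge("RGB", (scale_channel(r, 1.003), g, scale_channel(b, 1.001)))
corrected.save("defringed_example.jpg")
```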

Chromatic aberrations removed from around the reflections in the window…

Notice here that the color fringing is gone from around the bright reflections in the window, but now the left edge of the man’s shirt has the color shifted, leaving only the monochromatic outline behind, producing a dark gray edge instead of the uniform blue that should exist.

…but reciprocal chromatic edge errors are introduced in the central portion of the image where highly saturated colors abut more neutral areas.

Likewise, the green paint on the steel column has shifted, revealing a gray line on the right of the woman’s leg, with a corresponding shift of the flesh tone onto the green steelwork on the left side of her leg.

final retouched shot after ‘painting in’ was performed to resolve the chroma offset errors in the central portion of the image.

To fix all these new errors, a technique known as ‘painting in’ was used: sampling and filling the color errors with the correct shade, texture and intensity. This takes time, skill and patience. It is impractical for the most part – it was done here only as an example.

Summary

The use of external adaptor lenses, including the iPro system discussed here, can offer a useful extension to the creative composition of images with the iPhone. Such lenses bring a set of compromises with them, but hopefully, once these are known, careful choice of lighting, camera position and other factors can be used to reduce the visibility of such effects. As with any ‘creative device’, less is often more… sparing use of such adaptors will likely bring the best results. However, there are shots that I have obtained with the iPro that would have been impossible with the basic iPhone camera/lens, so I’m happy to have this additional tool.

To close, here are a few more examples using the iPro lenses:

Fisheye

WideAngle

Telephoto

Why do musicians have lousy music systems?

August 18, 2012 · by parasam

[NOTE: this article is a repost of an e-mail thread started by a good friend of mine. It raised an interesting question, and I found the answers and comments fascinating and wanted to share with you. The original thread has been slightly edited for continuity and presentation here].

To begin, the original post that started this discussion:

Why do musicians have lousy hi-fis?

It’s one of life’s little mysteries, but most musicians have the crappiest stereo systems.

  by Steve Guttenberg

August 11, 2012 7:36 AM PDT

I know it doesn’t make sense, but it’s true: most musicians don’t have good hi-fis.

To be fair, most musicians don’t have hi-fis at all, because like most people musicians listen in their cars, on computers, or with cheap headphones. Musicians don’t have turntables, CD players, stereo amplifiers, and speakers. Granted, most musicians aren’t rich, so they’re more likely to invest whatever available cash they have in buying instruments. That’s understandable, but since they so rarely hear music over a decent system they’re pretty clueless about the sound of their recordings.

(Credit: Steve Guttenberg/CNET)

Musicians who are also audiophiles are rare, though I’ve met quite a few. Trumpet player Jon Faddis was definitely into it, and I found he had a great set of ears when he came to my apartment years ago to listen to his favorite Dizzy Gillespie recordings. Most musicians I’ve met at recording sessions focus on the sound of their own instrument, and how it stands out in the mix. They don’t seem all that interested in the sound of the group.

I remember a bass player at a jazz recording session who grew impatient with the time the engineer was taking to get the best possible sound from his 200-year-old-acoustic bass. After ten minutes the bassist asked the engineer to plug into a pickup on his instrument, so he wouldn’t take up any more time setting up the microphone. The engineer wasn’t thrilled with the idea, because he would then just have the generic sound of a pickup rather than the gorgeous sound of the instrument. I was amazed: the man probably paid $100,000 for his bass, and he didn’t care if its true sound was recorded or not. His performance was what mattered.

From what I’ve seen, musicians listen differently from everyone else. They focus on how well the music is being played, the structure of the music, and the production. The quality of the sound? Not so much!

Some musicians have home studios, but very few of today’s home (or professional) studios sound good in the audiophile sense. Studios use big pro monitor speakers designed to be hyperanalytical, so you hear all of even the most subtle details in the sound. That’s the top requirement, but listening for pleasure is not the same as monitoring. That’s not just my opinion — very, very few audiophiles use studio monitors at home. It’s not their large size or four-figure price tags that stop them, as most high-end audiophile speakers are bigger and more expensive. No, studio monitor sound has little appeal for the cognoscenti because pro speakers don’t sound good.

I have seen the big Bowers & Wilkins, Energy, ProAc, and Wilson audiophile speakers used by mastering engineers, so it does work the other way around. Audiophile speakers can be used as monitors, but I can’t name one pro monitor that has found widespread acceptance in the audiophile world.

Like I said, musicians rarely listen over any sort of decent hi-fi, and that might be part of the reason they make so few great-sounding records. They don’t know what they’re missing.

——–

Now, in order, the original comment and replies:  [due to the authors of these e-mails being located in USA, Sweden, UK, etc. not all of the timestamps line up, but the messages are in order]

From: Tom McMahon
Sent: Saturday, August 11, 2012 6:09 PM
To: Mikael Reichel; ‘Per Sjofors’; John Watkinson
Subject: Why do musicians have lousy hi-fis?

I agree to some of this, have same observations.

But I don’t agree with the use broad use of “most musicians” as it may be interpreted that it is the majority. Neither of us can know this. Neil Young evidently cares a lot.

However, the statement “pro speakers do not sound good” is a subjective statement. It’s like saying distilled water (i.e. 100% H2O) doesn’t taste good. Possibly many think so, but distilled water is the purest form of water, and by definition anything less pure is not pure water. Whether you like it or not.

The water is the messenger and shooting it for delivering the truth is not productive.

If audiophiles don’t like to hear the truth, it sort of deflates them.

A friend, a singer/songwriter with fifteen CDs behind her in the rock/blues genre, on a rare occasion when I got her to listen to her own stuff over a pair of Earo speakers, commented on the detail and realism. When I then suggested that her forthcoming CD should be mastered over these speakers, she replied “I don’t dare”.

Best/Mike

——-

From: John Watkinson
Sent: Sun 8/12/2012 6:46 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Hello All,

If a pro loudspeaker reproduces the input waveform and an audiophool [ed.note: letting this possible mis-spelling stand, in case it’s intended…] speaker also does, then why do they sound different?

We know the reasons, which are that practically no loudspeakers are accurate enough.  We have specialist speakers that fail in different ways.

The reason musicians are perceived to have lousy hi-fis may be that practically everyone does. The resultant imprinting means that my crap speaker is correct and your crap speaker is wrong, whereas in fact they are all crap.

Our author doesn’t seem to know any of this, so he is just wasting our time.

Furthermore I know plenty of musicians with good ears and good hi-fi.

Best,

John

——-

From: Mikael Reichel
Sent: Sun 8/12/2012 12:58 PM
To: John Watkinson; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Andrew is a really nice guy.

He has a talent for selecting demo material, and his TAD speakers sound quite good. But they are passive and also use bass-reflex. This more or less puts the attainable quality level against a brick wall. Add the soft dome tweeter and I am a bit surprised at Mr. Jones’ choices; dome tweeters are fundamentally flawed designs.

One logical result of making “new” drivers is to skip ferrite magnets, because they are a size and weight thief and also limit mechanical freedom for the design engineer. You almost automatically get higher sensitivity by using neodymium. But this too is something of a myth, as little is done to address the fundamental mismatch of the driver to the air itself. I would guess Andrew has had the good sense to go with neodymium magnets.

To deliver affordable speakers is a matter of having a strong brand to begin with, one that allows for volumes such that clients will buy without listening first. This then allows for direct delivery, removing the importer and retail links from the chain. Typically, out of the MSRP only 25% reaches the manufacturer. Remove the manufacturing cost and you realize it’s a numbers game.

This is exactly what is going on in the “audio” business today. The classical sales structures are being torn down. A very large number of speaker manufacturers are going to disappear because they don’t have the brand and volumes to sell over the web. To survive, new ways of reaching clients have to be invented. A true paradigm shift.

TAD has been the motor to provide this brand recognition and consumers are marketed to believe that they can get almost $80 performance from a less than $1 speaker, which is naïve.

If the speakers can be made active with DSP, they can be made to sound unbelievably good.  This is the real snapshot of the future.

/Mike

—-

From: John Watkinson
Sent: Sun 8/12/2012 11:13 PM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Hello All,

Mike is right. The combination of dome tweeter, bass reflex and passive crossover is a recipe for failure. But our journalist friend doesn’t know. I wonder what he does know?

Best,

John

——

From: Ed Elliott
Sent: Mon 8/13/2012 7:02 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; John Watkinson
Subject: Why do musicians have lousy hi-fis?

Hi Mike,

Well, this must be answered at several levels. Firstly the author has erred in two major, but unfortunately not at all uncommon ways:  the linguistic construction of “most <fill_in_the_blank>” is inaccurate and unscientific at the best of times, and all too often a device for aligning some margin of factuality to a desired hypothesis; the other issue is the very basis of the premise raised is left undefined in the article – what is “a good hi-fi system”?

Forgoing for the moment the gaps in logic and ontological reasoning that may be applied to the world of aural perception, the author does raise a most interesting question – one that, if it had been pursued in a different manner, would have made for a far more interesting article. Forgetting for the moment issues of cost or availability (a total red herring today – quality components have never been more affordable) – WHY don’t ‘most’ musicians apparently care to have ‘better’ sound systems? There is no argument that many musicians DO have excellent systems, at all levels of affordability, and appreciate the aural experience provided. However – and I personally have spent many decades closely connected to the professional audio industry, musicians in general, and the larger post-production community – I do agree that, based purely on anecdotal observation, many talented musicians in fact do not attach much importance to the expense or quality of their ‘retail playback equipment.’ The same of course is not valid for their instruments or any equipment they deem necessary to express their music.

The answer, I believe, is most interesting: good musicians simply don’t need a high quality audio system in order to hear music. The same synaptic wiring and neural fabric connectedness – the stuff that really is the “application layer” in the brain – means that this group of people actually ‘hears’ differently. Hearing, just like seeing, is almost 90% a neurological activity. Beyond the very basic mechanical issues of sound capture, focus, filtering and conversion from pressure waves to chemico-electrical impulses (provided by the ears, ear canal, eardrum, cochlea), all the rest of ‘hearing’ is provided by the ‘brain software.’

To cut to the chase: this group of people already has a highly refined ‘sample set’ of musical notes, harmonies, melodies, rhythms, etc. in their brains, and needs very little external stimulation in order to ‘fire off’ those internalized ‘letters and words’ of musical sound. Just as an inexperienced reader may ‘read’ individual words while a highly competent and experienced reader digests entire sentences as a single optic-with-meaning element, so a lay person may actually need a ‘better’ sound system in order to ‘hear’ the same things that a trained musician would hear.

That is not to say that musicians don’t hear – and appreciate – external acoustic reality: just try playing a bit out of tune, lag a few milliseconds on a lead guitar riff, or not express the same voice as the others in the brass section and you will get a quick lesson in just how acute their hearing is. It’s just tuned to different things. Once a composed song is underway, the merest hint of a well-known chord progression ‘fires off’ that experience in the musician’s brain software – so they ‘hear’ it as it was intended – the harmonic distortion, the lack of coherent imaging, the flappy bass – all those ‘noise elements’ are filtered out by their brains – they already know what it’s supposed to sound like.

If you realize that someone like Anne-Sophie Mutter has most likely played over 100,000 hours of violin already in her life, and look at what this has done to her brain in terms of listening (forgoing for the moment the musculo-skeletal reprogramming that has turned her body into as much of a musical instrument as the Stradivarius) – you can see that there is not a single passage of classical stringed or piano music that is not already etched into her neural fabric at almost an atomic level.

With this level of ‘programming’ it just doesn’t take a lot of external stimulation in order for the brain to start ‘playing the music.’ Approaching this issue from a different but orthogonal point of view: a study of how hearing-impaired people ‘hear’ music is also revealing – as is the other side of that equation: those who have damaged or uncommon neural software for hearing. People in this realm include autistics (who often have an extreme sensitivity to sound); stroke victims; head trauma victims, etc. A study here shows that the ‘brain software’ is far more of an issue in terms of quality of hearing than the mechanics or objective scientific ‘quality’ (perhaps an oxymoron) of the acoustic pressure waves provided to the human ear.

Evelyn Glennie – profoundly deaf – is a master percussionist (we just saw her play at the Opening Ceremonies) – and has adapted ‘hearing’ into an astounding physical sense of vibration, including through her feet (she mostly plays barefoot for this reason). I would strongly encourage the reading of three short and highly informative letters she published on hearing, disabilities and professional music. Evelyn does not need, nor can she appreciate, DACs with only 0.0001% THD and sub-millisecond time-domain accuracy – but there is no question whatsoever that this woman hears music!

This may have been a bit of a round-about answer to the issues of why ‘most musicians’ may have what the author perceives to be ‘sub-optimal’ hi-fi systems – but I believe it more fully answers the larger question of aural perception. I for instance completely appreciate (to the limits of my ability as a listener – which are professional but not ‘golden ears’) the accuracy, imaging and clarity of high end sound systems (most of which are way beyond my budget for personal consumption); but the lack of such does not get in the way of my personal enjoyment of many musicians’ work – even if played back from my iPod. Maybe I have trained my own brain software just a little bit…

In closing, I would like to take an analogy from the still photographer’s world: this group, amateurs and professionals alike, puts an almost unbelievable level of importance on their kit – with various bits of hardware (and now software) taking either the blame or the glory for the quality (or lack thereof) of their images. My personal observation is that the eye/brain behind the viewfinder is responsible for about 93% of both the successes and failures of a given image to match the desired state. I think a very similar situation exists today in both ‘audiophile’ as well as ‘professional audio’ – it would be a welcome change to discuss facts, not fancy.

Warmest regards,

Ed

——-

From: John Watkinson
Sent: Mon 8/13/2012 12:50 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Hello All,

I think Ed has hit the nail on the head. It is generally true that people hear what they ought to hear and see what they ought to see, not what is actually there. It is not restricted to musicians, but they have refined it for music.

The consequences are that transistor radios and portable cassette recorders, which sound like strangled cats, were popular, as iPods with their MP3 distortion are today. Similarly, in photography, the Brownie and the Instamatic were popular, yet the realism or quality of the snaps was in the viewer’s mind. Most people are content to watch television sets that are grossly misadjusted, and they don’t see spelling mistakes.

I would go a little further than Ed’s erudite analysis and say that most people not only see and hear what they ought to, but they also think what they ought to, even if it defies logic. People in groups reach consensus, even if it is wrong, because the person who is right suffers peer pressure to conform else risk being ejected from the group. This is where urban myths that have no basis in physics come from. The result is that most decisions are emotive and science or physics will be ignored. Why else do 40 percent of Americans believe in Creation? I look forward to having problems with groups because it confirms that my ability to use logic is undiminished. Was it Wittgenstein who said what most people think doesn’t amount to much?

Marketing, that modern cancer, leaps on this human failing by playing on emotions to sell things. It follows that cars do not need advanced technology if they can be sold by draping them with scantily-clad women. Modern cars are still primitive because the technical requirements are distorted downwards by emotion. In contrast, Ed’s accurate observation that photographers obsess about their kit, as do audiophiles, illustrates that technical requirements can also be distorted upwards by emotion.

Marketing also preys on people to convince them that success depends on having all the right accessories and clothing for the endeavour. Look at all the stuff that sportsmen wear.

Whilst it would be nice for hi-fi magazines to discuss facts instead of fancy, I don’t see it happening as it gets in the way of the marketing.

Best,

John

——

From: Ed Elliott
Sent: Monday, August 13, 2012 8:11 PM
To: Mikael Reichel; Per Sjofors; Tom McMahon; John Watkinson
Subject: Why do musicians have lousy hi-fis?

Hi John, Mike, et al

Love your further comments, but I’m afraid that “marketing, that modern cancer” is a bit older than we would all like to think… one example that comes to mind is about 400-odd years old now – and actually represents one of the most powerful and enduring ‘brands’ ever to be promoted in Western culture: Shakespeare. Never mind that allusions to and adaptations of his plays have permeated our culture for hundreds of years – even in his own time, Shakespeare created, bonded with and nurtured his customer base. Now this was admittedly marketing in a more pure sense (you actually got something for your money) – but nonetheless repeat business was just as much of an issue then as now. Understanding his audience, knowing that both tragedy and comedy were required to build the dramatic tension that would bring crowds back for more; recognizing the capabilities and understanding of his audience so that they were stimulated but not puzzled, entertained but not insulted – there was a mastery of marketing there beyond just the tights, skulls and iambic pentameter.

Unfortunately I do agree that with time, marketing hype has diverged substantially from the underlying product, to the point that often they don’t share the same solar system… And what’s worse is that in many large corporations the marketing department now actually runs product development… And I love your comments on ‘stuff sportsmen wear’ – to extend my earlier analogy on photography, if I were to pack up all the things that the latest consumer photo magazine and camera shop said I needed to take a picture, I would need a band of Sherpas…

Now there is a bit of light ahead, potentially: the almost certain demise of most printed magazines (along with newspapers, etc.) is creating a tumultuous landscape that won’t stabilize right away. This means that whatever entities remain and survive to publish information no longer have to conform to selling X pages of ads to keep the magazine alive (and hence pander to marketing, etc.). There are two very interesting facts about digital publishing that to date have mostly been ignored (and IMHO are the root cause of digital mags being so poorly constructed and read – those who want to think that they can convert all their print readers to e-zine subscriptions need to check out multi-year retention stats – they are abysmal.)

#1 is that people read digital material in a very different way than paper. (The details must wait for another thread – too much for now.) Bottom line is that real information (aka CONTENT) is what keeps readership. Splash and video might get some hits, but the fickle-factor is astronomical in online reading – if you don’t give your reader useful facts or real entertainment they won’t stay.

#2 is that, if done correctly, digital publishing can be very effective, beautiful, evocative and compelling at a very low cost. There simply isn’t the need for massive ad dollars any more. So the type of information that you all are sharing here can be distributed much more widely than ever before. I do believe there is a window of opportunity for getting real info out in front of a large audience, to start chipping away at this Himalayan pile of stink that defines so much of (fill in the blank: audio, tv, cars, vitamins, anti-aging creams, etc.)

Ok, off to answer some e-mails for that dwindling supply of real importance: paying clients!

Many thanks

Ed

——–

From: John Watkinson
Sent: Tue 8/14/2012 12:57 AM
To: Mikael Reichel; Per Sjofors; Tom McMahon; Ed Elliott
Subject: Why do musicians have lousy hi-fis?

Dear Ed,

This is starting to be interesting. I take your point about Shakespeare being marketed, but if we want to go back even further, we have to look at religion as the oldest use of marketing. It’s actually remarkable that the religions managed to prosper to the point of being degenerate when they had no tangible product at all. Invent a problem, namely evil, and then sell a solution, namely a god. It’s a protection racket. Give us money so we can build cathedrals and you go to heaven. It makes oxygen free speaker cables seem fairly innocuous. At least the hi-fi industry doesn’t threaten you with torture. If you  read anything about the evolution of self-replicating viruses, suddenly you see why the Pope is opposed to contraception.

I read an interesting book about Chartres cathedral, in which it was pointed out that the engineering skills and the underlying science needed to make the place stand up (it’s more air than stone) had stemmed from a curiosity about the laws of nature that would ultimately lead to the conclusion that there was no Creation and no evidence for a god, that the earth goes round the sun and that virgin birth is due to people living in poverty sharing bathwater.

If you look at the achievements of hi-fi and religion in comparison to the achievements of science and engineering, the results are glaring. The first two have made no progress in decades, because they are based on marketing and have nothing to offer. Prayer didn’t defeat Hitler, but radar, supercharging and decryption may have played a small part.

Your comments about printed magazines and newspapers are pertinent. These are marketing tools and as a result the copy is seldom of any great merit, as Steve Gutenberg continues to demonstrate in his own way. Actually the same is true for television. People think the screensaver was a computer invention. Actually it’s not, it’s what television broadcasts between commercial breaks.

So yes, you are right that digital/internet publishing is in the process of pulling the rug on traditional media. Television is over. I don’t have one and I don’t miss the dumbed-down crap and the waste of time. Himalayan pile of stink is a wonderful and evocative term!

Actually services like eBay are changing the world as well. I hardly ever buy anything new if I can get one someone doesn’t want on eBay. It’s good for the vendor, for me and the environment.

In a sense the present slump/recession has been good in some ways. Certainly it has eroded peoples’ faith in politicians and bankers and the shortage of ready cash has led many to question consumerism.

Once you stop being a consumer, reverse the spiral and decide to tread lightly on the earth, the need to earn lots of money goes away. My carbon neutral house has zero energy bills and my  policy of buying old things and repairing them means I have all the gadgets I need, but without the cost. The time liberated by not needing to earn lots of money allows me to make things I can’t buy, like decent loudspeakers. It means I never buy pre-prepared food because I’m not short of time. Instead I can buy decent ingredients and know what I’m eating.

One of the experiences I treasure due to reversing the spiral was turning up at a gas station in Luxembourg. There must have been a couple of million dollars’ worth of pretentious cars filling up. BMW, Lexus, Mercedes, the lot. And they all turned to stare at my old Jaguar when I turned up. It was something they couldn’t have because they were too busy running on the treadmill to run a car that needs some fixing.

Best,

John

——

From: Ed Elliott
Sent: Wed 8/15/2012 1:01 PM
To: Mikael Reichel; Per Sjofors; Tom McMahon; John Watkinson
Subject: Why do musicians have lousy hi-fis?

Hi John,

Yes, I’m finding this part of my inbox so much more interesting than the chatterings of well-intentioned (but boring) missives; and of course the ubiquitous efforts of (who else!) the current transformation of tele-marketers into spam producers… I never knew that so many of my body parts needed either extending, flattening, bulking up, slimming down, etc. etc!

Ahh! Religion… yes, got that one right the first time. I actually find that there’s a more nefarious aspect to organized religion: to act as a proxy for nation-states that couldn’t get away with the murder, land grabs, misogyny, physical torture and mutilation if these practices were “state sponsored” as opposed to “expressions of religious freedom.”  Always makes me think of that Bob Dylan song, “With God on Our Side…”

On to marketing in television… and TV in general… I actually turned mine on the other day (well, since I don’t have a ‘real’ TV – but I do have the cable box, as I use that for high-speed internet – I turned on the little tuner in my laptop so I could watch the Olympics in HD; the bandwidth of the NBC streaming left something to be desired) – and as usual found the production quality and techniques used in the adverts mostly exceed the filler… The message, well, that went the way of all adverts: straight back out of my head into the ether… What I want to know – and this is a better trick than almost anything – is how the advertisers ever got convinced that watching this drivel actually affects what people buy?? Or am I just an odd-bod that is not swayed by hype, mesmerizing disinformation [if I buy those sunglasses I’ll get Giselle to come home with me…], or downright charlatanry.

And yay for fixing things and older cars… I bought my last car in 1991 and have found no reason [or desire] to replace it. And since (thank g-d) it was “pre-computer” it is still ‘fixable’ with things like screwdrivers and spanners… I think another issue in general is that our cultures have lost the understanding of ‘preventative maintenance’ – a lot of what ends up in the rubbish bin is due to lack of care while it was alive…

Which brings me back to a final point: I do like quality, high-tech equipment, when it does something useful and fulfills a purpose. But I see a disappointing tendency with one of the prime vendors in this sector: Apple. I am currently (in my blog) writing about the use of iPhones as a practical camera system for HD cinematography – with all of the issues and compromises well understood! It turns out that two of Apple’s fundamental design decisions are at the core of limiting the broader adoption of this platform (I describe how to work around them, but it’s challenging): the lack of a removable battery and of removable storage.

While there are obvious advantages to those decisions, in terms of reliability and industrial design, it can’t be ignored that the lack of both of those features certainly militates towards a user ‘upgrading’ at regular intervals (since they can’t swap out a battery or add more storage). And now they have migrated this ‘sealed design’ to the laptops… the new Mac Air is for all practical purposes unrepairable (again, no removable battery, the screen is glued into the aluminium case, and all sub-assemblies are wave-soldered to the main board). The construction of even the Mac Pro is moving in that direction.

So my trusty Dell laptop, with all of its warts, is still appreciated for its many little screws and flaps… when a bit breaks, I can take it apart and actually change out just a Bluetooth receiver, or upgrade memory, or even swap the cpu. Makes me feel a little less redundant in this throw-away world.

I’ll leave you with this:

Jay Leno driving down Olive Ave. last Sunday in his recently restored 1909 Doble & White steam car. At 103 years old, this car would qualify for all current California “low emissions” and “fuel efficiency” standards…

(snapped from my iPhone)

Here is the link to Jay’s videos on the restoration process.

Enjoy!

Ed

Ubiquitous Computational Fabric (UCF)

March 6, 2012 · by parasam

Ok, not every day do I get to coin a new term, but I think this is a good description for what I see coming. The latest thing in the news is “the PC is dead, long live the tablet…” Actually, all forms of current ‘computers’ – desktops, laptops, ultrabooks, tablets, smartphones, etc. – have a life expectancy just short of butter on pavement on a warm afternoon.

We have left the Model “T” days, to use an automotive analogy – where one had to be a trained mechanic to even think about driving a car – and moved on just a little bit.

Ford Model “T” (1910)

We are now at the equivalent of the Model “A” – a slight improvement.

Ford Model “A” (1931)

The user is still expected to understand things like: OS (Operating Systems), storage, apps, networking, WiFi security modes, printer drivers, etc. etc. The general expectation is that the user conform his or her behavior to the capabilities of the machine, not the other way around. Things we sort of take for granted – without question – are really archaic. Typing into keyboards as the primary interface. Dealing with a file system – or more likely the frustration that goes along with dealing with incompatible filing systems… Mac vs PC… To use the automobile for one more analogy: think how frustrating it would be to have to go to different gas stations depending on the type of car you had… because the nozzle on the gas pump would only fit certain cars!

A few “computational systems” today have actually achieved ‘user friendly’ status – but only with a very limited feature set, and this took many, many years to get there: the telephone is one good example. A 2-year-old can operate it without a manual. It works more or less the same anywhere in the world. In general, it is a highly reliable system. In terms of raw computational power, the world-wide telephone system is one of the most powerful computers on the planet. It has more raw bandwidth than the current ‘internet’ (not well utilized, but that’s a different issue).

We are now seeing “computers” embedded into a wide variety of items, from cars to planes to trains. Even our appliances have built-in touch screens. We are starting to have to redefine the term ‘computer’ – the edges are getting very fuzzy. Embedded sensors  are finding their way into clothing (from inventory control tags in department stores to LED fabric in some cutting edge fashions); pets (tracking chips); credit cards (so-called smart cards); the atmosphere (disposable sensors on small parachutes are dropped by plane or shot from mortars to gather weather data remotely); roads (this is what powers those great traffic maps) and on and on.

It is actually getting hard to find a piece of matter that is not connected in some way to some computing device. The power is more and more becoming ‘the cloud.’ Our way of interacting with computational power is changing as well:  we used to be ‘session based’ – we would sit down at a desktop computer and switch gears (and usually employ a number of well chosen expletives) to get the computer up and running, connected to a printer and the network, then proceed to input our problems and get results.

Now we are an ‘always on’ culture. We just pick up the smartphone and ask Siri “where the heck is…” and expect an answer – and get torqued when she doesn’t know or is out of touch with her cloud. Just as we expect a dial tone to always be there when we pick up the phone, we now expect the same from our ‘computers.’ The annoyance of waiting for a PC to boot up is one of several factors users cite for their attraction to tablets.

Another big change is the type of connectivity that we desire and expect. The telephone analogy points to an anachronistic form of communication: point-to-point. Although, with enough patience or the backup of extra software, you can speak with several people at once, the basic model of the phone system is one-to-one. The cloud model, Google, blogs, YouTube, Facebook, Twitter etc. has changed all that. We now expect to be part of the crowd. Instead of one-to-one we now want many-to-many.

Instead of a single thread joining one user to another, we now live in a fabric of highly interwoven connectivity.
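To put a rough figure on how differently that fabric scales compared with the old point-to-point model, here is a small illustrative calculation – the numbers are arbitrary; the shape of the growth is the point:

```python
# Illustrative only: a classic phone call joins one pair of users at a time,
# while a many-to-many fabric allows, in principle, every node to reach every
# other node -- n * (n - 1) / 2 distinct pairwise connections.
def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 10, 1_000, 1_000_000):
    print(f"{n:>9} nodes -> {pairwise_links(n):>18,} possible links")
```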

When we look ahead – and by this I mean ten years or so – we will see the extension of trends that are already well underway. Essentially the ‘computer’ will disappear – in all of its current forms. Yes, there will still be ‘portals’ where queries can be put to the cloud for answers; documents will still be written, photographs will still be manipulated, etc. – but the mechanisms will be more ‘appliance like’ – typically these portals will act like the handsets of today’s cellphone network – where 99% of the horsepower is in the backoffice and attached network.

This is what I mean by Ubiquitous Computational Fabric (UCF). It’s going to be an ‘always on’, ‘always there’ environment. The distinction of a separate ‘computer’ will disappear. Our clothing, our cars, our stoves, our roads, even our bodies will be ‘plugged in’ to the background of the cloud system.

There are already small pills you can swallow that have video cameras – your GI tract is video-ed and sent to your doctor while the pill moves through your body. No longer is an expensive and invasive endoscopy required. Of course today this is primitive, but in a decade we’ll swallow a ‘diagnostic’ pill along with our vitamins and many data points of our internal health will be automatically uploaded.

As you get ready to leave the bar, you’ll likely have to pop a little pill (required to be offered free of charge by the bar) that will measure your blood alcohol level and transmit approval to your car before it will start. Really. Research on this, and the accompanying legislation, is under way now.

The military is already experimenting with shirts that have a mesh of small wires embedded in the fabric. When a soldier is shot, the severing of the wires will pinpoint the wound location and automatically transmit this information to the medic.
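A purely hypothetical sketch of how such a garment might localize damage (no claim that any actual military system works this way): treat the shirt as a grid of row and column conductors, and let the intersection of the severed wires give the wound coordinates.

```python
# Hypothetical illustration of the wired-shirt idea: each row and column wire
# either still conducts (True) or has been severed (False). The intersection
# of the broken rows and columns localizes the damage.
def locate_wound(rows_ok: list[bool], cols_ok: list[bool]):
    broken_rows = [i for i, ok in enumerate(rows_ok) if not ok]
    broken_cols = [j for j, ok in enumerate(cols_ok) if not ok]
    if not broken_rows or not broken_cols:
        return None  # no wound detected
    # Report the centre of the damaged region (several wires may be cut).
    return (sum(broken_rows) // len(broken_rows),
            sum(broken_cols) // len(broken_cols))

# Example: row wires 4-5 and column wire 11 of a 16 x 16 mesh are severed.
rows_ok = [i not in (4, 5) for i in range(16)]
cols_ok = [j != 11 for j in range(16)]
print(locate_wound(rows_ok, cols_ok))   # -> (4, 11)
```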

Today, we have very expensive motion tracking suits that are used in computer animation to make fantasy movies.

Soon, little sensors will be embedded into normal sports clothing and all of an athlete’s motions will be recorded accurately for later study – or injury prevention.

One of the most difficult computational problems today – requiring the use of the planet’s most massive supercomputers – is weather prediction. The savings in human life and property damage (from hurricanes, tornadoes, tsunamis, earthquakes, etc.) can be staggering. One of the biggest problems is data input. We will see a massive improvement here with small intelligent sensors being dropped into formative storms to help determine if they will become dangerous. The same with undersea sensors, fault line sensors, etc.

The real winners of tomorrow’s business profits will be those companies that realize this is where the money will flow. Materials science, boring but crucial, will allow for economic dispersal of smart sensors. Really clever data transmission techniques are needed to funnel the collected information through often narrow pipes and difficult environments. ‘Spread-spectrum computing’ will be required to minimize energy usage and provide the truly reliable and available fabric that is needed. Continual understanding of human-factors design will be needed to allow the operation of these highly complex systems in an intuitive fashion.

We are at an exciting time: to use the auto one more time – there were early Ford engineers who could visualize Ferraris, even though the materials at the time could not support their vision. We need to support those people, those visionaries, those dreamers – for they will provide the expertise and plans to help us realize what is next. We have only scratched the surface of what’s possible.

How ‘where we are’ affects ‘what we see’..

February 17, 2012 · by parasam

I won’t often be reposting other blogs here in their entirety, but this is such a good example of a topic on which I will be posting shortly that I wanted to share it with you. “Contextual awareness” has been proven in many instances to color our perception, whether visual, auditory, smell, taste, etc.

Here’s the story:  (thanks to Josh Armour for his post that first caught my attention)

Care for another ‘urban legend’? This one has been verified as true by a couple of sources.
A man sat at a metro station in Washington DC and started to play the violin; it was a cold January morning. He played six Bach pieces for about 45 minutes. During that time, since it was rush hour, it was calculated that 1,100 people went through the station, most of them on their way to work.
Three minutes went by, and a middle-aged man noticed there was a musician playing. He slowed his pace, and stopped for a few seconds, and then hurried on to meet his schedule.
A minute later, the violinist received his first dollar tip: a woman threw the money in the till without stopping, and continued to walk.
A few minutes later, someone leaned against the wall to listen to him, but the man looked at his watch and started to walk again. Clearly he was late for work.
The one who paid the most attention was a 3-year-old boy. His mother tugged him along, hurried, but the kid stopped to look at the violinist. Finally, the mother pushed hard, and the child continued to walk, turning his head all the time. This action was repeated by several other children. All the parents, without exception, forced them to move on.
In the 45 minutes the musician played, only 6 people stopped and stayed for a while. About 20 gave him money, but continued to walk at their normal pace. He collected $32. When he finished playing and silence took over, no one noticed. No one applauded, nor was there any recognition.
No one knew this, but the violinist was Joshua Bell, one of the most talented musicians in the world. He had just played one of the most intricate pieces ever written, on a violin worth $3.5 million.
Two days before playing in the subway, Joshua Bell had sold out a theater in Boston where the seats averaged $100.
This is a real story. Joshua Bell playing incognito in the metro station was organized by the Washington Post as part of a social experiment about perception, taste, and people’s priorities. The questions were: in a commonplace environment, at an inappropriate hour, do we perceive beauty? Do we stop to appreciate it? Do we recognize talent in an unexpected context?
One of the possible conclusions from this experience could be:
If we do not have a moment to stop and listen to one of the best musicians in the world playing the best music ever written, how many other things are we missing?
Thanks to +Kyle Salewski for providing the actual video link here: Stop and Hear the Music
+Christine Jacinta Cabalo points out that Joshua Bell has this story on his website: http://www.joshuabell.com/news/pulitzer-prize-winning-washington-post-feature
http://www.snopes.com/music/artists/bell.asp