Archive For April, 2015

The Patriot Act – upcoming expiry of Section 215 and other unpatriotic rules…

April 18, 2015 · by parasam

On June 1, less than 45 days from now, a number of sections of the Patriot Act expire. The administration and a large part of our national security apparatus, including the Pentagon and Homeland Security, are strongly pushing for an extended renewal of these sections without modification.

While on the surface this may seem like something we should do (we need all the security we can get in these times of terrorism, Chinese/North Korean/WhoKnows hacks, etc. – right?), the reality is significantly different. Many of the sections of the Patriot Act (including ones that are already in force and do not expire for many years to come) are insidious: they give almost unlimited and unprecedented surveillance powers to our government (and, by the way, to any private contractors the government hires to help with this task), and they operate mostly without functional oversight or accountability.

Details of the particular sections up for renewal may be found in this article, and for a humorous and allegorical take on Section 215 (the so-called “Library Records” provision) I highly recommend this John Oliver video. While the full Patriot Act is huge, and covers an exhaustingly broad scope of activities permitted to the government (meaning its various security agencies, including but not limited to the CIA, FBI, NSA and Joint Military Intelligence Services), the sections of particular interest for the digital security of communications are the following:

  • Sections 201, 202 – ability to intercept communications (phone, e-mail, internet, etc.).
  • Section 206 – roving wiretap (ability to wiretap all locations that a person may have visited or communicated from, for up to a year).
  • Section 215 – the so-called “Library Records” provision, basically allowing the government (NSA) to bulk-collect communications from virtually everyone and store them for later ‘research’ to see whether any terrorist activity – or other activity deemed to violate national security interests – can be found.
  • Section 216 – pen register / trap and trace (the ability to collect metadata and/or actual telephone conversations – metadata does not require a specific warrant; recording the content of conversations does).
  • Section 217 – computer communications interception (ability to monitor a user’s web activity, communications, etc.).
  • Section 225 – immunity from prosecution for compliance with wiretaps or other surveillance activity (essentially protects police departments, private contractors, or anyone else the government instructs/hires to assist in surveillance).
  • Section 702 – surveillance of ‘foreigners’ located abroad (in principle this should restrict surveillance to foreign nationals outside the US at the time of such action, but there is much gray area concerning exactly who is a ‘foreigner’ – for instance, is the foreign-born wife of a US citizen a “foreigner”, and if so, are communications between the wife and the husband allowed?).

Why is this Act so problematic?

As with many things in life, the “law of unintended consequences” can often overshadow the original problem. In this case, the original rationale – wanting to get all the info possible about persons or groups that may be planning terrorist activities against the USA – was potentially noble, but the unprecedented powers and lack of accountability provided for by the Patriot Act have the potential (and in fact have already been shown) to scuttle many of the individual freedoms that form the basis of our society.

Without regard to the methods or justification for his actions, the revelations provided by Ed Snowden’s leaks about the current and past practices of the NSA are highly informative. This issue is now public, and cannot be ‘un-known’. What is clearly documented is that the NSA (and, as has since come to light, other entities) has extended surveillance over millions of US citizens living within the domestic US to a far greater extent than even the original authors of the Patriot Act envisioned – a point confirmed in multiple recent TV interviews.

The next major issue is that of ‘data creep’: such data, once collected, almost always gets replicated into other databases and never really goes away. In theory, to take Section 702 as an example, data retention even for ‘actionable surveillance of foreign nationals’ is limited to one year, and inadvertently collected surveillance data on US nationals – or even on a foreign national who has travelled within the borders of the USA – is supposed to be deleted immediately. But absolutely no instruction or methodology is given for how to do this, no controls are put in place to ensure compliance, and no audit powers are given to any other governmental agency.

As we have seen in past discussions of data retention and deletion at the big social-media firms (Facebook, Google, Twitter, etc.), it’s very difficult to actually delete data permanently. Firstly, in spite of what appears to be an easy step, actually deleting your data from Facebook is incredibly hard to do (what appears easy is just the deactivation of your account; permanently deleting data is a whole different exercise). On top of that, all these firms (and the NSA is no different) make backups of all their server data for protection and business continuity. One would have to search and compare every past backup to ensure your data was deleted from those as well.

And even the backups have backups… it’s considered an IT ‘best practice’ to back up critical information across different geographical locations in case of disaster. You can see the scope of this problem… and once you understand that the NSA, for example, will under certain circumstances make chunks of data available to other law-enforcement agencies, how does one then ensure that data deletion occurs properly across all those agencies? (Simple answer: it’s realistically impossible.)

So What Do We Do About This?

The good news is that most of these issues are not terribly difficult to fix… but the hard part will be changing the mindset of many in our government who feel that they should have the power to do anything they want in total secrecy with no accountability. The “fix” is to basically limit the scope and power of the data collection, provide far greater transparency about both the methods and actual type of data being collected, and have powerful audit and compliance methods in place that have teeth.

The entire process needs to be turned on its head, with the goal being to minimize surveillance to the greatest extent possible and to retain as little data as possible, with very restrictive rules about retention, sharing, etc. For instance, if data is shared with another agency, it should ‘self-expire’ (there are technical ways to do this) after a certain amount of time, unless it has been determined that the data is now admissible evidence in a criminal trial – in which case the expiry can be revoked by a court order.
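
By way of illustration, here is a minimal sketch (my own, in Python using the `cryptography` package – not any agency’s actual system) of one such technique, often called crypto-shredding: the shared copy is encrypted under a single-purpose key held by the originator, and the key is destroyed when the retention period lapses.

```python
import time
from cryptography.fernet import Fernet  # pip install cryptography

class KeyEscrow:
    """Holds one key per data share; deleting a key makes the shared copy unreadable."""
    def __init__(self):
        self._keys = {}  # share_id -> (key, expires_at)

    def issue(self, share_id, ttl_seconds):
        key = Fernet.generate_key()
        self._keys[share_id] = (key, time.time() + ttl_seconds)
        return key

    def extend(self, share_id, extra_seconds):
        # e.g. a court order extending the expiry for admissible evidence
        key, expires_at = self._keys[share_id]
        self._keys[share_id] = (key, expires_at + extra_seconds)

    def fetch(self, share_id):
        key, expires_at = self._keys[share_id]
        if time.time() >= expires_at:
            del self._keys[share_id]  # TTL lapsed: shred the key
            raise KeyError("share expired")
        return key

escrow = KeyEscrow()
key = escrow.issue("case-1234", ttl_seconds=90 * 24 * 3600)  # 90-day share
token = Fernet(key).encrypt(b"shared records ...")
# The receiving agency must fetch the key at read time; once the TTL lapses
# (absent a court-ordered extension) the ciphertext is permanently unreadable.
print(Fernet(escrow.fetch("case-1234")).decrypt(token))
```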

The irony is that even the NSA has admitted there is no way they can possibly search through all the data they have already collected – in the sense of a general keyword search. They could of course look for a particular person-name or place-name, but if that is all they needed, they could have collected surveillance data only for those parameters in the first place, instead of on the bulk of American citizens living in the USA…

While they won’t give details, reasonable assumptions can be drawn from public filings and statements, as well as purchase information from storage vendors… and the NSA alone can be assumed to have many hundreds of exabytes of data stored. Given that 1 exabyte = 1,024 petabytes (each of which = 1,024 terabytes), this is an incredible amount of data. To put it another way, it’s hundreds of billions of gigabytes… and remember that your ‘fattest’ iPhone holds 128GB.
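
A quick back-of-envelope check of that scale – the 300 EB figure below is purely an assumed round number, since no real inventory is public:

```python
GB_PER_EB = 1024 ** 3                   # EB -> PB -> TB -> GB, binary units
store_eb = 300                          # assumed round figure: "hundreds of EB"
total_gb = store_eb * GB_PER_EB
print(f"{total_gb:,} GB")               # 322,122,547,200 GB (~322 billion)
print(f"{total_gb // 128:,} iPhones")   # ~2.5 billion fully stuffed 128GB iPhones
```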

It’s a mindset of ‘scoop up all the data we can, while we can, just in case someday we might want to do something with it…’  This is why, if we care about our individual freedom of expression and liberty at all, we must protest against the blind renewal of these deeply flawed laws and regulations such as the Patriot Act.

This discussion is entering the public domain more and more – it’s making the news, but it takes action, not just talk. Make a noise. Write to your congressional representatives. Let them know this is an urgent issue and that they will be held accountable at election time for their position on this renewal. If the renewal is not granted, then – and typically only then – will the players be forced to sit down and have the honest discussion that should have happened years ago.

Shadow IT, Big Brother & The Holding Company, Thousand-Armed Management…

April 9, 2015 · by parasam

This article was inspired by reading about a challenge facing many organizations and their IT departments: that of “Shadow IT”. This is essentially the use of software by employees that is not formally ‘approved’ or managed by the IT department. Often this is done quite innocently, as an expedient way to accomplish the task at hand when the perceived correct software tool for the job is unavailable, hard to use or otherwise presents friction to the user.

A classic example, and in fact the instigating action for the article I read (here), is DropBox. This ubiquitous cloud storage service is so ‘friction-free’ to set up and use that many users opt for this app as a quick means to store documents for easy retrieval as they move from place to place and device to device during the course of their day/week at work. The issues of security, backup, data integrity and so on usually never occur to them.

The Hidden Dangers

The dangers of ad-hoc solutions to a user’s need to do something (whether it’s to store, edit, send, etc.) are often not immediately apparent. Some of the issues that come up are: lack of security for company documents; lack of version control when docs are stored multiple times in various places; potential compromise of security on company networks (oftentimes users will use the same login info for DropBox as for their corporate login – DropBox is not that difficult to hack, and once a set of credentials that works for one site is discovered, a hacker will then try other sites…); and a general diffusion of IT management policies and practices.

The unfortunate dialectic that often follows from the discovery of this practice is one of opposing sides: IT sees the user as the ‘bad guy’ and tries to enforce a totalitarian solution, while the user feels discriminated against and gets frustrated that the tools they perceive they need are not provided… all of which leads to a continual ‘cat and mouse’ game, where users feel an even greater ‘reason’ to utilize stealth IT solutions, and IT management feels it has no choice except to police users and invoke more and more draconian rules to prevent users from acting in any way that is not ‘approved’.

Everyone Needs Awareness

A more cooperative solution can be found if both ‘sides’ (IT management and Users) get enlightened about the issues from both points of view. IT needs to accept that many of the toolsets often provided are ungainly, cumbersome, or otherwise hard to use – or don’t adequately address the needs of users; while users need to understand the security and management risks that Shadow IT solutions pose.

One of the biggest philosophical challenges is that most firms place IT somewhere near the top of the pyramid, with edicts on what to use and how to behave coming from a ‘top-down’ philosophy. A far more effective approach is to place IT at the ‘bottom of the stack’ – with IT truly in a supportive role, literally acting as a foundation and glue for the actions of users. If the needs of users are taken as real (within reason) and a concerted effort is made to address them in a creative manner, a much higher degree of conformance will follow.

Education of users is also paramount – many times existing software solutions are available within a corporate toolset but either are unknown to a user, or the easiest way to accomplish a task is not shown to the user. This paradigm (enlightened users acting with a common goal in cooperation with IT management) is actually a great model for other aspects of work life as well…

Big Brother & The Holding Company

Achieving the correct balance between user ‘freedom’ and the perceived need for IT management to monitor and control absolutely everything that is ‘data’ is a bigger challenge than is apparent at first. I’ve entitled this section to include “The Holding Company” for a more specific reason than just the alliteration… most organizations, whether your local Seven-Eleven or the NSA, not only like to observe (and record) all the goings-on of their employees (or, in the case of the NSA, of basically every human and/or machine they can find…) but to hold on to this data, well, pretty much forever.

This ‘holding’ in and of itself raises some interesting philosophical questions… for instance, is it legal/ethical for a firm to continue to keep records pertaining to an employee who no longer works for the firm? And if so, for how long? Under what conditions, or for what subjects, would some data be deemed necessary to keep longer than other data?

And BTW, if anyone still believes that old e-mails just aren’t that big a deal, please ask Amy Pascal (Sony Pictures exec…) whether she wishes some of her past e-mails had never become public (thanks to the Hack of Armageddon). Perhaps one ‘better way’ to handle this balance (privacy vs perceived necessity) is somewhat like a pre-nup: hammer out the details before the marriage… In the case of employee/employer, if data policies were more clearly laid out, with reason and rationale, better IT behavior – and less chance of disgruntled employees later – would be far more likely.

From a user’s or employee’s perspective, here’s a (potentially embarrassing) scenario: during the course of normal business the user expresses frustration with a vendor to another employee of the current firm; a few years later said user leaves and goes to work for the vendor, having long forgotten about the momentary frustration and, perhaps in hindsight, a less-than-wonderful expression of the same. The original firm (probably some manager who had to explain why a good employee left) reviews e-mails still on file, finds this ‘gem’ and anonymously forwards it to the vendor… now the employer of the user… ouch!

If it could be proven, probably a black eye (or worse) for the original employer, but these things can be almost impossible to nail down to the degree of certainty required in our legal system, and the damage has already been done.

On the other hand, an audit trail of content moves by an employee of a major motion picture company that has experienced piracy could potentially help plug a leak that was costing the firm huge financial losses, and also lead to the appropriate actions being taken against the perpetrator.

The real issue here is good policy and governance, and then applying these policies uniformly across the board.

Thousand-Armed Management

The 1000-Armed Buddha (Avalokiteśvara) is traditionally understood as a deity of benevolent compassion – but with all-seeing, all-hearing and all-reaching attributes. That is exactly what is required today for sound and secure IT management across our new hyper-connected reality. With the concept of perimeters and ‘walled gardens’ fallen by the wayside – along with hardware firewalls, antiquated OSs and other roadkill brought on by interconnected clouds, multiple mobile devices all ‘attached’ to the same user, etc. – an entirely new paradigm is required for administration.

Closing the circle back to our introduction: in this new world the attractiveness and utility of so-called ‘Shadow IT’ are even greater – and harder to monitor and control – than before. In the old world order, where desktops were all controlled on a corporate LAN, it was easier to monitor or block access to entities such as DropBox and other cloud apps that users often found fit their needs better than the tools provided by local IT. It’s much more difficult to do this when a user is on an airplane, logged in to the ‘net via GoGo at 10,000 meters in the air, using cloud apps located in 12 different countries simultaneously.

The Buddha Avalokiteśvara is also known for promoting teaching as one of the greatest ‘positive actions’ that one can take – (I’ll save a post on how our current culture values teachers vs stockbrokers for another time…). The most powerful tool any IT manager can utilize is education and sharing of knowledge in an effective manner. Informed users will generally make better decisions – and at the least will have a better understanding of IT policies and procedures.

Future posts on this general topic will delve a bit further into some of the discrete methods that can be utilized to effect this ‘1000-armed management’ – here it’s enough to introduce the concepts and the need for a radically new way of providing the balance of security and usability required today.

Digital Security in the Cloudy & Hyper-connected world…

April 5, 2015 · by parasam

Introduction

As we inch closer to the midpoint of 2015, we find ourselves in a drastically different world of both connectivity and security. Many of us switch devices throughout the day, from phone to tablet to laptop and back again. Even in corporate workplaces, mobile devices are here to stay (in spite of the clamoring and frustration of many IT directors!). The efficiency and ease of use of integrated mobile and tethered devices propels many business solutions today. The various forms of cloud resources link all this together – whether personal or professional.

But this enormous change in topology has introduced very significant security implications, most of which are not really well dealt with using current tools, let alone software or devices that were ‘state of the art’ only a few years ago.

What does this mean for the user – whether personal or business? How do network admins and others that must protect their networks and systems deal with these new realities? That’s the focus of the brief discussion to follow.

No More Walls…

The pace of change in the ‘Internet’ is astounding. Even seasoned professionals who work and develop in this sector struggle to keep up. Every day when I read periodicals, news, research, feeds, etc. I discover something I didn’t know the day before. The ‘technosphere’ is actually expanding faster than our collective awareness – instead of hearing that such-and-such is being thought about, or hopefully will be invented in a few years, we are told that the app or hardware already exists and has a userbase of thousands!

One of the most fundamental changes in the last few years is the transition from ‘point-to-point’ connectivity to a ‘mesh’ connectivity. Even a single device, such as a phone or tablet, may be simultaneously connected to multiple clouds and applications – often in highly disparate geographical locations. The old tried-and-true methodology for securing servers, sessions and other IT functions was to ‘enclose’ the storage, servers and applications within one or more perimeters – then protect those ‘walled gardens’ with firewalls and other intrusion detection devices.

Now that we reach out every minute, across boundaries, to remotely hosted applications, storage and processes, the very concept of perimeter protection is no longer valid or functional.

Even the Washing Machine Needs Protection

Another big challenge for today’s security paradigm is the ever-growing “Internet of Things” (IoT). As more and more everyday devices become network-enabled, from thermostats to washing machines, door locks to on-shelf merchandise sensors – an entirely new set of security issues has been created. Already the M2M (Machine to Machine) communications are several orders of magnitude greater than sessions involving humans logging into machines.

This trend is set to literally explode over the next few years, with an estimated 50 billion devices being interconnected by 2020 (up from 8.7 billion in 2012) – nearly a 6x increase in just 8 years. The real headache behind this (from a security point of view) is the number of connections and sessions that each of these devices will generate. It doesn’t take much combinatorial math to see that literally trillions of simultaneous sessions will be occurring world-wide (and even in space… the ISS has recently completed upgrades to push 3Mbps channels to 300Mbps – a 100x increase in bandwidth – to support the massive data requirements of newer scientific experiments).
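
The back-of-envelope math is easy to sketch; the sessions-per-device figure below is my illustrative assumption, not a published statistic:

```python
devices_2012, devices_2020 = 8.7e9, 50e9          # figures quoted above
print(f"growth: {devices_2020 / devices_2012:.1f}x over 8 years")   # ~5.7x
sessions_per_device = 40   # assumed: cloud syncs, telemetry, M2M calls, ads...
print(f"{devices_2020 * sessions_per_device:.1e} concurrent sessions")  # 2.0e+12
```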

There is simply no way to put a ‘wall’ around this many sessions that are occurring in such a disparate manner. An entirely new paradigm is required to effectively secure and monitor data access and movement in this environment.

How Do You Make Bulletproof Spaghetti?

If you imagine the session connections from devices to other devices as strands of pasta in a boiling pot of water – constantly moving and changing in shape – and then wanted to encase each strand in an impermeable shield… well, you get the picture. There must be a better way. There are a number of efforts underway from different researchers, startups and vendors to address this situation – but there is no ‘magic bullet’ yet, nor is there even a complete consensus on which method may best solve this dilemma.

One way to attempt to resolve this need for secure computation is to break the problem down into the two main constituents: authentication of whom/what; and then protection of the “trust” that is given by the authentication. The first part (authentication) can be addressed with multiple-factor login methods: combinations of biometrics, one-time codes, previously registered ‘trusted devices’, etc. I’ve written on these issues here earlier. The second part: what does a person or machine have access to once authenticated – and how to protect those assets if the authentication is breached – is a much thornier problem.
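
To make the first part concrete, here is a minimal from-scratch sketch of one common second factor – a time-based one-time password (TOTP) along the lines of RFC 6238. The demo secret and drift window are illustrative choices, not anyone’s production settings.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6, at=None):
    """Compute a TOTP code (RFC 6238) for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted, skew=1, interval=30):
    """Accept the current code, allowing +/- `skew` intervals of clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, interval, at=now + d * interval),
                            submitted)
        for d in range(-skew, skew + 1))

secret = "JBSWY3DPEHPK3PXP"     # demo secret; real ones come from enrollment
print(totp(secret), verify(secret, totp(secret)))   # current code + True
```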

In fact, from my perspective the best method involves a rather drastically different way of computing in the first place – one that would not have been possible only a few years ago. Essentially what I am suggesting is a fully virtualized environment where each session instance is ‘built’ for the duration of that session; only exposes the immediate assets required to complete the transactions associated with that session; and abstracts the ‘devices’ (whether they be humans or machines) from each other to the greatest degree possible.

While this may sound a bit complicated at first, the good news is that we are already moving in that direction, in terms of computational strategy. Most large scale cloud environments already use virtualization to a large degree, and the process of building up and tearing down virtual instances has become highly automated and very, very fast.
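
As a rough sketch of how cheap per-session build-up and tear-down has become, the snippet below runs one ‘session’ in a throwaway container that exposes no network and no writable filesystem, then destroys it on exit. It assumes a local Docker install; the flags and image are illustrative choices, not a hardened recipe.

```python
import subprocess, uuid

def run_session(image, command):
    """Run one user 'session' in a throwaway container; nothing persists after."""
    name = "session-" + uuid.uuid4().hex[:12]
    result = subprocess.run(
        ["docker", "run", "--rm",        # --rm: tear the instance down on exit
         "--network", "none",            # expose no network unless needed
         "--read-only",                  # no writable filesystem
         "--name", name, image] + command,
        capture_output=True, text=True, timeout=300)
    return result.stdout

print(run_session("alpine:3", ["echo", "ephemeral session complete"]))
```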

In addition, for some time now the industry has been moving towards thinner and more specific apps (such as found on phones and tablets) as opposed to massive thick client applications such as MS Office, SAP and other enterprise builds that fit far more readily into the old “protected perimeter” form of computing.

Furthermore (and I’m not picking on a particular vendor here – this issue is a “fact of nature”), the Windows API model is just not secure any more. Due to the requirement of backwards compatibility – to a time when the security threats of today were not envisioned at all – many of the APIs are full of security holes. It’s a constant game of reactively patching vulnerabilities once discovered. This process cannot be sustained to support the level of future connectivity and distributed processing towards which we are moving.

Smaller, lightweight apps have fewer moving parts, and therefore by their very nature are easier to implement, virtualize, protect – and replace entirely should that be necessary. To use just one example: MS Word is a powerful word processor that has grown to integrate and support a rather vast range of capabilities, including artwork, page layout, mailing-list management/distribution, etc. Every instance of this app includes all of that functionality, of which typically 90% goes unused during any one session.

If this “app” were broken down into many smaller “applets” that called on each other as required, and were made available to the user on the fly during the ‘session’, the entire compute environment would become more dynamic, flexible and easier to protect.
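
A toy sketch of that idea – functionality registered as small, independent units that are instantiated only when a session first asks for them (all applet names here are invented for illustration):

```python
_REGISTRY = {}   # applet name -> factory
_LOADED = {}     # applets actually built for this session

def applet(name):
    """Register a factory for a small, independent unit of functionality."""
    def register(factory):
        _REGISTRY[name] = factory
        return factory
    return register

def use(name):
    """Build the applet on first use only; untouched applets never load."""
    if name not in _LOADED:
        _LOADED[name] = _REGISTRY[name]()
    return _LOADED[name]

@applet("spellcheck")
def make_spellcheck():
    return lambda text: text.replace("teh", "the")   # stand-in for a real applet

print(use("spellcheck")("teh quick brown fox"))      # only 'spellcheck' loads
```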

Lowering the Threat Surface

One of the largest security challenges of a highly distributed compute environment – such as is presented by the typical hybrid cloud / world-wide / mobile device ecosystem that is rapidly becoming the norm – is the very large ‘threat surface’ that is exposed to potential hackers or other unauthorized access.

As more and more devices are interconnected – and data is interchanged and aggregated from millions of sensors, beacons and other new entities, the potential for breaches is increased exponentially. It is mathematically impossible to proactively secure every one of these connections – or even monitor them on an individual basis. Some new form of security paradigm is required that will, by its very nature, protect and inhibit breaches of the network.

Fortunately, we do have an excellent model on which to base this new type of security mechanism: the human immune system. The ‘threat surface’ of the human body is immense when viewed at a cellular level. The number of pathogens that continually attempt to violate the human body’s systems is vastly greater than even the number of hackers and other malevolent entities in the IT world.

The conscious human brain could not even begin to attempt to monitor and react to every threat that the hordes of bacteria, viruses and other pathogens bring against the body ecosystem. About 99% of such defensive response mechanisms are ‘automatic’ and go unnoticed by our awareness. Only when things get ‘out of control’ and the symptoms tell us that the normal defense mechanisms need assistance do we notice things like a sore throat, an ache, or in more severe cases: bleeding or chest pain. We need a similar set of layered defense mechanisms that act completely automatically against threats to deal with the sheer numbers and variations of attack vectors that are becoming endemic in today’s new hyper-connected computational fabric.

A Two-Phased Approach to Hyper-Security

Our new hyper-connected reality requires an equally robust and all-encompassing security model: Hyper-Security. In principle, an approach that combines the absolute minimal exposure of any assets, applications or connectivity with a corresponding ‘shielding’ of the session using techniques to be discussed shortly can provide an extremely secure, scalable and efficient environment.

Phase One – building user ‘sessions’ (whether that user is a machine or a human) that expose the least possible amount of threat surface while providing all the functionality required during that session – has been touched on earlier in our discussion of virtualized compute environments. The big paradigm shift here is that security is ‘built in’ to the applications, data storage structures and communications interfaces at a molecular level. This is similar to how the human body’s systems are organized: in addition to the actual immune system and other proactive ‘security’ entities, the structure itself naturally limits any damage caused by pathogens.

This type of architecture simply cannot be ‘baked into’ legacy OSs – but it’s time many of these were moved to the shelf anyway: they are becoming more and more clumsy in the face of highly virtualized environments, not to mention the extreme time and cost of maintaining these outdated systems. Having some kind of attachment or allegiance to an OS today is as archaic as showing a preference for a Clydesdale vs a Palomino in the world of Ferraris and Teslas… Really, all that matters today is the user experience, reliability and security. How something gets done should not matter any more, even to highly technical users – any more than knowing exactly which hormones are secreted by our Islets of Langerhans (some small bits of the pancreas that produce some incredibly important things like insulin). These things must work (otherwise humans get diabetes, or computers fail to process) but very few of us need to know the details.

Although the concept of this distributed, minimalistic and virtualized compute environment is simple, the details can become a bit complex – I’ll reserve further discussion for a future post.

To summarize, the security provided by this new architecture is one of prevention, limitation of damage and ease of applying proactive security measures (to be discussed next).

Phase Two – the protection of the compute sessions from either internal or external threat mechanisms – also requires a novel approach suited to our new ecosystems. External threats are essentially any attempt by unauthorized users (whether human, robots, extraterrestrials, etc.) to infiltrate and/or take data from a protected system. Internal threats are activities attempted by an authorized user that are not authorized actions for that particular user. An example is a rogue network admin either transferring data to an unauthorized endpoint (piracy) or destroying data.

The old-fashioned ‘perimeter defense systems’ are no longer appropriate for protection of cloud servers, mobile devices, etc. A particular example of how extensive and interconnected a single ‘session’ can be is given here:

A mobile user opens an app on their phone (say an image editing app) that is ‘free’ to the user. The user actually ‘pays’ for this ‘free’ privilege by donating a small amount of pixels (and time/focus) to some advertising. In the background, the app is providing some basic demographic info of the user, the precise physical location (in many instances), along with other data to an external “ad insertion service”.

This cloud-based service in turn aggregates the ‘avails’ (sorted by location, OS, hardware platform, app type that the user is running, etc.) and often submits these ‘avails’ [with the screen dimensions and animation capabilities] to an online auction system that bids the ‘avails’ against a pool of appropriate ads that are preloaded and ready to be served.

Typically the actual ads are not located on the same server, or even the same country, as either the ad insertion service or the auction service. It’s very common for up to half a dozen countries, clouds and other entities to participate in delivering a single ad to a mobile user.
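
To illustrate the mechanics, here is a toy model of such an avail auction; the field names and the second-price rule are generic assumptions about how these systems commonly work, not any vendor’s actual protocol:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    cpm: float            # price per thousand impressions
    target_os: str

def auction(avail_os, bids):
    """Match an avail against preloaded ads; highest eligible bid wins."""
    eligible = sorted((b for b in bids if b.target_os == avail_os),
                      key=lambda b: b.cpm, reverse=True)
    if not eligible:
        return ("house-ad", 0.0)
    # winner pays just the runner-up's price (second-price), a common rule
    price = eligible[1].cpm if len(eligible) > 1 else eligible[0].cpm
    return (eligible[0].advertiser, price)

print(auction("iOS", [Bid("acme", 4.2, "iOS"), Bid("globex", 3.1, "iOS")]))
```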

This highly porous ad-insertion system has actually become a recent favorite of hackers and con artists – even without technical breaches it’s an incredibly easy system to game. Due to the speed of the transactions and the near impossibility of monitoring them in real time, many ‘deviations’ are possible… and common.

There are a number of ingenious methods being touted right now to help solve both the actual protection of virtualized and distributed compute environments, as well as to monitor such things as intrusions, breaches and unintended data moves – all things that traditional IT tools don’t address well at all.

I am unaware of a ‘perfect’ solution yet to address either the protection or monitoring aspects, but here are a few ideas: [NOTE: some of these are my ideas, some have been taken up by vendors as a potential product/service. I don’t feel qualified enough to judge the merits of any particular commercial product at this point, nor is the focus of this article on detailed implementations but rather concepts, so I’ll refrain from getting into specific products].

  • User endpoint devices (anything from humans’ cellphones to other servers) must be pre-authenticated (using a combination of currently well-known identification methods such as MAC address, embedded token, etc.). On top of this basic trust environment, each session is authenticated with a minimum of a two-factor logon scheme (such as biometric plus PIN, or certificate plus one-time token). Once the endpoints are authenticated, a one-time-use VPN is established for each connection.
  • Endpoint devices and users are combined as ‘profiles’ that are stored as part of a security monitoring application. Each user may have more than one profile: for instance, the same user may typically perform (or be allowed to perform by his/her firm’s security protocol) different actions from a cellphone as opposed to a corporate laptop. The actions that each user takes are automatically monitored and restricted. For instance, the VPNs discussed in the point above can be individually tailored to allow only certain kinds of traffic to/from certain endpoints. Actions that fall outside the pre-established scope, or outside a heuristic pattern for that user, can either be denied or referred for further authorization (a toy sketch of such a check follows this list).
  • Using techniques similar to the SSL methodologies that protect and authenticate online financial transactions, different kinds of certificates can be used to permit certain kinds of ‘transactions’ (a transaction being access to certain data, permission to move/copy/delete data, etc.). In a sense it’s a bit like the layered security within the Amazon store: it takes one level of authentication to get in and place an order, and yet another level of ‘security’ to actually pay for something (you must have a valid credit card that is authenticated in real time by the clearing houses for Visa/MasterCard, etc.). For instance, a user may log into a network/application instance with a biometric on a pre-registered device (such as a fingerprint on an iPhone 6 previously registered in the domain as an authenticated device). But if that user then wishes to move several terabytes of a Hollywood movie studio’s content to a remote storage site (!!), they would need to submit an additional certificate and PIN.
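
Here is the kind of profile-based check the second and third points describe, reduced to a toy sketch – all names, profiles and thresholds are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    user: str
    device_class: str                      # e.g. "corporate-laptop", "phone"
    allowed_actions: set = field(default_factory=set)
    bulk_move_limit_gb: float = 1.0        # above this, require extra certificate

def authorize(profile, action, size_gb=0.0):
    """Gate every request on the (user, device) profile's allowed scope."""
    if action not in profile.allowed_actions:
        return "DENY"
    if action == "move_data" and size_gb > profile.bulk_move_limit_gb:
        return "REFER"                     # step-up: extra certificate + PIN
    return "ALLOW"

laptop = Profile("jsmith", "corporate-laptop", {"read", "move_data"}, 5.0)
phone  = Profile("jsmith", "phone", {"read"})
print(authorize(laptop, "move_data", size_gb=2000))   # REFER (terabytes!)
print(authorize(phone, "move_data", size_gb=0.1))     # DENY on this device
```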

An Integrated Immune System for Data Security

[Image: virus in blood – stylized scanning electron microscopy]

The goal of a highly efficient and manageable ‘immune system’ for a hyper-connected data infrastructure is for such a system to protect against all possible threats with the least direct supervision possible. Not only is it impossible for a centralized, omniscient monitoring system to handle the incredible number of sessions that take place in even a single modern hyper-network; it’s equally difficult for a single monitoring / intrusion detection device to understand and adapt to the myriad local contexts and ‘rules’ that define what is ‘normal’ and what is a ‘breach’.

The only practical method to accomplish the implementation of such an ‘immune system’ for large hyper-networks is to distribute the security and protection infrastructure throughout the entire network. Just as in the human body, where ‘security’ begins at the cellular level (with cell walls allowing only certain compounds to pass – depending on the type and location of each cell); each local device or application must have as part of its ‘cellular structure’ a certain amount of security.

As cells become building blocks for larger structures and eventually organs or other systems, the same ‘layering’ model can be applied to IT structures so the bulk of security actions are taken automatically at lower levels, with only issues that deviate substantially from the norm being brought to the attention of higher level and more centralized security detection and action systems.

Another issue to be aware of: over-reporting. It’s all well and good to log certain events… but who or what is going to review millions of lines of logs if every event that deviates even slightly from some established ‘norm’ is recorded? And even then, that review will only be looking in the rear-view mirror… The human body doesn’t generate any logs at all, and yet manages to more or less handle security for 37.2 trillion cells!

That’s not to say that no logs at all should be kept – they can be very useful in understanding breaches and what can be improved in the future – but the logs should be designed with that purpose in mind and recycled as appropriate.
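
As a small sketch of logs ‘designed with that purpose in mind and recycled’: record only events above a severity threshold into a capped, self-recycling store. The sizes and levels below are illustrative choices, not recommendations.

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("immune")
logger.setLevel(logging.WARNING)                   # routine, auto-handled events never recorded
handler = RotatingFileHandler("security.log",
                              maxBytes=50 * 1024 * 1024,   # ~50 MB per file...
                              backupCount=10)               # ...oldest recycled
logger.addHandler(handler)

logger.info("cell-level check passed")             # dropped: below threshold
logger.warning("repeated auth failures for profile jsmith/phone")   # kept for forensics
```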

Summary

In this very brief overview we’ve discussed some of the challenges and possible solutions to the very different security paradigm that we now have due to the hyper-connected and diverse nature of today’s data ecosystems. As the number of ‘unmanned’ devices, sensors, beacons and other ‘things’ continues to grow exponentially, along with the fact that most of humanity will soon be connected to some degree to the ‘Internet’, the scale of the security issue is truly enormous.

A few ideas and thoughts that can lead to effective, scalable and affordable solutions have been discussed – many of these are new and still works in progress, but they offer at least a partially viable path as we move forward. The most important thing to take away here is an awareness of how things must change: keep asking questions, and don’t assume that the security techniques that worked last year will keep you safe next year.
