
Digital Security in the Cloudy & Hyper-connected world…

April 5, 2015 · by parasam

Introduction

As we inch closer to the midpoint of 2015, we find ourselves in a drastically different world of both connectivity and security. Many of us switch devices throughout the day, from phone to tablet to laptop and back again. Even in corporate workplaces, ubiquitous mobile devices are here to stay (in spite of the clamoring and frustration of many IT directors!). The efficiency and ease of use of integrated mobile and tethered devices propel many business solutions today. The various forms of cloud resources link all this together – whether personal or professional.

But this enormous change in topology has introduced very significant security implications, most of which are not handled well by current tools, let alone by software or devices that were 'state of the art' only a few years ago.

What does this mean for the user – whether personal or business? How do network admins and others that must protect their networks and systems deal with these new realities? That’s the focus of the brief discussion to follow.

No More Walls…


The pace of change in the ‘Internet’ is astounding. Even seasoned professionals who work and develop in this sector struggle to keep up. Every day when I read periodicals, news, research, feeds, etc. I discover something I didn’t know the day before. The ‘technosphere’ is actually expanding faster than our collective awareness – instead of hearing that such-and-such is being thought about, or hopefully will be invented in a few years, we are told that the app or hardware already exists and has a userbase of thousands!

One of the most fundamental changes in the last few years is the transition from ‘point-to-point’ connectivity to a ‘mesh’ connectivity. Even a single device, such as a phone or tablet, may be simultaneously connected to multiple clouds and applications – often in highly disparate geographical locations. The old tried-and-true methodology for securing servers, sessions and other IT functions was to ‘enclose’ the storage, servers and applications within one or more perimeters – then protect those ‘walled gardens’ with firewalls and other intrusion detection devices.

Now that we reach out every minute, across boundaries, to remotely hosted applications, storage and processes, the very concept of perimeter protection is no longer valid or functional.

Even the Washing Machine Needs Protection

Another big challenge for today’s security paradigm is the ever-growing “Internet of Things” (IoT). As more and more everyday devices become network-enabled, from thermostats to washing machines, door locks to on-shelf merchandise sensors – an entirely new set of security issues has been created. Already the M2M (Machine to Machine) communications are several orders of magnitude greater than sessions involving humans logging into machines.

This trend is set to explode over the next few years, with an estimated 50 billion devices interconnected by 2020 (up from 8.7 billion in 2012). That's nearly a 6x increase in just 8 years… The real headache behind this (from a security point of view) is the number of connections and sessions that each of these devices will generate. It doesn't take much combinatorial math to see that literally trillions of simultaneous sessions will be occurring world-wide (and even in space… the ISS has recently completed upgrades to push 3Mbps channels to 300Mbps – a 100x increase in bandwidth – to support the massive data requirements of newer scientific experiments).

There is simply no way to put a ‘wall’ around this many sessions that are occurring in such a disparate manner. An entirely new paradigm is required to effectively secure and monitor data access and movement in this environment.

How Do You Make Bulletproof Spaghetti?


If you imagine the session connections from device to device as strands of pasta in a boiling pot of water – constantly moving and changing shape – and then wanted to encase each strand in an impermeable shield… well, you get the picture. There must be a better way… There are a number of efforts currently underway from researchers, startups and vendors to address this situation – but there is no 'magic bullet' yet, nor is there even a complete consensus on which method may best solve this dilemma.

One way to attempt to resolve this need for secure computation is to break the problem down into its two main constituents: authentication of who or what is connecting; and then protection of the 'trust' that the authentication grants. The first part (authentication) can be addressed with multiple-factor login methods: combinations of biometrics, one-time codes, previously registered 'trusted devices', etc. I've written on these issues here earlier. The second part – what a person or machine has access to once authenticated, and how to protect those assets if the authentication is breached – is a much thornier problem.
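As a concrete illustration of the first part, here is a minimal sketch of the kind of time-based one-time code that can serve as a second factor alongside a password or biometric. It uses only Python's standard library; the secret, the 30-second interval and the 6-digit length are illustrative assumptions, not any particular vendor's implementation. The thornier second part is what the rest of this discussion addresses.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time code from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                 # which 30-second window we are in
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_second_factor(secret_b32: str, submitted_code: str) -> bool:
    """Accept only the code for the current window; real systems also allow a window of clock drift."""
    return hmac.compare_digest(totp(secret_b32), submitted_code)
```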

In fact, from my perspective the best method involves a rather drastically different way of computing in the first place – one that would not have been possible only a few years ago. Essentially what I am suggesting is a fully virtualized environment where each session instance is ‘built’ for the duration of that session; only exposes the immediate assets required to complete the transactions associated with that session; and abstracts the ‘devices’ (whether they be humans or machines) from each other to the greatest degree possible.

While this may sound a bit complicated at first, the good news is that we are already moving in that direction, in terms of computational strategy. Most large scale cloud environments already use virtualization to a large degree, and the process of building up and tearing down virtual instances has become highly automated and very, very fast.
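To make that concrete, here is one minimal sketch of a 'build it for the session, then throw it away' approach, driving the Docker CLI from Python. The image name, paths and dropped capabilities are illustrative assumptions; the point is simply that the session instance exposes only the assets it needs and disappears when the work is done.

```python
import subprocess
import uuid

def run_session_task(image: str, command: list[str], session_dir: str) -> str:
    """Run one task in a throwaway container that exposes only this session's data."""
    session_id = uuid.uuid4().hex
    result = subprocess.run(
        [
            "docker", "run", "--rm",             # discard the instance when the session ends
            "--name", f"session-{session_id}",
            "--read-only",                       # no writes to the container filesystem
            "--cap-drop", "ALL",                 # drop all Linux capabilities
            "--network", "none",                 # no network unless this task actually needs it
            "-v", f"{session_dir}:/data:ro",     # expose only this session's assets, read-only
            image, *command,
        ],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hypothetical usage: a short-lived job over one session's files
# print(run_session_task("alpine:3.19", ["ls", "/data"], "/srv/sessions/abc123"))
```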

In addition, for some time now the industry has been moving towards thinner and more specific apps (such as those found on phones and tablets) as opposed to the massive thick-client applications such as MS Office, SAP and other enterprise builds that fit far more readily into the old "protected perimeter" form of computing.

Moreover (and I'm not picking on a particular vendor here, it's just that this issue is a "fact of nature"), the Windows API model is simply not secure any more. Due to the requirement of backwards compatibility – with an era when the security threats of today were not envisioned at all – many of the APIs are full of security holes. It's a constant game of reactively patching vulnerabilities once they are discovered. That process cannot be sustained at the level of connectivity and distributed processing towards which we are moving.

Smaller, lightweight apps have fewer moving parts, and therefore by their very nature are easier to implement, virtualize, protect – and replace entirely should that be necessary. To use just an example: MS Word is a powerful ‘word processor’ – which has grown to integrate and support a rather vast range of capabilities including artwork, page layout, mailing list management/distribution, etc. etc. Every instance of this app includes all the functionality, of which 90% is unused (typically) during any one session instance.

If this "app" were broken down into many smaller "applets" that call on each other as required, and are made available to the user on the fly during the session, the entire compute environment would become more dynamic, flexible and easier to protect.
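A sketch of what 'applets made available on the fly' could look like in practice: capabilities resolve to small modules only when a session actually invokes them. The registry, module names and run() interface here are entirely hypothetical – the point is that unused functionality is never even loaded, let alone exposed.

```python
import importlib

# Hypothetical registry mapping user-facing capabilities to small applet modules.
APPLET_REGISTRY = {
    "spellcheck": "applets.spellcheck",
    "mail_merge": "applets.mail_merge",
    "page_layout": "applets.page_layout",
}

class SessionWorkspace:
    """Loads applets lazily, so a session only ever contains the code paths it uses."""

    def __init__(self):
        self._loaded = {}

    def invoke(self, capability: str, *args, **kwargs):
        if capability not in APPLET_REGISTRY:
            raise PermissionError(f"capability '{capability}' is not available in this session")
        if capability not in self._loaded:
            self._loaded[capability] = importlib.import_module(APPLET_REGISTRY[capability])
        return self._loaded[capability].run(*args, **kwargs)   # each applet exposes a run() entry point
```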

Lowering the Threat Surface


One of the largest security challenges of a highly distributed compute environment – such as is presented by the typical hybrid cloud / world-wide / mobile device ecosystem that is rapidly becoming the norm – is the very large ‘threat surface’ that is exposed to potential hackers or other unauthorized access.

As more and more devices are interconnected – and data is interchanged and aggregated from millions of sensors, beacons and other new entities, the potential for breaches is increased exponentially. It is mathematically impossible to proactively secure every one of these connections – or even monitor them on an individual basis. Some new form of security paradigm is required that will, by its very nature, protect and inhibit breaches of the network.

Fortunately, we do have an excellent model on which to base this new type of security mechanism: the human immune system. The 'threat surface' of the human body is immense when viewed at a cellular level. The number of pathogens that continually attempt to violate the body's systems is vastly greater than even the number of hackers and other malevolent entities in the IT world.

The conscious human brain could not even begin to monitor and react to every threat that the hordes of bacteria, viruses and other pathogens bring against the body's ecosystem. About 99% of such defensive responses are 'automatic' and go unnoticed by our awareness. Only when things get 'out of control' and the symptoms tell us that the normal defense mechanisms need assistance do we notice things like a sore throat, an ache, or in more severe cases bleeding or chest pain. To deal with the sheer number and variety of attack vectors becoming endemic in today's hyper-connected computational fabric, we need a similar set of layered defense mechanisms that act completely automatically against threats.

A Two-Phased Approach to Hyper-Security

Our new hyper-connected reality requires an equally robust and all-encompassing security model: Hyper-Security. In principle, an approach that combines the absolute minimal exposure of any assets, applications or connectivity with a corresponding ‘shielding’ of the session using techniques to be discussed shortly can provide an extremely secure, scalable and efficient environment.

Phase One – building user 'sessions' (whether that user is a machine or a human) that expose the least possible amount of threat surface while providing all the functionality required during that session – has been touched on earlier during our discussion of virtualized compute environments. The big paradigm shift here is that security is 'built in' to the applications, data storage structures and communications interface at a molecular level. This is similar to how the human body is organized: in addition to the immune system and other proactive 'security' entities, its very structure naturally limits any damage caused by pathogens.

This type of architecture simply cannot be retrofitted into legacy OS systems – but it's time that many of these were moved to the shelf anyway: they are becoming more and more clumsy in the face of highly virtualized environments, not to mention the extreme time and cost of maintaining these outdated systems. Having some kind of attachment or allegiance to an OS today is as archaic as showing a preference for a Clydesdale vs a Palomino in the world of Ferraris and Teslas… Really all that matters today is the user experience, reliability and security. How something gets done should no longer matter, even to highly technical users, any more than knowing exactly which hormones are secreted by our islets of Langerhans (some small bits of the pancreas that produce incredibly important things like insulin). These things must work (otherwise humans get diabetes or computers fail to process) but very few of us need to know the details.

Although the concept of this distributed, minimalistic and virtualized compute environment is simple, the details can become a bit complex – I’ll reserve further discussion for a future post.

To summarize, the security provided by this new architecture is one of prevention, limitation of damage and ease of applying proactive security measures (to be discussed next).

Phase Two – the protection of the compute sessions from either internal or external threat mechanisms – also requires a novel approach that is suited to our new ecosystems. External threats are essentially any attempt by unauthorized users (whether humans, robots, extraterrestrials, etc.) to infiltrate and/or take data from a protected system. Internal threats are activities attempted by an authorized user that are not authorized actions for that particular user. An example is a rogue network admin either transferring data to an unauthorized endpoint (piracy) or destroying data.

The old-fashioned ‘perimeter defense systems’ are no longer appropriate for protection of cloud servers, mobile devices, etc. A particular example of how extensive and interconnected a single ‘session’ can be is given here:

A mobile user opens an app on their phone (say an image editing app) that is ‘free’ to the user. The user actually ‘pays’ for this ‘free’ privilege by donating a small amount of pixels (and time/focus) to some advertising. In the background, the app is providing some basic demographic info of the user, the precise physical location (in many instances), along with other data to an external “ad insertion service”.

This cloud-based service in turn aggregates the ‘avails’ (sorted by location, OS, hardware platform, app type that the user is running, etc.) and often submits these ‘avails’ [with the screen dimensions and animation capabilities] to an online auction system that bids the ‘avails’ against a pool of appropriate ads that are preloaded and ready to be served.

Typically the actual ads are not located on the same server, or even the same country, as either the ad insertion service or the auction service. It’s very common for up to half a dozen countries, clouds and other entities to participate in delivering a single ad to a mobile user.

This highly porous ad insertion system has become a recent favorite of hackers and con artists – even without technical breaches it's an incredibly easy system to game. Due to the speed of the transactions and the near impossibility of monitoring them in real time, many 'deviations' are possible… and common.

There are a number of ingenious methods being touted right now to help solve both the actual protection of virtualized and distributed compute environments, as well as to monitor such things as intrusions, breaches and unintended data moves – all things that traditional IT tools don’t address well at all.

I am unaware of a ‘perfect’ solution yet to address either the protection or monitoring aspects, but here are a few ideas: [NOTE: some of these are my ideas, some have been taken up by vendors as a potential product/service. I don’t feel qualified enough to judge the merits of any particular commercial product at this point, nor is the focus of this article on detailed implementations but rather concepts, so I’ll refrain from getting into specific products].

  • User endpoint devices (anything from humans' cellphones to other servers) must be pre-authenticated, using a combination of currently well-known identification methods such as MAC address, embedded token, etc. On top of this basic trust environment, each session is authenticated with a minimum of a two-factor logon scheme (such as biometric plus PIN, or certificate plus one-time token). Once the endpoints are authenticated, a one-time-use VPN is established for each connection.
  • Endpoint devices and users are combined as ‘profiles’ that are stored as part of a security monitoring application. Each user may have more than one profile: for instance the same user may typically perform (or be allowed to perform by his/her firm’s security protocol) different actions from a cellphone as opposed to a corporate laptop. The actions that each user takes are automatically monitored / restricted. For instance, the VPNs discussed in the point above can be individually tailored to allow only certain kinds of traffic to/from certain endpoints. Actions that fall outside of the pre-established scope, or are outside a heuristic pattern for that user, can either be denied or referred for further authorization.
  • Using techniques similar to the SSL methodologies that protect and authenticate online financial transactions, different kinds of certificates can be used to permit certain kinds of 'transactions' (with a transaction being access to certain data, permission to move/copy/delete data, etc.). In a sense it's a bit like the layered security that exists within the Amazon store: it takes one level of authentication to get in and place an order, yet another level of 'security' to actually pay for something (you must have a valid credit card that is authenticated in real time by the clearing houses for Visa/MasterCard, etc.). For instance, a user may log into a network/application instance with a biometric on a pre-registered device (such as a fingerprint on an iPhone 6 that has been previously registered in the domain as an authenticated device). But if that user then wishes to move several terabytes of a Hollywood movie studio's content to a remote storage site (!!) they would need to submit an additional certificate and PIN – a minimal sketch of this kind of step-up check follows this list.
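Here is that sketch: a hedged illustration in Python, with hypothetical profile fields, thresholds and a stubbed certificate check, of how a per-user/per-device profile could gate routine actions automatically while demanding step-up credentials for high-risk ones.

```python
from dataclasses import dataclass, field

@dataclass
class SessionProfile:
    """Illustrative profile tying a user to a device and to the actions allowed
    for that combination; field names and thresholds are hypothetical."""
    user: str
    device: str
    allowed_actions: set = field(default_factory=set)
    bulk_copy_limit_gb: int = 10              # anything larger requires step-up authentication

def validate_certificate(credential: str) -> bool:
    """Placeholder for a real certificate + PIN verification."""
    return credential.startswith("CERT:")     # illustrative only

def authorize(profile: SessionProfile, action: str, size_gb: int = 0,
              step_up_credential: str = "") -> bool:
    if action not in profile.allowed_actions:
        return False                          # outside the profile: deny (or refer for review)
    if action == "copy" and size_gb > profile.bulk_copy_limit_gb:
        # High-risk request: an additional certificate/PIN must accompany it.
        return bool(step_up_credential) and validate_certificate(step_up_credential)
    return True

# e.g. authorize(SessionProfile("alice", "iphone-1234", {"view", "copy"}), "copy", size_gb=2000)
# returns False unless a valid step-up credential is supplied
```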

An Integrated Immune System for Data Security


The goal of a highly efficient and manageable 'immune system' for a hyper-connected data infrastructure is for such a system to protect against all possible threats with the least direct supervision possible. Not only is it impossible for a centralized, omniscient monitoring system to handle the incredible number of sessions that take place in even a single modern hyper-network; it's equally difficult for a single monitoring / intrusion detection device to understand and adapt to the myriad local contexts and 'rules' that define what is 'normal' and what is a 'breach'.

The only practical way to implement such an 'immune system' for large hyper-networks is to distribute the security and protection infrastructure throughout the entire network. Just as in the human body, where 'security' begins at the cellular level (with cell membranes allowing only certain compounds to pass, depending on the type and location of each cell), each local device or application must have a certain amount of security as part of its 'cellular structure'.

As cells become building blocks for larger structures and eventually organs or other systems, the same ‘layering’ model can be applied to IT structures so the bulk of security actions are taken automatically at lower levels, with only issues that deviate substantially from the norm being brought to the attention of higher level and more centralized security detection and action systems.
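A minimal sketch of that layering, assuming a purely illustrative anomaly score and threshold: each node scores events against its own local baseline, handles the routine ones itself, and escalates only the significant deviations to a more central security layer.

```python
ESCALATION_THRESHOLD = 0.9      # anomaly score above which an event leaves the local layer

def local_anomaly_score(event: dict) -> float:
    """Score how far an event deviates from this node's own baseline (stubbed)."""
    baseline_rate = 100.0                         # e.g. typical requests per minute for this node
    observed = event.get("requests_per_minute", 0)
    return min(1.0, observed / (10 * baseline_rate))

def notify_central_soc(event: dict, score: float) -> None:
    """Placeholder for pushing an alert up to a central monitoring layer."""
    print(f"escalating event (score {score:.2f}): {event}")

def handle_event(event: dict) -> str:
    score = local_anomaly_score(event)
    if score < ESCALATION_THRESHOLD:
        return "handled-locally"                  # the ~99% that never reaches central monitoring
    notify_central_soc(event, score)              # only the rare, significant deviation goes up
    return "escalated"
```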

Another issue of which to be aware: over-reporting. It's all well and good to log certain events… but who or what is going to review millions of lines of logs if every event that deviates even slightly from some established 'norm' is recorded? And even then, that review will only be looking in the rear-view mirror… The human body doesn't generate any logs at all and yet manages to more or less handle the security of 37.2 trillion cells!

That's not to say that no logs at all should be kept – they can be very useful in understanding breaches and what can be improved in the future – but the logs should be designed with that purpose in mind and recycled as appropriate.

Summary

In this very brief overview we've discussed some of the challenges and possible solutions presented by the very different security paradigm that we now face due to the hyper-connected and diverse nature of today's data ecosystems. As the number of 'unmanned' devices, sensors, beacons and other 'things' continues to grow exponentially – and as most of humanity will soon be connected to the 'Internet' to some degree – the scale of the security problem is truly enormous.

A few ideas and thoughts that can lead to effective, scalable and affordable solutions have been discussed – many of these are new and works in progress but offer at least a partially viable solution as we work forward. The most important thing to take away here is an awareness of how things must change, and to keep asking questions and not assume that the security techniques that worked last year will keep you safe next year.

Data Security – An Overview for Executive Board members [Part 6: Cloud Computing]

March 21, 2015 · by parasam

Introduction

In the last part of this series on Data Security we'll tackle the subject of the Cloud – the relatively 'new kid on the block'. Actually "cloud computing" has been around for a long time, but the concept and naming were 'repackaged' along the way. The early foundation for the Cloud was ARPANET (1969), followed by several major milestones: Salesforce.com – the first major enterprise app available on the web (1999); Amazon Web Services (2002); Amazon EC2/S3 (2006); Web 2.0 (2009). The major impediment to mass use of the cloud (aka remote data centers) was the lack of cheap bandwidth. In parallel with the rollout of massive bandwidth [mainly in the US, western Europe and the Pacific Rim (APAC)], the development of fast and reliable web-based apps made concepts such as SaaS (Software as a Service) and other 'remote' applications viable. The initial use of the term "Cloud Computing" or "Cloud Storage" was intended to describe (in a buzzword fashion) the rather boring subject of remote data centers, hosted applications, storage farms, etc. in a way that mass consumers and small business could grasp.

Unfortunately this backfired in some corporate circles, leading to fragmented and slow adoption of some of the potential power of cloud computing. Part of this was PR and communication; another part was the fact that the security of early cloud centers (and unfortunately still many today!) was not very good. Concerns over security of assets, management and other access and control issues led many organizations – particularly media firms – to shun 'clouds' for some time, as they feared (perhaps rightly so early on) that their assets could be compromised or pirated. With generally poor communication about cloud architecture, and the difference between 'public clouds' and 'private clouds' never effectively explained, widespread confusion existed for several years concerning this new technology.

Like most things in the IT world, very little was actually “new” – but incremental change is often perceived as boring so the marketing and hype around gradual improvements in remote data center capability, connectivity and features tended to portray these upgrades as a new and exciting (and perhaps untested!) entity.

The Data Security Model

Cloud Computing

To understand the security aspects of the Cloud, and more importantly to recognize the strategic concepts that apply to this computational model, it is important to know what a Cloud is, and what it is not. Essentially a cloud is a set of devices that host services in a remote location in relation to the user. Like most things in the IT world, there is a spectrum of capability and complexity… little clouds and bigger clouds… For instance, a single remote server that hosts some external storage that is capable of being reached by your cellphone can be considered a ‘cloud’… and on the other extreme the massive amount of servers and software that is known as AWS (Amazon Web Services) is also a cloud.

In both cases, and everything in between, all of the same issues we have discussed so far apply to the local cloud environment: Access Control, Network Security, Application Security and Data Compartmentalization. Every cloud provider must ensure that all these bits are correctly implemented – and it’s up to the due diligence of any cloud user/subscriber to verify that in fact these are in place. In addition to all these security features, there are some unique elements that must be considered in cloud computing.

The two main additional elements, in terms of security to both the cloud environment itself and the 'user' (whether that be an individual or an entire organization that utilizes cloud services), are Cloud Access Control and Data Transport Control. Since the cloud is by its very nature remote from the user, almost always a WAN (Wide Area Network) connection is used to connect users to the cloud. This type of access is more difficult to secure and, as has been shown repeatedly by recent history, is susceptible to compromise. Even if the access control is fully effective, thereby allowing only authorized users to enter and perform operations, the problem of unauthorized data transport remains: recent reports show that 'inside jobs' – where users who have legitimate access to data (or whose credentials are compromised or stolen) move or delete that data – often have serious results.

An extra layer of security protocols and procedures is necessary to ensure that data transport or editing operations are appropriate and authorized.

  • Cloud Access Control (CAC) – Since the cloud environment (from an external user perspective) is 'anonymous and external' [i.e. neither the cloud nor the user can directly authenticate each other, nor is either contained within physical proximity], the possibility of unauthorized access (by a user) or spoofing (misdirecting a user to a cloud site other than the one to which they intended to connect) is much greater. Both users and cloud sites must take extra precautions to ensure these scenarios do not take place.
    • Two-factor authentication is even more important in a cloud environment than in a ‘normal’ network. The best form of such authentication is where one of the factors is a Real Time Key (RTK). Essentially this means that one of the key factors is either generated in real time (or revealed in real time) and this shared knowledge between the user and the cloud is used to help authenticate the session.
    • One common example of RTK is where a short code is transmitted (often as a text message to the user’s cellphone) after the user completes the first part of a signon process using a username/password or some other single-factor login procedure.
    • Another form of RTK could be a shared random key where the user device is usually a small ‘fob’ that displays a random number that changes every 30 seconds. The cloud site contains a random number generator that has the same seed as the user fob so the random numbers will be the same within the 30 second window.
    • Either of these methods protects against both unauthorized user access (as is obvious) and spoofing of the user (in the first method, through the cloud site's required prior knowledge of the user's cellphone number; in the second, through the requirement of matching random number generators).
    • With small variations to the above procedures, such authentication can apply to M2M (Machine to Machine) sessions as well.
  • Data Transport Control (DTC) – Probably the most difficult aspect of security to control is unauthorized movement, copying, deletion, etc. of data that is stored in a cloud environment. It’s the reason that even up until today many Hollywood studios prohibit their most valuable motion picture assets from being stored or edited in public cloud environments. Whether from external ‘hackers’ or internal network admins or others who have gone ‘rogue’ – protection must be provided to assets in a cloud even from users/machines that have been authenticated.
    • One method is to encrypt the assets, whereby the users that can effect the movement, etc. of an asset do not have the decryption key, so even if data is copied it will be useless without the decryption key. However, this does not protect against deletion or other edit functions that could disrupt normal business. There are also times where the encryption/decryption process would add complexity and reduce efficiency of a workflow.
    • Another method (offered by several commercial 'managed transport' applications) is a strict set of controls over which endpoints can receive or send data to/from a cloud. With the correct process controls in place (requiring, for example, that the defined lists of approved endpoints cannot be changed on the fly, and that at least two different users must collectively authenticate updates to the endpoint list), a very secure set of transport privileges can be set up.
    • Tightly integrating ACL (Access Control List) actions against users and a set of rules can again reduce the possibility of rogue operations. For instance, the deletion of more than 5 assets within a given time period by a single user would trigger an authentication request against a second user – this would prevent a single user from carrying out wholesale data destruction. You might lose a few assets but not hundreds or thousands. (A minimal sketch of such a rule follows this list.)
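Here is that sketch – a hedged illustration in Python, with an arbitrary one-hour window, a five-deletion limit, and a stubbed storage call – of a rule that lets routine deletions through but demands a second approver once a single user crosses the threshold.

```python
import time
from collections import defaultdict, deque

DELETE_LIMIT = 5            # deletions one user may make inside the window without co-approval
WINDOW_SECONDS = 3600       # illustrative one-hour window

_recent_deletes = defaultdict(deque)

def perform_delete(asset_id: str) -> None:
    """Placeholder for the real storage/asset-management call."""
    print(f"deleted {asset_id}")

def request_delete(user: str, asset_id: str, second_approver: str = "") -> bool:
    """Allow routine deletions; require a second authenticated user beyond the threshold."""
    now = time.time()
    history = _recent_deletes[user]
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()                       # forget deletions outside the time window
    if len(history) >= DELETE_LIMIT and not second_approver:
        return False                            # blocked: wholesale deletion needs co-approval
    history.append(now)
    perform_delete(asset_id)
    return True
```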

One can see that the art of protection here is really in the strategy and process controls that are set up – the technical bits just carry out the strategies. There are always compromises, and the precise set of protocols that will work for one organization will be different for another: security is absolutely not a ‘one size fits all’ concept. There is also no such thing as ‘total security’ – even if one wanted to sacrifice a lot of usability. The best practices serve to reduce the probability of a serious breach or other kind of damage to an acceptable level.

Summary

In this final section on Data Security we’ve discussed the special aspects of cloud security at a high level. As cloud computing becomes more and more integral to almost every business today, it’s vital to consider the security of such entities. As the efficiency, ubiquity and cost-saving features of cloud computing continue to rise, many times a user is not even consciously aware that some of the functionality they enjoy during a session is being provided by one or more cloud sites. To further add to the complexity (and potentially reduce security) of cloud computing in general, many clouds talk to other clouds… and the user may have no knowledge or control over these ‘extended sessions’. One example (that is currently the subject of large-scale security issues) is mobile advertising placement.

When a user launches one of their 'free' apps on their smartphone, the little ads that often appear at the bottom are not placed there by the app maker; rather, that real estate is 'leased' to the highest bidder at that moment. The first 'connection' from the app is often to an aggregator who resells the 'ad space' on the app to one or more agencies that put this space up for bidding. Factors such as the user's location, the model of phone, the app being used, etc. all factor into the price and type of ad being accepted. The ads themselves are often further aggregated by mobile ad agencies or clearing houses, many of which are scattered around the globe. With the speed of transactions and the number of layered firms involved, it's almost impossible to know exactly how many companies have a finger in the pie of the app being used at that moment.

As can be seen from this brief introduction to Data Security, the topic can become complex in the details, but actually rather simple at a high level. It takes a clear set of guidelines, a good set of strategies – and the discipline to carry out the rules that are finally adopted.

Further enquires on this subject can be directed to the author at ed@exegesis9.com


Data Security – An Overview for Executive Board members [Part 5: Data Compartmentalization]

March 20, 2015 · by parasam

Introduction

This part of the series will discuss Data Compartmentalization – the rational separation of devices, data, applications and communications access from each other and from external entities. This strategy is paramount in the design of a good Data Security model. It's often overlooked, particularly in small business (large firms tend to have more experienced IT managers who have been exposed to this), and the biggest issues with Compartmentalization are keeping it in place over time and implementing it fully and correctly in the first place. While not difficult per se, it demands serious thought about the layout and design of every part of a firm's data structure if the balance between Security and Usability is to be attained.

The concept of Data Compartmentalization (allow me to use the acronym DC in the rest of this post, both for ease of the author in writing and you the reader!) implies a separating element, i.e. a wall or other structure. Just as the watertight compartments in a submarine can keep the boat from sinking if one area is damaged (and the doors are closed!!) a ‘data-wall’ can isolate a breached area without allowing the enterprise at large to be exposed to the risk. DC is not only a good idea in terms of security, but also integrity and general business reliability. For instance, it’s considered good practice to feed different compartments with mains power from different distribution panels. So if one panel experiences a fault not everything goes black at once. A mis-configured server that is running amok and choking that segment of a network can easily be isolated to prevent a system-wide ‘data storm’ – an event that will raise the hairs on any seasoned network admin… a mis-configured DNS server can be a bear to fix!

In this section we’ll take a look at different forms of DC, and how each is appropriate to general access, network communications, applications servers, storage and so on. As in past sections, the important aspect to take away is the overall strategy, and the intestinal fortitude to ensure that the best practices are built, then followed, for the full life span of the organization.

The Data Security Model

Data Compartmentalization

DC (Data Compartmentalization) is essentially the practice of grouping and isolating IT functions to improve security, performance, reliability and ease of maintenance. While most of our focus here will be on the security aspects of this practice, it's helpful to know that basic performance is often increased (by reducing traffic on a LAN [Local Area Network] sub-net); reliability is improved since there are fewer moving parts within one particular area; and it's easier to patch or otherwise maintain a group of devices when they are already grouped and isolated from other areas. It is not at all uncommon for a configuration change or software update to malfunction, and sometimes this can wreak havoc on the rest of the devices that are interconnected on that portion of a network. If all hell breaks loose, you can quickly 'throw the switch' and separate the malfunctioning group from the rest of the enterprise – provided this Compartmentalization has first been set up.

Again, from a policy or management point of view, it’s not important to understand all the details of programming firewalls or other devices that act as the ‘walls’ that separate these compartments (believe me, the arcane and tedious methodology required even today to correctly set up PIX firewalls for example gives even the most seasoned network admins severe headaches!). The fundamental concept is that it’s possible and highly desirable to break things down into groups, and then separate these groups at a practical level.

For a bit of perspective, let's look at how DC operates in conjunction with each of the subjects that we've discussed so far: Access Control, Network Security Control and Application Security. In terms of Access Control, the principle of DC requires that a login to one logical group (or domain, or whatever logical division is appropriate to the organization) grants access only to the devices within that group. Now once again we run up against the age-old conundrum of Security vs. Usability – and here the oft-desired SSO (Single Sign On) feature is often at odds with the best practice of DC. How does one manage that effectively?

There are several methods: either a user is asked for additional authentication when wanting to cross a 'boundary', or a more sophisticated SSO policy/implementation is put in place, where a request from a user to 'cross a boundary' is fed to an authentication server that automatically validates the request against the user's permissions and then allows the user to travel farther in cyberspace. As mentioned earlier in the section on Network Security, there is a definite tradeoff in this type of design, because rogue or stolen credentials could then be used to access very wide areas of an organization's data structure. There are ways to deal with this, mostly in terms of monitoring controls that compare a user's current behavior with their past behavior, and a very granular set of ACL (Access Control List) permissions that are very specific about who can do what, and where. There is no perfect answer to this balance between ideal security and friction-free access throughout a network. But in each organization the serious questions need to be asked, and a rational policy hammered out – with an understanding of the risks of whatever compromise is chosen.

Moving on to the concept of DC for Network Security, a similar set of challenges and possible solutions to the issues raised above in Access Control presents itself. While one may think that from a pure usability standpoint everything in the organization's data structure should be connected to everything else, this is neither practical nor reliable, let alone secure. One of the largest challenges to effective DC for large and mature networks is that these have typically grown over years, and not always in a fully designed manner: often things have expanded organically, with bits stuck on here and there as immediate tactical needs arose. The actual underpinnings of many networks of major corporations, governments and military sites are usually byzantine, rather disorganized and not fully documented. The topology of a given network or group of networks also has a direct effect on how DC can be implemented: that is why the best designs are those where DC is taken as a design principle from day one. There is no one 'correct' way to do this: the best answer for any given organization is highly dependent on the type of organization, the amount and type of data being moved around and/or stored, and how much interconnection is required. For instance, a bank will have very different requirements than a news organization.

Application Security can only be enhanced by judicious use of compartmentalization. For instance, web servers, an inherently public-facing application, should be isolated from an organization’s database, e-mail and authentication servers. One should also remember that the basic concepts of DC can be applied no matter how small or large an organization is: even a small business can easily separate public-facing apps from secure internal financial systems, etc. with a few small routers/firewalls. These devices are so inexpensive these days that there is almost no rationale for not implementing these simple safeguards.

One can see that the important concept of DC can be applied to virtually any area of the Data Security model:  while the details of achieving the balance between what to compartmentalize and how to monitor/control the data movement between areas will vary from organization to organization, the basic methodology is simple and provides an important foundation for a secure computational environment.

Summary

In this section we've reviewed how Data Compartmentalization is a cornerstone of a sound data structure, aiding not only security but performance and reliability as well. The division of an extended and complex IT ecosystem into 'blocks' allows for flexibility and ease of maintenance, and greatly contributes to the ability to contain a breach should one occur (and one inevitably will!). One of the greatest mistakes any organization can make is to assume "it won't happen to me." Breaches are astoundingly commonplace, and many go undetected or unreported even when discovered. For many reasons – losing customer or investor confidence, potential financial losses, lack of understanding, etc. – the majority of breaches that occur within commercial organizations are not publicly reported. Usually we find out only when a breach is sufficient in scope that it must be reported. And the track record for non-commercial institutions is even worse: NGOs, charities, research institutions, etc. often don't even know of a breach unless something really big goes wrong.

The last part of this series will discuss Cloud Computing: as a relatively new ‘feature’ of the IT landscape, the particular risks and challenges to a good security model warrant a focused effort. The move to using some aspect of the Cloud is becoming prevalent very quickly among all levels of organizations: from the massive scale of iTunes down to an individual user or small business backing up their smartphones to the Cloud.

Part 6 of this series is located here.


Data Security – An Overview for Executive Board Members [Part 4: Application Security]

March 19, 2015 · by parasam

Introduction

In this section we'll move on from Network Security (discussed in the last part) to the topic of Application Security. So far we've covered issues surrounding the security of basic access to devices (Access Control) and networks (Network Security); now we'll look at an oft-overlooked aspect of a good Data Security model: how applications behave in regards to a security strategy. Having access to computers, smartphones, tablets, etc., and then having privileges to connect to other devices through networks, is like being inside a shop without doing any shopping: rather useless until applications come into play…

All of our functional interaction with data is through one or more applications: e-mail, messaging, social media, databases, maps, VoIP (internet telephony), editing and sharing images, financial and planning tools – the list is endless. Very few modern applications work completely in a vacuum, i.e. perform their function with absolutely no connection to the "outside world". The connections that applications make can be as benign as accessing completely 'local' data – such as a photo-editing app requiring access to your photo library on the same computer on which the app is running – or can reach out to the entire public internet, such as Facebook.

The security implications of these interactions between applications and other apps, stored data, websites, etc. etc. are the area of discussion for the rest of this section.

The Data Security Model

Application Security

Trying to think of an app that is completely self-contained is actually quite an exercise. A simple calculator and the utility app that switches on the LED "flash" (to function as a flashlight) are the only two apps on my phone that are completely 'stand-alone'. Every other app (some 200 in my case) connects in some way to external data (even if on the phone itself), the local network, the web, etc. Each one of these 'connections' carries with it a security risk. Remember that hackers are like rainwater: even the tiniest little hole in your roof or wall will allow water into your home.

You may think that your cellphone camera app is a very secure thing – after all, you are only taking pix with your phone and storing those images directly on it… we are not discussing uploading or sharing these images in any way (yet). However… remember that little message that pops up when you first install an app such as that? Where it asks permission to access 'your photos'? (Different apps may ask for permission for different things… and this only applies to phones and tablets: laptops and desktops never seem to ask at all – they just connect to whatever they want to!)

I'll give you an example of how 'security holes' can contribute to a weakness in your overall Data Security model, using the smartphone as an example platform. Your camera app has access to your photos. Now you've installed Facebook as well, and in addition to the Facebook app itself you've installed the FB "Platform" (which supports 3rd party FB apps) and a few FB apps, including some that allow you to share photos online. FB apps in general are notoriously 'leaky' (poorly written in terms of security, and some even deliberately do things with your data that they do not disclose). A very common user behavior on a phone is to switch apps without fully closing them. If FB is running in the background, all installed FB apps are running as well. Each time you take a photo, the image is stored in the Camera Roll, which is now shared with the FB apps – which can access and share these images without your knowledge. So the next time you see celebrity pix of things we really don't need to see any more of… now you know one way this can easily happen.

The extent to which apps ‘share’ data is far greater than is usually recognized. This is particularly true in larger firms that often have distributed databases, etc. Some other examples of particularly ‘porous’ applications are: POS (Point Of Sale) systems, social media applications (corporate integration with Twitter, Facebook, etc. can be highly vulnerable), mobile advertising backoffice systems, applications that aggregate and transfer data to/from cloud accounts and many more. (Cloud Computing is a case unto itself, in terms of security issues, and will be discussed as the final section in this series.)

There are often very subtle ways in which data is shared from a system or systems. Some of these appear very innocuous, but a determined hacker can make use of even small bits of data, which can be linked with other bits to eventually provide enough information to make a breach possible. One example: many apps (including the OSes themselves – whether Apple, Android, Windows, etc.) send 'diagnostic' data to the vendor. Usually this is described as 'anonymous' and gives the user the feeling that it's ok to allow: firstly, personal information is not transmitted; secondly, the data is supposedly only going to the vendor's website for data collection – usually to study application crashes.

However, it's not that hard to 'spoof' the server address to which the data is being sent, and the seemingly innocent payload can often include the IP address or MAC address of the device – which can be very useful in the future to a hacker who may attempt to compromise that device. The internal state of many 'software switches' is also revealed – which can tell a hacker whether certain patches have been installed or not. Even if the area revealed by the app dump is not directly useful, a hacker who sees 'stale' settings (showing that this machine has not been updated or patched recently) may assume that other areas of the same machine are also not patched, and can use discovered vulnerabilities to attempt to compromise the security of that device.

The important thing to take away from this discussion is not the technical details (that is what you have IT staff for), but rather to ensure that protocols are in place to constantly keep ALL devices (including routers and other devices that are not ‘computers’ in the literal sense) updated and patched as new security vulnerabilities are published. An audit program should be in place to check this, and the resulting logs need to actually be studied, not just filed! You do not want to be having a meeting at some future date where you find out that a patch that could have prevented a data breach remained uninstalled for a year… which BTW is extraordinarily common.

The ongoing maintenance of a large and extended data system (such as many companies have) is a significant effort. It is as important as the initial design and deployment of the systems themselves. There are well-known methodologies for doing this correctly that provide a high level of both security and stability for the applications and the technical business process in general. It's just that often they are not universally applied without exception. And it's those little 'exceptions' that can bite you in the rear – fatally.

A good rule of thumb is that every time you launch an application, that app is ‘talking’ to at least ten other apps, OS processes, data stores, etc. Since the average user has dozens of apps open and running simultaneously, you can see that most user environments are highly interconnected and potentially porous. The real truth is that as a collective society, we are lucky that there are not enough really good hackers to go around: the amount of potential vulnerabilities vastly outnumbers those who would take advantage of them!

If you really want to look at the extremes of this 'cat and mouse' game, do some deep reading on the biggest 'hack' of all time: the NSA penetration of massive amounts of US citizens' data on the one side, and the procedures that Ed Snowden took in communicating with Laura Poitras and Glenn Greenwald (the journalists who first connected with Snowden) on the other. Ed Snowden, more than just about anyone, knew how to use computers effectively and not be breached. The process was fairly elaborate – but not all that difficult – and he managed to instruct both Laura and Glenn in how to set up the necessary security on their computers so that reliable and totally secret communications could take place.

Another very important issue of which to be aware, particularly in this age of combined mobile and corporate computing with thousands of interconnected devices and applications: breaches WILL occur. It's how you discover and react to them that is often the difference between a relatively minor loss and a CNN exposé level… The bywords one should remember are: Containment, Awareness, Response and Remediation. Any good Data Security protocol must include practices that are just as effective against M2M (Machine to Machine) actions as against those performed by human actors. So constant monitoring software should be in place to see whether unusual numbers of connections, file transfers, etc. are taking place – even from one server to another. I know it's an example I've used repeatedly in this series (I can't help it – it's such a textbook case of how not to do things!), but the Sony hack revealed that a truly massive amount of data (as in many, many terabytes) was transferred from supposedly 'highly secure' servers/storage farms to repositories outside the USA. Someone or something should have been notified that very sustained transfers of this magnitude were occurring, so at least some admin could check and see what was using all this bandwidth. Both of the most common corporate file transfer applications (Aspera and Signiant) have built-in management tools that can report on what's going where… so this was not a case of something that needed to be built – it's a case of using what's already provided correctly.
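As an illustration of what such monitoring could look like at its simplest, here is a hedged sketch that scans parsed transfer-log records and flags any source that moves an unusual volume of data, or fans out to an unusual number of endpoints, within an hour. The thresholds and record fields are invented for the example; a real deployment would derive baselines per server from history and feed alerts into whatever monitoring console is already in place.

```python
from collections import defaultdict

MAX_BYTES_PER_HOUR = 500 * 2**30        # illustrative: alert if one source moves > 500 GiB/hour
MAX_DESTINATIONS_PER_HOUR = 20          # illustrative: alert on fan-out to many distinct endpoints

def check_transfer_log(records: list[dict]) -> list[str]:
    """records: parsed log entries for the last hour, e.g. {"src": ..., "dst": ..., "bytes": ...}."""
    bytes_by_src = defaultdict(int)
    dests_by_src = defaultdict(set)
    for r in records:
        bytes_by_src[r["src"]] += r["bytes"]
        dests_by_src[r["src"]].add(r["dst"])

    alerts = []
    for src, total in bytes_by_src.items():
        if total > MAX_BYTES_PER_HOUR:
            alerts.append(f"{src}: {total / 2**40:.2f} TiB transferred in the last hour")
        if len(dests_by_src[src]) > MAX_DESTINATIONS_PER_HOUR:
            alerts.append(f"{src}: transfers to {len(dests_by_src[src])} distinct endpoints")
    return alerts
```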

Many, if not most, applications can be ‘locked down’ to some extent – the amount and degree of communication can be controlled to help reduce vulnerabilities. Sometimes this is not directly possible within the application, but it’s certainly possible if the correct environment for the apps is designed and implemented appropriately. For example, a given database engine may not have the level of granular controls to effectively limit interactions for your firm’s use case. If that application (and possibly others of similar function) are run on a group of application servers that are isolated from the rest of the network with a small firewall, the firewall settings can be used to very easily and effectively limit precisely which other devices these servers can reach, what kind of data they may send/receive, the time of day when they can be accessed, etc. etc. Again, most of good security is in the overall concept and design, as even excellent implementation of a poor design will not be effective.

Summary

Applications are what actually give us function in the data world, but they must be carefully installed, monitored and controlled in order to obtain the best security and reliability. We've reviewed a number of common scenarios that demonstrate how easily data can be compromised by unintended communication to or from your applications. Applications are vital to every aspect of work in the data world, but they can, and do, 'leak' data to many unintended places – or provide an unintended bridge to sensitive data.

The next section will discuss Data Compartmentalization. This is the area of the Data Security model that is least understood and practiced. It's a bit like the watertight compartments in a submarine: if one area is flooded, closing the communicating doors can save the rest of the boat from disaster. A big problem (again, not technical but procedural) is that in many organizations, even where a good initial design segregated operations into 'compartments', it doesn't take very long at all for "cables to be thrown over the fence", thereby bypassing the very protections that were put in place. Often this is done for expediency, to fix a problem, or because some new app needs to be brought on line quickly and taking the time to install things properly, with all the firewall rule changes, is waived. These business practices are where good governance, proper supervision, and continually asking the right questions are vital.

Part 5 of this series is located here.

Data Security – An Overview for Executive Board members [Part 3: Network Security]

March 18, 2015 · by parasam

Introduction

In Part 2 of this series we reviewed Access Control – the beginning of a good Data Security policy. In this section we’ll move on to Network Security Controls: once a user has access to a device or data, almost all interactions today are with a network of devices, servers, storage, transmission paths, web sites, etc. The design and management of these networks are absolutely critical to the security model. This area is particularly vulnerable to intrusion or inadvertent ‘leakage’ of data, as networks continue to grow and can become very complex. Often parts of the network have been in place for a long time, with no one currently managing them even aware of the initial design parameters or exactly how pieces are interconnected.

There are a number of good business practices that should be reviewed and compared against your firm's networks – and this is not highly technical; it just requires methodology, common sense, and the willingness to ask the right questions and take action should the answers reveal weaknesses in network security.

The Data Security Model

Network Security Controls

Once more than one IT device is interconnected, we have a network. The principle of Access Control discussed in the prior section is equally applicable to a laptop or an entire global network of thousands of devices. There are two major differences when we expand our security concept to an interconnected web of computers, storage, etc. The first is that when a user signs on (whether with a simple username or password, or a sophisticated Certificate and Biometric authorization) instead of logging into a single device, such as their laptop, they sign in to a portion of the network. The devices to which they are authorized are contained in an Access Control List (ACL) – which is usually chosen by the network administrator. (This process is vastly simplified here in regards to the complex networks that can exist today in large firms, but the principle is the same). It’s a bit like a passport that allows you into a certain country, but not necessarily other bordering countries. Or one may gain entry into a neighboring country with restrictions, such as are put in place with visas for international travelers.

The second major difference, in terms of network security compared to logging into a single device, is that within a network there are often many instances where one device needs to communicate directly with another device, with no human taking part in that process. These are called M2M (Machine to Machine) connections. It's just as important that any device that wants to connect to another device within your network be authorized to do so. Again, the network administrator is responsible for setting up the ACLs that control this, and in addition many connections are restricted: a given server may only receive data but not send it, or may only have access to a particular kind of data.
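
To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of an ACL covering both cases above: which accounts – human or machine – may reach which resources, and which operations they are limited to. All of the account, resource and operation names are invented for the example; real networks enforce this through directory services, firewalls and application permissions rather than a simple lookup table.

```python
# Hypothetical ACL: (account, resource) -> set of permitted operations.
# Every name below is an invented example, not a real system.
ACL = {
    ("alice",            "finance-fileserver"): {"read", "write"},
    ("backup-server-01", "mail-store"):         {"read"},   # receive-only M2M link
}

def allowed(account: str, resource: str, operation: str) -> bool:
    """Permit a request only if the ACL explicitly grants that operation."""
    return operation in ACL.get((account, resource), set())

print(allowed("alice", "finance-fileserver", "write"))      # True
print(allowed("backup-server-01", "mail-store", "delete"))  # False - never granted
```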

In a large modern network, the M2M communication usually outnumbers the Human to Machine interactions by many orders of magnitude – and this is where security lapses often occur, just due to the sheer number of interconnected devices and the volume of data being exchanged. While it’s important to have in place trained staff, good technical protocols, etc. to manage this access and communication, the real protection comes from adopting, and continually monitoring adherence to, a sound and logical security model at the highest level. If this is not done, then usually a patchwork of varying approaches to network security ends up being implemented, with almost certain vulnerabilities.

The biggest cause of network security lapses results from the usual suspect: Security vs Usability. We all want ‘ease of use’ – and no one more so than network administrators and others who must traverse vast amounts of the network daily in their work, often needing to log into many different servers quickly to solve problems or to make changes requested by impatient users. One login policy is very popular with regular users and network admins alike: Single Sign On (SSO). While this provides great ease of use, and is really the only practical method for users to navigate large and complex networks, its very design is a security flaw waiting to happen. The proponents of SSO will argue that detailed ACLs (Access Control Lists – discussed in Part 2 of this series) can restrict very clearly the boundaries of who can do what, and where. And, they will point out, these ACLs are applicable to machines as well as humans, so in theory a very granular – and secure – network permissions environment can be built and maintained.

As always, the devil is in the details… and the larger a network gets, the more details there are… couple that with inevitable human error, software bugs, determined hackers, etc. and sooner or later a breach of data security will occur. This is where the importance of a really sound overall security strategy is paramount – one that takes into account that neither humans nor software are perfect, and that breaches will happen at some point. The issue is one of containment, awareness, response and remediation. Just as a well-designed building can tolerate a fire without quickly burning to the ground – due to such features as firestops in wall design, fire doors, sprinkler systems, fire-retardant furnishings, etc. – so can a well-designed network tolerate breaches without allowing unfettered access to the entire network. Unfortunately, many (if not most) corporate networks in existence today have substantial weaknesses in this area. The infamous Sony Pictures ‘hack’ was initiated by a single set of stolen credentials being used to access virtually the entire worldwide network of that company. (It's a bit more complicated than this, but in essence that's what happened.) Ideally, the compromise of a single set of credentials should not have allowed a breach of that extent and depth.

Just as when a person is travelling from one state to another within even a single country, you must usually stop at the border and a guard has a quick look at your car, may ask if you have any fruit or veg (in case of parasites), etc. – so a basic check of permissions should occur when access is requested from a substantially different domain than the one where the initial logon took place. Even more important – going back to our traveler analogy: if instead of a car the driver is in a large truck, usually a more detailed inspection is warranted. A Bill of Lading must be presented, and a partial physical inspection is usually performed. The data equivalent of this (stateful inspection) will reveal whether the data being moved is appropriate to the situation. Using the Sony hack as an example, the fact that hundreds of thousands of e-mails were being moved from within a ‘secure’ portion of the network to servers located outside the USA should have tripped a notification somewhere…
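
The ‘truck at the border’ idea can be expressed very simply. The sketch below (again just an illustration, with invented zone names and an arbitrary threshold) flags an unusually large transfer from a ‘secure’ internal zone to an external destination – the kind of check that, per the example above, should have raised an alarm. Real deployments do this with intrusion detection and data loss prevention tooling rather than a few lines of code.

```python
ALERT_THRESHOLD_MB = 500  # hypothetical per-session limit, for illustration only

def review_transfer(source_zone: str, dest_zone: str, size_mb: float) -> str:
    """Apply a closer 'inspection' to cross-domain moves of data."""
    if source_zone == "internal-secure" and dest_zone.startswith("external"):
        if size_mb > ALERT_THRESHOLD_MB:
            return "HOLD: bulk export from secure zone - notify security team"
        return "allow, but log for audit"
    return "allow"

print(review_transfer("internal-secure", "external-overseas", 12_000))
# -> HOLD: bulk export from secure zone - notify security team
```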

The important issue to keep remembering here is that the detailed technology, software, hardware or other bits that make this all work are not what needs to be understood at the executive level: what DOES need to be in place is the governance, the policy and the willingness to enforce the policies on a daily basis. People WILL complain – certain of these suggested policies make accessing, moving and deleting data a bit more difficult and a bit more time-consuming. However, as we have seen, the tradeoff is worth it. No manager or executive wants to answer the kind of questions after the fact that can result from ignoring sound security practices…

Probably the single most effective policy to implement – and enforce – is the ‘buddy system’ at the network admin level. No single person should have unfettered access to the entire network scope of a firm. And most importantly, this must include any outside contractors – an area often overlooked. The details must be designed for each firm, as there are so many variations, but essentially at least two people must authenticate major moves, deletions or copies of a certain scope of data. A few examples: if an admin who routinely works in an IT system in California wants access to the firm's storage servers in India, a local admin within India should be required to additionally authenticate this request. Or if a power user whose normal function is backup and restore of e-mails wants to delete hundreds of entire mailboxes, then a second authentication should be required.
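
In logic terms the ‘buddy system’ is nothing more than a two-person rule on sensitive operations. The sketch below shows the principle only – the operation names and admin roles are hypothetical, and a real implementation would live inside the firm's identity and change-management systems.

```python
# Operations considered sensitive enough to require two distinct approvers.
SENSITIVE_OPERATIONS = {"bulk_delete_mailboxes", "cross_region_data_access"}

def authorize(operation: str, approvers: set) -> bool:
    """Routine work needs one approver; sensitive operations need two people."""
    required = 2 if operation in SENSITIVE_OPERATIONS else 1
    return len(approvers) >= required

print(authorize("bulk_delete_mailboxes", {"admin_ca"}))              # False
print(authorize("bulk_delete_mailboxes", {"admin_ca", "admin_in"}))  # True
```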

While the above examples involved human operators, the same policies are effective with M2M actions. In many cases, sophisticated malware that is implanted in a network by a hacker can carry out machine operations – provided it has obtained the necessary credentials. Again, if a ‘check & balance’ system is in place, such M2M operations would not be allowed to proceed unhindered. Another policy to consider adopting is that of not excluding any operations, by anyone at all, from these intrusion detection, audit or other protective systems. Often senior network admins will request that these protective systems be bypassed for operations performed by a select class of users (most often the top network admins themselves) – as they often say that these systems get in the way of their work when trying to resolve critical issues quickly. This is a massive weakness – as has been shown many times when these credentials are compromised.

Summary

Although networks range from very simple to extraordinarily complex, the same set of good governance policies, protocols and other ‘rules of the road’ can provide an excellent level of security within the data portion of a company. This section has reviewed several of these, and discussed some examples of effective policies. The most important aspect of network security is often not the selection of the chosen security measures, but the practice of ensuring that they are in place completely across the entire network. These measures should also be regularly tested and checked.

In the next section, we’ll discuss Application Security:  how to analyze the extent to which many applications expose data to unintended external hosts, etc. Very often applications are quite ‘leaky’ and can easily compromise data security.

Part 4 of this series is located here.

 

 

Data Security – An Overview for Executive Board members [Part 2: Access Control]

March 16, 2015 · by parasam

Introduction

In Part 1 of this topic we discussed the concepts and basic practices of digital security, and covered an overview of Data Security. In the next parts we’ll go on to cover in detail a few of the most useful parts of the Data Security model, and offer some practical solutions for good governance in these areas. The primary segments of Data Security that have significant human factors, or require an effective set of controls and strategy in order for the technical aspect to be successful are: Access Control, Network Security Controls, Application Security, Data Compartmentalization, and Cloud Computing.

Security vs Usability

This is the cornerstone of so many issues with security: the paradox between really effective security and the ease of use of a digital system. It's not unlike wearing a seatbelt in a car… a slight decrease in ‘ease of use’ results in an astounding increase in physical security. You know this. The statistics are irrefutable. Yet hundreds of thousands of humans are either killed or injured worldwide every year by not taking this slight ‘security’ effort. So… if you habitually put on your seat belt each time before you put your car in gear… then keep reading, for at least you are open to a tradeoff that will seriously enhance the digital security of your firm, whether a Fortune 500 company, a small African NGO or a research organization that is counting the loss of primary forests in Papua New Guinea.

The effective design of a good security protocol is not that different from the design principles that led to seatbelts in cars. On the security side, the restraint system evolved from a simple lap belt to combination shoulder harness/lap belt systems, often with automatic mechanisms that ‘assisted’ the user to wear them. The coupling of airbags as part of the overall passenger restraint system (which, being hidden, required no effort on the part of the user to make them work) improved the effectiveness of the overall security system even further. On the usability side, the placement of the buckles, the mechanical design of the buckles (to make it easy for everyone from children to the elderly to open and close them), and other design factors worked to increase the ease of use. In addition, philosophical and social pressures added to the ‘human factor’ of seat belt use: in most areas there are significant public awareness efforts, driver education and governmental regulations (fines for not wearing seat belts) that further promote the use of these effective physical security devices.

If you attempt to put in place a password policy that requires at least 16 characters with ‘complexity’ (i.e. a mix of capitals, numbers and punctuation) – and require the password to be changed monthly – you can expect a lot of passwords to be written down on sticky notes on the underside of keyboards… You have designed a system with very good Security but poor Usability. In each of the areas that we will discuss, the issue of Security vs Usability will be addressed, as it is paramount to actually having something that works.
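
For reference, the kind of ‘complexity’ rule described above might be enforced by something like the following sketch. It is shown only to make the policy tangible – not as a recommendation, for exactly the usability reasons just mentioned.

```python
import re

def meets_policy(password: str) -> bool:
    """Illustrative check: 16+ characters with caps, lower case, digits, punctuation."""
    return all([
        len(password) >= 16,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),
    ])

print(meets_policy("Summer2015"))              # False - too short, no punctuation
print(meets_policy("Tr!cky-Passw0rd-2015!!"))  # True - and hard to remember...
```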

The Data Security Model

  Access Control

In its simplest form, access control is like the keys to your office, home or car. A person in possession of the correct key can access content or perform operations that are allowable within the confines of the accessible area. If you live in a low-crime area, you may have a very small number of keys: one for your house, one for your car and another for your office. But as we move into larger cities, we start collecting more keys: a deadbolt key (for extra security), probably a perimeter key for the office complex, a key for the postbox if you live in a housing complex, etc. But even relatively complex physical security is very simple compared to online security for a ‘highly connected’ user. It is very easy to have tens if not hundreds of websites, computer/server logins, e-mail logins, etc. that each require a password. Password managers have become almost a required toolset for any security-minded user today – how else to keep track of that many passwords? (And I assume here that you don't make the most basic mistake of reusing passwords across multiple websites…)

Back to basics: the premise behind the “username / password” authentication model is firstly to uniquely identify the user [username] and then to ensure that access is being granted to the correct person [a secret password that supposedly is known only to that user]. There are several significant flaws with this model, but due to its simplicity and relative ease of use it is in widespread use throughout the world. In most cases, usernames are not protected in any way (other than being checked for uniqueness). Passwords, depending on the implementation, can be somewhat more protected – many systems encrypt the password stored on the server or device to which the user is attempting to gain access, so that someone (on the inside) who gains access to the password list on the server doesn't get anything useful. Other attempts at making the password function more secure are password rules (such as requiring complexity/difficulty, longer passwords, forcing users to change passwords regularly, etc.). The problem with this is that the more secure (i.e. elaborate) the password rules become, the more likely it is that the user will compromise security by attempting to get around the rules, or by copying the password somewhere so they may refer to it, since it's too complex to remember. The worst of this type of behavior is the yellow sticky note… the best is a well-designed password manager that stores all the passwords in an encrypted database – one that itself requires a password for access!
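
As a hedged illustration of how that server-side protection typically works: rather than keeping the password itself, a system can store a salted, deliberately slow hash of it, so a stolen password list is not directly usable. The parameters below are purely for illustration; production systems rely on vetted schemes (bcrypt, scrypt, Argon2 and the like) and careful key management.

```python
import hashlib, hmac, os

def store_password(password: str):
    """Return (salt, digest) to keep in the database instead of the password."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)

salt, digest = store_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("password123", salt, digest))                   # False
```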

As can be seen this username/password model is a compromise that fails in the face of large numbers of different passwords needed by each user, and the ease at which many passwords can be guessed by a determined intruder. Various social engineering tactics, coupled with powerful computers and smart “password-guessing” algorithms can often correctly figure out passwords very quickly. We’ve all heard (or used!) birthdays, kids/pets names, switching out vowels with numbers, etc. etc. There isn’t a password simplification method that a hacker has not heard of as well…

So what next? Leaving the username identity alone for the moment, if we focus on just the password portion of the problem we can use biometrics. These have long been used by government, military and other institutions that had the money (these methods used to be obscenely expensive to implement) – but they are now within the reach of the average user. Every new iPhone has a fingerprint reader, and these devices are common on many PCs now as well. So far the fingerprint is the only fairly reliable biometric security method in common use, although retina scanners and other even more arcane devices are in use or being investigated. These devices are not perfect, and all the systems I have seen allow the use of a password as a backup method: the fingerprint is used more as a convenience than as absolute security. The fingerprint readers on smartphones are not of the same quality and accuracy as a FIPS-compliant device: but in fairness most restrict the number of ‘bad fingerprint reads’ to a small number before the alternate password is required, so the chance of a similar (but not exact) fingerprint being used to unlock the device is very low.

(Apple, for instance, states that there is a 1 in 50,000 chance of two different fingerprints being read as identical. At the academic level it is postulated that no two fingerprints are, or ever have been, exactly the same. Even if we only consider currently living humans, that is a ratio of roughly 1 in 7 billion… so by that measure fingerprint readers are not all that accurate. However, they are practically more than good enough, given the statistical probability of two people with remarkably similar fingerprints being in a position to attempt access to a single device.)
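
A quick back-of-envelope check of that figure: assuming a false-match rate of 1 in 50,000 and a device that allows only five bad reads before falling back to the password, the chance of at least one false accept is roughly one in ten thousand.

```python
# Rough arithmetic only - the 1-in-50,000 figure is the vendor's published estimate.
p_single = 1 / 50_000          # chance one wrong finger is accepted on a single read
attempts = 5                   # bad reads allowed before password fallback
p_any = 1 - (1 - p_single) ** attempts
print(round(p_any, 6))         # ~0.0001, i.e. about 1 in 10,000
```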

Don’t give up! This is not to say that fingerprint readers are not an adequate solution – they are an excellent method – just that there are issues and the full situation should be well understood.

The next level of “password sophistication” is the so-called “two factor” authentication. This is becoming more common, and has the possibility of greatly increasing security without tremendous user effort. Basically this means the user submits two “passwords” instead of one. There are two forms of “two factor” authentication: static-static and static-dynamic. The SS (static-static) method uses two previously known ‘passwords’ (usually one biometric – such as a fingerprint – and one actual ‘password’, whether a complex password or a PIN). The SD (static-dynamic) method uses one previously known ‘password’, while the second ‘password’ is some code/password/PIN that is dynamically transmitted to the user at the time of login. Usually these are sent to the user via their cellphone and are randomly created at the time of the attempted login – and are therefore virtually impossible to crack. The user must have previously registered their cellphone number with the security provider so that they can receive the codes. There are obvious issues with this method: one has to be within cellphone reception, must not have left the phone at home, etc.
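
The ‘dynamic’ part of the SD method is just a short, unpredictable code generated at login time and delivered out-of-band (for example by SMS). A minimal sketch, using nothing more than a cryptographically secure random number source:

```python
import secrets

def one_time_code(digits: int = 6) -> str:
    """Generate a fresh random code for a single login attempt."""
    return "".join(str(secrets.randbelow(10)) for _ in range(digits))

print(one_time_code())  # e.g. '483901' - different for every login attempt
```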

There is another SD method, which uses a ‘token’ (a small device containing a random number generator that is seeded with an identical ‘seed’ to one held by a master security server). This essentially means that both the server and the token will generate the same code each time the generator updates (usually once every 30 seconds). The token works without a cellphone (which also means it can work underground or in areas where there is no reception). These various ‘two factor’ authentication methods are extremely secure, as the probability of a bogus user having both factors is statistically almost zero.
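
For the curious, the widely used time-based one-time password scheme (TOTP, standardized in RFC 6238) works along these lines: the token and the server share a secret seed, and each derives the same short code from the current 30-second time window. The sketch below is a simplified illustration of that idea, not a compliant implementation, and the seed value is just an example.

```python
import hashlib, hmac, struct, time

def token_code(seed: bytes, interval: int = 30, digits: int = 6) -> str:
    """Derive a short code from the shared seed and the current time window."""
    counter = int(time.time()) // interval          # same value on token and server
    msg = struct.pack(">Q", counter)
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

shared_seed = b"example-shared-seed"                # provisioned once, kept secret
print(token_code(shared_seed))                      # both sides compute the same code
```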

Another method for user authentication is a ‘certificate’. Without going into technical details (which BTW can make even a seasoned IT guru's eyeballs roll back in her head!), a certificate is a bit like a digital version of a passport or driver's license: an object that is virtually impossible to counterfeit and that uniquely identifies the owner as the rightful holder of that ‘certificate’. In the physical world, driver's licenses have a picture, the user's signature, and often a thumbprint or certain biometric data (height, hair/eye color, etc.). Examination of the “license” in comparison to the person validates the identity. An online ‘security certificate’ [X.509 or similar] performs the same function. There are different levels of certificates, with the higher levels (Level 3 for instance) requiring a fairly detailed application process to ensure that the user is who s/he says s/he is. Use of the certificate, instead of just a simple username, offers a considerably higher level of security in the authentication process.

A certificate can then be associated with a password (or a two factor authentication process) for any given website or other access area. There are a lot of details around this, and there is overhead in administering certificates in a large company – but they have been proven worldwide to be secure, reliable and useful. Many computers can be fitted with a ‘card reader’ that reads a physical ‘certificate’ (where the certificate is carried on a card, like a credit card, that the user presents to log in).
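
Certificates are ordinary X.509 objects – the same kind every web browser checks when connecting to a secure site. As a small, tangential illustration of what such a certificate contains, the sketch below fetches and prints the subject, issuer and expiry of the certificate a public web server presents (the hostname is just an example); client certificates used for user logon carry the same kinds of fields.

```python
import socket, ssl

HOST = "www.example.com"   # illustrative hostname only

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()                   # parsed X.509 details
        print(cert["subject"], cert["issuer"], cert["notAfter"])
```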

One can see that something as simple as presenting a card and then pressing a fingerprint reader is very user-friendly, highly secure, and a long way from simple usernames and passwords. The principle here is not to get stuck on details, but to understand that there are methods for greatly improving both security and usability, making this aspect of Data Security – Access Control – no longer an issue for an organization that is willing to take the effort to implement them. Some of these methods are not enormously complicated or expensive, so even small firms can make use of them.

Summary

In this part we have reviewed Access Control – one of the pillars of good Data Security. Several common methods, with their corresponding Security vs Usability aspects, have been discussed. Access Control is a vital part of any firm's security policy, and is the foundation of keeping your data under control. While there are many more details surrounding good Access Control policies (audits, testing of devices, revocation of users that are no longer authorized, etc.), the principles are easy to comprehend. The most important thing is to know that good Access Control is required, and that shortcuts or compromises can have disastrous results in terms of a firm's bottom line or reputation. The next part will discuss Network Security Controls – the vitally important aspect of Data Security where computers or other data devices are connected together – and how those networks can be secured.

Part 3 of this series is located here.

 

Data Security – An Overview for Executive Board members [Part 1: Introduction & Concepts]

March 16, 2015 · by parasam

Introduction

This post is a synthesis of a number of conversations and discussions concerning security practices for the digital aspect of organizations. These dialogs were initially with board members and executive-level personnel, but the focus of this discussion is equally useful to small business owners or anyone that is a stakeholder in an organization that uses data or other digital tools in their business: which today means just about everyone!

The point of view is high level and deliberately as non-technical as possible: not because many at this level are not extremely technically competent, but rather to encompass as broad an audience as possible – and, as will be seen, because the biggest issues are not actually that technical in the first place, but rather are issues of strategy, principle, process and oft-misunderstood ‘features’ of the digital side of any business. The points that will be discussed are equally applicable to firms that primarily exist ‘online’ (which essentially have no physical presence to the consumers or participants in their organization) and those organizations that exist mainly as ‘bricks and mortar’ companies (which use IT as a ‘back office’ function simply to support their physical business).

In addition, these principles are relevant to virtually any organization, not just commercial business: educational institutions, NGO’s, government entities, charities, medical practices, research institutions, ecosystem monitoring agencies and so on. There is almost no organization on earth today that doesn’t use ‘data’ in some form. Within the next ten years, the transformation will be almost complete: there won’t be ANY organizations that won’t be based, at their core, on some form of IT. From databases to communication to information sharing to commercial transactions, almost every aspect of any firm will be entrenched in a digital model.

The Concept of Security

The overall concept of security has two major components: Data Integrity and Data Security. Data Integrity is the aspect of ensuring that data is not corrupted by either internal or external factors, and that the data can be trusted. Data Security is the aspect of ensuring that only authorized users have access to view, transmit, delete or perform other operations on the data. Each is critical. A loss of Integrity can be likened to disease in the human body: pathogens that break the integrity of certain cells will disrupt them and eventually cause injury or death. Security is similar to the protection that skin and other peripheral structures provide – a penetration of these boundaries leads to a compromise of the operation of the body, or in extreme cases major injury or death.

While Data Integrity is mostly enforced with technical means (backup, comparison, hash algorithms, etc.), Data Security is an amalgam of human factors, process controls, strategic concepts, technical measures (comprising everything from encryption and virus protection to intrusion detection) and the most subtle – but potentially most dangerous to a good security model: the very features of a digital ecosystem that make it so useful can also make it highly vulnerable. The rest of this discussion will focus on Data Security, and in particular those factors that are not overtly ‘technical’ – as there are countless articles on the technical side of Data Security. [A very important aspect of Data Integrity – BCDR (Business Continuity and Disaster Recovery) – will be the topic of an upcoming post; it's such an important part of any organization's basic “Digital Foundation”.]
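
One small example of the ‘hash algorithms’ mentioned for Data Integrity: a short fingerprint is computed over the data, and if the stored fingerprint no longer matches, the data has been altered or corrupted. A minimal sketch, illustrative only:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a short, fixed-length fingerprint of the data."""
    return hashlib.sha256(data).hexdigest()

original = fingerprint(b"quarterly-report-v1 contents")
received = fingerprint(b"quarterly-report-v1 contents")
print(original == received)   # True only if the data is bit-for-bit unchanged
```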

The Non-Technical Aspects of Data Security

The very nature of ‘digital data’ is an absolute boon to organizations in so many ways: communication, design, finance, sales, online business – the list is endless. The fantastic toolsets we now have, in terms of high-powered smartphones and tablets coupled with sophisticated software ‘apps’, have put modern business in the hands of almost anyone. This is based on the core of any digital system: the concept of binary values. Every piece of e-mail, data, bank account details or digital photograph is ultimately a series of digital values: either a 1 or a 0. This is the difference between the older analog systems (many shades of gray) and digital (black or white, only 2 values). This core concept of digital systems makes copying, transmission, etc. of data very easy and very fast. A particular block of digital data, when copied with no errors, is absolutely indistinguishable from the ‘original’. While in most cases this is what makes the whole digital world work as well as it does, it also creates a built-in security threat. Once a copy is made, if it is appropriated by an unauthorized user it's as if the original was taken. The many thousands of e-mails that were stolen and then released by the hackers who compromised the Sony Pictures data networks are a classic example of this…

While there are both technical methods and process controls that can mitigate this risk, it’s imperative that business owners / stakeholders understand that the very nature of a digital system has a built-in risk to data ‘leakage’. Only with this knowledge can adequate controls be put in place to prevent data loss or unauthorized use. Another side to digital systems, particularly communication systems (such as e-mail and social media), is how many of the software applications are designed and constructed. Many of these, mostly social media types, have uninhibited data sharing as the ‘normal’ way the software works – with the user having to take extra effort to limit the amount of sharing allowed.

An area that is a particular challenge is the ‘connectedness’ of modern data networks. The new challenge of privacy in the digital ecosystem has prompted (and will continue to prompt) many conversations, from legal to moral/ethical to practical. The “Facebook” paradigm [everything is shared with everybody unless you take efforts to limit such sharing] is really something we haven't experienced since the small towns of past generations, where everybody knew everyone's business…

While social media is fast becoming an important aspect of many firms' marketing, customer service and PR efforts, these platforms must be deployed rather carefully in order to isolate the ‘data sharing’ systems from the internal business and financial systems of a company. It is surprisingly easy for inadvertent ‘connections’ to be made between what should be private business data and the more public social media facet of a business. Even if a direct connection is not made between, say, the internal company e-mail address book and an external Facebook account (a practice that unfortunately I have witnessed on more than one occasion!), the inappropriate positioning of a firm's Twitter client on the same sub-network as its e-mail servers is a hacker's dream: it will usually take a clever hacker only minutes to ‘hop the fence’ and gain access to the e-mail server if they were able to compromise the Twitter account.

Many of the most important issues surrounding good Data Security are not technical, but rather principles and practices of good security. Since human beings are ultimately a significant actor in the chain of entities that handle data, these humans need guidance and effective protocols, just as the computers need well-designed software that protects the underlying data. Access controls (from basic passwords to sophisticated biometric parameters such as fingerprints or retina scans); network security controls (for instance requiring at least two network administrators to collectively authorize large data transfers or deletions – which would have prevented most of the Sony Pictures data theft/destruction); compartmentalization of data (the practice of controlling both storage and access to different parts of a firm's digital assets in separate digital repositories); and the newcomer on the block, cloud computing (essentially just remote data centers that host storage, applications or even entire IT platforms for companies) – all of these are areas that have very human philosophies and governance issues that are simply implemented with technology.

Summary

In Part 1 of this post we have discussed the concepts and basic practices of digital security, and covered an overview of Data Security. The next part will discuss in further detail a few of the most useful parts of the Data Security model, and offer some practical solutions for good governance in these areas.

Part 2 of this series is located here.
