Introduction
This part of the series will discuss Data Compartmentalization – the rational separation of devices, data, applications and communications access from each other and from external entities. This strategy is paramount in the design of a good Data Security model, yet it's often overlooked, particularly in small business (large firms tend to have more experienced IT managers who have been exposed to it). The biggest issues with Compartmentalization are failing to implement it fully and correctly in the first place, and failing to keep it in place over time. While not difficult per se, serious thought must be given to the layout and design of every part of a firm's Data Structure if the balance between Security and Usability is to be attained in regards to Compartmentalization.
The concept of Data Compartmentalization (allow me to use the acronym DC in the rest of this post, for the ease of both the author in writing and you the reader!) implies a separating element, e.g. a wall or other structure. Just as the watertight compartments in a submarine can keep the boat from sinking if one area is damaged (and the doors are closed!!), a 'data-wall' can isolate a breached area without exposing the enterprise at large to the risk. DC is a good idea not only in terms of security, but also of integrity and general business reliability. For instance, it's considered good practice to feed different compartments with mains power from different distribution panels, so that if one panel experiences a fault, not everything goes black at once. Likewise, a mis-configured server that is running amok and choking its segment of a network can easily be isolated to prevent a system-wide 'data storm' – an event that will raise the hairs on any seasoned network admin… a mis-configured DNS server can be a bear to fix!
In this section we’ll take a look at different forms of DC, and how each is appropriate to general access, network communications, applications servers, storage and so on. As in past sections, the important aspect to take away is the overall strategy, and the intestinal fortitude to ensure that the best practices are built, then followed, for the full life span of the organization.
The Data Security Model
Data Compartmentalization
DC (Data Compartmentalization) is essentially the practice of grouping and isolating IT functions to improve security, performance, reliability and ease of maintenance. While most of our focus here will be on the security aspects of this practice, it's helpful to know that basic performance is often increased (by reducing traffic on a LAN [Local Area Network] subnet); reliability is improved, since there are fewer moving parts within any one area; and it's easier to patch or otherwise maintain a group of devices when they are already grouped and isolated from other areas. It is not at all uncommon for a configuration change or software update to malfunction, and sometimes this can wreak havoc on the rest of the devices interconnected on that portion of a network. If all hell breaks loose, you can quickly 'throw the switch' and separate the malfunctioning group from the rest of the enterprise – provided this Compartmentalization has first been set up.
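To make the 'throw the switch' idea concrete, here is a minimal sketch in Python. All the names (compartments, devices, methods) are illustrative assumptions, not taken from any real product – the point is simply that once functions are grouped, one group can be cut off without touching the rest:

```python
# Hypothetical sketch: devices are grouped into compartments, and a
# misbehaving compartment can be isolated from the rest of the enterprise.
from dataclasses import dataclass, field

@dataclass
class Compartment:
    name: str
    devices: set = field(default_factory=set)
    isolated: bool = False  # True once we 'throw the switch'

class Network:
    def __init__(self):
        self.compartments = {}

    def add(self, comp):
        self.compartments[comp.name] = comp

    def isolate(self, name):
        """Cut a malfunctioning compartment off from everything else."""
        self.compartments[name].isolated = True

    def can_communicate(self, a, b):
        """Traffic flows only if neither endpoint's compartment is isolated."""
        ca, cb = self.compartments[a], self.compartments[b]
        return not (ca.isolated or cb.isolated)

net = Network()
net.add(Compartment("finance", {"fin-db", "fin-app"}))
net.add(Compartment("web", {"www1", "www2"}))
print(net.can_communicate("finance", "web"))  # True: both groups healthy
net.isolate("web")                            # the 'data storm' begins...
print(net.can_communicate("finance", "web"))  # False: the damage is contained
```

The key design point is that isolation is a single, pre-planned action – possible only because the grouping was done ahead of time.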
Again, from a policy or management point of view, it’s not important to understand all the details of programming firewalls or other devices that act as the ‘walls’ that separate these compartments (believe me, the arcane and tedious methodology required even today to correctly set up PIX firewalls for example gives even the most seasoned network admins severe headaches!). The fundamental concept is that it’s possible and highly desirable to break things down into groups, and then separate these groups at a practical level.
For a bit of perspective, let's look at how DC operates in conjunction with each of the subjects that we've discussed so far: Access Control, Network Security Control and Application Security. In terms of Access Control, the principle of DC dictates that logging in to one logical group (or domain, or whatever logical division is appropriate to the organization) grants access only to the devices within that group. Now once again we run up against the age-old conundrum of Security vs. Usability – here the oft-desired SSO (Single Sign On) feature is often at odds with a best practice of DC. How does one manage that effectively?
There are several methods: either a user is asked for additional authentication when wanting to cross a 'boundary', or a more sophisticated SSO policy/implementation is put in place, where a request from a user to cross a boundary is fed to an authentication server that automatically validates the request against the user's permissions and allows the user to travel farther in cyberspace. As mentioned earlier in the section on Network Security, there is a definite tradeoff in this type of design, because rogue or stolen credentials could then be used to access very wide areas of an organization's data structure. There are ways to deal with this, mostly in the form of monitoring controls that match the current behavior of users against their past behavior, and a granular set of ACL (Access Control List) permissions that are very specific about who can do what, and where. There is no perfect answer to this balance between ideal security and friction-free access throughout a network. But in each organization the serious questions need to be asked, and a rational policy hammered out – with an understanding of the risks of whatever compromise is chosen.
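The SSO boundary-crossing method described above can be sketched as a single central decision point. This is a toy illustration under stated assumptions – the user names, compartment names and the `ACL` table are all hypothetical – but it shows the shape of the idea: the user is never re-prompted, yet every crossing is checked against a granular permission set:

```python
# Granular ACL: which compartments each authenticated user may enter.
# All entries are illustrative, not from any real directory service.
ACL = {
    "alice": {"engineering", "build-servers"},
    "bob":   {"engineering"},
}

def authorize_crossing(user: str, target_compartment: str) -> bool:
    """The central authentication server's decision for a boundary request."""
    allowed = ACL.get(user, set())
    granted = target_compartment in allowed
    # In a real deployment this decision would also be logged and compared
    # against the user's past behavior (the monitoring controls mentioned above).
    print(f"{user} -> {target_compartment}: {'GRANTED' if granted else 'DENIED'}")
    return granted

authorize_crossing("alice", "build-servers")  # GRANTED
authorize_crossing("bob", "build-servers")    # DENIED: outside his ACL
```

Note the default: an unknown user, or an unlisted compartment, is denied. That default-deny posture is what keeps stolen credentials from roaming the whole data structure.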
Moving on to the concept of DC for Network Security, a similar set of challenges and possible solutions to the issues raised above in Access Control present themselves. While one may think that from a pure usability standpoint everything in the organization's data structure should be connected to everything else, this is neither practical nor reliable, let alone secure. One of the largest challenges to effective DC for large and mature networks is that these typically have grown over years, and not always in a fully designed manner: often things have expanded organically, with bits stuck on here and there as immediate tactical needs arose. The actual underpinnings of many networks of major corporations, governments and military sites are usually byzantine, rather disorganized and not fully documented. The topology of a given network or group of networks also has a direct effect on how DC can be implemented: that is why the best designs are those where DC is taken as a design principle from day one. There is no one 'correct' way to do this: the best answer for any given organization is highly dependent on the type of organization, the amount and type of data being moved around and/or stored, and how much interconnection is required. For instance, a bank will have very different requirements than a news organization.
Application Security can only be enhanced by judicious use of compartmentalization. For instance, web servers, which are inherently public-facing, should be isolated from an organization's database, e-mail and authentication servers. One should also remember that the basic concepts of DC apply no matter how small or large an organization is: even a small business can easily separate public-facing apps from secure internal financial systems with a few small routers/firewalls. These devices are so inexpensive these days that there is almost no rationale for not implementing these simple safeguards.
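The small router/firewall setup described above boils down to an explicit allow-list of permitted flows. Here is a toy model of that rule set; the zone names and port numbers are assumptions chosen for illustration, not a recommended configuration:

```python
# Default-deny firewall sketch: only explicitly permitted
# (source zone, destination zone, port) flows pass; all else is dropped.
ALLOWED_FLOWS = {
    ("internet", "dmz-web", 443),      # the public may reach web servers via HTTPS
    ("dmz-web", "internal-db", 5432),  # the web tier may query the database
    ("internal-lan", "dmz-web", 22),   # admins may SSH into the DMZ
}

def permit(src_zone: str, dst_zone: str, port: int) -> bool:
    """A flow passes only if it appears in the allow-list."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

print(permit("internet", "dmz-web", 443))       # True
print(permit("internet", "internal-db", 5432))  # False: the public never touches the DB
```

Notice that there is deliberately no rule from "internet" to "internal-db": a compromise of the web tier still leaves the attacker one wall short of the internal systems.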
One can see that the important concept of DC can be applied to virtually any area of the Data Security model: while the details of achieving the balance between what to compartmentalize and how to monitor/control the data movement between areas will vary from organization to organization, the basic methodology is simple and provides an important foundation for a secure computational environment.
Summary
In this section we’ve reviewed how Data Compartmentalization is a cornerstone of a sound data structure, aiding not only security but performance and reliability as well. The division of an extended and complex IT ecosystem into ‘blocks’ allows for flexibility and ease of maintenance, and greatly contributes to the ability to contain a breach should one occur (and one inevitably will!). One of the greatest mistakes any organization can make is to assume “it won’t happen to me.” Breaches are astoundingly commonplace, and many go undetected, or unreported even when discovered. For many reasons, including loss of customer or investor confidence, potential financial losses and a simple lack of understanding, the majority of breaches that occur within commercial organizations are not publicly reported. Usually we find out only when a breach is sufficient in scope that it must be reported. And the track record for non-commercial institutions is even worse: NGOs, charities, research institutions and the like often don’t even know of a breach unless something really big goes wrong.
The last part of this series will discuss Cloud Computing: as a relatively new ‘feature’ of the IT landscape, it poses particular risks and challenges to a good security model that warrant a focused effort. The move to using some aspect of the Cloud is rapidly becoming prevalent among organizations of all sizes: from the massive scale of iTunes down to an individual user or small business backing up their smartphones to the Cloud.
Part 6 of this series is located here.