Introduction
In Part 2 of this series we reviewed Access Control – the beginning of a good Data Security policy. In this section we’ll move on to Network Security Controls: once a user has access to a device or data, almost all interactions today are with a network of devices, servers, storage, transmission paths, web sites, etc. The design and management of these networks are absolutely critical to the security model. This area is particularly vulnerable to intrusion or inadvertent ‘leakage’ of data, as networks continue to grow and can become very complex. Often parts of the network have been in place for a long time, with no one currently managing them even aware of the initial design parameters or exactly how pieces are interconnected.
There are a number of good business practices that should be reviewed and compared against your firm’s networks – and this is not highly technical; it simply requires methodology, common sense, and the willingness to ask the right questions and to take action should the answers reveal weaknesses in network security.
The Data Security Model
Network Security Controls
Once more than one IT device is interconnected, we have a network. The principle of Access Control discussed in the prior section is equally applicable to a single laptop or to an entire global network of thousands of devices. There are two major differences when we expand our security concept to an interconnected web of computers, storage, and so on. The first is that when a user signs on (whether with a simple username and password, or a sophisticated certificate and biometric authorization), instead of logging into a single device such as their laptop, they sign in to a portion of the network. The devices to which they are authorized are contained in an Access Control List (ACL) – which is usually defined by the network administrator. (This process is vastly simplified here in comparison to the complex networks that exist today in large firms, but the principle is the same.) It’s a bit like a passport that allows you into a certain country, but not necessarily other bordering countries. Or one may gain entry into a neighboring country with restrictions, such as those imposed by visas for international travelers.
The second major difference, compared with logging into a single device, is that within a network there are many instances where one device needs to communicate directly with another device, with no human taking part in the process. These are called M2M (Machine to Machine) connections. It’s just as important that any device wanting to connect to another device within your network be authorized to do so. Again, the network administrator is responsible for setting up the ACLs that control this, and in addition many connections are restricted: a given server may only receive data but not send it, or may only have access to a particular kind of data.
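To make the idea concrete, here is a minimal sketch of an ACL-style authorization check that treats human users and machine accounts identically. The identities, resources and permission levels are illustrative assumptions, not any particular product’s API.

```python
# Minimal sketch of an ACL-style authorization check, assuming a simple
# in-memory permission table. Identities, resources and permission levels
# are illustrative only.

from enum import Flag, auto

class Permission(Flag):
    NONE = 0
    READ = auto()
    WRITE = auto()

# Each identity (human or machine) maps to the resources it may touch
# and the operations it may perform there.
ACL = {
    "alice":          {"hr-fileshare": Permission.READ | Permission.WRITE},
    "backup-service": {"hr-fileshare": Permission.READ},  # M2M: receive only
}

def is_authorized(identity: str, resource: str, needed: Permission) -> bool:
    """Return True only if the identity holds every permission requested."""
    granted = ACL.get(identity, {}).get(resource, Permission.NONE)
    return (granted & needed) == needed

# A human user writing to a share she is entitled to:
assert is_authorized("alice", "hr-fileshare", Permission.WRITE)
# A machine account trying to write where it may only read is refused:
assert not is_authorized("backup-service", "hr-fileshare", Permission.WRITE)
```

The same lookup serves a person logging in and a server requesting data from another server; the granularity of the table is what determines how contained a compromised identity remains.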
In a large modern network, M2M communication usually outnumbers Human to Machine interactions by many orders of magnitude – and this is where security lapses often occur, simply due to the sheer number of interconnected devices and the volume of data being exchanged. While it’s important to have trained staff, good technical protocols and the like in place to manage this access and communication, the real protection comes from adopting, and continually monitoring adherence to, a sound and logical security model at the highest level. If this is not done, a patchwork of varying approaches to network security usually ends up being implemented, with almost certain vulnerabilities.
The biggest cause of network security lapses is the usual suspect: Security vs Usability. We all want ‘ease of use’ – and no one more so than network administrators and others who must traverse vast swathes of the network daily in their work, often needing to quickly log into many different servers to solve problems or facilitate changes requested by impatient users. One login policy is very popular with regular users and network admins alike: Single Sign On (SSO). While it provides great ease of use, and is really the only practical method for users to navigate large and complex networks, its very design is a security flaw waiting to happen. The proponents of SSO will argue that detailed ACLs (Access Control Lists – discussed in Part 2 of this series) can define very clearly the boundaries of who can do what, and where. And, they will point out, these ACLs are applicable to machines as well as humans, so in theory a very granular – and secure – network permissions environment can be built and maintained.
As always, the devil is in the details… and the larger a network gets, the more details there are. Couple that with inevitable human error, software bugs, determined hackers and the like, and sooner or later a breach of data security will occur. This is where a really sound overall security strategy is paramount – one that takes into account that neither humans nor software are perfect, and that breaches will happen at some point. The issue is one of containment, awareness, response and remediation. Just as a well-designed building can tolerate a fire without quickly burning to the ground – thanks to features such as firestops in wall design, fire doors, sprinkler systems and fire-retardant furnishings – so can a well-designed network tolerate breaches without allowing unfettered access to the entire network. Unfortunately, many (if not most) corporate networks in existence today have substantial weaknesses in this area. The infamous Sony Pictures ‘hack’ began with a single set of stolen credentials being used to access virtually the company’s entire worldwide network. (It’s a bit more complicated than this, but in essence that’s what happened.) Ideally, the compromise of a single set of credentials should never have allowed a breach of that extent and depth.
Just as a person travelling from one state to another, even within a single country, must usually stop at the border – where a guard has a quick look at the car and may ask whether there is any fruit or veg aboard (in case of parasites) – so a basic check of permissions should occur whenever access is requested from a substantially different domain than the one where the initial logon took place. Even more important, going back to our traveler analogy: if instead of a car the driver is in a large truck, a more detailed inspection is usually warranted. A Bill of Lading must be presented, and a partial physical inspection is usually performed. The data equivalent of this (stateful inspection) will reveal whether the data being moved is appropriate to the situation. Using the Sony hack as an example, the fact that hundreds of thousands of e-mails were being moved from within a ‘secure’ portion of the network to servers located outside the USA should have tripped a notification somewhere…
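As a rough illustration, here is a hypothetical sketch of such a ‘border inspection’ for bulk data movement. The domain names, thresholds and alert mechanism are assumptions for illustration only, not a specific product’s behavior.

```python
# A rough sketch of a 'border inspection' for bulk data movement, assuming a
# simple policy of per-zone transfer limits. Domain names, thresholds and the
# alert mechanism are illustrative assumptions.

TRANSFER_LIMITS_MB = {
    "internal": 10_000,   # generous allowance for traffic that stays inside
    "external": 100,      # anything leaving the trusted network is scrutinized
}

def classify_destination(host: str) -> str:
    # Assumption: hosts outside the corporate domain count as "external".
    return "internal" if host.endswith(".corp.example.com") else "external"

def alert(message: str) -> None:
    # In practice this would notify the security operations team; here we print.
    print("SECURITY ALERT:", message)

def check_transfer(actor: str, dest_host: str, size_mb: float) -> bool:
    """Return True if the transfer may proceed; otherwise raise an alert."""
    zone = classify_destination(dest_host)
    if size_mb > TRANSFER_LIMITS_MB[zone]:
        alert(f"{actor} moving {size_mb:,.0f} MB to {zone} host {dest_host}")
        return False
    return True

# A bulk mailbox export to a server outside the trusted domain trips the alarm:
check_transfer("mail-archiver", "storage.offshore.example.net", 5_000)
```

The point is not the specific threshold, but that the check sits at the boundary between domains, so an unusually large ‘truck’ of data gets looked at before it crosses.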
The important thing to keep in mind is that the detailed technology, software, hardware or other bits that make this all work are not what needs to be understood at the executive level: what DOES need to be in place is the governance, policy and willingness to enforce the policies on a daily basis. People WILL complain – certain of these suggested policies make accessing, moving or deleting data a bit more difficult and a bit more time-consuming. However, as we have seen, the tradeoff is worth it. No manager or executive wants to answer the kind of questions, after the fact, that result from ignoring sound security practices…
Probably the single most effective policy to implement – and enforce – is the ‘buddy system’ at the network admin level. No single person should have unfettered access to the entire network scope of a firm. And most importantly, this must include any outside contractors. This is an area often overlooked. The details must be designed for each firm, as there are many variations, but essentially at least two people must authenticate major moves, deletions or copies of data beyond a certain scope. A few examples: if an admin who routinely works in an IT system in California wants access to the firm’s storage servers in India, a local admin within India should be required to additionally authenticate that request. Or if a power user whose normal function is backup and restore of e-mails wants to delete hundreds of entire mailboxes, then a second authentication should be required.
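The sketch below shows one hypothetical way such a ‘buddy system’ could be expressed: a sensitive request above a certain scope cannot execute until someone other than the requester approves it. The action names, thresholds and roles are illustrative assumptions, not a prescription.

```python
# A hypothetical sketch of the 'buddy system': a sensitive operation above a
# certain scope cannot run until someone other than the requester approves it.
# Action names, thresholds and roles are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class SensitiveRequest:
    requester: str
    action: str                 # e.g. "delete-mailboxes"
    scope: int                  # e.g. number of mailboxes affected
    approvals: set = field(default_factory=set)

# Above these scopes, a second authentication is required.
UNILATERAL_LIMIT = {"delete-mailboxes": 10, "cross-region-access": 0}

def approve(request: SensitiveRequest, approver: str) -> None:
    if approver == request.requester:
        raise PermissionError("A requester cannot approve their own request")
    request.approvals.add(approver)

def may_execute(request: SensitiveRequest) -> bool:
    """Small scopes run directly; large ones need at least one other approver."""
    limit = UNILATERAL_LIMIT.get(request.action, 0)
    return request.scope <= limit or len(request.approvals) >= 1

req = SensitiveRequest("backup-admin", "delete-mailboxes", scope=300)
print(may_execute(req))      # False: far too broad to run unilaterally
approve(req, "mail-team-lead")
print(may_execute(req))      # True: a second person has signed off
```

However it is implemented, the essential design choice is that the second approver is independent of the requester – which is exactly why outside contractors must be covered as well.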
While the above examples involved human operators, the same policies are effective for M2M actions. In many cases, sophisticated malware implanted in a network by a hacker can carry out machine operations – provided it has obtained valid credentials. Again, if a ‘check & balance’ system is in place, such M2M operations would not be allowed to proceed unhindered. Another policy to consider adopting is never to exclude anyone’s operations from intrusion detection, audit or other protective systems. Often senior network admins will request that these protective systems be bypassed for operations performed by a select class of users (most often the top network admins themselves), arguing that the systems get in the way of their work when trying to resolve critical issues quickly. This is a massive weakness – as has been shown many times when those credentials are compromised.
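To illustrate the ‘no exemptions’ principle, here is a minimal sketch in which every operation is wrapped in an audit log entry, with no bypass for senior admins. The decorator and log format are assumptions, not any specific monitoring product’s API.

```python
# A minimal sketch of 'no exemptions' auditing: every operation is logged with
# its actor, and there is no bypass for senior admins. The decorator and log
# format are illustrative assumptions.

import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s AUDIT %(message)s")

def audited(operation):
    """Wrap an operation so that every call is recorded, whoever makes it."""
    @functools.wraps(operation)
    def wrapper(actor: str, *args, **kwargs):
        logging.info("%s called %s args=%s kwargs=%s",
                     actor, operation.__name__, args, kwargs)
        return operation(actor, *args, **kwargs)
    return wrapper

@audited
def delete_server(actor: str, hostname: str) -> None:
    print(f"{hostname} decommissioned on behalf of {actor}")

# Even the most senior admin leaves an audit trail:
delete_server("chief-network-admin", "legacy-db-01")
```

The value of this approach is precisely that there is no privileged path around it: whether the caller is a junior operator, the chief network admin, or a piece of malware holding stolen credentials, the record exists.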
Summary
Although networks range from very simple to extraordinarily complex, the same set of good governance policies, protocols and other ‘rules of the road’ can provide an excellent level of security for a company’s data. This section has reviewed several of these, and discussed some examples of effective policies. The most important aspect of network security is often not the selection of particular security measures, but the practice of ensuring that they are in place consistently across the entire network. These measures should also be regularly tested and verified.
In the next section, we’ll discuss Application Security: how to analyze the extent to which many applications expose data to unintended external hosts, etc. Very often applications are quite ‘leaky’ and can easily compromise data security.
Part 4 of this series is located here.