In this section we’ll move on from Network Security (discussed in the last part) to the topic of Application Security. So far we’ve covered issues surrounding the security of basic access to devices (Access Control) and networks (Network Security); now we’ll look at an oft-overlooked aspect of a good Data Security model: how applications behave within a security strategy. Once we have access to computers, smartphones, tablets, etc., and have the privileges to connect to other devices through networks, it’s like being inside a shop without doing any shopping. Rather useless…
All of our actual work with data happens through one or more applications: e-mail, messaging, social media, database, maps, VoIP (internet telephony), editing and sharing images, financial and planning tools – the list is endless. Very few modern applications work completely in a vacuum, i.e. perform their function with absolutely no connection to the “outside world”. The connections that applications make can be as benign as accessing completely ‘local’ data – such as a photo-editing app requiring access to your photo library on the same computer on which the app is running – or can reach out to the entire public internet, as Facebook does.
The security implications of these interactions between applications and other apps, stored data, websites and so on are the subject of the rest of this section.
The Data Security Model
Trying to think of an app that is completely self-contained is actually quite an exercise. A simple calculator and the utility app that switches on the LED “flash” (to function as a flashlight) are the only two apps on my phone that are completely ‘stand-alone’. Every other app (some 200 in my case) connects in some way to external data (even if on the phone itself), the local network, the web, etc. Each one of these ‘connections’ carries with it a security risk. Remember that hackers are like rainwater: even the tiniest little hole in your roof or wall will allow water into your home.
While you may think that your cellphone camera app is a very secure thing – after all, you are only taking pix with your phone and storing those images directly on your phone, and we are not discussing uploading these images or sharing them in any way (yet). However… remember that little message that pops up when you first install an app such as that? Where it asks permission to access ‘your photos’? (Different apps may ask for permission for different things… and this only applies to phones and tablets: laptops and desktops never seem to ask at all – they just connect to whatever they want to!)
I’ll give you an example of how ‘security holes’ can contribute to a weakness in your overall Data Security model, using the smartphone as the platform. Your camera app has access to your photos. Now you’ve installed Facebook as well, and in addition to the Facebook app itself you’ve installed the FB “Platform” (which supports 3rd-party FB apps) and a few FB apps, including some that allow you to share photos online. FB apps in general are notoriously ‘leaky’ (poorly written in terms of security, and some even deliberately do things with your data that they do not disclose). A very common user behavior on a phone is to switch apps without fully closing them. If FB is running in the background, all installed FB apps are running as well. Each time you take a photo, the image is stored in the Camera Roll, which is now shared with the FB apps – and they can access and share these images without your knowledge. So the next time you see celebrity pix of things we really don’t need to see any more of… now you know one way this can easily happen.
The extent to which apps ‘share’ data is far greater than is usually recognized. This is particularly true in larger firms that often have distributed databases, etc. Some other examples of particularly ‘porous’ applications are: POS (Point of Sale) systems, social media applications (corporate integration with Twitter, Facebook, etc. can be highly vulnerable), mobile advertising back-office systems, applications that aggregate and transfer data to/from cloud accounts, and many more. (Cloud Computing is a case unto itself in terms of security issues, and will be discussed as the final section in this series.)
There are often very subtle ways in which data is shared from a system or systems. Some of these appear very innocuous, but a determined hacker can make use of even small bits of data, which can be linked with other bits to eventually provide enough information to make a breach possible. One example: many apps (including operating systems themselves – whether Apple, Android, Windows, etc.) send ‘diagnostic’ data to the vendor. Usually this is described as ‘anonymous’, which gives the user the feeling that it’s OK: firstly, personal information is not transmitted; secondly, the data is supposedly only going to the vendor’s servers for data collection – usually to study application crashes.
However, it’s not that hard to ‘spoof’ the server address to which the data is being sent, and the seemingly innocent data being sent can often include either the IP address or MAC address of the device – which can be very useful in the future to a hacker who may attempt to compromise that device. The internal state of many ‘software switches’ is also revealed – which can tell a hacker whether some patches have been installed or not. Even if the area revealed by the app dump is not directly useful, a hacker who sees ‘stale’ settings (showing that this machine has not been updated/patched recently) may assume that other areas of the same machine are also not patched, and can use discovered vulnerabilities to attempt to compromise the security of that device.
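To make this concrete, here’s a small Python sketch. The payload fields below are invented for illustration, but they show the kind of ‘anonymous’ diagnostic data that isn’t really anonymous – and how little would need to be stripped out before sending:

```python
# Invented example of a crash-report payload. Fields like these are
# exactly what a hacker watching (or spoofing) the collection server
# would look for; none of them are needed to diagnose a crash.
SENSITIVE = {"ip_address", "mac_address", "patch_level", "os_build"}

report = {
    "crash_module": "renderer.dll",
    "ip_address": "198.51.100.23",          # pinpoints the device
    "mac_address": "00:1A:2B:3C:4D:5E",     # uniquely identifies hardware
    "patch_level": "stale (last: 18 mo ago)",  # advertises vulnerability
}

def scrub(report, sensitive=SENSITIVE):
    """Drop fields a careful diagnostic sender should never transmit."""
    return {k: v for k, v in report.items() if k not in sensitive}

print(sorted(scrub(report)))  # ['crash_module']
```

The point is not this particular code, but that ‘anonymous’ is a design decision: unless someone explicitly removes identifying fields, they tend to travel along for the ride.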
The important thing to take away from this discussion is not the technical details (that is what you have IT staff for), but rather to ensure that protocols are in place to constantly keep ALL devices (including routers and other devices that are not ‘computers’ in the literal sense) updated and patched as new security vulnerabilities are published. An audit program should be in place to check this, and the resulting logs need to actually be studied, not just filed! You do not want to be having a meeting at some future date where you find out that a patch that could have prevented a data breach remained uninstalled for a year… which BTW is extraordinarily common.
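The core of such an audit can be surprisingly simple. Here’s a hedged sketch (the device names and the 90-day policy threshold are assumptions, not a real inventory) of the check an audit program should be running constantly:

```python
# Sketch of a patch-audit check: compare each device's last-patched
# date against a maximum allowed age and flag the stragglers.
# Device names and the 90-day threshold are illustrative assumptions.
from datetime import date, timedelta

MAX_PATCH_AGE = timedelta(days=90)  # assumed policy threshold

inventory = {
    "mail-server-01": date(2015, 1, 10),
    "edge-router-03": date(2013, 6, 2),    # routers count too!
    "pos-terminal-07": date(2014, 11, 30),
}

def stale_devices(inventory, today, max_age=MAX_PATCH_AGE):
    """Return devices whose last patch is older than max_age."""
    return sorted(
        name for name, patched in inventory.items()
        if today - patched > max_age
    )

print(stale_devices(inventory, today=date(2015, 2, 1)))
# ['edge-router-03'] – unpatched for over a year and a half
```

Note that the router is the one that gets flagged: exactly the kind of ‘not really a computer’ device that audits tend to skip.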
The ongoing maintenance of a large and extended data system (such as many companies have) is a significant effort. It is as important as the initial design and deployment of the systems themselves. There are well-known methodologies for doing this correctly that provide a high level of both security and stability for the applications and the business’s technical processes in general. It’s just that often they are not universally applied without exception. And it’s those little ‘exceptions’ that can bite you in the rear – fatally.
A good rule of thumb is that every time you launch an application, that app is ‘talking’ to at least ten other apps, OS processes, data stores, etc. Since the average user has dozens of apps open and running simultaneously, you can see that most user environments are highly interconnected and potentially porous. The real truth is that, as a collective society, we are lucky there are not enough really good hackers to go around: the number of potential vulnerabilities vastly outnumbers the people who could take advantage of them!
If you really want to look at extremes of this ‘cat and mouse’ game, do some deep reading on the biggest ‘hack’ of all time: the NSA penetration of massive amounts of US citizens’ data on the one side, and the procedures Ed Snowden used in communicating with Laura Poitras and Glenn Greenwald (the journalists who first connected with Snowden) on the other. Snowden, more than just about anyone, knew how to use computers effectively and not be breached. His setup was fairly elaborate – but not all that difficult – and he managed to instruct both Laura and Glenn in setting up the necessary security on their computers so that reliable and totally secret communications could take place.
Another very important issue to be aware of, particularly in this age of combined mobile and corporate computing with thousands of interconnected devices and applications: breaches WILL occur. How you discover and react to them is often the difference between a relatively minor loss and a CNN exposé… The bywords to remember are: Containment, Awareness, Response and Remediation. Any good Data Security protocol must include practices that are just as effective against M2M (Machine to Machine) actions as against things performed by human actors. So constant monitoring software should be in place to detect whether unusual numbers of connections, file transfers, etc. are taking place – even from one server to another.

I know it’s an example I’ve used repeatedly in this series (I can’t help it – it’s such a textbook case of how not to do things!), but the Sony hack revealed that a truly massive amount of data (as in many, many terabytes) was transferred from supposedly ‘highly secure’ servers/storage farms to repositories outside the USA. Someone or something should have been notified that sustained transfers of this magnitude were occurring, so at least some admin could check and see what was using all that bandwidth. Both of the most common corporate file transfer applications (Aspera and Signiant) have built-in management tools that can report on what’s going where… so this was not a case of something that needed to be built – it’s a case of correctly using what’s already provided.
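The logic behind that kind of alert is not exotic. Here’s an illustrative sketch (not any vendor’s actual tool – host names, destinations and the 500 GB threshold are all invented) of flagging sustained heavy outbound transfers from a log:

```python
# Sketch of a transfer-volume monitor: total up outbound gigabytes per
# source host over a reporting window and flag anything over threshold.
# All hosts, destinations and the threshold are made-up examples.
from collections import defaultdict

THRESHOLD_GB = 500  # assumed alerting threshold per reporting window

transfers = [  # (source host, destination, gigabytes moved)
    ("media-vault-02", "203.0.113.9", 300),
    ("media-vault-02", "203.0.113.9", 450),   # same pair, sustained
    ("web-frontend-1", "198.51.100.4", 12),   # normal traffic
]

def flag_heavy_senders(transfers, threshold_gb=THRESHOLD_GB):
    """Return {host: total GB} for hosts exceeding the threshold."""
    totals = defaultdict(int)
    for src, _dst, gb in transfers:
        totals[src] += gb
    return {src: gb for src, gb in totals.items() if gb > threshold_gb}

print(flag_heavy_senders(transfers))  # {'media-vault-02': 750}
```

A few lines of aggregation like this, wired to a pager, is the difference between noticing terabytes walking out the door and reading about it on CNN.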
Many, if not most, applications can be ‘locked down’ to some extent – the amount and degree of communication can be controlled to help reduce vulnerabilities. Sometimes this is not directly possible within the application, but it’s certainly possible if the correct environment for the apps is designed and implemented appropriately. For example, a given database engine may not have granular enough controls to effectively limit interactions for your firm’s use case. If that application (and possibly others of similar function) is run on a group of application servers isolated from the rest of the network by a small firewall, the firewall settings can be used to very easily and effectively limit precisely which other devices these servers can reach, what kind of data they may send/receive, the time of day when they can be accessed, and so on. Again, most of good security lies in the overall concept and design, as even an excellent implementation of a poor design will not be effective.
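The allowlist idea behind such a firewall can be sketched in a few lines. This is a conceptual model, not firewall syntax – the networks, ports and hours below are invented examples:

```python
# Conceptual model of firewall allowlist rules for an isolated app
# server: a connection is permitted only if destination, port and
# hour-of-day all match a rule. Everything not listed is denied.
RULES = [
    # (destination network prefix, port, allowed hours)
    ("10.20.30.", 5432, range(0, 24)),   # replication peer, any time
    ("10.20.40.", 443,  range(8, 18)),   # reporting host, business hours
]

def allowed(dst_ip: str, port: int, hour: int) -> bool:
    """Default-deny: True only if some rule matches all three criteria."""
    return any(
        dst_ip.startswith(prefix) and port == rule_port and hour in hours
        for prefix, rule_port, hours in RULES
    )

print(allowed("10.20.30.5", 5432, 3))    # True  - replication at 3 AM
print(allowed("10.20.40.8", 443, 22))    # False - outside business hours
print(allowed("192.0.2.10", 443, 10))    # False - unknown destination
```

Note the default-deny stance: the database server doesn’t need the ability to reach ‘everything except known-bad’; it needs to reach exactly the handful of peers its function requires, and nothing else.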
Applications are what actually give us function in the data world, but they must be carefully installed, monitored and controlled to obtain the best security and reliability. We’ve reviewed a number of common scenarios that demonstrate how easily data can be compromised by unintended communication to or from your applications – whether by ‘leaking’ data to unintended areas or by providing an unintended bridge to sensitive data.
The next section will discuss Data Compartmentalization. This is one of the least understood and least practiced areas of the Data Security model. It’s a bit like the watertight compartments in a submarine: if one area is flooded, closing the connecting doors can save the rest of the boat from disaster. A big problem (again, not technical but procedural) is that in many organizations, even where a good initial design segregated operations into ‘compartments’, it doesn’t take very long at all for “cables to be thrown over the fence”, thereby bypassing the very protections that were put in place. Often this is done for expediency to fix a problem, or some new app needs to be brought online quickly and taking the time to install things properly, with all the firewall rule changes, is waived. These business practices are where good governance, proper supervision, and continually asking the right questions are vital.
Part 5 of this series is located here.
Tagged: business technology, cybersecurity, data integrity, data security