
Data Security – An Overview for Executive Board members [Part 5: Data Compartmentalization]

March 20, 2015 · by parasam

Introduction

This part of the series will discuss Data Compartmentalization – the rational separation of devices, data, applications and communications access from each other and from external entities. This strategy is paramount in the design of a good Data Security model. It is often overlooked, particularly in small businesses (large firms tend to have more experienced IT managers who have been exposed to it), and even where it is adopted, the biggest issues are keeping it in place over time and implementing it fully and correctly in the first place. While not difficult per se, serious thought must be given to the layout and design of every part of a firm's data structure if the balance between Security and Usability is to be attained with regard to Compartmentalization.

The concept of Data Compartmentalization (allow me to use the acronym DC in the rest of this post, for the ease of both the author and the reader!) implies a separating element, i.e. a wall or other structure. Just as the watertight compartments in a submarine can keep the boat from sinking if one area is damaged (and the doors are closed!!), a 'data-wall' can isolate a breached area without exposing the enterprise at large to the risk. DC is a good idea not only in terms of security, but also for integrity and general business reliability. For instance, it's considered good practice to feed different compartments with mains power from different distribution panels, so that if one panel experiences a fault, not everything goes black at once. Likewise, a mis-configured server that is running amok and choking its segment of a network can easily be isolated to prevent a system-wide 'data storm' – an event that will raise the hairs on any seasoned network admin… a mis-configured DNS server can be a bear to fix!

In this section we’ll take a look at different forms of DC, and how each is appropriate to general access, network communications, application servers, storage and so on. As in past sections, the important aspect to take away is the overall strategy, and the intestinal fortitude to ensure that the best practices are built, then followed, for the full life span of the organization.

The Data Security Model

Data Compartmentalization

DC (Data Compartmentalization) is essentially the practice of grouping and isolating IT functions to improve security, performance, reliability and ease of maintenance. While most of our focus here will be on the security aspects of this practice, it’s helpful to know that basic performance is often increased (by reducing traffic on a LAN [Local Area Network] sub-net). Reliability is improved since there are fewer moving parts within one particular area; and it’s easier to patch or otherwise maintain a group of devices when they are already grouped and isolated from other areas. It is not at all uncommon for a configuration change or software update to malfunction, and sometimes this can wreak havoc on the rest of the devices that are interconnected on that portion of a network. If all hell breaks loose, you can quickly ‘throw the switch’ and separate the malfunctioning group from the rest of the enterprise – provided this Compartmentalization has first been set up.
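
As a rough illustration of the grouping idea (the segment names and address ranges below are entirely invented), mapping devices to isolated sub-nets might be sketched like this:

```python
import ipaddress

# Hypothetical sub-nets, one per compartment (addresses invented for illustration)
SEGMENTS = {
    "finance":     ipaddress.ip_network("10.10.1.0/24"),
    "web_public":  ipaddress.ip_network("10.10.2.0/24"),
    "engineering": ipaddress.ip_network("10.10.3.0/24"),
}

def segment_of(ip):
    """Return the compartment an address belongs to, or None if unassigned."""
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

# A misbehaving host can be matched to its segment, and that segment isolated,
# without touching the rest of the enterprise.
print(segment_of("10.10.2.17"))  # web_public
```

Knowing instantly which 'compartment' a runaway device lives in is exactly what makes the 'throw the switch' response practical.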

Again, from a policy or management point of view, it’s not important to understand all the details of programming firewalls or other devices that act as the ‘walls’ separating these compartments (believe me, the arcane and tedious methodology required even today to correctly set up PIX firewalls, for example, gives even the most seasoned network admins severe headaches!). The fundamental concept is that it’s possible, and highly desirable, to break things down into groups, and then separate these groups at a practical level.

For a bit of perspective, let’s look at how DC operates in conjunction with each of the subjects that we’ve discussed so far: Access Control, Network Security Control and Application Security. In terms of Access Control, the principle of DC dictates that a login to one logical group (or domain, or whatever logical division is appropriate to the organization) grants access only to the devices within that group. Now once again we run up against the age-old conundrum of Security vs. Usability – and here the oft-desired SSO (Single Sign On) feature is often at odds with the best practice of DC. How does one manage that effectively?

There are several methods: either a user is asked for additional authentication when wanting to cross a ‘boundary’ – or a more sophisticated SSO policy/implementation is put in place, where a request from a user to ‘cross a boundary’ is fed to an authentication server that automatically validates the request against the user’s permissions and allows the user to travel farther in cyberspace. As mentioned earlier in the section on Network Security, there is a definite tradeoff on this type of design, because rogue or stolen credentials could then be used to access very wide areas of an organization’s data structure. There are ways to deal with this, mostly in terms of monitoring controls that match the behavior of users with their past behavior, and a very granular set of ACL (Access Control List) permissions that are very specific about who can do what and where. There is no perfect answer to this balance between ideal security and friction-free access throughout a network. But in each organization the serious questions need to be asked, and a rational policy hammered out – with an understanding of the risks of whatever compromise is chosen.
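
The 'boundary crossing' check described above can be sketched in miniature: a cross-domain request is re-validated against the user's permissions rather than waved through by SSO. All of the user names, domains and permission sets below are hypothetical:

```python
# Hypothetical per-user ACL: home domain plus the domains each user may enter.
ACL = {
    "alice": {"home": "sales", "may_enter": {"sales", "marketing"}},
    "bob":   {"home": "it",    "may_enter": {"it", "sales", "finance"}},
}

def authorize_crossing(user, target_domain):
    """Allow a boundary crossing only if the target domain is in the user's ACL."""
    entry = ACL.get(user)
    return entry is not None and target_domain in entry["may_enter"]

print(authorize_crossing("alice", "finance"))  # False: additional auth would be demanded
print(authorize_crossing("bob", "finance"))    # True: the SSO request may proceed
```

In a real deployment this lookup would live in an authentication server and be paired with the behavioral monitoring mentioned above; the sketch shows only the gating logic.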

Moving on to the concept of DC for Network Security, a similar set of challenges and possible solutions to the issues raised above in Access Control present themselves. While one may think that from a pure usability standpoint everything in the organization’s data structure should be connected to everything else, this is neither practical nor reliable, let alone secure. One of the largest challenges to effective DC for large and mature networks is that these typically have grown over years, and not always in a fully designed manner: often things have expanded organically, with bits stuck on here and there as immediate tactical needs arose. The actual underpinnings of many networks of major corporations, governments and military sites are usually byzantine, rather disorganized and not fully documented. The topology of a given network or group of networks also has a direct effect on how DC can be implemented: that is why the best designs are those where DC is taken as a design principle from day one. There is no one ‘correct’ way to do this: the best answer for any given organization is highly dependent on the type of organization, the amount and type of data being moved around and/or stored, and how much interconnection is required. For instance, a bank will have very different requirements than a news organization.

Application Security can only be enhanced by judicious use of compartmentalization. For instance, web servers, an inherently public-facing application, should be isolated from an organization’s database, e-mail and authentication servers. One should also remember that the basic concepts of DC can be applied no matter how small or large an organization is: even a small business can easily separate public-facing apps from secure internal financial systems, etc. with a few small routers/firewalls. These devices are so inexpensive these days that there is almost no rationale for not implementing these simple safeguards.
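
Even the small-business case above reduces to a tiny zone-to-zone policy table: public traffic may reach the web tier, the web tier may reach the database on one port, and nothing from the public zone may reach internal systems directly. The zone names and port numbers are illustrative, not a recommendation:

```python
# Minimal sketch of a zone-isolation policy. Each entry is an explicitly
# permitted (source zone, destination zone, port); everything else is denied.
ALLOWED = {
    ("internet", "web_dmz", 443),      # public HTTPS into the DMZ only
    ("web_dmz", "db_internal", 5432),  # web tier to database, one port only
}

def permitted(src_zone, dst_zone, port):
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (src_zone, dst_zone, port) in ALLOWED

print(permitted("internet", "web_dmz", 443))      # True
print(permitted("internet", "db_internal", 5432)) # False: no direct path inside
```

The inexpensive routers/firewalls mentioned above implement essentially this table in hardware; the value lies in the default-deny stance, not the device.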

One can see that the important concept of DC can be applied to virtually any area of the Data Security model:  while the details of achieving the balance between what to compartmentalize and how to monitor/control the data movement between areas will vary from organization to organization, the basic methodology is simple and provides an important foundation for a secure computational environment.

Summary

In this section we’ve reviewed how Data Compartmentalization is a cornerstone of a sound data structure, aiding not only security but performance and reliability as well. The division of an extended and complex IT ecosystem into ‘blocks’ allows for flexibility and ease of maintenance, and greatly contributes to the ability to contain a breach should one occur (and one inevitably will!). One of the greatest mistakes any organization can make is to assume “it won’t happen to me.” Breaches are astoundingly commonplace, and many are undetected or go unreported even if discovered. For many reasons – loss of customer or investor confidence, potential financial losses, lack of understanding, etc. – the majority of breaches that occur within commercial organizations are not publicly reported. Usually we find out only when a breach is sufficient in scope that it must be reported. And the track record for non-commercial institutions is even worse: NGOs, charities, research institutions, etc. often don’t even know of a breach unless something really big goes wrong.

The last part of this series will discuss Cloud Computing: as a relatively new ‘feature’ of the IT landscape, the particular risks and challenges to a good security model warrant a focused effort. The move to using some aspect of the Cloud is becoming prevalent very quickly among all levels of organizations: from the massive scale of iTunes down to an individual user or small business backing up their smartphones to the Cloud.

Part 6 of this series is located here.

 

Data Security – An Overview for Executive Board Members [Part 4: Application Security]

March 19, 2015 · by parasam

Introduction

In this section we’ll move on from Network Security (discussed in the last part) to the topic of Application Security. So far we’ve covered issues surrounding the security of basic access to devices (Access Control) and networks (Network Security); now we’ll look at an oft-overlooked aspect of a good Data Security model: how applications behave with regard to a security strategy. Once we have access to computers, smartphones, tablets, etc., and then have privileges to connect to other devices through networks, it’s like being inside a shop without doing any shopping. Rather useless…

All of our functional performance with data is through one or more applications: e-mail, messaging, social media, database, maps, VoIP (internet telephony), editing and sharing images, financial and planning – the list is endless. Very few modern applications work completely in a vacuum, i.e. perform their function with absolutely no connection to the “outside world”. The connections that applications make can be as benign as accessing completely ‘local’ data – such as a photo-editing app requiring access to your photo library on the same computer on which the app is running; or can reach out to the entire public internet – such as Facebook.

The security implications of these interactions between applications and other apps, stored data, websites, etc. etc. are the area of discussion for the rest of this section.

The Data Security Model

Application Security

Trying to think of an app that is completely self-contained is actually quite an exercise. A simple calculator and the utility app that switches on the LED “flash” (to function as a flashlight) are the only two apps on my phone that are completely ‘stand-alone’. Every other app (some 200 in my case) connects in some way to external data (even if on the phone itself), the local network, the web, etc. Each one of these ‘connections’ carries with it a security risk. Remember that hackers are like rainwater: even the tiniest little hole in your roof or wall will allow water into your home.

While you may think that your cellphone camera app is a very secure thing – after all, you are only taking pix with your phone and storing those images directly on your phone… we are not discussing uploading these images or sharing them in any way (yet). However… remember that little message that pops up when you first install an app such as that? Where it asks permission to access ‘your photos’? (Different apps may ask for permission for different things… and this only applies to phones and tablets: laptops and desktops never seem to ask at all – they just connect to whatever they want to!)

I’ll give you an example of how ‘security holes’ can contribute to a weakness in your overall Data Security model: We’ll use the smartphone as an example platform. Your camera has access to photos. Now you’ve installed Facebook as well, and in addition to the Facebook app itself you’ve installed the FB “Platform” (which supports 3rd party FB apps) and a few FB apps, including some that allow you to share photos online. FB apps in general are notoriously ‘leaky’ (poorly written in terms of security, and some even deliberately do things with your data that they do not disclose). A very common user behavior on a phone is to switch apps without fully closing them. If FB is running in the background all installed FB apps are running as well. Each time you take a photo, these images are stored in the Camera Roll which is now shared with the FB apps – which can access and share these images without your knowledge. So the next time you see celebrity pix of things we really don’t need to see any more of… now you know one way this can easily happen.

The extent to which apps ‘share’ data is far greater than is usually recognized. This is particularly true in larger firms that often have distributed databases, etc. Some other examples of particularly ‘porous’ applications are: POS (Point Of Sale) systems, social media applications (corporate integration with Twitter, Facebook, etc. can be highly vulnerable), mobile advertising backoffice systems, applications that aggregate and transfer data to/from cloud accounts and many more. (Cloud Computing is a case unto itself, in terms of security issues, and will be discussed as the final section in this series.)

There are often very subtle ways in which data is shared from a system or systems. Some of these appear very innocuous, but a determined hacker can make use of even small bits of data, which can be linked with other bits to eventually provide enough information to make a breach possible. One example: many apps (including operating systems themselves – whether Apple, Android, Windows, etc.) send ‘diagnostic’ data to the vendor. Usually this is described as ‘anonymous’, which gives the user the feeling that it’s ok to allow: firstly, personal information is supposedly not transmitted; secondly, the data is supposedly only going to the vendor’s site for data collection – usually to study application crashes.

However, it’s not that hard to ‘spoof’ the server address to which the data is being sent, and the seemingly innocent data can often include either the IP address or MAC address of the device – which can be very useful in the future to a hacker who may attempt to compromise that device. The internal state of many ‘software switches’ is also revealed – which can tell a hacker whether certain patches have been installed or not. Even if the area revealed by the app dump is not directly useful, a hacker who sees ‘stale’ settings (showing that this machine has not been updated/patched recently) may assume that other areas of the same machine are also not patched, and can use discovered vulnerabilities to attempt to compromise the security of that device.

The important thing to take away from this discussion is not the technical details (that is what you have IT staff for), but rather to ensure that protocols are in place to constantly keep ALL devices (including routers and other devices that are not ‘computers’ in the literal sense) updated and patched as new security vulnerabilities are published. An audit program should be in place to check this, and the resulting logs need to actually be studied, not just filed! You do not want to be having a meeting at some future date where you find out that a patch that could have prevented a data breach remained uninstalled for a year… which BTW is extraordinarily common.
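
The audit protocol described above can be sketched in miniature: a script that flags any device (routers included) whose last recorded patch date exceeds a policy threshold. Hostnames, dates and the 30-day threshold are all invented for illustration:

```python
from datetime import date

MAX_AGE_DAYS = 30  # hypothetical policy threshold

def stale_devices(last_patched, today):
    """Return devices whose last patch is older than policy allows, sorted by name."""
    return sorted(
        host for host, patched in last_patched.items()
        if (today - patched).days > MAX_AGE_DAYS
    )

# Invented inventory, including a non-'computer' device:
inventory = {
    "mail-server": date(2015, 3, 1),
    "edge-router": date(2014, 6, 10),  # over a year stale – the classic failure
    "web-01":      date(2015, 3, 15),
}
print(stale_devices(inventory, date(2015, 3, 20)))  # ['edge-router']
```

The point of the sketch is the second half of the protocol: the resulting list must be reviewed and acted upon, not just filed.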

The ongoing maintenance of a large and extended data system (such as many companies have) is a significant effort. It is as important as the initial design and deployment of the systems themselves. There are well-known methodologies for doing this correctly that provide a high level of both security and stability for the applications and the technical business process in general. It’s just that often they are not universally applied without exception. And it’s those little ‘exceptions’ that can bite you in the rear – fatally.

A good rule of thumb is that every time you launch an application, that app is ‘talking’ to at least ten other apps, OS processes, data stores, etc. Since the average user has dozens of apps open and running simultaneously, you can see that most user environments are highly interconnected and potentially porous. The real truth is that as a collective society, we are lucky that there are not enough really good hackers to go around: the amount of potential vulnerabilities vastly outnumbers those who would take advantage of them!

If you really want to look at extremes of this ‘cat and mouse’ game, do some deep reading on the biggest ‘hack’ of all time: the NSA penetration of massive amounts of US citizens’ data on the one side; and on the other, the procedures that Ed Snowden took in communicating with Laura Poitras and Glenn Greenwald (the journalists who first connected with him). Ed Snowden, more than just about anyone, knew how to use computers effectively and not be breached. His approach was fairly elaborate, but not all that difficult: he managed to instruct both Laura and Glenn in how to set up the necessary security on their computers so that reliable and totally secret communications could take place.

Another very important issue of which to be aware, particularly in this age of combined mobile and corporate computing with thousands of interconnected devices and applications: breaches WILL occur. It’s how you discover and react to them that is often the difference between a relatively minor loss and a CNN exposé level… The bywords one should remember are: Containment, Awareness, Response and Remediation. Any good Data Security protocol must include practices that are just as effective against M2M (Machine to Machine) actions as against things performed by human actors. So constant monitoring software should be in place to see whether unusual amounts of connections, file transfers, etc. are taking place – even from one server to another. I know it’s an example I’ve used repeatedly in this series (I can’t help it – it’s such a textbook case of how not to do things!), but the Sony hack revealed that a truly massive amount of data (as in many, many terabytes) was transferred from supposedly ‘highly secure’ servers/storage farms to repositories outside the USA. Someone or something should have been notified that very sustained transfers of this magnitude were occurring, so at least some admin could check and see what was using all this bandwidth. Both of the most common corporate file transfer applications (Aspera and Signiant) have built-in management tools that can report on what’s going where… so this was not a case of something that needed to be built – it’s a case of using what’s already provided correctly.
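
The 'someone should have been notified' idea reduces to a very simple test: compare a host's outbound volume against its own historical baseline and alert on a large multiple. This is a toy sketch (not how Aspera or Signiant actually report), and the figures and 10x threshold are invented:

```python
def transfer_alert(baseline_gb_per_day, observed_gb, multiple=10.0):
    """Flag sustained transfers far above this host's historical norm."""
    return observed_gb > baseline_gb_per_day * multiple

# A storage farm that normally moves ~40 GB/day suddenly pushing terabytes out:
print(transfer_alert(40.0, 25_000.0))  # True – an admin should be paged
print(transfer_alert(40.0, 55.0))      # False – within normal variation
```

Even this crude per-host baseline would have surfaced a many-terabyte exfiltration long before it completed; real systems refine the statistics, not the principle.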

Many, if not most, applications can be ‘locked down’ to some extent – the amount and degree of communication can be controlled to help reduce vulnerabilities. Sometimes this is not directly possible within the application, but it’s certainly possible if the correct environment for the apps is designed and implemented appropriately. For example, a given database engine may not have the level of granular controls to effectively limit interactions for your firm’s use case. If that application (and possibly others of similar function) is run on a group of application servers isolated from the rest of the network by a small firewall, the firewall settings can be used to very easily and effectively limit precisely which other devices these servers can reach, what kind of data they may send/receive, the time of day when they can be accessed, etc. etc. Again, most of good security is in the overall concept and design, as even excellent implementation of a poor design will not be effective.
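
To make the 'wrap the app in a firewall' idea concrete, here is a hedged sketch of a rule table that limits which host may reach the database tier, on which port, and during which hours. The addresses, port and access window are all hypothetical:

```python
RULES = [
    # (allowed source address, destination port, open from hour, open until hour)
    ("10.20.0.5", 1433, 8, 18),  # reporting host may query the DB in office hours only
]

def rule_permits(src, port, hour):
    """Default-deny: permit only flows matching a rule, within its time window."""
    return any(
        src == r_src and port == r_port and r_from <= hour < r_to
        for r_src, r_port, r_from, r_to in RULES
    )

print(rule_permits("10.20.0.5", 1433, 10))  # True: inside the window
print(rule_permits("10.20.0.5", 1433, 23))  # False: outside office hours
```

Note that the database engine itself never had to support any of this; the environment around it supplies the granularity the application lacks.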

Summary

Applications are what actually give us function in the data world, but they must be carefully installed, monitored and controlled in order to obtain the best security and reliability. We’ve reviewed a number of common scenarios that demonstrate how easily data can be compromised by unintended communication to or from your applications, which can, and do, ‘leak’ data to many unintended areas – or provide an unintended bridge to sensitive data.

The next section will discuss Data Compartmentalization. This is the area of the Data Security model that is least understood and practiced. It’s a bit like the watertight compartments in a submarine: if one area is flooded, closing the communicating doors can save the rest of the boat from disaster. A big problem (again, not technical but procedural) is that in many organizations, even where a good initial design segregated operations into ‘compartments’, it doesn’t take very long at all for “cables to be thrown over the fence”, thereby bypassing the very protections that were put in place. Often this is done for expediency: to fix a problem, or because some new app needs to be brought on line quickly and taking the time to install things properly, with all the firewall rule changes, is waived. These business practices are where good governance, proper supervision, and continually asking the right questions are vital.

Part 5 of this series is located here.

Data Security – An Overview for Executive Board members [Part 3: Network Security]

March 18, 2015 · by parasam

Introduction

In Part 2 of this series we reviewed Access Control – the beginning of a good Data Security policy. In this section we’ll move on to Network Security Controls: once a user has access to a device or data, almost all interactions today are with a network of devices, servers, storage, transmission paths, web sites, etc. The design and management of these networks are absolutely critical to the security model. This area is particularly vulnerable to intrusion or inadvertent ‘leakage’ of data, as networks continue to grow and can become very complex. Often parts of the network have been in place for a long time, with no one currently managing them even aware of the initial design parameters or exactly how pieces are interconnected.

There are a number of good business practices that should be reviewed and compared to your firm’s networks – and this is not highly technical, it just requires methodology, common sense, and the willingness to ask the right questions and take action should the answers reveal weakness in network security.

The Data Security Model

Network Security Controls

Once more than one IT device is interconnected, we have a network. The principle of Access Control discussed in the prior section is equally applicable to a laptop or an entire global network of thousands of devices. There are two major differences when we expand our security concept to an interconnected web of computers, storage, etc. The first is that when a user signs on (whether with a simple username and password, or a sophisticated Certificate and Biometric authorization), instead of logging into a single device, such as their laptop, they sign in to a portion of the network. The devices to which they are authorized are contained in an Access Control List (ACL) – which is usually chosen by the network administrator. (This process is vastly simplified here with regard to the complex networks that can exist today in large firms, but the principle is the same). It’s a bit like a passport that allows you into a certain country, but not necessarily other bordering countries. Or one may gain entry into a neighboring country with restrictions, such as are put in place with visas for international travelers.
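
The passport analogy maps directly onto a device-level ACL lookup. In this minimal sketch (user and device names are invented), a sign-on grants access only to the devices the administrator has listed for that user:

```python
# Hypothetical ACL: each user maps to the set of devices they may reach.
DEVICE_ACL = {
    "jdoe": {"laptop-jdoe", "print-server", "file-server-emea"},
}

def may_access(user, device):
    """A user may reach only the devices on their ACL; unknown users get nothing."""
    return device in DEVICE_ACL.get(user, set())

print(may_access("jdoe", "file-server-emea"))  # True: inside the granted 'country'
print(may_access("jdoe", "payroll-db"))        # False: a border not on the passport
```

Real directory services (Active Directory, LDAP, etc.) layer groups and inheritance on top of this, but the authorization decision is the same set-membership test.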

The second major difference, in terms of network security in relation to logging into a single device, is that within a network there are often many instances where one device needs to communicate directly with another device, with no human taking part in the process. These are called M2M (Machine to Machine) connections. It’s just as important that any device that wants to connect to another device within your network be authorized to do so. Again, the network administrator is responsible for setting up the ACLs that control this, and in addition many connections are restricted: a given server may only receive data but not send, or only have access to a particular kind of data.
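
Those directional restrictions can be sketched as grants on ordered device pairs rather than blanket connectivity. All device names and operation labels here are invented:

```python
# Hypothetical M2M grants: each (source, destination) pair is authorized for
# specific operations only – direction matters.
M2M_GRANTS = {
    ("backup-agent", "archive-server"): {"send"},
    ("archive-server", "backup-agent"): {"receive-ack"},
}

def m2m_allowed(src, dst, op):
    """Permit a machine-to-machine operation only if explicitly granted."""
    return op in M2M_GRANTS.get((src, dst), set())

print(m2m_allowed("backup-agent", "archive-server", "send"))    # True
print(m2m_allowed("backup-agent", "archive-server", "delete"))  # False
```

A server that may only receive, never send, is expressed simply by what its pair entry omits – which is why default-deny ACLs scale to the huge M2M volumes discussed next.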

In a large modern network, the M2M communication usually outnumbers the Human to Machine interactions by many orders of magnitude – and this is where security lapses often occur, just due to the sheer number of interconnected devices and the volume of data being exchanged. While it’s important to have in place trained staff, good technical protocols, etc. to manage this access and communication, the real protection comes from adopting, and continually monitoring adherence to, a sound and logical security model at the highest level. If this is not done, then usually a patchwork of varying approaches to network security ends up being implemented, with almost certain vulnerabilities.

The biggest cause of network security lapses results from the usual suspect: Security vs Usability. We all want ‘ease of use’ – and no one more than network administrators and others who must traverse vast swathes of the network daily in their work, often needing to quickly log into many different servers to either solve problems or facilitate changes requested by impatient users. There is one login policy that is very popular with regular users and network admins alike: Single Sign On (SSO). While this provides great ease of use, and is really the only practical method for users to navigate large and complex networks, its very design is a security flaw waiting to happen. The proponents of SSO will argue that detailed ACLs (Access Control Lists – discussed in Part 2 of this series) can restrict very clearly the boundaries of who can do what, and where. And, they will point out, these ACLs are applicable to machines as well as humans, so in theory a very granular – and secure – network permissions environment can be built and maintained.

As always, the devil is in the details… and the larger a network gets the more details there are… couple that with inevitable human error, software bugs, determined hackers, etc. etc. and sooner or later a breach of data security is inevitable. This is where the importance of a really sound overall security strategy is paramount – one that takes into account that neither humans nor software are created perfectly, and that breaches will happen at some point. The issue is one of containment, awareness, response and remediation. Just as a well-designed building can tolerate a fire without quickly burning to the ground – due to such features as firestops in wall design, fire doors, sprinkler systems, fire retardant furnishings, etc.; so can a well designed network tolerate breaches without allowing unfettered access to the entire network. Unfortunately, many (if not most) corporate networks in existence today have substantial weaknesses in this area. The infamous Sony Pictures ‘hack’ was initiated by a single set of stolen credentials being used to access virtually the entire world-wide network of this company. (It’s a bit more complicated than this, but in essence that’s what happened). Ideally, the compromise of a single set of credentials should not have allowed the extent and depth of that breach.

Just as a person travelling from one state to another, even within a single country, must usually stop at the border, where a guard has a quick look at the car and may ask whether any fruit or veg is aboard (in case of parasites), so a basic check of permissions should occur when access is requested from a substantially different domain than the one where the initial logon took place. Even more important – going back to our traveler analogy: if instead of a car the driver is in a large truck, usually a more detailed inspection is warranted. A Bill of Lading must be presented, and a partial physical inspection is usually performed. The data equivalent of this (stateful inspection) will reveal whether the data being moved is appropriate to the situation. Using the Sony hack as an example, the fact that hundreds of thousands of e-mails were being moved from within a ‘secure’ portion of the network to servers located outside the USA should have tripped a notification somewhere…

The important issue here to keep remembering is that the detailed technology, software, hardware or other bits that make this all work are not what needs to be understood at the executive level: what DOES need to be in place is the governance, policy and will to enforce the policies on a daily basis. People WILL complain – certain of these suggested policies make accessing, moving and deleting data a bit more difficult and a bit more time-consuming. However, as we have seen, the tradeoff is worth it. No manager or executive wants to answer the kind of questions, after the fact, that can result from ignoring sound security practices…

Probably the single most effective policy to implement – and enforce – is the ‘buddy system’ at the network admin level. No single person should have unfettered access to the entire network scope of a firm. And most importantly, this must include any outside contractors – an area oft overlooked. The details must be designed for each firm, as there are so many variations, but essentially at least two people must authenticate major moves/deletions/copies of a certain scope of data. A few examples: if an admin who routinely works in an IT system in California wants access to the firm’s storage servers in India, a local admin within India should be required to additionally authenticate this request. Or if a power user whose normal function is backup and restore of e-mails wants to delete hundreds of entire mailboxes, then a second authentication should be required.
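
The buddy system's core rule fits in a few lines: a bulk operation above a policy threshold proceeds only with a second, distinct authenticator. The threshold, names and item counts below are invented for illustration:

```python
BULK_THRESHOLD = 100  # hypothetical: mailboxes, volumes or datasets touched

def approve_bulk_operation(requester, approver, items):
    """Small jobs proceed alone; large ones require a distinct second approver."""
    if items <= BULK_THRESHOLD:
        return True
    return approver is not None and approver != requester

print(approve_bulk_operation("admin-ca", None, 500))        # False: blocked, no buddy
print(approve_bulk_operation("admin-ca", "admin-in", 500))  # True: dual control met
```

The `approver != requester` test is the whole point: self-approval would quietly reduce dual control back to single control.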

While the above examples involved human operators, the same policies are effective with M2M (machine-to-machine) actions. In many cases, sophisticated malware implanted in a network by a hacker can carry out machine operations – provided it is supplied with valid credentials. Again, if a ‘check & balance’ system is in place, unfettered M2M operations would not be allowed to proceed unhindered. Another policy to consider adopting is that of excluding no one at all from these intrusion detection, audit and other protective systems. Senior network admins will often request that these protective systems be bypassed for operations performed by a select class of users (most often the top network admins themselves), saying that the systems get in the way of their work when trying to resolve critical issues quickly. This is a massive weakness – as has been shown many times when those credentials are compromised.

Summary

Although networks range from very simple to extraordinarily complex, the same set of good governance policies, protocols and other ‘rules of the road’ can provide an excellent level of security within the data portion of a company. This section has reviewed several of these and discussed some examples of effective policies. The most important aspect of network security is often not the selection of the chosen security measures, but the practice of ensuring that they are in place completely across the entire network. These measures should also be regularly tested and checked.

In the next section, we’ll discuss Application Security: how to analyze the extent to which many applications expose data to unintended external hosts. Very often applications are quite ‘leaky’ and can easily compromise data security.

Part 4 of this series is located here.


Data Security – An Overview for Executive Board members [Part 2: Access Control]

March 16, 2015 · by parasam

Introduction

In Part 1 of this topic we discussed the concepts and basic practices of digital security, and covered an overview of Data Security. In the next parts we’ll go on to cover in detail a few of the most useful parts of the Data Security model, and offer some practical solutions for good governance in these areas. The primary segments of Data Security that have significant human factors, or require an effective set of controls and strategy in order for the technical aspect to be successful are: Access Control, Network Security Controls, Application Security, Data Compartmentalization, and Cloud Computing.

Security vs Usability

This is the cornerstone of so many issues with security: the paradox between really effective security and ease of use of a digital system. It’s not unlike wearing a seatbelt in a car… a slight decrease in ‘ease of use’ results in an astounding increase in physical security. You know this. The statistics are irrefutable. Yet hundreds of thousands of humans worldwide are killed or injured every year by not taking this slight ‘security’ effort. So… if you habitually put on your seat belt each time before you put your car in gear… then keep reading, for at least you are open to a tradeoff that will seriously enhance the digital security of your firm, whether a Fortune 500 company, a small African NGO or a research organization counting the loss of primary forests in Papua New Guinea.

The effective design of a good security protocol is not that different from the design principle that led to seatbelts in cars. On the security side, the restraint system evolved from a simple lap belt to combination shoulder harness/lap belt systems, often with automatic mechanisms that ‘assisted’ the user to wear them. The coupling of airbags into the overall passenger restraint system (which, being hidden, required no effort on the part of the user) improved the effectiveness of the overall security system even further. On the usability side, the placement of the buckles, their mechanical design (making it easy for everyone from children to the elderly to open and close them) and other design factors worked to increase ease of use. In addition, philosophical and social pressures added to the ‘human factor’ of seat belt use: in most areas there are significant public awareness efforts, driver education and governmental regulations (fines for not wearing seat belts) that further promote the use of these effective physical security devices.

If you attempt to put in place a password policy that requires at least 16 characters with ‘complexity’ (i.e. a mix of capitals, numbers and punctuation) – and require the password to be changed monthly – you can expect a lot of passwords to be written down on sticky notes on the undersides of keyboards… You have designed a system with very good Security but poor Usability. In each of the areas that we will discuss, the issue of Security vs Usability will be addressed, as it is paramount to actually having something that works.
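A rough way to reason about this tradeoff is password entropy. The sketch below uses the standard approximation for randomly chosen secrets (length × log2 of the symbol pool); the specific pool sizes are illustrative, and real users who pick memorable passwords get far less entropy than these figures suggest:

```python
import math

def entropy_bits(length, alphabet_size):
    """Approximate entropy of a secret chosen uniformly at random."""
    return length * math.log2(alphabet_size)

# 16 random characters drawn from the ~95 printable ASCII symbols:
complex_pw = entropy_bits(16, 95)    # ≈ 105 bits, but nearly impossible to memorize
# A six-word passphrase drawn from a 7,776-word Diceware-style list:
passphrase = entropy_bits(6, 7776)   # ≈ 78 bits, and far easier to remember
```

The passphrase gives up some theoretical strength but is strong enough in practice – and it is the version that stays in the user’s head instead of on a sticky note.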

The Data Security Model

Access Control

In its simplest form, access control is like the keys to your office, home or car. A person in possession of the correct key can access content or perform operations allowable within the confines of the accessible area. If you live in a low-crime area, you may have a very small number of keys: one for your house, one for your car and another for your office. But as we move into larger cities, we start collecting more: a deadbolt key (for extra security), probably a perimeter key for the office complex, a key for the postbox if you live in a housing complex, and so on. Yet even relatively complex physical security is very simple compared to online security for a ‘highly connected’ user. It is easy to have tens if not hundreds of websites, computer/server logins, e-mail logins, etc. that each require a password. Password managers have become an almost required toolset for any security-minded user today – how else to keep track of that many passwords? (And I assume here that you don’t make the most basic mistake of reusing passwords across multiple websites…)

Back to basics: the premise behind the “username / password” authentication model is firstly to uniquely identify the user [username] and then to ensure that access is granted to the correct person [a secret password supposedly known only to that user]. There are several significant flaws with this model, but due to its simplicity and relative ease of use it is in widespread use throughout the world. In most cases, usernames are not protected in any way (other than being checked for uniqueness). Passwords, depending on the implementation, can be somewhat more protected – many systems store only an encrypted (hashed) form of the password on the server or device to which the user is attempting to gain access, so that someone (on the inside) who gains access to the password list on the server doesn’t get anything useful. Other attempts at making the password function more secure are password rules (requiring complexity, longer passwords, forcing users to change passwords regularly, etc.). The problem is that the more secure (i.e. elaborate) the password rules become, the more likely the user will compromise security by attempting to simplify them, or by copying the password down since it’s too complex to remember. The worst of this type of behavior is the yellow sticky note… the best is a well-designed password manager that stores all the passwords in an encrypted database – which itself requires a password for access!
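The ‘encrypted password on the server’ idea above is usually implemented with salted, deliberately slow hashing, so that a stolen password list is of little use. A minimal sketch using Python’s standard library (the iteration count is an illustrative choice):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Store only a random salt and a slow, salted hash -- never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=200_000):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)
```

Because every user gets a unique random salt, identical passwords produce different stored digests, and an attacker cannot precompute a single lookup table for the whole password list.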

As can be seen, this username/password model is a compromise that fails in the face of the large number of different passwords needed by each user, and the ease with which many passwords can be guessed by a determined intruder. Various social engineering tactics, coupled with powerful computers and smart “password-guessing” algorithms, can often figure out passwords very quickly. We’ve all heard of (or used!) birthdays, kids’ or pets’ names, switching out vowels with numbers, etc. There isn’t a password simplification method that a hacker has not heard of as well…

So what next? Leaving the username identity alone for the moment, if we focus on just the password portion of the problem we can use biometrics. These have long been used by government, military and other institutions that had the money (the methods used to be obscenely expensive to implement) – but are now within the reach of the average user. Every new iPhone has a fingerprint reader, and these devices are common on many PCs as well. So far the fingerprint is the only fairly reliable biometric security method in common use, although retina scanners and other even more arcane devices are in use or being investigated. These devices are not perfect, and all the systems I have seen allow the use of a password as a backup method: the fingerprint is used more as a convenience than as absolute security. The fingerprint readers on smartphones are not of the same quality and accuracy as a FIPS-compliant device, but in fairness most restrict the number of ‘bad fingerprint reads’ to a small number before the alternate password is required, so the chance of a similar (but not exact) fingerprint unlocking the device is very low.

(Apple, for instance, states that there is a 1 in 50,000 chance of two different fingerprints being read as identical. At the academic level it is postulated that no two fingerprints are, or ever have been, exactly the same – among currently living humans that would be a ratio of roughly 1 in 6 billion – so, by that measure, fingerprint readers are not all that accurate. However, they are practically more than good enough, given the low statistical probability of two people with remarkably similar fingerprints being in a position to attempt access to the same device.)

Don’t give up! This is not to say that fingerprint readers are not an adequate solution – they are an excellent method – just that there are issues and the full situation should be well understood.
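A little arithmetic shows why the limited-retry policy matters so much. The sketch below assumes, for illustration, a device that allows roughly five bad fingerprint reads before falling back to the password:

```python
# Rough arithmetic on the stated figures: false-accept rate per attempt,
# multiplied over the handful of attempts the device permits.
far = 1 / 50_000                      # stated chance a wrong finger is accepted
attempts = 5                          # assumed retry limit before password fallback
p_bypass = 1 - (1 - far) ** attempts  # chance a random finger ever gets in
```

That works out to roughly 1 in 10,000 for a random finger before the password takes over – not cryptographic-grade odds, but combined with physical possession of the device, more than adequate for everyday use.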

The next level of “password sophistication” is so-called “two factor” authentication. This is becoming more common, and has the possibility of greatly increasing security without tremendous user effort. Basically it means the user submits two “passwords” instead of one. There are two forms of two-factor authentication: static-static and static-dynamic. The SS (static-static) method uses two previously known ‘passwords’ (usually one biometric, such as a fingerprint, and one actual password or PIN). The SD (static-dynamic) method uses one previously known ‘password’, while the second is a code/password/PIN that is dynamically transmitted to the user at the time of login. Usually these are sent to the user’s cellphone and are randomly created at the time of attempted login – and are therefore virtually impossible to crack. The user must have previously registered their cellphone number with the security provider in order to receive the codes. There are obvious issues with this method: one has to be within cellphone reception, must not have left the phone at home, and so on.

There is another SD method, which uses a ‘token’ (a small device containing a random number generator seeded with an identical ‘seed’ that is paired with a master security server). This essentially means that both the server and the token will generate the same random numbers each time the seed updates (usually once every 30 seconds). The token works without a cellphone (which also means it can work underground or in areas where there is no reception). These various two-factor authentication methods are extremely secure, as the probability of a bogus user possessing both factors is statistically almost zero.
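The token scheme just described is essentially the time-based one-time password (TOTP) algorithm standardized in RFC 6238: both sides hold the same secret seed and derive the same short code from the current 30-second window. A minimal sketch, assuming the shared secret has already been provisioned on both sides:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval=30, digits=6, now=None):
    """Both the token and the server run this with the same shared seed."""
    counter = int((now if now is not None else time.time()) // interval)
    msg = struct.pack(">Q", counter)                 # current time window
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Because the code depends only on the secret and the clock, no network connection is needed on the token side – which is exactly why the hardware fob works underground.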

Another method for user authentication is a ‘certificate’. Without going into technical details (which, BTW, can make even a seasoned IT guru’s eyeballs roll back in her head!), a certificate is a bit like a digital version of a passport or driver’s license: an object that is virtually impossible to counterfeit and that uniquely identifies the owner as the rightful holder of that ‘certificate’. In the physical world, driver’s licenses often have a picture, the user’s signature, and a thumbprint or certain biometric data (height, hair/eye color, etc.). Examination of the license in comparison to the person validates the identity. An online ‘security certificate’ [X.509 or similar] performs the same function. There are different levels of certificates, with the higher levels (Level 3, for instance) requiring a fairly detailed application process to ensure that the user is who s/he says s/he is. Use of a certificate, instead of just a simple username, offers a considerably higher level of security in the authentication process.

A certificate can then be associated with a password (or a two-factor authentication process) for any given website or other access area. There are many details around this, and there is overhead in administering certificates in a large company – but they have been proven worldwide to be secure, reliable and useful. Many computers can be fitted with a ‘card reader’ that reads physical ‘certificates’ (where the certificate is like a credit card that the user presents to log in).

One can see that something as simple as swiping a card and then pressing a fingerprint reader is very user-friendly, highly secure, and a long way from simple usernames and passwords. The principle here is not to get stuck on details, but to understand that there are methods for greatly improving both security and usability, making this aspect of Data Security – Access Control – no longer an issue for an organization willing to make the effort to implement them. Some of these methods are not enormously complicated or expensive, so even small firms can make use of them.

Summary

In this part we have reviewed Access Control – one of the pillars of good Data Security. Several common methods, with their corresponding Security vs Usability aspects, have been discussed. Access Control is a vital part of any firm’s security policy, and is the foundation of keeping your data under control. While there are many more details surrounding good Access Control policies (audits, testing of devices, revocation of users who are no longer authorized, etc.), the principles are easy to comprehend. The most important thing is to know that good Access Control is required, and that shortcuts or compromises can have disastrous results for a firm’s bottom line or reputation. The next part will discuss Network Security Controls – the vitally important aspect of Data Security where computers or other data devices are connected together – and how those networks can be secured.

Part 3 of this series is located here.


Data Security – An Overview for Executive Board members [Part 1: Introduction & Concepts]

March 16, 2015 · by parasam

Introduction

This post is a synthesis of a number of conversations and discussions concerning security practices for the digital aspect of organizations. These dialogs were initially with board members and executive-level personnel, but the focus of this discussion is equally useful to small business owners or anyone who is a stakeholder in an organization that uses data or other digital tools in their business: which today means just about everyone!

The point of view is high level and deliberately as non-technical as possible: not to assume that many at this level are not extremely technically competent, but rather to encompass as broad an audience as possible – and, as will be seen, because the biggest issues are not actually that technical in the first place, but rather are issues of strategy, principle, process and the oft-misunderstood ‘features’ of the digital side of any business. The points that will be discussed are equally applicable to firms that primarily exist ‘online’ (who essentially have no physical presence for the consumers or participants in their organization) and those organizations that exist mainly as ‘bricks and mortar’ companies (who use IT as a ‘back office’ function just to support their physical business).

In addition, these principles are relevant to virtually any organization, not just commercial business: educational institutions, NGOs, government entities, charities, medical practices, research institutions, ecosystem monitoring agencies and so on. There is almost no organization on earth today that doesn’t use ‘data’ in some form. Within the next ten years, the transformation will be almost complete: there won’t be ANY organizations that aren’t based, at their core, on some form of IT. From databases to communication to information sharing to commercial transactions, almost every aspect of any firm will be entrenched in a digital model.

The Concept of Security

The overall concept of security has two major components: Data Integrity and Data Security. Data Integrity is the aspect of ensuring that data is not corrupted by either internal or external factors, and that the data can be trusted. Data Security is the aspect of ensuring that only authorized users have access to view, transmit, delete or perform other operations on the data. Each is critical. Integrity can be likened to disease in the human body: pathogens that break the integrity of certain cells disrupt and eventually cause injury or death. Security is similar to the protection that skin and other peripheral structures provide: a penetration of these boundaries leads to a compromise of the operation of the body or, in extreme cases, major injury or death.

While Data Integrity is mostly enforced with technical means (backup, comparison, hash algorithms, etc.), Data Security is an amalgam of human factors, process controls, strategic concepts, technical measures (comprising everything from encryption to virus protection to intrusion detection) and the most subtle – but potentially most dangerous to a good security model: the very features of a digital ecosystem that make it so useful can also make it highly vulnerable. The rest of this discussion will focus on Data Security, and in particular those factors that are not overtly ‘technical’ – as there are countless articles on the technical side of Data Security. [A very important aspect of Data Integrity – BCDR (Business Continuity and Disaster Recovery) – will be the topic of an upcoming post; it’s such an important part of any organization’s basic “Digital Foundation”.]

The Non-Technical Aspects of Data Security

The very nature of ‘digital data’ is an absolute boon to organizations in so many ways: communication, design, finance, sales, online business – the list is endless. The fantastic toolsets we now have, in terms of high-powered smartphones and tablets coupled with sophisticated software ‘apps’, have put modern business in the hands of almost anyone. All of this rests on the core of any digital system: the concept of binary values. Every piece of e-mail, data, bank account detail or digital photograph is ultimately a series of digital values: either a 1 or a 0. This is the difference between the older analog systems (many shades of gray) and digital (black or white, only two values). This core concept makes copying and transmission of data very easy and very fast. A particular block of digital data, when copied with no errors, is absolutely indistinguishable from the ‘original’. While in most cases this is what makes the whole digital world work as well as it does, it also creates a built-in security threat: once a copy is made, if it is appropriated by an unauthorized user it’s as if the original was taken. The many thousands of e-mails stolen and then released by the hackers who compromised the Sony Pictures data networks are a classic example of this…

While there are both technical methods and process controls that can mitigate this risk, it’s imperative that business owners and stakeholders understand that the very nature of a digital system carries a built-in risk of data ‘leakage’. Only with this knowledge can adequate controls be put in place to prevent data loss or unauthorized use. Another side to digital systems, particularly communication systems (such as e-mail and social media), is how many of the software applications are designed and constructed. Many of these, mostly social media types, have uninhibited data sharing as the ‘normal’ way the software works – with the user having to take extra effort to limit the amount of sharing allowed.

An area that is a particular challenge is the ‘connectedness’ of modern data networks. The new challenge of privacy in the digital ecosystem has prompted (and will continue to) many conversations, from legal to moral/ethical to practical. The “Facebook” paradigm [everything is shared with everybody unless you take efforts to limit such sharing] is really something we haven’t experienced since small towns in past generations where everybody knew everyone’s business…

While social media is fast becoming an important aspect of many firms’ marketing, customer service and PR efforts, these platforms must be deployed rather carefully in order to isolate the ‘data sharing’ systems from the internal business and financial systems of a company. It is surprisingly easy for inadvertent ‘connections’ to be made between what should be private business data and the more public social media facet of a business. Even if a direct connection is not made between, say, the internal company e-mail address book and an external Facebook account (a practice that unfortunately I have witnessed on more than one occasion!), the inappropriate positioning of a firm’s Twitter client on the same sub-network as its e-mail servers is a hacker’s dream: it will usually take a clever hacker only minutes to ‘hop the fence’ and gain access to the e-mail server once they have compromised the Twitter account.

Many of the most important issues surrounding good Data Security are not technical, but rather principles and practices of good security. Since human beings are ultimately a significant actor in the chain of entities that handle data, these humans need guidance and effective protocols just as the computers need well-designed software that protects the underlying data. Access controls (from basic passwords to sophisticated biometric parameters such as fingerprints or retina scans); network security controls (for instance requiring at least two network administrators to collectively authorize large data transfers or deletions – which would have prevented most of the Sony Pictures data theft/destruction); compartmentalization of data (the practice of controlling both storage and access to different parts of a firm’s digital assets in separate digital repositories); and the newcomer on the block, cloud computing (essentially just remote data centers that host storage, applications or even entire IT platforms for companies) – all of these are areas with very human philosophies and governance issues that are merely implemented with technology.

Summary

In Part 1 of this post we have discussed the concepts and basic practices of digital security, and covered an overview of Data Security. The next part will discuss in further detail a few of the most useful parts of the Data Security model, and offer some practical solutions for good governance in these areas.

Part 2 of this series is located here.

The Hack

December 21, 2014 · by parasam


It’s a sign of our current connectedness (and of the inability or unwillingness of most of us to live under a digital rock – without an hourly fix of Facebook, Twitter, CNN, blogs, etc., we don’t feel we exist) that the title of this post needs no further explanation.

The Sony “hack” must be analyzed apart from the hyperbole of the media, politics and business ‘experts’ in order to view its various aspects with some objectivity – and, more importantly, to learn the lessons that come with this experience.

I have watched and read endless accounts and reports on the event, from lay commentators, IT professionals, Hollywood business, foreign policy pundits, etc. – yet have not seen a concise analysis of the deeper meaning of this event relative to our current digital ecosystems.

Michael Lynton (CEO, Sony Pictures) stated on CNN’s Fareed Zakaria show today that “the malware inserted into the Sony network was so advanced and sophisticated that 90% of any companies would have been breached in the same manner as Sony Pictures.” Of course he had to take that position – while his interview was public there was a strong messaging to investors in both Sony and the various productions that it hosts.

As reported by Wired, Slate, InfoWorld and others, the hack was almost certainly initiated by the introduction of malware into the Sony network – and not particularly clever code at that. For the rogue code to execute correctly, and to have the permissions to access, transmit and then delete massive amounts of data, required the credentials of a senior network administrator – which supposedly were stolen by the hackers. The exact means by which this theft took place have not been revealed publicly. Reports on the amount of data stolen vary, but range from a few terabytes to as much as a hundred. That is a massive amount of data. To move it requires a very high bandwidth pipe – at least 1 Gbps, if not higher. Pipes of this size are very expensive, and are normally managed rather strictly to prioritize bandwidth. Depending on the amount of bandwidth allocated to the theft, the ‘dump’ must have lasted days, if not weeks.
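The back-of-the-envelope arithmetic here is simple and worth doing. Taking the high end of the reported estimates and generously assuming a fully saturated, dedicated link:

```python
# How long does it take to move 100 TB over a 1 Gbps pipe?
data_bytes = 100 * 10**12        # 100 TB, the high end of reported estimates
link_bps = 1 * 10**9             # a full 1 Gbps link with no other traffic
seconds = data_bytes * 8 / link_bps
days = seconds / 86_400          # seconds per day
```

This comes to roughly nine days of continuous, saturated egress – in reality longer, since no exfiltration gets the whole pipe to itself. Sustained traffic on that scale is exactly the kind of anomaly that monitoring should catch.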

All this means that a number of rather standard security protocols were either not in place, or not adhered to at Sony Pictures. The issue here is not Sony – I have no bone to pick with them, and in fact they have been a client of mine numerous times in the past while with different firms, and I continue to have connections with people there. This is obviously a traumatic and challenging time for everyone there. It’s the larger implications that bear analysis.

This event can be viewed through a few different lenses: political, technical, philosophical and commercial.

Political – Initially let’s examine the implications of this type of breach, data theft and data destruction without regard to the instigator. In terms of results, the “who did it” is not important. Imagine, instead of this event (which caused embarrassment, business disruption and economic loss only), an event in which the light rail switching system in Los Angeles was targeted. Multiple and simultaneous train wrecks would be a highly likely result, with massive human and infrastructure damage certain. In spite of the changes that were supposed to follow the horrific crash some years ago in the Valley there, the installation of “collision avoidance systems” on each locomotive has still not taken place. Good intentions in politics often take decades to see fruition…

One can easily point to other critical infrastructure that is inadequately protected: electrical grids, petroleum pipelines, air traffic control systems [look at London last week], telecommunications, internet routing and peering – the list goes on and on.

Senator John McCain said today that of all the meetings in his political life, none have taken longer and accomplished less than those on cybersecurity. This issue is simply not taken seriously. Many major hacks have occurred in the past – this one is getting serious media attention because the target was a media company, and because many high-profile Hollywood people have had a lot to say – which further fuels the news machine.

Now, whether North Korea instigated this attack or performed it on its own – both possible, and according to the FBI now established fact – the issue of a nation-state attacking other national interests is most serious, and demands a response from the US government. But regardless of the perpetrator – whether an individual criminal, a group or a state – a much higher priority must be placed on the security of both public and private entities in our connected world.

Technical – The reporting and discussion on the methodology of this breach in particular, and ‘hacks’ in general, has ranged from the patently absurd to relatively accurate. In this case (and in some other notable breaches of the last few years, such as Target), the introduction of malware into an otherwise protected (at least to some degree) system allowed access and control by an undesirable external party. While implanting the malware may have been a relatively simple part of the overall breach, the design of the entire process – code writing and testing, steering and control of the malware from the external servers, as well as data collection and retransmission – clearly involved a team of knowledgeable technicians and considerable resources. This was not a hack done by a teenager with a laptop.

On the other hand, the Sony breach was not all that sophisticated. The data made public so far indicates that the basic malware was Trojan Destover, combined with a commercially available codeset, EldoS RawDisk, which was used for the wiping (destruction) of the Sony data. Both of these programs (and their relatives Shamoon and Jokra) have been detected in other breaches (Saudi Aramco, Aug 2012; South Korea, Mar 2013). See this link for further details. Each major breach of this sort tends to have individual code characteristics, along with the required access credentials, with the final malware package often compiled shortly before the attack. The evidence disclosed in the Sony breach indicates that stolen senior network admin credentials were part of the package, which allowed full and unfettered access to the network.

It is highly likely that the network was repeatedly probed some time in advance of the actual breach, both to test the stolen credentials (to see how wide the access was) and to inspect for any tripwires that may have been set if the credentials had become suspect.

The real lessons to take away from the Sony event have much more to do with the structure of the Sony network: its security model, security standards and practices, and data movement monitoring. To be clear, this is not singling out Sony as a particularly bad example: unfortunately this firm’s security practices are rather the norm today. Very, very few commercial networks are adequately protected or designed – even at financial companies, which one would assume have better-than-average security.

Without having to look at internal details, one has only to observe the reported breaches of large retail firms, banks and trading entities, government agencies, credit card clearing houses… the list goes on and on. Add to this that not all breaches are reported, and even fewer are publicly disclosed – estimates are that only 20-30% of network security breaches are reported. The reasons vary: loss of shareholder or customer trust, the appearance of competitive weakness, not knowing what actually deserves reporting or how to classify the attempt or breach, etc. In many cases data on “cyberattacks” is reported anonymously, or is gathered statistically by firms that handle security monitoring on an outsourced basis. At least these aggregate numbers give a scope to the problem – and it is huge. For example, IBM’s report shows that in one year (April 2012 – April 2013) there were 73,400 attacks on a single large organization, resulting in about 100 actual ‘security incidents’ during the year for that one company. A PWC report estimates that 42 million data security incidents will have occurred worldwide during 2014.

If physical robberies were occurring at this rate, the response – and general awareness – would be far higher. There is something insidious about digital crime that doesn’t attract the level of notice that physical events do. The economic loss worldwide is estimated in the hundreds of billions of dollars – with most of these proceeds ending up with organized crime, rogue nation-states and terrorist groups. Given the relative sophistication of ISIS in terms of social media, video production and other high-tech endeavours, it is highly likely that a portion of their funding comes from cybercrime.

The scope of the Sony attack, with the commensurate data loss, is part of what has made it so newsworthy. This is also the aspect of the breach that could have been mitigated rather easily – and it underscores the design and security-practice faults that plague so many firms today. The following points list some of the weaknesses that contributed to the scale of this breach:

  • A single static set of credentials allowed nearly unlimited access to the entire network.
  • A lack of effective audit controls that would have brought attention to potential use of these credentials by unauthorized users.
  • A lack of multiple-factor authentication that would have made hard-coding of the credentials into the malware ineffective.
  • Insufficient data-movement monitoring: the volume of data transmitted out of the Sony network was massive, and had to impact normal working bandwidth. It appears that large amounts of data are allowed to move unmanaged in and out of the network – again, an effective data-movement audit / management process would have triggered an alert.
  • Massive data deletion should have required at least two distinct sets of credentials to initiate.
  • A lack of internal firewalls or ‘firestops’ that could have limited the scope of access, damage, theft and destruction.
  • A lack of understanding at the highest management levels of the vulnerability of the firm to this type of breach, with commensurate board expertise and oversight. In short, a lack of governance in this critical area. This is perhaps one of the most important, and least recognized, aspects of genuine corporate security.
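The data-movement monitoring point in the list above is straightforward to prototype. As a minimal sketch (the interval size, the 3× threshold and the byte counts are all assumptions for illustration – a real deployment would feed this from flow logs and route alerts into a security monitoring system), a rolling-baseline check in Python might look like:

```python
from collections import deque

class EgressMonitor:
    """Flags outbound transfer volumes far above a rolling baseline.
    Hypothetical sketch: the window length and threshold multiple
    are illustrative, not tuned values."""

    def __init__(self, window=24, threshold=3.0):
        self.window = deque(maxlen=window)  # recent per-interval byte counts
        self.threshold = threshold          # multiple of baseline that triggers an alert

    def observe(self, bytes_out):
        """Record one interval's outbound volume; return True if anomalous."""
        baseline = sum(self.window) / len(self.window) if self.window else None
        self.window.append(bytes_out)
        if baseline is None:
            return False  # no history yet, nothing to compare against
        return bytes_out > self.threshold * baseline

# A day of typical ~2 GB/hour egress, then a sudden exfiltration-sized spike:
monitor = EgressMonitor()
normal = [monitor.observe(2_000_000_000) for _ in range(24)]
spike = monitor.observe(60_000_000_000)
```

Even something this crude would have made a multi-terabyte outbound transfer impossible to miss; the point is not the algorithm but that the check exists and someone is accountable for the alerts it raises.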

Philosophical – With the huge paradigm shift that the digital universe has brought to the human race, we must collectively assess and understand the impacts of security, privacy and ownership of that ephemeral yet tangible entity called ‘data’. An enormous transformation is under way, in which millions of people (the so-called ‘knowledge workers’) produce, consume, trade and enjoy nothing but data. There is not an industry that is untouched by this new methodology: even very ‘mechanistic’ enterprises such as farming, steel mills, shipping and rail transportation are deeply intertwined with IT now. Sectors such as telecoms, entertainment, finance, design, publishing, photography and so on are virtually impossible to operate without complete dependence on digital infrastructures. Medicine, aeronautics, energy generation and prospecting – the list goes on and on.

The overall concept of security has two major components: Data Integrity (ensuring that the data is not corrupted by either internal or external factors, and that the data can be trusted) and Data Security (ensuring that only authorized users have access to view, transmit, delete or perform other operations on the data). Each is critical. Integrity can be likened to disease in the human body: pathogens that break the integrity of certain cells will disrupt them and eventually cause injury or death. Security is similar to the protection that skin and other peripheral structures provide – a penetration of these boundaries leads to a compromise of the operation of the body, or in extreme cases major injury or death.
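In practice, the Data Integrity half of that pairing is commonly enforced with cryptographic checksums: store a digest when the data is written, recompute it on read, and treat any mismatch as corruption or tampering. A minimal Python sketch (the record contents here are hypothetical):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of the data; any change,
    however small, produces a completely different digest."""
    return hashlib.sha256(data).hexdigest()

record = b"Q3 payroll export, 2014-12-01"    # hypothetical record
stored_digest = fingerprint(record)          # captured when the record is written

# Later, recompute and compare to verify integrity:
intact = fingerprint(record) == stored_digest

# A single changed byte is detected immediately:
tampered_digest = fingerprint(b"Q3 payroll export, 2014-12-02")
corruption_detected = tampered_digest != stored_digest
```

Note that a bare digest proves integrity, not authenticity: anyone who can alter the data could also recompute the digest, so in a hostile setting it would be keyed (an HMAC) or signed, binding the check to an authorized writer.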

An area of particular challenge is the ‘connectedness’ of modern data networks. The new challenge of privacy in the digital ecosystem has prompted (and will continue to prompt) many conversations, from legal to moral/ethical to practical. The “Facebook” paradigm [everything is shared with everybody unless you take efforts to limit such sharing] is really something we haven’t experienced since the small towns of past generations, where everybody knew everyone’s business…

Just as we have always had a criminal element in societies – those that will take, destroy, manipulate and otherwise seek self-aggrandizement at the expense of others – we now have the same manifestations in the digital ecosystem. But digital crime is vastly more efficient, less detectable, often more lucrative, and very difficult to police. The legal system is woefully outdated and outclassed by modern digital pirates – there is almost no international cooperation, and very poor understanding among most police departments and judges. The sad truth is that 99% of cyber-criminals will get away with their crimes for as long as they want to. A number of very basic things must change in our collective societies in order to achieve the level of crime reduction that we see in modern cultures in the physical realm.

A particular challenge is mostly educational/ethical: the belief that everything on the internet is “free” and is there for the taking, without regard to the intellectual property owner’s claim. Attempting to police this after the fact is doomed to failure (at least 80% of the time); the problem will persist until users are educated about the disruption and effects of their theft of intellectual property. This attitude has almost destroyed the music industry world-wide, and the losses to the film and television industry amount to billions of dollars annually.

Commercial – The economic losses due to data breaches, theft, destruction, etc. are massive, and the perception of the level of this loss is staggeringly low – even among the commercial stakeholders who are directly affected. Firms that spend massive amounts of time, money and design effort to physically protect their enterprises apply only the flimsiest of effective data security measures. Some of this is due to lack of knowledge, some to lack of understanding of the core principles that comprise a real and effective set of procedures for data protection, and a certain amount to laziness: strong security always takes some effort and time during each session with the data.

It is unfortunate, but the pain, publicity and potential legal liability of major breaches such as Sony’s are seemingly necessary to raise the awareness that everyone is vulnerable. It is imperative that all commercial entities, from a vegetable seller at a farmer’s market who uses SnapScan all the way to global enterprises such as BP, J.P. Morgan or General Motors, treat cybercrime as a continual, ongoing and very real challenge – and deal with it at the board level with the same importance given to other critical areas of governance: finance, trade secrets, commercial strategy, etc.

Many firms will say, “But we already spend a ridiculous amount on IT, including security!” I am sure that Sony is saying this even today… but it’s not always the amount of the spend, it’s how it’s done. A great deal of cash can be wasted on pretty blinking lights and cool software that in the end is just not effective. Most of the changes required today are in methodology, practice, and actually adhering to already adopted ‘best practices’. I personally have yet to see any business, large or small, that follows the stated security practices set up in that particular firm to the letter.

– Ed Elliott

Past articles on privacy and security may be found at these links:

Comments on SOPA and PIPA

CONTENT PROTECTION – Methods and Practices for protecting audiovisual content

Anonymity, Privacy and Security in the Connected World

Whose Data Is It Anyway?

Privacy, Security and the Virtual World…

Who owns the rain?  A discussion on accountability of what’s in the cloud…

The Perception of Privacy

Privacy: a delusion? a right? an ideal?

Privacy in our connected world… (almost an oxymoron)

NSA (No Secrets Anymore), yet another Snowden treatise, practical realities…

It’s still Snowing… (the thread on Snowden, NSA and lack of privacy continues…)

 

It’s still Snowing… (the thread on Snowden, NSA and lack of privacy continues…)

February 10, 2014 · by parasam

Just a short follow-up here: two more articles that relate to my observations on the unending revelations of data collection, surveillance, etc. by our friendly No Secrets Anymore agency…

The first article (here) relates how NSA whistleblower Edward Snowden used a common “webcrawler” software to comb through the NSA databases and download thousands of pages of classified information. The first thing I thought when reading this was “WTF! – How was this even possible inside what should be one of the most secure networks on the planet??” Turns out that even super-secret networks have rollout delays in deploying critical network monitoring software… (Snowden ran the webcrawler from a Hawaii field office instead of NSA central in Fort Meade, MD…)

The other article (here) is an odd clarification on how much metadata the NSA has been gathering on domestic phone calls – now we are told about 20% of all landline calls made, not the close to 100% that was earlier believed. In addition, we are told that not much bulk collection of cellphone calls is currently occurring, due to a restriction on collection of location information (which is normally embedded in the cellphone call record metadata). This raises an interesting question: since I doubt that many would-be terrorists install a landline (with the requisite time and details for commissioning) in order to make clandestine calls – what is the use of any landline collection (in bulk terms)? Isn’t this just a large waste of taxpayer time and funds that really will have no useful purpose?

What one may take away from these observations is that policy often gets in the way of efficient application of a process – in some cases allowing security leaks, and in other cases seriously diluting the desired effect of a surveillance plan. Many of the same issues that confront commercial entities also plague our (and others) governmental agencies…

 

NSA (No Secrets Anymore), yet another Snowden treatise, practical realities…

February 6, 2014 · by parasam

I really did intend to write about a different topic today… but this article in the New York Times (here) prompted this brief comment. Of course it was inevitable that a book (the first of several) would pop out of the publishing machine to review the NSA/Snowden privacy debacle – and presumably make some coin for the author… Disclosure: I have not yet read the book, but my comments are more about the general issue – not this particular retelling of an Orwellian story…

Again, without regard to the position of Snowden (or those like him) – traitor or whistleblower – the underlying issue is vitally important. The difficult balance between a nation/state’s “need to know” about supposedly private communications of their citizens – in order to ‘protect’ them against perceived threats; and the vital human ‘freedom’ of individual privacy – the lack of unauthorized and unknown surveillance by government or other commercial entities – is a subject that we collectively must not ignore. It is all of our responsibility to be informed: lack of knowledge is not an excuse for the day when your personal details are splattered all over a billboard…

As I have written before: while one may not be able to prevent the dispersal of some of your personal information, the knowledge that using the ‘internet’ is not free – and will inevitably result in the sharing of some of your information and data – is, I believe, vitally important. Just as knowing that the speed limit on a US highway, in the absence of a posted sign, is 55–65 MPH (depending on the state in which you are speeding…) will prevent surprise if you are pulled over for driving faster, you shouldn’t be surprised if your browsing history shows up in future targeted advertising – or, if you perform lots of web searches for plastic explosives, instructional papers on using cellphones to activate blasting caps, etc., you may someday get a visit from some suits…

However – and this closing observation will hopefully reduce some of the paranoia and anxiety of online activity: re-read the last line of the quoted article “…the book also manages to leave readers with an acute understanding of the serious issues involved: the N.S.A.’s surveillance activities and voluminous collection of data, and the consequences that this sifting of bigger and bigger haystacks for tiny needles has had on the public and its right to privacy.”  The critical bit is something that the NSA (and the GCHQ) is dealing with right now: the vast amount of data being gathered is making ‘sifting’ really, really difficult. Finding your 100-word e-mail in literally trillions and trillions of mails, pictures, files, etc. etc. is becoming wretchedly difficult – even the massively powerful supercomputers of the NSA are choking on this task. Hidden in plain sight…

Privacy in our connected world… (almost an oxymoron)

February 4, 2014 · by parasam

Yesterday I wrote on the “ideal” of privacy in our modern world – this morning I read some further information related to this topic (acknowledgement to Robert Cringely as the jumping-off point for this post). If one wants to invest the time, money or both – there are ways to keep your data safe. Really, really safe. The first is the digital equivalent of a Swiss bank account – and yes, it’s also offered by the Swiss – deep inside a mountain bunker – away from the prying eyes of NSA, MI6 and other inquisitive types. Article is here. The other method is a new encryption method that basically offers ‘red herrings’ to would-be password hackers: let them think they have cracked your password, but feed them fake data instead of your real stuff – described here.

Now either of these ‘methods’ requires the user to take proactive steps, and spend time/money. The unfortunate, but real, truth of today’s digital environment is that you – and only you – must take responsibility for the security and integrity of your data. The more security you desire, the more effort you must expend. No one will do it for you (for free) – and those that offer… well, you get the idea. A long time ago one could live in a village and not lock your front door… not any more.

However, before spiraling down a depressive slope of digital angst – there are some facets to consider:  even though it is highly likely (as in actually positively for certain…) that far more of your private life is exposed and stored in the BigData bunkers of Walmart, Amazon, ClearChannel, Facebook or some government… so are the details of a billion other users… There is anonymity in the sheer volume of data. The important thing to remember is that if you really become a ‘person of interest’ – to some intelligence agency, a particularly zealous advertiser, etc. – almost nothing can stop the accumulation of information about you and your activities. However, most people don’t fit this profile. You’re just a drop of water in a very large digital ocean. Relax and float on the waves…

Understanding helps: nothing is free. Ever. So if you come to know that the ‘price’ you pay for the ‘free’ Google search engine that solves your trivia questions, settles arguments amongst your children, or allows you to complete your next research project in a fraction of the time that would otherwise be necessary is the ‘donation’ of some information about what you search for, when, and how often – then maybe you can see this as fair payment. After all, the data centers that power the Google search ‘engine’ are ginormous, hugely expensive to build and massively expensive to run – they tend to be located close to power-generating sources, as the amount of electricity consumed is so large. Ultimately someone has to pay the power bill…

Privacy: a delusion? a right? an ideal?

February 3, 2014 · by parasam

With all of the ‘exploits’ of the NSA and their brethren agencies concerning the “intelligence data” gathering process in the news recently, I wanted to expand on a post I wrote some time ago (here) on the “Perception of Privacy” – although that post was more narrowly focused on privacy as it relates to photography. Without regard to the legality or morality of Edward Snowden’s activities [or similar activities that have shed light on what our collective governments have been doing in terms of ‘snooping’] (I’ll reserve that for a future post) – I want to address the notion of ‘privacy’ in our changing world.

Privacy ultimately implies a separation of thought, speech, activity or other action from the larger world around one. If one reviews Greek history, the Cynics (one of the three Schools that descended from Socrates, Plato and Aristotle) were perhaps the best example of a way of life in which there was no privacy. They practiced living with complete “shameless behavior” and did everything in public – not to shock, but rather to exercise indifference to societal norms and rise above them. However, most cultures have evolved into a balance of public and private activity – although with substantial variation in what is acceptable “public behavior.”

The issue at hand today with our beliefs around privacy of communications (whether voice or data) centers on our “expectation of privacy.” If we post a public comment on Facebook or the New York Times web site, we have no reasonable expectation of privacy and therefore are not worried if this communication is shared or observed by others. However, if we send an e-mail to a single recipient, or converse on the telephone with a family member, we have a reasonable expectation of privacy – and would be upset if this communication were shared with others (such as government agencies) when there is no pre-existing reason for such a violation of privacy.

The big difference – and the root of much of the current dialog regarding online privacy – is that various companies (mostly ad-based or other big-data firms) and nation-state governmental agencies have taken the position that extracting and storing virtually all possible data from communications within their reach is ethical, potentially useful, and profitable. From a governance point of view, the position is that if we have all this data on hand, then we can review it if we come to believe that person X has potentially violated some standard of behavior and is therefore deserving of surveillance. The commercial position (Big Data) is that the more we know about everyone, the better we can target commercial opportunities – or perhaps protect certain companies’ profits [health/life insurance firms, corporate employers and financial institutions will all argue in favor of knowing everything possible about their potential customers].

There are a few problems with this philosophy: one of which is just practical and economic – the vast amount of storage capacity that unfocused data gathering requires. Eventually someone has to pay for all those hard disks… with one of the latest methodologies that has been revealed (harvesting of data from ‘leaky apps’ on mobile devices) generating terabytes of data per hour just from this type of activity – the scope of this data storage dilemma is becoming quite large. When you fill out one of those annoying forms when you sign up to WhatsApp (for example), are you aware that your e-mail address, cellphone number, and potentially your entire contact list is shared and propagated to a huge slew of firms outside of WhatsApp? Including Washington, D.C.? Everything from Angry Birds to top newspaper and television firms that use apps for mobile connectivity have been shown to basically have no safeguards whatsoever in terms of subscriber data privacy.

This is a new and relatively unknown issue for courts, philosophers, commercial firms, governments and their subject citizens to wrestle with. It will take some time for a collective rationale to emerge – and whatever balance between real privacy (almost impossible to have in a highly connected society) and public forum is achieved will vary widely from culture to culture. I’ll continue to observe and post on this topic, but comments are welcome.
