Browsing Tags cybersecurity

The Patriot Act – upcoming expiry of Section 215 and other unpatriotic rules…

April 18, 2015 · by parasam


On June 1, less than 45 days from now, a number of sections of the Patriot Act expire. The administration and much of our national security apparatus, including the Pentagon, Homeland Security, etc., are strongly pushing for an extended renewal of these sections without modification.

While this may on the surface seem like something we should do (we need all the security we can get in these times of terrorism, Chinese/North Korean/WhoKnows hacks, etc. – right?), the reality is significantly different. Many sections of the Patriot Act (including ones that are already in force and do not expire for many years to come) are insidious: they grant almost unlimited and unprecedented surveillance powers to our government (and, by the way, to any private contractors the government hires to help with this task), and they operate mostly without functional oversight or accountability.

Details of the particular sections up for renewal may be found in this article, and for a humorous and allegorical take on Section 215 (the so-called “Library Records” provision) I highly recommend this John Oliver video. While the full Patriot Act is huge, covering an exhaustingly broad scope of activities that the government (meaning its various security agencies, including but not limited to the CIA, FBI, NSA, Joint Military Intelligence Services, etc.) is permitted to undertake, the sections of particular interest in terms of digital communications security are the following:

  • Sections 201, 202 – Ability to intercept communications (phone, e-mail, internet, etc.)
  • Section 206 – roving wiretap (ability to wiretap all locations that a person may have visited or communicated from for up to a year).
  • Section 215 – the so-called “Library Records” provision, basically allowing the government (NSA) to bulk-collect communications from virtually everyone and store them for later ‘research’ to determine whether any terrorist activity, or other activity deemed to violate national security interests, has occurred.
  • Section 216 – pen register / trap and trace (the ability to collect metadata and/or actual telephone conversations – metadata does not require a specific warrant, recording content of conversations does).
  • Section 217 – computer communications interception (ability to monitor a user’s web activity, communications, etc.)
  • Section 225 – Immunity from prosecution for compliance with wiretaps or other surveillance activity (essentially protects police departments, private contractors, or anyone else that the government instructs/hires to assist them in surveillance).
  • Section 702 – Surveillance of ‘foreigners’ located abroad (in principle this should restrict surveillance to foreign nationals outside of the US at the time of such action, but there is much gray area concerning exactly who is a ‘foreigner’ [for instance, is the foreign-born wife of a US citizen a “foreigner” – and if so, is surveillance of communications between the wife and the husband allowed?]).

Why is this Act so problematic?

As with many things in life, the “law of unintended consequences” can often overshadow the original problem. In this case, the original rationale – wanting to get all the information possible about persons or groups that may be planning terrorist activities against the USA – was arguably noble, but the unprecedented powers and lack of accountability provided for by the Patriot Act have the potential (and in fact have already been shown) to scuttle many of the individual freedoms that form the basis of our society.

Without regard to the methods or justification for his actions, the revelations provided by Ed Snowden’s leaks of the current and past practices of the NSA are highly informative. This issue is now public, and cannot be ‘un-known’. What is clearly documented is that the NSA (and other entities, as has since come to light) has extended surveillance of millions of US citizens living within the domestic US to a far greater extent than even the original authors of the Patriot Act envisioned. [This was revealed in multiple recent TV interviews.]

The next major issue is that of ‘data creep’ – that such data, once collected, almost always gets replicated into other databases, etc. and never really goes away. In theory, to take one of the Sections (702), data retention even for ‘actionable surveillance of foreign nationals’ is limited to one year, and inadvertent collection of surveillance data on US nationals, or even a foreign national that has travelled within the borders of the USA is supposed to be deleted immediately. But absolutely no instruction or methodology is given on how to do this, nor are any controls put in place to ensure compliance, nor are any audit powers given to any other governmental agency.

As we have seen in past discussions regarding data retention and deletion with the big social media firms (Facebook, Google, Twitter, etc.) it’s very difficult to actually delete data permanently. Firstly, in spite of what appears to be an easy step, actually deleting your data from Facebook is incredibly hard to do (what appears to be easy is just the inactivation of your account, permanently deleting data is a whole different exercise). On top of that, all these firms (and the NSA is no different) make backups of all their server data for protection and business continuity. One would have to search and compare every past backup to ensure your data was also deleted from those.

And even the backups have backups… it’s considered an IT ‘best practice’ to back up critical information across different geographical locations in case of disaster. You can see the scope of this problem… and once you understand that the NSA for example will under certain circumstances make chunks of data available to other law enforcement agencies, how does one then ensure compliance across all these agencies that data deletion occurs properly? (Simple answer: it’s realistically impossible).

So What Do We Do About This?

The good news is that most of these issues are not terribly difficult to fix… but the hard part will be changing the mindset of many in our government who feel that they should have the power to do anything they want in total secrecy with no accountability. The “fix” is to basically limit the scope and power of the data collection, provide far greater transparency about both the methods and actual type of data being collected, and have powerful audit and compliance methods in place that have teeth.

The entire process needs to be stood on its end – with the goal being to minimize surveillance to the greatest extent possible, and to retain as little data as possible, with very restrictive rules about retention, sharing, etc. For instance, if data is shared with another agency, it should ‘self-expire’ (there are technical ways to do this) after a certain amount of time, unless it has been determined that this data is now admissible evidence in a criminal trial – in which case the expiry can be revoked by a court order.
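To make the ‘self-expire’ idea concrete, here is a minimal sketch of one possible mechanism, assuming shared records carry an expiry timestamp and a court-ordered hold flag; the record structure and names are illustrative only, not a description of any agency’s actual system:

```python
# Sketch: shared data that 'self-expires' unless a court-ordered hold is applied.
# All names and the storage model here are hypothetical illustrations.
import time
from dataclasses import dataclass

@dataclass
class SharedRecord:
    payload: bytes
    expires_at: float          # epoch seconds after which the data is unreadable
    legal_hold: bool = False   # set only by an explicit court order

def share(payload: bytes, ttl_days: float) -> SharedRecord:
    """Hand data to another agency with a built-in retention limit."""
    return SharedRecord(payload, time.time() + ttl_days * 86400)

def read(record: SharedRecord) -> bytes:
    """Reads succeed only inside the retention window or under a legal hold."""
    if record.legal_hold or time.time() < record.expires_at:
        return record.payload
    raise PermissionError("record has expired and is no longer accessible")

rec = share(b"shared surveillance extract", ttl_days=365)
print(read(rec))               # allowed while the retention window is open
```

In a real system the enforcement would have to live in the storage and sharing layer itself (and survive backups), which is exactly why the policy and audit requirements matter more than the code.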


The irony is that even the NSA has admitted that there is no way it can possibly run general keyword searches across all the data it has already collected. It could of course look for a particular person or place name, but if that were all it needed, it could have originally collected surveillance data only for those parameters instead of bulk-collecting on most American citizens living in the USA…

While they won’t give details, reasonable assumptions can be drawn from public filings and statements, as well as purchase information from storage vendors… and the NSA alone can be assumed to have many hundreds of exabytes of data stored. Given that 1 exabyte = 1,024 petabytes (each of which = 1,024 terabytes), this is an incredible amount of data. Put another way, it’s hundreds of billions of gigabytes… and remember that your ‘fattest’ iPhone holds 128 GB.
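To make those units concrete, a quick back-of-the-envelope check (a small Python sketch; the exabyte totals below are purely illustrative, not verified figures):

```python
# Unit sanity check for the storage figures discussed above.
GB_PER_EB = 1024 ** 3    # 1 EB = 1,024 PB = 1,048,576 TB = 1,073,741,824 GB

for exabytes in (100, 300, 500):              # illustrative totals only
    gigabytes = exabytes * GB_PER_EB
    iphones = gigabytes / 128                 # 128 GB per top-end iPhone of the day
    print(f"{exabytes} EB ≈ {gigabytes / 1e9:,.0f} billion GB ≈ {iphones / 1e9:,.1f} billion iPhones")
```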

It’s a mindset of ‘scoop up all the data we can, while we can, just in case someday we might want to do something with it…’  This is why, if we care about our individual freedom of expression and liberty at all, we must protest against the blind renewal of these deeply flawed laws and regulations such as the Patriot Act.

This discussion is entering the public domain more and more – it’s making the news but it takes action not just talk. Make a noise. Write to your congressional representatives. Let them know this is an urgent issue and that they will be held accountable at election time for their position on this renewal. If the renewal is not granted, then – and typically only then – will the players be forced to sit down and have the honest discussion that should have happened years ago.

Data Security – An Overview for Executive Board members [Part 6: Cloud Computing]

March 21, 2015 · by parasam

Introduction

In the last part of this series on Data Security we’ll tackle the subject of the Cloud – the relatively ‘new kid on the block’. Actually “cloud computing” has been around for a long time, but the concept and naming were ‘repackaged’ along the way. The early foundation for the Cloud was ARPANET (1969), followed by several major milestones: Salesforce.com – the first major enterprise app available on the web (1999); Amazon Web Services (2002); Amazon EC2/S3 (2006); Web 2.0 (2009). The major impediment to mass use of the cloud (aka remote data centers) was the lack of cheap bandwidth. Once massive bandwidth became available [mainly in the US, western Europe and the Pacific Rim (APAC)], the development of fast and reliable web-based apps allowed concepts such as SaaS (Software as a Service) and other ‘remote’ applications to become viable. The initial use of the term “Cloud Computing” or “Cloud Storage” was intended to describe (in a buzzword fashion) the rather boring subject of remote data centers, hosted applications, storage farms, etc. in a way that mass consumers and small business could grasp.

Unfortunately this backfired in some corporate circles, leading to fragmented and slow adoption of some of the potential power of cloud computing. Part of this was PR and communication; another part was the fact that the security of early cloud centers (and unfortunately still many today!) was not very good. Concerns over security of assets, management and other access and control issues led many organizations – particularly media firms – to shun ‘clouds’ for some time, as they feared (perhaps rightly so early on) that their assets could be compromised or pirated. With generally poor communication about cloud architecture, and the difference between ‘public clouds’ and ‘private clouds’ not being effectively explained, widespread confusion existed for several years concerning this new technology.

Like most things in the IT world, very little was actually “new” – but incremental change is often perceived as boring so the marketing and hype around gradual improvements in remote data center capability, connectivity and features tended to portray these upgrades as a new and exciting (and perhaps untested!) entity.

The Data Security Model

Cloud Computing

To understand the security aspects of the Cloud, and more importantly to recognize the strategic concepts that apply to this computational model, it is important to know what a Cloud is, and what it is not. Essentially a cloud is a set of devices that host services in a remote location in relation to the user. Like most things in the IT world, there is a spectrum of capability and complexity… little clouds and bigger clouds… For instance, a single remote server that hosts some external storage that is capable of being reached by your cellphone can be considered a ‘cloud’… and on the other extreme the massive amount of servers and software that is known as AWS (Amazon Web Services) is also a cloud.

In both cases, and everything in between, all of the same issues we have discussed so far apply to the local cloud environment: Access Control, Network Security, Application Security and Data Compartmentalization. Every cloud provider must ensure that all these bits are correctly implemented – and it’s up to the due diligence of any cloud user/subscriber to verify that in fact these are in place. In addition to all these security features, there are some unique elements that must be considered in cloud computing.

The two main additional elements, in terms of security to both the cloud environment itself and the ‘user’ (whether that be an individual user or an entire organization that utilizes cloud services), are Cloud Access Control and Data Transport Control. Since the cloud is by its very nature remote from the user, a WAN (Wide Area Network) connection is almost always used to connect users to the cloud. This type of access is more difficult to secure and, as has been shown repeatedly by recent history, is susceptible to compromise. Even if the access control is fully effective, thereby allowing only authorized users to enter and perform operations, the problem of unauthorized data transport remains: again, recent reports demonstrate that ‘inside jobs’ often occur when users who have access to data (or whose credentials are compromised or stolen) move or delete that data, often with serious results.

An extra layer of security protocols and procedures is necessary to ensure that data transport or editing operations are appropriate and authorized.

  • Cloud Access Control (CAC) – Since the cloud environment (from an external user perspective) is ‘anonymous and external’ [i.e. neither the cloud nor the user can directly authenticate each other, nor is either contained within physical proximity], the possibility of unauthorized access (by a user) or spoofing (misdirecting a user to a cloud site other than the one to which they intended to connect) is much greater. Both users and cloud sites must take extra precautions to ensure these scenarios do not take place.
    • Two-factor authentication is even more important in a cloud environment than in a ‘normal’ network. The best form of such authentication is where one of the factors is a Real Time Key (RTK). Essentially this means that one of the key factors is either generated in real time (or revealed in real time) and this shared knowledge between the user and the cloud is used to help authenticate the session.
    • One common example of RTK is where a short code is transmitted (often as a text message to the user’s cellphone) after the user completes the first part of a signon process using a username/password or some other single-factor login procedure.
    • Another form of RTK is a shared random key, where the user device is usually a small ‘fob’ that displays a random number that changes every 30 seconds. The cloud site contains a random number generator that uses the same seed as the user fob, so the two numbers will match within the 30-second window (a minimal sketch of this approach appears just after this list).
    • Either of these methods protects against both unauthorized user access (as is obvious) and spoofing of the user (in the first method, via the cloud site’s required prior knowledge of the user’s cellphone number; in the second, via the requirement of matching random number generators).
    • With small variations to the above procedures, such authentication can apply to M2M (Machine to Machine) sessions as well.
  • Data Transport Control (DTC) – Probably the most difficult aspect of security to control is unauthorized movement, copying, deletion, etc. of data that is stored in a cloud environment. It’s the reason that even up until today many Hollywood studios prohibit their most valuable motion picture assets from being stored or edited in public cloud environments. Whether from external ‘hackers’ or internal network admins or others who have gone ‘rogue’ – protection must be provided to assets in a cloud even from users/machines that have been authenticated.
    • One method is to encrypt the assets, whereby the users that can effect the movement, etc. of an asset do not have the decryption key, so even if data is copied it will be useless without the decryption key. However, this does not protect against deletion or other edit functions that could disrupt normal business. There are also times where the encryption/decryption process would add complexity and reduce efficiency of a workflow.
    • Another method (offered by several commercial ‘managed transport’ applications) is a strict set of controls over which endpoints can receive or send data to/from a cloud. With the correct process controls in place (requiring, for example, that the defined lists of approved endpoints cannot be changed on the fly and that at least two different users must collectively authenticate updates to the endpoint list), a very secure set of transport privileges can be set up.
    • Tightly integrating ACL (Access Control List) actions against users and a set of rules can again reduce the possibility of rogue operations. For instance, the deletion of more than 5 assets within a given time period by a single user would trigger an authentication request against a second user – this would prevent a single user from wholesale data destruction operations. You might lose a few assets but not hundreds or thousands.
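Below is a minimal sketch of the time-windowed shared-key idea from the Cloud Access Control bullets above – essentially the standard HOTP/TOTP approach, assuming a secret seed has been provisioned to both the fob (or phone app) and the cloud site. The secret value, function names and 6-digit format are illustrative assumptions:

```python
# Sketch: a real-time key (RTK) derived from a pre-shared secret and the
# current 30-second window, so fob and cloud site compute the same short code.
import hashlib, hmac, struct, time

SECRET = b"seed-provisioned-out-of-band"       # hypothetical pre-shared seed

def current_code(secret: bytes, window: int = 30, digits: int = 6) -> str:
    """Derive a short numeric code from the secret and the current time window."""
    counter = int(time.time()) // window
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation, as in RFC 4226
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def verify_rtk(user_supplied: str, secret: bytes = SECRET) -> bool:
    """Both sides compute the same code inside the same window, so it can be compared."""
    return hmac.compare_digest(user_supplied, current_code(secret))

code = current_code(SECRET)                    # what the fob or phone would display
print("code:", code, "accepted:", verify_rtk(code))
```

A production implementation would also tolerate one window of clock drift and rate-limit failed attempts, but the essential point stands: the ‘shared knowledge’ is regenerated every 30 seconds, so a captured code is useless moments later.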

One can see that the art of protection here is really in the strategy and process controls that are set up – the technical bits just carry out the strategies. There are always compromises, and the precise set of protocols that will work for one organization will be different for another: security is absolutely not a ‘one size fits all’ concept. There is also no such thing as ‘total security’ – even if one wanted to sacrifice a lot of usability. The best practices serve to reduce the probability of a serious breach or other kind of damage to an acceptable level.

Summary

In this final section on Data Security we’ve discussed the special aspects of cloud security at a high level. As cloud computing becomes more and more integral to almost every business today, it’s vital to consider the security of such entities. As the efficiency, ubiquity and cost-saving features of cloud computing continue to rise, many times a user is not even consciously aware that some of the functionality they enjoy during a session is being provided by one or more cloud sites. To further add to the complexity (and potentially reduce security) of cloud computing in general, many clouds talk to other clouds… and the user may have no knowledge or control over these ‘extended sessions’. One example (that is currently the subject of large-scale security issues) is mobile advertising placement.

When a user launches one of their ‘free’ apps on their smartphone, the little ads that often appear at the bottom are not placed there by the app maker; rather, that real estate is ‘leased’ to the highest bidder at that moment. The first ‘connection’ to the app is often an aggregator who resells the ‘ad space’ on the apps to one or more agencies that put this space up for bidding. Factors such as the user’s location, the model of phone, the app being used, etc. all factor into the price and type of ad being served. The ads themselves are often further aggregated by mobile ad agencies or clearing houses, many of which are scattered around the globe. With the speed of transactions and the number of layered firms involved, it’s almost impossible to know exactly how many companies have a finger in the pie of the app being used at that moment.

As can be seen from this brief introduction to Data Security, the topic can become complex in the details, but actually rather simple at a high level. It takes a clear set of guidelines, a good set of strategies – and the discipline to carry out the rules that are finally adopted.

Further enquiries on this subject can be directed to the author at ed@exegesis9.com

 

 

Data Security – An Overview for Executive Board members [Part 5: Data Compartmentalization]

March 20, 2015 · by parasam

Introduction

This part of the series will discuss Data Compartmentalization – the rational separation of devices, data, applications and communications access from each other and from external entities. This strategy is paramount in the design of a good Data Security model. It’s often overlooked, particularly in small business (large firms tend to have more experienced IT managers who have been exposed to this), and the biggest issues with Compartmentalization are fully implementing it correctly in the first place and then keeping it in place over time. While not difficult per se, serious thought must be given to the layout and design of all the parts of a firm’s data structure if the balance between Security and Usability is to be attained in regards to Compartmentalization.

The concept of Data Compartmentalization (allow me to use the acronym DC in the rest of this post, both for ease of the author in writing and you the reader!) implies a separating element, i.e. a wall or other structure. Just as the watertight compartments in a submarine can keep the boat from sinking if one area is damaged (and the doors are closed!!) a ‘data-wall’ can isolate a breached area without allowing the enterprise at large to be exposed to the risk. DC is not only a good idea in terms of security, but also integrity and general business reliability. For instance, it’s considered good practice to feed different compartments with mains power from different distribution panels. So if one panel experiences a fault not everything goes black at once. A mis-configured server that is running amok and choking that segment of a network can easily be isolated to prevent a system-wide ‘data storm’ – an event that will raise the hairs on any seasoned network admin… a mis-configured DNS server can be a bear to fix!

In this section we’ll take a look at different forms of DC, and how each is appropriate to general access, network communications, applications servers, storage and so on. As in past sections, the important aspect to take away is the overall strategy, and the intestinal fortitude to ensure that the best practices are built, then followed, for the full life span of the organization.

The Data Security Model

Data Compartmentalization

DC (Data Compartmentalization) is essentially the practice of grouping and isolating IT functions to improve security, performance, reliability and ease of maintenance. While most of our focus here will be on the security aspects of this practice, it’s helpful to know that basic performance is often increased (by reducing traffic on a LAN [Local Area Network] sub-net), reliability is improved since there are fewer moving parts within one particular area, and it’s easier to patch or otherwise maintain a group of devices when they are already grouped and isolated from other areas. It is not at all uncommon for a configuration change or software update to malfunction, and sometimes this can wreak havoc on the rest of the devices that are interconnected on that portion of a network. If all hell breaks loose, you can quickly ‘throw the switch’ and separate the malfunctioning group from the rest of the enterprise – provided this Compartmentalization has first been set up.

Again, from a policy or management point of view, it’s not important to understand all the details of programming firewalls or other devices that act as the ‘walls’ that separate these compartments (believe me, the arcane and tedious methodology required even today to correctly set up PIX firewalls for example gives even the most seasoned network admins severe headaches!). The fundamental concept is that it’s possible and highly desirable to break things down into groups, and then separate these groups at a practical level.

For a bit of perspective, let’s look at how DC operates in conjunction with each of the subjects that we’ve discussed so far: Access Control, Network Security Control and Application Security. In terms of Access Control the principle of DC would enforce that logins to one logical group (or domain or whatever logical division is appropriate to the organization) would restrict access to only the devices within that group. Now once again we run up against the age-old conundrum of Security vs. Usability – and here the oft-desired SSO (Single Sign On) feature is often at odds with a best practice of DC. How does one manage that effectively?

There are several methods: either a user is asked for additional authentication when wanting to cross a ‘boundary’ – or a more sophisticated SSO policy/implementation is put in place, where a request from a user to ‘cross a boundary’ is fed to an authentication server that automatically validates the request against the user’s permissions and allows the user to travel farther in cyberspace. As mentioned earlier in the section on Network Security, there is a definite tradeoff on this type of design, because rogue or stolen credentials could then be used to access very wide areas of an organization’s data structure. There are ways to deal with this, mostly in terms of monitoring controls that match the behavior of users with their past behavior, and a very granular set of ACL (Access Control List) permissions that are very specific about who can do what and where. There is no perfect answer to this balance between ideal security and friction-free access throughout a network. But in each organization the serious questions need to be asked, and a rational policy hammered out – with an understanding of the risks of whatever compromise is chosen.
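As a toy illustration of the ‘boundary crossing’ check described above – an authentication server validating a request against the user’s permissions before extending the SSO session – here is a minimal sketch; the ACL layout, compartment names and function names are purely illustrative:

```python
# Sketch: validating an SSO 'boundary crossing' against a per-user ACL
# before the session is extended into another compartment.
from typing import Dict, Set

ACL: Dict[str, Set[str]] = {
    "jsmith": {"office-lan", "email"},        # compartments this user may enter
    "backup-svc": {"email", "archive"},       # machine accounts get ACLs too
}

def may_cross(user: str, target: str, acl: Dict[str, Set[str]] = ACL) -> bool:
    """Allow the already-authenticated user into the target compartment only if listed."""
    return target in acl.get(user, set())

assert may_cross("jsmith", "email")
assert not may_cross("jsmith", "archive")     # denied: trigger step-up auth or refuse
```

The interesting policy decision is what happens on the denied branch – silent refusal, a step-up authentication prompt, or an alert to a monitoring system – and that is a governance choice, not a technical one.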

Moving on to the concept of DC for Network Security, a similar set of challenges and possible solutions to the issues raised above in Access Control present themselves. While one may think that from a pure usability standpoint everything in the organization’s data structure should be connected to everything else, this is neither practical nor reliable, let alone secure. One of the largest challenges to effective DC for large and mature networks is that these typically have grown over years, and not always in a fully designed manner: often things have expanded organically, with bits stuck on here and there as immediate tactical needs arose. The actual underpinnings of many networks of major corporations, governments and military sites are usually byzantine, rather disorganized and not fully documented. The topology of a given network or group of networks also has a direct effect on how DC can be implemented: that is why the best designs are those where DC is taken as a design principle from day one. There is no one ‘correct’ way to do this: the best answer for any given organization is highly dependent on the type of organization, the amount and type of data being moved around and/or stored, and how much interconnection is required. For instance, a bank will have very different requirements than a news organization.

Application Security can only be enhanced by judicious use of compartmentalization. For instance, web servers, an inherently public-facing application, should be isolated from an organization’s database, e-mail and authentication servers. One should also remember that the basic concepts of DC can be applied no matter how small or large an organization is: even a small business can easily separate public-facing apps from secure internal financial systems, etc. with a few small routers/firewalls. These devices are so inexpensive these days that there is almost no rationale for not implementing these simple safeguards.

One can see that the important concept of DC can be applied to virtually any area of the Data Security model:  while the details of achieving the balance between what to compartmentalize and how to monitor/control the data movement between areas will vary from organization to organization, the basic methodology is simple and provides an important foundation for a secure computational environment.

Summary

In this section we’ve reviewed how Data Compartmentalization is a cornerstone of a sound data structure, aiding not only security but performance and reliability as well. The division of an extended and complex IT ecosystem into ‘blocks’ allows for flexibility and ease of maintenance, and greatly contributes to the ability to contain a breach should one occur (and one inevitably will!). One of the greatest mistakes any organization can make is to assume “it won’t happen to me.” Breaches are astoundingly commonplace, and many are undetected or go unreported even if discovered. For many reasons, including loss of customer or investor confidence, potential financial losses, lack of understanding, etc., the majority of breaches that occur within commercial organizations are not publicly reported. Usually we find out only when the breach is sufficient in scope that it must be reported. And the track record for non-commercial institutions is even worse: NGOs, charities, research institutions, etc. often don’t even know of a breach unless something really big goes wrong.

The last part of this series will discuss Cloud Computing: as a relatively new ‘feature’ of the IT landscape, the particular risks and challenges to a good security model warrant a focused effort. The move to using some aspect of the Cloud is becoming prevalent very quickly among all levels of organizations: from the massive scale of iTunes down to an individual user or small business backing up their smartphones to the Cloud.

Part 6 of this series is located here.

 

Data Security – An Overview for Executive Board Members [Part 4: Application Security]

March 19, 2015 · by parasam

Introduction

In this section we’ll move on from Network Security (discussed in the last part) to the topic of Application Security. So far we’ve covered issues surrounding the security of basic access to devices (Access Control) and networks (Network Security); now we’ll look at an oft-overlooked aspect of a good Data Security model: how applications behave in regards to a security strategy. Having access to computers, smartphones, tablets, etc., and the privileges to connect to other devices through networks, is – without applications – like being inside a shop without doing any shopping. Rather useless…

All of our functional work with data is through one or more applications: e-mail, messaging, social media, database, maps, VoIP (internet telephony), editing and sharing images, financial and planning – the list is endless. Very few modern applications work completely in a vacuum, i.e. perform their function with absolutely no connection to the “outside world”. The connections that applications make can be as benign as accessing completely ‘local’ data – such as a photo-editing app requiring access to your photo library on the same computer on which the app is running – or can reach out to the entire public internet – such as Facebook.

The security implications of these interactions between applications and other apps, stored data, websites, etc. etc. are the area of discussion for the rest of this section.

The Data Security Model

Application Security

Trying to think of an app that is completely self-contained is actually quite an exercise. A simple calculator and the utility app that switches on the LED “flash” (to function as a flashlight) are the only two apps on my phone that are completely ‘stand-alone’. Every other app (some 200 in my case) connects in some way to external data (even if on the phone itself), the local network, the web, etc. Each one of these ‘connections’ carries with it a security risk. Remember that hackers are like rainwater: even the tiniest little hole in your roof or wall will allow water into your home.

While you may think that your cellphone camera app is a very secure thing – after all, you are only taking pix with your phone and storing those images directly on your phone… we are not discussing uploading these images or sharing them in any way (yet). However… remember that little message that pops up when you first install an app such as that? Where it asks permission to access ‘your photos’? (Different apps may ask for permission for different things… and this only applies to phones and tablets: laptops and desktops never seem to ask at all – they just connect to whatever they want to!)

I’ll give you an example of how ‘security holes’ can contribute to a weakness in your overall Data Security model: We’ll use the smartphone as an example platform. Your camera has access to photos. Now you’ve installed Facebook as well, and in addition to the Facebook app itself you’ve installed the FB “Platform” (which supports 3rd party FB apps) and a few FB apps, including some that allow you to share photos online. FB apps in general are notoriously ‘leaky’ (poorly written in terms of security, and some even deliberately do things with your data that they do not disclose). A very common user behavior on a phone is to switch apps without fully closing them. If FB is running in the background all installed FB apps are running as well. Each time you take a photo, these images are stored in the Camera Roll which is now shared with the FB apps – which can access and share these images without your knowledge. So the next time you see celebrity pix of things we really don’t need to see any more of… now you know one way this can easily happen.

The extent to which apps ‘share’ data is far greater than is usually recognized. This is particularly true in larger firms that often have distributed databases, etc. Some other examples of particularly ‘porous’ applications are: POS (Point Of Sale) systems, social media applications (corporate integration with Twitter, Facebook, etc. can be highly vulnerable), mobile advertising backoffice systems, applications that aggregate and transfer data to/from cloud accounts and many more. (Cloud Computing is a case unto itself, in terms of security issues, and will be discussed as the final section in this series.)

There are often very subtle ways in which data is shared from a system or systems. Some of these appear very innocuous, but a determined hacker can make use of even small bits of data, which can be linked with other bits to eventually provide enough information to make a breach possible. One example: many apps (including operating systems themselves – whether Apple, Android, Windows, etc.) send ‘diagnostic’ data to the vendor. Usually this is described as ‘anonymous’ and gives the user the feeling that it’s ok to do this: firstly, personal information is not transmitted; secondly, the data is supposedly only going to the vendor’s website for data collection – usually to study application crashes.

However, it’s not that hard to ‘spoof’ the server address to which the data is being sent, and the seemingly innocent data being sent can often include either the IP address or MAC address of the device – which can be very useful in the future to a hacker that may attempt to compromise that device. The internal state of many ‘software switches’ is also revealed – which can tell a hacker whether some patches have been installed or not. Even if the area revealed by the app dump is not directly useful, a hacker that sees ‘stale’ settings (showing that this machine has not been updated/patched recently) may assume that other areas of the same machine are also not patched, and can use discovered vulnerabilities to attempt to compromise the security of that device.

The important thing to take away from this discussion is not the technical details (that is what you have IT staff for), but rather to ensure that protocols are in place to constantly keep ALL devices (including routers and other devices that are not ‘computers’ in the literal sense) updated and patched as new security vulnerabilities are published. An audit program should be in place to check this, and the resulting logs need to actually be studied, not just filed! You do not want to be having a meeting at some future date where you find out that a patch that could have prevented a data breach remained uninstalled for a year… which BTW is extraordinarily common.

The ongoing maintenance of a large and extended data system (such as many companies have) is a significant effort. It is as important as the initial design and deployment of the systems themselves. There are well-known methodologies for doing this correctly that provide a high level of both security and stability for the applications and the technical business process in general. It’s just that often they are not universally applied without exception. And it’s those little ‘exceptions’ that can bite you in the rear – fatally.

A good rule of thumb is that every time you launch an application, that app is ‘talking’ to at least ten other apps, OS processes, data stores, etc. Since the average user has dozens of apps open and running simultaneously, you can see that most user environments are highly interconnected and potentially porous. The real truth is that as a collective society, we are lucky that there are not enough really good hackers to go around: the amount of potential vulnerabilities vastly outnumbers those who would take advantage of them!

If you really want to look at extremes of this ‘cat and mouse’ game, do some deep reading on the biggest ‘hack’ of all time: the NSA penetration of massive amounts of US citizens’ data on the one side, and the procedures that Ed Snowden took in communicating with Laura Poitras and Glenn Greenwald (the journalists who first connected with Snowden) on the other. Ed Snowden, more than just about anyone, knew how to effectively use computers and not be breached. His approach was fairly elaborate – but not all that difficult – and he managed to instruct both Laura and Glenn in how to set up the necessary security on their computers so that reliable and totally secret communications could take place.

Another very important issue of which to be aware, particularly in this age of combined mobile and corporate computing with thousands of interconnected devices and applications: breaches WILL occur. It’s how you discover and react to them that is often the difference between a relatively minor loss and a CNN-exposé-level event… The bywords one should remember are: Containment, Awareness, Response and Remediation. Any good Data Security protocol must include practices that are just as effective against M2M (Machine to Machine) actions as against those performed by human actors. So constant monitoring software should be in place to see whether unusual amounts of connections, file transfers, etc. are taking place – even from one server to another. I know it’s an example I’ve used repeatedly in this series (I can’t help it – it’s such a textbook case of how not to do things!), but the Sony hack revealed that a truly massive amount of data (as in many, many terabytes) was transferred from supposedly ‘highly secure’ servers/storage farms to repositories outside the USA. Someone or something should have been notified that very sustained transfers of this magnitude were occurring, so at least some admin could check and see what was using all this bandwidth. Both of the most common corporate file transfer applications (Aspera and Signiant) have built-in management tools that can report on what’s going where… so this was not a case of something that needed to be built – it’s a case of correctly using what’s already provided.
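Here is a minimal sketch of the kind of volume monitoring described above: summing outbound transfers per source over a sliding window and raising an alert when a threshold is crossed. The threshold, window and alert hook are illustrative assumptions, not a recommendation for any particular product:

```python
# Sketch: alert when any single source moves an unusual volume of data
# within a one-hour sliding window.
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional, Tuple

WINDOW_SECONDS = 3600
THRESHOLD_BYTES = 500 * 1024**3                 # e.g. flag >500 GB/hour from one host

transfers: Dict[str, Deque[Tuple[float, int]]] = defaultdict(deque)

def record_transfer(source: str, n_bytes: int, now: Optional[float] = None) -> None:
    """Log one transfer event and alert if the rolling total exceeds the threshold."""
    now = time.time() if now is None else now
    window = transfers[source]
    window.append((now, n_bytes))
    while window and window[0][0] < now - WINDOW_SECONDS:   # drop stale entries
        window.popleft()
    total = sum(b for _, b in window)
    if total > THRESHOLD_BYTES:
        alert(source, total)

def alert(source: str, total_bytes: int) -> None:
    print(f"ALERT: {source} moved {total_bytes / 1024**3:.0f} GB in the last hour")
```

The point is not the specific numbers but that someone – or something – is actually watching the counters, and that the alerts go somewhere they will be acted on.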

Many, if not most, applications can be ‘locked down’ to some extent – the amount and degree of communication can be controlled to help reduce vulnerabilities. Sometimes this is not directly possible within the application, but it’s certainly possible if the correct environment for the apps is designed and implemented appropriately. For example, a given database engine may not have the level of granular controls to effectively limit interactions for your firm’s use case. If that application (and possibly others of similar function) are run on a group of application servers that are isolated from the rest of the network with a small firewall, the firewall settings can be used to very easily and effectively limit precisely which other devices these servers can reach, what kind of data they may send/receive, the time of day when they can be accessed, etc. etc. Again, most of good security is in the overall concept and design, as even excellent implementation of a poor design will not be effective.

Summary

Applications are what actually give us function in the data world, but they must be carefully installed, monitored and controlled in order to obtain the best security and reliability. We’ve reviewed a number of common scenarios that demonstrate how easily data can be compromised by unintended communication to/from your applications. Applications are vital to every aspect of work in the Data world, but can, and do, ‘leak’ data to many unintended areas – or provide an unintended bridge to sensitive data.

The next section will discuss Data Compartmentalization. This is the area of the Data Security model that is least understood and practiced. It’s a bit like the watertight compartments in a submarine: if one area is flooded, closing the communicating doors can save the rest of the boat from disaster. A big problem (again, not technical but procedural) is that in many organizations, even where a good initial design segregated operations into ‘compartments’, it doesn’t take very long at all for “cables to be thrown over the fence”, thereby bypassing the very protections that were put in place. Often this is done for expediency to fix a problem, or because some new app needs to be brought on line quickly and taking the time to install things properly, with all the firewall rule changes, is waived. These business practices are where good governance, proper supervision, and continually asking the right questions are vital.

Part 5 of this series is located here.

Data Security – An Overview for Executive Board members [Part 3: Network Security]

March 18, 2015 · by parasam

Introduction

In Part 2 of this series we reviewed Access Control – the beginning of a good Data Security policy. In this section we’ll move on to Network Security Controls: once a user has access to a device or data, almost all interactions today are with a network of devices, servers, storage, transmission paths, web sites, etc. The design and management of these networks are absolutely critical to the security model. This area is particularly vulnerable to intrusion or inadvertent ‘leakage’ of data, as networks continue to grow and can become very complex. Often parts of the network have been in place for a long time, with no one currently managing them even aware of the initial design parameters or exactly how pieces are interconnected.

There are a number of good business practices that should be reviewed and compared to your firm’s networks – and this is not highly technical, it just requires methodology, common sense, and the willingness to ask the right questions and take action should the answers reveal weakness in network security.

The Data Security Model

Network Security Controls

Once more than one IT device is interconnected, we have a network. The principle of Access Control discussed in the prior section is equally applicable to a laptop or to an entire global network of thousands of devices. There are two major differences when we expand our security concept to an interconnected web of computers, storage, etc. The first is that when a user signs on (whether with a simple username and password, or a sophisticated certificate and biometric authorization), instead of logging into a single device, such as their laptop, they sign in to a portion of the network. The devices to which they are authorized are contained in an Access Control List (ACL) – which is usually chosen by the network administrator. (This process is vastly simplified here in regards to the complex networks that can exist today in large firms, but the principle is the same.) It’s a bit like a passport that allows you into a certain country, but not necessarily other bordering countries. Or one may gain entry into a neighboring country with restrictions, such as are put in place with visas for international travelers.

The second major difference, in terms of network security in relation to logging into a single device, is that within a network there are often many instances where one device needs to communicate directly with another device, with no human taking part in that process. These are called M2M (Machine to Machine) connections. It’s just as important that any device that wants to connect to another device within your network be authorized to do so. Again, the network administrator is responsible for setting up the ACLs that control this, and in addition many connections are restricted: a given server may only receive data but not send, or only have access to a particular kind of data.

In a large modern network, the M2M communication usually outnumbers the Human to Machine interactions by many orders of magnitude – and this is where security lapses often occur, just due to the sheer number of interconnected devices and the volume of data being exchanged. While it’s important to have in place trained staff, good technical protocols, etc. to manage this access and communication, the real protection comes from adopting, and continually monitoring adherence to, a sound and logical security model at the highest level. If this is not done, then usually a patchwork of varying approaches to network security ends up being implemented, with almost certain vulnerabilities.

The biggest cause of network security lapses results from the usual suspect: Security vs Usability. We all want ‘ease of use’ – and unfortunately no one more than network administrators and others who must traverse vast amounts of networks daily in their work, often needing to quickly log into many different servers to either solve problems or facilitate changes requested by impatient users. There is one login policy that is very popular with regular users and network admins alike: Single Sign On (SSO). While this provides great ease of use, and is really the only practical method for users to navigate large and complex networks, its very design is a security flaw waiting to happen. The proponents of SSO will argue that detailed ACLs (Access Control Lists – discussed in Part 2 of this series) can restrict very clearly the boundaries of who can do what, and where. And, they will point out, these ACLs are applicable to machines as well as humans, so in theory a very granular – and secure – network permissions environment can be built and maintained.

As always, the devil is in the details… and the larger a network gets the more details there are… couple that with inevitable human error, software bugs, determined hackers, etc. etc. and sooner or later a breach of data security is inevitable. This is where the importance of a really sound overall security strategy is paramount – one that takes into account that neither humans nor software are created perfectly, and that breaches will happen at some point. The issue is one of containment, awareness, response and remediation. Just as a well-designed building can tolerate a fire without quickly burning to the ground – due to such features as firestops in wall design, fire doors, sprinkler systems, fire retardant furnishings, etc.; so can a well designed network tolerate breaches without allowing unfettered access to the entire network. Unfortunately, many (if not most) corporate networks in existence today have substantial weaknesses in this area. The infamous Sony Pictures ‘hack’ was initiated by a single set of stolen credentials being used to access virtually the entire world-wide network of this company. (It’s a bit more complicated than this, but in essence that’s what happened). Ideally, the compromise of a single set of credentials should not have allowed the extent and depth of that breach.

Just as when a person is travelling from one state to another within even a single country, you must usually stop at the border and a guard has a quick look at your car, may ask if you have any fruit or veg (in case of parasites), etc. – so a basic check of permissions should occur when access is requested from a substantially different domain than the one where the initial logon took place. Even more important – going back to our traveler analogy: if instead of a car the driver is in a large truck, usually a more detailed inspection is warranted. A Bill of Lading must be presented, and a partial physical inspection is usually performed. The data equivalent of this (stateful inspection) will reveal if the data being moved is appropriate to the situation. Using the Sony hack as an example, the fact that hundreds of thousands of e-mails were being moved from within a ‘secure’ portion of the network to servers located outside the USA should have tripped a notification somewhere…

The important issue here to keep remembering is that the detailed technology, software, hardware or other bits that make this all work are not what needs to be understood at the executive level: what DOES need to be in place is the governance, policy and the will to enforce the policies on a daily basis. People WILL complain – certain of these suggested policies make accessing, moving and deleting data a bit more difficult and a bit more time-consuming. However, as we have seen, the tradeoff is worth it. No manager or executive wants to answer the kind of questions after the fact that can result from ignoring sound security practices…

Probably the single most effective policy to implement – and enforce – is the ‘buddy system’ at the network admin level. No single person should have unfettered access to the entire network scope of a firm. And most importantly, this must include any outside contractors. This is an area oft overlooked. The details must be designed for each firm, as there are so many variations, but essentially at least two people must authenticate major moves/deletions/copies of a certain scope of data. A few examples:  if an admin that routinely works in an IT system in California wants access to the firm’s storage servers in India, a local admin within India should be required to additionally authenticate this request. Or if a power user whose normal function is backup and restore of e-mails wants to delete hundreds of entire mailboxes then a second authentication should be required.
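As an illustration of the ‘buddy system’ just described, here is a minimal sketch in which sensitive operations require a second, distinct approver before they proceed. The operation names and role strings are hypothetical examples, not drawn from any real product:

```python
# Sketch: two-person authorization for high-impact operations.
from typing import Optional

SENSITIVE_OPS = {"bulk_delete_mailboxes", "cross_region_storage_access"}

def authorize(operation: str, requester: str, approver: Optional[str] = None) -> bool:
    """Routine operations pass on one identity; sensitive ones need a distinct second approver."""
    if operation not in SENSITIVE_OPS:
        return True
    return approver is not None and approver != requester

assert authorize("restore_single_mailbox", "admin-ca")                       # routine: allowed
assert not authorize("bulk_delete_mailboxes", "admin-ca")                    # blocked: no co-signer
assert authorize("cross_region_storage_access", "admin-ca", "admin-in")      # local admin co-signs
```

The same gate applies just as well to M2M requests, which is exactly the point of the next paragraph.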

While the above examples involved human operators, the same policies are effective with M2M actions. In many cases, sophisticated malware that is implanted in a network by a hacker can carry out machine operations – provided the credentials are provided. Again, if a ‘check & balance’ system is in place, unfettered M2M operations would not be allowed to proceed unhindered. Another policy to consider adopting is that of not excluding any operations by anyone at all from these intrusion detection, audit or other protective systems. Often senior network admins will request that these protective systems be bypassed for operations performed by a select class of users (most often top network admins) – as they often say that these systems get in the way of their work when trying to resolve critical issues quickly. This is a massive weakness – as has been shown many times when these credentials are compromised.

Summary

Although networks range from very simple to extraordinarily complex, the same set of good governance policies, protocols and other ‘rules of the road’ can provide an excellent level of security within the data portion of a company. This section has reviewed several of these, and discussed some examples of effective policies. The most important aspect of network security is often not the selection of the chosen security measures, but the practice of ensuring that they are in place completely across the entire network. These measures should also be regularly tested and checked.

In the next section, we’ll discuss Application Security:  how to analyze the extent to which many applications expose data to unintended external hosts, etc. Very often applications are quite ‘leaky’ and can easily compromise data security.

Part 4 of this series is located here.

 

 

Data Security – An Overview for Executive Board members [Part 2: Access Control]

March 16, 2015 · by parasam

Introduction

In Part 1 of this topic we discussed the concepts and basic practices of digital security, and covered an overview of Data Security. In the next parts we’ll go on to cover in detail a few of the most useful parts of the Data Security model, and offer some practical solutions for good governance in these areas. The primary segments of Data Security that have significant human factors, or require an effective set of controls and strategy in order for the technical aspect to be successful are: Access Control, Network Security Controls, Application Security, Data Compartmentalization, and Cloud Computing.

Security vs Usability

This is the cornerstone of so many issues with security: the paradox between really effective security and the ease of use of a digital system. It’s not unlike wearing a seatbelt in a car… a slight decrease in ‘ease of use’ results in an astounding increase in physical security. You know this. The statistics are irrefutable. Yet hundreds of thousands of humans are either killed or injured worldwide every year by not taking this slight ‘security’ effort. So… if you habitually put on your seat belt each time before you put your car in gear… then keep reading, for at least you are open to a tradeoff that will seriously enhance the digital security of your firm, whether a Fortune 500 company, a small African NGO or a research organization that is counting the loss of primary forests in Papua New Guinea.

The effective design of a good security protocol is not that different than the design principle that led to seatbelts in cars:  On the security side, the restraint system evolved from a simple lap belt to combination shoulder harness/lap belt systems, often with automatic mechanisms that ‘assisted’ the user to wear them. The coupling of airbags as part of the overall passenger restraint system (which being hidden required no effort on the part of the user to make them work) improved even further the effectiveness of the overall security system. On the usability side, the placement of the buckles, the mechanical design of the buckles (to make it easy for everyone from children to the elderly to open and close them), and other design factors worked to increase the ease of usability. In addition philosophical and social pressures added to the ‘human factor’ of seat belt use:  in most areas there are significant public awareness efforts, driver education and governmental regulations (fines for not wearing seat belts), etc. that further promote the use of these effective physical security devices.

If you attempt to put in place a password policy that requires at least 16 characters with ‘complexity’ (i.e. a mix of Caps, Numbers, Punctuation) – and require the password to be changed monthly you can expect a lot of passwords to be written down on sticky notes on the underside of the keyboards… You have designed a system with very good Security but poor Usability. In each of the areas that we will discuss the issue of Security vs Usability will be addressed, as it is paramount to actually having something that works.

The Data Security Model

Access Control

In its simplest form, access control is like the keys to your office, home or car. A person in possession of the correct key can access content or perform operations that are allowable within the confines of the accessible area. If you live in a low-crime area, you may have a very small number of keys: one for your house, one for your car and another for your office. But as we move into larger cities, we start collecting more keys: a deadbolt key (for extra security), probably a perimeter key for the office complex, a key for the postbox if you live in a housing complex, etc. But even relatively complex physical security is very simple compared to online security for a 'highly connected' user. It is very easy to have tens if not hundreds of websites, computer/server logins, e-mail logins, etc. that each require a password. Password managers have become almost a required toolset for any security-minded user today – how else to keep track of that many passwords? (And I assume here that you don't make the most basic mistake of reusing passwords across multiple websites…)

Back to basics: the premise behind the "username / password" authentication model is firstly to uniquely identify the user [username] and then to ensure that the access being granted is to the correct person [a secret password that supposedly is known only to the correct user]. There are several significant flaws with this model, but due to its simplicity and relative ease of use it is in widespread use throughout the world. In most cases, usernames are not protected in any way (other than being checked for uniqueness). Passwords, depending on the implementation, can be somewhat more protected – many systems store only an encrypted or hashed form of the password on the server or device to which the user is attempting to gain access, so that someone (on the inside) who gains access to the password list on the server doesn't get anything useful. Other attempts at making the password function more secure are password rules (such as requiring complexity/difficulty, longer passwords, forcing users to change passwords regularly, etc.) The problem with this is that the more secure (i.e. elaborate) the password rules become, the more likely it is that the user will compromise security by attempting to simplify the rules, or by copying the password somewhere so they can refer to it, since it's too complex to remember. The worst of this type of behavior is the yellow sticky note… the best is a well-designed password manager that stores all the passwords in an encrypted database – that itself requires a password for access!
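As a minimal sketch of the server-side storage idea described above (the algorithm and iteration count here are illustrative, not a prescription), the stored value is a salted, slow hash rather than the password itself:

    import hashlib, hmac, os

    def store_password(password: str):
        """Derive a salted hash; only the salt and hash are stored, never the password."""
        salt = os.urandom(16)
        pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, pw_hash

    def verify_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
        """Re-derive the hash from the submitted password and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, stored_hash)

    salt, stored = store_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("wrong guess", salt, stored))                   # False

An insider who copies the stored values still has to guess the original passwords, and the deliberately slow hash makes that guessing expensive.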

As can be seen, this username/password model is a compromise that fails in the face of the large numbers of different passwords needed by each user, and the ease with which many passwords can be guessed by a determined intruder. Various social engineering tactics, coupled with powerful computers and smart "password-guessing" algorithms, can often correctly figure out passwords very quickly. We've all heard of (or used!) birthdays, kids'/pets' names, switching out vowels with numbers, etc. There isn't a password simplification method that a hacker has not heard of as well…

So what next? Leaving the username identity alone for the moment, if we focus on just the password portion of the problem we can use biometrics. These have long been used by government, military and other institutions that had the money (such methods used to be obscenely expensive to implement) – but they are now within the reach of the average user. Every new iPhone has a fingerprint reader, and these devices are common on many PCs now as well. So far the fingerprint is the only fairly reliable biometric security method in common use, although retina scanners and other even more arcane devices are in use or being investigated. These devices are not perfect, and all the systems I have seen allow the use of a password as a backup method: the fingerprint is used more as a convenience than as absolute security. The fingerprint readers on smartphones are not of the same quality and accuracy as a FIPS-compliant device, but in fairness most restrict the number of 'bad fingerprint reads' to a small number before the alternate password is required, so the chance of a similar (but not exact) fingerprint being used to unlock the device is very low.

(Apple, for instance, states that there is a 1 in 50,000 chance of two different fingerprints being read as identical. At the academic level it is postulated that no two fingerprints are, or ever have been, exactly the same. Even if we consider only currently living humans, that is a ratio of roughly 1 in 7 billion… so by that measure fingerprint readers are not all that discriminating. However, they are practically more than good enough, given the statistical improbability of two people with remarkably similar fingerprints both being in a position to attempt access to the same device.)

Don’t give up! This is not to say that fingerprint readers are not an adequate solution – they are an excellent method – just that there are issues and the full situation should be well understood.

The next level of "password sophistication" is so-called "two-factor" authentication. This is becoming more common, and has the possibility of greatly increasing security without tremendous user effort. Basically this means the user submits two "passwords" instead of one. There are two forms of two-factor authentication: static-static and static-dynamic. The SS (static-static) method uses two previously known 'passwords' (usually one biometric – such as a fingerprint – and one actual 'password', whether a complex password or a PIN). The SD (static-dynamic) method uses one previously known 'password', and the second 'password' is some code/password/PIN that is dynamically transmitted to the user at the time of login. Usually these are sent to the user via their cellphone and are randomly created at the time of attempted login – and are therefore virtually impossible to predict. The user must have previously registered their cellphone number with the security provider so that they can receive the codes. There are obvious issues with this method: one has to be within cellphone reception, must not have left the phone at home, etc.
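On the server side, generating such a dynamic code is straightforward; a minimal sketch follows (delivery via an SMS gateway is omitted, and the 6-digit format and 5-minute lifetime are just illustrative choices):

    import secrets, time

    def issue_login_code():
        """Generate a random 6-digit code and the time at which it expires."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        expires_at = time.time() + 300   # valid for 5 minutes
        return code, expires_at

    def code_is_valid(submitted: str, issued: str, expires_at: float) -> bool:
        """Accept the code only if it matches and has not expired."""
        return time.time() < expires_at and secrets.compare_digest(submitted, issued)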

There is another SD method, which uses a 'token' (a small device containing a random-number generator that is seeded with an identical 'seed' to a paired master security server). This essentially means that both the server and the token will generate the same code each time the value updates (usually once every 30 seconds). The token works without a cellphone (which also means it can work underground or in areas where there is no reception). These various two-factor authentication methods are extremely secure, as the probability of a bogus user having both factors is statistically almost zero.
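Such tokens typically follow the time-based one-time password idea: both sides derive the same short code from a shared secret and the current 30-second window. A rough sketch of the derivation (the seed value here is a made-up example):

    import base64, hashlib, hmac, struct, time

    def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Derive the current code from the shared seed and the current time window."""
        key = base64.b32decode(shared_secret_b32)
        counter = int(time.time()) // interval            # server and token compute the same value
        msg = struct.pack(">Q", counter)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation of the HMAC output
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return f"{code:0{digits}d}"

    # Because both sides hold the same seed, they produce identical codes in each window.
    print(totp("JBSWY3DPEHPK3PXP"))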

Another method for user authentication is a 'certificate'. Without going into technical details (which, BTW, can make even a seasoned IT guru's eyeballs roll back in her head!), a certificate is a bit like a digital version of a passport or driver's license: an object that is virtually impossible to counterfeit and that uniquely identifies the holder as the rightful owner of that 'certificate'. In the physical world, driver's licenses often carry a picture, the user's signature, and a thumbprint or certain biometric data (height, hair/eye color, etc.) Examination of the "license" in comparison to the person validates the identity. An online 'security certificate' [X.509 or similar] performs the same function. There are different levels of certificates, with the higher levels (Level 3 for instance) requiring a fairly detailed application process to ensure that the user is who s/he says s/he is. Use of a certificate, instead of just a simple username, offers a considerably higher level of security in the authentication process.

A certificate can then be associated with a password (or a two-factor authentication process) for any given website or other access area. There are a lot of details around this, and there is overhead in administering certificates in a large company – but they have been proven worldwide to be secure, reliable and useful. Many computers can be fitted with a 'card reader' that reads physical 'certificates' (where the certificate is embedded in a card, much like a credit card, that the user presents in order to log in).
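The same X.509 format that identifies users via client certificates is used every day to identify web servers, which makes it easy to see what a certificate actually contains. A small sketch using the Python standard library (the host name is just a placeholder):

    import socket, ssl

    def peer_certificate(host: str, port: int = 443) -> dict:
        """Connect over TLS and return the decoded certificate the server presents."""
        context = ssl.create_default_context()   # also verifies the chain against trusted CAs
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()

    cert = peer_certificate("example.com")
    print(cert["subject"])    # who the certificate identifies
    print(cert["issuer"])     # which authority vouched for that identity
    print(cert["notAfter"])   # when the identity claim expires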

One can see that something as simple as presenting a card and then touching a fingerprint reader is very user-friendly and highly secure – a long way from simple usernames and passwords. The principle here is not to get stuck on details, but to understand that there are methods for greatly improving both security and usability, so that this aspect of Data Security – Access Control – is no longer an issue for an organization willing to take the effort to implement them. Some of these methods are not enormously complicated or expensive, so even small firms can make use of them.

Summary

In this part we have reviewed Access Control – one of the pillars of good Data Security. Several common methods, with their corresponding Security vs Usability aspects, have been discussed. Access Control is a vital part of any firm's security policy, and is the foundation of keeping your data under control. While there are many more details surrounding good Access Control policies (audits, testing of devices, revocation of users that are no longer authorized, etc.), the principles are easy to comprehend. The most important thing is to know that good Access Control is required, and that shortcuts or compromises can have disastrous results in terms of a firm's bottom line or reputation. The next part will discuss Network Security Controls – the vitally important aspect of Data Security where computers or other data devices are connected together – and how those networks can be secured.

Part 3 of this series is located here.

 

Data Security – An Overview for Executive Board members [Part 1: Introduction & Concepts]

March 16, 2015 · by parasam

Introduction

This post is a synthesis of a number of conversations and discussions concerning security practices for the digital aspect of organizations. These dialogs were initially with board members and executive-level personnel, but the focus of this discussion is equally useful to small business owners or anyone that is a stakeholder in an organization that uses data or other digital tools in their business: which today means just about everyone!

The point of view is high level and deliberately as non-technical as possible: not because those at this level lack technical competence, but rather to encompass as broad an audience as possible – and, as will be seen, because the biggest issues are not actually that technical in the first place; rather, they are issues of strategy, principle, process and the oft-misunderstood 'features' of the digital side of any business. The points that will be discussed are equally applicable to firms that primarily exist 'online' (who essentially have no physical presence to the consumers or participants in their organization) and those organizations that exist mainly as 'bricks and mortar' companies (who use IT as a 'back office' function just to support their physical business).

In addition, these principles are relevant to virtually any organization, not just commercial business: educational institutions, NGOs, government entities, charities, medical practices, research institutions, ecosystem monitoring agencies and so on. There is almost no organization on earth today that doesn't use 'data' in some form. Within the next ten years, the transformation will be almost complete: there won't be ANY organizations that won't be based, at their core, on some form of IT. From databases to communication to information sharing to commercial transactions, almost every aspect of any firm will be entrenched in a digital model.

The Concept of Security

The overall concept of security has two major components: Data Integrity and Data Security. Data Integrity is the aspect of ensuring that data is not corrupted by either internal or external factors, and that the data can be trusted. Data Security is the aspect of ensuring that only authorized users have access to view, transmit, delete or perform other operations on the data. Each is critical – Integrity can be likened to disease in the human body: pathogens that break the integrity of certain cells will disrupt normal function and eventually cause injury or death; Security is similar to the protection that skin and other peripheral structures provide – a penetration of these boundaries leads to a compromise of the operation of the body, or in extreme cases major injury or death.

While Data Integrity is mostly enforced with technical means (backup, comparison, hash algorithms, etc.), Data Security is an amalgam of human factors, process controls, strategic concepts, technical measures (encryption, virus protection, intrusion detection and the like) and the most subtle factor – one potentially dangerous to a good security model: the very features of a digital ecosystem that make it so useful can also make it highly vulnerable. The rest of this discussion will focus on Data Security, and in particular those factors that are not overtly 'technical', as there are countless articles on the technical side of Data Security. [A very important aspect of Data Integrity – BCDR (Business Continuity and Disaster Recovery) – will be the topic of an upcoming post; it's such an important part of any organization's basic "Digital Foundation".]
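To make the 'hash algorithms' point concrete: a cryptographic hash acts as a fingerprint of a file, and a change of even a single bit produces a different fingerprint. A minimal sketch (the file path is purely illustrative):

    import hashlib

    def file_fingerprint(path: str) -> str:
        """Compute a SHA-256 hash of a file; any change to its contents changes the hash."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Comparing a stored fingerprint with a freshly computed one reveals corruption or tampering.
    # print(file_fingerprint("quarterly_report.xlsx"))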

The Non-Technical Aspects of Data Security

The very nature of 'digital data' is an absolute boon to organizations in so many ways: communication, design, finance, sales, online business – the list is endless. The fantastic toolsets we now have in terms of high-powered smartphones and tablets, coupled with sophisticated software 'apps', have put modern business in the hands of almost anyone. This is based on the core of any digital system: the concept of binary values. Every piece of e-mail, data, bank account details or digital photograph is ultimately a series of digital values: either a 1 or a 0. This is the difference between the older analog systems (many shades of gray) and digital (black or white, only 2 values). This core concept of digital systems makes copying, transmission, etc. of data very easy and very fast. A particular block of digital data, when copied with no errors, is absolutely indistinguishable from the 'original'. While in most cases this is what makes the whole digital world work as well as it does, it also creates a built-in security threat. Once a copy is made, if it is appropriated by an unauthorized user it's as if the original was taken. The many thousands of e-mails that were stolen and then released by the hackers who compromised the Sony Pictures data networks are a classic example of this…

While there are both technical methods and process controls that can mitigate this risk, it’s imperative that business owners / stakeholders understand that the very nature of a digital system has a built-in risk to data ‘leakage’. Only with this knowledge can adequate controls be put in place to prevent data loss or unauthorized use. Another side to digital systems, particularly communication systems (such as e-mail and social media), is how many of the software applications are designed and constructed. Many of these, mostly social media types, have uninhibited data sharing as the ‘normal’ way the software works – with the user having to take extra effort to limit the amount of sharing allowed.

An area that presents a particular challenge is the 'connectedness' of modern data networks. The new challenge of privacy in the digital ecosystem has prompted (and will continue to prompt) many conversations, from legal to moral/ethical to practical. The "Facebook" paradigm [everything is shared with everybody unless you take efforts to limit such sharing] is really something we haven't experienced since small towns in past generations, where everybody knew everyone's business…

While social media is fast becoming an important aspect of many firms' marketing, customer service and PR efforts, these platforms must be integrated rather carefully in order to isolate the 'data sharing' systems from the internal business and financial systems of a company. It is surprisingly easy for inadvertent 'connections' to be made between what should be private business data and the more public social media facet of a business. Even if a direct connection is not made between, say, the internal company e-mail address book and an external Facebook account (a practice that unfortunately I have witnessed on more than one occasion!), the inappropriate positioning of a firm's Twitter client on the same sub-network as its e-mail servers is a hacker's dream: if they are able to compromise the Twitter account, it will usually take a clever hacker only minutes to 'hop the fence' and gain access to the e-mail server.

Many of the most important issues surrounding good Data Security are not technical, but rather principles and practices of good security. Since human beings are ultimately a significant actor in the chain of entities that handle data, these humans need guidance and effective protocols, just as computers need well-designed software that protects the underlying data. Access controls (from basic passwords to sophisticated biometric parameters such as fingerprints or retina scans); network security controls (for instance requiring at least two network administrators to collectively authorize large data transfers or deletions – which would have prevented most of the Sony Pictures data theft/destruction); compartmentalization of data (the practice of controlling both storage and access to different parts of a firm's digital assets in separate digital repositories); and the newcomer on the block, cloud computing (essentially just remote data centers that host storage, applications or even entire IT platforms for companies) – all of these are areas that have very human philosophies and governance issues that are merely implemented with technology.

Summary

In Part 1 of this post we have discussed the concepts and basic practices of digital security, and covered an overview of Data Security. The next part will discuss in further detail a few of the most useful parts of the Data Security model, and offer some practical solutions for good governance in these areas.

Part 2 of this series is located here.
