The Problem with Data Security

Data breaches are a huge problem, and one that’s still growing. In the United States alone, over 20 million records were stolen in the first two months of 2018, and those are only the breaches we know about because they were disclosed.

It’s fair to wonder why this is happening, and what the true scale of the issue might be. As technology advances, it allows companies to store and analyze ever more information. Shouldn’t those advances also make data more secure? Yet we seem to be moving in the opposite direction.

There are several reasons for this, and most of them come down to the choices that people make. Keeping data secure requires thinking that doesn’t come naturally, and to eradicate the problem, this unnatural thinking needs to become a learned behavior. That goes some way toward explaining an ever-widening gap.

Judging risk

People have trouble quantifying risk, due in part to misjudging probability. We either overestimate or underestimate risk, often by orders of magnitude. It’s worse when the relationship between cause and effect is hard to understand.

People are good at understanding events they encounter regularly – where they can see, and therefore learn, the connection between cause and effect. Run across a busy street without looking, and you are likely to be hit by a car. Leave an open fire unattended, and a burning ember might burn down your house. We don’t have to experience the effect directly to understand the risks, because the causes are visible and obvious.

By contrast, it is easy to engage in poor security practices for years and not suffer any perceptible consequences. We leave data unsecured and unmonitored, but it is still available to us. Our business does not disappear overnight. We may not even appear in the press. Attackers can remain undetected on our networks for months or even years, so there is no visible effect, and no one even looks for a cause. When a breach does occur, if it isn’t obvious what happened, then no one needs to think about why.

Perhaps because of their unseen nature, attackers are often seen as sorcerers with sinister magical powers, against whom nothing can be done. They hide in shadows and evade detection, taking what they want at will. The lack of evidence of their presence reinforces the sense that compromise is improbable, and we sit on our hands. This is perhaps an intuitive reaction until you understand that most attackers succeed through persistence rather than sophistication, and that efforts to reduce the time it takes to detect a successful attack might be better focused on containing the damage.

The correct response is to build defense in depth and to prepare to monitor and block using basic tools that are already widely available to us. This illustrates the “unnatural thinking gap”. When people can’t judge risks accurately, it’s hard to motivate them to take actions that will reduce the hazard. They don’t see a reason to improve their security practices or to insist on secure products and services.

Convenience over security

When people don’t clearly understand risks, they favor convenience and user-friendliness over security. Security measures are traditionally perceived as user-hostile, at least to some extent. We’ve all worked on a project that came to a complete stop because firewall rules hadn’t been changed or permissions hadn’t been properly assigned. Security controls, and the policies that dictate them, don’t move at the speed of business. One of the reasons the cloud has grown so quickly is that it allows users to circumvent security.

Given a choice between doing a job quickly but with risk and doing it more slowly but securely, most people will prefer speed. If they can find a way to jump through fewer hoops, they will take it, even if it makes them less secure.

When configuring a device or system, a system administrator might leave default settings in place. One major breach occurred because a network administrator configured a device to connect an insecure network to a highly secure one for convenience. He had probably done it several times before with no issues. The shortcut helped him do his job, but it also created a door for attackers to steal intellectual property, all to avoid a 20-minute data-transfer workaround. It is not always inexperience that leads to security failings. Again, we see the unnatural thinking gap.

Complexity

Features are selling points in software. The more features an application has, the easier it is to market. However, complexity makes bugs more likely. The risk grows faster than the number of features, because many bugs arise from the interaction of different features, and the number of possible interactions grows roughly with the square of the feature count. Not every bug is a security hole, but a significant proportion of them can be exploited to break into a system or expose information that should remain hidden.

Some features are inherently risky. The ability to run scripts, access the file system, or communicate over the Internet significantly increases the chance that bugs will turn into security issues. Sometimes those abilities are part of the software’s basic functionality, but sometimes they’re just impressive-looking extras the software could do without.

Look at the list of permissions that a typical Android application asks for when you install it. Does a file uploader really need access to your contacts and calendar? But it might let you notify a colleague when you upload a file, so it’s a convenience. This is all great fun, or at least functionality, until your contacts all get an invitation from you to sign up for the service; and there is our gap.
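To make the over-broad-permission pattern concrete, here is a minimal Kotlin sketch of a hypothetical file-uploader screen requesting permissions at runtime. The class name, request code, and the “notify a colleague” and “add a reminder” conveniences are assumptions for illustration; the point is that only the first permission relates to the app’s core job.

```kotlin
import android.Manifest
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat

// Hypothetical file-uploader screen: only storage access is needed for its core job.
// The contacts and calendar permissions exist purely for "convenience" features,
// and each one widens the blast radius if the app or its back end is compromised.
class UploadActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        ActivityCompat.requestPermissions(
            this,
            arrayOf(
                Manifest.permission.READ_EXTERNAL_STORAGE, // core: read the file to upload
                Manifest.permission.READ_CONTACTS,         // extra: "notify a colleague"
                Manifest.permission.READ_CALENDAR          // extra: "add a reminder"
            ),
            PERMISSION_REQUEST_CODE
        )
    }

    companion object {
        private const val PERMISSION_REQUEST_CODE = 42
    }
}
```

Each permission granted here is one more thing the user must trust the vendor to handle safely, which is exactly the trade of convenience against exposure described above.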

Trust

People usually trust other people. Society wouldn’t work very well if we were always suspicious of each other. Con artists throughout history have taken advantage of people’s trust, which can easily become gullibility. Someone who claims a role of authority has an especially easy time of it.

Phishing emails and malicious pop-ups play on trust. “Urgent warnings” get people to download malware. Employees carry out dubious money transfers when they get email claiming to come from the boss. Messages impersonating friends, claiming to have found an interesting website, draw people in without prompting questions. Malicious applications get into app stores, and customers assume they must be safe.

A few years ago, most online scams were poorly written and should not have fooled anyone, but a message that reaches a million people will always find some suckers. Today, “spear-phishing” messages are so well crafted that they can deceive even security experts. We are trained to respond to emails and the visual cues within them. We are not trained to withhold trust and be suspicious: to think across the gap.

What to do?

Getting people to follow effective security practices is frustrating. Getting security leaders and business executives to assess risk correctly and prioritize resources accordingly is difficult. These issues can lead to thinking that the job is basically hopeless.

Giving up is never a solution, but neither is continuing to execute a failed plan. A better answer is to take human psychology into account when creating security strategies and crafting security measures. They must promote ease of use as well as technical effectiveness. It is necessary to push back against people’s tendency to be lazy, but there are right and wrong ways to do it.

“Security theater” is the name for practices that shout, “We’re doing security!” while accomplishing very little. In the technology realm, making users change their passwords every month is an example. It doesn’t protect anything, and it encourages people to choose simple passwords and/or keep a written copy at their desks.

What is needed is the opposite of security theater: practices that are unobtrusive but provide a real benefit, embedded in the business through culture and repetition until secure processes become second nature. It is a difficult challenge to meet, and it requires real leadership.

The measures that will help the most are the ones that are easy to follow, easy to understand, and easy for executive sponsors to get behind. If people turn them into habits, they will be safer with very little effort, and we will have successfully crossed the gap.

Social engineering usually means manipulations that dupe people, but it can be a force for good. Techniques that make it feel natural to follow safe practices will have a positive effect. The trick is to make people feel that they are partners in the effort, not untrusted adversaries.

Accomplishing these goals takes original thinking on both the technical and human sides. The technology must be easy to use while doing its job. Training needs to instill good habits and make them feel natural. With advances on both sides, it is still possible to turn the tide.