Re-humanising cyber security

The objectives of hackers and cybercriminals haven’t changed over the years: they still work to steal, exploit or disrupt. Their methods, however, have changed – because the weakest link is no longer in the machine, it’s in the user.

Cyber security is a top global concern. Only extreme weather events and natural disasters were deemed greater risks by the World Economic Forum in its 2018 Global Risks Report.

Cyber security is an ever-increasing problem for Australian organisations across all sectors, yet according to computer scientist Dr Nik Thompson, their countermeasures keep failing for the same reason: they disregard the impact of human behaviour.

“Throwing technology at the problem isn’t working because many cyber incidents are caused by the actions of users,” Thompson explains.

“Phishing remains the most common attack – in fact, many of the largest data breaches in 2018 resulted from employees falling for phishing scams.

“Another concern is that employees are increasingly using smartphones to access and store organisational information. As the use of smartphones as a mainstream computing platform grows, so will the extent and severity of malware and attacks.”

Thompson also has a background in cognitive science, and he combines his two areas of expertise to study the links between human behaviour and cyber security.

He says that successful cyber criminals are manipulating people through social engineering – a method made famous by notorious US hacker Kevin Mitnick, who obtained access to more computer systems by tricking users than by cracking into accounts.

“Attackers adapt their methods to [target] the human elements in the security chain. For example, email or phone scammers will exploit people by creating time pressure. They might stress that action is required to avoid imminent adverse consequences.

“Our decision-making processes change when we’re under time pressure. Rather than complete the process of reasoning, we adopt a heuristic model and rely on a gut feeling.”

“Plus, many users struggle constantly with conflicting goals. They have time pressures, heavy workloads and situations where they’re splitting their attention. Processes like security can be drowned out.”

He points out that such circumstances can be a collective phenomenon for particular professions and environments, such as in hospitals and law firms, where workplace culture and the nature of the work can create risk.

These can also link with basic social norms. Sharing passwords, for example, remains a common behaviour in many workplaces. These practices exist partly because we want to avoid offending colleagues, or even complete strangers, by implying they’re not trustworthy.

“Another example is tailgating in building or room access. If we swipe through a doorway when there’s someone directly behind us, we don’t close the door in their face, or tell them to stop and swipe their own card.

“There’s a strong behavioural influence exerted by the social environment. We need to consider social norms, or organisational and national culture, as part of deeper research into cyber security.”

The malfunctions in our mental models

It seems that our typically human mental models – which combine our general knowledge with our perception of a situation – are our downfall. Add to this some basic forms of ‘cognitive bias’, and you can see why typical human behaviours can undermine the most stringent security tech.

“Many users believe they won’t be targeted because they’re not important enough, that they don’t have useful information,” Thompson says.

“And there’s an interesting paradox in that tech-savvy users take more risks.”

He also points out the need for research into how other demographic differences influence information security behaviour. An example could be whether users of a particular generation are more vulnerable.

“Scammers exploit our perceptions of authority, and some groups could have a greater tendency to not question communications from apparent authorities – such as the tax office.

“Instead of relying solely on technical countermeasures like filters and blocking, it will be more effective to apply our understanding of human behaviours.”

By drawing on the evidence about human behaviours, he says, organisations will better understand their staff – how they assess cyber risks and make decisions. This will enable the design of better workplace practices and systems.

So, we can boost cyber security right now without touching a single bit of tech?

“Yes, I absolutely agree with that,” Thompson says.

“Organisations can immediately boost their security through education and training that supports IT users. But it has to be an interactive process that is actionable and provides feedback.

“That’s not to say technical protections aren’t part of the solution, but security is mostly about behaviour change.”
