When fiction’s most famous detective, Sherlock Holmes, needed to solve a crime, he turned to his sharp observational skills and deep understanding of human nature. He used this combination more than once when facing off against his arch-nemesis, Professor James Moriarty, a villain adept at exploiting human weaknesses for his gain.
This classic battle mirrors today’s ongoing fight against cybercrime. Like Moriarty, cybercriminals use cunning strategies to exploit their victims’ psychological vulnerabilities. They send deceptive emails or messages that appear to be from trusted sources such as banks, employers, or friends. These messages often contain urgent requests or alarming information to provoke an immediate response.
For example, a phishing email might claim there has been suspicious activity on a victim’s bank account and prompt them to click on a link to verify their account details. Once the victim clicks the link and enters their information, the attackers capture their credentials for malicious use. In other cases, individuals are manipulated into divulging confidential information, compromising their own or their company’s security.
Holmes had to outsmart Moriarty by understanding and anticipating his moves. Modern cybersecurity teams and users must stay vigilant and proactive to outmanoeuvre cybercriminals who continuously refine their deceptive tactics.
What if those trying to prevent cybercrime could harness Holmes’s skills? Could those skills complement existing, more data-driven ways of identifying potential threats? I am a professor of information systems whose research focuses on, among other things, integrating data science and behavioural science through a sociotechnical lens to investigate the deceptive tactics used by cybercriminals.
Recently, I worked with Shiven Naidoo, a Master’s student in data science, to understand how behavioural science and data science could join forces to combat cybercrime.
Our study found that, just as Holmes’s analytical genius and his sidekick Dr John Watson’s practical approach were complementary, behavioural scientists and data scientists can collaborate to make cybercrime detection and prevention models more effective.
Combining disciplines
Data science uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data.
When its powerful algorithms are applied to large, complex datasets, they can identify patterns that indicate potential cyber threats. Predictive analysis helps cybersecurity teams anticipate and prevent large-scale attacks, for instance by detecting anomalies in sentence structure that give scams away.
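As a toy illustration only (not the method used in our study), a simple anomaly check on sentence-structure features can be sketched in Python. The features, corpus and threshold below are all invented for the demo:

```python
import statistics

def features(text):
    """Extract a few simple sentence-structure features from a message."""
    words = text.split()
    return {
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "exclamations": text.count("!"),
        "caps_ratio": sum(c.isupper() for c in text) / max(len(text), 1),
    }

def flag_anomalies(messages, threshold=1.5):
    """Flag messages with any feature more than `threshold` standard
    deviations from the corpus mean (a basic z-score test; the low
    threshold suits this tiny demo corpus)."""
    feats = [features(m) for m in messages]
    flagged = set()
    for key in feats[0]:
        values = [f[key] for f in feats]
        mean = statistics.mean(values)
        stdev = statistics.pstdev(values) or 1.0
        for i, v in enumerate(values):
            if abs(v - mean) / stdev > threshold:
                flagged.add(i)
    return sorted(flagged)

corpus = [
    "Hi, attached is the agenda for Monday's meeting.",
    "Thanks for your help with the report last week.",
    "Lunch on Friday? Let me know what suits you.",
    "URGENT!!! VERIFY YOUR ACCOUNT NOW OR IT WILL BE LOCKED!!!",
]
print(flag_anomalies(corpus))  # → [3]
```

Real detection systems use far richer features and trained models, but the underlying idea is the same: messages whose structure deviates sharply from the norm deserve scrutiny.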
However, relying solely on data science often overlooks the human factors that drive cybercriminal behaviour.
The behavioural sciences study human behaviour. They consider the principles that influence decision-making and compliance. We drew extensively from US psychologist Robert Cialdini’s social influence model in our study.
This model has been applied in cybersecurity studies to explain how cybercriminals exploit psychological tendencies.
For example, cybercriminals exploit the human tendency to obey authority by impersonating trusted figures to spread disinformation. They also exploit the principles of urgency and scarcity to prompt hasty actions. Social proof – the tendency to follow the actions of those similar to us – is another tool, used to manipulate users into complying with fraudulent requests. For instance, cybercriminals might create fake reviews or testimonials that prompt users to fall for a scam.
Combining insights
We adapted the social influence model to detect cybercriminal tactics in scam datasets by combining behavioural and data science. Scam datasets consist of unstructured data, which includes complex text data such as phishing emails and fake social media posts. Our data consisted of known scams such as phishing and other malicious activities. It came from FraudWatch International’s Cyber Intelligence Datafeed, which collects information on cybercrime incidents.
It’s tough to draw insights from unstructured data. Models can’t easily distinguish between meaningful data points and those that are irrelevant or misleading (what we call “noisy data”). Data scientists rely on feature engineering to cut through the noise. This process identifies and labels meaningful data points using knowledge from other fields.
We used domain knowledge from behavioural science to engineer and label meaningful features in unstructured scam data. Scams were labelled based on how they used Cialdini’s social influence principles, transforming raw text data into meaningful features. For example, a phishing email might use the principle of urgency by saying “your account will be locked in 24 hours if you do not respond!”. The raw text is transformed into a meaningful feature labelled “urgency,” which can be analysed for patterns. Then we used machine learning to analyse and visualise the labelled dataset.
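To illustrate the idea (the keyword patterns below are our own invention for this sketch, not the labelling scheme used in the study), raw text can be mapped to principle labels with simple rules:

```python
import re

# Illustrative keyword patterns for three of Cialdini's principles;
# a real labelling scheme would be far richer than this.
PRINCIPLE_PATTERNS = {
    "urgency":   r"\b(urgent|immediately|24 hours|expires?)\b",
    "authority": r"\b(bank|police|ceo|official|administrator)\b",
    "scarcity":  r"\b(limited|last chance|exclusive)\b",
}

def label_principles(text):
    """Return the set of principle labels whose patterns match the text."""
    lower = text.lower()
    return {name for name, pattern in PRINCIPLE_PATTERNS.items()
            if re.search(pattern, lower)}

email = ("Your account will be locked in 24 hours if you do not "
         "respond! This is an official notice from your bank.")
print(sorted(label_principles(email)))  # → ['authority', 'urgency']
```

A real labelling scheme draws on much deeper behavioural-science knowledge than a handful of keywords, but the output is the same kind of structured feature that a machine learning model can consume.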
The results showed that certain social influence principles such as “liking” and “authority” were frequently used together in scams. We also found that phishing scams often employed a mix of several principles. This made them more sophisticated and harder to detect.
The results gave us valuable insights into how often different types of social influence principles (such as urgency, trust, familiarity) are exploited by cybercriminals, as well as where more than one type is used at a time. Analysing unstructured text data like phishing emails and fake social media posts allowed us to identify patterns that indicated manipulative tactics.
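Counting such co-occurrences is straightforward once messages are labelled. A minimal sketch, using made-up labels rather than our dataset:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-message principle labels, standing in
# for a labelled scam dataset.
labelled_scams = [
    {"authority", "liking"},
    {"authority", "liking", "urgency"},
    {"urgency", "scarcity"},
    {"authority", "liking"},
]

pair_counts = Counter()
for labels in labelled_scams:
    # Count each unordered pair of principles appearing in the same scam.
    pair_counts.update(combinations(sorted(labels), 2))

print(pair_counts.most_common(1))  # → [(('authority', 'liking'), 3)]
```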
Overall, our work yielded high-quality insights from complex scam datasets.
Further applications
It’s important to mention that our dataset was not exhaustive. However, we believe our results are valuable for mining insights from complex cybercrime data. This kind of analysis can be used by cybersecurity professionals, data scientists, cybersecurity firms and organisations involved in cybersecurity research. It can help improve automated detection systems and inform targeted training.
Rennie Naidoo is a member of the AIS (Association for Information Systems), SAICSIT (the South African Institute for Computer Scientists and Information Technologists), and ISACA (Information Systems Audit and Control Association).
This article was originally published on The Conversation. Read the original article.