Social engineering scams are growing in sophistication as fraudsters increasingly use coercion and manipulation to exploit victims. According to the FBI's 2021 IC3 Report, phishing scams account for nearly 22 percent of all data breaches, making phishing one of the most prevalent cybercrimes. In 2021, nearly 83% of companies experienced phishing attacks.
We sat down with Soudamini Modak, director of fraud and identity strategy at LexisNexis Risk Solutions, to explore the challenges businesses face in detecting social engineering scams, as well as the role of behavioral biometrics in fraud prevention. Soudamini also offers advice for individuals on protecting themselves from falling victim to social engineering scams and discusses the actions that law enforcement and financial institutions can take to combat the rise of these scams.

What are social engineering scams and how do they differ from technical hacks on networks or software?
Social engineering scams are a category of fraud in which fraudsters rely on gaining the victim's trust to complete the desired activity, such as divulging account information or transferring funds. As technology solutions have made it harder to hack and steal from institutions, fraudsters have combined other tactics, such as coercion and manipulation, with technological innovations such as malware, website overlays and remote access software to exploit victims.
Historically, technical hacks focused more on organizations while social engineering scams targeted mostly individual victims. That difference is now eroding, as it is sometimes more cost-effective to bypass barriers through coercion than through technical exploits. As such, technical hacks increasingly incorporate social engineering techniques to gain an edge through vectors such as business email compromise. Unfortunately, it can be hard to track the full extent of social engineering scams and technical hacks, as both types of crime are underreported; victims are often too embarrassed to report them.
What makes social engineering scams more difficult to detect and fight?
With social engineering scams, the challenge goes beyond educating potential victims about fraud tactics: it can be difficult for organizations to distinguish a genuine transaction from a social engineering scam transaction. From a fraud prevention perspective, victims will generally be in their usual location and use their typical devices. They will also be able to pass credential checks. To make matters worse, victims are often convinced that they are doing the correct thing. In these scenarios, businesses need a well-designed scam detection strategy that considers aspects like the past behavior of the transacting user, behavioral biometrics patterns, and whether the person transacting is a new identity in their system.
Behavioral biometrics is a relatively new defense tactic that financial institutions, retailers, and other organizations can use to help detect scams. It analyzes in the background the way a user interacts with a device or online application, looking at phone movement, touchscreen behavior, typing rhythm, length of time on a page, and other interactive gestures. It uses this information to develop a deeper understanding of the digital identity behind the action and their typical movements to identify deviations that might be indicative of fraudulent activity. Businesses can leverage the rich insight and real-time context from behavioral analytics to make better fraud decisions that support consumers and protect them across their digital experience.
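The idea of learning a user's typical interaction rhythm and flagging deviations can be illustrated with a toy sketch. This is not how any vendor's product works; the timing data, threshold, and function names are illustrative assumptions, and a real system would model many more signals than typing cadence.

```python
# Toy behavioral-biometrics sketch: learn a user's typing rhythm from
# historical sessions, then flag sessions that deviate sharply from it.
# All values and thresholds are illustrative, not production settings.
from statistics import mean, stdev

def keystroke_intervals(timestamps):
    """Convert key-press timestamps (seconds) into inter-keystroke intervals."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def build_baseline(sessions):
    """Baseline rhythm: mean and standard deviation of the user's
    inter-keystroke intervals across past sessions."""
    intervals = [iv for ts in sessions for iv in keystroke_intervals(ts)]
    return mean(intervals), stdev(intervals)

def is_anomalous(session_ts, baseline, z_threshold=3.0):
    """Flag a session whose average cadence deviates from the user's
    baseline by more than z_threshold standard deviations."""
    mu, sigma = baseline
    session_mean = mean(keystroke_intervals(session_ts))
    z = abs(session_mean - mu) / sigma if sigma else 0.0
    return z > z_threshold

# Historical sessions: this user types with roughly 120 ms between keys.
history = [[0.00, 0.12, 0.25, 0.37, 0.49],
           [0.00, 0.11, 0.23, 0.36, 0.47]]
baseline = build_baseline(history)

# A slow, hesitant session (e.g. someone typing out dictated instructions)
# deviates sharply from the learned rhythm.
print(is_anomalous([0.0, 0.9, 1.7, 2.6, 3.4], baseline))  # True
```

In practice, such a signal would be one input among many; on its own, an unusual rhythm might simply mean the user is tired or on a new keyboard.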
Can you give examples of some of the fastest-growing social engineering scams, such as authorized push payment, romance, investment, and impersonation fraud?
Social engineering and other types of scams are growing everywhere, as it is getting more cost-effective for scammers to scale increasingly sophisticated attacks. Whether it is a phishing attack, romance scam, or investment scam, they are all getting more common and harder to spot. One of the earlier barriers to entry for criminals was language skill, but advances in AI chatbots have given criminals a low-cost way to bypass the language gap and further automate their attacks with seemingly natural language. Today, conversing with a criminal over text messages or email can seem so genuine that users can easily fall victim to a scam.
What can individuals do to protect themselves from falling victim to social engineering scams?
Individuals should be careful with the links they click on and the emails they open. Criminals are getting better at spoofing email addresses, website domains, and the phone numbers from which they are calling. We often see emails seemingly from a well-known business with slight spelling changes in the name or a website link that is not secure. It is important to always pause for a few seconds and consider the source before clicking a link. When in doubt, check it out: call the organization directly to verify whether the urgent communication is genuine.
Fraudsters are getting better at weaponizing leaked, breached, and socially available personal information for their gain. It is easy to get caught up in the moment when someone is pulling at your heartstrings. If someone pressures you to make a quick decision, ask yourself whether it is reasonable that your bank, friend, or family member would request this of you. If an investment seems too good to be true, if you are told to lie to your bank when withdrawing money, or if you are asked to send money or use gift cards to pay for services, bail, or taxes, it is very likely a scam.
Just as individuals have to be careful, banks and merchants also have to take ownership of educating their customers through their websites and apps.
What actions can law enforcement and financial institutions take to combat the rise of social engineering scams?
Organizations need to take a holistic approach, educate more about the latest scam trends, and act fast, as fraud begets more fraud. There are technologies that can help, especially when integrated into wider solutions, such as detecting the criminals’ tools or signs of coercion using behavioral biometrics, viewing the risk associated with the sender-beneficiary combination as a whole, and looking for signs of fraud on both sides of the transaction equation. Is the victim showing signs of coercion in their behavior? Has there been an increase in customers paying this beneficiary? Is the transaction amount unusual? Is there any other digital intelligence linking either the sender or beneficiary to high-risk or fraudulent events? Businesses should adopt solutions that leverage intelligence from contributory networks. On their own, individual indicators of fraud might not be enough to stop scams, but when combined into a network of intelligence, they make a difference that scales.
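The layered questions above can be sketched as a toy rule-based risk score that combines independent signals. The field names, weights, and thresholds below are illustrative assumptions, not any institution's actual logic; production systems would use tuned models and network intelligence rather than fixed rules.

```python
# Toy multi-signal scam risk score: each question from the text becomes
# one weighted rule. Weights and thresholds are illustrative only.
def scam_risk_score(txn, profile):
    score = 0
    # Is the sender showing signs of coercion in their behavior
    # (e.g. a behavioral-biometrics anomaly flag)?
    if txn.get("behavioral_anomaly"):
        score += 3
    # Is this a first-time payment to this beneficiary?
    if txn["beneficiary"] not in profile["known_beneficiaries"]:
        score += 2
    # Is the transaction amount unusual for this sender?
    if txn["amount"] > 5 * profile["avg_amount"]:
        score += 2
    # Does network intelligence link the beneficiary to prior fraud?
    if txn.get("beneficiary_flagged"):
        score += 4
    return score

profile = {"known_beneficiaries": {"ACME-123"}, "avg_amount": 80.0}
txn = {"beneficiary": "NEW-999", "amount": 950.0,
       "behavioral_anomaly": True, "beneficiary_flagged": False}
print(scam_risk_score(txn, profile))  # 3 + 2 + 2 = 7
```

The point of the sketch is the one made in the text: any single indicator (a new beneficiary, a large amount) is weak on its own, but combining signals, especially those shared across a contributory network, produces a score strong enough to act on.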
###