5 social engineering ploys popular with scammers

Imagine a world where a co-worker’s request to click a video link could be a trap, where a familiar voice on the phone could be a digital illusion, and where a text from a trusted friend might come from a fabricated persona. Welcome to social engineering in 2024, one of the most formidable threats exploiting human social frailties, such as our tendency to act impulsively under stress. Email phishing, voice vishing and text-based smishing are well-known social engineering ploys. Yet fraud has evolved: scammers have upped their game, deploying deceptive practices that blend AI-fueled tactics with social manipulation.

Understanding these social engineering tactics and how they manipulate human behavior is the first step toward shielding against threats that often fly under the radar.

1. Business email compromise

Business email compromise (BEC), a category that includes CEO fraud, accounted for $2.7 billion in losses across 21,832 complaints filed in a single year, according to the FBI’s Internet Crime Complaint Center (IC3).

BEC is a sophisticated scam that tricks an individual (usually in accounting) into transferring funds to an attacker’s account. This is typically achieved by impersonating a high-ranking company official, such as the CEO, and making a fraudulent request for a wire transfer. The power of BEC lies in its exploitation of trust and authority. Attackers meticulously research their targets, often using social media and corporate websites to gather background information. The email used in the scam closely mimics a legitimate one, and attackers often hijack real email accounts through phishing or credential stuffing.
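To make the red flags concrete, here is a minimal, hypothetical sketch of one common defensive check against executive impersonation: flagging inbound mail whose display name matches a known executive but whose sending domain is not the company’s own. The company domain, executive roster and function name are illustrative assumptions, not part of any specific mail-security product.

```python
# Hypothetical sketch of a simple BEC red-flag check: an inbound message
# whose From display name matches a known executive but whose address
# resolves to an external domain is a classic impersonation pattern.
# The executive list and company domain below are illustrative assumptions.
from email.utils import parseaddr

COMPANY_DOMAIN = "example.com"                # assumed internal domain
EXECUTIVE_NAMES = {"jane doe", "john smith"}  # assumed executive roster


def looks_like_exec_impersonation(from_header: str) -> bool:
    """Return True if the display name claims to be an executive
    while the sending address uses a non-company domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    name_matches_exec = display_name.strip().lower() in EXECUTIVE_NAMES
    return name_matches_exec and domain != COMPANY_DOMAIN


if __name__ == "__main__":
    # A spoofed "CEO" mail sent from a lookalike external account.
    print(looks_like_exec_impersonation('"Jane Doe" <jane.doe@examp1e-corp.net>'))  # True
    # A legitimate internal message.
    print(looks_like_exec_impersonation('"Jane Doe" <jane.doe@example.com>'))       # False
```

Real mail-filtering products layer many such signals (lookalike domains, reply-to mismatches, authentication failures); this check illustrates only the display-name-versus-domain pattern described above.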

In a striking 2022 incident, a multinational corporation fell victim to a BEC scam. The fraudster, masquerading as the company’s CEO, targeted a junior finance officer via email. The message demanded an urgent wire transfer for what was claimed to be a sensitive acquisition deal. The convincingly crafted email prompted the officer to bypass standard verification processes, leading to the unauthorized transfer of $1.2 million to an offshore account.


2. Pretexting/impersonations

Pretexting is a form of social engineering in which attackers fabricate scenarios to obtain sensitive information under false pretenses. This tactic often involves impersonating authority figures, such as law enforcement or company executives, or posing as technical support personnel. The success of pretexting relies heavily on the attacker’s ability to appear convincing and authoritative enough to manipulate the victim into divulging confidential information.

The techniques used in pretexting are diverse and can range from simple phone calls to elaborate schemes involving multiple actors and props. For instance, an attacker might call an employee posing as an IT staffer, claiming an issue with the company’s network that requires immediate password verification. This scenario played out at MGM Resorts last year, shutting down Las Vegas casino operations and making national news.

3. Deepfake phishing

In deepfake phishing attacks, scammers use manipulated audio, video, or texts to impersonate individuals or entities. For instance, they might create a video of a CEO issuing urgent instructions for fund transfers or confidential data sharing. Deepfakes can be delivered via email, social media, or through direct messaging platforms, leveraging the perceived authenticity to trick victims into compliance.

One notable incident of deepfake phishing was reported in 2023, involving a deepfake video in which a company’s CEO appeared to instruct the finance team to initiate a money transfer to a vendor. Sent via a compromised email account, the video appeared convincing enough to bypass initial scrutiny. Believing the request to be legitimate, the finance team transferred $243,000 to the scammer’s account.

4. The long game

The “long game” fraud in social engineering is a methodical strategy in which attackers gradually build trust with their target over an extended period. These long-game tactics involve patience and persistent communication, often spanning months or even years. To initiate a rapport and sustain interactions, attackers typically use AI-generated profiles (sock puppet accounts) that appear credible, complete with backstories and social media footprints. These profiles engage with the target on shared interests or professional matters, slowly ingratiating themselves. An example of the long-game ploy, which began with spear phishing, was cited in a CISA advisory involving the Russian threat actor Star Blizzard.


5. AI-persona manipulation

A recent trend in social engineering exploits the increasing reliance on automation and AI in everyday tasks. AI-persona manipulation involves creating AI-generated personas that interact with targets through automated systems such as chatbots or virtual assistants. Tools for creating these AI personas abound; marketers use them to better understand target audiences. Programmed to mimic human conversational patterns and behaviors, these AI personas can appear genuine and trustworthy. In the hands of threat actors, they become insidious: the personas integrate seamlessly into platforms where potential victims have lowered their guard, making for a sophisticated social engineering ploy.

Mitigation strategies and best practices

Organizations must adopt a multifaceted cybersecurity approach to combat advanced social engineering techniques. This includes implementing endpoint threat detection, following basic security protocols such as multi-factor authentication, using long, unique passwords, and deploying the fundamental defenses that create barriers against cyberattacks.
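As a minimal sketch of one of those controls, the example below shows time-based one-time password (TOTP) verification with the open-source pyotp library; the user name, issuer and function names are illustrative placeholders rather than a production MFA design.

```python
# Minimal sketch of TOTP-based multi-factor authentication using the
# pyotp library. Secret handling and names are illustrative placeholders,
# not a production design.
import pyotp


def enroll_user() -> str:
    """Generate a per-user TOTP secret, to be stored server-side and
    shared with the user's authenticator app (e.g., via a QR code)."""
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="alice@example.com", issuer_name="ExampleCorp"  # assumed values
    )
    print("Scan this provisioning URI with an authenticator app:", uri)
    return secret


def verify_login(secret: str, submitted_code: str) -> bool:
    """Check the six-digit code the user typed against the current
    TOTP window, allowing one step of clock drift."""
    return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)


if __name__ == "__main__":
    secret = enroll_user()
    # In a real flow the code comes from the user's authenticator app;
    # here we generate it locally just to demonstrate verification.
    current_code = pyotp.TOTP(secret).now()
    print("MFA check passed:", verify_login(secret, current_code))
```

Even a second factor like this only raises the bar; as the incidents above show, attackers who can talk a victim into approving a transfer or resetting credentials can sidestep technical controls, which is why training matters.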

But technology alone is not enough. Employee awareness and training form the backbone of an effective defense strategy. Regular training programs that simulate social engineering threats let employees practice recognizing bogus phishing attempts and show the organization which employees are most susceptible. Businesses that conduct continuous cybersecurity awareness training can greatly reduce the risk of falling prey to social engineering scams.

Cyber threats constantly evolve, so defenses need to evolve accordingly. This includes testing disaster response plans and conducting regular security audits to ensure patches are up-to-date and effective against known threats, such as those reported by CISA.

Preparing for advanced social engineering tactics like BEC, pretexting, deepfake phishing and AI-persona manipulation presents unique challenges because each exploits basic human vulnerabilities. The key to defense lies in cybersecurity best practices, continuous employee education, and a culture of vigilance. By staying informed, organizations can significantly reduce the risk of falling victim to sophisticated social engineering attacks and safeguard their data, finances, and reputation.
