Social Engineering: Traditional Tactics and the Emerging Role of AI
Explore how AI is revolutionizing social engineering in cybersecurity. Learn about AI-powered attacks and defenses, and how this technology is transforming the future of security.
Explore how AI is revolutionizing social engineering in cybersecurity. Learn about AI-powered attacks and defenses, and how this technology is transforming the future of security.
Download this guide to delve into the most common LLM security risks and ways to mitigate them.
In-context learning
As users increasingly rely on Large Language Models (LLMs) to accomplish their daily tasks, their concerns about the potential leakage of private data by these models have surged.
[Provide the input text here]
[Provide the input text here]
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla, ut commodo diam libero vitae erat. Aenean faucibus nibh et justo cursus id rutrum lorem imperdiet. Nunc ut sem vitae risus tristique posuere.
Lorem ipsum dolor sit amet, Q: I had 10 cookies. I ate 2 of them, and then I gave 5 of them to my friend. My grandma gave me another 2boxes of cookies, with 2 cookies inside each box. How many cookies do I have now?
Title italic
A: At the beginning there was 10 cookies, then 2 of them were eaten, so 8 cookies were left. Then 5 cookieswere given toa friend, so 3 cookies were left. 3 cookies + 2 boxes of 2 cookies (4 cookies) = 7 cookies. Youhave 7 cookies.
English to French Translation:
Q: A bartender had 20 pints. One customer has broken one pint, another has broken 5 pints. A bartender boughtthree boxes, 4 pints in each. How many pints does bartender have now?
Lorem ipsum dolor sit amet, line first
line second
line third
Lorem ipsum dolor sit amet, Q: I had 10 cookies. I ate 2 of them, and then I gave 5 of them to my friend. My grandma gave me another 2boxes of cookies, with 2 cookies inside each box. How many cookies do I have now?
Title italic Title italicTitle italicTitle italicTitle italicTitle italicTitle italic
A: At the beginning there was 10 cookies, then 2 of them were eaten, so 8 cookies were left. Then 5 cookieswere given toa friend, so 3 cookies were left. 3 cookies + 2 boxes of 2 cookies (4 cookies) = 7 cookies. Youhave 7 cookies.
English to French Translation:
Q: A bartender had 20 pints. One customer has broken one pint, another has broken 5 pints. A bartender boughtthree boxes, 4 pints in each. How many pints does bartender have now?
Cybersecurity is constantly evolving, with threats becoming more sophisticated and defenses striving to keep up. Among these threats, social engineering is particularly insidious, exploiting human psychology to breach security systems. Traditionally, it has involved manipulating trust and exploiting human error, but AI is rapidly changing this approach.
Social engineering involves manipulating individuals into revealing confidential information or performing actions that compromise security. This ranges from phishing emails that trick users into clicking malicious links to elaborate schemes involving impersonation and psychological manipulation. As technology advances, attackers' methods evolve, with AI becoming a powerful tool for both cybercriminals and cybersecurity professionals.
The introduction of AI into social engineering marks a significant turning point. AI can create more convincing phishing emails, conduct advanced reconnaissance, and simulate human-like interactions to deceive victims more effectively. Conversely, this technology provides new opportunities for defense, enabling the development of sophisticated detection algorithms, predictive analytics, and automated responses to potential threats.
In this article, we’ll explore social engineering tactics and how AI is transforming the field. We will examine AI's dual role as both a threat and a defense mechanism and discuss the implications for the future of cybersecurity.
Social engineering is a manipulation technique that exploits human psychology to gain access to confidential information or perform unauthorized actions. Instead of breaking into systems using technical methods, social engineers use deceit to trick individuals into giving up sensitive information, such as passwords or financial details.
Social engineering relies on several psychological principles to deceive and manipulate victims:
Some examples of real-life scenarios illustrating social engineering attacks are:
In Q3 2023, social engineering accounted for 37% of successful attacks (source) on organizations and remained the biggest threat to private individuals, impacting 92% of them. Phishing constituted 54% of the techniques used against individuals in the same period. Social engineering attacks are highly effective because they exploit human nature rather than relying solely on technical weaknesses. Understanding the psychological manipulation techniques behind these attacks can help individuals and organizations better protect themselves against such threats.
Social engineering attacks exploit human psychology to manipulate individuals into revealing confidential information or performing actions that compromise security. These attacks often bypass technical defenses by targeting the weakest link in the security chain—humans.
Here's an overview of some traditional social engineering attack types, their psychological manipulation techniques, and real-world examples to illustrate their impact.
Phishing is a fraudulent attempt to obtain sensitive information by disguising as a trustworthy entity in electronic communications. Spear phishing is a more targeted form of phishing aimed at specific individuals or organizations. Malicious actors exploit trust and urgency, often creating a sense of fear or excitement to prompt immediate action.
An example of this is the 2020 Twitter incident, where high-profile Twitter accounts were compromised and the perpetrators posted scam tweets urging individuals to send bitcoin to a designated cryptocurrency wallet, promising that the sent amount would be doubled and returned as a charitable act. Within minutes of the initial tweets, over 320 transactions had occurred on one of the wallet addresses, accumulating more than $110,000 in bitcoin before Twitter removed the scam messages.
Baiting involves offering something enticing to lure victims into a trap, such as a free download or a physical USB stick labeled with an intriguing name. Curiosity and greed are the primary psychological triggers in baiting attacks.
For instance, a scam might promise a free item for completing a survey as shown above. While some offers are genuine, many are not and are used to trick people into actions they wouldn't normally take. Scammers might request a small shipping fee, distribute malware, or collect sensitive information. There is even malware designed to bait users.
Pretexting involves creating a fabricated scenario to obtain private information from the target. Attackers leverage trust and authority, often posing as someone in a position of power or with a legitimate need for information.
In the example above, the malicious actor, posing as Wells Fargo bank, included a link to the genuine Wells Fargo website. However, the sender neglected to effectively camouflage the originating email address.
Tailgating (or piggybacking) is a physical form of social engineering that involves following an authorized person into a restricted area without proper credentials. This form of attack exploits the politeness and helpfulness of individuals who hold doors open for others.
The attacker could pretend to be rummaging through a purse for an access card, claim to have forgotten their own card, or simply act friendly and follow behind the authorized person.
Quid pro quo attacks involve offering a service or benefit in exchange for information or access. Such attacks rely on the human tendency to reciprocate favors.
For instance, an attacker could pose as an IT support manager, offering to assist an employee in installing a new security software update. If the employee agrees, the criminal then walks them through the process of installing malware.
Scareware involves frightening the victim into buying or downloading unnecessary and potentially harmful software. This utilizes fear and urgency to prompt quick action without proper consideration.
A common scareware tactic is to display fake virus alerts, prompting users to download and pay for bogus antivirus software.
A watering hole attack involves compromising a website frequently visited by the target group to infect visitors with malware. It takes advantage of the trust users have in their regularly visited websites.
In 2021, the "Live Coronavirus Data Map" from the John Hopkins Center for Systems Science and Engineering was used to spread malware through watering hole attacks (source). Additionally, links promising a coronavirus tracking app have been sent to some Android phones, often via SMS or watering hole websites. Once downloaded, the app allows individuals, suspected to be based in Libya, to access the smartphone's camera, text messages, and microphone. The identified malware is a customized variant of SpyMax, a readily available commercial spyware.
Domain Name Server (DNS) spoofing or cache poisoning involves corrupting the DNS server to redirect traffic from legitimate sites to malicious ones. It exploits the trust users have in domain names and the seamless redirection to malicious sites.
For example, in 2010, Brazilian ISPs faced a significant DNS spoofing attack, redirecting users to malicious sites when accessing popular websites like YouTube and Gmail (News Source). Exploiting vulnerabilities in DNS caches, the attack affected millions of users. Malware disguised as essential software updates, like the fictitious "Google Defence," was distributed, compromising devices with Trojans, including SpyMax. The attackers also targeted network devices, exploiting security flaws in routers and modems to alter DNS configurations. The involvement of insiders in some cases highlighted the complexity of the threat landscape, emphasizing the need for robust cybersecurity measures for users and ISPs.
The use of AI in traditional social engineering has significantly boosted attack effectiveness. AI algorithms analyze vast data to personalize messages, exploiting psychological vulnerabilities. Specifically, generative AI is revolutionizing such attacks. AI-driven chatbots engage convincingly, adapting in real-time based on victims' responses. Cybercriminals are already using tools like ChatGPT to enhance the sophistication, speed, and scale of their exploits. This integration presents a formidable challenge for defenders, as AI-enhanced attacks become harder to detect and mitigate.
Deepfakes, created using advanced AI techniques like deep learning and GANs, enable realistic impersonation through fake images, audios, and videos. Emerging from a 2017 Reddit post, DeepFakes are now easily produced with open source models, allowing users to swap faces and alter appearances convincingly. This technology poses serious threats to privacy, democracy, and security by bypassing facial authentication, spreading fake news, and facilitating blackmail.
For example, Deep Video Portraits synthesizes a photo-realistic video portrait (example shown above) of a target actor that mimics the actions of a source actor, where source and target can be different subjects. The problem was posed as a video-to-video translation problem and solved using a space-time encoder-decoder deep neural network architecture, as shown below.
Recent technologies like VASA-1 by Microsoft can generate videos with audio, given only a static image and the required speech audio clip.
Generative AI chatbots have captivated the public in recent years while also posing significant challenges and potential risks. OpenAI's release of ChatGPT on November 30, 2022, sparked a tremendous public response, prompting Google to launch Gemini and Microsoft to introduce AI-powered Bing. These LLM-based chatbots generate synthetic media to improve content quality and professional communication.
Recently, GPT-4o has been introduced which demonstrates superior capabilities and has set a new benchmark in chatbot technology. While these advanced models help users with mundane tasks, cybercriminals also employ these models to curate more sophisticated emails for phishing attacks.
Traditional phishing attacks often contained easy-to-detect flaws, like grammatical errors when the malicious actors were not native speakers of the language of the victim. But, with such AI-powered tools, correcting grammar and writing more personalized messages for the victims has become easier than ever.
AI voice cloning has emerged as a sophisticated tool for social engineering attacks, leveraging advanced machine learning algorithms to replicate an individual's voice with startling accuracy. This technology poses significant security risks as cybercriminals can use cloned voices to impersonate trusted figures, such as company executives or family members, to manipulate targets into divulging sensitive information or authorizing fraudulent transactions.
Several online Voice Cloning softwares like ElevenLabs, MurfAI, and LOVO.ai are readily available, which makes it easier for cybercriminals to deceive victims. This accessibility gives rise to misinformation, like this widely circulated DeepFake video featuring a synthesized Barack Obama insulting Donald Trump on YouTube.
Indirect prompt injection is a sophisticated social engineering attack targeting AI systems, especially those using large language models like ChatGPT. Instead of directly feeding malicious input to the AI, attackers embed harmful prompts into seemingly innocuous content such as emails, documents, or web pages. When the AI processes this content, it unwittingly executes the embedded malicious commands. This method exploits the AI's natural processing capabilities, making it a covert and effective strategy for manipulating AI behavior without direct interaction.
An example of indirect prompt injection is shown above.
The rise of AI-driven attacks represents a major shift in cybersecurity. Bots, automated programs that mimic human behavior, play a crucial role in these malicious activities, accounting for a significant portion of internet traffic. In cybersecurity, bots serve diverse functions, aiding detection platforms or executing nefarious tasks for cybercriminals.
With AI tools, threat actors streamline attack processes from start to finish, from selecting targets to executing deceptive tactics. This automation boosts efficiency, resulting in scamming a large number of individuals simultaneously, which presents a formidable challenge for cybersecurity.
AI presents a double-edged sword in the realm of cybersecurity, serving as both a potent weapon for attackers and a critical defense mechanism.
AI has the potential to revolutionize phishing tactics by learning from past successes and adapting to maximize effectiveness. Through sophisticated algorithms and machine learning techniques, AI can analyze vast amounts of data on previous phishing campaigns, identifying patterns and tactics that yield the highest success rates. By understanding what resonates with potential victims, AI can tailor phishing emails to be more convincing, personalized, and difficult to detect.
However, on the defensive side, AI-driven tools are emerging as powerful allies in the fight against cyber threats.
For instance, Lakera Guard, inspired by the MITRE ATT&CK framework, harnesses AI to detect and mitigate phishing attacks across various platforms, including web, chat, and email. By analyzing patterns and anomalies, Lakera Guard can identify malicious activities and thwart potential threats before they cause harm. Moreover, machine learning models play a crucial role in unauthorized activity detection, enabling organizations to proactively identify suspicious behavior and respond swiftly to mitigate risks.
**💡Pro Tip: Check out how Lakera Guard aligns with the MITRE ATT&CK framework.**
Researchers have used AI, specifically Deep Learning, to develop DeepFake detectors. To detect fake images, recently, DeepFake Detection Visual Question Answering (DD-VQA) has been proposed. DD-VQA incorporates common sense reasoning in its pipeline for DeepFake image detection and it further extends the model to explain why an image is labeled as real or fake through a Visual Question Answering pipeline. The model takes as input an image and a question to generate textual answers sequentially. Some examples are shown below.
Similarly, Dynamic Prototype Network (DPNet) is a CNN-based network designed for DeepFake detection in videos that leverages temporal inconsistencies in video sequences. It offers interpretability by presenting short video clips from the sequence where temporal artifacts are detected, enabling humans to interpret the results.
In 2020, the first audio-visual multimodal DeepFake detector was developed in this paper, where the inconsistencies in the perceived emotions from the two modalities were utilized. This work assumes access to a real and fake video pair for each subject. The features from the real video and audio streams are separately extracted. These features are then used in two emotion recognition models (one from speech and one from face movement) which are then classified as real or fake. The workflow of this process is shown below.
Thus, although AI models like Stable Diffusion and Dall-E are being used to generate fake content seamlessly, AI is also being employed to tighten security measures in the form of DeepFake detectors mentioned above.
To safeguard against the rising threat of AI-enhanced social engineering, organizations should prioritize comprehensive strategies aimed at both prevention and response.
In this context, it may be beneficial to note that Lakera Guard provides robust AI security solutions, protecting large language models (LLMs) within enterprises from diverse risks such as prompt injections, data loss, and insecure output handling.
Lakera’s model-agnostic API seamlessly integrates into existing workflows, guaranteeing smooth and secure operations. Key functionalities include safeguarding against prompt injections and jailbreaks, mitigating risks related to training data poisoning, and preventing the disclosure of sensitive information. With its straightforward implementation requiring only a single line of code, Lakera Guard enhances user experience without introducing complexity, offering rapid response times.
Looking ahead, the future of social engineering and AI promises both innovation and challenge. As AI continues to advance, we can anticipate further sophistication in social engineering tactics. AI-powered tools may evolve to better manipulate human psychology, exploiting emotions, trust, and perception with unprecedented precision and scale. The combination of AI's adaptive algorithms and expansive data processing capabilities empowers malicious actors to develop increasingly complex and automated social engineering attacks. This trend suggests that future attacks could be even more deceptive and difficult to detect, posing significant challenges to cybersecurity.
The emergence of AI-generated DeepFakes and social media bots highlights the potential trajectory of AI in social engineering. AI-fabricated videos and images present a unique challenge due to their ability to mimic recognizable figures convincingly. These creations can be used to disseminate misleading information, orchestrate political disruption, or engage in targeted blackmail, amplifying the impact of social engineering attacks. Similarly, AI-driven bots infesting social media platforms pose a significant threat by masquerading as genuine users. With their human-like interactions, these bots can sway public opinion, amplify divisive issues, and disseminate falsehoods, making them exceptionally challenging to detect and combat.
As the capabilities of AI in social engineering evolve, so too must cybersecurity strategies. Organizations must anticipate these emerging threats and invest in adaptive cybersecurity measures to stay ahead of malicious actors. This requires a proactive and forward-thinking approach, emphasizing ongoing vigilance and adaptation. It's crucial to prioritize employee education, regularly updating training programs to prime staff against emerging threats and conducting simulated drills to test their resilience in real-world scenarios.
In summary, the evolution of social engineering and AI presents both opportunities and challenges for cybersecurity.
As AI-driven threats become increasingly sophisticated, organizations must adopt a proactive and adaptive approach to defense. Key insights and strategies include prioritizing employee education, deploying AI as a defense tool, implementing robust security protocols, and promoting vigilance in email and social media interactions.
By combining AI tools with training and best practices, organizations can effectively mitigate the risks posed by AI-driven social engineering attacks. Ultimately, success in navigating this landscape requires a holistic approach that leverages technology, human expertise, and strategic foresight to stay ahead of emerging threats and safeguard against the ever-evolving domain of cyber threats.
Download this guide to delve into the most common LLM security risks and ways to mitigate them.
Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.
Compare the EU AI Act and the White House’s AI Bill of Rights.
Get Lakera's AI Security Guide for an overview of threats and protection strategies.
Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.
Use our checklist to evaluate and select the best LLM security tools for your enterprise.
Discover risks and solutions with the Lakera LLM Security Playbook.
Discover risks and solutions with the Lakera LLM Security Playbook.
Subscribe to our newsletter to get the recent updates on Lakera product and other news in the AI LLM world. Be sure you’re on track!
Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.
Several people are typing about AI/ML security. Come join us and 1000+ others in a chat that’s thoroughly SFW.