What is Social Engineering?

And how do we defend against rising attacks in the age of AI?
David Morrison
August 31, 2023

Escalating ransomware and phishing attacks paint a grim picture, leading to widespread data breaches and the leaking of personal data. For cybercriminals, exploiting human vulnerabilities has become a lucrative enterprise, particularly as individuals often represent the weakest link in cybersecurity controls. As we continue to improve our technological defences and implement more robust processes for risk mitigation, it’s hardly surprising that threat actors are shifting their focus to human targets. Why bother trying to penetrate advanced security systems when a single successful phishing email can coerce an employee into bypassing all those safeguards, thereby jeopardising the entire organisation?

This is the underlying premise of social engineering – the psychological manipulation of people into divulging confidential information or performing actions that compromise security. Phishing has taken the limelight for the last few years as it’s highly successful and, when performed well, can be hard to defend against with technical controls alone. If we could, phishing would already be a thing of the past. But alas, it is not.

The consequences of falling for a social engineering attack can be devastating. From financial ruin to severe reputational damage, the cost is often way too high. The cost of a data breach has been rising year after year, with the average now sitting at US$4.45 million, with 74% of breaches involving a human element.

In this article, I want to go beyond phishing and discuss the various forms of social engineering, how this issue will escalate with the advent of AI (and already has), and some ways you can start to defend against it.

The rise of social engineering

Long before the age of computers and the internet, social engineering existed in various forms, from the legendary Trojan Horse in ancient times to spies using cunning and deceit to steal top-secret information. The basis of all these manipulations lies in exploiting human weaknesses – trust, authority, greed, or fear – to achieve an objective.

No history of social engineering would be complete without mentioning Kevin Mitnick. In the 1990s, Mitnick used social engineering tactics to commit a series of high-profile cybercrimes, including breaking into major organisations to steal proprietary software. His story is a classic example of how powerful social engineering techniques can be, even when pitted against robust technological defences.

The early 2000s saw the term “phishing” enter the public lexicon, as fraudulent emails pretending to be from trustworthy sources became a common tactic to trick individuals into revealing personal information. Over time, phishing has evolved into more sophisticated forms, like spear-phishing and whaling.

With the rise of social media platforms like Facebook, Twitter, and LinkedIn, social engineering found new avenues to exploit. These platforms are an absolute gold mine of information that can be used to support social engineering attacks. Threat actors now routinely scan social media profiles to gather information for targeted attacks, often impersonating friends, family, or co-workers to gain trust.

As we all know and have experienced first-hand, the COVID pandemic was the catalyst for a massive shift to remote and hybrid work models, creating new opportunities for social engineering attacks. With less face-to-face interaction and lax home security setups, employees became easier targets for schemes like pretexting or baiting, further highlighting the need for organisations to address human risk.

Types of social engineering attacks

While the ultimate goal of any social engineering attack boils down to manipulating individuals into divulging confidential information or performing actions that bypass or compromise security controls, the tactics employed vary. From exploiting human curiosity to leveraging established relationships, each type of social engineering attack has its own unique approach and set of challenges for organisations and individuals.

Phishing

Phishing is a form of social engineering where threat actors impersonate a legitimate organisation or an individual to try and deceive victims into disclosing sensitive information. Executed via email, phishing attacks often contain links or attachments that direct the recipient to a fraudulent website or deliver malicious software. As phishing attacks matured, new tactics evolved into spear phishing and whaling attacks.
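One common technical tell in a phishing email is a link whose visible text shows one website while the underlying href points somewhere else entirely. As a rough illustration of that mismatch check – a sketch only, not production-grade detection, and the function names are my own – it might look like this in Python:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags in an email body."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # list of (href, visible_text)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html):
    """Flag links whose visible text looks like a URL on a different host."""
    auditor = LinkAuditor()
    auditor.feed(html)
    flagged = []
    for href, text in auditor.links:
        # Treat bare "www.example.com" text as a URL for comparison purposes
        shown = text if "://" in text else "http://" + text
        shown_host = urlparse(shown).hostname
        real_host = urlparse(href).hostname
        # Only compare when the visible text itself resembles a hostname
        if shown_host and "." in shown_host and shown_host != real_host:
            flagged.append((href, text))
    return flagged
```

This only catches the crudest mismatches; attackers also use look-alike domains and URL shorteners, which is part of why technical controls alone can't solve phishing.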

Spear Phishing: A more targeted form of phishing where the threat actor customises the message to a specific individual or organisation. This is where you lose the generic-looking email; instead, it addresses you by your first name or includes personal information that legitimises the email and makes you believe the sender knows you.

Whaling: Similar to spear phishing, but targeted at high-profile individuals within an organisation, like executives or board members. The big phish in the company 😀

And then there are derivatives of phishing:

Smishing: This is essentially phishing over SMS, hence the name. SMS Phishing = smishing.

Vishing: You guessed it. Voice phishing = vishing. This includes robocalls – automated phone calls that use a computerised auto-dialler to deliver a pre-recorded message. It’s a super lazy method, but effective nonetheless.

Business Email Compromise

Business Email Compromise (BEC) is a targeted attack that involves impersonating executives or other key personnel within an organisation. The premise is to trick employees, vendors, or partners into transferring funds or disclosing sensitive information. Unlike your usual phishing attacks, BEC attacks are highly targeted and often involve a thorough understanding of the organisation, its processes, and its relationships.

We have a full article on BEC scams if you want to read further.
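One crude but genuine BEC tell that mail filters look for is a Reply-To header pointing at a different domain than the From header – the attacker spoofs the executive’s address, but needs replies to come back to a mailbox they control. A minimal sketch of that check, using Python’s standard email library (illustrative only; real filtering weighs many more signals):

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_email: str) -> bool:
    """Flag a common BEC tell: Reply-To domain differs from From domain."""
    msg = message_from_string(raw_email)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not reply_addr:
        return False  # no Reply-To header at all, nothing to compare
    from_dom = from_addr.rsplit("@", 1)[-1].lower()
    reply_dom = reply_addr.rsplit("@", 1)[-1].lower()
    return from_dom != reply_dom
```

A mismatch isn’t proof of fraud (newsletters do this legitimately), but on an “urgent wire transfer” email from your CEO, it’s a strong reason to pick up the phone and verify.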

Baiting

Baiting is where a threat actor lures a victim into a compromising situation by appealing to their curiosity or greed. Unlike phishing, which generally involves tricking someone into revealing sensitive information, baiting involves offering something enticing to the victim to trick them into initiating an action, such as downloading malware.

Some examples include:

  • Leaving a USB drive containing malware in a location where the target is likely to find it, hoping they will plug it into a computer.
  • Offering a free download of a popular movie, software, or game, which is actually malware disguised as a legitimate file. We saw this just recently with fake downloads of the Barbie movie.
  • Using social media to advertise offers that seem too good to be true, enticing people to click on malicious links.

Pretexting

Pretexting is where a threat actor fabricates a fictitious scenario, or “pretext”, to obtain information or gain access to a system. The threat actor usually poses as someone who has the right or need to access the information, thereby coercing the target into divulging confidential data or granting access to restricted systems or areas.

Effective pretexting requires more initial work for the threat actor, but the likelihood of success is way higher. By performing open-source intelligence (OSINT) on the target individual and gaining relevant information on that person, they can craft much more realistic and believable scenarios, legitimising their request and making the attack far more likely to succeed.

Examples include:

  • The threat actor calls a target, claiming to be from the IT department and stating they need login credentials to fix a technical issue.
  • Posing as a bank employee, a threat actor calls customers about a banking issue but first needs to “verify their identity,” tricking the victim into revealing personal information, account numbers or passwords.
  • Pretexting is also used in physical attacks, where a threat actor may pose as a cleaner, tradesperson, or other service personnel to gain physical access to a restricted area within an organisation.

Tailgating

Tailgating is where an unauthorised individual gains physical access to a restricted area by following closely behind an authorised user. Unlike other forms of social engineering that are primarily digital, tailgating is a physical security risk and often takes advantage of human courtesy, distraction, or lack of awareness.

I mention courtesy above as it is actually one of the key reasons various social engineering attacks work. As humans, we tend to have an innate sense of wanting to be helpful. In the case of tailgating, we don’t slam the door behind us when entering the office if someone is close behind – we hold the door. If someone is carrying a large box and can’t reach for their swipe card, aren’t you going to open the door for them? If you see a pregnant lady entering the building behind you, you’re going to hold the door. If a man in officially branded coveralls from a data destruction company wheels a blue secure document disposal bin out the door, will you hold the door for them or watch them struggle?

These are all scenarios I’ve seen happen as part of physical penetration tests. And yes, even that last one. Why go rummaging through a company’s offices or dumpster diving for sensitive documents when you can find them all in one easy-to-access place and just wheel them out the door? And don’t get me started on the locks they have on these bins 😔

Physical security breaches can lead to severe consequences and often remain the least tested area. As discussed earlier, human nature tends toward helpfulness and avoiding confrontation, which can be exploited by threat actors. Once a threat actor gains access to a facility, they tend to encounter minimal resistance. It’s uncommon for employees to challenge unfamiliar people within an office. If a threat actor appears confident and looks like they belong, the likelihood of them being questioned or stopped is exceptionally low.

Quizzes and Surveys

Here’s a new one for most. The use of quizzes and surveys as a form of social engineering to perform data harvesting is rarely discussed. These seemingly innocent activities often circulate on social media platforms and are designed to collect a range of information, from basic personal details to insights that can be used to answer security questions.

Years ago – well, two decades ago – when we first started using security questions for things like password resets, we used what was referred to as “non-wallet” information. It was information that you couldn’t obtain if you stole a person’s wallet or purse. Things like full name, date of birth, and address can all be found within your wallet, so they don’t provide great security when identifying you. Let’s ignore that every time I call my bank, they ask for my full name, DOB, and address, and then tell me I’m calling from my registered mobile, so I am now fully verified. Don’t they realise how easy it is to spoof a mobile number? But I digress. Non-wallet questions were things like your favourite colour, mother’s maiden name, or pet’s name – all those things you don’t find in your wallet. But with the advent of social media and the mass sharing of information, you can find most of this data easily on someone’s social media profile. This is one of the many reasons to limit what you share on social media and who you share it with.

These quizzes and surveys are a very effective way to harvest non-wallet information and build out profiles that can then be targeted.

Real-world examples

I think it’s pretty safe to say we have all been recipients of phishing, smishing and vishing attempts. For most of us, this happens numerous times a day. So rather than rehash examples you see every day, I’ll talk about and show you a couple you don’t – ones you may have been subjected to without even knowing.

One of my all-time favourite “hacker” movies is Sneakers, from 1992. If you haven’t seen it, I highly advise you do. It has an amazing cast of great actors: Robert Redford, Sidney Poitier, Ben Kingsley, Dan Aykroyd and River Phoenix. It even has a very amusing scene with James Earl Jones. If you have ever wondered what Red Teaming is like, this movie is basically it, way before we used the term in infosec. I will use a couple of short clips from the movie as real-world examples because they are 100% accurate and I’ll run you through the techniques they use.

When you combine things like pretexting and have multiple people supporting each other within the pretexting scenario, it’s very hard to defend against an attack. In this scene from Sneakers, Robert Redford is trying to gain access to a building, and River Phoenix is also involved in the pretext.

You can see the effectiveness of a well-constructed pretext in this scenario:

  • River Phoenix is already there pretending to be a delivery guy, but as they aren’t expecting a delivery, security is questioning it. He is there as a decoy to distract security and exert pressure on the situation.
  • Robert Redford comes into the scene, goes to the security desk and asks if his wife has dropped a cake off as there is a party on the second floor. He knows full well there is no cake. He is just building the scene and credibility. You hear a car beep and he states “There she is… late as usual” and goes off to get the cake. It legitimises the situation and makes it look real.
  • River Phoenix continues to argue with security, escalating the situation and putting the guard on edge.
  • Robert Redford comes back with his hands full holding a cake box and balloons. He’s struggling and asks security to buzz him through the turnstile as he can’t reach his access card.
  • The guard wants him to wait, so River Phoenix acts more agitated, Robert Redford acts more agitated, and in turn, so does the guard.
  • The situation escalates as they all argue, and Robert Redford loses it, demanding the guard push the buzzer. Of course, security just wants to defuse the situation and get it over with. Robert Redford has built up a valid scenario that looks legit, and he gets buzzed through. Boom! Successful social engineering.

It’s an amazing scene and draws a perfect picture of how successful social engineering attacks can be when well thought out and planned, and when you have confident “actors” in the team.

In the second example, the background to the scene is they need to gain access to an office in a building that uses voice ID for authentication. The system requires you to say “Hi. My name is *******. My voice is my passport. Verify me.” To defeat this biometric lock, Liz (Mary McDonnell) targets the owner of the office, arranges a blind date (they hack an online dating app) and then uses social engineering to get the target to say words she records so they can piece them together into the required phrase. The scene is split so I’ve only included the last piece where Liz attempts to social engineer him into saying the word “passport”. Not an easy word to get into a casual conversation!

As you can see, never underestimate the abilities of a great social engineer!

AI and the future of social engineering

No cybersecurity discussion is complete nowadays without including AI. I’ll preface this by saying I absolutely love the advances we have made recently with AI and the benefits it brings. But like most technology, it can be used for nefarious purposes as well, especially when it comes to social engineering attacks.

As an example, let’s take the “my voice is my passport” attack above and fast forward that same scenario from 1992 to 2023. In 2023, you only need a sample of a voice speaking to be able to generate an AI voice model and make that voice say whatever you want. Forget having a blind date and extracting individual words. Any method to capture their voice is up for grabs. And we aren’t talking about having to pre-record sentences for playback like they did in Sneakers. Once you have the voice model, you can change your voice in real-time. Talk into your microphone and it comes out of your speakers as your chosen voice.

One super impressive tool to do this is VoiceAI. My kids have been toying with deepfakes, and hearing my son talk and have his sister’s voice come out is really disturbing. You can only imagine what teenagers make their siblings say when they can make them say anything 😂 Apps like VoiceAI are also compatible with existing games and apps, such as Zoom, Google Meet, Discord, WhatsApp and Skype, to name a few. So, you can have any voice you want in these apps.

How could a threat actor use this, you ask? One example relates to a response we get in a LOT of risk assessments – one that can no longer be accepted as a valid answer.

Me: “How do you verify someone on the phone before you take xyz action?”
Interviewee: “We are a very small company. I know everyone and everyone’s voice so I can tell if it’s them.”

Bzzzt! Not in 2023. It really was a valid answer years ago. Not a great answer, but valid for very small companies with a few people. Now you can be a two-person operation and this no longer works.

One recent horrific example of this was back in April this year, when a mother received a call from her distraught daughter. She had been kidnapped, and the man was demanding $1 million. The linked article is pretty intense as the kidnapper threatened to do some pretty horrible things, so be warned. The call was a scam. The daughter’s voice was fake. It was AI-generated. Having kids, I feel for the mother. It would have been the most horrendous moment of her life. But how do we combat this type of realism? Luckily, I have some recommendations at the end of this article.

I’ve only talked about audio deepfakes, or voice cloning, but as AI tools get better (and they are ramping up insanely fast) it won’t be long before real-time deepfakes, which include video, will be achievable for the masses. To see a great, and amusing, example, take a look at this video. Brian Monarch is amazing. He pumps out so many deepfake videos of Arnold Schwarzenegger in other movies or video clips. I highly advise taking a look at his channel.

How do we defend against social engineering attacks?

As social engineering attacks get better and more complex, it gets harder and harder to spot them, but I’ve compiled a few areas of focus below.

  • Build awareness: The top of the list always has to be security awareness training, as it is critical in defending against social engineering attacks. This needs to go beyond simple phishing discussions and simulated tests. People need to be aware of all the different types of attacks so they know what to look out for. Only then can you start to spot them and know what to do to defend against them. I’ve discussed a number of them in this article. Take it and use it in your awareness program. As I always say, knowledge is power!
  • Don’t overshare information: Every piece of information you share on the Internet is potential information that can be used when targeting you. Be wary of what you share, in social media, quizzes and surveys, or any other public location. Where possible, restrict what you share. We have a whole article on staying safe on socials so take a look.
  • Verify sources: We are seeing more and more scams that want us to click on links and visit sites. Not just phishing, vishing and smishing, but also things like fake Google Ads that lead to malicious sites. Verify the source. Go to the source manually by entering their URL in your browser so that you can validate that ‘too good to be true’ offer.
  • Don’t click links: It may be a few extra seconds of your life but seriously, avoid clicking links in emails and SMS messages. If your bank sends you a link, don’t click it. Go to their website manually and log in. That few extra seconds could save you.
  • Be wary of random mobile numbers: We have all received them. A call from a mobile number you don’t know, often dropping out as you pick up. You call the number back and say you just had a missed call and they have no idea what you are talking about. It was a threat actor spoofing a legitimate number. This is SO easy to do. If you do answer and get a person in some call centre, it’s likely a scam. I’m sure there are some edge cases out there but I cannot recall the last time I received a legitimate call from a call centre that was a mobile number. They generally come from landline numbers or are hidden. Mobile numbers are being used to help legitimise the call so you are more likely to answer. I think you’re pretty safe to disconnect from these the second the person speaks.
  • Don’t provide personal information to someone who called you: You’ve had these calls. Often completely legitimate. You answer the call, it’s your bank, but before they will deal with you they need to verify your identity. What? You called me? You have no idea if it’s actually your bank or not and you have no way of knowing. Hang up and call the bank directly using the number you have on your statement or their website. Don’t use a number the person gave you on the phone or one in an untrusted source such as email or SMS.
  • Validate people tailgating you: If someone tries to tailgate you and you don’t recognise them, ask them to swipe, or to view their ID. Saying something simple like “Sorry but we aren’t allowed to let someone through the door without ID” is enough. I’ve done this a million times and never had anyone get upset.
  • Question unknown people in your office: If you see someone in your office that you don’t know, or who looks out of place, or doesn’t have a visible ID, stop them and ask if you can help them. Who are they here to see? If they rattle off a valid name, say you will escort them to that person.

Combating deep fakes

I’m sure this is the one you’ve been waiting for and if you read this far, I’m sorry that what I have to say is not that ground-breaking. I’ve broken this out into its own section as there are a number of components.

In business, you can no longer get away with relying on someone’s voice to verify them. You need those security questions we discussed earlier. And you need to go one step further. You need “non-wallet and non-social media questions”. Asking a question like your pet’s name, when so many people love posting about their pet online, is worse than a wallet question. Anyone can find it. I’ve seen better questions start to appear in a few services I’m signed up for (sorry for the life of me I can’t remember which ones) where they are asking questions that are highly unlikely you would ever post to social media. I asked my friendly AI to generate a list of examples, so here are a few so you get the gist:

  • What is the name of the street where you got your first parking ticket?
  • What was your childhood nickname that only close family members know?
  • Where were you when you had your first kiss?
  • Where did you celebrate your most memorable New Year’s Eve?
  • What is the name of the first person you had a crush on but never told anyone about?
  • What’s the name of the friend you’ve known the longest but have never visited?
  • Who was the first non-family member that made a significant impact on your life?

And the list goes on. AI is super handy for things like this 😀 None of the above would be easy to find out, and digging for them wouldn’t be time well invested by a threat actor when they can simply find a softer target.
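If a business does adopt stronger security questions, the answers deserve the same handling as passwords: normalised so “Oak Street” and “ oak street ” match, then salted and hashed rather than stored in plain text. A minimal sketch of one way to do this (the function names are my own, not from any particular service):

```python
import hashlib
import hmac
import os
import unicodedata

ITERATIONS = 100_000  # PBKDF2 work factor; tune for your environment

def _normalise(answer: str) -> str:
    """Case-fold, Unicode-normalise, and strip so trivial variations still match."""
    return unicodedata.normalize("NFKC", answer).casefold().strip()

def enrol_answer(answer: str) -> tuple[bytes, bytes]:
    """Store a per-answer salt and a slow hash, never the answer itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", _normalise(answer).encode(), salt, ITERATIONS)
    return salt, digest

def verify_answer(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison against the stored digest."""
    test = hashlib.pbkdf2_hmac("sha256", _normalise(candidate).encode(), salt, ITERATIONS)
    return hmac.compare_digest(test, digest)
```

Hashing matters here for the same reason it matters for passwords: if the answer store leaks, the attacker gets ready-made material for pretexting calls against every other service the victim uses.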

So, what about your personal life and family, and situations like the incident involving the kidnapped daughter? You use the same principle as above: a family safe word – not your bedroom safe word 😂 It should be a single word that you all know, is simple to remember even under stress, and that you have all agreed on. Most families have some in-joke that only the direct family members know; it could be a word related to that. In a situation like the fake kidnapping, asking the daughter for the safe word would have instantly called the legitimacy of the call into question. It is not foolproof, but it provides some semblance of verification in various potential situations.

I talked about oversharing on the Internet above, but this is even more important when it comes to deep fakes. Any type of AI needs data to train it. To clone your voice, it needs your voice. To clone your face, it needs your face. It goes against the open-sharing nature of social media and life in 2023, but the more voice and face data you place on the Internet, the easier it will be for threat actors to create deep fakes of you.

This article only scratches the surface and doesn’t cover everything. Both businesses and individuals need to be aware of the myriad of social engineering attacks, how to spot them, and how to defend against them. I don’t like to be pessimistic, but I don’t see the problem getting better before it gets worse. The advent of AI has the potential to take this to unprecedented levels, and what we have seen in this space so far is only the tip of the iceberg of possible attacks. Make sure you stay on top of this evolving threat, so you know what you face, and what you need to do to stay secure and safe.

David Morrison

David is the Co-CEO of Morrisec. With a wealth of experience spanning more than two decades, David has established himself as a leading cybersecurity professional. His expertise and knowledge have proven invaluable in safeguarding organisations from cyber threats across a gamut of industries and roles.
