What is an Insider Threat?

Who they are, what they can do, and how to defend against them.
David Morrison
March 30, 2023

When managing cyber risk, a significant amount of time is dedicated to managing technical risk. The continual rise in phishing, business email compromise (BEC) scams and other person-centric attacks has emphasised the need to address people risk as well. But most organisations focus only on phishing and BEC awareness, when these are just a small subset of the risks related to your personnel.

Insider threats are a growing problem and remain a challenging risk to eliminate completely. We’ve all seen the alarming numbers, with estimates suggesting that damage caused by cybercrime could reach $10.5 trillion annually by 2025. Other statistics suggest that 70% of insider threat-related attacks are not reported beyond the organisation, meaning we don’t even have all the relevant information or an accurate measure of the issue’s severity.

We are looking at difficult economic times ahead, and a recent report from Palo Alto’s Unit 42 suggests this will be a precursor to a rise in cybercrime. The escalation in cyber threats will not solely originate from individuals with technical know-how seeking to make some quick cash, but also from insiders within organisations who are more inclined to accept a lucrative offer from a threat actor seeking assistance in gaining access to organisational assets and data. This may sound far-fetched, but it’s a real problem.

Flashpoint released an article last year discussing recruitment tactics and the tactics, techniques and procedures (TTPs) behind insider threats, which goes into detail about the types of threat actors, who they target and how they go about recruiting insiders. The same Unit 42 report referenced above, when discussing the low technical skill tactics used by the Lapsus$ group, states:

“Instead, the group focuses on using a combination of stolen credentials and social engineering to gain access to organizations. We’ve also seen them solicit employees on Telegram for their login credentials at specific companies in industries including telecom, software, gaming, hosting providers, and call centers.”

An infographic from Pulse and Bravura Security found that 65% of survey respondents had been approached by a threat actor wanting assistance in performing a ransomware attack. 65%! That number is staggering.

However, this is just scratching the surface, and insider threats encompass a much wider range of issues. So first of all, what is an insider threat and how do we define it?

What is an insider threat?

The Cybersecurity and Infrastructure Security Agency (CISA) defines an insider threat as:

“the threat that an insider will use their authorized access, intentionally or unintentionally, to do harm to the department’s mission, resources, personnel, facilities, information, equipment, networks, or systems.”

Typically, we associate an insider with an individual who is an employee within an organisation and has been granted access to business assets and information. However, the definition extends beyond this narrow perspective. To expand on the insider threat definition, CISA has defined an insider as follows:

“An insider is any person who has or had authorized access to or knowledge of an organization’s resources, including personnel, facilities, information, equipment, networks, and systems.”

With such a broad definition, you can imagine your current attack surface just increased significantly, particularly when you factor in the clause “or had authorised access” and the escalating attacks against individuals rather than technology. How many people have worked at your organisation?

So at this point, the insider category has been broadened to include the following (with some overlap):

  • Employees or other internal individuals who have been granted access to company assets.
  • An external person who has been provided access to company assets, such as a contractor, a vendor, a custodian, or a support/repair person. These threats are commonly referred to as third-party threats.
  • A person who has been provided with a computer or access to the network.
  • A person who develops the organisation’s products or services. These people may have access to highly sensitive IP.
  • For government agencies, this is anyone who has access to protected information.

Different organisations, based on their structure and business model, will have different subsets of insiders. Take one of my past roles for example. I worked at a university with a student population of over 20,000, and each student was granted access to the university network. To compound the threat, some IT courses taught hacking techniques, leading to the possibility of skilled insiders. Depending on the size and complexity of your business, along with its dependence on cloud services and third-party suppliers, the scale of your attack surface with respect to insider threats can be massive.

Insider threat categories

Insider threats can be categorised into two main types:

  • Unintentional threats: These are acts that take place due to negligence or that are accidental. The threat actor had no malicious intent in committing the act.
  • Intentional threats: These are deliberate actions undertaken by an individual. They are either premeditated or acts of opportunity where the threat actor is fully aware of what they are doing and should know the implications of their actions. In the case of insider threats, this is where you commonly hear the term “malicious insiders”.

I personally stopped using terms like “hacker,” “attacker,” and “malicious actor” years ago because they only really cover intentional threats. Instead, I moved to the term “threat actor” for myself and the teams I managed. This term is inclusive of anyone or anything that engages in any form of threat, regardless of their intentions.

Unintentional insider threats

Unintentional threats can be divided further into two distinct categories: negligent and accidental.

Negligent Insider Threats

Although the term may seem severe, negligence covers actions where an individual was either careless or disregarded policies, processes, or procedures, even if they were aware of the potential consequences.

There are a myriad of negligent insider threats to be aware of, some more common than others:

  • Allowing someone into a restricted area: Examples include opening a door to a restricted area for another person without confirming their identity and that they are authorised to be there. This also includes not watching out for people tailgating you through doors or other physical controls that are meant to restrict access to authorised personnel.
  • Not reporting a lost device: Misplacing or losing a USB key that contains work files is a prime illustration. The compact size and affordability of such devices often result in personnel losing track of them and not taking proper measures to report the loss. Another example is failing to promptly report a lost mobile phone or laptop, which could leave the device vulnerable to compromise for an extended duration.
  • Using unsanctioned systems or cloud services: “Shadow IT” has been a longstanding problem, but it has become increasingly prevalent in recent times. Rapid advancements in AI and the easy accessibility of cloud services, such as no-code platforms, increase the attractiveness and ease of deploying unapproved solutions. With minimal IT or development skills, anyone can create an app using no-code technology and use it for work, granting it access to sensitive corporate data without any security or governance oversight. Similarly, tools like ChatGPT enable users to streamline their job functions, but the potential security concerns, such as pasting sensitive information into these tools, are endless.
  • Not installing updates when requested: At some point, we’ve all encountered update notifications for our operating systems and applications. As Murphy’s law would have it, these notifications tend to appear at the most inconvenient times, typically when we’re incredibly busy and can’t afford any downtime. As a result, we postpone installing the updates, and they may go unaddressed for extended periods. While automation within organisations is the ideal solution, it’s not foolproof, since balancing automatic updates with minimising the impact on users and their workflows can be difficult.
  • Allowing another user to use your credentials: Shared credentials have been a persistent security concern for a long time. To adhere to security best practices, each individual should be given their own account credentials, which must be kept confidential. This not only provides strict access and authorisation controls but also ensures non-repudiation. Non-repudiation is essential in guaranteeing that the originator of an action cannot deny their involvement or challenge the authenticity of the action. It ensures accountability. When accounts are shared, this principle is compromised, and it becomes impossible to trace a negligent, accidental, or malicious action back to the person who owns the account with 100% certainty (see the short sketch below).
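
To make the non-repudiation point concrete, here is a minimal sketch in Python of what an audit trail looks like with individual versus shared accounts. The log format and account names are hypothetical, purely for illustration.

    # Minimal sketch: why individual credentials preserve non-repudiation.
    # The log format and account names are hypothetical examples.
    from datetime import datetime, timezone

    def audit_event(account: str, action: str, target: str) -> dict:
        """Record which account performed which action on which asset, and when."""
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "account": account,  # only attributable to a person if the account is not shared
            "action": action,
            "target": target,
        }

    # Individual account: the action can be tied to exactly one person.
    print(audit_event("j.smith", "export", "customer_list.csv"))

    # Shared account: the same entry could have been any of several people,
    # so the action can no longer be attributed with certainty.
    print(audit_event("finance_team", "export", "customer_list.csv"))

The log entries themselves are identical in both cases; it is the account-to-person mapping that makes the first attributable and the second not.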

The majority of the time, personnel are only trying to be more efficient or effective in their roles, but don’t understand the potential consequences of their actions. This is an ongoing issue in infosec, where we attempt to find a delicate balance between usability and security. We want to protect the business, but at the same time enable its people to be exceptional in their roles without unnecessary restrictions.

Accidental Insider Threats

When an individual unintentionally causes a security incident, it is known as an accidental insider threat. Such incidents can have a severe impact on the business. The most common examples of such accidents include:

  • Sending information to the wrong person: The most common example of this is when an individual mistakenly sends an email containing sensitive information to the wrong recipient. People are time-starved and always in a hurry, and the autocomplete feature in the email’s To: field can result in someone selecting the wrong recipient without realising it.
  • Clicking a link or opening a document in a phishing email: Phishing emails rely on someone either clicking a link in the email that takes them to a malicious site, or opening a malware-infected document attached to the email. Malicious websites will often replicate a legitimate site, coaxing the victim into entering their account credentials, which are then collected and reused to compromise their employer’s systems and information.
  • Leaving their work laptop or mobile in an insecure location: Although leaving mobile devices unattended could be classified as negligence, few organisations I have seen have actually established specific guidelines regarding when and where employees can leave their devices unattended, as well as what measures should be taken to minimise the risk of theft or loss. This may encompass various scenarios, such as hotel rooms when travelling, the boot of your car, your home office, or even being left unsecured in the organisation’s office.

Numerous threats that are categorised as negligent may also be classified as accidental when the individual is unaware of the correct course of action. This could occur if the individual has not received adequate training on the required business processes or security awareness training. For instance, an employee might provide personal information over the phone because they had not been trained on the proper procedure to verify the caller’s identity.

Intentional Insider Threats

An intentional insider threat refers to an individual who deliberately carries out an act with the intention of causing harm to an organisation. Although they are aware that it is wrong, they do it anyway for various reasons such as financial gain or having a grievance with the company. These individuals are often referred to as “malicious insiders” because their actions are intended to cause harm or will inevitably lead to harm, and they are aware of this fact. Some common examples of such acts include:

  • Motivation to ‘get even’: The individual has a gripe with the organisation, or people within the organisation, and wants to get back at them. This could be due to perceived mistreatment, not being paid their bonus, being overdue for a pay rise, being passed over for promotion, or having their employment terminated.
  • Theft of information for financial gain: Depending on the organisation and the types of information they manage, this could be intellectual property (IP), trade secrets, PII, credit card information or any other type of data that is worth money if sold on the dark web, to criminals or to a competitor.
  • Theft of information for career advancement: This is common when an employee leaves an organisation to go to a competitor, taking information with them that will benefit their new job. They may have even been asked to take information with them as part of their job offer. Common examples include client lists and IP.
  • Theft of information for social or environmental reasons: This can be whistleblowing, or intentionally releasing information the individual deems should be out there for the world to see. We have all seen Wikileaks and this type of information release. This can also overlap with collusive threats discussed below.
  • Sabotaging equipment: There could be multiple reasons why someone may want to sabotage equipment and cause a denial of service. It could fall into the ‘get even’ bucket with a disgruntled employee, it could be competitors trying to gain an advantage in the marketplace, or it could be nation-state driven such as Stuxnet. Sabotage could involve physically damaging systems, erasing critical data, or locking everyone out of a system permanently. Someone with domain or superuser access has near-unlimited power to cause disruptions like these to a business.
  • Collusive threats: These are threats where one or more individuals work with an outside threat actor to perform malicious acts, such as stealing sensitive information.

And the list and motivations go on…

Collusive threats have always been around, but they are becoming a greater problem in the age of the internet, AI, and the increased volume and complexity of disinformation campaigns. In this way, they overlap with unintentional insider threats when you factor in the ‘useful idiot’. To quote Wikipedia:

“a useful idiot is a term currently used to reference a person perceived as propagandising for a cause—particularly a bad cause originating from a devious, ruthless source—without fully comprehending the cause’s goals, and who is cynically being used by the cause’s leaders.”

This is a huge topic in and of itself, but if you want to discuss useful idiots and disinformation campaigns at length, then reach out to Morrisec’s Co-CEO Dr Sarah Morrison, whose PhD is in Russian Information Operations. Oh, the stories she can tell you 😂

Common Methods of Exposure and Exfiltration

When considering the impact of insider threats on the three critical elements of information security, namely confidentiality, integrity, and availability (CIA), most organisations are primarily concerned with the potential breach of data confidentiality. This may occur either accidentally or intentionally, resulting in the exposure or theft (exfiltration) of sensitive information. To keep this article to a readable size, I won’t go into the potential impact of insider threats on data integrity and availability. Suffice it to say, it’s worth considering the potential consequences a system administrator with superuser privileges could have on the integrity and availability of your data, whether their motivation for sabotage is being disgruntled or being manipulated by an outside party.

Exposure

Unintentional insider threats generally lead to exposure issues. Confidentiality, and the associated principle of ‘least privilege’, dictates that only those who need access to a specific piece of information should be given access to it. Exposure breaks this rule and presents data to someone who is not authorised to access it.
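
As a simple illustration of least privilege, consider the following sketch. The roles, resources and policy mapping are hypothetical; the point is only that any access falling outside the explicit mapping is, by definition, exposure.

    # Minimal sketch of a least-privilege access check.
    # The roles, resources and policy are hypothetical examples.
    ACCESS_POLICY = {
        "payroll_db": {"payroll_officer", "finance_manager"},
        "customer_pii": {"support_agent", "privacy_officer"},
    }

    def can_access(role: str, resource: str) -> bool:
        """Allow access only where the policy records an explicit need."""
        return role in ACCESS_POLICY.get(resource, set())

    # Authorised under the policy:
    print(can_access("payroll_officer", "payroll_db"))     # True
    # Anything outside the mapping is exposure, whether accidental or intentional:
    print(can_access("marketing_intern", "customer_pii"))  # False -> should be denied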

As discussed earlier, one of the most common exposures is someone sending information to the wrong recipient. OAIC’s Notifiable Data Breaches Report: July to December 2022 classifies 25% of notified breaches as human error, with 42% of these being an email sent to the wrong recipient. The second-largest human error category, at 33%, was the unintended release or publication of information.

But data exposure can happen in many other ways. A common problem that has been with us for decades, and that I also discussed earlier, is ‘shadow IT’. This refers to the utilisation of information and communication technologies (ICT) that have not been authorised or sanctioned by the organisation. Often, employees resort to shadow IT to simplify their job or bypass security controls that restrict access to certain platforms or products that would facilitate their work. This problem has persisted over time but has escalated in recent years due to the increased ease and accessibility of cloud services. And without organisational oversight and governance, sensitive data can be uploaded to, stored or processed in these platforms or services without adequate controls to protect it. This leads to data exposure, or theft of the data if the system is compromised.

Source code is another source (no pun intended) of business IP that can be critical to the value and success of a business. Once again, managing where this code exists so you can protect it adequately is critical. But the nature of modern development, leveraging online code repositories and developers cloning code to their device, means the data may exist on any number of untrusted devices. The news this week is a perfect example, with Twitter’s source code leaked on GitHub, though circumstantial evidence so far is pointing to a disgruntled employee. Again, it may not always be intentional. The developer may just be trying to do their job. But cloning a work repository to their home system on a weekend to finish some work or fix an urgent bug could lead to data exposure and huge repercussions for their employer.

Loss of a device or insecurely disposing of a device is another common source of data exposure. From the same OAIC report from last quarter, 5% of human errors were loss of device/information, and another 2% were via insecure disposal. PwC released a great paper last week titled Critical infrastructure and the e-waste data security threat that discusses the lack of secure disposal obligations in recent critical infrastructure reforms. At one point in the paper, they discuss buying a second-hand tablet that contained a note with credentials to a corporate database that could have provided access to 20 million sensitive PII records! It has always been an issue and it still is. I remember around 20 years ago, a friend bought an old Mac on eBay only to find it contained data belonging to the organisation I was working for at the time.

COVID-19 saw a massive change in the way we work, and with that change, becoming an insider threat, willingly or not, became a lot easier. Remote and hybrid working models moved the workforce outside of the trusted office space. In an ideal world, everyone is issued a corporate laptop, with mobile device management (MDM) and a plethora of other controls that reduce business risk. But most organisations can’t afford to issue laptops to everyone, so personnel are forced to use their own systems for work. And these systems are connected to home networks that are set up by non-security professionals. Couple this with BYOD in the workplace, especially mobile phones, and the risk escalates even further. How many of you work in an organisation where everyone is issued a new smartphone? In most organisations, it just isn’t financially viable. But employers expect employees to be contactable, which means putting work email, and work data, on unmanaged and unsecured phones.

Exfiltration

So what about the exfiltration of data by an intentional insider threat who has a strong motivation to perform a malicious act? There is no limit to the impact this kind of threat actor can have on an organisation.

Remote and hybrid work make data exfiltration even easier. When an endpoint with access to all your data is not under your control, there are no barriers preventing an individual from saving the data to their device, copying it to an external drive, or transferring it to any cloud storage service they desire.

Cyberhaven’s 2022 Insider Risk Report found that 9.4% of employees will exfiltrate some form of sensitive information within a six-month period.

44.6% of sensitive data that is exfiltrated is client or customer data, 13.8% is source code, and 8% is PII.

The most common exfiltration vectors are personal cloud storage at 27.5%, personal webmail at 18.7% and corporate email at 14.4%. Removable media, such as USB drives, lands at 14.2%. Dropbox wins the prize as the most used cloud storage service for exfiltration.

The research also revealed that when an employee resigns, 68.7% of data exfiltration incidents occur in the 2 weeks leading up to their resignation.

What’s interesting is that there is a 23.1% increase in exfiltration incidents the day before employees are fired. I’m guessing these employees could see the writing on the wall and knew they were about to be terminated. What about the day they are fired? A 109.3% increase (above baseline incidents)! This is definitely an argument for terminating access before their exit meeting.

How to prevent insider threats

Insider threats are a complex problem and a difficult one to fully combat. I could write another 50 pages and not cover it all, so I will leave you with a few areas to start. CISA has an Insider Threat Mitigation Guide that I highly recommend, and the points I am discussing below are also covered in that guide.

  • Have a documented asset register: This is a critical piece to identify what is important to your organisation and what you rely on to be able to successfully run and grow your business. This is not just systems, but also information itself, where it is stored, and CIA ratings for each asset. You can read more on how to develop an asset register in our What is an asset register article. With an asset register in place, you know what you are protecting, can define controls to ensure it is adequately protected, and can monitor critical assets for data exposure or exfiltration.
  • Define your threat landscape: Understanding what threats you face is critical in being able to defend against them. These are threats specific to your business but also threats your industry is seeing, such as targeted campaigns. It also factors in threats that could face your third-party suppliers or you as a third-party supplier. You need to also identify threats that could target specific assets or data you hold.
  • Define and implement detection strategies: Based on your defined assets and threat landscape, what do you need to monitor for and how will this be achieved? A perfect example is the statistics around employees stealing data when resigning or being terminated. There need to be detection protocols in place to identify these issues so they can be responded to. A minimal sketch of how an asset register could drive such a detection rule follows this list.
  • Have an incident response plan in place: What happens if or when you have an incident? There are two scenarios to address. The first is what to do if your data is exposed or exfiltrated. The second is that, if your detection strategies are in place and effective, it’s critical to be able to respond in a timely manner to reduce exposure or exfiltration; it may be possible to stop the loss of data or at least limit how much is lost. A common gap in incident response plans is testing them. This is critical to ensure they are fit for purpose, and to identify any weaknesses or areas of improvement so that, if the need arises, your plans work as expected.
  • Implement a means of reporting: Part of your detection strategy should be developing the right culture that encourages reporting of potential threats, weaknesses in security controls, indicators of a potential insider threat, or any other relevant concern. There needs to be clear and easy processes for your personnel to take if they need to report something.
  • Create a threat management team: This could be the same team that is part of your incident response planning and management but needs to be able to assess and respond to actual and potential insider threats.
  • Implement training and awareness: To be effective, you need to build a cyber-aware culture. This goes beyond just insider threats, but insider threats specifically are an area that needs to be discussed and rarely is. Awareness and understanding are imperative in reducing problems like shadow IT. Personnel need to be made aware of the repercussions their actions can have on an organisation, no matter how pure their intentions were.
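
To tie the asset register and detection points together, here is a minimal sketch of how a register with CIA ratings could drive a simple exfiltration detection rule. The field names, ratings, destination list and the 14-day window (borrowed from the resignation statistics above) are all assumptions for illustration, not a prescribed implementation.

    # Minimal sketch: an asset register with CIA ratings feeding a basic
    # exfiltration detection rule. All names, fields and thresholds are
    # illustrative assumptions, not a specific product or standard.
    from datetime import date, timedelta
    from typing import Optional

    ASSET_REGISTER = {
        "customer_db":    {"location": "prod-sql-01", "confidentiality": "high",
                           "integrity": "high", "availability": "high"},
        "marketing_site": {"location": "public-cdn", "confidentiality": "low",
                           "integrity": "medium", "availability": "high"},
    }

    PERSONAL_CLOUD_DOMAINS = {"dropbox.com", "drive.google.com", "onedrive.live.com"}

    def classify_upload(event: dict, resignation_date: Optional[date] = None):
        """Return an alert severity for an upload event, or None if not alertable."""
        asset = ASSET_REGISTER.get(event["asset"])
        if asset is None or asset["confidentiality"] != "high":
            return None  # only watch high-confidentiality assets in this sketch
        if event["destination"] not in PERSONAL_CLOUD_DOMAINS:
            return None  # only personal cloud storage destinations in this sketch
        window = timedelta(days=14)
        if resignation_date and timedelta(0) <= resignation_date - event["date"] <= window:
            return "critical"  # the highest-risk window per the statistics above
        return "high"

    # Example: a departing employee uploads the customer database to personal cloud storage.
    event = {"asset": "customer_db", "destination": "dropbox.com", "date": date(2023, 3, 20)}
    print(classify_upload(event, resignation_date=date(2023, 3, 28)))  # "critical"

In practice, logic like this would live in your DLP or SIEM tooling; the point is that the asset register is what tells the detection layer which data is worth alerting on.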

What’s scary is I’ve only just started scratching the surface of this topic, but hopefully it has given you an idea of the potential risks you face, and some strategies to start defending against these threats. As you’ll note in the mitigation suggestions above, you’re not seeing anything really new or groundbreaking. They are solid, baseline security controls that will address risks far beyond those posed by insider threats. Get these in place and you are well on your way to maturing your overall security posture.

Download the PDF Now

Download our reference PDF summarising the various insider threats you should be aware of and 7 steps to defending against them. Perfect for passing around your business to help build awareness.

David Morrison

David is the Co-CEO of Morrisec. With a wealth of experience spanning more than two decades, David has established himself as a leading cybersecurity professional. His expertise and knowledge have proven invaluable in safeguarding organisations from cyber threats across a gamut of industries and roles.
