An insider threat is data that is leaked or misused, whether released accidentally or purposefully, in ways that could cause harm or expose the data to individuals who lack legitimate access.
Insider threats are among the most common organizational security threats, and they’re most often committed by regular people making regular mistakes.
Sometimes not much. Sometimes a lot. That vague answer depends on context: a single leaked trade secret can do enormous damage, while many smaller mistakes can quietly accumulate unnoticed.
Some insider threats can bring down entire companies, embarrass people, threaten customer or business partner safety, cost money, or put a country’s national security or mission-critical infrastructure at risk. Just search "insider threat" in any search engine to see a dozen United States federal agencies talk about the risks of insider threats:
National Institute of Standards and Technology’s (NIST’s) Computer Security Resource Center (CSRC)
Even we, the leading enterprise open source company that believes everything thrives in the open, approach insider threats with caution. Under our open source development model, Red Hat® Enterprise Linux® is released publicly through the CentOS Stream open source community, but only after multiple reviews, tests, and quality control processes.
Because—contrary to popular belief—most insider threats are not perpetrated by former employees with malicious intent. They’re most often mistakes caused by normal people. Like you.
You’re probably not thinking of becoming a threat actor. You likely don’t even consider yourself an insider. But think of all the valuable information you have legitimate access to every day: intellectual property, software engineering processes, organizational network credentials, and company performance information.
That’s why it’s important to pay attention to the questions at the end of your corporate ethics courses. Don’t breeze past them; really think about them. They’ll help you get better at insider threat detection in the future.
- What are some potential insider threat indicators?
- What scenario might indicate a reportable insider threat?
- How many potential insider threat indicators can you spot?
There are generally 3 classes of insider threats:
- Malicious insider: Someone actively trying to do harm or benefit from stealing or damaging data or services.
- Whistleblower: Someone who believes the company is doing something wrong.
- User error: Someone who simply makes a mistake.
This one deserves its own category because it’s so common. Accidental insider threats happen when, for example, someone bypasses change procedures and breaks a critical service, or accidentally leaks credentials or customer data to the internet.
Malicious software that acts against the interest of the user. Malware can affect not only the infected computer or device but potentially any other device the infected device can communicate with.
A form of social engineering in which attackers glean private information from insiders as part of seemingly normal conversations.
And then, there’s phishing. Phishing is a form of social engineering in which an attacker tries to trick someone into handing over sensitive information or personal data through a fraudulent request, such as a spoofed email or a scam offer. If an insider instigates a phishing attack, it’s an insider threat. If the instigator is outside the organization, perhaps using malware, it’s considered another type of security threat.
Security is everyone’s responsibility. Security teams can maintain security policies and help everyone become more mindful of security protocols, but relying on specialists alone to control every aspect is an exercise in futility.
Protective systems are always in place, whether or not your own company has security controls, computer emergency response teams (CERTs), or insider threat programs. Consider all the local, state, federal, and international agencies with cybersecurity and antitrust mandates.
While the most obvious way to protect against insider threats is to maintain well-managed permissions and firewalls for the sake of data loss prevention, there are also 4 components of an effective security team:
- Education: Security teams can decrease the chances of external threats and insider attacks just by teaching people how to do things correctly, or emphasizing how much power everyday employees have. Educating insiders that many flagged reports will not be threatening can build a sense of community—that the security team exists as a partner, not judge and jury.
- Default to secure: This is the simplest route to security: lock it all down and make access the exception to the rule. It’s easier to do the right thing by default when the easiest route is also the secure route. Defaulting to secure can look like role-based access control (RBAC), or providing users only exactly what they need by following the principle of least privilege.
- Good communication: Try to encourage people to feel comfortable talking to you. This will encourage more people to tell you the truth, the whole truth, and (ideally) the proactive truth. Set yourself up as a partner: someone to work alongside to verify that jobs are performed with security in mind.
- Humility: Remember that the human condition is fraught with mistakes. You are not a judge and jury. You are the fire department: you fix the problem. Try not to dwell on the fact that a mistake was made. Punishing mistakes by default will forge a culture of fear, where people wait until the last minute to flag issues.
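The "default to secure" idea above can be sketched as a default-deny access check: access is granted only when a role explicitly holds a permission, and everything else is refused. This is a minimal illustrative sketch; the roles, permission names, and `check_access` helper are hypothetical examples, not part of any specific product.

```python
# Minimal sketch of default-deny, role-based access control (RBAC).
# Roles and permission names here are hypothetical examples.

ROLE_PERMISSIONS = {
    "engineer": {"read:source", "write:source"},
    "support": {"read:tickets", "write:tickets"},
    "auditor": {"read:source", "read:tickets"},
}

def check_access(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly includes the permission.

    Unknown roles and unlisted permissions are denied by default,
    which is the "default to secure" posture described above.
    """
    return permission in ROLE_PERMISSIONS.get(role, set())

# Access is the exception, not the rule:
print(check_access("engineer", "write:source"))  # True: explicitly granted
print(check_access("support", "read:source"))    # False: never granted
print(check_access("intern", "read:tickets"))    # False: unknown role denied
```

The design choice that matters is the fallback: `ROLE_PERMISSIONS.get(role, set())` returns an empty permission set for anything unrecognized, so forgetting to configure a role fails closed rather than open.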
Because we start with the upstream open source communities to make enterprise-ready software that’s hardened, tested, and securely distributed. The results are enterprise open source products you can use to build, manage, and automate security across hybrid clouds, supply chains, applications, and people.