Social Engineering vs Mistakes: Two sources of pain, one process

There are a million ways for awful things to happen to your data and accounts. For example, someone accidentally commits their AWS access keys to a public GitHub repository, and attackers quickly run up $100,000 in charges mining cryptocurrency on expensive GPU-enabled instances. Or an “account support” caller warns that your account has fraudulent charges, which they can remove once they verify your credit card information. There are fake software updates that steal bank account information, and fictitious warnings about login failures to your corporate email with a link to “log in and verify access.” Not to mention account information leaked from one of the online services you use, possibly including your banking site. Although there are many causes of account exposure, they fall broadly into two categories: malicious intent and accidental leaks.

Malicious intent is the category you usually hear about on the news and from service providers. Account databases are hacked, phishing emails trick users into giving up information, or fake “service” calls claim your computer is infected. These attacks almost always aim to separate you from your money. They want your bank details to transfer money, or your credit card information to purchase items such as gift cards, a common way to launder stolen money.

Accidental leaks, or the “oops” described in Red Hat’s Security Detail episode on insider threats, have effects similar to malicious attacks but result from entirely different causes. The most common “oops” is a commit of sensitive data to a public code repository, but it can also be an accidental email, a bad paste into chat, or any other way a totally legitimate user puts their data where it shouldn’t be.

Both causes rely on someone giving up access to their data, but the difference is all about intent.

5 steps for handling a data exposure

Luckily, the general process for handling all of these events is the same, and you can even use these steps in your personal life to prepare for and respond to data exposure incidents. Those steps are:

  1. Have a plan

  2. Scope the problem

  3. Stop the bleeding

  4. Recover

  5. Take steps to prevent future problems

Let’s take two issues, one personal and the other corporate, to see how this plan works.

Mitigating a malicious exposure

A “support specialist” claiming to be from your credit card company calls and manages to get your credit card number, or a company that stores your card is compromised and exposes your card.

  1. Have a plan: Know which sites store your card for things like recurring payments. You can keep this list in a spreadsheet or on a piece of paper. This is the list of businesses you need to contact when something bad happens.

  2. Scope the problem: Have there been any charges on your account? Will anything important fail when you report the card stolen?

  3. Stop the bleeding: Call the credit card company, dispute any charges that look fraudulent, and request a new card number.

  4. Recover: Walk through your list of companies that store your card and have them replace the old card number with the new one.

  5. Take steps to prevent future problems: Some banks offer the ability to generate temporary or limited-use card numbers, which allow you to limit the damage if a service is compromised. You can also set alerts on your card charges, which helps you catch a compromise early. Consider setting up alerts on sites such as HaveIBeenPwned to tell you if important personal information is leaked.

It’s likely that many, many folks have dealt with this before, and it all sounds pretty normal. The process of handling leaks is mostly common sense and not panicking when bad things happen.

Addressing an accidental exposure

You are working on a project that automatically creates and manages cloud instances. In haste, you accidentally `git add` your credentials and push them to your public repository. One of your coworkers notices the credentials in the repo a few hours later.

  1. Have a plan: Know who to contact on your security team. They can provide next steps and assistance, as well as contacts for more complex analysis and mitigation if needed. In the absence of a team like this, make sure you know who has the ability to update and change credentials, and who to contact in an emergency (such as your cloud provider’s technical account manager, support email address, and so forth). Also, know which services consume your credentials: where are they running, and which user is responsible for them?

  2. Scope the problem: Did the credentials allow account creation, or only creating instances? Which cloud provider was it? These questions (and potentially more) need to be answered to determine where the boundaries of the issue lie.

  3. Stop the bleeding: Immediately disable the credentials with your cloud provider. Many attackers use a leaked key to create keys of their own, so stopping the bleeding includes looking for any unusual keys in the account. Once the keys have been revoked, search all regions for unfamiliar instances to stop the spend (the first sketch after this list shows what this can look like against AWS). Make sure to document everything you find, and follow your security team’s instructions on whether the instances should be completely terminated or just stopped for future forensics.

  4. Recover: Push all users and service accounts to update their credentials, and confirm that everything is running correctly (the second sketch after this list shows a single-user rotation). Take whatever actions your provider requests to clean up the situation, and you may receive a credit against any charges the attackers racked up. You might also want to run something such as BFG Repo-Cleaner to remove the offending commits. If many people contribute to a repo, this can require some coordination from its members, so read the instructions carefully.

  5. Take steps to prevent future problems: Here’s where you can do a lot to prevent this from happening again.

    1. First, keep config files out of repos. In Git, this is done with a `.gitignore` file in the repo. Work with your team to standardize naming for config files and other sensitive items, and add those names to the `.gitignore` of every new repo. This can block many, many “oops” events!

    2. Second, use tools such as Gitleaks to periodically scan your repos for sensitive data. This can find things like secrets, certificate keys, and other sensitive information. You can also add Gitleaks as a pre-commit hook to help block sensitive items from being added to the repo, even if they live in files not covered by `.gitignore` (a toy illustration of the pre-commit idea follows this list).

    3. Finally, make sure everyone knows the plan and who to contact in an emergency. No matter how small your team, having a plan of action is the most important thing for preventing a spiral of damage should something awful happen.
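
As referenced in step 3, here is a minimal sketch of what “stop the bleeding” can look like against AWS using boto3, folding in a couple of the scoping questions from step 2. The user name and key ID below are placeholders, and the script assumes you run it with separate, trusted administrator credentials; treat it as a starting point, not a finished runbook.

```python
"""Containment sketch for a leaked AWS access key (boto3).

Placeholders: the leaked key is assumed to belong to the IAM user
"automation-svc" and to use AWS's documented example key ID. Run this
with separate, trusted administrator credentials.
"""
import boto3

LEAKED_USER = "automation-svc"            # hypothetical IAM user
LEAKED_KEY_ID = "AKIAIOSFODNN7EXAMPLE"    # hypothetical leaked key ID

iam = boto3.client("iam")

# Stop the bleeding: deactivate the leaked key immediately.
iam.update_access_key(UserName=LEAKED_USER,
                      AccessKeyId=LEAKED_KEY_ID,
                      Status="Inactive")

# Scope: when was the key last used, and what could it do?
print("Last used:", iam.get_access_key_last_used(
    AccessKeyId=LEAKED_KEY_ID)["AccessKeyLastUsed"])
for policy in iam.list_attached_user_policies(
        UserName=LEAKED_USER)["AttachedPolicies"]:
    print("Attached policy:", policy["PolicyName"])

# Attackers often mint additional keys: flag anything you don't recognize.
for key in iam.list_access_keys(UserName=LEAKED_USER)["AccessKeyMetadata"]:
    print("Key on user:", key["AccessKeyId"], key["Status"], key["CreateDate"])

# Sweep every region for running instances so unexpected spend is visible.
for region in boto3.client("ec2").describe_regions()["Regions"]:
    regional = boto3.client("ec2", region_name=region["RegionName"])
    result = regional.describe_instances(Filters=[
        {"Name": "instance-state-name", "Values": ["pending", "running"]}])
    for reservation in result["Reservations"]:
        for instance in reservation["Instances"]:
            print(region["RegionName"], instance["InstanceId"],
                  instance["InstanceType"], instance["LaunchTime"])
```

Whether any rogue instances you find should be terminated or merely stopped is your security team’s call, as noted above; the sweep only makes them visible.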
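
For step 4, here is a similarly rough sketch of rotating a single IAM user’s key once the leaked one has been disabled. The user name is again a placeholder; real recovery usually means repeating this across many users and service accounts, and handing the new secret off through a secrets manager rather than printing it.

```python
"""Rotate one IAM user's access key after an exposure (boto3 sketch)."""
import boto3

USER = "automation-svc"  # hypothetical affected IAM user

iam = boto3.client("iam")

# Create a replacement key pair for the user.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("New key ID:", new_key["AccessKeyId"])
# new_key["SecretAccessKey"] is only returned once -- hand it off securely.

# Once the services using this user have switched to the new key and are
# confirmed healthy, delete the old, compromised key for good.
for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
    if key["Status"] == "Inactive":   # the key disabled during containment
        iam.delete_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"])
```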
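
Gitleaks ships its own pre-commit integration, which is what you should use in practice. Purely to illustrate the pre-commit idea from step 5, here is a toy Python hook that scans staged changes for a couple of obvious credential shapes; the patterns are illustrative and no substitute for a real scanner.

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: block commits whose staged changes look like secrets."""
import re
import subprocess
import sys

# Illustrative patterns only -- a real scanner such as Gitleaks knows many more.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key IDs
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key blocks
]


def main() -> int:
    # Look only at what is about to be committed.
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

    hits = []
    for line in diff.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only consider added lines, skip file headers
        if any(pattern.search(line) for pattern in PATTERNS):
            hits.append(line[:80])

    if hits:
        print("Possible secrets in staged changes; commit blocked:")
        for hit in hits:
            print("  " + hit)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Saved as `.git/hooks/pre-commit` and marked executable, it refuses the commit whenever an added line matches one of the patterns, which is exactly the kind of “safe base” described below.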

Defend against data exposure

These two events are similar, but the biggest difference is how you prevent them. Social engineering preys on our desire to help and our tendency to trust people who seem knowledgeable. Defending against it is primarily about training and skeptical thinking, alongside tools such as spam-blocking software and well-defined procedures that prevent giving out things we shouldn’t.

“Oops” events are magnified by not having tools in place to help catch the mistakes that we all make. You can educate people and give them processes, but it’s very easy to make one mistake that costs tens or hundreds of thousands of dollars. The key is to provide developers with a “safe base” that helps stop a small problem before it becomes a catastrophe.
