Indicators of Compromised Behavior (IoCd-B)

You cannot predict or control how an attacker will behave. You cannot predict what tools or techniques they will use. You can't even imagine what malware or 0-days they have in their arsenal.

Relying on known malicious behavior is the biggest weakness of signature-based intrusion detection systems. Once the malicious behavior changes, your detection fails.

The same problem applies to Indicators of Compromise (IoCs). They can be a bit more complex, trying to identify malicious behavior based on additional factors and techniques, but you are still dependent on the attacks, and we know that's something we can't control or predict. So we must start thinking about other concepts to identify a compromise.

Accounting for Indicators of Compromised Behavior (IoCd-B)

There is one thing that is constant and that you can predict: your own users and servers, and how they interact with each other.

Over a period of time, the same person (or server) will start to display patterns that remain consistent and can be measured. With this perspective, we should force ourselves to move beyond malicious behavior and pay special attention to normal behavior and how it changes.

Similar to the age-old argument of whitelist versus blacklist approaches, we have learned over time that a whitelist model is more effective than a blacklist, and it scales better.

I believe this to be a missing piece that very few companies are doing and trying to leverage as part of their security posture. Let me try to explain with a few examples:

Example 1: Company Login

Let's look at the login pattern of a fictional mid-size business, B1.

B1 is US-based and has two offices, one in California and one in Florida. It has a hundred employees (E1..E100). All of them have webmail and VPN access.

What’s the normal behavior here?

1- (E1..E100) Access (B1 VPN) From USA (FL, CA)
2- (E1..E100) Access (B1 Mail) From USA (FL, CA) via Webmail

We know we can expect users to access their email from either Florida or California; those are knowns. There are obviously exceptions to the rule, such as employees traveling. You can scale these rules out to meet your own unique configuration, but the approach is the same.

Once your profile is set, any access that doesn’t meet your configuration warrants a review.
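As a rough sketch of what such a profile could look like (the service names, allowed states, and rule shape here are my own illustrative assumptions, not a specific product's configuration):

```python
# Hypothetical location profile for business B1: each service maps to
# the set of (country, region) pairs we consider normal.
ALLOWED_LOCATIONS = {
    "vpn": {("US", "CA"), ("US", "FL")},
    "webmail": {("US", "CA"), ("US", "FL")},
}

def review_needed(service, country, region):
    """Return True when a login falls outside the known profile."""
    allowed = ALLOWED_LOCATIONS.get(service, set())
    return (country, region) not in allowed

print(review_needed("vpn", "US", "CA"))   # normal office login -> False
print(review_needed("vpn", "DE", "BE"))   # unexpected country -> True
```

Resolving a source IP to a (country, region) pair would be handled by a geolocation lookup in front of this check; the exceptions (traveling employees) would become additional entries or temporary overrides.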

Example 2: Login time

The same analysis can be done for the login time of the employees.

Most employees access both the VPN and webmail from 6:30am to 9:30am. Some odd cases happen from 5:30am to 6:30am and from 9:30pm to 11:30pm. Those may or may not be too noisy, but that will be your prerogative to configure.

So what do we do?

Using the same model as Example 1, we create a profile for login times. You can set it between 5am and 12am to avoid the noise, or you can spend time tuning and writing rules for the exceptions. It could take time, but it could reduce your false negatives over time.

Once that is set, you now have a new rule that warrants review if broken.
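A minimal sketch of the coarse-window approach, assuming the 5am cutoff from above; the per-employee exception list (a hypothetical night-shift worker "E7") is my own illustration of how tuning for exceptions could look:

```python
from datetime import time

# Anything from 05:00 to midnight is treated as normal (the coarse window).
NORMAL_START = time(5, 0)

# Hypothetical tuned exceptions: employees allowed outside the window.
EXCEPTIONS = {"E7": [(time(1, 0), time(3, 0))]}  # night-shift worker

def login_time_ok(employee, t):
    """Return True when a login time matches the profile or an exception."""
    if t >= NORMAL_START:
        return True
    return any(start <= t <= end for start, end in EXCEPTIONS.get(employee, []))

print(login_time_ok("E1", time(9, 0)))  # normal morning login -> True
print(login_time_ok("E1", time(2, 0)))  # 2am, no exception -> False
print(login_time_ok("E7", time(2, 0)))  # covered by exception -> True
```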

Example 3: Login type

Most users only use webmail for email access. A few use IMAP, so we can't just flag one method over the other.

However, each individual employee (E) very rarely switches from one to the other.

So if E1 (Employee 1), who always accesses (B1 Mail) via webmail, suddenly uses IMAP and the connection comes from a new IP address, we flag it for review.

Example 4: Browsers

There are many browsers used in our business (B1), but each individual employee has a preference for a browser that they use more often.

Either way, you can now create a profile for which browsers access your internal portal and intranet. Something as simple as tracking the user, browser, and source IP works.

If we detect a change of behavior, where an employee switches browsers and connects from a different location, we flag it for analysis (review).

Example 5: Simultaneous logins

It is very hard for someone to be in two places at the same time. There are obviously proxies that can allow for such physics marvels, but they do not happen often, especially not for your average employee.

So if we see two different logins within a relatively small period of time, from two different cities, we escalate for review.
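This is often framed as an "impossible travel" check: compute the implied speed between two logins and flag anything no airplane could achieve. A sketch, where the speed threshold is my own assumption:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

MAX_SPEED_KMH = 900  # assumed ceiling, roughly airliner cruise speed

def impossible_travel(loc1, t1_hours, loc2, t2_hours):
    """Flag when the implied travel speed between logins is unachievable."""
    hours = abs(t2_hours - t1_hours)
    if hours == 0:
        return loc1 != loc2  # simultaneous logins from different places
    return haversine_km(*loc1, *loc2) / hours > MAX_SPEED_KMH

sf, miami = (37.77, -122.42), (25.76, -80.19)
print(impossible_travel(sf, 0, miami, 1))  # ~4200 km in 1 hour -> True
print(impossible_travel(sf, 0, miami, 6))  # plausible flight -> False
```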



Monitoring Behavior

Yes, the examples provided are very basic and watered down, intentionally so, but they hopefully help you better appreciate the concept of IoCd-B.

It's unfortunately not enough for us to keep placing our energy solely on attacker behavior and exploits; we have to start thinking beyond the norm. In these scenarios, their tactics make little difference.

These basic concepts can become very complex and can be applied to your entire organization: not just logins, but system usage, bandwidth usage (incoming and outgoing), programs running, sites visited, ports, etc.

What is the behavior of your environment (users, systems)? That's where we need to turn our attention today.

Monitoring Behavior with Open Source

This can be accomplished with the OSSEC FTS (First Time Seen) option, along with some patches for geolocation and better log correlation. Other logging tools can probably be leveraged the same way.

Hopefully, in a few months, I will release these patches as open source so everyone can leverage them with minimal work.





Posted in logging, thoughts by Daniel Cid (dcid)
