USM Anywhere™

USM Anywhere Log Data Enhancement

When evaluating threats to your systems, the more complete and clear the context of an incident is, the more accurate and efficient USM Anywhere can be in identifying and responding to those threats. Log data is one of the key sources of this threat data context, providing a tremendous amount of information about network events. Every network connection, authentication request, file transfer, and privilege escalation generates a log message.

However, many of these log messages were not originally designed to be used for security purposes. There are no official standards for log contents (although there are best practices), so log message content is often inconsistent and incomplete.

For example, look at a typical log message generated by an authentication event:

	"outcome" : "Allow",
	"type" : "Authentication",
	"source" : "",
	"destination" : "",
	"time" : "2018-10-17T19:03:26+00:00"

This message is brief and doesn't provide enough context for incident analysis. USM Anywhere can improve that context by normalizing and enriching the data provided in the log message.

Data Normalization

The first step USM Anywhere takes when it analyzes your system logs is to normalize them so that all incoming data uses the same terminology. In this context, normalization means mapping each vendor-specific field to a standard term. For example, one vendor may use the term "outcome" and another "result" to describe the success or failure of an authentication attempt. USM Anywhere normalizes these two different attributes, replacing them with a single, standard term. Likewise, fields such as source, source_ip, client, and client_ip all need to be mapped to the same term so that events from different vendors can be used for correlation and alarm generation.

The following example shows how normalization works. Note that USM Anywhere preserves the original log message as a best practice, in case you need to share it with a vendor or refer to the original alert. Because the original message is stored alongside the normalized fields, the normalization phase typically increases the size of the log message by around 100%.

	"log" : "{ \"outcome\" : \"Allow\",
	           \"type\" : \"Authentication\",
	           \"source\" : \"\",
	           \"destination\" : \"\" }",
	"source_address" : "",
	"destination_address" : "",
	"event_outcome" : "ALLOW",
	"event_name" : "Authentication",
	"timestamp_occured" : "2018-10-17T19:03:26+00:00"
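The mapping step above can be sketched in a few lines of Python. This is a minimal illustration only, not USM Anywhere's actual implementation; the alias table and function name are assumptions for the example.

```python
import json

# Hypothetical alias table: vendor-specific field -> normalized field.
FIELD_ALIASES = {
    "outcome": "event_outcome",
    "result": "event_outcome",
    "source": "source_address",
    "client": "source_address",
    "client_ip": "source_address",
    "destination": "destination_address",
    "type": "event_name",
    "time": "timestamp_occured",
}

def normalize(raw_message: str) -> dict:
    """Map vendor fields to standard names, preserving the original log."""
    original = json.loads(raw_message)
    normalized = {"log": raw_message}  # keep the raw message for reference
    for key, value in original.items():
        normalized[FIELD_ALIASES.get(key, key)] = value
    return normalized

event = normalize('{"outcome": "Allow", "type": "Authentication"}')
# event carries both the raw text ("log") and the normalized fields
```

Whether a vendor sends "outcome" or "result", downstream correlation only ever sees event_outcome, which is the point of the exercise.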

Data Enrichment

Normalization enables you to analyze all of the log messages USM Anywhere receives in a consistent way. Given how incomplete many log messages are, it also makes sense to use this same processing stage to add valuable information to the log messages, which helps USM Anywhere perform better incident detection.

Data enrichment is the process by which that valuable information is added to log messages. The USM Anywhere infrastructure has a large amount of contextual data about the network and systems that it can attach to log messages to fill in the gaps and enhance threat detection. It can also leverage external databases of information such as the geographic location of IP addresses, device types, and known threats.

These are examples of information that can be added through data enrichment:

  • Device Identity
  • Geolocation
  • Collection details and flags

Device Identity

The majority of servers rely on Dynamic Host Configuration Protocol (DHCP) for dynamic IP address allocation. From a security point of view, this means that identifying and containing threats is much more difficult. By the time a system is identified as compromised, it may be on the network in a completely different place with a completely different IP address. To address that problem, USM Anywhere uses the network context it has to collect and include the media access control (MAC) address, fully qualified domain name (FQDN) and a unique identifier for the system, depending on which are known:

"source_asset_id" : "f8ebb373-b551-43d0-a628-a00771b5d0c1",
"source_mac" : "98:01:A7:B4:D8:47",
"destination_fqdn": "ip-10-6-255-129.ec2.internal",
"source_fqdn": "ip-10-6-2-102.ec2.internal",
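The idea behind these fields can be sketched as a lookup against an asset inventory keyed by MAC address, so a device keeps a stable identity even when DHCP hands it a new IP address. The inventory structure and function below are assumptions for illustration, not USM Anywhere's internal API.

```python
# Hypothetical asset inventory keyed by MAC address. Because the MAC and
# asset ID do not change when DHCP re-assigns an IP, identity stays stable.
ASSET_INVENTORY = {
    "98:01:A7:B4:D8:47": {
        "asset_id": "f8ebb373-b551-43d0-a628-a00771b5d0c1",
        "fqdn": "ip-10-6-2-102.ec2.internal",
    },
}

def enrich_source_identity(event: dict, mac: str) -> dict:
    """Attach stable identity fields to the event if the device is known."""
    asset = ASSET_INVENTORY.get(mac)
    if asset:
        event["source_mac"] = mac
        event["source_asset_id"] = asset["asset_id"]
        event["source_fqdn"] = asset["fqdn"]
    return event

event = enrich_source_identity({"event_name": "Authentication"},
                               "98:01:A7:B4:D8:47")
```

An investigation can then pivot on source_asset_id rather than on an IP address that may already belong to a different machine.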


Geolocation

Knowing where your network connections terminate is important when deciding whether traffic should be permitted, blocked, or more carefully monitored. Geolocation can play a role in deciding whether a given incident is worthy of more attention. USM Anywhere augments logs with geolocation information for both source and destination. In the following example, this data enables an operator to quickly determine that this particular destination is probably not an issue:

"destination_address" : "",
"destination_name" : "AD Server",
"destination_asset_id" : "8cdf98a1-533d-9ec2-b5bc-3424caecef15",
"destination_organisation" : "Microsoft Azure",
"destination_city" : "Redmond",
"destination_fqdn" : "",
"destination_hostname" : "ad",
"destination_latitude" : "47.6801",
"destination_longitude" : "-122.1206",
"destination_region" : "WA",
"destination_country" : "US",
"destination_country_registered" : "US",
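The enrichment above amounts to a range lookup against a geolocation database. The sketch below uses a hard-coded table and a made-up network range purely for illustration; a real deployment would query a maintained GeoIP-style database.

```python
import ipaddress

# Hypothetical geolocation table. The network range and values here are
# assumptions for the example, not real provider data.
GEO_DB = {
    ipaddress.ip_network("13.77.0.0/16"): {
        "organisation": "Microsoft Azure",
        "city": "Redmond",
        "region": "WA",
        "country": "US",
        "latitude": "47.6801",
        "longitude": "-122.1206",
    },
}

def enrich_destination_geo(event: dict, dest_ip: str) -> dict:
    """Add destination_* geolocation fields when the IP is in a known range."""
    addr = ipaddress.ip_address(dest_ip)
    for network, geo in GEO_DB.items():
        if addr in network:
            for field, value in geo.items():
                event["destination_" + field] = value
            break
    return event

event = enrich_destination_geo({"event_name": "Authentication"}, "13.77.1.2")
```

An operator scanning the enriched event sees "Microsoft Azure / Redmond, WA" at a glance instead of a bare IP address.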

Collection Details and Flags

USM Anywhere also includes additional information about how the log message was acquired and how it was processed. This information gives the security analyst and the correlation algorithms insight into the source of the log, when it was received by a sensor, and how it was processed. For example, was_fuzzied = true means that the log message came from a source for which USM Anywhere doesn't have a specialized plug-in, so not all fields may have been normalized. If the log is key to an investigation, the operator should review the original log message to ensure nothing was overlooked.
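The flag behaves like a fallback path in a parser: use the specialized plug-in when one exists, otherwise parse generically and mark the result. The registry and function names below are assumptions for illustration.

```python
# Hypothetical registry of log sources that have a specialized parser.
SPECIALIZED_PLUGINS = {"cisco-asa", "windows-security"}

def parse_with_fallback(source_type: str, raw_message: str) -> dict:
    """Use the specialized plug-in when one exists; otherwise fall back to
    generic parsing and flag the event so analysts know fields may be missing."""
    event = {
        "log": raw_message,
        "was_fuzzied": source_type not in SPECIALIZED_PLUGINS,
    }
    # ... field extraction would happen here ...
    return event

known = parse_with_fallback("cisco-asa", "deny tcp inside:10.0.0.5 ...")
unknown = parse_with_fallback("unknown-appliance", "user=alice action=login")
# unknown["was_fuzzied"] is True: review the raw log during an investigation
```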

Impact on Log Storage

Because USM Anywhere adds data to log messages, the size of the original log message inevitably grows. Very sparse messages can grow by as much as 1,860%. The messages themselves are still small in the grand scheme of things, typically growing from less than 250 B to as much as 2.6 KB, but this adds up over time. The good news is that the amount of metadata added is stable; that is, it doesn't grow much larger or shrink for different event classes, so with careful planning, storage use remains quite predictable. For larger events (for example, events coming from network-based intrusion detection systems (NIDS) and Amazon Web Services (AWS)), the percentage drops significantly because the messages start out quite large. However, for small events such as the one in the previous example, enrichment can have a noticeable impact on the total amount of data stored.

These are some syslog- and AWS-heavy data points for planning purposes:

Syslog-heavy deployment

From a sample size of 599,979 events

  • Total size including enriched data in bytes: 1,612,790,164
  • Total size of just log data in bytes: 145,781,057
  • Average log size in bytes: 243
  • Average log size with enriched data: 2,688
  • Increase in size: 1,106%

AWS-heavy deployment

From a sample size of 500,000 events

  • Total size including enriched data in bytes: 1,934,740,282
  • Total size of just log data in bytes: 711,502,141
  • Average log size in bytes: 1,423
  • Average log size with enriched data: 3,868
  • Increase in size: 272%
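Note that the "increase in size" figures above express the enriched size as a percentage of the raw size. The per-event averages follow directly from the sample totals; a quick back-of-the-envelope check against the syslog-heavy numbers:

```python
# Syslog-heavy sample, using the totals from the figures above.
events = 599_979
enriched_total = 1_612_790_164   # bytes, including enriched data
raw_total = 145_781_057          # bytes, original log data only

avg_raw = round(raw_total / events)            # average raw log size
avg_enriched = round(enriched_total / events)  # average enriched size
growth = round(avg_enriched / avg_raw * 100)   # enriched size as % of raw

print(avg_raw, avg_enriched, growth)  # 243 2688 1106
```

The same arithmetic on the AWS-heavy sample yields a ratio of roughly 272%, matching the figure above; enrichment overhead matters far less when the raw messages are already large.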

What Happens When You Reach the Tier Limit?

If you find yourself running into problems with inadequate storage space, your first step should be to review your logging strategy with AT&T Cybersecurity Technical Support, or your service provider. It may be that you don’t need to send as many logs as you are. However, it's better to err on the side of logging too much rather than logging too little, since lost logs can't be recovered and security investigations can lead in unexpected directions.

When approaching your monthly storage limit in USM Anywhere, you have two choices: just rely on transient mode, or actively prune your consumption with event filters. See Reaching the Monthly Usage Limit for more information.

Transient Mode

USM Anywhere calculates how much space you have consumed and projects how much you will consume during the month. If projected consumption exceeds the monthly capacity, transient mode is turned on automatically.

Event Filtering

If you want to be proactive with your data consumption, consider reducing the amount of data stored by using filters. Event filtering allows events to be dropped before they enter correlation and persistence, and before they consume any of the monthly storage allotment. Filtering enables you to define a set of rules for fields; events that match these rules are dropped. This lets you easily exclude certain types of events that you don't want to enter the system. When filtering, it's important to understand the impact:

  • Filtered events are not stored within cold storage.
  • Filtered events are not correlated. Alarms are not generated off filtered events.
  • Filtered events are dropped from going into hot storage. You will not see them within your events view.

When using filters, make sure that you precisely define the criteria for events to be dropped. If a filter rule is too broad, you may drop events that you are interested in keeping.
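A filter of this kind can be sketched as a list of rules, where an event is dropped only when every field in some rule matches. The rule format below is an assumption for illustration, not USM Anywhere's filter syntax; note how the example rule pins three fields so that only a specific noisy source is dropped.

```python
# Hypothetical filter rules: an event is dropped when ALL fields in a
# rule match. Narrow rules avoid dropping events you want to keep.
FILTER_RULES = [
    {
        "event_name": "Authentication",
        "event_outcome": "ALLOW",
        "source_fqdn": "ip-10-6-2-102.ec2.internal",
    },
]

def should_drop(event: dict) -> bool:
    """True if any rule matches all of its fields exactly."""
    return any(
        all(event.get(field) == value for field, value in rule.items())
        for rule in FILTER_RULES
    )

noisy = {"event_name": "Authentication", "event_outcome": "ALLOW",
         "source_fqdn": "ip-10-6-2-102.ec2.internal"}
failed = {"event_name": "Authentication", "event_outcome": "DENY"}
# noisy is filtered out; failed logins are kept for correlation
```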

Is There Any Way of Freeing Space?

If you are in transient mode and wish to free up space to allow for more events, you can purge the last 10 days of events. This affects only events and doesn't purge any alarms that were generated from them. Additionally, the purge does not affect any events in cold storage. It removes those events from hot storage, so you will no longer see them in your events view.

Compliance Considerations for Filtering and Purging

It's important to remember that most security compliance regimes require storage of 90 days of logs, so purging logs may put you in violation of your compliance regulations. It is also important to understand whether there are any compliance implications to filtering out data. If a filter will restrict the amount of data logged by a system subject to Payment Card Industry (PCI) or similar requirements, check with your compliance team first.