Fine-tuning SIEM data sources for performance
Technical Articles ID: KB93502
Last Modified: 2022-06-23 11:12:36 Etc/GMT

Environment
Enterprise Security Manager (ESM)
Enterprise Event Receiver

Summary
You can optimize an Enterprise Event Receiver and ESM for the best data source performance.
By default, ESM assigns new data sources all available rules. It also treats each rule with the same priority, even though real-world event counts vary widely from rule to rule. This approach makes it easy to rapidly add data sources to the SIEM environment, but you can gain performance by optimizing the rules for each data source.

Solution
There are several levels of optimization, listed below in order from least effective to most effective. Some of these techniques are easily implemented; others are more advanced and might require assistance from Professional Services, who can optimize your SIEM data sources for you.
Disable rules for low-value, high-event-count events
Many data sources send several kinds of logs, some more important than others. Logs that are low in importance but high in volume reduce the overall events-per-second capacity of the Receiver. If a particular event type generates a high number of events per second, and its logs aren't important to you, you can disable the rules responsible for those events. Disabling a rule doesn't prevent the Receiver from collecting and sending those events to the ESM; it only prevents the events from reaching the ESM dashboards. The performance gain is modest, but it helps reduce the amount of data that the ESM needs to sift through. A sketch of how to find candidate rules follows.
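As an illustration, the following Python sketch ranks rules by observed event volume to surface disable candidates. The rule IDs and counts are hypothetical; in practice you would pull per-rule event totals from an ESM query or report:

# Rank parsing rules by observed event volume to find candidates for
# disabling. The data below is made up; in practice, export per-rule
# event counts from an ESM query or report.
from collections import Counter

# rule_id -> event count over a sample window (hypothetical numbers)
event_counts = Counter({
    "43-263050710": 980_000,   # e.g., routine connection teardown logs
    "43-263050780": 450_000,   # e.g., DNS query logged
    "43-263050120": 1_200,     # e.g., admin login failure
    "43-263050990": 300,       # e.g., configuration change
})

# Review the highest-volume rules first; if their events are low value,
# they are the best candidates to disable in the Policy Editor.
for rule_id, count in event_counts.most_common():
    print(f"{rule_id}: {count} events")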
Turn off the Parse as generic syslog setting
Each Advanced Syslog Parser (ASP) data source has a section in its properties for handling events that don't match any rule. The option can be set to Do nothing, Log as unknown, or Parse as generic syslog. These settings can be useful for troubleshooting, but we recommend turning the generic syslog setting off because it carries a considerable performance penalty. It's intended as a short-term aid when setting up a new data source that doesn't yet have rule files for all of its events. Essentially, it runs each unmatched event through the original data source and ASP rules a second time, with thousands of extra generic syslog rules added. Because so many extra rules are applied to each event, the performance impact is large; having the generic rules setting enabled on even a few data sources can slow down the entire Receiver. The rough cost model below shows why.
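As a rough illustration of why the fallback is expensive, this sketch models the worst-case rule comparisons per unmatched event. The rule counts are assumptions, not measured values:

# Rough cost model for the "Parse as generic syslog" fallback.
# The rule counts used below are illustrative, not measured.

def comparisons_per_unmatched_event(data_source_rules: int,
                                    generic_syslog_rules: int) -> int:
    """Worst-case rule comparisons for one event that matches nothing.

    First pass: the event is checked against every data source rule.
    Fallback pass: it's checked again against the data source rules
    plus the generic syslog rule set.
    """
    first_pass = data_source_rules
    fallback_pass = data_source_rules + generic_syslog_rules
    return first_pass + fallback_pass

# With 1,000 data source rules and ~5,000 generic rules (hypothetical),
# one unmatched event costs 7,000 comparisons instead of 1,000.
print(comparisons_per_unmatched_event(1_000, 5_000))  # 7000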
Reorder parsing rules
Each event that comes into a data source must be checked against that data source's rules until a match is found. The more rules an event must be checked against before it matches, the slower the parsing. By changing the order in which the rules are checked, you can reduce this cost: arrange the rules so that the most common events are likely to match first. For example, if a data source has 1,000 rules and its most common event is type 51, place the check for type 51 first rather than behind the 999 other rules. The sketch below quantifies the gain.
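The following Python sketch quantifies the effect with made-up frequencies: one dominant event type whose rule sits at the end of a 1,000-rule list that is checked linearly:

# Average rule checks per event under linear matching, before and after
# ordering rules by how often they fire. Frequencies are hypothetical.

def avg_checks(rule_frequencies: list[float]) -> float:
    """Expected comparisons per event when rules are checked in list order.

    A rule at position i (0-based) costs i + 1 comparisons when it matches.
    """
    total = sum(rule_frequencies)
    return sum((i + 1) * f for i, f in enumerate(rule_frequencies)) / total

# One dominant event type (say, type 51) buried at the end of the list,
# firing 10,000x more often than each of the other 999 rules.
freqs = [1.0] * 999 + [10_000.0]

print(f"original order: {avg_checks(freqs):.1f} checks/event")      # ~954.6
print(f"sorted by frequency: "
      f"{avg_checks(sorted(freqs, reverse=True)):.1f} checks/event")  # ~46.4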
Create filter rules
Filter rules are similar to disabling parsing rules, except that they block collection of the original event altogether. Events collected by the Receiver are checked against the filter rules, and if an event matches a filter, it's dropped at the collection side. The event is never sent to the parsing pipeline or to the ESM, which offers the highest performance gain. If a particular low-value event fires at higher counts than more important events, and there's no requirement to track it in the ESM dashboards, a filter rule makes sense. Note that while a filter rule removes events from Receiver and ESM processing, the logs are still sent to the Enterprise Log Manager (ELM) for compliance. So, even if an event is filtered from the Receiver and ESM, it can still be found with an ELM search. Filter rules are an advanced topic and require familiarity with writing PCRE regular expressions. An illustrative example follows.
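For illustration only, here is the filtering idea sketched in Python, whose re module supports most PCRE syntax. The pattern and sample log lines are hypothetical, and real filter rules are configured in the ESM Policy Editor rather than in code:

# Illustrative collection-side filter: drop raw log lines that match a
# PCRE-style pattern before they enter the parsing pipeline. The pattern
# and sample logs are hypothetical.
import re

# Example: drop noisy, low-value connection teardown messages.
FILTER_PATTERN = re.compile(r"%ASA-6-302014: Teardown TCP connection")

def collect(raw_logs):
    """Yield only the log lines that survive the filter rules."""
    for line in raw_logs:
        if FILTER_PATTERN.search(line):
            continue  # filtered: never parsed, never reaches the ESM
        yield line

sample = [
    "%ASA-6-302014: Teardown TCP connection 12345 for outside:10.0.0.5",
    "%ASA-4-106023: Deny tcp src outside:10.0.0.9 dst inside:10.0.0.2",
]
print(list(collect(sample)))  # only the Deny event remains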