Log Analysis Across System Boundaries for Security, Compliance, and Operations

Dr. Anton Chuvakin

WRITTEN: 2007

DISCLAIMER:
Security is a rapidly changing field of human endeavor. Threats we face literally change
every day; moreover, many security professionals consider the rate of change to be
accelerating. On top of that, to be able to stay in touch with such ever-changing reality,
one has to evolve with the space as well. Thus, even though I hope that this document
will be useful to my readers, please keep in mind that it was possibly written years
ago. Also, keep in mind that some of the URLs might have gone 404; please Google
around.


This article covers the importance of utilizing a cross-platform log management
approach rather than a siloed approach to aggregating and reviewing logs for easier
security and compliance initiatives.

All IT users, whether malicious or not, leave traces of their activity in various logs,
generated by IT components, such as firewalls, routers, server and client operating
systems, databases, and even business applications. Such logs accumulate, creating
mountains of log data. At the same time, more organizations are starting to become
aware of the value of collecting and analyzing such data: it helps them keep an eye on the
goings-on within the IT infrastructure, the who's, what's, when's, and where's of
everything that happens. It also makes sense given the growing emphasis on data
security, as companies want to avoid becoming the next TJ Maxx, and given evolving
regulatory compliance mandates such as PCI DSS and SOX.

Of course, simply generating and collecting the logs is only half the battle. Being able to
quickly search and report on log data in order to detect, manage, or even predict security
threats and to stay on top of compliance requirements is the other half. However, logs
have traditionally been handled by reviewing them at their individual points of origin,
and usually only after a major incident. Such an approach simply does not work in this age
of data breaches and stringent compliance requirements. It is not only inefficient and
complex, but can cost a Fortune 1000–sized company millions of dollars and take weeks,
thus destroying or severely reducing the positive effects of such “log review.”

Today the call to action is shifting from merely having log data to centralized data
collection, real-time analysis and in-depth reporting and searching to address IT security
and regulatory compliance issues. Thus, the main log-related goals of a company should
be both to enable log creation and centralized collection and then to find a way to search
and review log data from disparate points of origin across the system boundaries, across
the IT infrastructure.
Let’s first look at what these logs are.

First, because logs contain information about IT activity, all logs generated within an
organization could have relevance to computer security and regulatory compliance.
Some logs are directly related to computer security: for example, intrusion detection
alerts are aimed at notifying users that known malicious or suspicious activity is taking
place. Other logs, such as server and network device logs, are certainly useful to
information security, but in less direct ways. Server logs, such as those from Unix,
Linux, or Windows servers, are automatically created and maintained by the server and
record activity performed on it; they represent activity on a single machine. Server logs are
especially useful in cases of insider incidents: an insider attack or abuse might
not involve any network access, might not trigger intrusion detection systems, and might
happen purely on one system (with the attacker using the console directly), so
server logs shed the most light on the situation. Relevant logged activities on a server
include login success/failure, account creation and deletion, account settings and
password changes, and file access, modification, or deletion; such entries usually identify
the user who performed the actions.
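As a hypothetical illustration of how such server-log entries get put to use, a few lines of Python can pull the user and source IP out of a Linux sshd failed-login line. The log line and regular expression here are illustrative only; real formats vary by platform and distribution:

```python
import re

# Illustrative pattern for an sshd "Failed password" syslog entry;
# real-world formats differ across distributions and syslog daemons.
FAILED_LOGIN = re.compile(
    r"Failed password for (?:invalid user )?(?P<user>\S+) "
    r"from (?P<ip>\d+\.\d+\.\d+\.\d+)"
)

def parse_failed_login(line):
    """Return (user, ip) if the line records a failed login, else None."""
    m = FAILED_LOGIN.search(line)
    return (m.group("user"), m.group("ip")) if m else None

line = ("Jul  3 10:15:01 host sshd[2412]: "
        "Failed password for root from 10.0.0.5 port 4321 ssh2")
print(parse_failed_login(line))  # ('root', '10.0.0.5')
```

Collected centrally, tuples like these are exactly what feed reports on repeated login failures per account or per source address.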

Network logs, on the other hand, describe data being sent and received over the network,
so it makes sense that these are best suited to assist in detecting and monitoring abnormal
network activity. Unlike server logs, which are limited to one machine and describe only
local activity, network logs indicate a connection on the network, with a source and a destination.
Relevant information found in network logs includes the time a message was sent or
received, the direction of the message, which network protocol was used to transmit the
message, the total message length, and the first few bytes of the message. On the other
hand, such logs typically do not provide information on the actual user who attempted
the connection.
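To make the contrast concrete, here is a small sketch that extracts the connection fields from an iptables-style firewall log line. The line format and field names are illustrative, not any specific product's output:

```python
# Hypothetical sketch: pull connection fields out of an iptables-style
# "KEY=value" firewall log line; the format shown is illustrative.
def parse_firewall_line(line):
    fields = {}
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return {
        "src": fields.get("SRC"),      # source address
        "dst": fields.get("DST"),      # destination address
        "proto": fields.get("PROTO"),  # network protocol
        "length": int(fields.get("LEN", 0)),  # total message length
    }

line = "IN=eth0 OUT= SRC=192.168.1.7 DST=10.0.0.2 LEN=60 PROTO=TCP SPT=51515 DPT=443"
conn = parse_firewall_line(line)
print(conn["src"], "->", conn["dst"], conn["proto"])  # 192.168.1.7 -> 10.0.0.2 TCP
```

Note what is missing from the parsed record: there is no user field, which is precisely why network logs need server logs alongside them.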

Each type of log (security, server, network, and others) makes up one piece of the puzzle of IT
infrastructure activity, so it makes sense that all log data is crucial to enterprise security, to
regulatory compliance, and to IT operations. However, given the number of sources of logs and
the varying information the logs contain, there are many different pieces of the IT infrastructure
puzzle. While one can try to look at logs in siloed fashion, the logs will then fail to form the
“big picture” of enterprise activity.

Let us provide a few compelling reasons in favor of centralized log collection and cross-
device analysis.

First, logs from disparate sources reviewed in the context of other logs offer situational
awareness, which is key not only to managing security incidents but also to a company's day-to-
day IT operations. Routine log reviews and more in-depth analysis of stored logs from all
sources simultaneously are beneficial for identifying security incidents, policy violations,
fraudulent activity, and operational problems and for providing information useful for resolving
such problems.

Moreover, when responding to an incident, one needs to review all possible evidence, which
means all the logs from all the affected and suspect systems. One query across all logs saves
time, and incident response, whether to internal or external security threats, requires quick
access to all logs to figure out the details of the breach, especially if it involves more than one
part of the IT infrastructure. For example, searching for a user or an IP address across 4,000
servers might take days if one has to log in to each server, find the logs, and perform the searches.
If the logs are centralized and optimized for searching, it will literally take seconds.
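The speedup comes from indexing at collection time. The toy sketch below builds an in-memory inverted index over log records gathered from many hosts, so a single lookup replaces logging in to each server; hostnames and log lines are made up for illustration:

```python
from collections import defaultdict

# Toy centralized log index: every token in every collected line maps
# to the (host, line) records containing it, so one query returns all
# matches across all servers at once.
class LogIndex:
    def __init__(self):
        self._index = defaultdict(list)

    def add(self, host, line):
        """Ingest one log line collected from the named host."""
        record = (host, line)
        for token in line.split():
            self._index[token].append(record)

    def search(self, term):
        """Return every (host, line) record containing the term."""
        return self._index.get(term, [])

idx = LogIndex()
idx.add("web01", "accepted login for alice from 10.0.0.5")
idx.add("db07", "query by alice took 1200ms")
hits = idx.search("alice")
print([host for host, _ in hits])  # ['web01', 'db07']
```

A production log management system does far more (persistence, time ranges, field extraction), but the principle is the same: pay the indexing cost once at collection, then answer cross-system queries in constant time rather than by brute-force search per server.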

And we can’t ignore the high degree of operational efficiency that accompanies having all logs
in one place. Further, troubleshooting issues across all systems becomes a one-click process, as
does running high-level trend reports across all systems in a business unit. To seek out log data
from individual sources, administrators have to consider too many things and spend too much
time piecing together information to be efficient. But a single point of control means that with
the click of a button, all relevant information can be at their fingertips and with a tweak of one
knob, all logging configurations can be updated as security policies and compliance mandates
evolve.

No less relevant is the current onslaught of compliance mandates. Further pushing IT
professionals towards a cross-device approach to log analysis, many compliance mandates put
forth a broad call to action to examine and review logs, not specifically database logs, not only
server logs, not just network logs, but logs generally. For example, the Federal Information
Security Management Act (FISMA, via NIST SP 800-53 and NIST SP 800-92) describes the
broad need for log management in federal agencies and how to establish log management
controls including the generation, review, protection, and retention of audit records, and steps to
take in the event of audit failure. The Health Insurance Portability and Accountability Act of
1996 (HIPAA, via NIST SP 800-66) describes the need for regular evaluation of information
system activity by reviewing a variety of logs (including audit logs and access reports). The
Payment Card Industry Data Security Standard (PCI DSS) mandates logging specific details and
log review procedures to prevent credit card fraud, hacking, and other related security issues in
companies that store, process, or transmit credit card data. Requirement 10 requires that logs for
all system components be reviewed at least daily and those from in-scope systems be stored for
at least one year as well as protected. Thus, the above regulations call for central control over all
log retention, stringent access control and other log protection for evidentiary purposes and even
logging access to all logs. All of these are only possible when logs are centralized.

Another critical benefit of centralized log collection and cross-domain log analysis is that it
enables privileged user monitoring by removing the logs from the control of the privileged IT
users such as system and network administrators. Abuse of system and data access or even data
theft by trusted users is unfortunately all too common, and logging is one way to curtail that. The
integrity of log data in a centralized repository can also be guaranteed by access control
rules on a "need to know" basis, by logging all access, and by the use of cryptographic
technologies such as hashing and encryption.
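One common cryptographic technique for tamper-evident log storage is hash chaining: each entry's hash incorporates the previous one, so altering any stored record invalidates the verification of everything after it. The sketch below is a simplified illustration, not any particular product's scheme:

```python
import hashlib

# Simplified hash chain over log entries: each link hashes the previous
# digest together with the current entry, so any modification to a
# stored entry changes every digest that follows it.
def chain_hashes(entries, seed=b"\x00" * 32):
    prev, chain = seed, []
    for entry in entries:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        chain.append(prev)
    return chain

entries = ["login ok user=alice", "password change user=bob"]
original = chain_hashes(entries)

# An insider rewrites the first entry; the final digest no longer matches.
tampered = chain_hashes(["login ok user=mallory", entries[1]])
print(original[-1] != tampered[-1])  # True: tampering is detectable
```

In practice the final digest (or periodic checkpoints) would be stored or signed out of band, so even an administrator with write access to the repository cannot rewrite history undetected.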

In essence, the need to paint a total picture of IT infrastructure activity and the broad requirement
of key regulatory compliance mandates to review logs generally means that IT professionals
need to find a way to execute cross-boundary log analysis. Of course, this approach requires
centralized retention of logs from disparate sources, and, as discussed, it has clear benefits.
The key point to remember is that any information that can be gleaned from log data is
always present in enterprise logs. However, the limiting factor to how well that
information can be put to good use is how quickly and efficiently the log data that contain
it can be retrieved, searched, and reported on. If a company's IT staff cannot access the
appropriate logs in time and as a result must spend all of its time fire-fighting security
breaches rather than proactively preventing such breaches before they become major
problems, this does not maximize efficiency and it leaves the company open to even
more security threats as the team struggles to catch up.

Log data storage in a centralized repository and cross-device analysis allow those
seeking information to bypass the time- and resource-consuming process of combing
through each individual source of log data and piecing the information together
afterwards. Instead of spending their time searching for information,
administrators have the data they need at their fingertips and can proactively review
any log data that might indicate abnormal activity and address security, compliance and
operational issues before they become major company blunders. In order to maintain
efficiency and effectiveness, enterprises must be able to break down log silos and allow
the intelligent analysis of log data from disparate sources.


ABOUT THE AUTHOR:

This is an updated author bio, added to the paper at the time of reposting in 2009.

Dr. Anton Chuvakin (http://www.chuvakin.org) is a recognized security expert in the
field of log management and PCI DSS compliance. He is the author of the books "Security
Warrior" and "PCI Compliance" and a contributor to "Know Your Enemy II",
"Information Security Management Handbook" and others. Anton has published dozens
of papers on log management, correlation, data analysis, PCI DSS, and security management
(see the list at www.info-secure.org). His blog, http://www.securitywarrior.org, is one of the
most popular in the industry.

In addition, Anton teaches classes and presents at many security conferences across the
world; he recently addressed audiences in the United States, UK, Singapore, Spain, Russia
and other countries. He works on emerging security standards and serves on the advisory
boards of several security start-ups.

Currently, Anton is developing his security consulting practice, focusing on logging and
PCI DSS compliance for security vendors and Fortune 500 organizations. Dr. Anton
Chuvakin was formerly a Director of PCI Compliance Solutions at Qualys. Previously,
Anton worked at LogLogic as a Chief Logging Evangelist, tasked with educating the
world about the importance of logging for security, compliance and operations. Before
LogLogic, Anton was employed by a security vendor in a strategic product management
role. Anton earned his Ph.D. degree from Stony Brook University.
