1
A Casual Talk on Validating Network Security Products
劉榮太, BroadWeb Corporation
[email protected]
2
About Me…
劉榮太: Ph.D., Department of Computer Science, National Tsing Hua University; CEO/CTO of BroadWeb
High-speed network appliance R&D: bandwidth management, content filtering, intrusion detection and prevention
2007 Activities:
① Author of the “Intrusion Detection / Prevention System Security Reference Guide” for the National Information and Communication Security Technology Service and Protection Management Program
② Speaker at RSA 2007, topic “IPS TEST”
3
Outline
Independent Test Labs for Network Security Devices: NSS Labs, ICSA Labs
IPS Test: in both labs, and at the Third Research Institute of the Ministry of Public Security (公安三所)
Conclusions
4
Independent Test Labs for Network Security Devices
5
Independent Test Labs
Why do we need them? Why did we pick NSS Labs and ICSA Labs?
6
NSS Test Labs
http://www.nss.co.uk/
Founded in 1991 by Bob Walder; offices in Austin, TX and Carlsbad, CA
Established the de facto standards for testing IDS and IPS, and introduced the industry’s first reports to validate PCI DSS functionality.
7
NSS Labs – Test Services
Category: “Covered” Products
Anti-Malware: 10 products including Trend Micro, McAfee, and others
Browser Security: Safari, Chrome, IE8, Firefox, Opera
Attack Mitigator: Top Layer, Radware, and V-Secure
Intrusion Prevention (IPS): BroadWeb, IBM, TippingPoint, …
Secure Content Appliances: Panda GateDefender 8200
Unified Threat Management (UTM): IBM, Fortinet, TippingPoint
Web App Firewall (WAF): Assurent AssureLogic
PCI Suitability: eEye, IBM, ThirdBridge
10 Gbps IPS: IBM, McAfee
8
ICSA Labs
http://www.icsalabs.com/
Founded in 1989 as the National Computer Security Association (NCSA); later part of Cybertrust, now Verizon Business
Pioneer in Anti-Virus, Firewall, VPN, and WAF testing.
9
ICSA Labs - Test Services
Category: “Covered” Products
Anti-Spam: Fortinet, IBM, Kaspersky, Symantec
Anti-Spyware: Eset, McAfee, Microsoft, Symantec
Anti-Virus: Symantec, McAfee, Trend Micro, … around 110 products
IPSec: 3Com, ALU, D-Link, Fortinet, Juniper, McAfee, WatchGuard, ZyXEL
Network Firewall: 3Com, ALU, D-Link, Fortinet, Check Point, ZyXEL, SonicWall, … around 28 products
Network IPS: Fortinet, Sourcefire, Stonesoft (IBM, TippingPoint, and BroadWeb)
PC Firewalls: CA, Microsoft
SSL-TLS: AEP, Array Networks, F5, Juniper, O2, SonicWall
Web Application Firewalls: Barracuda, F5, Breach, Citrix, Imperva
Also: FIPS 140-2, FIPS 201, SCAP
10
How are so-called independent 3rd-party tests performed?
How to test a NIPS
11
NSS Labs - http://www.nss.co.uk
12
Section 1 – Detection Engine
Aim: verify the sensor is capable of detecting and blocking exploits accurately, while remaining resistant to false positives
No background network load; signatures acquired from the vendor; all available attack signatures enabled
Two tests:
Test 1.1 Attack Recognition
Test 1.2 Resistance to False Positives
13
1.1 Attack Recognition
Common exploits, port scans, and DoS attempts
Over 100 exploits run with no load on the network and no IP fragmentation
Tests 1.1.1 ~ 1.1.14: Backdoors (standard ports and random ports), DNS, DoS, False negatives, Finger, FTP, HTTP, ICMP (including unsolicited ICMP responses), Reconnaissance, RPC, SSH, Telnet, Database, Mail
14
1.1 Attack Recognition (Cont.)
Report: attacks should be identified by their assigned CVE reference
A “noisy” device blocks only the attack packet, or the entire “suspicious” TCP session
This test is repeated twice: in monitor mode (blocking disabled) and with blocking enabled
Results are reported as “Default” and “Custom” ARRD/ARRB (attack recognition rates when detecting and when blocking)
15
Example of Test 1.1
16
1.2 Resistance To False Positives
Feed the sensor normal traffic with “suspicious” content, together with several “neutered” exploits
“PASS” if it neither raises an alert nor blocks the traffic
“FAIL” if it raises an alert
Test 1.2.1 False positives
17
Example of Test 1.2
18
Section 2 – IPS Evasion
Aim: verify that the sensor is capable of detecting and blocking basic exploits when subjected to various common evasion techniques
Test 2.1 Baselines (with no evasion techniques applied)
Test 2.2 Packet Fragmentation and Stream Segmentation (fragroute)
Test 2.3 URL Obfuscation (whisker/Nikto)
Test 2.4 Miscellaneous Evasion Techniques
19
2.2 Packet Fragmentation and Stream Segmentation
IP fragmentation:
Test 2.2.1 Ordered 8/24 byte fragments
Test 2.2.2 Out-of-order 8 byte fragments
Test 2.2.X …
TCP segmentation:
Test 2.2.9 Ordered 1 byte segments, interleaved duplicate segments with invalid TCP checksums
Test 2.2.10 Ordered 1 byte segments, interleaved duplicate segments with null TCP control flags
Test 2.2.11 Ordered 1 byte segments, duplicate last packet
Test 2.2.1X …
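For illustration only, a minimal sketch of an ordered 8-byte-fragment run in the spirit of Test 2.2.1, using Scapy; the target address and exploit payload are placeholders, not NSS's actual tooling.

```python
# Sketch of a Test 2.2.1-style fragmentation evasion with Scapy (placeholder
# target and payload). Requires root; run only against lab equipment you own.
from scapy.all import IP, TCP, Raw, fragment, send

PAYLOAD = b"GET /scripts/..%c0%af../winnt/system32/cmd.exe HTTP/1.0\r\n\r\n"

pkt = IP(dst="192.0.2.10") / TCP(dport=80, flags="PA") / Raw(load=PAYLOAD)

# Split the datagram into ordered 8-byte IP fragments; a sensor that inspects
# packets without reassembling fragments first will miss the signature.
for frag in fragment(pkt, fragsize=8):
    send(frag, verbose=False)
```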
20
2.3 URL Obfuscation (whisker/Nikto)
2.3.1 URL encoding
2.3.2 /./ directory insertion
2.3.3 Long URL
2.3.4 Premature URL ending
2.3.5 Fake parameter
2.3.6 TAB separation
2.3.7 Case sensitivity
2.3.8 Windows \ delimiter
2.3.9 Session splicing
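As a rough illustration (not the actual whisker/Nikto code), several of these obfuscations are simple string transformations of the request URL; the sample path below is a placeholder:

```python
# A few Test 2.3-style URL obfuscations as plain string transforms
# (illustrative only; whisker/Nikto implement many more variants).
PATH = "/cgi-bin/vulnerable.cgi"  # placeholder path a signature might match on

variants = {
    "2.3.1 URL encoding":         "".join("%%%02X" % b for b in PATH.encode()),
    "2.3.2 /./ insertion":        PATH.replace("/", "/./"),
    "2.3.3 long URL":             "/" + "A" * 4096 + "/.." + PATH,  # padding resolves away
    "2.3.5 fake parameter":       PATH + "?harmless=1",
    "2.3.7 case sensitivity":     PATH.upper(),
    "2.3.8 Windows \\ delimiter": PATH.replace("/", "\\"),
}

for name, url in variants.items():
    print(f"{name}: GET {url} HTTP/1.0")
```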
21
2.4 Miscellaneous Evasion Techniques
Test 2.4.1 Alter default ports
Test 2.4.2 Inserting spaces in FTP command lines
Test 2.4.3 Inserting non-text Telnet opcodes in FTP data stream
Test 2.4.4 Polymorphic mutation (ADMmutate)
Test 2.4.5 Altering protocol and RPC PROC numbers
Test 2.4.6 RPC record fragging
22
Section 2 – IPS Evasion
For each of the above tests, we note whether:
the attempted attack was blocked successfully
the attempted attack was detected and an alert raised
the exploit was successfully “decoded” back to the original exploit, rather than the sensor alerting purely on the anomalous traffic produced by the evasion technique itself
23
Section 3 – Stateful Operation
Aim: verify the sensor’s capability of monitoring stateful sessions established through the device at various traffic loads, without either losing state or incorrectly inferring state
24
3.1 Stateless Attack Replay
Tools: Stick and Snot, which generate large numbers of false alerts by replaying stateless attack packets (sketched below)
“PASS” if: no alerts raised; packets blocked
Test 3.1.1 Stateless attack replay
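What Stick/Snot-style tools do, sketched with Scapy (addresses and payload are placeholders): fire packets whose payload matches IDS signatures but that belong to no established TCP session. A fully stateful sensor should stay quiet.

```python
# Stick/Snot-style stateless replay sketch (placeholder addresses/payload).
import random
from scapy.all import IP, TCP, Raw, send

BAIT = b"GET /cgi-bin/phf?Qalias=x%0a/bin/cat%20/etc/passwd HTTP/1.0\r\n\r\n"

for _ in range(1000):
    pkt = (IP(src=f"198.51.100.{random.randint(1, 254)}", dst="192.0.2.10")
           / TCP(sport=random.randint(1024, 65535), dport=80,
                 flags="PA", seq=random.randint(0, 2**32 - 1))
           / Raw(load=BAIT))  # matches a classic signature, but no handshake ever happened
    send(pkt, verbose=False)
```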
25
3.2 Simultaneous Open Connections (default settings)
Two goals: verify that the sensor preserves state, and determine whether or not it will block legitimate traffic
Testing steps (sketched below):
First packet of a two-packet exploit transmitted
Sessions opened, from 10,000 up to one million
The second half of the exploit sent and the session closed
Both halves of the exploit are required to trigger an alert
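A toy version of that procedure with plain sockets, at a tiny scale; the target, the split payload, and the session count are placeholders:

```python
# Test 3.2-style state-preservation sketch (placeholder host/port, small scale).
import socket

TARGET = ("192.0.2.10", 80)        # lab server behind the IPS
HALF1 = b"GET /vuln.cgi?stage=1"   # placeholder two-packet "exploit", split in half
HALF2 = b"&stage=2 HTTP/1.0\r\n\r\n"

# Step 1: send the first half of the exploit on one session.
probe = socket.create_connection(TARGET)
probe.sendall(HALF1)

# Step 2: open many additional sessions (10,000 .. 1,000,000 in the real test;
# kept tiny here) to pressure the sensor's state table.
filler = [socket.create_connection(TARGET) for _ in range(100)]

# Step 3: complete the exploit and close. Only a sensor that preserved the
# probe session's state can reassemble both halves and raise the alert.
probe.sendall(HALF2)
probe.close()
for s in filler:
    s.close()
```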
26
3.2 Simultaneous Open Connections
Test 3.2.1 Attack Detection: ensures that the sensor continues to detect new exploits as the number of open sessions is increased in stages from 10,000 to 1,000,000
Test 3.2.2 Attack Blocking: ensures that the sensor continues to block new exploits as the number of open sessions is increased in stages from 10,000 to 1,000,000
Test 3.2.3 State Preservation: ensures that the sensor maintains the state of pre-existing sessions as the number of open sessions is increased in stages from 10,000 to 1,000,000
Test 3.2.4 Legitimate Traffic Blocking: ensures the sensor does not begin to block legitimate traffic as the number of open sessions is increased in stages from 10,000 to 1,000,000
27
3.3 Simultaneous Open Connections (after tuning)
Test 3.3.1 Attack Detection
Test 3.3.2 Attack Blocking
Test 3.3.3 State Preservation
Test 3.3.4 Legitimate Traffic Blocking
28
Section 4 – Detection/Blocking Performance Under Load
Aim: verify that the sensor is capable of detecting and blocking exploits when subjected to increasing loads of background traffic, up to the maximum bandwidth claimed by the vendor.
29
[Diagram: the baseline attack test environment. Traffic generation equipment, machines generating exploits, and target hosts connected through the infrastructure across external, internal, and management networks.]
1. Baseline attack testing with zero background traffic
2. Background traffic applied (250, 500, 750, and 1000 Mbps)
The Adtech AX/4000 monitors: (1) the overall traffic loading; (2) the total number of exploits
Reported: Attack Blocking Rate (ABR) / Attack Detection Rate (ADR), and the maximum load the IPS can sustain before it begins to drop packets or miss alerts (a quick illustration follows)
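For clarity, the two headline metrics are simple ratios over the exploit set at each load step; the numbers below are made up:

```python
# ABR/ADR as ratios over the exploit set at each background load
# (illustrative numbers only, not results from any report).
EXPLOITS = 100
runs = {250: (100, 100), 500: (100, 100), 750: (98, 99), 1000: (91, 95)}  # Mbps: (blocked, alerted)

for load, (blocked, alerted) in runs.items():
    print(f"{load} Mbps: ABR={blocked / EXPLOITS:.0%}  ADR={alerted / EXPLOITS:.0%}")
```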
30
4.1 UDP Traffic To Random Valid Ports
UDP packets of varying sizes generated by a SmartBits SMB6000 with LAN-3301A 10/100/1000 Mbps TeraMetrics cards
Variable source IP addresses and ports, transmitting to a single fixed IP address/port
No attempt to simulate a “real world” network; this determines the raw packet-processing capability
31
4.1 UDP Traffic To Random Valid Ports
Test 4.1.1 64 byte packets – maximum 1,480,000 pps
Test 4.1.2 440 byte packets – maximum 260,000 pps
Test 4.1.3 1514 byte packets – maximum 81,720 pps
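These maxima are roughly gigabit line rate divided by the on-wire frame cost; a quick back-of-the-envelope check:

```python
# Line-rate arithmetic behind the Test 4.1 maxima: each Ethernet frame costs
# about 20 extra bytes on the wire (8-byte preamble + 12-byte inter-frame gap),
# so max pps ~= 1e9 / ((frame_size + 20) * 8) on gigabit Ethernet.
for frame in (64, 440, 1514):
    pps = 1e9 / ((frame + 20) * 8)
    print(f"{frame}-byte frames: ~{pps:,.0f} pps")
# Prints ~1,488,095 / ~271,739 / ~81,486, in the same ballpark as the quoted figures.
```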
32
4.2 HTTP “MAX Stress” Traffic With No Transaction Delays
Aim: stress the HTTP detection engine and determine how the sensor copes with detecting and blocking exploits under load
CAW Networks Gigabit WebAvalanche and WebReflector:
Creates true “real world” traffic at speeds of up to 2.2 Gbps as a background load for the IPS tests
Capable of simulating over 2.5 million users, with over 2.5 million concurrent sessions, and almost 100,000 HTTP requests per second
Each transaction consists of a single HTTP GET request
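A toy stand-in for that load pattern; the real tests used WebAvalanche/WebReflector at up to 20,000 connections per second, and the target and rate here are placeholders:

```python
# Tiny 4.2-style load generator: new connections at a fixed rate, each a single
# HTTP GET with no transaction delay (placeholder target; sequential and slow).
import socket
import time

TARGET = ("192.0.2.20", 80)  # lab web server behind the IPS
RATE = 100                   # target new connections per second
interval = 1.0 / RATE

for _ in range(1000):
    start = time.monotonic()
    s = socket.create_connection(TARGET, timeout=5)
    s.sendall(b"GET / HTTP/1.0\r\nHost: test\r\n\r\n")
    s.recv(4096)             # read (part of) the response, then tear down
    s.close()
    # Pace the loop toward the target connection rate.
    time.sleep(max(0.0, interval - (time.monotonic() - start)))
```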
33
4.2 HTTP “MAX Stress” Traffic With No Transaction Delays
Test 4.2.1 Max 2,500 new connections per second
Test 4.2.2 Max 5,000 new connections per second
Test 4.2.3 Max 10,000 new connections per second
Test 4.2.4 Max 20,000 new connections per second
34
4.3 HTTP “MAX Stress” Traffic With Transaction Delays
10 second delay in the server (WebReflector) response
Test 4.3.1 Max 5,000 new connections per second
Test 4.3.2 Max 10,000 new connections per second
35
4.4 Protocol Mix Traffic
To simulate more of a “real world” environment
Test 4.4.1 72% HTTP traffic (560 byte packets) + 20% FTP traffic + 6% UDP traffic (256 byte packets)
36
4.5 “Real World” Traffic
IIS Web server installed on a dual-P4 SuperMicro server with a Gigabit interface
WebAvalanche replays multiple identical sessions from up to 25 new users per second
[Diagram: “real world” traffic test environment. Caw WebAvalanche traffic generation equipment and target hosts connected through the infrastructure across external and internal networks.]
37
4.5 “Real World” Traffic
Test 4.5.1 Pure HTTP Traffic (simulated browsing session on NSS Web site)
Test 4.5.2 Protocol Mix: 72% HTTP traffic (simulated browsing sessions as in 4.5.1) + 20% FTP traffic + 6% UDP traffic (256 byte packets)
38
Section 5 – Latency & User Response Times
Aim: determine the effect the IPS sensor has on the traffic passing through it under various load conditions
Test 5.1 Latency
Tools: Spirent SmartFlow and SMB6000 with Gigabit TeraMetrics cards
Measures throughput, packet loss, and latency
Traffic load from 250 Mbps to 1 Gbps bi-directionally, in steps of 250 Mbps
Repeated for a range of UDP packet sizes (64, 440, and 1518 bytes)
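As a rough software stand-in for the SmartFlow measurement (commercial gear timestamps in hardware and can measure one-way latency; this sketch settles for round-trip time through a placeholder UDP echo host):

```python
# Round-trip latency probe through the device under test (placeholder echo host).
import socket
import statistics
import time

ECHO = ("192.0.2.30", 7)  # UDP echo service on the far side of the IPS

def mean_rtt(size: int, count: int = 100) -> float:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    payload = b"\x00" * size
    rtts = []
    for _ in range(count):
        t0 = time.perf_counter()
        sock.sendto(payload, ECHO)
        sock.recvfrom(65535)
        rtts.append(time.perf_counter() - t0)
    return statistics.mean(rtts)

for size in (64, 440, 1518):
    print(f"{size}-byte probes: mean RTT {mean_rtt(size) * 1e6:.0f} us")
```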
39
5.1 Latency
Latency with no background traffic: SmartFlow
Latency with background traffic load: WebAvalanche & WebReflector, with SmartFlow traffic at various packet sizes (64, 440, 1514 byte)
Latency when under attack: Spirent WebSuite software generates a fixed load of DoS/DDoS
40
5.2 User Response Times
WebAvalanche & WebReflector generate HTTP sessions
To gauge how any increases in latency impact user experience, in terms of failed connections and increased Web response times
Web Response With No Background Traffic
Web Response When Under Attack
41
Section 6 – Stability & Reliability
Aim: verify the stability of the device under test under various extreme conditions
Test 6.1.1 Blocking Under Extended Attack
Exploits mixed with legitimate sessions, transmitted through the device at a maximum of 100 Mbps for 8 hours
Purely a reliability test: the device is expected to remain operational and stable, to block 100% of recognizable exploits, and to raise an alert for each
42
Section 6 – Stability & Reliability
Test 6.1.2 Passing Legitimate Traffic Under Extended Attack
FAIL if legitimate traffic is blocked
Test 6.1.3 ISIC/ESIC/TCPSIC/UDPSIC/ICMPSIC
Stresses the protocol stack of the device under test
Tool: IP Stack Integrity Checker (ISIC) suite (sketched below)
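What the ISIC family does, sketched with Scapy's fuzz() rather than the real tool (placeholder target; ISIC itself is a dedicated C program):

```python
# ISIC/TCPSIC-style stack stress sketch: blast semi-random, often malformed
# IP/TCP headers at the device and watch that it stays stable and forwarding.
from scapy.all import IP, TCP, fuzz, send

# fuzz() randomizes every header field left unset, re-rolling per packet sent.
send(fuzz(IP(dst="192.0.2.10") / TCP()), count=10000, verbose=False)
```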
43
Section 7 – Management and Configuration
Aim: evaluate the features of the management system and the ability of the management port to resist attack
Management port: attacking the management interface can be more effective than attacking the detection interface
Test 7.1.1 Open ports
Test 7.1.2 ISIC/ESIC/TCPSIC/UDPSIC/ICMPSIC
44
ICSA Labs Test – Administration Functions
AF1 – Changing Its Mode of Operation
Unless it is already in the Selected Mode, the SUT must include a means to place its Mission Interfaces into the Selected Mode.
AF2 – Administrative Capabilities
While in the Selected Mode, the SUT must provide a means to:
1. Access the SUT through the Remote Administration interface;
2. Configure and apply various Policies;
3. Configure and change or acquire the date and time;
4. Enable and disable logging of the events defined in LO1.1;
5. Display all required log data in the Log(s) specified in LO2 for the events defined in LO1;
6. Generate and display all required report data for the events defined in RE1 and RE2;
7. Configure and change all Authentication Configuration Data;
8. Configure and change Remote Administration settings;
9. Enable and disable the automatic network acquisition and automatic enforcement of protection updates.
45
ICSA Labs Test – Administration
AD1 – Remote Administration
The capability must exist for a User to perform encrypted Remote Administration of the Engine through at least a single Engine interface.
46
ICSA Labs Test – Identification & Authentication
IA1 – Identify & Authenticate Prior to Administrative Function Access
The SUT must include the capability to require and enforce User identification followed by authentication, with a password having the characteristics specified in IA2 or a multi-factor Authentication Mechanism, prior to permitting access to the Administrative Functions and other non-required SUT functions.
IA2 – Strength of Password (CONDITIONAL)
The SUT must include the capability to set User passwords to a mix of eight or more letters, numbers, and special characters.
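One reading of IA2, assuming "mix" means at least one letter, one digit, and one special character; the exact criterion is our interpretation, for illustration:

```python
# Illustrative IA2 password check: eight or more characters with at least one
# letter, one digit, and one special character (one interpretation of "mix").
import re

def meets_ia2(password: str) -> bool:
    return (len(password) >= 8
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

assert meets_ia2("Tr1al-IPS!")
assert not meets_ia2("password")
```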
47
ICSA Labs Test – Traffic Flow
Traffic Flow – Passing IP Traffic
While in the Selected Mode, the SUT must pass all Clean IP traffic, up to 80% of the Rated Throughput, through its Mission Interfaces according to the Policy being enforced.
48
Logging
LO1 – Required Log Events
The SUT must include the capability to capture the required log data in LO2 for the following security, operational, and system events:
1. Security Events
a. All attempts to pass attacks through the Engine that target any Vulnerability Set elements, when the Policy for the Vulnerability Set element related to the attack is tuned to:
i. Detect and prevent;
ii. Detect and permit.
2. Operational Events
a. When a User powers down the Engine, in the event that such functionality exists; (CONDITIONAL)
b. When a change is made to the Policy being enforced;
c. When a change is made to the Authentication Configuration Data of a User;
d. When a User attempts to authenticate to a Remote Administration interface.
3. System Events
a. After any startup sequence is complete when the Engine powers on;
b. When the link status of a Mission Interface changes.
49
Logging
LO2 – Required Log Data
The SUT must include the capability to accurately capture in a Log, for each required log event in LO1, the following log data elements (an illustrative record follows):
1. For all events:
a. The date and time that the event occurred;
i. The date must consist of the year, the month, and the numerical day in the month;
ii. The time must consist of the hour, the minute, and the second;
b. A description indicating why the SUT logged the event.
2. For “Security” events in LO1.1:
a. An indication of the action taken by the SUT;
b. The protocol;
c. For IP, the source and destination IP addresses;
d. For TCP and UDP, the source and destination ports;
e. A unique identifier representing the Engine that detected the event.
3. For “Operational” event LO1.2.d:
a. An indication of the username that attempted to authenticate;
b. An indication of success or failure to authenticate.
4. For “System” event LO1.3.b:
a. The physical SUT interface link status.
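An illustrative Security-event record carrying every LO2-required element; ICSA specifies the data, not the format, and the field names here are made up:

```python
# Example log record satisfying LO2 for a Security event (illustrative fields).
security_event = {
    "date": "2009-06-15",             # year, month, numerical day   (LO2.1.a.i)
    "time": "14:32:07",               # hour, minute, second         (LO2.1.a.ii)
    "description": "attack on Vulnerability Set element detected",  # (LO2.1.b)
    "action": "detect-and-prevent",   # action taken by the SUT      (LO2.2.a)
    "protocol": "tcp",                #                              (LO2.2.b)
    "src_ip": "198.51.100.7",         # source/destination IPs       (LO2.2.c)
    "dst_ip": "192.0.2.10",
    "src_port": 40211,                # source/destination ports     (LO2.2.d)
    "dst_port": 80,
    "engine_id": "sensor-01",         # unique Engine identifier     (LO2.2.e)
}
```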
LO3 – Log Data Presentation
All required log data corresponding to all required log events defined in LO1 must be available for review upon demand and presented in a human readable format while preserving the relative sequence of events.
LO4 – Linking Multiple Logs for a Single Event (CONDITIONAL)
50
Reporting
RE1 – Most Common Policy Violations
The SUT must include the capability to report the ten most common Policy violations over the preceding: 1. Hour; 2. Day; 3. Seven days; 4. Thirty days; 5. Ninety days.
RE2 – Most Common Sources of Policy Violations
From its perspective, the SUT must include the capability to report on the ten most common sources of Policy violations over the preceding: 1. Hour; 2. Day; 3. Seven days; 4. Thirty days; 5. Ninety days.
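A sketch of how such reports fall out of LO2-style records, assuming each record also carries a Unix timestamp ts plus signature and src_ip fields (all assumed names):

```python
# RE1/RE2-style top-ten reports over the required windows, from log records
# that carry a Unix "ts" plus "signature" and "src_ip" fields (assumed names).
import time
from collections import Counter

WINDOWS = {"hour": 3600, "day": 86400, "7 days": 7 * 86400,
           "30 days": 30 * 86400, "90 days": 90 * 86400}

def top_ten(events, field, window_seconds, now=None):
    now = now if now is not None else time.time()
    recent = (e[field] for e in events if now - e["ts"] <= window_seconds)
    return Counter(recent).most_common(10)

# RE1: top_ten(events, "signature", WINDOWS["day"])  -> most common violations
# RE2: top_ten(events, "src_ip", WINDOWS["day"])     -> most common sources
```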
51
Functional Testing
FT1 – Administrative Functions Work Properly
The SUT must demonstrate through testing that the Administrative Functions defined in AF1 and AF2 operate properly.
FT2 – Average One-Way Latency
While testing under the following conditions:
1. While in the Selected Mode;
2. While enforcing a Policy that meets ST4, ST5, ST6, and ST7;
3. With Background Traffic flowing through the SUT and filling the SUT bandwidth between 0% and 80% of the Rated Throughput;
4. With attack traffic targeting Vulnerability Set elements comprising between 0% and 2% of the Rated Throughput.
52
Security Testing
ST1 – SUT Not Addressable
While in the Selected Mode, it must be demonstrated through testing that the Mission Interfaces ignore non-administrative communication attempts.
ST2 – No Unauthorized Access to Administrative Functions
While in the Selected Mode, it must be demonstrated through testing that unauthorized access to or control of any Administrative Function does not occur.
ST3 – Engine Not Vulnerable
While in the Selected Mode, it must be demonstrated through testing that the Engine itself is not vulnerable via its Mission Interfaces to the evolving set of vulnerabilities known in the Internet community that can be remotely tested.
ST4 – Coverage of Attacks against Relevant Vulnerabilities
The SUT must demonstrate through testing that it is capable of preventing all attacks aimed at Vulnerability Set elements from passing through after arriving on SUT Mission Interfaces, regardless of their origin and destination, under the following conditions:
1. While in the Selected Mode;
2. While exercising the Administrative Functions;
3. With Background Traffic flowing through the SUT and filling the SUT bandwidth between 0% and 80% of the Rated Throughput;
4. With attack traffic targeting Vulnerability Set elements comprising between 0% and 2% of the Rated Throughput;
5. With and without the use of evasion techniques known in the Internet community.
53
Security Testing (Cont.)
ST5 – Coverage of Trivial Denial of Service (DoS) Attacks
The SUT must demonstrate through testing that it has the capability to appropriately Mitigate all Trivial DoS Attacks arriving on a SUT Mission Interface, regardless of their origin, under the following conditions:
1. While in the Selected Mode;
2. While exercising the Administrative Functions;
3. With Background Traffic flowing through the SUT and filling the SUT bandwidth between 0% and 80% of the Rated Throughput;
4. With Trivial DoS Attack traffic comprising between 0% and 10% of the Rated Throughput;
5. With attack traffic targeting Vulnerability Set elements comprising between 0% and 2% of the Rated Throughput.
ST6 – Repeated Protection
While in the Selected Mode, the SUT must demonstrate through testing that, at all times after successfully preventing attacks targeting Vulnerability Set members and mitigating all Trivial DoS Attacks, it continues to successfully prevent and mitigate such attacks, respectively, in accordance with the Policy.
ST7 – No False Positives after Tuning
While in the Selected Mode and following appropriate tuning of the Policy, the SUT must demonstrate through testing that it does not detect in Clean traffic an attack of any kind.
54
Documentation
DO1 – Set Up Instructions
Sufficient, accurate Guidance must be provided for a User to set up the SUT.
DO2 – Administrative Functions Usage Instructions
Sufficient, accurate Guidance must be provided for a User to perform the Administrative Functions in AF1 and AF2.
55
Thanks! Q&A
劉榮太, [email protected]