TSSG paper for the International Symposium on Integrated Network Management (IM)
DESCRIPTION
An experimental testbed to predict the performance of XACML Policy Decision Points
TRANSCRIPT
Policy evaluation testbed
Butler et al
Introduction
XACML performance testbed
Initial experiments
Summary and Future Work
An experimental testbed to predict the performance of XACML Policy Decision Points
Bernard Butler, Brendan Jennings and Dmitri Botvich
Telecommunication Software and Systems Group, Waterford Institute of Technology, Ireland
IFIP/IEEE IM 2011 at TCD, Dublin, May 2011
Outline
Introduction
  Access Control basics
  The Problem
  Response of other researchers
XACML performance testbed
  Overview
Initial experiments
  Measurement-based simulation
Summary and Future Work
Definitions - 1
What is Access Control?
- Generally, Subjects apply Actions to Resources
- Access control is a system which enables an Authority to limit these interactions
- Constraints are binary-valued decisions: Permit/Deny
- Decisions are made by searching business Rules
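The rule search can be sketched as a first-applicable scan over an ordered rule list with a deny-by-default fallback. This is an illustrative sketch only; the names `Rule` and `decide` and the matching logic are assumptions, not details from the presentation:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    subject: str
    action: str
    resource: str
    effect: str  # "Permit" or "Deny"

def decide(rules, subject, action, resource, default="Deny"):
    """First-applicable search: return the effect of the first matching rule,
    or the default (deny-by-default) if no rule applies."""
    for rule in rules:
        if (rule.subject, rule.action, rule.resource) == (subject, action, resource):
            return rule.effect
    return default

rules = [Rule("alice", "read", "report.pdf", "Permit"),
         Rule("bob", "write", "report.pdf", "Deny")]
print(decide(rules, "alice", "read", "report.pdf"))  # Permit
print(decide(rules, "carol", "read", "report.pdf"))  # Deny (no matching rule)
```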
Definitions - 2
What is XACML?
- XACML is an industry-standard (OASIS) XML language for specifying Access Control rules
- The XACML standard also defines an architecture for Access Control
- Rules roll up into policies and thence into policy sets
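The roll-up can be illustrated with a simplified deny-overrides combining algorithm applied at each level. This is a sketch only: XACML's real combining algorithms also handle Indeterminate results and targets, which are omitted here:

```python
def deny_overrides(decisions):
    """Simplified deny-overrides combining: any Deny wins, then any Permit,
    otherwise NotApplicable."""
    if "Deny" in decisions:
        return "Deny"
    if "Permit" in decisions:
        return "Permit"
    return "NotApplicable"

# Rule effects combine into a policy decision; policy decisions combine into
# a policy-set decision, each level applying its own combining algorithm.
policy_a = deny_overrides(["Permit", "NotApplicable"])   # "Permit"
policy_b = deny_overrides(["NotApplicable"])             # "NotApplicable"
policy_set = deny_overrides([policy_a, policy_b])        # "Permit"
```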
Architecture description
P*P
PAP Policy Administration Point - editing policies
PDP Policy Decision Point - deciding requests
PEP Policy Enforcement Point - handling requests
PIP Policy Information Point - looking up other sources
Functionality versus Safety: Initial
[Figure: initial decision boundary dividing Permit (Functionality) from Deny (Safety)]
Functionality versus Safety: After refinement
[Figure: refined decision boundary dividing Permit (Functionality) from Deny (Safety)]
Fine-Grained Access Control
Refining the decision boundary causes slower evaluation
1. More complex conditions and rules
2. More need to evaluate policies
...with the following results:
1. Longer evaluation times per request
2. More requests
3. PDP(s) become the bottleneck
4. Scalability problems!
Is caching a viable solution?
Issue 1: Dynamic policy updates
- Subjects S and Resources R are added and removed
- Entitlements S × R need to be managed
Issue 2: XACML suitability
- Non-local, with complex rule and policy combining algorithms
- Missing support for change impact analysis
- Verbose
Generally, other approaches are used, notably brute force.
Better XACML PDP Performance
Better PDP
- Better distributed software engineering - Heras-AF
Better XACML policies
- recombination (Miseldine (2004))
- clustering and reordering (Marouf et al (2009))
- indexing (Gryb)
Better policy representation
- recoding (Xie and Lu (2008)) - Xengine
- reformulation using description logic - Kolovski (2006)
Critique
- Each researcher presents evidence in their favour
- Generally they compare their approach with a reference PDP
but
- No common published test suite of policies and requests
- Experimental conditions differ
So improvements cannot be compared!
Our approach
Create a testbed to measure service times under controlled experimental conditions
Schematic of the measurement testbed
[Figure: schematic of the measurement testbed, built from components XTS, XTP, XTA and XTC. Requests reach a universal PEP and Request Scheduler either as observed XACML requests (Mode 2) or as generated XACML requests produced by a Request Generator from a Domain Model (Mode 1); an Adapter submits them to the PDP, which evaluates them against an XACML Policy Set. Measurement data feeds a Clustering Algorithm and Performance Abstraction, which parametrise a Queueing Model and Simulator to yield performance predictions alongside the performance measurements (Mode 3).]
Example service time measurements for given PDP and policy set

[Figure: service times for 'single' request set on host 'bear' using 'SunXacmlPDP'; x-axis: seconds (0.002-0.004), y-axis: density (scaled so that total histogram area = 1)]
Preliminary analysis
Preprocessing, Comparison and Clustering
- Let $t = t(S,P,R,q) \in \mathbb{R}^{|S|\times|P|\times|R|\times q}$ be the set of measured service times.
- Assume $t$ is subject to nonnegative error, so $\bar{t} = \bar{t}(S,P,R) = \min_q t$ is a reduced-error estimate of the service time for that PDP, policy set, request set combination.
- Comparison: perform ANOVA on the $\bar{t}$ with its associated context.
- Derive the service time clusters.
- Assume each service time cluster represents a different request cluster.
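These steps can be sketched numerically: reduce replicated timings by taking the minimum, then cluster the estimates along one dimension. The synthetic timings, noise scale, and gap threshold of 5e-4 seconds are assumptions for illustration, not values from the presentation:

```python
import numpy as np

# Synthetic replicated timings for three (PDP, policy set, request) cells,
# q = 8 replicates each; two cells share a true service time (two clusters).
rng = np.random.default_rng(0)
true_times = np.array([0.002, 0.002, 0.004])
t = true_times[:, None] + rng.exponential(2e-4, size=(3, 8))

# Reduce the nonnegative measurement error by taking the minimum replicate.
t_bar = t.min(axis=1)

# Simple 1-D clustering: start a new cluster wherever the gap between
# consecutive sorted estimates exceeds the (assumed) threshold.
order = np.argsort(t_bar)
clusters, current = [], [order[0]]
for i, j in zip(order, order[1:]):
    if t_bar[j] - t_bar[i] > 5e-4:
        clusters.append(current)
        current = []
    current.append(j)
clusters.append(current)
print(len(clusters))  # 2 service time clusters
```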
Comparison: ANOVA study
            SunXacmlPDP   EnterpriseXacmlPDP
mean (s)    1.5e-03       1.2e-03
rep         800           800

Table: Comparison of service times for PDPs 'SunXacmlPDP' and 'EnterpriseXacmlPDP'.

            Deny      NotApplicable   Permit
mean (s)    1.3e-03   2.1e-03         1.1e-03
rep         1244      136             220

Table: Comparison of service times for Decisions 'Deny', 'NotApplicable' and 'Permit'.
Operation of Clustering algorithm 1: Histogram
[Figure: histogram of service times for 'single' request set on host 'bear' using 'SunXacmlPDP'; x-axis: seconds (0.002-0.004), y-axis: density (scaled so that total histogram area = 1)]
Operation of Clustering algorithm 2: Histogram and Fit
[Figure: histogram of service times with fitted density for 'single' request set on host 'bear' using 'SunXacmlPDP'; x-axis: seconds (0.002-0.004), y-axis: density (scaled so that total histogram area = 1)]
Operation of Clustering algorithm 3: Fit and Peaks
[Figure: cluster centres of service times for 'single' request set on host 'bear' using 'SunXacmlPDP', marked as peaks on the fitted density; x-axis: seconds (0.001-0.005), y-axis: density (scaled so that total histogram area = 1)]
Operation of Clustering algorithm 4: Cluster Peaks and Endpoints

[Figure: cluster endpoints of service times for 'single' request set on host 'bear' using 'SunXacmlPDP'; x-axis: seconds (0.001-0.005), y-axis: density (scaled so that total histogram area = 1)]
Compare service time clusters
Scenario: different PDPs; other controllable conditions are identical.
Observation: qualitatively different service time distributions.

[Figure: service time intervals define request clusters for 'single' request set on host 'bear' using 'SunXacmlPDP'; x-axis: seconds (0.001-0.005), y-axis: density (scaled so that total histogram area = 1)]

[Figure: service time intervals define request clusters for 'single' request set on host 'bear' using 'EnterpriseXacmlPDP'; x-axis: seconds (0.0014-0.0024), y-axis: density (scaled so that total histogram area = 1)]
Further analysis
Queueing and Simulation
- Parametrise each request cluster (height, location, width)
- Compute explicit queue length and waiting time
- Simulate requests having that service time profile
- Prediction: examine overload performance for different request mixes
Prediction: Compute explicit queue length
Queueing Model
Assume M/G/1 with FIFO scheduling and infinite buffer size. For hyperexponentially-distributed service times, the service time density function is

$$ b(x) \stackrel{\mathrm{def}}{=} \sum_{i=1}^{p} \alpha_i \mu_i e^{-\mu_i x}, \quad \text{where} \quad \sum_{i=1}^{p} \alpha_i \equiv 1 \equiv \int_0^{\infty} b(x)\,dx \qquad (1) $$

Mean Queue Length
From the Pollaczek-Khinchin formula, we derive

$$ \bar{q} = \rho + \frac{\rho^2 (1 + C_b^2)}{2(1-\rho)}, \qquad (2) $$

where $\bar{q}$ is the mean queue length, $\rho = \lambda \bar{x}$, $\bar{x}$ is the mean service time and $C_b$ is the coefficient of variation of the service times. This formula is explicit.
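Equations (1) and (2) can be checked numerically: with a single exponential phase the formula reduces to the familiar M/M/1 mean queue length ρ/(1−ρ). A sketch, with illustrative parameter values (2 ms mean service time, 250 requests/s):

```python
import numpy as np

def pk_mean_queue_length(alpha, mu, lam):
    """Mean queue length (Eq. 2) of an M/G/1 queue whose service times are
    hyperexponential with mixing probabilities alpha and rates mu (Eq. 1)."""
    alpha, mu = np.asarray(alpha, float), np.asarray(mu, float)
    x_bar = np.sum(alpha / mu)            # mean service time
    x2_bar = np.sum(2 * alpha / mu**2)    # second moment of service time
    cb2 = x2_bar / x_bar**2 - 1           # squared coefficient of variation
    rho = lam * x_bar                     # server load
    assert rho < 1, "queue must be stable"
    return rho + rho**2 * (1 + cb2) / (2 * (1 - rho))

# Single phase: exponential service with mean 2 ms, arrival rate 250/s,
# so rho = 0.5 and the M/M/1 mean queue length is rho/(1 - rho) = 1.
print(pk_mean_queue_length([1.0], [500.0], 250.0))  # approximately 1.0
```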
Prediction using discrete event simulation
- Suppose a steady state has been reached and $\rho = 0.5$.
- Suddenly requests increase in frequency so that $\rho$ would be 0.8 if the request service time distribution remained the same.
- Now consider favourable and unfavourable overload distributions instead of the original distribution. Let

$$ \alpha_j^{(\mathrm{overload:lo})} = \frac{n - j + 1}{\sum_{i=1}^{n} i}, \qquad \alpha_j^{(\mathrm{overload:hi})} = \frac{j}{\sum_{i=1}^{n} i} $$

- See next slide for explicit and simulated server loadings, representing step changes in access requests such as might happen when a deadline occurs.
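The two overload mixing distributions can be computed directly; each is a valid probability vector, and one is the reverse of the other. A sketch, with n = 4 clusters chosen purely for illustration:

```python
import numpy as np

def overload_mixtures(n):
    """'Favourable' (lo) and 'unfavourable' (hi) overload mixing probabilities
    over n service time clusters, following the definitions above:
    alpha_lo puts more weight on fast clusters, alpha_hi on slow ones."""
    j = np.arange(1, n + 1)
    norm = j.sum()                  # sum_{i=1}^{n} i = n(n+1)/2
    alpha_lo = (n - j + 1) / norm
    alpha_hi = j / norm
    return alpha_lo, alpha_hi

lo, hi = overload_mixtures(4)
print(lo)  # [0.4 0.3 0.2 0.1]
print(hi)  # [0.1 0.2 0.3 0.4]
```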
“Favourable” and “Unfavourable” overload request distributions

[Figure: explicit and simulated load factor (rho) versus time under the favourable and unfavourable overload request distributions; x-axis: seconds (0-2000), y-axis: load factor rho (0.0-1.0)]
Summary
- (XACML) PDP performance is a real problem
- Our approach...
Compared to other authors, it is
- More than isolated performance improvement proposals
- Repeatable and reproducible
Compared to our earlier work, it offers
- Greatly improved clustering algorithm
- Derived explicit model for special (validation) cases
- Prediction using discrete event simulation
Future work
Use a flexible domain model for policies and requests
- richer policies, explicitly implementing security models
- generalised request profiles
Generalise to a distributed PDP implementation
- Multiprocessing / multithreading
- Additional queueing disciplines (such as processor sharing)