ESTIMATING THE COST FOR EXECUTING BUSINESS PROCESSES IN THE CLOUD
TRANSCRIPT
Vincenzo Ferme, Ana Ivanchikj, Cesare Pautasso. Faculty of Informatics, USI Lugano, Switzerland
Slide 2: Deploying WfMSs to the Cloud
[diagram: a WfMS executing processes A, B, C, D, deployed on IaaS or PaaS (BPaaS); Users generate the Workload; $ pays for the IT Resources]
sweet spot in the performance vs. resource consumption trade-off
Slide 3: Challenges in Deploying WfMSs in the Cloud
evaluate the Cloud cost models, determine the best suitable provider
Evaluate the Cloud Offers: which is the best suitable provider?
[diagram: balancing $ cost, Performance, and Resources]
Slide 4: Goal
select candidate VMs according to workload and wanted performance
[diagram: measure the WfMS's Throughput and Resp. Time together with its RAM and CPU usage (the WfMS is CPU + RAM bound), then match it against Cloud VMs of different prices ($, $$)]
Slide 5: The Steps of the Method
from performance requirements to cheapest VMs
1. Workload Mix (process models A, B, 80% C, 20% D)
2. Load Function (users over time: y axis 0 to 1200, x axis 0 to 10 minutes)
3. Performance Req. (Metrics, KPIs)
4. Run Experiments
5. Analyse Results (Resources, Performance)
6. Cloud Offers ($)
7. VMs Selection (Res. Efficiency, Cost $)
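The seven steps above can be sketched as plain data and functions. A minimal illustration, assuming hypothetical threshold values; only the 80%/20% split and the 0-to-1200 ramp come from the slide:

```python
# Sketch of the method's artifacts as plain data (illustrative values only).

# 1. Workload Mix: share of started instances per process model
#    (only the 80% C / 20% D split is shown on the slide).
workload_mix = {"C": 0.8, "D": 0.2}

# 2. Load Function: number of concurrent users, sampled once per minute.
load_function = [0, 120, 240, 360, 480, 600, 720, 840, 960, 1080, 1200]

# 3. Performance Requirements: target KPIs (hypothetical thresholds).
performance_req = {"min_throughput": 100.0, "max_resp_time_ms": 500.0}

def analyse(results):
    """5. Analyse Results: keep configurations meeting the requirements."""
    return [r for r in results
            if r["throughput"] >= performance_req["min_throughput"]
            and r["resp_time_ms"] <= performance_req["max_resp_time_ms"]]
```

Steps 6 and 7 then match the surviving configurations against priced Cloud offers.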
Slide 6: Assumptions
RAM and CPU, only WfMS no DBMS, workload dependent
Observed Resources: CPU and RAM (no Network, no I/O)
Only the WfMS is observed, not the DBMS
The results are workload dependent (they hold for the chosen Workload Mix and Load Function)
Slide 7: An Example of Application
applying the method for the Camunda WfMS
[the seven steps of Slide 5, instantiated for Camunda]
Slide 8: 1. The Workload Mix
realistic, mix of 5 models with different complexity
camunda_additional_approval: counter instantiation; "Change necessary?" leads to "Amend something" (Empty Script) or "Do the next task" (Empty Script); "Do some approval" (Decision Script); "Approved?" leads to "Do the next task" (Empty Script); yes/no branches
21 Elements (Nodes + Edges)
Slide 9: 1. The Workload Mix
camunda_oopSignavio: Order Received; Determine Location (Decision Script); "Exception detected?"; Escalation (Empty Script); "Order amount?" (>200 / <=200); Order Canceled; Order Approval (Decision Script); "Approved?"; Send Rejection Email (Empty Script); Order Rejected; Send Order to WH (Empty Script); Receive Dispatch Confirmation (Empty Script); Control Dispatch (Empty Script); Send Confirmation Email (Empty Script); Order Confirmed
32 Elements (Nodes + Edges)
Slide 10: 1. The Workload Mix
camunda_invoice_collaboration: Scan Invoice (Empty Script); Archive Original (Empty Script); Ask Approver Assignment (Empty Script); Assign Approver (Empty Script); Approve Invoice (Decision Script); "Invoice Approved?"; Prepare Bank Transfer (Empty Script); Ask Invoice Review (Empty Script); Review Invoice (Decision Script); "Review successful?"; Archive Invoice (Empty Script)
39 Elements (Nodes + Edges)
Slide 11: 1. The Workload Mix
activiti_modelSignavio: In-depth Analysis; Medical Analysis (Empty Script); Risk Assessment (Empty Script); Send Meeting Request (Empty Script); F2F Meeting (Empty Script); Calculate Score (Empty Script); Preliminary Judgement (Empty Script); Update Application With Calculation (Empty Script); Approve Loan Application (Decision Script); "Approved?"; Send Rejection Email (Empty Script); Final Evaluation (Decision Script) with "ok" / "not ok" outcomes
41 Elements (Nodes + Edges)
Vincenzo Ferme
12
1. The Workload Mixrealistic, mix of 5 models with different complexity
COUNTERPARTY
CounterpartyRequested
ReviewOnboarding
Request(Decision
Script)
Overriderequested?
Review OverrideRequest
(Decision Script)
Tax(Empty Script)
Missing TaxDocuments
(Decision Script) Goto tax?
Check Tax Rules(Decision Script)
Overriderequested?
Taxapproved?
Amend Tax Data(Decision Script)
Overriderequested?
High RiskCountry
(Empty Script)
MissingComplianceDocuments
(Empty Script)
World Check(Decision
Script) Gotocompliance?
Check ComplianceRules
(Decision Script)
Overriderequested?
Complianceapproved?
Amend ComplianceData
(Decision Script)
Overriderequested?
Start SSIReview
(Empty Script)
Overrideapproved?
override rejected
Ready topublish?
PublishCounterparty(Empty Script)
Cancel SSI Review(Empty Script)
rejected
published
yes
no
yes
no
no
may
be
no
yes
no
may
be
noyes
yes
yes
no
yes
no
yes
no
no
yes
no
yes
yes
84 Elements (Nodes + Edges)
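Per started instance, the Workload Mix behaves like a weighted draw over the five models. A sketch of that draw; the model names and element counts come from the slides, while the shares passed to `pick_model` are illustrative:

```python
import random

# Element counts (Nodes + Edges) of the five workload models, from the slides.
MODEL_SIZES = {
    "camunda_additional_approval": 21,
    "camunda_oopSignavio": 32,
    "camunda_invoice_collaboration": 39,
    "activiti_modelSignavio": 41,
    "COUNTERPARTY": 84,
}

def pick_model(mix, rng=random):
    """Pick the next process model to start, according to the workload mix.

    `mix` maps model name to its share of started instances; the exact
    five-model shares used in the paper are not reproduced here.
    """
    names = list(mix)
    return rng.choices(names, weights=[mix[n] for n in names], k=1)[0]
```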
Slide 13: 2. The Load Functions
realistic for different sized companies
users (usr): 50, 500, 1000
[plot for usr=1000: Users (0 to 1000) over Time (0:00 to 10:00)]
1 second think time
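A load function of this shape can be sketched as a ramp followed by a plateau. Note the assumption: the slide only shows the usr=1000 curve over ten minutes, so the linear ramp and the 480-second ramp duration below are approximations, not the paper's exact curve:

```python
def users_at(t_sec, usr=1000, ramp_sec=480, total_sec=600):
    """Active users at time t (seconds) for one load function.

    Assumed shape: linear ramp from 0 to `usr` over the first `ramp_sec`
    seconds, then a plateau until `total_sec`. This only approximates the
    curve shown for usr=1000; the actual curves are defined in the paper.
    """
    if t_sec < 0 or t_sec > total_sec:
        return 0
    if t_sec >= ramp_sec:
        return usr
    return int(usr * t_sec / ramp_sec)

THINK_TIME_SEC = 1  # each simulated user waits 1 second between requests
```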
Slide 14: 3. Performance Requirements
determined with "unbounded" experiments
"Unbounded" (U) Configuration: 64 CPU Cores, 128 GB RAM
The unbounded runs define the Target Performance.
Client-side Metrics: Number of Client Requests per second (REQ/s); Response Time (RT); ...
Server-side Metrics: Number of executed BP Instances (N); BP Instance Duration (D); Throughput (T); ...
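All of these metrics can be derived from start/end timestamps. A minimal sketch of the computation, assuming timestamps in seconds and taking plain averages (the paper may use different aggregations, e.g. percentiles):

```python
def client_metrics(requests):
    """Client-side metrics from (start_sec, end_sec) pairs per request."""
    n = len(requests)
    duration = max(e for _, e in requests) - min(s for s, _ in requests)
    return {
        "REQ/s": n / duration if duration else float(n),  # request rate
        "RT": sum(e - s for s, e in requests) / n,        # mean response time
    }

def server_metrics(instances):
    """Server-side metrics from (start_sec, end_sec) pairs per BP instance."""
    n = len(instances)
    span = max(e for _, e in instances) - min(s for s, _ in instances)
    return {
        "N": n,                                            # executed instances
        "D": sum(e - s for s, e in instances) / n,         # mean duration
        "T": n / span if span else float(n),               # throughput
    }
```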
Slide 15: 4. Run Experiments
the BenchFlow framework [BPM '15] [ICPE '16]
[architecture: the Faban harness and its Drivers exercise the WfMS (via a Web Service) and the DBMS, running in Containers on Servers; Monitors (with Adapters) and Collectors gather data into Minio and the Instance Database during Test Execution; Data Transformers and Analysers then compute the Performance Metrics and Performance KPIs]
Slide 16: 5. Analyse Results
define (determine) the wanted performance
WfMS Configurations run as Containers on 3 Servers connected at 10 Gbit/s (WfMS, DBMS, Load Generator)
Three Runs to Improve Results Reliability
Slide 17: 5. Analyse Results
determine the right amount of needed resources
[chart: Normalised performance over number of CPU cores with 500 Users]
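Reading the right amount of resources off such a chart amounts to finding the smallest core count whose normalised performance is close enough to the unbounded target. A sketch, where the 0.95 acceptance threshold is an illustrative choice, not the paper's:

```python
def smallest_sufficient_cores(perf_by_cores, target, fraction=0.95):
    """Smallest core count whose measured metric reaches `fraction` of the
    unbounded target (the 0.95 threshold is illustrative).

    `perf_by_cores` maps number of CPU cores to the measured metric
    (e.g. throughput); `target` is the value from the unbounded run.
    """
    for cores in sorted(perf_by_cores):
        if perf_by_cores[cores] >= fraction * target:
            return cores
    return None  # no tested configuration reaches the target
```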
Slide 18: 5. Analyse Results
validate the performance results
[chart: Unbounded vs Bounded Performance Results]
Slide 19: 6. Cloud Offers
match resource requirements with Cloud offers
[diagram: the CPU + RAM bound profile (CPU, RAM) is matched against offers of different prices ($, $$)]
Slides 20-22: 6. Cloud Offers
match resource requirements with Cloud offers
[scatter plots: wavg(e(CPU)) [%] (0% to 100%) vs Price ($/hr) (0 to 3, plus outliers at 8 to 8.8) for 50, 500, and 1000 users; data points labelled Am, Az, Gp, Gc, S]
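One plausible reading of what each point in these plots encodes (the paper's exact definition of wavg(e(CPU)) may differ): buy enough VMs of a given offer to cover the required cores, take the share of bought cores actually used as the CPU efficiency, and plot it against the total hourly price. A sketch under that assumption:

```python
import math

def efficiency_and_price(required_cores, vm):
    """CPU efficiency and hourly price of covering the demand with one offer.

    Illustrative reading of the wavg(e(CPU)) vs Price($/hr) plots: buy
    enough VMs of this type to cover `required_cores`; e(CPU) is then the
    share of the bought cores that is actually used.
    """
    n_vms = math.ceil(required_cores / vm["cores"])
    efficiency = required_cores / (n_vms * vm["cores"])
    return efficiency, n_vms * vm["price_per_hour"]
```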
Slide 23: 7. VMs Selection
select the set of suitable VMs, and identify the cheapest ones
[chart: Normalised Cores Utilisation vs Price of VMs]
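The selection step then reduces to a filter and a sort: keep the VMs whose normalised cores utilisation is acceptable, and rank them by price. A sketch, where the field names and the 0.5 suitability cut-off are illustrative, not taken from the paper:

```python
def select_vms(candidates, min_utilisation=0.5):
    """Keep the suitable VMs and rank them by price, cheapest first.

    `candidates` entries carry "name", "utilisation" (normalised cores
    utilisation, 0..1) and "price_per_hour"; the 0.5 cut-off is an
    illustrative suitability threshold.
    """
    suitable = [c for c in candidates if c["utilisation"] >= min_utilisation]
    return sorted(suitable, key=lambda c: c["price_per_hour"])
```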
Slide 24: Limitations and Future Work
approximation of the actual performance on the Cloud
Performance on the Cloud has more Variance
Slide 25: Limitations and Future Work
measure on the Cloud, validate the approach
[the method diagram gains a step 8, Cloud Exp., after 7. VMs Selection; future work: add Network and I/O to the observed resources, and consider the DBMS]
Highlights
Deploying WfMSs to the Cloud (Slide 2)
Proposed Method: The Steps of the Method (Slide 5)
Application to Camunda: 7. VMs Selection (Slide 23)
Limitations and Future Work (Slide 24)
benchflow: http://benchflow.inf.usi.ch
ESTIMATING THE COST FOR EXECUTING BUSINESS PROCESSES IN THE CLOUD
Vincenzo Ferme, Ana Ivanchikj, Cesare Pautasso. Faculty of Informatics, USI Lugano, Switzerland
BACKUP SLIDES
Slides 29-33: Published Work
[BTW '15] C. Pautasso, V. Ferme, D. Roller, F. Leymann, and M. Skouradaki. Towards workflow benchmarking: Open research challenges. In Proc. of the 16th conference on Database Systems for Business, Technology, and Web, BTW 2015, pages 331-350, 2015.
[SSP '14] M. Skouradaki, D. H. Roller, F. Leymann, V. Ferme, and C. Pautasso. Technical open challenges on benchmarking workflow management systems. In Proc. of the 2014 Symposium on Software Performance, SSP 2014, pages 105-112, 2014.
[ICPE '15] M. Skouradaki, D. H. Roller, L. Frank, V. Ferme, and C. Pautasso. On the Road to Benchmarking BPMN 2.0 Workflow Engines. In Proc. of the 6th ACM/SPEC International Conference on Performance Engineering, ICPE '15, pages 301-304, 2015.
[CLOSER '15] M. Skouradaki, V. Ferme, F. Leymann, C. Pautasso, and D. H. Roller. "BPELanon": Protect business processes on the cloud. In Proc. of the 5th International Conference on Cloud Computing and Service Science, CLOSER 2015. SciTePress, 2015.
[SOSE '15] M. Skouradaki, K. Goerlach, M. Hahn, and F. Leymann. Application of Sub-Graph Isomorphism to Extract Reoccurring Structures from BPMN 2.0 Process Models. In Proc. of the 9th International IEEE Symposium on Service-Oriented System Engineering, SOSE 2015, 2015.
[BPM '15] V. Ferme, A. Ivanchikj, C. Pautasso. A Framework for Benchmarking BPMN 2.0 Workflow Management Systems. In Proc. of the 13th International Conference on Business Process Management, BPM '15, pages 251-259, 2015.
[BPMD '15] A. Ivanchikj, V. Ferme, C. Pautasso. BPMeter: Web Service and Application for Static Analysis of BPMN 2.0 Collections. In Proc. of the 13th International Conference on Business Process Management [Demo], BPM '15, pages 30-34, 2015.
[ICPE '16] V. Ferme and C. Pautasso. Integrating Faban with Docker for Performance Benchmarking. In Proc. of the 7th ACM/SPEC International Conference on Performance Engineering, ICPE '16, 2016.
[CLOSER '16] V. Ferme, A. Ivanchikj, C. Pautasso, M. Skouradaki, F. Leymann. A Container-centric Methodology for Benchmarking Workflow Management Systems. In Proc. of the 6th International Conference on Cloud Computing and Service Science, CLOSER 2016. SciTePress, 2016.
[CAiSE '16] M. Skouradaki, V. Ferme, C. Pautasso, F. Leymann, A. van Hoorn. Micro-Benchmarking BPMN 2.0 Workflow Management Systems with Workflow Patterns. In Proc. of the 28th International Conference on Advanced Information Systems Engineering, CAiSE '16, 2016.
[ICWE '16] J. Cito, V. Ferme, H. C. Gall. Using Docker Containers to Improve Reproducibility in Software and Web Engineering Research. In Proc. of the 16th International Conference on Web Engineering, 2016.
[ICWS '16] M. Skouradaki, V. Andrikopoulos, F. Leymann. Representative BPMN 2.0 Process Model Generation from Recurring Structures. In Proc. of the 23rd IEEE International Conference on Web Services, ICWS '16, 2016.
[SummerSOC '16] M. Skouradaki, T. Azad, U. Breitenbücher, O. Kopp, F. Leymann. A Decision Support System for the Performance Benchmarking of Workflow Management Systems. In Proc. of the 10th Symposium and Summer School On Service-Oriented Computing, SummerSOC '16, 2016.
[OTM '16] M. Skouradaki, V. Andrikopoulos, O. Kopp, F. Leymann. RoSE: Reoccurring Structures Detection in BPMN 2.0 Process Models Collections. In Proc. of On the Move to Meaningful Internet Systems Conference, OTM '16, 2016. (to appear)
[BPM Forum '16] V. Ferme, A. Ivanchikj, C. Pautasso. Estimating the Cost for Executing Business Processes in the Cloud. In Proc. of the 14th International Conference on Business Process Management, BPM Forum '16, 2016.
Slide 34: Docker Performance
[IBM '14] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio. An updated performance comparison of virtual machines and Linux containers. IBM Research Report, 2014.
"Our results show that containers result in equal or better performance than VMs in almost all cases."
"Although containers themselves have almost no overhead, Docker is not without performance gotchas. Docker volumes have noticeably better performance than files stored in AUFS. Docker's NAT also introduces overhead for workloads with high packet rates. These features represent a tradeoff between ease of management and performance and should be considered on a case-by-case basis."
BenchFlow Configures Docker for Performance by Default
Slide 35: Call for Collaboration
WfMSs, process models, process logs
• Process Models: we want to characterise the Workload Mix using real-world process models. Share your executable BPMN 2.0 process models, even anonymised.
• Execution Logs: we want to characterise the Load Functions using real-world behaviours. Share your execution logs, even anonymised.
• WfMSs: we want to add more and more WfMSs to the benchmark. Contact us for collaboration, and for BenchFlow framework support.
Slide 36: State of the Art of WfMSs in the Cloud
Cloud WfMSs, business process optimisations, capacity planning
• Cloud WfMSs: an emerging market
• BPaaS: starting to be evaluated
• Business Process Execution Optimisations
• Cost-aware WfMSs
We focus on Capacity Planning with a widely used WfMS.
Slide 37: BenchFlow Framework
system under test
Docker Machine provides the Docker Engine; Docker Swarm manages the Servers; Docker Compose deploys the SUT's Deployment Conf. as Containers on the Docker Engine.
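Since the SUT's deployment configuration is handed to Docker Compose, it takes the shape of a Compose file. A hypothetical fragment for a WfMS plus its DBMS; the image names, ports, and credentials below are placeholders, not BenchFlow's actual configuration:

```yaml
# Hypothetical SUT deployment configuration (placeholder values).
version: "2"
services:
  wfms:
    image: camunda/camunda-bpm-platform   # the WfMS under test
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: mysql:5.7                      # DBMS holding the instance database
    environment:
      MYSQL_ROOT_PASSWORD: example
```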
Slide 38: Server-side Data and Metrics Collection
asynchronous execution of workflows
[sequence: Users drive the Faban Load Driver, which calls a Web Service on the Application Server to Start each Workflow (A, B, C, D) on the WfMS; the WfMS persists to the DBMS (Instance Database) and runs the workflow asynchronously until its End]
Slides 39-40: Server-side Data and Metrics Collection: monitors
Monitors' Characteristics:
• RESTful services
• Lightweight (written in Go)
• As non-invasive on the SUT as possible
Examples of Monitors: CPU usage (CPU Monitor); Database state (DB Monitor)
Slides 41-42: Server-side Data and Metrics Collection: collect data
Collectors' Characteristics:
• RESTful services
• Lightweight (written in Go)
• Two types: online and offline
• Buffer data locally
Examples of Collectors: Container's stats (e.g., CPU usage) via the Stats Collector; Database dump via the DB Collector; Application Logs
Collected data flows into Minio and the Instance Database for the Analyses.