Page 1

GETTING SOFTWARE RIGHT

System Architecture Metrics at SIG: Lessons Learned

Zeljko Obrenovic, http://obren.info

February 2, 2016

Page 2

Software Improvement Group

1500+ systems

40,000+ versions

7 bln+ lines of code

25 mln+ LOC analyzed each week

100+ languages

Page 3

Contents

1  No silver bullets

2  Getting what you measure

3  The importance of context and structure: “the story tells the facts, the facts do not tell the story”

4  Human in the loop: exploration, creativity and (simple) metrics + implications for tools

5  Visualizing the metrics: from paper and glue to 3D

Page 4

100+ languages (200 dialects)

abap, abapsmartforms, abl, accell, acl, actionscript, actionscript3, ada, adabasnatural, adfxml, ads, angularjstemplate, applicationmaster, aps, aquima, asp, aspx, batch, bea, bixml, biztalk, bpel, brail, bsp, c, cache, cacheobjectscript, ccl, cgdc, cl, clearbasic, clojure, cobol, coffeescript, coldfusion, common, configuration, coolgen, coolgenc, coolgencobol, cordysbpm, cpp, csharp, csharppreprocessed, csp, css, cucumber, datastage, datastageetl, datastageworkflow, db2, dcl, delphi, delphiforms, docker, drools, easytrieve, egl, ejs, embeddedsql, erb, esql, filetab, finacle, flow, fortran, freemarker, gensym, groovy, gsp, gupta, hql, html, ibmbpm, ibmbpmbpd, ibmbpmprocess, ideal, idms, importexpander, informix, informix4gl, informixsql, ingres, intershoppipeline, jade, jasperreports, java, javafx, javascript, jbc, jcl, jcs, jsf, json, jsp, less, lodestar, logicnets, lotusscript, lua, magic, magik, magnum, matlab, mendix, messagebuilder, models, mustache, mysql, naviscript, nodejs, nonstopsql, normalizedsystemsjava, objectivec, odi, opa, opc, openroad, oracle, pascal, pascalprecompiled, performance, perl, php, pli, plsql, plsqlforms, plsqlreports, postgresql, powerbuilder, powershell, progress, progressstrict, pronto, prt, puppet, python, razor, rexx, rpg, ruby, sas, sass, scala, scl, scr, script, shared, shellscript, siebeldeclarative, siebeljs, siebelscripted, siebelworkflow, smalltalk, sourcegraph, sql, sqlite, sqlj, sqr, ssis, starlimssql, streamserve, swift, synon, t4, tacl, tal, tandem, tapestry, tibco, tibcobe, tibcobejava, tibcobestatemachine, tripleforms, tsql, turtle, typescript, uil, uniface, unknown, until, vag, vagrecord, vb, vbnet, velocity, visualobjects, visualrpg, webfocus, webmethods, wsdl, wtx, xaml, xml, xpdl, xpp, xquery, xsd, xslt, yaml

Page 5

Dependencies among modules

31 modules, 347 connections, 79 places with cyclic dependencies
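
Counting such cycles is mechanical once the module dependency graph has been extracted. A minimal sketch (not SIG's tooling; module names invented, and "places with cyclic dependencies" read here as strongly connected components of size greater than one):

```python
# Flag cyclic dependencies among modules, given the dependency graph
# as {module: [modules it depends on]}.
def strongly_connected_components(graph):
    """Tarjan's algorithm; returns a list of SCCs (each a list of nodes)."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:  # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

deps = {"billing": ["core", "ui"], "ui": ["core"], "core": ["billing"]}
cycles = [scc for scc in strongly_connected_components(deps) if len(scc) > 1]
print(cycles)  # [['ui', 'core', 'billing']]: all three depend on each other
```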

Page 6

Basic Types of Metrics

SIG is a “boring” company


•  Size – e.g. lines of code

•  Complexity – e.g. control flow

•  Coupling – degree of interdependence

[Diagram: a small graph of components C1–C8 illustrating coupling]
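
To make the three basic types concrete, here is a rough sketch of how each could be computed for a single Python file with only the standard library. Counting branching keywords for complexity and distinct imports for coupling are simplifications for illustration, not SIG's definitions:

```python
import ast

def basic_metrics(path):
    with open(path) as f:
        source = f.read()
    tree = ast.parse(source)
    # Size: non-blank lines of code.
    loc = sum(1 for line in source.splitlines() if line.strip())
    # Complexity: a crude count of control-flow branching nodes.
    branches = sum(isinstance(node, (ast.If, ast.For, ast.While, ast.Try))
                   for node in ast.walk(tree))
    # Coupling: number of distinct modules this file imports.
    imported = {alias.name for node in ast.walk(tree)
                if isinstance(node, ast.Import) for alias in node.names}
    imported |= {node.module for node in ast.walk(tree)
                 if isinstance(node, ast.ImportFrom) and node.module}
    return {"size_loc": loc, "complexity": branches, "coupling": len(imported)}

print(basic_metrics(__file__))
```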

Page 7

Mapping technologies to basic metrics

4. System – codebase maintained by a team of developers

3. Component – e.g. top-level directory, package, namespace, Maven sub-module, Visual Studio solution

2. Module – e.g. file, class, script

1. Unit – e.g. method, procedure, function, subroutine
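
The levels matter because metrics are measured at one level and aggregated upward. A small illustrative sketch (all names invented) of rolling unit-level size up through modules and components to the system:

```python
from collections import defaultdict

# Level 1: lines of code per unit (method/function).
unit_loc = {"Invoice.total": 12, "Invoice.render": 48, "db.connect": 20}
# Containment: unit -> module (file/class), module -> component (package).
unit_module = {"Invoice.total": "invoice.py", "Invoice.render": "invoice.py",
               "db.connect": "db.py"}
module_component = {"invoice.py": "billing", "db.py": "infrastructure"}

module_loc, component_loc = defaultdict(int), defaultdict(int)
for unit, loc in unit_loc.items():
    module = unit_module[unit]
    module_loc[module] += loc                      # level 2
    component_loc[module_component[module]] += loc  # level 3

system_loc = sum(unit_loc.values())                 # level 4: whole codebase
print(dict(module_loc), dict(component_loc), system_loc)
```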

Page 8

Contents

1  No silver bullets

2  Getting what you measure

3  The importance of context and structure: “the story tells the facts, the facts do not tell the story”

4  Human in the loop: exploration, creativity and (simple) metrics + implications for tools

5  Visualizing the metrics: from paper and glue to 3D

Page 9

Getting what you measure

>  Making alterations just to improve the value of a metric

•  It is difficult to avoid using metrics as a goal rather than as a feedback mechanism

>  Recognized when the changes made to the software are purely cosmetic

>  Examples: reordering getters and setters, mechanical splitting of methods (e.g. step 1, step 2…) – see the sketch below

>  In the end you need to be able to zoom in on the code

•  Need for tools that enable rapid zoom in / zoom out between metrics and the source code

“A metric is a good servant but a bad master”
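
To make “purely cosmetic” concrete, a hypothetical example of gaming a unit-size metric: a long function is mechanically chopped into step helpers, so every unit now measures “short” while nothing about the design has improved:

```python
# Before: one 45-line process_order. After: the same 45 lines, mechanically
# split. The unit-size metric improves; the logic, the shared state, and the
# readability are exactly what they were before.

def process_order(order):
    _step_1(order)
    _step_2(order)
    _step_3(order)

def _step_1(order):  # first 15 lines of the old body, verbatim
    ...

def _step_2(order):  # next 15 lines, still sharing all state via `order`
    ...

def _step_3(order):  # remaining lines; no real seam was designed here
    ...
```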

Page 10

Getting what you measure

>  Focusing on only a single metric:

•  E.g. duplication vs. coupling

>  “Focusing” on too many metrics:

•  E.g. more metrics than lines of code

>  7 ± 2 (complementary) metrics

•  Applying usability principles when designing metrics

One-track metric or metrics galore


Page 11

SIG Metrics Models


Maintainability: volume, unit size, duplication, unit complexity, module coupling, unit interfacing, component balance, component independence

Security: secure data transport, access management, identification strength, session management, input & output verification, authorized access, secure data storage, user management, evidence strength

Performance: internal communication, single transaction optimisation, external communication, transaction scalability, isolation, data scalability, resource elasticity

Reliability: observability, fault isolation, redundancy, transaction handling, deployment automation, reliability testing, system autonomy, failover, uptime

Page 12

Getting what you measure

>  2,463 duplicates vs. 9% duplication

>  28,882 methods, of which 1,326 are longer than 15 lines of code, vs. risk profiles

In the land of details, aggregation rules
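
A sketch of what such an aggregation rule can look like: rather than reporting raw counts, bucket each unit's size into risk categories and report the share of total LOC per category. The thresholds below are illustrative, not SIG's calibrated ones:

```python
# Risk bands: (upper bound on unit LOC, label). The last band is open-ended.
RISK_BANDS = [(15, "low"), (30, "moderate"), (60, "high"),
              (float("inf"), "very high")]

def risk_profile(unit_sizes):
    """Share of total LOC (in %) sitting in units of each risk category."""
    totals = {label: 0 for _, label in RISK_BANDS}
    for loc in unit_sizes:
        label = next(lbl for bound, lbl in RISK_BANDS if loc <= bound)
        totals[label] += loc
    grand = sum(unit_sizes) or 1
    return {label: round(100.0 * loc / grand, 1)
            for label, loc in totals.items()}

print(risk_profile([5, 8, 12, 22, 75]))
# {'low': 20.5, 'moderate': 18.0, 'high': 0.0, 'very high': 61.5}
```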

Page 13

Getting what you measure

•  Comparison with other systems – benchmark

•  Comparison in time (trend analysis)

•  Research

Giving meaning to numbers
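
For example, benchmarking can turn a raw number into a rating by placing it in the distribution of comparable systems. A sketch with invented benchmark data and illustrative 5/30/30/30/5 percentile bands:

```python
def star_rating(value, benchmark):
    """Map a metric value (lower is better) to 1..5 stars by its
    position in a benchmark of comparable systems."""
    beaten = sum(1 for other in benchmark if other > value)
    percentile = beaten / len(benchmark)   # share of systems we beat
    cutoffs = [0.05, 0.35, 0.65, 0.95]     # 5/30/30/30/5 per cent bands
    return 1 + sum(percentile >= c for c in cutoffs)

duplication_benchmark = [3, 5, 6, 8, 9, 11, 14, 18, 25, 40]  # % duplicated
print(star_rating(9, duplication_benchmark))  # -> 3
```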

Page 14

[Diagram: the SIG benchmark in context – Technology Risk Reviews of systems and processes and the Software Intelligence Portal feed project-specific data into the benchmark; Software Measurement Instruments supply calibration data; external research/data reviews supply context; the result is measurement experiences and client-specific insights]

Page 15

Getting what you measure

>  With concrete numbers, a transparent model, and a connection to concrete examples in the source code, you can have a concrete discussion

>  Sometimes the biggest value of metrics is not the measurement itself but making such discussions possible

“Better wrong than vague”

Page 16

Contents

1  No silver bullets

2  Getting what you measure

3  The importance of context and structure: “the story tells the facts, the facts do not tell the story”

4  Human in the loop: exploration, creativity and (simple) metrics + implications for tools

5  Visualizing the metrics: from paper and glue to 3D

Page 17

Story

>  “The facts [metrics] do not tell the story. The story tells the facts [metrics].”

>  We discover a subject.

>  We create a hypothesis to verify.

>  We seek open source data to verify the hypothesis.

>  We seek human sources.

>  As we collect the data, we organise it – so that it is easier to examine, compose into a story, and check.

>  We put the data in a narrative order and compose the story.

>  We do quality control to make sure the story is right.

>  We publish the story, promote and defend it.

Unexpected lessons from investigative journalism

Page 18

SIG Quality Model: Maintainability

Metrics and their story: it takes time to build the narrative


[Diagram: the system properties volume, unit size, duplication, unit complexity, module coupling, unit interfacing, component balance and component independence map onto the maintainability sub-characteristics analysability, modifiability, testability, modularity and reusability]

Page 19

The Cycle of Unsustainable Software Development

Fear to touch existing code → workarounds (hacks, copy-paste, …) → the system grows in size and complexity → new developers are needed → fear to touch existing code → …

Page 20

From Measuring to Guidelines – Identifying Tipping Points


Page 21

Contents

1  No silver bullets

2  Getting what you measure

3  The importance of context and structure: “the story tells the facts, the facts do not tell the story”

4  Human in the loop: exploration, creativity and (simple) metrics + implications for tools

5  Visualizing the metrics: from paper and glue to 3D

Page 22

Analysis with metrics is a creative process

>  Exploration

•  Explore metrics, use metrics to guide exploration

•  Speed and ease, safe exploration space

>  Many paths and many styles

•  GUI, scripts…

>  Open interchange

•  E.g. database, XML, CSV, Excel, PowerPoint…

→  Enable innovative usage of tools

→  E.g. quality workshops

Principles of creativity support tools

[Diagram: exploration cycle of question, explore, collect, organize, reflect, answer]

Page 23

Metrics cannot replace exploration of source code

>  Biggest / most complex constructs first

>  Elephant in the room: fast-forward through the code

•  Not missing huge atypical constructs

>  Simple query + false-positive elimination

•  A simple search with more false positives + quick exploration and preview

Patterns of metrics-driven exploration (see the sketch below)
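
A sketch of this pattern on invented data: rank units by a simple metric, preview the biggest first, and accept a crude query with false positives because previewing and discarding hits is fast:

```python
units = [
    ("OrderService.process", 310, 42),   # (name, lines of code, branches)
    ("ReportBuilder.render", 2200, 12),  # huge but flat: generated code?
    ("Parser.next_token", 95, 38),
]

# Fast-forward through the code: biggest elephants first.
for name, loc, branches in sorted(units, key=lambda u: u[1], reverse=True)[:10]:
    print(f"{loc:>6} LOC  {branches:>3} branches  {name}")

# Simple query plus manual false-positive elimination: crude but quick
# to preview and discard, which beats a slow precise query while exploring.
suspicious = [name for name, loc, branches in units if loc > 200 or branches > 30]
print(suspicious)
```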

Page 24

Contents

1  No silver bullets

2  Getting what you measure

3  The importance of context and structure: “the story tells the facts, the facts do not tell the story”

4  Human in the loop: exploration, creativity and (simple) metrics + implications for tools

5  Visualizing the metrics: from paper and glue to 3D

Page 25

It is difficult to beat Graphviz and bar charts

>  The only visualizations that work well for very diverse data set sizes

>  “Advanced” visualizations work very nicely for the concrete data sets shown in the examples, but fail miserably when a data set is different

For automatically generated graphs
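
Part of the reason is that emitting Graphviz input is trivial to automate and degrades gracefully for any number of modules and edges. A sketch that writes a module dependency graph as DOT text (dependencies invented):

```python
deps = {"ui": ["core"], "billing": ["core", "ui"], "core": []}

# Emit plain DOT: no per-dataset layout tuning needed, Graphviz handles it.
lines = ["digraph modules {", "  rankdir=LR;"]
for module, targets in sorted(deps.items()):
    for target in targets:
        lines.append(f'  "{module}" -> "{target}";')
lines.append("}")

with open("modules.dot", "w") as f:
    f.write("\n".join(lines))
# Render with: dot -Tsvg modules.dot -o modules.svg
```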

Page 26

3D Visualizations

In principle they do not work / add value, but…

Page 27

Do not underestimate paper and glue

Making metrics tangible

Page 28

Conclusions

>  No silver bullets: basic principles apply everywhere, all the time

>  Getting what you measure: “a metric is a good servant but a bad master”

>  The importance of context and structure: “the story tells the facts, the facts do not tell the story”

>  Human in the loop: analysis with metrics is a creative process

>  Visualizing the metrics: again “no silver bullets”, but it pays off to keep experimenting
