Better testing for C# software through source code analysis


Upload: kalistick

Posted on 29-Nov-2014


Category:

Technology



DESCRIPTION

You are probably already using source code analysis for your C# software to ensure code quality. Want to go further? You can use source code analysis to test the software more efficiently, through risk-based testing and improved regression testing, and thus deliver the software faster while reducing testing costs.

TRANSCRIPT

Page 1: Better testing for C# software through source code analysis

This document is a sample audit report produced automatically from the results of the analysis of the application on the Kalistick platform. It does not include any specific comments on the results. Its purpose is to serve as a model for building custom reports; it illustrates the ability of the platform to render a clear and comprehensible view of the quality of an application.

This document is confidential and is the property of Kalistick. It should not be circulated or modified without permission.

Kalistick 13 av Albert Einstein F-69100 Villeurbanne +33 (0) 486 68 89 42

[email protected]

www.kalistick.com

DEMO Application SHARPDEVELOP

Audit Report

2011-01-01

Page 2: Better testing for C# software through source code analysis

Code audit of SharpDevelop application 2011-01-01

Confidential – This document is the property of Kalistick 2/59

1 Executive Summary

The Quality Cockpit uses static analysis techniques: it does not execute the application, but analyzes the elements that compose it (code, test results, architecture...). The results are correlated, aggregated and compared within the project context to identify risks related to quality. This report presents the results.

Variation compared to the objective

This chart compares the current status of the project to the objectives set for each quality factor. The goal, set at the initialization of the audit, represents the importance of each quality factor. It is intended to define the rules to follow during development and the accepted tolerance.

Rate of overall non-compliance

This gauge shows the overall level of quality of the application compared to its objective. It displays the percentage of the application (source code) regarded as non-compliant. According to the adopted configuration, a rate higher than 15% indicates the need for further analysis.

Origin of non-compliances

This graph identifies the technical origin of the detected non-compliances, and the main areas of improvement. Depending on the elements submitted for analysis, some quality domains may not be evaluated.

Page 3: Better testing for C# software through source code analysis


Report Organization

This report presents the concepts of the Quality Cockpit, the goal and the associated technical requirements, before proceeding with the summary results and the detailed results for each technical area.

1 Executive Summary ...................................................................................................................................... 2

2 Introduction .................................................................................................................................................. 4

2.1 The Quality Cockpit............................................................................................................................... 4

2.2 The analytical approach ........................................................................................................................ 4

3 Quality objective........................................................................................................................................... 7

3.1 The quality profile ................................................................................................................................ 7

3.2 The technical requirements ................................................................................................................. 7

4 Summary of results ..................................................................................................................................... 10

4.1 Project status ...................................................................................................................................... 10

4.2 Benchmarking ..................................................................................................................................... 13

4.3 Modeling the application .......................................................................................................................... 17

5 Detailed results ........................................................................................................................................... 20

5.1 Detail by quality factors...................................................................................................................... 20

5.2 Implementation .................................................................................................................................. 21

5.3 Structure ............................................................................................................................................. 26

5.4 Test ..................................................................................................................................................... 35

5.5 Architecture ........................................................................................................................................ 42

5.6 Duplication ......................................................................................................................................... 43

5.7 Documentation ................................................................................................................................... 44

6 Action Plan .................................................................................................................................................. 47

7 Glossary ...................................................................................................................................................... 49

8 Annex .......................................................................................................................................................... 51

8.1 Cyclomatic complexity ........................................................................................................................ 51

8.2 The coupling ....................................................................................................................................... 53

8.3 TRI and TEI .......................................................................................................................................... 54

8.4 Technical Requirements ..................................................................................................................... 56

Page 4: Better testing for C# software through source code analysis


2 Introduction

2.1 The Quality Cockpit

This audit is based on an industrialized code analysis process. This industrialization ensures reliable results that can easily be compared with the results of other audits.

The analysis process is based on the "Quality Cockpit" platform, available through a SaaS¹ model (https://cockpit.kalistick.com). This platform has the advantage of providing a unique knowledge base that centralizes the results of the statistical analysis of millions of lines of code, continuously enriched with new analyses. It makes it possible to perform comparative analyses with other similar projects.

2.2 The analytical approach

The analysis focuses on the code of the application (source code and binary code), for Java (JEE) or C# (.NET) technologies. It is a static analysis (without runtime execution), supplemented by correlation with information from development tools already in place for the project: version control system, unit testing frameworks, code coverage tools.

The results are given through an analytical approach based on three main dimensions:

- The quality factors, which determine the nature of the impact of the detected non-compliances on the quality of the application
- The quality domains, which specify the technical origin of the non-compliances
- The severity levels, which position the non-compliances on a severity scale to characterize their priority

¹ Software as a Service: an application accessible remotely via the Internet (using a standard browser)

Page 5: Better testing for C# software through source code analysis


2.2.1 The quality factors

The quality factors standardize a set of quality attributes to which the application should conform, according to ISO 9126²³:

- Maintainability. Ability of the software to be easily repaired, depending on the effort required to locate, identify and correct errors.
- Reliability. Ability of the software to function properly, delivering the expected service in normal operation.
- Changeability. Ability of the software to evolve, depending on the effort required to add, delete and modify the functions of the software in operation.
- Security. Ability of the software to operate within integrity, confidentiality and traceability constraints.
- Transferability. Ability to have the maintenance and evolution of the software performed by a new team, separate from the one that developed the original software.
- Efficiency. Relationship between the level of performance of the software and the amount of resources required to operate in nominal conditions.

2.2.2 The quality domains

The quality domains determine the nature of problems according to their technical origin. There are six of them:

- Implementation. Problems inherent in coding: misuse of the language, potential bugs, code that is hard to understand... These problems can affect one or more of the six quality factors.
- Structure. Problems related to the organization of the code: methods that are too long, too complex, or with too many dependencies... These issues impact the maintainability and changeability of the application.
- Test. Describes how the application is tested, based on the results of unit tests (failure rate, execution time...) but also on the nature of the code covered by test execution. The objective is to ensure that the tests cover the critical parts of the application.

² ISO/IEC 9126-1:2001 Software engineering — Product quality — Part 1: Quality model: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=22749
³ The analysis focuses on a subset of ISO 9126 in order to concentrate on dimensions that can be checked automatically.

Page 6: Better testing for C# software through source code analysis


- Architecture. Problems with the software architecture of the application. The platform allows the definition of an architectural model to modularize the application into layers or components, and to define communication constraints between them. The analysis identifies in the code all the calls that do not satisfy these constraints, to detect maintainability, changeability and security risks.
- Documentation. Problems related to a lack of documentation in the code. This domain primarily impacts the transferability of the code.
- Duplication. Identification of all significant copy-pastes in the application. They impact reliability, maintainability, transferability and changeability.

2.2.3 Severity levels

The severity levels are intended to characterize the priority with which non-compliances should be corrected. This priority depends on the severity of the impact of the non-compliance, but also on the effort required for correction: some moderately critical problems may be marked with a high level of severity because their resolution is trivial.

To simplify interpretation, the severity levels are expressed on a four-level scale. The first is an error; the others are warnings, from most to least severe:

- Forbidden
- Highly inadvisable
- Inadvisable
- To be avoided

Unlike the Forbidden level, the other severity levels are managed with a tolerance threshold, which increases as severity decreases.

Page 7: Better testing for C# software through source code analysis


3 Quality objective

One of the distinctive features of the "Quality Cockpit" is that it performs the analysis according to the real quality needs of the project, in order to avoid unnecessary effort and to ensure that the identified quality risks are more relevant.

These requirements are formalized by defining the "quality profile" of the application, which characterizes the quality levels expected for each of the six main quality factors. This profile is then translated into "technical requirements", which are technical rules to be followed by the developers.

3.1 The quality profile

For this audit, the profile is established as follows:

See the Quality Cockpit

3.2 The technical requirements

Based on the above quality profile, technical requirements have been selected from the "Quality Cockpit" knowledge base. These technical requirements cover the six quality domains (implementation, structure, test, architecture, documentation, duplication) and are configured according to the quality profile (thresholds, levels of severity...). The objective is to calibrate the requirements so as to ensure the highest return on investment.

Page 8: Better testing for C# software through source code analysis


Here are the details of these technical requirements:

Domain: Implementation
Rule: -
Explanation, goal and thresholds: According to your profile, between 150 and 200 rules were selected. They are exhaustively presented in the appendix of the report (8.4.1 Implementation rules). Objective: avoid bad practices and apply best practices related to the technology used.

Domain: Structure
Rule: Size of methods
Explanation, goal and thresholds: Number of statements. This measure is different from the number of lines of code: it does not include comment lines or blank lines, only lines with at least one statement. Objective: avoid processing blocks that are difficult to understand. The threshold for the project is:
- Number of lines: 100

Domain: Structure
Rule: Complexity of methods
Explanation, goal and thresholds: Cyclomatic complexity of a method. It measures the complexity of the control flow of a method by counting the number of independent paths covering all possible cases. The higher the number, the harder the code is to maintain and test. Objective: avoid processing blocks that are difficult to understand, not testable, and which tend to have a significant failure rate. The threshold for the project is:
- Cyclomatic complexity: 20

Domain: Structure
Rule: Complexity and coupling of methods
Explanation, goal and thresholds: Identifies methods that are difficult to understand, test and maintain because of moderate complexity (cyclomatic complexity) combined with numerous references to other types (efferent coupling). Objective: avoid processing blocks that are difficult to understand and not testable. The thresholds for the project are:
- Cyclomatic complexity: 15
- Efferent coupling: 20

Page 9: Better testing for C# software through source code analysis


Domain: Test
Rule: Test coverage of methods
Explanation, goal and thresholds: Rate of code coverage for a method. This metric is standardized by our platform based on the raw code coverage measures when they are provided in the project archive. This rule requires a minimum level of testing (code coverage) for each method of the application, according to its TRI (Test Relevancy Index); the TRI of each method assesses the risk that it contains bugs. Its calculation takes into account the business risks defined for the application. Objective: focus the test strategy and test efforts on the sensitive areas of the application and verify them. These sensitive areas are evaluated according to their propensity to contain bugs and according to the business risks defined for the application. Details of the thresholds are provided in the annex to the report (8.4.2 Code coverage).

Domain: Architecture
Rule: Rules defined specifically through the architecture model
Explanation, goal and thresholds: See the architecture model defined for the application to check the architecture constraints. Objective: ensure that developments follow the expected architecture model and do not introduce inconsistencies which could become security holes or maintenance and evolution issues. Note: architecture violations are not taken into account in the calculation of non-compliance.

Domain: Documentation
Rule: Header documentation of methods
Explanation, goal and thresholds: Identifies methods of moderate complexity which have no documentation header. The methods considered are those whose cyclomatic complexity and number of statements exceed the thresholds defined specifically for the project. Objective: ensure that documentation is available for key processing blocks to facilitate any change in the development team (transferability). The thresholds for the project are:
- Cyclomatic complexity: 10
- Number of lines: 50

Domain: Duplication
Rule: Detection of duplications
Explanation, goal and thresholds: Duplicated blocks are invalid beyond 20 statements. Objective: detect identical blocks of code in several places in the application, which often causes inconsistencies when making changes, and which is a factor in increased testing and development costs.
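As an illustration of the cyclomatic-complexity metric used by several of these rules, consider the following C# sketch (the method name and code are hypothetical, not taken from SharpDevelop). The complexity of a method is its number of decision points plus one, so each loop, conditional, and condition operator adds one path:

```csharp
// Illustrative only: a method with cyclomatic complexity 5
// (1 base path + 'for' + 'if' + '&&' + 'else if').
public static int CountPositiveEven(int[] values)
{
    int count = 0;
    for (int i = 0; i < values.Length; i++)        // +1
    {
        if (values[i] > 0 && values[i] % 2 == 0)   // +1 for 'if', +1 for '&&'
        {
            count++;
        }
        else if (values[i] == 0)                   // +1
        {
            // zeros are ignored
        }
    }
    return count;                                   // base path
}
```

A method like this stays well under the project threshold of 20; methods above the threshold would be flagged as non-compliant by the Complexity of methods rule.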

Page 10: Better testing for C# software through source code analysis


4 Summary of results

This chapter summarizes the status of the project using global indicators. These indicators measure the intrinsic quality of the project, but also compare its situation to that of other projects, using the "Quality Cockpit" knowledge base.

4.1 Project status

The following indicators relate to the intrinsic situation of the project.

4.1.1 Rate of overall non-compliance

The rate of non-compliance measures the percentage of application code considered as non-compliant.

See the Quality Cockpit

Specifically, this rate is the ratio of the number of statements in non-compliant classes to the total number of statements. A class is considered non-compliant if at least one of the following statements is true:

- A forbidden non-compliance is detected in the class
- A set of highly inadvisable, inadvisable, or to-be-avoided non-compliances is detected in the class, beyond a certain threshold. This calculation depends on the severity of each non-compliance and on the quality profile, which adjusts the tolerance threshold.
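As a minimal sketch of this rate calculation, using the statement count from section 4.1.4 and a hypothetical number of statements in non-compliant classes (the 42,484 figure is an assumption for illustration, not an audit result):

```csharp
// Hypothetical figures: 42,484 statements sit in non-compliant classes,
// out of 48,877 statements in the whole application.
double nonCompliantStatements = 42484;
double totalStatements = 48877;
double rate = 100.0 * nonCompliantStatements / totalStatements;  // about 86.9%
```

The platform then compares this rate to the tolerance defined by the quality profile (here, 15%).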

Page 11: Better testing for C# software through source code analysis


4.1.2 Deviation from target

This chart summarizes the difference between the target, as represented by the quality profile, and the current status of the project. This difference is shown for each quality factor:

See the Quality Cockpit

The level of non-compliance is calculated for each quality factor, and then weighted by the level of requirements set for that quality factor.

Quality theme    Classes  Significant non-compliances  % application
Changeability    429      1794                         84%
Efficiency       159      283                          42%
Maintainability  54       339                          18%
Reliability      425      1925                         84%
Security         0        0                            0%
Transferability  51       180                          25%
[Total]          480      2286                         86.92%

The detailed results specify, for each quality factor: the number of non-compliant classes, the number of violations of the selected rules, and the percentage of application code involved in non-compliant classes.

Page 12: Better testing for C# software through source code analysis


4.1.3 Origin of non-compliances

The following chart shows the distribution of non-compliances according to their technical origin:

See the Quality Cockpit

This chart compares the quality domains according to the impact, on the quality of the application, of the rules associated with each of them. The impact is measured from the number of statements in non-compliant classes.

4.1.4 Volumetry

The following table specifies the volume of the analyzed application:

Metric           Value  Trend
Line count       70895  +0.14%
Statement count  48877  +0.15%
Method count     7568   +0.36%
Class count      975    +0.21%
Package count    48     =

See the Quality Cockpit

A "line" corresponds to a physical line of a source file; it may be a blank line or a comment line. A "statement" is a primary unit of code: it can be written across multiple lines, and a single line may also contain multiple statements. For simplicity, a statement is delimited by a semicolon (;) or a left brace ({).
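As a hedged illustration of this counting convention (a sketch of the rule stated above, not the platform's actual parser), the following C# fragment contains 5 statements, one per '{' or ';' delimiter, spread over 9 code lines:

```csharp
// Illustrative only: each ';' or '{' below ends one statement.
public static int SumOfSquares(int[] values)
{                                  // statement 1 ('{')
    int sum = 0;                   // statement 2 (';')
    foreach (int v in values)      // no delimiter on this line
    {                              // statement 3 ('{')
        sum += v * v;              // statement 4 (';')
    }
    return sum;                    // statement 5 (';')
}
```

This is why the statement count (48877) is well below the line count (70895): blank lines, comments, closing braces and wrapped expressions add lines but no statements.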

Page 13: Better testing for C# software through source code analysis


4.2 Benchmarking

The "Quality Cockpit" knowledge base allows a comparative analysis of the project with the other projects reviewed on the platform. The objective is to measure its level of quality against an overall average.

This benchmarking is proposed in relation to two categories of projects:

- The "Intra-Cockpit" projects: projects analyzed continuously on the platform, and therefore, a priori, with a quality level above average
- The "Extra-Cockpit" projects: projects reviewed from time to time on the platform in audit mode, and therefore with a highly heterogeneous quality

Note: since each project has its own specific quality profile, benchmarking does not take the project configuration into account, but uses raw measures instead.

4.2.1 Comparison on implementation issues

The chart below shows the status of the project's implementation compared to the Extra-Cockpit projects, i.e. those analyzed on a one-off basis on the platform. For each level of severity, the quality of the project is positioned relative to the others:

See the Quality Cockpit

Page 14: Better testing for C# software through source code analysis


The project is positioned relative to other projects according to the rate of violations of each rule. The distribution is based on the quartile method; three groups are distinguished: "Better", the 25% best projects; "On the average", the 50% average projects; "Worse", the 25% worst projects. This information is then synthesized by level of severity.

The implementation rules compared are not necessarily the same, as quality profiles differ; the rules are compared here according to the severity level set for each project.

The following graph provides the same analysis, but this time against the Intra-Cockpit projects, which are analyzed continuously on the platform and therefore normally have an above-average level of quality, since the detected violations are more likely to have been corrected:

See the Quality Cockpit

A dominant red color indicates that the other projects tend to correct the violations detected on this project.

Page 15: Better testing for C# software through source code analysis


4.2.2 Mapping the structure

The following chart compares the size of the methods of the current project with those of the other projects, "Intra-Cockpit" and "Extra-Cockpit", by comparing the proportion of the application (as a percentage of statements) that is located in processing blocks (methods) with a high number of statements:

See the Quality Cockpit

A significant proportion of the application in the right-hand area is an indicator of greater maintenance and evolution costs.

NB: The application analyzed is indicated by the term "Release".

Page 16: Better testing for C# software through source code analysis


A similar comparison is provided for the cyclomatic complexity⁴ of methods, comparing the proportion of the application (as a percentage of statements) that is located within complex methods:

See the Quality Cockpit

A significant proportion of the application in the right-hand area indicates not only greater maintenance and evolution costs, but also reliability problems, because this code is difficult to test.

4.2.3 Comparison of main metrics

The following table compares the project with the other projects, "Intra-Cockpit" and "Extra-Cockpit", on the main metrics related to the structure of the code. Recommended interval values are provided for information purposes.

Metric                               Project  Extra-Cockpit  Intra-Cockpit  Recommended interval
Classes per package                  20.31    10.48          10.9           6 - 26
Methods per class                    7.76     7.78           7.58           4 - 10
Statements per method                6.46     12.76          10.85          7 - 13
Cyclomatic complexity per statement  0.31     0.22           0.15           0.16 - 0.24

See the Quality Cockpit

⁴ Cyclomatic complexity measures the complexity of the code, and thus how easy it is to test; cf. http://classes.cecs.ucf.edu/eel6883/berrios/notes/Paper%204%20(Complexity%20Measure).pdf

Page 17: Better testing for C# software through source code analysis


4.3 Modeling the application

To facilitate the understanding of the analysis results, the application is modeled in two ways: a functional perspective, to better identify the business features of the application and link them to the source code, and a technical perspective, to verify the technical architecture of the application.

These models are built using the modeling wizard available in the Cockpit. You can modify them on the Functional modeling and Technical Architecture pages (depending on your user rights).

4.3.1 Functional model

The functional model represents the business view of the application, which can be understood by all project members.

See the Quality Cockpit

The functional model is composed of modules, each representing a business feature or a group of functionalities. These modules have been identified from a lexical corpus generated from the application code, which makes it possible to isolate the business vocabulary of the application.

Page 18: Better testing for C# software through source code analysis


4.3.2 Technical model

The technical model represents the technical architecture of the application code. The idea is to define a target architecture model, which identifies the layers and/or technical components within the application, and sets constraints that allow or prohibit communications between each of these elements.

The aim is threefold:

- Homogenize the behavior of the application. For example, ensure that logging traces are written through a specific API, that data accesses pass through a dedicated layer, or that some third-party library is only used by specific components...
- Ensure the isolation of some components, to facilitate their development and limit unintended consequences, but also to make them shareable with other applications. Dependency cycles are, for instance, forbidden.
- Avoid security flaws, for example by ensuring that calls to the data layer always pass through a business layer in charge of validation controls.

The results of the architecture analysis are provided in chapter 5.5 Architecture.
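As a hedged sketch of the kind of constraint such a model expresses (the layer, namespace and type names below are hypothetical, not taken from SharpDevelop), a rule might allow UI-to-business calls while forbidding the UI layer from reaching the data layer directly:

```csharp
// Hypothetical three-layer application; names are illustrative only.
namespace Demo.Data
{
    public class OrderRepository
    {
        public void Save(string order) { /* persist the order */ }
    }
}

namespace Demo.Business
{
    // Allowed: the business layer wraps data access behind validation.
    public class OrderService
    {
        private readonly Demo.Data.OrderRepository _repository =
            new Demo.Data.OrderRepository();

        public void PlaceOrder(string order)
        {
            if (string.IsNullOrEmpty(order))
                throw new System.ArgumentException("order must not be empty");
            _repository.Save(order);
        }
    }
}

namespace Demo.UI
{
    public class OrderForm
    {
        public void OnSubmit(string order)
        {
            // Allowed call: UI -> Business.
            new Demo.Business.OrderService().PlaceOrder(order);

            // A call such as 'new Demo.Data.OrderRepository().Save(order);'
            // here would violate the model: the UI must not reach the data
            // layer directly, bypassing the validation controls.
        }
    }
}
```

The analysis would report the commented-out direct call as a forbidden communication between the UI and Data modules.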

Page 19: Better testing for C# software through source code analysis


See the Quality Cockpit

Green arrows formalize allowed communications between modules, while red arrows formalize forbidden communications.

Page 20: Better testing for C# software through source code analysis


5 Detailed results

This chapter details the results by focusing, for each quality domain, on the non-compliant elements.

5.1 Detail by quality factors

The histogram below details the non-compliance rate for each quality factor, also displaying the number of non-compliant classes. As a reminder, the rate of non-compliance is based on the number of statements defined in non-compliant classes compared to the total number of statements in the project.

These rates of non-compliance directly depend on the quality profile and on the level of requirements that has been selected:

See the Quality Cockpit

Since the same class may be non-compliant on several factors, the total does not necessarily correspond to the sum of the factors.

Page 21: Better testing for C# software through source code analysis


5.2 Implementation

The Implementation domain covers the rules related to coding techniques. Unlike those of other domains, these rules are often specific to the characteristics of a language (Java / C#). They identify, for example:

- Potential bugs: uninitialized variables, concurrency issues, recursive calls...
- Optimizations in terms of memory or CPU
- Security vulnerabilities
- Obsolete code
- Code deviating from recommended standards
- ...

Implementation rules are the most numerous of the technical requirements. They are called "practices".

5.2.1 Breakdown by severity

The objective of this indicator is to identify the severity of the practices that led to the invalidation of classes. Here, severity is divided into two levels: forbidden practices (Forbidden severity level) and inadvisable practices (Highly inadvisable, Inadvisable and To be avoided severity levels).

The following pie chart compares the number of classes that are non-compliant in implementation, according to the practices that contributed to this invalidation:

- When a class only violates forbidden practices, it is in the group "Forbidden practices"
- When a class only violates inadvisable practices, it is in the group "Inadvisable practices"
- Otherwise, the class violates practices of both categories and is in the group "Inadvisable and forbidden practices"

See the Quality Cockpit

Page 22: Better testing for C# software through source code analysis


The correction effort related to forbidden practices is generally lower than for less severe ones: a single violation is enough to cause a forbidden non-compliance, whereas several inadvisable violations are needed to cause a non-compliance, depending on the tolerance thresholds.

The table below completes the previous graph by introducing the concept of "significant non-compliance". A significant violation is one whose correction can fully or partially fix the non-compliance of a class. Indeed, due to the tolerance thresholds associated with severity levels, correcting some violations has no impact on the non-compliance of the class.

Severity | Significant non-compliances | New non-compliances | Corrected non-compliances | Other non-compliances
Forbidden | 382 | 5 | 0 | 0
Highly inadvisable | 176 | 1 | 0 | 55
Inadvisable | 81 | 5 | 2 | 336
To be avoided | 202 | 1 | 1 | 340

The columns "New non-compliances" and "Corrected non-compliances" are only relevant if the audit follows a previous audit.


5.2.2 Practices to fix in priority

The two following tables provide a list of forbidden practices and highly inadvisable practices detected in the

application. These are generally the rules to correct first.

These tables provide, for each practice, the number of new non-compliances (if a previous audit has been done), the total number of non-compliances for this practice, the number of non-compliant classes where this practice has been detected, and the percentage of statements of these classes compared to the overall number of statements in the project.

These figures help to set up an action plan based on the impact associated with each practice.

5.2.2.1 Forbidden practices

Practice | New | Non-compliances | NC classes | % application
AvoidRedundantCasts | 1 | 124 | 83 | 28.55%
ImplementIDisposableForTypesWithDisposableFields | 0 | 103 | 64 | 13.32%
DontHardcodeLocaleSpecificStrings | 2 | 81 | 56 | 13.62%
UseConstInsteadOfReadOnlyWhenPossible_ | 0 | 33 | 10 | 4.29%
UseIsNullOrEmptyToCheckEmptyStrings | 0 | 12 | 9 | 3.9%
OverrideEqualsWithOperatorOnValueTypes | 0 | 11 | 11 | 3.52%
PropertyNamesMustNotMatchGetMethods | 0 | 6 | 5 | 1.46%
InstantiateExceptionsWithArguments | 0 | 5 | 4 | 1.79%
DontImplementWriteOnlyProperty | 0 | 3 | 3 | 1%
DefineMessageForObsoleteAttribute | 2 | 2 | 2 | 1%
DontUseInadvisableTypes | 0 | 1 | 1 | 1%
DontRaiseExceptionInUnexpectedMethod_ | 0 | 1 | 1 | 1%

See the Quality Cockpit
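To make two of the rules above concrete, here is a minimal C# sketch of compliant code under ImplementIDisposableForTypesWithDisposableFields and UseIsNullOrEmptyToCheckEmptyStrings. The class and file names are invented for illustration and do not come from SharpDevelop:

```csharp
using System;
using System.IO;

// A type holding a disposable field should itself implement IDisposable
// (ImplementIDisposableForTypesWithDisposableFields).
public sealed class LogWriter : IDisposable
{
    private readonly StreamWriter writer;   // disposable field

    public LogWriter(string path)
    {
        writer = new StreamWriter(path);
    }

    public void Write(string message)
    {
        // UseIsNullOrEmptyToCheckEmptyStrings: prefer this test over
        // message == "" or message.Length == 0.
        if (string.IsNullOrEmpty(message))
            return;
        writer.WriteLine(message);
    }

    public void Dispose()
    {
        writer.Dispose();   // without this, the stream handle would leak
    }
}
```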


5.2.2.2 Highly inadvisable practices

Practice | New | Non-compliances | NC classes | % application
NeverMakeCtorCallOverridableMethod | 0 | 185 | 48 | 10.15%
DontUseNonConstantStaticVisibleFields | 1 | 26 | 11 | 2.94%
OverrideMethodsInIComparableImplementations | 0 | 9 | 6 | 1.75%
DefineAttributeForISerializableTypes | 0 | 7 | 5 | 2.77%
DontNestGenericInMemberSignatures_ | 0 | 3 | 2 | 1.91%
DontIgnoreMethodsReturnValue | 0 | 1 | 1 | 1%

See the Quality Cockpit
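The most frequent practice in this table, NeverMakeCtorCallOverridableMethod, deserves a short illustration. In C#, the base constructor runs before the derived constructor body, so a virtual call made from it can observe unassigned derived state. The classes below are hypothetical:

```csharp
using System;

public class Document
{
    public Document()
    {
        Validate();   // violation: overridable method called from a constructor
    }

    protected virtual void Validate() { }
}

public class XmlReport : Document
{
    private readonly string schema;

    public XmlReport(string schema)
    {
        // Runs only after the base constructor has already called Validate().
        this.schema = schema;
    }

    protected override void Validate()
    {
        // At this point this.schema is still null, because the derived
        // constructor body has not executed yet.
        Console.WriteLine(schema ?? "<schema not yet assigned>");
    }
}
```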

5.2.3 Classes to fix in priority for implementation issues

The two following tables provide an additional view of the impact of implementation issues by listing the main classes involved in forbidden or highly inadvisable practices.

Each class is associated with the number of existing violations (of forbidden or highly inadvisable practices), the number of new violations (if a previous audit has been done), and the compliance status of the class.


5.2.3.1 Classes with forbidden practices

Class | NC | New | Non-compliances | Instructions
ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.NRefactoryResolver | Yes | 0 | 10 | 647
ICSharpCode.SharpDevelop.Dom.VBNet.VBExpressionFinder | Yes | 0 | 10 | 319
ICSharpCode.SharpDevelop.Dom.CSharp.CSharpExpressionFinder | Yes | 0 | 10 | 600
ICSharpCode.SharpDevelop.DefaultEditor.Gui.Editor.SharpDevelopTextAreaControl | Yes | 0 | 9 | 255
ICSharpCode.SharpDevelop.Gui.DefaultWorkbench | Yes | 0 | 8 | 387
ICSharpCode.SharpDevelop.Project.ConfigurationGuiHelper | Yes | 1 | 8 | 0
ICSharpCode.SharpDevelop.Widgets.TreeGrid.DynamicListItem | Yes | 0 | 8 | 211
ICSharpCode.SharpDevelop.ParserService | Yes | 0 | 7 | 489
ICSharpCode.SharpDevelop.Gui.SdiWorkbenchLayout | Yes | 0 | 7 | 360
ICSharpCode.SharpDevelop.Dom.DomPersistence | Yes | 0 | 6 | 567
ICSharpCode.Core.MenuService | Yes | 0 | 6 | 78
ICSharpCode.SharpDevelop.Dom.DefaultProjectContent | Yes | 0 | 5 | 554
ICSharpCode.SharpDevelop.Gui.XmlForms.XmlLoader | Yes | 0 | 5 | 200
ICSharpCode.SharpDevelop.Refactoring.RefactoringService | Yes | 0 | 5 | 312
ICSharpCode.SharpDevelop.Project.ProjectService | Yes | 0 | 5 | 355
ICSharpCode.SharpDevelop.Project.MSBuildEngine | Yes | 0 | 5 | 337
ICSharpCode.SharpDevelop.Gui.FontSelectionPanelHelper | Yes | 0 | 5 | 101
ICSharpCode.SharpDevelop.Project.Commands.AddExistingItemsToProject | Yes | 0 | 4 | 168
ICSharpCode.SharpDevelop.Debugging.DebuggerService | Yes | 0 | 4 | 288

See the Quality Cockpit


5.2.3.2 Classes with highly inadvisable practices

Class | NC | New | Non-compliances | Instructions
ICSharpCode.SharpDevelop.Gui.ExtTreeNode | Yes | 0 | 69 | 248
ICSharpCode.SharpDevelop.Gui.ClassBrowser.MemberNode | Yes | 0 | 12 | 72
ICSharpCode.SharpDevelop.Gui.XmlForms.XmlForm | Yes | 0 | 12 | 22
ICSharpCode.SharpDevelop.Dom.HostCallback | Yes | 1 | 9 | 19
ICSharpCode.SharpDevelop.Dom.ReflectionLayer.ReflectionClass | Yes | 0 | 8 | 102
ICSharpCode.SharpDevelop.Dom.ExpressionContext | Yes | 0 | 6 | 142
ICSharpCode.SharpDevelop.Dom.ReflectionLayer.ReflectionMethod | Yes | 0 | 5 | 41
ICSharpCode.SharpDevelop.Project.Dialogs.NewProjectDialog | Yes | 0 | 4 | 274
ICSharpCode.SharpDevelop.Project.FileNode | Yes | 0 | 4 | 155
ICSharpCode.SharpDevelop.Gui.NewFileDialog | Yes | 0 | 3 | 378
ICSharpCode.SharpDevelop.Project.ProjectNode | Yes | 0 | 3 | 114
ICSharpCode.SharpDevelop.Dom.DefaultEvent | Yes | 0 | 3 | 43
ICSharpCode.SharpDevelop.Dom.ReflectionLayer.ReflectionParameter | Yes | 0 | 3 | 14
ICSharpCode.SharpDevelop.Dom.DefaultProperty | Yes | 0 | 3 | 62
ICSharpCode.SharpDevelop.Dom.DefaultMethod | Yes | 0 | 3 | 84
ICSharpCode.SharpDevelop.Dom.DefaultProjectContent | Yes | 0 | 2 | 554
ICSharpCode.SharpDevelop.Internal.Templates.FileTemplate | Yes | 0 | 2 | 124
ICSharpCode.SharpDevelop.Dom.DefaultParameter | Yes | 0 | 2 | 76
ICSharpCode.Core.MenuCommand | Yes | 0 | 2 | 85

See the Quality Cockpit

5.3 Structure

The Structure domain targets rules related to the code structure, for example:

The size of methods

The cyclomatic complexity of methods

Coupling, i.e. the dependencies of methods on other classes

The objective is to ensure that the code is structured in such a way that it can be easily maintained, tested, and evolved.

These rules are "metrics": they measure values (e.g. a number of statements) and are conditioned by thresholds (e.g. 100 statements per method). Only metrics on which developers are able to act are presented here. They apply to all methods.


5.3.1 Typology of structural problems

This histogram shows, for each rule of the Structure domain, the number of non-compliances (thus of methods) and the percentage of related statements compared to the total number of statements in the application:

See the Quality Cockpit

The percentage of statements shown is interesting because a few methods often concentrate a large part of the application code.

When some rules have been configured to be excluded from the analysis, they are

displayed in this graph but without any results.

One method may be affected by several rules; therefore, the total does not correspond to the sum of the individual counts.

The following table completes this view by introducing the number of new violations and the number of

violations corrected in the case where a previous audit was conducted:

Anomaly | Significant non-compliances | New non-compliances | Corrected non-compliances | NC rate
Cyclomatic complexity higher than 20 | 41 | 1 | 0 | 5%


See the Quality Cockpit

5.3.2 Mapping methods by size

The histogram below shows a mapping of methods according to their size. The size is expressed in number of statements in order to ignore code-layout conventions.

The last interval identifies the methods with a number of statements which exceeds the threshold. These

methods are considered non-compliant because they are generally difficult to maintain and extend, and also

show a high propensity to reveal bugs because they are difficult to test.

The percentage of statements is provided because larger methods usually concentrate a significant part of the application code:

See the Quality Cockpit

The following table details the main non-compliant methods identified in the last interval of the previous

graph:

Method | Instructions | Lines | Complexity | New violation

5.3.3 Mapping methods by complexity

The histogram below shows a mapping of methods according to their cyclomatic complexity (see 8.1

Cyclomatic complexity).


Cyclomatic complexity is a measure aiming to characterize the complexity of a block of code by identifying all possible execution paths. This concept was formalized by McCabe [5], but several calculation methods exist. The one used here is the most popular and the simplest: it counts the number of branching operators (if, for, while, ?: ...) and conditions (??, && ...).
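As an illustration of this counting rule, consider the following invented method (not taken from SharpDevelop). Its complexity is 5: the base path (1) plus two if statements, one || condition and one for loop:

```csharp
public static class ComplexityExample
{
    // Cyclomatic complexity = 1 (base) + 1 (if) + 1 (||) + 1 (for) + 1 (if) = 5
    public static int FirstPositiveIndex(int[] values)
    {
        if (values == null || values.Length == 0)   // +1 (if), +1 (||)
            return -1;

        for (int i = 0; i < values.Length; i++)     // +1 (for)
        {
            if (values[i] > 0)                      // +1 (if)
                return i;
        }
        return -1;
    }
}
```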

The last interval identifies methods whose complexity exceeds the threshold. These methods are considered

non-compliant for the same reasons as for the long methods: they are generally difficult to maintain and

extend, and also show a high propensity to reveal bugs.

The percentage of statements and the percentage of complexity are provided because the most complex methods generally concentrate a significant part of the application.

See the Quality Cockpit

The following table details the main non-compliant methods identified in the last interval of the previous

graph:

[5] T. J. McCabe, "A Complexity Measure", IEEE Transactions on Software Engineering, 1976: 308–320. http://classes.cecs.ucf.edu/eel6883/berrios/notes/Paper%204%20(Complexity%20Measure).pdf


Method Instructions Lines Complexity New violation

ICSharpCode.SharpDevelop.Dom.ReflectionLayer.ReflectionClass.InitMembers ( System.Type)

21 36 21 New

ICSharpCode.SharpDevelop.Dom.MemberLookupHelper.ConversionExists ( ICSharpCode.SharpDevelop.Dom.IReturnType, ICSharpCode.SharpDevelop.Dom.IReturnType)

53 77 83

ICSharpCode.SharpDevelop.Dom.CSharp.CSharpExpressionFinder.SearchBracketForward ( System.String, System.Int32, System.Char, System.Char)

57 78 47

ICSharpCode.SharpDevelop.Dom.VBNet.VBNetAmbience.Convert ( ICSharpCode.SharpDevelop.Dom.IClass)

81 128 44

ICSharpCode.SharpDevelop.Dom.CSharp.CSharpAmbience.Convert ( ICSharpCode.SharpDevelop.Dom.IClass) 77 118 41

ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.NRefactoryResolver.ResolveInternal ( ICSharpCode.NRefactory.Ast.Expression,

ICSharpCode.SharpDevelop.Dom.ExpressionContext)

71 100 34

ICSharpCode.SharpDevelop.Widgets.SideBar.SideBarControl.ProcessCmdKey ( ref

System.Windows.Forms.Message, System.Windows.Forms.Keys)

81 97 32

ICSharpCode.SharpDevelop.Dom.MemberLookupHelper.GetBetterPrimitiveConversion (

ICSharpCode.SharpDevelop.Dom.IReturnType, ICSharpCode.SharpDevelop.Dom.IReturnType)

20 22 31

ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.TypeVisitor.CreateReturnType (

ICSharpCode.NRefactory.Ast.TypeReference, ICSharpCode.SharpDevelop.Dom.IClass, ICSharpCode.SharpDevelop.Dom.IMember, System.Int32, System.Int32, ICSharpCode.SharpDevelop.Dom.IProjectContent, System.Boolean)

53 73 31

ICSharpCode.SharpDevelop.Dom.CecilReader.CecilClass.InitMembers ( Mono.Cecil.TypeDefinition)

57 83 30

ICSharpCode.SharpDevelop.DefaultEditor.Gui.Editor.MethodInsightDataProvider.SetupDataProvider ( System.String,

ICSharpCode.TextEditor.Document.IDocument, ICSharpCode.SharpDevelop.Dom.ExpressionResult, System.Int32, System.Int32)

42 59 29

ICSharpCode.SharpDevelop.Refactoring.RefactoringService.AddReferences (

System.Collections.Generic.List<ICSharpCode.SharpDevelop.Refactoring.Reference>, ICSharpCode.SharpDevelop.Dom.IClass, ICSharpCode.SharpDevelop.Dom.IMember, System.Boolean, System.String, System.String)

55 89 29

ICSharpCode.SharpDevelop.Project.DirectoryNode.Initialize ( )

81 121 29


ICSharpCode.SharpDevelop.Project.MSBuildBasedProject.SetPropertyInternal ( System.String, System.String, System.String, System.String, ICSharpCode.SharpDevelop.Project.PropertyStorageLocations, System.Boolean)

92 140 28

ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.NRefactoryResolver.ResolveIdentifierInternal ( System.String)

53 81 28

ICSharpCode.SharpDevelop.Commands.ToolMenuBuilder.ToolEvt ( System.Object, System.EventArgs)

54 74 26

ICSharpCode.SharpDevelop.Dom.MemberLookupHelper.GetBetterFunctionMember ( ICSharpCode.SharpDevelop.Dom.IReturnType[], ICSharpCode.SharpDevelop.Dom.IMethodOrProperty, ICSharpCode.SharpDevelop.Dom.IReturnType[], System.Boolean, ICSharpCode.SharpDevelop.Dom.IMethodOrProperty, ICSharpCode.SharpDevelop.Dom.IReturnType[], System.Boolean)

32 51 26

ICSharpCode.SharpDevelop.Dom.CSharp.CSharpExpressionFinder.FindFullExpression ( System.String, System.Int32)

51 68 26

ICSharpCode.SharpDevelop.Dom.CSharp.CSharpExpressionFinder.ReadNextToken ( )

58 76 25

5.3.4 Mapping methods by their complexity and efferent coupling

This rule is intended to identify methods whose code has many dependencies on other classes. The concept of "efferent coupling" refers to these outgoing dependencies.

The principle is that a method with strong efferent coupling is difficult to understand, maintain and test: first because it requires knowledge of the different types it depends on, and second because these dependencies increase the risk of destabilization.

This rule is crossed with cyclomatic complexity in order to ignore trivial methods, such as initialization methods of graphical interfaces that call many widget classes without presenting any real complexity.

This rule considers a method non-compliant if it exceeds both an efferent-coupling threshold and a cyclomatic-complexity threshold.
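A small invented example of what drives efferent coupling up. This method references Dictionary, StreamReader, StringBuilder, KeyValuePair and string, i.e. a coupling of roughly 5 (exact counting rules depend on the tool's configuration), combined with real complexity from its loops:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;

public static class ReportBuilder
{
    // Hypothetical method: counts occurrences of each line in a file and
    // formats a summary. Every distinct external type it touches adds to
    // its efferent coupling.
    public static string Summarize(string path)
    {
        var counts = new Dictionary<string, int>();
        using (var reader = new StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                int n;
                counts.TryGetValue(line, out n);
                counts[line] = n + 1;
            }
        }

        var sb = new StringBuilder();
        foreach (KeyValuePair<string, int> pair in counts)
            sb.AppendLine(pair.Key + ": " + pair.Value);
        return sb.ToString();
    }
}
```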


The chart below shows a mapping of methods according to their complexity and their efferent coupling. Each

dot represents one or more methods with the same values of complexity and coupling. They are divided into

four zones according to their status in relation to both thresholds:

The area on the lower left (green dots) contains compliant methods, below both thresholds

The area on the lower right (gray dots) contains compliant methods; they have reached the

complexity threshold, but remain below the coupling threshold

The area in the upper left (gray dots) contains compliant methods; they have reached the coupling

threshold, but remain below the complexity threshold

The area in the upper right (red dots) contains non-compliant methods; above both thresholds

See the Quality Cockpit

The intensity of the color of the dots depends on the number of methods that share the same complexity and coupling values: the darker the dot, the more methods are involved.


The histogram below provides an additional view of this mapping, with precise figures for the four zones in terms of percentage of methods and statements of the application. The last bars indicate the non-compliance zone:

See the Quality Cockpit

The following table details the main non-compliant methods:


Method | Efferent coupling | Complexity | New violation

ICSharpCode.SharpDevelop.Refactoring.RefactoringMenuBuilder.BuildSubmenu ( ICSharpCode.Core.Codon, System.Object)

45 22

ICSharpCode.SharpDevelop.Project.Commands.AddExistingItemsToProject.Run ( )

39 22

ICSharpCode.SharpDevelop.Dom.CecilReader.CecilClass.InitMembers ( Mono.Cecil.TypeDefinition)

36 30

ICSharpCode.SharpDevelop.Gui.NewFileDialog.OpenEvent ( System.Object, System.EventArgs)

35 17

ICSharpCode.SharpDevelop.Commands.ToolMenuBuilder.ToolEvt ( System.Object, System.EventArgs)

32 26

ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.NRefactoryResolver.ResolveInternal ( ICSharpCode.NRefactory.Ast.Expression, ICSharpCode.SharpDevelop.Dom.ExpressionContext)

31 34

ICSharpCode.SharpDevelop.DefaultEditor.Gui.Editor.MethodInsightDataProvider.SetupDataProvider ( System.String, ICSharpCode.TextEditor.Document.IDocument, ICSharpCode.SharpDevelop.Dom.ExpressionResult, System.Int32, System.Int32)

30 29

ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.NRefactoryResolver.CtrlSpace ( System.Int32, System.Int32, System.String, System.String, ICSharpCode.SharpDevelop.Dom.ExpressionContext)

30 18 New

ICSharpCode.SharpDevelop.DefaultEditor.Commands.ClassBookmarkMenuBuilder.BuildSubmenu ( ICSharpCode.Core.Codon, System.Object)

29 19

ICSharpCode.SharpDevelop.Project.MSBuildEngine.BuildRun.ParseSolution ( Microsoft.Build.BuildEngine.Project)

29 19

ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.NRefactoryResolver.ResolveIdentifierInternal ( System.String)

28 28

ICSharpCode.SharpDevelop.Dom.CecilReader.CreateType ( ICSharpCode.SharpDevelop.Dom.IProjectContent, ICSharpCode.SharpDevelop.Dom.IDecoration, Mono.Cecil.TypeReference)

28 21

ICSharpCode.SharpDevelop.Project.ProjectService.LoadProject ( System.String)

27 15

ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.TypeVisitor.CreateReturnType ( ICSharpCode.NRefactory.Ast.TypeReference, ICSharpCode.SharpDevelop.Dom.IClass, ICSharpCode.SharpDevelop.Dom.IMember, System.Int32, System.Int32, ICSharpCode.SharpDevelop.Dom.IProjectContent, System.Boolean)

26 31

ICSharpCode.SharpDevelop.Project.DirectoryNode.Initialize ( ) 26 29

ICSharpCode.SharpDevelop.Widgets.TreeGrid.DynamicList.OnPaint ( System.Windows.Forms.PaintEventArgs)

26 21

ICSharpCode.SharpDevelop.Project.Dialogs.NewProjectDialog.OpenEvent ( System.Object, System.EventArgs)

26 20


ICSharpCode.SharpDevelop.Dom.CecilReader.CecilClass.CecilClass ( ICSharpCode.SharpDevelop.Dom.ICompilationUnit, ICSharpCode.SharpDevelop.Dom.IClass, Mono.Cecil.TypeDefinition, System.String)

26 15

ICSharpCode.SharpDevelop.Gui.GotoDialog.TextBoxTextChanged ( System.Object, System.EventArgs)

25 18

See the Quality Cockpit

5.4 Test

The Test domain provides rules to ensure that the application is sufficiently tested, quantitatively but also qualitatively, i.e. that tests target risk areas.

5.4.1 Issues

It is important to understand the problems inherent in managing tests in order to interpret the analysis results for this domain.

5.4.1.1 Unit testing and code coverage

The results of this domain depend on the testing process applied to the project: if an automated unit testing process and/or code coverage measurement are implemented on the project, then the analysis uses their results.

As a reminder, we must distinguish unit testing and code coverage:

A unit test is an automated test that usually focuses on a single method of the source code. But since this method generally has dependencies on other methods or classes, a unit test can exercise a more or less important part of the application (the larger this part, the less relevant the test)

Code coverage measures the amount of code executed by tests, by identifying each element actually executed at runtime (statements, conditional branches, methods ...). These tests can be unit tests (automated) or integration/functional tests (manual or automated).
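The distinction above can be sketched with a minimal unit test in the NUnit style, a framework widely used with .NET projects of this era. StringUtils is a hypothetical class under test; a coverage tool run over this test would record which of its statements were actually executed:

```csharp
using System;
using NUnit.Framework;

// Hypothetical class under test: a single, dependency-free method, the
// ideal unit test target described above.
public static class StringUtils
{
    public static string Reverse(string input)
    {
        char[] chars = input.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}

[TestFixture]
public class StringUtilsTests
{
    [Test]
    public void Reverse_ReturnsCharactersInOppositeOrder()
    {
        // Running this test executes every statement of Reverse, so a
        // coverage tool would report the method as fully covered.
        Assert.AreEqual("cba", StringUtils.Reverse("abc"));
    }
}
```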

Code coverage is worth combining with unit tests because it is the only way to measure the code actually tested. However, many projects still do not check code coverage, which prevents this type of analysis from verifying the quality of testing.

The indicators presented next address both cases; they are useful for projects with unit tests and/or code

coverage but also for other projects.

5.4.1.2 Relevance of code coverage

Code coverage provides figures indicating the proportion of code executed after the tests, for example 68%

of statements of a method are covered or 57% of the project statements...


The problem is that these figures do not take into account the relevance of testing the code. For example, a coverage of 70% of the application is a good figure, but the covered code could be trivial and without any real interest for the tests (e.g. accessors or generated code), whereas the critical code may be located in the remaining 30%.

The analysis performed here captures the test relevancy of each method, which is used to calibrate the code coverage requirements and to set appropriate thresholds to better target the testing effort towards risk areas.

5.4.2 TestRelevancyIndex (TRI) and TestEffortIndex (TEI) metrics

To refine the analysis of tests, two new metrics were designed by the Centre of Excellence in Information and Communication Technologies (CETIC), based on research conducted during the past 20 years and on the "Quality Cockpit" knowledge base [6].

The TestRelevancyIndex (TRI) measures the relevancy of testing a method in accordance with its technical

risks and its business risk.

Technical risk assesses the probability of finding a defect; it is based on different metrics such as cyclomatic

complexity, number of variables, number of parameters, efferent coupling, cumulative number of non-

compliances...

The business risk associates a risk factor with business features that should be tested in priority (higher risk), or on the contrary should not be tested (minor risk). It must be determined at the initialization of the audit to be taken into account in the TRI calculations. The objective is to guide the testing effort towards the important features.

For this, the TRI is used to classify the methods according to a scale of testing priority, and thus to distinguish

the truly relevant methods to test from trivial and irrelevant methods in this area. For each level of the scale,

a specific threshold to achieve with code coverage can be set. This allows setting a high threshold for critical

methods, and a low threshold for low-priority methods.
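The actual TRI formula is not published here (see the annex); the sketch below only illustrates the general shape described above: a technical-risk score built from unit metrics, weighted by a business-risk factor, then mapped to a priority level. All weights and thresholds in this code are invented for illustration:

```csharp
// Purely illustrative, NOT the Kalistick formula: every coefficient and
// threshold below is a made-up placeholder.
public static class TestPriority
{
    public static string Classify(int complexity, int coupling,
                                  int nonCompliances, double businessRisk)
    {
        // Hypothetical technical-risk aggregation from unit metrics.
        double technicalRisk = 0.5 * complexity
                             + 0.3 * coupling
                             + 0.2 * nonCompliances;

        // Business risk scales the technical risk, as described above.
        double tri = technicalRisk * businessRisk;

        // Map the score onto the four-level priority scale (plus "None").
        if (tri >= 30) return "Critical";
        if (tri >= 20) return "High";
        if (tri >= 10) return "Medium";
        return tri > 0 ? "Low" : "None";
    }
}
```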

The TestEffortIndex (TEI) completes the TRI by measuring the level of effort required to test a method. Like the TRI, it is based on a set of unit metrics characterizing a method. It helps to refine the selection of the code to be tested by balancing the effort against the test relevancy.

The details of the calculation of these two indexes are provided in annex (8.2 The coupling).

5.4.3 Mapping methods by testing priority

The histogram below shows a mapping of methods according to their testing priority, using a scale of four levels based on the TRI of the methods (each level corresponding to a range of TRI).

This mapping uses the code coverage information only if it was supplied for the analysis. For each priority level are indicated:

The average coverage rate (0 if coverage information was not provided)

The number of methods not covered (no coverage)

[6] CETIC, Kalistick, "Statistically Calibrated Indexes for Unit Test Relevancy and Unit Test Writing Effort", 2010.


The number of methods insufficiently covered (coverage rate below the target rate set for this level

of priority)

The number of methods sufficiently covered (coverage greater than or equal to the target rate set

for this level of priority)

The table below shows these figures for each priority level, also adding a fifth level corresponding to the

methods without test priority:

See the Quality Cockpit

5.4.4 Coverage of application by tests

This graph, called a "TreeMap", shows the code coverage of the application against the test objectives. It helps to identify the parts of the application that are not sufficiently tested with regard to the identified risks. It gathers the classes of the project into technical subsets, and characterizes them along two dimensions:

size, which depends on the number of statements

color, which represents the deviation from the test objective set for the classes: the color red

indicates that the current coverage is far from the goal, whereas the green color indicates that the

goal is reached

Test priority | Covered | Uncovered | Insufficiently covered
Critical | 0 | 1373 | 0
High | 0 | 515 | 0
Medium | 0 | 10 | 0
Low | 0 | 14 | 0
None | 0 | 5656 | 0
[Total] | 0 | 7568 | 0


See the Quality Cockpit

A class can be green even if it is untested or barely tested: for example, classes with a low probability of technical defects or without business risk. Conversely, a class that is already tested can be flagged as insufficiently tested (red / brown) if its objective is very demanding.

An effective strategy to improve coverage is to focus on large classes close to their goal.

5.4.5 Most important classes to test (Top Risks)

The following chart allows you to quickly identify the most relevant classes to test, the "Top Risks". It is a representation known as a "cloud", which displays the classes using two dimensions:

The size of the class name depends on its relevancy for testing (TRI accumulated over all methods of the class)

The color represents the deviation from the coverage goal set for the class, just as in the previous TreeMap


See the Quality Cockpit

This representation identifies the critical elements, but if you want to take into account the effort of writing tests, you should use the following representation to select the items to address.

5.4.6 Most important classes to test that require the least effort (Quick Wins)

The "Quick Wins" view complements the "Top Risks" by taking into account the effort required for testing the class (TEI):

The size of the class name depends on its relevancy for testing (TRI), weighted by the effort required (TEI accumulated over all methods): a class with a high TRI and a high TEI (therefore difficult to test) appears smaller than a class with an average TRI but a low TEI

The color represents the deviation from the coverage goal set for the class, just as in the TreeMap and Top Risks views


See the Quality Cockpit

5.4.7 Methods to test in priority

The following table details the main methods to be tested first. Each method is associated with its current

coverage rate, the raw value of its TRI and its level of TEI:


Method | Coverage | Relevancy (TRI) | Priority | Effort | New violation

ICSharpCode.SharpDevelop.Refactoring.RefactoringMenuBuilder.BuildSubmenu ( ICSharpCode.Core.Codon, System.Object)

0% 37.00 Critical High

ICSharpCode.SharpDevelop.Project.Solution.SetupSolution ( ICSharpCode.SharpDevelop.Project.Solution, System.String)

0% 37.00 Critical Very high

ICSharpCode.SharpDevelop.Project.MSBuildBasedProject.SetPropertyInternal ( System.String, System.String, System.String, System.String, ICSharpCode.SharpDevelop.Project.PropertyStorageLocations, System.Boolean)

0% 37.00 Critical Very high

ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.NRefactoryResolver.ResolveIdentifierInternal ( System.String)

0% 37.00 Critical Very high

ICSharpCode.SharpDevelop.Commands.SharpDevelopStringTagProvider.Convert ( System.String)

0% 36.00 Critical High

ICSharpCode.Core.AddInTree.Load ( System.Collections.Generic.List<System.String>, System.Collections.Generic.List<System.String>)

0% 36.00 Critical High

ICSharpCode.SharpDevelop.Project.Commands.AddExistingItemsToProject.Run ( )

0% 36.00 Critical High

ICSharpCode.SharpDevelop.Dom.CecilReader.CreateType ( ICSharpCode.SharpDevelop.Dom.IProjectContent, ICSharpCode.SharpDevelop.Dom.IDecoration, Mono.Cecil.TypeReference)

0% 35.00 Critical High

ICSharpCode.SharpDevelop.Commands.ToolMenuBuilder.ToolEvt ( System.Object, System.EventArgs)

0% 35.00 Critical High

ICSharpCode.SharpDevelop.DefaultEditor.Gui.Editor.MethodInsightDataProvider.SetupDataProvider ( System.String, ICSharpCode.TextEditor.Document.IDocument, ICSharpCode.SharpDevelop.Dom.ExpressionResult, System.Int32, System.Int32)

0% 35.00 Critical Very high

ICSharpCode.SharpDevelop.Refactoring.RefactoringService.AddReferences ( System.Collections.Generic.List<ICSharpCode.SharpDevelop.Refactoring.Reference>, ICSharpCode.SharpDevelop.Dom.IClass, ICSharpCode.SharpDevelop.Dom.IMember, System.Boolean, System.String, System.String)

0% 35.00 Critical High


ICSharpCode.SharpDevelop.Dom.ReflectionLayer.ReflectionReturnType.Create ( ICSharpCode.SharpDevelop.Dom.IProjectContent, ICSharpCode.SharpDevelop.Dom.IDecoration, System.Type, System.Boolean) | 0% | 35.00 | Critical | High
ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.NRefactoryResolver.ResolveInternal ( ICSharpCode.NRefactory.Ast.Expression, ICSharpCode.SharpDevelop.Dom.ExpressionContext) | 0% | 35.00 | Critical | Very high
ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.TypeVisitor.CreateReturnType ( ICSharpCode.NRefactory.Ast.TypeReference, ICSharpCode.SharpDevelop.Dom.IClass, ICSharpCode.SharpDevelop.Dom.IMember, System.Int32, System.Int32, ICSharpCode.SharpDevelop.Dom.IProjectContent, System.Boolean) | 0% | 35.00 | Critical | Very high
ICSharpCode.SharpDevelop.Dom.CSharp.CSharpExpressionFinder.SearchBracketForward ( System.String, System.Int32, System.Char, System.Char) | 0% | 35.00 | Critical | High
ICSharpCode.SharpDevelop.DefaultEditor.Gui.Editor.AbstractCodeCompletionDataProvider.CreateItem ( System.Object, ICSharpCode.SharpDevelop.Dom.ExpressionContext) | 0% | 35.00 | Critical | High
ICSharpCode.SharpDevelop.Project.MSBuildEngine.BuildRun.ParseSolution ( Microsoft.Build.BuildEngine.Project) | 0% | 34.00 | Critical | Normal
ICSharpCode.SharpDevelop.Project.ProjectService.LoadProject ( System.String) | 0% | 34.00 | Critical | High
ICSharpCode.SharpDevelop.Project.ProjectBrowserControl.FindDeepestOpenNodeForPath ( System.String) | 0% | 34.00 | Critical | High

See the Quality Cockpit

5.5 Architecture

The Architecture domain aims to monitor compliance with a software architecture model. The target architecture model was presented in Chapter 4.3.2 Technical model. The following diagram shows the results of the architecture analysis by comparing this target model with the current application code.

Currently, architecture non-compliances are not taken into account in the calculation of the application's non-compliance rate.


See the Quality Cockpit

Non-compliances related to communication constraints between two elements are represented using arrows. The starting point is the calling element; the destination is the called one. Orange arrows indicate direct communication between non-adjacent top and bottom layers (sometimes acceptable). Black arrows indicate communications that are strictly prohibited.

5.6 Duplication

The Duplication domain covers the "copy-and-paste" blocks identified in the application. To avoid many false positives in this area, a threshold is defined to ignore blocks with few statements.

Duplications should be avoided for several reasons: maintenance and changeability issues, testing costs, lack of reliability...
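The thresholding described above can be sketched as a sliding-window comparison over normalized statements. This is an illustrative assumption, not Kalistick's actual detection algorithm; the class name, the MIN_STATEMENTS value and the statement model are invented for the example (Java is used here, as in the glossary examples):

```java
import java.util.*;

// Illustrative sketch of threshold-based duplicate detection: only
// windows of at least MIN_STATEMENTS consecutive statements are
// considered, so trivial duplicates such as accessor bodies are
// ignored. Not the actual Kalistick algorithm.
public class DuplicationSketch {
    static final int MIN_STATEMENTS = 5; // hypothetical threshold

    // Returns the distinct statement windows of length MIN_STATEMENTS
    // that occur more than once across all files.
    public static Set<String> findDuplicatedBlocks(List<List<String>> files) {
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> statements : files) {
            for (int i = 0; i + MIN_STATEMENTS <= statements.size(); i++) {
                String window = String.join(";", statements.subList(i, i + MIN_STATEMENTS));
                counts.merge(window, 1, Integer::sum);
            }
        }
        Set<String> duplicated = new HashSet<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > 1) duplicated.add(e.getKey());
        }
        return duplicated;
    }
}
```

A real detector would also normalize identifiers and literals before hashing, so that renamed copies are still matched.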

5.6.1 Mapping of duplication

The chart below shows a mapping of duplications within the application. It does not take into account duplications involving a number of statements below the threshold, because they are numerous and mostly irrelevant (e.g. duplication of accessors between different classes sharing similar properties).


Duplicates are categorized by ranges of duplicated statements. For each range are presented:

The number of distinct duplicated blocks (each duplicated at least once)

The maximum number of duplications of a single block

See the Quality Cockpit

5.6.2 Duplications to fix in priority

The following table lists the main duplicates to fix in priority. Each block is identified by a unique identifier, and each duplication is located in the source code. If a previous audit was completed, a flag indicates whether the duplication is new.

Duplication number | Duplicated blocks size | Class involved | Lines | New violation

See the Quality Cockpit

5.7 Documentation

The Documentation domain aims to control the level of technical documentation of the code. Only the standard comment header of methods is verified: Javadoc for Java, XmlDoc for C#. Inline comments (in method bodies) are not evaluated because it is difficult to verify their relevance (they are often commented-out code or generated comments).

In addition, header documentation is verified only for methods considered long and complex enough, because the effort to document trivial methods is rarely justified. For this, a threshold on cyclomatic complexity and a threshold on the number of statements are defined to filter the methods to check.
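As an illustration, here is the kind of method header documentation the platform checks, shown as Javadoc (the method is invented for the example; for C# the equivalent would be an XmlDoc header):

```java
public class DocumentedExample {
    /**
     * Returns the largest of the given values.
     *
     * @param values the values to compare; must not be empty
     * @return the maximum element
     * @throws IllegalArgumentException if {@code values} is empty
     */
    public static int max(int... values) {
        if (values.length == 0) {
            throw new IllegalArgumentException("values must not be empty");
        }
        int result = values[0];
        for (int v : values) {
            if (v > result) result = v;
        }
        return result;
    }
}
```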


5.7.1 Mapping documentation issues

The chart below shows the status of header documentations for all methods with a complexity greater than

the threshold. The methods are grouped by ranges of size (number of statements). For each range are given

the number of methods with header documentation and the number of methods without header

documentation. The red area in the last range corresponds to the methods not documented therefore non-

compliant.

5.7.2 Methods to document in priority

The following table lists the main methods to document in priority:


Method | Instructions | Complexity | New violation
ICSharpCode.SharpDevelop.Project.MSBuildBasedProject.SetPropertyInternal | 92 | 28
ICSharpCode.SharpDevelop.Project.DirectoryNode.Initialize | 81 | 29
ICSharpCode.SharpDevelop.Dom.VBNet.VBNetAmbience.Convert | 81 | 44
ICSharpCode.SharpDevelop.Widgets.SideBar.SideBarControl.ProcessCmdKey | 81 | 32
ICSharpCode.SharpDevelop.Commands.SharpDevelopStringTagProvider.Convert | 81 | 25
ICSharpCode.SharpDevelop.Dom.CSharp.CSharpAmbience.Convert | 77 | 41
ICSharpCode.SharpDevelop.Refactoring.RefactoringMenuBuilder.BuildSubmenu | 74 | 22
ICSharpCode.SharpDevelop.Project.Solution.Save | 74 | 11
ICSharpCode.SharpDevelop.Dom.NRefactoryResolver.NRefactoryResolver.ResolveInternal | 71 | 34
ICSharpCode.SharpDevelop.Widgets.TreeGrid.DynamicList.OnPaint | 71 | 21
ICSharpCode.SharpDevelop.DefaultEditor.XmlFormattingStrategy.TryIndent | 70 | 24
ICSharpCode.SharpDevelop.Project.Commands.AddExistingItemsToProject.Run | 70 | 22
ICSharpCode.SharpDevelop.Project.Solution.SetupSolution | 64 | 20
ICSharpCode.Core.AddInTree.Load | 62 | 21
ICSharpCode.SharpDevelop.DefaultEditor.Commands.ClassMemberMenuBuilder.BuildSubmenu | 60 | 23
ICSharpCode.SharpDevelop.Dom.Refactoring.NRefactoryRefactoringProvider.GetFullCodeRangeForType | 58 | 24
ICSharpCode.SharpDevelop.Dom.CSharp.CSharpExpressionFinder.ReadNextToken | 58 | 25
ICSharpCode.SharpDevelop.Dom.CecilReader.CecilClass.InitMembers | 57 | 30
ICSharpCode.SharpDevelop.Dom.CSharp.CSharpExpressionFinder.SearchBracketForward | 57 | 47

See the Quality Cockpit


6 Action Plan

For each domain, a recommendation of corrections was established on the basis of tables detailing the rules and code elements to correct. The following graph provides a comprehensive strategy for establishing a correction plan by defining a list of actions. This list is prioritized according to the expected return on investment: the actions recommended first are those with the best ratio between the effort to produce them and the gain on the overall non-compliance rate.
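The return-on-investment ordering described above can be sketched as follows; the action names and figures are hypothetical, and the gain/effort model is an assumption made only for illustration:

```java
import java.util.*;

// Illustrative sketch of ROI-based ordering: actions are sorted by the
// ratio between the expected gain on the non-compliance rate and the
// effort to produce the fix. All figures are hypothetical.
public class ActionPlanSketch {
    // Each map value holds { expected gain, estimated effort }.
    public static List<String> prioritize(Map<String, double[]> actions) {
        List<String> names = new ArrayList<>(actions.keySet());
        names.sort(Comparator.comparingDouble(
                (String n) -> actions.get(n)[0] / actions.get(n)[1]).reversed());
        return names;
    }
}
```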

Here is the explanation of each step:

1. Correction of forbidden practices

These practices are often easy to correct, and because they invalidate classes directly, correcting them generally improves the overall non-compliance rate significantly (if the classes are not invalidated by other rules).

2. Splitting long methods

With some IDEs, it is often easy to break an overly long method into several smaller methods. This is achieved using automated refactoring operations, avoiding the risk of regression associated with manual intervention.

3. Documentation of complex methods

This step aims to document the methods identified as non-compliant in the Documentation domain; this is a simple but potentially tedious operation.

4. Correction of inadvisable practices

This covers all practices remaining after the correction of forbidden practices: practices that are highly inadvisable, inadvisable, or to be avoided.


5. Removal of duplications

This operation is more or less difficult depending on the case: you first have to determine whether the duplication should really be factored out, because two components may share the same code base yet be independent. Note that the operation can be automated by some IDEs, depending on the type of duplication.

6. Modularization of complex operations

This operation is similar to splitting long methods, but is often more difficult to achieve due to the

complexity of the code.

The action plan can be refined on the Quality Cockpit using the "tags" mechanism. Tags allow labeling analysis results to facilitate operations such as prioritizing corrections, assigning them to developers, or targeting their fix version.


7 Glossary

Block coverage

Block coverage measures the rate of code blocks executed during testing against the total number of blocks. A code block is a code path with a single entry point, a single exit point, and a set of statements executed in sequence. A block ends when it reaches a conditional statement, a function call, an exception, or a try/catch.

Branch coverage

Branch coverage measures the rate of branches executed during tests against the total number of branches.

if (value)

{

//

}

This code is 100% branch-covered only if the if condition has been exercised with both the true and the false outcome.
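A minimal sketch of a test suite reaching 100% branch coverage of such a condition (the method and the assertions are invented for the example):

```java
// To reach 100% branch coverage, both outcomes of the condition must
// be executed by the tests.
public class BranchCoverageExample {
    public static String describe(boolean value) {
        if (value) {
            return "taken";   // "then" branch
        }
        return "not taken";   // implicit "else" branch
    }
}
```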

Line coverage

Line (or statement) coverage measures the rate of lines executed during testing against the total number of lines. This measure is insensitive to conditional statements: line coverage can reach 100% even though not all conditions have been exercised.
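An illustration of this insensitivity (an invented example): a single call with x > 0 executes every line of the method below, giving 100% line coverage, while the false outcome of the condition is never exercised, so branch coverage stays at 50%.

```java
public class LineCoverageExample {
    public static int increment(int x) {
        if (x > 0) x++; // condition and statement share one line
        return x;
    }
}
```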

Line of code

A physical line of source code in a text file. Blank lines and comment lines are counted as lines of code.

Non-compliance

A test result that does not satisfy the technical requirements defined for the project. A non-compliance is related to a quality factor and a quality domain.

Synonym(s): violation

Quality domain

The test results are broken down into six areas depending on the technical origin of the non-compliances:

Implementation: issues related to the use of the language or to algorithms

Structure: issues related to the organization of the source code: method size, cyclomatic complexity...

Test: issues related to unit testing and code coverage

Architecture: issues related to the software architecture

Documentation: issues related to code documentation: comment headers, inline comments...

Duplication: the "copy-pastes" found in the source code

Quality factor

The test results are broken down into six quality factors reflecting the application's quality requirements:

Efficiency: does the application ensure the required execution performance?

Changeability: do code changes require higher development costs?


Reliability: does the application contain bugs that affect its expected behavior?

Maintainability: do maintenance updates require a constant development cost?

Security: does the application have security flaws?

Transferability: is transferring the application to a new development team a problem?

Statement

A statement is a primary code unit. For simplicity, a statement is delimited by a semicolon (;) or by a left

brace ({). Examples of statements in Java:

int i = 0;

if (i == 0) {

} else {}

public final class SomeClass

{

import com.project.SomeClass;

package com.project;

Unlike lines of code, statements do not include blank lines and comment lines. In addition, a line can contain

multiple statements.


8 Annex

8.1 Cyclomatic complexity

Cyclomatic complexity is an indicator of the number of possible execution paths. A high value is a sign that the source code will be hard to understand, test, validate, maintain and evolve.

8.1.1 Definition

Draw the control graph of the code whose complexity you want to measure, then count the number of faces of the graph. This gives the structural complexity of the code, also called cyclomatic complexity.


8.1.2 Example

We want to measure the complexity of the following code (its control graph is shown alongside):

int x = 3;

if (x > 0) {

x++;

} else {

x--;

}

The graph contains 4 edges, 4 nodes and 2 faces (one inner, one outer). The cyclomatic complexity is thus 2.

8.1.3 Corollary of the definition

CC = number of decisions + 1

An if statement counts for 1 decision

A while statement counts for 1 decision

A switch statement counts for N decisions (one per case)
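Applying the corollary to a concrete method (an illustrative example, not taken from the audited code): one if, one while and a switch with two cases give 1 + 1 + 2 = 4 decisions, hence a cyclomatic complexity of 5.

```java
public class ComplexityExample {
    public static int classify(int n) {
        if (n < 0) {              // decision 1
            n = -n;
        }
        while (n >= 100) {        // decision 2
            n /= 10;
        }
        switch (n % 2) {
            case 0: return 0;     // decision 3
            case 1: return 1;     // decision 4
            default: return -1;   // unreachable for n >= 0; needed to compile
        }
    }
}
```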

8.1.4 Diagnosis to be made

Cyclomatic complexity | Risk assessment by the S.E.I.7
1-10 | Simple program, without much risk
11-20 | Moderate complexity and risk
21-50 | Complex, high risk
Above 50 | Not testable, high risk

7: The S.E.I. (Software Engineering Institute, http://www.sei.cmu.edu/) is the institute at the origin of the CMMI standard. Its research on code quality makes it a major and reliable actor in the domain. CMMI (Capability Maturity Model Integration) is a process improvement approach that helps organizations improve their performance. CMMI can be used to guide process improvement across a project, a division, or an entire organization. (source: Wikipedia)


8.2 Coupling

Coupling measures the level of dependency between packages, classes or methods. When coupling is high, the application is not modular and will be difficult to change.

8.2.1 Definition

Two classes are coupled when methods declared in one use methods or instance variables defined in the other. The relationship is symmetric: if class A is coupled to B, then B is coupled to A. The CBO metric (Coupling Between Objects) measures, for a given class A, the number of classes coupled to it.

Efferent coupling measures, for a given method, the number of references to third-party types and their methods in the method body. The higher the efferent coupling, the more the method depends on other classes.

8.2.2 Calculation of coupling

The calculation of coupling between classes can be simple, for example by counting:

Attribute declarations that reference other classes

Formal parameters (in method signatures, for example)

throws declarations

Local variables

Types

The calculation of efferent coupling for a method is also straightforward, for example by counting:

Formal parameters (in the method signature) with a non-primitive type defined outside the class

throws declarations

Local variables of the method using a non-primitive type defined outside the class
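As an illustration of this counting (a made-up example, not taken from the audit), the references to non-primitive types defined outside the class can be tallied by hand in the following method:

```java
import java.util.ArrayList;
import java.util.List;

public class CouplingExample {
    // Efferent coupling of this method, counted as described above:
    //   formal parameter types List and String -> 2
    //   local variable type ArrayList          -> 1
    public static List<String> keepShort(List<String> words, int max) {
        List<String> result = new ArrayList<>();
        for (String w : words) {
            if (w.length() <= max) result.add(w);
        }
        return result;
    }
}
```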


8.3 TRI and TEI

TRI (Test Relevancy Index) and TEI (Test Effort Index) are two indexes that measure, respectively, the relevance of testing a piece of code and the effort required to implement these tests. They were designed in partnership with the Centre of Excellence in Information and Communication Technologies (CETIC), from millions of lines of code analyzed by the "Quality Cockpit" platform8.

8.3.1 TRI (TestRelevancyIndex)

8.3.1.1 Objective

The goal of the TRI is to refine code coverage analysis by correlating raw code coverage with the relevance of testing each method. The emphasis is no longer only on the percentage of code covered, but also on the relevance of the choice of tested methods. The point is to ensure that the code coverage goal targets the appropriate methods.

8.3.1.2 Principle

The TRI is an index specific to methods, whose value is obtained by scoring the values of some unit metrics (cyclomatic complexity, afferent coupling...) and applying a risk factor. This risk factor is associated with the business features the code element is involved in; risk factors are therefore specific to the application.

Depending on the value of the TRI, the methods are classified into five priority groups:

Critical: complex or sensitive methods that must absolutely be tested

High

Average

Low

No: trivial methods, not worth testing

Each priority group defines:

A coverage threshold to reach.

A level of severity for non-compliances

It is thus possible to specify a demanding test objective for critical elements, covering different use cases, and to define tests targeting only nominal use cases for lower-priority items.

8.3.1.3 Details of unit metrics

The TRI is calculated from the following unit metrics:

Cyclomatic complexity of the method

Number of method parameters

Number of local variables in the method

Afferent coupling

Efferent coupling

Cumulative number of non-compliance of the method
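The actual scoring is statistically calibrated by Kalistick and is not published; the sketch below only illustrates the principle of combining these unit metrics with a risk factor, using invented weights and an invented risk factor:

```java
// Purely hypothetical sketch of the TRI principle: unit metrics are
// scored, summed, and a business risk factor is applied. The weights
// are invented; the real index is statistically calibrated.
public class TriSketch {
    public static double tri(int complexity, int parameters, int locals,
                             int afferent, int efferent, int violations,
                             double riskFactor) {
        double score = 2.0 * complexity + parameters + locals
                     + afferent + efferent + 0.5 * violations;
        return score * riskFactor;
    }
}
```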

8 CETIC, Kalistick. Statistically Calibrated Indexes for Unit Test Relevancy and Unit Test Writing Effort, 2010


8.3.2 The TEI (TestEffortIndex)

8.3.2.1 Objective

The TEI introduces a new dimension in test prioritization by providing an estimate of the effort required to test a method.

This index is not involved in the non-compliance status of methods; it is simply provided as a guide.

8.3.2.2 Principle

The TEI is an index specific to methods, whose value is obtained by scoring the values of some unit metrics (cyclomatic complexity, number of parameters...). Based on this value, the methods are classified into five test-effort groups:

Very low: trivial methods to test

Low

Normal

High

Very high: complex methods, very difficult to test exhaustively

8.3.2.3 Details of unit metrics

The TEI is calculated from the following unit metrics:

Cyclomatic complexity of the method

Number of method parameters

Number of local variables in the method


8.4 Technical Requirements

8.4.1 Implementation rules

Reference | Severity | Short description
AlwaysCallGetLastErrorAfterPInvoke | Forbidden
AlwaysDeclareNamespace | Forbidden
AvoidRedundantCasts | Forbidden
CallSuppressFinalizeInDisposeMethods | Forbidden
DeclareUsageForAttribute | Forbidden
DeclareVersionForAssembly_ | Forbidden | The assembly version must be declared.
DefineMarshalingForPInvokeStringArguments | Forbidden
DefineMessageForObsoleteAttribute | Forbidden
DontAccessAnyReferenceTypeInDestructor | Forbidden
DontAccessOrModifyObjectMoreThanOnceInAExpression | Forbidden
DontChangeVisibilityOfInheritedMember | Forbidden
DontCompareFloatWithEquals | Forbidden
DontDeriveExceptionFromSystemException | Forbidden
DontHardcodeLocaleSpecificStrings | Forbidden
DontHideBaseClassMethod | Forbidden
DontImplementEmptyFinalizers | Forbidden
DontImplementWriteOnlyProperty | Forbidden
DontOverloadOperatorEqualsOnReferenceType | Forbidden
DontRaiseExceptionInUnexpectedMethod_ | Forbidden | Certain methods should not throw exceptions because they carry out simple operations and result either in a return value or the end of an operation.
DontThrowExceptionsInFinallyBlock | Forbidden | Don't throw exceptions in a finally block.
DontUseInadvisableTypes | Forbidden
DontUseObjectTypeForIndexers | Forbidden
DontUseTooManyParametersOnGenerics_ | Forbidden
DontUseWin32ApiWhenManagedApiExist | Forbidden
FollowISerializableImplementationRule | Forbidden
ISerializableTypesMustCallBaseClassMethods | Forbidden
ImplementIDisposableForTypesWithDisposableFields | Forbidden
InstantiateExceptionsWithArguments | Forbidden


OverrideEqualsWithOperatorOnValueTypes | Forbidden
PInvokesMustNotBeVisible | Forbidden
PropertyNamesMustNotMatchGetMethods | Forbidden
TypeLinkDemandsRequireInheritanceDemands | Forbidden
TypeWithNativeResourcesMustBeDisposable | Forbidden
UseConstInsteadOfReadOnlyWhenPossible_ | Forbidden | A field declared as static and readonly whose initial value can be calculated during compilation should use const instead of static readonly.
UseIsNanFunction | Forbidden
UseIsNullOrEmptyToCheckEmptyStrings | Forbidden
UseMarshalAsForBooleanPInvokeArguments_ | Forbidden | Use the System.Runtime.InteropServices.MarshalAsAttribute attribute to properly convert between an unmanaged Boolean and a managed Boolean.
UseParamsKeywordInsteadOfArglist | Forbidden
UseSTAThreadAttributeForWindowsFormsEntryPoints | Forbidden
DeclareFinalizerForDisposableTypes | Highly inadvisable
DefineAttributeForISerializableTypes | Highly inadvisable
DefineDeserializationMethodsForOptionalFields | Highly inadvisable
DontIgnoreMethodsReturnValue | Highly inadvisable
DontMakePointersVisible | Highly inadvisable
DontNestGenericInMemberSignatures_ | Highly inadvisable | Don't nest generic types as method parameters.
DontTouchForLoopVariable | Highly inadvisable
DontUseMultidimensionalIndexers | Highly inadvisable
DontUseNonConstantStaticVisibleFields | Highly inadvisable
EnumeratorMustBeStronglyTyped | Highly inadvisable
ImplementGenericInterfaceForCollections | Highly inadvisable
ListMustBeStronglyTyped | Highly inadvisable
NeverMakeCtorCallOverridableMethod | Highly inadvisable
OverrideGetHashCodeWhenOverridingEquals | Highly inadvisable


OverrideMethodsInIComparableImplementations | Highly inadvisable
ReviewParametersAttributeStringLiteral_ | Highly inadvisable | Review the attribute values. The values of the "version", "guid", "uri", "urn", and "url" parameters must correspond to correct expected values.
AttributeArgumentShouldBeLinkedToAccessor | Inadvisable
DefineAttributeForNonSerializableFields | Inadvisable
DefinePrivateConstructorForStaticClass | Inadvisable
DefineZeroValueForEnum | Inadvisable
DontCatchTooGeneralExceptions | Inadvisable
DontDeclareRefAndOutParameters | Inadvisable
DontThrowBasicException | Inadvisable
DontThrowRuntimeException | Inadvisable
DontUseCaseToDifferPublicIdentifiers | Inadvisable
DontUseReservedKeywordsForIdentifiers | Inadvisable
FollowSerializationMethodsImplementationRule | Inadvisable
FollowSuffixStandardForIdentifiers | Inadvisable
OverrideLinkDemandMustBeIdenticalToBase | Inadvisable
OverrideOperatorEquals | Inadvisable
PreserveStackTraceWhenThrowingNewException | Inadvisable
ProvideTypeParameterForGenericMethods | Inadvisable
StaticTypesShouldBeSealed | Inadvisable
UseGenericEventHandler_ | Inadvisable | Use the generic delegate System.EventHandler<TEventArgs>(Object sender, TEventArgs e).
UseInt32ForEnumStorage | Inadvisable
UseInterfaceRatherThanClasses_ | Inadvisable | For the sake of genericity, interfaces are more flexible to use than the classes that implement them.
UseStaticWhenPossible | Inadvisable
ConsiderUsingProperty | To be avoided
DontCompareBooleanWithTrueOrFalse | To be avoided
DontDefinePublicGenericLists | To be avoided
DontDirectlyReturnArray | To be avoided
DontImplementConstructorForStaticTypes | To be avoided
DontMakeRedundantInitialization_ | To be avoided | It is unnecessary to initialize a field with its default value.
DontMakeTypeFieldsPublic | To be avoided
DontPrefixEnumValuesWithEnumName | To be avoided
DontUseReservedKeywordForEnum | To be avoided


DontUseTypeNamesForNamespaces | To be avoided
FlagsEnumsMustHavePluralNames | To be avoided
IdentifiersMustNotContainTypeNames | To be avoided
ImplementNamedMethodsWhenOverloadingOperators | To be avoided
InitializeStaticFieldsInline | To be avoided
OnlyFlagsEnumsMustHavePluralNames | To be avoided
PassBaseTypeAsParameters | To be avoided
RemoveUnusedInternalClasses | To be avoided
RemoveUnusedParameters | To be avoided
RemoveUnusedPrivateFields | To be avoided
RemoveUnusedPrivateMethods | To be avoided
ReviewUnusedLocals | To be avoided
SealAttributesDeclarations | To be avoided
DisposeDisposableFields_ | [None] | All disposable fields (inheriting from System.IDisposable) must be disposed in the System.IDisposable.Dispose() method of the type.

8.4.2 Code coverage thresholds

The following table shows the expected code coverage thresholds according to test priority (TRI). This priority is scaled over five levels, based on TRI ranges:

Test priority | TRI min (inclusive) | TRI max (exclusive) | Coverage threshold | Severity
No | 0 | 6 | 0% | For information
Low | 6 | 11 | 60% | For information
Average | 11 | 16 | 70% | For information
High | 16 | 21 | 80% | For information
Critical | 21 | (infinite) | 90% | For information
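The ranges above can be read as a simple lookup from TRI value to expected coverage. A minimal sketch (the class and method names are invented; the figures come directly from the table):

```java
// Maps a TRI value to the expected coverage threshold, using the
// [min, max[ ranges of the table above.
public class CoverageThresholds {
    public static int expectedCoverage(double tri) {
        if (tri < 6)  return 0;   // No priority
        if (tri < 11) return 60;  // Low
        if (tri < 16) return 70;  // Average
        if (tri < 21) return 80;  // High
        return 90;                // Critical
    }
}
```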