Software Project Management - Chapter 4 Software Metrics
Table of Contents
4.1 Introduction
4.2 Software Metrics
4.3 Software Measurement
4.4 Metrics for Software Quality Attributes
4.1 Introduction
• Software measurement is concerned with deriving a numeric value for the
quantitative evaluation of an attribute of a software product or process.
• It is applied to the software process with the intent of improving the
process on a continuous basis.
• This allows objective comparisons between products, techniques and
processes.
4.1 Introduction
Use of measurements
• To assign a value to system quality attributes
• By measuring the characteristics of system components, such as their cyclomatic
complexity, and then aggregating these measurements, you can assess system
quality attributes, such as maintainability.
• To identify the system components whose quality is sub-standard
• Measurements can identify individual components with characteristics that
deviate from the norm. For example, you can measure components to discover
those with the highest complexity. These are most likely to contain bugs because
the complexity makes them harder to understand.
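A minimal sketch of both uses above, with hypothetical component names and complexity values: per-component cyclomatic complexities are aggregated into a system-level indicator, and components deviating strongly from the norm are flagged (the 2x-mean threshold is an illustrative choice, not a standard).

```python
# Sketch (hypothetical data): aggregate per-component cyclomatic
# complexity to assess a system-level attribute and flag outliers.

components = {          # hypothetical component measurements
    "parser": 4,
    "scheduler": 12,
    "report_gen": 27,
    "auth": 6,
}

# Aggregate measurement: mean complexity as a maintainability indicator.
mean_cc = sum(components.values()) / len(components)

# Flag components whose complexity deviates strongly from the norm;
# the 2x-mean threshold is illustrative, not a standard.
outliers = [name for name, cc in components.items() if cc > 2 * mean_cc]

print(f"mean complexity: {mean_cc:.2f}")
print(f"likely trouble spots: {outliers}")
```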
4.2 Software Metrics
4.2.1 Use of Software Metrics
4.2.2 Types of Metrics
4.2 Software Metrics
Indicators
• A metric or combination of metrics that provides insight into the
software process, project or the product itself so that improvement or
adjustment can be made to the process or project.
• E.g., 800 errors/KLOC is an indicator that the software is of poor
quality.
4.2 Software Metrics
4.2.2 Types of Metrics
Software metrics may be categorized as
a. Product metrics
b. Process metrics
c. Project metrics
4.2.2 a. Product Metrics
Dynamic Metrics
• Closely related to software quality attributes
• Can be collected during system testing or after the system has gone
into use
• Relatively easy to measure
• Examples of measurement:
o time required to start up the system, execution time required for a particular
function (performance/efficiency attribute)
o the number of system failures (reliability attribute).
4.2.2 a. Product Metrics
Static Metrics
• Have an indirect relationship with quality attributes
• You need to try and derive a relationship between these metrics and properties
such as complexity, understandability and maintainability.
• Examples:
• Lines of code (LOC) – more lines tend to mean greater complexity and
more errors
• Cyclomatic complexity – the number of independent paths through the
program
• Depth of conditional nesting – deeply nested if-statements are hard to
understand and potentially error-prone
• Length of identifiers – longer identifiers tend to be more meaningful
4.2.2 a. Product Metrics
Cyclomatic Complexity
• Independent path: any path through the program that introduces at least
one new set of processing statements or a new condition.
• E.g. in Flowchart A (ref 3 pg 446), with nodes numbered 1–11, the
independent paths are:
• Path 1: 1-11
• Path 2: 1-2-3-4-5-10-1-11
• Path 3: 1-2-3-6-8-9-10-1-11
• Path 4: 1-2-3-6-7-9-10-1-11
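The path count above can be cross-checked with the standard flow-graph formula V(G) = E − N + 2 (edges minus nodes plus two, for a single connected graph). A sketch in Python, with the edge list inferred from the four paths listed above:

```python
# Sketch: cyclomatic complexity V(G) = E - N + 2 for a connected flow
# graph. The edge list below is inferred from the four independent paths
# listed for Flowchart A; it is a reconstruction, not taken from ref 3.

edges = [
    (1, 2), (2, 3), (3, 4), (4, 5), (5, 10),   # path through nodes 4-5
    (3, 6), (6, 7), (6, 8), (7, 9), (8, 9),    # branch at node 6
    (9, 10), (10, 1), (1, 11),                 # loop back and exit
]

# Collect the distinct nodes appearing in any edge.
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2
print(v_g)  # 13 edges - 11 nodes + 2 = 4, matching the four paths
```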
4.2.2 a. Product Metrics
Relationships between internal and external software quality attributes
(Figure not reproduced here.)
4.2.2 b. Process Metrics
• Gives an indication of the system development processes
• Examples:
• Measures of errors uncovered before the release of the software
• Defects delivered to and reported by users
• Work products/deliverables delivered (productivity)
• Schedule conformance
• Human effort expended
• Calendar time expended
• Time (hr or day)/SE task – can also be a project metric
• Time elapsed from the time a request is made until evaluation is
complete
• Effort (person-hr) to perform the evaluation
• Time elapsed from completion of evaluation to assignment of change
order to personnel
• Effort required to make the change
• Time required to make the change
• Errors uncovered during work to make the change
Note: Some of the process metrics are also metrics for project and product as well
4.2.2 b. Process Metrics
PSP Core Measures
• Size – the size measure for a product part, e.g. LOC
• Effort – the time required to complete a task (usually in minutes)
• Quality – the number of defects in the product
• Schedule – a measure of project progression, tracked against planned
and actual completion dates
Note:
software developers use many other measures derived from these basic measures,
e.g.: estimation accuracy, productivity, PV, EV, etc.
Logging time, defect and size data is an essential part of planning and tracking
PSP projects as historical data is used to improve estimating accuracy.
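A sketch of how derived measures fall out of the four core measures; the task data below is hypothetical:

```python
# Sketch (hypothetical task log): deriving productivity, defect density
# and estimation accuracy from the PSP core measures.

size_loc = 250            # Size: lines of code produced
effort_min = 300          # Effort: actual time, in minutes
defects = 4               # Quality: defects found in the product
planned_min = 240         # planned effort, for estimation accuracy

productivity = size_loc / (effort_min / 60)          # LOC per hour
defect_density = defects / (size_loc / 1000)         # defects per KLOC
estimation_error = (effort_min - planned_min) / planned_min

print(f"{productivity:.1f} LOC/hr, {defect_density:.1f} defects/KLOC, "
      f"{estimation_error:+.0%} effort estimation error")
```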
4.2.2 c. Project Metrics
• Used by Project Manager and team to
o Monitor progress during software development,
o Adapt project workflow and technical activities, and
o Control product quality
• By comparing actual metrics with estimated/expected metrics to
o Minimize the development schedule by making adjustments necessary to avoid
delays and mitigate potential problems and risks
o Assess product quality on an on-going basis and modify the technical approach
to improve quality when necessary
4.2.2 c. Project Metrics
Collection of Project Metrics
• During estimation:
o Metrics collected from past projects are used as a basis from which effort and
time estimates are made for current work
• During project execution:
o Scheduled vs actual milestone dates – used to control progress
• During technical work:
o Errors uncovered per review hour
o Distribution of effort per SE task
o Pages of documentation per SE task
o Function points per SE task
o Delivered source lines per SE task
4.3 Software Measurement
4.3.1 Direct and Indirect Measures
4.3.2 Size-Oriented Metrics
4.3.3 Function-Oriented Metrics
4.3 Software Measurement
Normalized Metrics
• Project A has 20 errors while Project B has 50 errors. Can we conclude
that Project A is of better quality?
• Normalization of metrics is required to obtain values that can be
compared fairly between different projects.
• Two methods to obtain normalized metrics to compare different projects:
• Size-oriented metrics
• Function-oriented metrics
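The 20-vs-50 comparison above can be made concrete with size-oriented normalization; the project sizes below are hypothetical assumptions:

```python
# Sketch: raw error counts are not comparable across projects of
# different sizes; errors per KLOC are. The KLOC figures below are
# hypothetical assumptions.

projects = {
    "A": {"errors": 20, "kloc": 5},    # hypothetical: small project
    "B": {"errors": 50, "kloc": 50},   # hypothetical: ten times larger
}

rates = {name: p["errors"] / p["kloc"] for name, p in projects.items()}

for name, rate in rates.items():
    print(f"Project {name}: {rate:.1f} errors/KLOC")

# Despite reporting more raw errors, Project B has the lower normalized
# defect rate (1.0 vs 4.0 errors/KLOC).
```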
4.3 Software Measurement
(Slide figure: quality and productivity measures; not reproduced here.)
4.3.2 Size-oriented Metrics
4.3.3 Function-oriented metrics
                                        Weighting factor
Measurement parameter          Count    Simple  Average  Complex
No. of user inputs             ___  ×      3       4        6     = ___
No. of user outputs            ___  ×      4       5        7     = ___
No. of user inquiries          ___  ×      3       4        6     = ___
No. of files                   ___  ×      7      10       15     = ___
No. of external interfaces     ___  ×      5       7       10     = ___
Count total                                                         ___
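A sketch of the FP calculation based on the table above, using the widely used formula FP = count-total × (0.65 + 0.01 × ΣFi); the counts and the ΣFi rating below are hypothetical:

```python
# Sketch: function point (FP) calculation from the weighting table,
# using FP = count_total * (0.65 + 0.01 * sum(Fi)).
# The counts and sum_fi value below are hypothetical.

weights = {                       # "Average" weighting column
    "user inputs": 4,
    "user outputs": 5,
    "user inquiries": 4,
    "files": 10,
    "external interfaces": 7,
}
counts = {                        # hypothetical raw counts
    "user inputs": 10,
    "user outputs": 8,
    "user inquiries": 6,
    "files": 4,
    "external interfaces": 2,
}

count_total = sum(counts[k] * weights[k] for k in weights)

# Sum of the 14 value-adjustment factors, each rated 0-5 (hypothetical).
sum_fi = 50

fp = count_total * (0.65 + 0.01 * sum_fi)
print(f"count total = {count_total}, FP = {fp:.1f}")
```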
Exercise
Measurement parameter           Weighting factor   Project A's Count   Project B's Count
Number of inputs                Simple (2)                 2                   5
Number of outputs               Simple (2)                 4                  12
Number of inquiries             Average (4)                7                   9
Number of files                 Average (5)                3                   7
Number of external interfaces   Complex (6)                2                   3
ΣFi = 60
Assuming that Weighting factor for each parameter is Average(5),
a. Calculate function point (FP) for both project A and B.
b. 120 pages of documentation are found for Project A, while 190 pages are
found for Project B. Evaluate which project has higher maintainability by
using FP in your normalized measurement.
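A sketch of part (a), assuming the standard formula FP = count-total × (0.65 + 0.01 × ΣFi), with every parameter weighted Average(5) and ΣFi = 60 as given:

```python
# Sketch of part (a): FP for both projects with every parameter
# weighted Average(5), using FP = count_total * (0.65 + 0.01 * sum_fi).

counts = {
    # inputs, outputs, inquiries, files, external interfaces
    "A": [2, 4, 7, 3, 2],
    "B": [5, 12, 9, 7, 3],
}
weight = 5                      # Average(5) for every parameter
sum_fi = 60                     # given in the exercise

fp = {}
for project, c in counts.items():
    count_total = sum(c) * weight
    fp[project] = count_total * (0.65 + 0.01 * sum_fi)

print(fp)  # A: 90 * 1.25 = 112.5, B: 180 * 1.25 = 225.0
```

For part (b), dividing each project's documentation pages by its FP gives the normalized measure to compare.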
4.4 Metrics for Software Quality Attributes
4.4.1 McCall’s Software Quality Factors
4.4.2 Metrics for Measuring Software Quality
4.4.3 Challenges in Measuring Software
4.4.4 Characteristics of Good Software Metrics
4.4.2 Metrics for Measuring Software Quality
a. Correctness
• The degree to which the software performs its required function
• Common measures:
• Defects per KLOC
• Defects over a standard period of time (e.g., one year)
Note: Defect refers to a verified lack of conformance to requirements.
4.4.2 Metrics for Measuring Software Quality
b. Maintainability
• The ease with which a program can be corrected if an error is
encountered, adapted if its environment changes, or enhanced if the
customer desires a change in requirements.
• No direct way to measure, so must use indirect measures:
o Mean-time-to-change (MTTC)
The time it takes to analyze the change request, design an appropriate modification, implement
the change, test it, and distribute the change to all users.
Programs that are maintainable will have a lower MTTC.
o Spoilage
The cost to correct defects encountered after the software has been released to its end-users.
When the ratio of spoilage to overall project cost is plotted as a function of time, a manager
can determine whether the overall maintainability of software produced by a software
development organization is improving.
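A sketch of both indirect measures, using hypothetical change-log and cost data:

```python
# Sketch (hypothetical data): the two indirect maintainability measures
# above -- mean-time-to-change (MTTC) and the spoilage-to-cost ratio.

# Days per change request, from analysis through distribution to users.
change_times_days = [3, 5, 2, 6]
mttc = sum(change_times_days) / len(change_times_days)

spoilage_cost = 12_000      # cost of correcting post-release defects
project_cost = 200_000      # overall project cost
spoilage_ratio = spoilage_cost / project_cost

print(f"MTTC = {mttc} days, spoilage ratio = {spoilage_ratio:.1%}")
# Tracked over successive projects, a falling spoilage ratio suggests
# the organization's maintainability is improving.
```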
4.4.2 Metrics for Measuring Software Quality
c. Usability
• How easy it is to use the system
• Can be measured in terms of these characteristics:
o The time required to learn how to perform a task for the first time using the system
o The time required to become moderately efficient in the use of the system
o The net increase in productivity, measured against the old process or system,
measured after a user has gained moderate efficiency
o A subjective measure of user attitude towards the system (using a questionnaire)