
Software Project Management - Chapter 4 Software Metrics


Software Metrics & Measurements
Chapter 4
Table of Contents
4.1 Introduction
4.2 Software Metrics
4.3 Software Measurement
4.4 Metrics for Software Quality Attributes

4.1 Introduction
• Software measurement is concerned with deriving a numeric value for the
quantitative evaluation of an attribute of a software product or process.
• It is applied in the software process with the intent of improving the
process on a continuous basis.
• This allows for objective comparisons between products, techniques and
processes.

4.1 Introduction

Use of measurements
• To assign a value to system quality attributes
• By measuring the characteristics of system components, such as their cyclomatic
complexity, and then aggregating these measurements, you can assess system
quality attributes, such as maintainability.
• To identify the system components whose quality is sub-standard
• Measurements can identify individual components with characteristics that
deviate from the norm. For example, you can measure components to discover
those with the highest complexity. These are most likely to contain bugs because
the complexity makes them harder to understand.

4.2 Software Metrics
4.2.1 Use of Software Metrics
4.2.2 Types of Metrics

4.2 Software Metrics


• A quantitative measure of the degree to which a system, component or
process possesses a given attribute.
• Examples of metrics (normalized):
o Errors/KLOC
o Defects/KLOC
o Cost/LOC
o Pages of documents/KLOC
o Errors/person-month
o LOC/person-month

4.2 Software Metrics

4.2.1 Use of Software Metrics


• Quality control
o Measures of the fitness for use of the work products that are produced.
• Project control
• Productivity assessment
o Measures of software development output as a function of effort and time
applied.
• Estimation (using historical metrics)
o What was software development productivity on past projects?
o What was the quality of the software that was produced?
o How can past productivity and quality data be extrapolated to the present?
o How can it help us plan and estimate more accurately?
4.2 Software Metrics

Match the following examples of metrics to the purpose of measurement:

Purposes of measurement: Quality control | Project control | Productivity assessment | Estimation

Metrics to match: Errors/KLOC | Defects/KLOC | Cost/LOC | Pg document/KLOC | Errors/person-month | LOC/person-month
4.2 Software Metrics

Indicators
• A metric or combination of metrics that provides insight into the
software process, project or the product itself so that improvement or
adjustment can be made to the process or project.
• E.g., 800 errors/KLOC is an indicator that the software is of poor quality.

4.2 Software Metrics
4.2.2 Types of Metrics
Software metrics may be categorized as
a. Product metrics
b. Process metrics
c. Project metrics

[Diagram: Metrics divided into Product, Process and Project]
4.2 Software Metrics

4.2.2 a. Product metrics


• Concerned with the quality of the software itself
• Classes of product metrics
o Dynamic metrics
 Collected by measurements made of a program in execution
 Help assess efficiency and reliability
o Static metrics
 Collected by measurements made of the system representations such as design, program or
documentation
 Help assess complexity, understandability and maintainability.

4.2.2 a. Product Metrics

Dynamic Metrics
• Closely related to software quality attributes
• Can be collected during system testing or after the system has gone
into use
• Relatively easy to measure
• Examples of measurement:
o time required to start up the system, execution time required for a particular
function (performance/efficiency attribute)
o the number of system failures (reliability attribute).
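
For illustration, here is a minimal Python sketch of collecting one such dynamic metric, the execution time of a particular function; process_order is a hypothetical stand-in for whatever operation is being measured:

```python
import time

# Minimal sketch of collecting one dynamic metric: the execution time of
# a particular function. process_order() is a hypothetical stand-in for
# the operation whose performance is being measured.
def process_order():
    return sum(i * i for i in range(100_000))  # placeholder workload

start = time.perf_counter()
process_order()
elapsed = time.perf_counter() - start
print(f"Execution time: {elapsed * 1000:.2f} ms")
```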

4.2.2 a. Product Metrics

Static Metrics
• Have an indirect relationship with quality attributes
• You need to derive a relationship between these metrics and properties
such as complexity, understandability and maintainability.
• Examples:
• Lines of code (LOC) – more lines generally mean more complexity and more errors
• Cyclomatic complexity – the number of independent paths through the program
• Depth of conditional nesting – deeply nested if-statements are hard to
understand and potentially error-prone
• Length of identifiers – longer identifiers tend to be more meaningful, aiding understandability

4.2.2 a. Product Metrics
Cyclomatic Complexity
• Independent path: any path through the program that introduces at least
one new set of processing statements or a new condition.
• E.g. in Flowchart A (ref 3, pg 446), whose nodes are numbered 1–11,
the independent paths are:
• Path 1: 1-11
• Path 2: 1-2-3-4-5-10-1-11
• Path 3: 1-2-3-6-8-9-10-1-11
• Path 4: 1-2-3-6-7-9-10-1-11

[Figure: Flowchart A, a control-flow graph with nodes 1–11]

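As a sketch, the cyclomatic complexity can also be computed from the control-flow graph as V(G) = E − N + 2. The edge list below is an assumption, reconstructed from the four independent paths listed above:

```python
# Minimal sketch: cyclomatic complexity of a control-flow graph using
# V(G) = E - N + 2. The edge list is an assumption reconstructed from
# the four independent paths of Flowchart A listed above.
edges = [
    (1, 2), (1, 11), (2, 3), (3, 4), (3, 6), (4, 5), (5, 10),
    (6, 7), (6, 8), (7, 9), (8, 9), (9, 10), (10, 1),
]
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2
print(f"V(G) = {len(edges)} - {len(nodes)} + 2 = {v_g}")  # 13 - 11 + 2 = 4
```

The result, V(G) = 4, matches the four independent paths enumerated on the slide.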
4.2.2 a. Product Metrics
Relationships between internal and external
software quality attributes

[Figure: internal attributes linked to the external quality attributes they influence]
4.2.2 b. Process Metrics
• Gives an indication of the system development processes
• Examples:
• Measures of errors uncovered before the release of the software
• Defects delivered to and reported by users
• Work products/deliverables delivered (productivity)
• Schedule conformance
• Human effort expended
• Calendar time expended
• Time (hr or day) per SE task (can also be a project metric)
• Time elapsed from the time a request is made until evaluation is complete
• Effort (person-hr) to perform the evaluation
• Time elapsed from completion of evaluation to assignment of change order to personnel
• Effort required to make the change
• Time required to make the change
• Errors uncovered during work to make the change
Note: Some of the process metrics are also metrics for the project and the product as well
4.2.2 b. Process Metrics

Types of process metric


• The time taken for a particular process to be completed
• This can be the total time devoted to the process, calendar time, the time spent
on the process by particular engineers, and so on.
• The resources required for a particular process
• Resources might include total effort in person-days, travel costs or computer
resources.
• The number of occurrences of a particular event
• Examples of events that might be monitored include the number of defects
discovered during code inspection, the number of requirements changes
requested, the number of bug reports in a delivered system and the average
number of lines of code modified in response to a requirements change.
4.2.2 b. Process Metrics
Private vs Public Process Metrics
• Private metrics
o Process metrics that should be private to the individual software engineer and
serve as an indicator for the individual only
o Examples:
 Defect rates (by individual)
 Defect rates (by module)
 Errors found during development
• Public metrics
o Integrated information that was originally private to individuals and teams
o Examples:
 Project level defect rates
 Effort
 Calendar times
 Related data to uncover indicators to improve organizational process performance
4.2.2 b. Process Metrics
Personal Software Process (PSP)
• An approach that uses private process metrics designed to help
software engineers improve their performance
• Provides disciplined methods to help software engineers
o Improve their estimating and planning skills
o Make commitments they can keep
o Manage the quality of their projects
o Reduce the number of defects in their work
• Each PSP level has detailed scripts, checklists and templates to guide the
engineer through the required steps and help them improve their own
personal software process.

4.2.2 b. Process Metrics
PSP Core Measures
• Size – the size measure for a product part, e.g. LOC
• Effort – the time required to complete a task (usually in minutes)
• Quality – the number of defects in the product
• Schedule – a measure of project progression, tracked against planned
and actual completion dates
Note:
 software developers use many other measures derived from these basic measures,
e.g.: estimation accuracy, productivity, PV, EV, etc.
 Logging time, defect and size data is an essential part of planning and tracking
PSP projects as historical data is used to improve estimating accuracy.

4.2.2 c. Project Metrics
• Used by Project Manager and team to
o Monitor progress during software development,
o Adapt project workflow and technical activities, and
o Control product quality
• By comparing actual metrics with estimated/expected metrics to
o Minimize the development schedule by making adjustments necessary to avoid
delays and mitigate potential problems and risks
o Assess product quality on an on-going basis and modify the technical approach
to improve quality when necessary

4.2.2 c. Project Metrics
Collection of Project Metrics
• During estimation:
o Metrics collected from past projects are used as a basis from which effort and
time estimates are made for current work
• During project execution:
o Scheduled vs actual milestone dates – used to control progress
• During technical work:
o Errors uncovered per review hour
o Distribution of effort per SE task
o Pages of documentation per SE task
o Function points per SE task
o Delivered source lines per SE task

4.3 Software Measurement
4.3.1 Direct and Indirect Measures
4.3.2 Size-Oriented Metrics
4.3.3 Function-Oriented Metrics

4.3 Software Measurement

4.3.1 Direct and Indirect Measures


• Direct Measure: more quantitative, easier to collect. E.g.:
• Process & Project: cost & effort applied
• Product: LOC, execution speed, memory size, defects over time
• Indirect Measure: measures non-functional requirements; can still be
expressed in quantitative form, but is quite difficult to collect. E.g.:
• Functionality
• Quality
• Complexity
• Efficiency
• Reliability
• Maintainability
4.3 Software Measurement

Normalized Metrics
• Project A has 20 errors while Project B has 50 errors. Can we conclude
that Project A is of better quality?
• Normalization of metrics is required to obtain values that can be compared
fairly between different projects.
• Two methods to obtain normalized metrics to compare different projects:
• Size-oriented metrics
• Function-oriented metrics

4.3 Software Measurement

4.3.2 Size-oriented Metrics


• Size-oriented metrics are derived by normalizing (dividing) a measure by a
related size or effort measure, e.g.:
• Quality measures over product size, e.g. Errors/KLOC
• Productivity measures over the effort applied, e.g. LOC/person-month
4.3.2 Size-oriented Metrics

Unnormalized Size-oriented Measures


Project   LOC      Effort   Cost ($000)   Pages doc.   Errors   Defects   People
Alpha     12,100     24         168            365        134       29        3
Beta      27,200     62         440          1,224        321       86        5
Gamma     20,200     43         314          1,050        256       64        6
…            …        …           …              …          …        …        …
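
A minimal sketch of how the normalized metrics on the next slide can be derived from this table; effort is assumed to be in person-months, since the source table does not state the unit:

```python
# Minimal sketch: deriving normalized size-oriented metrics from the
# unnormalized measures in the table above. Effort is assumed to be in
# person-months (the table does not state the unit).
projects = {
    # name: (LOC, effort, cost in $000, pages of doc., errors, defects, people)
    "Alpha": (12100, 24, 168, 365, 134, 29, 3),
    "Beta":  (27200, 62, 440, 1224, 321, 86, 5),
    "Gamma": (20200, 43, 314, 1050, 256, 64, 6),
}

for name, (loc, effort, cost, pages, errors, defects, people) in projects.items():
    kloc = loc / 1000
    print(f"{name}: {errors / kloc:.1f} errors/KLOC, "
          f"{defects / kloc:.1f} defects/KLOC, "
          f"{cost * 1000 / loc:.2f} $/LOC, "
          f"{loc / effort:.0f} LOC/person-month")
```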

4.3.2 Size-oriented Metrics

Examples of Normalized Size-oriented Metrics


• Errors per KLOC (thousand lines of code)
• Defects per KLOC
• $ per LOC
• Pages of documentation per KLOC
• Errors per person-month
• LOC per person-month
• $ per page of documentation

4.3 Software Measurement

4.3.3 Function-oriented Metrics


• Function-oriented metrics measure the functionality delivered by the
software system as a normalized value.
• Since this so-called “functionality” cannot be measured directly, it is
derived indirectly using the Function Point (FP) calculation.

4.3.3 Function-oriented metrics

Function Point (FP) Formula

FP = Count Total × [0.65 + 0.01 × ΣFi]

where Count Total is the sum of all FP entries, and the Fi (i = 1 to 14) are
the “complexity adjustment values”.
4.3.3 Function-oriented metrics

FP = Count Total × [0.65 + 0.01 × ΣFi]


Count Total is the sum of the counts from the following measurement
parameters:
• No. of user inputs – click on save, print, etc
• No. of user outputs – no. of reports, screens, error messages
• No. of user inquiries – HELP, Search/Find request
• No. of files – database, separate files, etc
• No. of external interfaces - connection to hard-disk, printers, disk-drives,
etc

4.3.3 Function-oriented metrics

Computing the Count Total for FP

The weighting factor (simple, average or complex) used for each parameter is
to be decided by the organization (subjective).

                                            Weighting factor
Measurement parameter         Count      Simple  Average  Complex
No. of user inputs            ___    ×      3       4        6     =  ___
No. of user outputs           ___    ×      4       5        7     =  ___
No. of user inquiries         ___    ×      3       4        6     =  ___
No. of files                  ___    ×      7      10       15     =  ___
No. of external interfaces    ___    ×      5       7       10     =  ___
                                                       Count total =  ___
4.3.3 Function-oriented metrics

Examples for the various measurement parameters:
• No. of user inputs – e.g. click on a save or print button
• No. of user outputs – e.g. no. of reports, screens
• No. of user inquiries – e.g. HELP, search/find requests
• No. of files – e.g. customer files
• No. of external interfaces – e.g. connections to hard-disk, printers, disk-drives

(The weighting factor table is the same as on the previous slide.)
4.3.3 Function-oriented metrics

FP = Count Total × [0.65 + 0.01 × ΣFi]


The following scale is used for the complexity adjustment values Fi:
0 – no influence
1 – incidental
2 – moderate
3 – average
4 – significant
5 – essential

4.3.3 Function-oriented metrics

FP = Count Total × [0.65 + 0.01 × ΣFi]

Complexity adjustment values Fi are obtained from responses to the following
questions, each rated on the scale above (0 – no influence to 5 – essential):
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over
multiple screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
Exercise: Compute the function point

Measurement parameter           Count   Weighting factor
Number of inputs                  2     Complex (5)
Number of outputs                 4     Simple (2)
Number of inquiries               7     Average (4)
Number of files                   3     Average (7)
Number of external interfaces     2     Complex (8)
ΣFi = 60

Formula: FP = Count Total × [0.65 + 0.01 × ΣFi]
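A worked sketch of the exercise, using the counts and weights from the table above:

```python
# Worked sketch of the exercise above.
counts  = [2, 4, 7, 3, 2]   # inputs, outputs, inquiries, files, interfaces
weights = [5, 2, 4, 7, 8]   # per-parameter weighting factors from the table
sum_fi  = 60

count_total = sum(c * w for c, w in zip(counts, weights))  # 10+8+28+21+16 = 83
fp = count_total * (0.65 + 0.01 * sum_fi)                  # 83 * 1.25
print(count_total, fp)                                     # 83 103.75
```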

4.3.3 Function-oriented metrics

Examples of “functionality” as a normalized value:


• Errors/ FP
• Defects/ FP
• $/ FP
• Pg doc/ FP
• FP/ person-month

Exercise

Measurement parameter           Weighting factor   Project A's count   Project B's count
Number of inputs                Simple (2)                 2                   5
Number of outputs               Simple (2)                 4                  12
Number of inquiries             Average (4)                7                   9
Number of files                 Average (5)                3                   7
Number of external interfaces   Complex (6)                2                   3
ΣFi = 60

Assuming that the weighting factor for each parameter is Average (5),
a. Calculate the function point (FP) for both Project A and Project B.
b. 120 pages of documentation are found for Project A and 190 pages for Project B.
Evaluate which project has higher maintainability by using FP in your normalized
measurement.
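
A worked sketch of the arithmetic for part (a), applying Average (5) to every parameter as the exercise instructs (the per-parameter weights in the table are then unused), followed by the normalized values needed for part (b):

```python
# Worked sketch of part (a): Average (5) is applied to every parameter,
# as the exercise instructs. Part (b) then normalizes documentation
# pages by FP so the two projects can be compared fairly.
counts_a = [2, 4, 7, 3, 2]
counts_b = [5, 12, 9, 7, 3]
adjustment = 0.65 + 0.01 * 60            # ΣFi = 60, so adjustment = 1.25

fp_a = sum(counts_a) * 5 * adjustment    # 18 * 5 * 1.25 = 112.5
fp_b = sum(counts_b) * 5 * adjustment    # 36 * 5 * 1.25 = 225.0

print(f"Project A: FP = {fp_a}, {120 / fp_a:.2f} pages of doc. per FP")
print(f"Project B: FP = {fp_b}, {190 / fp_b:.2f} pages of doc. per FP")
```

The interpretation of the normalized values in part (b) is left to the exercise.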
4.4 Metrics for Software Quality Attributes
4.4.1 McCall’s Software Quality Factors
4.4.2 Metrics for Measuring Software Quality
4.4.3 Challenges in Measuring Software
4.4.4 Characteristics of Good Software Metrics

4.4 Metrics for Software Quality Attributes

4.4.1 McCall’s Software Quality Factors


3 software quality factor categories:
• Product operation (using it): Correctness, Reliability, Usability,
Integrity, Efficiency
• Product revision (changing it): Maintainability, Flexibility, Testability
• Product transition (modifying it to work in a different environment;
i.e., “porting” it): Portability, Reusability, Interoperability

[Diagram: McCall's quality triangle showing the three categories and their factors]
4.4 Metrics for Software Quality Attributes

4.4.2 Metrics for Measuring Software Quality


• What metrics can we use for software quality attributes, e.g.:
a. Correctness
b. Maintainability
c. Usability

4.4.2 Metrics for Measuring Software Quality

a. Correctness
• The degree to which the software performs its required function
• Common measures:
• Defects per KLOC
• Defects over a standard period of time (e.g. one year)
Note: Defect refers to a verified lack of conformance to requirements.

4.4.2 Metrics for Measuring Software Quality

b. Maintainability
• The ease with which a program can be corrected if an error is
encountered, adapted if its environment changes, or enhanced if the
customer desires a change in requirements.
• No direct way to measure, so must use indirect measures:
o Mean-time-to-change (MTTC)
 The time it takes to analyze the change request, design an appropriate modification, implement
the change, test it, and distribute the change to all users.
 Programs that are maintainable will have a lower MTTC.
o Spoilage
 The cost to correct defects encountered after the software has been released to its end-users.
 When the ratio of spoilage to overall project cost is plotted as a function of time, a manager
can determine whether the overall maintainability of software produced by a software
development organization is improving.
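
As an illustration, a minimal sketch of computing MTTC from hypothetical change-request data:

```python
# Minimal sketch: mean-time-to-change (MTTC) over a set of completed
# change requests. The durations (in hours) are hypothetical data; each
# spans analysis, design, implementation, test and distribution.
change_durations_hours = [52, 37, 81, 44, 60]

mttc = sum(change_durations_hours) / len(change_durations_hours)
print(f"MTTC = {mttc:.1f} hours")  # a lower MTTC suggests better maintainability
```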

4.4.2 Metrics for Measuring Software Quality

c. Usability
• How easy it is to use the system
• Can be measured in terms of these characteristics:
o The time required to learn how to perform a task for the first time using the system
o The time required to become moderately efficient in the use of the system
o The net increase in productivity compared with the old process or system,
measured after a user has gained moderate efficiency
o A subjective measure of user attitude towards the system (using a questionnaire)

4.4 Metrics for Software Quality Attributes

4.4.3 Challenges in Measuring Software


• Measurement can be too complex
• Some metrics are too esoteric for most professionals to understand
• Some metrics violate basic intuitive notions of what high-quality software really is
• Many researchers have attempted to develop a single metric that provides a
comprehensive measure of software complexity
• Derived metrics might not be useful or suitable without our realizing it; in
other words, the metrics might not prove anything
• Collecting measures can take too much time
• It is too difficult to determine what to measure and to evaluate the measures
that are collected

4.4 Metrics for Software Quality Attributes

4.4.4 Characteristics of Good Metrics (1/2)


• Consistent in its use of units and dimensions
o The mathematical computation of the metric should use measures that do not
lead to bizarre combinations of units.
o E.g. use the same size measure (LOC and/or FP) consistently throughout all projects
• Programming language independent
o Metrics should be based on the analysis model, the design model, or the
structure of the program itself.
o Counter-example: LOC is programming-language dependent
• Simple and computable
o Relatively easy to learn how to derive the metric
o Its computation should not demand inordinate effort or time

4.4 Metrics for Software Quality Attributes

4.4.4 Characteristics of Good Metrics (2/2)


• Empirically and intuitively persuasive
o Metrics should match the practitioner’s notions about the product attribute under
consideration
• Consistent and objective
o Always yield results that are unambiguous
• An effective mechanism for high-quality feedback
o Should motivate team for software development improvement
o The metrics should lead to a higher-quality end product
