More metrics used for testing


Inspection and testing by themselves neither improve nor guarantee the quality of a product or service. Product quality can only be guaranteed and improved by continually improving the underlying processes used to build the product or deliver the service.

Fundamental Metrics - Size: Size is measured as Lines of Code (LOC), the number of function points (FP), or the complexity of a change.

Rules:

For measuring LOC

•Count statements that are executed or interpreted

•Count each statement only once

•Do not count blank lines and comments

For measuring Function Points:

•Use the IFPUG guide

For Estimating and Measuring complexity of change in maintenance requests:

•Define program complexity and change complexity. (For example, complexity can be rated as simple = 1, medium = 3, or complex = 5.)

Notes:

1. For development projects, size is estimated at the initiation, analysis and design stages, and measured after the code is constructed.

2. For conversion projects, the size of the source code is measured, and the size of the converted code is estimated, in the initiation and design phases. After the code is converted to the target platform, its size is measured.

3. TCS-QMS-103 (Software Estimation Guidelines) provides details on size estimation.

4. Size estimation and measurement need to be done at the functionality level and aggregated to obtain the application-level figure.

5. For individual maintenance requests use complexity as the measure of size

Fundamental Metrics - Effort: Effort is measured as the number of person-days spent.

To estimate:

Effort = Size / Productivity, where productivity is calculated as (a short sketch follows this list)

a. Number of LOC or FP/Day in development and conversion projects

b. Number of simple requests/day in maintenance requests

c. The productivity figure normally covers the complete life cycle.
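As a minimal sketch (in Python), effort can be derived from size and a historical productivity figure. The numbers below are illustrative assumptions, not values from the guideline:

# Hypothetical figures for illustration only.
size_kloc = 85           # estimated size in KLOC
productivity = 0.68      # assumed KLOC delivered per person-day (historical value)

effort_person_days = size_kloc / productivity
print(round(effort_person_days))   # ~125 person-days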

Estimation:

1. Effort is estimated at the initiation of project and recalculated at every phase.

2. In case of development projects, effort estimation is done at the functionality level and aggregated up to the application level.

3. In case of conversion projects, effort estimation is done at the module level and aggregated up to the application level.

4. The effort for the SDLC phases is calculated by apportioning the total effort. For example, the apportionment might be as follows (refer to the estimation guidelines for details; a short sketch follows this list):

Analysis = 25%; Design = 20%; Construction = 38%; System Testing = 17%

5. In case of maintenance projects, the effort estimation is done at maintenance request level
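A small sketch of the phase apportionment described in item 4, using the example percentages quoted above; the 100 person-day total effort is a hypothetical input, and real percentages come from the estimation guidelines:

# Example apportionment percentages from the text; actual values come from the estimation guidelines.
apportionment = {"Analysis": 0.25, "Design": 0.20, "Construction": 0.38, "System Testing": 0.17}

total_effort = 100  # person-days, hypothetical
phase_effort = {phase: total_effort * share for phase, share in apportionment.items()}
print(phase_effort)  # {'Analysis': 25.0, 'Design': 20.0, 'Construction': 38.0, 'System Testing': 17.0}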

Notes:

1. Productivity for a given technology is derived from earlier projects. Adjust the productivity figures if the phases or the life cycle differ from those of the original projects.

2. If productivity figures are not available for estimation, estimate effort using the COCOMO model.

3. Measure the actual effort spent in each phase, by functionality or by module, in the same way as the estimate was made. Effort spent on testing, reviews and rework is also recorded.

Fundamental Metrics - Defect:

A defect is any deviation from user expectations or from the applicable standards. Defects are identified through inspections, reviews and testing.

Defects are classified by severity:

•Severity 1: Fatal errors that cause the program to abort

•Severity 2: Fatal errors that result in erroneous outputs

•Severity 3: Errors which are not fatal

•Severity 4: Suggestions towards improvement

1. Do not count duplicate errors

A library routine may be used by many application programs. An error in the library routine may manifest as many errors across the application. If the error in the library routine is removed then all the errors manifested due to it will disappear. Count such errors only once.

2. Assign a first-level cause to the defect

The developer has to assign a first-level cause to each defect.

3. Identify the source of the defect

The developer has to find the source of the defect. The source may be the same phase in which the defect is detected or one of the earlier phases.

Improve Planning & Estimation - Size Deviation

Size deviation is calculated as

Actual Size – Estimated Size

--------------------------------- * 100

Estimated Size

In a development project, the size of the application to be developed was estimated at 85 KLOC. At the end of construction the application had 100 KLOC. Calculate the size deviation for the project.

The size deviation for the project = (100 -85)*100 /85 = 17.6%

1. In the above example, if the estimate had been revised to 105 KLOC at the design stage because of a customer change request, the size deviation would be calculated against the revised estimate: (100 - 105)*100/105 = -4.8%.

2. Size deviation is calculated for

a. development projects

b. conversion projects where the conversion involves rewriting of code

3. Size deviation is calculated at the end of each phase. For spiral/iterative development projects it is calculated at each delivery. The estimated size is the figure from the requirements phase, or from the point at which the last requirements change was baselined.
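A minimal helper for the size deviation formula above; the same percentage-deviation form is reused later for effort slippage and schedule slippage. The numbers reproduce the worked example:

def deviation_pct(actual, estimated):
    # (Actual - Estimated) / Estimated * 100
    return (actual - estimated) / estimated * 100

print(round(deviation_pct(100, 85), 1))    # 17.6  -> size deviation of 17.6%
print(round(deviation_pct(100, 105), 1))   # -4.8  -> against the revised estimate of 105 KLOC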

Improve Planning & Estimation - Effort Slippage

Effort Slippage is calculated as

Actual Effort – Estimated Effort

---------------------------------- * 100

Estimated Effort

In a development project the life cycle effort was estimated for a delivery module as 100 person days.

The effort spent in the analysis phase was 40 days, in the design phase 22 days, in construction 48 days, and in testing 15 days. Based on the standard apportionment of effort to life-cycle phases, calculate the effort slippage and interpret the data:

Effort slippage for the delivery module

= (40 + 22 + 48 + 15 – 100)*100/100 = (125 – 100)*100/100

= 25%

1. As part of project planning, effort slippage needs to be monitored continuously over the course of the project. Effort slippage may lead to cost overrun.

2. Effort slippage is formally computed as a metric at phase ends for development and conversion projects. In spiral and iterative development, if the cycle time for each delivery is short (less than 2 months), it is calculated for each delivery.

3. In case of maintenance projects, effort slippage can be computed at CR/SR level metric or for medium and complex maintenance requests.

4. In case of implementation projects, the effort slippage is calculated at phase end
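A short sketch reproducing the effort-slippage example above (phase efforts of 40, 22, 48 and 15 person-days against a 100 person-day estimate):

phase_effort = {"Analysis": 40, "Design": 22, "Construction": 48, "Testing": 15}
estimated_effort = 100

actual_effort = sum(phase_effort.values())                        # 125 person-days
slippage = (actual_effort - estimated_effort) / estimated_effort * 100
print(actual_effort, f"{slippage:.0f}%")                          # 125 25%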

Improve Planning & Estimation - Schedule Slippage

Schedule Slippage is calculated as

Actual elapsed time – planned elapsed time

-------------------------------------------------------- * 100

Planned elapsed time

1. The frequency of computation for schedule slippage is the same as that of effort slippage.

2. The scheduling process basically has the following input elements:

•The duration estimate, which equals effort / number of resources

•The work breakdown structure that details the tasks in SDLC phases

•The project risks

•Dependencies, where one or more tasks must complete before another task can start

•The resource ramp up plan

•Number of working days in week/month

3. For maintenance requests, schedule slippage can be calculated at the request level, specifically for medium and complex requests.

Deliver On Time - %End Timeliness

% End Timeliness is calculated as

Actual delivery date – re-estimated delivery date

------------------------------------------------------------------------ * 100

Re-estimated duration (cumulative for the life cycle)

Using the previous schedule-slippage example, calculate the % end timeliness for the A&D document and the tested code.

For the A&D Document

= ((3/05/2003 – 3/14/2003) / 73) * 100 = (-8/73)*100

= - 11%

For the Code Delivery

= ((5/21/2003 – 6/12/2003) / 163) * 100 = (-22/163)*100

= - 13.4%

1. The % end timeliness is computed for each deliverable as identified in the plan for all types of projects.

Even if a deliverable is not produced at every phase, calculating end timeliness at the end of each phase is recommended.

2. This metric is part of service quality. Missed deliveries can be derived as deliveries where end timeliness > 0%, provided delivery was committed as per the plan. Size deviation, effort slippage and schedule slippage are the lead process metrics for % end timeliness.
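A sketch of the % end timeliness calculation using Python dates. The delivery dates and re-estimated duration below are hypothetical stand-ins, not the figures from the original example:

from datetime import date

def end_timeliness_pct(actual_delivery, reestimated_delivery, reestimated_duration_days):
    # (Actual delivery date - re-estimated delivery date) / re-estimated duration * 100
    slip_days = (actual_delivery - reestimated_delivery).days
    return slip_days / reestimated_duration_days * 100

# Hypothetical dates: delivered 9 days early against a 90-day re-estimated duration.
print(round(end_timeliness_pct(date(2003, 6, 1), date(2003, 6, 10), 90), 1))  # -10.0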

Reduce Defects – Defect density

Defect Density is calculated as

Number of defects detected

--------------------------------------------

Size of the software, design or analysis document

1. While fixing a complex maintenance request, a review of the code fix and a subsequent regression test were conducted. Two defects were logged in the review and one in the regression testing. Calculate the defect density.

For defect fixing in maintenance projects, defect density is calculated over the whole fix cycle. The size of a complex request is 5 (in simple-request equivalents).

Defect Density = 3/5

= 0.6 defects/simple request

2. In a development project, 3 defects were reported during the design review. The design was developed for a module called “warranty” with an estimated size of 10 KLOC. Calculate the defect density.

For development projects, defect density is calculated in each phase at the level of a single review.

Defect density at design phase = 3/10

= 0.3 defects/KLOC

1. Cumulative defect density can be calculated by adding the individual values across the phases at delivery/module level.

2. Defect density is calculated for all types of projects.
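A minimal sketch of the defect density calculation, covering both the maintenance-request and the KLOC-based development examples above:

def defect_density(defects_found, size):
    # Defects detected / size (size in KLOC, FP or complexity points)
    return defects_found / size

print(defect_density(3, 5))    # 0.6  defects/simple request (complex maintenance request, size 5)
print(defect_density(3, 10))   # 0.3  defects/KLOC (design review of the 10 KLOC "warranty" module)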

Reduce Defects – Review Effectiveness

Review Effectiveness is calculated as

Total Number of errors found in reviews

------------------------------------------------------------------- * 100

Total number of errors found in reviews and testing

1. In a maintenance project, for a complex request the following reviews were conducted:

a. Impact analysis document review – 1 defect reported

b. Code fix review - 2 defects reported.

The following tests were conducted.

a. Unit testing on the code fix - 1 defect reported

b. Regression testing of the module - no defects reported.

Calculate review effectiveness.

Total number of defects in reviews = 3; total number of defects in testing=1

Review effectiveness = (3/4) *100 = 75%

1. Higher review effectiveness implies that more defects are removed in reviews. The cost of fixing a defect found in review is much lower than the cost of fixing a defect found in testing.

2. Review effectiveness is calculated for deliverables. In case of maintenance projects it is calculated at request level.

3. Review effectiveness is also applied in narrower contexts, such as the effectiveness of code review, calculated as below:

Total Number of errors found in code review

------------------------------------------------------------------------* 100

Total number of errors found in code review + unit testing
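A small sketch of review effectiveness, including the narrower code-review variant. The first call reproduces the maintenance example above; the counts in the second call are hypothetical:

def review_effectiveness(review_defects, test_defects):
    # Defects found in reviews / (defects found in reviews + defects found in testing) * 100
    return review_defects / (review_defects + test_defects) * 100

# Maintenance example: 1 (impact analysis review) + 2 (code fix review) = 3 review defects, 1 test defect.
print(review_effectiveness(3, 1))              # 75.0

# Narrow context: code review effectiveness against unit testing only (hypothetical counts).
print(round(review_effectiveness(2, 1), 1))    # 66.7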

Reduce Defects – Phase Containment Effectiveness

Phase Containment Effectiveness is calculated as

Number of phase i defects found during phase i review/test

---------------------------------------------------------------------------- * 100

Number of defects with source phase i found in all phases


PCE for design =

(2/4)*100 = 50%

PCE for coding =

(2/3)*100 = 66.7%

1. Phase containment effectiveness helps identify the source phases in which larger numbers of defects are injected, which in turn helps target the right phase for defect reduction.

2. It also shows how effective the review/testing done at the end of each phase is at identifying and containing defects in that same phase. The review/testing techniques can be improved by analyzing the defects.

3. Phase containment effectiveness is calculated for development and conversion projects.

4. This metric is first computed at the end of each phase (except the project start-up phase) and updated at the end of each subsequent phase.
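The defect data behind the worked PCE example is not reproduced in the text, so the sketch below uses assumed counts that are consistent with the quoted results (4 design-sourced defects, 2 of them caught in design; 3 coding-sourced defects, 2 caught in coding):

# Each record is (phase where the defect was injected, phase where it was found) - assumed data.
defect_log = [
    ("design", "design"), ("design", "design"), ("design", "coding"), ("design", "system testing"),
    ("coding", "coding"), ("coding", "coding"), ("coding", "system testing"),
]

def pce(phase):
    injected = sum(1 for source, _ in defect_log if source == phase)
    caught_in_phase = sum(1 for source, found in defect_log if source == phase and found == phase)
    return caught_in_phase / injected * 100

print(round(pce("design")))   # 50
print(round(pce("coding")))   # 67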

Reduce Defects – Total Defect Containment Effectiveness

Total Defect Containment Effectiveness is calculated as

Number of pre acceptance defects

-------------------------------------------------------------------------------------* 100

Number of pre acceptance defects + number of acceptance test defects

In the previous phase containment effectiveness example, 2 defects were found in the acceptance phase. Assume that only a single delivery is made after system testing.

Calculate the total defect containment effectiveness.

Total pre-acceptance defects = 7

Acceptance defects = 2

TDCE = (7/9)*100 = 78%

1. TDCE is calculated after acceptance testing is completed.

2. The pre acceptance defects include EQA defects.

3. Acceptance defects include all client-found defects, including those from intermediate reviews.

4. If the onsite team conducts acceptance testing on behalf of the client, or as an additional level, then the defects found during that testing are included as acceptance defects.

5. This is calculated for Development and Conversion projects.
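A one-line sketch of TDCE using the counts from the example above (7 pre-acceptance defects, 2 acceptance-test defects):

pre_acceptance_defects = 7   # defects found before acceptance (reviews, testing, EQA)
acceptance_defects = 2       # defects found in acceptance testing / by the client

tdce = pre_acceptance_defects / (pre_acceptance_defects + acceptance_defects) * 100
print(f"{tdce:.0f}%")        # 78%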

Meet Delivery Quality – Acceptance Defect Density

Acceptance Defect Density is calculated as

Number of acceptance test/review defects

----------------------------------------------------

Size of the deliverable

1. In a development project, a design document is delivered. The design document is for developing a new module of size 10 FP. The customer reviewed it and logged 2 defects to be corrected in the design document. Calculate the acceptance defect density for the design document.

Acceptance defect density = 2/10 = 0.2 defects/FP

2. In a conversion project, the converted application was delivered to the customer. The original application had a size of 100 FP while the converted application had a size of 105 FP. The customer performed acceptance testing on the converted application and logged 10 defects. What is the acceptance test defect density?

Acceptance defect density = 10/105 = 0.095 defects/FP

1. Acceptance defect density, whether or not explicitly stated by the customer, is the primary measure of product quality.

2. All the reviews and testing done along the life cycle aim to reduce the acceptance defect density to nearly zero.

3. All the metrics collected on reviews and testing are lead metrics for acceptance defect density.

4. Acceptance defect density is calculated for development and conversion projects
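A sketch of acceptance defect density for the two examples above (a 10 FP design document and a 105 FP converted application):

def acceptance_defect_density(acceptance_defects, deliverable_size_fp):
    # Acceptance test/review defects / size of the deliverable (defects per FP)
    return acceptance_defects / deliverable_size_fp

print(acceptance_defect_density(2, 10))               # 0.2   defects/FP (design document review)
print(round(acceptance_defect_density(10, 105), 3))   # 0.095 defects/FP (converted application)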

Meet Delivery Quality – % Bad Fixes

% Bad Fixes is calculated as

Number of improperly fixed maintenance requests

-------------------------------------------------------------------* 100

Total number of fixes delivered in the last six months

1. In a maintenance project, 40 requests were raised in the last six months. All the requests were serviced after making corrective changes in the code. In that period 2 bad fixes were reported. Calculate the %Bad Fixes for this month.


Number of requests = 40

Number of Bad fixes = 2

% Bad Fixes = (2/40)*100

= 5%

1. In a maintenance project, % bad fixes, whether or not explicitly stated by the customer, is the primary measure of product quality.

2. All the reviews and testing done along the life cycle aim to reduce % bad fixes to nearly zero.

3. All the metrics collected on reviews and testing are lead metrics for % bad fixes.

4. % bad fixes is calculated as a cumulative value over a fixed period. The period is a moving window ending at the current month.
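A small sketch of % bad fixes over a six-month moving window. The month-by-month breakdown is hypothetical; only the totals (40 fixes, 2 bad fixes) come from the example above:

# Hypothetical per-month history over the last six months: (fixes delivered, bad fixes reported).
history = [(5, 0), (8, 1), (6, 0), (7, 0), (9, 1), (5, 0)]   # totals: 40 fixes, 2 bad fixes

fixes = sum(f for f, _ in history)
bad_fixes = sum(b for _, b in history)
print(f"{bad_fixes / fixes * 100:.0f}%")   # 5%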

Meet Service Quality –Response Time Index

Response Time Index is calculated as

Actual Mean time of closure for maintenance /service requests

------------------------------------------------------------------------- * 100

Estimated mean time of closure for maintenance/service requests

Where Estimated /Actual mean time of closure is

Σ (estimated /actual time of closure)

--------------------------------------------------

Number of maintenance/production requests

A maintenance project had an SLA of 8 hours to complete level 2 support requests. In the last month the project had 12 level 2 support calls and took 100 hours in total to complete them. Calculate the RTI.

Estimated /Agreed Mean time of closure = 8 hours

Actual mean time of closure = 100/12 = 8.33 hours

RTI = 8.33/8 = 1.04

1. The response time index is calculated to measure the service quality in maintenance projects

2. An RTI value of 1 indicates that the SLA/estimated time to service is met on average.
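A minimal sketch of the response time index using the figures from the example (12 level 2 calls, 100 hours in total, an 8-hour SLA):

agreed_mean_closure_hours = 8      # SLA for level 2 support requests
calls_closed = 12
total_closure_hours = 100          # total actual closure time across all calls

actual_mean_closure = total_closure_hours / calls_closed      # 8.33 hours
rti = actual_mean_closure / agreed_mean_closure_hours
print(round(rti, 2))                                          # 1.04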

Meet Service Quality –BMI

Backlog Management Index (BMI) is calculated as follows (if no customer SLA exists)

Number of requests closed during the month

------------------------------------------------------------------------- * 100

Opening Balance for the month + Number of requests opened during the month

If a customer SLA exists, BMI is calculated as

Number of requests closed during the month

-------------------------------------------------------------------------- * 100

Opening Balance for the month + Number of requests scheduled to be closed during the month + Number of early closures in the month

At the beginning of a month, a maintenance project had 12 requests either in the process of being closed or yet to be analyzed. The project received 10 more requests during that month. During the month the project team closed 18 requests.

Calculate the BMI for the project for the month. (No SLA defined)

BMI = 18 / (12 + 10) * 100 = 81.8%

1. BMI and its trend indicate how effectively the queue of requests is managed and whether resources are adequate.

2. If the customer maintains the queue, the BMI is not calculated by TCS.
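A sketch of BMI without a customer SLA, using the example figures (opening balance 12, 10 new requests, 18 closed); the SLA variant only changes the denominator as described above:

opening_balance = 12      # requests carried over into the month
opened_this_month = 10    # new requests received during the month
closed_this_month = 18    # requests closed during the month

bmi = closed_this_month / (opening_balance + opened_this_month) * 100
print(f"{bmi:.1f}%")      # 81.8%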

Meet Service Quality –%Compliance to SLAs

% Compliance to SLAs for severity X is calculated as

Number of problems of severity X closed within the time frame

------------------------------------------------------------------------- * 100

Number of problems of severity X to be completed during that month

Example of severity 1 definition is as follows:

Major System Outage or Major Applications Failure – impacts entire facilities or the entire organization, for example the inability to enter orders in an order management system. The SLA definition is: time for the support group to accept and begin working the case is 15 minutes from case open, 24x7. Five severity 1 calls were received in the last month, and they were accepted and work started in 12, 6, 7, 10 and 22 minutes respectively.

Calculate the % Compliance to SLA for severity 1.

% compliance = 4/5 *100 = 80%

1. SLA compliance is calculated only when the customer has agreed to the SLA definitions for the various severities.

2. SLA compliance is also calculated for internal services such as IDM and QA services.
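A sketch of % compliance to SLA for the severity 1 example above (five calls accepted in 12, 6, 7, 10 and 22 minutes against a 15-minute target):

sla_minutes = 15
acceptance_times = [12, 6, 7, 10, 22]   # minutes from case open to work started

within_sla = sum(1 for t in acceptance_times if t <= sla_minutes)
compliance = within_sla / len(acceptance_times) * 100
print(f"{compliance:.0f}%")             # 80%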

Manage Costs – COQ, rework effort

Cost of quality is calculated as

Preventive and appraisal cost spent for ensuring quality + cost of failure due to poor quality

--------------------------------------------------------------------------------------------- * 100

Total cost for developing the software

Rework effort is calculated as

Effort spent on fixing and re-testing/re-reviewing the software defects

--------------------------------------------------------------------- * 100

Total effort spent for developing the software

1. The Preventive cost includes the cost for training, developing methods/procedures and Defect Prevention activities

2. The appraisal cost includes the cost of inspections, reviews, testing and associated software

3. The failure cost includes the cost of rework due to failures and defects
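A sketch of the cost of quality and rework effort percentages; all cost and effort figures below are hypothetical:

# Hypothetical cost breakdown for one project (same currency units throughout).
prevention_cost = 20      # training, methods/procedures, defect prevention activities
appraisal_cost = 60       # inspections, reviews, testing and associated software
failure_cost = 40         # rework due to failures and defects
total_cost = 500          # total cost of developing the software

coq = (prevention_cost + appraisal_cost + failure_cost) / total_cost * 100
print(f"COQ = {coq:.0f}%")                                       # COQ = 24%

rework_effort = 30        # person-days spent fixing and re-testing/re-reviewing defects
total_effort = 400        # total person-days spent developing the software
print(f"Rework = {rework_effort / total_effort * 100:.1f}%")     # Rework = 7.5%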

What are metrics?

A: A metric is a measurement. While it's easy to count things that are easy to count, and to divide the counts by other things that you count, it's harder to decide what the counts and ratios mean. What are we trying to measure, and what model and what evidence lead us to believe that the counts measure the attribute we claim to be trying to measure?

Without clear thinking about validity, the numbers are more of a dangerous distraction than a useful tool. Rather than blindly using a well-known metric, decide what goal you are trying to achieve and find out how best to measure to achieve that goal.

METRICS:

A) Effort deviation assigned to tester

Formula: ((Actual effort - Estimated effort) for testing / Estimated effort for testing) * 100

Source: Timesheets, Project progress report

Frequency: After end of requirements

Units: %

B) Defects assigned to tester (in %)

Formula: (Total defects assignable to testing / Total defects) * 100

Source: Review notes, Defect database

Frequency: After final testing for Change request/FDS

Units: %

C) Fatal defects assigned to tester (in %)

Formula: (Fatal defects assignable to testing / Total defects) * 100

Source: Review notes, Defect database

Frequency: After final testing for Change request/FDS

Units: %

D) Major defects assigned to tester (in %)

Formula: (Major defects assignable to testing / Total defects) * 100

Source: Review notes, Defect database

Frequency: After final testing for Change request/FDS

Units: %

E) Rework effort assigned to tester (in %)

Formula: (Rework effort assignable to testing / Total actual effort) * 100

Source: Timesheets, Project progress report

Frequency: After final testing for Change request/FDS

Units: %

Metrics Used In Testing



In this tutorial you will learn about metrics used in testing and the following product quality measures:

1. Customer satisfaction index,

2. Delivered defect quantities,

3. Responsiveness (turnaround time) to users,

4. Product volatility,

5. Defect ratios,

6. Defect removal efficiency,

7. Complexity of delivered product,

8. Test coverage,

9. Cost of defects,

10. Costs of quality activities,

11. Re-work,

12. Reliability and Metrics for Evaluating Application System Testing.

The Product Quality Measures:

1. Customer satisfaction index

This index is surveyed before and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:

· Number of system enhancement requests per year

· Number of maintenance fix requests per year

· User friendliness: call volume to customer service hotline

· User friendliness: training time per new user

· Number of product recalls or fix releases (software vendors)

· Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities

These are normalized per function point (or per LOC), at product delivery (first 3 months or first year of operation) or ongoing (per year of operation), broken down by level of severity and by category or cause, e.g. requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users

· Turnaround time for defect fixes, by level of severity

· Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility

· Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios

· Defects found after product delivery per function point.

· Defects found after product delivery per LOC

· Ratio of pre-delivery defects to annual post-delivery defects

· Defects per function point of the system modifications

6. Defect removal efficiency

· Number of post-release defects (found by clients in field operation), categorized by level of severity

· Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects

· All defects include defects found internally plus externally (by customers) in the first year after product delivery

7. Complexity of delivered product

· McCabe's cyclomatic complexity counts across the system

· Halstead’s measure

· Card's design complexity measures

· Predicted defects and maintenance costs, based on complexity measures

8. Test coverage

· Breadth of functional coverage

· Percentage of paths, branches or conditions that were actually tested

· Percentage by criticality level: perceived level of risk of paths

· The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects

· Business losses per defect that occurs during operation

· Business interruption costs; costs of work-arounds

· Lost sales and lost goodwill

· Litigation costs resulting from defects

· Annual maintenance cost (per function point)

· Annual operating cost (per function point)

· Measurable damage to your boss's career

10. Costs of quality activities

· Costs of reviews, inspections and preventive measures

· Costs of test planning and preparation

· Costs of test execution, defect tracking, version and change control

· Costs of diagnostics, debugging and fixing

· Costs of tools and tool support

· Costs of test case library maintenance

· Costs of testing & QA education associated with the product

· Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work

· Re-work effort (hours, as a percentage of the original coding hours)

· Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)

· Re-worked software components (as a percentage of the total delivered components)

12. Reliability

· Availability (percentage of time a system is available, versus the time the system is needed to be available)

· Mean time between failure (MTBF).

· Mean time to repair (MTTR)

· Reliability ratio (MTBF / MTTR)

· Number of product recalls or fix releases

· Number of production re-runs as a ratio of production runs

Metrics for Evaluating Application System Testing:

Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / Total size of the system (KLOC = thousand lines of code, FP = function points)

Number of tests per unit size = Number of test cases per KLOC/FP (LOC represents Lines of Code).

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity: Test Planning Productivity = No of Test cases designed / Actual Effort for Design and Documentation

Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
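A short sketch computing a few of the metrics from the table above (test coverage, cost to locate a defect, and quality of testing); all input values are hypothetical:

# Hypothetical project figures for illustration only.
units_tested_kloc = 45
system_size_kloc = 60
cost_of_testing = 12000
defects_located_in_testing = 80
acceptance_defects_after_delivery = 20

test_coverage = units_tested_kloc / system_size_kloc
cost_to_locate_defect = cost_of_testing / defects_located_in_testing
quality_of_testing = defects_located_in_testing / (defects_located_in_testing + acceptance_defects_after_delivery) * 100

print(f"{test_coverage:.0%}")          # 75%
print(cost_to_locate_defect)           # 150.0 (cost units per defect located)
print(f"{quality_of_testing:.0f}%")    # 80%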
