
5 QA Efficiency Metrics You Should Be Tracking

February 1, 2026
Dashboard showing 5 QA efficiency metrics — Bug Escape Rate, MTTR, Completeness Score, Reopen Rate, Tester Velocity

TL;DR

5 metrics that reveal the actual health of your QA team: Bug Escape Rate (how many bugs slip to production), Mean Time to Report (how fast a report gets created), Report Completeness Score (are reports actually complete), Bug Reopen Rate (how many tickets bounce back), and Tester Velocity (how much work a tester processes per sprint). Each one comes with a definition, formula, benchmark, and red flag.

Robert Austin in "Measuring and Managing Performance in Organizations" (Dorset House, 1996) warns: metrics that measure individual performance lead to gaming the system. A tester who knows they're measured by number of reports will file trivial bugs all day long. A tester measured by ticket close time will close tickets prematurely.

That's why the 5 metrics below measure the process, not the people. They assess the quality of the reporting system, not individual tester productivity. This is a critical distinction: good QA metrics show where the process is breaking — not who to blame.

1. Bug Escape Rate — how many bugs slip to production

Definition:

The percentage of defects found in production relative to all discovered defects (production + internal testing). Lower is better.

Formula:

Bug Escape Rate = (production bugs / total bugs) x 100%

Benchmark:

  • Good: below 10%
  • Average: 10-20%
  • Poor: above 20%

How to measure: Count bugs reported by customers/users in the last month. Add bugs found internally by QA during the same period. Divide production bugs by the total. You need two sources: the internal tracker and the customer-facing support system (helpdesk, monitoring, support tickets).
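The calculation itself is trivial once you have both counts. A minimal sketch with hypothetical numbers standing in for your tracker and helpdesk exports:

```python
# Hypothetical monthly counts from the two sources described above:
# the internal tracker (QA findings) and the customer-facing support system.
production_bugs = 14   # reported by customers / monitoring last month
internal_bugs = 91     # found internally by QA in the same period

total_bugs = production_bugs + internal_bugs
escape_rate = production_bugs / total_bugs * 100

print(f"Bug Escape Rate: {escape_rate:.1f}%")
```

With these example numbers the rate lands at about 13%, in the "average" band from the benchmark above.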

Why it matters: Capers Jones in "Applied Software Measurement" (McGraw-Hill, 2008) showed that the cost of fixing a defect grows several times at each subsequent phase of the software lifecycle. A bug caught during testing costs hours. A bug in production costs days — debugging, hotfix, deployment, client communication, potential SLA penalties.

Red flag: Bug Escape Rate above 25% means one in four bugs passes through QA undetected. Check: do your tests cover critical paths? Do testers have time for exploratory testing, or are they just "clicking through a checklist"?

2. Mean Time to Report — how long it takes to create a bug report

Definition:

Average time from the moment a bug is found to the moment a complete ticket exists in the tracker. Includes data collection, writing the description, and attaching evidence.

Formula:

MTTR = total reporting time / number of reports

Benchmark:

  • Manual reporting: 10-15 minutes (Capgemini, World Quality Report 2024)
  • With automated tooling: under one minute
  • Good target: under 3 minutes

How to measure: For one week, ask testers to time themselves from the moment they find a bug to when they click "Create" in the tracker. Collect the data, calculate the mean and median. The median is more important — it is robust to outliers (e.g., one bug requiring a 30-minute description shouldn't skew the picture).
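The mean-vs-median point is easy to see in code. A sketch with one week of hypothetical self-timed durations, including a single 30-minute outlier:

```python
from statistics import mean, median

# One week of self-timed reporting durations, in minutes (hypothetical data).
# Note the single 30-minute outlier: it pulls the mean up but barely moves the median.
report_times = [8, 11, 9, 12, 10, 30, 9, 11, 10, 12]

print(f"Mean:   {mean(report_times):.1f} min")    # skewed by the outlier
print(f"Median: {median(report_times):.1f} min")  # closer to the typical report
```

Here the mean comes out around 12 minutes while the median is about 10.5, which is why the median is the better number to track.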

Why it matters: At 8 bugs per day and 12 minutes per report, one tester spends 96 minutes daily on reporting alone. That's a fifth of an 8-hour workday. For a team of 4 testers: 384 minutes = over 6 hours daily. At a blended rate of $50/hour, that's about $320/day, or over $6,400/month (20 working days) — spent on logistics, not testing.

Red flag: MTTR above 15 minutes means the tester is fighting the tool or the process. Check: do they have to manually gather technical data? How many clicks does it take to create a ticket? How many form fields are mandatory?

3. Report Completeness Score — do reports have everything the developer needs

Definition:

Percentage of bug reports that contain all required fields: description, reproduction steps, expected vs actual result, screenshot/recording, technical data (URL, browser, console logs).

Formula:

Completeness = (complete reports / total reports) x 100%

Benchmark:

  • Good: above 90%
  • Average: 70-90%
  • Poor: below 70%

How to measure: Define a completeness checklist (e.g., 6 fields: title, description, reproduction steps, expected result, screenshot, technical data). Review a random sample of 30-50 recent tickets. Count how many have all 6 fields filled. Divide by the number of tickets reviewed.
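The review can be scripted if your tracker exports tickets as structured data. A sketch, assuming a hypothetical export where each ticket is a dict and the field names match your 6-item checklist:

```python
# The 6-field completeness checklist; field names here are illustrative,
# map them to whatever your tracker actually exports.
REQUIRED = {"title", "description", "steps", "expected", "screenshot", "tech_data"}

def is_complete(ticket: dict) -> bool:
    # A field counts only if it is present AND non-empty.
    return all(ticket.get(field) for field in REQUIRED)

# Hypothetical sample of reviewed tickets (in practice: 30-50 recent ones).
sample = [
    {"title": "Login fails", "description": "...", "steps": "...",
     "expected": "...", "screenshot": "...", "tech_data": "..."},
    {"title": "Broken link", "description": "...", "steps": "",   # missing repro steps
     "expected": "...", "screenshot": "...", "tech_data": "..."},
]

complete = sum(is_complete(t) for t in sample)
score = complete / len(sample) * 100
print(f"Completeness Score: {score:.0f}%")
```

The same loop also tells you *which* fields are most often empty — useful later when you pick a fix (template, tool, or automation).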

Why it matters: Capers Jones in "Software Defect Origins and Removal Methods" (2012) found that incomplete reports account for 15-25% of ticket reopens. The developer gets a ticket without a URL, without logs, without repro steps — and either has to figure it out themselves (30-60 minutes) or bounces it back with a question (another 24 hours of delay).

Red flag: Completeness Score below 60% means developers regularly receive incomplete reports. Check: does the tracker template enforce key fields? Do testers have a tool that automatically captures technical data?

4. Bug Reopen Rate — how many tickets bounce back

Definition:

Percentage of closed tickets that return to open/in-progress status. Measures two problems simultaneously: quality of developer fixes and quality of tester reports.

Formula:

Reopen Rate = (reopened tickets / closed tickets) x 100%

Benchmark:

  • Good: below 5%
  • Average: 5-15% (Capers Jones, 2012)
  • Poor: above 15%

How to measure: In Jira (or any tracker) check how many tickets in the last month went from Done/Closed back to Open/In Progress/To Do. Divide by the total number of closed tickets in that period. Multiply by 100.
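If you can export each ticket's status history (Jira exposes this via the issue changelog), the count reduces to detecting any Done-to-open transition. A sketch over hypothetical exported histories:

```python
# Each ticket's status history as an ordered list of statuses (hypothetical export;
# in Jira this data comes from the issue changelog).
histories = [
    ["Open", "In Progress", "Done"],
    ["Open", "In Progress", "Done", "Open", "In Progress", "Done"],  # reopened
    ["Open", "Done"],
    ["Open", "In Progress", "Done", "To Do", "Done"],                # reopened
]

DONE = {"Done", "Closed"}
REOPEN = {"Open", "In Progress", "To Do"}

def was_reopened(history):
    # A reopen is any transition from a Done-like status back to an open-like one.
    return any(a in DONE and b in REOPEN for a, b in zip(history, history[1:]))

closed = sum(1 for h in histories if h[-1] in DONE)
reopened = sum(was_reopened(h) for h in histories)
reopen_rate = reopened / closed * 100
print(f"Reopen Rate: {reopen_rate:.0f}%")
```

Counting transitions (rather than just final status) matters: a ticket that bounced back and was fixed again still ends in Done, but it still cost you a reopen.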

Why it matters: Every reopen is double work. The developer returns to code they wrote a week ago (30+ minutes to regain context). The tester re-verifies the fix. With 640 tickets per month (4 testers x 8 bugs x 20 days) and a 15% reopen rate, that's 96 reopens. Capers Jones estimates the cost of one reopen at 30-60 minutes of work (tester + developer). That's 48-96 wasted hours per month.

Red flag: Reopen Rate above 20% signals a systematic problem. Investigate the root cause: are tickets bouncing because of incomplete reports (missing repro steps) or because of sloppy fixes? These are entirely different problems with different solutions.

5. Tester Velocity — how much work a tester processes per sprint

Definition:

How many test cases (or QA story points) one tester executes per sprint. This is a throughput metric — it shows whether the QA team can keep up with the development team.

Formula:

Velocity = executed test cases / sprint (per tester)

Benchmark:

  • Depends on the project — trend matters, not absolute value
  • A 20%+ drop in velocity = alarm signal
  • Stable or upward trend = process is working

How to measure: At the end of each sprint, count how many test cases each tester executed (executed, not written). Track the trend over 5-6 sprints. Don't compare testers against each other — compare each tester against their own history. A sudden drop in one person's velocity is a signal: new project? blocked? tool change?
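The "compare against their own history" rule is a one-liner to automate. A sketch with a hypothetical 6-sprint history for one tester, flagging the 20%+ drop mentioned in the benchmark:

```python
# Executed test cases per sprint for ONE tester (hypothetical 6-sprint history).
velocity = [42, 45, 44, 46, 43, 33]

# Baseline = the tester's own history (mean of previous sprints),
# never a comparison against other testers.
baseline = sum(velocity[:-1]) / len(velocity[:-1])
drop_pct = (baseline - velocity[-1]) / baseline * 100

if drop_pct > 20:
    print(f"ALERT: velocity down {drop_pct:.0f}% vs personal baseline of {baseline:.0f}")
```

In this example the last sprint is 25% below the tester's baseline of 44, so the alert fires — the cue to ask about blockers, not to assign blame.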

Why it matters: Capgemini's "World Quality Report 2024" reports that testing accounts for an average of 23% of IT project budgets. If velocity drops while the backlog grows — you have a bottleneck in QA. But the root cause may not be the testers: a slow test environment, missing test data, or too much time spent on reporting logistics — all of these reduce velocity without being the tester's fault.

Red flag: If velocity drops while MTTR rises simultaneously — testers are spending more and more time on reporting at the expense of actual testing. This is the most common cause: not a lack of skill, but bad tools and processes.

Summary: 5 metrics on one dashboard

| Metric | Good | Average | Red flag |
|---|---|---|---|
| Bug Escape Rate | < 10% | 10-20% | > 25% |
| Mean Time to Report | < 3 min | 3-10 min | > 15 min |
| Completeness Score | > 90% | 70-90% | < 60% |
| Bug Reopen Rate | < 5% | 5-15% | > 20% |
| Tester Velocity | Stable trend | Slight fluctuation | Drop > 20% |
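The thresholds can be wired into a simple dashboard check. A sketch covering the three "lower is better" metrics from the table (Completeness Score and Velocity have inverted or trend-based thresholds and are omitted here for brevity):

```python
def band(metric: str, value: float) -> str:
    # Classifies a "lower is better" metric value against the article's benchmarks:
    # below the first threshold = good, above the second = red flag, else average.
    thresholds = {
        "escape_rate": (10, 25),   # percent
        "mttr_min":    (3, 15),    # minutes
        "reopen_rate": (5, 20),    # percent
    }
    good_below, red_above = thresholds[metric]
    if value < good_below:
        return "good"
    if value > red_above:
        return "red flag"
    return "average"

print(band("escape_rate", 13))  # average
print(band("mttr_min", 18))     # red flag
```

A check like this is enough to turn the table into automated alerts instead of a monthly manual review.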

How to start — don't roll out 5 metrics at once

Robert Austin (1996) warns against "metric overload." Start with two metrics that give you the biggest insight into current problems:

Rollout plan

Week 1-2: Bug Reopen Rate + MTTR

  • Reopen Rate — pull from Jira (1 hour of work)
  • MTTR — testers time themselves with a stopwatch for 5 days
  • At the end: is the problem in reports (incomplete) or in fixes (sloppy)?

Week 3-4: Report Completeness Score

  • Review 50 recent tickets against your 6-field checklist
  • Identify the most commonly missing elements
  • Implement a solution: template, tool, or automation

Month 2+: Bug Escape Rate + Tester Velocity

  • Bug Escape Rate requires production data — you need monitoring or a ticketing system
  • Tester Velocity only makes sense after 3-4 sprints (you need a trend, not a data point)

What not to measure — metrics that cause harm

Some metrics seem reasonable on paper but lead to dysfunctional behavior in practice:

  • Bugs per tester: Leads to reporting trivialities. A tester filing 20 bugs a day is not automatically better than one who files 5 critical ones.
  • Ticket close time per developer: Leads to premature closing, reduces fix quality, increases reopen rate.
  • Test automation percentage: Without context it's an empty number. 90% automation on the wrong layer (unit tests checking getters/setters) is worse than 40% on critical user paths.

Rule of thumb: if a metric measures a person rather than a process — sooner or later someone will game the system. Measure the effectiveness of the reporting process, not the productivity of individual testers.

What you can do

Today (30 minutes):

  • Open Jira and calculate your Reopen Rate from the last month — how many tickets went from Done back to Open?
  • Review the 10 most recent tickets for completeness — how many have URL, logs, screenshot?

This week:

  • Ask testers to time their MTTR for 5 days
  • Define a report completeness checklist (6 fields minimum)

This month:

  • Build a dashboard with Reopen Rate + MTTR + Completeness Score
  • Compare MTTR before and after deploying an automated reporting tool

Calculate for your team

Enter your team data and see how much you'd save by optimizing reporting time.

Open ROI calculator →

Sources

  1. Capgemini, "World Quality Report 2024" — data on QA's share of IT budgets and time allocation in testing teams.
  2. Capers Jones, "Applied Software Measurement", McGraw-Hill, 2008 — defect cost across software lifecycle phases.
  3. Capers Jones, "Software Defect Origins and Removal Methods", 2012 — reopen rate benchmarks and impact of report completeness on repair costs. Link
  4. Robert Austin, "Measuring and Managing Performance in Organizations", Dorset House, 1996 — risks of individual metrics and measurement gaming.

Free Voice2Bug trial

Enter your email — get 30 days of free access. No obligations.
