"Cannot reproduce" — the three most expensive words in software
TL;DR
"Cannot reproduce" isn't an answer — it's a symptom. A symptom of a report missing the data needed to reproduce the bug: exact URL, browser version, console logs, precise steps. Every round of tester-developer ping-pong costs both sides time — and the bug either waits or escapes to production.
The scenario is a classic. Tester files a bug: "Contact form doesn't send emails after clicking Submit." Developer opens the form, clicks Submit — works fine. Comment on the ticket: "Cannot reproduce. Closing." Tester reopens: "Still broken on my end." Developer: "What browser? What URL? Staging or production?" A day later the tester responds with the missing data. Developer tries again — now reproduces it. The bug existed the whole time. Lost: 2 days.
This isn't an isolated case. Research on debugging processes consistently shows that reproduction attempts consume a significant portion of developer time spent on bug fixes — in many organizations it's roughly half. A Cambridge University report from 2013 estimated that global debugging costs run into hundreds of billions of dollars annually.
Why "cannot reproduce" isn't an answer
"Cannot reproduce" sounds like a diagnosis, but it isn't one. It's information that the developer tried to reproduce the bug with the data they had — and failed. There could be several causes: different URL, different browser, different session state, different point in the application lifecycle. Cem Kaner, James Bach, and Bret Pettichord in "Lessons Learned in Software Testing" (2002) point out that the quality of the bug report determines whether the developer can fix it — not whether the bug "really exists."
The problem isn't the developer not wanting to look. The problem is the report not giving them enough data. The developer isn't sitting in the same browser as the tester. They don't see the same application state. They don't know the context in which the bug appeared. If the report says "form doesn't work" without information about URL, browser, console logs, and exact steps — the developer is left guessing in the dark.
"Cannot reproduce" is not an opinion about whether the bug exists. It's information that the report doesn't contain sufficient data to recreate the conditions under which the bug occurred.
What's missing from 80% of bug reports
A bug report has one job: give the developer enough data to reproduce the problem in their own environment. In practice, most reports fail at this. Michael Bolton in his blog series on DevelopSense writes about it directly: a bug report should tell a story — what you did, what you saw, what you expected.
Bug report elements — most commonly missing:
- Exact URL: "the contact page" is not the same as "https://staging.app.com/en/contact?ref=newsletter"
- Browser + version: the bug may only occur in Safari 17, but the report says "browser"
- Console logs: a JavaScript error that pinpoints the problem exactly — but the tester didn't open DevTools
- Exact reproduction steps: "fill out the form" vs "enter email without @, click Submit, wait 3 seconds"
- Session state: logged in / logged out, user role, test data
- Screenshot with context: screenshot of just the error message vs screenshot of the entire page with visible URL and form state
This isn't a question of tester competence. It's a question of cost. Manually adding each of these elements to a Jira report takes time. Checking the browser version — 30 seconds. Opening the console, copying logs — a minute. Describing steps from memory 5 minutes after finding the bug — risk of missing a detail. The full process: 10-15 minutes. Under time pressure, the tester takes shortcuts. And the report comes out incomplete.
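Every field in the list above is machine-readable at the moment the bug is found, which is why automatic capture is cheap. A minimal sketch of what that capture could look like (all names and the `env` snapshot are illustrative; a real browser extension would read these values from `window.location`, `navigator`, the viewport, and a buffered console log):

```typescript
// Sketch: assembling a bug report from an automatically captured
// environment snapshot. The field names and buildReport helper are
// hypothetical, not an actual Voice2Bug or browser API.

interface BugReport {
  url: string;
  userAgent: string;
  viewport: string;
  consoleLogs: string[];
  steps: string[];
}

interface EnvSnapshot {
  url: string;        // e.g. window.location.href
  userAgent: string;  // e.g. navigator.userAgent
  width: number;      // e.g. window.innerWidth
  height: number;     // e.g. window.innerHeight
  logs: string[];     // buffered console output
}

function buildReport(env: EnvSnapshot, steps: string[]): BugReport {
  return {
    url: env.url,
    userAgent: env.userAgent,
    viewport: `${env.width}x${env.height}`,
    consoleLogs: env.logs,
    steps,
  };
}

// Illustrative values only; in a browser these come from live APIs.
const report = buildReport(
  {
    url: "https://staging.app.com/en/contact?ref=newsletter",
    userAgent: "Mozilla/5.0 ... Safari/605.1.15",
    width: 1440,
    height: 900,
    logs: ["TypeError: Cannot read properties of undefined (reading 'send')"],
  },
  ["Enter email without @", "Click Submit", "Wait 3 seconds"]
);

console.log(JSON.stringify(report, null, 2));
```

The point of the sketch: everything except the reproduction steps is captured with zero tester effort, so the 10-15 minutes of manual collection collapses to seconds.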
Tester-developer ping-pong — what it costs
Every "cannot reproduce → reopen → additional data → reproduction attempt" cycle costs two people. The developer loses time trying to reproduce with incomplete data, writing a comment, waiting for a response. The tester loses time reading the comment, returning to the bug, collecting the missing data, updating the ticket.
| Ping-pong stage | Time cost (estimated) |
|---|---|
| Developer tries to reproduce (1st attempt) | 15-30 min |
| Developer writes "cannot reproduce" comment | 5 min |
| Tester returns to bug, collects missing data | 10-15 min |
| Developer tries to reproduce (2nd attempt) | 10-20 min |
| Wait time (ticket sits between rounds) | 4-24 hrs |
| Total cost of one ping-pong cycle | 40-70 min work + 4-24 hrs delay |
The CISQ report "The Cost of Poor Software Quality in the US" (2022) estimates that the costs of poor software quality in the United States alone are $2.41 trillion annually. A significant portion of those costs isn't the bugs themselves — it's inefficient processes for handling them. The "cannot reproduce" ping-pong is one of the most common such processes.
In a software house with 5 testers and 10 developers, if 5 tickets per week go through a "cannot reproduce" cycle, that's 200-350 minutes weekly on ping-pong alone: roughly 3 to 6 hours. Plus the delay in fixing — a bug that could have been fixed the day it was reported sits 1-2 days waiting for the next round.
The most expensive thing isn't the bug itself. It's the time between finding it and fixing it — and every ping-pong cycle extends that time.
What this means for your software house
"Cannot reproduce" is a data problem, not a people problem. The developer isn't being difficult — they lack data. The tester isn't being lazy — they didn't have time to collect it. The system forces a trade-off: either a fast report (and incomplete) or a complete report (and 10-15 minutes instead of testing).
The solution isn't training testers to "write better reports." The solution is making technical data — URL, browser, version, console logs, screen resolution — land in the report automatically, with zero extra effort from the tester.
Voice2Bug captures this data at the moment of reporting. The tester clicks the extension icon, describes what they see, takes a screenshot. Under a minute. The Jira report automatically includes: exact URL, browser with version, screen resolution, console logs, voice transcription processed into structured reproduction steps. The developer opens the ticket — and has everything they need to reproduce the bug on the first try.
The result: fewer "cannot reproduce" tickets. Less ping-pong. Less time wasted by both sides. Fewer bugs sitting in Jira for a day or two waiting for missing data. Bug found in the morning — fixed by lunch. Not because the developer is faster. Because they got complete data on the first pass.
What you can do
Today:
- Review tickets from the last sprint — how many had a "cannot reproduce" or "need more info" comment?
- Check how many of those required 2+ rounds of communication before the developer could start fixing
This week:
- Define the minimum data set a bug report MUST contain (URL, browser, steps, screenshot)
- Measure how long it takes to collect that data manually vs automatically
This month:
- Calculate the cost of ping-pong: (number of "cannot reproduce" per week) x (average cycle time) x (hourly rate of dev + tester)
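The formula above fits in a few lines of code. A back-of-the-envelope sketch (every input below is an illustrative assumption; plug in your own numbers):

```typescript
// Monthly cost of "cannot reproduce" ping-pong, per the formula:
// (cycles per week) x (avg cycle time) x (blended hourly rate).
// All inputs are illustrative assumptions, not measured data.

function pingPongCostPerMonth(
  cyclesPerWeek: number,     // tickets hitting a "cannot reproduce" round
  avgCycleMinutes: number,   // combined dev + tester work per cycle
  blendedHourlyRate: number  // average hourly rate of dev + tester
): number {
  const hoursPerWeek = (cyclesPerWeek * avgCycleMinutes) / 60;
  return hoursPerWeek * blendedHourlyRate * 4; // ~4 working weeks per month
}

// Example: 5 cycles/week, 55 min each (midpoint of the 40-70 min
// estimate from the table above), $60/hr blended rate.
const monthly = pingPongCostPerMonth(5, 55, 60);
console.log(`~$${monthly.toFixed(0)} per month`); // ~$1100 per month
```

Note this counts only hands-on work; the 4-24 hour delay per cycle shows up separately, as slipped fixes and bugs escaping to production.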
Calculate for your team
Enter your team data and see how much you'd save monthly and annually.
Open ROI calculator →
Sources
- Cem Kaner, James Bach, Bret Pettichord, "Lessons Learned in Software Testing", Wiley, 2002.
- Michael Bolton, DevelopSense blog, series on bug reporting.
- CISQ (Consortium for Information & Software Quality), "The Cost of Poor Software Quality in the US", 2022.
- University of Cambridge, Judge Business School, report on the financial cost of software bugs and debugging, 2013.
- Our estimates: ping-pong time and "cannot reproduce" cycle costs based on data from Voice2Bug early adopters and QA process observations in software houses with 10-30 people.
Free Voice2Bug trial
Enter your email — get 30 days of free access. No credit card. No obligations.
Ready to go? Start free trial