A test coverage figure of 100% suggests that every line of code is exercised by at least one test, which should theoretically leave no room for bugs, right? In reality, coverage only tells you that code was executed, not that it was meaningfully verified. The issue isn’t with achieving a certain percentage of coverage; it’s with the obsession over reaching that number. This is especially evident today, as test automation becomes more popular and more widely adopted by organizations. In this article, we’ll explore why the pursuit of high test coverage can sometimes do more harm than good and why a more thoughtful approach to testing is essential.
The Problem with Chasing Test Coverage
The emphasis on meeting a specific test coverage percentage can shift the focus from meaningful testing to simply hitting a metric. This pursuit often leads to the creation of tests that add little value, such as those targeting trivial code segments like getters and setters—code that poses minimal risk to the system. While these tests may inflate coverage numbers, they fail to address the actual vulnerabilities within a system.
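For instance, here’s the kind of test that boosts the coverage number without reducing risk. The class and test below are hypothetical (shown in Python with a pytest-style test) and are only meant to illustrate the pattern: every line of the class gets executed, but nothing that could realistically break is being checked.

```python
# A trivial data holder: plain getters and setters with no logic to get wrong.
class Account:
    def __init__(self):
        self._owner = ""

    def get_owner(self):
        return self._owner

    def set_owner(self, owner):
        self._owner = owner


# This test pushes the class to 100% line coverage, but it only restates
# what the code already says. It cannot catch a meaningful defect, yet it
# still has to be maintained whenever the class changes.
def test_account_owner_roundtrip():
    account = Account()
    account.set_owner("Alice")
    assert account.get_owner() == "Alice"
```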

The danger here is that critical areas, those with the highest risk, may receive inadequate attention. In systems where the stakes are high—such as in aviation, medical devices, or financial software—focusing on coverage rather than risk can have severe consequences. A system’s failure in these domains could lead to catastrophic outcomes, including loss of life or significant financial loss.
The Hidden Cost of Excessive Tests
Another issue with overemphasizing test coverage is the maintenance burden it creates. Automated tests are still code, and like all code, they require upkeep. The more tests there are, the more challenging it becomes to modify the codebase. Every change risks breaking existing tests, even when the changes are intended to improve or refactor the system. This leads to a situation where the cost of maintaining tests can outweigh their benefits, especially if those tests do not target high-risk areas.
Therefore, the goal should not be to write as many tests as possible but to write the right amount of tests based on risk. A well-maintained suite of targeted, high-impact tests will be more valuable than a sprawling collection of low-impact ones.
Understanding Risk-Based Testing
One of the foundational concepts in professional testing is the idea that exhaustive testing is impossible. Given finite resources and time, not every part of a system can or should be tested equally. This is where risk-based testing comes into play.

Risk-based testing prioritizes testing efforts based on the potential impact and likelihood of failures in different parts of the system. By identifying “risk hotspots,” testers can concentrate on the areas most likely to cause harm if they fail. For example, in an aircraft’s control system, the software that manages flight stability is a high-risk area that demands thorough testing, whereas the system responsible for in-flight entertainment might require less scrutiny.
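To make the prioritization concrete, here is a minimal sketch of how such a ranking might look, assuming a simple likelihood-times-impact score. The subsystem names echo the aviation example above, and the numbers are invented purely for illustration; a real assessment would come from the risk-analysis activities described later.

```python
# Illustrative risk scoring: risk = likelihood of failure x impact of failure.
# Scores are on a 1-5 scale and are purely hypothetical.
subsystems = {
    "flight_stability_control": {"likelihood": 3, "impact": 5},
    "navigation_display":       {"likelihood": 2, "impact": 4},
    "in_flight_entertainment":  {"likelihood": 4, "impact": 1},
}

ranked = sorted(
    subsystems.items(),
    key=lambda item: item[1]["likelihood"] * item[1]["impact"],
    reverse=True,
)

# The top of this list marks the "risk hotspots" that deserve the deepest
# and most creative testing; the bottom can get by with lighter checks.
for name, scores in ranked:
    print(name, scores["likelihood"] * scores["impact"])
```

Whatever scoring scheme is used, the point is that this ranking, not a coverage percentage, decides where the testing effort goes.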


This approach ensures that testing is not just a numbers game but a strategic effort to mitigate risk. It aligns testing efforts with the real-world consequences of failure, providing greater assurance that the most critical aspects of a system are reliable.
So how exactly is risk-based testing done? What are the steps involved?
Drawing from a case study by Souza and Gusmao, the process of risk-based testing can be broken down into six key steps:
- Risk Identification: The first step is to identify technical risks associated with software functionalities or requirements. This involves reviewing potential risk sources and categories, often using tools like the Taxonomy-Based Questionnaire (TBQ) or a customized risk checklist. Project members complete the TBQ, and then a brainstorming session is conducted to validate and refine the identified risks.
- Risk Analysis: After identifying the risks, the next step is to prioritize them. This is done using heuristic risk analysis, where software engineers and risk analysts assess various metrics—such as complexity, cost, size, and quality—to determine the Risk Exposure (RE) value for each functionality. This RE value guides the focus of subsequent testing efforts.
- Test Planning: With the RE values in hand, the test manager devises a test strategy. This includes defining the overall test approach, determining the number of test cycles, and allocating resources. The risk analysis ensures that testing efforts are concentrated on the areas that pose the greatest threat to software quality.
- Test Design: In this phase, test cases are created specifically to address the identified risks. Each risk is linked to at least one test case, designed to mitigate the potential issues. The insights gathered during the risk identification phase—such as how a functionality might fail—directly inform the design of these targeted test cases.
- Test Execution: Test cases are executed according to their RE priority, ensuring that functionalities with the highest risk are tested first. This order of execution maximizes the chances of identifying critical issues early in the testing process (see the sketch after this list).
- Test Evaluation and Risk Control: The final step involves ongoing monitoring and evaluation. Progress is tracked by assessing the status of test cases and their corresponding risks. A risk is considered mitigated once all related test cases have been successfully executed and passed. Continuous adjustments are made to testing efforts based on risk monitoring, ensuring resources are effectively allocated as new information emerges.
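To tie the later steps together, here is a minimal sketch of how steps four through six might be wired up in code, assuming the RE values have already been produced during risk analysis. The data structures and names are illustrative, not taken from the case study: each risk carries an RE value and at least one linked test case, execution proceeds in descending RE order, and a risk counts as mitigated once all of its linked tests pass.

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    name: str
    run: callable  # returns True on pass, False on fail


@dataclass
class Risk:
    description: str
    exposure: float            # RE value from the risk-analysis step
    test_cases: list = field(default_factory=list)
    mitigated: bool = False


def execute_by_risk(risks):
    # Step 5: run the test cases for the highest-exposure risks first.
    for risk in sorted(risks, key=lambda r: r.exposure, reverse=True):
        results = [case.run() for case in risk.test_cases]
        # Step 6: a risk counts as mitigated only when every linked test passes.
        risk.mitigated = bool(results) and all(results)
    return [r for r in risks if not r.mitigated]  # still-open risks to monitor


# Hypothetical usage: two risks, each linked to at least one test case (step 4).
risks = [
    Risk("Incorrect interest calculation", exposure=0.9,
         test_cases=[TestCase("interest_rounding", lambda: True)]),
    Risk("Slow report export", exposure=0.3,
         test_cases=[TestCase("export_under_5s", lambda: False)]),
]
open_risks = execute_by_risk(risks)
print([r.description for r in open_risks])
```

The design point worth noticing is that both the execution order and the definition of “done” come from the risk picture, not from a coverage target.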
So those are the steps involved in risk-based testing. By following this method, teams can strategically identify, analyze, and address the most significant risks within a system. This ensures that testing efforts are focused where they are needed most, aligning resources with the potential impact on the overall quality and safety of the software.
Conclusion: Quality Over Quantity in Testing
In summary, while test automation and coverage metrics are important, they should not overshadow the ultimate goal of testing: mitigating risk. High-risk areas within a system require more thorough and creative testing approaches, while low-risk areas may need less attention. By focusing on risk-based testing, teams can ensure that their efforts contribute to the system’s overall reliability and safety, rather than merely achieving an arbitrary coverage goal.
In the end, it’s not about how many tests you write, but how effectively those tests protect against the risks that matter most.