The reliability of today's software systems hinges on developers writing test cases that exercise as much of a program as possible. Writing and running such tests is inevitably subject to a time budget. This paper addresses the question of how to maximize the quality of testing given a fixed time budget. We define a program-path scoring metric along with a way to measure a software component's relevance, and then show how these can be combined into a test quality metric that is superior to the test coverage metrics in use today. The key features of our proposal are that (a) it steers testing toward the code most in need of testing, such as frequently used or recently modified code, and (b) it prioritizes testing shorter program paths over longer ones. As a proof of concept, we augmented an automated testing tool with our test prioritization criterion and found that it explores up to 70 times more code paths in the same amount of time, with no additional human effort.
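The combination described above can be illustrated with a minimal sketch. Note that the function name, the relevance inputs, and the exact scoring formula below are illustrative assumptions for exposition only; they are not the paper's actual metric.

```python
def path_priority(component_relevance, path_length):
    """Score a candidate program path for test prioritization.

    component_relevance: relevance scores (e.g. derived from execution
    frequency or recency of modification) of the components the path
    touches. path_length: number of statements or branches on the path.
    Higher relevance and shorter paths yield higher priority, so the
    testing tool explores relevant, short paths first.
    """
    return sum(component_relevance) / path_length

# A short path through highly relevant (e.g. recently modified) code
# outranks a long path through low-relevance, stable code.
short_hot = path_priority([0.9, 0.8], path_length=2)
long_cold = path_priority([0.1, 0.1, 0.1, 0.1], path_length=4)
assert short_hot > long_cold
```

A score of this shape captures both key features at once: relevance steers testing toward code most in need of it, while dividing by path length biases exploration toward shorter paths.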