If the stats were as good as the hyperbole in the article, it would clearly state the only two metrics that really matter: predictive value positive (the actual probability that you have cancer if you test positive) and predictive value negative (the actual probability that you're cancer free if you test negative). As tptacek points out, these metrics don't depend only on the sensitivity and specificity of the test; they are highly dependent on the underlying prevalence of the disease, which is why broad-based testing for relatively rare diseases often produces horrible PVP and PVN numbers.
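To make the prevalence dependence concrete, here's a minimal sketch of the Bayes' rule arithmetic. The sensitivity, specificity, and prevalence numbers are placeholders of my own, not figures from the article:

    def predictive_values(sensitivity, specificity, prevalence):
        """PVP (prob. of disease given a positive) and
        PVN (prob. of no disease given a negative) via Bayes' rule."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        true_neg = specificity * (1 - prevalence)
        false_neg = (1 - sensitivity) * prevalence
        pvp = true_pos / (true_pos + false_pos)
        pvn = true_neg / (true_neg + false_neg)
        return pvp, pvn

    # Placeholder numbers: a test that looks great on paper (99% sensitive,
    # 99% specific) applied to a disease with 0.5% prevalence.
    pvp, pvn = predictive_values(0.99, 0.99, 0.005)
    print(f"PVP: {pvp:.1%}")    # ~33% -- most positives are false alarms
    print(f"PVN: {pvn:.3%}")    # ~99.995%

Even with 99/99 performance, two out of three positives are false alarms once the disease is rare enough.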
Based on your quoted sections, we can infer:
1. About 250 people got a positive result ("nearly one in 100")
2. Of those 250 people, 155 (62%) actually had cancer and 95 did not.
3. About 24,750 people got a negative test result.
4. Assuming 1% of negative results are false negatives (the quote says "over 99%"), of those 24,750 people about 248 actually did have cancer, while about 24,502 did not.
When you write it out like that (and I know I'm making some rounding assumptions on the numbers), it means the test missed the majority of people who had cancer while subjecting over 1/3 of those who tested positive to fear and further expense.
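For what it's worth, here's the same back-of-envelope calculation in a few lines; the 25,000 total and the rounding are my own assumptions inferred from the quoted figures:

    # Back-of-envelope check of the numbers above; totals are rounding
    # assumptions from the quoted figures, not from the article itself.
    total_tested = 25_000      # implied by "nearly one in 100" -> ~250 positives
    positives = 250
    true_positives = 155       # 62% of positives actually had cancer
    false_positives = positives - true_positives     # 95

    negatives = total_tested - positives              # 24,750
    missed_cancers = round(negatives * 0.01)          # ~248, if 1% of negatives are wrong

    total_cancers = true_positives + missed_cancers   # ~403
    print(f"Cancers missed: {missed_cancers / total_cancers:.0%}")             # ~62%
    print(f"False alarms among positives: {false_positives / positives:.0%}")  # 38%

Under those assumptions, the test catches well under half of the actual cancers and still generates false alarms for more than a third of the people it flags.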