# Medical statistics 101 for clinicians

(c) Roger Baxter, MD, CAP Today 2002

Clinicians who order tests should remember that each test has certain characteristics that govern how its result should be interpreted. Two of these characteristics are "sensitivity" (that is, how well a test picks up every case of disease) and "specificity" (how well the test excludes those who don't have disease).
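These two characteristics are just fractions of the familiar two-by-two table counts. A minimal sketch in Python (the notation is mine, not the article's):

```python
def sensitivity(tp, fn):
    """Of everyone who has the disease, what fraction does the test catch?"""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Of everyone free of the disease, what fraction does the test clear?"""
    return tn / (tn + fp)

# A test that finds 94 of 100 diseased patients and clears 92 of 100 healthy ones:
print(sensitivity(94, 6))    # 0.94
print(specificity(92, 8))    # 0.92
```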

After you have the results of a test, how sure can you be the patient does or does not have disease? How much can you rely on the test results? For the answer we look to Bayes' theorem, the basis for what follows:

The results of a test must always be interpreted in the context of the population you are testing. This means that your suspicion of disease actually influences the interpretation of the test. Suspicion of disease is called "pretest probability," as opposed to "posttest probability," which is the suspicion of disease after you have the test results. This is statistically measured as the "predictive value" of a positive or negative test, which is a measure of the test's reliability.

Basically, the predictive value of a test takes the specificity, sensitivity, and pretest probability into account and comes up with a value. Here are the formulas from which you can derive these values.

True positives (TP) mean the test is positive and the person actually has disease.

False positives (FP) mean the test is positive but the person does not have disease.

True negatives (TN) mean the test is negative and there is no disease.

False negatives (FN) mean the test is negative but the person actually has disease.

Predictive value of a positive test = TP / (TP + FP)

(Predictive value of a positive is the percent of all positive tests that are true positives.)

Predictive value of a negative test = TN / (TN + FN)

(Predictive value of a negative is the percent of all negative tests that are truly negative.)
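The arithmetic behind these formulas can be bundled into one helper. A hypothetical sketch (the function name and the cohort size `n` are my own choices) that turns sensitivity, specificity, and pretest probability into the two predictive values:

```python
def predictive_values(sensitivity, specificity, pretest_probability, n=1000):
    """Predictive values of a positive and a negative test in a cohort of n patients."""
    diseased = n * pretest_probability
    healthy = n - diseased
    tp = sensitivity * diseased        # true positives
    fn = diseased - tp                 # false negatives
    tn = specificity * healthy         # true negatives
    fp = healthy - tn                  # false positives
    ppv = tp / (tp + fp)               # predictive value of a positive test
    npv = tn / (tn + fn)               # predictive value of a negative test
    return ppv, npv
```

With the article's ANA figures (94 percent sensitive, 92 percent specific) and a one-in-1,000 prevalence, `predictive_values(0.94, 0.92, 0.001)` returns a positive predictive value of about 1 percent.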

Let's try some examples, first using a test to screen a population.

The ANA is a test for systemic lupus. Our current assay has a 94 percent sensitivity and 92 percent specificity.

Approximately one of every 1,000 young women has lupus, so if we screened 1,000 young women for lupus, we would expect one to have disease. The test is 94 percent sensitive, so we would expect 94 percent x 1 = 0.94 true-positive tests. The test is 92 percent specific, so we would expect eight percent of the 999 disease-free women (0.08 x 999, or about 80) to test false-positive. The predictive value of a positive test is 0.94/(0.94 + 80) = 0.01 (1%). This means that if we used this test for screening, we would expect 99 percent of positive tests to be false-positive. The predictive value is extremely low, and we cannot rely on the result of the test. Therefore, the ANA makes a poor screening test.

Now, let's say we use the test to confirm our suspicion of lupus. A 45-year-old woman is in our office with a malar rash, pleuritis, and arthralgias. We think she has a 65 percent chance of having an autoimmune disease, so we order the ANA. The test is positive.

In 1,000 similar women, 65 percent (650) would have disease. Ninety-four percent of these (611) would be picked up by the test (TP). Eight percent of the 350 women without disease would be false-positive (0.08 x 350 = 28). Predictive value of the test = 611/(611 + 28) = 96%.
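The same arithmetic as a short sketch (notation mine). Note that only the 350 women without disease can test false-positive, so the false-positive count is 0.08 x 350 = 28 and the predictive value comes out near 96 percent:

```python
sensitivity, specificity = 0.94, 0.92
n = 1000

# Confirmation setting: pretest probability of 65 percent.
diseased = int(n * 0.65)             # 650 women with disease
healthy = n - diseased               # 350 women without disease
tp = sensitivity * diseased          # 0.94 x 650, about 611 true positives
fp = (1 - specificity) * healthy     # 0.08 x 350, about 28 false positives
ppv = tp / (tp + fp)                 # roughly 611 / 639
print(f"Predictive value of a positive test: {ppv:.0%}")
```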

Therefore, when the test is used to confirm a suspected case, you can rely on a positive test.

Good clinicians use the predictive value of test results intuitively every day when they order tests on their patients. As the example shows, a test result can easily mislead you if you forget these characteristics when ordering. Remember to order a test when you already suspect disease. Few tests have the specificity they would need to be good screening tests. And beyond specificity, a test's cost must be justified by its benefit in the population tested.