Molecular Screening Core

Protocols/References

Z-factor

The Z-factor is a measure of the quality or power of a high-throughput screening (HTS) assay. In an HTS campaign, assayists often compare a large number (hundreds of thousands to tens of millions) of single measurements of unknown samples to well-established positive and negative control samples. The purpose is to determine which, if any, of the single measurements differ significantly from the negative control. The analyst must consider the distributions of measurements from the positive control, the negative control, and the other single measurements in order to determine the probability that each measurement occurred by chance. These distributions cannot be determined until after the campaign is completed, and by their nature HTS projects are expensive in time and resources. So before starting a campaign, much work is done to assess the quality of an assay on a smaller scale and to predict whether the assay would be useful in a high-throughput setting. The Z-factor predicts whether useful data could be expected if the assay were scaled up to millions of samples.

Four parameters are needed to calculate the Z-factor: the mean (µ) and standard deviation (σ) of both the positive (p) and negative (n) controls (µp, σp, µn, σn, respectively). The Z-factor is defined as:

  Z = 1 - (3σp + 3σn) / | µp - µn |

An alternative but equivalent definition of the Z-factor is calculated from the sum of standard deviations (SSD) divided by the range of the assay (R):

  1. SSD = σp + σn
  2. R = | µp - µn |
  3. Z = 1 - 3 × SSD / R
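
As a worked illustration, here is a minimal Python sketch of this calculation. The function name z_factor and the example well values are hypothetical (not part of the protocol), and sample standard deviations (ddof=1) are assumed for the controls:

  import numpy as np

  def z_factor(pos, neg):
      # Z-factor from positive- and negative-control measurements.
      pos = np.asarray(pos, dtype=float)
      neg = np.asarray(neg, dtype=float)
      ssd = pos.std(ddof=1) + neg.std(ddof=1)   # SSD = σp + σn (sample standard deviations)
      r = abs(pos.mean() - neg.mean())          # R = | µp - µn |
      return 1.0 - 3.0 * ssd / r

  # Example: tight, well-separated controls give a Z-factor close to 1.
  pos_wells = [98.2, 101.5, 99.7, 100.4, 97.9, 102.1]
  neg_wells = [10.3, 9.8, 11.1, 10.6, 9.5, 10.9]
  print(round(z_factor(pos_wells, neg_wells), 2))   # prints roughly 0.92 (an excellent assay)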

Z-factor Interpretation

  1.0: Ideal. This is approached when you have a huge dynamic range with tiny standard deviations. Z-factors can never actually equal 1.0 and can certainly never be greater than 1.0.
  Between 0.5 and 1.0: An excellent assay.
  Between 0 and 0.5: A marginal assay.
  Less than 0: The signal from the positive and negative controls overlap, making the assay essentially useless for screening purposes.
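
If the interpretation table is applied programmatically, a small helper along these lines could be used. This is only a sketch: the thresholds come directly from the table above, and the function name interpret_z is illustrative:

  def interpret_z(z):
      # Map a Z-factor to the interpretation categories in the table above.
      if z < 0.0:
          return "controls overlap; essentially useless for screening"
      if z < 0.5:
          return "marginal assay"
      if z < 1.0:
          return "excellent assay"
      return "ideal (a limit never actually reached in practice)"

  print(interpret_z(0.92))   # excellent assay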