International Quality Assurance Exchange Program Participant Guide
Section 4.1 – Summary Reports
Participants receive sample summary reports containing a comprehensive statistical package, ranging from basic statistics (averages, standard deviations) to more sophisticated analyses: normality tests, comparisons to published precision, Z scores to detect biases, and outlier detection. Summary reports can be viewed online within one week of the closing date.
Section 4.2 – Annual Reports
Participants receive annual reports containing statistics intended to provide a measure of their long-term testing performance relative to group averages. Year-end reports are posted online each calendar year.
Section 4.3 – Statistics
ESD Criteria used for Outlier Rejection
The Extreme Studentized Deviate (ESD) test is a procedure for statistically determining whether any observations in a sample data set differ significantly from the other sample values. These "outliers" are flagged in the summary reports and are excluded from calculations of the average, median, minimum, maximum, and standard deviation.
ESD calculations are carried out as follows:
1. The data set's average is subtracted from each observation to obtain its deviation from the average. This value is then divided by the data set's standard deviation and the absolute value is determined.
2. The maximum value of the above deviations (ESD value) is determined and the corresponding observation is flagged and removed from the data set.
3. The above calculations are repeated enough times to reject a maximum of 20% of the data set.
4. The ESD value determined for each loop above is compared to a corresponding "t" factor from a lookup table. If the ESD value is greater than the corresponding ESD "t" factor, the observation for the corresponding loop and all observations from previous loops are identified as outliers.
Note: Only data sets with more than six (6) observations are subjected to the ESD procedure.
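The four steps above can be sketched in code. The sketch below is illustrative only: the function name and the critical "t" factors passed in are hypothetical placeholders, not the program's actual software or lookup table.

```python
# Illustrative sketch of the iterative ESD outlier screen described above.
# The t_factors argument stands in for the program's lookup table of
# critical "t" values (hypothetical here, not the actual table).
import statistics

def esd_outliers(data, t_factors):
    """Return indices of observations flagged as outliers by the ESD loop.

    data      -- list of numeric results (more than six observations)
    t_factors -- critical values, one per loop (illustrative placeholders)
    """
    values = list(data)
    indices = list(range(len(data)))
    max_loops = max(1, int(0.20 * len(data)))  # reject at most 20% of the set
    removed = []  # records of (loop, original index, ESD value)

    for loop in range(max_loops):
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        # Steps 1-2: largest absolute studentized deviation and its position
        esd, pos = max((abs(v - mean) / sd, i) for i, v in enumerate(values))
        removed.append((loop, indices.pop(pos), esd))
        values.pop(pos)

    # Step 4: find the last loop whose ESD value exceeds its critical factor;
    # that observation and all earlier removals are declared outliers.
    last_reject = -1
    for loop, _, esd in removed:
        if loop < len(t_factors) and esd > t_factors[loop]:
            last_reject = loop
    return [idx for loop, idx, _ in removed if loop <= last_reject]
```

With seven tight results plus one gross error, the loop removes the extreme value and compares its ESD value against the first critical factor; if no loop exceeds its factor, no observations are rejected.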
Z scores are calculated for each laboratory's result for specific tests as follows:
Z(i) = (lab result(i) – ESD average) ÷ ESD standard deviation
Z scores from previous exchange samples can easily be compared because the calculation brings the values into a common range, i.e. -3 to +3. Participants will get more information from the exchange program if they monitor their historical Z score values. Three scores are reported in the tables of the exchange reports:
1. Za – the current Z score
2. Zb – the Z score from the most recently performed exchange sample
3. Zc – the Z score from the second most recently performed exchange sample
Laboratories with test parameters that have two out of three Z scores (Za, Zb, Zc) with an absolute value of 2.00 or greater are flagged in BORDERED CELLS; this flag identifies potential bias or an increase in variability. Laboratories with test parameters that have any Z score (Za, Zb, or Zc) with an absolute value greater than 3.00 are flagged as BOLDED numbers; this flag identifies a potential special-cause situation that warrants investigation.
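The Z-score formula and the two flagging rules can be expressed compactly. This is a minimal sketch of the rules as stated above; the function names are hypothetical and not part of the program's software.

```python
# Illustrative sketch of the Z-score calculation and report flagging rules.

def z_score(result, esd_average, esd_sd):
    """Z = (lab result - ESD average) / ESD standard deviation."""
    return (result - esd_average) / esd_sd

def flag(za, zb, zc):
    """Return the flag applied in the exchange report tables (hypothetical names)."""
    scores = [za, zb, zc]
    # Any |Z| > 3.00: BOLDED number, potential special cause
    if any(abs(z) > 3.00 for z in scores):
        return "BOLDED"
    # Two of three |Z| >= 2.00: BORDERED CELLS, potential bias or variability
    if sum(abs(z) >= 2.00 for z in scores) >= 2:
        return "BORDERED"
    return "none"
```

For example, a result 0.5 units above the ESD average with an ESD standard deviation of 0.25 gives Z = 2.00; on its own that is not flagged, but two such scores in the last three samples would place the parameter in bordered cells.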
Statistics such as the average and standard deviation are most meaningful when the data is sufficiently precise and the variability in the data is within acceptable limits. The variance ratio statistic is intended to provide assurance that the precision is within a pre-determined range of acceptability. This is accomplished by comparing the reproducibility of the data set (Rdat) to the reproducibility from the published test method (Rpub).
The statistical evaluation below each data set lists the calculated sample reproducibility (Rdat) and the reproducibility from the published test method (Rpub). It also lists the variance ratio and a comparison to the appropriate critical F value, using the selected exchange type I error (false-reject) rate of 0.05. If the ratio is less than 1.00, the results are determined to be acceptable. If the ratio is greater than 1.00, it is compared to the corresponding critical F value; if the ratio is less than the critical F value, the results are determined to be acceptable.
The following formulae are used (SD = standard deviation):
Sample reproducibility: Rdat = SD × 2.77
Reproducibility from the method: Rpub
Sample variance: Vdat = SD²
Variance from the method: Vpub = (Rpub ÷ 2.8)²
Variance ratio: F = Vdat ÷ Vpub
Note: Only data sets with more than twelve (12) observations are subjected to the variance ratio calculation.
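The acceptance logic and formulae above can be sketched as follows. This is a hypothetical illustration: the function name is not from the program's software, and the critical F value is supplied as a placeholder argument rather than read from an F table.

```python
# Illustrative sketch of the variance ratio acceptance check.

def variance_ratio_acceptable(sd_dat, r_pub, f_critical):
    """Apply the variance ratio acceptance logic described above.

    sd_dat     -- standard deviation of the exchange data set
    r_pub      -- reproducibility from the published test method
    f_critical -- critical F value at the 0.05 type I error rate
                  (placeholder; in practice taken from an F table)
    """
    v_dat = sd_dat ** 2          # sample variance: Vdat = SD^2
    v_pub = (r_pub / 2.8) ** 2   # variance from the method: Vpub = (Rpub/2.8)^2
    f = v_dat / v_pub            # variance ratio: F = Vdat / Vpub
    if f < 1.00:
        return True              # acceptable without consulting the F table
    return f < f_critical        # otherwise acceptable only below critical F
```

For example, with SD = 0.5 and Rpub = 2.8, the ratio is 0.25 and the results are acceptable outright; with SD = 2.0 and the same Rpub, the ratio is 4.00 and acceptability depends on the critical F value.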
Section 4.4 – Appeal
Participants may request a review of the evaluation of their performance on a PT sample by email. The review will be conducted by the PTP Coordinator, and the conclusion will be reported back to the participant.