Quality departments and laboratories use a variety of particle size analyzers and analytical techniques to determine millions of particle size distributions per year. This information is a key indicator of a material’s quality—knowing the particle size distribution enables a material user to predict how that material will behave in the manufacturing process. However, for this information to be considered reliable, the material user must have confidence in the quality of the analysis results.
Defining Accuracy

How is the quality of an analysis technique measured? First, we need to define some terms relating to quality. Everyone strives for “accurate” analyses. However, no specific definition of accuracy exists for many particle sizing techniques due to the lack of a well-defined, universally accepted standard.
Many techniques measure a size-related physical property or phenomenon related to the particle behavior under defined conditions, but these measured properties or phenomena are often influenced by non-size-related particle characteristics, such as shape or porosity. To achieve reliable analyses, we must speak in terms of “agreement,” or relative accuracy, rather than absolute accuracy.
This leads to the need for standard reference materials (SRM), where a national standard group has certified the particle size distribution of a particular lot of material determined under specified analysis conditions using a given technique. Such materials form the basis for relative accuracy. If a selected particle size analyzer can produce the specified result for a given SRM, we would expect the particle size distribution produced for a test sample using the same analyzer to be accurate, assuming that the sample is properly prepared and that the analyzer is set up and operated properly. In short, the instrument does not know which sample is being analyzed; however, if it is working properly for the SRM, it should work for the test sample.
Unfortunately, the number of certified standard reference materials available is quite small, the quantity of each is limited, and they are typically expensive. However, we can develop secondary standards that are traceable to these primary national standards through controlled inter-laboratory testing using instruments that have been certified using a national SRM. As an example, Table 1 contains the specifications for a garnet traceable secondary reference material qualified using laser particle size distribution analysis.
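The qualification step described above amounts to comparing an instrument's measured percentiles against the certified windows that ship with the reference material. A minimal sketch of that check follows; the certified values and tolerance windows are illustrative placeholders, not the actual specifications from Table 1.

```python
# Hypothetical check of an analyzer against a traceable reference material.
# The certified percentile diameters and tolerance windows below are
# illustrative only; real values come with the reference material.

certified = {"d10": 2.1, "d50": 6.8, "d90": 15.4}   # certified diameters, µm (illustrative)
tolerance = {"d10": 0.2, "d50": 0.3, "d90": 0.8}    # allowed deviation, µm (illustrative)

def within_spec(measured: dict) -> bool:
    """Return True if every measured percentile falls inside its certified window."""
    return all(abs(measured[k] - certified[k]) <= tolerance[k] for k in certified)

measured = {"d10": 2.15, "d50": 6.9, "d90": 15.1}
print(within_spec(measured))  # True: the instrument reproduces the certified distribution
```

If the check fails for the reference material, results for test samples on that instrument should not be trusted until the cause is found.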
Ensuring Precision

SRMs and secondary reference standards can be used to ensure the relative accuracy of our analyses, but is accuracy all that is needed? Accuracy is a measure of the control of systematic, predictable errors. On average, we could have a high level of accuracy, meaning we are not suffering from out-of-control systematic errors. But random errors can and do occur with every particle sizing technique, creating the potential for a high degree of spread in individual measurements. The level and effect of random errors are captured by the term "precision."
Precision has two components: repeatability and reproducibility. Both are important measures of the effect of random analysis errors. To determine whether we have a good level of repeatability and reproducibility, we must perform a series of tests under a controlled set of conditions with a well-characterized sample. This is often called an inter-laboratory, or round-robin, study.
Some of the data presented here were taken from internal studies conducted to determine the precision associated with both the laboratory analysts and the laser particle size analyzers produced by Micromeritics. In both studies, the garnet traceable secondary reference material was analyzed using a number of Saturn DigiSizer 5200 high-definition laser particle size analyzers. The sample preparation and analysis parameters were well defined in the booklets that accompanied each sample and were used throughout the tests. Following is an overview of some of the test results.
Repeatability Results

The repeatability of an instrument is how well it produces the same answer for the same sample analyzed a number of times. To determine repeatability, we can look at the standard deviation and coefficient of variance (CV) of several test statistics over a group of repeat analyses. Table 2 contains the average, standard deviation and CV of the mean diameter, three percentiles (10th, 50th and 90th) and the cumulative volume percentages finer than nine specific diameters, calculated from eight tests performed on a single garnet sample using one instrument. Figure 1 shows an overlay of the frequency distributions for the eight tests. From this table and figure, it can be seen that not only do the results from each of the eight tests fit within the specifications for the traceable reference material, but also that there is essentially no variability between the tests. Such instrument repeatability is needed to ensure that any variation in results is caused by differences in the sample presented to the instrument and not by the instrument itself.
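The repeatability statistics of Table 2 reduce to a simple calculation: for each monitored statistic, take the average, standard deviation and CV (standard deviation divided by mean) across the repeat analyses. A short sketch follows; the eight values per percentile are illustrative, not the actual Table 2 data.

```python
import statistics

# Repeatability sketch: average, standard deviation and coefficient of
# variance (CV) of three percentile diameters over eight repeat analyses.
# All measurement values below are illustrative placeholders.

repeats = {
    "d10": [2.14, 2.15, 2.14, 2.16, 2.15, 2.14, 2.15, 2.15],
    "d50": [6.88, 6.90, 6.89, 6.91, 6.89, 6.90, 6.88, 6.90],
    "d90": [15.2, 15.3, 15.2, 15.2, 15.3, 15.2, 15.3, 15.2],
}

for name, values in repeats.items():
    mean = statistics.mean(values)
    std = statistics.stdev(values)   # sample standard deviation
    cv = 100.0 * std / mean          # coefficient of variance, %
    print(f"{name}: mean={mean:.3f} µm, std={std:.4f}, CV={cv:.2f}%")
```

A CV well under one percent across repeats is the kind of agreement the eight garnet tests showed.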
Reproducibility Results

In evaluating reproducibility, we need to consider both sample-to-sample reproducibility, where a single instrument is used to analyze a number of different samples, and instrument-to-instrument reproducibility, where different instruments are used to analyze the same material. The SPC report capabilities built into the Saturn DigiSizer 5200 help to illustrate sample-to-sample reproducibility in Figure 2 for three monitored statistics: median diameter, 90th percentile and 10th percentile. It is evident that the different samples analyzed vary only slightly, and that the use of the protocols for sample preparation and analysis produces results well within the window of expected results. If the results varied greatly, it could indicate that the protocols are inadequate to yield proper particle size analysis results for this sample.
The SPC control chart shown in Figure 3 provides a plot of the same statistics from single analyses performed using 48 different instruments. As can be seen in the figure, the level of reproducibility between instruments is only about twice that seen within one instrument for multiple sample analyses. Such data indicate that results from different instruments can be compared and contrasted to determine whether samples analyzed in different locations with different instruments have similar or different particle size distributions. Companies with multiple locations and multiple processing facilities need such instrument-to-instrument reproducibility to ensure that the same product can be delivered to their customers from any of the facilities producing that product.
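An SPC control chart of the kind shown in Figures 2 and 3 boils down to a center line and control limits around a monitored statistic. The sketch below uses the common mean ± 3-sigma limits; the median-diameter values are illustrative, and the actual limit convention used by the instrument software may differ.

```python
import statistics

# SPC control-chart sketch: center line and ±3-sigma control limits for a
# monitored statistic (median diameter), flagging any out-of-control points.
# The measurement values are illustrative placeholders.

medians = [6.89, 6.91, 6.88, 6.90, 6.92, 6.89, 6.90, 6.87, 6.91, 6.90]

center = statistics.mean(medians)
sigma = statistics.stdev(medians)
ucl, lcl = center + 3 * sigma, center - 3 * sigma   # upper/lower control limits

out_of_control = [m for m in medians if not lcl <= m <= ucl]
print(f"center={center:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}, "
      f"out-of-control points: {out_of_control}")
```

An empty out-of-control list is the situation the figures describe: every analysis falls inside the window of expected results.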
Analyst Reproducibility

Figure 4 shows the reproducibility for a single analyst. In this case, the analyst prepared and analyzed a number of samples. The SPC chart shows that this analyst is able to carry out multiple analyses of the same material without significant variation in results. Thus, we can have confidence in results obtained by this analyst and know that if a test sample yields out-of-specification results, these results are due to a bad sample and not the bad practices of the analyst. (It should be noted that random bad analyses are possible with any analyst, and total faith should not be placed in a single test. Micromeritics’ laboratories generally perform three replicate particle size determinations for each sample analyzed.)
Table 3 compares a portion of the results obtained by four analysts. These results demonstrate that each of the analysts tested is capable of producing quality results, and that any of these analysts should be able to carry out analyses of test samples in a quality laboratory. However, the quality manager might wish to take a close look at the techniques used by Operator 4 to see if an obvious explanation exists for the slightly higher coefficients of variance calculated from the results obtained by this analyst.
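The comparison in Table 3 can be framed as computing each operator's CV over their replicate analyses and flagging the operator with the widest spread. A sketch follows; the operator names mirror the text, but all measurement values are illustrative, not the Table 3 data.

```python
import statistics

# Analyst-comparison sketch: CV of the median diameter per operator across
# replicate analyses. All measurement values are illustrative placeholders.

operator_medians = {
    "Operator 1": [6.89, 6.90, 6.88, 6.91],
    "Operator 2": [6.90, 6.89, 6.90, 6.88],
    "Operator 3": [6.88, 6.90, 6.89, 6.90],
    "Operator 4": [6.85, 6.95, 6.87, 6.93],   # wider spread -> higher CV
}

def cv_percent(values):
    """Coefficient of variance as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

cvs = {op: cv_percent(vals) for op, vals in operator_medians.items()}
worst = max(cvs, key=cvs.get)
print(worst)  # the operator whose technique may merit a closer look
```

A noticeably higher CV for one operator, as in the Operator 4 case, is what would prompt the quality manager's review of that analyst's technique.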
Understanding Quality

The foregoing text has provided some examples of how to quantify the two components of precision—repeatability and reproducibility—in terms of the instrument operators and the instruments used. Similar comparisons can be carried out between multiple locations within an organization, from one day to the next, and between organizations, depending on how the tests are defined and how the resulting data are compared.
By understanding the standards used to measure quality in particle size distribution analysis and ensuring that the instruments they use meet those standards, ceramic manufacturers can be assured that their analysis results are accurate—and that their materials will perform as desired throughout the manufacturing process.