Whether working with refractories, whitewares, abrasives, enamels, glass or cements, particle size is a fundamental physical characteristic that must be selected, monitored and controlled from the raw material source to the finished product. However, selecting the most appropriate sizing instrument for a particular application is not a simple undertaking.
Matching the Instrument to the Application
The importance of particle size to the properties of finished ceramic products creates a demand for instrumentation that addresses specific requirements in basic research, product development, raw material selection, process control and quality assurance. To select the best instrument for an application, users must first identify their primary analytical requirements in terms of qualities common to all sizing instruments. These qualities may describe physical attributes of the instrument, how it functions, or qualities of data that the instrument produces. Identifying this set of characteristics and determining their weighted importance to a specific application provides an objective basis for evaluating sizing instruments and differentiating between similar products.
Data attributes can be expressed in quantitative dimensions, such as accuracy and resolution, and these pertain to raw data (measured directly) and reduced data (deduced from some other measurement). Reduced data, in particular, has degrees of integrity or reliability. These are qualitative attributes that relate to the scientific or mathematical credibility of the method by which particle size was first deduced, such as scattered light intensity or sedimentation velocity. Repeatability, reproducibility and throughput are terms used to describe qualities of the measurement system, the first two expressing its capability to maintain consistent data quality. Dynamic range, although related to data, describes an instrument function that may be set by the manufacturer, but most often is imposed by the limits of the theoretical model used.
Functional quality is often maximized at the expense of data quality. In some applications, this is acceptable. For example, when monitoring high capacity production lines, speed and repeatability often are more important than resolution and accuracy. However, in basic research and product development, and when establishing standards for calibration or quality control, data quality is pre-eminent.
No single particle sizing method provides a universal solution. Sieving, microscopy, light obscuration, photo- and X-ray sedimentation, and electrozone sensing have a long history in the ceramics industry and, in certain applications, still offer the optimum overall solutions. Over the past decade, light scattering techniques have also found their niche. Because a single technique may not provide the necessary dynamic measurement range, the same sample may require analysis by multiple methods.
Applying different methods to different size ranges of the same sample often results in a discontinuity at the crossover size. This is because the two measurement principles relate a different primary measurement to particle size and report different numerical values for the same dimension that is referred to in general as “particle diameter;” neither value is more or less accurate than the other. As long as the analyst knows where the reported data make the transition to another model, and understands what directly measurable property of the particle is being related to size, then using multiple models can be acceptable. Some instrument designs, however, not only incorporate more than one theoretical model to extend the size range of the device, but also force the data produced by the different models to blend smoothly in the crossover range where a step discontinuity would normally exist. The “smoothed” data set in the crossover range has a lesser degree of integrity than either data set from which it was calculated, since it corresponds directly to neither model, but results from a mathematical hybrid of the two.
Higher level specifications address features and functions such as the level of automation, available accessories, report sets and software “friendliness,” and will not be covered in this article. These instrument attributes have value and must be factored into a “to buy or not to buy” decision, but they are by no means common to a specific class of particle sizing instruments and may not be as valuable for differentiating instrument systems as the attributes previously discussed.
Low Angle Light Scattering
Recent improvements have been made to the particle sizing technique known as low angle, static light scattering (SLS) with regard to the fundamental instrument qualities previously described. The low angle, static light scattering technique as discussed here is based on Mie scattering theory as it pertains to the intensity of scattered light over a range of forward angles. This widely used technique encompasses Fraunhofer diffraction and is applicable to most finely divided particulate material transported to the measurement zone as a liquid-solid suspension, an aerosol or as a widely separated flow of dry powder.
Although Mie theory applies strictly only to spherical, isotropic particles with specific and known optical properties, it may be applied to particle suspensions that do not exactly meet all assumptions of the Mie model. Using the conventional SLS method, the quantity and size of spherical particles are determined that most closely produce the same scattering pattern as the particles being tested. Obviously, if the particles fail to conform to the assumptions of Mie theory, then there will be some degree of disparity between the experimentally measured scattering pattern and the theoretical scattering pattern calculated for the most closely matching assemblage of spheres. If particle shape, heterogeneity or other characteristics cause the experimental scattering pattern to deviate considerably from that predicted by Mie theory, then a different measuring technique should be considered.
The SLS technique as defined above has maximum utility in the range from approximately 1.0 to 1000 µm. Here, light scattering characteristics vary significantly with changes in particle size, but become less responsive as particle size drops below the 1.0 µm boundary. By 0.1 µm, light intensity in the forward direction varies only slightly with a change in particle size, and below about 0.1 µm the intensity versus scattering angle relationship is of little value as a size-related measurement. The scattering patterns from particles larger than about 1000 µm exhibit their identifying characteristics at angles very close to the optical axis. The limitation in this case is that the unscattered portion of light is also focused around zero degrees and interferes with the scattered light, so that it is impractical to attempt to extract only the scattered fraction. For particles outside the range of the SLS method, other parameters of the scattering function must be used if Mie theory is to be applied. Otherwise, another technique is required.
Obtaining Information from a Light Scattering Pattern
By the SLS method, all available information about particle size resides in the intensity versus angle characteristics of the scattering pattern. In conventional SLS particle size analyzers, depending on the manufacturer, scattering patterns are typically characterized by measuring light intensity at 30 to 130 angular positions. In contrast, a new light scattering instrument* uses an “instrument grade,” 1024 x 1280 element, charge-coupled device (CCD).
Repeatability and reproducibility are dependent on several design characteristics, but the method and maintenance of alignment of the optical axis with the detector array is of primary importance. With a conventional detector array, the optical axis must be aligned exactly with the center of the fixed array geometry because elements of the array are pre-assigned to a particular range of solid scattering angle. The new design, in contrast, requires only that the optical axis intersect the CCD—exactly where is unimportant initially. Software scans the array and determines upon which CCD element the central bright spot resides. This element is defined as the zero degree point in the array. Then, the software assigns the appropriate scattering angle to all other elements.
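The self-alignment step described above can be sketched in a few lines. This is a minimal illustration, not the instrument's software: the frame, the pixel pitch and the focal length are all assumed values, and a real system would refine the centroid rather than take the single brightest element.

```python
# Sketch (hypothetical geometry): locate the unscattered beam's bright spot
# on a CCD frame, define that element as the zero-degree reference, then
# assign a scattering angle to any other element from its radial offset.
import math

PIXEL_PITCH_MM = 0.005    # assumed pixel pitch
FOCAL_LENGTH_MM = 100.0   # assumed focusing-lens focal length

def find_zero_pixel(frame):
    """Return (row, col) of the brightest element -- the optical axis."""
    best, where = -1.0, (0, 0)
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if value > best:
                best, where = value, (r, c)
    return where

def angle_of(pixel, zero):
    """Scattering angle (degrees) of a pixel, from its offset to the axis."""
    dr, dc = pixel[0] - zero[0], pixel[1] - zero[1]
    radius_mm = math.hypot(dr, dc) * PIXEL_PITCH_MM
    return math.degrees(math.atan(radius_mm / FOCAL_LENGTH_MM))
```

Because the zero point is found in software, nothing requires the optical axis to strike any particular element, which is the design's tolerance to mechanical misalignment.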
The CCD in the new design spans approximately five degrees of the scattering pattern across the 1280-pixel column that coincides with the radius vector of the scattering pattern. To span the full angular range of the instrument, the scattering pattern is stepped across the CCD, and several five-degree segments of the scattering pattern are captured and joined to form a radial band of scattering pattern measurements. Since only a five-degree band spans 1280 pixels, each pixel in the column intercepts only a 0.004-degree band of scattering angles. Figures 1 and 2 illustrate how the scattering pattern is stepped across the CCD.
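The arithmetic behind the stepping scheme is straightforward: five degrees spread over a 1280-pixel column gives each pixel roughly a 0.004-degree slice, and the full range is assembled from consecutive five-degree segments. The sketch below assumes, for illustration only, a total span of 35 degrees covered by seven segments; the actual angular range is not stated in the text.

```python
# Per-pixel angular resolution and segment stitching, per the figures above.
SEGMENT_DEG = 5.0         # one CCD capture spans five degrees
PIXELS_PER_COLUMN = 1280  # pixels along the radius vector

deg_per_pixel = SEGMENT_DEG / PIXELS_PER_COLUMN  # ~0.0039 degrees per pixel

def segment_angles(start_deg):
    """Angles sampled by one five-degree CCD segment starting at start_deg."""
    return [start_deg + i * deg_per_pixel for i in range(PIXELS_PER_COLUMN)]

# Join successive segments into one radial band (assumed 0-35 degree range).
full_band = []
for k in range(7):
    full_band.extend(segment_angles(k * SEGMENT_DEG))
```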
Another benefit gained by using a CCD array for quantitative light detection is that it provides a means for accommodating a wide range of light intensity. The value of the output associated with a CCD element is proportional to the intensity of incident light and to exposure time. Very high light intensities can be measured using microsecond exposures, as illustrated in Figure 3. Toward the other extreme, very low light intensities can be measured by allowing long exposure times (see Figure 4). This is important in accurately characterizing a scattering pattern since light intensity can vary by 10 orders of magnitude. Figures 3 and 4 show two exposures at different light doses.
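The exposure-time mechanism can be modeled simply: a CCD element integrates intensity over the exposure, saturating at full scale, so dividing the reading by the exposure time recovers intensity per unit time at either extreme. The full-scale count and the intensities below are illustrative assumptions, not instrument specifications.

```python
# Sketch of exposure-time scaling for wide dynamic range.
FULL_SCALE = 65535  # assumed ADC saturation count

def measure(intensity, exposure_s):
    """Idealized CCD reading: intensity x exposure, clipped at full scale."""
    return min(intensity * exposure_s, FULL_SCALE)

def recover_intensity(reading, exposure_s):
    """Normalize a reading back to intensity per unit time."""
    return reading / exposure_s

# A very bright signal needs a microsecond exposure to avoid saturation;
# a very faint one needs seconds to climb above the noise floor.
bright = recover_intensity(measure(1e9, 1e-6), 1e-6)
faint = recover_intensity(measure(0.5, 10.0), 10.0)
```

The two recovered intensities here differ by over nine orders of magnitude, which is the point: one detector with variable exposure covers a span no single fixed exposure could.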
Collecting and storing large quantities of data and the subsequent processing can consume a substantial amount of time and encumber throughput. In the new instrument design, digital signal processing is performed “on the fly,” consolidating measurements and reducing the number of angle classes for which data are stored to 465. One weighted average value of intensity versus angle class is calculated for all detector elements within each of the logarithmically spaced angle classes. From this collection of 465 data points representing the measured scattering pattern, 160 particle size classes are extracted, providing an approximately three-to-one ratio between known and unknown variables. This is an example of the integrity or reliability attribute of data quality relating to the mathematical basis of data reduction. Figure 5 shows a plot of the measured intensity versus angle data for a sample of 3.12 µm polystyrene spheres.
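The consolidation step can be sketched as binning detector readings into logarithmically spaced angle classes and reducing each class to one weighted average. The bin limits, weights and sample data below are made up for illustration; only the class count (465) comes from the text.

```python
# Sketch of on-the-fly consolidation into log-spaced angle classes.
import math

def log_edges(lo_deg, hi_deg, n_classes):
    """n_classes + 1 logarithmically spaced bin edges between lo and hi."""
    step = (math.log10(hi_deg) - math.log10(lo_deg)) / n_classes
    return [10 ** (math.log10(lo_deg) + i * step) for i in range(n_classes + 1)]

def consolidate(angles, intensities, weights, edges):
    """One weighted-average intensity per angle class."""
    sums = [0.0] * (len(edges) - 1)
    wsum = [0.0] * (len(edges) - 1)
    for a, inten, w in zip(angles, intensities, weights):
        for k in range(len(edges) - 1):
            if edges[k] <= a < edges[k + 1]:
                sums[k] += w * inten
                wsum[k] += w
                break
    return [s / w if w else 0.0 for s, w in zip(sums, wsum)]
```

With 465 consolidated intensity values constraining 160 size classes, the inversion is overdetermined by roughly three to one, which is what the text means by the mathematical basis of the data's integrity.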
A small quantity of sample is used, not because of the sensitivity of the SLS instrument, but because Mie theory does not apply to multiple scattering. Particles must be widely separated in the suspension. Small sample quantities can lead to problems with statistical or representative sampling. An important consideration in evaluating an instrument system in this regard is not to consider the quantity of sample introduced, but to determine what percent of the introduced sample actually gets measured. The design of the new instrument addresses this by providing the means to ensure that all sample mass introduced into the system is subjected to measurement. This is achieved by matching the flow rate with the measurement time so that the total capacity of the reservoir flows through the measurement zone at least once.
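The sampling condition above reduces to simple arithmetic: the measurement must run at least as long as it takes the reservoir's full volume to pass through the measurement zone. The reservoir size and flow rate below are illustrative numbers, not instrument specifications.

```python
# Flow-rate / measurement-time matching so the whole sample is measured.
def min_measurement_time(reservoir_ml, flow_ml_per_s):
    """Shortest run that passes the entire reservoir through the zone once."""
    return reservoir_ml / flow_ml_per_s

def whole_sample_measured(reservoir_ml, flow_ml_per_s, time_s):
    """True if the measured volume is at least one full reservoir."""
    return flow_ml_per_s * time_s >= reservoir_ml

# e.g., a 500 mL reservoir at 10 mL/s needs at least a 50 s measurement.
needed = min_measurement_time(500.0, 10.0)
```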
Accuracy is determined by how closely a measured value agrees with the accepted value for the measured dimension. Different particle sizing techniques rely on different fundamental measurements and different theoretical models. They are likely to produce different results for the same sample unless the particle system conforms to both models. How accurately an SLS instrument determines particle size ultimately depends on the quality of the scattering pattern and the quality of its measurement. If Mie theory is assumed to describe the scattering phenomenon produced by a properly dispersed system of real particles, and if the scattering pattern is accurately measured, then Mie theory alone is sufficient to extract particle size, particle quantity, modality and distribution information. The new instrument design reports only particle size data following directly from Mie theory without reliance on supplementary theories and with no prior assumptions of modality, distribution type or size range entered into the deconvolution algorithm.
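The deconvolution the text alludes to can be sketched as a linear inversion: the measured pattern is modeled as a kernel matrix (the theoretical pattern contributed by each size class) times the unknown quantity in each class, and the quantities are recovered by least squares. The 3-angle, 2-class kernel below is entirely hypothetical; a real instrument builds a much larger kernel from Mie theory and adds constraints such as non-negativity.

```python
# Minimal sketch of scattering-pattern deconvolution by least squares.
def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def recover_quantities(kernel, pattern):
    """Normal-equation least squares for a kernel with 2 size classes."""
    ata = [[sum(kernel[i][r] * kernel[i][c] for i in range(len(kernel)))
            for c in range(2)] for r in range(2)]
    atb = [sum(kernel[i][r] * pattern[i] for i in range(len(kernel)))
           for r in range(2)]
    return solve_2x2(ata[0][0], ata[0][1], ata[1][0], ata[1][1],
                     atb[0], atb[1])

# Hypothetical per-class scattering patterns at three angles.
kernel = [[1.0, 0.2], [0.5, 0.8], [0.1, 1.0]]
true_q = (2.0, 3.0)
pattern = [row[0] * true_q[0] + row[1] * true_q[1] for row in kernel]
q = recover_quantities(kernel, pattern)  # recovers ~ (2.0, 3.0)
```

The point of the sketch is that once the kernel follows from Mie theory alone, no assumed distribution shape or modality enters the solution, which mirrors the design choice described above.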