
From: Food Quality & Safety magazine, December/January 2013

Good Weighing Practices for the Food Industry

by Klaus Fritsch, PhD and Jean-Luc Quenot

In the laboratory, weighing is only one step in a QC analysis chain, but it strongly influences the overall quality and integrity of the final result. In production, weighing is a key factor in achieving batch uniformity and consistency in dispensing or formulation processes. Proper weighing is thus essential in ensuring continuous adherence to predefined process requirements and avoiding a frequent source of out-of-specification results.

Furthermore, accurate weighing processes help to address some of the most demanding challenges of the food industry, improving public health, consumer safety, productivity, and competitiveness.

This article introduces a scientific methodology for selecting and testing weighing instruments within an integrated qualification approach—good weighing practices. Based primarily on the user’s weighing requirements and prevailing weighing risks, it provides a state-of-the-art strategy to reduce measurement errors and ensure reliable weighing results. Understanding weighing process requirements and important balance and scale properties such as minimum weight is essential to selecting an appropriate system. When the instrument is in use, these requirements and risks are taken into account to establish a specific routine testing scenario.

The higher the impact of inaccurate weighing and the more stringent the weighing accuracy requirements are, the more frequent testing should be. For less risky and stringent applications, however, testing efforts can be reduced accordingly. Risk and life cycle management forms an integrated part of the overall strategy of good weighing practices to bridge the gap between productivity, process quality, safety, and compliance.

OOS Results and Consequences

OOS results have always had a significant impact on consumer safety and product quality, but company productivity is also affected. OOS may result in reduced uptime due to investigations, delayed batch release, and even costly recalls. In recent years, companies have faced increasingly stringent food safety and quality regulations. New challenges concerning food safety and quality are created by developments such as genetically modified organisms or nanotechnology. Furthermore, the rise in international sourcing and trade of food and feed is expected to accelerate this trend.

In light of these issues, along with corresponding changes in international and national laws, standards and inspection processes will be subject to regular revision. One example of recent legislation affecting the industry is FSMA, which went into effect in 2011. FSMA shifts federal regulators’ focus from responding to safety issues to preventing them. Implementation, which is still underway, will lead to enhanced prevention and increased frequency of mandatory FDA inspections. In the past, almost all FDA 483 observations and warning letters were addressed to the pharmaceutical and medical device industry; the focus is now moving toward the food industry.

Weighing is a key activity in most laboratories; however, it’s not well understood, and its complexity is often underestimated. The weighing process is even less understood in the production environment than in the lab. The selection of a scale is affected by external factors such as hygiene, ingress protection, corrosion, the risk of fire or explosion, and the health and safety of the operator. In current practices, all these factors are given higher priority than mere metrological needs. Metrological criteria—the understanding and proper consideration of which are a prerequisite for preventing OOS outcomes—are taken into consideration to an insufficient degree.

More often than not, the qualifications of production operators are lower than those of laboratory technicians. As a consequence, manipulation errors, along with OOS errors, are more frequent in production than in a laboratory.

One frequent practice is to use existing instruments for a different purpose than the one for which they were originally acquired. Unfortunately, the metrological needs of the new application may not clearly match the capability of the recycled scale.

OOS in production is not only an indicator that quality might be at risk; it may also result in a hazard to the health and safety of the consumer, a potential breach of legal trade requirements, and an economic loss for the company. When this happens, raw materials, manpower, and asset utilization are mobilized in a process that ends with poor results. Products must then be reworked or disposed of. In many cases, the detection of an error may trigger tedious and costly recall actions that damage the brand.

Food regulations such as BRC, IFS, SQF, or FSSC 22000 require instruments to be checked or calibrated periodically. For example, the BRC Global Standard for Food Safety, Issue 6, stipulates in Chapter 6.3:

“The company shall identify and control measuring equipment used to monitor CCPs…. All identified measuring devices, including new equipment, shall be checked and where necessary adjusted at a predetermined frequency, based on risk assessment…. Reference measuring equipment shall be calibrated and traceable to a recognized national or international standard and records maintained.”

While the standard calls for instruments to be adjusted when necessary, it remains silent with regard to how accurate results should be defined and verified. The applied principles are consequently diverse throughout the industry. In many cases, the principle of “what you see is what you get” is applied.

In this environment of misconception, scales are the last part of the production chain to be suspected when OOS results occur. OOS then becomes a necessary evil, when it should not.

Measurement Uncertainty and Minimum Weight

State-of-the-art strategies for consistently accurate and reliable weighing consist of scientific methodologies for instrument selection and testing.1 Despite these methodologies, industry misconceptions about weighing remain widespread, including “what you see is what you get.” What do we mean by that? Here’s an example: A user weighs a product on an industrial floor scale and gets a reading of 120 kg, which he believes is the true amount of material. However, this reading might not exactly reflect the amount weighed; in other words, the amount weighed might differ slightly from the instrument reading. This is due to the so-called measurement uncertainty, which applies to every instrument you might think of.

Measurement uncertainty is determined in calibration, and the results are issued in appropriate calibration certificates. In general, the measurement uncertainty of weighing systems can be approximated by a positively sloped straight line—the higher the load on the balance, the larger the (absolute) measurement uncertainty (Figure 1). Looking at the relative measurement uncertainty, which is the absolute measurement uncertainty divided by the load expressed as a percentage, we see that the smaller the load, the larger the relative measurement uncertainty. If you weigh at the very low end of the instrument’s measurement range, the relative uncertainty can become so high that the weighing result cannot be trusted anymore.

Figure 1. Relative measurement uncertainty [%] (= absolute measurement uncertainty/weight)

It is good practice to define accuracy (tolerance) requirements for every weighing process. Weighing in the red area, as indicated in Figure 1, will result in inaccurate measurements, because here the measurement uncertainty of the instrument is larger than the required accuracy of the weighing process. Consequently, there is a specific accuracy limit for every weighing instrument: the so-called minimum sample weight, better known as the minimum weight. This is the smallest amount of material that will satisfy the specific weighing accuracy requirement.
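The minimum-weight concept can be illustrated numerically. The sketch below assumes a simplified linear model for absolute uncertainty, U(m) = u0 + a·m; the coefficients and the required tolerance in the example are hypothetical placeholders, since real values come from the instrument’s calibration certificate.

```python
# Illustrative sketch of the minimum-weight concept.  Assumes a simplified
# linear model for absolute measurement uncertainty, U(m) = u0 + a*m;
# u0 and a are hypothetical placeholders, not values for any real instrument.

def relative_uncertainty(load: float, u0: float, a: float) -> float:
    """Relative uncertainty = absolute uncertainty / load."""
    return (u0 + a * load) / load

def minimum_weight(u0: float, a: float, tolerance: float) -> float:
    """Smallest load for which the relative uncertainty stays within the
    required process tolerance: solve (u0 + a*m)/m <= tolerance for m."""
    if tolerance <= a:
        raise ValueError("required tolerance is unreachable for this instrument")
    return u0 / (tolerance - a)

# Example: u0 = 0.5 g, a = 1e-5, required tolerance 0.1% (0.001).
# Any load below minimum_weight(0.5, 1e-5, 0.001) lies in the "red area"
# of Figure 1, where results cannot be trusted to the required accuracy.
```

Below the computed minimum weight, the relative uncertainty curve crosses above the tolerance line, which is exactly the red area of Figure 1.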

While measurement uncertainty is described in great detail in the literature2,3, we want to emphasize that for weighing small loads on analytical and microbalances, the dominant factor in measurement uncertainty stems from repeatability (expressed as the standard deviation of a series of weighings). Samples and standards that are typically weighed on these balances are usually small loads in comparison with the capacity of the balance.
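Repeatability’s role can be made concrete. One common convention (a sketch here, not a universal rule; check your applicable standard) derives the minimum weight from the standard deviation s of a series of replicate weighings, expanded by a coverage factor k and divided by the required relative tolerance:

```python
import statistics

def min_weight_from_repeatability(readings: list[float],
                                  tolerance: float,
                                  coverage_factor: float = 2.0) -> float:
    """Minimum weight = k * s / tolerance, where s is the repeatability
    (sample standard deviation of replicate weighings of the same load).
    The k*s/tolerance form is a common convention, not a universal rule."""
    s = statistics.stdev(readings)
    return coverage_factor * s / tolerance

# With ten replicate weighings of the same test load and a required
# tolerance of 0.1%, min_weight_from_repeatability(readings, 0.001)
# estimates the smallest net load this balance can weigh to that
# accuracy under the conditions of the test.
```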

Scales follow the same principles as balances, with some additional constraints that arise from the technology used and the size of the instrument. Most scales use strain gauge weighing cells, which lead to a lower resolution than balances. In some cases, the rounding error may be predominant, but for scales of higher resolution, repeatability becomes a decisive contributor to the measurement uncertainty in the lower measurement range of the instrument.

Linearity deviation is often a large contributor to measurement uncertainty, but it can usually be neglected when weighing small samples, where repeatability dominates. Considering that the relative measurement uncertainty diminishes when weighing larger samples, we can conclude that non-linearity plays only a small role in keeping the measurement uncertainty of the instrument below the required process tolerance. We therefore need to focus our attention on repeatability to define the critical limit of a high-resolution industrial scale.

It is important to state that the minimum weight of balances and scales is not constant over time. This is due to changing environmental conditions that affect the performance of the instrument—factors such as vibrations, draft, wear and tear, and temperature changes. The operator also adds variability to the minimum weight, because different users may weigh differently or apply different skills to the instrument.

To ensure that you always operate above the minimum weight determined at calibration (at a particular time, under particular environmental conditions, by a qualified service technician), apply a safety factor; that is, weigh only amounts sufficiently above the minimum weight as determined at calibration. For standard weighing processes, a safety factor of two is commonly used, provided environmental conditions are reasonably stable and operators are trained. For very critical applications or a very unstable environment, an even higher safety factor is recommended.
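The safety-factor rule can be sketched as a simple acceptance check; the function names and the factor of two in the example are illustrative:

```python
def smallest_allowed_net_weight(calibrated_min_weight: float,
                                safety_factor: float = 2.0) -> float:
    """Smallest net amount that should be weighed in routine use: the
    minimum weight determined at calibration, scaled by a safety factor
    that absorbs day-to-day environmental and operator variability."""
    return calibrated_min_weight * safety_factor

def weighing_acceptable(net_sample: float,
                        calibrated_min_weight: float,
                        safety_factor: float = 2.0) -> bool:
    """True only if the net sample stays above the safety margin."""
    return net_sample >= smallest_allowed_net_weight(calibrated_min_weight,
                                                     safety_factor)

# A balance with a 10 mg minimum weight at calibration and the commonly
# used safety factor of 2 should not be used below 20 mg.
```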

Another frequent misconception is that the weight of the tare vessel counts toward the minimum weight requirement. In other words, if the tare weighs more than the minimum weight, any quantity of material can be added, and the minimum weight requirement is automatically fulfilled. This suggests that with a large enough tare container, you could weigh a sample of just one gram on an industrial floor scale with a three-ton capacity and still comply with the applicable process accuracy. Given the fact that the rounding error of the digital indication is always the lowest limit of the overall measurement uncertainty, it’s clear that such a small amount of material in any tare container cannot be weighed with accurate results. Although this is an extreme example, it clearly shows that this widespread misinterpretation does not make any sense.

Just recently, we encountered another misconception involving a dispensing application with the measured minimum weight of the scale in question at 100 kg. The company stated that its practice was to dispense 20 kg at a time, always leaving more than 100 kg of substance in the container to adhere to the minimum weight requirement. Its employees did not understand that they would have to dispense at least 100 kg—instead of 20 kg—to comply with their own accuracy requirement.
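Both misconceptions reduce to the same rule: the minimum-weight requirement applies to the net amount of material weighed in or dispensed, never to the gross weight or to what remains in the container. A minimal sketch (function name illustrative):

```python
def net_meets_minimum(net_amount: float, minimum_weight: float) -> bool:
    """True only if the NET material weighed in (or dispensed) reaches
    the minimum weight; the tare vessel and any material left in the
    container do not count toward the requirement."""
    return net_amount >= minimum_weight

# The dispensing example from the text: a scale with a measured minimum
# weight of 100 kg.  Dispensing 20 kg fails the accuracy requirement even
# if more than 100 kg of substance is left in the container.
```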

Routine Testing

“Measuring equipment shall be calibrated and/or verified at specified intervals...against measurement standards traceable to international or national measurement standards.” — ISO 9001:2008, 7.6, Control of Monitoring and Measuring Devices.

“The methods and responsibility for the calibration and recalibration of measuring, test and inspection equipment used for monitoring activities outlined in Pre-requisite Program, Food Safety Plans and Food Quality Plans and other process controls...shall be documented and implemented.” — SQF 2000 Guidance – Chapter “Methods & Responsibilities of Calibration of Key Equipment.”

These statements delegate the responsibility for the correct operation of weighing instruments to the user. Statements like these are usually vague; they are intended as general guidelines and therefore cannot be used for daily operations. Questions such as “How often should I test my weighing instrument?” arise in situations in which guidance is needed to design standard operating procedures. Such guidelines should not be too exhaustive, and thus costly and time-consuming, nor too vague, and thus insufficient to assure proper functioning. The right balance between consistent quality and sufficient productivity must be found. The following test procedures for weighing instruments are recommended for normal use:

  • Calibration in situ by authorized personnel, including the determination of measurement uncertainty and minimum weight under normal utilization conditions. The aim is to assess the complete performance of the instrument by testing all relevant weighing parameters, made transparent to the user by a calibration certificate. Calibration is an important step to take after the instrument is installed and the necessary functional tests are performed.
  • Routine test of the weighing system, to be carried out in situ by the user on weighing parameters that have the greatest influence on the performance of the balance or scale; the aim is to confirm the suitability of the instrument for the application.
  • Automatic tests or adjustments, where applicable, using built-in reference weights; the aim is to reduce the effort of manual testing stipulated by specific FDA guidance.4

Test Frequencies

The routine testing procedures and corresponding frequencies are based on:

  • The required weighing accuracy of the application;
  • The impact of OOS results (e.g., for business, consumer, or environment), in case the weighing instrument does not adhere to the process-specific weighing requirements; and
  • The detectability of a malfunction.

The more stringent the accuracy requirements of a weighing process are, the higher the probability is that results will fail to comply. Therefore, test frequency must be increased. Similarly, if the severity of the impact increases, testing should be performed more frequently to offset the likelihood of noncompliance (Figure 2). If malfunction of the weighing instrument is easily detected, test frequency should be decreased. The frequency of testing ranges from daily, for risky applications (user or automatic tests), to weekly, monthly, quarterly, semi-annually, and yearly.

Figure 2. Test frequencies increase as a function of more stringent weighing accuracy and increasing severity of impact in case of a weighing that does not meet the process requirements (qualitative chart).

Our experience is that many food companies tend to test their laboratory balances quite frequently. A proper risk-based approach can reveal whether it is necessary to conduct testing so often and whether these efforts can be reduced without compromising data quality. Furthermore, the applied test procedures might not always be appropriate. While many companies take one or several test weighings to assess the balance at different parts of the weighing range, the importance of the repeatability test is often underestimated.

Surprisingly, the approach to production testing often differs from that used with lab testing. Often, only rudimentary procedures, or none at all, are found on the production floor. This leads to inconsistent quality and OOS results. Only a few companies understand the importance of establishing a robust routine testing scheme. For many of these disciplined users, the practice is to reproduce in production what they have implemented in the laboratory. This is not appropriate, however, because probability, severity, and detectability differ significantly in the two settings.

A sound understanding of the instrument's functionality and its weighing parameters, combined with the necessary understanding of the process-specific weighing requirements, eliminates these misconceptions and helps prevent critical weighing errors that might result in OOS outcomes in both the laboratory and the production environment.

Implementing good weighing practices in a risk-based life-cycle approach for evaluating, selecting, and routine testing of balances and scales can reduce measurement errors and ensure reliable weighing processes.


The key requirement of effective weighing practices is to ensure that the minimum weight for the required accuracy is lower than the smallest amount of material the user expects to weigh. Furthermore, an appropriate safety factor should be applied to compensate for fluctuations in the minimum weight caused by environmental and operator variability.

An understanding of the weighing process requirements, together with an understanding of the basic principles of balance and scale properties such as measurement uncertainty and minimum weight, enables the user to realize an integrated qualification strategy. Furthermore, a frequent source for OOS problems is eliminated in both the laboratory and the production environment. Appropriate and meaningful routine tests help the user meet specific weighing requirements and avoid unnecessary and costly testing. Risk and life cycle management then become an integral part of an overall strategy to bridge the gap between productivity, process quality, safety, and compliance.

Klaus Fritsch, PhD, is compliance manager within the Global Business Area, Laboratory & Weighing Technologies at Mettler-Toledo AG. He helps companies achieve compliance with their applicable regulations when using weighing systems. Jean-Luc Quenot is the head of global sales support and market management of the Good Weighing Practices Competence Center at Mettler-Toledo AG. He provides support for clients’ evaluation and practical implementation of compliant weighing systems.


  1. Reichmuth A, Fritsch K. Good weighing practices for the pharmaceutical industry – risk-based qualification and life cycle management of weighing systems. Pharm Eng. 2009;29(6).
  2. U.S. Food and Drug Administration. Questions and Answers on Current Good Manufacturing Practices, Good Guidance Practices, Level 2 Guidance – Equipment. Available at: Accessed December 3, 2012.
  3. Guidelines on the Calibration of Non-Automatic Weighing Systems, Calibration Guide Version 3.0, March 2011, EURAMET. Available at: Accessed November 27, 2012.
  4. U.S. Food and Drug Administration. Questions and Answers on Current Good Manufacturing Practices, Good Guidance Practices, Level 2 Guidance – Equipment. Available at: Accessed November 27, 2012.


