Application of Statistical Techniques in Food Science: Chemical Analysis Data
P.Muredzi
School of Industrial Sciences and Technology
Harare Institute of Technology, Ganges Rd, Belvedere
Box BE 277, Harare, Zimbabwe
pmuredzi@hit.ac.zw
Abstract— Food practitioners encounter data interpretation and dissemination tasks on a daily basis. Data comes not only from laboratory experiments, but also via surveys on consumers, as the users and receivers of the end products. Understanding such diverse information demands an ability to be, at least, aware of the process of analysing data and interpreting results. This knowledge and ability gives undeniable advantages in the increasingly numerate world of food science, but it requires that the practitioner have some experience with statistical methods. Statistical methods can be applied to evaluate the trustworthiness of data obtained by any method of measurement. This application has been used extensively in evaluation of chemical data generated by analytical laboratories. This review paper discusses the application of statistical techniques in food science, with emphasis on chemical analysis data. The paper also gives a case study on statistical technique application, citing an example of “Precision Calculations for Chemical Analysis Data”.

Keywords— chemical analysis, precision, accuracy, measurement uncertainty, experimental error, repeatability

I. INTRODUCTION

There are many applications of statistics in the field of food studies. One of the earliest was in agriculture, where R. A. Fisher used experimental design to partition variation and to enable more precise estimation of effects in crop plot experiments. There was even an early sensory experiment on tea tasting (Fisher 1966), and since then statistical applications have increased as food science emerged as a distinct applied science subject. Some examples of the form of statistical applications in food are given in Table 1. Preparation of data summaries is one general application of statistics that can be applied across the board. It is one of the simplest applications and can be done manually if necessary, depending on the requirements. A variety of simple graph and table methods are possible, which allow rapid illustration of results. These summaries are taken further in statistical quality control, where measures such as the mean value are plotted ‘live’ as a process is on-going. The graphs (control charts) used include limit lines, set by using other statistical methods, which allow detection of out-of-limit material, e.g. food product packs which are below the statutory minimum net weight.

Statistical methods can also be applied to evaluate the trustworthiness of data obtained by any method of measurement. This application has been used extensively in evaluation of chemical data generated by analytical laboratories. The statistical analysis provides an evaluation of how dependable the analytical results are. This can range from within-laboratory to between-laboratory comparisons, globally. Enforcement agencies rely on such checks so that they can monitor adherence to legal requirements with confidence. Food research brings in analysis of differences and relationships. Here, hypotheses are put forward on the basis of previous work or new ideas, and then the magnitudes of effects in sample statistics can be assessed for significance, for instance, examination of the change in colour pigment content during frozen storage of vegetables. Examination of relationships requires that different measurement systems are applied and then compared. There are many examples of this in studies of food where data from instrumental, sensory and consumer sources are analysed for interrelationships. The process of sampling of items, including food material and consumer respondents, can be controlled using statistical methods, and here a statistical appreciation of variability is important. Experimental design takes this further, where sources of such variation are partitioned to improve precision, or controlled and minimised if extraneous. A common example is the unwanted effect of order of samples in the sensory assessment of foods – design procedures can minimise this. In fact, all the above examples rely on design procedures if the result is to be valid and adequately interpreted.
Table 1: Some applications of statistics in the food field

Method | Application
Summaries of results | Tables, graphs and descriptive statistics of instrumental, sensory and consumer measures of food characteristics
Analysis of differences and relationships | Research applications on differences in food properties due to processing and storage; correlation studies of instrumental and sensory properties
Monitoring of results | Statistical control of food quality and parameters such as net filled weight
Measurement system integrity | Uncertainty of estimates for pesticide and additive levels in food
Experimental design | Development and application of balanced order designs in sensory research

II. DESCRIPTION

1.1. The approach

Progress in food science and all its associated disciplines is underpinned by research activity. New information is gathered by investigations and experiments, and in this way knowledge is advanced. The scientific approach to research and exploration follows an established paradigm called positivism. This postulates that events and phenomena are objective and concrete, able to be measured, and can be explained in terms of chemical and physical reactions. All scientists are familiar with this viewpoint, which is described as the scientific deductive approach (Collis and Hussey 2003). It is largely based on empirical methods, i.e. observations from experiments. The scientific style of approach can be used for any type of investigation in any subject. The procedure uses deduction from theory based on current knowledge. To advance knowledge, experiments can be designed to test advances on existing or new theory, using a hypothesis process. The findings can then be disseminated and knowledge increased. Results are generalised and can be used to establish new theories and to model processes and event reactions, which in turn allows prediction in the formation of new hypotheses. The term quantitative research is also used in reference to the scientific approach. This strictly refers to the nature of the data generated, but it implies the deductive positivistic viewpoint. In this process, the researcher is assumed to be objective and detached. Ultimately, the deductive method searches for an explanation on the basis of cause–effect relationships. Without such procedures there would be no progress, and they form the foundation of the scientific approach in many food disciplines.

A more recent approach is that of phenomenology, where an inductive approach can be used to examine phenomena on the basis that they are socially constructed. Theories and explanations are generated and built up from data gathered by methods and techniques such as interviews (Blumberg et al. 2005). These methods are often described as qualitative, which again refers to the data, which are in the form of words rather than numbers. The modern food practitioner needs to be aware of such data as there are several qualitative methods (e.g. interviews and focus groups) used in sensory and consumer work. Analysis of data from qualitative methods can be summarised by numerical techniques such as counting the incidence of certain words and phrases, but usually statistical analysis as such is not involved.

Typical use of the scientific approach in food studies entails identifying a topic for research or investigation, then posing a research question(s). Deductive reasoning from existing knowledge is examined to develop a research hypothesis. A plan can then be drawn up with an experimental design and specification of the measurement system, etc. Data are gathered and then statistical analysis is used to test the hypothesis (quantitative). The scope of the procedure can range from a simple investigation of the ‘fact-finding’ type, e.g. determination of chemical content values, to a complex experimental design, e.g. a study on the effect of temperature, pressure and humidity levels on the drying properties of a food. In this latter case, the objective would be to identify any significant differences or relationships. Experimental control means that results can be verified and scrutinised for validity and other aspects. Simple experiments do not usually require stating of hypotheses, etc. In circumstances where differences or relationships are being examined, e.g. ‘Does process temperature affect yield of product?’, a more formal procedure is used or, at least, assumed (Fig. 1). The conclusion of one investigation is not the end of the process, as each piece of work leads to new ideas and further studies.

Figure 1: The approach to investigation.
1.2. Chemical Analysis

The chemical analyst is interested in the end result, but also in the uncertainty of the estimation; some researchers state that unless a measure of uncertainty is included the results themselves are useless (Mullins 2003). This view could well apply to all scientific measures, but there are still occurrences of it not being adopted for chemical data. Many investigations have taken place to examine error components and to quantify their contribution to uncertainty. Also, cost considerations are included in these studies, as reducing uncertainty usually means additional analyses and hence cost in terms of time, resources and personnel. The interest here is in the balance between the gain in certainty and the increased cost to the laboratory (Lyn et al. 2002; FSA 2004a). Another unique aspect of studies in this topic is that the uncertainty is examined not only for location estimates such as the mean, but also for the level of variability – thus the uncertainty of the standard deviation is also of interest. Method proficiency testing is one aspect of this protocol that has been developed for some common standard methods, with measures all focused on uncertainty in analytical chemistry. In addition to analysis coming under this latter umbrella, where analytes such as pesticides are determined at very low levels, there are many proximate analyses and ‘crude content’ methods used for food. These may exhibit higher levels of uncertainty, but their results, and in fact those from any instrumental measure, can be subjected to some of the calculations detailed below. Food analysis methods have received special attention via The Food Analysis Performance Assessment Scheme (FAPAS). Patey (1994) described the initial stages and progress of this initiative – there was some improvement, but not for all analytes and all laboratories. A relatively simple check on performance for proficiency testing schemes is based on calculation of a form of z-score.

[Figure: sources of variation in an analytical measurement – Operator, Lab, Preparation, Run, Method]

It is crucial that this source of error is quantified and removed, or at least accounted for in any analytical determination, although this is not always done (O’Donnell and Hibbert 2005). Bias can be calculated as the error of the mean, and by the location of the range specified by a confidence interval. The ‘true value’ is represented by reference samples or the nearest equivalent.

III. GENERAL ANALYSIS

Errors and measurement uncertainty

The term ‘experimental error’ is used extensively in student lab books to account for all manner of unexpected results. While this may be appropriate, the error can be allocated to a number of possible sources, which can usually be identified as discussed below. Gross errors (e.g. a misread balance or grossly incorrect additions/omissions of reagents) are usually accidental in nature and with care they can be avoided. In the Kjeldahl analysis an obvious gross error would be seen if there was omission of the catalyst for one of the replicates. Rejection of that value could be considered and there are statistical tests for such “outlier” values. Thus these errors may not affect all measurements in a set and often can be easily detected. Other types of error occur even when the greatest care is taken. Systematic errors (e.g. a balance which requires servicing and calibration, unrecognized faulty technique by the analyst, or a method-related systematic error) usually affect all the analyses in a similar manner.
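The z-score check on laboratory performance in proficiency-testing schemes, mentioned above, can be sketched in a few lines. The formula z = (x - X)/σp, with x the laboratory's result, X the assigned value and σp the scheme's target standard deviation, and the conventional interpretation bands (|z| ≤ 2 satisfactory, |z| ≥ 3 unsatisfactory) are standard for such schemes; the numerical values below are hypothetical and not taken from this paper.

```python
def z_score(x, assigned, sigma_p):
    # Proficiency-testing z-score: (result - assigned value) / target SD
    return (x - assigned) / sigma_p

def rating(z):
    # Conventional interpretation bands for proficiency z-scores
    z = abs(z)
    if z <= 2:
        return "satisfactory"
    if z < 3:
        return "questionable"
    return "unsatisfactory"

# Hypothetical example: reported 8.5% protein, assigned value 8.8%,
# target standard deviation 0.2%
z = z_score(8.5, assigned=8.8, sigma_p=0.2)
print(round(z, 2), rating(z))  # -1.5 satisfactory
```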
The systematic error effect is also known as bias and affects accuracy. Note that even if a balance is calibrated (i.e. set to weigh accurately using certified weights) it may still give an inaccurate reading if the balance model is unable to read beyond a certain level. Thus the lack of calibration is a determinate error, and can be changed, but the other is constant. Calibration improves accuracy and reduces or removes any bias which instruments may have. A blank determination is another aid to detection of a systematic error. Another source of error is detected if the test sample is analysed more than once. Even if gross and systematic errors are absent, repeated measurements may show some variation. These are caused by random errors, e.g. small errors in weighing, use of volumetric devices and other analysis instrumentation. Even highly trained analysts using top-of-the-range equipment may be unable to avoid random error. The random error effect in a series of measurements causes the individual results to fall on either side of the mean. They may be accidental in nature but are indeterminate, as they are difficult to remove entirely. Random errors affect the precision of the analysis method. These errors can occur at any stage of the analysis and accumulate to produce the overall error. Some errors augment one another whereas others may cancel one another out. The replicate values in Table 3 are all different, and possible error sources could be deduced by examination of each stage of the Kjeldahl analysis. An estimate of error magnitude in the final results can now be calculated.

Table 3: Quality control laboratory data for percentage of crude protein (N2 x 6.25) analysis on a food product

Replicate number | Analysis A | Analysis B
1 | 7.3 | 8.4
2 | 8.5 | 9.1
3 | – | 8.7
4 | – | 8.2
Mean | 7.9 | 8.6

Note: True/most probable value = 8.8 per cent

Accuracy and precision in measurement

Accuracy is the extent of agreement between the determined value and the true or most probable value; precision is the extent of agreement among a series of measurements of the same quantity. It is important to note that with these terms the presence of one does not automatically imply the other: a high degree of precision does not imply accuracy, and vice versa.

Measures of accuracy

The degree of concordance with the true value can be calculated as the error of the mean (EM), which can also be expressed as the relative error of the mean (REM):

EM = M - T
%REM = 100 x EM / T

Where:
EM = error of the mean
T = “true” or “actual” value
M = mean value.

The true value may not be available for unknown samples, unless an independent analysis has been performed giving a confident estimate. If an indication only is required, then a rough estimate can be given by “typical” values from textbooks and/or food product labels. In the food production situation (Table 3), the true expected value can be calculated for quality control purposes from knowledge of the chemical composition of the specified ingredients. Alternatively, a standard or control material of known composition can be analysed along with the unknown under the same analysis conditions, thus enabling the above calculation. A suitable crude test material can be made up by the analyst, formulated from constituent chemicals or purified constituents. Another possibility used in some laboratories is use of a previously analysed material which is kept in stable storage and sampled along with the new samples. For more critical circumstances a CRM (certified reference material) would be required. While many RMs are available, not all common constituents such as nitrogen are found in the certified lists, and the matrix (i.e. the physical and chemical “makeup”) of the RM may be different from the unknown food sample. There have been some developments to answer these food-specific requirements, e.g. the FAPAS (food analysis performance assessment scheme) initiative run by MAFF (Ministry of Agriculture, Fisheries, and Food), which has food product test materials for proximate analyses such as nitrogen/protein. The use of a hierarchy of reference standards, from secondary RMs to certified RMs and ultimately primary RMs, forms part of the traceability chain for chemical composition instigated by VAM. The presence of errors will affect the magnitude of the percentage REM obtained. Assuming the absence of gross and systematic errors, a percentage REM of zero is possible but unlikely, due to random errors. Usually negative or positive percentage REM values are obtained, representing results which are below or above the true value respectively. These statistics can now be calculated for the data of Table 3. As could be easily deduced by inspection of the mean values, both analyses have underestimated percentage protein, and the magnitude of this is shown (Table 4) by the negative percentage REMs. Analysis B has a greater agreement with the most probable value.
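As a check on the accuracy calculations, the %REM figures reported in Table 4 can be reproduced from the Table 3 replicate data; a minimal standard-library sketch:

```python
from statistics import mean

T = 8.8  # true/most probable value (% protein), from Table 3
analysis_a = [7.3, 8.5]
analysis_b = [8.4, 9.1, 8.7, 8.2]

def rem_percent(values, true_value):
    # EM = M - T; %REM = 100 * EM / T
    em = mean(values) - true_value
    return 100 * em / true_value

print(round(rem_percent(analysis_a, T), 1))  # -10.2
print(round(rem_percent(analysis_b, T), 1))  # -2.3
```

Both analyses give negative %REM, confirming the underestimation of protein noted in the text.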
Measures of variability (precision)

The standard deviation (SD) and the mean absolute deviation (MD) introduced previously are measures of precision. These can be standardized as the percentage coefficient of variation (%CV; also known as the relative standard deviation) and the percentage relative mean deviation (%RMD) respectively:

%CV = 100 x SD / M
%RMD = 100 x MD / M

where
SD = standard deviation
M = mean
MD = mean deviation.

These measures are related (MD is approximately 0.8 times SD). Both are included here as MD is perhaps easier to understand and calculate. In the form above, erroneous comparisons between data sets possessing different measurement scales are avoided, e.g. an MD of ten for a mean of 10,000 gives a very low percentage RMD (0.1 per cent), but the same MD for a mean of 100 gives a very high RMD (10 per cent). Two other related measures are important. Repeatability is the precision obtained when a method of analysis is repeated under the same conditions, i.e. by the same analyst using the same equipment, on the same sample material, etc. (also referred to as “within laboratory” or “within run” precision). The analyses in Table 3 can be assumed to have been done under repeatability conditions. Reproducibility is the precision obtained when the same method of analysis is repeated on the same test material but under different conditions, i.e. a different analyst, a different set of equipment, a different laboratory or even a different method (also known as “between run” or “between laboratory” precision). It is usual to find that repeatability conditions result in greater precision than those of reproducibility. In fact, the poor reproducibility shown by different laboratories when analysing the same samples was one of the reasons for instigating the VAM project. The magnitude of the percentage CV (or percentage RMD) will range from zero upwards. “Perfect” precision would produce a percentage CV of zero and, although this can occur, more commonly small values are obtained, caused by random error. Large percentage CV values may point to gross errors. Note that even if the method is perfectly precise, repeated values could still vary owing to inherent variation within the food material itself. Calculation of precision for the data of Table 3 shows that precision is relatively poor in set A (high %CV and %RMD values). Pertinent to these measures is the number of repeated measurements.

Table 4: Accuracy measures for percentage of protein data

 | Analysis A | Analysis B
Number of replicates | 2 | 4
Mean (%) | 7.9 | 8.6
%REM | -10.2 | -2.3

Note: Most probable value = 8.8 per cent

Acceptable level of replication

The level of replication is an important consideration as it affects the statistical measures and the cost of the analysis in terms of time and personnel. In practice the costs can limit the degree of replication. For routine analyses with established techniques, modern instruments and trained analysts, minimal replication may be common, except where the technique is very rapid and low in cost, e.g. as with modern nitrogen analysers based on the Dumas method (2.5 minutes per sample). Thus duplicate determinations, or even a single one done along with a reference or standard analysis for the run, may be typical. If a single determination is made there is no reference point for error detection. Statistically, the greater the number of determinations, the more reliable or accurate the result. Whether or not a low level of replication is acceptable depends on several factors: the experience of the analyst and the laboratory itself; the method of analysis and its history with respect to the food in question; and the importance of the decisions which are to be based on the results. Certainly, low levels of replication in isolation provide a weak basis for making confident decisions regarding the data obtained, e.g. a standard deviation based on only two values is an extremely shaky foundation on which to base further inferences. The difference in magnitude between the SD values (Table 5) for two and four replicates, for data sets with similar ranges, illustrates this point. This does not, however, preclude the routine use of duplicates. The final consideration is how to use the calculated measures (Tables 4 and 5) to answer questions concerning the acceptability of the obtained levels of precision and accuracy.

Table 5: Precision measures for percentage of protein data

 | Analysis A | Analysis B
Number of replicates | 2 | 4
Mean (%) | 7.9 | 8.6
Range (%) | 1.2 | 0.9
MD | 0.6 | 0.3
SD | 0.85 | 0.39
%RMD | 7.6 | 3.5
%CV | 10.74 | 4.55

Acceptance level for precision

The deviation of a set of replicates around the mean depends on the precision of the measurement system and on the degree of variability of the population from which the samples originate. If both are of a completely unknown nature, then whether or not to accept a set of replicates cannot be decided easily. Some measure of variability must be established. This can be done by carrying out an initial set of a larger number of replicates than is envisaged for routine use, e.g. at least ten, or if appropriate, by proceeding with duplicate analyses without considering variability until a “data bank” of typical values has been established, from which an estimate of deviation in the form of the standard deviation can be calculated; i.e. a comment concerning the “expected variation” for a set of replicates cannot be made until some measure of variability has been established. Once this is available, an error estimate known as a confidence interval (CI) can be calculated for the population mean of the measurement.
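The precision measures just described can likewise be computed directly from the Table 3 replicate data; this minimal standard-library sketch reproduces the %CV values shown in Table 5, with %RMD following from MD/M:

```python
from statistics import mean, stdev

analysis_a = [7.3, 8.5]            # Table 3, Analysis A (duplicates)
analysis_b = [8.4, 9.1, 8.7, 8.2]  # Table 3, Analysis B (four replicates)

def cv_percent(values):
    # %CV (relative standard deviation) = 100 * SD / M
    return 100 * stdev(values) / mean(values)

def rmd_percent(values):
    # %RMD = 100 * MD / M, with MD the mean absolute deviation
    m = mean(values)
    md = sum(abs(v - m) for v in values) / len(values)
    return 100 * md / m

print(round(cv_percent(analysis_a), 2), round(cv_percent(analysis_b), 2))    # 10.74 4.55
print(round(rmd_percent(analysis_a), 1), round(rmd_percent(analysis_b), 1))  # 7.6 3.5
```

The markedly higher %CV and %RMD for Analysis A illustrate the poorer precision of that set noted in the text.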
It gives a region within which we are confident that the population mean will be located, with a specified probability or “certainty” level. This statistic can be used as an estimate of bias (accuracy), and the width of the interval gives another perspective on precision, as it emphasizes the effect of sample size. To understand a confidence interval we need to appreciate the nature of a population distribution. Put simply, if we know how a population is “mapped out”, then it can be used to make estimations based on samples taken from that population. Imagine that the food product (Table 3) is analysed a very large number of times for crude protein content and the grouped values are plotted on a histogram – it is likely that a rough inverted-cone shape would be obtained (see Figure 1). Increasing the number of points would have a smoothing effect on the shape, and with a very large number a bell-shaped curve would be obtained. Ultimately, with an infinitely large number of values, the curve would be smooth.

Figure 1: Frequency distribution of crude protein content determinations (per cent)

… from such a population will approximate to normality. Thus this distribution will also possess the above properties and provides the basis for determining the confidence interval for the population mean based on the sample mean. Large sample sizes provide adequate estimates of the population parameters to allow calculation of the confidence interval using the proportions described above. For small samples of the order likely to be used in chemical analysis, a more appropriate distribution “standard” for making estimates is the t-distribution – it is similar in shape and characteristics to the normal distribution but is wider and flatter, having more “spread” (especially for small numbers of samples or replicates). Thus the interval will be wider, reflecting the increased uncertainty. A measure of the degree of confidence must be specified, and it is expressed on a probability scale of zero to 100 per cent, with 100 per cent representing absolute certainty. Unfortunately, choosing the 100 per cent level of confidence would result in an interval of very large width, unusable in practical situations. Usually the 95 or 99 per cent limit is selected, representing high degrees of confidence. The confidence interval limits are calculated using the t-value from the t-distribution based on the number of replicates:

CI = M ± t x SD / √n

Where
n = number of replicates.
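The confidence-interval formula can be illustrated with the Analysis B replicates from Table 3. The two-sided 95 per cent t-value for 3 degrees of freedom, 3.182, is taken from standard t-tables; this is a sketch of CI = M ± t x SD/√n using only the standard library:

```python
from math import sqrt
from statistics import mean, stdev

# 95% CI for the population mean: CI = M +/- t * SD / sqrt(n),
# with t from the t-distribution for n - 1 degrees of freedom.
# For n = 4 (3 df), the two-sided 95% t-value is 3.182 (standard tables).

analysis_b = [8.4, 9.1, 8.7, 8.2]  # Table 3, Analysis B
n = len(analysis_b)
m = mean(analysis_b)
t = 3.182
half_width = t * stdev(analysis_b) / sqrt(n)

print(round(m, 1), "+/-", round(half_width, 2))  # 8.6 +/- 0.62
```

So even with four replicates the 95 per cent interval spans over a full percentage point, illustrating the effect of small sample size on interval width.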
r = √2 x t x SD

Where
r = estimated variability or repeatability which must not be exceeded;
t = value from tables, based on the larger original number of initial analyses;
SD = standard deviation of the original number of repeat determinations under repeatability conditions.

Assuming such circumstances, an additional analysis based on ten crude protein determinations is given below (Table 6), along with the calculated r statistic. The t-value is smaller as it is based on the original ten determinations. Thus we would expect duplicate crude protein determinations to differ by less than 1.1 per cent, so although Analysis A looks more favourable now, it could still be rejected on these grounds. Indeed, in the author’s experience of the manual Kjeldahl technique on a range of food products, the precision of Analysis B (or better) is more typical, and it is likely that a gross error has occurred in Analysis A. Following the above procedure now gives a more definite guide to accepting the level of precision.

IV. ACTUALISATION

Case Study: Statistical Technique Application Example – Precision Calculations for Chemical Analysis Data (Source: J. A. Bower 2009)

Data gathered during routine chemical analysis of moisture content in foods were examined for the level of precision. Mean values were in the range 70–72 g/100 g and, based on the data bank, the population standard deviation was taken as 0.35 g/100 g. A duplicate measure was carried out under repeatability conditions.
Table 4A: Repeatability Calculation Data (Excel)

Data | Moisture content (g/100 g)
Duplicate 1 | 71.5
Duplicate 2 | 70.9
Mean | 71.2
sd (population) | 0.35
sd (sample) | 0.42
t (95%, 1 df) | 12.71
repeatability (z) | 0.97
repeatability (t) | 7.62

… laboratories, hence different technicians, reagents, times, etc. The definition is calculated in a similar manner to that for repeatability, with the inclusion of the ‘different laboratory effect’: reproducibility is the magnitude of the interval for two determinations by any two laboratories. The calculation reflects the wider source of variation by incorporating the variance of both within- and between-laboratory sources:

R = √2 x t x √(SDr² + SDL²)

where SDr is the within-laboratory (repeatability) standard deviation and SDL is the between-laboratory standard deviation.
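A minimal Python sketch of these repeatability and reproducibility calculations follows. The √2 x t x SD form reproduces the repeatability values shown in Table 4A (0.97 and 7.62); the between-laboratory standard deviation used for the reproducibility line is a hypothetical value for illustration only, since the corresponding between-laboratory data are not shown here.

```python
from math import sqrt
from statistics import stdev

# Repeatability limit for duplicates: r = sqrt(2) * t * SD.
# With the data-bank population SD (0.35, sigma treated as known) the
# normal z-value 1.96 is used; with the SD of the duplicates themselves
# (only 1 degree of freedom) the t-value is 12.706.
dups = [71.5, 70.9]      # moisture content, g/100 g (Table 4A)
sd_sample = stdev(dups)  # approx. 0.42

r_z = sqrt(2) * 1.96 * 0.35          # 0.97, matches Table 4A
r_t = sqrt(2) * 12.706 * sd_sample   # 7.62, matches Table 4A

# Reproducibility combines within- and between-laboratory variances:
# R = sqrt(2) * t * sqrt(sd_r**2 + sd_L**2). The between-laboratory
# SD below (0.5) is a hypothetical figure for illustration only.
sd_L = 0.5
R = sqrt(2) * 1.96 * sqrt(0.35**2 + sd_L**2)

print(round(r_z, 2), round(r_t, 2), round(R, 2))  # 0.97 7.62 1.69
```

The contrast between r_z and r_t shows how heavily a limit based on only two replicates (1 degree of freedom) is penalised by the t-distribution.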
Table 4C: Reproducibility with interaction (Excel)

… consequences. Quantification of toxins in food and nutrient content determination rely on dependable methods of chemical analysis. Statistical techniques play a part in the monitoring and reporting of such results. This gives confidence that results are valid, and consumers benefit in the knowledge that certain foods are safe and that diet regimes can be planned with surety. Other instrumental and sensory measures on food also receive statistical scrutiny with regard to their trustworthiness. These aspects are also important for food manufacturers, who require assurance that product characteristics lie within the required limits for legal chemical content, microbiological levels and consumer acceptability. Similarly, statistical quality control methods monitor online production of food to ensure that manufacturing conditions are maintained and that consumer rights are protected in terms of net weights, etc. Food research uses statistical experimental design to improve the precision of experiments on food. Thus, manufacturers and consumers both benefit from the application of these statistical methods. Generally, statistics provides higher levels of confidence and uncertainty is reduced. Food practitioners apply statistical methods, but ultimately, the consumer benefits.
… systems should cover mostly laboratory measurements.

Consumer measures – should refer to questionnaire measures in surveys, such as consumers’ views and opinions on irradiated foods. This should cover consumer applications, which are usually non-laboratory in nature.

VII. CONCLUSION

Food issues are becoming increasingly important to consumers, most of whom depend on the food industry and other food workers to provide safe, nutritious and palatable products. These people are the modern-day scientists and other practitioners who work in a wide variety of food-related situations. Many will have a background of science and are engaged in laboratory, production and research activities. Others may work in more integrated areas such as marketing, consumer science and managerial positions in food companies. These food practitioners encounter data interpretation and dissemination tasks on a daily basis. Data come not only from laboratory experiments, but also via surveys on consumers, as the users and receivers of the end products. Understanding such diverse information demands an ability to be, at least, aware of the process of analysing data and interpreting results. In this way, communicating information is valid. This knowledge and ability gives undeniable advantages in the increasingly numerate world of food science, but it requires that the practitioner have some experience with statistical methods. Unfortunately, statistics is a subject that intimidates many. One need only consider some of the terminology used in statistics text titles (e.g. ‘fear’ and ‘hate’; Salkind 2004) to realise this. Even the classical sciences can have problems. Professional food scientists may have received statistical instruction, but application may be limited because of ‘hang-ups’ over emphasis on the mathematical side. Most undergraduate science students and final-year school pupils may also find it difficult to be motivated by this subject; others with a non-mathematical background may have limited numeracy skills, presenting another hurdle in the task. These issues have been identified in general teaching of statistics, but like other disciplines, application of statistical methods in food science is continually progressing and developing. Statistical analysis was identified, two decades ago, as one subject in a set of ‘minimum standards’ for training of food scientists at undergraduate level (Iwaoka et al. 1996). Hartel and Adem (2004) identified the lack of preparedness for the mathematical side of food degrees, and they describe the use of a quantitative skills exercise for food engineering, a route that merits attention for other undergraduate food science courses. Unfortunately, for the novice, the subject is becoming more sophisticated and complex. Recent years have seen this expansion in the world of food science, in particular in sensory science, with new applications. Research scientists in the food field may be cognizant of such publications and be able to keep abreast of developments. The food scientist in industry may have a problem in this respect and would want to look for an easier route, with a clear guide on the procedures and interpretation, etc. Students and pupils studying food-related science would also be in this situation. Kravchuk et al. (2005) stress the importance of application of statistical knowledge in the teaching of food science disciplines, so as to ensure an on-going familiarity through continual use. Some advantages of being conversant with statistics are obvious. An appreciation of the basis of statistical methods will aid the making of conclusions and decisions on future work. Other benefits include the increased efficiency achieved by taking a statistical approach to experimentation.

REFERENCES

Abdullah, B. M. (2007) Properties of five canned luncheon meat formulations as affected by quality of raw materials. International Journal of Food Science and Technology, 42, 30–35.

Anonymous (1999) Reference materials update. VAM Bulletin, 20, 33.

AOAC (1999) Official Methods of Analysis of the Association of Official Analytical Chemists, 15th ed. Association of Official Analytical Chemists, Arlington, VA.

Association of Public Analysts (1986) A Protocol for Analytical Quality Assurance in Public Analysts’ Laboratories. Association of Public Analysts, London, UK.

Blumberg, B., Cooper, D. R. and Schindler, P. S. (2005) Business Research Methods. McGraw-Hill Education, Maidenhead, UK, pp. 18–25.

Bowerman, B. L., O’Connell, R. T., Orris, J. B. and Porter, D. C. (2008) Essentials of Business Statistics, 2nd ed. McGraw-Hill/Irwin, New York.

BSI (2000) BS ISO 5725:1. Accuracy (Trueness and Precision) of Measurement Methods and Results – Part 1: General Principles and Definitions. British Standards Institute, London, UK.
journals dealing almost exclusively with statistical
BSI (2002) BS ISO 5725:2. Accuracy (Trueness and Precision) of Measurement Methods and Results – Part 2: Basic Method for the Determination of Repeatability and Reproducibility of a Standard Measurement Method. British Standards Institute, London, UK.

Calcutt, R. and Boddy, R. (1983) Statistics for Analytical Chemists. Chapman & Hall, London, UK.

Chatfield, C. (1992) Statistics for Technology, 3rd ed. Chapman & Hall, London, UK.

Collis, J. and Hussey, R. (2003) Business Research. Palgrave MacMillan, Basingstoke, UK, pp. 46–79.

Fisher, R. A. (1966) The Design of Experiments, 8th ed. Hafner, New York.

FSA (Food Standards Agency) (2004a) Optimised Uncertainty at Minimum Cost to Achieve Fitness for Purpose in Food Analysis. FSA Project No. E01034. Available at www.food.gov.uk.

FSA (Food Standards Agency) (2004b) Pilot Study of Routine Monitoring of Sampling Precision in the Sampling of Bulk Foods. FSA Project No. E01049.

Gacula, M. C. and Singh, J. (1984) Statistical Methods in Food and Consumer Research. Academic Press, Orlando, FL.

Gao, Y., Ju, X. and Jiang, H. (2006) Studies on inactivation of Bacillus subtilis spores by high hydrostatic pressure and heat using design of experiments. Journal of Food Engineering, 77, 672–679.

Greenfield, H. and Southgate, D. A. T. (2003) Food Composition Data, 2nd ed. Food and Agriculture Organisation of the United Nations, Rome, pp. 149–162.

Hartel, R. W. and Adem, M. (2004) Math skills assessment. Journal of Food Science Education, 3, 26–32.

Heinonen, M., Valsta, L., Anttolainen, M., Ovaskainen, M., Hyvönen, L. and Mutanen, M. (1997) Comparisons between analytes and calculated food composition data: carotenoids, retinoids, tocopherols, tocotrienols, fat, fatty acids and sterols. Journal of Food Composition and Analysis, 10, 3–13.

Hill, A. R. C. and von Holst, C. (2001) A comparison of simple statistical methods for estimating analytical uncertainty, taking into account predicted frequency distributions. Analyst, 126, 2044–2052.

Iwaoka, W. T., Britten, P. and Dong, F. M. (1996) The changing face of food science education. Trends in Food Science and Technology, 7, 105–112.

Jülicher, B., Gowik, P. and Uhlig, S. (1999) A top-down in-house validation based approach for the investigation of the measurement uncertainty using fractional factorial experiments. Analyst, 124, 537–545.

Kane, J. S. (1997) Analytical bias: the neglected component of measurement uncertainty. Analyst, 122, 1283–1288.

Kravchuk, O., Elliott, A. and Bhandari, B. (2005) A laboratory experiment, based on the Maillard reaction, conducted as a project in introductory statistics. Journal of Food Science Education, 4, 70–75.

Levermore, R. J. and McLean, B. D. (2002) Development of a Stable Sulphited Food Reference Material. Campden and Chorleywood Food Research Association Group. FSA Project code E01041. Available at www.food.gov.uk.

Lyn, J. A., Ramsey, M. and Wood, R. (2002) Optimised uncertainty in food analysis: application and comparison between four contrasting 'analyte–commodity' combinations. Analyst, 127, 1252–1260.

Malhotra, N. K. and Peterson, M. (2006) Basic Marketing Research, 2nd ed. International Edition, Pearson Education Inc., Upper Saddle River, NJ.

Martín-Diana, A. B., Rico, D., Frías, J. M., Barat, J. M., Henehan, G. T. M. and Barry-Ryan, C. (2007) Calcium for extending the shelf life of fresh whole and minimally processed fruits and vegetables: a review. Trends in Food Science and Technology, 18, 210–218.

May, N. S. and Chappell, P. (2002) Finding critical variables for food heating. International Journal of Food Science and Technology, 37, 503–515.

McKenzie, J., Schaefer, R. L. and Farber, E. (1995) The Student Edition of Minitab for Windows. Addison-Wesley Publishing Company Inc., New York.

Miller, J. C. and Miller, J. N. (1993) Statistics for Analytical Chemistry, 3rd ed. Ellis Horwood, Chichester, UK.
Middleton, R. M. (2004) Data Analysis Using Microsoft Excel. Thomson Brooks/Cole Learning, Belmont, CA.

Miller, J. C. and Miller, J. N. (1999) Statistics and Chemometrics for Analytical Chemistry, 4th ed. Ellis Horwood, Chichester, UK.

Moreira, M. D. R., Ponce, A. G., Del Valle, C. E. and Roura, S. I. (2006) Ascorbic acid retention, microbial growth, and sensory acceptability of lettuce leaves subjected to mild heat shocks. Journal of Food Science, 71(2), S188–S192.

Mullins, E. (2003) Statistics for the Quality Control Chemistry Laboratory. Royal Society of Chemistry, Cambridge, UK.

Nielsen, S. S. (2003) Food Analysis, 3rd ed. Kluwer Academic/Plenum Publishers, New York.

O'Donnell, G. E. and Hibbert, D. B. (2005) Treatment of bias in estimating measurement uncertainty. Analyst, 130, 721–729.

O'Mahony, M. (1986) Sensory Evaluation of Food – Statistical Methods and Procedures. Marcel Dekker Inc., New York.

Patey, A. (1994) The food analysis performance assessment scheme. VAM Bulletin, 11, 12–13.

Salkind, N. J. (2004) Statistics for People Who (Think They) Hate Statistics, 2nd ed. Sage Publications, Thousand Oaks, CA.

Severini, C., Baiano, A., De Pilli, T., Carbone, B. F. and Derossi, A. (2005) Combined treatments of blanching and dehydration: study on potato cubes. Journal of Food Engineering, 68, 289–296.

Therdthai, N., Zhou, W. and Adamczak, T. (2002) Optimisation of the temperature profile in bread baking. Journal of Food Engineering, 55, 41–48.

Thompson, M. (1994) Proficiency testing in analytical laboratories – the international harmonised protocol. VAM Bulletin, 11, 4–5.

Villavicencio, A. L. C. H., Araújo, M. M., Fanaro, G. B., Rela, P. R. and Mancini-Filho, J. (2007) Sensorial analysis evaluation in cereal bars preserved by ionizing radiation processing. Radiation Physics and Chemistry, 76, 1875–1877.

Wakefield, D. and McLaughlin, K. (2005) An Introduction to Data Analysis Using Minitab for Windows, 3rd ed. Pearson Education Inc., Pearson Prentice Hall, Upper Saddle River, NJ.

Williams, A. A., Rogers, C. A. and Collins, A. J. (1988) Relating chemical/physical and sensory data in acceptance studies. Food Quality and Preference, 1(1), 25–31.

XiuRong, P. (1995) A view on the traceability of certified values of chemical composition RMs. VAM Bulletin, 12 (reference material special), 18–19.

Yann, D., Didier, H. and Daniel, B. (2005) Utilisation of the experimental design methodology to reduce browning defects in hard cheeses technology. Journal of Food Engineering, 68, 481–490.