Statistical Classification
Often, the individual observations are analyzed into a set of quantifiable properties, known variously as
explanatory variables or features. These properties may variously be categorical (e.g. "A", "B", "AB" or
"O", for blood type), ordinal (e.g. "large", "medium" or "small"), integer-valued (e.g. the number of
occurrences of a particular word in an email) or real-valued (e.g. a measurement of blood pressure). Other
classifiers work by comparing observations to previous observations by means of a similarity or distance
function.
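To make the distance-based approach concrete, here is a minimal Python sketch of a one-nearest-neighbour rule; the training data, labels, and function names are invented for illustration and are not part of the article.

    import math

    def euclidean(a, b):
        # Euclidean distance between two feature vectors of equal length
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def nearest_neighbour(observation, labelled_data):
        # Assign the label of the stored observation closest to `observation`
        return min(labelled_data, key=lambda pair: euclidean(observation, pair[0]))[1]

    # Hypothetical training set: (feature vector, class label) pairs
    training = [((1.0, 1.0), "A"), ((5.0, 5.0), "B")]
    print(nearest_neighbour((1.2, 0.8), training))  # -> "A"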
Terminology across fields is quite varied. In statistics, where classification is often done with logistic
regression or a similar procedure, the properties of observations are termed explanatory variables (or
independent variables, regressors, etc.), and the categories to be predicted are known as outcomes, which
are considered to be possible values of the dependent variable. In machine learning, the observations are
often known as instances, the explanatory variables are termed features (grouped into a feature vector), and
the possible categories to be predicted are classes. Other fields may use different terminology: e.g. in
community ecology, the term "classification" normally refers to cluster analysis.
A common subclass of classification is probabilistic classification. Algorithms of this nature use statistical
inference to find the best class for a given instance. Unlike other algorithms, which simply output a "best"
class, probabilistic algorithms output a probability of the instance being a member of each of the possible
classes. The best class is normally then selected as the one with the highest probability. Such algorithms have numerous advantages over non-probabilistic classifiers (a minimal sketch of the resulting decision rule follows the list below):
- It can output a confidence value associated with its choice (in general, a classifier that can do this is known as a confidence-weighted classifier).
- Correspondingly, it can abstain when its confidence of choosing any particular output is too low.
- Because of the probabilities which are generated, probabilistic classifiers can be more effectively incorporated into larger machine-learning tasks, in a way that partially or completely avoids the problem of error propagation.
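A minimal sketch of the decision rule just described, assuming the per-class probabilities have already been computed; the threshold value and class labels are hypothetical:

    def classify(probabilities, threshold=0.7):
        # probabilities: dict mapping class label -> P(class | instance)
        best_class = max(probabilities, key=probabilities.get)
        if probabilities[best_class] < threshold:
            return None  # abstain: confidence in every class is too low
        return best_class

    print(classify({"spam": 0.90, "ham": 0.10}))  # -> "spam"
    print(classify({"spam": 0.55, "ham": 0.45}))  # -> None (abstains)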
Frequentist procedures
Early work on statistical classification was undertaken by Fisher,[1][2] in the context of two-group
problems, leading to Fisher's linear discriminant function as the rule for assigning a group to a new
observation.[3] This early work assumed that data-values within each of the two groups had a multivariate
normal distribution. The extension of this same context to more than two groups has also been considered with a restriction imposed that the classification rule should be linear.[3][4] Later work for the multivariate
normal distribution allowed the classifier to be nonlinear:[5] several classification rules can be derived based
on different adjustments of the Mahalanobis distance, with a new observation being assigned to the group
whose centre has the lowest adjusted distance from the observation.
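As an illustration of the distance-based rule just described, the following sketch assigns a new observation to the group whose centre has the smallest Mahalanobis distance, assuming for simplicity a common covariance matrix across groups; the group means and covariance are invented:

    import numpy as np

    def mahalanobis(x, mean, cov_inv):
        # Mahalanobis distance of x from a group centre
        d = x - mean
        return float(np.sqrt(d @ cov_inv @ d))

    def classify(x, group_means, cov):
        # Assign x to the group whose centre is nearest in Mahalanobis distance
        cov_inv = np.linalg.inv(cov)
        return min(group_means, key=lambda g: mahalanobis(x, group_means[g], cov_inv))

    means = {"group 1": np.array([0.0, 0.0]), "group 2": np.array([3.0, 3.0])}
    cov = np.array([[1.0, 0.3], [0.3, 1.0]])
    print(classify(np.array([0.5, 0.2]), means, cov))  # -> "group 1"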
Bayesian procedures
Unlike frequentist procedures, Bayesian classification procedures provide a natural way of taking into
account any available information about the relative sizes of the different groups within the overall
population.[6] Bayesian procedures tend to be computationally expensive and, in the days before Markov
chain Monte Carlo computations were developed, approximations for Bayesian clustering rules were
devised.[7]
Some Bayesian procedures involve the calculation of group-membership probabilities: these provide a more
informative outcome than a simple attribution of a single group-label to each new observation.
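The following sketch shows such a calculation under Bayes' theorem, combining Gaussian likelihoods with prior probabilities that reflect the relative group sizes; all priors, means, and covariances here are hypothetical:

    import numpy as np
    from scipy.stats import multivariate_normal

    def posterior_probabilities(x, groups):
        # groups: dict mapping label -> (prior, mean, cov)
        # Returns group-membership probabilities P(group | x) via Bayes' theorem
        joint = {g: prior * multivariate_normal.pdf(x, mean, cov)
                 for g, (prior, mean, cov) in groups.items()}
        total = sum(joint.values())
        return {g: p / total for g, p in joint.items()}

    groups = {
        "common": (0.9, [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]),  # 90% of population
        "rare":   (0.1, [2.0, 2.0], [[1.0, 0.0], [0.0, 1.0]]),  # 10% of population
    }
    print(posterior_probabilities([1.0, 1.0], groups))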
Feature vectors
Most algorithms describe an individual instance whose category is to be predicted using a feature vector of
individual, measurable properties of the instance. Each property is termed a feature, also known in statistics
as an explanatory variable (or independent variable, although features may or may not be statistically
independent). Features may variously be binary (e.g. "on" or "off"); categorical (e.g. "A", "B", "AB" or
"O", for blood type); ordinal (e.g. "large", "medium" or "small"); integer-valued (e.g. the number of
occurrences of a particular word in an email); or real-valued (e.g. a measurement of blood pressure). If the
instance is an image, the feature values might correspond to the pixels of an image; if the instance is a piece
of text, the feature values might be occurrence frequencies of different words. Some algorithms work only
in terms of discrete data and require that real-valued or integer-valued data be discretized into groups (e.g.
less than 5, between 5 and 10, or greater than 10).
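A sketch of the discretization step just mentioned, using the bin edges from the example in the text; the word counts are invented:

    def discretize(count):
        # Map an integer-valued feature into the three discrete groups
        if count < 5:
            return "less than 5"
        elif count <= 10:
            return "between 5 and 10"
        else:
            return "greater than 10"

    word_counts = [0, 7, 23]  # hypothetical word-occurrence counts
    print([discretize(c) for c in word_counts])
    # -> ['less than 5', 'between 5 and 10', 'greater than 10']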
Linear classifiers
A large number of algorithms for classification can be phrased in terms of a linear function that assigns a
score to each possible category k by combining the feature vector of an instance with a vector of weights,
using a dot product. The predicted category is the one with the highest score. This type of score function is
known as a linear predictor function and has the following general form:

score(Xi, k) = βk · Xi
where Xi is the feature vector for instance i, βk is the vector of weights corresponding to category k, and
score(Xi, k) is the score associated with assigning instance i to category k. In discrete choice theory, where
instances represent people and categories represent choices, the score is considered the utility associated
with person i choosing category k.
Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure
for determining (training) the optimal weights/coefficients and the way that the score is interpreted.
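A minimal sketch of this scoring rule, with invented weight vectors; training those weights is exactly what a specific linear-classification algorithm supplies:

    import numpy as np

    def predict(x, weights):
        # weights: dict mapping category k -> weight vector beta_k
        # score(Xi, k) = beta_k . Xi; predict the category with the highest score
        scores = {k: float(np.dot(beta, x)) for k, beta in weights.items()}
        return max(scores, key=scores.get), scores

    weights = {"A": np.array([1.0, -0.5]), "B": np.array([-0.2, 0.8])}
    x = np.array([0.4, 1.1])
    print(predict(x, weights))  # -> ('B', {'A': -0.15, 'B': 0.8})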
Algorithms
Since no single form of classification is appropriate for all data sets, a large toolkit of classification algorithms has been developed. The most commonly used include:[9]
Evaluation
Classifier performance depends greatly on the characteristics of the data to be classified. There is no single
classifier that works best on all given problems (a phenomenon that may be explained by the no-free-lunch
theorem). Various empirical tests have been performed to compare classifier performance and to find the
characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.
The measures precision and recall are popular metrics used to evaluate the quality of a classification system.
More recently, receiver operating characteristic (ROC) curves have been used to evaluate the tradeoff
between true- and false-positive rates of classification algorithms.
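As an illustration of precision and recall, this sketch computes both from hypothetical binary predictions (1 = positive class, 0 = negative class):

    def precision_recall(y_true, y_pred):
        # Count true positives, false positives, and false negatives
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    y_true = [1, 1, 0, 0, 1]  # hypothetical ground-truth labels
    y_pred = [1, 0, 0, 1, 1]  # hypothetical classifier outputs
    print(precision_recall(y_true, y_pred))  # -> (0.666..., 0.666...)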
As a performance metric, the uncertainty coefficient has the advantage over simple accuracy in that it is not affected by the relative sizes of the different classes.[10] Further, it will not penalize an algorithm for simply rearranging the classes.
Application domains
Classification has many applications. In some of these it is employed as a data mining procedure, while in
others more detailed statistical modeling is undertaken.
References
1. Fisher, R. A. (1936). "The Use of Multiple Measurements in Taxonomic Problems". Annals of Eugenics. 7 (2): 179–188. doi:10.1111/j.1469-1809.1936.tb02137.x. hdl:2440/15227.
2. Fisher, R. A. (1938). "The Statistical Utilization of Multiple Measurements". Annals of Eugenics. 8 (4): 376–386. doi:10.1111/j.1469-1809.1938.tb02189.x. hdl:2440/15232.
3. Gnanadesikan, R. (1977). Methods for Statistical Data Analysis of Multivariate Observations. Wiley. ISBN 0-471-30845-5. (pp. 83–86)
4. Rao, C. R. (1952). Advanced Statistical Methods in Multivariate Analysis. Wiley. (Section 9c)
5. Anderson, T. W. (1958). An Introduction to Multivariate Statistical Analysis. Wiley.
6. Binder, D. A. (1978). "Bayesian cluster analysis". Biometrika. 65 (1): 31–38. doi:10.1093/biomet/65.1.31.
7. Binder, David A. (1981). "Approximations to Bayesian clustering rules". Biometrika. 68 (1): 275–285. doi:10.1093/biomet/68.1.275.
8. Har-Peled, S.; Roth, D.; Zimak, D. (2003). "Constraint Classification for Multiclass Classification and Ranking". In Becker, B.; Thrun, S.; Obermayer, K. (eds.), Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference. MIT Press. ISBN 0-262-02550-7.
9. "A Tour of The Top 10 Algorithms for Machine Learning Newbies". Built In. 2018-01-20. Retrieved 2019-06-10. https://builtin.com/data-science/tour-top-10-algorithms-machine-learning-newbies
10. Mills, Peter (2011). "Efficient statistical classification of satellite measurements". International Journal of Remote Sensing. 32 (21): 6109–6132. arXiv:1202.2194. Bibcode:2011IJRS...32.6109M. doi:10.1080/01431161.2010.507795. S2CID 88518570.