
Statistical classification

In statistics, classification is the problem of identifying which of a set of categories (sub-populations) an observation (or observations) belongs to. Examples are assigning a given email to the "spam" or "non-spam" class, and assigning a diagnosis to a given patient based on observed characteristics of the patient (sex, blood pressure, presence or absence of certain symptoms, etc.).

Often, the individual observations are analyzed into a set of quantifiable properties, known variously as
explanatory variables or features. These properties may variously be categorical (e.g. "A", "B", "AB" or
"O", for blood type), ordinal (e.g. "large", "medium" or "small"), integer-valued (e.g. the number of
occurrences of a particular word in an email) or real-valued (e.g. a measurement of blood pressure). Other
classifiers work by comparing observations to previous observations by means of a similarity or distance
function.
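
As an illustration of the similarity-based approach, the following minimal Python sketch implements a one-nearest-neighbour rule; the choice of Euclidean distance and the toy data are assumptions made for this example, not taken from the text above.

    import math

    def nearest_neighbor_classify(x, labeled_points):
        """Assign x the label of the closest previous observation.

        labeled_points: list of (feature_vector, label) pairs.
        Euclidean distance serves as the distance function here.
        """
        def dist(a, b):
            return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
        _, label = min(labeled_points, key=lambda pl: dist(x, pl[0]))
        return label

    # Toy data: (blood pressure, age) observations with known diagnoses.
    data = [((120, 35), "healthy"), ((160, 60), "hypertensive")]
    print(nearest_neighbor_classify((150, 55), data))  # 'hypertensive'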

An algorithm that implements classification, especially in a concrete implementation, is known as a classifier. The term "classifier" sometimes also refers to the mathematical function, implemented by a classification algorithm, that maps input data to a category.

Terminology across fields is quite varied. In statistics, where classification is often done with logistic
regression or a similar procedure, the properties of observations are termed explanatory variables (or
independent variables, regressors, etc.), and the categories to be predicted are known as outcomes, which
are considered to be possible values of the dependent variable. In machine learning, the observations are
often known as instances, the explanatory variables are termed features (grouped into a feature vector), and
the possible categories to be predicted are classes. Other fields may use different terminology: e.g. in
community ecology, the term "classification" normally refers to cluster analysis.

Relation to other problems


Classification and clustering are examples of the more general problem of pattern recognition, which is the
assignment of some sort of output value to a given input value. Other examples are regression, which
assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a
sequence of values (for example, part of speech tagging, which assigns a part of speech to each word in an
input sentence); parsing, which assigns a parse tree to an input sentence, describing the syntactic structure
of the sentence; etc.

A common subclass of classification is probabilistic classification. Algorithms of this nature use statistical
inference to find the best class for a given instance. Unlike other algorithms, which simply output a "best"
class, probabilistic algorithms output a probability of the instance being a member of each of the possible
classes. The best class is normally then selected as the one with the highest probability. However, such an
algorithm has numerous advantages over non-probabilistic classifiers:

It can output a confidence value associated with its choice (in general, a classifier that can
do this is known as a confidence-weighted classifier).
Correspondingly, it can abstain when its confidence of choosing any particular output is too
low.
Because of the probabilities which are generated, probabilistic classifiers can be more
effectively incorporated into larger machine-learning tasks, in a way that partially or
completely avoids the problem of error propagation.
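
To make the abstention idea concrete, here is a minimal Python sketch; the probability table and the 0.7 threshold are invented for the example rather than taken from any particular classifier.

    def classify_with_abstention(class_probs, threshold=0.7):
        """Pick the most probable class, or abstain if confidence is too low.

        class_probs: dict mapping class label -> probability (sums to 1).
        threshold: minimum probability required to commit to a prediction
                   (0.7 is an arbitrary illustrative choice).
        """
        best_class = max(class_probs, key=class_probs.get)
        if class_probs[best_class] < threshold:
            return None  # abstain: no class is sufficiently likely
        return best_class

    # Example: a spam filter's probability estimates for one email.
    probs = {"spam": 0.55, "non-spam": 0.45}
    print(classify_with_abstention(probs))        # None (abstain)
    print(classify_with_abstention(probs, 0.5))   # 'spam'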

Frequentist procedures
Early work on statistical classification was undertaken by Fisher,[1][2] in the context of two-group
problems, leading to Fisher's linear discriminant function as the rule for assigning a group to a new
observation.[3] This early work assumed that data-values within each of the two groups had a multivariate
normal distribution. The extension of this same context to more than two groups has also been considered, under the restriction that the classification rule be linear.[3][4] Later work for the multivariate
normal distribution allowed the classifier to be nonlinear:[5] several classification rules can be derived based
on different adjustments of the Mahalanobis distance, with a new observation being assigned to the group
whose centre has the lowest adjusted distance from the observation.
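
A minimal NumPy sketch of this kind of rule, under the simplifying assumptions of a shared covariance matrix across groups and no further adjustment of the distance:

    import numpy as np

    def mahalanobis_classify(x, group_means, cov):
        """Assign x to the group whose centre is nearest in Mahalanobis distance.

        x: observation, shape (d,)
        group_means: list of group mean vectors, each of shape (d,)
        cov: shared covariance matrix, shape (d, d) (a simplifying assumption)
        """
        cov_inv = np.linalg.inv(cov)
        d2 = [(x - m) @ cov_inv @ (x - m) for m in group_means]  # squared distances
        return int(np.argmin(d2))

    # Toy example: two groups in two dimensions.
    means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
    cov = np.array([[1.0, 0.2], [0.2, 1.0]])
    print(mahalanobis_classify(np.array([2.5, 2.0]), means, cov))  # 1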

Bayesian procedures
Unlike frequentist procedures, Bayesian classification procedures provide a natural way of taking into
account any available information about the relative sizes of the different groups within the overall
population.[6] Bayesian procedures tend to be computationally expensive and, in the days before Markov
chain Monte Carlo computations were developed, approximations for Bayesian clustering rules were
devised.[7]

Some Bayesian procedures involve the calculation of group-membership probabilities: these provide a more
informative outcome than a simple attribution of a single group-label to each new observation.
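
As a sketch of the underlying calculation, the following Python/NumPy example applies Bayes' rule with univariate normal likelihoods; the priors (the relative group sizes) and the distribution parameters are invented for illustration.

    import numpy as np

    def group_membership_probs(x, priors, means, sds):
        """Posterior probability of each group for observation x via Bayes' rule.

        priors: prior group proportions (relative sizes in the population)
        means, sds: parameters of a normal likelihood for each group
        """
        means, sds = np.asarray(means), np.asarray(sds)
        likelihoods = np.exp(-0.5 * ((x - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
        unnormalized = np.asarray(priors) * likelihoods
        return unnormalized / unnormalized.sum()  # normalize to probabilities

    # Group 0 is three times as common as group 1 in the population.
    print(group_membership_probs(x=1.5, priors=[0.75, 0.25],
                                 means=[0.0, 2.0], sds=[1.0, 1.0]))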

Binary and multiclass classification


Classification can be thought of as two separate problems – binary classification and multiclass
classification. In binary classification, a better understood task, only two classes are involved, whereas
multiclass classification involves assigning an object to one of several classes.[8] Since many classification
methods have been developed specifically for binary classification, multiclass classification often requires
the combined use of multiple binary classifiers.
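
One common reduction is the one-versus-rest scheme, in which one binary classifier is trained per class and the class whose classifier is most confident wins. A minimal sketch, where the per-class scoring functions are stand-ins for trained binary classifiers:

    def one_vs_rest_predict(x, binary_scorers):
        """Combine per-class binary classifiers into a multiclass prediction.

        binary_scorers: dict mapping class label -> function returning a real
        score for 'x belongs to this class rather than any other'.
        """
        scores = {label: scorer(x) for label, scorer in binary_scorers.items()}
        return max(scores, key=scores.get)

    # Toy stand-ins for three trained binary classifiers.
    scorers = {
        "A": lambda x: -abs(x - 1.0),  # prefers values near 1
        "B": lambda x: -abs(x - 5.0),  # prefers values near 5
        "C": lambda x: -abs(x - 9.0),  # prefers values near 9
    }
    print(one_vs_rest_predict(4.2, scorers))  # 'B'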

Feature vectors
Most algorithms describe an individual instance whose category is to be predicted using a feature vector of
individual, measurable properties of the instance. Each property is termed a feature, also known in statistics
as an explanatory variable (or independent variable, although features may or may not be statistically
independent). Features may variously be binary (e.g. "on" or "off"); categorical (e.g. "A", "B", "AB" or
"O", for blood type); ordinal (e.g. "large", "medium" or "small"); integer-valued (e.g. the number of
occurrences of a particular word in an email); or real-valued (e.g. a measurement of blood pressure). If the
instance is an image, the feature values might correspond to the pixels of an image; if the instance is a piece
of text, the feature values might be occurrence frequencies of different words. Some algorithms work only
in terms of discrete data and require that real-valued or integer-valued data be discretized into groups (e.g.
less than 5, between 5 and 10, or greater than 10).
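
For instance, the binning just described might be implemented as follows; the cut points 5 and 10 come directly from the example in the text.

    def discretize(value):
        """Map a numeric value to one of three groups, as in the example above."""
        if value < 5:
            return "less than 5"
        elif value <= 10:
            return "between 5 and 10"
        else:
            return "greater than 10"

    print([discretize(v) for v in [3, 7.5, 12]])
    # ['less than 5', 'between 5 and 10', 'greater than 10']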

Linear classifiers
A large number of algorithms for classification can be phrased in terms of a linear function that assigns a
score to each possible category k by combining the feature vector of an instance with a vector of weights,
using a dot product. The predicted category is the one with the highest score. This type of score function is
known as a linear predictor function and has the following general form:

score(Xi, k) = βk · Xi

where Xi is the feature vector for instance i, βk is the vector of weights corresponding to category k, and
score(Xi, k) is the score associated with assigning instance i to category k. In discrete choice theory, where
instances represent people and categories represent choices, the score is considered the utility associated
with person i choosing category k.

Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure
for determining (training) the optimal weights/coefficients and the way that the score is interpreted.
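
In code, prediction with a trained linear classifier reduces to a set of dot products followed by an argmax; in the NumPy sketch below the weights are made up, standing in for coefficients produced by any of the training procedures listed next.

    import numpy as np

    def linear_classify(x, weights):
        """Predict the category with the highest linear score.

        x: feature vector for one instance, shape (d,)
        weights: one weight vector per category, shape (K, d);
                 row k plays the role of the vector beta_k.
        """
        scores = weights @ x  # score(x, k) = beta_k . x for each category k
        return int(np.argmax(scores))

    # Toy example: 3 categories, 2 features, arbitrary weights.
    W = np.array([[ 1.0, -0.5],
                  [ 0.2,  0.8],
                  [-1.0,  0.3]])
    print(linear_classify(np.array([0.4, 1.2]), W))  # 1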

Examples of such algorithms include

Logistic regression – Statistical model for a binary dependent variable
Multinomial logistic regression – Regression for more than two discrete outcomes
Probit regression – Statistical regression where the dependent variable can take only two
values
The perceptron algorithm
Support vector machine – Set of methods for supervised statistical learning
Linear discriminant analysis – Method used in statistics, pattern recognition, and other fields

Algorithms
Since no single form of classification is appropriate for all data sets, a large toolkit of classification
algorithms has been developed. The most commonly used include:[9]

Artificial neural networks – Computational model used in machine learning, based on connected, hierarchical functions
Boosting (meta-algorithm) – Method in machine learning
Decision tree learning – Machine learning algorithm
Random forest – Decision tree based ensemble machine learning method
Genetic programming – Evolving computer programs with techniques analogous to natural
genetic processes
Gene expression programming – Evolutionary algorithm
Multi expression programming
Linear genetic programming – Type of genetic programming algorithm
Kernel estimation – Window function
k-nearest neighbor – Non-parametric classification method
Learning vector quantization
Linear classifier – Statistical classification in machine learning
Fisher's linear discriminant – Method used in statistics, pattern recognition, and other
fields
Logistic regression – Statistical model for a binary dependent variable
Naive Bayes classifier – Probabilistic classification algorithm
Perceptron – Algorithm for supervised learning of binary classifiers
Quadratic classifier – Used in machine learning to separate measurements of two or more
classes of objects
Support vector machine – Set of methods for supervised statistical learning
Least squares support vector machine

Evaluation
Classifier performance depends greatly on the characteristics of the data to be classified. There is no single
classifier that works best on all given problems (a phenomenon that may be explained by the no-free-lunch
theorem). Various empirical tests have been performed to compare classifier performance and to find the
characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.

The measures precision and recall are popular metrics used to evaluate the quality of a classification system.
More recently, receiver operating characteristic (ROC) curves have been used to evaluate the tradeoff
between true- and false-positive rates of classification algorithms.
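
Both measures follow directly from the counts of true positives, false positives and false negatives; a minimal sketch with invented counts:

    def precision_recall(tp, fp, fn):
        """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return precision, recall

    # Toy confusion counts for a binary classifier.
    print(precision_recall(tp=40, fp=10, fn=20))  # (0.8, 0.666...)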

As a performance metric, the uncertainty coefficient has the advantage over simple accuracy in that it is not affected by the relative sizes of the different classes.[10] Further, it will not penalize an algorithm for simply
rearranging the classes.

Application domains
Classification has many applications. In some of these it is employed as a data mining procedure, while in
others more detailed statistical modeling is undertaken.

Biological classification – The science of identifying, describing, defining and naming groups of biological organisms
Biometric – Metrics related to human characteristics identification
Computer vision – Computerized information extraction from images
Medical image analysis and medical imaging – Technique and process of creating visual
representations of the interior of a body
Optical character recognition – Computer recognition of visual text
Video tracking – Locating an object of interest in each frame of a video sequence
Credit scoring – Numerical expression representing a person's creditworthiness
Document classification – Process of categorizing documents
Drug discovery and development – Process of bringing a new pharmaceutical drug to the
market
Toxicogenomics – Branch of toxicology and genomics
Quantitative structure-activity relationship – Quantitative prediction of the biological,
ecotoxicological or pharmaceutical activity of a molecule
Geostatistics – Branch of statistics focusing on spatial data sets
Handwriting recognition – Ability of a computer to receive and interpret intelligible
handwritten input
Internet search engines
Micro-array classification
Pattern recognition – Automated recognition of patterns and regularities in data
Recommender system – Information filtering system to predict users' preferences
Speech recognition – Automatic conversion of spoken language into text
Statistical natural language processing – Field of linguistics and computer science

See also
Mathematics
portal

Artificial intelligence – Ability of systems to perceive, synthesize, and infer information
Binary classification – The task of classifying the elements of a given set into two groups (predicting which group each one belongs to) on the basis of a classification rule
Multiclass classification – Problem in machine learning and statistical classification
Class membership probabilities – Machine learning problem
Classification rule
Compound term processing
Confusion matrix – Table layout for visualizing performance; also called an error matrix
Data mining – Process of extracting and discovering patterns in large data sets
Data warehouse – Centralized storage of knowledge
Fuzzy logic – System for reasoning about vagueness
Information retrieval – Obtaining information resources relevant to an information need
List of datasets for machine learning research
Machine learning – Study of algorithms that improve automatically through experience
Recommender system – Information filtering system to predict users' preferences

References
1. Fisher, R. A. (1936). "The Use of Multiple Measurements in Taxonomic Problems". Annals of
Eugenics. 7 (2): 179–188. doi:10.1111/j.1469-1809.1936.tb02137.x. hdl:2440/15227.
2. Fisher, R. A. (1938). "The Statistical Utilization of Multiple Measurements". Annals of
Eugenics. 8 (4): 376–386. doi:10.1111/j.1469-1809.1938.tb02189.x. hdl:2440/15232.
3. Gnanadesikan, R. (1977). Methods for Statistical Data Analysis of Multivariate Observations.
Wiley. ISBN 0-471-30845-5. (pp. 83–86)
4. Rao, C. R. (1952). Advanced Statistical Methods in Multivariate Analysis. Wiley. (Section 9c)
5. Anderson, T. W. (1958). An Introduction to Multivariate Statistical Analysis. Wiley.
6. Binder, D. A. (1978). "Bayesian cluster analysis". Biometrika. 65: 31–38.
doi:10.1093/biomet/65.1.31.
7. Binder, David A. (1981). "Approximations to Bayesian clustering rules". Biometrika. 68: 275–285.
doi:10.1093/biomet/68.1.275.
8. Har-Peled, S., Roth, D., Zimak, D. (2003). "Constraint Classification for Multiclass
Classification and Ranking". In: Becker, B., Thrun, S., Obermayer, K. (eds.) Advances in
Neural Information Processing Systems 15: Proceedings of the 2002 Conference. MIT
Press. ISBN 0-262-02550-7.
9. "A Tour of The Top 10 Algorithms for Machine Learning Newbies"
(https://builtin.com/data-science/tour-top-10-algorithms-machine-learning-newbies). Built In.
2018-01-20. Retrieved 2019-06-10.
10. Mills, Peter (2011). "Efficient statistical classification of satellite measurements". International
Journal of Remote Sensing. 32 (21): 6109–6132. arXiv:1202.2194.
Bibcode:2011IJRS...32.6109M. doi:10.1080/01431161.2010.507795. S2CID 88518570.

