In statistics, the probability integral transform (or transformation) is the result that data values modelled as random variables from any given continuous distribution can be converted to random variables having a standard uniform distribution. This holds exactly provided that the distribution being used is the true distribution of the random variables; if the distribution is one fitted to the data, the result will hold approximately in large samples.
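As a minimal sketch of this result (assuming NumPy and SciPy; the exponential distribution, its scale parameter, and the seed are arbitrary illustrative choices):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Draw from a continuous distribution; the exponential is an arbitrary choice.
    x = rng.exponential(scale=2.0, size=10_000)

    # Probability integral transform: evaluate the true CDF at each sample.
    u = stats.expon.cdf(x, scale=2.0)

    # The transformed values are, up to sampling error, Uniform(0, 1):
    print(u.mean(), u.var())   # close to 1/2 and 1/12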
The result is sometimes modified or extended so that the output of the transformation follows a standard distribution other than the uniform, such as the exponential distribution.
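For instance, once U = F_X(X) is standard uniform, the further transformation E = −ln(1 − U) has a standard exponential distribution, since P(E ≤ e) = P(U ≤ 1 − exp(−e)) = 1 − exp(−e) for e ≥ 0, which is the standard exponential CDF.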
One use for the probability integral transform in statistical data analysis is to provide the basis for testing whether a set of observations can reasonably be modelled as arising from a specified distribution. Specifically, the probability integral transform is applied to construct an equivalent set of values, which is then tested for consistency with a uniform distribution. Examples of this are P–P plots and Kolmogorov–Smirnov tests.
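A minimal sketch of such a test (assuming NumPy and SciPy; the normal model, its parameters, the sample size, and the seed are illustrative assumptions):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    data = rng.normal(loc=5.0, scale=2.0, size=500)

    # Hypothesis: the data arise from N(5, 2**2).  Transform with the
    # hypothesised CDF, then test the result for uniformity on [0, 1].
    u = stats.norm.cdf(data, loc=5.0, scale=2.0)
    result = stats.kstest(u, "uniform")
    print(result.statistic, result.pvalue)   # large p-value: model not rejected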
In mathematics, an integral transform is any transform T of the following form:

(Tf)(u) = \int_{t_1}^{t_2} f(t)\, K(t, u)\, dt
The input of this transform is a function f, and the output is another function Tf. An integral transform is a particular kind of mathematical operator.
There are numerous useful integral transforms. Each is specified by a choice of the function K of two variables, the kernel function, integral kernel or nucleus of the transform.
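For example, the kernel K(t, u) = exp(−ut) with limits 0 and ∞ gives the Laplace transform. A minimal numerical sketch (assuming SciPy; the test function f(t) = exp(−3t) and the evaluation point are arbitrary choices):

    import numpy as np
    from scipy import integrate

    # Integral transform with the Laplace kernel K(t, u) = exp(-u*t) on (0, inf).
    def laplace_transform(f, u):
        value, _ = integrate.quad(lambda t: f(t) * np.exp(-u * t), 0, np.inf)
        return value

    # For f(t) = exp(-3t) the exact transform is 1 / (u + 3).
    print(laplace_transform(lambda t: np.exp(-3.0 * t), 2.0))   # ~0.2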
Some kernels have an associated inverse kernel K−1(u, t) which (roughly speaking) yields an inverse transform:

f(t) = \int_{u_1}^{u_2} (Tf)(u)\, K^{-1}(u, t)\, du
A symmetric kernel is one that is unchanged when the two variables are permuted.
Mathematical notation aside, the motivation behind integral transforms is easy to understand. There are many classes of problems that are difficult to solve—or at least quite unwieldy algebraically—in their original representations. An integral transform "maps" an equation from its original "domain" into another domain. Manipulating and solving the equation in the target domain can be much easier than manipulation and solution in the original domain. The solution is then mapped back to the original domain with the inverse of the integral transform.
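A standard illustration of this workflow uses the Laplace transform, under which differentiation in t becomes (roughly) multiplication by u: the initial-value problem y′(t) + 3y(t) = 0 with y(0) = 1 maps to the algebraic equation

u\,Y(u) - 1 + 3\,Y(u) = 0 \quad\Longrightarrow\quad Y(u) = \frac{1}{u + 3},

and inverting the transform returns the solution y(t) = exp(−3t) in the original domain.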
In mathematics, the error function (also called the Gauss error function) is a special function (non-elementary) of sigmoid shape that occurs in probability, statistics, and partial differential equations describing diffusion. It is defined as:

\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt
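A quick numerical check of this definition (a sketch assuming Python's standard library and SciPy; the sample points are arbitrary):

    import math
    from scipy import integrate

    # Compare the integral definition of erf with math.erf at a few points.
    def erf_quad(x):
        value, _ = integrate.quad(lambda t: math.exp(-t * t), 0.0, x)
        return 2.0 / math.sqrt(math.pi) * value

    for x in (0.5, 1.0, 2.0):
        print(x, erf_quad(x), math.erf(x))   # the two columns agree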
The complementary error function, denoted erfc, is defined as

\operatorname{erfc}(x) = 1 - \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt = e^{-x^2}\,\operatorname{erfcx}(x),

which also defines erfcx, the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of erfc(x), valid for non-negative x, is known as Craig's formula:

\operatorname{erfc}(x) = \frac{2}{\pi} \int_0^{\pi/2} \exp\!\left(-\frac{x^2}{\sin^2\theta}\right) d\theta, \qquad x \ge 0.
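The underflow point can be seen numerically (a sketch assuming SciPy; the evaluation point x = 30 is an arbitrary large argument):

    from math import log
    from scipy import special

    x = 30.0
    print(special.erfc(x))    # 0.0 -- the true value underflows double precision
    print(special.erfcx(x))   # ~0.0188, comfortably representable
    # log(erfc(x)) recovered without underflow: log(erfcx(x)) - x**2
    print(log(special.erfcx(x)) - x**2)   # ~ -904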
The imaginary error function, denoted erfi, is defined as

\operatorname{erfi}(x) = -i\,\operatorname{erf}(ix) = \frac{2}{\sqrt{\pi}} \int_0^x e^{t^2}\,dt = \frac{2}{\sqrt{\pi}}\, e^{x^2} D(x),
where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow).
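A numerical illustration of the overflow point (assuming SciPy; the evaluation point x = 28 is arbitrary):

    from scipy import special

    x = 28.0
    print(special.erfi(x))    # inf -- erfi grows like exp(x**2) and overflows
    # Dawson function: D(x) = (sqrt(pi)/2) * exp(-x**2) * erfi(x) stays bounded.
    print(special.dawsn(x))   # ~0.0179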
Despite the name "imaginary error function", erfi(x) is real when x is real.
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:

w(z) = e^{-z^2}\,\operatorname{erfc}(-iz).
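A brief consistency check (a sketch assuming SciPy, whose wofz routine evaluates the Faddeeva function; the test point z is arbitrary):

    import numpy as np
    from scipy import special

    z = 1.0 + 2.0j
    # scipy.special.wofz evaluates w(z) = exp(-z**2) * erfc(-1j*z).
    print(special.wofz(z))
    # Consistency check via the complex error function:
    print(np.exp(-z**2) * (1.0 - special.erf(-1j * z)))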
The error function is used in measurement theory (using probability and statistics), and its use in other branches of mathematics is typically unrelated to the characterization of measurement errors.