Abstract
In many fields of science, high-dimensional integration is required. Numerical methods have been developed to evaluate these complex integrals. We introduce the code i-flow, a Python package that performs high-dimensional numerical integration utilizing normalizing flows. Normalizing flows are machine-learned, bijective mappings between two distributions. i-flow can also be used to sample random points according to complicated distributions in high dimensions. We compare i-flow to other algorithms for high-dimensional numerical integration and show that i-flow outperforms them for high dimensional correlated integrals. The i-flow code is publicly available on gitlab at https://gitlab.com/i-flow/i-flow.
1. Introduction
Simulation based on first principles is an important practice, because it is the only way that a theoretical model can be checked against experiments or real-world data. In high-energy physics (HEP) experiments, a thorough understanding of the properties of known physics forms the basis of any searches that look for new effects. This can only be achieved by an accurate simulation, which in many cases boils down to performing an integral and sampling from it. Often, high-dimensional phase space integrals with non-trivial correlations between dimensions are required in important theory calculations. Monte-Carlo (MC) methods remain the most important techniques for solving high-dimensional problems across many fields, including for instance: biology [1, 2], chemistry [3], astronomy [4], medical physics [5], finance [6] and image rendering [7]. In high-energy physics, all analyses at the Large Hadron Collider (LHC) rely strongly on multipurpose Monte Carlo event generators [8, 9] for signal or background prediction. However, the extraordinary performance of the experiments requires an amount of simulated data that soon cannot be delivered with current algorithms and computational resources [10, 11].
A main endeavour in the field of MC methods is to improve the error estimate. In particular, stratified sampling (dividing the integration domain into sub-domains) and importance sampling (sampling from non-uniform distributions [12]) are two ways of reducing the variance. Currently, the most widely used numerical algorithm that exploits importance sampling is the VEGAS algorithm [13, 14]. But VEGAS assumes the factorizability of the integrand, which can be a bad approximation if the variables have complex correlations amongst one another. Foam [15] is a popular alternative that tries to address this issue. It uses an adaptive strategy to attempt to model correlations, but requires exponentially large samples in high dimensions.
Lately, the burgeoning field of machine learning (ML) has brought new techniques into the game. For the following discussion, we restrict ourselves to progress made in the field of high-energy physics; see [16] for a recent review. However, these techniques are also widely applied in other areas of research. Concerning event generation, [17] used boosted decision trees and generative adversarial networks (GANs) to improve MC integration. Reference [18] proposed a novel idea that uses a dense neural network (DNN) to learn the phase space directly and shows promising results. In principle, once the neural network (NN)-based algorithm for MC integration is trained, one can invert the network and use it for sampling. However, the inversion of the NN requires evaluating its Jacobian, which incurs a computational cost that scales as $\mathcal{O}(D^3)$ for D-dimensional integrals (see footnote 1). Therefore, it is extremely inefficient to use a standard NN-based algorithm for sampling.
In addition to generating events from scratch, it is possible to generate additional events from a set of precomputed events. References [19–25] used GANs and Variational Autoencoders (VAEs) to achieve this goal. While this line of work is promising, these approaches have a few downsides. Their major advantage is the drastic speed improvement over standard techniques: the authors report improvements in generation of a factor of around 1000. However, this approach requires a significant number of already-generated events, which may be cost prohibitive for interesting, high-multiplicity problems. Furthermore, these approaches can only generate events similar to those already generated. Therefore, they would not improve the corners of distributions [26] and can even result in incorrect total cross-sections. Yet another approach to speed up event generation is to use a NN as an interpolator and learn the matrix element [27].
Our goal is to explore NN architectures that allow both efficient MC integration and sampling. A ML algorithm based on normalizing flows (NF) provides such a candidate. The idea was first proposed in the context of non-linear independent components estimation (NICE) [28, 29], and generalized, for example, in [30–32]. These works introduced coupling layers (CL), which allow the inclusion of NNs in the construction of a bijective mapping between the target and initial distributions such that the evaluation of the Jacobian reduces to an analytic expression. This expression can be evaluated in linear time, i.e. $\mathcal{O}(D)$. These techniques have also been combined with Markov Chain Monte Carlo methods, showing promising results [33–35].
Our contribution is a complete, openly available implementation of normalizing flows in TensorFlow [36], to be used for any high-dimensional integration problem at hand. Our code includes the original proposal of [31] and the additions of [32]. We further include various loss functions based on the class of f-divergences [37]. The paper is organized in the following way. The basic principles of MC integration and importance sampling are reviewed in section 2. In section 3, we review the concept of normalizing flows and the work done on CL-based flows by [28, 29, 31, 32]. We investigate the minimum number of CLs required to capture the correlations among all input dimensions. Section 4 sets the stage for a comparison between our code, VEGAS, and Foam on various test functions, for which we give results in section 5. This comparison is based on several criteria, allowing a potential user to judge whether i-flow might be worth trying out. Section 6 contains our conclusions and outlook.
2. Monte Carlo integrators
While techniques exist for accurate one-dimensional integration, such as double exponential integration [38], using them for high dimensional integrals requires repeated evaluation of one dimensional integrals. This leads to an exponential growth in computation time as a function of the number of dimensions. This is often referred to as the curse of dimensionality. In other words, when the dimensionality of the integration domain increases, the points become more and more sparse and no statistically significant statement can be made without increasing the number of points exponentially. This can be seen in the ratio of the volume of a D-dimensional hypersphere to the D-dimensional hypercube, which vanishes as D goes to infinity. However, Monte-Carlo techniques are statistical in nature and thus always converge as $1/\sqrt{N}$ for any number of dimensions, where N is the number of sampled points.
Therefore, MC integration is the most important technique for solving high-dimensional integrals numerically. The naive MC approach samples uniformly on the integration domain (Ω). Given N uniform samples, the integral of f(x) can be approximated by
$$ I = \int_\Omega f(x)\,\mathrm{d}x \approx \frac{V}{N}\sum_{i=1}^{N} f(x_i) \equiv V\,\langle f\rangle_x\,, \qquad (1) $$
and the uncertainty is determined by the standard deviation of the mean,
$$ \sigma_I = V\,\sqrt{\frac{\langle f^2\rangle_x - \langle f\rangle_x^2}{N-1}}\,, \qquad (2) $$
where V is the volume encompassed by Ω and $\langle\,\cdot\,\rangle_x$ indicates that the average is taken with respect to a uniform distribution in x. While this works for simple or low-dimensional problems, it soon becomes inefficient for high-dimensional problems. This is what our work is concerned with. In particular, we are going to focus on improving current methods for MC integration that are based on importance sampling.
In importance sampling, instead of sampling from a uniform distribution, one samples from a distribution g(x) that ideally has the same shape as the integrand f(x). Using the transformation $\mathrm{d}x = \mathrm{d}G(x)/g(x)$, with G(x) the cumulative distribution function of g(x), one obtains
$$ I = \int_\Omega f(x)\,\mathrm{d}x = \int_\Omega \frac{f(x)}{g(x)}\,\mathrm{d}G(x) \approx \frac{1}{N}\sum_{i=1}^{N}\frac{f(x_i)}{g(x_i)}\,, \qquad x_i \sim g(x)\,. \qquad (3) $$
In the ideal case g(x) → f(x)/I, equation (3) would be estimated with vanishing uncertainty. However, this requires already knowing the analytic solution to the integral! The goal is thus to find a distribution g(x) that resembles the shape of f(x) as closely as possible, while being integrable and invertible so as to allow for fast sampling. We now review the MC integrators that are widely used, especially in the field of high-energy physics.
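Before turning to the individual integrators, the following minimal sketch illustrates equation (3). It does not use i-flow; the sampling distribution is a hand-picked Beta(4, 4) density standing in for a learned g(x), and the integrand is an arbitrary peaked toy function.

```python
# Naive MC vs importance sampling for I = int_0^1 f(x) dx (illustration only).
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-(x - 0.5) ** 2 / 0.02)   # peaked toy integrand
N = 100_000

# Naive MC: uniform samples on [0, 1].
x = rng.uniform(size=N)
naive = f(x)

# Importance sampling with g(x) = Beta(4, 4), peaked at 0.5 like f.
y = rng.beta(4, 4, size=N)
g_pdf = 140.0 * y ** 3 * (1 - y) ** 3          # Beta(4,4) density, B(4,4) = 1/140
weighted = f(y) / g_pdf                        # importance weights f/g

print("naive:   ", naive.mean(), "+/-", naive.std(ddof=1) / np.sqrt(N))
print("weighted:", weighted.mean(), "+/-", weighted.std(ddof=1) / np.sqrt(N))
```

The weighted estimate has a visibly smaller statistical error for the same number of points, which is the effect i-flow automates by learning g(x).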
VEGAS [13, 14] approximates all 1-dimensional projections of the integrand using histograms and an adaptive algorithm. This algorithm adjusts the bin widths such that the areas of all bins are roughly equal. Sampling a random point from VEGAS is done in two steps: first, a bin is selected randomly for each dimension; second, a point is sampled uniformly within each selected bin. However, this algorithm is limited because it assumes that the integrand factorizes, i.e.
$$ f(x_1,\dots,x_D) \approx f_1(x_1)\, f_2(x_2)\cdots f_D(x_D)\,, \qquad (4) $$
where $\vec{x} = (x_1,\dots,x_D)$ and the $f_i$ are one-dimensional functions. High-dimensional integrals with non-trivial correlations between integration variables, which are often needed for LHC data analyses, cannot be integrated efficiently with the VEGAS algorithm (cf [39]). The resulting uncertainty can be reduced further by applying stratified sampling, in addition to the VEGAS algorithm, after the binning [40].
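As a point of reference, the sketch below shows how such an adaptive grid is typically used in practice. We assume the standard `Integrator` API of the standalone `vegas` Python package (which we take to be the implementation cited as [40]); the two-peak integrand is a hypothetical stand-in chosen because its diagonal correlation is exactly what a factorized grid struggles with.

```python
# VEGAS importance sampling with an adaptive, factorized grid (sketch).
import math
import vegas   # pip install vegas

def camel_2d(x):
    """Two Gaussian peaks along the diagonal of the unit square."""
    a = 0.2 ** 2   # corresponds to alpha = 0.2 in the text
    return 0.5 * (math.exp(-((x[0] - 1/3) ** 2 + (x[1] - 1/3) ** 2) / a)
                + math.exp(-((x[0] - 2/3) ** 2 + (x[1] - 2/3) ** 2) / a))

integ = vegas.Integrator([[0, 1], [0, 1]])
result = integ(camel_2d, nitn=10, neval=5000)   # 10 adaptive iterations
print(result.summary())
print("estimate =", result.mean, "+/-", result.sdev)
```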
Foam [15] uses a cellular approximation of the integrand and is therefore able to learn correlations between the variables. In the first phase of the algorithm, the so-called exploration phase, the cell grid is built by subsequent binary splits of existing cells. Since the first cell consists of the full integration domain, all regions of the integration space are explored by construction. The second phase of the algorithm uses this grid to generate points either for importance sampling or as an event generator. In this work we use the implementation of [41], which implemented an additional reweighting of the cells at the end of the optimization.
However, both Foam and VEGAS are based on histograms, whose edge effects would be detrimental to numerical analyses that demand high precision. As we will explain below, our code i-flow uses a spline approximation which does not suffer from these effects. These edge effects are an important source of uncertainty for high-precision physics [42].
3. Importance sampling with normalizing flows
As we detailed in the previous section, importance sampling requires finding an approximation g(x) that can easily be integrated and subsequently inverted, so that we can use it for sampling. Mathematically, this corresponds to a coordinate transformation with an inverse Jacobian determinant that is given by g(x). General ML algorithms incorporate NNs in learning the transformation, which inevitably involves evaluating the Jacobian of the NNs. This results in inefficient sampling. Coupling-layer-based normalizing flow algorithms circumvent precisely this problem. To begin, let us review the concept of a normalizing flow (NF).
Let $c_k$, with k = 1, ..., K, be a series of bijective mappings applied to the random variable $\vec{x}$:
$$ \vec{x}' = c_K \circ c_{K-1} \circ \cdots \circ c_1(\vec{x})\,. \qquad (5) $$
Based on the chain rule, the probability distribution $g_K$ of the output $\vec{x}'$ can be inferred given the base probability distribution $g_0$ from which $\vec{x}$ is drawn:
$$ g_K(\vec{x}') = g_0(\vec{x})\,\prod_{k=1}^{K}\left|\det\frac{\partial c_k(\vec{x}_{k-1})}{\partial\vec{x}_{k-1}}\right|^{-1}\,, \qquad \vec{x}_{k-1} \equiv c_{k-1}\circ\cdots\circ c_1(\vec{x})\,,\quad \vec{x}_0 \equiv \vec{x}\,. \qquad (6) $$
One sees that the target and base distributions are related by the inverse Jacobian determinant of the transformation. For practical uses, the Jacobian determinant must be easy to compute, restricting the allowed functional forms of ck . However, with the help of coupling layers, first proposed by [28, 29], then generalized by [31, 32], one can incorporate NNs into the construction of ck , thus greatly enhancing the level of complexity and expressiveness of NF without introducing any intractable Jacobian computations.
Figure 1 shows the basic structure of a coupling layer, which is a special design of the bijective mapping c. For each map, the input variable $\vec{x}$ is partitioned into two subsets, $x_A$ and $x_B$, which can be chosen arbitrarily so long as neither is the empty set. This arbitrary partitioning will be referred to as a masking. Without loss of generality, one simple partitioning is given by $x_A = \{x_1,\dots,x_d\}$ and $x_B = \{x_{d+1},\dots,x_D\}$. Different maskings can be achieved via permutations of this simple example. Under the bijective map C, the resulting variable transforms as
$$ x'_A = x_A\,, \qquad (7) $$
$$ x'_B = C\big(x_B;\, m(x_A)\big)\,, \qquad (8) $$
where the NN takes $x_A$ as input and outputs $m(x_A)$, which represents the parameters of the invertible 'coupling transform' C that is applied to $x_B$. We detail various choices for C, like piecewise linear, piecewise quadratic, or piecewise rational quadratic spline functions, in appendix A. This construction leads to the simple Jacobian
$$ \frac{\partial(x'_A, x'_B)}{\partial(x_A, x_B)} = \begin{pmatrix} \mathbb{1} & 0 \\ \dfrac{\partial C}{\partial x_A} & \dfrac{\partial C}{\partial x_B} \end{pmatrix}\,, \qquad \left|\det\frac{\partial(x'_A, x'_B)}{\partial(x_A, x_B)}\right| = \left|\det\frac{\partial C\big(x_B;\, m(x_A)\big)}{\partial x_B}\right|\,. \qquad (9) $$
Note that equation (9) does not require the computation of the gradient of the neural network $m(x_A)$, which would scale as $\mathcal{O}(D^3)$ with D the number of dimensions. In addition, taking $\partial C/\partial x_B$ to be diagonal further reduces the computational complexity of the determinant to be linear with respect to the dimensionality of the problem. Linear scaling makes this approach tractable even for high-dimensional problems. In summary, the NN learns the parameters of a transformation and not the transformation itself, and thus the Jacobian can be calculated analytically.
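The following toy example makes the cost argument concrete. It uses an affine coupling transform (in the spirit of [30]) rather than the spline transforms i-flow actually employs (appendix A), and a tiny random matrix in place of a trained NN; because C acts element-wise on $x_B$, the log-determinant is just a sum over dimensions.

```python
# Toy coupling layer with an element-wise (diagonal-Jacobian) transform.
import numpy as np

rng = np.random.default_rng(1)
D, d = 6, 3                                    # total dims, dims kept in x_A
W1, W2 = rng.normal(size=(d, 16)), rng.normal(size=(16, 2 * (D - d)))

def m(x_a):
    """Tiny stand-in NN producing the transform parameters (scale s, shift t)."""
    h = np.tanh(x_a @ W1)
    out = h @ W2
    return out[:, :D - d], out[:, D - d:]      # each of shape (N, D - d)

def coupling_forward(x):
    x_a, x_b = x[:, :d], x[:, d:]
    s, t = m(x_a)
    x_b_prime = x_b * np.exp(s) + t            # element-wise => diagonal Jacobian
    log_det = s.sum(axis=1)                    # O(D) per point, no matrix algebra
    return np.concatenate([x_a, x_b_prime], axis=1), log_det

x = rng.uniform(size=(5, D))
x_prime, log_det = coupling_forward(x)
print(x_prime.shape, log_det)
```

Unlike this affine example, the spline transforms of appendix A map the unit hypercube onto itself, which is what makes them suitable for integration domains like $[0, 1]^D$.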
To construct a complete normalizing flow, one simply compounds a series of coupling layers, with the freedom of choosing any of the outputs of the previous layer to be transformed in the subsequent layer. We show in section 3.1 that $2\lceil\log_2 D\rceil$ coupling layers are required in order to express all non-separable structures of the integrand.
3.1. Number of coupling layers
The minimum number of coupling layers required to capture all possible correlations between every dimension of the integration variable $\vec{x}$ depends on the dimensionality of the integral, D [31]. In the cases of D = 2 and D = 3, each dimension is transformed once based on the other dimension(s), and thus two and three coupling layers are required, respectively. This way of counting could be generalized to higher D. In fact, this is what autoregressive flows are based on [43]. Here we show that the number of coupling layers required to capture all the correlations is $2\lceil\log_2 D\rceil$ for D > 5, and D layers for D ≤ 5. On the one hand, this can be considered the minimum number of layers required in order to capture all correlations: adding an additional layer will not add any new information, and similar effects should be achieved by increasing the depth of the network associated with each layer. On the other hand, this can be considered the maximum number of layers needed to capture all the correlations: if a function has fewer correlations, then all of them can be captured with fewer than $2\lceil\log_2 D\rceil$ layers.
Theorem. Given a set of correlated random variables x, if a transformation exists that takes the variables x to z, such that the correlation between the variables z is zero, then a composition of normalizing flows can create such a transformation. Given a set of infinitely wide NNs that are universal function approximators, and requiring that all variables are transformed an equal number of times, it is possible to represent all the correlations between variables in a normalizing flow using $2\lceil\log_2 D\rceil$ layers for D > 5. When D ≤ 5, it is possible to represent all correlations with D layers.
Proof. Given the random variables $x = \{x_1,\dots,x_D\}$, with means $\mu = \{\mu_1,\dots,\mu_D\}$ and joint probability distribution $p(x_1,\dots,x_D)$, the correlation between all the variables is given by:
$$ \mathrm{corr}(x) = \int \left(\prod_{i=1}^{D}(x_i-\mu_i)\right) p(x_1,\dots,x_D)\;\mathrm{d}x_1\cdots\mathrm{d}x_D\,. \qquad (10) $$
Using two layers of a normalizing flow network, which can be seen as a universal function approximator, defines a transformation $T: x \to y$, with the bounds of integration mapped accordingly, and with the variables split into two sets $\{y_a\} = g(\{x_a\})$ and $\{y_b\} = h(\{x_b\})$ such that $\{y_a\}\cup\{y_b\} = y$, and with the Jacobian J(y, x). The transformation also maps the means, $\mu \to \mu_y$. This decomposition is possible following the arguments of section 2.2 of [44]. Applying the transformation to equation (10) gives:
If we now consider the correlation between the variables y, we obtain:
The result of the transformation shows that the variables {ya } are now not correlated with {yb }. We can construct a subsequent transformation $T': y \to z$, built from two further coupling layers acting on new sets $\{y_{a'}\}$ and $\{y_{b'}\}$ that mix the previous ones, with the constraint that such a transformation does not introduce new correlations between the variables that have already been decorrelated. In other words, the composition of T and $T'$ can be defined as a transformation $T'': x \to z$, with $z = T'(T(x))$ and Jacobian $J(z, x) = J(z, y)\,J(y, x)$, such that:
and the means are mapped from µ to µz . Thus, the correlation between the variables z is given by:
The above transformations can be iterated until all the variables are decorrelated. A method of determining the mapping for each step can be obtained by the following procedure:
- (a)Reindex the dimension numbers from [1, D] to [0, D − 1]
- (b)Convert all dimensions to their binary representation, using the minimum number of bits required to represent the number D − 1
- (c) Consider the least significant bit for each dimension, and define the transformation as $y = \{g(\{x_0\}),\, h(\{x_1\})\}$, where $\{x_0\}$ ($\{x_1\}$) is the set of variables with a 0 (1) for the least significant bit
- (d)Repeat the third step taking the next least significant bit, until the most significant bit is reached
See table 1 for an example of the steps above. In that example, transformation 1 groups the even-numbered dimensions in g({x0}) and the odd-numbered dimensions in h({x1}), while transformation 4 groups the first 8 dimensions in g({x0}) and the last 4 in h({x1}).
Table 1. Finding the unique masking to capture all correlations in a D = 12 space, using the procedure detailed above.
Dimension | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Transformation 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
Transformation 2 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
Transformation 3 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
Transformation 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
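The masking procedure is simple enough to write down directly. The sketch below (names are ours, not i-flow's) reproduces table 1 for D = 12 by reading off one bit of each dimension index per transformation.

```python
# Generate the bit-based maskings described above (reproduces table 1 for D = 12).
def maskings(D):
    n_bits = max((D - 1).bit_length(), 1)      # bits needed to write D - 1
    masks = []
    for bit in range(n_bits):                  # least significant bit first
        masks.append([(dim >> bit) & 1 for dim in range(D)])
    return masks

for i, mask in enumerate(maskings(12), start=1):
    print(f"Transformation {i}:", mask)
# Each transformation corresponds to two coupling layers (the mask and its
# complement), giving the 2*ceil(log2(D)) layers quoted in the text.
```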
The number of steps in this procedure can easily be seen to be $\lceil\log_2 D\rceil$. However, since we need two layers per transformation to ensure that all variables are transformed equally by the network, this leads to a requirement of $2\lceil\log_2 D\rceil$ coupling layers.
In the situation of D ≤ 5, each coupling layer can be defined to take all other variables as input and transform the remaining variable, such that it is not correlated with any other variable. This leads to a requirement of D layers.
The above theorem requires that a transformation performing this mapping exists. However, even if such a transformation does not exist for the integrand itself, with the use of importance sampling it is only necessary to find a function g which is as close to the integrand as possible. In i-flow, splines are used to create the function g, and according to the Stone-Weierstrass Theorem [45–47] it is possible to represent g such that it is ε-close to f. A corollary of the Stone-Weierstrass Theorem for $\mathbb{R}^n$ can be expressed as:
Corollary. Given a function $f \in C(\mathbb{R}^n)$, with ε > 0 and $C(\mathbb{R}^n)$ the space of all real-valued continuous functions on $\mathbb{R}^n$, there exists a polynomial spline g such that:
$$ |f(x) - g(x)| < \varepsilon $$
for all $x \in \mathbb{R}^n$.
Proof. The space $\mathbb{R}^n$ is among the spaces covered by the Stone-Weierstrass Theorem, and thus the proof of the corollary follows directly from the Stone-Weierstrass Theorem.
Furthermore, this can be extended to a sum of piecewise polynomials, such that any continuous and bounded function f can be represented by an infinite series of polynomials (see Theorem C from [45]). In i-flow, we will consider the case of discontinuous functions, but these can be approximated as a continuous function with a slope of 1/ε in the region of discontinuity. This will lead to some difference between f and g, but since the goal is to find a function g as close to f as possible, then this is acceptable and should still allow for high precision importance sampling.
3.2. Using i-flow
The i-flow package requires three pieces of information from the user: the function to be integrated, the normalizing flow network, and the method of optimizing the network. Figure 2 shows schematically how one step in the training of i-flow works. The code is publicly available on gitlab at https://gitlab.com/i-flow/i-flow. Running the script iflow_test.py will produce the results presented in section 5.
3.2.1. Integrand
The function to be integrated has very few requirements on how it is implemented in the code. Firstly, the function must accept an array of points with shape (N, D), where N is the number of points to sample per training step and D is the dimensionality of the integral. Secondly, the function must return an array of function values with shape (N,) to be used to estimate the integral. Finally, the number of dimensions of the integral is required to be at least 2. However, one dimension can be treated as a dummy dimension integrated from 0 to 1, which will not change any result.
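A hedged sketch of an integrand obeying this shape contract is given below; the exact argument names and dtypes expected by i-flow may differ, so this only illustrates the (N, D) in, (N,) out convention described above.

```python
# Vectorized integrand sketch: takes a batch of points (N, D), returns (N,).
import numpy as np
import tensorflow as tf

def gaussian_integrand(x):
    """n-dimensional Gaussian test function (f1 of section 4) with alpha = 0.2."""
    alpha = 0.2
    d = tf.cast(tf.shape(x)[1], x.dtype)
    norm = tf.pow(tf.cast(1.0 / (alpha * np.sqrt(np.pi)), x.dtype), d)
    return norm * tf.exp(-tf.reduce_sum((x - 0.5) ** 2, axis=-1) / alpha ** 2)

points = tf.random.uniform((5000, 4), dtype=tf.float64)
values = gaussian_integrand(points)        # shape (5000,)
print(values.shape)
```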
3.2.2. Normalizing flow network
A normalizing flow network consists of a series of coupling layers compounded together. To construct each coupling layer, one needs to specify the choice of coupling transform C (cf appendix A).
The neural networks m(xA ) must be provided by the user. However, we provide examples for a dense network and the U-shape network of [31]. This provides the user the flexibility to achieve the expressiveness required for their specific problem.
3.2.3. Optimizing the network
To uniquely define the optimization algorithm of the network, two pieces of information are required. Firstly, the loss function to be minimized has to be chosen; we supply a large set of loss functions from the class of f-divergences, which can be found in appendix B. Secondly, an optimizer, such as ADAM [48], together with its settings has to be specified.
3.2.4. Hyperparameters
The setup we presented here has several hyperparameters that can be adjusted for better performance. However, i-flow has the flexibility for the user to implement additional features in each section beyond what is discussed below. This would come with additional hyperparameters as well.
The first group concerns the architecture of the NNs m(xA ). Once the general type of network (dense or U-shape) is set, the number of layers and nodes per layer have to be specified. In the case of the U-shape network, the user can specify the number of nodes in the first layer and the number of 'downward' steps.
The second group of hyperparameters concerns the optimization process. Apart from setting an optimizer (e.g. ADAM [48]), a learning schedule (e.g. constant or exponentially decaying), an initial learning rate, and a loss function have to be specified. Some of these options come with their own, additional set of hyperparameters. The number of points per training epoch and the number of epochs have to be set as well.
The third group of hyperparameters concerns the setup of i-flow directly. As was discussed in [31], there are two ways to pass xA into m(xA ): either directly or with one-blob encoding. i-flow supports both of these options. One-blob encoding [31] is a generalization of one-hot encoding: the input xA is passed through a Gaussian kernel and several adjacent bins are activated. If one-blob encoding is used, the number of input bins has to be specified; the width of the Gaussian is set to the inverse of the number of bins. Further, the type of coupling function C, the number of output bins, the number of CLs, and the maskings have to be set.
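A minimal sketch of one-blob encoding as described above follows; it is an illustration only (i-flow ships its own implementation, and details such as normalization may differ).

```python
# One-blob encoding: each scalar in [0, 1] activates several adjacent bins
# through a Gaussian kernel of width 1 / num_bins.
import numpy as np

def one_blob(x, num_bins=16):
    """x: array of shape (N, d) with entries in [0, 1] -> shape (N, d * num_bins)."""
    centers = (np.arange(num_bins) + 0.5) / num_bins          # bin centers
    sigma = 1.0 / num_bins                                    # kernel width
    diff = x[..., None] - centers                             # (N, d, num_bins)
    enc = np.exp(-0.5 * (diff / sigma) ** 2)
    return enc.reshape(x.shape[0], -1)

print(one_blob(np.array([[0.1, 0.9]])).round(2))
```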
3.2.5. Putting it all together
The networks are trained by sampling a fixed number of points using the current state of g(x) (see footnote 2). We use one of the statistical divergences as a measure of how closely the distribution g(x) resembles the shape of the integrand f(x), and an optimizer to minimize it. Because we can generate an infinite set of random numbers and evaluate the target function for each of the points, this approach corresponds to supervised learning with an infinite dataset. Drawing a new set of points at every training epoch also ensures that the networks cannot overfit.
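Conceptually, one training step therefore looks like the sketch below. All names are illustrative (the actual i-flow classes and loss implementations differ), and the gradient update performed by the TensorFlow optimizer is omitted; only the sample-based estimates of the integral and of a KL-type divergence are shown.

```python
# One conceptual training step: sample from g, evaluate f, build estimates.
import numpy as np

def training_step(sample_g, f, n_points=5000):
    """sample_g(n) must return (x, g_x): points drawn from g and their density."""
    x, g_x = sample_g(n_points)
    f_x = f(x)
    weights = f_x / g_x                        # importance weights f/g
    integral = weights.mean()                  # MC estimate of I
    error = weights.std(ddof=1) / np.sqrt(n_points)
    # Sample estimate of D_KL(f/I || g); i-flow minimizes a chosen f-divergence
    # like this one with an optimizer acting on the flow parameters.
    p_over_g = weights / integral
    kl_loss = np.mean(p_over_g * np.log(p_over_g + 1e-16))
    return integral, error, kl_loss

# Example with a flat starting distribution g(x) = 1 on the unit square:
sample_uniform = lambda n: (np.random.default_rng(0).uniform(size=(n, 2)), np.ones(n))
print(training_step(sample_uniform, lambda x: x.sum(axis=1)))
```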
4. Integrator comparison
To illustrate the performance of i-flow and compare it to VEGAS and Foam, we present a set of six test functions, each highlighting a different aspect of high-dimensional integration and sampling. These functions demonstrate how each algorithm handles the cases of a purely separable function, functions with correlations, and functions with non-factorizing hard cuts. In most cases, an analytic solution to the integral is known.
The first test function is an n-dimensional Gaussian, serving as a sanity check:
$$ f_1(\vec{x}) = \left(\frac{1}{\alpha\sqrt{\pi}}\right)^{\!n} \exp\left(-\,\frac{\sum_{i=1}^{n}(x_i-0.5)^2}{\alpha^2}\right)\,. $$
The result of integrating f1 from zero to one is given by:
$$ I_1 = \int_0^1 \mathrm{d}^n x\; f_1(\vec{x}) = \mathrm{erf}\!\left(\frac{1}{2\alpha}\right)^{\!n}\,. $$
In the following, we use α = 0.2.
The second test function is an n-dimensional Camel function, which would show how i-flow learns correlations that VEGAS (without stratified sampling) would not learn:
$$ f_2(\vec{x}) = \frac{1}{2}\left(\frac{1}{\alpha\sqrt{\pi}}\right)^{\!n}\left[\exp\left(-\,\frac{\sum_{i=1}^{n}\big(x_i-\tfrac{1}{3}\big)^2}{\alpha^2}\right) + \exp\left(-\,\frac{\sum_{i=1}^{n}\big(x_i-\tfrac{2}{3}\big)^2}{\alpha^2}\right)\right]\,. $$
The result of integrating f2 from zero to one is given by:
$$ I_2 = \left(\frac{\mathrm{erf}\!\left(\frac{1}{3\alpha}\right)+\mathrm{erf}\!\left(\frac{2}{3\alpha}\right)}{2}\right)^{\!n}\,. $$
In the following, we use α = 0.2.
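The closed-form results quoted above can be checked numerically with plain uniform Monte Carlo; the quick sanity test below is not part of the benchmark setup and uses our own reconstruction of f1 and f2.

```python
# Cross-check of the analytic integrals of f1 and f2 (alpha = 0.2, n = 2).
import numpy as np
from scipy.special import erf

alpha, n, N = 0.2, 2, 2_000_000
rng = np.random.default_rng(7)
x = rng.uniform(size=(N, n))

norm = (1.0 / (alpha * np.sqrt(np.pi))) ** n
f1 = norm * np.exp(-np.sum((x - 0.5) ** 2, axis=1) / alpha ** 2)
f2 = 0.5 * norm * (np.exp(-np.sum((x - 1/3) ** 2, axis=1) / alpha ** 2)
                 + np.exp(-np.sum((x - 2/3) ** 2, axis=1) / alpha ** 2))

print("I1 MC:", f1.mean(), " exact:", erf(1 / (2 * alpha)) ** n)
print("I2 MC:", f2.mean(), " exact:", ((erf(1/(3*alpha)) + erf(2/(3*alpha))) / 2) ** n)
```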
The third case is given by
This function has two circles with shifted centers, varying thickness, and varying height. The function also exhibits non-factorizing behavior. The integral of f3 between 0 and 1 can be computed numerically using Mathematica [49], which gives 0.013 684 8 ± (5 · 10−9) for our choice of parameters with a = 3 (see footnote 3).
The fourth case is an annulus function with hard cuts:
$$ f_4(x_1, x_2) = \begin{cases} 1\,, & 0.2 < \sqrt{(x_1-0.5)^2 + (x_2-0.5)^2} < 0.45\,,\\ 0\,, & \text{otherwise}\,. \end{cases} $$
This function demonstrates how i-flow learns hard, non-factorizing cuts. The result of integrating f4 from zero to one is given by $I_4 = \pi\,(0.45^2 - 0.2^2) \approx 0.510\,508$.
The fifth case is motivated by high-energy physics: a one-loop scalar box integral, representative of an integral required for the calculation of gg → gh in the Standard Model. This calculation is an important contribution to the total production cross-section of the Higgs boson. As explained in appendix C, sector decomposition and Feynman parametrisation bring this box integral into the form of a 3-dimensional integral over the unit cube, which we use as the integrand f5.
The result of integrating f5 from zero to one can be obtained with LoopTools [51], which gives a numerical result of 1.936 964 023 8 · 10−10 for our choice of input momenta and masses.
As a sixth test function, we consider the polynomial
$$ f_6(\vec{x}) = \sum_{j=1}^{D}\left(x_j - x_j^2\right)\,. $$
The result of integrating f6 from zero to one is given by:
$$ I_6 = \int_0^1 \mathrm{d}^D x\; f_6(\vec{x}) = \frac{D}{6}\,. $$
This function can easily be integrated in a high number of dimensions and, unlike the Gaussian or Camel functions, has support in almost all of the integration domain. It therefore does not suffer that much from the curse of dimensionality.
Further applications to event generation of high-energy particle collisions are discussed in [52] and also in [53]. These papers investigate using normalizing flows to improve upon phase space integration for event simulation at particle colliders. The integral dimension for processes with nf particles in the final state is D = 4nf − 3. In [52], we studied processes with nf ≤ 6.
5. Results
In this section we show the performance of i-flow and compare it to VEGAS and Foam based on the test functions we introduced in section 4. For the VEGAS algorithm, we use the default parameters as implemented in [40]. This includes the use of stratified sampling and a maximum of 1000 bins per axis. We further set the number of points per iteration to 5000. However, the implementation in [40] uses this number as a maximum, so we monitor the actual number of function calls separately. The setup of Foam requires a number of points per cell, which we fix to 5000. In the setup of i-flow, we use $2\lceil\log_2 D\rceil$ coupling layers with the masking discussed in section 3.1, and the coupling transform C is taken to be a piecewise rational quadratic spline (appendix A.3). The neural network in each CL is taken to be a DNN of 5 layers with 32 nodes in each of the first four layers. The number of nodes in the last layer depends on the coupling transform C and the dimensionality of the integrand; for piecewise rational quadratic splines it is proportional to d, the number of dimensions to be transformed, times the number of spline parameters per dimension. We further set the number of bins in each output dimension to 16. The learning rate was set to 1 · 10−3 in all cases. We use the exponential divergence, see equation (B19), as loss function.
To compare the integrators, we set a relative uncertainty on the integral estimate as target. We then optimize the algorithms until the standard deviation of the inverse-variance weighted combination of the estimates of each optimization iteration (epoch) reaches this target. The inverse-variance weighted combination is defined as:
$$ \mu = \frac{\sum_i \mu_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}\,, \qquad \sigma^2 = \left(\sum_i \frac{1}{\sigma_i^2}\right)^{-1}\,, $$
where $\mu_i$ ($\sigma_i$) is the mean (standard deviation) of epoch i and µ (σ) is the combination. The relative uncertainty is defined as the uncertainty of the given estimate, normalized to the true value of the integral (see footnote 4). Given this setup, there are three metrics that we use to compare the integrators: 1) the number of function calls needed to reach the target uncertainty; 2) how close the estimated integral value is to the true value; 3) the uncertainty of the estimates in the last iterations. Each of these highlights a different aspect of the integrator, and we detail them below. The results are shown in tables 2–4. We chose a relative target uncertainty of 10−4 for the non-polynomial test functions and 10−5 for the polynomials. For Gaussian and Camel functions, we use α = 0.2. In addition, we set a cut-off at 5 · 107 function calls.
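For completeness, the weighted combination used for the stopping criterion is straightforward to compute from the per-epoch estimates; the helper below is a sketch (i-flow tracks this internally).

```python
# Inverse-variance weighted combination of per-epoch estimates.
import numpy as np

def combine(means, sigmas):
    means, sigmas = np.asarray(means), np.asarray(sigmas)
    w = 1.0 / sigmas ** 2
    mu = np.sum(w * means) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return mu, sigma

print(combine([0.98, 1.02, 1.00], [0.05, 0.04, 0.03]))
```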
Table 2. Number of function calls to reach a total relative uncertainty of 10−4 (for the first 11 cases) or 10−5 (for the last 3 cases). The total relative uncertainty is defined as the inverse-variance weighted combination of the uncertainties of each optimization iteration divided by the true integral value. The integrator with the fewest function calls, which is also within 5 standard deviations of the true result, is highlighted in boldface. We set an upper cut-off of 5 · 107 calls. A missing entry indicates that the algorithm did not converge to the true integral value within 5 standard deviations (see table 3); a * indicates cases where the algorithm ran out of memory before the cut-off was reached. α = 0.2 for Gaussian and Camel functions.
| | Dim | VEGAS | Foam | i-flow |
|---|---|---|---|---|
| Gaussian | 2 | **164,436** | 6,259,812 | 2,310,000 |
| | 4 | **631,874** | 24,094,679 | 2,285,000 |
| | 8 | **1,299,718** | | 3,095,000 |
| | 16 | **2,772,216** | | 7,230,000 |
| Camel | 2 | **421,475** | 5,619,646 | 2,225,000 |
| | 4 | 24,139,889 | 21,821,075 | **8,220,000** |
| | 8 | | | **19,460,000** |
| | 16 | | | |
| Entangled circles | 2 | 43,367,192 | **17,499,823** | 23,105,000 |
| Annulus w. cuts | 2 | | **11,219,498** | 17,435,000 |
| Scalar-top-loop | 3 | **152,957** | 5,290,142 | 685,000 |
| Polynomial | 18 | 42,756,678 | > 50,000,000 | **585,000** |
| | 54 | > 50,000,000 | > 21,505,000 * | **685,000** |
| | 96 | | > 10,325,000 * | > 1,145,000 |
Table 3. Integral estimate and uncertainty of the runs of table 2, together with their relative deviations ('pull') defined in equation (25). A missing entry indicates that the algorithm reached the cut-off of 5 · 107 function calls before the target uncertainty was reached; a * indicates cases where the algorithm ran out of memory before the cut-off was reached. The result with the smallest relative deviation is boldfaced. α = 0.2 for Gaussian and Camel functions.
Dim | VEGAS | (pull) | Foam | (pull) | i-flow | (pull) | true value | |
---|---|---|---|---|---|---|---|---|
Gaussian | 2 | 0.999 25(10) | 0.7 | 0.999 25(10) | 0.6 | 0.999 186 | ||
4 | 0.998 61(10) | 2.4 | 0.998 41(10) | 0.4 | 0.998 373 | |||
8 | 0.996 94(10) | 1.9 | −6.4 | 0.996 749 | ||||
16 | 0.993 57(10) | 0.6 | −188 | 0.993 509 | ||||
Camel | 2 | 0.981 75(10) | 0.9 | 0.981 63(10) | −0.3 | 0.981 66 | ||
4 | 0.963 45(10) | −2.2 | 0.963 61(10) | −0.5 | 0.963 657 | |||
8 | −13 | −3.5 | 0.928 635 | |||||
16 | 0.431 37(9) | −5001 | −72 | [55] | 0.862 363 | |||
Entangled circles | 2 | 0.013 679 8(14) | −3.6 | 0.013 682 9(14) | −1.4 | 0.013 684 8 | ||
Annulus w. cuts | 2 | 0.509 813(51) | −14 | 0.510 559(51) | 1.0 | 0.510 508 | ||
Scalar-top-loop | 3 | 1.937 11(19)· 10−10 | 0.7 | 1.936 77(19)· 10−10 | −1.0 | 1.936 964 · 10−10 | ||
Polynomial | 18 | 2.999 89(3) | −3.6 | −1.1 | 3 | |||
54 | −1.5 | 9.000 13(32) * | 0.4 | 9 | ||||
96 | −30683 | 16.000 4(3) * | 1.7 | 16 |
Table 4. Relative uncertainty on the integral estimate of the last iteration of the runs of table 2, based on a sample of 5000 points. The integrator that adapted best to the integrand is boldfaced. A * indicates that the value was still decreasing and had not yet converged; a missing entry is in place where the algorithm did not converge to the true integrand.
Dim | VEGAS | Foam | i-flow | |
---|---|---|---|---|
Annulus w. cuts | 2 | 4 · 10−3 * | 5 · 10−3 | |
Scalar-top-loop | 3 | 7 · 10−4 | ||
Polynomial | 18 | 1.5 · 10−3 | 1.5 · 10−3 * | |
54 | 3 · 10−3 | 9 · 10−4 * | ||
96 | 8 · 10−4 * |
Number of function calls. This number shows how often the integrand was evaluated by the algorithm until the target uncertainty was reached. Having fewer function calls is especially important when the function is numerically expensive to evaluate and the computational overhead of the integration algorithm becomes subleading. The results are shown in table 2. We highlight the entry with the fewest calls in boldface. In addition, entries in which the final integral estimate differs by more than 5 standard deviations from the true result are left empty, and entries in which too much memory was required are marked by a *.
Integral estimate and uncertainty. This obviously shows how well the integrator estimated the value of the integral. We show our results in table 3 and compare them to the true, known results. We highlight in boldface the entry with the smallest relative deviation ('pull'), defined as
$$ \mathrm{pull} = \frac{I_{\mathrm{est}} - I_{\mathrm{true}}}{\sqrt{\Delta I_{\mathrm{est}}^2 + \Delta I_{\mathrm{true}}^2}}\,. \qquad (25) $$
Here, $I_{\mathrm{est}}$ is the result from VEGAS, Foam, or i-flow, $I_{\mathrm{true}}$ is the true value of the integral, and the ΔI terms signify the corresponding uncertainties. Note that $\Delta I_{\mathrm{true}}$ is only non-zero for the case of the entangled circles, for which it is 5 · 10−9. Cases in which the cut-off for function calls was reached (see table 2) are left empty; cases that ran into memory problems are marked with a *.
Relative uncertainty on the integral estimate in the last iterations. The uncertainty on the integral estimate after adaptation is a measure of how well the algorithm adapted to the integrand. Once the algorithm is fully adapted, the uncertainty of a single integral estimate will be constant and the combination of all iterations will follow the $1/\sqrt{N}$ scaling law for MC estimates based on N points. A better adapted algorithm introduces a smaller coefficient for that scaling and therefore requires fewer function calls to reach a smaller uncertainty. We show our results in table 4. Cases in which VEGAS failed to converge to the right integral value are left empty; a * marks entries that still showed a downward trend at the end of the optimization, indicating that the algorithm was still adapting to the integrand. We highlight the integrator with the smallest uncertainty in boldface.
For the Gaussians, VEGAS always has the fewest calls. This is expected, since the integrand factorizes. However, the number grows rapidly for increasing integrand dimension, whereas the number for i-flow grows more slowly. Foam is not able to reach the target uncertainty for D = 8, 16 before the cut-off of 5 · 107 function calls. i-flow has adapted best to all Gaussians of D > 2, as can be seen in table 4. This means that if a sufficiently small target uncertainty is required, i-flow would potentially need fewer function calls to reach it. The fact that the optimization of i-flow was not complete when the target uncertainty was reached can also be seen in figure 3(a), where the accumulated uncertainty of i-flow (red line) was falling more quickly than the $1/\sqrt{N}$ scaling (dashed gray lines). In almost all of the cases, the integral estimate of i-flow was closest to the true integral value.
For the Camel functions, VEGAS only has the fewest calls for D = 2; in higher dimensions i-flow needs fewer calls. Note that in 16 dimensions, VEGAS completely misses one of the two peaks, yielding an estimate that is off by a factor of two. Since the integrand then effectively looks like a single Gaussian, VEGAS converges quicker than the other algorithms. The integral estimate of i-flow also seems off (see footnote 5), but this is due to the fact that it needs roughly 200 epochs to 'see' the full structure of the integrand and all of those iterations contribute to the final number in table 2. Again, Foam needs too many points for D > 4 to reach the target uncertainty, the integral estimates of i-flow are closest to the true value, and i-flow has adapted best to the integrand. The latter can be seen from the small relative uncertainties in table 4 and the scaling in the 4-dimensional case shown in figure 3(b).
We discuss the entangled circles and the annulus after the polynomials. For the scalar top loop, VEGAS needs the fewest function calls, but all 3 integrators estimate the true value within one standard deviation. It is, however, interesting to see that both VEGAS and Foam seem to be fully adapted, whereas the uncertainties of the estimates of i-flow were improving much faster than the $1/\sqrt{N}$ expectation, see figure 3(d).
The polynomials show the strength of i-flow. It has no problems adapting to the high-dimensional integrand, as can be seen in table 4. Therefore, i-flow needs comparatively few function calls to reach the target uncertainty. Since the polynomial does not factorize, VEGAS does not adapt well, or in the case of D = 96 not at all. The difference between the adaptation of the algorithms is also visible in figure 3(c). There, however, we see an interesting pattern in the accumulated uncertainty of Foam that we want to comment on. First, since Foam estimates the integral and uncertainty of a given iteration always on the points of all previous iterations, the uncertainty can grow for a growing number of points if the central value shifts. Second, due to the symmetry of the polynomial integrand, we see a periodic pattern that we can understand as follows. We start with an uncertainty based on the first 5000 points. Adding more points at this initial stage lets the algorithm 'see' more structure of the integrand and the uncertainty grows. A large cell with a large spread of functional values within it is then further split consecutively into many smaller cells. That reduces the spread of functional values per cell and therefore the uncertainty of the integral estimate. Once the uncertainty drops below a certain value, Foam stops splitting these (smaller) cells and returns to one of the 'bigger' cells it did not split in the beginning and starts splitting it. This initially increases the uncertainty again, because the spread of functional values in the large cell is larger than it was in the smaller cells. The result is the oscillating pattern we see in figure 3(c). Note that the minima of this pattern follow the $1/\sqrt{N}$ scaling.
The entangled circles are best integrated by Foam, as it is only 2 dimensional, yet non-factorizable. i-flow is slightly worse, but not by much. VEGAS, however, does not perform well. Similar statements can be made about the annulus function with hard cuts. VEGAS does the worst because of the non-factorizing structure of the integrand and Foam does well because it is only a 2-dimensional problem. As discussed in the earlier sections, i-flow also allows efficient sampling once it 'learns' the integral up to small uncertainty, we therefore use these test functions to illustrate the sampling performance of i-flow. As an example, figure 4 shows a sample distribution after training i-flow with 5 · 106 points (1000 epochs with 5000 points per epoch) on the annulus function of equation (20). For training, we used a learning schedule with exponential decay. An initial learning rate of 2 · 10−3 is halved every 250 epochs. The cut efficiency, defined as the fraction of the generated points that pass the hard cut, is 89.6%. Figure 5 shows the weights of 10 000 points sampled after training with 10 M points on the Entangled Circles of equation (19). In the ideal case of g → f/I, we expect the weight distribution to approach a delta function. In figure 5, we see that the trained results are much more like a delta function than the flat prior, showing significant improvement in the ability to draw samples from this function.
It is clear from these considerations that for the low-dimensional integrals (D ≤ 4), all three integrators achieve reasonable results. If the target uncertainty is not very small, VEGAS or Foam provide the best integrator, depending on the integrand at hand. If, however, a very small target uncertainty is needed, i-flow is the better option as it adapts very well to the shape of the integrand. It is only the fact that i-flow adapts more slowly than VEGAS that makes i-flow lose in the beginning, as illustrated in figure 3. For higher-dimensional integrands (D ≥ 4), i-flow requires fewer function calls because it adapts better to the integrand. For example, VEGAS fails in the integration of the 16-dimensional Camel function completely (missing one of the peaks) and Foam has a large uncertainty on the final result, even though it uses many more function calls. Foam also performs poorly in the case of the Gaussian in 16 dimensions. In both of these cases, Foam requires approximately $b^D$ cells to map out all the features of a function, where b is the average number of bins in each dimension. If b is taken to be 2, for 16 dimensions the number of cells required is at least $2^{16} = 65\,536$, which is far greater than our set cut-off of 10 000 cells. Therefore, when dealing with high-dimensional integrals, Foam is the least efficient integrator.
To quantify the computational overhead of i-flow in comparison to VEGAS, we trained both for 100 iterations with 5000 points per iteration on the polynomial function. It consistently took VEGAS 2 seconds for 2, 4, 8, 16, and 32 dimensions, whereas i-flow took 14.7, 37.2, 80.1, 176.4, and 359.2 seconds, respectively, on a laptop with an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80 GHz. This increase is due to the larger number of coupling layers, and therefore of trainable parameters, in higher dimensions. Working out the numbers for 32 dimensions, we find that once a single function evaluation takes much longer than 720 µs, the overhead of i-flow becomes unimportant. If, additionally, the difference in the number of function evaluations needed to reach a target precision is taken into account, the function evaluation time above which the overhead of i-flow becomes insignificant is even smaller.
To summarize, i-flow provides the best integrator for integrals in 4 or more dimensions, especially if a high precision is needed and/or the integrand is numerically expensive and slow to evaluate.
6. Conclusion and outlook
As shown in the previous section, i-flow tends to do better than both VEGAS and Foam for all the test cases provided. However, i-flow comes with a few downsides. Since i-flow has to learn all the correlations of the function, it takes significantly longer to achieve optimal performance compared to the other integrators, as can be seen in figure 3. This translates to longer training times. Additionally, the memory footprint of i-flow is much larger, due to the storage required for the network parameters and their updates. Both of these issues can be addressed with future improvements.
There are several directions in which we plan to improve the presented setup in the future. So far, we only used simple NN architectures in our coupling layers. Using convolutional NNs instead might improve the convergence of the normalizing flow for complicated integrands, as these networks have the ability to learn complicated shapes in images with fewer parameters than dense networks.
The setup suggested in [54] would allow the extension of i-flow to discrete distributions, which also has applications in HEP [52, 53]. Another way to implement this type of information is utilizing Conditional Normalizing Flows [55].
The implementation of transflow-learning, which was suggested in [56], would allow the use of a trained normalizing flow on different, but similar problems without retraining the network. Such problems arise in HEP when new-physics effects from high energy scales modify scattering properties at low energies slightly and are described in an effective field theory framework. Another application for transflow-learning would be to train one network for a given dimensionality and adapt the network for another problem with the same dimensionality.
Using techniques like gradient checkpointing [57] have the potential to reduce the memory usage substantially, therefore allowing more points to be used at each training step or larger NN architectures.
The setup presented in [58], which introduces invertible 1 × 1 convolutions, showed an improved performance over the vanilla implementation of the normalizing flows, which possibly also applies to our case. These 1 × 1 convolutions are generalizations of permutation operators acting on the inputs. Additionally, this would modify the maximum number of coupling layers required by having more expressive permutations.
Acknowledgments
We thank Joao M Goncalves Caldeira, Felix Kling, Luisa Lucie-Smith, Tilman Plehn, Holger Schulz, Nhan Tran, Paddy Fox, William Jay, David Shih, and the participants of the Aspen workshop 'The Energy Frontier Beyond the LHC Run 2' for their comments and discussions. We further thank Stefan Höche for helpful discussions, comments, and for his Foam implementation [41].
This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the US Department of Energy, Office of Science, Office of High Energy Physics. This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. CK acknowledges the support of the Alexander von Humboldt Foundation.
Appendix A.: Coupling layer details
The implementation of the layers available in i-flow are detailed below. The layers are based on the work of [31, 32] and are reproduced here for the convenience of the reader.
Appendix A.1. Piecewise Linear
For the piecewise linear coupling layer [31], given K bins of width w, the probability density function (PDF) is defined as:
The cumulative distribution function (CDF) is defined by the integral giving:
where b is the bin in which occurs (), and . Alternatively, we can define b as the maximal b for which . The inverse CDF is given by:
The Jacobian for this network is straightforward to calculate, and gives:
The piecewise linear layers require fixed bin widths in each layer. For details on why this is required, see appendix B of [31].
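A hedged sketch of this type of transform is given below: the network outputs one unnormalized value per bin, a softmax turns them into bin probabilities Q, and the forward map is the resulting piecewise-linear CDF, whose derivative (the Jacobian factor) is simply the bin probability divided by the bin width. This is a simplification for illustration; see [31] and the i-flow source for the full layer.

```python
# Piecewise-linear CDF transform for a single coordinate (simplified sketch).
import numpy as np

def pw_linear_forward(x, q_unnorm):
    """x: (N,) values in [0, 1]; q_unnorm: (N, K) raw network outputs."""
    K = q_unnorm.shape[1]
    q = np.exp(q_unnorm) / np.exp(q_unnorm).sum(axis=1, keepdims=True)  # softmax
    w = 1.0 / K                                    # fixed bin width
    b = np.minimum((x / w).astype(int), K - 1)     # bin index of each x
    cum = np.cumsum(q, axis=1) - q                 # probability mass below bin b
    alpha = (x - b * w) / w                        # relative position inside bin b
    rows = np.arange(len(x))
    cdf = cum[rows, b] + alpha * q[rows, b]        # forward transform C(x)
    pdf = q[rows, b] / w                           # dC/dx: the Jacobian factor
    return cdf, pdf

x = np.array([0.05, 0.5, 0.93])
q_raw = np.zeros((3, 8))                           # equal bins => identity map
print(pw_linear_forward(x, q_raw))
```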
Appendix A.2. Piecewise quadratic
For the piecewise quadratic coupling layer [31], given K bins with widths Wik , with K + 1 vertex heights given by Vik , the PDF is defined as:
Integrating the above equation leads to the CDF:
where b is defined as the solution of the bin condition (see footnote 6), and α is the relative position of x in bin b. Inverting the CDF leads to:
where b is defined as the solution to
and β is the relative position of Ci in the bin b, and is given by:
Appendix A.3. Piecewise rational quadratic
For the piecewise rational quadratic coupling layer [32], given K + 1 knot points $\{(x^{(k)}, y^{(k)})\}_{k=0}^{K}$ that are monotonically increasing, with $(x^{(0)}, y^{(0)}) = (0, 0)$ and $(x^{(K)}, y^{(K)}) = (1, 1)$, and K + 1 non-negative derivatives $\{d^{(k)}\}_{k=0}^{K}$, the CDF can be calculated using the algorithm from [59], which is roughly reproduced below.
First, we define the bin widths ($w^{(k)} = x^{(k+1)} - x^{(k)}$) and the slopes ($s^{(k)} = (y^{(k+1)} - y^{(k)})/w^{(k)}$). We next obtain the fractional distance (ξ) between the two knots that bracket the point of interest x, i.e. $\xi = (x - x^{(k)})/w^{(k)}$, where k is the bin in which x lies. The CDF is given by:
where the details of α(ξ) and β(ξ) can be found in [59]; the expression simplifies to:
which is noted to be less prone to numerical issues [59]. The inverse can be found by solving a quadratic equation [32]:
where the coefficients are given in [32], solving this equation for the solution that gives a monotonically increasing x results in:
where the second form is numerically more precise when 4ac is small, and is also valid for a = 0 [32].
Appendix B.: Loss functions
We implemented several different divergences that can be used as loss functions. They differ in symmetry, relative weight between small and large deviations, treatment of p = 0 case (also in the derivative), and numerical complexity. All of them are from the class of f-divergences [37].
Pearson χ2 divergence:
Kullback-Leibler divergence:
squared Hellinger distance:
Jeffreys divergence:
Chernoff's α-divergence:
exponential divergence:
(α, β)-product divergence:
Jensen-Shannon divergence:
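In practice, these divergences are estimated from the same samples used for the integral estimate, i.e. as expectation values over points drawn from g with p approximated by f/I. The sketch below shows this for three of the listed divergences; the prefactors and argument conventions of the exact i-flow definitions may differ.

```python
# Sample-based divergence estimators used as training losses (sketch).
import numpy as np

def divergence_estimates(f_x, g_x):
    """f_x, g_x: integrand and density values at points drawn from g."""
    w = f_x / g_x                                # importance weights
    I = w.mean()                                 # current integral estimate
    r = w / I                                    # p/g with p = f/I
    kl   = np.mean(r * np.log(r))                # Kullback-Leibler divergence
    chi2 = np.mean((r - 1.0) ** 2)               # Pearson chi^2 divergence
    exp_ = np.mean(r * np.log(r) ** 2)           # exponential divergence
    return kl, chi2, exp_
```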
Appendix C.: Sector decomposition of scalar loop integrals
Following [50], we give the integral representations of triangle and box functions in 4 dimensions using the Feynman parametrisation. To begin with, the triangle integral with external particles of given energies and internal propagators of given masses is given by
The 3-dimensional integral is further split into 3 sectors by the decomposition:
For example, when , after the variable transformation , the integral simplifies to
Therefore,
One can perform the same trick to treat the box integral with 4 external fields and 4 propagators. After sector decomposition, one gets
Footnotes
1. An N particle final state phase space is a D ≈ 4N − 3 dimensional integral, when including recursive multichannel selection in the integral.
2. Since we initialize the last layer of each network with vanishing bias and weights, in the first sampling g(x) is constant.
3. There is no known analytic solution to this given function.
4. Note that Foam directly gives the uncertainty including all sampled points up to the given iteration and no combination is needed.
5. The estimate of i-flow only deviates from the true value because the estimates from all iterations are combined and the first 200 epochs only "see" one of the two peaks. Combining 15 of the last epochs yields 0.86377(136), which is closer to the true value.
6. Note that this definition means b ∈ [1, K].