Abstract
The field of dark matter detection is a highly visible and highly competitive one. In this paper, we propose recommendations for presenting dark matter direct detection results particularly suited for weak-scale dark matter searches, although we believe the spirit of the recommendations can apply more broadly to searches for other dark matter candidates, such as very light dark matter or axions. To translate experimental data into a final published result, direct detection collaborations must make a series of choices in their analysis, ranging from how to model astrophysical parameters to how to make statistical inferences based on observed data. While many collaborations follow a standard set of recommendations in some areas, for example the expected flux of dark matter particles (to a large degree based on a paper from Lewin and Smith in 1995), in other areas, particularly in statistical inference, they have taken different approaches, often from result to result by the same collaboration. We set out a number of recommendations on how to apply the now commonly used Profile Likelihood Ratio method to direct detection data. In addition, updated recommendations for the Standard Halo Model astrophysical parameters and relevant neutrino fluxes are provided. The authors of this note include members of the DAMIC, DarkSide, DARWIN, DEAP, LZ, NEWS-G, PandaX, PICO, SBC, SENSEI, SuperCDMS, and XENON collaborations, and these collaborations provided input to the recommendations laid out here. Wide-spread adoption of these recommendations will make it easier to compare and combine future dark matter results.
1 Introduction and purpose of this paper
The nature of dark matter (DM) is one of the highest-priority topics in high energy particle physics. Many collaborations around the world are building exquisitely sensitive detectors to search for dark matter particles, often in direct competition with each other, and in the future, collaborations may wish to combine data from complementary targets to draw even stronger conclusions about dark matter models, especially in light of neutrino backgrounds [1] and model uncertainties [2].
In going from data to a final dark matter result, or even in projecting the potential sensitivity of a proposed experiment, direct detection collaborations make a series of choices, ranging from how to model the dark matter halo in the Milky Way to which test statistic to use to perform statistical inference. Different approaches can lead to significant differences in the interpretation of a result even if the underlying data are the same, complicating comparisons and combinations of results. In a recent example, the LUX collaboration deployed a power-constrained limit [3] (discussed in Sect. 2.2.1) for their dark matter limits [4, 5], but chose a different power threshold in the two results; making the same choice in Ref. [4] as in Ref. [5] would have changed the resulting limit by a factor of \(\sim \)2. Similarly, the XENON1T collaboration presented a first result by approximating their likelihood ratio with an asymptotic distribution [6], an approximation that incorrectly produced a \(\sim \)50% more sensitive result. For their second science run, XENON1T corrected this treatment [7].
Background modeling is another area where collaborations make choices with potentially significant implications on inferred results. While many backgrounds are unique to each detector, there are some elements that are shared by all direct detection experiments, such as those induced by astrophysical neutrinos. To model solar or atmospheric neutrino backgrounds, collaborations rely on external data, with varying possible interpretations of the rates in dark matter detectors. As direct detection experiments increase in exposure, measurements of these astrophysical neutrino fluxes will be among the primary determinants of sensitivity [8].
Dating back to the paper of Lewin and Smith [9], dark matter collaborations have mostly (but not entirely) used similar assumptions about the phase-space distribution of dark matter. However, the community has not converged on a similar consensus regarding how to make statistical inferences from direct detection data. In this paper, we lay out recommendations for statistical methods aimed primarily at the Profile Likelihood Ratio (PLR) method, now commonly used in searches for weak-scale dark matter candidates, although some of these recommendations do apply more generally. We recognize that not all analyses lend themselves to the PLR and we hope that collaborations will follow the spirit of these recommendations when applicable. We take the opportunity to make updated recommendations for modeling the distribution of dark matter in our galaxy, as well as to discuss neutrino backgrounds that will be observed by many experiments in the near future.
This effort grew out of a Phystat-DM workshop [10] held in Stockholm, Sweden, in August 2019, under the umbrella of the Phystat conference series. The authors of this note include members of the DAMIC, DarkSide, DARWIN, DEAP, LZ, NEWS-G, PandaX, PICO, SBC, SENSEI, SuperCDMS, and XENON collaborations, and these collaborations provided input to the recommendations laid out here. Our approach is similar in spirit to that of the ATLAS and CMS experiments in the period prior to the discovery of the Higgs, when the two collaborations agreed in advance on what statistical treatment to use in combining Higgs data sets [11], although we make different recommendations that we feel are more appropriate for our application.
In writing this white paper, we recognize the large influence of chance when analysing dark matter data; due to the low backgrounds, the expected statistical fluctuations for direct detection upper limits are around twice as large as those in the Higgs discovery [12, 13]. Nevertheless, settling on common standards will enable more accurate comparisons of projections and results from different experiments and technologies as well as statistical combinations across collaborations. If, as we hope will be the case, this work is used as a reference in future dark matter publications, the underlying works on which our recommendations are based should also be cited. In addition to the specific recommendations given here, we suggest that collaborations continue to communicate with each other on these topics and adapt as necessary when new results are released.
The paper is organized as follows: Sect. 2 discusses Profile Likelihood Ratio analyses, Sect. 3 discusses astrophysical models, with Sect. 3.1 focusing on the dark matter halo distribution, summarized in Table 1, and Sect. 3.2 focusing on astrophysical neutrinos, summarized in Table 4. An overall summary of our recommendations is provided in Sect. 4.
2 Profile likelihood ratio analyses
Frequentist hypothesis testing has traditionally been the preferred method in direct dark matter searches to place constraints on regions of parameter space. Our recommendations are developed for analyses deploying the profile likelihood ratio (PLR) method [14,15,16], although some can be applied more generally. Using a likelihood-based test statistic like the PLR has the advantage that experimental uncertainties can conveniently be accounted for by parameterizing them as nuisance parameters. The PLR method has been described in great detail elsewhere, and we follow the discussion and notation of Ref. [15]. We strongly recommend that readers review Sect. 2 of Refs. [15, 16], as we do not attempt to cover the subject fully here.
For a set of parameters of interest, \(\varvec{\mu }\), and a collection of nuisance parameters, \(\varvec{\theta }\), the profile likelihood ratio is defined as

$$
\lambda (\varvec{\mu }) = \frac{\mathcal {L}(\varvec{\mu }, \hat{\hat{\varvec{\theta }}})}{\mathcal {L}(\hat{\varvec{\mu }}, \hat{\varvec{\theta }})}, \qquad (1)
$$
with \(\mathcal {L}\) as the likelihood function. The maximum of \(\mathcal {L}\) is found at \(\hat{\varvec{\mu }}\) and \(\hat{\varvec{\theta }}\), the maximum likelihood (ML) estimators of the parameters, while the maximum for a given \(\varvec{\mu }\) is found at \(\hat{\hat{\varvec{\theta }}}\). By construction, \(\lambda (\varvec{\mu })\) is constrained between 0 and 1, and values of \(\lambda \) close to 1 indicate good agreement between the data and the hypothesized value of \(\varvec{\mu }\).
Direct dark matter searches most often take the hypothesis under test to be a signal model (generally the signal strength or cross section \(\sigma \)) at a single dark matter mass, M, and then 2D curves are constructed by computing significance and confidence intervals for each mass separately. In this strategy, known as a “raster scan”, only a single parameter of interest is constrained and therefore, \(\varvec{\mu } = \mu \) (as the name suggests, the procedure is typically repeated for different, fixed values of other signal parameters such as particle mass). An alternative 2D approach would be to constrain \(\sigma \) and M at the same time. As discussed in Ref. [17], the raster scan looks for the best region of \(\sigma \) at each mass M separately, while the 2D approach searches for a region around optimal values of \(\sigma \) and M. For the reasons laid out in Ref. [17] and in keeping with convention to date, we advocate following the raster scan approach in most of what follows, but we return to this question in Sect. 2.4.
Given \(\lambda ({\mu })\), one can define the test statistic

$$
t_{\mu } = -2 \ln \lambda (\mu ), \qquad (2)
$$
which is distributed between 0 and infinity. As originally shown in Ref. [18], Wilks' theorem states that the distribution of \(t_{\mu }\) approaches a chi-square distribution, with the number of degrees of freedom equal to the dimensionality of \(\varvec{\mu }\), in the asymptotic limit of infinite data. Several conditions must be fulfilled for Wilks' theorem to hold, including that the true values of all parameters lie in the interior of the parameter space, that the sample is sufficiently large, that all parameters are identifiable (no parameter may vary without the model changing), and that the hypotheses are nested [19].
The level of disagreement between the observed data and the hypothesis under test (a given value of \(\mu \)) is usually quantified via the p value. This corresponds to the probability of getting a value of \(t_\mu \) for a given \(\mu \) as large as, or larger than, the one observed:

$$
p_{\mu } = \int _{t_{\mu ,\mathrm {obs}}}^{\infty } f(t_{\mu } | \mu )\, \mathrm {d}t_{\mu }, \qquad (3)
$$
where \(f(t_{{\mu }} | {\mu })\) is the probability density function for \(t_{{\mu }}\).
In the case of dark matter, the sought-after signal can only increase the event count (i.e. \(\mu \) is defined to be non-negative). One can modify Eq. (2) to become

$$
\tilde{t}_{\mu } = \begin{cases} -2 \ln \dfrac{\mathcal {L}(\mu , \hat{\hat{\varvec{\theta }}}(\mu ))}{\mathcal {L}(0, \hat{\hat{\varvec{\theta }}}(0))} & \hat{\mu } < 0, \\ -2 \ln \dfrac{\mathcal {L}(\mu , \hat{\hat{\varvec{\theta }}}(\mu ))}{\mathcal {L}(\hat{\mu }, \hat{\varvec{\theta }})} & \hat{\mu } \ge 0, \end{cases} \qquad (4)
$$

which takes into account that for \(\hat{\mu }<0\), the constrained maximum likelihood estimator is \(\mu =0\). Note that here we follow the prescription in Ref. [15], which treats \(\hat{\mu }\) as an effective estimator that can take negative values even if the condition \(\mu \ge 0\) is required by the physical model.
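As an illustration, the test statistic of Eq. (4) can be evaluated in closed form for a simple Poisson counting experiment with a known background expectation and no nuisance parameters. The following Python sketch (with illustrative inputs, not tied to any experiment) clips the effective estimator \(\hat{\mu }\) at zero as described above:

```python
import numpy as np

def tilde_t_mu(mu, n_obs, b):
    """Two-sided test statistic of Eq. (4) for a Poisson counting
    experiment with known background expectation b (no nuisance parameters)."""
    def lnL(m):
        lam = m + b  # Poisson mean under signal strength m
        return n_obs * np.log(lam) - lam  # log-likelihood up to a constant

    mu_hat = n_obs - b             # unconstrained (effective) ML estimator
    mu_hat_eff = max(mu_hat, 0.0)  # for mu_hat < 0 the constrained maximum is at mu = 0
    return 2.0 * (lnL(mu_hat_eff) - lnL(mu))
```

For instance, with \(b=5\) expected background events and 10 observed, \(\tilde{t}_0 \approx 3.9\), while 5 observed events give \(\tilde{t}_0 = 0\) exactly, since the effective estimator then sits at the boundary.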
2.1 Discovery
The primary objective for direct detection experiments is to search for the presence of new signal processes. In this case, the background-only null hypothesis, \(H_0\), with \(\mu =0\), is the crucial hypothesis to test. With signals expected to lead to an excess of events over the background, a special case of the test statistic in Eq. (4) evaluated at \(\mu =0\), \(\tilde{t}_0\) (also called \(q_0\) in Ref. [15]), should be used to assess the compatibility between the data and \(H_0\):

$$
\tilde{t}_{0} = \begin{cases} -2 \ln \lambda (0) & \hat{\mu } \ge 0, \\ 0 & \hat{\mu } < 0. \end{cases} \qquad (5)
$$
The level of disagreement with the background-only hypothesis is computed as

$$
p_0 = \int _{\tilde{t}_{0,\mathrm {obs}}}^{\infty } f(\tilde{t}_{0} | 0)\, \mathrm {d}\tilde{t}_{0}, \qquad (6)
$$
where \(f(\tilde{t}_{0} | 0)\) is the probability distribution of \(\tilde{t}_0\) under the assumption of the background-only hypothesis, \(\mu =0\). The background-only hypothesis is rejected if \(p_0\) falls below a predefined value, indicating that the data are not compatible with the no-signal hypothesis.
2.1.1 Discovery claims
As is conventional in particle physics, this p value can be expressed as a discovery significance Z, the number of standard deviations above the mean of a standard Gaussian that gives an upper-tail probability equal to \(p_{0}\): \(Z = \Phi ^{-1}(1-p_{0})\), where \(\Phi ^{-1}\) is the inverse of the cumulative distribution function of the standard Gaussian distribution. In this formulation, a \(3\sigma \) significance corresponds to a p value of \(p_{3\sigma }=1.4\times 10^{-3}\) and a \(5\sigma \) significance to \(p_{5\sigma }=2.9\times 10^{-7}\).
Following the convention in particle physics, we recommend that a global p value smaller than \(p_{3\sigma }\) be required for a claim of “evidence.” Section 2.1.2 details the difference between the global p value, which takes into account the effect of searching for several signals, and the local p value, \(p_0\), which is computed only with reference to a fixed signal model. We recommend always reporting the smallest observed \(p_0\) regardless of the presence or absence of any claim. Lastly, we recommend making available in supplementary material or upon request a plot of \(p_0\) as a function of particle mass. We do not make a recommendation regarding the level of significance needed to claim “discovery.”
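The conversion between a p value and a Gaussian significance quoted above can be checked in a few lines of Python using SciPy (a generic sketch, independent of any particular experiment):

```python
from scipy.stats import norm

def p_to_z(p0):
    """Significance Z = Phi^{-1}(1 - p0): upper-tail quantile of a standard Gaussian."""
    return norm.isf(p0)

def z_to_p(z):
    """Upper-tail probability of a standard Gaussian at z sigma."""
    return norm.sf(z)

# z_to_p(3) ~ 1.35e-3 and z_to_p(5) ~ 2.87e-7, matching the
# 3 sigma and 5 sigma thresholds quoted in the text.
```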
2.1.2 Look elsewhere effect
The “look elsewhere effect” (LEE) is a well-known phenomenon in searches for new physics [20] where, if testing the null hypothesis on the same set of data with respect to multiple alternatives, such as signal hypotheses featuring differing particle masses, the p value needs to be corrected to account for the fact that a statistical fluctuation might be observed for any of the possible signal hypotheses. Failing to account for the LEE can lead to an overestimate of the apparent significance of a result. The size of the effect can be quantified by calculating the trial factor, the ratio of the global p value for observing an excess for any of the signal hypotheses to the local p value for observing an excess in a particular region. The size of this effect depends on the number of alternative models to the null hypothesis tested and the ability of the analysis to distinguish between them – high-resolution peak searches will feature a large trial factor, while a counting experiment that cannot distinguish which signal model produces an excess will have a trial factor of 1. Therefore, the class of hypotheses considered when applying this correction should be included in reporting results.
The LEE has not historically been evaluated in direct dark matter searches, except for a recent XENON1T publication [21]. To test the necessity of the LEE for dark matter searches, we follow the prescription of Ref. [22]. Toy Monte Carlo (MC) data sets are generated, and the discovery significance for every candidate mass M is computed for each data set. The smallest local p value for each toy data set, \(p = \min _M(p_0(M))\), is recorded to estimate a “probability distribution of smallest local p values,” called f(p). The global significance \(p_{\text {data}}^{\text {global}}\) for an observed excess with the smallest p value, \(p_{\text {data}}\), is then:

$$
p_{\text {data}}^{\text {global}} = \int _{0}^{p_{\text {data}}} f(p)\, \mathrm {d}p. \qquad (7)
$$
Figure 1 shows the LEE evaluated for various types of searches in a simplified model of a LXe-TPC, demonstrating that the LEE can be significant when considering a range of dark matter masses and detector resolution typical for LXe-TPC searches. For searches restricted to masses above about 40 \({\mathrm {GeV}/c^2}\), the LEE is less important as the predicted recoil spectra are almost degenerate in the observable space. A search for monoenergetic peaks (such as an axion search), which effectively scans a range of statistically independent regions, leads to global p values that are an order of magnitude greater than the minimum local p value. Given the large computational cost associated with calculating the LEE, we propose that it be accounted for only if a local excess approaching or exceeding 3\(\sigma \) is observed. If the computation needed to reach the relevant significance is infeasible, alternative methods may be deployed if they can be shown to correctly account for the size of the effect (see for instance Ref. [20]).
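The toy-MC procedure of Ref. [22] described above can be sketched as follows. In this Python illustration, the number of mass hypotheses and the uniform local p values are simplifying assumptions, standing in for a full per-mass PLR calculation:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def global_p(p_data, min_local_p_toys):
    """Global p value of Eq. (7): the fraction of background-only toys whose
    smallest local p value is as small as, or smaller than, the observed one."""
    return np.mean(np.asarray(min_local_p_toys) <= p_data)

# Illustration: 10 effectively independent mass hypotheses, so each toy's
# local p values are i.i.d. uniform and their minimum sets the trial factor.
n_toys, n_masses = 100_000, 10
min_p_toys = rng.uniform(size=(n_toys, n_masses)).min(axis=1)

# For an observed p_data = 0.01, the global p value comes out close to
# 1 - (1 - 0.01)**10 ~ 0.096, i.e. roughly a factor-10 trial factor.
```

In a realistic search the masses are correlated through the detector response, so the trial factor is smaller than this independent-bin idealization; the toy-MC estimate handles those correlations automatically.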
2.2 Limit setting
Confidence intervals and upper limits may be constructed via repeated hypothesis testing of a range of \(\mu \) values, setting the endpoints of the interval (which may be one- or two-sided) at the critical point such that the p value is equal to a predetermined value \(\alpha \), also called “the size of the test,” or equivalently that the confidence level (CL) is \(1-\alpha \). Deciding which possible observations should be included in the confidence band for a certain true parameter value is referred to as choosing an “ordering parameter,” a test statistic with which to compute p values that are used to define the confidence interval. This test statistic may or may not be the same test statistic that is used to compute discovery significance. Using the log-likelihood ratio as the test statistic to define the confidence interval yields the “unified” or Feldman–Cousins intervals [23], or, if there are nuisance parameters that are profiled over, the “profile construction” [16, 24]. In this way, for a two-sided interval \([\mu _1, \mu _2]\),

$$
P(\mu _1 \le \mu _{\mathrm {true}} \le \mu _2) = 1 - \alpha . \qquad (8)
$$
Here, the interval endpoints \(\mu _1\) and \(\mu _2\) are random variables that depend on the experiment, with \(\mu _{\mathrm {true}}\) the true, unknown value of the parameter of interest. A confidence interval method that fulfills Eq. (8) for all possible signal hypotheses is said to have coverage – a fraction \((1-\alpha )\) of the confidence intervals would contain the true value over repeated experiments. For direct detection of dark matter, \(\alpha \) is most often 0.1, leading to \(90\%\) confidence levels, although a value of 0.05 (\(95\%\) CL) is sometimes used.
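The repeated hypothesis testing that realizes Eq. (8) can be sketched for a Poisson counting experiment with known background. This is a simplified illustration (no nuisance parameters; the \(\mu \) grid and toy statistics are arbitrary choices), not any collaboration's implementation:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def lnL(mu, n, b):
    """Poisson log-likelihood up to a constant, with mean mu + b."""
    return n * np.log(mu + b) - (mu + b)

def t_tilde(mu, n, b):
    """Two-sided test statistic of Eq. (4), no nuisance parameters."""
    mu_hat_eff = max(n - b, 0.0)  # effective estimator clipped at the boundary
    return 2.0 * (lnL(mu_hat_eff, n, b) - lnL(mu, n, b))

def unified_interval(n_obs, b, alpha=0.1, n_toys=2000):
    """Keep every mu whose toy-MC p value is at least alpha (90% CL by default)."""
    kept = []
    for mu in np.linspace(0.0, 25.0, 126):
        t_obs = t_tilde(mu, n_obs, b)
        t_toys = np.array([t_tilde(mu, n, b) for n in rng.poisson(mu + b, n_toys)])
        if np.mean(t_toys >= t_obs) >= alpha:
            kept.append(mu)
    return min(kept), max(kept)
```

For example, 3 observed events on an expected background of 3 yield a lower edge of 0 and an upper edge in the vicinity of the Feldman–Cousins value of about 4.4, up to grid spacing and toy-MC fluctuations.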
The two-sided test statistic most often used in dark matter limit setting is \({\tilde{t}}_\mu \) of Eq. (4). An alternative one-sided test statistic is:

$$
q_{\mu } = \begin{cases} -2 \ln \lambda (\mu ) & \hat{\mu } \le \mu , \\ 0 & \hat{\mu } > \mu , \end{cases} \qquad (9)
$$
where \(\lambda (\mu )\) is the profile likelihood ratio defined in Eq. (1). With this definition, only the case \(\hat{\mu } \le \mu \) is regarded as incompatible with the null hypothesis (see Ref. [15] for more details).
It is important to note that the choice of one-sided or two-sided test statistic can change the inferred result by a significant fraction for the same data set, as shown in Fig. 2. Different direct dark matter experiments have used either the one- or two-sided PLR test statistic in their science papers (see, for instance, Refs. [21, 25, 26]). Here, we recommend the two-sided construction of Eq. (4). This decision is motivated by the desire to use the same test statistic for limit setting as for discovery (recall that \(q_0\) of Eq. (5) is a limiting case of Eq. (4)), with the only difference being the size of the test. If an excess is observed, the two-sided interval will naturally “lift off” from a value of \(\mu =0\), rejecting cases where \(\mu \) is too small, while results compatible with the background still yield an upper limit. Using a single, unified Neyman construction [23] that provides both these results as a function of the data avoids the potential of an experiment flip-flopping between several constructions. Equation (4) corresponds to the profile construction described by the Particle Data Group [16], and, in the absence of nuisance parameters, is equivalent to that of Feldman and Cousins [15, 23]. The cost of choosing the two-sided construction is a marginally weaker upper limit (see Fig. 2). We argue that this is acceptable if the recommendation is widely adopted among dark matter collaborations, as no “unfair” advantage in the apparent limit can be gained by switching from two-sided to one-sided. We also note that assessing the viability of a particular physics model in light of a published upper limit is subject to hidden uncertainties that dominate the difference between the two test statistics; in any case, such assessments should always be undertaken with caution.
We recommend the use of MC techniques to construct the test-statistic distributions (see Sect. 2.3), as opposed to assuming that these distributions follow an asymptotic approximation. We also recommend performing coverage checks to show that the actual coverage of the hypothesis test is similar to the nominal confidence level, including when the true values of nuisance parameters differ from those assumed in the construction of the confidence interval; in the presence of nuisance parameters, coverage is not guaranteed, but practice has shown that the profile construction generally provides correct coverage. In Ref. [21], the coverage was checked with MC simulations assuming a different true nuisance parameter value than that assumed for the profile construction, investigating the robustness of the method to errors in the estimated nuisance parameters.
Because limits are commonly set at \(90\%\) CL, in the two-sided construction it is not unlikely that a data set will result in a non-zero lower limit on the parameter of interest despite not satisfying the requirement that the statistical significance is at least 3\(\sigma \) to claim evidence of a positive signal. This is a natural consequence of frequentist hypothesis testing. As an example of such a case, the top panel of Fig. 3 shows \(90\%\) CL upper and lower limits from a hypothetical background-only experiment. Because \(\alpha =0.10\), the backgrounds will fluctuate to give a lower bound in \(10\%\) of cases. The lower panel presents the p value versus WIMP mass, to show that these data do not approach a 3\(\sigma \) significance.
For a case like this, we recommend that collaborations should decide in advance on a significance threshold for reporting of a lower limit; for example, in the recent XENON1T publication [7], if a result was less than 3\(\sigma \) significant, no lower limit would be shown. As stated previously, we recommend publication of the smallest observed p value for the background-only hypothesis in addition to an upper limit in all cases, even if that p value is not significant. We also recommend collaborations publish the expected sensitivity of a result by showing a median expected limit with an uncertainty band (often called the “Brazil band”).
2.2.1 Cases with limited power
Sometimes confidence interval constructions may yield upper limits corresponding to signals much smaller than the ones to which the detector has any appreciable sensitivity or discovery power. In the case of an upper-limit-only construction, this is purely an effect of the requirement to not cover even arbitrarily small signals a fraction \(\alpha \) of the time. As an example, Fig. 4 shows in gray an expected distribution of upper limits from the XENON1T experiment, with the distribution of upper limits extending to signal expectations of less than 2 events due to downward fluctuations of the background.
A number of alternatives have been developed to address this concern [3, 27,28,29], and LUX [26], PandaX-II [25], and XENON1T [7] have all at times applied the “power-constrained limit” (PCL) of Ref. [3] to their upper limits, while the LHC community settled on the CLs construction of Refs. [27, 28]. Either of these constructions will cause overcoverage at very low signals, illustrated for example in Fig. 11 of Ref. [21].
Here, we recommend applying the power constraint to limits obtained following Sect. 2.2. We choose the PCL over the alternatives for its conceptual simplicity and because CLs overcovers up to higher signal quantiles. The principle behind the PCL is to use the power of the experiment, \(\pi (\mu )\), defined as the probability of rejecting the null, background-only hypothesis in the presence of a signal, as the metric to decide on the smallest signal that an experiment will exclude. Setting a minimal discovery power, \(\pi _\mathrm {crit}=\pi (\mu _\mathrm {crit})\), defines a minimal signal \(\mu _\mathrm {crit}\). Upper limits that would otherwise fall below this threshold are then set to \(\mu _\mathrm {crit}\).
The publication of Ref. [3] led to vigorous discussion on potential limitations of PCL in the literature and at various Phystat workshops, for example Ref. [29]. A significant concern was whether increasing systematic uncertainties could lead to more stringent limits for certain choices of \(\pi _\mathrm {crit}\) away from the median. For our purposes, an increase in systematic uncertainty not only widens the expected sensitivity bands of the limit but also raises (makes less sensitive) the median limit. The result is that both the median limit and, for example, the \(-1\sigma \) band move up when a systematic is increased, and conservatism is maintained. For this reason, we feel comfortable moving forward with the PCL.
For an upper-limit-only construction, the PCL method gives a coverage of 1 for \(\mu <\mu _\mathrm {crit}\), and \(1-\alpha \) otherwise, where \(\alpha \) is the size of the test. Reference [3] includes an example using a Gaussian measurement, and shows that an intuitive rule of thumb exists in this case: for \(\alpha =0.05\) and a power threshold of \(\pi _\mathrm {crit}=\Phi (-1)\approx 0.159\), \(\mu _\mathrm {crit}\) corresponds to the \(-1\sigma \) line of the sensitivity plot.
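The Gaussian rule of thumb above can be verified directly. In the sketch below (Python, unit-variance Gaussian measurement \(x\), one-sided upper limits \(x + 1.645\) at 95% CL), the signal with exclusion power 0.159 coincides with the 15.9th percentile of background-only upper limits:

```python
from scipy.stats import norm

alpha = 0.05
z = norm.isf(alpha)  # 1.645: one-sided 95% CL upper limit on the mean is x + z

def power(mu):
    """Probability of excluding signal mu when the truth is background only,
    i.e. P(x + z < mu) for x ~ N(0, 1)."""
    return norm.cdf(mu - z)

# mu_crit solves power(mu) = 0.159 ~ Phi(-1), giving mu_crit ~ z - 1.
mu_crit = z + norm.ppf(0.159)

# The 15.9th percentile of background-only upper limits x + z is the same
# quantity, so mu_crit sits on the -1 sigma line of the sensitivity band.
```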
The power threshold used in the power constraint is a fiducial choice in the analysis. A more conservative analysis might choose a higher threshold, such as the first LUX analysis [4], which demanded \(\pi _\text {crit}=0.5\). However, given the large random variation in results of rare event searches (about a factor of 2 around the median upper limit; see the difference between the median and \(1\sigma \) lines in green in Fig. 4), this choice would somewhat arbitrarily limit the ability of experiments to constrain a considerable swath of parameter space. The most recent publications by LUX, PandaX-II and XENON1T used a power threshold of \(\pi _\mathrm {crit}=\Phi (-1)\approx 0.159\) to maximize sensitivity while preserving the original purpose of the power constraint.
For unified intervals, the intuitive properties of one-sided intervals are not exactly retained. For a Gaussian, with mean bounded to be non-negative with \(\alpha =0.1\), the power that corresponds to the 15.9th percentile of upper limits is \(\pi _\mathrm {crit}=0.32\), as shown in Fig. 5. The overcoverage varies with the signal, but does not exceed two percentage points for the Gaussian case. We therefore recommend using \(\pi _\mathrm {crit}=0.32\) when using PCL in conjunction with the test statistic of Eq. (4).
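Applying the constraint itself is a simple clipping step. In the Python sketch below (illustrative numbers), \(\mu _\mathrm {crit}\) is estimated as a quantile of background-only toy-MC upper limits, which reproduces the Gaussian rule of thumb; in general \(\mu _\mathrm {crit}\) should be computed from the power \(\pi (\mu )\) directly, using the appropriate \(\pi _\mathrm {crit}\) for the chosen test statistic:

```python
import numpy as np

def power_constrained_limit(observed_limit, toy_limits, quantile=0.159):
    """Clip an upper limit at mu_crit so that reported limits never fall
    below the smallest signal the experiment has adequate power to exclude.
    Here mu_crit is taken as a quantile of background-only toy-MC upper
    limits (the -1 sigma line of the 'Brazil band')."""
    mu_crit = np.quantile(toy_limits, quantile)
    return max(observed_limit, mu_crit)

# Illustration with a synthetic spread of toy limits: a strong downward
# fluctuation is clipped up to mu_crit, while limits above it pass through.
toy_limits = np.linspace(1.0, 100.0, 1000)
```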
2.3 Asymptotic approximations
Asymptotic formulae for test statistic distributions exist in the limit of infinite data [15], and using the asymptotic approximation is a reasonable decision to save on computing time. In many cases, the approach to the asymptotic limit can be swift; for example, a counting experiment will reasonably approach the asymptotic result even for moderate expectation values (\(\sim \)5 events for \(\alpha =0.1\)). However, given the large background discrimination power in direct detection experiments, even results with hundreds of events may not converge to the asymptotic case because the expectation value in the signal region after discrimination is \(\mathcal {O}(1)\) or less. Figure 6 shows the distribution of a test statistic (solid colors) compared to an asymptotic approximation (dashed black) as the signal size increases for a simplified but representative simulation of a 1000 \({\mathrm {GeV}/c^2}\) dark matter search, similar to what is shown in Ref. [7]. For small numbers of signal events (darker colors), the asymptotic result poorly approximates the true test statistic distribution which is needed to compute discovery significances and confidence intervals.
If, as is often the case, toy MC simulations are used to estimate the distribution of the test statistic, a very significant result may require commensurately significant computational power to generate, for instance, the \(> 10^7\) toy simulations needed to characterize a \(5\sigma \) result. Nevertheless, we recommend that any usage of the asymptotic approximation be supported by MC studies to show its validity. In general, we recommend that sensitivity be calculated directly using adequate simulation, with the MC studies cross-checked against variations of the nuisance parameters within their uncertainties. If a set procedure for this computation is in place, the actual simulation may need only be performed in the case that a highly significant result is seen. In the absence of adequate computing power to do full MC studies, arguments must be presented to justify whatever alternative methods are deployed.
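The recommended MC validation can be illustrated for a counting experiment: asymptotically, \(\tilde{t}_0\) follows a half-chi-square distribution with one degree of freedom under the background-only hypothesis, which is accurate for large backgrounds but fails when the expectation in the signal region is \(\mathcal {O}(1)\). A Python sketch with illustrative background values:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(seed=2)

def t0(n, b):
    """Discovery test statistic of Eq. (5) for a Poisson count with background b."""
    if n <= b:
        return 0.0
    return 2.0 * (n * np.log(n / b) - (n - b))

def p0_asymptotic(t_obs):
    """Asymptotic p value: t~_0 follows a half-chi2 with 1 degree of freedom."""
    return 0.5 * chi2.sf(t_obs, df=1)

def p0_toy(t_obs, b, n_toys=200_000):
    """p value from background-only toy MC."""
    t_toys = np.array([t0(n, b) for n in rng.poisson(b, n_toys)])
    return np.mean(t_toys >= t_obs)

# With b = 0.5 and 3 observed events, the asymptotic p value (~0.008)
# understates the toy-MC one (~0.014); for large b the two agree closely.
```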
2.4 Contours in the event of discovery
In the discussion so far, we have assumed the hypothesis under test to be a signal model at a single dark matter mass. For excesses that approach discovery significance, however, collaborations may wish to perform parameter estimation of both the mass M and cross section \(\sigma \) in a vector-like parameter of interest \(\varvec{\mu }\) to form a 2D confidence contour. In such a case, we recommend that collaborations set a significance threshold, including the LEE, before completing the analysis and removing any bias mitigation steps (see Sect. 2.6), to determine whether a mass-cross-section contour should be included in a publication. The pre-determined threshold should be set high enough that flip-flopping between the per-mass cross sections and the 2D contour would introduce minimal bias (most likely satisfied by the requirement for a significant excess in the first place). Even if a 2D contour is reported, the per-mass confidence limit should still be included.
2.5 Modeling backgrounds and detector response
One of the requirements of the PLR method is that the model of detector response and backgrounds is correct. Modelling these, and the validation thereof, is highly detector-specific and outside the scope of this paper. However, we believe it is essential for experiments to satisfactorily demonstrate goodness-of-fit for their background and detector models in order to properly utilize the methods presented here. This includes setting criteria for background model acceptance prior to an analysis, and clear presentation of those criteria in any eventual publication. One example of a goodness-of-fit criterion is found in a recent XENON publication, which required a background model p value \(\ge 0.05\) in a validation region in order to search for DM and solar \(^8\)B neutrino events in the data [30]. Whenever possible, models should be validated both on calibration data and on side-bands, and the goodness-of-fit of the best-fit model should be computed. The power of the goodness-of-fit test to detect impactful deviations from the assumed model should ideally be investigated. Uncertainties in the background model, when quantifiable, should be incorporated directly into the likelihood function as nuisance parameters and acknowledged as such in any publications.
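A p-value-based acceptance criterion like the one cited above can be sketched with a Pearson chi-square test on binned counts (Python; this is a generic illustration, not the test used in Ref. [30], and for low-count bins the distribution of the statistic should itself be calibrated with toy MC rather than assumed chi-square):

```python
import numpy as np
from scipy.stats import chi2

def gof_p_value(observed, expected):
    """Pearson chi-square goodness-of-fit p value for binned counts,
    treating the expected counts as fixed (no fitted parameters, so the
    number of degrees of freedom equals the number of bins)."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    stat = np.sum((observed - expected) ** 2 / expected)
    return chi2.sf(stat, df=len(observed))

# A background model would pass a validation criterion such as the one
# quoted in the text if gof_p_value(data, model) >= 0.05.
```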
2.6 Experimenter bias mitigation
Experimenter bias is an effect which can, in general, drive a reported, measured value of a quantity away from its true value. Bias can arise when the choices that the analyzer makes regarding cuts and cut thresholds, analysis methods, and when to stop searching for errors in an analysis are influenced by the quantitative result of the analysis. Numerous examples in the historical physics literature have been identified in which new measurements of a physical quantity appear to be scattered around previous measurements, instead of being scattered around what we currently accept as the true values of those quantities [31].
Methods to control for experimenter bias share a common approach: all choices an analyzer makes are taken without the analyzer knowing what effect those choices have on the final result. In the case of DM experiments, four approaches have been employed, listed below. We make no specific recommendations regarding bias mitigation, and leave such choices to the authors of a given result.
- Signal blinding: A plot is generally made in which observed events fall into various regions of parameter space characterized as more or less signal-like. Often, DM experiments plot an electronic/nuclear recoil discriminant versus energy, and the low-energy “nuclear-recoil-like” area of the plot is considered to be the signal region. In signal blinding, this region is masked for science data, but not for calibration data. Only after all details of the analysis are frozen is this mask removed. In this way, analysis details cannot be tuned based on the number of DM-like events that were observed. The benefit of this type of bias mitigation strategy is that it is robust and simple to implement. The drawback is that rare backgrounds might exist in the data which will not be discovered until after the mask is removed. Many examples of DM searches using signal blinding exist in the literature, including Refs. [7, 32,33,34,35,36].
-
Veto blinding A rare-event search such as a DM experiment will often entail the use of veto signals, which can identify when an event definitely does not result from the process under study. Examples of such veto signals are ones which can tag cosmic rays in nearby materials, or acoustic sensors which can tag alpha decays in bubble chambers. If such a veto signal uniquely identifies background signals, one can choose to blind analyzers to that signal until all analysis details are finalized. This provides analyzers a view of the signal region, but they are not able to know which events are signal and which are not. The benefit of this type of approach is that analyzers have the opportunity to discover rare backgrounds because the signal region is viewable. The drawback is that the background signals vetoed by such a tag may often not look quite like true signals, and therefore this technique may not be viable for some experiments. Examples of this technique in use can be found in Refs. [37, 38].
-
Salting An approach similar to that of veto blinding, salting is a technique where fake signal events are injected into the data stream. Analyzers may explore the signal region, but the identity, quantity, and distribution of these fake, injected events are kept blind to the analyzers. The identities of the fake events are revealed only after the details of the analysis are finalized. In this way, like veto blinding, this technique provides the benefit of allowing the analyzer to identify rare backgrounds while being ignorant of the effect that analysis details have on the signal result. The drawback to this approach is that it can be difficult to generate a collection of fake signal events that are believable. The LIGO experiment has been able to inject fake gravitational waves by the use of hardware actuators [39]; the LUX experiment constructed fake signal events from a sequestered calibration data set [26].
-
Data scrambling An experiment may randomly smear data so that data in a control region and data in the signal region are randomly mixed. As an example, ANTARES introduced a random time-offset to each event when searching for neutrinos from dark matter in the Sun [40] – without removing this offset, it was impossible to determine whether an event came from the Sun or another location in the sky. Similarly to salting, this allows all real events to be scrutinised before unblinding.
While one may, and often should, take steps to control for experimenter bias, it is important to note that this is not the only effect which can adversely influence the results of an analysis. A holistic view, in which all systematic features are considered, is warranted.
3 Astrophysical models
3.1 WIMP signal model: Standard Halo Model
The flux of WIMPs passing through the Earth is a necessary ingredient in the signal model for a WIMP hypothesis. Their galactic-frame velocity distribution, \(f(\mathbf {v}_\text {gal})\), is usually assumed to be an isotropic Maxwell–Boltzmann distribution whose velocity dispersion \(\sigma _0\) is defined by the local standard of rest at the location of the Sun, \(|\mathbf {v}_0|=\sqrt{2}\sigma _0\). The velocity of the lab frame relative to the galactic rest frame is the sum of \(\mathbf {v}_0\), the Sun’s peculiar velocity, \(\mathbf {v}_{\odot ,\text {pec}}\), and the Earth’s velocity relative to the Sun, \(\mathbf {v}_\oplus \). Requiring that dark matter be gravitationally bound in the galaxy imposes an additional cut-off at the galactic escape speed, \(v_\text {esc}\). These assumptions result in a galactic-frame velocity distribution,
\( f(\mathbf {v}_\text {gal}) = \frac{\rho _\chi }{m_\chi }\,\frac{1}{N_\text {esc}\,(\pi v_0^2)^{3/2}}\,\exp \left( -\frac{|\mathbf {v}_\text {gal}|^2}{v_0^2}\right) \,\Theta \left( v_\text {esc}-|\mathbf {v}_\text {gal}|\right) , \qquad \mathbf {v}_\text {gal} = \mathbf {v}_\text {lab}+\mathbf {v}_0+\mathbf {v}_{\odot ,\text {pec}}+\mathbf {v}_\oplus , \) (10)
where \(\rho _\chi \) and \(m_\chi \) are the local WIMP density and the WIMP mass, respectively, \(\mathbf {v}_\text {lab}\) is the lab-frame WIMP velocity, \(N_\text {esc}\) is the normalization constant induced by the escape-speed truncation, and \(\Theta (x)\) is the Heaviside step function. This “Standard Halo Model” (SHM) speed distribution is illustrated in the lab-frame in Fig. 7.
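A minimal Monte Carlo sketch of the SHM velocity distribution described above can be used to obtain the lab-frame speed distribution. The parameter values follow Table 1, while the fixed lab-frame boost of 250 km/s (standing in for \(|\mathbf {v}_0+\mathbf {v}_{\odot ,\text {pec}}+\mathbf {v}_\oplus |\)) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# SHM parameters in km/s (Table 1 values; VLAB is an illustrative boost)
V0 = 238.0     # most probable galactic-frame speed, |v_0| = sqrt(2)*sigma_0
VESC = 544.0   # galactic escape speed
VLAB = 250.0   # assumed |v_0 + v_pec + v_earth| for this sketch

def sample_galactic_velocities(n):
    """Rejection-sample the escape-speed-truncated Maxwell-Boltzmann."""
    out = np.empty((0, 3))
    while len(out) < n:
        v = rng.normal(0.0, V0 / np.sqrt(2.0), size=(n, 3))
        out = np.vstack([out, v[np.linalg.norm(v, axis=1) < VESC]])
    return out[:n]

# Boost to the lab frame (take the lab's galactic velocity along z):
v_gal = sample_galactic_velocities(200_000)
speeds = np.linalg.norm(v_gal - np.array([0.0, 0.0, VLAB]), axis=1)

# By construction no WIMP is faster than VESC + VLAB in the lab frame;
# the mean lab-frame speed is roughly 350-360 km/s for these parameters.
mean_speed = speeds.mean()
```

Histogramming `speeds` reproduces the qualitative shape of the lab-frame distribution shown in Fig. 7.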
The time-dependent velocity of the Earth relative to the Sun is calculated in Refs. [41, 42]. Defining the velocity vector as \((v_r,v_\phi ,v_\theta )\), with r pointing radially inward and \(\phi \) in the direction of the Milky Way’s rotation, this can be written as,
\( \mathbf {v}_\oplus (\Delta t) = v_\oplus \left[ \hat{\varepsilon }_1\cos (\omega \Delta t) + \hat{\varepsilon }_2\sin (\omega \Delta t)\right] , \quad \hat{\varepsilon }_1 = (0.9931,\ 0.1170,\ -0.01032), \quad \hat{\varepsilon }_2 = (-0.0670,\ 0.4927,\ -0.8676), \) (11)
where \(\omega ={0.0172}\ {\mathrm{d}^{-1}}\) and \(\Delta t\) is the number of days since March 22, 2018 (an arbitrary date, and the choice of year has little effect). The average speed of the Earth is \(v_\oplus = 29.8\) km/s, given in Table 1.
The variation of the lab-frame WIMP speed due to the time evolution of \(\mathbf {v}_\oplus \) is illustrated in Fig. 7. For most analyses not looking for annual modulation effects, it is sufficient to use the distribution averaged over the full year, which is approximately equivalent to the distribution evaluated on March 9.
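The size of the annual modulation can be sketched numerically. In the snippet below, the ecliptic unit vectors in \((v_r,v_\phi ,v_\theta )\) coordinates are those tabulated in Ref. [41] (quoted here as assumptions), and the remaining numbers follow the values recommended in Table 1.

```python
import numpy as np

OMEGA = 0.0172        # rad/day, Earth's orbital angular frequency
V_EARTH = 29.8        # km/s, average Earth speed relative to the Sun
# Ecliptic unit vectors in (v_r, v_phi, v_theta) galactic coordinates,
# as tabulated in Ref. [41] (values quoted here are assumptions):
E1 = np.array([0.9931, 0.1170, -0.01032])
E2 = np.array([-0.0670, 0.4927, -0.8676])

V_LSR = np.array([0.0, 238.0, 0.0])     # local standard of rest
V_PEC = np.array([11.1, 12.24, 7.25])   # solar peculiar velocity (assumed)

def v_earth(dt_days):
    """Earth velocity relative to the Sun; dt_days counts from March 22."""
    return V_EARTH * (E1 * np.cos(OMEGA * dt_days) + E2 * np.sin(OMEGA * dt_days))

days = np.arange(365)
lab_speed = np.array([np.linalg.norm(V_LSR + V_PEC + v_earth(d)) for d in days])

# The lab speed peaks in early June and dips in December, with a
# peak-to-peak modulation of roughly 30 km/s about ~250 km/s.
amplitude = lab_speed.max() - lab_speed.min()
```

The few-percent modulation of the lab speed is what drives the seasonal shift of the speed distribution shown in Fig. 7.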
With the exception of \(m_\chi \), the parameters in Eq. (10) constitute the SHM astrophysical parameters. Since the model used to describe the flux of WIMPs influences the exclusion curves that are drawn, a unified approach to excluding WIMPs requires a consistent treatment of these parameters. Recommended values for them are given in Table 1. The rationales for these parameter choices are discussed below.
Other authors have suggested updates to the SHM that differ from those presented here, including Refs. [43, 44] among others. While knowledge of the dark matter halo continues to improve, we recommend reporting results with respect to the nominal halo model, which gives a common point of comparison, unless new measurements significantly alter the expected spectra, particularly at high masses. Shape variations in the velocity distribution can have an appreciable effect on limits for WIMPs that produce signals near the energy threshold of an experiment, but are otherwise not expected to have a major effect. Changes that affect the signal normalization but not the shape of the signal distribution, such as variations in the local dark matter density, can easily be accounted for by scaling published limits.
The SHM WIMP speed distribution is illustrated in Fig. 7, where the effects of varying the SHM parameters over the range of values motivated by galactic survey analyses are shown. In general, these effects tend to be comparable to or much smaller than the variation of the speed distribution over the course of a year. The effects of varying these parameters on XENON1T’s limits [6] are explored in Ref. [49].
Recent observations call into question the adequacy of the SHM itself, as evidence for several kinematically distinctive substructures has emerged from studies of data released by the Gaia mission [50] and the Sloan Digital Sky Survey (SDSS) [51]. These substructures are likely the result of the Milky Way’s formation history and merger events with other galaxies, and may include the Gaia Sausage (or Gaia Enceladus) [44, 52,53,54,55], among several others [42, 56,57,58,59,60,61]. The effects of such substructures on direct detection experiments are demonstrated in Refs. [2, 62]. Additionally, N-body simulations of the Large Magellanic Cloud indicate that its passage through the Milky Way could have produced a significant fraction of the local dark matter above the galactic escape speed [63]. Due to these effects, quoted uncertainties on the SHM parameters do not accurately reflect the uncertainties in the dark matter halo, nor do they represent likelihood distributions that can be meaningfully profiled over. The authors of this document therefore suggest that these parameters be fixed to clearly stated values, so that they can be reinterpreted under varying halo models. We note that this is the approach followed by most collaborations in the field over the last decade.
Most of the values suggested in Table 1 are consistent with those already in common use for WIMP direct detection experiments [5, 64, 65]. The most significant change suggested here is an updated value of \(\mathbf {v}_0\). We emphasize here again that if these parameter values are adopted, the relevant references should always be cited.
3.1.1 Local dark matter density: \(\rho _\chi \)
Values for \(\rho _\chi \) vary significantly between different measurements, typically in the range 0.2 to 0.6 \({\mathrm{GeV}/\mathrm{c}^2/\mathrm{cm}^3}\). The range of past and proposed measurements is best described in Refs. [66, 67]. This parameter normalizes the overall flux, but does not affect the predicted velocity distribution or the resulting WIMP-nucleon recoil spectra; as such, the total number of WIMP events expected in a direct detection experiment scales directly with \(\rho _\chi \), and the net effect of changing its value is to scale exclusion curves inversely with \(\rho _\chi \). Interpreting current limits in terms of different values of this parameter is therefore trivial, and the recommended value is the one most commonly used in direct detection experiments, as suggested by Ref. [9].
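Since the expected rate scales as \(\rho _\chi \sigma \), reinterpreting a published cross-section limit for a different local density is a one-line rescaling. A sketch with hypothetical numbers:

```python
def rescale_limit(sigma_limit, rho_assumed, rho_new):
    """Reinterpret a cross-section limit for a different local DM density:
    the expected rate scales as rho * sigma, so the excluded cross
    section scales inversely with rho."""
    return sigma_limit * rho_assumed / rho_new

# Hypothetical example: a limit of 1e-46 cm^2 published assuming
# 0.3 GeV/c^2/cm^3, reinterpreted for a density of 0.4 GeV/c^2/cm^3:
sigma_new = rescale_limit(1e-46, 0.3, 0.4)   # 7.5e-47 cm^2
```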
3.1.2 Galactic escape speed: \(v_\text {esc}\)
The galactic escape speed was measured by the RAVE survey [68] and later improved with the additions of SDSS [51] and Gaia [50] data. Measurements of \(v_\text {esc}\) are summarized in Table 2. While some recent measurements seem to be trending towards somewhat lower values of \(v_\text {esc}\), the values in Table 2 are broadly consistent with each other and with a value around 550 km/s. This value is also consistent with the value estimated in Ref. [69], using the Gaia DR2 dataset. As such, the recommendation put forth in this document is to use \(v_\text {esc}={544} \mathrm{km/s}\) to maintain consistency with assumptions used for existing WIMP-nucleon cross section limits.
3.1.3 Average galactocentric Earth speed: \(v_\oplus \)
In this document, we support the use of
\( v_\oplus = 29.8\ \text {km/s}, \)
as suggested in Ref. [41], along with the accompanying time-evolving definition of \(\mathbf {v}_\oplus (t)\) approximately summarized in Eq. (11). This value of \(v_\oplus \) is consistent with the one suggested in Ref. [9].
3.1.4 Solar peculiar velocity: \(\mathbf {v}_{\odot ,\text {pec}}\)
The Sun’s peculiar velocity was determined in Ref. [46] by fitting data from the Geneva-Copenhagen Survey [77]. Based on this analysis, the authors of Ref. [46] derive a peculiar velocity of
\( \mathbf {v}_{\odot ,\text {pec}} = \left( 11.10^{+0.69}_{-0.75},\ 12.24^{+0.47}_{-0.47},\ 7.25^{+0.37}_{-0.36}\right) \ \text {km/s}, \)
with additional systematic uncertainties of (1, 2, 0.5) km/s. We support using this value in dark matter searches.
We note that the velocity in the galactic plane is faster than had been reported by previous measurements based on an analysis of the Hipparcos catalog [78], which reported a value of \(\mathbf {v}_{\odot ,\text {pec}} = (10.00,\ 5.25,\ 7.17)\) km/s [79]. The decision to support the more recent measurement over the older one is based on the arguments in Ref. [46].
3.1.5 Local standard of rest velocity: \(\mathbf {v}_0\)
In Ref. [80], the proper motion of Sagittarius A\(^*\) was measured to high precision, implying that the angular velocity of the Sun around the center of the galaxy is given by
\( \Omega _\odot = \frac{v_{\odot ,\text {pec},\phi } + v_0}{R_0}, \)
where \(v_{\odot ,\text {pec},\phi }\) and \(v_0\) give the components of \(\mathbf {v}_{\odot ,\text {pec}}\) and \(\mathbf {v}_0\) in the galactic plane (the \(\phi \) component), and \(R_0\) is the distance from the Sun to the galactic center.
Uncertainties in most previous estimates of \(\mathbf {v}_0\) were driven by uncertainties in \(R_0\). This distance was recently reported as \(R_0 = 8.178 \pm 0.013\,\text {(stat)} \pm 0.022\,\text {(sys)}\) kpc [48], implying \(v_{\odot ,\text {pec},\phi } + v_0 \approx 250\) km/s.
Combined with measurements of the Sun’s peculiar velocity, \(\mathbf {v}_{\odot ,\text {pec}}\), this velocity implies that the local standard of rest has a speed of \(238.0 \pm 1.5\) km/s. We note that this velocity is consistent with the independently measured circular speed of \(240 \pm 8\) km/s suggested in Ref. [81], and the value \(229 \pm 11\) km/s in Ref. [82], albeit with smaller uncertainties. Uncertainties in \(v_0\) are driven by uncertainties in the Sun’s peculiar velocity.
Previous limits on WIMP-nucleon cross sections used a value of 220 km/s [5, 7, 65], as suggested by Refs. [83,84,85,86], which quote an uncertainty around \({\pm 20}\) km/s. We recommend updating this parameter to 238 km/s; while this new value is within the uncertainty of the old one, the choice of this parameter and its smaller uncertainty can have a material impact on dark matter searches [87].
3.2 Astrophysical neutrinos
Astrophysical neutrinos are expected to be an important background for the next generation of direct detection experiments. There are several sources of neutrinos arriving at Earth [88], but not all of them are relevant for direct dark matter searches. In this section we outline the dominant neutrino background sources and make some recommendations that are pertinent to direct detection experiments.
Figure 8 shows the neutrino fluxes that populate the relevant energy range for direct detection experiments. Low-energy neutrinos from the pp and \(^{7}\)Be solar reactions give rise to neutrino-electron scattering, which can become a prominent source of low-energy electronic recoils. Nevertheless, the ultimate background might come from neutrino-induced nuclear recoils created by coherent neutrino-nucleus scattering, a process that has recently been confirmed experimentally by COHERENT [89]. For example, in a xenon target \(^{8}\)B and hep solar neutrinos can mimic a WIMP signal with a mass of approximately 6 GeV/c\(^2\), while atmospheric neutrinos and neutrinos from the diffuse supernova neutrino background (DSNB) will mimic a WIMP signal for masses above 10 GeV/c\(^2\). Next, we describe each of these neutrino sources separately.
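The kinematic overlap above can be checked with elementary two-body formulas. The sketch below (with an approximate xenon nuclear mass and an assumed maximum WIMP speed of 800 km/s, roughly \(v_\text {esc}+v_\text {lab}\)) shows that both the \(^{8}\)B coherent-scattering endpoint and the endpoint of a \(\sim \)6 GeV/c\(^2\) WIMP on xenon land near a few keV.

```python
M_XE = 1.22e8   # xenon nuclear mass in keV/c^2 (~131 u, approximate)

def neutrino_max_recoil_keV(e_nu_keV, m_nucleus_keV):
    """Endpoint of the coherent neutrino-nucleus recoil spectrum,
    E_R^max = 2 E_nu^2 / (m_N + 2 E_nu)."""
    return 2.0 * e_nu_keV**2 / (m_nucleus_keV + 2.0 * e_nu_keV)

def wimp_max_recoil_keV(m_chi_keV, m_nucleus_keV, v_max_over_c):
    """Endpoint of the elastic WIMP-nucleus recoil spectrum,
    E_R^max = 2 mu^2 v_max^2 / m_N, with mu the reduced mass."""
    mu = m_chi_keV * m_nucleus_keV / (m_chi_keV + m_nucleus_keV)
    return 2.0 * mu**2 * v_max_over_c**2 / m_nucleus_keV

er_nu = neutrino_max_recoil_keV(16e3, M_XE)              # ^8B endpoint, ~4 keV
er_wimp = wimp_max_recoil_keV(6e6, M_XE, 800.0 / 3.0e5)  # ~6 GeV WIMP, ~4 keV
# The two endpoints nearly coincide, which is why the spectra overlap.
```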
3.2.1 Solar neutrinos
Our current understanding of the processes happening inside the Sun is best summarised by the Standard Solar Model (SSM), which originated more than three decades ago [90]. According to the SSM, the Sun produces its energy by fusing protons into \(^{4}\)He via the pp chain (\(\sim \)99%) and the CNO cycle (\(\sim \)1%). The SSM has been under constant revision since then, as more precise measurements and calculations of the solar surface composition and nuclear reaction rates become available. However, when modelling the solar interior based on a new generation of solar abundances [91], the recent SSMs have failed to reproduce helioseismology data [92,93,94,95]; this is the so-called “solar abundance problem”.
The new generation of solar models are usually classified as high-Z and low-Z models, which reflect their different assumptions on the solar metallicity (Z). In the present work we adopt the most recent solar models developed in Ref. [96]. Figure 8 shows the main contributions from the pp chain and CNO cycle, and the overall normalization values are shown in Table 3 for a high-Z (B16-GS98) and a low-Z (B16-AGSS09met) model, respectively. There is also a neutrino component arising from electron capture on the \(^{13}\)N, \(^{15}\)O and \(^{17}\)F nuclei [97, 98], but their expected fluxes are very low. Since the CNO cycle has a strong dependence on the assumed metallicity, high-Z and low-Z models will predict different CNO neutrino fluxes. Also, a low-metallicity Sun will have a moderately cooler core, lowering the expected flux from the most temperature-sensitive neutrinos, such as those from the \(^{8}\)B and \(^{7}\)Be reactions [99, 100]. Note that more precise measurements of the neutrino fluxes, by solar neutrino experiments or a next-generation liquid noble detector, will be crucial to resolve the solar abundance problem.
The photon luminosity of the Sun has been measured to a precision of better than 1% [101]. The solar output is distributed into photon and neutrino channels, which introduces a direct constraint on the neutrino fluxes based on the measurement of the photon luminosity, most commonly known as the “luminosity constraint” [102]. Since the pp and \(^{7}\)Be reactions are dominant, their neutrino fluxes will also be dominant and their predicted uncertainty must be small to satisfy this constraint [103, 104]. The CNO fluxes are also affected by this constraint, but on a smaller scale. For a recent discussion on this topic, see Ref. [105].
Experimental measurements are also indicated in the last column of Table 3. These measurements are not entirely model-independent, and correlations between CNO and pp chain neutrinos must be taken into account. There are two notable exceptions: the measurements of the \(^{8}\)B and \(^{7}\)Be neutrino fluxes. In the former case, the SNO experiment observed \(^{8}\)B neutrinos via three different reactions: neutral current (NC), charged current (CC), and elastic scattering (ES) [107]. Due to this favourable situation, the only theoretical input required for this analysis was the shape of the \(^{8}\)B energy spectrum, with the overall normalization being constrained by the NC measurement. In the latter case, the end-point energy of \(^{7}\)Be is well separated from all the known backgrounds and other neutrino signals, allowing for a measurement of this flux with an uncertainty below 3% [106].
If a direct detection experiment were to take only the experimental values from Table 3, there would be a risk of adopting some measurements with overly large uncertainties, which are mainly driven by detector-specific effects. This could potentially be controlled by performing a global analysis that includes the likelihood from each of these neutrino experiments, although this might prove to be impractical. Similarly, the predictions from the solar models also present some problems and, as mentioned above, there is currently no fully consistent solar model. Taking all this information into account, we recommend using the solar neutrino predictions described in Ref. [96], except for the \(^{8}\)B and \(^{7}\)Be fluxes, for which we recommend adopting the experimental values due to their small uncertainty and independence from other neutrino signals. We believe this choice will provide the best sensitivity for direct detection experiments, while using a reasonable collection of flux uncertainties.
Furthermore, there are a few important ingredients that need to be taken into account when converting a neutrino flux into a recoil rate: neutrino oscillations [110, 111], the choice of form factor [112, 113], electron binding effects [114], and electroweak uncertainties [16], to name the main ones. We leave the particular considerations for each of these factors at the discretion of each collaboration. Also, we recommend using the prediction from the high-Z model presented in Table 3, except for those cases in which the difference in the expected event count between the two models is sufficiently large, in which case we recommend that the predictions from both models be reported. The level at which this difference is considered important is also left at the discretion of each collaboration, but the crucial point is that this comparison should be made in terms of expected counts at the detector under consideration.
3.2.2 Atmospheric neutrinos
Atmospheric neutrinos arise from the collision of cosmic rays in the atmosphere and the subsequent decay of mesons and muons. This neutrino flux spans a wide range of energies; while the high-energy region (\(>1\) GeV) has been well studied, the low-energy region, which is the most relevant for dark matter searches, remains largely unexplored. Currently, the best predictions for the atmospheric neutrino flux in the sub-GeV regime are based on the 2005 FLUKA simulations [115]. The sum of the predicted electron, anti-electron, muon and anti-muon neutrino fluxes from this simulation is shown in Fig. 8. At higher energies, we recommend adopting the more recent calculation of Honda et al. [116].
The two main uncertainties associated with this flux at low energies are the uncertainty on the interaction cross section between cosmic rays and air nuclei, and the one arising from the Earth’s geomagnetic field, which introduces a cut-off in the low end of the energy spectrum. Taking these two effects into account, the uncertainty on the atmospheric neutrino flux below 100 MeV is approximately 20% [117, 118]. It should be highlighted that the cut-off induced by the Earth’s geomagnetic field depends on the detector’s location, resulting in a larger atmospheric flux for detectors that are nearer to the poles [117]. Our recommendation for the total flux and its uncertainty is shown in Table 4.
3.2.3 Diffuse supernova neutrinos
The diffuse supernova neutrino background (DSNB) refers to the cumulative flux of neutrinos from supernova explosions over the history of the Universe. The expected total flux of the DSNB is not large compared to other neutrino sources, but it can be relevant for direct dark matter searches since it extends to a higher energy range than solar neutrinos.
The neutrino spectrum of a core-collapse supernova is well-approximated by a Fermi-Dirac distribution, with temperatures in the range 3–8 MeV [119]. The DSNB flux shown in Fig. 8 assumes the following temperatures for each neutrino flavour: 3 and 5 MeV for electron and anti-electron neutrinos, respectively, and 8 MeV for the total contribution of the remaining neutrinos. For more details on this calculation, see Refs. [120, 121]. There are some large theoretical uncertainties in this calculation, and therefore, following the recommendations from Refs. [121, 122], we recommend assigning an uncertainty of 50% on the DSNB flux.
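The Fermi-Dirac parameterization above is straightforward to evaluate. The sketch below, using the quoted temperatures and an assumed uniform energy grid, normalizes each flavour's spectrum numerically:

```python
import numpy as np

def fermi_dirac_spectrum(energies_mev, temperature_mev):
    """Unit-normalized Fermi-Dirac spectrum, dN/dE ~ E^2 / (exp(E/T) + 1),
    normalized numerically on an assumed uniform energy grid."""
    e = np.asarray(energies_mev, dtype=float)
    shape = e**2 / (np.exp(e / temperature_mev) + 1.0)
    de = e[1] - e[0]
    return shape / (shape.sum() * de)

energies = np.linspace(0.01, 120.0, 12_000)       # MeV
spec_nue = fermi_dirac_spectrum(energies, 3.0)    # electron neutrinos
spec_anue = fermi_dirac_spectrum(energies, 5.0)   # electron antineutrinos
spec_nux = fermi_dirac_spectrum(energies, 8.0)    # remaining flavours

# The mean energy of a Fermi-Dirac spectrum is ~3.15 T, so the T = 8 MeV
# component reaches well beyond the solar neutrino energy range.
de = energies[1] - energies[0]
mean_nux = (energies * spec_nux).sum() * de   # ~25 MeV
```

Scaling each normalized spectrum by the corresponding total flux (with its 50% uncertainty) gives the DSNB contribution shown in Fig. 8.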
4 Overall recommendations
We conclude by providing a list of the main recommendations from the sections above. These recommendations do not preclude the development of new methods if they can be shown to have appropriate statistical properties, as long as comparisons with previous results are reported in a transparent manner. Rather, this set of recommendations provides a common framework that will facilitate the comparison of results between different experiments.
4.1 Statistical analysis
-
We recommend that collaborations decide on all the choices covered in this paper before proceeding with final analyses, regardless of whether they employ bias mitigation techniques such as blinding or salting. Changes in the analysis due to, e.g., discovering bugs after an unblinding should be pointed out when reporting results.
-
We recommend PLR as the test statistic used to assess discovery significance and to construct confidence intervals. Alternative methods should fulfill similar statistical properties, in particular coverage.
-
For standard WIMP searches we advocate performing these assessments on a per-mass basis, as in a raster scan.
-
Discovery significance should be assessed with the discovery test statistic, Eq. (5) (Eq. (4) evaluated at \(\mu =0\)).
-
If the signal hypothesis has free parameters not defined under the null hypothesis, a look-elsewhere-effect (LEE) computation should be performed to calculate a global significance, at least for a local significance that approaches or exceeds \(3 \sigma \).
-
Claims of evidence should require at least a \(3\sigma \) global discrepancy with the background-only hypothesis. We do not make a recommendation regarding the threshold for claiming a discovery.
-
Experiments should publish their discovery p value, both local and, if needed, global for any analysis.
-
The unified confidence interval approach should be used to construct confidence intervals, using the two-sided test statistic of Eq. (4). Staying with past convention in the field, the primary limit should use \(\alpha =0.1\) (i.e. 90% CL). We recommend collaborations publish the expected sensitivity of a result by showing a median expected limit with an uncertainty band (often called the “Brazil band”).
-
The two-sided confidence interval will “lift off” from 0 signal when \(p<\alpha \). Collaborations may decide, before proceeding to the final analysis, to apply an excess reporting threshold to report the lower limit only above some greater significance level. Note, however, that this approach will in general lead to overcoverage. See Ref. [21] for a previous example by XENON1T.
-
To avoid large underfluctuations that would exclude parameter space where a discovery is very unlikely, we advocate the use of a power constraint (PCL) on the confidence intervals obtained using the test statistic of Eq. (4), with a power of \(\pi _\mathrm {crit}=0.32\). This value corresponds to a sensitivity to \(\mu _\mathrm {crit}\) at the \(-1\sigma \) contour of predicted sensitivity in the asymptotic case. We leave it to the individual collaborations whether or not to also publicly present the unconstrained bound in some form (e.g. a dashed line), though we recommend that data be available upon request.
-
For excesses approaching or exceeding \(3\sigma \), a separate mass-cross-section confidence contour could be included. The complete procedure including this step would be:
-
Compute per-mass (local) discovery significance and per-mass confidence intervals – both of these should always be reported.
-
If a local discovery significance indicates an excess, compute and apply the look-elsewhere-effect to report a global discovery significance.
-
If the global discovery significance exceeds a pre-determined threshold, a separate mass-cross-section contour may also be included as part of reporting on the excess. The pre-determined threshold should be set high enough that flip-flopping between the per-mass cross-sections and the two-dimensional contour is less of a concern, and the per-mass confidence limit should still be included when reporting the result.
-
We recommend that the distribution of test statistics be estimated using either toy simulations or approximations (asymptotic or otherwise) verified using toy simulations.
-
Whenever possible, models should be validated, both on calibration data or side-bands, using goodness-of-fit tests chosen to discover relevant model discrepancies. Tests and criteria should be decided before data is unblinded.
-
We recommend that collaborations work to make their data usable to the physics community beyond specific published limits, by making results computer-readable and accessible by default, and by working to develop open statistical models/likelihoods for use by the community.
-
To avoid analysis biases, experiments should perform blind or salted analyses to the extent possible, committing to analysis and statistical conventions before studying the science data.
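The PCL recommendation in the list above can be summarized in a few lines: compute the distribution of upper limits expected under the background-only hypothesis and never report a limit below its \(\pi _\mathrm {crit}=0.32\) quantile. A sketch with a hypothetical toy-limit ensemble:

```python
import numpy as np

rng = np.random.default_rng(1)

def power_constrained_limit(observed_limit, bkg_only_limits, pi_crit=0.32):
    """Never report an upper limit below the pi_crit quantile of the
    limit distribution expected under the background-only hypothesis."""
    constraint = np.quantile(bkg_only_limits, pi_crit)
    return max(observed_limit, constraint)

# Hypothetical ensemble of background-only toy limits (median 1e-46 cm^2):
toy_limits = rng.lognormal(mean=np.log(1e-46), sigma=0.3, size=5_000)

# A strong downward fluctuation is clipped to the -1 sigma sensitivity band,
# while an observed limit above the band is reported unchanged:
clipped = power_constrained_limit(0.5e-46, toy_limits)
unchanged = power_constrained_limit(2.0e-46, toy_limits)
```

In practice the ensemble of background-only limits comes from the same toy simulations used to validate the test-statistic distribution.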
4.2 Astrophysical models
-
The overall recommendation is to use the SHM parameters in Table 1. The most significant change is an updated value of \(\mathbf {v}_0\), with all the other parameters equal to the most commonly used values. We emphasize that if these parameter values are adopted, the relevant references should always be cited; citing this document alone is not sufficient.
-
Due to non-parametric uncertainties in the form of the SHM itself, it is recommended not to profile over the SHM parameters’ uncertainties. Instead, we recommend that these parameters are fixed to clearly stated values, so they can easily be reinterpreted under different halo models.
-
The list of suggested normalization values for the relevant neutrino fluxes is shown in Table 4. We recommend using the theoretical prediction for all the neutrino sources, except for \(^{7}\)Be and \(^{8}\)B, for which the most recent experimental values have low uncertainties and are completely uncorrelated with other neutrino signals.
-
We leave it to the discretion of each collaboration to make the choices that they consider most appropriate to convert neutrino fluxes into recoil rates.
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors’ comment: There is no original data associated with this publication, although further details of original studies presented here can be provided upon request.]
Notes
Technically, the condition is that one or more parameters of the signal hypothesis are degenerate under the null hypothesis.
Flip-flopping is a term used to refer to the fact that the coverage probability of a confidence interval may be different to the nominal value if one makes an analysis choice, for example between a one- or two-sided test after looking at the data.
In some cases an experiment can set a stronger limit in the presence of a (downwardly-fluctuating) background than an identical experiment with no background.
References
F. Ruppin, J. Billard, E. Figueroa-Feliciano, L. Strigari, Complementarity of dark matter detectors in light of the neutrino background. Phys. Rev. D 90(8), 083510 (2014). https://doi.org/10.1103/PhysRevD.90.083510. arXiv:1408.3581 [hep-ph]
J. Buch, J. Fan, J.S.C. Leung, Implications of the Gaia sausage for dark matter nuclear interactions. Phys. Rev. D 101(6), 063026 (2020). https://doi.org/10.1103/PhysRevD.101.063026
G. Cowan, K. Cranmer, E. Gross, O. Vitells, Power-constrained limits. arXiv:1105.3166 [physics.data-an]
LUX Collaboration, D.S. Akerib et al., First results from the LUX dark matter experiment at the Sanford Underground Research Facility. Phys. Rev. Lett. 112, 091303 (2014). https://doi.org/10.1103/PhysRevLett.112.091303. arXiv:1310.8214 [astro-ph.CO]
D. Akerib et al., Results from a search for dark matter in the complete LUX exposure. Phys. Rev. Lett. 118(2), 021303 (2017). https://doi.org/10.1103/PhysRevLett.118.021303
XENON Collaboration, E. Aprile et al., First dark matter search results from the XENON1T experiment. Phys. Rev. Lett. 119(18), 181301 (2017). https://doi.org/10.1103/PhysRevLett.119.181301. arXiv:1705.06655 [astro-ph.CO]
XENON Collaboration, E. Aprile et al., Dark matter search results from a one ton-year exposure of XENON1T. Phys. Rev. Lett. 121(11), 111302 (2018). https://doi.org/10.1103/PhysRevLett.121.111302
C.A. O’Hare, Can we overcome the neutrino floor at high masses? Phys. Rev. D 102(6), 063024 (2020). https://doi.org/10.1103/PhysRevD.102.063024. arXiv:2002.07499 [astro-ph.CO]
J.D. Lewin, P.F. Smith, Review of mathematics, numerical factors, and corrections for dark matter experiments based on elastic nuclear recoil. Astropart. Phys. 6(1), 87–112 (1996). https://doi.org/10.1016/S0927-6505(96)00047-3
O. Behnke, J. Conrad, W.H. Lippincott, L. Lyons, O.S. Tarek, Phystat dark matter 2019. Stockholm University, August (2019). https://indico.cern.ch/event/769726/overview
ATLAS, CMS, LHC Higgs Combination Group Collaboration, Procedure for the LHC Higgs boson search combination in summer (2011)
ATLAS Collaboration, G. Aad et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC. Phys. Lett. B 716, 1–29 (2012). https://doi.org/10.1016/j.physletb.2012.08.020. arXiv:1207.7214 [hep-ex]
CMS Collaboration, S. Chatrchyan et al., Observation of a New Boson with Mass Near 125 GeV in \(pp\) Collisions at \(\sqrt{s}\) = 7 and 8 TeV. JHEP 06, 081 (2013). https://doi.org/10.1007/JHEP06(2013)081. arXiv:1303.4571 [hep-ex]
W.A. Rolke, A.M. Lopez, J. Conrad, Limits and confidence intervals in the presence of nuisance parameters. Nucl. Instrum. Methods A 551, 493–503 (2005). https://doi.org/10.1016/j.nima.2005.05.068. arXiv:physics/0403059
G. Cowan, K. Cranmer, E. Gross, O. Vitells, Asymptotic formulae for likelihood-based tests of new physics. Eur. Phys. J. C 71, 1554 (2011). https://doi.org/10.1140/epjc/s10052-011-1554-0 [Erratum: Eur. Phys. J. C73,2501(2013)]
Particle Data Group Collaboration, M. Tanabashi et al., Review of particle physics. Phys. Rev. D 98, 030001 (2018). https://doi.org/10.1103/PhysRevD.98.030001
L. Lyons, Raster scan or 2-D approach? arXiv:1404.7395 [hep-ex]
S.S. Wilks, The large-sample distribution of the likelihood ratio for testing composite hypotheses. Ann. Math. Stat. 9(1), 60–62 (1938). https://doi.org/10.1214/aoms/1177732360
S. Algeri, J. Aalbers, K.D. Morå, J. Conrad, Searching for new phenomena with profile likelihood ratio tests. Nat. Rev. Phys. 2(5), 245–252 (2020). https://doi.org/10.1038/s42254-020-0169-5
E. Gross, O. Vitells, Trial factors for the look elsewhere effect in high energy physics. Eur. Phys. J. C 70, 525–530 (2010). https://doi.org/10.1140/epjc/s10052-010-1470-8. arXiv:1005.1891 [physics.data-an]
XENON Collaboration, E. Aprile et al., XENON1T dark matter data analysis: Signal and background models and statistical inference. Phys. Rev. D 99(11), 112009 (2019). https://doi.org/10.1103/PhysRevD.99.112009. arXiv:1902.11297 [physics.ins-det]
K.D. Morå, Statistical Modelling and Inference for XENON1T. Ph.D. thesis, Stockholm University, Department of Physics, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-163390. Equation 5.6 has been corrected with an erratum (http://su.diva-portal.org/smash/get/diva2:1275477/ERRATA02.pdf); the integral runs from 0 to \(p_{\rm data}\)
G.J. Feldman, R.D. Cousins, A unified approach to the classical statistical analysis of small signals. Phys. Rev. D 57, 3873–3889 (1998). https://doi.org/10.1103/PhysRevD.57.3873
K. Cranmer, Statistical challenges for searches for new physics at the LHC, in Statistical Problems in Particle Physics, Astrophysics and Cosmology, vol. 9 (2005). https://doi.org/10.1142/9781860948985_0026. arXiv:physics/0511028
PandaX-II Collaboration, X. Cui et al., Dark matter results from 54-ton-day exposure of PandaX-II experiment. Phys. Rev. Lett. 119(18), 181302 (2017). https://doi.org/10.1103/PhysRevLett.119.181302
LUX Collaboration, D.S. Akerib et al., Results from a search for dark matter in the complete LUX exposure. Phys. Rev. Lett. 118(2), 021303 (2017). https://doi.org/10.1103/PhysRevLett.118.021303
A.L. Read, Presentation of search results: the CLs technique. J. Phys. G Nucl. Part. Phys. 28(10), 2693 (2002)
A.L. Read, Modified frequentist analysis of search results (the \(CL_{s}\) method). http://cds.cern.ch/record/451614
R.D. Cousins, Negatively biased relevant subsets induced by the most-powerful one-sided upper confidence limits for a bounded physical parameter. arXiv:1109.2023 [physics.data-an]
XENON Collaboration, E. Aprile et al., Search for coherent elastic scattering of solar \(^8\)B neutrinos in the XENON1T dark matter experiment. Phys. Rev. Lett. 126, 091301 (2021). https://doi.org/10.1103/PhysRevLett.126.091301. arXiv:2012.02846 [hep-ex]
J.R. Klein, A. Roodman, Blind analysis in nuclear and particle physics. Ann. Rev. Nucl. Part. Sci. 55, 141–163 (2005). https://doi.org/10.1146/annurev.nucl.55.090704.151521
XENON Collaboration, J. Angle et al., First results from the XENON10 dark matter experiment at the Gran Sasso National Laboratory. Phys. Rev. Lett. 100, 021303 (2008). https://doi.org/10.1103/PhysRevLett.100.021303. arXiv:0706.0039 [astro-ph]
XENON100 Collaboration, E. Aprile et al., Dark matter results from 225 live days of XENON100 Data. Phys. Rev. Lett. 109, 181301 (2012). https://doi.org/10.1103/PhysRevLett.109.181301. arXiv:1207.5988 [astro-ph.CO]
V.N. Lebedenko et al., Result from the first science run of the ZEPLIN-III dark matter search experiment. Phys. Rev. D 80, 052010 (2009). https://doi.org/10.1103/PhysRevD.80.052010. arXiv:0812.1150 [astro-ph]
SuperCDMS Collaboration, R. Agnese et al., Search for low-mass weakly interacting massive particles with SuperCDMS. Phys. Rev. Lett. 112(24), 241302 (2014). https://doi.org/10.1103/PhysRevLett.112.241302. arXiv:1402.7137 [hep-ex]
DarkSide Collaboration, P. Agnes et al., DarkSide-50 532-day dark matter search with low-radioactivity argon. Phys. Rev. D 98(10), 102006 (2018). https://doi.org/10.1103/PhysRevD.98.102006
PICO Collaboration, C. Amole et al., Dark Matter Search Results from the PICO-60 C\(_3\)F\(_8\) Bubble Chamber. Phys. Rev. Lett. 118(25), 251301 (2017). https://doi.org/10.1103/PhysRevLett.118.251301. arXiv:1702.07666 [astro-ph.CO]
SNO Collaboration, B. Aharmim et al., Measurement of the \(\nu _e\) and Total \(^{8}\)B Solar Neutrino Fluxes with the Sudbury Neutrino Observatory Phase-III Data Set. Phys. Rev. C 87(1), 015502 (2013). https://doi.org/10.1103/PhysRevC.87.015502. arXiv:1107.2901 [nucl-ex]
C. Biwer et al., Validating gravitational-wave detections: the advanced LIGO hardware injection system. Phys. Rev. D 95(6), 062002 (2017). https://doi.org/10.1103/PhysRevD.95.062002. arXiv:1612.07864 [astro-ph.IM]
ANTARES Collaboration, S. Adrian-Martinez et al., First results on dark matter annihilation in the sun using the ANTARES neutrino telescope. JCAP 11, 032 (2013). https://doi.org/10.1088/1475-7516/2013/11/032. arXiv:1302.6516 [astro-ph.HE]
C. McCabe, The Earth’s velocity for direct detection experiments. J. Cosmol. Astropart. Phys. 2014(02), 027 (2014). https://doi.org/10.1088/1475-7516/2014/02/027
C.A.J. O’Hare et al., Velocity substructure from Gaia and direct searches for dark matter. Phys. Rev. D 101(2), 023006 (2020). https://doi.org/10.1103/PhysRevD.101.023006
A. Radick, A.-M. Taki, T.-T. Yu, Dependence of dark matter-electron scattering on the galactic dark matter velocity distribution. J. Cosmol. Astropart. Phys. 2021(02), 004 (2021). https://doi.org/10.1088/1475-7516/2021/02/004
N.W. Evans, C.A.J. O’Hare, C. McCabe, SHM++: a refinement of the Standard Halo Model for dark matter searches in light of the Gaia Sausage. arXiv:1810.11468
M.C. Smith et al., The RAVE survey: constraining the local Galactic escape speed. Mon. Not. R. Astron. Soc. 379(2), 755–772 (2007). https://doi.org/10.1111/j.1365-2966.2007.11964.x
R. Schönrich, J. Binney, W. Dehnen, Local kinematics and the local standard of rest. Mon. Not. R. Astron. Soc. 403(4), 1829–1833 (2010). https://doi.org/10.1111/j.1365-2966.2010.16253.x
J. Bland-Hawthorn, O. Gerhard, The galaxy in context: structural, kinematic, and integrated properties. Ann. Rev. Astron. Astrophys. 54(1), 529–596 (2016). https://doi.org/10.1146/annurev-astro-081915-023441
GRAVITY Collaboration, R. Abuter et al., Improved GRAVITY astrometric accuracy from modeling optical aberrations. Astron. Astrophys. 647, A59 (2021). https://doi.org/10.1051/0004-6361/202040208
Y. Wu, K. Freese, C. Kelso, P. Stengel, M. Valluri, Uncertainties in direct dark matter detection in light of Gaia’s escape velocity measurements. J. Cosmol. Astropart. Phys. 2019(10), 034 (2019). https://doi.org/10.1088/1475-7516/2019/10/034
Gaia Collaboration, A.G.A. Brown, A. Vallenari, T. Prusti, J.H.J. de Bruijne, C. Babusiaux, C.A.L. Bailer-Jones, Gaia data release 2. Summary of the contents and survey properties. Astron. Astrophys. 616, A1 (2018). https://doi.org/10.1051/0004-6361/201833051. arXiv:1804.09365
D.G. York et al., The Sloan digital sky survey: technical summary. Astron. J. 120(3), 1579 (2000). https://doi.org/10.1086/301513
L. Necib, M. Lisanti, V. Belokurov, Inferred evidence for dark matter kinematic substructure with SDSS-Gaia. Astrophys. J. 874(1), 3 (2019). https://doi.org/10.3847/1538-4357/ab095b
G.C. Myeong et al., The sausage globular clusters. Astrophys. J. 863(2), L28 (2018). https://doi.org/10.3847/2041-8213/aad7f7. arXiv:1805.00453
N. Bozorgnia, A. Fattahi, C.S. Frenk, A. Cheek, D.G. Cerdeño, F.A. Gómez, R.J. Grand, F. Marinacci, The dark matter component of the gaia radially anisotropic substructure. J. Cosmol. Astropart. Phys. 2020(07), 036 (2020). https://doi.org/10.1088/1475-7516/2020/07/036
G.C. Myeong et al., Discovery of new retrograde substructures: the shards of \(\omega \) Centauri? Mon. Not. R. Astron. Soc. 478(4), 5449–5459 (2018)
G.C. Myeong, N.W. Evans, V. Belokurov, N.C. Amorisco, S.E. Koposov, Halo substructure in the SDSS-Gaia catalogue: streams and clumps. Mon. Not. R. Astron. Soc. 475(2), 1537–1548 (2018). https://doi.org/10.1093/mnras/stx3262
H. Koppelman, A. Helmi, J. Veljanoski, One large blob and many streams frosting the nearby Stellar Halo in Gaia DR2. Astrophys. J. 860(1), L11 (2018). https://doi.org/10.3847/2041-8213/aac882
H.H. Koppelman, A. Helmi, D. Massari, S. Roelenga, U. Bastian, Characterization and history of the Helmi streams with Gaia DR2. Astron. Astrophys. 625, A5 (2019). https://doi.org/10.1051/0004-6361/201834769
A. Helmi, The stellar halo of the Galaxy. Astron. Astrophys. Rev. 15(3), 145–188 (2008). https://doi.org/10.1007/s00159-008-0009-6
L. Necib, B. Ostdiek, M. Lisanti, T. Cohen, M. Freytsis, S. Garrison-Kimmel, P.F. Hopkins, A. Wetzel, R. Sanderson, Evidence for a vast prograde stellar stream in the solar vicinity. Nat. Astron. 4(11), 1078–1083 (2020). https://doi.org/10.1038/s41550-020-1131-2
DEAP Collaboration, P. Adhikari et al., Constraints on dark matter-nucleon effective couplings in the presence of kinematically distinct halo substructures using the DEAP-3600 detector. Phys. Rev. D 102(8), 082001 (2020). https://doi.org/10.1103/PhysRevD.102.082001
G. Besla, A. Peter, N. Garavito-Camargo, The highest-speed local dark matter particles come from the Large Magellanic Cloud. J. Cosmol. Astropart. Phys. 2019(11), 013 (2019). https://doi.org/10.1088/1475-7516/2019/11/013. arXiv:1909.04140
XENON Collaboration, E. Aprile et al., Dark matter search results from a one tonne \(\times \) year exposure of XENON1T. Phys. Rev. Lett. 121(11), 111302 (2018). https://doi.org/10.1103/PhysRevLett.121.111302. arXiv:1805.12562
DEAP Collaboration, R. Ajaj et al., Search for dark matter with a 231-day exposure of liquid argon using DEAP-3600 at SNOLAB. Phys. Rev. D 100(2), 022004 (2019). https://doi.org/10.1103/PhysRevD.100.022004
J.I. Read, The local dark matter density. J. Phys. G Nucl. Part. Phys. 41(6), 063101 (2014). https://doi.org/10.1088/0954-3899/41/6/063101
P.F. de Salas, A. Widmark, Dark matter local density determination: recent observations and future prospects (2020). arXiv:2012.11477 [astro-ph, physics:hep-ph]
M. Steinmetz et al., The radial velocity experiment (RAVE): first data release. Astron. J. 132(4), 1645 (2006)
H.H. Koppelman, A. Helmi, Determination of the escape velocity of the Milky Way using a proper motion selected halo sample (2021). arXiv:2006.16283 [astro-ph]
A.J. Deason, A. Fattahi, V. Belokurov, W. Evans, R.J. Grand, F. Marinacci, R. Pakmor, The local high velocity tail and the Galactic escape speed. Mon. Not. R. Astron. Soc. 485(3), 3514–3526 (2019). https://doi.org/10.1093/mnras/stz623. arXiv:1901.02016
T. Piffl et al., The RAVE survey: the Galactic escape speed and the mass of the Milky Way. Astron. Astrophys. 562, A91 (2014). https://doi.org/10.1051/0004-6361/201322531. arXiv:1309.4293
G. Kordopatis et al., The radial velocity experiment (RAVE): fourth data release. Astron. J. 146(5), 134 (2013)
A.A. Williams, V. Belokurov, A.R. Casey, N.W. Evans, On the run: mapping the escape speed across the Galaxy with SDSS. Mon. Not. R. Astron. Soc. 468(2), 2359–2371 (2017). https://doi.org/10.1093/mnras/stx508
C.P. Ahn et al., The ninth data release of the Sloan Digital Sky Survey: first spectroscopic data from the SDSS-III Baryon Oscillation Spectroscopic Survey. Astrophys. J. Suppl. Ser. 203(2), 21 (2012)
G. Monari et al., The escape speed curve of the Galaxy obtained from Gaia DR2 implies a heavy Milky Way. Astron. Astrophys. 616, L9 (2018). https://doi.org/10.1051/0004-6361/201833748
L. Necib, T. Lin, Substructure at high speed II: the local escape velocity and Milky Way mass with Gaia DR2 (2021). arXiv:2102.02211 [astro-ph, physics:hep-ph]
J. Holmberg, B. Nordström, J. Andersen, The Geneva-Copenhagen survey of the solar neighbourhood-III. Improved distances, ages, and kinematics. Astron. Astrophys. 501(3), 941–947 (2009). https://doi.org/10.1051/0004-6361/200811191
F. van Leeuwen, Validation of the new Hipparcos reduction. Astron. Astrophys. 474(2), 653–664 (2007). https://doi.org/10.1051/0004-6361:20078357
W. Dehnen, J.J. Binney, Local stellar kinematics from Hipparcos data. Mon. Not. R. Astron. Soc. 298(2), 387–394 (1998). https://doi.org/10.1046/j.1365-8711.1998.01600.x
M.J. Reid, A. Brunthaler, The Proper Motion of Sagittarius A*. II. The Mass of Sagittarius A*. Astrophys. J. 616(2), 872 (2004). https://doi.org/10.1086/424960
M.J. Reid et al., Trigonometric parallaxes of high mass star forming regions: the structure and kinematics of the Milky Way. Astrophys. J. 783, 130 (2014). https://doi.org/10.1088/0004-637X/783/2/130
A.-C. Eilers, D.W. Hogg, H.-W. Rix, M. Ness, The circular velocity curve of the Milky Way from 5 to 25 kpc. Astrophys. J. 871(1), 120 (2019). https://doi.org/10.3847/1538-4357/aaf648. arXiv:1810.09466
L.E. Strigari, Galactic searches for dark matter. Phys. Rep. 531(1), 1–88 (2013). https://doi.org/10.1016/j.physrep.2013.05.004
A.M. Green, Astrophysical uncertainties on the local dark matter distribution and direct detection experiments. J. Phys. G Nucl. Part. Phys. 44(8), 084001 (2017). https://doi.org/10.1088/1361-6471/aa7819
L.M. Krauss, J.L. Newstead, Extracting particle physics information from direct detection of dark matter with minimal assumptions (2018). arXiv:1801.08523 [astro-ph, physics:hep-ex, physics:hep-ph, physics:nucl-ex]
S.E. Koposov, H.-W. Rix, D.W. Hogg, Constraining the milky way potential with a six-dimensional phase-space map of the GD-1 stellar stream. Astrophys. J. 712(1), 260–273 (2010). https://doi.org/10.1088/0004-637X/712/1/260
C. McCabe, Astrophysical uncertainties of dark matter direct detection experiments. Phys. Rev. D 82(2), 023530 (2010). https://doi.org/10.1103/PhysRevD.82.023530
E. Vitagliano, I. Tamborra, G. Raffelt, Grand unified neutrino spectrum at earth: sources and spectral components. arXiv:1910.11878 [astro-ph.HE]
COHERENT Collaboration, D. Akimov et al., Observation of coherent elastic neutrino–nucleus scattering. Science 357(6356), 1123–1126 (2017). https://doi.org/10.1126/science.aao0990. arXiv:1708.01294 [nucl-ex]
J.N. Bahcall, W.A. Fowler, I. Iben Jr., R.L. Sears, Solar neutrino flux. Astrophys. J. 137, 344–346 (1963). https://doi.org/10.1086/147513
M. Asplund, N. Grevesse, A.J. Sauval, P. Scott, The chemical composition of the sun. Annu. Rev. Astron. Astrophys. 47(1), 481–522 (2009). https://doi.org/10.1146/annurev.astro.46.060407.145222. arXiv:0909.0948 [astro-ph.SR]
S. Basu, H.M. Antia, Constraining solar abundances using helioseismology. Astrophys. J. 606(1), L85–L88 (2004). https://doi.org/10.1086/421110. arXiv:astro-ph/0403485
J.N. Bahcall, S. Basu, M. Pinsonneault, A.M. Serenelli, Helioseismological implications of recent solar abundance determinations. Astrophys. J. 618(2), 1049–1056 (2005). https://doi.org/10.1086/426070
F. Delahaye, M.H. Pinsonneault, The solar heavy-element abundances. I. Constraints from stellar interiors. Astrophys. J. 649(1), 529–540 (2006). https://doi.org/10.1086/505260. arXiv:astro-ph/0511779
A. Serenelli, S. Basu, J.W. Ferguson, M. Asplund, New solar composition: the problem with solar models revisited. Astrophys. J. Lett. 705, L123–L127 (2009). https://doi.org/10.1088/0004-637X/705/2/L123. arXiv:0909.2668 [astro-ph.SR]
N. Vinyoles, A.M. Serenelli, F.L. Villante, S. Basu, J. Bergström, M. Gonzalez-Garcia, M. Maltoni, C.P. Garay, N. Song, A new generation of standard solar models. Astrophys. J. 835(2), 202 (2017). https://doi.org/10.3847/1538-4357/835/2/202. arXiv:1611.09867 [astro-ph.SR]
L. Stonehill, J. Formaggio, R. Robertson, Solar neutrinos from CNO electron capture. Phys. Rev. C 69, 015801 (2004). https://doi.org/10.1103/PhysRevC.69.015801. arXiv:hep-ph/0309266
F. Villante, ecCNO solar neutrinos: a challenge for gigantic ultra-pure liquid scintillator detectors. Phys. Lett. B 742, 279–284 (2015). https://doi.org/10.1016/j.physletb.2015.01.043. arXiv:1410.2796 [hep-ph]
J.N. Bahcall, A. Ulmer, The temperature dependence of solar neutrino fluxes. Phys. Rev. D 53, 4202–4210 (1996). https://doi.org/10.1103/PhysRevD.53.4202. arXiv:astro-ph/9602012
W. Haxton, R.H. Robertson, A.M. Serenelli, Solar neutrinos: status and prospects. Ann. Rev. Astron. Astrophys. 51, 21–61 (2013). https://doi.org/10.1146/annurev-astro-081811-125539. arXiv:1208.5723 [astro-ph.SR]
C. Fröhlich, J. Lean, The Sun’s total irradiance: cycles, trends and related climate change uncertainties since 1976. Geophys. Res. Lett. 25(23), 4377–4380 (1998). https://doi.org/10.1029/1998GL900157
J.N. Bahcall, The luminosity constraint on solar neutrino fluxes. Phys. Rev. C 65, 025801 (2002). https://doi.org/10.1103/PhysRevC.65.025801. arXiv:hep-ph/0108148
O.Y. Smirnov et al., Measurement of solar pp-neutrino flux with Borexino: results and implications. J. Phys. Conf. Ser. 675(1), 012027 (2016). https://doi.org/10.1088/1742-6596/675/1/012027
J. Bergstrom, M. Gonzalez-Garcia, M. Maltoni, C. Pena-Garay, A.M. Serenelli, N. Song, Updated determination of the solar neutrino fluxes from solar neutrino data. JHEP 03, 132 (2016). https://doi.org/10.1007/JHEP03(2016)132. arXiv:1601.00972 [hep-ph]
D. Vescovi, C. Mascaretti, F. Vissani, L. Piersanti, O. Straniero, The luminosity constraint in the era of precision solar physics. arXiv:2009.05676 [astro-ph.SR]
Borexino Collaboration, M. Agostini et al., Simultaneous precision spectroscopy of pp, \(^{7}{\rm Be}\), and \(pep\) solar neutrinos with Borexino Phase-II. Phys. Rev. D 100, 082004 (2019). https://doi.org/10.1103/PhysRevD.100.082004
SNO Collaboration, B. Aharmim et al., Combined analysis of all three phases of solar neutrino data from the Sudbury Neutrino Observatory. Phys. Rev. C 88, 025501 (2013). https://doi.org/10.1103/PhysRevC.88.025501. arXiv:1109.0763 [nucl-ex]
B. Aharmim et al., A search for neutrinos from the solar hep reaction and the diffuse supernova neutrino background with the Sudbury Neutrino Observatory. Astrophys. J. 653(2), 1545–1551 (2006). https://doi.org/10.1086/508768
Borexino Collaboration, M. Agostini et al., First direct experimental evidence of CNO neutrinos. arXiv:2006.15115 [hep-ex]
E.K. Akhmedov, J. Kopp, Neutrino oscillations: quantum mechanics vs. quantum field theory. JHEP 04, 008 (2010). https://doi.org/10.1007/JHEP04(2010)008. arXiv:1001.4815 [hep-ph] (Erratum: JHEP 10, 052 (2013))
Super-Kamiokande Collaboration, Y. Fukuda et al., Evidence for oscillation of atmospheric neutrinos. Phys. Rev. Lett. 81, 1562–1567 (1998). https://doi.org/10.1103/PhysRevLett.81.1562. arXiv:hep-ex/9807003
R.H. Helm, Inelastic and elastic scattering of 187-Mev electrons from selected even-even nuclei. Phys. Rev. 104, 1466–1475 (1956). https://doi.org/10.1103/PhysRev.104.1466
K. Patton, J. Engel, G.C. McLaughlin, N. Schunck, Neutrino–nucleus coherent scattering as a probe of neutron density distributions. Phys. Rev. C 86, 024612 (2012). https://doi.org/10.1103/PhysRevC.86.024612
J.-W. Chen, H.-C. Chi, C.-P. Liu, C.-P. Wu, Low-energy electronic recoil in xenon detectors by solar neutrinos. Phys. Lett. B 774, 656–661 (2017). https://doi.org/10.1016/j.physletb.2017.10.029
G. Battistoni, A. Ferrari, T. Montaruli, P. Sala, The atmospheric neutrino flux below 100-MeV: the FLUKA results. Astropart. Phys. 23, 526–534 (2005). https://doi.org/10.1016/j.astropartphys.2005.03.006
M. Honda, M.S. Athar, T. Kajita, K. Kasahara, S. Midorikawa, Atmospheric neutrino flux calculation using the NRLMSISE-00 atmospheric model. Phys. Rev. D 92(2), 023004 (2015). https://doi.org/10.1103/PhysRevD.92.023004. arXiv:1502.03916 [astro-ph.HE]
M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, Improvement of low energy atmospheric neutrino flux calculation using the JAM nuclear interaction model. Phys. Rev. D 83, 123001 (2011). https://doi.org/10.1103/PhysRevD.83.123001
K. Okumura, Measurements of the atmospheric neutrino flux by Super-Kamiokande: energy spectra, geomagnetic effects, and solar modulation. J. Phys. Conf. Ser. 888, 012116 (2017). https://doi.org/10.1088/1742-6596/888/1/012116
M.T. Keil, G.G. Raffelt, H.-T. Janka, Monte Carlo study of supernova neutrino spectra formation. Astrophys. J. 590, 971–991 (2003). https://doi.org/10.1086/375130. arXiv:astro-ph/0208035
L.E. Strigari, Neutrino coherent scattering rates at direct dark matter detectors. New J. Phys. 11, 105011 (2009). https://doi.org/10.1088/1367-2630/11/10/105011. arXiv:0903.3630 [astro-ph.CO]
J.F. Beacom, The diffuse supernova neutrino background. Annu. Rev. Nucl. Part. Sci. 60(1), 439–462 (2010). https://doi.org/10.1146/annurev.nucl.010909.083331
S. Horiuchi, J.F. Beacom, E. Dwek, The diffuse supernova neutrino background is detectable in Super-Kamiokande. Phys. Rev. D 79, 083013 (2009). https://doi.org/10.1103/PhysRevD.79.083013. arXiv:0812.3157 [astro-ph]
Acknowledgements
The authors are indebted to Olaf Behnke, Louis Lyons, and Tarek Saab, co-organizers of the Phystat-DM workshop, for subsequent comments and useful discussions on the recommendations found here. We would like to thank the Knut and Alice Wallenberg Foundation for contributions to the Phystat-DM workshop. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011702. DB is supported by the Kavli Institute for Cosmological Physics at The University of Chicago through an endowment from the Kavli Foundation. IB is grateful for the support of the Alexander Zaks Scholarship, The Buchmann Scholarship, and the Azrieli Foundation. JD is supported by the Science and Technologies Facilities Council (STFC) Grant No. ST/R003181/1. CM is supported by the Science and Technology Facilities Council (STFC) Grant ST/N004663/1. BvK is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under the Emmy Noether Grant No. 420484612, and under Germany's Excellence Strategy EXC 2121 "Quantum Universe" (Project ID 390833306). MCP and DF are supported by the McDonald Institute.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Funded by SCOAP3
Cite this article
Baxter, D., Bloch, I.M., Bodnia, E. et al. Recommended conventions for reporting results from direct dark matter searches. Eur. Phys. J. C 81, 907 (2021). https://doi.org/10.1140/epjc/s10052-021-09655-y