eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Plug-in Estimation Approaches to Causal Inference and Discovery

Abstract

This dissertation covers techniques for the estimation of parameters related to making causal inferences and discoveries. For both its generality and its simplicity, the focus is on plug-in estimation of these parameters, whereby the statistical estimator of one parameter (or a set of parameters) is plugged in to obtain an estimator of another, possibly more difficult to estimate, parameter. In particular, the following topics are addressed.

In Chapter 2, we focus on causal discovery, the learning of causality in a data mining scenario. Causal discovery has been of strong scientific and theoretical interest as a starting point to identify "what causes what?" Contingent on assumptions and a proper learning algorithm, it is sometimes possible to identify and accurately estimate a causal directed acyclic graph (DAG), as opposed to a Markov equivalence class of graphs that leaves causal directions ambiguous. The focus of this chapter is on the identifiability and estimation of DAGs with general error distributions through a general sequential sorting procedure that orders variables one at a time, starting at the root nodes, followed by children of the root nodes, and so on until completion. We demonstrate a novel application of this general approach to estimate the topological ordering of a DAG. At each step of the procedure, only simple likelihood ratio scores are calculated on regression residuals to decide the next node to append to the current partial ordering. The computational complexity of our algorithm on a p-node problem is O(pd), where d is the maximum neighborhood size. Under mild assumptions, the population version of our procedure provably identifies a true ordering of the underlying DAG. We provide extensive numerical evidence to demonstrate that this sequential procedure scales to possibly thousands of nodes and works well for high-dimensional data. We accompany these numerical experiments with an application to a single-cell gene expression dataset.
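
To make the residual-based, sequential flavor of such a procedure concrete, the sketch below gives a minimal illustration, not the dissertation's algorithm: it orders the variables of a linear SEM by repeatedly regressing each remaining node on those already ordered and appending the best-scoring one. The negative log residual variance used as the score, and the implicit equal-error-variance setting that makes it sensible, are placeholders for the likelihood ratio scores described above.

```python
# Schematic sketch only (not the dissertation's algorithm): a sequential
# ordering procedure for a linear SEM. Each remaining node is regressed on the
# nodes ordered so far, and a residual-based score decides which node to
# append next. The score below (negative log residual variance, sensible under
# an assumed equal-error-variance setting) is a placeholder for the likelihood
# ratio scores described in Chapter 2.
import numpy as np

def residual_score(resid):
    # Higher score = more plausible as the next node in the ordering.
    return -np.log(np.var(resid) + 1e-12)

def sequential_order(X):
    n, p = X.shape
    ordered, remaining = [], list(range(p))
    while remaining:
        scores = {}
        for j in remaining:
            if ordered:
                # Naive version: regress on all ordered nodes. Restricting to a
                # size-d neighborhood set would give the O(pd) cost cited above.
                Z = X[:, ordered]
                beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
                resid = X[:, j] - Z @ beta
            else:
                resid = X[:, j] - X[:, j].mean()
            scores[j] = residual_score(resid)
        best = max(scores, key=scores.get)
        ordered.append(best)
        remaining.remove(best)
    return ordered
```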

The focus of Chapter 3 is the Linear Non-Gaussian Acyclic Model (LiNGAM). In contrast to existing work, we present a novel estimation approach that involves specifying a parametric objective function and establishing when our sequential optimization approach is statistically consistent, including when the dimension of the underlying graph diverges, and when we can provide finite-sample guarantees on its accuracy. This involves (1) carefully defining our target parameter, an ordering of the DAG's vertices such that parents always precede children, and (2) translating deviation bounds on the parameters of the corresponding structural equation model (SEM) into a statement about how far our topological order estimate deviates from a true topological ordering. We also incorporate a priori known neighborhood sets into our theoretical results.
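
As a rough, hypothetical companion to this description, the sketch below runs a sequential optimization over candidate roots, using a crude higher-order cross-moment as a stand-in for the chapter's parametric objective; none of the specific choices here (the surrogate dependence measure, pairwise OLS projections) are taken from the dissertation.

```python
# Hypothetical illustration only, not the chapter's estimator: a sequential
# optimization loop for ordering a LiNGAM-type SEM. A crude higher-order
# cross-moment serves as a stand-in for the parametric objective; it is near
# zero when a candidate root is independent of the residuals of the remaining
# variables regressed on it, and generically nonzero in the reverse direction
# when the errors are non-Gaussian.
import numpy as np

def dependence_surrogate(x, r):
    # Stand-in dependence measure between a candidate root x and a residual r.
    x = x - x.mean()
    r = r - r.mean()
    return abs(np.mean(x * r**2)) + abs(np.mean(x**2 * r))

def lingam_order(X):
    Xw = X - X.mean(axis=0)                 # centered working copy
    p = Xw.shape[1]
    remaining, order = list(range(p)), []
    while len(remaining) > 1:
        best, best_obj = None, np.inf
        for j in remaining:                 # candidate "most exogenous" node
            xj = Xw[:, j]
            obj = 0.0
            for i in remaining:
                if i == j:
                    continue
                b = xj @ Xw[:, i] / (xj @ xj + 1e-12)   # simple OLS slope
                obj += dependence_surrogate(xj, Xw[:, i] - b * xj)
            if obj < best_obj:
                best, best_obj = j, obj
        order.append(best)
        remaining.remove(best)
        xb = Xw[:, best]
        for i in remaining:                 # project the chosen variable out
            b = xb @ Xw[:, i] / (xb @ xb + 1e-12)
            Xw[:, i] = Xw[:, i] - b * xb
    order.extend(remaining)
    return order
```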

In Chapter 4, we assume that the underlying causal structure is known, for example, due to the successful application of a causal discovery algorithm similar to those in the previous two chapters. This grants us identifiability of parameters of the distribution of so-called potential outcomes, the key random variables about which we would like to make causal claims. The premise of this chapter, in a vein similar to predictive inference with quantile regression, is that observations may lie far away from their conditional expectation. In the context of causal inference, due to the missingness of one outcome, it is difficult to check whether an individual's treatment effect lies close to the prediction given by the estimated Average Treatment Effect (ATE) or Conditional Average Treatment Effect (CATE). With the aim of augmenting inference with these estimands in practice, we further study an existing distribution-free framework for the plug-in estimation of bounds on the probability that an individual benefits from treatment (PIBT), a generally inestimable quantity that would concisely summarize an intervention's efficacy if it could be known. Given the innate uncertainty in the target population-level bounds on PIBT, we seek to better understand the margin of error in the estimation of these target parameters in order to help discern whether estimated bounds on treatment efficacy are tight (or wide) due to random chance or not. In particular, we present non-asymptotic guarantees for the estimation of bounds on marginal PIBT in a randomized experiment setting. We also derive new non-asymptotic results for the case where we would like to understand heterogeneity in PIBT across strata of pre-treatment covariates, with one of our main results in this setting making strategic use of regression residuals. These results, especially those in the randomized experiment case, can be used to support formal statistical power analyses and frequentist confidence statements in settings where we are interested in inferring PIBT through the target bounds under minimal parametric assumptions. Our results extend to both real-valued and binary-valued outcomes, and they can also be applied to reason about whether an individual is likely to be harmed by an intervention.
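
As one concrete illustration of the plug-in principle in this setting, the sketch below plugs the two empirical marginal CDFs from a randomized experiment into Makarov-type population bounds on P(Y(1) > Y(0)). The choice of this particular bound construction, the evaluation grid, and the omission of any finite-sample correction are simplifying assumptions for illustration rather than the chapter's actual procedure.

```python
# Illustrative sketch only: plug-in estimation of distribution-free
# (Makarov-type) bounds on P(Y(1) > Y(0)) from the two marginal outcome
# distributions of a randomized experiment. This shows the plug-in principle
# the chapter builds on; its specific bounds, finite-sample corrections, and
# non-asymptotic guarantees are not reproduced here. The continuous-outcome
# reading of the bounds is an assumption (ties need extra care for binary Y).
import numpy as np

def empirical_cdf(sample, grid):
    # F_hat(t) = proportion of observations <= t, evaluated on a grid.
    return np.mean(sample[None, :] <= grid[:, None], axis=1)

def pibt_bounds(y_treated, y_control):
    grid = np.sort(np.concatenate([y_treated, y_control]))
    F1 = empirical_cdf(y_treated, grid)   # marginal CDF of Y(1) (treated arm)
    F0 = empirical_cdf(y_control, grid)   # marginal CDF of Y(0) (control arm)
    # Makarov-style bounds on P(Y(1) - Y(0) <= 0), evaluated on the pooled grid:
    lower_F = max(np.max(F1 - F0), 0.0)
    upper_F = 1.0 + min(np.min(F1 - F0), 0.0)
    # Translate into bounds on PIBT = P(Y(1) > Y(0)):
    return 1.0 - upper_F, 1.0 - lower_F

# Example usage on simulated data:
rng = np.random.default_rng(0)
lo, hi = pibt_bounds(rng.normal(0.3, 1.0, 500), rng.normal(0.0, 1.0, 500))
print(f"Plug-in bounds on P(Y(1) > Y(0)): [{lo:.2f}, {hi:.2f}]")
```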
