Author manuscript; available in PMC 2020 Sep 1. Published in final edited form as: Conf Proc IEEE Eng Med Biol Soc. 2019 Jul;2019:4803–4806. doi: 10.1109/EMBC.2019.8856953

A Novel Scanning Algorithm for MEG/EEG imaging using Covariance Partitioning and Noise Learning

Chang Cai 1, Kensuke Sekihara 2, Srikantan S Nagarajan 1
PMCID: PMC7461716  NIHMSID: NIHMS1623432  PMID: 31946936

Abstract

In this paper, we present a novel scanning algorithm, called Covariance Optimization Garnering Noise for Active Cancellation (COGNAC), for magnetoencephalography (MEG) and electroencephalography (EEG) source localization. COGNAC uses a probabilistic graphical generative model to describe sensor data. This generative model partitions the contributions to the sensor data into those from sources at a particular scan location and those from sources outside the scan location, with corresponding multi-resolution variance parameters that are estimated from the data. Minimizing a convex upper bound on the negative log marginal likelihood of the data under this generative model yields a cost function that can be optimized efficiently. Importantly, the generative model enables learning of the sensor noise without the need for additional baseline or pre-stimulus data. The resulting inference algorithm is robust in reconstructing highly correlated sources and in the presence of high levels of interference and noise. Algorithm performance was compared to representative benchmark algorithms on both simulated and real brain activity. In simulations, the performance of our novel algorithm is consistently superior to that of the benchmarks. We also demonstrate that the new algorithm is robust to correlated brain activity present in real MEG/EEG data.

I. INTRODUCTION

Magnetoencephalography (MEG) and electroencephalography (EEG) are two popular non-invasive techniques for detecting brain activity by measuring magnetic fields or electric potentials on or near the scalp surface with excellent temporal resolution. Brain activity can then be reconstructed from the MEG/EEG recordings. The reconstruction of brain sources is a challenging problem because it involves solving for unknown brain activity across thousands of voxels from the recordings of just a few hundred sensors, so the solution is inherently non-unique. To circumvent this nonuniqueness, various estimation procedures incorporate prior knowledge and constraints about source characteristics.

Most source reconstruction algorithms can be viewed in a Bayesian framework [1]. This perspective is useful because, at a high level, the prior distribution, whether implicitly or explicitly imposed, can be used to differentiate and compare the various source localization methods. Recently, we developed Champagne, a novel tomographic source reconstruction algorithm that is derived in an empirical Bayesian framework and incorporates deep theoretical ideas about sparse-source recovery from noisy, constrained measurements. Champagne improves upon existing methods of source reconstruction in terms of reconstruction accuracy, robustness, and computational efficiency [2]. Experiments with preliminary simulated and real data, presented in [3], show that, compared to other commonly used source localization algorithms, Champagne is more robust to correlated sources and noisy data. However, baseline data are necessary for accurate noise estimation and reconstruction of source activity. In certain situations, such as resting-state recordings, such baseline or pre-stimulus information is not available for analysis. Thus, it is unclear how reconstruction methods such as Champagne should be applied in these scenarios.

To address this problem, we present a novel Bayesian scanning algorithm, called Covariance Optimization Garnering Noise for Active Cancellation (COGNAC), for magnetoencephalography (MEG) and electroencephalography (EEG) source localization, which uses a probabilistic graphical generative model to describe the sensor data measurements. Our generative model partitions the contributions to the sensor data into those from sources at a particular scan location and those from sources outside the scan location, with corresponding Gaussian variance parameters that are estimated from the data. Bayesian inference for this generative model, i.e., maximizing the marginal likelihood of the data, results in a cost function that can be optimized efficiently using convex bounding techniques. The resulting inference algorithm is robust in reconstructing highly correlated sources and in the presence of high levels of interference and noise. The performance of the algorithm is evaluated on both simulated and real data.

II. Theory

The generative model for the observed MEG and EEG data, both of which use an array of sensors to take electromagnetic field (or voltage potential) measurements on or near the scalp surface with excellent temporal resolution, is as follows. In both cases, the observed sensor data $y(t) = [y_1(t), y_2(t), \ldots, y_M(t)]^T$ at time point $t$ are used to map the source activity generated by synchronous, compact current sources located within the brain, where $M$ is the total number of sensors. We express the whole sensor time course as $y = \{y(t_1), y(t_2), \ldots, y(t_K)\}$, where $t_k$ refers to the $k$-th time point and $K$ is the number of time samples. Typical brain-activity source models apply a voxel discretization over the whole brain volume and assume a fixed source at each voxel. The source activities are denoted by $s(t) = [s_1(t)^T, s_2(t)^T, \ldots, s_N(t)^T]^T$, where $N$ is the total number of voxels; $s(t)$ is called the voxel vector, and the whole source time course is expressed as $s = \{s(t_1), s(t_2), \ldots, s(t_K)\}$. The source at the $i$-th voxel is expressed as a $d_c$-dimensional vector $s_i(t) \in \mathbb{R}^{d_c \times 1}$, where $d_c$ denotes the number of source directions$^1$. We define the lead-field matrix as $l = [l_1, \ldots, l_N]$, with $l_i \in \mathbb{R}^{M \times d_c}$ for the $i$-th voxel; the $p$-th column of $l_i$ represents the signal vector that would be observed at the scalp given a unit current source/dipole at the $i$-th voxel with a fixed orientation in the $p$-th direction.

We assume that the whole brain activity can be represented by one source at a particular scan location (grid point) plus sources at other locations in the brain. The data vector can be expressed as:

$$y(t) = l_i s_i(t) + \sum_{j=1,\, j \neq i}^{N} l_j s_j(t) + \varepsilon. \tag{1}$$
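As a concrete illustration of the partitioned data model in Eq. (1), the following minimal NumPy sketch simulates sensor measurements as the contribution of one scan-voxel source plus sources at all other voxels plus sensor noise. The dimensions, lead field, and amplitudes are arbitrary illustrative choices of ours rather than values used in our experiments, and the priors on these quantities are specified in Eqs. (2)-(4) below.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 32, 200, 100                    # sensors, voxels, time samples (illustrative sizes)
L = rng.standard_normal((M, N))           # lead-field matrix l = [l_1, ..., l_N], with d_c = 1

i = 42                                    # index of the current scan voxel
s = 0.1 * rng.standard_normal((N, K))     # background sources s_j(t), j != i
s[i] = rng.standard_normal(K)             # source s_i(t) at the scan location
noise = 0.05 * rng.standard_normal((M, K))

# Eq. (1): y(t) = l_i s_i(t) + sum_{j != i} l_j s_j(t) + noise
Y = L @ s + noise                         # sensor data, shape (M, K)
```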

The model in Eq. (1) is converted into a probabilistic model when we specify prior distributions for the unknown variables, such that

$$p(s_i(t)) = \mathcal{N}(s_i(t) \mid 0,\, v_i^{-1}). \tag{2}$$

We divide the non-scan voxel space into $R$ a priori specified regions (or tiles). The specification of regions could be based on either anatomical or functional criteria; the algorithm is agnostic to this choice. We assume that the regions are non-overlapping, i.e., each voxel belongs to exactly one region, and that the $r$-th region contains $N_r$ voxels. Importantly, we assume that voxels within the same region share the same prior regional variance:

$$p(s_j(t)) = \mathcal{N}(s_j(t) \mid 0,\, \Omega_r^{-1}), \tag{3}$$

where $v_i \in \mathbb{R}^{d_c \times d_c}$ and $\Omega_r \in \mathbb{R}^{d_c \times d_c}$ are the prior precisions for the $i$-th scanning voxel's activity and for the $r$-th region, respectively. The Gaussian noise is assumed to be distributed as

$$p(\varepsilon) = \mathcal{N}(\varepsilon \mid 0,\, \lambda^{-1}), \tag{4}$$

where $\lambda$ is a diagonal precision matrix. The $i$-th scanning source vector is estimated by its posterior mean $\bar{s}_i(t_k)$, which is given by

$$\bar{s}_i(t_k) = E[s_i(t_k) \mid y] = v_i^{-1} l_i^T \Sigma_i^{-1} y(t_k), \tag{5}$$

where $\Sigma_i$ is the data model covariance, expressed as

$$\Sigma_i = \lambda^{-1} + l_i v_i^{-1} l_i^T + \sum_{r=1}^{R} \sum_{j \in \zeta_r,\, j \neq i} l_j \Omega_r^{-1} l_j^T, \tag{6}$$

where $\sum_{j \in \zeta_r,\, j \neq i}$ indicates summation over the voxels belonging to the $r$-th region, excluding the $i$-th scanning voxel. Estimating the model parameters $v_i$ and $\Omega_r$ involves minimizing a convex upper bound on the negative log marginal likelihood of the data [4], $\mathcal{F}(v_i, \psi_i, \Omega, \Lambda)$, such that

$$\mathcal{F} = \frac{1}{K}\sum_{k=1}^{K}\Big[(y(t_k) - l\bar{s}(t_k))^T \lambda\, (y(t_k) - l\bar{s}(t_k)) + \bar{s}_i(t_k)^T v_i\, \bar{s}_i(t_k) + \sum_{r=1}^{R}\sum_{j \in \zeta_r,\, j \neq i} \bar{s}_j(t_k)^T \Omega_r\, \bar{s}_j(t_k)\Big] + \operatorname{tr}(\psi_i^T v_i^{-1}) + \sum_{r=1}^{R}\operatorname{tr}(\Lambda_r^T \Omega_r^{-1}) + \mathrm{const}, \tag{7}$$

where the posterior mean of $s_i(t_k)$ is denoted by $\bar{s}_i(t_k)$. The update rules for the parameters $\hat{v}_i$ and $\hat{\Omega}_r$ are given by

$$\hat{v}_i^{-1} = \left(\frac{\frac{1}{K}\sum_{k=1}^{K} \bar{s}_i(t_k)\,\bar{s}_i^T(t_k)}{\psi_i}\right)^{1/2}, \tag{8}$$
$$\hat{\Omega}_r^{-1} = \left(\frac{\frac{1}{K N_r}\sum_{k=1}^{K}\sum_{j \in \zeta_r,\, j \neq i} \bar{s}_j(t_k)\,\bar{s}_j(t_k)^T}{\Lambda_r}\right)^{1/2}. \tag{9}$$

The update rules for the auxiliary variables $\psi_i$ and $\Lambda_r$ are

$$\psi_i = l_i^T \Sigma_y^{-1} l_i, \tag{10}$$
$$\Lambda_r = \sum_{j \in \zeta_r,\, j \neq i} l_j^T \Sigma_y^{-1} l_j. \tag{11}$$
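For $d_c = 1$, all of the precisions above are scalars and a single pass of these updates can be written compactly. The sketch below continues the toy variables from the snippet after Eq. (1): it assigns each voxel to one of R regions, builds the model covariance of Eq. (6), computes the posterior means (Eq. (5) for the scan voxel and Eq. (14), given below, for the remaining voxels), and then applies the updates of Eqs. (8)-(11). Initial values and variable names are our own illustrative choices, not part of the method specification.

```python
R = 10
region = rng.integers(0, R, size=N)            # non-overlapping region (tile) label per voxel
v_i, Omega, lam = 1.0, np.ones(R), np.ones(M)  # initial precisions: scan voxel, regions, noise

# Model data covariance, Eq. (6): diagonal noise covariance plus the prior variance of every
# voxel (1/v_i at the scan voxel, 1/Omega_r for a voxel in region r).
w = np.where(np.arange(N) == i, 1.0 / v_i, 1.0 / Omega[region])
Sigma = np.diag(1.0 / lam) + (L * w) @ L.T
Sigma_inv = np.linalg.inv(Sigma)

# Posterior means: row i follows Eq. (5), the other rows follow Eq. (14).
S_bar = (L.T @ Sigma_inv @ Y) * w[:, None]     # shape (N, K)

# Auxiliary variables, Eqs. (10)-(11), and precision updates, Eqs. (8)-(9) (scalar form).
psi_i = L[:, i] @ Sigma_inv @ L[:, i]
v_i = np.sqrt(psi_i * K / np.sum(S_bar[i] ** 2))
for r in range(R):
    idx = np.flatnonzero((region == r) & (np.arange(N) != i))
    if idx.size == 0:
        continue
    Lambda_r = np.einsum('mj,mn,nj->', L[:, idx], Sigma_inv, L[:, idx])
    N_r = np.count_nonzero(region == r)
    Omega[r] = np.sqrt(Lambda_r * K * N_r / np.sum(S_bar[idx] ** 2))
```

In practice these updates, together with the noise update below, are iterated until the bound of Eq. (7) stops decreasing.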

Defining

$$n(t_k) = y(t_k) - l_i \bar{s}_i(t_k) - \sum_{r=1}^{R}\sum_{j \in \zeta_r,\, j \neq i} l_j \bar{s}_j(t_k), \tag{12}$$

the noise precision can be estimated using

$$\lambda^{-1} = \frac{1}{K}\sum_{k=1}^{K} \operatorname{diag}\!\big[E[n(t_k)\, n^T(t_k)]\big], \tag{13}$$

where $\operatorname{diag}(A)$ indicates the operation that creates a diagonal matrix whose diagonal entries are equal to the diagonal elements of the matrix $A$. For the $j$-th non-scan voxel, which belongs to the $r$-th region, the source vector is estimated by its posterior mean $\bar{s}_j(t_k)$, which is given by

$$\bar{s}_j(t_k) = \Omega_r^{-1} l_j^T \Sigma_y^{-1} y(t_k). \tag{14}$$
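A sketch of the residual-based noise update of Eqs. (12)-(13), continuing the toy variables above. For brevity, the expectation E[n(t_k) n^T(t_k)] is replaced here by the plain outer product of the residual, i.e. the posterior-covariance contribution to that expectation is omitted in this illustration.

```python
# Residual after removing all modeled source contributions, Eq. (12):
# n(t_k) = y(t_k) - l_i s_bar_i(t_k) - sum_r sum_{j in region r, j != i} l_j s_bar_j(t_k)
resid = Y - L @ S_bar

# Diagonal noise precision, Eq. (13): lambda^{-1} = (1/K) sum_k diag(E[n n^T]),
# approximated here by the per-sensor mean squared residual.
lam = 1.0 / np.mean(resid ** 2, axis=1)
```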

A summary of the proposed algorithm is shown in Table 1.

Table 1.

The COGNAC Algorithm

repeat
 1: Set the target voxel index to i = 1.
 2: Set appropriate initial values for $v_i$, $\Omega_r$, $\lambda$; compute $\psi_i$, $\Lambda_r$ using Eqs. (10), (11), and $\bar{s}_i(t_k)$ using Eq. (5) for k = 1, …, K.
 repeat
  3: Update $v_i$, $\Omega_r$, $\lambda$ using Eqs. (8), (9), (13), (14).
  4: Update $\psi_i$, $\Lambda_r$, $\bar{s}_i(t_k)$ with the updated values of $v_i$, $\Omega_r$, $\lambda$ using Eqs. (10), (11), (5).
 until the cost function of Eq. (7) converges.
 5: Set the target voxel to the (i + 1)-th voxel.
until all voxels are scanned (i = N).
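The scan in Table 1 can then be sketched as a loop over voxels, with the inner updates repeated a fixed number of times as a simple stand-in for monitoring convergence of the cost in Eq. (7). The following self-contained NumPy sketch, again for $d_c = 1$, is a schematic re-implementation under our own naming, initialization, and stopping choices, not the code used for the experiments below.

```python
import numpy as np

def cognac_scan(Y, L, region, n_iter=20):
    """Schematic COGNAC-style scan (Table 1) for d_c = 1, with scalar precisions."""
    M, K = Y.shape
    N = L.shape[1]
    R = int(region.max()) + 1
    power = np.zeros(N)                                  # reconstructed power per scan voxel

    for i in range(N):                                   # steps 1 and 5: loop over target voxels
        v_i, Omega, lam = 1.0, np.ones(R), np.ones(M)    # step 2: initial hyperparameter values
        for _ in range(n_iter):                          # steps 3-4, iterated
            # Model covariance, Eq. (6), and posterior means, Eqs. (5) and (14).
            w = np.where(np.arange(N) == i, 1.0 / v_i, 1.0 / Omega[region])
            Sigma_inv = np.linalg.inv(np.diag(1.0 / lam) + (L * w) @ L.T)
            S_bar = (L.T @ Sigma_inv @ Y) * w[:, None]

            # Hyperparameter updates, Eqs. (8)-(11).
            psi_i = L[:, i] @ Sigma_inv @ L[:, i]
            v_i = np.sqrt(psi_i * K / max(np.sum(S_bar[i] ** 2), 1e-12))
            for r in range(R):
                idx = np.flatnonzero((region == r) & (np.arange(N) != i))
                if idx.size == 0:
                    continue
                Lambda_r = np.einsum('mj,mn,nj->', L[:, idx], Sigma_inv, L[:, idx])
                N_r = np.count_nonzero(region == r)
                Omega[r] = np.sqrt(Lambda_r * K * N_r / max(np.sum(S_bar[idx] ** 2), 1e-12))

            # Noise update, Eqs. (12)-(13) (posterior covariance omitted, as above).
            resid = Y - L @ S_bar
            lam = 1.0 / np.maximum(np.mean(resid ** 2, axis=1), 1e-12)

        power[i] = np.mean(S_bar[i] ** 2)                # source power at the scanned voxel
    return power
```

A map of `power` over voxels can then be thresholded (e.g., at 10 percent of its maximum, as in the figures below) to display the reconstructed sources.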

We note that the computational complexity of the proposed algorithm is on the order of $O(MK(R-1)N)$, roughly equivalent to a single-dipole scan, which is on the order of $O(K(M^2 + N))$. Both are much smaller than the complexity of a multi-dipole scan, which is on the order of $O(KN^P)$, where $P$ is the number of dipoles. According to our tests with the initialization described below, using a scalar lead-field matrix with more than 8000 voxels, COGNAC can complete a whole-brain scan and localize activity within half an hour (running MATLAB on a Xeon workstation with 12 CPU cores and 16 GB of memory).

III. Performance evaluation

Several representative source localization algorithms were chosen against which to evaluate the performance of COGNAC: an adaptive spatial filtering method, the linearly constrained minimum-variance beamformer (referred to as Beamformer) [5]; a non-adaptive weighted minimum-norm method, standardized low-resolution tomographic analysis (referred to as sLORETA) [6]; and a Bayesian algorithm, multiple sparse priors (MSP) [7].

We tested our novel algorithm using a scalar lead-field matrix constructed within the brain volume assuming a single-shell spherical model [8], as implemented in SPM12 (http://www.fil.ion.ucl.ac.uk/spm), at the default spatial resolution of 8196 voxels with approximately 5 mm spacing. We simulated 5 point sources, set the SNR to 0 dB and the correlation between all sources to 0.5, ran this configuration more than 50 times, and averaged the results. Two metrics, the A-prime (A′) metric and the aggregate performance (AP), were used to evaluate the performance of the novel algorithm [2], [3], [4]. The A′ metric estimates the area under the FROC curve for one hit-rate (HR) and false-rate (FR) pair, or in our case, for each simulation. If the area under the FROC curve is large, then the hit rate is high compared to the false-positive rate. The aggregate performance (AP) captures the accuracy of both the source locations and the time courses estimated by the algorithms.
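One standard way to obtain such an estimate from a single hit-rate/false-rate pair is the signal-detection-theory A′ formula sketched below. This is only a plausible illustration of the metric summarized above; the exact FROC-based definition used for the results reported here is the one described in [2], [3], [4].

```python
def a_prime(hit_rate: float, false_rate: float) -> float:
    """A' estimate of the area under the (F)ROC curve from one (HR, FR) pair;
    1.0 is perfect detection and 0.5 is chance level."""
    hr, fr = hit_rate, false_rate
    if hr == fr:
        return 0.5
    if hr > fr:
        return 0.5 + (hr - fr) * (1 + hr - fr) / (4 * hr * (1 - fr))
    return 0.5 - (fr - hr) * (1 + fr - hr) / (4 * fr * (1 - hr))

print(a_prime(0.8, 0.1))   # ~0.91: many hits, few false positives
```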

For the real-data tests, we first chose resting-state MEG data, for which subjects were instructed simply to keep their eyes closed and not to think of anything in particular; for COGNAC, we used the diagonal noise covariance estimated using a variational Bayesian factor analysis (VBFA) model [9]. The EEG data (128-channel ActiveTwo system) were downloaded from the SPM website (http://www.fil.ion.ucl.ac.uk/spm/data/mmfaces), and the lead field was calculated in SPM8 using the coarse resolution. The EEG paradigm involved randomized presentation of at least 86 faces and 86 scrambled faces. Here, we subtract the averaged scrambled-face data from the averaged face data (86 trials each) to study the differential response to faces versus scrambled faces [10]. This EEG dataset has been analyzed in our prior publications using the Champagne algorithm, and further details can be found in [3].

IV. Results

A. Simulation Results

COGNAC shows better source reconstruction results with the scalar lead-field matrix. Figure 1 presents the simulation results using the scalar lead-field matrix with 5 active point sources, with the SNR set to 0 dB and the correlation among sources set to 0.5. COGNAC performs best among all of the source reconstruction algorithms.

Fig. 1: Performance evaluation with 5 point sources. The SNR and inter-source correlation were set to 0 dB and 0.5, respectively. Results were averaged over 50 simulations.

B. Real data: Resting State Data

The localization results for the resting-state data from three subjects are shown in Figure 2. As can be seen, COGNAC consistently localizes brain activity during rest, while the benchmark algorithms do not localize resting-state activity consistently. Even though no pre-stimulus data are available for estimating background noise in the resting state, COGNAC still recovers reasonable activity, whereas Beamformer and sLORETA cannot localize brain activity consistently across all subjects.

Fig. 2: Resting-state data analysis for three subjects. The benchmark algorithms Beamformer and sLORETA are plotted in the first and second columns; COGNAC is shown in the last column. Thresholding is performed at 10 percent of the maximum power.

C. Real data: Face-Processing task: EEG

In Figure 3, we present the results from the novel algorithm and the benchmarks on the face-processing EEG data set. Figure 3 shows the average power, the M100 peak power, and the M170 peak power in separate rows. We see that COGNAC is able to localize brain activity with sparse peaks in visual areas and the fusiform gyrus, while the benchmark algorithms produce activity with incorrect and/or diffuse localization.

Fig. 3: Results for the face-processing EEG data from the novel algorithm and the benchmarks. The first row shows the average power map from 0 ms to 400 ms; the second and third rows show the peak power activity at 100 ms and 170 ms.

V. DISCUSSION

We have described a novel scanning algorithm for MEG/EEG source localization that systematically accounts for contributions to the sensors from activity outside the scan location. Importantly, the algorithm also estimates heteroscedastic sensor noise from the observed data without the need for additional pre-stimulus or baseline data. We show that the performance of COGNAC is superior to existing benchmarks on both simulations and real MEG/EEG data with complex source configurations. COGNAC is robust in reconstructing highly correlated sources and in the presence of high levels of interference and noise. Its superior performance arises from an explicit parameterization of the covariance partitions of the sensor data into both non-scan-location activity and sensor noise. No other existing source localization algorithm simultaneously estimates sensor noise in the data while retaining localization accuracy. Our future work will include extensions to non-Gaussian source estimation and spatio-temporal smoothness in the COGNAC generative model.

Acknowledgments

This work was supported by NIH grants R01EB022717, R01DC013979, R01NS100440 and MRP-17-454755

Footnotes

1. It is common to assume $d_c = 2$ (for MEG) or $d_c = 3$ (for EEG). For simplicity, we assume $d_c = 1$ during the derivation.

References

• [1]. Sekihara K and Nagarajan SS, Electromagnetic Brain Imaging: A Bayesian Perspective. Berlin, Heidelberg: Springer-Verlag, 2015.
• [2]. Wipf DP, Owen JP, Attias HT, Sekihara K, and Nagarajan SS, “Robust Bayesian estimation of the location, orientation, and time course of multiple correlated neural sources using MEG,” NeuroImage, vol. 49, no. 1, pp. 641–655, 2010.
• [3]. Owen JP, Wipf DP, Attias HT, Sekihara K, and Nagarajan SS, “Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data,” NeuroImage, vol. 60, no. 1, pp. 305–323, 2012.
• [4]. Cai C, Sekihara K, and Nagarajan SS, “Hierarchical multiscale Bayesian algorithm for robust MEG/EEG source reconstruction,” NeuroImage, vol. 183, pp. 698–715, 2018.
• [5]. Sekihara K and Scholz B, “Generalized Wiener estimation of three-dimensional current distribution from biomagnetic measurements,” in Biomag 96: Proceedings of the Tenth International Conference on Biomagnetism, Aine CJ et al., Eds. New York: Springer-Verlag, 1996, pp. 338–341.
• [6]. Dale AM, Liu AK, Fischl BR, Buckner RL, Belliveau JW, Lewine JD, and Halgren E, “Dynamic statistical parametric mapping: Combining fMRI and MEG for high-resolution imaging of cortical activity,” Neuron, vol. 26, pp. 55–67, 2000.
• [7]. Friston K, Harrison L, Daunizeau J, Kiebel S, Phillips C, Trujillo-Barreto N, Henson R, Flandin G, and Mattout J, “Multiple sparse priors for the M/EEG inverse problem,” NeuroImage, vol. 39, no. 3, pp. 1104–1120, 2008.
• [8]. Hallez H, Vanrumste B, Grech R, Muscat J, De Clercq W, Vergult A, D’Asseler Y, Camilleri KP, Fabri SG, Van Huffel S, et al., “Review on solving the forward problem in EEG source analysis,” Journal of NeuroEngineering and Rehabilitation, vol. 4, no. 1, p. 46, 2007.
• [9]. Nagarajan SS, Attias H, Hild KE 2nd, and Sekihara K, “A probabilistic algorithm for robust interference suppression in bioelectromagnetic sensor data,” Statistics in Medicine, vol. 20, pp. 3886–3910, 2007.
• [10]. Henson RN, Flandin G, Friston KJ, and Mattout J, “A parametric empirical Bayesian framework for fMRI-constrained MEG/EEG source reconstruction,” Human Brain Mapping, vol. 31, no. 10, pp. 1512–1531, 2010.
