Fusion of ULS Group Constrained High- and Low-Order Sparse Functional Connectivity Networks for MCI Classification

  • Original Article
  • Published in: Neuroinformatics

Abstract

Functional connectivity networks derived from resting-state fMRI data have been shown to be effective biomarkers for distinguishing mild cognitive impairment (MCI) from healthy aging. However, the traditional functional connectivity network is essentially a low-order network built on the assumption that brain activity is static over the entire scanning period, ignoring temporal variations in the correlations between brain region pairs. To overcome this limitation, we propose a new type of sparse functional connectivity network to precisely describe the temporal correlations among brain regions. Specifically, instead of using the simple pairwise Pearson’s correlation coefficient as connectivity, we first estimate the temporal low-order functional connectivity for each region pair based on an ULS Group constrained-UOLS regression algorithm, where a combination of the ultra-least squares (ULS) criterion with a Group constrained topology structure detection algorithm is applied to detect the topology of the functional connectivity networks, aided by an Ultra-Orthogonal Least Squares (UOLS) algorithm that estimates the connectivity strength. Compared to the classical least squares criterion, which only measures the discrepancy between the observed signals and the model prediction function, the ULS criterion also takes into consideration the discrepancy between the weak derivatives of the observed signals and of the model prediction function, and thus avoids overfitting. Using a similar approach, we then estimate the high-order functional connectivity from the low-order connectivity to characterize signal flows among the brain regions. We finally fuse the low-order and high-order networks using two decision trees for MCI classification. Experimental results demonstrate the effectiveness of the proposed method on MCI classification.





Acknowledgements

This work was supported by the National Natural Science Foundation of China [U1809209, 61671042, 61403016, 31871113], Beijing Natural Science Foundation [L182015, 4172037], and Open Fund Project of Fujian Provincial Key Laboratory in Minjiang University [MJUKF201702]. An earlier version of this paper was presented at the International Workshop on Machine Learning in Medical Imaging (MLMI 2017).

Author information

Corresponding authors

Correspondence to Jingyu Liu or Dinggang Shen.

Ethics declarations

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This study was approved by the local ethical committee.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

Conflict of Interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

The Weak Derivative and the Ultra-Least Squares Criterion

A linear system with k inputs and one output can be described by the linear regression model below:

$$ y(t)={\sum}_{i=1}^k{\theta}_i{x}_i(t)+e, $$
(11)

where y(t) and xi(t) denote the system output and input variables, θi is the system parameter, and e is the system noise. For this system, the ordinary least squares regression problem can be solved via the least squares criterion as follows:

$$ {J}_{LS}={\left\Vert y(t)-{\varSigma}_{i=1}^k{\theta}_i{x}_i(t)\right\Vert}_2^2, $$
(12)

where t ∈ [0, T], and y(t) and xi(t) are time-dependent signals with finite amplitude on the interval [0, T]; thus y(t) and xi(t) are L2 integrable functions belonging to the Lebesgue space L2([0, T]), where L2([0, T]) = {x(t)| ∫[0, T]|x(t)|2dt <  + ∞}. Supposing \( \widehat{y}(t) \) is the prediction function of y(t), it is obvious that the least squares criterion only measures the discrepancy between y(t) and \( \widehat{y}(t) \) over the whole interval [0, T], ignoring how the discrepancy is distributed at individual time points. Therefore, the least squares criterion cannot accurately describe the similarity of function shapes and discards the information about correlations among data points, leading to the common overfitting problem in the identification of dynamic systems (Li et al. 2018a; Guo et al. 2016).
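As a point of reference, minimizing the least squares criterion of Eq. (12) amounts to an ordinary linear regression, which can be sketched as follows. The sinusoidal input signals, the number of regressors, and the noise level are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical example: fit theta in y(t) = sum_i theta_i * x_i(t) + e
# by minimizing J_LS = ||y - sum_i theta_i x_i||_2^2 (Eq. 12).
rng = np.random.default_rng(0)
J, k = 200, 3                                   # J time points, k inputs
t = np.linspace(0.0, 1.0, J)
X = np.column_stack([np.sin((i + 1) * 2 * np.pi * t) for i in range(k)])
theta_true = np.array([0.8, -0.5, 0.3])
y = X @ theta_true + 0.01 * rng.standard_normal(J)

# Least squares estimate of the system parameters.
theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = np.sum((y - X @ theta_ls) ** 2)      # value of J_LS at the estimate
```

Note that `residual` aggregates the discrepancy over the whole interval, which is exactly the limitation the ULS criterion addresses: two fits with the same residual can differ widely in how well they track the shape of y(t).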

In order to overcome this limitation, we integrate a weak derivative part into the least squares criterion to construct an ULS criterion:

$$ {J}_{ULS}={\left\Vert y(t)-{\varSigma}_{i=1}^k{\theta}_i{x}_i(t)\right\Vert}_2^2+{\varSigma}_{l=1}^L{\left\Vert {D}^ly(t)-{\varSigma}_{i=1}^k{\theta}_i{D}^l{x}_i(t)\right\Vert}_2^2 $$
(13)

where Dl is the l-th order weak derivative (l = 1, 2, … , L). The weak derivative, which captures interconnections among the data points, is a generalization of the derivative in the usual sense. Unlike ordinary derivatives, which exist only for differentiable functions, weak derivatives can be calculated for all integrable functions. Supposing that x(t) belongs to the Lebesgue space L2([0, T]), the l-th order weak derivative of x(t) is defined as the function Dlx(t) ∈ L2([0, T]) which satisfies

$$ {\int}_{\left[0,T\right]}x(t){D}^l\varphi (t) dt={\left(-1\right)}^l{\int}_{\left[0,T\right]}\varphi (t){D}^lx(t) dt, $$
(14)

for all infinitely differentiable functions φ(t) with φ(0) = φ(T) = 0. As discussed in Guo et al. (2016), a regression model fitted with weak derivatives takes into account the relationships among data points and is therefore more effective and accurate. Given discrete observations of the system signals, {y(j)}, {xi(j)}, j = 1, 2, … , J, the l-th order weak derivative can be calculated as

$$ {D}^ly(h)={\varSigma}_{j=h}^{h+{J}_0}y(j){\varphi}^{(l)}\left(j-h\right)\ \left(h=1,2,\dots, J-{J}_0\right) $$
(15)
$$ {D}^l{x}_i(h)={\varSigma}_{j=h}^{h+{J}_0}{x}_i(j){\varphi}^{(l)}\left(j-h\right)\ \left(h=1,2,\dots, J-{J}_0\right) $$
(16)

where φ(t) (t ∈ [0, J0]) is the test function, which is l-times differentiable on the interval [0, J0], and φ(l)(t) denotes the l-th order derivative of φ(t). Because weak derivatives of the original signals up to order L (l = 1, 2, … , L) are used in this work, the test function is required to be L-times differentiable. Therefore, the (L + 1)-th order B-spline basis function, which satisfies this condition, is adopted as the test function in this paper. More details on B-spline basis functions and the weak derivative can be found in Guo et al. (2016).
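A minimal discrete sketch of Eq. (15) for l = 1 follows. The paper adopts a (L + 1)-th order B-spline as the test function; here a simple smooth bump φ(t) = sin²(πt/J0), which also vanishes at both ends of [0, J0], is substituted purely for illustration:

```python
import numpy as np

def weak_derivative_1(y, J0=10):
    """First-order weak derivative of a sampled signal y, per Eq. (15):
    D y(h) = sum_{j=h}^{h+J0} y(j) * phi'(j - h), i.e. a sliding inner
    product of the signal with the derivative of the test function.
    phi(t) = sin^2(pi*t/J0) stands in for the B-spline of the paper."""
    j = np.arange(J0 + 1)
    # phi(t) = sin^2(pi*t/J0)  =>  phi'(t) = (pi/J0) * sin(2*pi*t/J0)
    phi1 = (np.pi / J0) * np.sin(2.0 * np.pi * j / J0)
    return np.array([y[h:h + J0 + 1] @ phi1 for h in range(len(y) - J0)])
```

For a constant signal the result is identically zero, and for a linear ramp it is constant in h, reflecting how the sliding inner product encodes the local trend of the signal rather than a pointwise value.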

The Lebesgue space L2([0, T]) = {x(t)| ∫[0, T]|x(t)|2dt <  + ∞} is a function space, in which the functions are L2 integrable (i.e. the l2-norm of the function is finite). Meanwhile, the Sobolev space HL([0, T]) = {x(t)| x(t) ∈ L2([0, T]), Dlx ∈ L2([0, T]), l = 1, 2, ⋯, L} is a subspace of L2([0, T]), in which not only the functions but also the l-th order weak derivatives of the functions (l = 1, 2, … , L) are L2 integrable (i.e. belong to L2([0, T])). The definition of Sobolev space HL([0, T]) can also be written as

$$ {H}^L\left(\left[0,T\right]\right)=\left\{x(t)|{\int}_{\left[0,T\right]}{\left|x(t)\right|}^2 dt<+\infty, {\int}_{\left[0,T\right]}{\left|{D}^lx\right|}^2 dt<+\infty, l=1,2,\cdots, L\right\}. $$
(17)

The least squares criterion only needs to calculate the l2-norm of the discrepancy between the observed signal y(t) and the model prediction function \( {\sum}_{i=1}^k{\theta}_i{x}_i(t) \), and thus is defined in the Lebesgue space L2([0, T]). The ULS criterion, however, calculates not only the l2-norm of the discrepancy between y(t) and \( {\sum}_{i=1}^k{\theta}_i{x}_i(t) \), but also the l2-norm of the discrepancy between their weak derivatives. Therefore, the weak derivatives of y(t) and \( {\sum}_{i=1}^k{\theta}_i{x}_i(t) \) are required to belong to L2([0, T]), and hence the functions y(t) and xi(t) in the ULS criterion are required to belong to HL([0, T]). For these reasons, the ULS criterion is defined in the Sobolev space HL([0, T]).

The fMRI time series is a low-frequency signal with finite energy. Thus, the fMRI time series and its weak derivatives are L2 integrable functions (i.e. belong to L2([0, T])). The fMRI time series can be further considered as the discrete observations of the signals belonging to HL([0, T]). Therefore, the ULS criterion is applicable to the study of fMRI time series.

The new criterion considers not only the discrepancy between the observed signal and the model prediction function, but also the discrepancy between their weak derivatives. Thus, the ULS criterion is a more accurate standard for evaluating model fitness. Essentially, the ULS criterion combines the least squares criterion with the weak derivatives of the original signals. By concatenating the original signals y(t) and xi(t) with their weak derivatives Dly(t) and Dlxi(t) (l = 1, 2,  … , L), we generate the corresponding ultra-signals \( \overset{\sim }{y}(t)={\left[{\left(y(t)\right)}^T,{\left({D}^1y(t)\right)}^T,{\left({D}^2y(t)\right)}^T,\dots, {\left({D}^Ly(t)\right)}^T\right]}^T \) and \( {\overset{\sim }{x}}_i(t)={\left[{\left({x}_i(t)\right)}^T,{\left({D}^1{x}_i(t)\right)}^T,{\left({D}^2{x}_i(t)\right)}^T,\dots, {\left({D}^L{x}_i(t)\right)}^T\right]}^T \), and Eq. (13) can be rewritten as

$$ {J}_{ULS}={\left\Vert \overset{\sim }{y}(t)-{\varSigma}_{i=1}^k{\theta}_i{\overset{\sim }{x}}_i(t)\right\Vert}_2^2. $$
(18)

Therefore, we can integrate the ULS criterion into our proposed framework by incorporating the weak derivatives into the original time series.
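The stacking in Eq. (18) reduces the ULS problem to a single ordinary least squares problem on the ultra-signals. A minimal sketch, in which a plain first-order finite difference stands in for the weak derivative of Eq. (15) purely to keep the example short:

```python
import numpy as np

def ultra_least_squares(X, y):
    """Solve a stacked least squares problem in the spirit of Eq. (18):
    each signal is concatenated with a first-order difference (a crude
    stand-in for its weak derivative), and one ordinary least squares
    fit is run on the stacked ultra-signals."""
    stack = lambda s: np.concatenate([s, np.diff(s, axis=0)], axis=0)
    theta, *_ = np.linalg.lstsq(stack(X), stack(y), rcond=None)
    return theta
```

On noiseless data the stacked estimate coincides with the plain least squares solution; the two criteria differ once noise is present, because the derivative terms penalize fits whose shape deviates from the observed signal even when the pointwise residual is small.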


About this article


Cite this article

Li, Y., Liu, J., Peng, Z. et al. Fusion of ULS Group Constrained High- and Low-Order Sparse Functional Connectivity Networks for MCI Classification. Neuroinform 18, 1–24 (2020). https://doi.org/10.1007/s12021-019-09418-x

