DOI: 10.1007/978-3-540-74958-5_29
Article

Bayesian Inference for Sparse Generalized Linear Models

Published: 17 September 2007

Abstract

We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal spiking data from multiple electrodes, demonstrating significantly superior predictive performance when a sparsity assumption is enforced via a Laplace prior distribution.
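To make the sparsity assumption concrete: a Laplace prior on the weights of a Poisson GLM corresponds, at the level of point (MAP) estimation, to L1-penalized maximum likelihood. The sketch below is illustrative only and is not the paper's EP algorithm (EP approximates the full posterior, not just its mode); it fits a hypothetical sparse Poisson GLM by proximal gradient descent, where the soft-thresholding step encodes the Laplace prior. All names (`fit_sparse_poisson_glm`, `lam`, `lr`) are assumptions for this sketch.

```python
import numpy as np

def fit_sparse_poisson_glm(X, y, lam=0.1, lr=0.01, n_iter=2000):
    """MAP estimate for a Poisson GLM (log link) under a Laplace prior,
    i.e. L1-penalized likelihood, via proximal gradient descent (ISTA)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        rate = np.exp(X @ w)                  # Poisson mean under the log link
        grad = X.T @ (rate - y) / n           # gradient of the negative log-likelihood
        w = w - lr * grad                     # gradient step on the smooth part
        # proximal step: soft-thresholding induced by the Laplace prior
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# Synthetic demo: only the first 3 of 10 coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
w_true = np.zeros(10)
w_true[:3] = [0.8, -0.5, 0.3]
y = rng.poisson(np.exp(X @ w_true))
w_hat = fit_sparse_poisson_glm(X, y)
```

The soft-thresholding step is what drives irrelevant coefficients exactly to zero, which is the mechanism behind the superior predictive performance the abstract reports for the Laplace prior; the paper's contribution is to carry this prior through a full (log-concave) posterior approximation with EP rather than stopping at the mode.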


Cited By

  • (2019) Pairwise Comparisons with Flexible Time-Dynamics. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1236-1246. DOI: 10.1145/3292500.3330831
  • (2010) A hierarchical Bayesian model for frame representation. IEEE Transactions on Signal Processing 58(11), 5560-5571. DOI: 10.1109/TSP.2010.2055562
  • (2008) Bayesian Inference and Optimal Design for the Sparse Linear Model. The Journal of Machine Learning Research 9, 759-813. DOI: 10.5555/1390681.1390707

Published In

ECML '07: Proceedings of the 18th European Conference on Machine Learning
September 2007, 805 pages

Publisher

Springer-Verlag, Berlin, Heidelberg


