13. AISTATS 2010: Sardinia, Italy
- Yee Whye Teh, D. Mike Titterington: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2010, Chia Laguna Resort, Sardinia, Italy, May 13-15, 2010. JMLR Proceedings 9, JMLR.org 2010
- Yee Whye Teh, D. Mike Titterington: Preface.
- Ryan Prescott Adams, Hanna M. Wallach, Zoubin Ghahramani: Learning the Structure of Deep Sparse Graphical Models. 1-8
- Alekh Agarwal, Peter L. Bartlett, Max Dama: Optimal Allocation Strategies for the Dark Pool Problem. 9-16
- Morteza Alamgir, Moritz Grosse-Wentrup, Yasemin Altun: Multitask Learning for Brain-Computer Interfaces. 17-24
- Mauricio A. Álvarez, David Luengo, Michalis K. Titsias, Neil D. Lawrence: Efficient Multioutput Gaussian Processes through Variational Inducing Kernels. 25-32
- Arthur U. Asuncion, Qiang Liu, Alexander Ihler, Padhraic Smyth: Learning with Blocks: Composite Likelihood and Contrastive Divergence. 33-40
- Haakon Michael Austad, Nial Friel: Deterministic Bayesian inference for the p* model. 41-48
- Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Corinna Cortes, Mehryar Mohri: Half Transductive Ranking. 49-56
- Gilles Blanchard, Nicole Krämer: Kernel Partial Least Squares is Universally Consistent. 57-64
- Antoine Bordes, Nicolas Usunier, Ronan Collobert, Jason Weston: Towards Understanding Situated Natural Language. 65-72
- Hei Chan, Manabu Kuroki: Using Descendants as Instrumental Variables for the Identification of Direct Causal Effects in Linear SEMs. 73-80
- Shaunak Chatterjee, Stuart Russell: Why are DBNs sparse? 81-88
- Anton Chechetka, Carlos Guestrin: Focused Belief Propagation for Query-Specific Inference. 89-96
- Yutian Chen, Max Welling: Parametric Herding. 97-104
- Fabio Corradi: Mass Fatality Incident Identification based on nuclear DNA evidence. 105-112
- Corinna Cortes, Mehryar Mohri, Ameet Talwalkar: On the Impact of Kernel Approximation on Learning Accuracy. 113-120
- Botond Cseke, Tom Heskes: Improving posterior marginal approximations in latent Gaussian models. 121-128
- Shai Ben-David, Tyler Lu, Teresa Luu, Dávid Pál: Impossibility Theorems for Domain Adaptation. 129-136
- Ofer Dekel, Ohad Shamir: Multiclass-Multilabel Classification with More Classes than Examples. 137-144
- Guillaume Desjardins, Aaron C. Courville, Yoshua Bengio, Pascal Vincent, Olivier Delalleau: Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machines. 145-152
- Paramveer S. Dhillon, Dean P. Foster, Lyle H. Ungar: Feature Selection using Multiple Streams. 153-160
- Christos Dimitrakakis: Bayesian variable order Markov models. 161-168
- Nan Ding, Yuan (Alan) Qi, Rongjing Xiang, Ian M. Molloy, Ninghui Li: Nonparametric Bayesian Matrix Factorization by Power-EP. 169-176
- Trinh Minh Tri Do, Thierry Artières: Neural conditional random fields. 177-184
- Frederick Eberhardt, Patrik O. Hoyer, Richard Scheines: Combining Experiments to Discover Linear Cyclic Models with Latent Variables. 185-192
- Michael Eichler: Graphical Gaussian modelling of multivariate time series with latent variables. 193-200
- Dumitru Erhan, Aaron C. Courville, Yoshua Bengio, Pascal Vincent: Why Does Unsupervised Pre-training Help Deep Learning? 201-208
- Ayse Erkan, Yasemin Altun: Semi-Supervised Learning via Generalized Maximum Entropy. 209-216
- Raphael Fonteneau, Susan A. Murphy, Louis Wehenkel, Damien Ernst: Model-Free Monte Carlo-like Policy Evaluation. 217-224
- Florence Forbes, Senan Doyle, Daniel García-Lorenzo, Christian Barillot, Michel Dojat: A Weighted Multi-Sequence Markov Model For Brain Lesion Segmentation. 225-232
- Cameron E. Freer, Daniel M. Roy: Posterior distributions are computable from predictive distributions. 233-240
- Thomas Furmston, David Barber: Variational methods for Reinforcement Learning. 241-248
- Xavier Glorot, Yoshua Bengio: Understanding the difficulty of training deep feedforward neural networks. 249-256
- Vibhav Gogate, Rina Dechter: On Combining Graph-based Variance Reduction schemes. 257-264
- Dian Gong, Fei Sha, Gérard G. Medioni: Locally Linear Denoising on Image Manifolds. 265-272
- Steffen Grünewälder, Jean-Yves Audibert, Manfred Opper, John Shawe-Taylor: Regret Bounds for Gaussian Process Bandit Problems. 273-280
- Hui Guo, A. Philip Dawid: Sufficient covariates and linear propensity analysis. 281-288
- Shengbo Guo, Scott Sanner: Real-time Multiattribute Bayesian Preference Elicitation with Pairwise Comparison Queries. 289-296
- Michael Gutmann, Aapo Hyvärinen: Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. 297-304
- Timothy Hancock, Hiroshi Mamitsuka: Boosted Optimization for Network Classification. 305-312
- Lauren Hannah, David M. Blei, Warren B. Powell: Dirichlet Process Mixtures of Generalized Linear Models. 313-320
- Steve Hanneke, Liu Yang: Negative Results for Active Learning with Convex Losses. 321-325
- Philipp Hennig, David H. Stern, Thore Graepel: Coherent Inference on Optimal Play in Game Trees. 326-333
- Bert Huang, Tony Jebara: Collaborative Filtering via Rating Concentration. 334-341
- Jim C. Huang, Nebojsa Jojic: Maximum-likelihood learning of cumulative distribution functions on graphs. 342-349
- Tzu-Kuo Huang, Le Song, Jeff G. Schneider: Learning Nonlinear Dynamic Models from Non-sequenced Data. 350-357
- Tommi S. Jaakkola, David A. Sontag, Amir Globerson, Marina Meila: Learning Bayesian Network Structure using LP Relaxations. 358-365
- Rodolphe Jenatton, Guillaume Obozinski, Francis R. Bach: Structured Sparse Principal Component Analysis. 366-373
- Hachem Kadri, Emmanuel Duflos, Philippe Preux, Stéphane Canu, Manuel Davy: Nonlinear functional regression: a functional RKHS approach. 374-380
- Sham M. Kakade, Ohad Shamir, Karthik Sridharan, Ambuj Tewari: Learning Exponential Families in High-Dimensions: Strong Convexity and Sparsity. 381-388
- Alexandros Karatzoglou, Alexander J. Smola, Markus Weimer: Collaborative Filtering on a Budget. 389-396
- Jingu Kim, Haesun Park: Fast Active-set-type Algorithms for L1-regularized Linear Regression. 397-404
- Marius Kloft, Pavel Laskov: Online Anomaly Detection under Adversarial Impact. 405-412
- Mladen Kolar, Eric P. Xing: Ultra-high Dimensional Multiple Output Learning With Simultaneous Orthogonal Matching Pursuit: Screening Approach. 413-420
- Branislav Kveton, Michal Valko, Ali Rahimi, Ling Huang: Semi-Supervised Learning with Max-Margin Graph Cuts. 421-428
- Nevena Lazic, Brendan J. Frey, Parham Aarabi: Solving the Uncapacitated Facility Location Problem Using Message Passing Algorithms. 429-436
- Guy Lever: Relating Function Class Complexity and Cluster Structure in the Function Domain with Applications to Transduction. 437-444
- Fuxin Li, Cristian Sminchisescu: The Feature Selection Path in Kernel Methods. 445-452
- Jun Li, Dacheng Tao: Simple Exponential Family PCA. 453-460
- Han Liu, Jian Zhang, Xiaoye Jiang, Jun Liu: The Group Dantzig Selector. 461-468
- Alexander Lorbert, Peter J. Ramadge: Descent Methods for Tuning Parameter Refinement. 469-476
- Alexander Lorbert, David J. Eis, Victoria Kostina, David M. Blei, Peter J. Ramadge: Exploiting Covariate Similarity in Sparse Regression via the Pairwise Elastic Net. 477-484
- Tyler Lu, Dávid Pál, Martin Pal: Contextual Multi-Armed Bandits. 485-492
- Justin Ma, Alex Kulesza, Mark Dredze, Koby Crammer, Lawrence K. Saul, Fernando Pereira: Exploiting Feature Covariance in High-Dimensional Online Learning. 493-500
- Kai Mao, Feng Liang, Sayan Mukherjee: Supervised Dimension Reduction Using Bayesian Mixture Modeling. 501-508
- Benjamin M. Marlin, Kevin Swersky, Bo Chen, Nando de Freitas: Inductive Principles for Restricted Boltzmann Machine Learning. 509-516
- James Martens, Ilya Sutskever: Parallelizable Sampling of Markov Random Fields. 517-524
- Julian J. McAuley, Tibério S. Caetano: Exploiting Within-Clique Factorizations in Junction-Tree Algorithms. 525-532
- Mehryar Mohri, Pedro J. Moreno, Eugene Weinstein: Discriminative Topic Segmentation of Text and Speech. 533-540
- Iain Murray, Ryan Prescott Adams, David J. C. MacKay: Elliptical slice sampling. 541-548
- Blaine Nelson, Benjamin I. P. Rubinstein, Ling Huang, Anthony D. Joseph, Shing-hon Lau, Steven J. Lee, Satish Rao, Anthony Tran, J. Doug Tygar: Near-Optimal Evasion of Convex-Inducing Classifiers. 549-556
- Duy Nguyen-Tuong, Jan Peters: Incremental Sparsification for Real-time Online Model Learning. 557-564
- Yung-Kyun Noh, Byoung-Tak Zhang, Daniel D. Lee: Fluid Dynamics Models for Low Rank Discriminant Analysis. 565-572
- Jimmy Olsson, Jonas Ströjby: Approximation of hidden Markov models by mixtures of experts with application to particle filtering. 573-580
- Silvia Pandolfi, Francesco Bartolucci, Nial Friel: A generalization of the Multiple-try Metropolis algorithm for Bayesian estimation and model selection. 581-588
- Pekka Parviainen, Mikko Koivisto: Bayesian structure discovery in Bayesian networks with less space. 589-596
- Jonas Peters, Dominik Janzing, Bernhard Schölkopf: Identifying Cause and Effect on Discrete Data using Additive Noise Models. 597-604
- Barnabás Póczos, Sergey Kirshner, Csaba Szepesvári: REGO: Rank-based Estimation of Renyi Information using Euclidean Graph Optimization. 605-612
- Piyush Rai, Hal Daumé III: Infinite Predictor Subspace Models for Multitask Learning. 613-620
- Marc'Aurelio Ranzato, Alex Krizhevsky, Geoffrey E. Hinton: Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images. 621-628
- Vikas C. Raykar, Linda H. Zhao: Nonparametric prior for adaptive sparsity. 629-636
- Mark D. Reid, Robert C. Williamson: Convexity of Proper Composite Binary Losses. 637-644
- Jaakko Riihimäki, Aki Vehtari: Gaussian processes with monotonicity information. 645-652
- Lorenzo Rosasco, Matteo Santoro, Sofia Mosci, Alessandro Verri, Silvia Villa: A Regularization Approach to Nonlinear Variable Selection. 653-660
- Stéphane Ross, Drew Bagnell: Efficient Reductions for Imitation Learning. 661-668
- Andreas Ruttor, Manfred Opper: Approximate parameter inference in a stochastic reaction-diffusion model. 669-676
- Hannes P. Saal, Jo-Anne Ting, Sethu Vijayakumar: Active Sequential Learning with Tactile Feedback. 677-684
- Sivan Sabato, Nathan Srebro, Naftali Tishby: Reducing Label Complexity by Learning From Bags. 685-692
- Ruslan Salakhutdinov, Hugo Larochelle: Efficient Learning of Deep Boltzmann Machines. 693-700
- Mathieu Salzmann, Carl Henrik Ek, Raquel Urtasun, Trevor Darrell: Factorized Orthogonal Latent Spaces. 701-708
- Mark Schmidt, Kevin P. Murphy: Convex Structure Learning in Log-Linear Models: Beyond Pairwise Potentials. 709-716
- Nic Schraudolph: Polynomial-Time Exact Inference in NP-Hard Binary MRFs via Reweighted Perfect Matching. 717-724
- Kevin Sharp, Magnus Rattray: Dense Message Passing for Sparse Principal Component Analysis. 725-732
- Pannagadatta K. Shivaswamy, Tony Jebara: Empirical Bernstein Boosting. 733-740
- Sajid M. Siddiqi, Byron Boots, Geoffrey J. Gordon: Reduced-Rank Hidden Markov Models. 741-748
- Aarti Singh, Robert D. Nowak, A. Robert Calderbank: Detecting Weak but Hierarchically-Structured Patterns in Networks. 749-756
- Nikolai Slavov: Inference of Sparse Networks with Unobserved Variables. Application to Gene Regulatory Networks. 757-764
- Le Song, Arthur Gretton, Carlos Guestrin: Nonparametric Tree Graphical Models. 765-772
- Bharath K. Sriperumbudur, Kenji Fukumizu, Gert R. G. Lanckriet: On the relation between universality, characteristic kernels and RKHS embedding of measures. 773-780
- Masashi Sugiyama, Ichiro Takeuchi, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Daisuke Okanohara: Conditional Density Estimation via Least-Squares Density Ratio Estimation. 781-788
- Ilya Sutskever, Tijmen Tieleman: On the Convergence Properties of Contrastive Divergence. 789-795
- Charles Sutton, Michael I. Jordan: Inference and Learning in Networks of Queues. 796-803
- Taiji Suzuki, Masashi Sugiyama: Sufficient Dimension Reduction via Squared-loss Mutual Information Estimation. 804-811
- Daniel Tarlow, Inmar E. Givoni, Richard S. Zemel: HOP-MAP: Efficient Message Passing with High Order Potentials. 812-819
- Matus Telgarsky, Andrea Vattani: Hartigan's Method: k-means Clustering without Voronoi. 820-827
- Evangelos A. Theodorou, Jonas Buchli, Stefan Schaal: Learning Policy Improvements with Path Integrals. 828-835
- Ivan Titov, Alexandre Klementiev, Kevin Small, Dan Roth: Unsupervised Aggregation for Classification Problems with Large Numbers of Categories. 836-843
- Michalis K. Titsias, Neil D. Lawrence: Bayesian Gaussian Process Latent Variable Model. 844-851
- Péter Torma, András György, Csaba Szepesvári: A Markov-Chain Monte Carlo Approach to Simultaneous Localization and Mapping. 852-859
- Sofia Triantafilou, Ioannis Tsamardinos, Ioannis G. Tollis: Learning Causal Structure from Overlapping Variable Sets. 860-867
- Ryan D. Turner, Marc Peter Deisenroth, Carl Edward Rasmussen: State-Space Inference and Learning with Gaussian Processes. 868-875
- Yener Ülker, Bilge Günsel, Ali Taylan Cemgil: Sequential Monte Carlo Samplers for Dirichlet Process Mixtures. 876-883
- Nicolas Usunier, Antoine Bordes, Léon Bottou: Guarantees for Approximate Incremental SVMs. 884-891
- Hanna M. Wallach, Shane T. Jensen, Lee H. Dicker, Katherine A. Heller: An Alternative Prior Process for Nonparametric Bayesian Clustering. 892-899
- Shijun Wang, Rong Jin, Hamed Valizadegan: A Potential-based Framework for Online Multi-class Learning with Partial Feedback. 900-907
- Zhuang Wang, Slobodan Vucetic: Online Passive-Aggressive Algorithms on a Budget. 908-915
- David J. Weiss, Benjamin Taskar: Structured Prediction Cascades. 916-923
- Sinead Williamson, Peter Orbanz, Zoubin Ghahramani: Dependent Indian Buffet Processes. 924-931
- Yan Yan, Rómer Rosales, Glenn Fung, Mark Schmidt, Gerardo Hermosillo Valadez, Luca Bogoni, Linda Moy, Jennifer G. Dy: Modeling annotator expertise: Learning when everybody knows a bit of something. 932-939
- Ji Won Yoon, Simon P. Wilson, K. Hun Mok: A highly efficient blocked Gibbs sampler reconstruction of multidimensional NMR spectra. 940-947
- Chao Zhang, Dacheng Tao: Risk Bounds for Levy Processes in the PAC-Learning Framework. 948-955
- Xinhua Zhang, Thore Graepel, Ralf Herbrich: Bayesian Online Learning for Multi-label and Multi-variate Performance Measures. 956-963
- Yu Zhang, Dit-Yan Yeung: Multi-Task Learning using Generalized t Process. 964-971
- Zhihua Zhang, Guang Dai, Donghui Wang, Michael I. Jordan: Bayesian Generalized Kernel Models. 972-979
- Zhihua Zhang, Guang Dai, Michael I. Jordan: Matrix-Variate Dirichlet Process Mixture Models. 980-987
- Yang Zhou, Rong Jin, Steven C. H. Hoi: Exclusive Lasso for Multi-task Feature Selection. 988-995