
Volume 76, Issue 2-3, September 2009
article
Sparse kernel SVMs via cutting-plane training

We explore an algorithm for training SVMs with kernels that can represent the learned rule using arbitrary basis vectors, not just the support vectors (SVs) from the training set. This results in two benefits. First, the added flexibility makes it ...

article
Learning multi-linear representations of distributions for efficient inference

We examine the class of multi-linear representations (MLRs) for expressing probability distributions over discrete variables. Recently, MLRs have been considered as intermediate representations that facilitate inference in distributions represented as ...

article
Combining instance-based learning and logistic regression for multilabel classification

Multilabel classification is an extension of conventional classification in which a single instance can be associated with multiple labels. Recent research has shown that, just like for conventional classification, instance-based learning algorithms ...

article
On structured output training: hard cases and an efficient alternative

We consider a class of structured prediction problems for which the assumptions made by state-of-the-art algorithms fail. To deal with exponentially sized output sets, these algorithms assume, for instance, that the best output for a given input can be ...

article
Hybrid least-squares algorithms for approximate policy evaluation

The goal of approximate policy evaluation is to "best" represent a target value function according to a specific criterion. Different algorithms offer different choices of the optimization criterion. Two popular least-squares algorithms for performing ...

article
A self-training approach to cost sensitive uncertainty sampling

Uncertainty sampling is an effective method for performing active learning that is computationally efficient compared to other active learning methods such as loss-reduction methods. However, unlike loss-reduction methods, uncertainty sampling cannot ...

article
Cost-sensitive learning based on Bregman divergences

This paper analyzes the application of a particular class of Bregman divergences to design cost-sensitive classifiers for multiclass problems. We show that these divergence measures can be used to estimate posterior probabilities with maximal accuracy ...
