- Article, September 2016
Rényi divergence minimization based co-regularized multiview clustering
Machine Learning, Volume 104, Issue 2–3, Pages 411–439, https://doi.org/10.1007/s10994-016-5543-2
Multiview clustering is a framework for grouping objects given multiple views, e.g. text and image views describing the same set of entities. This paper introduces co-regularization techniques for multiview clustering that explicitly minimize a weighted ...
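For reference, the Rényi divergence of order α between discrete distributions P and Q, which the weighted co-regularization objective described above builds on, is commonly defined as follows (this is the standard textbook definition, not a formula reproduced from the paper):

```latex
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \log \sum_{i} p_i^{\alpha}\, q_i^{1-\alpha},
\qquad \alpha > 0,\ \alpha \neq 1
```

The Kullback–Leibler divergence is recovered in the limit α → 1.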
- Article, October 2014
Collaborative filtering with information-rich and information-sparse entities
In this paper, we consider a popular model for collaborative filtering in recommender systems. In particular, we consider both the clustering model, where only users (or items) are clustered, and the co-clustering model, where both users and items are ...
- Article, February 2014
Learning from natural instructions
Machine Learning, Volume 94, Issue 2, Pages 205–232
Machine learning is traditionally formalized and investigated as the study of learning concepts and decision functions from labeled examples, requiring a representation that encodes information about the domain of the decision function to be learned. We ...
- Article, September 2012
Structured learning with constrained conditional models
Making complex decisions in real world problems often involves assigning values to sets of interdependent variables where an expressive dependency structure among these can influence, or even dictate, what assignments are possible. Commonly used models ...
- Article, May 2010
Ensemble clustering using semidefinite programming with applications
In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the ...
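The snippet does not reproduce the paper's semidefinite programming formulation; as a rough illustration of the pairwise agreement that consensus methods of this kind aggregate, the sketch below computes a co-association matrix from several clustering solutions. The function name, data layout, and example data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def co_association(labelings):
    """Fraction of clusterings in which each pair of points shares a cluster.

    labelings: list of 1-D integer label arrays, one per clustering solution.
    Returns an (n, n) matrix with entries in [0, 1].
    """
    labelings = [np.asarray(lab) for lab in labelings]
    n = len(labelings[0])
    agree = np.zeros((n, n))
    for labels in labelings:
        # 1 where points i and j are co-clustered in this solution, else 0
        agree += (labels[:, None] == labels[None, :]).astype(float)
    return agree / len(labelings)

# Toy example: three clusterings of five points (hypothetical data)
solutions = [np.array([0, 0, 1, 1, 2]),
             np.array([0, 0, 0, 1, 1]),
             np.array([1, 1, 2, 2, 2])]
print(co_association(solutions))
```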
- Article, September 2009
Learning multi-linear representations of distributions for efficient inference
We examine the class of multi-linear representations (MLR) for expressing probability distributions over discrete variables. Recently, MLR have been considered as intermediate representations that facilitate inference in distributions represented as ...
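As an illustrative example of the general idea (using the usual notion of a multilinear polynomial over binary variables, not necessarily the paper's exact formalism), a distribution over two variables x₁, x₂ ∈ {0, 1} can be written as

```latex
P(x_1, x_2) \;=\; c_0 + c_1 x_1 + c_2 x_2 + c_{12}\, x_1 x_2,
\qquad x_1, x_2 \in \{0, 1\}
```

with the coefficients chosen so that the four values P(0,0), P(1,0), P(0,1), P(1,1) equal the desired probabilities; no variable appears with degree higher than one.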
- Article, March 2003
Learning to Match the Schemas of Data Sources: A Multistrategy Approach
The problem of integrating data from multiple data sources—either on the Internet or within enterprises—has received much attention in the database and AI communities. The focus has been on building data integration systems that provide a uniform query ...
- Article, October 2001
Iterated Phantom Induction: A Knowledge-Based Approach to Learning Control
We advance a knowledge-based learning method that allows prior domain knowledge to be effectively utilized by machine learning systems. The domain knowledge is incorporated not into the learning algorithm itself but instead affects only the training ...
- Article, January 2001
Linear Concepts and Hidden Variables
Machine Learning, Volume 42, Issue 1–2, Pages 123–141
We study a learning problem which allows for a “fair” comparison between unsupervised learning methods—probabilistic model construction, and more traditional algorithms that directly learn a classification. The merits of each approach are intuitively ...
- Article, January 2000
A Multistrategy Approach to Classifier Learning from Time Series
We present an approach to inductive concept learning using multiple models for time series. Our objective is to improve the efficiency and accuracy of concept learning by decomposing learning tasks that admit multiple types of learning architectures and ...
- Article, October 1999
Efficient Read-Restricted Monotone CNF/DNF Dualization by Learning with Membership Queries
We consider exact learning of monotone CNF formulas in which each variable appears at most some constant k times (read-k monotone CNF). Let f : {0,1}^n → {0,1} be expressible as a read-k monotone CNF formula for some natural number k. We give an ...
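For intuition, an illustrative example not drawn from the paper: a read-2 monotone CNF is one in which each variable occurs in at most two clauses and no literal is negated, e.g.

```latex
f(x_1, x_2, x_3) \;=\; (x_1 \lor x_2)\,(x_2 \lor x_3)\,(x_1 \lor x_3)
```

where every variable appears exactly twice.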
- Article, May 1999
Learning to Reason with a Restricted View
The Learning to Reason framework combines the study of Learning and Reasoning into a single task. Within it, learning is done specifically for the purpose of reasoning with the learned knowledge. Computational considerations show that this is a useful ...
- Article, February 1999
A Winnow-Based Approach to Context-Sensitive Spelling Correction
A large class of machine-learning problems in natural language require the characterization of linguistic context. Two characteristic properties of such problems are that their feature space is of very high dimensionality, and their target concepts depend ...
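Winnow itself is Littlestone's multiplicative-update algorithm for linear threshold functions over Boolean features, well suited to the high-dimensional, sparse feature spaces mentioned above. The sketch below shows the generic mistake-driven update, not the paper's spelling-correction system; the parameter defaults and toy data are illustrative assumptions.

```python
import numpy as np

def winnow_train(X, y, alpha=2.0, threshold=None, epochs=10):
    """Basic Winnow: multiplicative updates on mistakes over Boolean features.

    X: (m, n) array with entries in {0, 1}; y: length-m array of labels in {0, 1}.
    Returns the learned weight vector and the threshold used.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    m, n = X.shape
    if threshold is None:
        threshold = float(n)            # a common choice of fixed threshold
    w = np.ones(n)                      # weights start uniform and stay positive
    for _ in range(epochs):
        for x, label in zip(X, y):
            predicted = 1 if w @ x >= threshold else 0
            if predicted == 0 and label == 1:
                w[x == 1] *= alpha      # promote weights of active features
            elif predicted == 1 and label == 0:
                w[x == 1] /= alpha      # demote weights of active features
    return w, threshold

# Toy usage on hypothetical data where the target concept is "x1 OR x3"
X = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1], [0, 1, 0, 1]])
y = np.array([1, 0, 1, 0])
print(winnow_train(X, y))
```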
- Article, October 1995
Searching for Representations to Improve Protein Sequence Fold-Class Prediction
Predicting the fold, or approximate 3D structure, of a protein from its amino acid sequence is an important problem in biology. The homology modeling approach uses a protein database to identify fold-class relationships by sequence similarity. The main ...
- Article, June 1995
On the Learnability of Disjunctive Normal Form Formulas
We present two related results about the learnability of disjunctive normal form (DNF) formulas. First we show that a common approach for learning arbitrary DNF formulas requires exponential time. We then contrast this with a polynomial time algorithm ...
- Article, January 1993
Synthesis of UNIX Programs Using Derivational Analogy
The feasibility of derivational analogy as a mechanism for improving problem-solving behavior has been shown for a variety of problem domains by several researchers. However, most of the implemented systems have been empirically evaluated in the ...
- Article, July 1992
Learning Conjunctions of Horn Clauses
An algorithm is presented for learning the class of Boolean formulas that are expressible as conjunctions of Horn clauses. (A Horn clause is a disjunction of literals, all but at most one of which is a negated variable.) The algorithm uses equivalence ...
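To make the parenthetical definition concrete (a standard example, not taken from the paper), a Horn clause such as

```latex
\lnot x_1 \lor \lnot x_2 \lor x_3 \;\equiv\; (x_1 \land x_2) \rightarrow x_3
```

is equivalent to an implication rule, so learning a conjunction of Horn clauses amounts to learning a set of such rules.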
- Article, September 1990
Empirical Learning as a Function of Concept Character
Concept learning depends on data character. To discover how, some researchers have used theoretical analysis to relate the behavior of idealized learning algorithms to classes of concepts. Others have developed pragmatic measures that relate the ...