ISSN: 1532-4435

Journal of Machine Learning Research (JMLR) - Issue 7 Paper List

Issue: Issue 7
Publication date:
Year: 2003
Issue website:
Papers in this issue
Introduction to Special Issue on Independent Components Analysis.

Blind Separation of Post-nonlinear Mixtures using Linearizing Transformations and Temporal Decorrelation.

Learning over Sets using Kernel Principal Angles.

Dependence, Correlation and Gaussianity in Independent Component Analysis.

An Approximate Analytical Approach to Resampling Averages.

Tree-Structured Neural Decoding.

Task Clustering and Gating for Bayesian Multitask Learning.

Designing Committees of Models through Deliberate Weighting of Data Points.

Generalization Error Bounds for Bayesian Mixture Algorithms.

Concentration Inequalities for the Missing Mass and for Histogram Rule Error.

ILP: A Short Look Back and a Longer Look Forward.

Optimally-Smooth Adaptive Boosting and Application to Agnostic Learning.

On Inclusion-Driven Learning of Bayesian Networks.

Statistical Dynamics of On-line Independent Component Analysis.

Blind Source Separation via Generalized Eigenvalue Decomposition.

Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds.

Fusion of Domain Knowledge with Data for Structural Learning in Object Oriented Domains.

Beyond Independent Components: Trees and Clusters.

Blind Source Recovery: A Framework in the State Space.

Preference Elicitation via Theory Refinement.

Sparseness of Support Vector Machines.

Relational Learning as Search in a Critical Region.

Nash Q-Learning for General-Sum Stochastic Games.

MISEP -- Linear and Nonlinear ICA Based on Mutual Information.

The Principled Design of Large-Scale Recursive Neural Network Architectures--DAG-RNNs and the Protein Structure Prediction Problem.

A Maximum Likelihood Approach to Single-channel Source Separation.

Introduction to the Special Issue on Learning Theory.

ICA Using Spacings Estimates of Entropy.

The em Algorithm for Kernel Matrix Completion with Auxiliary Data.

Learning Behavior-Selection by Emotions and Cognition in a Multi-Goal Robot Task.

On Nearest-Neighbor Error-Correcting Output Codes with Application to All-Pairs Multiclass Support Vector Machines.

Tree Induction vs. Logistic Regression: A Learning-Curve Analysis.

Query Transformations for Improving the Efficiency of ILP Systems.

Energy-Based Models for Sparse Overcomplete Representations.

Inducing Grammars from Sparse Data Sets: A Survey of Algorithms and Results.

Bottom-Up Relational Learning of Pattern Matching Rules for Information Extraction.

Path Kernels and Multiplicative Updates.

Learning Semantic Lexicons from a Part-of-Speech and Semantically Tagged Corpus Using Inductive Logic Programming.

Smooth Boosting and Learning with Malicious Noise.

On the Proper Learning of Axis-Parallel Concepts.

Combining Knowledge from Different Sources in Causal Probabilistic Models.

Tracking Linear-threshold Concepts with Winnow.

Greedy Algorithms for Classification -- Consistency, Convergence Rates, and Adaptivity.

An Empirical Study of the Use of Relevance Information in Inductive Logic Programming.

A Unified Framework for Model-based Clustering.

ICA for Watermarking Digital Images.

Least-Squares Policy Iteration.

On the Performance of Kernel Classes.

Learning Probabilistic Models: An Expected Utility Maximization Approach.

Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet.

FINkNN: A Fuzzy Interval Number k-Nearest Neighbor Classifier for Prediction of Sugar Production from Populations of Samples.

A Generative Model for Separating Illumination and Reflectance from Images.

Overlearning in Marginal Distribution-Based ICA: Analysis and Solutions.

Speedup Learning for Repair-based Search by Identifying Redundant Steps.

A Multiscale Framework For Blind Separation of Linearly Mixed Signals.

An Efficient Boosting Algorithm for Combining Preferences.

On the Rate of Convergence of Regularized Boosting Classifiers.

Comparing Bayes Model Averaging and Stacking When Model Approximation Error Cannot be Ignored.