Exploiting label dependencies for improved sample complexity
Learning with infinitely many features
Ranking data with ordinal labels: optimality and pairwise aggregation
Regularization of non-homogeneous dynamic Bayesian networks with global information-coupling based on hierarchical Bayesian models
Learning a factor model via regularized PCA
Alignment-based kernel learning with a continuous set of base kernels
Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model
Adaptive regularization of weight vectors
Semi-supervised learning with density-ratio estimation
Sparse non-Gaussian component analysis by semidefinite programming
Completing causal networks by meta-level abduction