A consistent and flexible framework for deep matrix factorizations

Authors:

Highlights:

• Two new loss functions for deep matrix factorizations are introduced.

• Our loss functions are weighted sums of layer-based errors (see the sketches after this list).

• A general convergent optimization framework is presented to tackle our formulations.

• Several priors are incorporated, e.g., sparsity, nonnegativity, and minimum-volume.

• Experiments on synthetic and real data validate the effectiveness of our framework.


Keywords: Deep matrix factorization, Loss functions, Constrained optimization, First-order methods, Hyperspectral unmixing

Article history: Received 19 June 2022, Revised 2 September 2022, Accepted 4 October 2022, Available online 12 October 2022, Version of Record 12 October 2022.

Article page: https://doi.org/10.1016/j.patcog.2022.109102