Abstract
Decentralized low-rank learning is an active research area with extensive practical applications. A common approach to producing low-rank and robust estimates is to combine the nonsmooth quantile regression loss with a nuclear-norm regularizer. Nevertheless, directly applying existing techniques may result in slow convergence due to the doubly nonsmooth objective. To expedite the computation, a decentralized surrogate matrix quantile regression method is proposed in this article. The proposed algorithm is simple to implement and provably converges at a linear rate. Additionally, we provide a statistical guarantee that our estimator achieves a near-optimal convergence rate, regardless of the number of nodes. Numerical simulations confirm the efficacy of our approach.
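For concreteness, the composite objective referred to above typically takes the following form (a sketch; the notation $Y_i$, $X_i$, $\Theta$, $\rho_\tau$, and $\lambda$ is illustrative and not taken from the article):
\[
\min_{\Theta \in \mathbb{R}^{p \times q}} \; \frac{1}{N} \sum_{i=1}^{N} \rho_\tau\bigl(Y_i - \langle X_i, \Theta \rangle\bigr) + \lambda \|\Theta\|_{*},
\qquad \rho_\tau(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
\]
where $\rho_\tau$ is the quantile (check) loss and $\|\Theta\|_{*}$ denotes the nuclear norm (the sum of singular values); both terms are nonsmooth, which is what makes the objective doubly nonsmooth.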
Supplementary Materials
Supplement: This supplement contains the technical proofs and additional simulations. (RobustLRL_supp_jcgs.pdf)
Codes for reproducibility: R code for reproducing the simulations in the main text. (RobustLRL.zip)
Disclosure Statement
No potential conflict of interest was reported by the author(s).