
Near Optimal Sample Complexity For Matrix And Tensor Normal Models Via Geodesic Convexity

APA

Ramachandran, A. (2021). Near Optimal Sample Complexity For Matrix And Tensor Normal Models Via Geodesic Convexity. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/near-optimal-sample-complexity-matrix-and-tensor-normal-models-geodesic-convexity

MLA

Ramachandran, Akshay. "Near Optimal Sample Complexity For Matrix And Tensor Normal Models Via Geodesic Convexity." The Simons Institute for the Theory of Computing, 29 Nov. 2021, https://simons.berkeley.edu/talks/near-optimal-sample-complexity-matrix-and-tensor-normal-models-geodesic-convexity.

BibTeX

          @misc{scivideos_18796,
            author    = {Ramachandran, Akshay},
            title     = {Near Optimal Sample Complexity For Matrix And Tensor Normal Models Via Geodesic Convexity},
            publisher = {The Simons Institute for the Theory of Computing},
            year      = {2021},
            month     = {nov},
            language  = {en},
            url       = {https://simons.berkeley.edu/talks/near-optimal-sample-complexity-matrix-and-tensor-normal-models-geodesic-convexity},
            note      = {Talk 18796; see \url{https://scivideos.org/Simons-Institute/18796}}
          }
Akshay Ramachandran (University of Amsterdam)
Talk number: 18796
Source repository: Simons Institute

Abstract

The matrix normal model, the family of Gaussian matrix-variate distributions whose covariance matrix is the Kronecker product of two lower-dimensional factors, is frequently used to model matrix-variate data. The tensor normal model generalizes this family to Kronecker products of three or more factors. We study the estimation of the Kronecker factors of the covariance matrix in the matrix and tensor normal models. We establish nonasymptotic bounds on the error achieved by the maximum likelihood estimator (MLE) in several natural metrics. In contrast to existing bounds, our results do not rely on the factors being well-conditioned or sparse. For the matrix normal model, all our bounds are minimax optimal up to logarithmic factors; for the tensor normal model, our bounds for the largest factor and for the overall covariance matrix are minimax optimal up to constant factors, provided there are enough samples for any estimator to attain constant Frobenius error. In the same regimes as our sample complexity bounds, we show that an iterative procedure for computing the MLE, known as the flip-flop algorithm, converges linearly with high probability. Our main tool is geodesic strong convexity in the geometry on positive-definite matrices induced by the Fisher information metric; this strong convexity is governed by the expansion of certain random quantum channels. We also provide numerical evidence that combining the flip-flop algorithm with a simple shrinkage estimator can improve performance in the undersampled regime.
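
For readers who want to experiment, the flip-flop algorithm referenced above alternates closed-form updates: each step computes the exact MLE of one Kronecker factor while the other is held fixed. Below is a minimal NumPy sketch of this standard iteration for the matrix normal model; the function name, initialization, trace normalization, and stopping rule are illustrative choices for this example, not the authors' implementation.

    import numpy as np

    def flip_flop(X, n_iter=100, tol=1e-8):
        """Flip-flop iteration for the matrix normal MLE (illustrative sketch).

        X : array of shape (n, p, q); the X[i] are i.i.d. samples with
            vec(X[i]) ~ N(0, V kron U), where U (p x p) and V (q x q)
            are the positive-definite Kronecker factors.
        Returns (U_hat, V_hat), with U_hat normalized to trace p since
        the factors are only identified up to a reciprocal scalar pair.
        """
        n, p, q = X.shape
        U, V = np.eye(p), np.eye(q)
        for _ in range(n_iter):
            # Exact MLE of U with V fixed, then of V with the new U fixed.
            U_new = sum(Xi @ np.linalg.solve(V, Xi.T) for Xi in X) / (n * q)
            V_new = sum(Xi.T @ np.linalg.solve(U_new, Xi) for Xi in X) / (n * p)
            # Pin down the scale ambiguity (cU, V/c) by fixing trace(U) = p.
            c = np.trace(U_new) / p
            U_new, V_new = U_new / c, V_new * c
            done = (np.linalg.norm(U_new - U) < tol
                    and np.linalg.norm(V_new - V) < tol)
            U, V = U_new, V_new
            if done:
                break
        return U, V

    # Synthetic sanity check (all names and sizes here are illustrative).
    rng = np.random.default_rng(0)
    p, q, n = 5, 4, 500
    A = rng.standard_normal((p, p)); U_true = A @ A.T + p * np.eye(p)
    B = rng.standard_normal((q, q)); V_true = B @ B.T + q * np.eye(q)
    Z = rng.standard_normal((n, p, q))
    # X[i] = chol(U) @ Z[i] @ chol(V).T, so vec(X[i]) ~ N(0, V kron U).
    X = np.einsum('ab,nbc,dc->nad', np.linalg.cholesky(U_true), Z,
                  np.linalg.cholesky(V_true))
    U_hat, V_hat = flip_flop(X)

Because the pair (U, V) enters the covariance only through the Kronecker product V kron U, the factors are identified only up to a reciprocal scaling (cU, V/c); the sketch fixes the trace of U each iteration, and any per-factor error metric should be invariant to this choice.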