Nonnegative Tucker decomposition

Given a nonnegative tensor $\mathcal{M}$, the problem seeks a nonnegative core tensor $\mathcal{C}$ and nonnegative factor matrices $A_1,\dots,A_N$ such that $\mathcal{C}\times_1 A_1\cdots\times_N A_N = \mathcal{M}$ or $\mathcal{C}\times_1 A_1\cdots\times_N A_N \approx \mathcal{M}$, where $\times_n$ denotes the mode-$n$ tensor-matrix product.

Given a tensor $\mathcal{X}\in\mathbb{R}^{I_1\times\cdots\times I_N}$ and a matrix $B\in\mathbb{R}^{J\times I_n}$, the product $\mathcal{X}\times_n B$ is defined by

$$(\mathcal{X}\times_n B)_{i_1,\dots,i_{n-1},j,i_{n+1},\dots,i_N}=\sum_{i_n=1}^{I_n}\mathcal{X}_{i_1,\dots,i_{n-1},i_n,i_{n+1},\dots,i_N}\,B_{j,i_n}.$$
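The mode-$n$ product can be computed with NumPy by contracting mode $n$ of the tensor against the columns of the matrix; a minimal sketch (the function name `mode_n_product` is illustrative):

```python
import numpy as np

def mode_n_product(X, B, n):
    """Mode-n product X x_n B.

    X has shape (I_1, ..., I_N) and B has shape (J, I_n); the result has
    shape (I_1, ..., I_{n-1}, J, I_{n+1}, ..., I_N).
    """
    # tensordot contracts X's axis n with B's axis 1 (both of length I_n),
    # leaving the new J axis last; moveaxis puts it back at position n.
    return np.moveaxis(np.tensordot(X, B, axes=(n, 1)), -1, n)

# Small demonstration: contract mode 1 (length 3) of a 2x3x4 tensor
# with a 5x3 matrix, yielding a 2x5x4 tensor.
rng = np.random.default_rng(0)
X = rng.random((2, 3, 4))
B = rng.random((5, 3))
Y = mode_n_product(X, B, 1)
```

Each entry of the result matches the elementwise sum in the definition above.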

The decomposition is modeled as the optimization problem

$$\mathop{\mathrm{minimize}}\ \frac{1}{2}\|\mathcal{C}\times_1 A_1\cdots\times_N A_N - \mathcal{M}\|_F^2,\quad \text{subject to}\ \mathcal{C}\ge 0,\ A_1\ge 0,\dots,A_N\ge 0,$$

where the Frobenius norm of a tensor $\mathcal{X}\in\mathbb{R}^{I_1\times\cdots\times I_N}$, with entries $x_{i_1,\dots,i_N}$, is defined as

$$\|\mathcal{X}\|_F=\sqrt{\sum_{i_1=1}^{I_1}\cdots\sum_{i_N=1}^{I_N}x_{i_1,\dots,i_N}^2}.$$
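The objective can be evaluated directly in NumPy by applying the mode-$n$ products in sequence and taking the Frobenius norm of the residual; a minimal sketch, with illustrative function names:

```python
import numpy as np

def mode_n_product(X, B, n):
    # Contract mode n of X with the second axis of B, then restore axis order.
    return np.moveaxis(np.tensordot(X, B, axes=(n, 1)), -1, n)

def tucker_reconstruct(C, factors):
    # C x_1 A_1 x_2 A_2 ... x_N A_N, applied one mode at a time.
    X = C
    for n, A in enumerate(factors):
        X = mode_n_product(X, A, n)
    return X

def ntd_objective(C, factors, M):
    # (1/2) || C x_1 A_1 ... x_N A_N - M ||_F^2.  For an N-way array,
    # np.linalg.norm with no axis argument is exactly the Frobenius norm
    # (the 2-norm of the flattened tensor).
    return 0.5 * np.linalg.norm(tucker_reconstruct(C, factors) - M) ** 2
```

If $\mathcal{M}$ itself is an exact Tucker product of the given core and factors, the objective evaluates to zero.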

More information about tensors and their decompositions can be found in this review paper.


The block coordinate update (BCU) algorithm with prox-linear updates and extrapolation is described in Algorithm 2 of this report.

MATLAB code of BCU with prox-linear update and extrapolation
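The linked code is in MATLAB; purely as a rough illustration, here is a minimal Python sketch of block coordinate updates with prox-linear (projected-gradient) steps and Nesterov-type extrapolation. The Lipschitz constants, extrapolation weights, and iteration count below are standard choices assumed for this sketch, not a transcription of the report's Algorithm 2.

```python
import numpy as np

def mode_prod(X, B, n):
    # Mode-n tensor-matrix product.
    return np.moveaxis(np.tensordot(X, B, axes=(n, 1)), -1, n)

def multi_mode_prod(C, mats):
    # C x_1 mats[0] x_2 mats[1] ... applied mode by mode.
    X = C
    for n, A in enumerate(mats):
        X = mode_prod(X, A, n)
    return X

def unfold(X, n):
    # Mode-n unfolding: mode n becomes the rows.
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def ntd_bcu(M, ranks, iters=300, seed=0):
    """Sketch of BCU for nonnegative Tucker: each block is updated by a
    projected gradient step of length 1/L taken from an extrapolated point."""
    rng = np.random.default_rng(seed)
    N = M.ndim
    A = [rng.random((M.shape[n], ranks[n])) for n in range(N)]
    C = rng.random(ranks)
    A_old, C_old, t_old = [a.copy() for a in A], C.copy(), 1.0
    for _ in range(iters):
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_old ** 2))
        w = (t_old - 1.0) / t  # Nesterov-type extrapolation weight
        for n in range(N):
            # Gn: mode-n unfolding of the core times all factors except A_n,
            # so that the model's mode-n unfolding is A_n @ Gn.
            B = multi_mode_prod(
                C, [A[k] if k != n else np.eye(ranks[n]) for k in range(N)])
            Gn = unfold(B, n)
            L = np.linalg.norm(Gn @ Gn.T, 2)    # block Lipschitz constant
            Ah = A[n] + w * (A[n] - A_old[n])   # extrapolated point
            grad = (Ah @ Gn - unfold(M, n)) @ Gn.T
            A_old[n] = A[n]
            A[n] = np.maximum(0.0, Ah - grad / L)  # prox-linear step
        # Core update: gradient is the residual hit by A_n^T in every mode.
        L = np.prod([np.linalg.norm(a, 2) ** 2 for a in A])
        Ch = C + w * (C - C_old)
        grad = multi_mode_prod(multi_mode_prod(Ch, A) - M, [a.T for a in A])
        C_old, t_old = C, t
        C = np.maximum(0.0, Ch - grad / L)
    return C, A
```

On data generated exactly from a nonnegative Tucker model, this sketch typically drives the relative fitting error down quickly; the report's actual algorithm additionally safeguards the extrapolation, which is omitted here for brevity.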