geotensor
Geometric low-rank tensor completion for color image inpainting.
Motivation
- Color image inpainting: handling various missing patterns.

  (Figures: missing at random (MAR), row-wise MAR, column-wise MAR, and (row, column)-wise MAR.)
- Low-rank tensor completion: characterizing images with a graph structure (e.g., a graph regularizer built from the adjacent smoothness matrix).
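The adjacent smoothness matrix mentioned above is the first-order difference operator: multiplying it with a vector returns differences of neighboring entries, and the quadratic form built from it acts like the Laplacian of a chain graph, penalizing non-smooth rows and columns. A minimal sketch (function name is illustrative, not part of the package):

```python
import numpy as np

def smoothness_matrix(n):
    """(n-1)-by-n first-order difference matrix: (D @ x)[i] = x[i+1] - x[i]."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i] = -1
        D[i, i + 1] = 1
    return D

D = smoothness_matrix(4)
x = np.array([1.0, 2.0, 4.0, 7.0])
print(D @ x)             # differences of adjacent entries: [1. 2. 3.]
# D.T @ D is the (unnormalized) Laplacian of a chain graph on 4 nodes
print(np.diag(D.T @ D))  # [1. 2. 2. 1.]
```

The regularizer `beta * x.T @ D.T @ D @ x` is then small exactly when adjacent pixels have similar values, which is the smoothness prior the model exploits.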
Implementation
One notable thing is that, unlike the complex equations in our models, our Python implementation (which relies only on NumPy) is extremely easy to work with. Take GLTC-Geman as an example: its kernel has only a few lines:
```python
import numpy as np
from numpy.linalg import inv

# ten2mat / mat2ten are the mode-k tensor unfolding/folding helpers
# defined elsewhere in this repository.

def supergradient(s_hat, lambda0, theta):
    """Supergradient of the Geman function."""
    return (lambda0 * theta / (s_hat + theta) ** 2)

def GLTC_Geman(dense_tensor, sparse_tensor, alpha, beta, rho, theta, maxiter):
    """Main function of the GLTC-Geman."""
    dim0 = sparse_tensor.ndim
    dim1, dim2, dim3 = sparse_tensor.shape
    dim = np.array([dim1, dim2, dim3])
    binary_tensor = np.zeros((dim1, dim2, dim3))
    binary_tensor[np.where(sparse_tensor != 0)] = 1
    tensor_hat = sparse_tensor.copy()

    X = np.zeros((dim1, dim2, dim3, dim0))  # \boldsymbol{\mathcal{X}} (n1*n2*3*d)
    Z = np.zeros((dim1, dim2, dim3, dim0))  # \boldsymbol{\mathcal{Z}} (n1*n2*3*d)
    T = np.zeros((dim1, dim2, dim3, dim0))  # \boldsymbol{\mathcal{T}} (n1*n2*3*d)
    for k in range(dim0):
        X[:, :, :, k] = tensor_hat
        Z[:, :, :, k] = tensor_hat

    D1 = np.zeros((dim1 - 1, dim1))  # (n1-1)-by-n1 adjacent smoothness matrix
    for i in range(dim1 - 1):
        D1[i, i] = -1
        D1[i, i + 1] = 1
    D2 = np.zeros((dim2 - 1, dim2))  # (n2-1)-by-n2 adjacent smoothness matrix
    for i in range(dim2 - 1):
        D2[i, i] = -1
        D2[i, i + 1] = 1

    # Initialize the singular-value weights from the supergradient.
    w = []
    for k in range(dim0):
        u, s, v = np.linalg.svd(ten2mat(Z[:, :, :, k], k), full_matrices=False)
        w.append(np.zeros(len(s)))
        for i in range(len(np.where(s > 0)[0])):
            w[k][i] = supergradient(s[i], alpha, theta)

    for iters in range(maxiter):
        for k in range(dim0):
            # Z-update: weighted singular value thresholding.
            u, s, v = np.linalg.svd(ten2mat(X[:, :, :, k] + T[:, :, :, k] / rho, k),
                                    full_matrices=False)
            for i in range(len(np.where(w[k] > 0)[0])):
                s[i] = max(s[i] - w[k][i] / rho, 0)
            Z[:, :, :, k] = mat2ten(np.matmul(np.matmul(u, np.diag(s)), v), dim, k)

            # X-update: graph-regularized least squares along modes 0 and 1.
            var = ten2mat(rho * Z[:, :, :, k] - T[:, :, :, k], k)
            if k == 0:
                var0 = mat2ten(np.matmul(inv(beta * np.matmul(D1.T, D1) + rho * np.eye(dim1)), var), dim, k)
            elif k == 1:
                var0 = mat2ten(np.matmul(inv(beta * np.matmul(D2.T, D2) + rho * np.eye(dim2)), var), dim, k)
            else:
                var0 = Z[:, :, :, k] - T[:, :, :, k] / rho
            X[:, :, :, k] = np.multiply(1 - binary_tensor, var0) + sparse_tensor

            # Refresh the weights from the new singular values.
            uz, sz, vz = np.linalg.svd(ten2mat(Z[:, :, :, k], k), full_matrices=False)
            for i in range(len(np.where(sz > 0)[0])):
                w[k][i] = supergradient(sz[i], alpha, theta)

        tensor_hat = np.mean(X, axis=3)
        for k in range(dim0):
            T[:, :, :, k] = T[:, :, :, k] + rho * (X[:, :, :, k] - Z[:, :, :, k])
            X[:, :, :, k] = tensor_hat.copy()

    return tensor_hat
```
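The kernel depends on the mode-k unfolding helpers `ten2mat` and `mat2ten`, which are defined elsewhere in this repository. A sketch of the standard Fortran-order unfolding they correspond to (treat this as an assumption about their exact behavior):

```python
import numpy as np

def ten2mat(tensor, mode):
    """Unfold a tensor into a matrix along the given mode (Fortran order)."""
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order='F')

def mat2ten(mat, dim, mode):
    """Fold a matrix back into a tensor of shape `dim` along the given mode."""
    index = [mode] + [i for i in range(len(dim)) if i != mode]
    return np.moveaxis(np.reshape(mat, [dim[i] for i in index], order='F'), 0, mode)
```

With these definitions, `mat2ten(ten2mat(X, k), dim, k)` recovers `X` for every mode `k`, which is the property the kernel relies on.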
Have fun if you work with our code!
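For context on `supergradient`: the Geman penalty on a singular value s is φ(s) = λ·s / (s + θ), and its derivative λθ / (s + θ)² is exactly what the function returns. A quick finite-difference sanity check (the `geman` helper is written here for illustration, it is not part of the package):

```python
import numpy as np

def geman(s, lambda0, theta):
    """Geman penalty on a singular value s."""
    return lambda0 * s / (s + theta)

def supergradient(s_hat, lambda0, theta):
    """Supergradient of the Geman function."""
    return (lambda0 * theta / (s_hat + theta) ** 2)

lambda0, theta, s, eps = 2.0, 0.5, 1.3, 1e-6
numeric = (geman(s + eps, lambda0, theta) - geman(s - eps, lambda0, theta)) / (2 * eps)
print(abs(numeric - supergradient(s, lambda0, theta)) < 1e-6)  # True
```

Because φ is concave and increasing, reweighting singular values by this quantity shrinks large singular values much less than the nuclear norm would, which is the point of the nonconvex surrogate.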
Competing Models

Inpainting Examples (by GLTC-Geman)
(Figures: inpainting results by GLTC-Geman under each missing pattern.)

| Missing pattern | RSE |
|---|---|
| Missing at random (MAR) | 6.74% |
| Row-wise MAR | 8.20% |
| Column-wise MAR | 10.80% |
| (Row, column)-wise MAR | 8.38% |
References
- General Matrix/Tensor Completion
| No | Title | Year | Venue | Code |
|---|---|---|---|---|
| 1 | Tensor Completion for Estimating Missing Values in Visual Data | 2013 | TPAMI | - |
| 2 | Efficient tensor completion for color image and video recovery: Low-rank tensor train | 2016 | arxiv | - |
| 3 | Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization | 2016 | CVPR | Matlab |
| 4 | Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks | 2017 | NeurIPS | Python |
| 5 | Efficient Low Rank Tensor Ring Completion | 2017 | ICCV | Matlab |
| 6 | Spatio-Temporal Signal Recovery Based on Low Rank and Differential Smoothness | 2018 | IEEE | - |
| 7 | Exact Low Tubal Rank Tensor Recovery from Gaussian Measurements | 2018 | IJCAI | Matlab |
| 8 | Tensor Robust Principal Component Analysis with A New Tensor Nuclear Norm | 2018 | TPAMI | Matlab |
- Singular Value Thresholding (SVT)
| No | Title | Year | Venue | Code |
|---|---|---|---|---|
| 1 | A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems | 2013 | ICML | - |
| 2 | Fast Randomized Singular Value Thresholding for Nuclear Norm Minimization | 2015 | CVPR | - |
| 3 | A Fast Implementation of Singular Value Thresholding Algorithm using Recycling Rank Revealing Randomized Singular Value Decomposition | 2017 | arxiv | - |
| 4 | Fast Randomized Singular Value Thresholding for Low-rank Optimization | 2018 | TPAMI | - |
| 5 | Fast Parallel Randomized QR with Column Pivoting Algorithms for Reliable Low-rank Matrix Approximations | 2018 | arxiv | - |
| 6 | Low-Rank Matrix Approximations with Flip-Flop Spectrum-Revealing QR Factorization | 2018 | arxiv | - |
- Proximal Methods
| No | Title | Year | Venue | Code |
|---|---|---|---|---|
| 1 | Accelerated Proximal Gradient Methods for Nonconvex Programming | 2015 | NIPS | Supp |
| 2 | Incorporating Nesterov Momentum into Adam | 2016 | ICLR | - |
- Fast Alternating Direction Method of Multipliers
| No | Title | Year | Venue | Code |
|---|---|---|---|---|
| 1 | Differentiable Linearized ADMM | 2019 | ICML | - |
| 2 | Faster Stochastic Alternating Direction Method of Multipliers for Nonconvex Optimization | 2019 | ICML | - |
- Tensor Train Decomposition
| No | Title | Year | Venue | Code |
|---|---|---|---|---|
| 1 | Math Lecture 671: Tensor Train decomposition methods | 2016 | slide | - |
| 2 | Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning | 2016 | slide | - |
- Matrix/Tensor Completion + Nonconvex Regularization
| No | Title | Year | Venue | Code |
|---|---|---|---|---|
| 1 | Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization | 2013 | TPAMI | - |
| 2 | Generalized Nonconvex Nonsmooth Low-Rank Minimization | 2014 | CVPR | Matlab |
| 3 | Generalized Singular Value Thresholding | 2015 | AAAI | - |
| 4 | Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications | 2016 | TPAMI | - |
| 5 | Efficient Inexact Proximal Gradient Algorithm for Nonconvex Problems | 2016 | arxiv | - |
| 6 | Scalable Tensor Completion with Nonconvex Regularization | 2018 | arxiv | - |
| 7 | Large-Scale Low-Rank Matrix Learning with Nonconvex Regularizers | 2018 | TPAMI | - |
| 8 | Nonconvex Robust Low-rank Matrix Recovery | 2018 | arxiv | Matlab |
| 9 | Matrix Completion via Nonconvex Regularization: Convergence of the Proximal Gradient Algorithm | 2019 | arxiv | Matlab |
| 10 | Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations | 2019 | ICML | Matlab |
| 11 | Guaranteed Matrix Completion under Multiple Linear Transformations | 2019 | CVPR | - |
- Rank Approximation + Nonconvex Regularization
| No | Title | Year | Venue | Code |
|---|---|---|---|---|
| 1 | A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems | 2013 | ICML | - |
| 2 | Rank Minimization with Structured Data Patterns | 2014 | ECCV | - |
| 3 | Minimizing the Maximal Rank | 2016 | CVPR | - |
| 4 | Convex Low Rank Approximation | 2016 | IJCV | - |
| 5 | Non-Convex Rank/Sparsity Regularization and Local Minima | 2017 | ICCV, Supp | - |
| 6 | A Non-Convex Relaxation for Fixed-Rank Approximation | 2017 | ICCV | - |
| 7 | Inexact Proximal Gradient Methods for Non-Convex and Non-Smooth Optimization | 2018 | AAAI | - |
| 8 | Non-Convex Relaxations for Rank Regularization | 2019 | slide | - |
| 9 | Geometry and Regularization in Nonconvex Low-Rank Estimation | 2019 | slide | - |
| 10 | Large-Scale Low-Rank Matrix Learning with Nonconvex Regularizers | 2018 | IEEE TPAMI | - |
- Weighted Nuclear Norm Minimization
| No | Title | Year | Venue | Code |
|---|---|---|---|---|
| 1 | Weighted Nuclear Norm Minimization with Application to Image Denoising | 2014 | CVPR | Matlab |
| 2 | A Nonconvex Relaxation Approach for Rank Minimization Problems | 2015 | AAAI | - |
| 3 | Multi-Scale Weighted Nuclear Norm Image Restoration | 2018 | CVPR | Matlab |
| 4 | On the Optimal Solution of Weighted Nuclear Norm Minimization | - | - | - |
Collaborators
- Xinyu Chen 💻
- Jinming Yang 💻
- Lijun Sun 💻
See the list of contributors who participated in this project.
License
This work is released under the MIT license.