An Image Noise Level Estimation Based on Tensor T-Product
Hanxin Liua, Yisheng Songa,∗
aSchool of Mathematical Sciences, Chongqing Normal University, Chongqing, 401331
Email: 2531417503@qq.com (Liu); yisheng.song@cqnu.edu.cn (Song)
Abstract
Many existing algorithms estimate the noise level of color images by processing each channel of the third-order tensor separately using sliding blocks of size M₁ × M₁. This approach disrupts the inherent tensor data structure and introduces estimation errors. To preserve the tensor structure, we propose directly extracting tensor blocks using a sliding window of size M₁ × M₁ × 3 and subsequently rearranging them. The resulting tensor is decomposed into a block diagonal matrix form through the T-product, and we demonstrate that the eigenvalues of this matrix are related to the noise level of the color image. The relationship coefficients are then trained using learning methods to obtain the final noise level estimate. Numerical experiments verify the effectiveness of the proposed algorithm and demonstrate its high estimation accuracy.
Keywords: Noise level estimation, Tensor, T-product, Gaussian noise, Eigenvalue
1. Introduction
The image noise level, which indicates the degree of contamination in an image, serves as a crucial parameter in image processing pipelines. For instance, blind image denoising \cite{1}, blind image restoration \cite{2}, and blind image deblurring \cite{3} all require prior estimation of the noise level. Consequently, developing accurate noise level estimation methods is of significant importance.
Over the past decade, noise level estimation has been an active research area in image processing. Existing methods can be categorized into filtering-based \cite{4}, transformation-based \cite{5}, and block-based \cite{6} approaches. This study focuses primarily on block-based noise level estimation, which divides an image into overlapping small blocks using sliding windows, selects blocks with consistent statistical properties (such as variance and kurtosis) to form uniform or weakly textured regions, and finally calculates the noise level from these selected blocks.
Pyatykh et al. \cite{7} first proposed a block-based noise level estimation method using principal component analysis of image blocks. This approach can estimate noise levels with reasonable accuracy even when the noisy image contains very few uniform areas. Building upon this work, Liu et al. \cite{8} proposed an algorithm that selects weakly textured image blocks from images with rich texture information for noise level estimation, demonstrating effective performance on textured images. Both algorithms compute the covariance matrix of selected image blocks and use the minimum eigenvalue as the noise level estimate. For color images, which can be represented as third-order tensors, each channel is treated as a separate matrix, and the operations are performed independently on the three matrices. However, Chen et al. \cite{9} noted that estimating the noise level from the minimum eigenvalue alone often leads to underestimation. They proposed an iterative approach that compares the median and mean of eigenvalues, removing the largest eigenvalue repeatedly until the median and mean converge.
Recognizing the limitations of using a single eigenvalue, several researchers have incorporated additional factors. Fang et al. \cite{10} analyzed eigenvalues in 2019 and argued that using the minimum eigenvalue underestimates noise variance while using the mean overestimates it. They performed linear fitting on these bounds and proved that the estimated noise level follows the relationship σ² = (d₁σ²_f + d₂σ²_w)/(d₁ + d₂), where σ²_f represents an underestimated result and σ²_w represents an overestimated result. In 2020, Jiang et al. \cite{11} considered the number of image blocks and sliding window size, revealing that the relationship between noise level and the minimum eigenvalue λ_min, number of blocks s, and window size w follows σ² = λ_min/(1 − 1.8606√(w² − 2)/s). Liu et al. \cite{12} estimated noise levels by fitting multiple eigenvalues using a learning-based approach on training data.
When applying these methods to color images, sliding windows process the three color channels separately, constructing three distinct covariance matrices and estimating noise level from their eigenvalues. However, color images are naturally third-order tensors, and grayscale images are second-order tensors (matrices). Splitting a third-order tensor into three matrices disrupts its structural integrity, potentially leading to estimation errors.
From a tensor perspective, directly computing eigenvalues of higher-order tensors is NP-hard. Unlike matrices, tensors have multiple eigenvalue definitions, including H-eigenvalues \cite{13}, Z-eigenvalues \cite{14}, M-eigenvalues \cite{15}, D-eigenvalues \cite{16}, and B-eigenvalues \cite{17}. Since Qi proposed the definition of tensor H-eigenvalues in 2005 \cite{13}, tensor eigenvalue computation has remained an active research area. Qi et al. \cite{18} proposed orthogonal transformation methods for directly computing Z-eigenvalues of third-order three-dimensional tensors. De Lathauwer et al. \cite{19} introduced the Higher-Order Power Method (HOPM), and Kofidis et al. proposed its symmetric variant, the Symmetric Higher-Order Power Method (S-HOPM), for large-scale tensor eigenvalue computation. Kolda et al. subsequently enhanced S-HOPM with shift parameters, creating the Shifted Symmetric High-Order Power Method (SS-HOPM) \cite{20}, though this algorithm is highly parameter-dependent. To address this limitation, they proposed an adaptive shift method that selects parameters based on the definiteness of the Hessian matrix, known as the Generalized Eigenproblem Adaptive Power (GEAP) method \cite{21}.
These algorithms compute eigenvalues for specific tensor types but cannot directly handle the third-order tensors representing color images. Therefore, decomposition methods such as CP decomposition \cite{22} and Tucker decomposition \cite{23} are needed to transform tensors into matrix-like forms. Kilmer et al. \cite{24} first introduced T-SVD decomposition and the T-product, which decomposes higher-order tensors into forms resembling outer products of matrices under the T-product definition. Inspired by T-SVD, this paper transforms third-order tensors into matrix form under the T-product definition, computes eigenvalues of the resulting covariance matrix, and estimates color image noise levels accordingly.
The main contributions of this paper are:
* A novel noise level estimation model based on tensor decomposition that preserves tensor structure by decomposing third-order tensors into covariance matrix forms using the T-product definition, achieving excellent experimental results.
* Theoretical analysis proving that multiple eigenvalues of the block diagonal matrix obtained through T-product decomposition have a direct relationship with color image noise levels.
The remainder of this paper is organized as follows: Section 2 introduces fundamental concepts, including traditional noise level estimation methods and the T-product. Section 3 describes the proposed noise level estimation algorithm. Section 4 presents experimental results comparing our method with state-of-the-art algorithms. Section 5 concludes the paper and outlines future research directions.
2. Preliminaries
2.1. Image Noise Level Estimation
For an observed image y contaminated with additive Gaussian white noise, the model can be expressed as:
$$y = x + e$$
where x represents the noise-free image patch and e denotes signal-independent additive white Gaussian noise with zero mean and variance σ². Assuming the observed image y has dimensions S₁ × S₂ × c, we divide it into s = (S₁ − M₁ + 1) × (S₂ − M₁ + 1) × c patches using an M₁ × M₁ sliding window, then reorder each patch into a column vector of size M₁² × 1. The observed image y can thus be represented as Y = {y_i}_{i=1}^{s} ∈ ℝ^(M₁² × s).
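The channel-wise patch extraction described above can be sketched as follows (a minimal illustration; function and variable names are ours, not from the paper):

```python
import numpy as np

def extract_patches(y, M1=7):
    """Slide an M1 x M1 window over each channel of an S1 x S2 x c image
    and stack every patch as a column vector of length M1^2.
    Returns a matrix of shape (M1^2, s) with s = (S1-M1+1)(S2-M1+1)*c."""
    S1, S2, c = y.shape
    cols = []
    for k in range(c):                      # each channel separately
        for i in range(S1 - M1 + 1):
            for j in range(S2 - M1 + 1):
                cols.append(y[i:i + M1, j:j + M1, k].reshape(-1))
    return np.stack(cols, axis=1)

y = np.random.rand(32, 32, 3)
Y = extract_patches(y, M1=7)
print(Y.shape)  # (49, 2028): 49 = 7*7, 2028 = 26*26*3
```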
The covariance matrix Σ_y of the noisy image y is defined as:
$$\Sigma_y = \frac{1}{s}\sum_{i=1}^{s} (y_i - u)(y_i - u)^T$$
where u is the mean column vector of the dataset {y_i}. The covariance matrix Σ_y satisfies the following assumption:
Assumption 1: The covariance matrix Σ_y follows a gamma distribution with shape parameter (s − 1)/2 and scale parameter 2σ²/(s − 1):
$$\Sigma_y \sim \gamma\left(\frac{s-1}{2}, \frac{2\sigma^2}{s-1}\right)$$
where γ denotes the gamma distribution with expectation σ² and variance 2σ⁴/(s − 1).
Under Assumption 1, the minimum eigenvalues of the observed and original image covariance matrices satisfy:
$$\lambda_{\min}(\Sigma_y) = \lambda_{\min}(\Sigma_x) + \sigma^2$$
where Σ_y is the covariance matrix of noisy patch y_i, Σ_x is the covariance matrix of noise-free patch x_i, and λ_min(Σ) represents the minimum eigenvalue of matrix Σ. The noise level σ² can be calculated if λ_min(Σ_x) is known. Since the minimum eigenvalue of weak texture patches' covariance matrix is zero, we select weakly textured blocks for noise level estimation. Liu et al. \cite{8} used an iterative threshold τ to obtain weak texture patches, defined as:
$$\tau = \sigma^2 F^{-1}\!\left(\delta, \frac{M_1^2}{2}, \frac{2}{M_1^2}\,\text{tr}(D_h^T D_h + D_v^T D_v)\right)$$
where F⁻¹(δ, α, β) is the inverse gamma cumulative distribution function with shape parameter α and scale parameter β, and D_h and D_v represent horizontal and vertical operator matrices, respectively. When the maximum eigenvalue of an image patch's covariance matrix is less than τ, the patch is considered weakly textured. Using this method, the noise level of observed image y can be estimated as:
$$\hat{\sigma}^2 = g(\lambda_{\min})$$
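A minimal sketch of this eigenvalue-based estimate, taking g as the identity (the simplest choice; names are ours). As noted later in Section 1's discussion of \cite{10}, the minimum eigenvalue tends to slightly underestimate σ², which the demonstration also exhibits:

```python
import numpy as np

def estimate_sigma2_pca(patches):
    """Smallest eigenvalue of the patch covariance matrix as a sigma^2
    estimate; valid when weak-texture (noise-only) patches dominate."""
    u = patches.mean(axis=1, keepdims=True)
    cov = (patches - u) @ (patches - u).T / patches.shape[1]
    return np.linalg.eigvalsh(cov)[0]   # eigvalsh returns ascending order

# A flat (weak-texture) grayscale image: its patches contain noise only.
rng = np.random.default_rng(0)
sigma, M1 = 0.1, 7
flat = 0.5 + rng.normal(0.0, sigma, (128, 128))
cols = [flat[i:i + M1, j:j + M1].reshape(-1)
        for i in range(128 - M1 + 1) for j in range(128 - M1 + 1)]
patches = np.stack(cols, axis=1)
est_sigma = np.sqrt(estimate_sigma2_pca(patches))
print(est_sigma)  # an underestimate of sigma = 0.1
```

Overlapping patches are correlated, so the minimum eigenvalue falls somewhat below σ²; the correction factors of \cite{10} and \cite{11} address exactly this bias.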
2.2. Tensor T-Product
For a vector v = [v₁ v₂ ⋯ vₙ]ᵀ, circ(v) denotes the circulant matrix whose first column is v. A color image can be represented as a third-order tensor A of size n₁ × n₂ × 3. Similar to circulant matrix creation, we can construct a block circulant matrix from tensor slices. A tensor A ∈ ℝ^(n₁×n₂×n₃) can be transformed into:
$$\text{circ}(A) = \begin{bmatrix} A_1 & A_{n_3} & \cdots & A_2 \\ A_2 & A_1 & \cdots & A_3 \\ \vdots & \vdots & \ddots & \vdots \\ A_{n_3} & A_{n_3-1} & \cdots & A_1 \end{bmatrix}$$
where A_i = A(:, :, i) for i = 1, 2, ..., n₃.
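The block circulant construction above can be sketched directly from the frontal slices (a sketch; zero-based Python indexing, so A_i corresponds to A[:, :, i−1]):

```python
import numpy as np

def circ(A):
    """Block circulant matrix of a third-order tensor A (n1 x n2 x n3):
    block (i, j) is the frontal slice A_{(i - j) mod n3 + 1}."""
    n1, n2, n3 = A.shape
    blocks = [[A[:, :, (i - j) % n3] for j in range(n3)] for i in range(n3)]
    return np.block(blocks)

A = np.arange(24, dtype=float).reshape(2, 4, 3)
C = circ(A)
print(C.shape)  # (6, 12): n3 block rows of n1 x n2 blocks
```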
Definition: Let tensor A ∈ ℝ^(n₁×n₂×n₃) and tensor B ∈ ℝ^(n₂×n₄×n₃). The t-product A ∗ B ∈ ℝ^(n₁×n₄×n₃) is defined as:
$$A \ast B = \text{fold}(\text{circ}(A) \cdot \text{MatVec}(B))$$
where MatVec(B) represents the operation that stacks the frontal slices of B into an (n₂n₃) × n₄ matrix, and fold is the inverse operation satisfying fold(MatVec(B)) = B.
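The T-product definition above can be sketched as follows (a sketch under the definition just given; function names are ours). As a sanity check, the identity tensor, whose first frontal slice is E and whose other slices are zero, acts as a unit for the T-product:

```python
import numpy as np

def matvec(B):
    # Stack the frontal slices of B vertically: shape (n2*n3, n4).
    return np.concatenate([B[:, :, i] for i in range(B.shape[2])], axis=0)

def fold(M, n3):
    # Inverse of matvec: split the stacked matrix back into n3 frontal slices.
    n1 = M.shape[0] // n3
    return np.stack([M[i * n1:(i + 1) * n1, :] for i in range(n3)], axis=2)

def t_product(A, B):
    """T-product A * B = fold(circ(A) @ matvec(B))."""
    n3 = A.shape[2]
    blocks = [[A[:, :, (i - j) % n3] for j in range(n3)] for i in range(n3)]
    return fold(np.block(blocks) @ matvec(B), n3)

A = np.random.rand(2, 3, 3)
B = np.random.rand(3, 4, 3)
C = t_product(A, B)
print(C.shape)  # (2, 4, 3)

# Identity tensor: first frontal slice is the identity matrix E.
I = np.zeros((3, 3, 3))
I[:, :, 0] = np.eye(3)
ident_ok = np.allclose(t_product(A, I), A)
print(ident_ok)  # True
```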
3. The Proposed Noise Level Estimation Algorithm
3.1. Model in This Paper
Traditional methods for calculating covariance matrices disrupt the tensor data structure of color images. This paper operates on the entire tensor, using a sliding window of size M₁ × M₁ × 3 to extract block tensors, rearranging each block tensor into column vectors, and reorganizing all blocks into a third-order tensor of size M₁² × s × 3. Specifically, we extract s = (n₁ − M₁ + 1) × (n₂ − M₁ + 1) block tensors from a third-order tensor of size n₁ × n₂ × 3, rearrange each frontal slice of each block into a column vector, and combine them to form a third-order tensor A of size M₁² × s × 3. Let y_j^i denote the column vector of the j-th slice in the i-th column of A, and let u_j denote the mean of {y_j^i}. The covariance matrix of each slice of tensor A can then be expressed as:
$$\Sigma_{A_j} = \frac{1}{s}\sum_{i=1}^{s} (y_j^i - u_j)(y_j^i - u_j)^T$$
We reorganize these covariance matrices into a third-order tensor B = (Σ_{A_j}) (j = 1, 2, 3).
To decompose this tensor into a form resembling a covariance matrix, we introduce the tensor I_i whose i-th frontal slice is the identity matrix E and whose remaining slices are zero. The third-order tensor B ∈ ℝ^(M₁²×M₁²×3) is transformed as follows to obtain the matrix B:
$$B = \text{unfold}(B \ast I_2) = \begin{bmatrix} \Sigma_{A_1} & \Sigma_{A_3} & \Sigma_{A_2} \\ \Sigma_{A_2} & \Sigma_{A_1} & \Sigma_{A_3} \\ \Sigma_{A_3} & \Sigma_{A_2} & \Sigma_{A_1} \end{bmatrix}$$
We use the operator bdiag(·) to transform matrix B into a block diagonal matrix:
$$\text{bdiag}(B) = \begin{bmatrix} \Sigma_{A_1} & 0 & 0 \\ 0 & \Sigma_{A_2} & 0 \\ 0 & 0 & \Sigma_{A_3} \end{bmatrix}$$
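The construction of bdiag(B) from the rearranged tensor A can be sketched as follows (a sketch of the steps above; names are ours):

```python
import numpy as np

def bdiag_of_slice_covariances(A):
    """For a rearranged tensor A of size (M1^2, s, 3), form the covariance
    matrix of each frontal slice and place the three matrices on the block
    diagonal of a (3*M1^2) x (3*M1^2) matrix."""
    d, s, n3 = A.shape
    out = np.zeros((d * n3, d * n3))
    for j in range(n3):
        Yj = A[:, :, j]
        u = Yj.mean(axis=1, keepdims=True)            # slice mean u_j
        cov = (Yj - u) @ (Yj - u).T / s               # covariance Sigma_{A_j}
        out[j * d:(j + 1) * d, j * d:(j + 1) * d] = cov
    return out

A = np.random.rand(49, 500, 3)     # M1 = 7, s = 500 patch columns
Bd = bdiag_of_slice_covariances(A)
print(Bd.shape)  # (147, 147)
```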
Since each Σ_{A_j} is a covariance matrix composed of slices of A and satisfies Assumption 1, the following theorem holds:
Theorem 2: Let λ₁, λ₂, ..., λ_r be the r eigenvalues of matrix bdiag(B) sorted in ascending order. Under Assumption 1, there exist θ₁, θ₂, ..., θ_n (n ≤ r) with θ₁ + θ₂ + ... + θ_n = 1 such that:
$$\sigma^2 - \frac{2r\sigma^2}{\sqrt{s-1}} \leq \theta_1\lambda_1 + \theta_2\lambda_2 + \cdots + \theta_n\lambda_n \leq \sigma^2 + \frac{2r\sigma^2}{\sqrt{s-1}}$$
Proof. The eigenvalues of bdiag(B) are the union of eigenvalues from Σ_{A_3}, Σ_{A_2}, and Σ_{A_1}. We first examine Σ_{A_3}. Let λ₃¹, λ₃², ..., λ₃^{r₃} be the eigenvalues of Σ_{A_3}. For any real numbers β_j (j = 1, 2, ..., r₃), we have:
$$\sum_{j=1}^{r_3} \beta_j(\bar{\lambda}_3 - \lambda_3^j) = r_3\bar{\beta}\bar{\lambda}_3 - \sum_{j=1}^{r_3} \beta_j\lambda_3^j$$
where \bar{β} = (1/r₃)∑β_j. By the Cauchy-Schwarz inequality:
$$\left|\sum_{j=1}^{r_3} \beta_j(\bar{\lambda}_3 - \lambda_3^j)\right| \leq \sqrt{\sum_{j=1}^{r_3} (\beta_j - \bar{\beta})^2 \cdot \sum_{j=1}^{r_3} (\bar{\lambda}_3 - \lambda_3^j)^2}$$
Under Assumption 1, E(λ₃^j) = λ̄₃ = σ² and D(λ₃^j) = 2σ⁴/(s−1) for each j. Therefore:
$$\sum_{j=1}^{r_3} (\bar{\lambda}_3 - \lambda_3^j)^2 = \sum_{j=1}^{r_3} (\lambda_3^j)^2 - r_3(\bar{\lambda}_3)^2 \leq \frac{2r_3\sigma^4}{s-1}$$
Letting β₁ = 1 and β_j = 0 (j ≠ 1), so that ∑(β_j − β̄)² ≤ 1, we obtain (the same argument with β_j = 1 for any fixed j bounds each λ₃^j):
$$|\sigma^2 - \lambda_3^1| \leq \frac{2r_3\sigma^2}{\sqrt{s-1}}$$
Thus:
$$\sigma^2 - \frac{2r_3\sigma^2}{\sqrt{s-1}} \leq \lambda_3^1 \leq \sigma^2 + \frac{2r_3\sigma^2}{\sqrt{s-1}}$$
Similarly, we can show:
$$\sigma^2 - \frac{2r_1\sigma^2}{\sqrt{s-1}} \leq \lambda_1^i \leq \sigma^2 + \frac{2r_1\sigma^2}{\sqrt{s-1}} \quad \text{for } i = 1, \ldots, r_1$$
$$\sigma^2 - \frac{2r_2\sigma^2}{\sqrt{s-1}} \leq \lambda_2^i \leq \sigma^2 + \frac{2r_2\sigma^2}{\sqrt{s-1}} \quad \text{for } i = 1, \ldots, r_2$$
Letting r = max{r₁, r₂, r₃} and ordering all eigenvalues, we obtain the desired inequality. ∎
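As an illustrative numerical check (not part of the proof), the concentration the bounds quantify is easy to observe: for noise-only patch columns, the eigenvalues of the sample covariance cluster around σ², with a spread that shrinks as the sample count s grows:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, d, s = 0.2, 49, 20000          # patch dimension d = M1^2, s samples
Y = rng.normal(0.0, sigma, (d, s))    # noise-only patch columns
u = Y.mean(axis=1, keepdims=True)
cov = (Y - u) @ (Y - u).T / (s - 1)
lams = np.linalg.eigvalsh(cov)
print(lams.min() / sigma**2, lams.max() / sigma**2)  # both near 1
```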
From Theorem 2, there exists θ₀ ∈ [−2rσ²/√(s−1), 2rσ²/√(s−1)] such that:
$$\hat{\sigma}^2 = \theta_0 + \theta_1\lambda_1 + \theta_2\lambda_2 + \cdots + \theta_n\lambda_n$$
where \hat{σ}² is the estimated noise level and λ₁, λ₂, ..., λ_n are the n smallest eigenvalues of matrix bdiag(B).
3.2. Algorithm Flow
To determine the values of θ_j (j = 0, 1, ..., n), this paper adopts a learning algorithm. We use M observed images with known noise levels as the training set and compute the model parameters θ_j (j = 0, 1, ..., n) by fitting the linear model σ̂² = θ₀ + θ₁λ₁ + ⋯ + θ_nλ_n derived above. Specifically, let λ₀ = 1. For the M images, we construct the loss function:
$$J(\theta_0, \theta_1, ..., \theta_n) = \frac{1}{2M} \sum_{i=1}^M \left(f(\lambda_0^{(i)}, \lambda_1^{(i)}, ..., \lambda_n^{(i)}) - \sigma^{(i)}\right)^2$$
where λ₁^{(i)}, λ₂^{(i)}, ..., λ_n^{(i)} are the n smallest eigenvalues of bdiag(B) for the i-th image, and f(λ₀, λ₁, ..., λ_n) = θ₀λ₀ + θ₁λ₁ + θ₂λ₂ + ... + θ_nλ_n. The parameters θ_j are solved using gradient descent with step size α and termination condition ε. The gradient of the loss function with respect to θ_j is:
$$\frac{\partial J}{\partial \theta_j} = \frac{1}{M}\sum_{i=1}^M \left(f(\lambda_0^{(i)}, \lambda_1^{(i)}, ..., \lambda_n^{(i)}) - \sigma^{(i)}\right)\lambda_j^{(i)}$$
The descent distance is computed by multiplying the gradient by step size α. If the descent distance does not satisfy α|∂J/∂θ_j| ≤ ε, the iterative update for θ_j is:
$$\theta_j = \theta_j - \alpha \frac{\partial J}{\partial \theta_j}$$
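The loss and update rule above can be sketched as a gradient descent fit (a sketch with our own names; α and ε play the roles of the step size and termination condition in the text, and the synthetic data below is hypothetical):

```python
import numpy as np

def train_theta(lams, sigmas, alpha=0.1, eps=1e-10, max_iter=20000):
    """Fit sigma_hat^2 = theta_0 + sum_j theta_j * lambda_j by gradient
    descent. lams is (M, n): the n smallest eigenvalues per training image;
    sigmas holds the known targets."""
    M = lams.shape[0]
    X = np.hstack([np.ones((M, 1)), lams])   # lambda_0 = 1 carries theta_0
    theta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        grad = (X @ theta - sigmas) @ X / M  # gradient of the squared loss J
        if np.all(np.abs(alpha * grad) <= eps):
            break                            # descent distance small enough
        theta -= alpha * grad
    return theta

# Synthetic check: data generated by a known linear model is recovered.
rng = np.random.default_rng(0)
lams = rng.uniform(0.0, 1.0, (200, 1))
sigmas = 0.5 + 2.0 * lams[:, 0]
theta = train_theta(lams, sigmas)
print(theta)  # approximately [0.5, 2.0]
```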
Algorithm 1 details the complete procedure. For a color image A of size n₁ × n₂ × 3, we extract s = (n₁ − M₁ + 1) × (n₂ − M₁ + 1) block tensors w(k) using an M₁ × M₁ × 3 sliding window. Each block tensor is rearranged into a matrix of size M₁² × s × 3. We compute the covariance matrix for each slice, reorganize them into a third-order tensor, and compute the n smallest eigenvalues λ₁, λ₂, ..., λ_n of the block diagonal matrix bdiag(B). Finally, combining the trained model parameters θ₀, θ₁, ..., θ_n from the training set, the noise level estimate is:
$$\hat{\sigma}^2 = \theta_0\lambda_0 + \theta_1\lambda_1 + \cdots + \theta_n\lambda_n$$
Algorithm 1: Color Image Noise Level Estimation Based on Tensor Decomposition
Input: Noisy image A; Training set \bar{A}; Termination distance ε
Output: Noise level \hat{σ}
- Extract M₁ × M₁ × 3 overlapping patches W(k) from the training images in \bar{A}
- Compute bdiag(B) using homogeneous block tensors in W(k)
- For i = 1, 2, ..., M:
  λ₁^{(i)}, λ₂^{(i)}, ..., λ_n^{(i)} ← n smallest eigenvalues of bdiag(B) for the i-th image
- train_x ← [λ₀^{(i)}, λ₁^{(i)}, ..., λ_n^{(i)}] (i = 1, 2, ..., M)
- train_y ← σ^{(i)} (i = 1, 2, ..., M)
- Learn θ₀, θ₁, ..., θ_n from train_x and train_y
- Extract M₁ × M₁ × 3 overlapping patches w(k) from A
- Compute bdiag(B) using homogeneous block tensors in w(k)
- λ₁, λ₂, ..., λ_n ← eigenvalues of bdiag(B)
- \hat{σ}² = θ₀λ₀ + θ₁λ₁ + ... + θ_nλ_n
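The final inference step of Algorithm 1 can be sketched as follows (a sketch; the coefficients in the toy example are hypothetical, not trained values):

```python
import numpy as np

def estimate_noise_level(bdiag_B, theta):
    """Apply the trained linear model to the n smallest eigenvalues of
    bdiag(B). theta has length n + 1; theta[0] is the intercept theta_0
    (it multiplies lambda_0 = 1)."""
    n = len(theta) - 1
    lams = np.linalg.eigvalsh(bdiag_B)[:n]  # eigvalsh: ascending order
    return theta[0] + theta[1:] @ lams

# Toy example with hypothetical trained coefficients.
bdiag_B = np.diag([1.0, 2.0, 3.0, 4.0])
theta = np.array([0.5, 1.0, 1.0])
print(estimate_noise_level(bdiag_B, theta))  # 0.5 + 1.0 + 2.0 = 3.5
```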
4. Experimental Results
This section presents numerical experiments conducted on the color image database TID2008 \cite{8}, comparing our method with traditional noise level estimation algorithms. Parameter optimization and model training were performed on the BSD500 database \cite{10}. All comparative experiments were implemented in MATLAB R2020a on a machine with Intel(R) UHD Graphics.
4.1. Accuracy Measures
Let σ_i denote the estimated noise level. We evaluate algorithm performance using Root Mean Square Error (RMSE) and Mean Absolute Error (MAE):
$$\text{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^N (\sigma_i - \bar{\sigma}_i)^2}$$
$$\text{MAE} = \frac{1}{N}\sum_{i=1}^N |\sigma_i - \bar{\sigma}_i|$$
where \bar{σ}_i denotes the true noise level of the i-th sample and N is the total number of samples. Smaller RMSE and MAE values indicate higher accuracy and smaller errors, respectively.
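The two accuracy measures can be computed directly (a minimal sketch with our own names and made-up illustrative values):

```python
import numpy as np

def rmse(est, truth):
    """Root mean square error between estimated and true noise levels."""
    return float(np.sqrt(np.mean((est - truth) ** 2)))

def mae(est, truth):
    """Mean absolute error between estimated and true noise levels."""
    return float(np.mean(np.abs(est - truth)))

est = np.array([5.1, 9.8, 15.3])    # hypothetical estimates
truth = np.array([5.0, 10.0, 15.0]) # ground-truth noise levels
print(rmse(est, truth), mae(est, truth))
```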
4.2. Parameter Determination
The learning algorithm involves three main parameters: M₁ (sliding block size), M (training set size), and n (number of selected eigenvalues). We use RMSE as the decision criterion. Following Liu et al. \cite{8}, we fix M₁ = 7. We randomly select M training images from BSD500, varying M from 10 to 30 in increments of 1, and estimate noise levels for 200 BSD500 images with noise levels of 5, 10, 15, 20, 25, and 30. The results are analyzed in Fig. 1 [FIGURE:1]. Overall, RMSE values for M = 10–16 are larger than those for M ≥ 20 in most cases. The minimum estimation errors occur at M = 20, 21, 24, 29, and 30. However, larger training sets increase computational time, so we select M = 20 to balance accuracy and efficiency.
The estimation performance also depends on the number of eigenvalues n. We tested n from 5 to 25 in increments of 1 on 200 BSD500 test images with noise levels of 5–30. The resulting RMSE and MAE values are shown in Fig. 2 [FIGURE:2]. As n grows, both RMSE and MAE tend to increase; within the tested range, the errors are minimized at n = 8 across different noise levels, so we select n = 8 for subsequent experiments.
4.3. Estimation Performance of the Proposed Algorithm
We randomly selected 20 images with known noise levels from BSD500 as the training set. The learned parameters are shown in Table 1 [TABLE:1], which demonstrates that different noise levels yield different learning coefficients and that our algorithm achieves good estimation results.
To further validate feasibility, we conducted experiments on the TID2008 database, with results presented in Table 2 [TABLE:2]. The algorithm consistently produces accurate estimates across various noise levels. The maximum estimation error is 0.0075 for image '8.bmp' at noise level 20 (highlighted in bold italics), while the minimum error reaches as low as 0.0001.
4.4. Experimental Comparison
We compared our algorithm with traditional methods (Pyatykh \cite{7}, Liu \cite{8}) and recent approaches (Fang \cite{10}, Liu \cite{12}). Table 3 [TABLE:3] shows estimation results for 25 TID2008 images at noise levels of 5, 10, 15, 20, 25, and 30, with optimal results marked in bold. While our algorithm does not achieve the best results on every individual image, it consistently performs best across most experiments.
For intuitive comparison, Fig. 3 [FIGURE:3] presents box plots of the five algorithms' results. Since our method and Liu et al.'s method employ learning approaches, both achieve generally high accuracy. At noise levels 15, 20, and 30, our accuracy is similar to Liu et al.'s but with fewer outliers. At noise levels 5, 10, and 25, our algorithm shows smaller estimation fluctuations, indicating better stability.
Fig. 4 [FIGURE:4] shows RMSE and MAE values. When noise level exceeds 20, Pyatykh et al.'s algorithm exhibits maximum RMSE and MAE. When noise level is below 20, Fang et al.'s algorithm shows maximum errors. At noise levels 20 and 30, our RMSE and MAE are comparable to Liu et al.'s. Overall, our algorithm demonstrates relatively high estimation accuracy.
Table 4 [TABLE:4] compares running times. Liu et al.'s algorithm is fastest because it only processes weakly textured blocks, significantly improving efficiency. Our algorithm requires training learning parameters and performing tensor decomposition, resulting in longer execution times.
5. Conclusions and Future Work
This paper presents a tensor decomposition-based method for estimating color image noise levels. By avoiding the structural disruption caused by traditional channel-wise processing, our approach improves estimation accuracy. Theorem 2 proves that eigenvalues of the block diagonal matrix obtained through T-product decomposition relate directly to image noise levels, and learning methods train the coefficients mapping these eigenvalues to noise levels.
We selected parameters M (training set size) and n (number of eigenvalues) on BSD500 and trained learning parameters on this dataset. To demonstrate accuracy, we compared four algorithms on TID2008, conducting experiments on 25 color images at noise levels of 5, 10, 15, 20, 25, and 30. Box plots and RMSE/MAE calculations confirm our algorithm's high accuracy. However, the computational cost is relatively high due to parameter training and tensor decomposition.
Our tensor decomposition method does not fully preserve the tensor's special structure, and the obtained eigenvalues cannot completely represent the tensor's true eigenvalues. Future work will focus on better maintaining tensor structure, investigating the relationship between tensor eigenvalues and image noise levels, and reducing computational time.
Acknowledgments
This study was supported by the National Natural Science Foundation of P.R. China (Grant No. 12171064).
Declarations
The authors declare no conflict of interest.
References
[1] K. Zhang, Y. Li, J. Liang, et al. Practical blind image denoising via Swin-Conv-UNet and data synthesis. Machine Intelligence Research, 2023, 20(6): 822-836.
[2] S. Wu, C. Dong, and Y. Qiao. Blind image restoration based on cycle-consistent network. IEEE Transactions on Multimedia, 2022, 25: 1111-1124.
[3] H. Liu, Z. Fang, L. Tang, et al. Plug-and-Play ADMM for Embedded Noise Level Estimation. Journal of Mathematical Imaging and Vision, 2025, 67(4): 36.
[4] P. Han, C. Ting, and L. Xi. De-correlated unbiased sequential filtering based on best unbiased linear estimation for target tracking in Doppler radar. Journal of Systems Engineering and Electronics, 2020, 31(6): 1167-1177.
[5] P. Gupta, C. G. Bampis, Y. Jin, et al. Natural scene statistics for noise estimation. IEEE Southwest Symposium on Image Analysis and Interpretation, 2018: 85-88.
[6] X. Liu, M. Tanaka, and M. Okutomi. Single-image noise level estimation for blind denoising. IEEE Transactions on Image Processing, 2013, 22(12): 5226-5237.
[7] S. Pyatykh, J. Hesser, and L. Zheng. Image noise level estimation by principal component analysis. IEEE Transactions on Image Processing, 2013, 22(2): 687-699.
[8] X. Liu, M. Tanaka, and M. Okutomi. Noise level estimation using weak textured patches of a single noisy image. IEEE International Conference on Image Processing, 2012: 665-668.
[9] G. Chen, F. Zhu, and P. A. Heng. An efficient statistical method for image noise level estimation. In: International conference on computer vision, 2015: 477-485.
[10] Z. Fang, X. Yi. A novel natural image noise level estimation based on flat patches and local statistics. Multimedia Tools and Applications, 2019, 78(13): 1-22.
[11] P. Jiang, Q. Wang, and J. Wu. Efficient noise-level estimation based on principal image texture. IEEE Trans Circuits Syst Video Technol, 2020, 30(7): 1987-1999.
[12] H. Liu, Z. Fang, and W. Lu. Noise level estimation based on eigenvalue learning. Multimedia Tools and Applications, 2024, 83(15): 44503-44525.
[13] L. Qi. Eigenvalues of a real supersymmetric tensor, Journal of Symbolic Computation, 2005, 40: 1302-1324.
[14] L. Qi. Eigenvalues and invariants of tensors, J. Math. Anal. Appl., 2007, 2: 1363-1377.
[15] L. Qi, H. H. Dai, and D. Han. Conditions for strong ellipticity and M-eigenvalues, Front. Math. China, 2009, 4: 349-364.
[16] L. Qi, Y. Wang, and E.X. Wu. D-eigenvalues of diffusion kurtosis tensors, J. Comput. Appl. Math., 2008, 221: 150-157.
[17] C. F. Cui, Y. H. Dai, and J. Nie, All real eigenvalues of symmetric tensors, SIAM J. Matrix Anal. Appl., 2014, 35: 1582-1601.
[18] L. Qi, F. Wang, and Y. Wang, Z-eigenvalue methods for a global polynomial optimization problem, Math. Program., 2009, 118: 301-316.
[19] L. De Lathauwer, B. De Moor, and J. Vandewalle, On the best rank-1 and rank-(R1; R2; ... ; RN) approximation of higher-order tensors, SIAM Journal on Matrix Analysis and Applications, 2000, 21: 1324-1342.
[20] T. G. Kolda and J. R. Mayo. Shifted power method for computing tensor eigenpairs. SIAM J. Matrix Anal. Appl., 2011, 32: 1095-1124.
[21] T. G. Kolda and J. R. Mayo. An adaptive shifted power method for computing generalized tensor eigenpairs. SIAM J. Matrix Anal. Appl., 2014, 35: 1563-1581.
[22] J. Carroll, J. Chang, Analysis of individual differences in multidimensional scaling via an n-way generalization of Eckart-Young decomposition, Psychometrika, 1970, 35: 283-319.
[23] L. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika, 1966, 31: 279-311.
[24] M. E. Kilmer, and C. D. Martin. Factorization strategies for third-order tensors. Linear Algebra and Its Applications, 2011, 435(3): 641-658.