Similar Documents
A total of 19 similar documents were retrieved (search time: 828 ms).
1.
Objective: The large volumes of data acquired by various terminal devices are often incomplete because of information loss, or are frequently affected by degradation. To restore corrupted or degraded data effectively, low-rank tensor completion has attracted considerable attention. Tensor factorization can effectively mine the intrinsic features of tensor data, but the tensor rank functions induced by traditional factorization methods cannot explore the correlations between different tensor modes; moreover, traditional tensor completion methods usually impose the total variation constraint on the tensor as a whole and thus cannot fully exploit the smoothness prior of its low-dimensional subspaces. To solve these two problems, a low-rank tensor recovery method based on sparse priors and multi-mode tensor factorization is proposed. Method: On top of a tensor rank minimization model, multi-mode tensor factorization and local sparsity of the factor matrices are incorporated. A nuclear norm constraint is first imposed on the original tensor to capture its global low-rankness. The whole tensor is then decomposed along each mode, via multi-mode tensor factorization, into a set of low-dimensional tensors and a set of factor matrices so as to explore the correlations between modes, and a factor-gradient sparsity regularization constraint is imposed on the factor matrices to explore the local sparsity of the tensor subspaces and further improve recovery performance. Results: On hyperspectral images, multispectral images, YUV (also known as YCbCr) video and medical imaging data, the proposed method was compared, quantitatively and qualitatively, with eight other restoration methods under three missing rates. In restoring these four types of tensor data, the proposed method is essentially on par with the deep-learning GP-WLRR method (global prior refined weighted low-rank representation): averaged over all missing rates and data sets, its MPSNR (mean peak signal-to-noise ratio) is 0.68 dB higher and its MSSIM (mean structural similarity) is 0.01 higher. Compared with the other six tensor-modeling methods, it achieves the best MPSNR and MSSIM. Conclusion: The proposed low-rank tensor recovery method based on sparse priors and multi-mode tensor factorization exploits the global low-rankness and the local sparsity of a tensor simultaneously and can effectively restore damaged multidimensional visual data.
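The global low-rank term described in this abstract is enforced through a nuclear norm on the tensor. As a rough illustration only, the following is a minimal NumPy sketch (not the authors' code) of singular value thresholding applied to one mode unfolding, the standard proximal step behind such nuclear-norm constraints; the tensor, the mode and the threshold tau are illustrative placeholders.

```python
# Minimal sketch: soft-threshold the singular values of one mode unfolding.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full), 0, mode)

def svt_mode(tensor, mode, tau):
    """Shrink the singular values of the mode-`mode` unfolding by tau."""
    U, s, Vt = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
    s = np.maximum(s - tau, 0.0)                  # singular value shrinkage
    return fold((U * s) @ Vt, mode, tensor.shape)

# toy usage: threshold mode 0 of a random 3-way tensor
X = np.random.rand(10, 12, 8)
X_lowrank = svt_mode(X, mode=0, tau=0.5)
```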

2.
In practical applications, recovering missing high-order data has long been an important research focus, and tensor-factorization-based methods can effectively extract the low-rank structure of the data and predict the missing entries, offering a new way to approach this problem. To address the rank-relaxation issue of the traditional tensor-ring completion model, a completion model based on the Lp (0 < p < 1) norm is established…

3.
Because data loss is unavoidable in network measurement, network monitoring data are usually incomplete or even sparse, which makes accurate detection of elephant flows a challenging problem. This paper proposes an offline elephant-flow detection method based on data completion. To detect elephant flows accurately, a matrix-factorization-based completion algorithm is first implemented, casting the traffic-data completion problem as a low-rank matrix singular value decomposition problem. This is then extended to higher orders, yielding a tensor completion model in which tensor CP decomposition performs the completion, turning the original task into a tensor completion problem that recovers the missing entries by minimizing the tensor rank. Finally, simulation experiments on the matrix completion and tensor completion algorithms compare their accuracy, evaluate the hyperparameters, and report the running time of the tensor completion algorithm. The experimental results show that the method performs well.
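For intuition about the matrix stage described here, below is a hedged sketch of a generic soft-impute style low-rank matrix completion loop (SVD plus singular value shrinkage while keeping observed entries fixed); it is not the paper's exact algorithm, and the matrix M, the observation mask and the threshold tau are illustrative.

```python
import numpy as np

def soft_impute(M, mask, tau=1.0, n_iter=100):
    """Fill missing entries of M (mask==True where observed) with a low-rank fit."""
    X = np.where(mask, M, 0.0)                    # start from zero-filled data
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)              # singular value shrinkage
        Z = (U * s) @ Vt                          # current low-rank estimate
        X = np.where(mask, M, Z)                  # keep observed entries fixed
    return X

# toy example: recover a rank-2 traffic-like matrix with 40% missing entries
rng = np.random.default_rng(0)
M = rng.random((50, 3)) @ rng.random((3, 40))
mask = rng.random(M.shape) > 0.4
M_hat = soft_impute(M, mask, tau=0.5)
```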

4.
In low-rank matrix and tensor minimization problems, convex surrogate functions make the global optimum easy to obtain, whereas nonconvex functions can produce local solutions of even lower rank. Starting from the low-rank tensor recovery problem with nonconvex surrogate functions, this paper proposes a nonconvex tensor model based on the lp norm. An iteratively reweighted nuclear norm algorithm is used to solve the model and achieve low-rank tensor minimization. Extensive experiments on synthetic data and real images verify the recovery performance of the proposed method.
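As a matrix-level illustration only (not the paper's tensor model), one iteratively reweighted nuclear norm step shrinks each singular value with a weight derived from an lp-style surrogate, so large singular values are penalized less; the weight formula, lam, p and eps below are assumptions for the sketch.

```python
import numpy as np

def reweighted_svt_step(X, lam=1.0, p=0.5, eps=1e-6):
    """One weighted singular value thresholding step with lp-derived weights."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = p * (s + eps) ** (p - 1.0)        # larger singular values get smaller weights
    s = np.maximum(s - lam * w, 0.0)      # weighted soft-thresholding
    return (U * s) @ Vt

X = np.random.rand(30, 20)
X_low = reweighted_svt_step(X, lam=0.5)
```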

5.
Low-rank tensor completion aims to recover missing data on the basis of different tensor decomposition models. Because of its clear advantages in capturing certain high-order data structures, the low-rank tensor-ring model has been widely applied to tensor completion. Previous studies have proposed many definitions of the tensor nuclear norm; however, they neither approximate the true tensor rank well nor exploit the low-rank property during optimization. Therefore, building on a truncated balanced-unfolding nuclear norm that approximates the tensor rank well, a robust tensor-ring completion model based on this norm is proposed. The optimization uses previously proposed matrix singular value decomposition together with the alternating direction method of multipliers. Experiments show that the method outperforms other algorithms on image recovery and video background modeling.

6.
Tensor completion algorithms and their application to face recognition
The missing-data problem can usually be cast as a matrix completion problem, and matrix completion is another important signal acquisition approach following compressed sensing theory. In practical applications, data samples are often multilinear, i.e., a data set can be represented as a high-order tensor. This paper studies the tensor completion problem and its application to face recognition. Based on the low-dimensional Tucker decomposition of a tensor, an iterative tensor completion algorithm is proposed, and it is proved that the distance between the estimated tensor and its Tucker approximation decreases monotonically over the iterations. Experimental results demonstrate the feasibility and effectiveness of the algorithm for tensor completion and face recognition.
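A hedged sketch of the kind of impute-then-approximate loop the abstract describes is given below; it uses a plain truncated HOSVD as the Tucker step, which may differ from the authors' exact algorithm, and the ranks and tensor sizes are toy values.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, U, mode):
    """Multiply tensor T by matrix U along the given mode."""
    return np.moveaxis(np.tensordot(U, T, axes=(1, mode)), 0, mode)

def tucker_approx(T, ranks):
    """Truncated HOSVD approximation of T with the given multilinear ranks."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)          # project onto factor spaces
    approx = core
    for mode, U in enumerate(factors):
        approx = mode_product(approx, U, mode)        # map back to full size
    return approx

def tucker_complete(T_obs, mask, ranks, n_iter=50):
    """Alternate between Tucker approximation and re-imposing observed entries."""
    X = np.where(mask, T_obs, 0.0)
    for _ in range(n_iter):
        X = np.where(mask, T_obs, tucker_approx(X, ranks))
    return X
```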

7.
To improve image classification accuracy, a nonnegative tensor factorization algorithm based on low-rank representation is proposed. As a generalization and development of compressed sensing theory, low-rank representation uses the rank of a matrix as a sparsity measure; because the rank reflects the intrinsic properties of a matrix, low-rank representation can analyze and process matrix data effectively. This paper introduces low-rank representation into the tensor model, i.e., into nonnegative tensor factorization, further extending the nonnegative tensor factorization algorithm. Experimental results show that the proposed algorithm achieves better classification results than other related algorithms.

8.
Because of detector and communication-equipment failures, missing traffic data are unavoidable, which adversely affects intelligent transportation systems (ITS). To address this problem, the concept of tensor average rank is adopted and the tensor nuclear norm is minimized to build a new low-rank tensor completion model. On this basis, using tensor singular value decomposition (T-SVD) and tensor singular value thresholding (TSVT) theory, the model is solved with coordinate gradient descent (CGD) and the alternating direction method of multipliers (ADMM), yielding two tensor completion algorithms, LRTC-CGD and LRTC-TSVT. Experiments on public real-world spatiotemporal traffic data sets show that LRTC-CGD and LRTC-TSVT achieve better completion accuracy than other existing completion algorithms under different missing scenarios and missing rates, and that their results are more stable under extreme data loss (70%-80%).
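The TSVT operation named in this abstract can be sketched as follows: an FFT along the third mode, a per-slice matrix singular value thresholding in the Fourier domain, then an inverse FFT. This is only an illustration of the t-SVD-based thresholding step, not the LRTC-CGD/LRTC-TSVT code; the threshold tau and the toy tensor are assumptions.

```python
import numpy as np

def tensor_svt(T, tau):
    """Singular value thresholding of a 3-way tensor under the t-product."""
    Tf = np.fft.fft(T, axis=2)                    # frontal slices in the Fourier domain
    out = np.empty_like(Tf)
    for k in range(T.shape[2]):
        U, s, Vt = np.linalg.svd(Tf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)
        out[:, :, k] = (U * s) @ Vt
    return np.real(np.fft.ifft(out, axis=2))

X = np.random.rand(20, 20, 5)                     # e.g. road x day x time-slot traffic
X_low = tensor_svt(X, tau=0.3)
```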

9.
For feature extraction from incomplete tensor data, the traditional two-step approach of completing the tensor first and then extracting features cannot prevent irrelevant features from enlarging the imputation error, which in turn degrades the extracted features; the recently proposed TDVM method can perform tensor completion and feature extraction simultaneously, but because it ignores the local structure of the data, its feature extraction results remain unsatisfactory. This paper therefore proposes MLTD, a feature extraction method for incomplete tensors based on manifold learning and tensor decomposition. First, a complete sample similarity matrix is obtained with the partial-distance method and nonnegative symmetric matrix factorization, from which a sample neighborhood graph is derived; a feature extraction model based on manifold learning and tensor decomposition is then built on this neighborhood graph, the main idea being to embed variance maximization and locality-preserving-projection strategies into the tensor decomposition. The method extracts effective features directly from incomplete tensors while preserving the local structure of the data. Comparisons with five recent methods on four image data sets show that the proposed method is significantly superior both in tensor completion and in classification with the extracted features.

10.
Coprime arrays have attracted wide attention in beamforming because of their large array aperture and high degrees of freedom. To exploit these properties fully, hole-filling-based algorithms have been proposed in recent years and have effectively improved coprime-array beamforming performance. However, these algorithms suffer from heavy computation and weak robustness to noise, making it hard for them to adapt to complex and changing real environments. Exploiting the advantages of the multidimensional structure of tensors for parameter estimation, this paper proposes an adaptive beamforming algorithm for coprime arrays based on low-tubal-rank tensor decomposition. The multi-snapshot virtual signal matrix of the coprime array is first rearranged into tensor form, and its low-tubal-rank property is used to complete the missing cross-correlation information; the signal parameters are then extracted from the completed tensor data and matched against the target prior to obtain the beamforming weight vector. The algorithm uses ADMM and Tucker decomposition to improve the efficiency of tensor completion and decomposition, respectively, and the designed target-matching scheme effectively controls the algorithm error. Simulation results show the advantages of the algorithm over existing methods in both performance and computational complexity, especially at low SNR and with few snapshots.

11.
In microscopic image processing for analyzing biological objects, structural characteristics of objects such as symmetry and orientation can be used as prior knowledge to improve the results. In this study, we incorporated filamentous local structures of neurons into a statistical model of image patches and then devised an image processing method based on tensor factorization with image patch rotation. Tensor factorization enabled us to incorporate the correlation structure between neighboring pixels, and patch rotation helped us obtain image bases that reproduce the filamentous structures of neurons well. We applied the proposed model to a microscopic image and found significant improvement in image restoration performance over existing methods, even with a smaller number of bases.

12.
Tensor provides a better representation for image space by avoiding information loss in vectorization. Nonnegative tensor factorization (NTF), whose objective is to express an n-way tensor as a sum of k rank-1 tensors under nonnegative constraints, has recently attracted a lot of attention for its efficient and meaningful representation. However, NTF only sees Euclidean structures in the data space and is not optimized for image representation, as image space is believed to be a sub-manifold embedded in a high-dimensional ambient space. To overcome this limitation of NTF, we propose a novel Laplacian regularized nonnegative tensor factorization (LRNTF) method for image representation and clustering in this paper. In LRNTF, the image space is represented as a 3-way tensor and we explicitly consider the manifold structure of the image space in the factorization. That is, two data points that are close to each other in the intrinsic geometry of image space shall also be close to each other under the factorized basis. To evaluate the performance of LRNTF in image representation and clustering, we compare our algorithm with the NMF, NTF, NCut and GNMF methods on three standard image databases. Experimental results demonstrate that LRNTF achieves better image clustering performance while being more insensitive to noise.
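To convey the manifold-regularization idea, here is a hedged sketch in the simpler matrix (GNMF-style) setting rather than the full 3-way LRNTF model: multiplicative updates for X ~ U V^T with a graph term lambda * tr(V^T L V), where L = D - W. The affinity matrix W, the rank r and lambda are illustrative assumptions.

```python
import numpy as np

def gnmf(X, W, r=10, lam=1.0, n_iter=200, eps=1e-9):
    """Graph-regularized NMF with multiplicative updates (matrix-case sketch)."""
    n, m = X.shape
    D = np.diag(W.sum(axis=1))                    # degree matrix of the data graph
    rng = np.random.default_rng(0)
    U = rng.random((n, r))
    V = rng.random((m, r))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ U.T @ U + lam * D @ V + eps)
    return U, V

# toy usage: 100 samples with a simple chain-graph affinity over samples
X = np.random.rand(64, 100)
W = np.diag(np.ones(99), 1) + np.diag(np.ones(99), -1)
U, V = gnmf(X, W, r=5, lam=0.5)
```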

13.
Remote sensing image fusion is considered a cost-effective method for handling the tradeoff between the spatial, temporal and spectral resolutions of current satellite systems. However, most current fusion methods concentrate on fusing images in only two of the spatial, temporal and spectral domains, and few efforts have been made to comprehensively explore the relationships of spatio-temporal-spectral features. In this study, we propose a novel integrated spatio-temporal-spectral fusion framework based on semicoupled sparse tensor factorization to generate synthesized frequent high-spectral- and high-spatial-resolution images by blending multisource observations. Specifically, the proposed method regards the desired high spatio-temporal-spectral resolution images as a four-dimensional tensor and formulates the integrated fusion problem as the estimation of the core tensor and the dictionary along each mode. The high spectral correlation across the spectral domain and the high self-similarity (redundancy) in the spatial and temporal domains are jointly exploited using the low-dimensional and sparse core tensors. In addition, assuming that the sparse coefficients in the core tensors across the observed and desired image spaces are not strictly the same, we formulate the estimation of the core tensor and the dictionaries as a semicoupled sparse tensor factorization of the available heterogeneous spatial, spectral and temporal remote sensing observations. Finally, the proposed method can exploit the complementary spatial, temporal and spectral information of any combination of remote sensing data based on this single unified model. Experiments on multiple data types, including spatio-spectral, spatio-temporal, and spatio-temporal-spectral data fusion, demonstrate the effectiveness and efficiency of the proposed method.

14.

Rain streaks in an image can degrade its visual quality to the human eye. Unfortunately, removing rain streaks from a single image is a very challenging task. In this paper, a single-image rain removal process based on non-negative matrix factorization is proposed. First, the rain image is decomposed into a low-frequency part and a high-frequency part by a Gaussian filter, so that the rain component, which lies mostly in the middle frequency range, can be discarded in the high- and low-frequency domains. Next, the non-negative matrix factorization (NMF) method is applied to deal with the rain streaks in the low-frequency domain. Finally, Canny edge detection and a block-copy strategy are performed separately to remove the rain component in the high-frequency domain and improve image quality. In comparison with state-of-the-art approaches, the proposed method achieves competitive results without the need for an extra image database to train the dictionary.
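A minimal sketch of the first step described above, assuming SciPy is available: splitting a rain image into low- and high-frequency parts with a Gaussian filter. The sigma value and the placeholder image are assumptions, and the paper's NMF dictionary step and block-copy refinement are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split(image, sigma=2.0):
    """Return (low_frequency, high_frequency) parts of a grayscale image."""
    low = gaussian_filter(image.astype(float), sigma=sigma)
    high = image.astype(float) - low              # detail, including most rain streaks
    return low, high

img = np.random.rand(128, 128)                    # placeholder for a rain image
low, high = frequency_split(img, sigma=2.0)
```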


15.
One of the main difficulties in tensor completion is the calculation of the tensor rank. Recently, a tensor nuclear norm, equal to the weighted sum of the matrix nuclear norms of all unfoldings of the tensor, was proposed to address this issue. However, in this approach all the singular values are minimized simultaneously, so the tensor rank may not be well approximated. In addition, many existing algorithms ignore the structural information of the tensor. This paper presents a tensor completion algorithm based on the proposed tensor truncated nuclear norm, which is superior to the traditional tensor nuclear norm. Furthermore, to preserve the structural information, a sparse regularization term defined in the transform domain is added to the objective function. Experimental results show that the proposed algorithm outperforms several state-of-the-art tensor completion schemes.
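For reference, the matrix truncated nuclear norm that the tensor version builds on is the sum of all singular values except the largest r, so the dominant information-carrying singular values are not penalized. A small sketch, where the cut-off r is an illustrative parameter:

```python
import numpy as np

def truncated_nuclear_norm(M, r):
    """Sum of all singular values of M except the r largest."""
    s = np.linalg.svd(M, compute_uv=False)        # singular values, descending
    return s[r:].sum()

M = np.random.rand(40, 30)
print(truncated_nuclear_norm(M, r=5))
```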

16.
To improve the reconstruction quality of dynamic MR images, this paper proposes a dynamic magnetic resonance image reconstruction algorithm that combines tensor singular value decomposition with a total variation (TV) sparse model. The algorithm imposes both a low-rank constraint and a sparsity constraint on the dynamic MR images, solved respectively with a tensor singular value thresholding method and a total-variation sparsifying-transform method. Experimental results and visual inspection of the reconstructions show that, at the same sampling rate, the proposed algorithm achieves better reconstruction quality than total variation alone, k-t SLR, or tensor singular value decomposition alone, with improvements in peak signal-to-noise ratio (PSNR), mean squared error (MSE) and structural similarity (SSIM), and it is of practical value for denoising and deblurring reconstruction.

17.
Zhang Tianheng, Zhao Jianli, Sun Qiuxia, Zhang Bin, Chen Jianjian, Gong Maoguo. Applied Intelligence, 2022, 52(7): 7761-7776
In recent years, low-rank tensor completion has been widely used in color image recovery. Tensor Train (TT), as a balanced tensor rank minimization method, has achieved good...

18.
Transductive multimodal video semantic concept detection based on tensor representation
吴飞, 刘亚楠, 庄越挺. 《软件学报》, 2008, 19(11): 2853-2868
This paper proposes a framework for video semantic analysis and understanding based on high-order tensor representation. In this framework, a video shot is first represented as a third-order tensor composed of the multimodal data it contains, such as text, visual and audio features. Next, based on this third-order tensor representation and the temporal co-occurrence characteristics of video, a subspace-embedding dimensionality-reduction method, called the tensor shot, is designed. Because transductive learning can learn and recognize specific unknown samples starting from known samples, a transductive support tensor machine algorithm based on tensor shots is then proposed within this framework; it not only preserves the intrinsic structure of the manifold space in which the tensor shots lie, but also maps data outside the training set directly into the manifold subspace, while making full use of unlabeled samples to improve the classifier's learning performance. Experimental results show that the method can effectively detect semantic concepts in video shots.

19.
Most existing research on applying matrix factorization approaches to query-focused multi-document summarization (Q-MDS) explores either soft/hard clustering or low-rank approximation methods. We employ a different kind of matrix factorization method, namely weighted archetypal analysis (wAA), for Q-MDS. In query-focused summarization, given a graph representation of a set of sentences weighted by similarity to the given query, positively and/or negatively salient sentences are values on the boundary of the weighted data set. We choose to use wAA to compute these extreme values, the archetypes, and hence to estimate the importance of sentences in the target document set. We investigate the impact of using the multi-element graph model for query-focused summarization via wAA. We conducted experiments on the data of the Document Understanding Conference (DUC) 2005 and 2006. Experimental results show the improvement of the proposed approach over other closely related methods and many state-of-the-art systems.
