Found 19 similar documents; search time 48 ms
1.
Current recommendation methods ignore the time factor when providing personalized recommendation services to network users, which leads to large deviations between the final recommendations and users' actual needs, low recommendation coverage, slow convergence, and a tendency to overfit. To address these problems, a collaborative filtering recommendation method based on local low-rank tensor decomposition is proposed. Building on a model of network-user information, the method computes the similarity of users' ratings on the same items with three different measures (cosine similarity, the Pearson correlation coefficient, and the Jaccard coefficient), realizing collaborative filtering recommendation based on the user information model. To make the final recommendations more scalable, accurate, and timely, users, items, and service providers are considered jointly when predicting a user's preference for an item: the two-dimensional rating matrix is extended to a three-dimensional tensor that reflects the latent factors influencing user preference, and users' implicit preference degrees are predicted by decomposing the local low-rank user-item-provider tensor. Finally, the top-ranked items among the predictions are recommended to the user, completing the personalized recommendation service. Simulation results show that the proposed method achieves high recommendation accuracy and coverage and fast convergence while avoiding overfitting.
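The three similarity measures named above can be sketched in plain Python. The function names and the use of plain lists/sets are our illustration, not the paper's implementation; note that Jaccard is computed over the *sets* of rated items, while cosine and Pearson operate on the rating values themselves:

```python
import math

def cosine_sim(u, v):
    # Cosine similarity between two rating vectors over co-rated items
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def pearson_sim(u, v):
    # Pearson correlation: centre each rating vector, then take cosine
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return cosine_sim([a - mu for a in u], [b - mv for b in v])

def jaccard_sim(rated_u, rated_v):
    # Jaccard coefficient over the sets of items each user has rated
    union = len(rated_u | rated_v)
    return len(rated_u & rated_v) / union if union else 0.0
```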
2.
Traditional recommendation models are static and ignore the time factor. Some recommendation algorithms do take time into account, but only by using the most recent data or down-weighting older data, which may discard useful information. To address this problem, a local low-rank tensor decomposition recommendation algorithm that incorporates the time factor is proposed. Building on traditional recommendation algorithms, it relaxes the assumption that the user-item rating matrix is low-rank: the whole rating matrix may not be low-rank but only locally low-rank, i.e., the neighborhood of a particular user-item pair is low-rank. At the same time, it incorporates the time factor by treating the rating data as a tensor with three modes (user, item, and time), extending traditional recommendation algorithms to the tensor domain. Experiments show that the proposed algorithm significantly improves ranking-based recommendation performance.
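The first step of any such time-aware model is turning (user, item, timestamp, rating) records into the three-mode tensor described above. A minimal plain-Python sketch, where the record format and the choice of equal-width time bins are illustrative assumptions rather than the paper's exact design:

```python
# Sketch: build a user x item x time rating tensor by binning timestamps
# into n_bins equal-width intervals over [t_min, t_max].
def build_rating_tensor(records, n_users, n_items, n_bins, t_min, t_max):
    width = (t_max - t_min) / n_bins
    T = [[[0.0] * n_bins for _ in range(n_items)] for _ in range(n_users)]
    for user, item, stamp, rating in records:
        # Clamp the last bin so t_max itself stays in range
        b = min(int((stamp - t_min) / width), n_bins - 1)
        T[user][item][b] = rating
    return T
```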
3.
4.
Hyperspectral image change detection provides temporal change information about the Earth's surface and is crucial for urban and rural planning and management. Owing to their high spectral resolution, hyperspectral images are often used to detect finer changes. For the hyperspectral change-detection problem, a method based on collaborative sparsity and non-local low-rank tensors is proposed. The method first computes the difference image between the two acquisition times and then, according to the non-local distribution of the image blocks in the difference image, extracts different non-local…
5.
In practical applications, recovering missing high-order data has long been an important research topic, and methods based on tensor decomposition can effectively extract the low-rank structure of the data and predict the missing entries, offering a new approach to this problem. To address the rank-relaxation problem of the traditional tensor-ring completion model, a model based on the Lp (0…
6.
7.
Objective: The large volumes of data acquired by various terminal devices are often incomplete due to information loss, or plagued by degradation. Low-rank tensor completion has attracted much attention as a way to restore corrupted or degraded data. Tensor decomposition can effectively mine the intrinsic features of tensor data, but the tensor rank functions induced by traditional decomposition methods cannot explore the correlations between different tensor modes; in addition, traditional tensor completion methods usually impose a total-variation constraint on the whole tensor and cannot fully exploit the smoothness prior of the tensor's low-dimensional subspaces. To solve these two problems, a low-rank tensor recovery method based on sparse priors and multi-mode tensor decomposition is proposed. Method: Multi-mode tensor decomposition and local sparsity of the decomposition factors are incorporated into a tensor rank-minimization model. A nuclear-norm constraint is first imposed on the original tensor to capture its global low-rankness; multi-mode tensor decomposition then factorizes the whole tensor along each mode into a set of low-dimensional tensors and factor matrices, exploring the correlations between different modes. A factor-gradient sparsity regularization constraint is imposed on the factor matrices to exploit the local sparsity of the tensor subspaces and further improve recovery performance. Results: On hyperspectral images, multispectral images, YUV (YCbCr) video, and medical imaging data, the proposed method is compared quantitatively and qualitatively with eight other restoration methods at three missing-data rates. On the four types of tensor data it performs roughly on par with the deep-learning GP-WLRR method (global prior refined weighted low-rank representation), with an MPSNR (mean peak signal-to-noise ratio) 0.68 dB higher and an MSSIM (mean structural similarity) 0.01 higher on average over all missing rates and tensor data; compared with the six other tensor-modeling methods, it achieves the best MPSNR and MSSIM. Conclusion: The proposed low-rank tensor recovery method based on sparse priors and multi-mode tensor decomposition exploits both the global low-rankness and the local sparsity of tensors and can effectively restore damaged multidimensional visual data.
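The two ingredients combined above, a nuclear-norm (global low-rank) constraint and per-mode factorization, both rest on two primitives: mode-n unfolding and singular value thresholding. A minimal NumPy sketch of those primitives (function names are ours, not the paper's):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: mode-n fibers of the tensor become matrix rows
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    # Inverse of unfold for a target tensor of the given shape
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    # Singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```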
8.
To overcome the shortcomings of traditional matrix completion algorithms in image reconstruction, a completion algorithm based on non-local self-similarity and low-rank matrix approximation (NL-LRMA) is proposed. First, non-local blocks similar to each local block of the image are found via a similarity measure, and their gray-level information is vectorized to build a non-local similarity matrix. Then, exploiting the low-rankness of this matrix, a low-rank completion operation (LRMA) is performed on it. Finally, the completed results are reassembled to recover the original image. Reconstruction experiments on grayscale and RGB images show that on classical datasets NL-LRMA outperforms the original LRMA algorithm by 4-7 dB in average peak signal-to-noise ratio (PSNR), and that it is also clearly superior to traditional algorithms such as iteratively reweighted nuclear norm (IRNN), weighted nuclear norm (WNNM), and LRMA in both visual quality and PSNR. In short, the proposed algorithm effectively remedies the shortcomings of traditional algorithms in natural-image reconstruction and provides a practical solution for image reconstruction.
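The two core steps, grouping similar patches into a matrix and replacing that matrix by a low-rank approximation, can be sketched in NumPy as below. The grouping criterion (squared Euclidean distance) and the fixed-rank truncation are illustrative simplifications of the NL-LRMA pipeline, not its exact formulation:

```python
import numpy as np

def low_rank_approx(M, r):
    # Best rank-r approximation of the stacked-patch matrix (Eckart-Young)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def group_similar(patches, ref_idx, k):
    # Stack the k patches closest (in squared Euclidean distance) to the
    # reference patch as columns of one matrix
    ref = patches[ref_idx]
    dist = [float(np.sum((p - ref) ** 2)) for p in patches]
    order = sorted(range(len(patches)), key=lambda i: dist[i])[:k]
    return np.stack([patches[i].ravel() for i in order], axis=1)
```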
9.
Common image-denoising methods use the prior information of either noise-free images or noisy images alone, without effectively combining the two. To address this problem, an image-denoising algorithm is proposed that combines the prior information of noise-free image blocks with the non-local self-similarity of noisy image blocks. First, spectral clustering is performed on the noise-free image blocks: similar blocks are grouped into the same cluster, and the learned clustering information is used to cluster the noisy image blocks. Then the noisy blocks in each cluster are vectorized and stacked into a matrix; the original image data contained in this matrix form a low-rank matrix, and the corresponding original image data are estimated through a low-rank approximation step. Finally, the image is reconstructed from the approximated original data. Experimental results show that, compared with an existing adaptively regularized non-local means algorithm and a two-stage denoising algorithm based on principal component analysis and local pixel grouping, the proposed algorithm not only achieves a higher peak signal-to-noise ratio but also preserves image details better, yielding a better denoising result.
10.
11.
Background/foreground separation is the first step in a video surveillance system for detecting moving objects. Recent research on problem formulations based on decomposition into low-rank plus sparse matrices provides a suitable framework for separating moving objects from the background. The most representative formulation is Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit (PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix. However, similar robust implicit or explicit decompositions arise in the following formulations: Robust Non-negative Matrix Factorization (RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal of these formulations is to obtain, explicitly or implicitly, a decomposition into a low-rank matrix plus additive matrices. They differ in the implicit or explicit decomposition, the loss function, the optimization problem and the solvers. As the problem can be NP-hard in its original formulation, and convex or not depending on the constraints and loss functions used, the key challenges concern the design of efficient relaxed models and solvers that require as few iterations as possible while being as efficient as possible. In the application of background/foreground separation, constraints inherent to the specificities of the background and the foreground, such as temporal and spatial properties, need to be taken into account in the design of the problem formulation. In practice, the background sequence is modeled by a low-rank subspace that can gradually change over time, while the moving foreground objects constitute correlated sparse outliers.
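The low-rank plus sparse split at the heart of PCP can be illustrated with a deliberately naive alternating scheme: singular value thresholding for the low-rank part, soft thresholding for the sparse part. This is a sketch with hand-picked thresholds, not the inexact-ALM solver usually used for PCP:

```python
import numpy as np

def soft(X, lam):
    # Soft thresholding: proximal operator of the elementwise l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def rpca_sketch(D, lam, tau, n_iter=50):
    # Alternate D ~ L + S: update L by singular value thresholding of D - S,
    # then S by soft thresholding of the residual D - L
    S = np.zeros_like(D)
    L = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        S = soft(D - L, lam)
    return L, S
```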
Although many efforts have been made to develop decomposition-into-low-rank-plus-additive-matrices methods that perform visually well in foreground detection while reducing computational cost, no algorithm today seems to emerge that can simultaneously address all the key challenges of real-world videos. This is due, in part, to the absence of a rigorous quantitative evaluation on synthetic and realistic large-scale datasets with accurate ground truth providing balanced coverage of the range of challenges present in the real world. In this context, this work initiates a rigorous and comprehensive review of the similar problem formulations in robust subspace learning and tracking based on decomposition into low-rank plus additive matrices, in order to test and rank existing algorithms for background/foreground separation. We first provide a preliminary review of recent developments in the different problem formulations, which allows us to define a unified view that we call Decomposition into Low-rank plus Additive Matrices (DLAM). We then carefully examine each method in each robust subspace learning/tracking framework, with its decomposition, loss function, optimization problem and solvers. Furthermore, we investigate whether incremental algorithms and real-time implementations can be achieved for background/foreground separation. Finally, experimental results on a large-scale dataset called Background Models Challenge (BMC 2012) show the comparative performance of 32 different robust subspace learning/tracking methods.
12.
With the advent of the "everything is a service" era, the number of Web services on the Internet is growing exponentially. How to recommend services to users using their sequential historical records has become one of the most challenging research topics in service computing. Tensor Factorization (TF) and Long Short-Term Memory (LSTM) networks are two typical application paradigms for sequential service recommendation tasks. However, TF can only learn static short-term dependency patterns between users and services, ignoring the dynamic long-term dependency patterns. Although LSTM in Deep Learning can learn dynamic long-term dependency patterns, it often suffers from vanishing gradients due to its complex gating mechanism. To address these critical challenges, we develop a novel Deep Learning model named Recurrent Tensor Factorization (RTF) with three innovations: (1) the three-dimensional user-service-time interaction tensor is granulated into three fixed-size dense embedding vectors; (2) a Personalized Gated Recurrent Unit (PGRU) and Generalized Tensor Factorization (GTF) operate simultaneously on the shared embedding vectors to memorize the long-term and short-term dependency patterns between users and services, respectively; (3) armed with GTF and PGRU, RTF is competent to predict unknown Quality of Service (QoS) through comprehensive analysis. Experiments conducted on a real-world dataset indicate that our proposed method clearly outperforms six state-of-the-art time-aware service recommendation methods.
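The short-term GTF component can be caricatured as a CP-style trilinear interaction of the three shared embedding vectors. This toy scoring function is our illustration of that idea, not the paper's RTF architecture:

```python
def gtf_score(user_vec, service_vec, time_vec):
    # CP-style trilinear interaction: sum_k u_k * s_k * t_k, one term per
    # latent dimension of the shared user/service/time embeddings
    return sum(u * s * t for u, s, t in zip(user_vec, service_vec, time_vec))
```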
13.
Existing non-negative matrix factorization methods compute low-dimensional representations directly on the original high-dimensional image dataset and are sensitive to noisy data, noisy labels, and unreliable graphs, giving poor robustness. To solve these problems, this paper proposes a non-negative low-rank graph-embedding algorithm based on the L21 norm (NLGEL21), which considers both the effective low-rank structure and the geometric information of the original dataset. The L21 norm is introduced into the graph-embedding and data-reconstruction functions to further improve robustness, and multiplicative update rules for solving NLGEL21 are given together with a convergence proof. Experiments on the ORL, CMU PIE, and YaleB face databases verify the superiority of NLGEL21.
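The L21 (l2,1) norm used here sums the Euclidean norms of the rows, so a corrupted sample (a whole row) is penalized linearly rather than quadratically, which is the source of the robustness. A minimal sketch (our naming):

```python
import math

def l21_norm(M):
    # l2,1 norm of a matrix: sum over rows of each row's Euclidean (l2) norm
    return sum(math.sqrt(sum(x * x for x in row)) for row in M)
```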
14.
Tensor completion algorithms and their application in face recognition. Cited by: 4 (self-citations: 0, other citations: 4)
The missing-data problem can usually be cast as matrix completion, an important signal-acquisition approach that followed compressed sensing. In practical applications, data samples are often multilinear, i.e., the dataset can be represented as a higher-order tensor. This paper studies the tensor completion problem and its application in face recognition. Based on the low-dimensional Tucker decomposition of a tensor, an iterative tensor completion algorithm is proposed, and it is proved that during the iterations the distance between the estimated tensor and its Tucker approximation decreases monotonically. Experimental results demonstrate the feasibility and effectiveness of the algorithm for both tensor completion and face recognition.
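The iteration described above, alternating between a Tucker approximation of the current estimate and re-imposing the observed entries, can be sketched with a truncated HOSVD standing in for the Tucker fit. The hard-impute scheme, mean initialization, and HOSVD truncation below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def mode_dot(T, M, mode):
    # Mode-n product: multiply matrix M onto the given mode of tensor T
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0)),
                       0, mode)

def tucker_complete(X, mask, ranks, n_iter=200):
    # Impute-and-truncate completion: fill missing entries with the observed
    # mean, project onto a rank-(r1, ..., rN) Tucker approximation, then
    # restore the observed entries, and repeat.
    T = np.where(mask, X, X[mask].mean())
    for _ in range(n_iter):
        factors = []
        for mode, r in enumerate(ranks):
            Tm = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
            U, _, _ = np.linalg.svd(Tm, full_matrices=False)
            factors.append(U[:, :r])
        core = T
        for mode, U in enumerate(factors):
            core = mode_dot(core, U.T, mode)   # project onto mode subspaces
        approx = core
        for mode, U in enumerate(factors):
            approx = mode_dot(approx, U, mode)  # map back to the full space
        T = np.where(mask, X, approx)           # keep observed entries fixed
    return T
```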
15.
Several low-rank tensor completion methods have been integrated with total variation (TV) regularization to retain edge information and promote piecewise smoothness. In this paper, we first construct a fractional Jacobian matrix to nonlocally couple the structural correlations across components and propose a fractional-Jacobian-extended tensor regularization model, whose energy functional is proportional to the mixed norm of the fractional Jacobian matrix. Consistent regularization can thereby be performed on each component, avoiding band-by-band TV regularization and enabling effective handling of contaminated fine-grained and complex details thanks to the introduction of a fractional differential. Since the proposed spatial regularization is linear and convex, we further derive a novel fractional generalization of the classical primal-dual resolvent to build an efficient solver. We then combine the proposed tensor regularization model with low-rank constraints for tensor completion, addressing the problem with the augmented Lagrange multiplier method, which provides a splitting scheme. Several experiments illustrate the performance of the proposed method for RGB and multispectral image restoration, especially its ability to effectively recover complex structures and the details of multi-component visual data.
16.
17.
Virtually all previous classifier models take vectors as inputs and operate directly on vector patterns, but in real applications it is often necessary to treat images as matrices. In this paper, we represent images as second-order tensors, i.e., matrices, and propose two novel tensor algorithms, Maximum Margin Multisurface Proximal Support Tensor Machine (M3PSTM) and Maximum Margin Multi-weight Vector Projection Support Tensor Machine (M3VSTM), for classifying and segmenting images. M3PSTM and M3VSTM operate in tensor space and compute two proximal tensor planes for multisurface learning. To avoid the singularity problem, the maximum margin criterion is used to formulate the optimization problems; the proposed tensor classifiers thus have an analytic form for the projection axes and achieve maximum-margin representations for classification. With the tensor representation, the number of estimated parameters is significantly reduced, which makes M3PSTM and M3VSTM more computationally efficient on high-dimensional datasets than vector-representation-based methods. Thorough image classification and segmentation simulations on benchmark UCI and real datasets verify the efficiency and validity of our approaches. The visual and numerical results show that M3PSTM and M3VSTM deliver comparable or even better performance than some state-of-the-art classification algorithms.
18.
Traditional clustering algorithms generally use the Euclidean distance to obtain the similarity matrix of the data. On more complex data, the Euclidean distance cannot reflect global consistency and therefore fails to describe the actual distribution of the data points. An adaptive clustering algorithm based on rank-constrained density-sensitive distance (RCDSD) is proposed. The method first introduces a density-sensitive similarity measure to obtain the similarity matrix, which effectively enlarges the distance between data points in different classes and shrinks the distance between points in the same class, overcoming the bias that traditional clustering algorithms incur by using the Euclidean distance as the similarity measure. Second, a rank constraint is imposed on the Laplacian matrix of the similarity matrix so that the number of connected components of the similarity graph equals the number of clusters; the data points are thus partitioned directly into the correct classes and the final clustering is obtained without running k-means or any other discretization procedure. Extensive experiments on synthetic and real datasets show that the proposed algorithm obtains accurate clustering results and improves clustering performance.
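The rank constraint works because, by spectral graph theory, the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components of the graph; forcing the Laplacian's rank to n - c therefore forces exactly c components, which are read off directly as the clusters. A sketch that counts those components by graph traversal (adjacency-matrix input and naming are ours):

```python
def count_components(adj):
    # Count connected components of a similarity graph by depth-first search;
    # this equals the multiplicity of eigenvalue 0 of the graph Laplacian.
    n = len(adj)
    seen = [False] * n
    components = 0
    for start in range(n):
        if seen[start]:
            continue
        components += 1
        stack = [start]
        seen[start] = True
        while stack:
            v = stack.pop()
            for w in range(n):
                if adj[v][w] > 0 and not seen[w]:
                    seen[w] = True
                    stack.append(w)
    return components
```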
19.
When existing tensor-factorization techniques are used for knowledge-graph learning and reasoning, they consider only the direct relations between entities and ignore the graph structure of the knowledge graph. This paper therefore proposes a knowledge-graph reasoning algorithm based on path tensor factorization (PRESCAL). The path ranking algorithm (PRA) is used to obtain the relation paths between entity pairs in the knowledge graph; these relation paths are then tensor-factorized, with alternating least squares used in the optimization updates. Experiments show that PRESCAL achieves good prediction accuracy on path-query answering and entity link-prediction tasks.
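PRESCAL builds on RESCAL-style factorization, in which a (subject, relation, object) triple is scored bilinearly as e_s^T R e_o, with one matrix R per relation. A plain-Python sketch of that scoring function (ours, illustrative, not the paper's path-based extension):

```python
def bilinear_score(e_s, R, e_o):
    # RESCAL-style bilinear score: e_s^T * R * e_o
    Ro = [sum(R[i][j] * e_o[j] for j in range(len(e_o)))
          for i in range(len(R))]
    return sum(s * x for s, x in zip(e_s, Ro))
```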