Similar Documents
20 similar documents found
1.
The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of the squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some of the existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
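The PRESS statistic mentioned above can be evaluated in closed form for a linear-in-the-weights model without refitting the model N times. The sketch below illustrates this with ordinary least squares and the hat matrix; the design matrices, data, and basis choices are illustrative assumptions, not the paper's orthogonal forward regression implementation.

```python
import numpy as np

def press_statistic(Phi, y):
    """Leave-one-out (delete-1) PRESS for a linear-in-the-weights model y ~ Phi @ w.

    Uses the identity e_loo_i = e_i / (1 - h_ii), where h_ii is the i-th diagonal
    entry of the hat matrix H = Phi (Phi^T Phi)^{-1} Phi^T, so no refitting is needed.
    """
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # least-squares weights
    residuals = y - Phi @ w                            # ordinary training residuals
    H = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)      # hat (projection) matrix
    loo_residuals = residuals / (1.0 - np.diag(H))     # delete-1 residuals
    return np.sum(loo_residuals ** 2)

# Toy usage: compare the PRESS of two candidate basis sets on the same data.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=80)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(80)
Phi_small = np.column_stack([np.ones_like(x), x, x ** 2])
Phi_large = np.column_stack([x ** k for k in range(8)])
print(press_statistic(Phi_small, y), press_statistic(Phi_large, y))
```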

2.
A new sparse kernel probability density function (pdf) estimator based on a zero-norm constraint is constructed using the classical Parzen window (PW) estimate as the target function. The so-called zero-norm of the parameters is used in order to achieve enhanced model sparsity, and it is suggested to minimize an approximate function of the zero-norm. It is shown that, under certain conditions, the kernel weights of the proposed pdf estimator based on the zero-norm approximation can be updated using the multiplicative nonnegative quadratic programming algorithm. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
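A minimal sketch of the multiplicative nonnegative quadratic programming step referred to above, written for the simplified case of minimizing (1/2) wᵀAw - bᵀw with w ≥ 0 when A and b are entrywise nonnegative (true for Gaussian Gram matrices). The kernel width, data, target construction, and final renormalization are illustrative assumptions; the paper's zero-norm approximation term is not included.

```python
import numpy as np

def gaussian_gram(X, Y, sigma):
    """Pairwise Gaussian kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mnqp_weights(A, b, n_iter=500, eps=1e-12):
    """Multiplicative updates for min 0.5 w'Aw - b'w  s.t. w >= 0,
    assuming A and b have nonnegative entries (Gaussian kernels do)."""
    w = np.full(len(b), 1.0 / len(b))
    for _ in range(n_iter):
        w *= b / (A @ w + eps)       # fixed point: (A w)_i = b_i on the support of w
    return w

# Fit kernel weights so that sum_j w_j k(x_i, x_j) matches the Parzen target.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 1))
sigma = 0.4
K = gaussian_gram(X, X, sigma)
parzen = K.mean(axis=1)              # PW estimate at the training points (up to a constant)
w = mnqp_weights(K, parzen)
w /= w.sum()                         # renormalize so the mixture weights sum to one
print("non-negligible kernels:", int((w > 1e-3).sum()), "of", len(w))
```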

3.
Most existing point-of-interest (POI) recommendation algorithms suffer from sparse check-in data, difficulty in obtaining social relations, and limited modeling of individual user preferences. This paper proposes a POI recommendation algorithm that fuses geographical information, category information, and implicit social relations. First, user check-in category information is taken into account by jointly factorizing the user-location check-in matrix and the user-category check-in matrix, reducing the impact of check-in data sparsity. Then, building on explicit social relations, an information-entropy measure is used to quantify the implicit social relations among users, alleviating the sparsity of the social network, and these implicit relations are incorporated into the matrix factorization model through a regularization term. Finally, adaptive kernel density estimation is used to model, for each user individually, the influence of geographical information on check-in behavior, improving recommendation accuracy. Experiments on the Foursquare and Yelp datasets verify the effectiveness of the proposed algorithm.
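A generic sketch of one gradient step of matrix factorization with a social regularizer, of the kind this abstract describes. It is not the authors' exact model: the loss weights, the similarity matrix S (standing in for the entropy-based implicit relations), and all names are assumptions; the geographic kernel-density term is omitted.

```python
import numpy as np

def social_mf_step(U, V, R, mask, S, lr=0.01, lam=0.05, beta=0.1):
    """One gradient step for:
       ||mask*(R - U V^T)||^2 + lam(||U||^2 + ||V||^2) + beta*||U - S U||^2
    where S holds (implicit) social similarity weights between users."""
    E = mask * (R - U @ V.T)                 # prediction error on observed check-ins
    social_diff = U - S @ U                  # deviation from socially weighted neighbors
    grad_U = -E @ V + lam * U + beta * (social_diff - S.T @ social_diff)
    grad_V = -E.T @ U + lam * V
    return U - lr * grad_U, V - lr * grad_V

# Tiny synthetic example (shapes only; real data would come from check-in logs).
rng = np.random.default_rng(2)
n_users, n_items, k = 30, 50, 8
R = (rng.random((n_users, n_items)) < 0.05).astype(float)
mask = (R > 0).astype(float)
S = rng.random((n_users, n_users)); S /= S.sum(axis=1, keepdims=True)
U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((n_items, k))
for _ in range(100):
    U, V = social_mf_step(U, V, R, mask, S)
```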

4.
For signals of unknown sparsity received by cognitive users, a cooperative spectrum sensing algorithm based on blind-sparsity matching pursuit is proposed. After automatically adjusting the number of atoms in the candidate set, the algorithm estimates the sparsity through stage switching during the iterations and uses a backtracking mechanism to obtain the globally optimal support set; at the same time, the best cooperating users are selected for joint detection via SNR estimation, enabling fast spectrum sensing. Experimental results show that, under the same conditions, the algorithm outperforms comparable algorithms, with a detection rate about 25% higher than that of cooperative detection without user selection.

5.
To address the limited detail recovery of existing sparse-representation-based blind image deconvolution algorithms, a blind deconvolution algorithm based on sparse representation and gradient priors is proposed. Although each image patch can be sparsely represented over a dictionary, images reconstructed from such patches often exhibit artifacts. This paper therefore incorporates gradient priors and a hyper-Laplacian prior into the sparse-representation blind deconvolution model and alternately estimates the intermediate sharp image and the blur kernel in an iterative scheme; once the blur kernel is obtained, a hyper-Laplacian non-blind deconvolution algorithm recovers the final sharp image. Experimental results show that, compared with other deblurring algorithms, the proposed algorithm is notably effective at suppressing ringing artifacts.

6.
Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint in each forward stage. The model parameter estimation in each forward stage is simply the solution of the jackknife parameter estimator for a single parameter, subject to the same positivity constraint check. For each selected kernel, the associated kernel width is updated via the Gauss-Newton method with the model parameter estimate fixed. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate the efficacy of the proposed approach.

7.
The ε-support vector regression (ε-SVR) algorithm and its applications
To address the difficulty that some traditional image denoising methods have in preserving sharp image edges, a new method is proposed that builds an image denoising filter using ε-SVR. By introducing the ε-insensitive loss function, ε-support vector regression achieves robust regression with a sparse estimate, retaining all the advantages of SVMs. The paper analyzes the ε-SVR algorithm and its application to image denoising, filters images with ε-SVR and compares the results with common filters such as minimum filtering, mean filtering, and Wiener filtering, and further compares the denoising performance of various SVM kernel functions on different types of noise, including Multinomial kernels of different orders. Experimental results show that ε-SVR removes noise effectively, yielding both a higher signal-to-noise ratio and visually clearer results, while maintaining good sparsity.
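A small sketch of ε-SVR used as a denoising filter, here on a noisy 1-D signal with sharp edges (an image row or sliding patches could be treated the same way). It uses scikit-learn's SVR; the kernel, C, gamma, and epsilon values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR

# Piecewise-constant signal: the "edges" are what naive smoothing filters blur.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 400)
clean = np.sign(np.sin(2 * np.pi * 3 * t))
noisy = clean + 0.3 * rng.standard_normal(t.size)

# epsilon sets the width of the insensitive tube (and hence the sparsity of the
# support-vector expansion); C trades robustness against fidelity.
svr = SVR(kernel="rbf", C=10.0, gamma=50.0, epsilon=0.1)
svr.fit(t.reshape(-1, 1), noisy)
denoised = svr.predict(t.reshape(-1, 1))

print("support vectors used:", len(svr.support_), "of", t.size)
print("noisy MSE:   ", float(np.mean((noisy - clean) ** 2)))
print("denoised MSE:", float(np.mean((denoised - clean) ** 2)))
```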

8.
With the rapid development of Internet information technology and the fast growth of online data, finding the information users are interested in within massive data and delivering personalized recommendations has become an important research direction. Collaborative filtering, a classic method in recommender systems, is widely used in different scenarios, but it still suffers from data sparsity and, when computing similarity, can only use items co-rated by users rather than all available data, which severely limits recommendation accuracy. To address these problems, a collaborative filtering recommendation algorithm that fuses contextual information with kernel density estimation is proposed. The algorithm processes the contextual information of users and items together with existing rating data, builds interest models for users and items via kernel density estimation, and fully exploits the interest distributions of users and items to obtain more accurate user and item interest similarities and reduce rating prediction error. Experiments on public datasets show that, compared with traditional collaborative filtering, the algorithm effectively improves recommendation accuracy.
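A sketch of the kernel-density-based interest modeling idea: estimate each user's interest density over a one-dimensional context feature and compare users by the overlap of their densities, so similarity no longer depends only on co-rated items. The context feature (hour of day), the cosine-style overlap measure, and all names are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import gaussian_kde

def interest_density(values):
    """Kernel density estimate of a user's interest over a 1-D context feature."""
    return gaussian_kde(values)

def interest_similarity(kde_a, kde_b, grid):
    """Similarity of two users as the normalized inner product of their densities."""
    pa, pb = kde_a(grid), kde_b(grid)
    return float(pa @ pb / (np.linalg.norm(pa) * np.linalg.norm(pb)))

# Toy context feature: hour of day at which each user rated items.
rng = np.random.default_rng(4)
user1 = rng.normal(9, 1.5, 60)    # mostly morning activity
user2 = rng.normal(10, 2.0, 45)   # also morning
user3 = rng.normal(21, 1.5, 50)   # evening
grid = np.linspace(0, 24, 200)
k1, k2, k3 = map(interest_density, (user1, user2, user3))
print("sim(user1, user2):", round(interest_similarity(k1, k2, grid), 3))
print("sim(user1, user3):", round(interest_similarity(k1, k3, grid), 3))
```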

9.
A novel sparse kernel density estimation method is proposed based on sparse Bayesian learning with random iterative dictionary preprocessing. Using the empirical cumulative distribution function as the response vector, the sparse weights of the density estimate are obtained by sparse Bayesian learning. The proposed iterative dictionary learning algorithm is used to reduce the number of kernel computations, an essential step of the sparse Bayesian learning. Building on this sparse kernel density estimate, a normalized mutual information feature selection method based on quadratic Renyi entropy is proposed. Simulations on three examples demonstrate that the proposed method is comparable to typical Parzen kernel density estimates, and, compared with other state-of-the-art sparse kernel density estimators, it also performs very well in terms of the number of kernels required. In the final example, the Friedman data and the Housing data are used to illustrate the proposed feature selection method.
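The quadratic Renyi entropy mentioned above has a closed form under a Gaussian Parzen estimate: H2 = -log((1/N²) Σᵢⱼ k_{σ√2}(xᵢ - xⱼ)), because the integral of a product of two Gaussians is again a Gaussian of doubled variance. The sketch below assumes equal kernel weights rather than the paper's sparse Bayesian weights; the data and width are illustrative.

```python
import numpy as np

def quadratic_renyi_entropy(X, sigma):
    """H2 = -log ∫ p(x)^2 dx for a Gaussian Parzen estimate with width sigma.

    The Gaussian product integral gives (1/N^2) * sum_ij N(x_i - x_j; 0, 2*sigma^2*I).
    """
    n, d = X.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    gauss = np.exp(-d2 / (4.0 * sigma ** 2)) / ((4.0 * np.pi * sigma ** 2) ** (d / 2))
    return -np.log(gauss.sum() / n ** 2)

rng = np.random.default_rng(5)
tight = rng.normal(0, 0.5, size=(300, 2))    # concentrated sample -> lower entropy
spread = rng.normal(0, 2.0, size=(300, 2))   # dispersed sample -> higher entropy
print(quadratic_renyi_entropy(tight, 0.3), "<", quadratic_renyi_entropy(spread, 0.3))
```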

10.
To address the problem that the kernel matrix constructed by the incremental least-squares twin support vector regression machine cannot approximate the original kernel matrix well, an incremental reduced least-squares twin support vector regression (IRLSTSVR) algorithm is proposed. The algorithm first uses a reduction step to assess the correlation among the column vectors of the kernel matrix and selects the samples that form these columns as support vectors, lowering the correlation among the columns so that the constructed kernel matrix better approximates the original one and the solution remains sparse. The inverse matrix is then updated incrementally and efficiently via the block matrix inversion lemma, further shortening training time. Finally, the feasibility and effectiveness of the algorithm are verified on benchmark datasets. Experimental results show that, compared with existing representative algorithms, IRLSTSVR obtains sparse solutions and generalization performance closer to that of the offline algorithm.
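The block matrix inversion lemma alluded to above lets one grow an inverse one sample at a time: given A⁻¹ and a new row/column [b; c], the inverse of the augmented matrix follows from the Schur complement without re-inverting. The sketch below shows only this incremental-inverse step, not the full IRLSTSVR algorithm; the matrix is a made-up kernel-like example.

```python
import numpy as np

def augment_inverse(A_inv, b, c):
    """Given A_inv = A^{-1}, return the inverse of [[A, b], [b^T, c]] via the block
    matrix inversion lemma, with Schur complement s = c - b^T A^{-1} b (a scalar here)."""
    Ab = A_inv @ b
    s = c - b @ Ab
    top_left = A_inv + np.outer(Ab, Ab) / s
    top_right = -Ab / s
    return np.block([[top_left, top_right[:, None]],
                     [top_right[None, :], np.array([[1.0 / s]])]])

rng = np.random.default_rng(6)
M = rng.standard_normal((6, 6))
K = M @ M.T + 6 * np.eye(6)                 # symmetric positive definite "kernel" matrix
A, b, c = K[:5, :5], K[:5, 5], K[5, 5]
inc = augment_inverse(np.linalg.inv(A), b, c)
print(np.allclose(inc, np.linalg.inv(K)))   # True: incremental update matches direct inversion
```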

11.
Feature extraction is at the technical core of pattern recognition, and feature fusion is a powerful complement to feature extraction methods that plays an important role in improving recognition performance. Building on sparse representation, this paper extends sparse representation to a high-dimensional space, uses kernel methods to perform the sparse representation in that space and to compute the kernel sparse representation coefficients, and studies the kernel sparsity preserving projection (KSPP) algorithm. KSPP is then introduced into canonical correlation analysis (CCA), yielding kernel sparsity preserving canonical correlation analysis (K-SPCCA). Experiments on a multi-feature handwritten digit database and a face image database confirm the reliability and effectiveness of the proposed methods.

12.
Sparsity-driven classification technologies have attracted much attention in recent years, due to their capability of providing more compressive representations and clearer interpretation. Two of the most popular classification approaches are support vector machines (SVMs) and kernel logistic regression (KLR), each having its own advantages. The sparsification of SVM has been well studied, and many sparse versions of the 2-norm SVM, such as the 1-norm SVM (1-SVM), have been developed. The sparsification of KLR, however, has been less studied. The existing sparsification of KLR is mainly based on L1-norm and L2-norm penalties, which leads to sparse versions whose solutions are not as sparse as they should be. A recent study on L1/2 regularization theory in compressive sensing shows that L1/2 sparse modeling can yield solutions sparser than those of the L1 and L2 norms and, furthermore, that the model can be efficiently solved by a simple iterative thresholding procedure. The objective function dealt with in L1/2 regularization theory is, however, of square form, whose gradient is linear in its variables (the so-called linear gradient function). In this paper, by extending the linear gradient function of the L1/2 regularization framework to the logistic function, we propose a novel sparse version of KLR, the L1/2 quasi-norm kernel logistic regression (L1/2-KLR). This version integrates the advantages of KLR and L1/2 regularization and defines an efficient implementation scheme for sparse KLR. We suggest a fast iterative thresholding algorithm for L1/2-KLR and prove its convergence. We provide a series of simulations demonstrating that L1/2-KLR can often obtain sparser solutions than the existing sparsity-driven versions of KLR at the same or better accuracy level. The conclusion also holds in comparison with sparse SVMs (1-SVM and 2-SVM). We show an exclusive advantage of L1/2-KLR: the regularization parameter in the algorithm can be set adaptively whenever the sparsity (correspondingly, the number of support vectors) is given, which suggests a methodology for comparing the sparsity-promotion capability of different sparsity-driven classifiers. As an illustration of the benefits of L1/2-KLR, we give two applications of L1/2-KLR in semi-supervised learning, showing that it can be successfully applied to classification tasks in which only a few data points are labeled.
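The iterative thresholding procedure referenced above relies on a componentwise half-thresholding operator. The sketch below states the closed form commonly cited in the L1/2 regularization literature and applies it inside a plain proximal-gradient loop for a linear model (not the kernel logistic case of the paper); the threshold constant, step size, and data are assumptions for illustration.

```python
import numpy as np

def half_threshold(t, lam):
    """Half-thresholding operator used by L1/2 iterative thresholding
    (closed form as commonly stated in the L1/2 regularization literature)."""
    out = np.zeros_like(t)
    thresh = (54 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    big = np.abs(t) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(t[big]) / 3.0) ** -1.5)
    out[big] = (2.0 / 3.0) * t[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def l_half_regression(A, y, lam, mu=None, n_iter=300):
    """Proximal-gradient sketch for min ||y - Ax||^2 + lam * ||x||_{1/2}^{1/2}."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = half_threshold(x + mu * A.T @ (y - A @ x), lam * mu)
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 6, replace=False)] = 3 * rng.standard_normal(6)
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = l_half_regression(A, y, lam=0.5)
print("recovered nonzeros:", int((np.abs(x_hat) > 1e-3).sum()))
```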

13.
To address the difficulty that some traditional image denoising methods have in preserving sharp image edges, a new method is proposed that builds an image denoising filter using ε-SVR. By introducing the ε-insensitive loss function, ε-support vector regression achieves robust regression with a sparse estimate, retaining all the advantages of SVMs. The paper analyzes the ε-SVR algorithm and its application to image denoising, filters images with ε-SVR and compares the results with common filters such as minimum filtering, mean filtering, and Wiener filtering, and further compares the denoising performance of various SVM kernel functions on different types of noise, including Multinomial kernels of different orders. Experimental results show that ε-SVR removes noise effectively, yielding both a higher signal-to-noise ratio and visually clearer results, while maintaining good sparsity.

14.
田金鹏  刘小娟  郑国莘 《自动化学报》2016,42(10):1512-1519
For the reconstruction of signals of unknown sparsity in compressive sensing (CS), this paper proposes a variable-step-size sparsity-adaptive subspace pursuit algorithm. First, a matching test is used to determine a fixed step size; this fixed step size is then combined with a variable-step-size strategy, and the signal sparsity is determined from the change in reconstruction residual under support sets of different sizes. The algorithm selects the corresponding support atoms by subspace pursuit and accurately reconstructs the original signal. Experimental results show that, compared with similar algorithms, the proposed algorithm reconstructs the original signal more accurately and, when the signal sparsity is high, requires less computation.
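A sketch of a plain subspace pursuit iteration for a fixed assumed sparsity K, i.e. the core routine that a sparsity-adaptive, variable-step-size wrapper like the one described would call while adjusting K. The stopping rule, dictionary, and signal are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def subspace_pursuit(A, y, K, n_iter=30):
    """Basic subspace pursuit: maintain a K-atom support; each round merge the K atoms
    best matching the residual, re-fit on the merged set, and keep the K largest coefficients."""
    m, n = A.shape
    support = np.argsort(-np.abs(A.T @ y))[:K]
    for _ in range(n_iter):
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
        if np.linalg.norm(r) < 1e-10:
            break
        candidates = np.union1d(support, np.argsort(-np.abs(A.T @ r))[:K])
        x_c, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
        support = candidates[np.argsort(-np.abs(x_c))[:K]]
    x = np.zeros(n)
    x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x

rng = np.random.default_rng(8)
A = rng.standard_normal((50, 128)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = subspace_pursuit(A, y, K=5)
print("reconstruction error:", float(np.linalg.norm(x_hat - x_true)))
```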

15.
To improve the detection accuracy and speed for signals of unknown sparsity in a jointly sparse spectrum environment, a jointly sparse variable-step-size matching pursuit sensing algorithm is proposed. Exploiting the correlation within and between signals, the algorithm uses an atom matching test to obtain a rough estimate of the sparsity and approaches the globally optimal support set with a variable step size: a large step size is used in the initial stage for fast matching and faster convergence, and the step size is then reduced according to the recovery progress to achieve precise approximation. Experimental results show that the improved algorithm outperforms the SOMP and SSAMP algorithms in both detection probability and convergence speed.

16.
A recent development in penalized probit modelling using a hierarchical Bayesian approach has led to a sparse binomial (two-class) probit classifier that can be trained via an EM algorithm. A key advantage of the formulation is that no tuning of hyperparameters relating to the penalty is needed, thus simplifying the model selection process. The resulting model demonstrates excellent classification performance and a high degree of sparsity when used as a kernel machine. It is, however, restricted to the binary classification problem and can only be used in the multinomial situation via a one-against-all or one-against-many strategy. To overcome this, we apply the idea to the multinomial probit model. This leads to a direct multi-classification approach and is shown to give a sparse solution with accuracy and sparsity comparable with the current state-of-the-art. Comparative numerical benchmark examples are used to demonstrate the method.

17.
付卫红  梁漠杨  田德艳  农斌 《计算机仿真》2020,37(2):174-177,311
In compressive sensing, existing L1-norm sparse reconstruction algorithms can correctly recover the source signal only when the sparsity is at most half the length of the measurement vector. To address this, an L1-norm sparse reconstruction algorithm based on a partial support set is proposed. The improved algorithm uses linear programming to minimize the L1 norm of the "tail" support of the source signal and can correctly reconstruct the source signal even when the sparsity exceeds half the measurement length. Simulation results show that, under various SNR and sparsity conditions, the proposed algorithm achieves higher reconstruction accuracy than existing L1-norm optimization and orthogonal matching pursuit reconstruction algorithms.
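A sketch of casting "minimize the L1 norm of the coefficients outside a known partial support, subject to Ax = y" as a linear program, using the standard split x = u - v with u, v ≥ 0 and scipy.optimize.linprog. The known-support indices and problem sizes are assumptions for illustration; this is not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def l1_tail_recovery(A, y, known_support):
    """Recover x from y = A x by minimizing the L1 norm of the entries of x
    *outside* known_support (modified basis pursuit), via linear programming."""
    m, n = A.shape
    cost = np.ones(n)
    cost[list(known_support)] = 0.0        # entries in the known "head" support are free
    c = np.concatenate([cost, cost])       # objective on u and v, where x = u - v
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(9)
m, n, k = 40, 100, 25                      # sparsity k greater than m/2
A = rng.standard_normal((m, n))
idx = rng.choice(n, k, replace=False)
x_true = np.zeros(n); x_true[idx] = rng.standard_normal(k)
y = A @ x_true
known = idx[:15]                           # assume part of the support is known a priori
x_hat = l1_tail_recovery(A, y, known)
print("recovery error:", float(np.linalg.norm(x_hat - x_true)))
```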

18.
Approximate model selection for support vector machines based on the regularization path
Model selection is a fundamental problem for support vector machines. Based on approximate kernel matrix computation and the regularization path, a new SVM model selection method is proposed. First, a preliminary theory of approximate model selection is developed, including the kernel matrix approximation algorithm KMA-α, a theorem bounding the approximation error of KMA-α, and, from it, a model approximation error bound for SVMs. Then, the approximate model selection algorithm AMSRP is proposed. AMSRP uses the low-rank kernel matrix approximation computed by KMA-α to speed up SVM training and uses a regularization path algorithm to make tuning of the penalty parameter C more efficient. Finally, comparative experiments on standard datasets verify the feasibility and computational efficiency of AMSRP. The results show that AMSRP significantly improves the efficiency of SVM model selection while maintaining test-set accuracy. Theoretical analysis and experimental results indicate that AMSRP is a sound and efficient model selection algorithm.
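KMA-α itself is not reproduced here; the sketch below shows the generic Nyström-style low-rank kernel approximation idea that such approximate model selection methods build on, approximating K by C W⁺ Cᵀ from a sampled column subset. The sampling scheme, rank values, and kernel parameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, m, gamma=0.5, seed=0):
    """Low-rank kernel approximation K ≈ C W^+ C^T from m sampled columns."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)        # n x m block of sampled columns
    W = C[idx, :]                           # m x m intersection block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(10)
X = rng.standard_normal((400, 5))
K = rbf_kernel(X, X)
for m in (20, 80, 200):
    K_hat = nystrom_approx(X, m)
    rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
    print(f"m={m:3d}  relative approximation error: {rel_err:.3f}")
```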

19.
Probability density estimation from optimally condensed data samples
The requirement to reduce the computational cost of evaluating a point probability density estimate when employing a Parzen window estimator is a well-known problem. This paper presents the Reduced Set Density Estimator that provides a kernel-based density estimator which employs a small percentage of the available data sample and is optimal in the L2 sense. While only requiring O(N^2) optimization routines to estimate the required kernel weighting coefficients, the proposed method provides similar levels of performance accuracy and sparseness of representation as Support Vector Machine density estimation, which requires O(N^3) optimization routines, and which has previously been shown to consistently outperform Gaussian Mixture Models. It is also demonstrated that the proposed density estimator consistently provides superior density estimates for similar levels of data reduction to that provided by the recently proposed Density-Based Multiscale Data Condensation algorithm and, in addition, has comparable computational scaling. The additional advantage of the proposed method is that no extra free parameters are introduced such as regularization, bin width, or condensation ratios, making this method a very simple and straightforward approach to providing a reduced set density estimator with comparable accuracy to that of the full sample Parzen density estimator.

20.
A Bayesian approach to joint feature selection and classifier design
This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation-maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.
