Similar Documents
20 similar documents retrieved (search time: 0 ms)
1.
李海霞  张擎 《计算机应用》2015,35(10):2789-2792
To address the usability and efficiency problems of the parallel fusion mode in multimodal biometric recognition systems, a framework combining parallel and serialized fusion is proposed on the basis of existing serialized multimodal biometric systems. The framework first performs recognition with a weighted-sum score-level fusion algorithm over different combinations of three biometric traits: gait, face, and fingerprint. It then applies online semi-supervised learning to improve the recognition performance of the weaker traits, further enhancing the system's usability and recognition reliability. Theoretical analysis and experimental results show that under this framework, as usage time accumulates, the system improves the performance of weak classifiers through online learning, and both user convenience and recognition accuracy are further improved.

2.
A multimodal biometric system that alleviates the limitations of the unimodal biometric systems by fusing the information from the respective biometric sources is developed. A general approach is proposed for the fusion at score level by combining the scores from multiple biometrics using triangular norms (t-norms) due to Hamacher, Yager, Frank, Schweizer and Sklar, and Einstein product. This study aims at tapping the potential of t-norms for multimodal biometrics. The proposed approach renders very good performance as it is quite computationally fast and outperforms the score level fusion using the combination approach (min, mean, and sum) and classification approaches like SVM, logistic linear regression, MLP, etc. The experimental evaluation on three databases confirms the effectiveness of score level fusion using t-norms.
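The t-norm fusion idea above can be sketched directly: a t-norm combines two normalized match scores into one fused score. The following is a minimal illustration, assuming scores already lie in [0, 1]; the parameter values and example scores are invented for demonstration and are not from the paper.

```python
# Score-level fusion with t-norms (sketch). Scores are assumed in [0, 1].

def hamacher(a, b):
    # Hamacher product t-norm; defined as 0 when a = b = 0.
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def einstein(a, b):
    # Einstein product t-norm.
    return (a * b) / (2 - (a + b - a * b))

def yager(a, b, p=2.0):
    # Yager t-norm with parameter p > 0 (p = 2 is illustrative).
    return max(0.0, 1 - ((1 - a) ** p + (1 - b) ** p) ** (1 / p))

def fuse(scores, tnorm):
    # Fold a t-norm over normalized scores from several modalities.
    fused = scores[0]
    for s in scores[1:]:
        fused = tnorm(fused, s)
    return fused

genuine = [0.9, 0.8]   # e.g. two modalities agreeing on a genuine user
impostor = [0.9, 0.1]  # high score in only one modality

print(fuse(genuine, hamacher))
print(fuse(impostor, einstein))
```

Because every t-norm is bounded above by the minimum of its arguments, a single low modality score pulls the fused score down, which is what makes t-norm fusion conservative against impostors who fool only one matcher.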

3.
This paper presents a new approach for the adaptive management of multimodal biometrics to meet a wide range of application-dependent adaptive security requirements. In this work, ant colony optimization (ACO) is employed for the selection of key parameters, such as the decision threshold and fusion rule, to ensure optimal performance in meeting varying security requirements during the deployment of multimodal biometric systems. Particle swarm optimization (PSO) has been widely utilized for the optimal selection of these parameters in earlier attempts in the literature [Veeramachaneni et al., 2005] and [Kumar et al., 2010]. However, in PSO these parameters are computed in the continuous domain, while they are better represented as discrete variables [Kumar et al., 2010]. This paper therefore proposes the use of ACO, in which discrete biometric verification parameters are computed to ensure optimal performance from the multimodal biometric system. The proposed ACO-based framework is also extended to a pattern classification approach in which a fuzzy binary decision tree (FBDT) is utilized for two-class biometric verification. The experimental results are presented on true multimodal systems from various publicly available databases: the IITD databases of palmprint and iris, the XM2VTS database of speech and faces, and the NIST BSSR1 databases of face and fingerprint images. Our experimental results suggest that (i) the ACO-based approach achieves significantly smaller error rates than the widely employed PSO for automated selection of biometric fusion rules/parameters, (ii) score-level fusion yields better performance with a lower error rate than decision-level fusion, and (iii) the FBDT-based classification approach delivers considerably superior performance for adaptive biometric verification.
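To make the "discrete parameter selection" idea concrete, the sketch below exhaustively searches a discrete grid of decision thresholds and binary fusion rules (AND/OR) for the combination minimizing a security-weighted error, standing in for the ACO search described in the paper. The score lists and the cost weight `c_far` are invented for illustration.

```python
# Brute-force stand-in for ACO over discrete thresholds and fusion rules.
import itertools

genuine = [(0.8, 0.7), (0.9, 0.6), (0.7, 0.8)]   # (modality1, modality2) scores
impostor = [(0.4, 0.8), (0.3, 0.2), (0.6, 0.5)]

RULES = {"AND": lambda d1, d2: d1 and d2, "OR": lambda d1, d2: d1 or d2}

def error(rule, t1, t2, c_far=2.0):
    # Weighted sum of false-accept rate (penalized by c_far, the security
    # weight) and false-reject rate for a given rule/threshold setting.
    fr = sum(not RULES[rule](s1 >= t1, s2 >= t2) for s1, s2 in genuine)
    fa = sum(RULES[rule](s1 >= t1, s2 >= t2) for s1, s2 in impostor)
    return c_far * fa / len(impostor) + fr / len(genuine)

grid = [i / 10 for i in range(11)]  # discrete candidate thresholds
best = min((error(r, t1, t2), r, t1, t2)
           for r, t1, t2 in itertools.product(RULES, grid, grid))
print(best)  # (weighted error, rule, threshold1, threshold2)
```

ACO replaces this exhaustive scan with pheromone-guided sampling of the same discrete space, which matters once the grid and the number of modalities grow.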

4.
The problem of information fusion from multiple data-sets acquired by multimodal sensors has drawn significant research attention over the years. In this paper, we focus on a particular problem setting consisting of a physical phenomenon or a system of interest observed by multiple sensors. We assume that all sensors measure some aspects of the system of interest with additional sensor-specific and irrelevant components. Our goal is to recover the variables relevant to the observed system and to filter out the nuisance effects of the sensor-specific variables. We propose an approach based on manifold learning, which is particularly suitable for problems with multiple modalities, since it aims to capture the intrinsic structure of the data and relies on minimal prior model knowledge. Specifically, we propose a nonlinear filtering scheme, which extracts the hidden sources of variability captured by two or more sensors, that are independent of the sensor-specific components. In addition to presenting a theoretical analysis, we demonstrate our technique on real measured data for the purpose of sleep stage assessment based on multiple, multimodal sensor measurements. We show that without prior knowledge on the different modalities and on the measured system, our method gives rise to a data-driven representation that is well correlated with the underlying sleep process and is robust to noise and sensor-specific effects.

5.
To secure the transmission of biometric templates in multimodal biometric authentication, this paper proposes hiding a face image inside a fingerprint image for multimodal authentication, improving both the security and the accuracy of biometric recognition. Fingerprint image normalization and core-point detection are used to obtain a geometric-distortion-invariant embedding region, and a multiple-embedding algorithm based on singular value decomposition (SVD) in this invariant domain is proposed to embed the corresponding face image. The detector blindly extracts the face image through a correlation-optimization algorithm and combines it with the carrier fingerprint image for multimodal authentication. Experimental results show that the algorithm resists geometric distortions such as rotation, scaling, and translation, as well as attacks such as compression, filtering, and noise; it improves the transmission security of biometric templates, and fingerprint-face bimodal authentication achieves a higher correct recognition rate than unimodal authentication.
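The core SVD-embedding step can be illustrated in a few lines: watermark data is added to the singular values of a host block and recovered from the difference. This is a toy sketch of the general idea only; the paper's method additionally works in a geometric-distortion-invariant fingerprint region and extracts blindly, and the extraction below is simplified to the non-blind case (original U, S, V available). All data is randomly generated.

```python
# SVD-based embedding sketch: embed in singular values, recover by difference.
import numpy as np

rng = np.random.default_rng(0)
host = rng.random((8, 8))          # stand-in for a fingerprint block
mark = rng.random(8)               # stand-in for face-image data to hide
alpha = 0.05                       # embedding strength (illustrative)

U, S, Vt = np.linalg.svd(host)
S_marked = S + alpha * mark        # embed into the singular values
watermarked = (U * S_marked) @ Vt  # reconstruct the marked block

# Simplified (non-blind) extraction: project back with the original U and V
# to read off the marked singular values, then subtract and rescale.
S_rec = np.diag(U.T @ watermarked @ Vt.T)
recovered = (S_rec - S) / alpha
print(np.allclose(recovered, mark, atol=1e-8))
```

The robustness claims in the abstract come from the stability of singular values under small perturbations: mild filtering or compression of `watermarked` changes `S_rec` only slightly, so the mark survives.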

6.
7.
Multimodal biometrics technology consolidates information obtained from multiple sources at sensor level, feature level, match score level, and decision level. It is used to increase robustness and provide broader population coverage for inclusion. Due to the inherent challenges involved with feature-level fusion, combining multiple evidences is attempted at score, rank, or decision level where only a minimal amount of information is preserved. In this paper, we propose the Group Sparse Representation based Classifier (GSRC) which removes the requirement for a separate feature-level fusion mechanism and integrates multi-feature representation seamlessly into classification. The performance of the proposed algorithm is evaluated on two multimodal biometric datasets. Experimental results indicate that the proposed classifier succeeds in efficiently utilizing a multi-feature representation of input data to perform accurate biometric recognition.

8.
Current approaches to personal identity authentication using a single biometric technology are limited, principally because no single biometric is generally considered both sufficiently accurate and user-acceptable for universal application. Multimodal biometrics can provide a more adaptable solution to the security and convenience requirements of many applications. However, such an approach can also lead to additional complexity in the design and management of authentication systems. Additionally, complex hierarchies of security levels and interacting user/provider requirements demand that authentication systems are adaptive and flexible in configuration. In this paper we consider the integration of multimodal biometrics using intelligent agents to address issues of complexity management. The work reported here is part of a major project designated IAMBIC (Intelligent Agents for Multimodal Biometric Identification and Control), aimed at exploring the application of the intelligent agent metaphor to the field of biometric authentication. The paper provides an introduction to a first-level architecture for such a system, and demonstrates how this architecture can provide a framework for the effective control and management of access to data and systems where issues of privacy, confidentiality and trust are of primary concern. Novel approaches to software agent design and agent implementation strategies required for this architecture are also highlighted. The paper further shows how such a structure can define a fundamental paradigm to support the realisation of universal access in situations where data integrity and confidentiality must be robustly and reliably protected.

9.
Many different direct volume rendering methods have been developed to visualize 3D scalar fields on uniform rectilinear grids. However, little work has been done on rendering simultaneously various properties of the same 3D region measured with different registration devices or at different instants of time. The demand for this type of visualization is rapidly increasing in scientific applications such as medicine in which the visual integration of multiple modalities allows a better comprehension of the anatomy and a perception of its relationships with activity. This paper presents different strategies of direct multimodal volume rendering (DMVR). It is restricted to voxel models with a known 3D rigid alignment transformation. The paper evaluates at which steps of the rendering pipeline the data fusion must be realized in order to accomplish the desired visual integration and to provide fast re‐renders when some fusion parameters are modified. In addition, it analyses how existing monomodal visualization algorithms can be extended to multiple datasets and it compares their efficiency and their computational cost. Copyright © 2004 John Wiley & Sons, Ltd.

10.
Neural Computing and Applications - Cost and physical constraints in applied engineering problems demand the best achievable results, which global optimization algorithms cannot always realize. For...

11.
Drug synthesis reactions, especially asymmetric reactions, are a vital component of modern medicinal chemistry. Chemists have invested enormous effort and resources in identifying chemical reaction patterns to achieve efficient synthesis and asymmetric catalysis. Recent research applying quantum-mechanical computation and machine learning in this field has demonstrated great potential for learning from existing drug-synthesis reaction data and performing accurate virtual screening. However, existing methods are limited to single-modality data sources and, constrained by scarce data, can only use basic machine learning methods, which hinders their general application in broader scenarios. Therefore, two screening models for drug synthesis reactions that fuse multimodal data are proposed for the virtual screening of reaction yield and enantioselectivity, together with a 3D conformation descriptor weighted according to the Boltzmann distribution, which combines molecular stereochemical information with quantum-mechanical properties. The two multimodal data fusion models were trained and validated on two representative organic synthesis reactions (the C-N coupling reaction and the N,S-acetal reaction). The results show that the former improves R2 over the baseline methods by more than 1 percentage point on most data splits, while the latter reduces the mean absolute error (MAE) by more than 0.5 percentage points on most data splits. Models based on multimodal data fusion thus deliver good performance across different organic reaction screening tasks.
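The Boltzmann-weighted conformer descriptor mentioned above has a simple form: per-conformer 3D descriptors are averaged with weights proportional to exp(-E/RT) over the conformer energies. The sketch below illustrates this; the energies and descriptor vectors are made up, and the descriptor dimensionality is arbitrary.

```python
# Boltzmann-weighted averaging of per-conformer descriptors (sketch).
import math

def boltzmann_weights(energies_kcal, T=298.15):
    R = 1.987204e-3          # gas constant in kcal/(mol*K)
    e0 = min(energies_kcal)  # shift by the minimum for numerical stability
    w = [math.exp(-(e - e0) / (R * T)) for e in energies_kcal]
    z = sum(w)               # partition function over the conformer set
    return [x / z for x in w]

def weighted_descriptor(energies, descriptors):
    # Weighted sum of descriptor vectors, one vector per conformer.
    w = boltzmann_weights(energies)
    dim = len(descriptors[0])
    return [sum(wi * d[k] for wi, d in zip(w, descriptors)) for k in range(dim)]

energies = [0.0, 0.5, 2.0]                  # relative energies in kcal/mol
descs = [[1.0, 0.2], [0.8, 0.4], [0.1, 0.9]]  # hypothetical 2-D descriptors
print(weighted_descriptor(energies, descs))
```

Low-energy conformers dominate the average, so the resulting descriptor reflects the thermally accessible geometry of the molecule rather than any single conformation.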

12.
An effective algorithm for multimodal function optimization   (cited 3 times: 0 self-citations, 3 by others)
To address the problem of selecting parameters such as the niche radius in niche particle swarm optimization, a novel niching method is proposed that requires no niche radius or any other parameter. By monitoring changes in each particle's tangent-function value, the algorithm determines whether particles belong to the same peak, makes each particle follow the best particle on its own peak, and thereby locates the extremum of every peak. The algorithm is simple to implement; it not only removes the drawback of parameter selection in niching methods but also overcomes the limitation that standard particle swarm optimization finds only a single solution. Simulation experiments on multimodal functions verify that the algorithm accurately locates all peaks.

13.
This paper proposes a novel multimodal biometric images hiding approach based on correlation analysis, which is used to protect the security and integrity of transmitted multimodal biometric images for network-based identification. Compared with existing methods, the correlation between the biometric images and the cover image is first analyzed by partial least squares (PLS) and particle swarm optimization (PSO), aiming to make use of the abundant information of cover image to represent the biometric images. Representing the biometric images using the corresponding content of cover image results in the generation of the residual images with much less energy. Then, considering the human visual system (HVS) model, the residual images as the secret images are embedded into the cover image using middle-significant-bit (MSB) method. Extensive experimental results demonstrate that the proposed approach not only provides good imperceptibility but also resists some common attacks and assures the effectiveness of network-based multimodal biometrics identification.

14.
We tackle the crucial challenge of fusing different modalities of features for multimodal sentiment analysis. Mainly based on neural networks, existing approaches largely model multimodal interactions in an implicit and hard-to-understand manner. We address this limitation with inspirations from quantum theory, which contains principled methods for modeling complicated interactions and correlations. In our quantum-inspired framework, the word interaction within a single modality and the interaction across modalities are formulated with superposition and entanglement respectively at different stages. The complex-valued neural network implementation of the framework achieves comparable results to state-of-the-art systems on two benchmarking video sentiment analysis datasets. In the meantime, we produce the unimodal and bimodal sentiment directly from the model to interpret the entangled decision.

15.
An evaluation of multimodal 2D+3D face biometrics   (cited 5 times: 0 self-citations, 5 by others)
We report on the largest experimental study to date in multimodal 2D+3D face recognition, involving 198 persons in the gallery and either 198 or 670 time-lapse probe images. PCA-based methods are used separately for each modality and match scores in the separate face spaces are combined for multimodal recognition. Major conclusions are: 1) 2D and 3D have similar recognition performance when considered individually, 2) combining 2D and 3D results using a simple weighting scheme outperforms either 2D or 3D alone, 3) combining results from two or more 2D images using a similar weighting scheme also outperforms a single 2D image, and 4) combined 2D+3D outperforms the multi-image 2D result. This is the first (so far, only) work to present such an experimental control to substantiate multimodal performance improvement.
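The "simple weighting scheme" in conclusion 2 amounts to a weighted sum of per-modality match distances followed by a nearest-gallery decision. The sketch below illustrates this with invented identities, distances, and weight; it assumes the two matchers' distances are already normalized to a comparable range, as they would be after matching in the separate PCA face spaces.

```python
# Weighted score-level combination of 2D and 3D face matchers (sketch).

def fuse_and_rank(dist2d, dist3d, w2d=0.5):
    # dist2d/dist3d: {identity: distance} from the 2D and 3D matchers.
    # The gallery identity with the smallest fused distance wins.
    fused = {gid: w2d * dist2d[gid] + (1 - w2d) * dist3d[gid]
             for gid in dist2d}
    return min(fused, key=fused.get)

dist2d = {"alice": 0.20, "bob": 0.35, "carol": 0.90}
dist3d = {"alice": 0.25, "bob": 0.05, "carol": 0.80}

print(fuse_and_rank(dist2d, dist3d))        # equal weights
print(fuse_and_rank(dist2d, dist3d, 0.9))   # trust the 2D matcher more
```

Note how the weight changes the decision: with equal weights the strong 3D match for "bob" dominates, while a 2D-heavy weight favors "alice", which is exactly why the paper tunes the weighting rather than fixing it.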

16.
Combining the advantages of density-estimation-based and normalization-based fusion, a two-stage multibiometric fusion recognition method based on a Gaussian mixture model (GMM) and weighted sums (WSUM) is proposed at the match-score level. After modeling the match scores with a GMM, the Neyman-Pearson criterion is adopted as the first-stage fusion strategy; the second stage uses weighted-sum score normalization, which largely resolves the poor fusion performance of score-normalization methods when the recognition rates of the unimodal algorithms differ greatly. Experiments on a face-fingerprint multimodal database built from the ORL and AR face databases and FVC2004 show that the method effectively improves recognition performance.
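A simplified sketch of the two-stage idea: stage 1 applies a likelihood-ratio (Neyman-Pearson) test on density models of genuine and impostor scores, and only ambiguous cases fall through to a stage-2 weighted sum. Single Gaussians stand in here for the paper's GMMs, and all parameters, weights, and thresholds are invented for illustration.

```python
# Two-stage score fusion: likelihood-ratio test, then weighted-sum fallback.
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Per-modality (mean, std) of genuine/impostor scores, as estimated from
# training data (values made up here).
GEN = {"face": (0.8, 0.1), "finger": (0.7, 0.15)}
IMP = {"face": (0.3, 0.1), "finger": (0.4, 0.15)}

def stage1_llr(scores):
    # Sum of per-modality log-likelihood ratios (independence assumption).
    return sum(math.log(gauss_pdf(s, *GEN[m]) / gauss_pdf(s, *IMP[m]))
               for m, s in scores.items())

def stage2_wsum(scores, weights={"face": 0.6, "finger": 0.4}):
    # Weights would reflect the unimodal matchers' relative accuracy.
    return sum(weights[m] * s for m, s in scores.items())

def decide(scores, tau_hi=2.0, tau_lo=-2.0):
    llr = stage1_llr(scores)
    if llr > tau_hi:         # confidently genuine
        return True
    if llr < tau_lo:         # confidently impostor
        return False
    return stage2_wsum(scores) > 0.5  # ambiguous: second-stage weighted sum

probe = {"face": 0.75, "finger": 0.65}
print(stage1_llr(probe), decide(probe))
```

The density-based first stage is powerful when the score models fit well, while the weighted sum is robust when they do not, which is the complementarity the abstract exploits.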

17.
In heterogeneous networks, different modalities are coexisting. For example, video sources with certain lengths usually have abundant time-varying audiovisual data. From the users’ perspective, different video segments will trigger different kinds of emotions. In order to better interact with users in heterogeneous networks and improve their user experiences, affective video content analysis to predict users’ emotions is essential. Academically, users’ emotions can be evaluated by arousal and valence values, and fear degree, which provides an approach to quantize the prediction accuracy of the reaction of the audience and users towards videos. In this paper, we propose the multimodal data fusion method for integrating the visual and audio data in order to perform the affective video content analysis. Specifically, to align the visual and audio data, the temporal attention filters are proposed to obtain the time-span features of the entire video segments. Then, by using the two-branch network structure, matched visual and audio features are integrated in the common space. At last, the fused audiovisual feature is employed for the regression and classification subtasks in order to measure the emotional responses of users. Simulation results show that the proposed method can accurately predict the subjective feelings of users towards the video contents, which provides a way to predict users’ preferences and recommend videos according to their own demand.

18.

Information on social media is multi-modal, and much of it carries sarcastic meaning. In recent years, the problem of sarcasm detection has been widely studied. Many traditional methods have been proposed in this field, but the study of deep learning methods for sarcasm detection is still insufficient. Detecting sarcasm requires jointly considering the information in the text, the changes of tone in the audio signal, and the facial expressions and body posture in the image. This paper proposes a multi-level late-fusion learning framework with residual connections, a more reasonable experimental dataset split, and two model variants based on different experimental settings. Extensive experiments on MUStARD show that our methods outperform other fusion models. In our speaker-independent experimental split, the multi-modality setting improves over the single-modality setting by 4.85%, with an 11.8% reduction in error rate. The latest code will be updated at: https://github.com/DingNing123/m_fusion


19.
Multimedia Tools and Applications - The authentication of Wireless Body Area Network (WBAN) nodes is a vital factor in its medical applications. This paper investigates methods of...

20.
Fusion of multimodal medical images increases robustness and enhances accuracy in biomedical research and clinical diagnosis, and it has attracted much attention over the past decade. In this paper, an efficient multimodal medical image fusion approach based on compressive sensing is presented to fuse computed tomography (CT) and magnetic resonance imaging (MRI) images. The significant sparse coefficients of CT and MRI images are acquired via a multi-scale discrete wavelet transform. A proposed weighted fusion rule is utilized to fuse the high-frequency coefficients of the source medical images, while the pulse coupled neural network (PCNN) fusion rule is exploited to fuse the low-frequency coefficients. A random Gaussian matrix is used for encoding and measurement. The fused image is reconstructed via the Compressive Sampling Matched Pursuit algorithm (CoSaMP). To show the efficiency of the proposed approach, several comparative experiments are conducted. The results reveal that the proposed approach achieves better fused image quality than existing state-of-the-art methods. Furthermore, the novel fusion approach has the advantages of high stability, good flexibility, and low time consumption.
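The wavelet-domain fusion step can be sketched compactly. Below, a single-level 2D Haar transform stands in for the paper's multi-scale DWT, the low-frequency band is fused by simple averaging (in place of the PCNN rule), and the high-frequency bands by max-absolute-value selection; the compressive-sensing encode/measure/CoSaMP stage is omitted. Input images are random stand-ins for registered CT and MRI slices.

```python
# Wavelet-coefficient fusion of two images with a hand-rolled 2D Haar step.
import numpy as np

def haar2(x):
    # Single-level 2D Haar analysis: average/difference over row pairs,
    # then over column pairs, yielding LL, LH, HL, HH subbands.
    a = (x[0::2] + x[1::2]) / 2
    d = (x[0::2] - x[1::2]) / 2
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # Exact inverse of haar2 (perfect reconstruction).
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(ct, mri):
    bands_ct, bands_mri = haar2(ct), haar2(mri)
    ll = (bands_ct[0] + bands_mri[0]) / 2            # average low frequencies
    highs = [np.where(np.abs(c) >= np.abs(m), c, m)  # keep the stronger detail
             for c, m in zip(bands_ct[1:], bands_mri[1:])]
    return ihaar2(ll, *highs)

rng = np.random.default_rng(1)
ct, mri = rng.random((8, 8)), rng.random((8, 8))
fused = fuse(ct, mri)
print(fused.shape)
```

The max-absolute rule for detail bands preserves the sharper edge from whichever modality shows it, which is why structural detail from CT and soft-tissue contrast from MRI can coexist in the fused result.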
