This study addresses the problem of choosing the most suitable probabilistic model selection criterion for unsupervised learning of the visual context of a dynamic scene using mixture models. A rectified Bayesian Information Criterion (BICr) and a Completed Likelihood Akaike's Information Criterion (CL-AIC) are formulated to estimate the optimal model order (complexity) for a given visual scene. Both criteria are designed to overcome the poor model selection by existing popular criteria when the data sample size varies from small to large and the true mixture distribution kernel functions differ from the assumed ones. Extensive experiments on learning visual context for dynamic scene modelling are carried out to demonstrate the effectiveness of BICr and CL-AIC compared with that of existing popular model selection criteria, including BIC, AIC and the Integrated Completed Likelihood (ICL). Our study suggests that for learning visual context using a mixture model, BICr is the most appropriate criterion given sparse data, while CL-AIC should be chosen given moderate or large data sample sizes.
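Mixture model order selection of the kind discussed here can be sketched with the standard BIC and AIC via scikit-learn's GaussianMixture (the rectified BICr and CL-AIC proposed above are modifications of these criteria and are not reproduced; the synthetic data are illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic data drawn from a well-separated 3-component mixture.
X = np.vstack([
    rng.normal(-5.0, 1.0, (200, 2)),
    rng.normal(0.0, 1.0, (200, 2)),
    rng.normal(5.0, 1.0, (200, 2)),
])

# Score candidate model orders with the standard BIC and AIC.
orders = range(1, 7)
bic = [GaussianMixture(k, random_state=0).fit(X).bic(X) for k in orders]
aic = [GaussianMixture(k, random_state=0).fit(X).aic(X) for k in orders]

best_bic = min(zip(bic, orders))[1]   # order with the lowest BIC
best_aic = min(zip(aic, orders))[1]
print("BIC selects", best_bic, "components; AIC selects", best_aic)
```

With ample, well-separated data both criteria agree; the differences the study examines appear for sparse samples or mismatched kernel functions.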
In this work, we present a unified bottom-up and top-down automatic model selection based approach for modelling complex activities of multiple objects in cluttered scenes. An activity of multiple objects is represented based on discrete scene events, and their behaviours are modelled by reasoning about the temporal and causal correlations among different events. This is significantly different from the majority of existing techniques, which are centred on object tracking followed by trajectory matching. In our approach, object-independent events are detected and classified by unsupervised clustering using Expectation-Maximisation (EM) with automatic model selection based on Schwarz's Bayesian Information Criterion (BIC). Dynamic Probabilistic Networks (DPNs) are formulated for modelling the temporal and causal correlations among discrete events for robust and holistic scene-level behaviour interpretation. In particular, we developed a Dynamically Multi-Linked Hidden Markov Model (DML-HMM) based on the discovery of salient dynamic interlinks among multiple temporal processes corresponding to multiple event classes. A DML-HMM is built using BIC-based factorisation, resulting in its topology being intrinsically determined by the underlying causality and temporal order among events. Extensive experiments are conducted on modelling activities captured in different indoor and outdoor scenes. Our experimental results demonstrate that the performance of a DML-HMM on modelling group activities in a noisy and cluttered scene is superior to those of other comparable dynamic probabilistic networks, including a Multi-Observation Hidden Markov Model (MOHMM), a Parallel Hidden Markov Model (PaHMM) and a Coupled Hidden Markov Model (CHMM). First online version published in February 2006.
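The temporal reasoning in any such HMM variant rests on the forward algorithm; a generic numpy sketch for a discrete-observation HMM follows (the DML-HMM's BIC-learned topology is not reproduced here, and the transition/emission values are illustrative):

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Forward algorithm: log-likelihood of an event sequence.
    pi: (S,) initial state probs; A: (S, S) transition matrix;
    B: (S, O) emission probs; obs: list of observation indices."""
    alpha = pi * B[:, obs[0]]            # initialise with first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate and absorb next event
    return np.log(alpha.sum())

# Illustrative two-state model over two event classes.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])
ll = forward_log_likelihood(pi, A, B, [0, 0, 1, 1])
print(f"log-likelihood: {ll:.4f}")
```

Likelihoods of this kind, penalised by model complexity, are exactly what BIC-based topology selection compares across candidate structures.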
In this paper we describe an algorithm to simultaneously recover the scene structure, the trajectories of the moving objects, and the camera motion from a monocular image sequence. The number of moving objects is automatically detected without prior motion segmentation. Assuming that the objects are moving linearly at constant speeds, we propose a unified geometrical representation of the static scene and the moving objects. This representation enables the embedding of the motion constraints into the scene structure, which leads to a factorization-based algorithm. We also discuss solutions to the degenerate cases, which can be automatically detected by the algorithm. An extension of the algorithm to weak perspective projections is presented as well. Experimental results on synthetic and real images show that the algorithm is reliable in the presence of noise.
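The factorization step can be illustrated with the classic rank-3 SVD decomposition of a centred measurement matrix for a static scene under orthography (the extension to moving objects and the degenerate-case handling described above are not shown; the cameras and points are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(3, 20))            # 20 static 3D scene points
frames = []
for _ in range(6):                      # 6 frames with random orthographic cameras
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    frames.append(R[:2] @ P)            # orthographic projection: top 2 rows of R
W = np.vstack(frames)                   # 2F x N measurement matrix

# Centre each row (removes translation), then factor by SVD.
W = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M = U[:, :3] * s[:3]                    # camera (motion) matrix
S = Vt[:3]                              # scene structure, up to an affine ambiguity

print("rank-3 residual:", s[3:].sum())
```

For a static scene the centred W has rank 3 exactly; embedding linearly moving objects raises this rank in a structured way, which is what the unified representation exploits.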
Given sets of 2D point correspondences, before and after motion, from a scene containing multiple 3D rigid bodies, where the data may contain Gaussian noise and outliers, we develop a generate-and-grow technique for initial partial matching and apply a rigidity constraint to partition the 2D point-correspondence set into subsets, each corresponding to the motion of a different rigid body, while separating out all outlier data. The motion parameters of each rigid body can then be estimated using a single-rigid-body motion estimation algorithm. Experimental results demonstrate the effectiveness of the algorithm.
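The rigidity-consistency test at the heart of such segmentation can be sketched as follows, assuming a known inlier seed set (the generate-and-grow seed matching described above is not reproduced; fit_rigid_2d is an illustrative Kabsch-style helper):

```python
import numpy as np

def fit_rigid_2d(p, q):
    """Least-squares rotation R and translation t with q ≈ R @ p + t."""
    pc, qc = p.mean(axis=1, keepdims=True), q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((q - qc) @ (p - pc).T)
    D = np.diag([1.0, np.linalg.det(U @ Vt)])   # guard against reflection
    R = U @ D @ Vt
    return R, qc - R @ pc

rng = np.random.default_rng(2)
p = rng.normal(size=(2, 30))                    # points before motion
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
q = R_true @ p + np.array([[1.0], [2.0]])       # rigidly moved points
q[:, :5] += rng.normal(5.0, 1.0, (2, 5))        # 5 outliers

R, t = fit_rigid_2d(p[:, 5:], q[:, 5:])         # fit on the known inlier seed
residual = np.linalg.norm(R @ p + t - q, axis=0)
inliers = residual < 0.1                        # rigidity-consistency test
print("detected inliers:", inliers.sum())
```

Growing the seed by admitting correspondences that pass the residual test, one rigid body at a time, yields the partition into per-body subsets with outliers left over.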
Simulation of Optical System Effects in Infrared Dynamic Scene Simulation   (total citations: 8; self-citations: 5; citations by others: 3)
In simulation tests of infrared imaging track-and-search systems or infrared early-warning systems using digital image injection, infrared dynamic scene simulation is the core technology of the simulation system, and the simulation of optical system effects is an important part of it. Because the simulated system is a non-coherent infrared imaging system, only the in-band energy imaging effect and the image blur (energy-spreading) effect of the optics need to be considered. Optical system quality is generally described by the modulation transfer function (MTF); the point spread function (PSF) can be computed from the MTF, and an optical-system-effect model based on the PSF can then be established. As long as the MTF is given accurately, the fidelity of the optical-system-effect simulation model built in this way is guaranteed, and the MTF can be obtained by ordinary optical system measurement methods.
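The MTF-to-PSF pipeline described above can be sketched with numpy FFTs, assuming an illustrative Gaussian MTF (a measured MTF would be substituted in practice):

```python
import numpy as np

N = 64
fx = np.fft.fftfreq(N)                   # spatial frequencies, cycles/pixel
FX, FY = np.meshgrid(fx, fx)
mtf = np.exp(-(FX**2 + FY**2) / (2 * 0.05**2))   # assumed Gaussian MTF

# PSF is the inverse Fourier transform of the MTF (real, symmetric case).
psf = np.fft.fftshift(np.fft.ifft2(mtf).real)
psf /= psf.sum()                         # PSF integrates to unit energy

# Apply the optics effect to a scene as multiplication in frequency space.
scene = np.zeros((N, N))
scene[32, 32] = 1.0                      # a point source
blurred = np.fft.ifft2(np.fft.fft2(scene) * mtf).real
print("peak after blur:", blurred.max())
```

Energy is conserved (the blurred image still sums to the source energy), which matches the requirement that only the energy-spreading effect of the optics be modelled.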
This paper deals with stereo camera-based estimation of sensor translation in the presence of modest sensor rotation and moving objects. It also deals with the estimation of object translation from a moving sensor. The approach is based on Gabor filters, direct passive navigation, and Kalman filters. Three difficult problems associated with the estimation of motion from an image sequence are solved. (1) The temporal correspondence problem is solved using multi-scale prediction and phase gradients. (2) Segmentation of the image measurements into groups belonging to stationary and moving objects is achieved using the Mahalanobis distance. (3) Compensation for sensor rotation is achieved by internally representing the inter-frame (short-term) rotation in a rigid-body model. These three solutions possess a circular dependency, forming a cycle of perception. A seeding process is developed to correctly initialize the cycle. An additional complication is the translation-rotation ambiguity that sometimes exists when sensor motion is estimated from an image velocity field. Temporal averaging using Kalman filters reduces the effect of motion ambiguities. Experimental results from real image sequences confirm the utility of the approach. Financial support from the Natural Sciences and Engineering Research Council (NSERC) of Canada is acknowledged.
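The temporal-averaging step can be sketched with a scalar constant-state Kalman filter smoothing noisy per-frame estimates of one translation component (the process and measurement noise values are illustrative):

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=0.04):
    """Filter a sequence of noisy scalar measurements of a nearly
    constant state. q: process noise variance; r: measurement variance."""
    x, p = z[0], 1.0              # initial state estimate and variance
    out = [x]
    for meas in z[1:]:
        p += q                    # predict (random-walk process model)
        k = p / (p + r)           # Kalman gain
        x += k * (meas - x)       # update with the new measurement
        p *= (1 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
true_tx = 0.5                                  # constant translation per frame
z = true_tx + rng.normal(0.0, 0.2, 100)        # noisy per-frame estimates
est = kalman_smooth(z)
print("final estimate:", est[-1])
```

Averaging over frames in this way damps the spurious solutions produced by the translation-rotation ambiguity, since those do not persist across the sequence.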
Dynamic Scene Rendering in Terrain Environments   (total citations: 1; self-citations: 1; citations by others: 0)
We present a pipeline for rendering dynamic scenes in terrain environments. Fast collision detection, both among moving objects and between moving objects and the terrain, is implemented by spatial discretization. To match moving objects to the terrain, a single-point matching method and a surface matching method are used. Rendering of mesh models and of image-based models in terrain environments is implemented. Finally, experimental results of rendering tanks and virtual humans in a terrain environment are given. The method can be applied to online games, battlefield visual simulation, digital cities, film production, and other fields.
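The spatial-discretization broad phase for fast collision detection can be sketched with a uniform grid hash (the cell size and object data below are illustrative):

```python
from collections import defaultdict
from itertools import combinations

CELL = 10.0  # grid cell size (illustrative)

def candidate_pairs(objects):
    """objects: name -> (x, y, radius). Hash each object into every grid
    cell its bounding square overlaps; only pairs sharing a cell are
    candidates for the exact (narrow-phase) test."""
    grid = defaultdict(list)
    for name, (x, y, r) in objects.items():
        for cx in range(int((x - r) // CELL), int((x + r) // CELL) + 1):
            for cy in range(int((y - r) // CELL), int((y + r) // CELL) + 1):
                grid[(cx, cy)].append(name)
    pairs = set()
    for names in grid.values():
        pairs.update(combinations(sorted(names), 2))
    return pairs

objs = {"tank1": (0.0, 0.0, 2.0),
        "tank2": (3.0, 0.0, 2.0),     # overlaps tank1
        "tank3": (50.0, 50.0, 2.0)}   # far from both
hits = {(a, b) for a, b in candidate_pairs(objs)
        if (objs[a][0] - objs[b][0]) ** 2 + (objs[a][1] - objs[b][1]) ** 2
        < (objs[a][2] + objs[b][2]) ** 2}
print(sorted(hits))
```

Distant objects never share a cell, so the expensive exact test runs only on near neighbours; the same grid serves object-terrain queries by hashing terrain patches.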
An Automatic Multi-Target Recognition and Tracking System   (total citations: 2; self-citations: 0; citations by others: 2)
The automatic multi-target recognition and tracking system we developed employs three-stage classification and recognition, dynamic scene analysis, and other techniques. It not only improves the capture of weak moving targets and achieves recognition of multiple moving targets, but can also predict target occlusion and re-acquire targets, giving the system a high target recognition rate. With 10 targets, the system recognizes at a rate of 10 frames/s.
When detecting moving targets in image sequences of dynamic scenes, eliminating the inter-frame global motion caused by camera movement, so that the static background and the moving objects in the images can be segmented, is a problem that must be solved. Targeting the characteristics of dynamic-scene image sequences with complex backgrounds, this paper presents a new method for background discrimination and moving-target detection based on recovering the 3D positions of scene reference points. First, a layered motion model for image sequences and a motion segmentation method based on it are introduced. Then, the estimated projection matrices are used to compute the 3D positions of the reference points of each motion layer across the sequence; based on how the recovered 3D reference-point positions of the same scene element vary between frames, the motion layers corresponding to the static background and to the moving targets are discriminated, thereby segmenting the static background and moving targets. Finally, a detailed algorithm for moving-target detection in dynamic-scene image sequences is given. Experimental results show that the new algorithm effectively solves the problem of detecting moving targets in dynamic-scene image sequences with multiple sets of inter-frame global motion parameters, and considerably improves the effectiveness and robustness of moving-target tracking.
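The core discrimination idea, that static-background features triangulate to a stable 3D position across frame pairs while moving-object features do not, can be sketched with linear (DLT) triangulation; the cameras, point trajectories, and helpers below are all illustrative, not the paper's projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]          # null vector of A (homogeneous point)
    return X[:3] / X[3]

def camera(t):
    """Projection matrix [I | -C] for a camera at C = (t, 0, 0)."""
    return np.hstack([np.eye(3), np.array([[-t], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

cams = [camera(t) for t in (0.0, 1.0, 2.0)]      # camera panning along x
static = lambda t: np.array([-0.5, 0.3, 4.0])    # background point
moving = lambda t: np.array([0.5, 0.5, 5.0 + t]) # target receding in depth

def recon_pair(point_at, i, j):
    """Reconstruct the feature's 3D position from frames i and j."""
    return triangulate(cams[i], cams[j],
                       project(cams[i], point_at(i)),
                       project(cams[j], point_at(j)))

s01, s12 = recon_pair(static, 0, 1), recon_pair(static, 1, 2)
m01, m12 = recon_pair(moving, 0, 1), recon_pair(moving, 1, 2)
print("static variation:", np.linalg.norm(s01 - s12))
print("moving variation:", np.linalg.norm(m01 - m12))
```

The background point reconstructs to the same 3D position from every frame pair, while the moving point's recovered position drifts, which is the variation signature used to label motion layers.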
A Spatio-Temporal Video Object Extraction Algorithm for Dynamic Scenes   (total citations: 1; self-citations: 0; citations by others: 1)
In practice, many video sequences have moving backgrounds, which makes extracting video objects from them complicated. We therefore propose a video object extraction algorithm for dynamic scenes based on motion estimation and a graph pyramid. The algorithm first introduces phase correlation to obtain motion vectors; because this avoids the influence of illumination changes in the sequence, it improves efficiency and robustness. Global motion estimation with a parametric model then yields the final motion mask. A graph-pyramid algorithm next spatially segments the image region inside the current mask, and the semantic video object is finally extracted. Compared with existing algorithms, when extracting moving objects from video streams with dynamic scenes, the algorithm effectively avoids precise background compensation, which both saves computation and yields semantic objects of higher accuracy. Experiments show that the algorithm performs well in segmenting both rigid and non-rigid moving objects in dynamic scenes.
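The phase-correlation step for obtaining motion vectors can be sketched with numpy FFTs (integer shifts between whole frames only; the parametric global motion model described above is not shown):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer shift (dy, dx) with a == np.roll(b, (dy, dx), (0, 1))."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12        # keep only the phase: insensitive to gain changes
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = a.shape
    # map the correlation peak to a signed shift
    return (int(dy - h) if dy > h // 2 else int(dy),
            int(dx - w) if dx > w // 2 else int(dx))

rng = np.random.default_rng(4)
prev = rng.random((64, 64))
curr = np.roll(prev, (5, -3), axis=(0, 1))   # simulated camera-induced shift
print(phase_correlation(curr, prev))
```

Because only the spectral phase is correlated, a global brightness change between frames leaves the peak location unchanged, which is the illumination robustness noted above.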