Similar Documents
20 similar documents found (search time: 718 ms)
1.
苏乐  柴金祥  夏时洪 《软件学报》2016,27(S2):172-183
We propose a method for real-time online capture of 3D human motion from depth images based on local pose priors. The key idea is to automatically extract semantically meaningful virtual sparse 3D markers from the captured depth images, quickly retrieve the K nearest poses from a pre-built heterogeneous 3D human pose database to construct a local pose prior model, and reconstruct the 3D human pose sequence online in real time by iteratively solving a maximum a posteriori optimization. Experimental results show that the method tracks and reconstructs stable, accurate 3D human pose sequences in real time at about 25 fps, and, after an automatic calibration of individual body parameters, can track performers with widely varying body sizes. The method is therefore applicable to 3D game/film production, human-computer interaction control, and related fields.

2.
Recently, several important block ciphers have been considered broken by brute-force-like cryptanalysis, with a time complexity faster than exhaustive key search: the attack goes over the entire key space but performs less than a full encryption for each possible key. Motivated by this observation, we describe a meet-in-the-middle attack that can always be successfully mounted against any practical block cipher with success probability one. The data complexity of this attack is the smallest according to the unicity distance. The time complexity can be written as 2^{k(1−ε)}, where ε > 0 for all practical block ciphers. Previously, the commonly accepted security bound was the length k of the given master key. Our result points out that this k-bit security is always overestimated and can never be reached, because of an inevitable loss of key bits. No amount of clever design can prevent it, but increasing the number of rounds can reduce this key loss as much as possible. We give more insight into the problem of the upper bound on effective key bits in block ciphers and show a more accurate bound. We also suggest a relationship between key size and block size: when the number of rounds is fixed, it is better to take a key size equal to the block size. Effective key bits of many well-known block ciphers are calculated and analyzed, confirming lower security margins than previously thought. The results in this article motivate us to reconsider the real complexity that a valid attack should be compared to.
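The abstract's bound can be made concrete with a small calculation: if the best attack runs in time 2^{k(1−ε)}, then k(1−ε) is the cipher's effective key length and kε is the key loss. The numeric values below are illustrative assumptions, not figures from the paper.

```python
def effective_key_bits(k: int, epsilon: float) -> float:
    """Effective security level when the best attack costs 2^{k(1 - epsilon)}."""
    return k * (1.0 - epsilon)

def key_loss_bits(k: int, epsilon: float) -> float:
    """Key bits lost relative to the nominal k-bit security level."""
    return k * epsilon

# Hypothetical example: a 128-bit master key with epsilon = 0.02
print(effective_key_bits(128, 0.02))  # ~125.44 effective bits
print(key_loss_bits(128, 0.02))       # ~2.56 bits lost
```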

3.
We present a novel approach to tracking a full human body mesh with a single depth camera, e.g. Microsoft Kinect, using a template body model. The proposed observation-oriented tracking mainly targets fitting the body mesh silhouette to the 2D user boundary in the video stream by deforming the body. It is fast enough to be integrated into real-time or interactive applications, which is impossible with traditional iterative-optimization-based approaches. Our method is a composite of two main stages: user-specific body shape estimation and on-line body tracking. We first develop a novel method to fit a 3D morphable human model to the actual body shape of the user in front of the depth camera, using two constraints: point clouds from depth images, and the correspondence between the foreground user mask contour and the boundary of the projected body model. On-line tracking then proceeds in successive steps. At each frame, the joint angles of the template skeleton are optimized towards the captured Kinect skeleton. Then, the aforementioned contour correspondence is adopted to adjust the projected body model vertices towards the contour points of the foreground user mask, using a Laplacian deformation technique. Experimental results show that our method achieves fast, high-quality tracking. We also show that the proposed method benefits three applications: virtual try-on, full human body scanning, and applications in manufacturing systems.

4.
陈忠泽  黄国玉 《计算机应用》2008,28(5):1251-1254
We propose a method that estimates a target's 3D pose in real time from its stereo images using an artificial neural network. The network's input vector consists of the coordinates of the target's feature points in synchronized stereo image frames; the output vector represents the 3D pose of several key positions of the target (from which a 3D model of the target can be built). The output training samples needed to fit the neural network are acquired with the REACTOR motion capture system. Experiments show that the 3D pose estimation error of this algorithm is below 5%, so it can be applied effectively to real-time computer synthesis of 3D virtual targets and similar tasks.

5.
《Real》1997,3(6):415-432
Real-time motion capture plays a very important role in various applications, such as 3D interfaces for virtual reality systems, digital puppetry, and real-time character animation. In this paper we tackle the problem of estimating and recognizing the motion of articulated objects using the optical motion capture technique. In addition, we present an effective method to control an articulated human figure in real time. The heart of this problem is the estimation of the 3D motion and posture of an articulated, volumetric object using feature points from a sequence of multiple perspective views. Under some moderate assumptions, such as smooth motion and known initial posture, we develop a model-based technique for the recovery of the 3D location and motion of a rigid object using a variation of the Kalman filter. The posture of the 3D volumetric model is updated by the 2D image flow of the feature points across all views. Two novel concepts – the hierarchical Kalman filter (HKF) and the adaptive hierarchical structure (AHS), which incorporates the kinematic properties of the articulated object – are proposed to extend our formulation from the rigid object to the articulated one. Our formulation also allows us to avoid two classic problems in 3D tracking: the multi-view correspondence problem and the occlusion problem. By adding more cameras and placing them appropriately, our approach can deal with the motion of the object over a very wide area. Furthermore, multiple objects can be handled by managing multiple AHSs and processing multiple HKFs. We show the validity of our approach using synthetic data acquired simultaneously from multiple virtual cameras in a virtual environment (VE) and real data derived from a moving light display with walking motion. The results confirm that the model-based algorithm works well for tracking multiple rigid objects.
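As a rough illustration of the Kalman-filter building block that a hierarchical formulation like this stacks per joint, here is a minimal 1D constant-velocity filter. The state model, time step, and noise levels are assumptions made for the sketch, not the paper's parameters.

```python
import numpy as np

def kalman_step(x, P, z, dt=1/30, q=1e-3, r=1e-2):
    """One predict/update cycle of a 1D constant-velocity Kalman filter."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# track a feature point moving at 1 unit/s, sampled at 30 Hz
x, P = np.zeros(2), np.eye(2)
for t in range(30):
    z = np.array([t / 30.0])
    x, P = kalman_step(x, P, z)
print(x)  # estimated [position, velocity]: position near the last measurement, velocity near 1.0
```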

6.
《Advanced Robotics》2013,27(8):893-911
This study proposes a new approach to the virtual realization of force/tactile sensors in machines equipped with no real sensors. The key to our approach is that the machine exploits the user's biological signals. The approach is therefore not dependent on the controlled object and is expected to be widely applicable to a variety of machines, including robots. This article describes an example robotic system comprising an industrial robot manipulator, a motion capture system and a surface electromyogram (EMG) measurement apparatus. By monitoring and recording the user's surface EMG and postural information in real time, we show that a robot equipped with no force/tactile sensors behaved similarly to one possessing sensors over its body. Another advantage of our approach is demonstrated by a task in which a robot and a user cooperatively hold and move a heavy load.

7.
Spatial trajectory tracking of human motion uses sensor and computer technology to analyze and record human movement. To track the spatial trajectory of human motion, this paper designs a wearable human motion capture system that obtains real-time limb pose information from inertial sensor units worn at the joints. Each inertial unit consists of an accelerometer, an angular-rate sensor (gyroscope) and a magnetometer. A microcontroller unit reads the sensor data, updates a quaternion using low-pass and Kalman filtering, and sends the preprocessed data in real time to a PC over a Bluetooth module. Experiments on limb movements at different angles demonstrate that inertial sensors can track the spatial trajectory of human limb motion.
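A first-order sketch of the gyroscope-to-quaternion update such a wearable IMU node performs is shown below; the accelerometer/magnetometer corrections and the Kalman filtering step are omitted, and the sample rate and rotation are invented values for the demonstration.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """One attitude update from a body-frame angular rate omega (rad/s),
    using q_dot = 0.5 * q x [0, omega] and first-order integration."""
    q_dot = 0.5 * quat_mul(q, np.array([0.0, *omega]))
    q = q + q_dot * dt
    return q / np.linalg.norm(q)   # renormalize to a unit quaternion

# rotate at 90 deg/s about z for 1 s, sampled at 100 Hz
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = integrate_gyro(q, (0.0, 0.0, np.pi / 2), 0.01)
print(q)  # ~[0.707, 0, 0, 0.707], i.e. a 90-degree rotation about z
```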

8.
The widespread use of smartphones with GPS and orientation sensors opens up new possibilities for location-based annotations in outdoor environments; indoors, however, a completely different approach is required. In this study, we introduce IMAF, a novel indoor modeling and annotation framework on a mobile phone. The framework produces a 3D room model in situ from five selections by the user, without prior knowledge of actual geometric distances or additional apparatus. Using the framework, non-experts can easily capture room dimensions and annotate locations and objects within the room, linking virtual information to the real space represented by an approximated box. For registering the 3D room model to the real space, a hybrid method of visual tracking and device sensors obtains accurate orientation tracking while still achieving interactive frame rates for real-time applications on a mobile phone. Once the created room model is registered to the real space, user-generated annotations can be attached and viewed in AR and VR modes. Finally, the framework supports object-based space-to-space registration for viewing and creating annotations from views other than the one that generated them. The performance of the proposed framework is demonstrated in terms of model accuracy, modeling time, stability of visual tracking and annotation satisfaction. In the last section, we present two exemplar applications built on IMAF.

9.
To address the high production cost of traditional human animation and the constraints that capture devices place on human movement, we propose a 3D human animation method based on monocular-video motion tracking. We first present the system framework, then recover the 3D coordinates of the joints using a scaled orthographic projection model and a human skeleton model; the joint rotation Euler angles are computed by inverse kinematics. Finally, the human body is modeled with the H-Anim standard, and the virtual human is driven by the joint Euler angles to produce 3D animation. Experimental results show that the system can accurately track and 3D-reconstruct human motion and can be applied to human animation production.

10.
A Real-Time 3D Human Motion Tracking and Modeling System (cited by 1)
We propose a new design for a real-time 3D human motion tracking and modeling system and implement a robust reference application on top of it. For applications such as human-computer interaction, where tracking precision requirements are moderate, the system strikes a good compromise between accuracy on one hand and simplicity and generality on the other. Multiple cameras capture images, scene depth is computed in real time, and depth and color information are combined for human tracking. A simple 3D model of the upper body is employed, and the head and hands are tracked with a color-histogram-based particle filter to recover the model parameters. The system is initialized by face detection and hand skin-color clustering. Extensive experiments show that the system can track the upper body and recover the 3D model against complex backgrounds, initialize fully automatically, and exhibits strong robustness to interference with automatic error recovery. The system runs at 25 frames/s on a 2.4 GHz PC.

11.
A data glove based on MEMS inertial sensors is designed. Following the principles of inertial navigation and rigid-body dynamics, a body sensor network of miniature sensors is constructed, and multi-sensor data fusion is used to solve for motion attitude information, capturing the motion data of each finger joint. A 3D virtual hand capture system built with computer graphics techniques is used for comparative performance evaluation. Experimental results show that the designed system is stable and adaptable and can capture hand motion information effectively in real time.

12.
Juggling, which uses both hands to keep several objects in the air at once, is admired by anyone who sees it. However, skillful real-world juggling requires long, hard practice. Therefore, we propose an interesting method to enable anyone to juggle skillfully in the virtual world. In the real world, the human motion has to follow the motion of the moving objects; in the virtual world, the objects' motion can be adjusted together with the human motion. By using this freedom, we have generated a juggling avatar that can follow the user's motion. The user simply makes juggling-like motions in front of a motion sensor. Our system then searches for juggling motions that closely match the user's motions and connects them smoothly. We then generate moving objects that both satisfy the laws of physics and are synchronized with the synthesized motion of the avatar. In this way, we can generate a variety of juggling animations by an avatar in real time. Copyright © 2016 John Wiley & Sons, Ltd.
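The object-motion synthesis this abstract describes must satisfy ballistics: given a throw point, a catch point, and a flight time, the initial velocity is fully determined by the projectile equation. A minimal sketch follows; the hand positions and timing are invented for illustration, not taken from the paper.

```python
import numpy as np

G = np.array([0.0, -9.81, 0.0])  # gravity (m/s^2), y-up convention

def throw_velocity(p_throw, p_catch, T):
    """Initial velocity so a ball launched at p_throw lands at p_catch
    after exactly T seconds of ballistic flight:
    p_catch = p_throw + v0*T + 0.5*G*T^2, solved for v0."""
    return (np.asarray(p_catch) - np.asarray(p_throw) - 0.5 * G * T**2) / T

def ball_position(p_throw, v0, t):
    """Position of the ball t seconds after release."""
    return np.asarray(p_throw) + v0 * t + 0.5 * G * t**2

# hypothetical throw from the right hand to the left hand in 0.6 s
v0 = throw_velocity([0.2, 1.0, 0.0], [-0.2, 1.0, 0.0], 0.6)
p = ball_position([0.2, 1.0, 0.0], v0, 0.6)
print(p)  # lands at the catch position [-0.2, 1.0, 0.0]
```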

13.
This paper presents an interactive multi-agent system based on a fully immersive virtual environment. A user can interact with the virtual characters in real time via an avatar by changing their moving behavior. Moreover, the user is allowed to select any character as the avatar to be controlled. A path planning algorithm is proposed to address the problem of dynamic navigation of individual and groups of characters in the multi-agent system. A natural interface is designed for the interaction between the user and the virtual characters, as well as the virtual environment, based on gesture recognition. To evaluate the efficiency of the dynamic navigation method, performance results are provided. The presented system has the potential to be used in the training and evaluation of emergency evacuation and other real-time applications of crowd simulation with interaction.

14.
陈鹏展  李杰  罗漫 《计算机应用》2015,35(8):2316-2320
To address the attitude drift, limited real-time performance and high price of current inertial motion capture systems, a low-power, low-cost real-time human motion capture system that effectively overcomes attitude drift is designed. First, based on human kinematics, distributed joint motion capture nodes are constructed; each node runs in a low-power mode and automatically enters sleep when the sampled data fall below a preset threshold, reducing system power consumption. Inertial navigation is combined with a Kalman filter to solve human motion attitude in real time, mitigating the data drift of traditional algorithms. A Wi-Fi module forwards the attitude data over TCP/IP to drive the model in real time. A multi-axis motor test platform was used to evaluate the accuracy of the algorithm, and the system's tracking of a real human body was also examined. Experimental results show that the improved algorithm is more accurate than the traditional complementary filter, keeping angle drift within about 1°, and that its latency shows no obvious lag relative to the complementary filter, so it can track human motion accurately.
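For contrast with the Kalman-based solution above, the complementary filter used as the baseline comparison can be sketched in a few lines. The gain, gyro bias, and reference angle below are invented for the demonstration and show exactly the drift-induced offset such a filter leaves behind.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Classic complementary filter: trust the gyro at high frequency
    and the accelerometer-derived angle at low frequency."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# drifting gyro (0.5 deg/s bias) against a noiseless 30-degree accelerometer reference
angle = 0.0
for _ in range(1000):                 # 10 s at 100 Hz
    angle = complementary_filter(angle, 0.5, 30.0, 0.01)
print(angle)  # ~30.245 deg: converged, but with a small bias-induced offset
```

The steady state is 30 + alpha * bias * dt / (1 - alpha) = 30.245 degrees, which illustrates why a filter that estimates the gyro bias (as a Kalman formulation can) reduces residual drift.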

15.
Motion capture is a technique of digitally recording the movements of real entities, usually humans. It was originally developed as an analysis tool in biomechanics research, but has grown increasingly important as a source of motion data for computer animation, where it has been widely used for both cinema and video games. Hand motion capture and tracking in particular has received much attention because of its critical role in the design of new human-computer interaction methods and gesture analysis; one of the main difficulties is the capture of human hand motion. This paper gives an overview of ongoing research, "HandPuppet3D", carried out in collaboration with an animation studio to employ computer vision techniques in a prototype desktop system and associated animation process that allow an animator to control 3D character animation through hand gestures. The eventual goal of the project is to support existing practice by providing a softer, more intuitive user interface for the animator that improves the productivity of the animation workflow and the quality of the resulting animations. To help achieve this goal, the focus has been placed on developing a prototype camera-based desktop gesture capture system that captures hand gestures and interprets them in order to generate and control the animation of 3D character models. Methods are discussed for motion tracking and capture in 3D animation, in particular hand motion tracking and capture. HandPuppet3D aims to enable gesture capture, interpretation of the captured gestures, and control of the target 3D animation software. This involves the development and testing of a motion analysis system built from recently developed algorithms. We review current software and research methods available in this area and describe our current work.

16.
赵威  李毅 《计算机应用》2022,42(9):2830-2837
To generate more accurate and fluent virtual human animation, a Kinect device captures 3D human pose data while a monocular 3D human pose estimation algorithm infers skeleton joint data from the Kinect color stream, optimizing the pose estimate in real time and driving a virtual character model to generate animation. First, a spatio-temporally optimized skeleton data processing method is proposed to improve the stability of monocular 3D human pose estimation. Second, a pose estimation method fusing Kinect with the Occlusion-Robust Pose Maps (ORPM) algorithm is proposed to deal with Kinect's occlusion problem. Finally, a virtual human animation system based on quaternion vector interpolation and inverse-kinematics constraints is developed, capable of motion simulation and real-time animation generation. Compared with generating animation from Kinect motion capture alone, the proposed method produces more robust pose estimates with some resistance to occlusion; compared with ORPM-based animation generation, it improves the frame rate by a factor of two and yields more realistic, fluent animation.
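The quaternion vector interpolation mentioned in the system description is commonly implemented as spherical linear interpolation (slerp) between per-joint rotations; a minimal sketch of that standard technique (not the paper's code) is:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions [w, x, y, z]."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # flip one input to take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)   # angle between the quaternions
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# interpolate halfway between the identity and a 90-degree rotation about z
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
q_half = slerp(q0, q1, 0.5)
print(q_half)  # a 45-degree rotation about z: ~[0.924, 0, 0, 0.383]
```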

17.
The use of 3D avatars is becoming more frequent with the development of computer technology and the internet. To meet users' requirements, some software allows users to customize an avatar. However, users can only customize the avatar with pre-defined accessories such as hair, clothing and so on; that is, they have limited scope to customize the avatar according to their own style. It would be of interest to users if they could change the appearance of the avatar by their own design, such as creating garments for avatars themselves. This paper provides an easy solution for dressing realistic 3D avatars, aimed at non-professional users and based on a sketch interface. After the user draws a 2D garment profile around the avatar, the prototype system generates an elaborate 3D geometric garment surface dressed on the avatar. The construction of the garment surface is constrained by key body features, and the garment shape is then optimized to remove artefacts. The proposed method generates a uniform mesh suitable for further processing such as mesh refinement, 3D decoration and so on.

18.
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
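The velocity-driven idea in this abstract (computing joint torques from desired joint angular velocities rather than tuning PD position gains) can be sketched on a single toy joint. The gain, inertia, torque limit, and simple Euler-integrated dynamics below are illustrative assumptions, not the paper's values.

```python
def velocity_driven_torque(omega_desired, omega, k_v=50.0, tau_max=200.0):
    """Torque proportional to the angular-velocity error, clamped to
    a hypothetical actuator limit: tau = k_v * (omega_desired - omega)."""
    tau = k_v * (omega_desired - omega)
    return max(-tau_max, min(tau_max, tau))

# simulate one joint with inertia I catching up to a 2 rad/s target velocity
I, omega, dt = 1.5, 0.0, 1.0 / 240.0        # 240 Hz physics step
for _ in range(2400):                        # 10 s of simulation
    tau = velocity_driven_torque(2.0, omega)
    omega += (tau / I) * dt                  # Euler-integrate the joint dynamics
print(omega)  # settles at the desired 2.0 rad/s
```

Unlike a position-tracking PD controller, there is only one gain to choose, which is the ease-of-tuning advantage the abstract claims.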

19.
In recent years, the convergence of computer vision and computer graphics has put forth a new field of research that focuses on the reconstruction of real-world scenes from video streams. To make immersive 3D video a reality, the whole pipeline, spanning from scene acquisition over 3D video reconstruction to real-time rendering, needs to be researched. In this paper, we describe the latest advancements of our system to record, reconstruct and render free-viewpoint videos of human actors. We apply a silhouette-based non-intrusive motion capture algorithm making use of a 3D human body model to estimate the actor's parameters of motion from multi-view video streams. A renderer plays back the acquired motion sequence in real time from any arbitrary perspective. Photo-realistic physical appearance of the moving actor is obtained by generating time-varying multi-view textures from video. This work shows how the motion capture sub-system can be enhanced by incorporating texture information from the input video streams into the tracking process. 3D motion fields are reconstructed from optical flow and used in combination with silhouette matching to estimate pose parameters. We demonstrate that high visual quality can be achieved with the proposed approach and validate the enhancements provided by the motion field step.

20.
In this paper, we demonstrate how a new interactive 3D desktop metaphor based on two-handed 3D direct manipulation registered with head-tracked stereo viewing can be applied to the task of constructing animated characters. In our configuration, a six-degree-of-freedom head tracker and CrystalEyes shutter glasses are used to produce stereo images that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space, which the user may view from different angles by moving his head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen. In this way, both incremental and absolute interactive input techniques are provided by the system. Hand-eye coordination is made possible by registering virtual space exactly to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques. The system has been tested using both Polhemus Fastrak and Logitech ultrasonic input devices for tracking the head and 3D mouse.
