Similar Documents
18 similar documents found
1.
A Human Motion Retargeting Method
This paper introduces the concept of the human lower-limb vector. An analysis of human motion shows that the lower-limb vector preserves the principal characteristics of a motion, which motivates a motion retargeting method based on the invariance of lower-limb vector features and thereby improves the reusability of motion-capture data. The method targets lower-limb motion retargeting: it can retarget motion data from an original skeleton to a target skeleton with different bone-length ratios while preserving the principal characteristics of the original motion. Experimental results show that the method achieves good retargeting quality with fast computation.
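The core idea of the abstract above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the "lower-limb vector" is taken to be the hip-to-ankle direction, which is kept invariant while the target skeleton's leg length is substituted.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def retarget_ankle(src_hip, src_ankle, tgt_hip, tgt_leg_len):
    """Keep the source lower-limb direction (the invariant feature);
    scale it by the target skeleton's leg length."""
    direction = unit(sub(src_ankle, src_hip))
    return tuple(h + tgt_leg_len * d for h, d in zip(tgt_hip, direction))

# Source leg points straight down with length 0.9 m; target leg is 1.2 m.
src_hip, src_ankle = (0.0, 1.0, 0.0), (0.0, 0.1, 0.0)
tgt_ankle = retarget_ankle(src_hip, src_ankle, (0.0, 1.3, 0.0), 1.2)
```

The direction of the limb is reused unchanged, so the retargeted pose keeps the source motion's character even though the bone lengths differ.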

2.
Motion Editing and Motion Retargeting Based on Spacetime Constraints
Motion capture has recently become one of the most promising techniques in human animation. Although many capture approaches exist, they are typically expensive and the captured motion types are relatively limited. To improve the reusability of motion-capture data and generate varied animations that fit complex scenes, the captured data must be edited and retargeted. This paper introduces a motion editing and retargeting method based on spacetime constraints: given a set of spacetime constraints, a corresponding objective function is built, and inverse kinematics combined with numerical optimization solves for motion poses that satisfy the constraints. Experimental results show that the method can generate a variety of realistic motions adapted to different scenes and improves data reusability.

3.
Virtual-human sign-language video display platforms based on Chinese sign-language synthesis are a new research topic. To meet the gesture-motion smoothness requirements of broadcast news programs, this paper implements a context-dependent gesture-motion smoothing algorithm that exploits the difference between adjacent frames to achieve smooth transitions; its visual effect is smoother and more natural than traditional interpolation. It also proposes a gesture motion retargeting algorithm that combines statistics and rules: on top of the statistical method, rule constraints are applied for different skeleton sizes and motion characteristics, so that gesture motion data from a standard model can be applied to a new model without losing accuracy. Finally, by extending the basic sign-vocabulary representations and using alpha blending, a virtual-human sign-language synthesis and display platform for broadcast news programs is implemented with good results.

4.
To address the lack of generality in skeleton motion retargeting networks based on joint-coordinate representations, this paper proposes a general bidirectional recurrent autoencoder that can retarget motion from a source skeleton to multiple target skeletons. The autoencoder is trained on joint-coordinate motion data with reconstruction error as the loss function. After training, the autoencoder first computes the latent variables and the reconstructed motion for the source data; bone-length, footprint, root-position, and bone-angle constraints are then imposed on the reconstructed motion, and the loss is back-propagated into the latent space to optimize the latent variables; the retargeted motion is obtained after several iterations. Experiments on the CMU motion database show that the proposed autoencoder and the four constraints can retarget joint-coordinate motion data, and that the resulting motions perform better in bone-length error, bone-angle error, end-effector trajectories, and smoothness.
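The "optimize the latent under constraints" step can be illustrated with a deliberately tiny stand-in: a hypothetical one-dimensional latent, a toy decoder, and a bone-length constraint minimized by gradient descent (finite differences standing in for back-propagation). None of these names or settings come from the paper.

```python
def decode(z):
    """Hypothetical decoder standing in for the trained autoencoder:
    maps a scalar latent to two joint positions on a line."""
    return [(0.0, 0.0), (z, 0.0)]

def bone_length_loss(z, target_len):
    joints = decode(z)
    length = abs(joints[1][0] - joints[0][0])
    return (length - target_len) ** 2

def optimize_latent(z, target_len, lr=0.1, steps=200, eps=1e-5):
    """Gradient descent on the latent via central finite differences,
    mimicking back-propagating the constraint loss into latent space."""
    for _ in range(steps):
        g = (bone_length_loss(z + eps, target_len)
             - bone_length_loss(z - eps, target_len)) / (2 * eps)
        z -= lr * g
    return z

# Start from the source latent (bone length 1.0) and pull it toward the
# target skeleton's bone length of 1.5.
z_star = optimize_latent(1.0, target_len=1.5)
```

The real method optimizes a high-dimensional latent under four constraints at once; the loop structure, however, is the same: decode, measure constraint violation, push the gradient back into the latent.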

5.
Video-Based Human Animation
Existing motion-capture-based animation generally suffers from high cost, and the subject's movement is restricted by the capture equipment. This paper proposes a new video-based human animation technique: human motion is first captured from video, then edited and retargeted to animated characters to produce animation clips that meet the animator's requirements. The key techniques of motion capture, motion editing, and motion retargeting are studied in depth, and a two-camera video-based human animation system (VBHA) is developed on top of them. The system's results demonstrate the feasibility of capturing motion from widely available, low-cost video and generating realistic animation after motion editing and retargeting.

6.
Capturing a complete gesture motion sequence for dynamic gesture recognition requires the synchronized acquisition of both palm position changes and palm posture changes, which is difficult for any single existing sensor. This paper therefore proposes a collaborative dynamic gesture recognition model that fuses two heterogeneous sensor channels, a depth camera and a gyroscope. The model acquires complete gesture motion data from both palm position and palm posture, and improves recognition efficiency and accuracy through data preprocessing, mutual-information-based feature-level fusion, and classification. Recognition experiments on digit gestures 0-9 and lowercase letter gestures a-z show that the model reduces feature-vector dimensionality and computational complexity while improving recognition accuracy.
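The mutual information underlying the feature-level fusion above can be computed directly for discretized feature channels. A minimal sketch, with made-up example sequences rather than the paper's features:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two aligned discrete feature sequences,
    estimated from empirical joint and marginal frequencies."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log2(p / ((px[x] / n) * (py[y] / n)))
    return mi

a = [0, 1, 0, 1, 0, 1, 0, 1]   # one feature channel
b = [0, 0, 1, 1, 0, 0, 1, 1]   # an independent channel
# A channel is maximally informative about itself (1 bit for uniform
# binary data) and carries no information about an independent channel.
```

In a fusion pipeline, features with high mutual information to the class label (and low redundancy with each other) are the ones worth keeping, which is how dimensionality can drop without hurting accuracy.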

7.
This paper proposes and designs an active network security system that redirects network attack traffic into a honeynet environment, and gives concrete implementations of its traffic redirection, data capture, data control, and automatic alerting functions.

8.
This paper proposes an efficient multi-target detection and tracking method based on the HSV color space that detects and tracks multiple fingertip targets in real time through a camera. A set of dynamic gesture models based on fingertip trajectories is defined, together with a dynamic gesture recognition method: two-point dynamic gestures are learned and recognized with a BP neural network, while mouse-emulation gestures and four-point dynamic gestures are recognized from the relative positions of the fingertips. Tests show that the method tracks multiple moving fingertips quickly and accurately and recognizes dynamic multi-point gestures.
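An HSV-space skin test is the usual first stage of such fingertip detection. A minimal per-pixel sketch using the stdlib `colorsys` module; the threshold values here are illustrative, not the paper's:

```python
import colorsys

def is_skin_hsv(r, g, b):
    """Rough skin-color test for one RGB pixel (components 0-255).
    Hue near red/orange, moderate saturation, not too dark."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h <= 50 / 360 and 0.15 <= s <= 0.9 and v >= 0.2

# Typical skin tone passes; a saturated blue background pixel does not.
skin = is_skin_hsv(220, 170, 140)
background = is_skin_hsv(30, 90, 200)
```

In practice the binary mask produced this way is cleaned up with morphology and connected components before fingertip candidates are extracted, but the HSV threshold is what makes the detection robust to brightness changes.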

9.
Human Motion Data Extraction and Analysis for Designing Complex Motions of Humanoid Robots
This paper presents a method for extracting and analyzing human motion data in the design of complex humanoid-robot motions. First, human motion data are acquired with a motion capture system, and motion retargeting outputs data for a simplified human-robot model. The motion data are then analyzed and kinematically resolved, and an inverse-kinematics solution for the humanoid robot based on the human motion data is given, yielding joint-angle data for the robot model. After kinematic-constraint and stability adjustments, motion trajectories applicable to a humanoid robot are generated. Finally, a broadsword-routine experiment on the humanoid robot BHR-2 verifies the effectiveness of the method.
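The inverse-kinematics step (end-effector position to joint angles) has a closed form for a planar two-link limb, which is enough to illustrate the idea. A toy stand-in for the paper's humanoid-robot IK, not its actual solver:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic IK for a planar two-link limb via the law of cosines.
    Returns (base, elbow) joint angles in radians (elbow-down branch)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))  # clamp for safety
    base = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                         l1 + l2 * math.cos(elbow))
    return base, elbow

# Reach the point (1, 1) with two unit-length links.
base, elbow = two_link_ik(1.0, 1.0, 1.0, 1.0)
```

Running the forward kinematics on the returned angles reproduces the target point, which is the basic correctness check any IK stage must pass before stability adjustment.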

10.
With the rapid development of computer graphics and the animation industry, implementing motion editing and motion retargeting on computers to build reusable character motion libraries has become an important means of producing animation quickly. This paper proposes a method for extracting and retargeting character motion trajectories: the trajectories of a character's end joints are recorded and normalized by a set of rules to build a character motion-trajectory library, and a CCD inverse-kinematics solver retargets them to another character, enabling the reuse of character motions with satisfactory results.
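The CCD (Cyclic Coordinate Descent) solver mentioned above iterates over the joints from the end of the chain to the base, rotating each so the end effector swings toward the target. A minimal 2D sketch under simplified assumptions (points instead of a full skeleton):

```python
import math

def ccd_ik(joints, target, iterations=50):
    """Cyclic Coordinate Descent IK on a 2D chain of points.
    Each pass rotates every joint (end to base) so that the end
    effector moves toward the target; bone lengths are preserved."""
    pts = [list(p) for p in joints]
    for _ in range(iterations):
        for i in range(len(pts) - 2, -1, -1):
            jx, jy = pts[i]
            ex, ey = pts[-1]
            a_eff = math.atan2(ey - jy, ex - jx)       # joint -> effector
            a_tgt = math.atan2(target[1] - jy, target[0] - jx)
            c, s = math.cos(a_tgt - a_eff), math.sin(a_tgt - a_eff)
            for k in range(i + 1, len(pts)):           # rotate the subchain
                dx, dy = pts[k][0] - jx, pts[k][1] - jy
                pts[k][0] = jx + c * dx - s * dy
                pts[k][1] = jy + s * dx + c * dy
    return pts

# Two unit links starting along the x-axis, reaching for (0, 1.5).
chain = ccd_ik([(0, 0), (1, 0), (2, 0)], target=(0.0, 1.5))
```

CCD is popular for retargeting because it needs no Jacobian and converges quickly for reachable targets, though it can produce uneven joint distributions without extra constraints.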

11.
To reduce the labor of animation production, raise productivity, and generate realistic animation automatically, this paper proposes a new algorithm that combines motion capture with inverse kinematics to perform real-time motion retargeting in articulated-figure animation. The main idea is to first compute the target character's end-effector positions from the body proportions of the source performer and the target character, and then solve for the target character's joint rotations with inverse kinematics. Because the algorithm fully exploits the dense repetition in the recorded source motion, and the designed positioning rules satisfy the original motion's end-effector constraints, it produces animation very similar to the original motion without violating the given constraints. Experimental data also demonstrate the algorithm's performance and advantages.

12.
This paper presents a remote manipulation method for a mobile manipulator driven by the operator's gestures. In particular, a tracked mobile robot is equipped with a 4-DOF robot arm to grasp objects. The operator uses one hand to control both the motion of the mobile robot and the posture of the robot arm via the gesture-polysemy scheme put forward in this paper. A Leap Motion (LM) sensor, which provides the position and posture of the hand, is employed in the system. Two filters estimate the position and posture of the human hand to reduce the sensor's inherent noise: a Kalman filter estimates the position, and a particle filter estimates the orientation. The advantage of the proposed method is that a mobile manipulator can be controlled with just one hand using an LM sensor. The effectiveness of the proposed human-robot interface was verified in the laboratory with a series of experiments, and the results indicate that it tracks the movements of the operator's hand with high accuracy. The system can be employed by a non-professional operator for robot teleoperation.
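The position-smoothing half of the pipeline above can be sketched with a scalar Kalman filter under a random-walk motion model. The noise settings `q` and `r` are illustrative, not the paper's tuning:

```python
def kalman_1d(measurements, q=1e-3, r=0.05):
    """Minimal 1D Kalman filter (random-walk state model) of the kind
    used to smooth noisy hand-position readings.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = measurements[0], 1.0     # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                      # predict: uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update toward the new measurement
        p *= (1 - k)                # uncertainty shrinks after update
        estimates.append(x)
    return estimates

noisy = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0]   # jittery sensor readings
smoothed = kalman_1d(noisy)
```

Orientation lives on a non-Euclidean space, which is why the paper pairs this with a particle filter for posture rather than reusing the Kalman filter there.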

13.
14.
Applying motion‐capture data to multi‐person interaction between virtual characters is challenging because one needs to preserve the interaction semantics while also satisfying the general requirements of motion retargeting, such as preventing penetration and preserving naturalness. An efficient means of representing interaction semantics is by defining the spatial relationships between the body parts of characters. However, existing methods consider only the character skeleton and thus are not suitable for capturing skin‐level spatial relationships. This paper proposes a novel method for retargeting interaction motions with respect to character skins. Specifically, we introduce the aura mesh, which is a volumetric mesh that surrounds a character's skin. The spatial relationships between two characters are computed from the overlap of the skin mesh of one character and the aura mesh of the other, and then the interaction motion retargeting is achieved by preserving the spatial relationships as much as possible while satisfying other constraints. We show the effectiveness of our method through a number of experiments.

15.
Motion capture is a technique of digitally recording the movements of real entities, usually humans. It was originally developed as an analysis tool in biomechanics research, but has grown increasingly important as a source of motion data for computer animation, where it is widely used in both cinema and video games. Hand motion capture and tracking in particular has received much attention because of its critical role in the design of new human-computer interaction methods and in gesture analysis; capturing human hand motion remains one of the main difficulties. This paper gives an overview of ongoing research, "HandPuppet3D", carried out in collaboration with an animation studio, which applies computer vision techniques to develop a prototype desktop system and associated animation process that allow an animator to control 3D character animation through hand gestures. The eventual goal of the project is to support existing practice by providing a softer, more intuitive user interface that improves the productivity of the animation workflow and the quality of the resulting animations. To help achieve this goal, the focus has been placed on developing a prototype camera-based desktop gesture-capture system that captures hand gestures and interprets them in order to generate and control the animation of 3D character models. Methods are discussed for motion tracking and capture in 3D animation, and in particular for hand motion tracking and capture. HandPuppet3D aims to combine gesture capture, interpretation of the captured gestures, and control of the target 3D animation software; this involves the development and testing of a motion analysis system built from recently developed algorithms. We review current software and research methods available in this area and describe our current work.

16.
To address the tracking failures of the TLD (Tracking-Learning-Detection) algorithm under uneven illumination, heavy occlusion, and blurred targets, this paper proposes a moving-gesture tracking algorithm that optimizes TLD with a convolutional neural network. Gesture regions are taken as positive samples and their backgrounds as negative samples; HOG features of the gestures are extracted and fed into a convolutional neural network for training, yielding a gesture-detection classifier that locates the target gesture region and enables automatic gesture recognition. The TLD algorithm then tracks and learns the gesture, estimating and correcting the positive and negative samples in real time, while SURF feature matching updates the tracker. Experimental results show that the algorithm improves tracking precision by 4.24% over the classic TLD algorithm, strengthens the tracking of moving gestures, and is more robust than classic tracking algorithms.

17.
Objective: Image/video retargeting for adaptive display has attracted wide attention in recent years. Compared with image retargeting and 2D video retargeting, 3D video retargeting must preserve both disparity and temporal coherence. Existing 3D video retargeting methods preserve disparity but ignore the adjustment of disparity comfort. To address the visual discomfort caused by excessive and abruptly changing disparity, this paper proposes a stereoscopic video retargeting method based on joint spatiotemporal disparity optimization that keeps the video's disparity range within a comfort zone. Method: A uniform grid is built on the original video, and saliency and disparity are extracted to obtain an average saliency value per grid cell. A shape-preservation energy term is built from the principle of similarity transformation; a temporal-coherence energy term is built from target trajectories and the disparity changes of the original video; and a disparity-comfort adjustment energy term is built from the vergence-accommodation principle of human vision. Weighted by the saliency of each cell, all energy terms are solved jointly to obtain optimized grid-vertex coordinates, which determine the grid deformation and generate video at the specified aspect ratio. Results: Experiments show that, compared with seam-carving-based stereoscopic video retargeting, the proposed method performs better in shape preservation, temporal coherence, and disparity comfort. Under existing objective quality metrics it also outperforms uniform scaling and seam-carving retargeting, with low time complexity: the per-frame time is reduced by at least 98% relative to seam carving. Conclusion: The proposed joint spatiotemporal method optimizes disparity in both the temporal and comfort dimensions while preserving temporal coherence, showing good stability and robustness. It can retarget 3D video to 3D displays of different sizes while keeping stereoscopic viewing comfortable.

18.
Aiming at the use of hand gestures for human–computer interaction, this paper presents a real-time approach to the spotting, representation, and recognition of hand gestures from a video stream. The approach exploits multiple cues including skin color, hand motion, and shape. Skin color analysis and coarse image motion detection are joined to perform reliable hand gesture spotting. At a higher level, a compact spatiotemporal representation is proposed for modeling appearance changes in image sequences containing hand gestures. The representation is extracted by combining robust parameterized image motion regression and shape features of a segmented hand. For efficient recognition of gestures made at varying rates, a linear resampling technique for eliminating the temporal variation (time normalization) while maintaining the essential information of the original gesture representations is developed. The gesture is then classified according to a training set of gestures. In experiments with a library of 12 gestures, the recognition rate was over 90%. Through the development of a prototype gesture-controlled panoramic map browser, we demonstrate that a vocabulary of predefined hand gestures can be used to interact successfully with applications running on an off-the-shelf personal computer equipped with a home video camera.
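The linear resampling (time normalization) step described above maps gestures performed at different speeds onto a fixed number of samples. A minimal one-dimensional sketch; the real system resamples whole feature vectors per frame, not single scalars:

```python
def resample_linear(seq, n):
    """Linearly resample a 1D trajectory to n samples, preserving the
    first and last values (time normalization)."""
    if n == 1:
        return [seq[0]]
    out = []
    for i in range(n):
        t = i * (len(seq) - 1) / (n - 1)   # fractional source index
        j = min(int(t), len(seq) - 2)
        frac = t - j
        out.append(seq[j] * (1 - frac) + seq[j + 1] * frac)
    return out

fast = [0.0, 1.0, 0.0]              # same gesture performed quickly
slow = resample_linear(fast, 5)     # normalized to five samples
```

After normalization, gestures of any duration become fixed-length vectors, so a classifier trained on the library of 12 gestures can compare them directly.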
