Similar Documents
20 similar documents found (search took 15 ms)
1.
2.
3.
4.
We propose a piano multi-note estimation algorithm based on non-negative matrix factorization (NMF) of energy spectral envelopes. First, RTFI time-frequency analysis is applied to recordings of the 88 individual piano notes to obtain their average energy spectra; temporal averaging and normalization then yield an average energy spectral envelope for each note, and these envelopes are stacked into a basis matrix of single-note energy spectral envelopes. For a test polyphonic excerpt, the same processing yields its average energy spectral envelope, NMF is used to obtain the weight coefficient of each note, and thresholding of the weights produces the multi-note estimate. Performance is evaluated on the UCHO and RAND subsets of the MAPS dataset; compared with the best piano automatic transcription system in MIREX, the proposed algorithm achieves a substantial improvement.
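
The NMF-plus-thresholding step described above can be illustrated with a short sketch. This is a minimal illustration rather than the paper's implementation: the multiplicative update rule with a fixed basis, the iteration count, and the threshold value are all assumptions made for the example.

```python
# Hypothetical sketch of NMF-based note estimation with a fixed single-note basis.
import numpy as np

def estimate_notes(W, v, n_iter=200, threshold=0.1):
    """Estimate active piano notes from one averaged energy-spectral envelope.

    W : (n_bins, 88) non-negative basis of single-note envelopes
    v : (n_bins,)    non-negative averaged envelope of the polyphonic excerpt
    """
    h = np.full(W.shape[1], 1.0 / W.shape[1])        # initial note weights
    eps = 1e-12
    for _ in range(n_iter):                          # multiplicative NMF updates, basis kept fixed
        h *= (W.T @ v) / (W.T @ (W @ h) + eps)
    h /= h.max() + eps                               # normalize weights to [0, 1]
    return np.flatnonzero(h > threshold)             # indices (0..87) of estimated active notes
```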

5.
6.
We present a method to simultaneously estimate 3D body pose and action categories from monocular video sequences. Our approach learns a generative model of the relationship of body pose and image appearance using a sparse kernel regressor. Body poses are modelled on a low-dimensional manifold obtained by Locally Linear Embedding dimensionality reduction. In addition, we learn a prior model of likely body poses and a dynamical model in this pose manifold. Sparse kernel regressors capture the nonlinearities of this mapping efficiently. Within a Recursive Bayesian Sampling framework, the potentially multimodal posterior probability distributions can then be inferred. An activity-switching mechanism based on learned transfer functions allows for inference of the performed activity class, along with the estimation of body pose and 2D image location of the subject. Using a rough foreground segmentation, we compare Binary PCA and distance transforms to encode the appearance. As a postprocessing step, the globally optimal trajectory through the entire sequence is estimated, yielding a single pose estimate per frame that is consistent throughout the sequence. We evaluate the algorithm on challenging sequences with subjects that are alternating between running and walking movements. Our experiments show how the dynamical model helps to track through poorly segmented low-resolution image sequences where tracking otherwise fails, while at the same time reliably classifying the activity type.
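
As a small, hedged illustration of the manifold step mentioned above (not the authors' code), the following uses scikit-learn's Locally Linear Embedding to map training pose vectors onto a low-dimensional manifold; the pose dimensionality, neighbour count, and target dimension are arbitrary choices for the example.

```python
# Sketch of the Locally Linear Embedding step only; all sizes are illustrative.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
poses = rng.random((500, 60))                    # e.g. 500 training poses, 60 joint-angle values
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3)
pose_manifold = lle.fit_transform(poses)         # (500, 3) coordinates on the pose manifold
```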

7.
We present a method for learning a set of generative models which are suitable for representing selected image-domain features of a scene as a function of changes in the camera viewpoint. Such models are important for robotic tasks, such as probabilistic position estimation (i.e. localization), as well as visualization. Our approach entails the automatic selection of the features, as well as the synthesis of models of their visual behavior. The model we propose is capable of generating maximum-likelihood views, as well as a measure of the likelihood of a particular view from a particular camera position. Training the models involves regularizing observations of the features from known camera locations. The uncertainty of the model is evaluated using cross validation, which allows for a priori evaluation of features and their attributes. The features themselves are initially selected as salient points by a measure of visual attention, and are tracked across multiple views. While the motivation for this work is for robot localization, the results have implications for image interpolation, image-based scene reconstruction and object recognition. This paper presents a formulation of the problem and illustrative experimental results.

8.
Scene Parsing Using Region-Based Generative Models
Semantic scene classification is a challenging problem in computer vision. In contrast to the common approach of using low-level features computed from the whole scene, we propose "scene parsing" utilizing semantic object detectors (e.g., sky, foliage, and pavement) and region-based scene-configuration models. Because semantic detectors are faulty in practice, it is critical to develop a region-based generative model of outdoor scenes based on characteristic objects in the scene and spatial relationships between them. Since a fully connected scene configuration model is intractable, we chose to model pairwise relationships between regions and estimate scene probabilities using loopy belief propagation on a factor graph. We demonstrate the promise of this approach on a set of over 2000 outdoor photographs, comparing it with existing discriminative approaches and those using low-level features.
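
To make the pairwise scene-configuration idea concrete, here is a minimal sketch that scores one labeling of detected regions as a sum of log pairwise compatibilities given their spatial relations. It is only illustrative: the compatibility table is invented, and the paper performs inference with loopy belief propagation on a factor graph rather than evaluating a single labeling directly.

```python
# Illustrative pairwise scene-configuration scoring; values and relations are made up.
import math
from itertools import combinations

# log-compatibility of (label_i, label_j) given their spatial relation, e.g. "above"
LOG_PAIRWISE = {
    ("sky", "foliage", "above"): math.log(0.8),
    ("sky", "pavement", "above"): math.log(0.9),
    ("foliage", "pavement", "above"): math.log(0.6),
}
DEFAULT = math.log(0.1)  # unlisted combinations get a small compatibility

def scene_log_score(region_labels, spatial_relation):
    """region_labels: dict region_id -> label; spatial_relation(i, j) -> relation string."""
    score = 0.0
    for i, j in combinations(sorted(region_labels), 2):
        key = (region_labels[i], region_labels[j], spatial_relation(i, j))
        score += LOG_PAIRWISE.get(key, DEFAULT)
    return score
```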

9.
International Journal of Computer Vision - Deep generative modelling for human body analysis is an emerging problem with many interesting applications. However, the latent space learned by such...

10.
程载和 《计算机科学》2017,44(Z11):263-266
To mitigate the effect of variations in expression and pose on face recognition, Xu proposed a two-step face recognition algorithm that uses both the original samples and their mirror-symmetric samples. However, when face images vary strongly due to external factors, this method performs poorly. We therefore propose a two-step face recognition algorithm based on a factor decomposition model. During feature extraction, the new algorithm uses the factor decomposition model to separate the "identity factor" and the "expression factor" from each face image so that they can be controlled. New identities and new expressions are then extracted from the test images and combined with the old identities or old expressions in the training set to synthesize new face images. To preserve classification accuracy, the two-step recognition procedure is applied separately to the original and synthesized samples in the recognition stage, exploiting score-level fusion to further improve recognition performance.

11.
We present a novel interactive learning‐based method for curating datasets using user‐defined criteria for training and refining Generative Adversarial Networks. We employ a novel batch‐mode active learning strategy to progressively select small batches of candidate exemplars for which the user is asked to indicate whether they match the, possibly subjective, selection criteria. After each batch, a classifier that models the user's intent is refined and subsequently used to select the next batch of candidates. After the selection process ends, the final classifier, trained with limited but adaptively selected training data, is used to sift through the large collection of input exemplars to extract a sufficiently large subset for training or refining the generative model that matches the user's selection criteria. A key distinguishing feature of our system is that we do not assume that the user can always make a firm binary decision (i.e., "meets" or "does not meet" the selection criteria) for each candidate exemplar, and we allow the user to label an exemplar as "undecided". We rely on a non‐binary query‐by‐committee strategy to distinguish between the user's uncertainty and the trained classifier's uncertainty, and develop a novel disagreement distance metric to encourage a diverse candidate set. In addition, a number of optimization strategies are employed to achieve an interactive experience. We demonstrate our interactive curation system on several applications related to training or refining generative models: training a Generative Adversarial Network that meets user‐defined criteria, adjusting the output distribution of an existing generative model, and removing unwanted samples from a generative model.
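
For readers unfamiliar with query-by-committee, the sketch below ranks unlabeled exemplars by committee vote entropy and picks the most disagreed-upon batch. It is a generic illustration, not the paper's non-binary strategy or its disagreement distance metric, and the array shapes and batch size are assumptions.

```python
# Generic query-by-committee candidate selection via vote entropy (illustrative only).
import numpy as np

def vote_entropy(committee_probs):
    """committee_probs: (n_members, n_samples, n_classes) predicted class probabilities."""
    votes = committee_probs.argmax(axis=-1)              # (n_members, n_samples) hard votes
    n_classes = committee_probs.shape[-1]
    entropy = np.zeros(votes.shape[1])
    for c in range(n_classes):
        p_c = (votes == c).mean(axis=0)                  # fraction of members voting class c
        nz = p_c > 0
        entropy[nz] -= p_c[nz] * np.log(p_c[nz])
    return entropy

def select_batch(committee_probs, batch_size=8):
    """Pick the exemplars the committee disagrees about most, to show to the user next."""
    return np.argsort(-vote_entropy(committee_probs))[:batch_size]
```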

12.
3D models of objects and scenes are critical to many academic disciplines and industrial applications. Of particular interest is the emerging opportunity for 3D graphics to serve artificial intelligence: computer vision systems can benefit from synthetically-generated training data rendered from virtual 3D scenes, and robots can be trained to navigate in and interact with real-world environments by first acquiring skills in simulated ones. One of the most promising ways to achieve this is by learning and applying generative models of 3D content: computer programs that can synthesize new 3D shapes and scenes. To allow users to edit and manipulate the synthesized 3D content to achieve their goals, the generative model should also be structure-aware: it should express 3D shapes and scenes using abstractions that allow manipulation of their high-level structure. This state-of-the-art report surveys historical work and recent progress on learning structure-aware generative models of 3D shapes and scenes. We present fundamental representations of 3D shape and scene geometry and structures, describe prominent methodologies including probabilistic models, deep generative models, program synthesis, and neural networks for structured data, and cover many recent methods for structure-aware synthesis of 3D shapes and indoor scenes.

13.
Probabilistic generative models are an important approach to knowledge representation, but probabilistic inference problems such as computing the likelihood function in these models are generally intractable. Variational inference is a major deterministic approximate inference method with fast convergence and a solid theoretical foundation; with the arrival of the big-data era, variational inference for probabilistic generative models has attracted great attention from both industry and academia. This paper surveys variational inference frameworks for probabilistic generative models and recent progress. It first reviews the general variational inference framework and the procedure for learning generative-model parameters based on variational inference; then, for conditionally conjugate exponential-family distributions, it presents a variational inference framework with closed-form updates and its scalable stochastic extension; further, for general probability distributions, it presents the black-box variational inference framework based on stochastic gradients and briefly describes the implementation of several variational inference algorithms within that framework; finally, it analyzes structured variational inference, which enriches the variational distribution in various ways to improve inference accuracy and the consistency of the approximation. Future trends in variational inference for probabilistic generative models are also discussed.
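
As context for the variational inference frameworks surveyed above, the standard evidence lower bound (ELBO) decomposition is shown below; the notation ($x$ observed data, $z$ latent variables, $q_\phi$ variational distribution) is generic rather than taken from the paper.

```latex
\log p_\theta(x)
  = \underbrace{\mathbb{E}_{q_\phi(z)}\!\left[\log p_\theta(x,z) - \log q_\phi(z)\right]}_{\mathrm{ELBO}(\theta,\phi)}
  + \mathrm{KL}\!\left(q_\phi(z)\,\|\,p_\theta(z\mid x)\right)
  \;\ge\; \mathrm{ELBO}(\theta,\phi)
```

Since the KL term is non-negative, maximizing the ELBO over $\phi$ tightens the bound by pulling $q_\phi$ toward the true posterior, while maximizing it over $\theta$ approximately maximizes the log-likelihood.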

14.
We develop a method for the estimation of articulated pose, such as that of the human body or the human hand, from a single (monocular) image. Pose estimation is formulated as a statistical inference problem, where the goal is to find a posterior probability distribution over poses as well as a maximum a posteriori (MAP) estimate. The method combines two modeling approaches, one discriminative and the other generative. The discriminative model consists of a set of mapping functions that are constructed automatically from a labeled training set of body poses and their respective image features. The discriminative formulation allows for modeling ambiguous, one-to-many mappings (through the use of multi-modal distributions) that may yield multiple valid articulated pose hypotheses from a single image. The generative model is defined in terms of a computer graphics rendering of poses. While the generative model offers an accurate way to relate observed (image features) and hidden (body pose) random variables, it is difficult to use it directly in pose estimation, since inference is computationally intractable. In contrast, inference with the discriminative model is tractable, but considerably less accurate for the problem of interest. A combined discriminative/generative formulation is derived that leverages the complementary strengths of both models in a principled framework for articulated pose inference. Two efficient MAP pose estimation algorithms are derived from this formulation; the first is deterministic and the second non-deterministic. Performance of the framework is quantitatively evaluated in estimating articulated pose of both the human hand and human body. Most of this work was done while the first author was with Boston University.

15.
Pre-trained language models such as BERT and XLNet, trained on large-scale unsupervised corpora, are usually trained with language-modeling objectives based on a cross-entropy loss, yet they are evaluated by perplexity or by performance on downstream natural language processing tasks, leading to a mismatch between the training loss and the evaluation metrics. To address this, this paper proposes RL-XLNet (Reinforcement Learning-XLNet), an adversarially pre-trained language model combined with reinforcement learning. RL-XLNet adversarially trains a generator that predicts selected words from their context, together with a discriminator that judges whether the generator's predictions are correct. Through the mutual reinforcement of generator and discriminator, the generator's semantic understanding is strengthened and the model's learning ability improved. Because the sampling step in text generation prevents the final loss from being back-propagated directly, the generator is trained with reinforcement learning. Experiments on the General Language Understanding Evaluation benchmark (GLUE) and the Stanford question answering task (SQuAD 1.1) show that, compared with existing BERT and XLNet methods, RL-XLNet has clear advantages on multiple tasks: it ranks first on six GLUE tasks, second on one, and third on another, and ranks first in F1 on SQuAD 1.1. Given limited computing resources, the model trained on a small corpus also reaches state-of-the-art performance in the field.
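
As a rough illustration of the reinforcement-learning workaround mentioned above (sampling blocks direct back-propagation of the loss), the sketch below shows a generic REINFORCE-style generator loss with a mean baseline; the reward definition and training loop are assumptions, not the RL-XLNet implementation.

```python
# Generic REINFORCE-style surrogate loss; a sketch, not the paper's training code.
import torch

def reinforce_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """log_probs: log-probabilities of the sampled tokens; rewards: e.g. discriminator scores."""
    baseline = rewards.mean()                               # simple variance-reduction baseline
    return -((rewards - baseline).detach() * log_probs).mean()
```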

16.
Automatic image annotation is an important and challenging problem in pattern recognition and computer vision. To address the low data utilization and the sensitivity to positive/negative sample imbalance of existing models, we propose a new stacked automatic image annotation model that combines a discriminative model with a generative model. In the first layer, a discriminative model assigns topic labels to the unannotated image and retrieves a corresponding set of related images; in the second layer, a proposed keyword-oriented method links images to keywords, and a proposed iterative algorithm expands both the semantic keywords and the related images; finally, a generative model together with the expanded set of related images produces the detailed annotation of the unannotated image. The model combines the strengths of discriminative and generative models and obtains better annotation results from fewer related training images. Experiments on the Corel 5K image database confirm its effectiveness.

17.
In recent years, with the development of artificial intelligence and the availability of more data, data-driven end-to-end chit-chat bots have advanced rapidly and attracted broad attention from academia and industry. However, there is currently no standard automatic evaluation method for chatbots, even though automatic evaluation is essential for assessing dialogue quality and for rapid iteration of chatbot systems. This paper surveys automatic evaluation methods for chatbots based on generative models. It first introduces automatic...

18.
Generative text-to-image models (as exemplified by DALL-E, MidJourney, and Stable Diffusion) have recently made enormous technological leaps, demonstrating impressive results in many graphical domains—from logo design to digital painting to photographic composition. However, the quality of these results has led to existential crises in some fields of art, leading to questions about the role of human agency in the production of meaning in a graphical context. Such issues are central to visualization, and while these generative models have yet to be widely applied in visualization, it seems only a matter of time until their integration is manifest. Seeking to circumvent similar ponderous dilemmas, we attempt to understand the roles that generative models might play across visualization. We do so by constructing a framework that characterizes what these technologies offer at various stages of the visualization workflow, augmented and analyzed through semi-structured interviews with 21 experts from related domains. Through this work, we map the space of opportunities and risks that might arise in this intersection, identifying doomsday prophecies and delicious low-hanging fruits that are ripe for research.

19.
In speech pronunciation systems, the pronunciation of polyphonic characters has long been a difficult problem. This paper studies separable words (离合词), a type of long-distance constraint word in which polyphonic characters occur, and uses them to constrain the pronunciation of those characters. Given the characteristics of separable words, the concept of trigger pairs is introduced, and mutual information is used to measure word relatedness and thereby disambiguate the pronunciation of polyphonic characters. Experimental results show that modeling polyphonic characters at the level of word constraints works well for the pronunciation of polyphonic separable words.
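
The sketch below illustrates one way to score trigger pairs by pointwise mutual information from co-occurrence counts; the counting unit (sentence-level co-occurrence) and the data format are assumptions, not details taken from the paper.

```python
# Illustrative pointwise mutual information over word pairs co-occurring in a sentence.
import math
from collections import Counter
from itertools import combinations

def trigger_pair_pmi(sentences):
    """sentences: list of token lists; returns PMI(w1, w2) for co-occurring word pairs."""
    word_counts, pair_counts = Counter(), Counter()
    n_sent = len(sentences)
    for words in sentences:
        uniq = set(words)
        word_counts.update(uniq)
        pair_counts.update(frozenset(p) for p in combinations(sorted(uniq), 2))
    pmi = {}
    for pair, c_xy in pair_counts.items():
        w1, w2 = tuple(pair)
        p_xy = c_xy / n_sent
        p_x, p_y = word_counts[w1] / n_sent, word_counts[w2] / n_sent
        pmi[(w1, w2)] = math.log(p_xy / (p_x * p_y))
    return pmi
```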

20.
In polyphonic sound event detection, the signals of different events overlap, and global features extracted from the mixture cannot represent each individual event well, so detection performance drops sharply as the number of events grows or the acoustic environment changes. Existing methods have not yet considered the effect of environmental change on detection performance. This paper therefore proposes an Environment-Assisted Multi-Task (EAMT) sound event detection model based on multi-task learning. The model has two core components, a scene classifier and an event detector: the scene classifier learns environmental context features, which are fused with the sound event features as extra information for event detection, and multi-task learning is used to assist event detection, improving robustness to environmental change and multi-target detection performance. Using Freesound, a widely used public dataset in sound event detection, and the standard F1 score, the proposed model is compared with a Deep Neural Network (DNN) baseline and a mainstream Convolutional Recurrent Neural Network (CRNN) model in three groups of experiments. The results show that: 1) compared with single-task models, the multi-task EAMT model improves both scene classification and event detection, and introducing the environmental context features further improves event detection; 2) EAMT is more robust to environmental change, with an event-detection F1 score 2%-5% higher than the other models when the environment changes; 3) as the number of target sound events increases, EAMT remains strong, with F1 gains of 2%-10% over the other models.
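
A minimal sketch of the multi-task structure described above (shared encoder, scene-classification head, event-detection head fed with the scene context) is given below; the layer sizes, feature format, and loss weighting are illustrative assumptions, not the EAMT architecture.

```python
# Illustrative multi-task model: scene classification assists multi-label event detection.
import torch
import torch.nn as nn

class EnvAssistedMultiTask(nn.Module):
    def __init__(self, n_features=64, n_scenes=10, n_events=20, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.scene_head = nn.Linear(hidden, n_scenes)              # auxiliary scene task
        self.event_head = nn.Linear(hidden + n_scenes, n_events)   # fuses scene context

    def forward(self, x):
        h = self.encoder(x)
        scene_logits = self.scene_head(h)
        context = torch.softmax(scene_logits, dim=-1)              # environmental context features
        event_logits = self.event_head(torch.cat([h, context], dim=-1))
        return scene_logits, event_logits

def multitask_loss(scene_logits, event_logits, scene_y, event_y, alpha=0.5):
    """Joint loss: cross-entropy for the scene, multi-label BCE for the events."""
    return (alpha * nn.functional.cross_entropy(scene_logits, scene_y)
            + nn.functional.binary_cross_entropy_with_logits(event_logits, event_y))
```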
