Similar Documents
Found 20 similar documents (search took 15 ms)
1.
ABSTRACT

This article explores an innovative approach to delivering information about new agricultural technology: combining a versatile and potentially lower-cost method of developing animated videos with a low-cost method of sharing them on mobile devices (i.e. mobile phones). It describes a randomized controlled field experiment conducted in Burkina Faso to evaluate the effectiveness of animated videos shown on mobile phones, compared with the traditional extension method (live demonstration), in inducing learning and adoption of two post-harvest technologies among low-literate farmers. Results suggest that video-based training was as effective as the traditional method in inducing learning and understanding. For technologies that farmers were already aware of, animated video shown on a mobile phone was also as effective as live demonstration in inducing adoption. In transferring new technologies, however, the traditional method was more effective in inducing adoption at p < .10, though not at p < .05. The potential role of mobile phone-based videos within the agricultural extension system is discussed.

2.
ABSTRACT

This study compared the efficacy of linguistically and dialectically localized animated educational videos (LAV) against traditional learning extension (TLE) presentations for gains in knowledge of agricultural and healthcare topics within a rural population in Benin. While both approaches produced learning gains, LAV resulted in significantly higher test scores and more detailed knowledge retention. A key contribution of this research, moreover, involves the use of mobile phone technologies to further disseminate educational information: a majority of participants expressed both a preference for the LAV teaching approach and a heightened interest in digitally sharing the information from the educational animations with others. Because the animations are, by design, readily accessible to mobile phones via Africa's rapidly expanding digital infrastructure, this interest in sharing transforms each study participant into a potential learning node and point of dissemination for the videos' material.

3.
Objective: Video highlight extraction is an active research topic in video content annotation and content-based video retrieval. Existing methods extract highlights mainly from low-level video features and ignore the influence of user interest, so the extracted segments may not match user expectations. On the other hand, semantic modeling of user interest requires a large number of annotated training videos to obtain a reasonably robust semantic classifier, and annotating that many samples is time-consuming and laborious. Since the Internet offers abundant, easily acquired images, transferring knowledge from web images into the semantic model of video segments can greatly reduce video-annotation effort. We therefore propose a framework for extracting user-interest video highlights with the help of web images. Method: User-interest semantics are modeled from a large collection of web images. Because knowledge harvested from the web is diverse and noisy, using it indiscriminately degrades extraction quality; we therefore group images by semantic similarity, calling a set of images that are semantically similar but retrieved with different keywords a near-synonym image group. On this basis we propose a joint group-weight model over near-synonym groups that assigns each image group a weight according to its semantic relevance to the video. First, images relevant to the user's interest are retrieved from a web image search engine as the knowledge source for highlight extraction; then, by jointly learning the group weights of the near-synonym image groups, the knowledge learned from the images is transferred to video; finally, the semantic model learned from the image set is applied to the candidate segments to extract highlights. Results: The method was validated on videos from the CCV database and compared with several existing key-frame extraction algorithms. Its average precision reaches 46.54%, a 21.6% improvement over the other algorithms, with no increase in running time. In addition, to examine the effect of the different balance parameters in the optimization and further verify the method, each regularization term was removed in turn; accuracy dropped markedly whenever any term was removed, confirming the effectiveness of the proposed joint group-weight model for extracting segments of user interest. Conclusion: We present a user-interest-driven video highlight extraction method that extracts, for different users, the video segments each is interested in.

4.
Objective: Existing feature-trajectory video stabilization algorithms cannot simultaneously satisfy requirements on trajectory length, robustness, and trajectory utilization, so their results are prone to distortion or local instability. To address this, we propose a feature-trajectory stabilization algorithm based on trifocal-tensor reprojection. Method: Long virtual trajectories are constructed with the trifocal tensor; smoothing the virtual trajectories defines stabilized views; real feature points are then reprojected into the stabilized views via the trifocal tensor, thereby smoothing the real feature trajectories; finally, stable frames are generated by mesh warping. Results: The algorithm was tested on a large set of videos of different types and compared with representative feature-trajectory stabilization algorithms and commercial software, including trajectory-growth-based stabilization, epipolar point-transfer stabilization, and Warp Stabilizer. Our algorithm has low trajectory-length requirements, high trajectory utilization, and good robustness: it outperforms trajectory-growth stabilization on 92% of severely shaking videos; it outperforms Warp Stabilizer on 93% of videos lacking long trajectories and 71.4% of videos with rolling-shutter distortion; and, compared with epipolar point-transfer stabilization, it suffers fewer degenerate cases, avoiding the failures caused by an intermittently static or purely rotating camera. Conclusion: The algorithm places few restrictions on camera motion and scene depth. It handles common stabilization problems such as little parallax, non-planar scene structure, and rolling-shutter distortion, and still performs well when long trajectories are scarce (camera panning, motion blur, severe shake), although its runtime performance remains a weakness.

5.
An insecticide resistance management (IRM) programme was launched in 26 cotton-growing districts of India in 2002 to rationalize the use of pesticides. The IRM strategy is presented within a full Integrated Pest Management (IPM) context, on the premise that unless full-fledged efforts are made to understand all aspects of the resistance phenomenon, attempts to implement IPM at field level will not bear results. Unlike earlier IPM programmes, this programme is implemented directly by scientists of the state agricultural universities, so information flows directly from the research subsystem to farmers. The extension methodology differs from the IPM farmer-field-school model, but much the same information is provided in didactic form, through active participation of the farmers throughout the cotton-growing season, by deploying scouts in villages. The knowledge gain of the farmers covered by the IRM programme was measured with a before/after quasi-experimental research design. The overall knowledge gain was significant for identification of insect pests and natural enemies of the cotton crop, proper use of insecticides, and timely sowing of the crop, but farmers did not reach the desired level of knowledge of the other cultural practices that suppress pest buildup. In the absence of effective bio-agents, IPM integration is limited to cultural practices, thresholds, agro-ecosystem analysis, and use of insecticides according to good agricultural practices.
Authors: Rajinder Peshin, Rajinder Kalra, A. K. Dhawan

6.
Abstract

The standing crop of herbaceous biomass produced during the 2–4 month summer rainy season by the annual grasses of the Sahel zone indicates resource availability for livestock over the following 9-month dry season. Combined use of NOAA advanced very high resolution radiometer (AVHRR) local area coverage (LAC) satellite data and biomass data, obtained through vegetation sampling of 25–100 km² areas, allowed the development of a method for biomass assessment in Niger. Vegetation sampling involved both visual estimates and clipped plots (double sampling). The relationship between time-integrated normalized difference vegetation index (NDVI) statistics derived from NOAA AVHRR LAC data (dependent variable) and total herbaceous biomass (independent variable) was obtained through regression analysis, and an inverse prediction was then used to estimate biomass from the satellite data. Biomass maps and statistics of the grasslands were produced for the end of each rainy season: 1986, 1987 and 1988. This information is being used for planning purposes by the pastoral resource managers of the Government of Niger.
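The regress-then-invert step described above can be sketched as follows. The calibration numbers here are hypothetical, purely for illustration: time-integrated NDVI is regressed on ground-sampled biomass, and the fitted line is then inverted to predict biomass from new NDVI values.

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical calibration pairs: ground-sampled biomass (kg/ha)
# vs. time-integrated NDVI for the same sites.
biomass = [300, 800, 1500, 2200, 3000]
indvi = [0.08, 0.15, 0.24, 0.33, 0.43]

# Regression direction as in the abstract: iNDVI (dependent)
# on biomass (independent).
a, b = fit_line(biomass, indvi)

def biomass_from_ndvi(ndvi):
    """Inverse prediction: solve iNDVI = a + b*biomass for biomass."""
    return (ndvi - a) / b
```

With the fitted line in hand, a satellite-derived iNDVI value for an unsampled area maps directly to an estimated standing biomass, which is how per-season biomass maps can be produced.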

7.
Objective: Infrared thermal video suffers from low contrast, blurred imaging, and difficulty in observing fine detail. We propose a Eulerian method for magnifying subtle changes in infrared thermal video: small colour and motion changes are amplified so that variations the naked eye cannot perceive are displayed clearly. Method: Each frame is first spatially decomposed with a contrast-pyramid algorithm; the images at each scale are then temporally filtered to select the frequency band of interest, which is linearly amplified; the amplified signals are reconstructed; finally, the reconstructed frames are denoised, yielding an infrared video with the subtle changes magnified. Results: Several infrared thermal videos were captured for colour- and motion-magnification experiments. For colour magnification of a face in profile, signals whose pixel values vary within 0.75–1 Hz were filtered and amplified, producing a video with the pixel-value variation magnified 100 times; for motion magnification of a guitar string, signals varying within 100–120 Hz were filtered and amplified, producing a video with the string's motion amplitude magnified. The results show that the selected frequency bands are effectively enhanced. Conclusion: The method magnifies subtle changes in infrared video that are otherwise unobservable and renders them clearly, with broad military and civilian applications.
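The select-amplify-add-back temporal step can be illustrated on a single pixel's intensity trace. This is a minimal frequency-domain sketch with synthetic data, not the paper's contrast-pyramid implementation; the band limits and amplification factor are arbitrary.

```python
import numpy as np

def magnify_band(trace, fps, f_lo, f_hi, alpha):
    """Band-pass one pixel's intensity trace in the frequency domain
    and add the amplified band back (Eulerian-style magnification)."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    band = np.fft.irfft(np.where(keep, spec, 0.0), n=len(trace))
    return trace + alpha * band

# Synthetic trace: a 1 Hz oscillation of amplitude 0.01 riding on a
# constant level, sampled at 30 frames/s for 10 s.
t = np.arange(300) / 30.0
trace = 0.5 + 0.01 * np.sin(2 * np.pi * 1.0 * t)

# Select the band around 1 Hz and amplify it 50x; the invisible
# oscillation becomes a large, visible swing.
out = magnify_band(trace, fps=30, f_lo=0.75, f_hi=1.25, alpha=50)
```

In the full method this operation runs per pixel and per pyramid level before reconstruction and denoising.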

8.
We have developed an easy-to-use and cost-effective system for constructing textured 3D animated face models from video with minimal user interaction. This is a particularly challenging task for faces because they lack prominent textures. We build a robust system by following a model-based approach: we make full use of generic knowledge of faces in head-motion determination, head tracking, model fitting, and multiple-view bundle adjustment. The system first takes, with an ordinary video camera, images of the face of a person sitting in front of the camera and turning their head from one side to the other. After five manual clicks on two images to indicate the positions of the eye corners, nose tip, and mouth corners, the system automatically generates a realistic-looking 3D human head model that can be animated immediately (different poses, facial expressions, and talking). A user with a PC and a video camera can generate their own face model in a few minutes, import it into their favorite game, and see themselves and their friends take part in the game they are playing. We have demonstrated the system live on a laptop computer at many events and constructed face models for hundreds of people; it works robustly under various environment settings.

9.
Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing, and display an off-line process, i.e., the time between initial capture and final display is far from real-time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm that converts the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm generalizes the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. Special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. It can be implemented efficiently in massively parallel hardware, allowing for interactive computation. The resulting depth quality, as well as the computational performance, compares favorably to other state-of-the-art light-field-to-depth approaches, as well as stereo matching techniques. A further outcome of this work is a data set of light field videos captured with multiple variants of sparse camera arrays.

10.
Ergonomics, 2012, 55(12): 1730–1738
Abstract

Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was −5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL remained unaffected when the DC error was less than 5%, and that a DC error of less than 10% changes HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates.

Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.

11.
Objective: To address the low brightness, low contrast, and heavy noise of low-light video, we propose a fast low-light video enhancement algorithm that combines Retinex theory with the dark channel prior. Method: Since enhancement amplifies noise, the video is denoised before enhancement. A combined denoising algorithm that exploits the complementary strengths of guided filtering and median filtering is proposed and applied in the YCbCr space. The luminance component is then extracted to estimate the illumination transmission map, and the atmospheric scattering model is used to restore the low-light video. Finally, inter-frame processing is incorporated, adding scene detection, edge compensation, and inter-frame compensation. Results: To verify the practical effect and validity of the algorithm, enhancement experiments on low-light video were carried out, comparing it with a Retinex-based enhancement algorithm and a dehazing-based enhancement algorithm. Our algorithm effectively raises brightness and contrast, reduces noise, enhances detail, and alleviates flicker, improving overall video quality. Its processing speed is a clear advantage, nearly ten times that of the other two algorithms. Conclusion: The algorithm preserves the continuity of inter-frame motion and improves processing speed while maintaining enhancement quality, handling details and edge contours finely with results the compared algorithms do not achieve. It is suitable for video surveillance, target tracking, intelligent transportation, and many other fields, and supports real-time enhancement.

12.
Objective: Human action recognition is an important research topic in computer vision. Complex backgrounds, camera shake, and other factors make recognizing human actions in natural-environment videos difficult. To address these problems, we propose an action-recognition algorithm based on salient robust trajectories. Method: The algorithm tracks salient feature points across multiple spatial scales using dense optical flow and describes the salient trajectories with histograms of oriented gradients (HOG), histograms of optical flow (HOF), and motion boundary histograms (MBH). To effectively remove the influence of camera motion, a camera-motion estimation technique based on adaptive background segmentation strengthens the robustness of the salient trajectories. For each feature type, the Fisher Vector model then encodes a video as a Fisher vector, and videos are classified with a linear support vector machine. Results: On four public datasets, the salient-trajectory algorithm scores on average 1% higher than the dense-trajectory algorithm, and adding camera-motion removal raises the salient robust trajectory algorithm a further 2% on average. On the four datasets (Hollywood2, YouTube, Olympic Sports, and UCF50) it achieves 65.8%, 91.6%, 93.6%, and 92.1%, exceeding the previous best results by 1.5%, 2.6%, 2.5%, and 0.9% respectively. Conclusion: The experimental results show that the algorithm effectively recognizes human actions in natural-environment videos with low time complexity.

13.
14.
Abstract

This paper describes a new method for knowledge elicitation that may contribute to effective expertise transfer from human experts to knowledge-based systems. The method was applied to knowledge transfer in an aerospace design context. Knowledge was transferred directly from an expert designer to both expert and novice “receivers” of information. Transfer occurred in a natural way, without intervention from a knowledge engineer. To evaluate the process, the information receivers were required to recall the transmitted knowledge after a seven-week delay. Results suggest that the method can be effective for expertise transfer and can indicate desirable characteristics for knowledge-based systems that aim to adapt to users' differing levels of competence.

15.
Objective: Stereoscopic video is increasingly popular for the immersive realism it provides, and visual saliency detection can automatically predict, locate, and mine important visual information, helping machines filter massive amounts of multimedia data effectively. To improve salient-region detection in stereoscopic video, we propose a stereoscopic video saliency model that fuses multi-dimensional binocular perception cues. Method: Saliency is computed along three dimensions of the stereoscopic video: spatial, depth, and temporal. First, a 2D image saliency map is computed from spatial image features with a Bayesian model; next, a depth saliency map is obtained from binocular perception features; then the Lucas-Kanade optical flow method computes inter-frame local motion features, giving a temporal saliency map; finally, the three maps are fused with a method based on global-regional difference, yielding the final salient-region distribution for the stereoscopic video. Results: Experiments on stereoscopic video sequences of different types show that the model achieves 80% precision and 72% recall while keeping computational complexity relatively low, outperforming existing saliency-detection models. Conclusion: The model effectively locates salient regions in stereoscopic video and can be applied to stereoscopic video/image coding, stereoscopic video/image quality assessment, and related fields.

16.
In this paper, we define time series query filtering, the problem of monitoring the streaming time series for a set of predefined patterns. This problem is of great practical importance given the massive volume of streaming time series available through sensors, medical patient records, financial indices and space telemetry. Since the data may arrive at a high rate and the number of predefined patterns can be relatively large, it may be impossible for the comparison algorithm to keep up. We propose a novel technique that exploits the commonality among the predefined patterns to allow monitoring at higher bandwidths, while maintaining a guarantee of no false dismissals. Our approach is based on the widely used envelope-based lower-bounding technique. As we will demonstrate on extensive experiments in diverse domains, our approach achieves tremendous improvements in performance in the offline case, and significant improvements in the fastest possible arrival rate of the data stream that can be processed with guaranteed no false dismissals. As a further demonstration of the utility of our approach, we demonstrate that it can make semisupervised learning of time series classifiers tractable. Li Wei is a Ph.D. candidate in the Department of Computer Science & Engineering at the University of California, Riverside. She received her B.S. and M.S. degrees from Fudan University, China. Her research interests include data mining and information retrieval. Eamonn Keogh is an Assistant Professor of computer science at the University of California, Riverside. His research interests include data mining, machine learning and information retrieval. Several of his papers have won best paper awards, including papers at SIGKDD and SIGMOD. Dr. Keogh is the recipient of a 5-year NSF Career Award for “Efficient Discovery of Previously Unknown Patterns and Relationships in Massive Time Series Databases”. 
Helga Van Herle is an Assistant Clinical Professor of medicine at the Division of Cardiology of the Geffen School of Medicine at UCLA. She received her M.D. from UCLA in 1993; completed her residency in internal medicine at the New York Hospital (Cornell University; 1993–1996) and her cardiology fellowship at UCLA (1997–2001). Dr. Van Herle holds an M.Sc. in bioengineering from Columbia University (1987) and a B.Sc. in chemical engineering from UCLA (1985). Agenor Mafra-Neto, Ph.D., is the CEO of ISCA Technologies, Inc., in California and the founder of ISCA Technologies, LTDA, in Brazil. His research interests include the analysis of insect behavior and communication systems, the manipulation of insect behavior, and the automation of pest monitoring and pest control. Dr. Mafra-Neto is currently coordinating the deployment of area-wide smart sensor and effector networks to micromanage agricultural and public health pests in the field in an automatic fashion. Russell J. Abbott is a Professor of computer science at California State University, Los Angeles, and a member of the staff at the Aerospace Corporation, El Segundo, CA. His primary interests are in the field of complex systems. He is currently organizing a workshop to bring together people working in the fields of complex systems and systems engineering.
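The envelope-based lower-bounding technique the abstract builds on is commonly realized as LB_Keogh; here is a minimal sketch (illustrative, not the paper's optimized multi-pattern version). Because the bound can never exceed the true distance, a streaming monitor can discard most candidates without a full comparison while guaranteeing no false dismissals.

```python
def lb_keogh(query, candidate, r):
    """Envelope-based lower bound: build upper/lower envelopes around
    the candidate with warping reach r, then accumulate the query's
    squared excursions outside the envelope."""
    n = len(candidate)
    total = 0.0
    for i, q in enumerate(query):
        window = candidate[max(0, i - r):min(n, i + r + 1)]
        hi, lo = max(window), min(window)
        if q > hi:
            total += (q - hi) ** 2   # above the upper envelope
        elif q < lo:
            total += (q - lo) ** 2   # below the lower envelope
    return total
```

With r = 0 the envelope degenerates to the candidate itself and the bound equals the squared Euclidean distance; larger r loosens the envelope, trading bound tightness for tolerance to temporal warping.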

17.
Objective: Video action recognition and understanding is a fundamental technology for intelligent surveillance, human-computer interaction, virtual reality, and many other applications. Because of the complexity of video's spatio-temporal structure and the diversity of video content, action recognition still faces the difficulties of efficiently extracting a temporal representation of the video and of efficiently extracting features and modeling them along the time axis. To address these difficulties, we propose a multi-feature-fusion action-recognition model. Method: First, high- and low-frequency information is extracted from the video, and the proposed two-frame and three-frame fusion algorithms compress the raw data while retaining most of the original video information, augmenting the original dataset and better expressing the original action information. Second, a two-stream feature-extraction network is designed: one stream feeds the fused data forward through the network to extract detail features, while the other feeds the fused data in reverse to extract holistic features; the two streams are then fused with weights. Each stream uses the generic video descriptor 3D ConvNets (3D convolutional neural networks). Then a BiConvLSTM (bidirectional convolutional long short-term memory) network further extracts local information from the fused features and models them along the time axis, addressing the relatively long gaps between some actions in video sequences. Finally, a Softmax maximum-likelihood function classifies the actions. Results: On the public action-recognition datasets UCF101 and HMDB51, overall testing and analysis with 5-fold cross-validation, followed by per-class comparison, shows average accuracies of 96.47% and 80.03% respectively. Conclusion: Compared with current mainstream action-recognition models, the proposed multi-feature model achieves the highest recognition accuracy while remaining general, compact, simple, and efficient.

18.
Much research on human action recognition has been oriented toward performance gains on lab-collected datasets. Yet real-world videos are more diverse, with more complicated actions, and often only a few of them are precisely labeled. Recognizing actions from these videos is therefore a tough mission. The paucity of labeled real-world videos motivates us to “borrow” strength from other resources. Specifically, since many lab datasets are available, we propose to harness lab datasets to facilitate action recognition in real-world videos, given that the lab and real-world datasets are related. As their action categories are usually inconsistent, we design a multi-task learning framework to jointly optimize the classifiers for both sides. The general Schatten \(p\)-norm is exerted on the two classifiers to explore the shared knowledge between them. In this way, our framework can mine the knowledge shared between two datasets even when the two have different action categories, which is a major virtue of our method. The shared knowledge is further used to improve action recognition in the real-world videos. Extensive experiments on real-world datasets show promising results.
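The Schatten \(p\)-norm regularizer mentioned above is simply the \(\ell_p\) norm of a matrix's singular values; a minimal sketch (this helper is illustrative, not the paper's optimization procedure):

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm of a matrix: the l_p norm of its singular values.
    p = 1 gives the nuclear norm (sum of singular values), a common
    convex surrogate for rank used to encourage low-rank shared
    structure when coupling two classifiers."""
    s = np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))
```

In a multi-task setting, penalizing the Schatten norm of the stacked classifier weight matrix pushes the per-dataset classifiers toward a common low-dimensional subspace, which is how knowledge can be shared across inconsistent label sets.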

19.
To accurately and effectively identify the categories and locations of crop diseases and pests, we built a crop disease-and-pest image recognition app that provides intelligent information services to farmers, researchers, and managers. The system was developed on the Android platform. Deep network models such as Darknet and YOLO were trained and tested on a large collection of disease-and-pest images, and techniques including batch normalization, dimension clustering, and curriculum learning were used to optimize the model, achieving recognition for 181 crop...

20.
Multi-channel Haar-like features for multiple-instance-learning object tracking
Objective: We propose a multiple-instance-learning tracking algorithm based on multi-channel Haar-like features, overcoming two shortcomings of multiple-instance tracking on colour video: little of the available information is used, and weak features cannot be replaced. Method: First, because the original multiple-instance tracker uses single-channel information or simply converts colour frames to grayscale for tracking, losing part of the feature information, we generate Haar-like features with fully random position, size, and channel over the three RGB channels to represent the target better. Second, because the Haar-like weak features in multiple-instance tracking are fixed and cannot reflect changes in the target itself or in external conditions, we replace, in real time during weak-classifier selection, a portion of the least discriminative Haar-like features with newly generated random ones, injecting new information into the target model to adapt to dynamic changes in target appearance. Results: Experiments on eight challenging colour video sequences show that, compared with the original multiple-instance tracker, the weighted multiple-instance tracker, and the distribution-field tracker, the proposed method not only achieves the smallest average centre error but also an average tracking precision 52.85%, 34.75%, and 5.71% higher than the three algorithms respectively, the best performance of the four. Conclusion: Generating Haar-like features randomly across the three RGB channels and replacing the least discriminative weak features in real time markedly improves the original multiple-instance tracker on colour video and broadens its applicability.
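The "random position, size and channel" sampling, and the integral-image machinery that makes Haar-like rectangle sums cheap, can be sketched as follows (a minimal illustration under assumed image dimensions, not the authors' implementation):

```python
import random

def integral_image(channel):
    """Summed-area table for one colour channel (2-D list of numbers)."""
    h, w = len(channel), len(channel[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (channel[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle at (x, y), in O(1)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def random_haar_feature(img_w, img_h, n_channels=3, rng=random):
    """Draw one Haar-like rectangle with random position, size AND
    colour channel, mirroring the multi-channel sampling described
    in the abstract."""
    w = rng.randint(1, img_w // 2)
    h = rng.randint(1, img_h // 2)
    x = rng.randint(0, img_w - w)
    y = rng.randint(0, img_h - h)
    c = rng.randint(0, n_channels - 1)
    return (c, x, y, w, h)

# Demo: rectangle sums on a uniform 8x8 channel, and one random feature
# drawn for a hypothetical 32x24 search window.
ones = [[1] * 8 for _ in range(8)]
ii = integral_image(ones)
feat = random_haar_feature(32, 24)
```

A weak classifier then thresholds such rectangle responses computed on the integral image of the feature's own channel; replacing the weakest features amounts to discarding some tuples and drawing fresh ones.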
