Similar Documents (20 results)
1.
In plant phenotyping, there is a demand for high-throughput, non-destructive systems that can accurately analyse various plant traits by measuring features such as plant volume, leaf area, and stem length. Existing vision-based systems either focus on speed using 2D imaging, which is consequently inaccurate, or on accuracy using time-consuming 3D methods. In this paper, we present a computer-vision system for seedling phenotyping that combines the best of both approaches by utilizing a fast three-dimensional (3D) reconstruction method. We developed image-processing methods for the identification and segmentation of plant organs (stem and leaf) from the 3D plant model. Measurements of plant features such as plant volume, leaf area, and stem length are then estimated from these plant segments. We evaluate the accuracy of our system by comparing its measurements with ground-truth measurements obtained destructively by hand. The results indicate that the proposed system is very promising.
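The abstract does not spell out how traits are computed from the reconstruction. As a minimal, hypothetical sketch (the array names `vertices` and `faces` are illustrative, not the authors' implementation), leaf area could be estimated by summing triangle areas once a segmented leaf has been meshed:

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Sum of triangle areas of a leaf mesh.

    vertices : (N, 3) array of 3D points from the reconstruction.
    faces    : (M, 3) array of vertex indices forming triangles.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Area of each triangle is half the norm of the cross product of two edges.
    cross = np.cross(v1 - v0, v2 - v0)
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

# Example: a single right triangle with legs of 1 cm -> area 0.5 cm^2.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
print(mesh_surface_area(verts, tris))  # 0.5
```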

2.
The three-dimensional reconstruction of plants using computer-vision methods is a promising option for non-destructive metrology in plant phenotyping. However, diversity in plant form and size, different surrounding environments (laboratory, greenhouse or field), and occlusions pose challenging issues. We propose the use of state-of-the-art visual odometry methods to accurately recover camera pose and preliminary three-dimensional models at image-acquisition time. Specimens of maize and sunflower were imaged using a single free-moving camera and a software tool with visual odometry capabilities. Multiple-view stereo was employed to produce dense point clouds sampling the plant surfaces. The resulting three-dimensional models are accurate snapshots of the shoot state, and plant measurements can be recovered in a non-invasive way. The results show that a free-moving, low-resolution camera is able to handle occlusions and variations in plant size and form, allowing the reconstruction of different species and of specimens at different stages of development. It is also a cheap and flexible method, suitable for different phenotyping needs. Plant traits were computed from the point clouds and compared to manually measured references, showing millimeter accuracy. All data, including images, camera calibration, poses, and three-dimensional models, are publicly available.
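As one small illustration of recovering a trait from such a dense point cloud (a sketch assuming a z-up cloud in metres that has already been cleaned of its surroundings, not the authors' pipeline), plant height can be taken as the spread between robust quantiles of the vertical coordinate:

```python
import numpy as np

def plant_height(points, ground_quantile=0.02, top_quantile=0.998):
    """Rough plant height from a dense point cloud (metres).

    points : (N, 3) array with z pointing up.  Robust quantiles are used
             instead of min/max to suppress stray outlier points left by
             multiple-view stereo.
    """
    z = points[:, 2]
    ground = np.quantile(z, ground_quantile)   # approximate soil level
    top = np.quantile(z, top_quantile)         # approximate shoot tip
    return top - ground
```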

3.
Under field conditions, the effects of two nitrogen levels on plant growth, leaf senescence, and yield were studied in erect-panicle and curved-panicle rice varieties. The results showed that under high nitrogen, the stem thickness and leaf traits of both varieties were better than under low nitrogen. Increased nitrogen application raised the chlorophyll content of the flag leaf, prolonged the period of slow chlorophyll decline, and enabled the plants to maintain a larger green leaf area during the late growth stages; it also increased the total nitrogen content of the flag leaf, extended the duration of the high photosynthetic rate, and increased dry-matter accumulation and grain yield. At the same nitrogen level, the erect-panicle variety was slightly superior to the curved-panicle variety in stem thickness, flag-leaf vigor, grain-filling rate, 1000-grain weight, biological yield, and grain yield.

4.
5.
Accurate steering through crop rows that avoids crop damage is one of the most important tasks for agricultural robots used in field operations such as monitoring, mechanical weeding, or spraying. In practice, varying soil conditions can result in off-track navigation due to unknown traction coefficients, which can cause crop damage. To address this problem, this paper presents the development, application, and experimental results of a real-time receding horizon estimation and control (RHEC) framework applied to a fully autonomous mobile robotic platform to increase its steering accuracy. Recent advances in cheap and fast microprocessors, as well as in solution methods for nonlinear optimization problems, have made nonlinear receding horizon control (RHC) and receding horizon estimation (RHE) methods suitable for field robots that require high-frequency (millisecond) updates. A real-time RHEC framework is developed and applied to a fully autonomous mobile robotic platform designed by the authors for in-field phenotyping applications in sorghum fields. Nonlinear RHE is used to estimate constrained states and parameters, and nonlinear RHC is designed based on an adaptive system model that contains time-varying parameters. The capabilities of the real-time RHEC framework are verified experimentally, and the results show accurate tracking performance on a bumpy and wet soil field. The mean Euclidean error and mean computation time of the RHEC framework are 0.0423 m and 0.88 ms, respectively.
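The abstract does not include the optimization formulation. The following is a much-simplified receding-horizon control sketch (not the authors' RHEC framework) in which a kinematic robot model, with an assumed wheelbase and assumed weights, tracks the crop-row centreline y = 0 by re-optimising a short steering sequence at every step and applying only its first element:

```python
import numpy as np
from scipy.optimize import minimize

DT, N, V = 0.1, 10, 1.0           # step [s], horizon length, forward speed [m/s]
L = 1.2                           # assumed wheelbase [m]

def rollout(state, steer_seq):
    """Simulate the kinematic model over the horizon, returning (y, steer) pairs."""
    x, y, yaw = state
    traj = []
    for delta in steer_seq:
        x += V * np.cos(yaw) * DT
        y += V * np.sin(yaw) * DT
        yaw += V / L * np.tan(delta) * DT
        traj.append((y, delta))
    return traj

def cost(steer_seq, state):
    # Penalise lateral offset from the row and steering effort.
    return sum(10.0 * y**2 + 0.1 * d**2 for y, d in rollout(state, steer_seq))

def rhc_step(state, warm_start):
    bounds = [(-0.5, 0.5)] * N     # steering limits [rad]
    res = minimize(cost, warm_start, args=(state,), bounds=bounds, method="L-BFGS-B")
    return res.x[0], res.x         # apply first control, keep the rest as warm start

state, u_seq = np.array([0.0, 0.3, 0.0]), np.zeros(N)   # start 0.3 m off-track
for _ in range(50):
    delta, u_seq = rhc_step(state, u_seq)
    x, y, yaw = state
    state = np.array([x + V * np.cos(yaw) * DT,
                      y + V * np.sin(yaw) * DT,
                      yaw + V / L * np.tan(delta) * DT])
print(f"final lateral offset: {state[1]:.3f} m")
```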

6.
This work demonstrates how a high-throughput robotic machine vision system can quantify seedling development with high spatial and temporal resolution. The throughput that the system provides is high enough to match the needs of functional genomics research. Analyzing images of plant seedlings growing and responding to stimuli is a proven approach to finding the effects of an affected gene. However, with roughly 10⁴ genes in a typical plant genome, comprehensive studies will require high-throughput methodologies. To increase throughput without sacrificing spatial or temporal resolution, a 3-axis robotic gantry system utilizing visual servoing was developed. The gantry consists of direct-drive linear servo motors that can move the cameras at a speed of 1 m/s with an accuracy of 1 μm and a repeatability of 0.1 μm. Perpendicular to the optical axis of the cameras was a 1 m² sample fixture holding 36 Petri plates, in which 144 Arabidopsis thaliana seedlings (4 per plate) grew vertically along the surface of an agar gel. A probabilistic image analysis algorithm was used to locate the roots of seedlings, and a normalized gray-scale variance measure was used to achieve focus by servoing along the optical axis. Rotation of the sample holder induced a gravitropic bending response in the roots, which are approximately 45 μm wide and several millimeters in length. The custom hardware and software described here accurately quantified the gravitropic responses of the seedlings in parallel at approximately 3-min intervals over an 8-h period. Here we present an overview of our system and describe some of the capabilities needed, and challenges faced, in automating plant phenotype studies.
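The abstract mentions a normalized gray-scale variance measure used to autofocus by servoing along the optical axis. A minimal sketch of such a focus metric and a best-position search (an illustration, not the authors' exact implementation) might look like this:

```python
import numpy as np

def normalized_variance(gray):
    """Normalized gray-level variance focus measure: variance / mean.

    gray : 2D array of gray-scale pixel intensities (float).
    Higher values indicate sharper focus for this kind of metric.
    """
    mu = gray.mean()
    return gray.var() / (mu + 1e-9)

def best_focus_position(images, positions):
    """Pick the camera position along the optical axis whose image scores highest."""
    scores = [normalized_variance(img.astype(float)) for img in images]
    return positions[int(np.argmax(scores))]
```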

7.
Safety is undoubtedly the most fundamental requirement for any aerial robotic application. It is essential to equip aerial robots with omnidirectional perception coverage to ensure safe navigation in complex environments. In this paper, we present a lightweight and low-cost omnidirectional perception system, which consists of two ultrawide field-of-view (FOV) fisheye cameras and a low-cost inertial measurement unit (IMU). The goal of the system is to achieve spherical omnidirectional sensing coverage with the minimum sensor suite. The two fisheye cameras are rigidly mounted facing upward and downward and provide omnidirectional perception coverage: 360° FOV horizontally, 50° FOV vertically for stereo, and the whole sphere for monocular. We present a novel optimization-based dual-fisheye visual-inertial state estimator to provide highly accurate state estimation. Real-time omnidirectional three-dimensional (3D) mapping is combined with stereo-based depth perception for the horizontal direction and monocular depth perception for the upward and downward directions. The omnidirectional perception system is integrated with online trajectory planners to achieve closed-loop, fully autonomous navigation. All computations are done onboard on a heterogeneous computing suite. Extensive experimental results are presented to validate the individual modules as well as the overall system in both indoor and outdoor environments.

8.
Biomass determination usually involves destructive and tedious measurements. This study was conducted to evaluate the usefulness of the Normalized Difference Vegetation Index (NDVI) and the Simple Ratio (SR), calculated from the spectra of individual plants, for the assessment of leaf area per plant (LAP), green area per plant (GAP) and plant dry weight (W) at different growth stages. Two varieties of four cereal species (barley, bread wheat, durum wheat and triticale) were sown in a field experiment at a density of 25 plants m⁻². The spectra were captured on three plants per plot on eight occasions from the beginning of jointing to heading, using a narrow-bandwidth visible-near-infrared portable field spectroradiometer adapted for measurements at the plant level. Strong associations were found between NDVI and SR and the growth traits, with both indices being better estimators of GAP and W than of LAP. Exponential models fitted to NDVI data were useful over a wider range of situations than the linear models fitted to SR data. However, SR was able to discriminate between genotypes within a species. The accuracy of the reflectance measurements was comparable to that obtained by destructive measurements of growth traits, for which differences between varieties of over 24% were needed to reach statistical significance; by contrast, differences in SR of only 18% were statistically significant (P < 0.05). The reliability of the spectral reflectance measurements and their non-destructive nature make this methodology a promising tool for the assessment of growth traits in spaced individual plants.
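NDVI and SR are simple band ratios of near-infrared and red reflectance. The sketch below computes both indices from a single-plant spectrum and fits an exponential model to a growth trait; the band limits and the data points are illustrative placeholders, since the abstract gives neither:

```python
import numpy as np
from scipy.optimize import curve_fit

def ndvi_sr(reflectance, wavelengths, red=(650, 680), nir=(770, 900)):
    """NDVI and Simple Ratio from a single-plant reflectance spectrum.

    The band limits (nm) are illustrative assumptions, not taken from the paper.
    """
    r = reflectance[(wavelengths >= red[0]) & (wavelengths <= red[1])].mean()
    n = reflectance[(wavelengths >= nir[0]) & (wavelengths <= nir[1])].mean()
    return (n - r) / (n + r), n / r      # NDVI, SR

def exp_model(ndvi, a, b):
    """Exponential model relating NDVI to a growth trait such as GAP."""
    return a * np.exp(b * ndvi)

# Toy numbers only, to show the fitting call; real values come from field data.
ndvi_vals = np.array([0.30, 0.45, 0.55, 0.65, 0.75])
gap_vals = np.array([12.0, 25.0, 48.0, 95.0, 180.0])
params, _ = curve_fit(exp_model, ndvi_vals, gap_vals, p0=(1.0, 5.0))
```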

9.
High-flying unmanned aerial vehicles (UAVs) are transforming industrial and research agriculture by delivering high spatiotemporal resolution data on a field environment. While current UAVs fly high above fields collecting aerial imagery, future low-flying aircraft will directly interact with the environment and will utilize a wider variety of sensors. Safely and reliably operating close to unstructured environments requires improving UAVs' sensing, localization, and control algorithms. To this end, we investigate localizing a micro-UAV in corn phenotyping trials using a laser scanner and an IMU to control the altitude and position of the vehicle relative to the plant rows. In this process, the laser scanner is not only a means of localization but also a scientific instrument for measuring plant properties. Experimental evaluations demonstrate that the system is capable of safely and reliably operating in real-world phenotyping trials. We experimentally validate the system in both low- and high-wind conditions in fully mature corn fields. Using data from 18 test flights, we show that the UAV is capable of localizing its position to within one field row of the true position.

10.
This paper describes a light detection and ranging (LiDAR)-based autonomous navigation system for an ultralightweight ground robot in agricultural fields. The system is designed for reliable navigation under cluttered canopies using only a 2D Hokuyo UTM-30LX LiDAR sensor as the single source of perception. Its purpose is to ensure that the robot can navigate through rows of crops without damaging the plants in narrow, row-based, high-leaf-cover semistructured crop plantations such as corn (Zea mays) and sorghum (Sorghum bicolor). The key contribution of our work is a LiDAR-based navigation algorithm capable of rejecting outlying measurements in the point cloud caused by plants in adjacent rows, low-hanging leaf cover, or weeds. The algorithm addresses this challenge using a set of heuristics designed to filter out outlying measurements in a computationally efficient manner, and linear least squares is applied to estimate the within-row distance from the filtered data. A crucial further step is estimate validation, achieved through a heuristic that grades and validates the fitted row lines based on current and previous information. The proposed LiDAR-based perception subsystem has been extensively tested in production and breeding corn and sorghum fields. Across this variety of highly cluttered real field environments, the robot logged more than 6 km of autonomous operation in straight rows. These results represent highly promising advances in LiDAR-based navigation in realistic field environments for small under-canopy robots.
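As a minimal illustration of the filter-then-fit idea (the simple distance band below is a hypothetical stand-in for the paper's heuristics), the within-row distance and heading error can be obtained from a single 2D scan like this:

```python
import numpy as np

def within_row_distance(scan_xy, band=(0.1, 0.6)):
    """Estimate lateral distance to one crop row from a 2D LiDAR scan.

    scan_xy : (N, 2) points in the robot frame (x forward, y left), metres.
    band    : keep only returns whose |y| lies in this range, a crude heuristic
              that rejects the adjacent row, low-hanging leaves and weeds
              touching the robot.
    """
    y = scan_xy[:, 1]
    keep = (np.abs(y) > band[0]) & (np.abs(y) < band[1]) & (y > 0)  # left row only
    pts = scan_xy[keep]
    # Fit y = m*x + c to the remaining points with linear least squares.
    A = np.c_[pts[:, 0], np.ones(len(pts))]
    (m, c), *_ = np.linalg.lstsq(A, pts[:, 1], rcond=None)
    heading_error = np.arctan(m)            # row orientation relative to robot
    lateral_dist = c / np.sqrt(1 + m**2)    # perpendicular distance to the row line
    return lateral_dist, heading_error
```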

11.
The emerging discipline of plant phenomics aims to measure key plant characteristics, or traits, though as yet the set of plant traits that should be measured by automated systems is not well defined. Methods capable of recovering generic representations of the 3D structure of plant shoots from images would provide a key technology underpinning the quantification of a wide range of current and future physiological and morphological traits. We present a fully automatic approach to image-based 3D plant reconstruction which represents plants as a series of small planar sections that together model the complex architecture of leaf surfaces. The initial boundary of each leaf patch is refined using a level-set method, optimising the model based on image information, curvature constraints and the position of neighbouring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed. As such, it is applicable to a wide variety of plant species and topologies, and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on real images of wheat and rice plants, on an artificial plant with challenging architecture, and on a novel virtual dataset that allows us to compute distance measures of reconstruction accuracy. We also illustrate the method's potential to support the identification of individual leaves, and hence the phenotyping of plant shoots, using a spectral clustering approach.

12.
Measuring semantic traits for phenotyping is an essential but labor-intensive activity in horticulture. Researchers often rely on manual measurements, which may not be accurate for tasks such as measuring tree volume. To improve the accuracy of such measurements and to automate the process, we consider the problem of building coherent three-dimensional (3D) reconstructions of orchard rows. Even though 3D reconstructions of side views can be obtained using standard mapping techniques, merging the two side views is difficult due to the lack of overlap between the two partial reconstructions. Our first main contribution is a novel method that utilizes global features and semantic information to obtain an initial solution aligning the two sides. Our mapping approach then refines the 3D model of the entire tree row by integrating semantic information common to both sides, extracted using our novel robust detection and fitting algorithms. Next, we present a vision system to measure semantic traits from the optimized 3D model built from RGB or RGB-D data captured by a single camera. Specifically, we show how canopy volume, trunk diameter, tree height, and fruit count can be automatically obtained in real orchard environments. Experimental results from multiple data sets quantitatively demonstrate the high accuracy and robustness of our method.

13.
We address the problem of navigating unmanned vehicles safely through urban canyons in two dimensions using only vision-based techniques. Two commonly used vision-based obstacle avoidance techniques (namely stereo vision and optic flow) are implemented on an aerial and a ground-based robotic platform and evaluated for urban canyon navigation. Optic flow is evaluated for its ability to produce a centering response between obstacles, and stereo vision is evaluated for detecting obstacles to the front. We also evaluate a combination of these two techniques, which allows a vehicle to detect obstacles to the front while remaining centered between obstacles to the side. Through experiments on an unmanned ground vehicle and in simulation, this combination is shown to be beneficial for navigating urban canyons, including T-junctions and 90-deg bends. Experiments on a rotorcraft unmanned aerial vehicle, which was constrained to two-dimensional flight, demonstrate that stereo vision allowed it to detect an obstacle to the front, and optic flow allowed it to turn away from obstacles to the side. We discuss the theory behind these techniques, our experience in implementing them on the robotic platforms, and their suitability to the urban canyon navigation problem.
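As a toy illustration of the optic-flow centering response (not the authors' implementation; it assumes OpenCV's Farneback dense flow and grayscale input frames), a turn command can be derived from the imbalance of flow magnitude between the left and right image halves:

```python
import cv2
import numpy as np

def centering_command(prev_gray, curr_gray, gain=1.0):
    """Turn-rate command that balances optic flow between image halves.

    Flow magnitude grows with proximity, so steering away from the side with
    larger flow keeps the vehicle near the middle of the corridor.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    # Positive command = turn right when the left wall appears closer.
    return gain * (left - right) / (left + right + 1e-9)
```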

14.
NASA scenarios for lunar and planetary missions include robotic vehicles that function in both teleoperated and semi-autonomous modes. Under teleoperation, on-board stereo cameras may provide 3-D scene information to human operators via stereographic displays; likewise, under semi-autonomy, machine stereo vision may provide 3-D information for obstacle avoidance. In the past, the slow speed of machine stereo vision systems posed a hurdle to the semi-autonomous scenario; however, recent work at JPL and other laboratories has produced stereo systems with high reliability and near-real-time performance for low-resolution image pairs. In particular, JPL has taken a significant step by achieving the first autonomous, cross-country robotic traverses (of up to 100 meters) to use stereo vision, with all computing on board the vehicle. Here we describe the stereo vision system, including the underlying statistical model and the details of the implementation. The statistical and algorithmic aspects employ random-field models of the disparity map, Bayesian formulations of single-scale matching, and area-based image comparisons. The implementation builds bandpass image pyramids and produces disparity maps from the 60×64 level of the pyramids at rates of up to two seconds per image pair. All vision processing is done on a single 68020 processor augmented with Datacube image-processing boards. We argue that the overall approach provides a unifying paradigm for practical, domain-independent stereo ranging. We close with a discussion of practical and theoretical issues involved in evaluating and extending the performance of the stereo system.
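For readers unfamiliar with area-based matching, here is a deliberately simple sum-of-squared-differences block matcher for a rectified low-resolution pair. This is a toy sketch only; the JPL system uses bandpass pyramids and a Bayesian formulation rather than this brute-force search:

```python
import numpy as np

def ssd_disparity(left, right, max_disp=16, win=5):
    """Area-based stereo matching on a rectified pair (toy illustration).

    For each pixel, compare a small window in the left image against windows in
    the right image shifted by 0..max_disp and keep the shift with the lowest
    sum of squared differences.  Borders are left at zero for simplicity.
    """
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(float)
            costs = [np.sum((patch - right[y-half:y+half+1,
                                           x-d-half:x-d+half+1].astype(float))**2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```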

15.
Ranging techniques such as lidar (LIght Detection And Ranging) and digital stereo-photogrammetry show great promise for mapping forest canopy height. In this study, we combine these techniques to create hybrid photo-lidar canopy height models (CHMs). First, photogrammetric digital surface models (DSMs) created using automated stereo-matching were registered to corresponding lidar digital terrain models (DTMs). Photo-lidar CHMs were then produced by subtracting the lidar DTM from the photogrammetric DSM. This approach opens up the possibility of retrospective mapping of forest structure using archived aerial photographs. The main objective of the study was to evaluate the accuracy of photo-lidar CHMs by comparing them to reference lidar CHMs. The assessment revealed that stereo-matching parameters and left-right image dissimilarities caused by sunlight and viewing geometry have a significant influence on the quality of the photo DSMs. Our study showed that photo-lidar CHMs are well correlated to their lidar counterparts on a pixel-wise basis (r up to 0.89 in the best stereo-matching conditions), but have a lower resolution and accuracy. It also demonstrated that plot metrics extracted from the lidar and photo-lidar CHMs, such as the 95th-percentile height within 20 m × 20 m windows, are highly correlated (r up to 0.95 in general matching conditions).
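The core operation is a raster difference followed by a windowed plot metric. A minimal sketch, assuming co-registered rasters on the same grid and an illustrative cell size:

```python
import numpy as np

def canopy_height_model(photo_dsm, lidar_dtm, nodata=np.nan):
    """Photo-lidar CHM: photogrammetric surface model minus lidar terrain model.

    Both rasters must be co-registered on the same grid; negative heights
    (matching noise below the terrain) are clipped to zero.
    """
    chm = photo_dsm - lidar_dtm
    chm = np.where(np.isnan(photo_dsm) | np.isnan(lidar_dtm), nodata, chm)
    return np.clip(chm, 0, None)

def plot_p95(chm, cell=0.5, window_m=20.0):
    """95th-percentile height in window_m x window_m windows (a typical plot metric)."""
    n = int(window_m / cell)
    h, w = chm.shape
    metrics = [[np.nanpercentile(chm[i:i+n, j:j+n], 95)
                for j in range(0, w - n + 1, n)]
               for i in range(0, h - n + 1, n)]
    return np.array(metrics)
```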

16.
This paper investigates the application of a ground-based laser scanning system for providing quantitative tree measurements in densely stocked plantation forests. A methodology is tested in Kielder Forest, northern England, using stands of mature Sitka spruce (Picea sitchensis) and a structured mixture of Sitka spruce and lodgepole pine (Pinus contorta), standing at tree densities of 600 and 2800 stems ha⁻¹, respectively. Three laser scans, two in the Sitka spruce and one in the structured mixture, were collected using a Riegl LPM-300VHS high-speed laser scanner. Field measurements were recorded at the same time and included tree diameter at breast height (dbh) and tree height. These measurements were then compared with those derived from the scanner. The results demonstrate that accurate measurements of tree diameter can be derived directly from the laser scan point cloud in instances where the sensor's view of the tree is not obstructed. Measurements of upper stem diameters, branch internodal distance and canopy dimensions can also be made from the laser scan data. However, at the scanning resolution selected, it was not possible to measure branch size. The level of detail that can be obtained from the scan data depends on the number and location of scans within the plot as well as on the scanning resolution. Essentially, as the shadowing caused by tree density or branching frequency increases, the amount of useful information contained in the scan decreases.
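A standard way to derive dbh from terrestrial laser scan data (shown here as a generic sketch, not necessarily the authors' procedure) is to extract a thin horizontal slice of stem returns at about 1.3 m above ground and fit a circle to it with the algebraic least-squares (Kåsa) method:

```python
import numpy as np

def dbh_from_slice(xy):
    """Estimate stem diameter at breast height from a thin point-cloud slice.

    xy : (N, 2) horizontal coordinates of laser returns from a slice around
         1.3 m above ground on an unobstructed stem.
    Fits the circle x^2 + y^2 + a*x + b*y + c = 0 by linear least squares.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.c_[x, y, np.ones_like(x)]
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2, -b / 2
    radius = np.sqrt(cx**2 + cy**2 - c)
    return 2.0 * radius          # diameter, in the units of the input points
```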

17.
A non-destructive plant-height measurement method based on image processing
He Jing. 《测控技术》 (Measurement & Control Technology), 2015, 34(4): 39-42
To achieve non-destructive measurement in quantitative studies of plant height, while reducing the hardware cost of outdoor plant-growth monitoring and simplifying equipment installation and operation, a non-destructive plant-height measurement method based on image processing and stereo-vision techniques is proposed. Taking cotton plant height as the study object, images of cotton plants are first acquired with a single calibrated camera within a non-destructive cotton-growth monitoring system, and the cotton stem is reduced to the coordinates of two key points in the 2D image space. The camera parameter matrix, together with constraints on the two key points, is then used to recover the plant's three-dimensional information in the world coordinate system and hence its height. Experimental results show that the height-measurement error of the algorithm can be kept within 5 mm, demonstrating that the method is feasible and effective.
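The two-keypoint idea can be illustrated with a short back-projection sketch. This is not the paper's exact mathematics: the intrinsics, pose, pixel coordinates and the planar stem constraint below are all assumptions chosen to make the example self-contained.

```python
import numpy as np

K = np.array([[1200.0, 0.0, 640.0],      # assumed camera intrinsics (pixels)
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)            # assumed camera pose in the world frame

def backproject_to_plane(u, v, n, d):
    """Intersect the viewing ray of pixel (u, v) with the plane n . X = d."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam            # ray direction in world coordinates
    origin = -R.T @ t                    # camera centre in world coordinates
    s = (d - n @ origin) / (n @ ray_world)
    return origin + s * ray_world

# Constraint: the stem lies in a plane parallel to the image plane, 2 m away
# (camera assumed level, so the image y axis is vertical).
plane_n, plane_d = np.array([0.0, 0.0, 1.0]), 2.0
base = backproject_to_plane(640, 700, plane_n, plane_d)   # stem-base keypoint
top = backproject_to_plane(640, 150, plane_n, plane_d)    # stem-top keypoint
height = abs(top[1] - base[1])           # vertical difference = plant height [m]
print(f"estimated plant height: {height:.3f} m")
```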

18.
Underwater visual inspection is an important task for checking the structural integrity and biofouling of the ship hull surface to improve the operational safety and efficiency of ships and floating vessels. This paper describes the development of an autonomous in-water visual inspection system and its application to visual hull inspection of a full-scale ship. The developed system includes a hardware vehicle platform and software algorithms for autonomous operation of the vehicle. The algorithms for vehicle autonomy consist of the guidance, navigation, and control algorithms for real-time and onboard operation of the vehicle around the hull surface. The environmental perception of the developed system is mainly based on optical camera images, and various computer vision and optimization algorithms are used for vision-based navigation and visual mapping. In particular, a stereo camera is installed on the underwater vehicle to estimate instantaneous surface normal vectors, which enables high-precision navigation and robust visual mapping, not only on flat areas but also over moderately curved hull surface areas. The development process of the vehicle platform and the implemented algorithms are described. The results of the field experiment with a full-scale ship in a real sea environment are presented to demonstrate the feasibility and practical performance of the developed system.
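Surface normals from stereo data are commonly obtained by fitting a plane to a local neighbourhood of reconstructed points. A small sketch of that standard idea (an illustration, not the paper's specific algorithm):

```python
import numpy as np

def local_surface_normal(patch_points):
    """Unit normal of a small hull-surface patch reconstructed by a stereo camera.

    patch_points : (N, 3) 3D points around the point of interest.  The normal is
    the right singular vector associated with the smallest singular value of the
    mean-centred point set (a least-squares plane fit).
    """
    centred = patch_points - patch_points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)
```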

19.
A field-enhanced rapid-thermal-processor (FE-RTP) system that enables LTPS LCD and AMOLED manufacturers to produce poly-Si films at low cost, high throughput, and high yield has been developed. The FE-RTP allows for diverse process options, including crystallization, thermal oxidation of gate oxides, and fast pre-compactions. The process and equipment compatibility with a-Si TFT manufacturing lines provides a viable solution to produce poly-Si TFTs using a-Si TFT lines.

20.
Robotic weeding enables weed control near or within crop rows automatically, precisely and effectively. A computer-vision system was developed for detecting crop plants at different growth stages for robotic weed control. Fusion of color images and depth images was investigated as a means of enhancing the detection accuracy of crop plants under conditions of high weed population. In-field images of broccoli and lettuce were acquired 3–27 days after transplanting with a Kinect v2 sensor. The image-processing pipeline included data preprocessing, vegetation pixel segmentation, plant extraction, feature extraction, feature-based localization refinement, and crop plant classification. For the detection of broccoli and lettuce, the color-depth fusion algorithm produced high true-positive detection rates (91.7% and 90.8%, respectively) and low average false discovery rates (1.1% and 4.0%, respectively). Mean absolute localization errors of the crop plant stems were 26.8 and 7.4 mm for broccoli and lettuce, respectively. The fusion of color and depth proved beneficial to the segmentation of crop plants from the background, improving the average segmentation success rates from 87.2% (depth-based) and 76.4% (color-based) to 96.6% for broccoli, and from 74.2% (depth-based) and 81.2% (color-based) to 92.4% for lettuce. The fusion-based algorithm had reduced performance in detecting crop plants at early growth stages.
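The paper's pipeline is more elaborate, but the fusion idea can be sketched with two simple cues: an excess-green colour mask and a height-above-soil depth mask, combined pixel-wise. The thresholds and the `ground_depth` input below are illustrative assumptions:

```python
import numpy as np

def fused_vegetation_mask(rgb, depth, ground_depth,
                          exg_thresh=20, min_height=0.02):
    """Toy colour-depth fusion for vegetation segmentation (illustration only).

    rgb          : (H, W, 3) uint8 colour image registered to the depth image.
    depth        : (H, W) depth in metres from the RGB-D sensor.
    ground_depth : (H, W) depth of the bare-soil surface (e.g. from a plane fit).
    The colour cue is the excess-green index 2G - R - B; the depth cue keeps
    pixels standing at least min_height above the soil.  Combining both masks
    helps when shadows or weeds confuse a single cue.
    """
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    exg = 2 * g - r - b
    color_mask = exg > exg_thresh
    height_mask = (ground_depth - depth) > min_height   # closer to camera than soil
    return color_mask & height_mask
```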
