Similar Literature
20 similar documents found (search time: 172 ms)
1.
Hyperspectral cameras sample many different spectral bands at each pixel, enabling advanced detection and classification algorithms. However, their limited spatial resolution and the need to measure the camera motion to create hyperspectral images make them unsuitable for nonsmooth moving platforms such as unmanned aerial vehicles (UAVs). We present a procedure to build hyperspectral images from line sensor data without camera motion information or extraneous sensors. Our approach relies on an accompanying conventional camera to exploit the homographies between images for mosaic construction. We provide experimental results from a low-altitude UAV, achieving high-resolution spectroscopy with our system.
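The mosaic construction above rests on estimating homographies between overlapping images from point correspondences. A minimal sketch of the general technique (the standard direct linear transform, not the authors' specific pipeline):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.c_[pts, np.ones(len(pts))] @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]

# Synthetic check: recover a known homography from 4 correspondences.
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0., 0.], [100., 0.], [100., 80.], [0., 80.]])
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
```

In practice the correspondences come from feature matching and are fed through RANSAC to reject outliers before the final estimate.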

2.
An unmanned aerial vehicle (UAV) stabilization strategy based on computer vision and switching controllers is proposed. The main goal of this system is to track a moving target on the ground. The architecture consists of a quadrotor equipped with an embedded camera that provides real-time video to a computer vision algorithm where images are processed. A vision-based estimator is proposed, which uses 2-dimensional images to compute the relative 3-dimensional position and translational velocity of the UAV with respect to the target. The proposed estimator provides the required state measurements to a micro-controller for stabilizing the vehicle during flight. The control strategy consists of switching controllers, which allow decisions to be made when the target is temporarily lost or out of the camera's field of view. Real-time experiments demonstrate the performance of the proposed target-tracking system.
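The abstract does not give the estimator's equations; as an illustrative sketch only, the pinhole back-projection that such vision-based estimators commonly build on (a hypothetical simplification, not necessarily the paper's formulation) recovers the relative position from a pixel and a known depth:

```python
import numpy as np

def relative_position(u, v, altitude, fx, fy, cx, cy):
    """Relative (X, Y, Z) of a ground target seen at pixel (u, v) by a
    downward-facing pinhole camera at a known altitude above the target.

    fx, fy: focal lengths in pixels; (cx, cy): principal point.
    Back-projects the pixel ray and scales it by the known depth.
    """
    Z = altitude              # depth along the optical axis
    X = (u - cx) * Z / fx     # lateral offset
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

# A target imaged 50 px right of the principal point, at 10 m altitude
# with fx = 500 px, lies 50 * 10 / 500 = 1 m to the side.
p = relative_position(370.0, 240.0, 10.0, 500.0, 500.0, 320.0, 240.0)
```

Differencing such positions over time (with filtering) gives the translational velocity that the controller consumes.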

3.
Automatic Georeferencing of Images Acquired by UAV's
This paper implements and experimentally evaluates a procedure for automatically georeferencing images acquired by unmanned aerial vehicles (UAVs), in the sense that ground control points (GCPs) are not necessary. Since the camera model is required for georeferencing, this paper also proposes a completely automatic procedure for collecting corner pixels in the model plane image to solve the camera calibration problem, i.e., to estimate the camera and lens distortion parameters. The performance of the complete georeferencing system is evaluated with real flight data obtained by a typical UAV.

4.
Civilian applications for UAVs will bring these vehicles into low flying areas cluttered with obstacles such as buildings, trees, power lines, and, more importantly, civilians. The high accident rate of UAVs means that civilian use will come at huge risk unless we design systems and protocols that can prevent UAV accidents, better train operators, and augment pilot performance. This paper presents two methods for generating a chase view for the pilot during UAV operations in cluttered environments. The chase view gives the operator a virtual view from behind the UAV during flight. This is done by generating a virtual representation of the vehicle and surrounding environment while integrating it with the real-time onboard camera images. Method I presents a real-time mapping approach to generating the surrounding environment, and Method II uses a prior model of the operating environment. Experimental results are presented from tests in which subjects flew in an H0-scale environment using a 6-DOF gantry system. Results showed that the chase view improved UAV operator performance over the traditional onboard camera view.

5.
We present the development process behind AtlantikSolar, a small 6.9 kg hand-launchable low-altitude solar-powered unmanned aerial vehicle (UAV) that recently completed an 81-hour continuous flight and thereby established a new flight endurance world record for all aircraft below 50 kg mass. The goal of our work is to increase the usability of such solar-powered robotic aircraft by maximizing their perpetual flight robustness to meteorological deteriorations such as clouds or winds. We present energetic system models and a design methodology, implement them in our publicly available conceptual design framework for perpetual flight-capable solar-powered UAVs, and finally apply the framework to the AtlantikSolar UAV. We present the detailed AtlantikSolar characteristics as a practical design example. Airframe, avionics, hardware, state estimation, and control method development for autonomous flight operations are described. Flight data are used to validate the conceptual design framework. Flight results from the continuous 81-hour and 2,338 km covered ground distance flight show that AtlantikSolar achieves 39% minimum state-of-charge, 6.8 h excess time and 6.2 h charge margin. These performance metrics are a significant improvement over previous solar-powered UAVs. A performance outlook shows that AtlantikSolar allows perpetual flight in a 6-month window around June 21 at mid-European latitudes, and that multi-day flights with small optical- or infrared-camera payloads are possible for the first time. The demonstrated performance represents the current state-of-the-art in solar-powered low-altitude perpetual flight performance. We conclude with lessons learned from the three-year AtlantikSolar UAV development process and with a sensitivity analysis that identifies the most promising technological areas for future solar-powered UAV performance improvements.

6.
We present a system consisting of a miniature unmanned aerial vehicle (UAV) and a small carrier vehicle, in which the UAV is capable of autonomously starting from the moving ground vehicle, tracking it at a constant distance and landing on a platform on the carrier in motion. Our visual tracking approach differs from other methods by using low-cost, lightweight commodity consumer hardware. As the main sensor we use a Wii remote infrared (IR) camera, which allows robust tracking of a pattern of IR lights in conditions without direct sunlight. The system does not need to communicate with the ground vehicle and works with an onboard 8-bit microcontroller. Nevertheless, the position and orientation relative to the IR pattern are estimated at a frequency of approximately 50 Hz. This enables the UAV to fly fully autonomously, performing flight control, self-stabilisation and visual tracking of the ground vehicle. We present experiments in which our UAV performs autonomous flights with a moving ground carrier describing a circular path and with the carrier rotating. The system produces small errors and allows for safe, autonomous indoor flights.

7.
Structure from motion with wide circular field of view cameras
This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view. We focus on cameras which have more than 180° field of view and for which the standard perspective camera model is not sufficient, e.g., cameras equipped with circular fish-eye lenses Nikon FC-E8 (183°), Sigma 8 mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by the central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences which can be used with accurate noncentral models in a bundle adjustment to obtain accurate 3D scene reconstruction. Noncentral camera models are dealt with and results are shown for catadioptric cameras with parabolic and spherical mirrors.

8.
Mosaics acquired by pushbroom cameras, stereo panoramas, omnivergent mosaics, and spherical mosaics can be viewed as images taken by non-central cameras, i.e. cameras that project along rays that do not all intersect at one point. It has been shown that in order to reduce the correspondence search in mosaics to a one-parametric search along curves, the rays of the non-central cameras have to lie in double ruled epipolar surfaces. In this work, we introduce the oblique stereo geometry, which has non-intersecting double ruled epipolar surfaces. We analyze the configurations of mutually oblique rays that see every point in space. These configurations, called oblique cameras, are the most non-central cameras among all cameras. We formulate the assumption under which two oblique cameras possess oblique stereo geometry and show that the epipolar surfaces are non-intersecting double ruled hyperboloids and two lines. We show that oblique cameras, and the corresponding oblique stereo geometry, exist and give an example of a physically realizable oblique stereo geometry. We introduce linear oblique cameras as those which can be generated by a linear mapping from points in space to camera rays and characterize those collineations which generate them. We show that all linear oblique cameras are obtained by a collineation from one example of an oblique camera. Finally, we relate oblique cameras to spreads known from incidence geometries.

9.
This paper presents an implementation of an aircraft pose and motion estimator using visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV) or as a redundant system for an Inertial Measurement Unit (IMU) and gyro sensors. First, we explore the applications of the unified theory for central catadioptric cameras for attitude and heading estimation, explaining how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then we use appearance images to obtain a visual compass, and we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft height and to measure the UAV's motion. Finally, we present a visual tracking system based on Fuzzy controllers working on both a UAV and a camera pan-and-tilt platform. Every part is tested using the UAV COLIBRI platform to validate the different approaches, including comparison of the estimated data with inertial values measured onboard the helicopter platform and validation of the tracking schemes in real flights.
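The skyline-to-attitude computation above is specific to catadioptric images; for intuition only, a perspective-camera analogue (an assumption-laden sketch, not the paper's method) recovers roll from the horizon's tilt and pitch from its offset from the principal point:

```python
import numpy as np

def roll_pitch_from_horizon(p1, p2, f, cx, cy):
    """Approximate camera roll and pitch from two image points on a
    detected horizon line, for an ordinary perspective camera.

    Roll is the tilt of the horizon; pitch follows from the signed
    perpendicular offset of the horizon from the principal point.
    """
    (x1, y1), (x2, y2) = p1, p2
    roll = np.arctan2(y2 - y1, x2 - x1)
    # Signed distance from the principal point (cx, cy) to the line.
    d = ((x2 - x1) * (cy - y1) - (y2 - y1) * (cx - x1)) / np.hypot(x2 - x1, y2 - y1)
    pitch = np.arctan2(d, f)
    return roll, pitch

# A level horizon passing through the principal point means the camera
# is level: zero roll and zero pitch.
roll, pitch = roll_pitch_from_horizon((0.0, 240.0), (640.0, 240.0),
                                      500.0, 320.0, 240.0)
```

The catadioptric case replaces the straight horizon line with the conic into which the unified projection model maps the skyline.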

10.
Yang Zhao, Zhao Yang, Hu Xiao, Yin Yi, Zhou Lihua, Tao Dapeng. Multimedia Tools and Applications, 2019, 78(9): 11983-12006

The surround view camera system is an emerging driving assistant technology that assists drivers in parking by providing a top-down view of the surrounding situation. Such a system usually consists of four wide-angle or fish-eye cameras mounted around the vehicle, and a bird-eye view is synthesized from the images of these cameras. There are two fundamental problems in surround view synthesis: geometric alignment and image synthesis. Geometric alignment performs fish-eye calibration and computes the perspective transformation between the bird-eye view and the images from the surrounding cameras. Image synthesis deals with seamless stitching between adjacent views and color balancing. In this paper, we propose a flexible central-around coordinate mapping (CACM) model for vehicle surround view synthesis. The CACM model calculates the perspective transformation between a top-view central camera coordinate frame and the around-camera coordinate frames by a marker-point-based method. With the transformation matrices, we can generate the pixel mapping between the bird-eye view and the images of the surrounding cameras. After geometric alignment, an image fusion method based on distance weighting is adopted for seamless stitching, and an effective overlapping-region brightness optimization method is proposed for color balancing. Both seamless stitching and color balancing can be performed easily using two types of weight coefficients under the framework of the CACM model. Experimental results show that the proposed approaches can provide a high-performance surround view camera system.
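The distance-weighted fusion idea can be illustrated with a one-dimensional overlap: each pixel in the overlap is a convex combination of the two views, weighted by its distance to each view's boundary. A simplified grayscale sketch (the paper's implementation operates on the full bird-eye geometry, not a horizontal strip):

```python
import numpy as np

def blend_overlap(img_a, img_b, overlap):
    """Stitch two equally tall grayscale images whose last/first
    `overlap` columns coincide, using a linear distance ramp across the
    overlap so each image's weight decays toward its own boundary.
    """
    h, wa = img_a.shape
    wb = img_b.shape[1]
    out = np.zeros((h, wa + wb - overlap), dtype=float)
    out[:, :wa - overlap] = img_a[:, :wa - overlap]   # img_a only
    out[:, wa:] = img_b[:, overlap:]                  # img_b only
    # Distance-based weights inside the overlap band.
    t = np.linspace(0.0, 1.0, overlap)  # 0 at img_a's side, 1 at img_b's
    out[:, wa - overlap:wa] = (1 - t) * img_a[:, wa - overlap:] \
                              + t * img_b[:, :overlap]
    return out

# Two constant images: the overlap ramps from img_a's value to img_b's.
a = np.full((2, 4), 10.0)
b = np.full((2, 4), 20.0)
m = blend_overlap(a, b, overlap=2)
```

For the real bird-eye view, `t` becomes a 2D weight map proportional to the distance from each camera's valid-region boundary.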


11.
This paper considers the problem of a small, fixed-wing UAV equipped with a gimbaled camera autonomously tracking an unpredictable moving ground vehicle. Thus, the UAV must maintain close proximity to the ground target and simultaneously keep the target in its camera's visibility region. To achieve this objective robustly, two novel optimization-based control strategies are developed. The first assumes an evasive target motion while the second assumes a stochastic target motion. The resulting optimal control policies have been successfully flight tested, thereby demonstrating the efficacy of both approaches in a real-world implementation and highlighting the advantages of one approach over the other.

12.
Research on Camera Calibration Methods in Flight Testing
Hu Binghua, Yan Hui, Chen Bei. Measurement & Control Technology, 2013, 32(5): 134-137
With the development of digital camera and photogrammetry technology, more and more digital cameras are being used in flight testing, and camera calibration is one of the keys to their successful application in flight tests. To move beyond the current flight-test practice of using only point features as control, make full use of existing equipment, and better solve the problem of real-time calibration of aircraft-mounted cameras during flight, a two-step calibration method combining interior calibration with real-time exterior calibration is adopted. An interior calibration method based on parallel straight lines is discussed in depth, and the calibration solution model based on vanishing-point constraints and straight-line geometric constraints is described in detail; without any control points, this method can obtain each camera's interior orientation elements, distortion correction coefficients, and exterior orientation angle elements. A real-time exterior calibration method based on single-image resection is also briefly introduced. Experimental results on real data show that the method is feasible and yields accurate, stable parameter estimates, effectively reducing the number of control points that must be laid out during camera calibration and thereby improving the practicability of tasks in flight testing such as precise measurement of missile trajectories and wing deformation.
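The vanishing-point constraint used in such interior calibration can be illustrated with the classic orthogonality relation: if v1 and v2 are the vanishing points of two mutually orthogonal pencils of parallel scene lines and the principal point c is known, then (v1 - c) · (v2 - c) = -f². A sketch under the assumptions of square pixels and a known principal point (the paper's full model also recovers distortion coefficients):

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, c):
    """Estimate the focal length (in pixels) from the vanishing points
    of two mutually orthogonal families of parallel scene lines,
    assuming square pixels and a known principal point c.

    Orthogonality of the directions gives (v1 - c).(v2 - c) = -f^2.
    """
    d = np.dot(np.asarray(v1) - c, np.asarray(v2) - c)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(-d))

# Synthetic camera with f = 800 px and principal point (320, 240): the
# orthogonal directions (1, 0, 1) and (-1, 0, 1) project to vanishing
# points (320 + 800, 240) and (320 - 800, 240).
c = np.array([320.0, 240.0])
f = focal_from_orthogonal_vps((1120.0, 240.0), (-480.0, 240.0), c)
```

Each vanishing point would itself be obtained by intersecting the images of a set of parallel straight lines, as the abstract describes.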

13.
The aim of this paper is to present, test, and discuss the implementation of visual SLAM techniques on images taken from Unmanned Aerial Vehicles (UAVs) outdoors, in partially structured environments. Each step of the whole process is discussed in order to obtain more accurate localization and mapping from UAV flights. First, issues related to the visual features of objects in the scene, their distance to the UAV, the image acquisition system, and its calibration are evaluated for improving the whole process. Other important issues considered relate to image processing techniques, such as interest point detection, the matching procedure, and the scaling factor. The whole system has been tested using the COLIBRI mini UAV in partially structured environments. The localization results, tested against the GPS information of the flights, show that visual SLAM delivers reliable localization and mapping, making it suitable for some outdoor applications when flying UAVs.

14.
This paper presents an aircraft attitude and heading estimator using catadioptric images as a principal sensor for a UAV, or as a redundant system for IMU (Inertial Measurement Unit) and gyro sensors. First, we explain how the unified theory for central catadioptric cameras is used for attitude and heading estimation, showing how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then, we use appearance images to obtain a visual compass, and we calculate the relative rotation and heading of the aerial vehicle. Finally, tests and results using the UAV COLIBRI platform and their validation in real flights are presented, comparing the estimated data with inertial values measured on board.

15.
In this study, we propose a high-density three-dimensional (3D) tunnel measurement method that estimates the pose changes of cameras using a point set registration algorithm on 2D and 3D point clouds. To detect small deformations and defects, high-density 3D measurements are necessary at tunnel construction sites. The line-structured light method uses an omnidirectional laser to measure a high-density cross-section point cloud from camera images. To estimate the pose changes of cameras in tunnels, which have few textures and distinctive shapes, cooperative robots are useful because they estimate pose by aggregating relative poses from the other robots. However, previous studies mounted several sensors for both 3D measurement and pose estimation, increasing the size of the measurement system. Furthermore, the lack of 3D features makes it difficult to match point clouds obtained from different robots. The proposed measurement system consists of a cross-section measurement unit and a pose estimation unit, with one camera mounted per unit. To estimate the relative poses of the two cameras, we designed a 2D-3D registration algorithm for the omnidirectional laser light, and implemented hand-truck and unmanned aerial vehicle systems. In the measurement of a tunnel with a width of 8.8 m and a height of 6.4 m, the errors of the point clouds measured by the two systems were 162.8 mm and 575.3 mm over 27 m, respectively. In a hallway measurement, the proposed method produced smaller errors on straight sections with few distinctive shapes than a 3D point set registration algorithm using Light Detection and Ranging.

16.
To address the problem of target tracking by small unmanned aerial vehicles, a target tracking and localization algorithm based on binocular vision and the Camshift algorithm is proposed. The left and right images from the binocular camera are processed with the Camshift algorithm to obtain the target's center feature point; 3D reconstruction of this point yields the relative position and yaw angle between the UAV and the target in the body frame. A Kalman filter is applied to refine the measurements, and the resulting estimates are fed back to the flight control system, enabling autonomous target-tracking flight. The results show that the proposed algorithm has small errors and high stability and accuracy.
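The 3D reconstruction step above can be sketched with standard rectified-stereo triangulation, where depth follows from the disparity between the two views (an illustrative sketch under the assumptions of a rectified pair and square pixels, not the paper's exact pipeline):

```python
import numpy as np

def stereo_target_position(uL, vL, uR, f, baseline, cx, cy):
    """3D position of a target point in the left-camera frame, from its
    pixel coordinates in a rectified stereo pair.

    Depth follows from the disparity: Z = f * B / (uL - uR), where f is
    the focal length in pixels and B the baseline in meters.
    """
    disparity = uL - uR
    Z = f * baseline / disparity
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    yaw = np.arctan2(X, Z)  # bearing to the target in the horizontal plane
    return np.array([X, Y, Z]), yaw

# f = 400 px, 0.2 m baseline, 20 px disparity -> Z = 400 * 0.2 / 20 = 4 m.
pos, yaw = stereo_target_position(320.0, 240.0, 300.0, 400.0, 0.2,
                                  320.0, 240.0)
```

In the described system, (uL, vL) and uR would be the Camshift window centers in the left and right images, and the resulting position/yaw would be smoothed by the Kalman filter before being fed to the flight controller.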

17.

This paper proposes a novel complete navigation system for autonomous flight of small unmanned aerial vehicles (UAVs) in GPS-denied environments. The hardware platform used to test the proposed algorithm is a small, custom-built UAV equipped with an onboard computer, RGB-D camera, 2D light detection and ranging (LiDAR), and altimeter. The error-state Kalman filter (ESKF) based on the dynamic model for low-cost IMU-driven systems is proposed, and visual odometry from the RGB-D camera and height measurements from the altimeter are fed into the measurement update process of the ESKF. The pose output of the ESKF is then integrated into the open-source simultaneous localization and mapping (SLAM) algorithm for pose-graph optimization and loop closing. In addition, a computationally efficient collision-free path planning algorithm is proposed and verified through simulations. The software modules run onboard in real time with limited onboard computational capability. The indoor flight experiment demonstrates that the proposed system enables small UAVs with low-cost devices to navigate without collision in fully autonomous missions while building accurate surrounding maps.
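The measurement-update step that fuses a sensor reading such as the altimeter height into the filter can be illustrated in its simplest scalar form (the paper's ESKF operates on a full error state with visual odometry; this is a deliberately reduced sketch):

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update, the basic operation a Kalman
    filter (error-state or otherwise) applies to each sensor reading.

    x, P: prior state estimate and its variance.
    z, R: measurement and its variance.
    """
    K = P / (P + R)          # Kalman gain: how much to trust z over x
    x_new = x + K * (z - x)  # correct the prediction toward the reading
    P_new = (1 - K) * P      # the fused variance shrinks
    return x_new, P_new

# Fuse a predicted height of 10.0 m (variance 1.0) with an altimeter
# reading of 10.4 m (variance 1.0): the estimate moves halfway.
x, P = kalman_update(10.0, 1.0, 10.4, 1.0)
```

In the error-state formulation, the same update is applied to the IMU-propagated error state, after which the error is folded back into the nominal state.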


18.
This paper presents a nonlinear controller for terrain following of a vertical take-off and landing (VTOL) vehicle. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera, IMU and barometric altimeter), maneuvering over a textured rough terrain made of a concatenation of planar surfaces. Assuming that the forward velocity is separately regulated to a desired value, the proposed control approach ensures terrain following and guarantees that the vehicle does not collide with the ground during the task. The proposed control acquires optical flow from multiple spatially separated observation points, typically obtained via multiple cameras or non-collinear directions in a single camera. The proposed control algorithm has been tested extensively in simulation and then implemented on a quadrotor UAV to demonstrate the performance of the closed-loop system.

19.
The application of adequate nitrogen (N) fertilizers to grass seed crops is important to achieve high seed yield. Application of N will inevitably result in over-fertilization on some fields and, concomitantly, an increased risk of adverse environmental impacts, such as ground- and/or surface-water contamination. This study was designed to estimate the N status of two grass seed crops: red fescue (Festuca rubra L.) and perennial ryegrass (Lolium perenne L.) using images captured with an unmanned aerial vehicle (UAV) mounted multispectral camera. Two types of UAV, a fixed-wing UAV and a multi-rotor UAV, operating at two different heights and mounted with the same multispectral camera, were used in different field experiments at the same location in Denmark in the period from 432 to 861 growing degree-days. Seven vegetation indices, calculated from multispectral images with four bands: red, green, red edge and near infrared (NIR), were evaluated for their relationship to dry matter (DM), N concentration, N uptake and N nutrition index (NNI). The results showed a better prediction of N concentration, N uptake and NNI, than DM using vegetation indices. Furthermore, among all vegetation indices, two red-edge-based indices, normalized difference red edge (NDRE) and red edge chlorophyll index (CIRE), performed best in estimating N concentration (R2 = 0.69-0.88), N uptake (R2 = 0.41-0.84) and NNI (R2 = 0.47-0.86). In addition, there was no effect from the choice of UAV, and thereby flight height, on the estimation of NNI. The choice of UAV type therefore seems not to influence the possibility of diagnosing N status in grass seed crops. We conclude that it is possible to estimate NNI based on multispectral images from drone-mounted cameras, and the method could guide farmers as to whether they should apply additional N to the field. We also conclude that further research should focus on estimating the quantity of N to apply and on further developing the method to include more grass species.
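The two best-performing indices are standard red-edge formulas, NDRE = (NIR - RE)/(NIR + RE) and CIRE = NIR/RE - 1, computed per pixel from the band reflectances:

```python
import numpy as np

def ndre(nir, red_edge):
    """Normalized difference red edge index, in [-1, 1]."""
    return (nir - red_edge) / (nir + red_edge)

def cire(nir, red_edge):
    """Red edge chlorophyll index."""
    return nir / red_edge - 1.0

# Example per-pixel reflectances from the NIR and red-edge bands
# (illustrative values, not from the study).
nir = np.array([0.50, 0.40])
re = np.array([0.25, 0.10])
```

In practice these maps would be averaged per plot and regressed against measured N concentration, N uptake, or NNI, as in the study.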

20.
This paper presents a vision-based navigation strategy for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using a single embedded camera observing natural landmarks. In the proposed approach, images of the environment are first sampled, stored and organized as a set of ordered key images (visual path) which provides a visual memory of the environment. The robot navigation task is then defined as a concatenation of visual path subsets (called visual route) linking the current observed image and a target image belonging to the visual memory. The UAV is controlled to reach each image of the visual route using a vision-based control law adapted to its dynamic model and without explicitly planning any trajectory. This framework is largely substantiated by experiments with an X4-flyer equipped with a fisheye camera.
