Similar Literature
20 similar records found (search time: 62 ms)
1.
The design of reliable navigation and control systems for Unmanned Aerial Vehicles (UAVs) based only on visual cues and inertial data has many unsolved challenging problems, ranging from hardware and software development to pure control-theoretical issues. This paper addresses these issues by developing and implementing an adaptive vision-based autopilot for navigation and control of small and mini rotorcraft UAVs. The proposed autopilot includes a Visual Odometer (VO) for navigation in GPS-denied environments and a nonlinear control system for flight control and target tracking. The VO estimates the rotorcraft ego-motion by identifying and tracking visual features in the environment, using a single camera mounted on-board the vehicle. The VO has been augmented by an adaptive mechanism that fuses optic flow and inertial measurements to determine the range and to recover the 3D position and velocity of the vehicle. The adaptive VO pose estimates are then exploited by a nonlinear hierarchical controller for achieving various navigational tasks such as take-off, landing, hovering, trajectory tracking and target tracking. Furthermore, the asymptotic stability of the entire closed-loop system has been established using cascaded-systems and adaptive control theories. Experimental flight-test data over various ranges of the flight envelope illustrate that the proposed vision-based autopilot performs well and allows a mini rotorcraft UAV to autonomously achieve advanced flight behaviours using vision.
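The core idea of fusing optic flow with inertial data to recover range can be illustrated with a toy one-dimensional sketch. This is not the authors' adaptive mechanism; it only assumes a downward-looking camera over flat ground, an IMU-derived velocity, and an illustrative first-order blending gain:

```python
def estimate_range(flow, velocity, z_prev, gain=0.3):
    """One step of a toy adaptive range estimator.

    For a downward-looking camera translating parallel to flat ground,
    translational optic flow is approximately flow = velocity / range,
    so an instantaneous range reading is velocity / flow; blending it
    with the previous estimate smooths out measurement noise.
    """
    z_meas = velocity / max(flow, 1e-6)          # range implied by the flow
    return (1 - gain) * z_prev + gain * z_meas   # first-order smoothing

# Converge from a wrong initial guess toward the true range (10 m here,
# since 2.0 m/s of velocity produces 0.2 rad/s of flow).
z = 5.0
for _ in range(50):
    z = estimate_range(flow=0.2, velocity=2.0, z_prev=z)
```

Once range is known, the same flow measurements yield metric velocity, which is the scale information a monocular visual odometer otherwise lacks.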

2.
This paper presents a nonlinear controller for terrain following by a vertical take-off and landing (VTOL) vehicle. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera, IMU and barometric altimeter), maneuvering over a textured rough terrain made of a concatenation of planar surfaces. Assuming that the forward velocity is separately regulated to a desired value, the proposed control approach ensures terrain following and guarantees that the vehicle does not collide with the ground during the task. The proposed controller uses optic flow acquired from multiple spatially separated observation points, typically obtained via multiple cameras or non-collinear viewing directions in a single camera. The control algorithm has been tested extensively in simulation and then implemented on a quadrotor UAV to demonstrate the performance of the closed-loop system.

3.
In recent years, Unmanned Air Vehicles (UAVs) have become more and more important. These vehicles are employed in many applications, from military operations to civilian tasks. In situations where the global positioning system (GPS) and inertial navigation system (INS) do not function, or as an additional sensor, computer vision can be used. Having a 360° view, catadioptric cameras can be very useful, serving as measurement units, obstacle-avoidance sensors or navigation planners. Although much innovative research has been done on such cameras, their employment in UAVs is very new. In this paper, we present the use of catadioptric systems in UAVs to estimate vehicle attitude using parallel lines that exist on many structures in an urban environment. After an explanation of the algorithm, the UAV modeling and control are presented. To increase the estimation and control speed, an Extended Kalman Filter (EKF) and multi-threading are used, and speeds up to 40 fps are obtained. Various simulations demonstrate the effectiveness of the estimation algorithms as well as the UAV controllers. A custom test stand was designed to perform experiments on the UAV. Finally, we present the experiments and the results of the estimation and control algorithms on a real model helicopter. EKF-based attitude estimation and stabilization using catadioptric images has been found to be a reliable alternative to other sensors.

4.
The use of unmanned aerial vehicles (UAVs) in the military, scientific, and civilian sectors has been increasing drastically in recent years. This study presents algorithms for the visual-servo control of a UAV, in which a quadrotor helicopter is stabilized with visual information through the control loop. Unlike previous studies that use a pose-estimation approach, which is time-consuming and subject to various errors, visual-servo control is more reliable and faster. The method requires a camera on board the vehicle, which is already available on various UAV systems; the UAV with a camera behaves like an eye-in-hand visual servoing system. In this study the controller was designed using two different approaches: the image-based visual servo control method and the hybrid visual servo control method. Various simulations were developed in MATLAB, in which the quadrotor aerial vehicle was visual-servo controlled. To show the effectiveness of the algorithms, experiments were performed on a model quadrotor UAV, demonstrating successful performance.

5.
Conventional particle-filtering-based visual ego-motion estimation, or visual odometry, often suffers from large local linearization errors in the case of abrupt camera motion. The main contribution of this paper is a novel particle-filtering-based visual ego-motion estimation algorithm that is especially robust to abrupt camera motion. The robustness is achieved by multi-layered importance sampling via particle swarm optimization (PSO), which iteratively moves particles to higher-likelihood regions without local linearization of the measurement equation. Furthermore, we make the proposed algorithm real-time by reformulating the conventional vector-space PSO algorithm to respect the geometry of the special Euclidean group SE(3), the Lie group representing the space of 3D camera poses. The performance of the proposed algorithm is experimentally evaluated and compared with local-linearization and unscented-particle-filter-based visual ego-motion estimation algorithms on both simulated and real data sets.
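The conventional vector-space PSO update the paper starts from can be sketched on a scalar toy likelihood; the SE(3) reformulation that constitutes the paper's contribution is not shown, and the particle count, iteration budget and inertia/attraction coefficients below are generic textbook values, not the paper's:

```python
import random

def pso_maximize(likelihood, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Standard vector-space PSO maximizing `likelihood` over scalars."""
    random.seed(0)
    xs = [random.uniform(-10.0, 10.0) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                        # per-particle best position
    gbest = max(xs, key=likelihood)      # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # pull each particle toward its own and the swarm's best
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if likelihood(xs[i]) > likelihood(pbest[i]):
                pbest[i] = xs[i]
        gbest = max(pbest, key=likelihood)
    return gbest

# Toy likelihood peaked at "pose" 3.0: the swarm should find it.
best = pso_maximize(lambda x: -(x - 3.0) ** 2)
```

Because the update needs only likelihood evaluations, no Jacobian of the measurement equation is required, which is exactly why the approach avoids local linearization.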

6.
If a visual observer moves through an environment, the patterns of light that impinge on its retina vary, leading to changes in sensed brightness. Spatial shifts of brightness patterns in the 2D image over time are called optic flow. In contrast to optic flow, visual motion fields denote the displacement of 3D scene points projected onto the camera's sensor surface. For translational and rotational movement through a rigid scene, parametric models of visual motion fields have been defined. Besides ego-motion, these models provide access to relative depth, and both ego-motion and depth information are useful for visual navigation. Over the past 30 years, methods for ego-motion estimation based on models of visual motion fields have been developed. In this review we identify five core optimization constraints which are used by 13 methods together with different optimization techniques. In the literature, methods for ego-motion estimation have typically been evaluated using an error measure that tests only a specific ego-motion, and most simulation studies used only a Gaussian noise model. In contrast, we test multiple types and instances of ego-motion: one type is a fixating ego-motion, another is a curvilinear ego-motion. Based on simulations we study properties such as statistical bias, consistency, variability of depths, and the robustness of the methods with respect to Gaussian or outlier noise models. To improve estimates for noisy visual motion fields, some of the 13 methods are combined with techniques for robust estimation such as M-estimators or RANSAC. Furthermore, a realistic scenario of a stereo image sequence has been generated and used to evaluate ego-motion estimation methods supplied with estimated optic flow and depth information.
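The parametric model of the visual motion field referred to above is, in its classical form, the instantaneous motion field of a calibrated pinhole camera (the Longuet-Higgins and Prazdny model). A sketch with unit focal length, where the translational term scales with inverse depth and the rotational term is depth-independent:

```python
def motion_field(x, y, depth, t, omega):
    """Instantaneous motion field (u, v) at normalized image point (x, y)
    for a calibrated pinhole camera with focal length 1, translating
    with t = (tx, ty, tz) and rotating with omega = (wx, wy, wz).
    """
    tx, ty, tz = t
    wx, wy, wz = omega
    # translational component (depends on depth) + rotational component
    u = (-tx + x * tz) / depth + (x * y * wx - (1 + x ** 2) * wy + y * wz)
    v = (-ty + y * tz) / depth + ((1 + y ** 2) * wx - x * y * wy - x * wz)
    return u, v

# Pure forward translation: flow radiates from the focus of expansion.
u, v = motion_field(0.5, 0.0, depth=2.0, t=(0.0, 0.0, 1.0), omega=(0.0, 0.0, 0.0))
```

Ego-motion estimation methods invert this model: given measured flow at many points, they solve for t, omega and the inverse depths, which is why the five optimization constraints in the review all derive from this equation.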

7.
Small unmanned aerial vehicles (UAVs) are becoming popular among researchers and vital platforms for several autonomous mission systems. In this paper, we present the design and development of a miniature autonomous rotorcraft weighing less than 700 g and capable of waypoint navigation, trajectory tracking, visual navigation, precise hovering, and automatic takeoff and landing. In an effort to make advanced autonomous behaviors available to mini- and microrotorcraft, an embedded and inexpensive autopilot was developed. To compensate for the weaknesses of the low-cost equipment, we put our efforts into designing a reliable model-based nonlinear controller that uses an inner-loop outer-loop control scheme. The developed flight controller considers the system's nonlinearities, guarantees the stability of the closed-loop system, and results in a practical controller that is easy to implement and to tune. In addition to controller design and stability analysis, the paper provides information about the overall control architecture and the UAV system integration, including guidance laws, navigation algorithms, control system implementation, and autopilot hardware. The guidance, navigation, and control (GN&C) algorithms were implemented on a miniature quadrotor UAV that has undergone an extensive program of flight tests, resulting in various flight behaviors under autonomous control from takeoff to landing. Experimental results that demonstrate the operation of the GN&C algorithms and the capabilities of our autonomous micro air vehicle are presented. © 2009 Wiley Periodicals, Inc.

8.
During the takeoff and landing phases, large fixed-wing UAVs must be flown manually by remote control. Traditional flight-control systems composed of a 2D map, virtual instruments, and an onboard camera commonly suffer from unintuitive data feedback and a poor operating experience. Using 3D virtual-terrain simulation technology, an accurate 3D virtual terrain of the airport surroundings and the mission area was built, and the real UAV's flight data are synchronized in real time onto a virtual UAV in the virtual scene. This provides the UAV operator with a more intuitive and convenient operating view. When the onboard camera feed is lost due to bad weather or other causes, it can greatly reduce the risk of operator error. The function connects to the existing flight-control system as auxiliary equipment, requiring no modification of the existing system; it is highly independent and easy to deploy.

9.
To participate in the Outback Medical Express UAV Challenge 2016, a vehicle was designed and tested that can autonomously hover precisely, take off and land vertically, fly fast forward efficiently, and use computer vision to locate a person and a suitable landing location. The vehicle is a novel hybrid tail-sitter combining a delta-shaped biplane fixed wing and a conventional helicopter rotor. The rotor and wing are mounted perpendicularly to each other, and the entire vehicle pitches down to transition from hover to fast forward flight, where the rotor serves as propulsion. To deliver sufficient thrust in hover while remaining efficient in fast forward flight, a custom rotor system was designed. The theoretical design was validated with energy measurements, wind tunnel tests, and application in real-world missions. A rotor head and corresponding control algorithm were developed to allow transitioning flight with the nonconventional rotor dynamics caused by the fuselage-rotor interaction. Dedicated electronics were designed that meet vehicle needs and comply with regulations to allow safe flight beyond visual line of sight. Vision-based search and guidance algorithms running on a stereo-vision fish-eye camera were developed and tested to locate a person in cluttered terrain never seen before. Flight tests and competition participation illustrate the applicability of the DelftaCopter concept.

10.
This paper addresses the perception, control, and trajectory planning for an aerial platform to identify and land on a car moving at 15 km/h. The hexacopter unmanned aerial vehicle (UAV), equipped with onboard sensors and a computer, detects the car using a monocular camera and predicts the car's future movement using a nonlinear motion model. While following the car, the UAV lands on its roof and attaches itself using magnetic legs. The proposed system is fully autonomous from takeoff to landing. Numerous field tests were conducted throughout the year-long development and preparations for the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2017 competition, for which the system was designed. We propose a novel control system in which a model predictive controller is used in real time to generate a reference trajectory for the UAV, which is then tracked by a nonlinear feedback controller. This combination allows predictions of the car's motion to be tracked with minimal position error. The evaluation presents three successful autonomous landings during MBZIRC 2017, where our system achieved the fastest landing among all competing teams.

11.
This paper presents the control of an indoor unmanned aerial vehicle (UAV) using multi-camera visual feedback. For autonomous flight of the indoor UAV, instead of using onboard sensor information, a visual feedback concept is employed through the development of an indoor flight test-bed. The test-bed consists of four major components: the multi-camera system, a ground computer, an onboard color marker set, and a quad-rotor UAV. Since the onboard markers are attached at pre-defined locations, the position and attitude of the UAV can be estimated by a marker detection algorithm and triangulation. Additionally, this study introduces a filter algorithm to obtain the full 6-degree-of-freedom (DOF) pose estimate, including velocities and angular rates. The filter algorithm also enhances the performance of the vision system by compensating for the weaknesses of low-cost cameras, such as poor resolution and large noise. Moreover, for the pose estimation of multiple vehicles, a data-association algorithm using the geometric relations between cameras is proposed. The control system is designed based on classical proportional-integral-derivative (PID) control, which uses the position, velocity and attitude from the vision system and the angular rates from a rate-gyro sensor. The paper concludes with both ground and flight test results illustrating the performance and properties of the proposed indoor flight test-bed and the control system using multi-camera visual feedback.
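The classical PID law used for such a position loop can be sketched as follows. The gains and the toy velocity-commanded plant are illustrative choices, not the paper's tuned values or vehicle model:

```python
class PID:
    """Textbook PID controller acting on a tracking error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, err):
        self.integral += err * self.dt
        # no derivative on the very first sample
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy velocity-commanded plant (x_dot = u) to a 1 m setpoint,
# standing in for one vision-measured position channel.
dt = 0.01
ctrl = PID(kp=2.0, ki=0.2, kd=0.05, dt=dt)
x = 0.0
for _ in range(5000):
    x += ctrl.update(1.0 - x) * dt
```

In the setup described above, the vision system supplies the position and velocity entering the error terms, while the rate gyro supplies the angular rates for the inner attitude loops.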

12.
Rover navigation using stereo ego-motion   (Cited by: 8; self-citations: 0; citations by others: 8)
Robust navigation for mobile robots over long distances requires an accurate method for tracking the robot position in the environment. Promising techniques for position estimation by determining the camera ego-motion from monocular or stereo sequences have been previously described. However, long-distance navigation requires both a high level of robustness and a low rate of error growth. In this paper, we describe a methodology for long-distance rover navigation that meets these goals using robust estimation of ego-motion. The basic method is a maximum-likelihood ego-motion algorithm that models the error in stereo matching as a normal distribution elongated along the (parallel) camera viewing axes. Several mechanisms are described for improving navigation robustness in the context of this methodology. In addition, we show that a system based on only camera ego-motion estimates will accumulate errors with super-linear growth in the distance traveled, owing to increasing orientation errors. When an absolute orientation sensor is incorporated, the error growth can be reduced to a linear function of the distance traveled. We have tested these techniques using both extensive simulation and hundreds of real rover images and have achieved a low, linear rate of error growth. This method has been implemented to run on board a prototype Mars rover.
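The core step of frame-to-frame ego-motion from stereo is a least-squares rigid alignment of the two reconstructed 3-D point sets. A minimal unweighted sketch using the SVD (Kabsch) solution; the paper's maximum-likelihood version additionally weights each residual by an anisotropic stereo-error covariance elongated along the viewing axis, which is not reproduced here:

```python
import numpy as np

def estimate_motion(P, Q):
    """Least-squares rigid motion (R, t) aligning 3-D point sets P -> Q
    (rows are points), via the SVD solution to the orthogonal
    Procrustes problem."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 30-degree yaw and translation from noiseless points.
rng = np.random.default_rng(0)
P = rng.normal(size=(30, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
R_est, t_est = estimate_motion(P, P @ R_true.T + t_true)
```

Chaining such per-frame estimates is what produces the super-linear error growth discussed above: each small rotation error rotates all subsequent translation estimates.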

13.
Adaptive robotic visual tracking: theory and experiments   (Cited by: 2; self-citations: 0; citations by others: 2)
The use of a vision sensor in the feedback loop is addressed within the controlled active vision framework. Algorithms are proposed for the solution of the robotic (eye-in-hand configuration) visual tracking and servoing problem. Visual tracking is stated as a problem of combining control with computer vision. The sum-of-squared-differences optical flow is used to compute the vector of discrete displacements. The displacements are fed to an adaptive controller (self-tuning regulator) that creates commands for a robot control system. The procedure is based on the online estimation of the relative distance of the target from the camera, but only partial knowledge of the relative distance is required, obviating the need for offline calibration. Three different adaptive control schemes have been implemented, both in simulation and in experiments. The computational complexity and the experimental results demonstrate that the proposed algorithms can be implemented in real time.
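The sum-of-squared-differences displacement computation can be sketched as a brute-force search over integer offsets; the window and search radii below are arbitrary illustrative values, not the paper's parameters:

```python
import numpy as np

def ssd_displacement(prev, curr, y, x, win=2, search=3):
    """Integer displacement of the patch around (y, x) between frames,
    chosen to minimize the sum-of-squared-differences (SSD)."""
    ref = prev[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    best_ssd, best = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - win:y + dy + win + 1,
                        x + dx - win:x + dx + win + 1].astype(float)
            ssd = ((ref - cand) ** 2).sum()
            if ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best

# A frame shifted down 1 px and right 2 px should yield (1, 2).
rng = np.random.default_rng(1)
prev = rng.integers(0, 256, size=(20, 20))
curr = np.roll(prev, shift=(1, 2), axis=(0, 1))
d = ssd_displacement(prev, curr, y=10, x=10)
```

It is these discrete displacement vectors, measured at tracked features, that the self-tuning regulator consumes as its feedback signal.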

14.
The view-independent visualization of 3D scenes is most often based on rendering accurate 3D models or on image-based rendering techniques. To compute the 3D structure of a scene from a moving vision sensor, or to use image-based rendering approaches, we need to be able to estimate the motion of the sensor from the recorded image information with high accuracy, a problem that has been well studied. In this work, we investigate the relationship between camera design and our ability to perform accurate 3D photography by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. By relating the differential structure of the time-varying plenoptic function to different known and new camera designs, we establish a hierarchy of cameras based upon the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera, for which the structure-from-motion problem is non-linear and ill-posed. At the high end is a camera we call the full-field-of-view polydioptric camera, for which the motion estimation problem can be solved independently of the depth of the scene, leading to fast and robust algorithms for 3D photography. In between are multiple-view cameras with a large field of view, which we have built, as well as omni-directional sensors.

15.
We provide a sensor fusion framework for solving the problem of joint ego-motion and road geometry estimation. More specifically, we employ a sensor fusion framework that makes systematic use of measurements from a forward-looking radar and camera, a steering-wheel angle sensor, wheel-speed sensors and inertial sensors to compute good estimates of the road geometry and the motion of the ego vehicle on this road. To solve this problem we derive dynamical models for the ego vehicle, the road and the leading vehicles. The main difference from existing approaches is that we make use of a new dynamic model for the road. An extended Kalman filter is used to fuse the data and to filter measurements from the camera in order to improve the road geometry estimate. The proposed solution has been tested and compared to existing algorithms for this problem, using measurements from authentic traffic environments on public roads in Sweden. The results clearly indicate that the proposed method provides better estimates.
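The Kalman-filter fusion step mentioned above can be illustrated in its simplest scalar form; the actual filter runs on the full vehicle-and-road state with the derived dynamic models, and the random-walk model and noise values here are made-up illustrations:

```python
def kf_step(x, P, z, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter with a
    random-walk state model: x is the estimate (e.g. a road-geometry
    parameter), P its variance, z a camera-derived measurement with
    variance r, and q the process noise."""
    # predict: random walk, so the mean is unchanged and variance grows
    P = P + q
    # update: blend prediction and measurement by the Kalman gain
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

# Repeated measurements of a constant true value pull the estimate in.
x, P = 0.0, 1.0
for _ in range(50):
    x, P = kf_step(x, P, z=2.0)
```

The extended Kalman filter follows the same predict/update cycle, with the nonlinear vehicle and road models linearized about the current estimate at each step.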

16.
In this paper, we design consensus algorithms for multiple unmanned aerial vehicles (UAVs). We mainly focus on control design in the face of measurement noise and propose a position consensus controller based on sliding-mode control using distributed UAV information. Within the framework of Lyapunov theory, it is shown that all signals in the closed-loop multi-UAV system are stabilized by the proposed algorithm, while consensus errors are uniformly ultimately bounded. Moreover, for each local UAV, we propose a mechanism to define trustworthiness, based on which the edge weights are tuned to eliminate negative influence from stubborn agents or agents exposed to extremely noisy measurements. Finally, we develop software for a nano-UAV platform, on which we implement our algorithms to address measurement noise in UAV flight tests. The experimental results validate the effectiveness of the proposed algorithms.

17.
In this paper, two techniques to control UAVs (Unmanned Aerial Vehicles) based on visual information are presented. The first is based on the detection and tracking of planar structures from an on-board camera, while the second is based on the detection and 3D reconstruction of the position of the UAV using an external camera system. Both strategies are tested with a VTOL (vertical take-off and landing) UAV, and results show good behavior of the visual systems (estimation precision and frame rate) when estimating the helicopter's position and using the extracted information to control the UAV.

18.
Robust camera pose and scene structure analysis for service robotics   (Cited by: 1; self-citations: 0; citations by others: 1)
Successful path planning and object manipulation in service robotics applications rely both on a good estimate of the robot's position and orientation (pose) in the environment and on a reliable understanding of the visualized scene. In this paper a robust real-time camera-pose and scene-structure estimation system is proposed. First, the pose of the camera is estimated through the analysis of so-called tracks. The tracks include key features from the imaged scene and geometric constraints which are used to solve the pose estimation problem. Second, based on the calculated pose of the camera, i.e. the robot, the scene is analyzed via a robust depth segmentation and object classification approach. In order to reliably segment the objects' depth, a feedback control technique at the image-processing level is used to improve the robustness of the robotic vision system with respect to external influences, such as cluttered scenes and variable illumination conditions. The control strategy detailed in this paper is based on the traditional open-loop mathematical model of the depth estimation process. In order to control a robotic system, the obtained visual information is classified into objects of interest and obstacles. The proposed scene analysis architecture is evaluated through experimental results within a robotic collision-avoidance system.

19.
Quad-robot-type (QRT) unmanned aerial vehicles (UAVs) have been developed for the quick detection and observation of circumstances in calamity environments such as indoor fire spots. The UAV is equipped with four propellers, each driven by an electric motor, an embedded controller, an Inertial Navigation System (INS) using three rate gyros and accelerometers, a CCD (charge-coupled device) camera with a wireless communication transmitter for observation, and an ultrasonic range sensor for height control. Accurate modeling and robust flight control of QRT UAVs are the main topics of this work. A rigorous dynamic model of a QRT UAV is obtained in both the reference and body-frame coordinate systems. A disturbance-observer (DOB) based controller using the derived dynamic models is also proposed for robust hovering control. The control input induced by the DOB makes it possible to use simple equations of motion while still satisfying the accurately derived dynamics. The developed hovering robot shows stable flying performance using the DOB and the vision-based localization method. Even when the model is inaccurate, the DOB method can design a controller by regarding the inaccurate part of the model and sensor noise as disturbances. The UAV can also avoid obstacles using eight IR (infrared) and four ultrasonic range sensors. This kind of micro UAV can be widely used for calamity observation in harmful environments without endangering human beings. The experimental results show the performance of the proposed control algorithm.

20.
The problem considered in this paper involves the design of a vision-based autopilot for small and micro Unmanned Aerial Vehicles (UAVs). The proposed autopilot is based on an optic-flow-based vision system for autonomous localization and scene mapping, and a nonlinear control system for flight control and guidance. This paper focuses on the development of a real-time 3D vision algorithm for estimating optic flow, aircraft self-motion and a depth map, using a low-resolution onboard camera and a low-cost Inertial Measurement Unit (IMU). Our implementation is based on 3 Nested Kalman Filters (3NKF) and results in an efficient and robust estimation process. The vision and control algorithms have been implemented on a quadrotor UAV and demonstrated in real-time flight tests. Experimental results show that the proposed vision-based autopilot enabled a small rotorcraft to achieve fully autonomous flight using information extracted from optic flow.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.    京ICP备09084417号-23
