Similar Literature
20 similar documents found
1.
Localization and tracking of vehicles is still an important issue in GPS-denied environments (both indoors and outdoors) where accurate motion is required. In this work, a localization system based on the random disposition of LiDAR sensors (which share a partially common field of view) and on the use of the Hausdorff distance is addressed. The proposed system uses the Hausdorff distance to estimate both the position of the LiDAR sensors and the pose of the vehicle as it drives within the environment. Our approach is not restricted by the number of LiDAR sensors (the estimation procedure is asynchronous), the number of vehicles (it is a multidimensional approach), or the nature of the environment. However, it is implemented in open spaces, limited by the range of the LiDAR sensors and the geometry of the vehicle. An empirical analysis of the presented approach is also included, showing that the localization error remains bounded within approximately 50 cm. Real-time experiments validating the proposed localization and tracking techniques, as well as a discussion of the pros and cons of our proposal, are also presented.
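To make the Hausdorff-distance scoring concrete, here is a minimal Python sketch, not the authors' implementation: it assumes 2D point sets and a hypothetical (x, y, theta) pose parametrization, and ranks candidate vehicle poses against a reference map with SciPy's directed Hausdorff distance.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff   # returns (distance, idx_u, idx_v)

def transform(points, x, y, theta):
    """Apply a 2D rigid transform (candidate pose) to an Nx2 point set."""
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, s], [-s, c]]) + np.array([x, y])

def pose_score(scan_xy, map_xy, pose):
    """Directed Hausdorff distance from the transformed scan to the map: small
    when every scan point, placed at this pose, lies near some map point."""
    return directed_hausdorff(transform(scan_xy, *pose), map_xy)[0]

# Toy usage: the true pose gets the smallest score among a few candidates.
rng = np.random.default_rng(0)
scan_xy = rng.uniform(-5.0, 5.0, size=(200, 2))       # points seen in the vehicle frame
map_xy = transform(scan_xy, 2.0, -1.0, 0.3)           # same structure in map coordinates
candidates = [(0.0, 0.0, 0.0), (2.0, -1.0, 0.3), (2.5, -1.2, 0.2)]
print(min(candidates, key=lambda p: pose_score(scan_xy, map_xy, p)))
```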

2.
In this paper, we address the problem of globally localizing and tracking the pose of a camera-equipped micro aerial vehicle (MAV) flying in urban streets at low altitudes without GPS. An image-based global positioning system is introduced to localize the MAV with respect to the surrounding buildings. We propose a novel air-ground image-matching algorithm to search the airborne image of the MAV within a ground-level, geotagged image database. Based on the detected matching image features, we infer the global position of the MAV by back-projecting the corresponding image points onto a cadastral three-dimensional city model. Furthermore, we describe an algorithm to track the position of the flying vehicle over several frames and to correct the accumulated drift of the visual odometry whenever a good match is detected between the airborne and the ground-level images. The proposed approach is tested on a 2 km trajectory with a small quadrocopter flying in the streets of Zurich. Our vision-based global localization can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus outperforming conventional visual place-recognition approaches. The dataset is made publicly available to the research community. To the best of our knowledge, this is the first work that studies and demonstrates global localization and position tracking of a drone in urban streets with a single onboard camera.
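A minimal sketch of the back-projection step described above, under simplifying assumptions: a single facade plane (n·X = d) stands in for the cadastral 3D city model, and the intrinsics K and camera pose (R_wc, t_wc) are hypothetical values for illustration, not the authors' pipeline.

```python
import numpy as np

def backproject_to_plane(pixel, K, R_wc, t_wc, plane_n, plane_d):
    """Back-project a matched pixel onto a model plane n.X = d.
    K: intrinsics; R_wc, t_wc: camera rotation/position in world coordinates.
    Returns the 3D intersection point, i.e. a global fix for the matched feature."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R_wc @ ray_cam                                   # viewing ray in the world
    s = (plane_d - plane_n @ t_wc) / (plane_n @ ray_world)       # ray/plane intersection
    return t_wc + s * ray_world

# Toy usage with hypothetical numbers: camera 10 m above the origin, facade plane at z = 30 m.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
print(backproject_to_plane((320.0, 240.0), K, np.eye(3), np.array([0.0, 0.0, 10.0]),
                           plane_n=np.array([0.0, 0.0, 1.0]), plane_d=30.0))
```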

3.
GPS-denied closed-loop autonomous control of unstable Unmanned Aerial Vehicles (UAVs) such as rotorcraft using information from a monocular camera has been an open problem. Most proposed Vision-aided Inertial Navigation Systems (V-INSs) have been too computationally intensive or do not have sufficient integrity for closed-loop flight. We provide an affirmative answer to the question of whether V-INSs can be used to sustain prolonged real-world GPS-denied flight by presenting a V-INS that is validated through autonomous flight tests over prolonged closed-loop dynamic operation in both indoor and outdoor GPS-denied environments with two rotorcraft unmanned aircraft systems (UASs). The architecture efficiently combines visual feature information from a monocular camera with measurements from inertial sensors. Inertial measurements are used to predict the frame-to-frame transition of online-selected feature locations, and the difference between predicted and observed feature locations is used to bound the inertial measurement unit drift in real time, estimate its bias, and account for initial misalignment errors. A novel algorithm to manage a library of features online is presented that can add or remove features based on a measure of relative confidence in each feature location. The resulting V-INS is sufficiently efficient and reliable to enable real-time implementation on resource-constrained aerial vehicles. The presented algorithms are validated on multiple platforms in real-world conditions: through a 16-min flight test, including an autonomous landing, of a 66 kg rotorcraft UAV operating in an uncontrolled outdoor environment without using GPS, and through a micro-UAV operating in a cluttered, unmapped, and gusty indoor environment. © 2013 Wiley Periodicals, Inc.
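The prediction/innovation idea at the core of such a V-INS can be sketched as follows. This is an illustrative fragment, not the validated system from the paper: the pinhole intrinsics K and the variable names are assumptions, and only the residual computation that would feed a filter update is shown.

```python
import numpy as np

K = np.array([[450.0, 0.0, 320.0],    # hypothetical pinhole intrinsics
              [0.0, 450.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(p_cam):
    """Pinhole projection of a 3D point given in the camera frame."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def predict_feature_pixel(p_world, R_wc, t_wc):
    """Predict where a tracked 3D feature should appear, given the
    IMU-propagated camera pose (rotation R_wc, position t_wc in the world)."""
    p_cam = R_wc.T @ (p_world - t_wc)          # world -> camera frame
    return project(p_cam)

def innovation(observed_px, p_world, R_wc, t_wc):
    """Residual between the tracked pixel and the IMU-based prediction;
    in a V-INS this residual drives the update that bounds IMU drift."""
    return np.asarray(observed_px) - predict_feature_pixel(p_world, R_wc, t_wc)
```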

4.
The development of autonomous mobile machines to perform useful tasks in real work environments is currently being impeded by concerns over effectiveness, commercial viability and, above all, safety. This paper introduces a case study of a robotic excavator to explore a series of issues around system development, navigation in unstructured environments, autonomous decision making, and changing the behaviour of autonomous machines to suit the prevailing demands of users. The adoption of the Real-Time Control Systems (RCS) architecture (Albus, 1991) is proposed as a universal framework for the development of intelligent systems. In addition, it is explained how the use of Partially Observable Markov Decision Processes (POMDPs) (Kaelbling et al., 1998) can form the basis of decision making in the face of uncertainty and how the technique can be effectively incorporated into the RCS architecture. Particular emphasis is placed on ensuring that the resulting behaviour is both task effective and adequately safe, and it is recognised that these two objectives may be in opposition and that the desired relative balance between them may change. The concept of an autonomous system having “values” is introduced through the use of utility theory. Results of limited simulation experiments are reported which demonstrate that these techniques can create intelligent systems capable of modifying their behaviour to exhibit either ‘safety conscious’ or ‘task achieving’ personalities.
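As an illustration of how utility theory can encode such "values", the toy sketch below selects the action with the highest expected utility under a belief over hidden states, with a weight that trades task reward against a safety penalty. It is a one-step expected-utility decision with invented numbers, not the RCS/POMDP machinery of the paper.

```python
import numpy as np

# Hypothetical utilities: rows = hidden states ("clear", "person nearby"),
# columns = actions ("dig", "slow", "stop"). The task term rewards progress;
# the safety term penalises digging when a person might be nearby.
task_utility   = np.array([[10.0,  4.0, 0.0],
                           [10.0,  4.0, 0.0]])
safety_utility = np.array([[  0.0,   0.0, 0.0],
                           [-50.0, -10.0, 0.0]])

def best_action(belief, safety_weight):
    """Pick the action maximising expected utility under the belief; safety_weight
    tunes the 'personality' between task-achieving (low) and safety-conscious (high)."""
    expected = belief @ (task_utility + safety_weight * safety_utility)
    return int(np.argmax(expected)), expected

belief = np.array([0.7, 0.3])                         # 30% chance a person is nearby
print(best_action(belief, safety_weight=0.2))         # -> action 0 ("dig"): task-achieving
print(best_action(belief, safety_weight=2.0))         # -> action 2 ("stop"): safety-conscious
```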

5.
A new approach to autonomous land vehicle (ALV) navigation by person following is proposed. This approach is based on sequential pattern recognition and computer vision techniques, and maintaining smooth indoor navigation is the main goal. The ALV is guided automatically to follow a person who walks in front of the vehicle. The vehicle can be used as an autonomous handcart, go-cart, buffet car, golf cart, weeder, etc., in various applications. Sequential pattern recognition is used to design a classifier for deciding whether the person in front of the vehicle is walking straight ahead or is too far to the right or left of the vehicle. Multiple images in a sequence are used as input to the system. Computer vision techniques are used to detect and locate the person in front of the vehicle. By sequential pattern recognition, the relation between the location of the person and that of the vehicle is categorized into three classes. Corresponding adjustments of the direction of the vehicle are computed to achieve smooth navigation. The approach is implemented on a real ALV, and successful and smooth navigation sessions confirm the feasibility of the approach. ©1999 John Wiley & Sons, Inc.

6.
We investigate the problem of cooperative multi-robot planning in unknown environments, which is important in numerous applications in robotics. The research community has been actively developing belief space planning approaches that account for the different sources of uncertainty within planning, recently also considering uncertainty in the environment observed by planning time. We further advance the state of the art by reasoning about future observations of environments that are unknown at planning time. The key idea is to incorporate within the belief indirect multi-robot constraints that correspond to these future observations. Such a formulation facilitates a framework for active collaborative state estimation while operating in unknown environments. In particular, it can be used to identify the best robot actions or trajectories among given candidates generated by existing motion planning approaches, or to refine nominal trajectories into locally optimal paths using direct trajectory optimization techniques. We demonstrate our approach in a multi-robot autonomous navigation scenario and consider its applicability for autonomous navigation in unknown obstacle-free and obstacle-populated environments. Results indicate that modeling future multi-robot interaction within the belief makes it possible to determine robot actions (paths) that yield significantly improved estimation accuracy.

7.
An algorithmic solution method is presented for the problem of autonomous robot motion in completely unknown environments. Our approach is based on the alternate execution of two fundamental processes: map building and navigation. In the former, range measures are collected through the robot exteroceptive sensors and processed in order to build a local representation of the surrounding area. This representation is then integrated in the global map so far reconstructed by filtering out insufficient or conflicting information. In the navigation phase, an A*-based planner generates a local path from the current robot position to the goal. Such a path is safe inside the explored area and provides a direction for further exploration. The robot follows the path up to the boundary of the explored area, terminating its motion if unexpected obstacles are encountered. The most peculiar aspects of our method are the use of fuzzy logic for the efficient building and modification of the environment map, and the iterative application of A*, a complete planning algorithm which takes full advantage of local information. Experimental results for a NOMAD 200 mobile robot show the real-time performance of the proposed method, both in static and moderately dynamic environments.
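A minimal sketch of an A*-based local planner on an occupancy grid, in the spirit of the navigation phase described above; the 4-connected grid, unit step costs, and Manhattan heuristic are simplifying assumptions, and the fuzzy map-building part is not modeled.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied), 4-connected moves.
    Returns the list of cells from start to goal, or None if no free path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan heuristic
    frontier = [(h(start), 0, 0, start, None)]                # (f, g, tie, cell, parent)
    came_from, tie = {}, 0
    while frontier:
        _, g, _, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue                                          # already expanded
        came_from[cell] = parent
        if cell == goal:
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                tie += 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, tie, nxt, cell))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```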

8.
We propose a hybrid approach specifically adapted to the autonomous-navigation problem of a mobile robot that is required to perform an emergency task in a partially known environment. Such a navigation problem requires a method that is able to yield a fast execution time, under constraints on the capacity of the robot and on known/unknown obstacles, and that is sufficiently flexible to deal with errors in the known parts of the environment (unexpected obstacles). Our proposal includes an off-line, task-independent preprocessing phase, which is applied just once for a given robot in a given environment. Its purpose is to build, within the known zones, a roadmap of near-time-optimal reference trajectories. The actual execution of the task is an online process that combines reactive navigation with trajectory tracking and that includes smooth transitions between these two modes of navigation. The controllers used are fuzzy-inference systems. Both simulation and experimental results are presented to test the performance of the proposed hybrid approach. The obtained results demonstrate the ability of our proposal to handle unexpected obstacles and to accomplish navigation tasks in relatively complex environments. The results also show that, thanks to its time-optimal trajectory planning, our proposal is well adapted to emergency tasks, as it is able to achieve shorter execution times compared to other waypoint-navigation methods that rely on optimal-path planning.

9.
This paper describes a light detection and ranging (LiDAR)-based autonomous navigation system for an ultralightweight ground robot in agricultural fields. The system is designed for reliable navigation under cluttered canopies using only a 2D Hokuyo UTM-30LX LiDAR sensor as the single source for perception. Its purpose is to ensure that the robot can navigate through rows of crops without damaging the plants in narrow row-based and high-leaf-cover semistructured crop plantations, such as corn (Zea mays) and sorghum (Sorghum bicolor). The key contribution of our work is a LiDAR-based navigation algorithm capable of rejecting outlying measurements in the point cloud due to plants in adjacent rows, low-hanging leaf cover, or weeds. The algorithm addresses this challenge using a set of heuristics designed to filter out outlying measurements in a computationally efficient manner, and linear least squares is applied to estimate the within-row distance using the filtered data. Moreover, a crucial step is the estimate validation, which is achieved through a heuristic that grades and validates the fitted row-lines based on current and previous information. The proposed LiDAR-based perception subsystem has been extensively tested in production/breeding corn and sorghum fields. In such a variety of highly cluttered real field environments, the robot logged more than 6 km of autonomous operation in straight rows. These results demonstrate highly promising advances in LiDAR-based navigation in realistic field environments for small under-canopy robots.
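The within-row distance estimation can be illustrated with an ordinary least-squares line fit on filtered points. The sketch below is only a stand-in for the paper's method: the median-based outlier gate is a crude placeholder for the heuristics described above, and the geometry (robot at the origin, y pointing along the row) is assumed.

```python
import numpy as np

def fit_row_line(points_xy):
    """Least-squares fit of a crop-row line x = a*y + b to filtered LiDAR points
    (robot at the origin, y pointing forward). Returns the line parameters and
    the lateral distance from the robot to the row."""
    y = points_xy[:, 1]
    A = np.column_stack([y, np.ones_like(y)])
    (a, b), *_ = np.linalg.lstsq(A, points_xy[:, 0], rcond=None)
    return a, b, abs(b) / np.hypot(a, 1.0)        # point-to-line distance from (0, 0)

# Toy usage: a row ~0.4 m to the right, plus one stray return from the next row.
rng = np.random.default_rng(1)
y = np.linspace(0.5, 4.0, 40)
x = 0.4 + 0.02 * y + rng.normal(0.0, 0.01, y.size)
pts = np.vstack([np.column_stack([x, y]), [[1.5, 2.0]]])
gate = np.abs(pts[:, 0] - np.median(pts[:, 0])) < 0.2   # crude stand-in for the heuristics
print(fit_row_line(pts[gate]))
```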

10.
Path planning is a fundamental problem in many areas, ranging from robotics and artificial intelligence to computer graphics and animation. Although there is extensive literature for computing optimal, collision-free paths, there is relatively little work that explores the satisfaction of spatial constraints between objects and agents at the global navigation layer. This paper presents a planning framework that satisfies multiple spatial constraints imposed on the path. The type of constraints specified can include staying behind a building, walking along walls, or avoiding the line of sight of patrolling agents. We introduce two hybrid environment representations that balance computational efficiency and search space density to provide a minimal, yet sufficient, discretization of the search graph for constraint-aware navigation. An extended anytime dynamic planner is used to compute constraint-aware paths, while efficiently repairing solutions to account for varying dynamic constraints or an updating world model. We demonstrate the benefits of our method on challenging navigation problems in complex environments for dynamic agents using combinations of hard and soft, attracting and repelling constraints, defined by both static obstacles and moving obstacles. Copyright © 2014 John Wiley & Sons, Ltd.

11.
Terrain-aided navigation (TAN) is a localisation method which uses bathymetric measurements for bounding the growth in inertial navigation error. The minimisation of navigation errors is of particular importance for long-endurance autonomous underwater vehicles (AUVs). This type of AUV requires simple and effective on-board navigation solutions to undertake long-range missions, operating for months rather than hours or days, without reliance on external support systems. Consequently, a suitable navigation solution has to fulfil two main requirements: (a) bounding the navigation error, and (b) conforming to energy constraints and conserving on-board power. This study proposes a low-complexity particle filter-based TAN algorithm for Autosub Long Range, a long-endurance deep-rated AUV. This is a lightweight and tractable filter that can be implemented on-board in real time. The potential of the algorithm is investigated by evaluating its performance using field data from three deep (up to 3,700 m) and long-range (up to 195 km in 77 hr) missions performed in the Southern Ocean during April 2017. The results obtained using TAN are compared to on-board estimates, computed via dead reckoning, and ultrashort baseline (USBL) measurements, treated as baseline locations, sporadically recorded by a support ship. Results obtained through postprocessing demonstrate that TAN has the potential to prolong underwater missions to a range of hundreds of kilometres without the need for intermittent surfacing to obtain global positioning system fixes. During each of the missions, the system performed 20 Monte Carlo runs. Throughout each run, the algorithm maintained convergence and bounded error, with high estimation repeatability achieved between all runs, despite the limited suite of localisation sensors.
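A minimal sketch of one propagation/measurement cycle of a particle-filter TAN, assuming a callable depth_map(x, y) that returns the charted depth and a Gaussian sounding error; the propagation model and resampling details of the actual on-board filter differ.

```python
import numpy as np

def tan_propagate(particles, dr_delta, sigma_xy=1.0):
    """Propagate particle positions with the dead-reckoned displacement plus noise."""
    return particles + dr_delta + np.random.normal(0.0, sigma_xy, particles.shape)

def tan_update(particles, weights, measured_depth, depth_map, sigma_d=2.0):
    """Weight particles by how well the charted depth at each hypothesised position
    explains the AUV's sounding, then resample if the weights have degenerated."""
    predicted = np.array([depth_map(x, y) for x, y in particles])
    weights = weights * np.exp(-0.5 * ((measured_depth - predicted) / sigma_d) ** 2)
    weights = weights / weights.sum()
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):        # effective sample size low
        u = (np.arange(len(particles)) + np.random.rand()) / len(particles)
        idx = np.searchsorted(np.cumsum(weights), u)             # systematic resampling
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```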

12.
Achieving the autonomous deployment of aerial robots in unknown outdoor environments using only onboard computation is a challenging task. In this study, we have developed a solution to demonstrate the feasibility of autonomously deploying drones in unknown outdoor environments, with the main capability of providing an obstacle map of the area of interest in a short period of time. We focus on use cases where no obstacle maps are available beforehand, for instance, in search and rescue scenarios, and on increasing the autonomy of drones in such situations. Our vision-based mapping approach consists of two separate steps. First, the drone performs an overview flight at a safe altitude, acquiring overlapping nadir images while creating a high-quality sparse map of the environment by using a state-of-the-art photogrammetry method. Second, this map is georeferenced, densified by fitting a mesh model, and converted into an Octomap obstacle map, which can be continuously updated while performing a task of interest near the ground or in the vicinity of objects. The overview obstacle map is generated in almost real time on the onboard computer of the drone, leaving enough time for the drone to execute other tasks inside the area of interest during the same flight. We evaluate quantitatively the accuracy of the acquired map and the characteristics of the planned trajectories. We further demonstrate experimentally the safe navigation of the drone in an area mapped with our proposed approach.

13.
This paper investigates the problem of position estimation of unmanned surface vessels (USVs) operating in coastal areas or in the archipelago. We propose a position estimation method where the horizon line is extracted in a 360° panoramic image around the USV. We design a convolutional neural network (CNN) architecture to determine an approximate horizon line in the image and implicitly determine the camera orientation (the pitch and roll angles). The panoramic image is warped to compensate for the camera orientation and to generate an image from an approximately level camera. A second CNN architecture is designed to extract the pixelwise horizon line in the warped image. The extracted horizon line is correlated with digital elevation model data in the Fourier domain using a minimum output sum of squared error correlation filter. Finally, we determine the location of the maximum correlation score over the search area to estimate the position of the USV. Comprehensive experiments are performed in field trials conducted over 3 days in the archipelago. Our approach provides excellent results by achieving robust position estimates with global positioning system (GPS)-level accuracy in previously unvisited test areas.
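The matching of the extracted horizon against DEM-predicted horizons can be illustrated with plain circular cross-correlation in the Fourier domain. The paper uses a minimum output sum of squared error (MOSSE-style) correlation filter; this sketch is a simplified stand-in with assumed 360-sample azimuth profiles.

```python
import numpy as np

def circular_correlation(observed, predicted):
    """Circular cross-correlation of two horizon profiles (e.g. 360 samples over
    azimuth) via the FFT. The peak value measures how well the profiles match over
    an unknown heading offset; the peak index is that offset in samples."""
    fo = np.fft.rfft(observed - np.mean(observed))
    fp = np.fft.rfft(predicted - np.mean(predicted))
    corr = np.fft.irfft(fo * np.conj(fp), n=len(observed))
    return float(corr.max()), int(np.argmax(corr))

def best_candidate(observed_profile, candidate_profiles):
    """Pick the candidate map position whose DEM-predicted horizon matches best."""
    scores = [circular_correlation(observed_profile, p)[0] for p in candidate_profiles]
    return int(np.argmax(scores))
```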

14.
A fundamental problem in autonomous vehicle navigation is the identification of obstacle-free space in cluttered and unstructured environments. Features such as walls, people, furniture, doors, and stairs are potential hazards. The approach taken in this paper is motivated by the recent development of infrared time-of-flight cameras that provide low-resolution depth maps at video frame rates. We propose to exploit the temporal information content provided by the high refresh rate of such cameras to overcome the limitations of low spatial resolution and high depth uncertainty, and aim to provide robust and accurate estimates of planar surfaces in the environment. These surface estimates are then used to provide statistical tests to identify obstacles and dangers in the environment. Classical 3D spatial RANSAC is extended to 4D spatio-temporal RANSAC by developing spatio-temporal models of planar surfaces that incorporate a linear motion model as well as linear environment features. A 4D vector product is used for hypothesis generation from data that is randomly sampled across both spatial and temporal variations. The algorithm is fully posed in the spatio-temporal representation and there is no need to correlate points or hypotheses between temporal images. The proposed algorithm is computationally fast and robust for estimation of planar surfaces in general and the ground plane in particular. There are potential applications in mobile robotics, autonomous vehicular navigation, and automotive safety systems. The claims of the paper are supported by experimental results obtained from real video data for a time-of-flight range sensor mounted on an automobile navigating in an undercover parking lot.
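A sketch of the spatio-temporal RANSAC idea under one simplifying assumption: the planar surface translates along its normal at constant speed, so a hypothesis n·p = d0 + v·t follows from the null space of four sampled (x, y, z, t) points. This illustrates the principle, not the authors' exact 4D-vector-product formulation.

```python
import numpy as np

def moving_plane_from_4(samples):
    """Hypothesis from four spatio-temporal samples (x, y, z, t): the null space
    of [x, y, z, -1, -t] yields n, d0, v with n.p = d0 + v*t, i.e. a plane
    translating along its normal at constant speed (the linear motion model)."""
    A = np.column_stack([samples[:, :3], -np.ones(4), -samples[:, 3]])
    n1, n2, n3, d0, v = np.linalg.svd(A)[2][-1]        # right singular vector, smallest sigma
    n = np.array([n1, n2, n3])
    s = np.linalg.norm(n)
    return None if s < 1e-9 else (n / s, d0 / s, v / s)

def ransac_moving_plane(pts_xyzt, iters=300, eps=0.05, seed=0):
    """Spatio-temporal RANSAC: sample 4 points across space and time, fit the
    moving-plane model, and keep the hypothesis with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_count = None, 0
    for _ in range(iters):
        model = moving_plane_from_4(pts_xyzt[rng.choice(len(pts_xyzt), 4, replace=False)])
        if model is None:
            continue
        n, d0, v = model
        resid = np.abs(pts_xyzt[:, :3] @ n - d0 - v * pts_xyzt[:, 3])
        count = int((resid < eps).sum())
        if count > best_count:
            best, best_count = model, count
    return best, best_count
```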

15.
This paper extends the progress of single-beacon one-way-travel-time (OWTT) range measurements for constraining XY position for autonomous underwater vehicles (AUVs). Traditional navigation algorithms have used OWTT measurements to constrain an inertial navigation system aided by a Doppler Velocity Log (DVL). These methodologies limit AUV applications to areas where DVL bottom-lock is available and require expensive strap-down sensors such as the DVL. Thus, deep-water, mid-water-column research has mostly been left untouched, and vehicles that need expensive strap-down sensors restrict the possibility of using multiple AUVs to explore a certain area. This work presents a solution for accurate navigation and localization using a vehicle's odometry determined by its dynamic-model velocity and constrained by OWTT range measurements from a topside source beacon as well as other AUVs operating in proximity. We present a comparison of two navigation algorithms: an Extended Kalman Filter (EKF) and a Particle Filter (PF). Both of these algorithms also incorporate a water-velocity bias estimator that further enhances the navigation accuracy and localization. Closed-loop online field results in local waters, as well as a real-time implementation during two-day field trials in Monterey Bay, California, for the Keck Institute for Space Studies oceanographic research project, demonstrate the accuracy of this methodology, with a root-mean-square error on the order of tens of meters compared to GPS position over a distance traveled of multiple kilometers.
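The EKF variant can be illustrated by a single range update. The sketch assumes a planar state [px, py, vx, vy], a known beacon position, and an assumed range noise; the dynamic-model propagation and water-velocity bias states of the actual filter are omitted.

```python
import numpy as np

def owtt_range_update(x, P, beacon_xy, measured_range, sigma_r=3.0):
    """EKF update of a planar state x = [px, py, vx, vy] (covariance P) with a
    single one-way-travel-time range to a beacon at a known position."""
    dx, dy = x[0] - beacon_xy[0], x[1] - beacon_xy[1]
    predicted = np.hypot(dx, dy)
    H = np.array([[dx / predicted, dy / predicted, 0.0, 0.0]])   # Jacobian of the range
    S = H @ P @ H.T + np.array([[sigma_r ** 2]])                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                               # Kalman gain, shape (4, 1)
    x = x + K.ravel() * (measured_range - predicted)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy usage (hypothetical numbers): a range fix pulls the estimate toward the beacon range.
x = np.array([100.0, 50.0, 0.5, 0.0])
P = np.diag([400.0, 400.0, 1.0, 1.0])
print(owtt_range_update(x, P, beacon_xy=(0.0, 0.0), measured_range=120.0)[0])
```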

16.
Whole-body interaction is an effective way to promote the level of presence and immersion in virtual reality systems. In this paper, we introduce "G-Bar," a grounded isometric interaction device that naturally induces whole-body interaction without complicated sensing and active haptic feedback apparatus. G-Bar takes advantage of the significant passive reaction force feedback sensed throughout the body to produce an enhanced level of presence/immersion and possibly even task performance. For detailed investigation of the contributing factors, two experiments were carried out to assess the comparative effectiveness of G-Bar relative to the following: (1) a grounded but isotonic device (with and without force feedback); and (2) nongrounded handheld devices (both isotonic and isometric). The results showed that the G-Bar induced significantly higher presence and competitive task performance (fixed velocity navigation) than the isotonic (grounded or handheld) and nongrounded isometric interfaces. Compared with the grounded isometric device with active force feedback, G-Bar produced competitive performance. In particular, the analysis of the subjective evaluation revealed a high correlation between the level of presence and whole-body interaction. On the other hand, whole-body experience was not induced as much with just the active force-feedback devices. Thus, for appropriate tasks, the grounded isometric interface can be a viable alternative to expensive and mechanically limiting active force-feedback devices in enhancing user experience. Copyright © 2013 John Wiley & Sons, Ltd.

17.
Micro aerial vehicles (MAVs), especially quadrotors, have been widely used in field applications, such as disaster response, field surveillance, and search-and-rescue. For accomplishing such missions in challenging environments, the capability of navigating with full autonomy while avoiding unexpected obstacles is the most crucial requirement. In this paper, we present a framework for online generating safe and dynamically feasible trajectories directly on the point cloud, which is the lowest-level representation of range measurements and is applicable to different sensor types. We develop a quadrotor platform equipped with a three-dimensional (3D) light detection and ranging (LiDAR) sensor and an inertial measurement unit (IMU) for simultaneously estimating states of the vehicle and building point cloud maps of the environment. Based on the incrementally registered point clouds, we online generate and refine a flight corridor, which represents the free space that the trajectory of the quadrotor should lie in. We represent the trajectory as piecewise Bézier curves by using the Bernstein polynomial basis and formulate the trajectory generation problem as a convex program. By using Bézier curves, we can constrain the position and kinodynamics of the trajectory entirely within the flight corridor and given physical limits. The proposed approach is implemented to run onboard in real time and is integrated into an autonomous quadrotor platform. We demonstrate fully autonomous quadrotor flights in unknown, complex environments to validate the proposed method.
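The Bernstein-basis representation and the convex-hull argument behind the corridor constraint can be sketched as follows; the control points and dimensions are illustrative, and the convex-program formulation itself is not shown.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t) on t in [0, 1]."""
    return comb(n, i) * t ** i * (1.0 - t) ** (n - i)

def bezier(control_points, t):
    """Evaluate a Bezier curve of any dimension at parameter t in [0, 1]."""
    n = len(control_points) - 1
    return sum(bernstein(n, i, t) * np.asarray(p, dtype=float)
               for i, p in enumerate(control_points))

# Convex-hull property: every curve point is a convex combination of the control
# points, so forcing all control points of a piece into one convex corridor cell
# keeps that whole piece of the trajectory inside the cell.
ctrl = [(0.0, 0.0, 1.0), (1.0, 0.5, 1.2), (2.0, 0.5, 1.5), (3.0, 0.0, 1.5)]
print(np.round([bezier(ctrl, t) for t in np.linspace(0.0, 1.0, 5)], 3))
```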

18.
In this article, a new data-driven formulation of the particle filter framework is proposed. The new formulation is able to learn an approximate proposal distribution from previous data. By doing so, the need to explicitly model all the disturbances that might affect the system is relaxed. Such characteristics are particularly suited to terrain-based navigation for sensor-limited AUVs, where typical scenarios often include non-negligible sources of noise affecting the system, which are unknown and hard to model. Numerical results are presented that demonstrate the superior accuracy, robustness, and efficiency of the proposed data-driven approach.
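One way to read the data-driven formulation is as a particle filter whose samples come from a learned proposal while importance weights keep the estimate consistent. The sketch below is generic: proposal, transition_pdf, and likelihood are user-supplied stand-ins, not the learned model from the article.

```python
import numpy as np

def pf_step_learned_proposal(particles, weights, z, proposal, transition_pdf, likelihood):
    """One particle-filter step whose samples come from a learned proposal.
    proposal(x_prev, z) -> (x_new, q) returns a sample and its proposal density;
    transition_pdf(x_new, x_prev) and likelihood(z, x_new) are model densities.
    The importance weight p(z|x) * p(x|x_prev) / q keeps the filter consistent
    even though the samples are not drawn from the transition model."""
    new_particles, new_weights = [], []
    for x_prev, w in zip(particles, weights):
        x_new, q = proposal(x_prev, z)
        new_particles.append(x_new)
        new_weights.append(w * likelihood(z, x_new) * transition_pdf(x_new, x_prev)
                           / max(q, 1e-12))
    new_weights = np.asarray(new_weights)
    return np.asarray(new_particles), new_weights / new_weights.sum()
```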

19.
Autonomous systems are rapidly becoming an integral part of modern life. Safe and secure navigation and control of these systems present significant challenges in the presence of uncertainties, physical failures, and cyber attacks. In this paper, we formulate a navigation and control problem for autonomous systems using a multilevel control structure, in which the high-level reference commands are limited by a saturation function, whereas the low-level controller tracks the reference by compensating for disturbances and uncertainties. For this purpose, we consider a class of nested, uncertain, multiple-input-multiple-output systems subject to reference command saturation, possibly with nonminimum phase zeros. A multirate output-feedback adaptive controller is developed as the low-level controller. The sampled-data (SD) design of this controller facilitates direct implementation on digital computers, where the input/output signals are available at discrete time instances with different sampling rates. In addition, stealthy zero-dynamics attacks become detectable by considering a multirate SD formulation. Robust stability and performance of the overall closed-loop system with command saturation and multirate adaptive control are analyzed. Simulation scenarios for navigation and control of a fixed-wing drone under failures/attacks are provided to validate the theoretical findings.

20.
In recent years, mobile robots have been required to become more and more autonomous, so that they are able to sense and recognize the three-dimensional space in which they live or work. In this paper, we deal with the problem of building an environment map from three-dimensional sensing data for mobile robot navigation. In particular, the problem to be dealt with is how to extract and model obstacles that are not represented on the map but exist in the real environment, so that the map can be updated using the modeled obstacle information. To achieve this, we propose a three-dimensional map building method based on a self-organizing neural network technique called the “growing neural gas network.” Using the obstacle data acquired from the 3D data acquisition process of an active laser range finder, learning of the neural network is performed to generate a graphical structure that reflects the topology of the input space. For evaluation of the proposed method, a series of simulations and experiments are performed to build 3D maps of some given environments surrounding the robot. The usefulness and robustness of the proposed method are investigated and discussed in detail. © 2004 Wiley Periodicals, Inc.
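A highly simplified sketch of one growing-neural-gas adaptation step (winner/neighbour moves, edge ageing, stale-edge removal); the error accumulation and periodic node insertion of the full algorithm are omitted, and all parameters are illustrative.

```python
import numpy as np

def gng_step(nodes, edges, x, eps_w=0.05, eps_n=0.006, max_age=50):
    """One simplified growing-neural-gas adaptation step for input point x.
    nodes: list of np.array positions; edges: dict {(i, j): age} with i < j.
    Error accumulation and periodic node insertion are omitted."""
    d = [np.linalg.norm(x - n) for n in nodes]
    s1, s2 = np.argsort(d)[:2]                         # winner and runner-up
    nodes[s1] += eps_w * (x - nodes[s1])               # pull the winner toward the input
    for (i, j) in list(edges):
        if s1 in (i, j):
            k = j if i == s1 else i
            nodes[k] += eps_n * (x - nodes[k])         # pull topological neighbours
            edges[(i, j)] += 1                         # age edges incident to the winner
    edges[tuple(sorted((int(s1), int(s2))))] = 0       # refresh/create winner-runner edge
    for e in [e for e, age in edges.items() if age > max_age]:
        del edges[e]                                   # drop stale edges
    return nodes, edges

# Toy usage: adapt two seed nodes to a stream of 3D range points.
rng = np.random.default_rng(0)
nodes, edges = [rng.uniform(0, 1, 3), rng.uniform(0, 1, 3)], {}
for p in rng.uniform(0, 1, (500, 3)):
    nodes, edges = gng_step(nodes, edges, p)
print(nodes, edges)
```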
