Similar Documents
20 similar documents retrieved.
1.
Position-based visual servoing is a widely adopted tool in robotics and automation. While the extended Kalman filter (EKF) has been proposed as an effective technique for this task, it requires accurate noise covariance matrices to deliver desirable performance. Although numerous techniques for updating or estimating the covariance matrices have been developed in the literature, many of them suffer from computational limits or difficulties in imposing structural constraints such as positive semi-definiteness (PSD). In this paper, a relatively new framework, the autocovariance least-squares (ALS) method, is applied to estimate noise covariances using real-world visual servoing data. To generate the innovations data required for the ALS method, we use standard position-based visual servoing methods such as the EKF, as well as an advanced optimization-based framework, moving horizon estimation (MHE). A major advantage of the proposed method is that the PSD and other structural constraints on the noise covariances can be enforced conveniently in the optimization problem, which can be solved efficiently using existing software packages. Our results show that using the ALS-estimated covariances in the EKF, instead of hand-tuned covariances, yields more than a 20% reduction in mean visual servoing error, while using MHE to generate the ALS innovations provides a further 21% accuracy improvement.
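The abstract's central point, that PSD constraints on the estimated noise covariance can be imposed directly in the least-squares problem and handled by off-the-shelf solvers, can be illustrated with a minimal sketch. The linear maps M_list and stacked autocovariance data b below are synthetic placeholders, not the paper's actual ALS construction.

    import numpy as np
    import cvxpy as cp

    n = 2                                   # state dimension (illustrative)
    rng = np.random.default_rng(0)

    # Placeholder ALS data: each M_k maps the unknown covariance Q to one
    # stacked innovation-autocovariance entry b_k (assumed, not the paper's).
    M_list = [rng.standard_normal((n, n)) for _ in range(12)]
    Q_true = np.diag([0.5, 0.1])
    b = np.array([np.trace(M @ Q_true) for M in M_list]) + 0.01 * rng.standard_normal(12)

    # Least-squares fit with the PSD constraint enforced by construction.
    Q = cp.Variable((n, n), PSD=True)
    residuals = cp.hstack([cp.trace(M @ Q) - bk for M, bk in zip(M_list, b)])
    prob = cp.Problem(cp.Minimize(cp.sum_squares(residuals)))
    prob.solve()
    print("estimated Q:\n", Q.value)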

2.
To apply image-based visual servoing to mobile robots more simply, and to avoid the approximations and assumptions required by approximately linearized input-output feedback control models, three methods combining epipolar geometry with triangle geometry are proposed. These methods require no prior knowledge of the three-dimensional scene structure. Simulation results demonstrate the effectiveness of the proposed methods.

3.
Stable visual servoing of camera-in-hand robotic systems
In this paper, the control problem of camera-in-hand robotic systems is considered. In this approach, a camera is mounted on the robot, usually at the hand, and provides an image of objects located in the robot's environment. The aim is to move the robot arm in such a way that the image of the objects reaches the desired locations. We propose a simple image-based direct visual servo controller that requires knowledge of the objects' depths but does not need the inverse kinematics or the inverse Jacobian matrix. By invoking the Lyapunov direct method, we show that the overall closed-loop system is stable and, under mild conditions on the Jacobian, local asymptotic stability is guaranteed. Experiments with a two-degree-of-freedom direct-drive manipulator are presented to illustrate the controller's performance.
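For orientation, a minimal sketch of the textbook image-based visual servoing law for point features is given below; it is not the paper's direct controller (which avoids the inverse Jacobian), but it shows the role of the known feature depths Z that the abstract mentions. All numerical values are illustrative.

    import numpy as np

    def interaction_matrix(x, y, Z):
        """Classic 2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
        return np.array([
            [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
            [0.0,      -1.0 / Z, y / Z, 1.0 + y * y, -x * y,         -x],
        ])

    def ibvs_velocity(features, desired, depths, gain=0.5):
        """Camera twist v = -gain * L^+ (s - s*), stacking one 2x6 block per point."""
        L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
        error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
        return -gain * np.linalg.pinv(L) @ error

    # Example: four points roughly 1 m in front of the camera (illustrative values).
    s  = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
    sd = [(0.12, 0.1), (-0.08, 0.1), (-0.08, -0.08), (0.12, -0.08)]
    print(ibvs_velocity(s, sd, depths=[1.0] * 4))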

4.
This work presents a novel method for the visual servoing control problem based on second-order conic optimization. Special cases of the proposed method yield results similar to those obtained by the position-based and image-based visual servoing methods. The goal of our approach is to minimize both the end-effector trajectory in Cartesian space and the image feature trajectories simultaneously. For this purpose, a series of second-order conic optimization problems is solved. Each problem starts from the current camera pose and finds the camera velocity as well as the next camera pose such that (1) the next camera pose is as close as possible to the line connecting the initial and desired camera poses, and (2) the next feature points are as close as possible to the corresponding lines connecting the initial and desired feature points. To validate our approach, we provide simulation and experimental results for several different camera configurations.
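A heavily simplified sketch of one such second-order cone step is shown below, using camera positions only (orientations and image-feature terms from the paper are omitted) and a hypothetical per-step motion bound.

    import numpy as np
    import cvxpy as cp

    p0, p_des = np.array([0.0, 0.0, 0.5]), np.array([0.4, 0.2, 0.3])   # initial / desired positions (illustrative)
    p_curr = np.array([0.05, 0.02, 0.48])                              # current camera position
    step = 0.02                                                        # max motion per step (assumed bound)

    d = (p_des - p0) / np.linalg.norm(p_des - p0)   # unit direction of the straight-line path
    P = np.eye(3) - np.outer(d, d)                  # projector onto the plane orthogonal to the path

    p_next = cp.Variable(3)
    # Keep the next position close to the initial-to-desired line, subject to a
    # second-order cone bound on how far the camera may move in one step.
    prob = cp.Problem(cp.Minimize(cp.norm(P @ (p_next - p0))),
                      [cp.norm(p_next - p_curr) <= step])
    prob.solve()
    print("next camera position:", p_next.value)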

5.
In this paper, a control strategy based on fractional calculus for visual servoing systems is proposed. The image-based control strategy is designed using a point-feature-based fractional-order PI controller. A real-time visual servoing system, composed of a 6-degree-of-freedom manipulator with an eye-in-hand camera, is used to evaluate the performance of the proposed control strategy. The image acquisition and processing, together with the computation of the image-based control law, are implemented in MATLAB. Using planar static objects, real-time experiments are conducted, and the results reveal that the image-based fractional-order PI controller outperforms the conventional image-based integer-order PI controller.
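A minimal sketch of a discrete fractional-order PI term using the standard Grünwald-Letnikov approximation is given below; the gains, integral order, and sample time are illustrative, not the paper's tuning.

    import numpy as np

    def gl_weights(alpha, n):
        """Grünwald-Letnikov binomial weights w_j = (-1)^j * C(alpha, j), computed recursively."""
        w = np.empty(n)
        w[0] = 1.0
        for j in range(1, n):
            w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
        return w

    def foc_pi(errors, dt, kp=0.8, ki=0.4, lam=0.7):
        """u = Kp*e + Ki * D^(-lam) e, with the fractional integral approximated over the error history."""
        w = gl_weights(-lam, len(errors))         # integral of order lam == derivative of order -lam
        hist = errors[::-1]                       # most recent error first
        frac_integral = dt ** lam * float(np.dot(w, hist))
        return kp * errors[-1] + ki * frac_integral

    e = list(0.5 * np.exp(-0.1 * np.arange(50)))  # illustrative decaying error signal
    print(foc_pi(e, dt=0.01))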

6.
《Mechatronics》2006,16(3-4):221-232
A visual servoing approach to the control of planar flexible robotic manipulators is adopted in this paper, based on the composite control theory, where the camera sensor is used together with the strain gauge measurements, to estimate the tip deformation. A fast Kalman filter, built on an integral manifold approximation of the manipulator model, can be used to fuse in the most effective way the measurements coming from different sensors, each one perturbed by its own noise. As a consequence, the signal to noise ratio of the deformation measurements can be effectively improved. A difficulty however arises in deriving a linear relation between the camera output and the state variables: the specific contribution of this paper is the derivation of such a linear relation in the fast time scale. Simulation results based on a two link planar flexible manipulator show the potential of the proposed approach to gain a more effective suppression of the tip vibrations, while an experimental example demonstrates its practical feasibility.  相似文献   
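A generic linear Kalman update that fuses two measurements of the same deformation, each with its own noise level, can be sketched as follows. The toy two-state model and noise values are assumptions for illustration, not the paper's integral-manifold fast filter.

    import numpy as np

    # Toy 2-state model of the fast (deformation) dynamics: x = [tip deflection, deflection rate].
    dt = 0.002
    A = np.array([[1.0, dt], [-400.0 * dt, 1.0 - 2.0 * dt]])   # illustrative flexible-mode dynamics
    H = np.array([[1.0, 0.0],                                  # camera measures deflection
                  [1.0, 0.0]])                                 # strain gauge also maps to deflection (simplified)
    Q = 1e-6 * np.eye(2)
    R = np.diag([2.5e-4, 4.0e-5])                              # camera assumed noisier than the strain gauge

    def kf_step(x, P, z):
        # Predict.
        x, P = A @ x, A @ P @ A.T + Q
        # Fuse both sensors in one update.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.zeros(2), np.eye(2)
    x, P = kf_step(x, P, z=np.array([1.2e-3, 1.0e-3]))   # one camera + one gauge sample (illustrative)
    print(x)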

7.
This paper presents an image-based dynamic visual servoing scheme that enables a mobile robot to track a moving object in the workspace using a calibrated on-board vision system. The stability of the proposed system is proved based on its passivity properties. A robustness analysis and an L2-gain performance analysis are also presented. Experimental results are shown to illustrate the system performance.

8.
《Mechatronics》2000,10(1-2):1-18
A visual servoing algorithm is proposed for a robot with a camera in the hand to track a moving object in terms of image features and their variations, where fuzzy logic and fuzzy-neural networks are used to learn a feature-Jacobian-based kinematic control law. Specifically, novel image features are suggested by employing a viewing model of the perspective projection to estimate the relative pitching and yawing angles. Such perspective-projection-based features do not interact with the relative distance between the object and the camera. The desired feature trajectories for learning the visually guided line-of-sight robot motion are obtained by measuring features with the camera in the hand, not over the entire workspace but on a single linear path along which the robot moves under the control of a commercially provided linear-motion function. The control actions of the camera are then approximated by fuzzy-neural networks to follow these desired feature trajectories. To show the validity of the proposed algorithm, experimental results are presented for a four-axis SCARA robot with a black-and-white CCD camera.

9.
This paper presents a novel visual servoing framework for micropositioning in three dimensions for the assembly and packaging of hybrid microelectromechanical systems (MEMS). The framework incorporates a supervisory logic-based controller that selects feedback from multiple visual sensors in order to execute a microassembly task. The introduction of a visual sensor array allows the motion of microassembly tasks to be controlled globally with a wide-angle view at the beginning of the task; a high-precision view is then used for fine motion control at the end of the task. In addition, a depth-from-focus technique is used to visually servo along the optical axis, providing the ability to perform full three-dimensional (3-D) micropositioning under visual control. The supervisory logic-based controller selects the relevant sensor and tracking strategy to be used at a particular stage of the assembly process, allowing the system to take full advantage of each sensor's attributes, such as field of view, resolution, and depth of field. The combination of robust visual tracking and depth estimation within a supervisory control architecture is used to perform high-speed, automatic microinsertions in three dimensions. Experimental results are presented for a microinsertion task performed under this framework to demonstrate the feasibility of the approach for high-precision assembly of MEMS. The results demonstrate that relative part placement repeatable to 2 μm in XY and 10 μm in Z is possible without the use of costly vibration isolation equipment and thermal management systems.
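The depth-from-focus step along the optical axis can be illustrated with a minimal focus-measure sweep over a focal stack, using the common variance-of-Laplacian measure; synthetic frames stand in for real microscope images, and the focus positions are illustrative.

    import numpy as np
    from scipy.ndimage import laplace

    def best_focus_index(stack):
        """Return the index of the sharpest frame in a focal stack using variance of the Laplacian."""
        return int(np.argmax([laplace(frame.astype(float)).var() for frame in stack]))

    # Synthetic stack: frame 3 is "in focus" (high-frequency texture), the others are smooth.
    rng = np.random.default_rng(1)
    stack = [np.full((64, 64), 0.5) + 0.01 * rng.standard_normal((64, 64)) for _ in range(6)]
    stack[3] += 0.3 * rng.standard_normal((64, 64))
    z_steps = np.linspace(0.0, 50e-6, 6)              # focus positions along the optical axis (illustrative)
    print("estimated depth:", z_steps[best_focus_index(stack)])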

10.
《Mechatronics》2003,13(6):533-551
This paper describes a position-based visual servoing system for edge trimming of fabric embroideries by laser. The high-speed vision system, based on a 220 Hz digital camera and a TMS320C40 parallel DSP, is presented, and the novel image processing algorithm developed for seam-tracking applications is briefly explained. Two methods for seam trajectory generation are discussed: in the first, the tracking trajectory is determined using the vision data only; in the second, which is suitable for periodic patterns, predetermined path data are modified by the vision data. A tracking controller using a feedforward controller in the tangential direction of the seam and a feedback controller in the normal direction is described. The custom-built manipulator is a four-axis velocity-controlled gantry robot with independent PID controllers for each axis. The axes' reference speeds are commanded and updated by the top-level tracking controller at equal time intervals. The gantry controller program runs on a Pentium PC. Experimental results for edge trimming of different seam patterns are presented.
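The tangential-feedforward / normal-feedback decomposition described above can be sketched in a few lines. The feed rate, gain, and the assumption that the vision system supplies the seam tangent and the cross-seam error are illustrative.

    import numpy as np

    def seam_tracking_velocity(seam_tangent, normal_error, v_feed=0.05, kp=4.0):
        """XY velocity command: constant feed along the seam plus proportional correction across it."""
        t = np.asarray(seam_tangent, dtype=float)
        t /= np.linalg.norm(t)                      # unit tangent from the vision system
        n = np.array([-t[1], t[0]])                 # in-plane normal
        return v_feed * t + kp * normal_error * n   # feedforward + feedback

    # Example: seam running at 30 degrees, camera reports the tool 0.4 mm off the seam.
    tangent = [np.cos(np.radians(30.0)), np.sin(np.radians(30.0))]
    print(seam_tracking_velocity(tangent, normal_error=-0.4e-3))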

11.
We study visual servoing in a framework of detection and grasping of unknown objects. Classically, visual servoing has been used for applications where the object to be servoed on is known to the robot prior to the task execution. In addition, most of the methods concentrate on aligning the robot hand with the object without grasping it. In our work, visual servoing techniques are used as building blocks in a system capable of detecting and grasping unknown objects in natural scenes. We show how different visual servoing techniques facilitate a complete grasping cycle.

12.
A new approach for fusing visual and force information is presented. First, a new method for tracking trajectories, called the movement-flow-based visual servoing system, which exhibits the correct behavior both in the image and in three-dimensional space, is described. The information obtained from this system is fused with that obtained from a force control system in unstructured environments. To do so, a new method for recognizing the contact surface and a system for fusing visual and force information are described. The latter employs variable weights for each sensor system, depending on a criterion based on the detection of changes in the interaction forces processed by a Kalman filter.
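The variable-weight fusion idea can be sketched as follows: a scalar weight shifts authority from the vision command to the force command when the filtered interaction force changes quickly. The first-order filter below stands in for the Kalman filter mentioned above, and all gains are assumptions.

    import numpy as np

    class VisionForceFusion:
        def __init__(self, alpha=0.2, k_switch=5.0):
            self.alpha = alpha          # smoothing factor standing in for the Kalman filter
            self.k_switch = k_switch    # how quickly force changes shift the weight (assumed)
            self.f_filt = 0.0

        def fuse(self, v_vision, v_force, force):
            """Blend vision- and force-derived velocity commands with a variable weight."""
            prev = self.f_filt
            self.f_filt = (1.0 - self.alpha) * self.f_filt + self.alpha * force
            change = abs(self.f_filt - prev)
            w_force = min(1.0, self.k_switch * change)   # more weight on force when contact varies
            return w_force * np.asarray(v_force) + (1.0 - w_force) * np.asarray(v_vision)

    fusion = VisionForceFusion()
    print(fusion.fuse(v_vision=[0.02, 0.0, -0.01], v_force=[0.0, 0.0, -0.002], force=3.5))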

13.
A novel content-adaptive enhancement filter is described, which aims at reducing compression artifacts in MPEG-coded video streams. The filter locally selects the most appropriate kernel among a set of pre-defined masks, based upon a classification of the pixels to be processed. The features used in the classification phase take into account the distribution of transform coefficients and the presence of nearby contour pixels, previously detected by an edge extractor. An important aspect of the proposed approach is its low computational complexity, which is very appealing in the scenario of a typical low-cost consumer application (video communication over the Internet, set-top-box DVB receivers, etc.). Experimental results show that the proposed algorithm outperforms existing approaches of a similar level of complexity.

14.
Yi-Wei Tu  Ming-Tzu Ho 《Mechatronics》2011,21(7):1170-1182
This paper presents the design and implementation of robust real-time visual servoing control with an FPGA-based image co-processor for a rotary inverted pendulum. The position of the pendulum is measured with a machine vision system. The pendulum used in the proposed system is much shorter than those used in published vision-based pendulum control system studies, which makes the system more difficult to control. The image processing algorithms of the machine vision system are pipelined and implemented on a field programmable gate array (FPGA) device to meet real-time constraints. To enhance robustness to model uncertainty and to attenuate disturbance and sensor noise, the design of the stabilizing controller is formulated as a mixed H2/H∞ control problem, which is then solved using the linear matrix inequality (LMI) approach. The designed control law is implemented on a digital signal processor (DSP). The effectiveness of the controller and the FPGA-based image co-processor is verified through simulation and experimental studies. The experimental results show that the designed system can robustly control an inverted pendulum in real time.

15.
Real-time computers are frequently used in harsh environments, such as space or industry. Lightning strikes, streams of elementary particles, and other manifestations of a harsh operating environment can cause transient failures in processors. Since the entire system is in the same environment, an especially severe disturbance can result in a momentary, correlated failure of all the processors. To have the system survive transient correlated failures and still execute all of its critical workload on time, designers must use time redundancy. To survive permanent or transient independently occurring failures, processor redundancy must be used, and the computer configured into redundant clusters. Given a fixed total number of processors, there is a tradeoff between processor redundancy and time redundancy. This paper considers the tradeoffs between configuring the system into duplexes and triplexes. There are pessimistic and optimistic reliability models for each configuration; for the range of pertinent parameters, these models are very close, indicating that they are quite accurate. The duplex-triplex tradeoff is between the effects of permanent, independent-transient, and correlated-transient failures. Configuring the system in triplexes provides better protection against permanent and independent-transient failures, but diminishes protection against correlated-transient failures. The better configuration is given for each application.

16.
Recently, Siamese-based methods have made a breakthrough in the visual tracking field. However, existing trackers still cannot take full advantage of deep features. In this work, we improve the performance of Siamese trackers by complementary learning with different types of matching features. Specifically, a Matching Activation Network (MAN) is first designed to highlight the matching regions of the search image given a template. Since only sparse parts of the feature maps contribute to the matching result, an important design choice is to emphasize the weak-matching features by erasing the strong-matching ones and to learn complementary classifiers from both types of features. We then propose a novel complementary region proposal network (CoRPN) that takes complementary features as inputs; its outputs complement each other and are fused to improve performance. Experiments show that the proposed tracker achieves leading performance on five tracking datasets while retaining real-time speed.

17.
The fully convolutional Siamese network (SiamFC) has demonstrated high performance in the visual tracking field, but the learned CNN features are redundant and not discriminative enough to separate the object from the background. To address this problem, this paper proposes a dual attention module that is integrated into the Siamese network to select features in both the spatial and channel domains. In particular, a non-local attention module is attached after the last layer of the network, which helps obtain the self-attention feature map of the target in the spatial dimension. In addition, a channel attention module is proposed to adjust the importance of different channels' features according to the responses generated by each channel feature and the target. The GOT10k dataset is employed to train our dual attention Siamese network (SiamDA) and improve the target representation ability, which enhances the discrimination of the model. Experimental results show that the proposed algorithm improves the accuracy by 7.6% and the success rate by 5.6% compared with the baseline tracker.
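A minimal squeeze-and-excitation-style channel attention block, of the general kind described above, can be sketched in PyTorch. The layer sizes and reduction ratio are illustrative; this is not the exact SiamDA module.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Reweight channels by global context: pool -> bottleneck MLP -> sigmoid gate."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                      # per-channel importance applied to the feature map

    feat = torch.randn(1, 256, 25, 25)        # e.g. a Siamese backbone feature map (illustrative shape)
    print(ChannelAttention(256)(feat).shape)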

18.
The synchronous approach to reactive and real-time systems
The state of the art in real-time programming is briefly reviewed. The synchronous approach is then introduced informally and its possible impact on the design of real-time and reactive systems is discussed. The authors present and discuss the application fields and the principles of synchronous programming. The major concern of the synchronous approach is to base synchronous programming languages on mathematical models. This makes it possible to handle compilation, logical correctness proofs, and verification of real-time programs in a formal way, leading to a clean and precise methodology for design and programming.

19.
This article opens perspectives for generic modelling and real-time simulation of automotive gear transmissions containing multiple friction elements. Generic mathematical drivetrain modelling is challenging, as friction elements cause the system to vary its order and structure. Furthermore, adaptive time-step methods are not suitable for real-time simulation, and the literature lacks flexible fixed-time-step approaches for multiple friction elements. The modelling solution proposed in this article is designed for fixed time-step execution and is applicable to all types of gear transmissions. The approach is verified on an exemplary simplified drivetrain, demonstrated on a complex hybrid-electric automatic drivetrain topology, and validated via transmission test bench measurements. Finally, the approach is applied to a conventional automatic drivetrain and validated via vehicle measurements.

20.
Fully convolutional Siamese network based trackers have made great progress recently. Most of these methods focus on improving the capability of the Siamese network to represent the target. In this paper, we propose a model that focuses on estimating the state of the target with a novel IoU (intersection over union) loss function, named AIoU. Our model consists of a Siamese subnetwork for feature extraction and a target estimation subnetwork for state representation. The target estimation subnetwork contains a classification head for classifying background and foreground, and a regression head for estimating the target. In order to regress better bounding boxes, we further study the loss function used in the regression head and propose a powerful IoU loss function. Our tracker achieves competitive performance on the OTB2015, VOT2018, and VOT2019 benchmarks at a speed of 180 FPS, which demonstrates the effectiveness of our method.
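A standard IoU loss for a regression head looks like the sketch below. The abstract does not specify the AIoU variant, so only the plain 1 - IoU form is shown, with boxes in (x1, y1, x2, y2) format and illustrative values.

    import torch

    def iou_loss(pred, target, eps=1e-7):
        """Plain IoU loss, loss = 1 - IoU, for boxes given as (x1, y1, x2, y2) tensors of shape (N, 4)."""
        x1 = torch.max(pred[:, 0], target[:, 0])
        y1 = torch.max(pred[:, 1], target[:, 1])
        x2 = torch.min(pred[:, 2], target[:, 2])
        y2 = torch.min(pred[:, 3], target[:, 3])
        inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
        area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
        area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
        iou = inter / (area_p + area_t - inter + eps)
        return (1.0 - iou).mean()

    pred   = torch.tensor([[10.0, 10.0, 60.0, 60.0]])
    target = torch.tensor([[12.0,  8.0, 58.0, 62.0]])
    print(iou_loss(pred, target))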
