Similar Literature
20 similar documents found (search time: 15 ms)
1.
More efficient predictive control   (Total citations: 1; self-citations: 0; citations by others: 1)
An approach for constrained predictive control of linear systems (or uncertain systems described by polytopic uncertainty models) is presented. The approach consists of (in general non-convex, but often convex) offline optimization, and very efficient online optimization. Two examples, one being a laboratory experiment, compare the approach to existing approaches, revealing both advantages and disadvantages.

2.
A fast and effective camera calibration method   (Total citations: 1; self-citations: 1; citations by others: 0)
A fast and effective two-step algorithm based on the radial alignment constraint is proposed. The algorithm combines the advantages of linear and nonlinear models: a linear model is used to calibrate part of the camera parameters; nonlinear distortion is then taken into account, and by simplifying the conditions the nonlinear equations are converted into linear equations to solve for the remaining camera parameters, which avoids the heavy computation and unstable results of solving the nonlinear equations directly. Finally, the calibration results are refined with an optimization algorithm. Experimental results show that the optimized two-step algorithm improves the efficiency and accuracy of camera calibration.

3.
沙泉, 张少华. 《控制与决策》 (Control and Decision), 2014, 29(4): 623-626
The customer baseline load (CBL) is one of the key factors in implementing incentive-based demand response (DR) programs. Because of information asymmetry, traditional CBL estimation based on historical data is open to manipulation, which undermines the economic efficiency of DR programs. Based on mechanism design theory, a type parameter is introduced to describe a customer's willingness to consume electricity. A demand-subscription model for the electric utility under information asymmetry, subject to the customers' participation and incentive-compatibility constraints, is proposed together with its solution method; the model induces customers to reveal their true baseline load information. Finally, a numerical example verifies the validity and rationality of the proposed model.

4.
State-of-the-art algorithms for on-the-fly automata-theoretic LTL model checking make use of nested depth-first search to look for accepting cycles in the product of the system and the Büchi automaton. Here, we present two new single depth-first search algorithms that accomplish the same task. The first is based on Tarjan's algorithm for detecting strongly connected components, while the second is a combination of the first and Couvreur's algorithm for finding acceptance cycles in the product of a system and a generalized Büchi automaton. Both new algorithms report an accepting cycle immediately after all transitions in the cycle have been investigated. We show their correctness, describe efficient implementations and discuss how they interact with some other model checking techniques, such as bitstate hashing. The algorithms are compared to the nested search algorithms in experiments on both random and actual state spaces, using random and real formulas. Our measurements indicate that our algorithms investigate at most as many states as the old ones. In the case of a violation of the correctness property, the algorithms often explore significantly fewer states.
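A minimal sketch of the idea behind SCC-based accepting-cycle detection (not the paper's implementation): compute strongly connected components with Tarjan's single-DFS algorithm and report a violation if some nontrivial SCC contains an accepting state. The graph and accepting set below are hypothetical examples.

```python
def tarjan_sccs(graph):
    """Tarjan's single-pass DFS computation of strongly connected components."""
    index, low, sccs = {}, {}, []
    stack, on_stack, counter = [], set(), [0]

    def dfs(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop(); on_stack.discard(w); scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            dfs(v)
    return sccs

def has_accepting_cycle(graph, accepting):
    """An accepting cycle exists iff a nontrivial SCC contains an accepting state."""
    for scc in tarjan_sccs(graph):
        nontrivial = len(scc) > 1 or scc[0] in graph.get(scc[0], [])
        if nontrivial and any(s in accepting for s in scc):
            return True
    return False

# Toy product automaton with an accepting cycle 1 -> 2 -> 1:
product = {0: [1], 1: [2], 2: [1]}
print(has_accepting_cycle(product, accepting={2}))  # True
```

The point of the single-DFS formulation is that the SCC containing the accepting state is identified as soon as its last transition is explored, which is what lets such algorithms report violations early.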

5.
Identity-based encryption with equality test (IBEET) combines public key encryption with equality test (PKEET) and identity-based encryption (IBE). In an IBEET scheme, a user holding trapdoors is allowed to verify, through the equality test algorithm, whether ciphertexts encrypted under different identities correspond to equal plaintexts. Recently, several IBEET schemes were constructed over lattices to resist potential quantum threats, perhaps because lattice-based candidates are the most favored ones in the current NIST PQC standardization. However, almost all of these schemes are developed from Lee et al.'s generic construction and hence double both the public parameters and the ciphertext sizes. In this paper, we propose a more efficient and tightly secure IBEET scheme that not only achieves a tight reduction with semi-adaptive security, but also reduces both the public parameters and the ciphertext sizes.

6.
The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system is proposed to accommodate the user's head movements in such an interface. We propose an efficient camera calibration method to track the three-dimensional position and orientation of the user's head accurately, and evaluate both the performance of the proposed method and the influence of the configuration of calibration points on that performance. The experimental error analysis showed that the proposed method provides a more accurate and stable camera pose (i.e. position and orientation) than the direct linear transformation method commonly used in camera calibration. The results of this study can be applied to tracking head movements for the eye-controlled human/computer interface and for virtual reality technology.

7.
In recent years, new and more effective procedures for applying collocation have been published. This article presents a review of the subject and complements its developments. From the general theory, two broad approaches are derived, which yield the direct and the indirect TH-collocation methods. The former approach had not been published before and is a dual of the indirect approach. In particular, second-order differential equations of elliptic type are considered, and several orthogonal collocation algorithms are developed for them. In TH-collocation, the approximations on the internal boundary and in the subdomain interiors are completely independent, which yields clear computational advantages, illustrated here through the construction of such algorithms. The implementations presented include three-dimensional problems. In passing, single-point-collocation methods that have been the subject of several recent publications are reviewed.

8.
We present a novel approach for efficient optimization of systems consisting of expensive-to-simulate components and relatively inexpensive system-level simulations. We consider problems in which the components are independent in the sense that they do not exchange coupling variables; design variables, however, can be shared across components. Component metamodels are constructed using Kriging and are adaptively sampled based on a system-level infill sampling criterion using Efficient Global Optimization. The effectiveness of the technique is demonstrated on numerical examples and an engineering case study. Results show steady and fast convergence to the global deterministic optimum of the problems.

9.
Structural and Multidisciplinary Optimization - The computational model has become an essential tool in many engineering applications. To take full advantage of a computational model, its accuracy...

10.
Multifidelity optimization approaches seek to bring higher-fidelity analyses earlier into the design process by using performance estimates from lower-fidelity models to accelerate convergence towards the optimum of a high-fidelity design problem. Current multifidelity optimization methods generally fall into two broad categories: provably convergent methods that use either the high-fidelity gradient or a high-fidelity pattern-search, and heuristic model calibration approaches, such as interpolating high-fidelity data or adding a Kriging error model to a lower-fidelity function. This paper presents a multifidelity optimization method that bridges these two ideas; our method iteratively calibrates lower-fidelity information to the high-fidelity function in order to find an optimum of the high-fidelity design problem. The algorithm developed minimizes a high-fidelity objective function subject to a high-fidelity constraint and other simple constraints. The algorithm never computes the gradient of a high-fidelity function; however, it achieves first-order optimality using sensitivity information from the calibrated low-fidelity models, which are constructed to have negligible error in a neighborhood around the solution. The method is demonstrated for aerodynamic shape optimization and shows at least an 80% reduction in the number of high-fidelity analyses compared to other single-fidelity derivative-free and sequential quadratic programming methods. The method uses approximately the same number of high-fidelity analyses as a multifidelity trust-region algorithm that estimates the high-fidelity gradient using finite differences.

11.
To address the calibration of low-resolution network digital cameras for quantitative measurement, a method for constructing and optimizing a nonlinear distortion model is proposed. The method combines a polynomial model with the traditional distortion model, and automatically refines the model using regression analysis to select the significant distortion parameters, yielding an adaptive camera distortion model. Using the constructed distortion model, three low-cost network cameras of the same type were calibrated and distortion-corrected by self-calibrating bundle adjustment. The results show that the model effectively compensates for the systematic distortion left by generic distortion models while avoiding over-parameterization, improving the accuracy and robustness of camera calibration and achieving sub-pixel calibration accuracy.

12.
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, in which broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese yen swaps market and the US dollar yield market.

13.
Several recently developed model order reduction methods for fast simulation of large-scale dynamical systems with two or more parameters are reviewed. In addition, an alternative approach for linear parameter-system model reduction and a more efficient method for nonlinear parameter-system model reduction are proposed in this paper. Comparisons between the methods, from theoretical elegance to implementation complexity, are given. With these methods, a large-dimensional system with parameters can be reduced to a smaller-dimensional parameter system that approximates the original large system to a certain degree for all the parameters.

14.
Automatically repairing a bug can be a time-consuming process, especially for large-scale programs, owing to the significant amount of time spent recompiling and reinstalling the patched program. To reduce this time overhead and speed up the repair process, in this paper we present a recompilation technique called weak recompilation. In weak recompilation, we assume that a program consists of a set of components, and for each candidate patch only the altered components are recompiled, into a shared library. The original program is then dynamically updated by a function-indirection mechanism. The advantage of weak recompilation is that redundant recompilation cost is avoided, while the reinstallation cost is eliminated entirely because the original executable program is not modified at all. For maximum applicability of weak recompilation we created WAutoRepair, a scalable system for fixing bugs with high efficiency in large-scale C programs. Experiments on real bugs in widely used programs show that our repair system significantly outperforms GenProg, a well-known approach to automatic program repair. For the wireshark program, containing over 2 million lines of code, WAutoRepair is over 128 times faster in terms of recompilation cost than GenProg.
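A toy sketch of the function-indirection idea behind weak recompilation: calls go through an indirection table, so a "patched" component can be swapped in at run time without rebuilding or reinstalling the rest of the program. The names and the dictionary-based table are illustrative assumptions, not WAutoRepair's C-level mechanism (which uses shared libraries).

```python
# Indirection table: callers never bind directly to a component.
dispatch = {}

def register(name, fn):
    """Install (or hot-swap) a component implementation."""
    dispatch[name] = fn

def call(name, *args):
    """Every call costs one extra table lookup -- the price of patchability."""
    return dispatch[name](*args)

# Original (buggy) component: off-by-one error.
register("length", lambda xs: len(xs) - 1)
buggy = call("length", [1, 2, 3])

# Candidate patch, "recompiled" separately and swapped in at run time:
register("length", lambda xs: len(xs))
fixed = call("length", [1, 2, 3])
print(buggy, fixed)  # 2 3
```

The same shape appears in C as a table of function pointers whose entries are repointed at symbols loaded from a patched shared library, leaving the installed executable untouched.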

15.
An efficient video-on-demand model   (Total citations: 1; self-citations: 0; citations by others: 1)
Wright, W.E. Computer, 2001, 34(5): 64-70
An efficient video-on-demand system uses a practical, technologically sophisticated model to serve the viewing needs of a wide audience, including meeting peak demand for popular, newly released films.

16.
Identifying statistically significant anomalies in an unlabeled data set is of key importance in many applications, such as financial security and remote sensing. Rare category detection (RCD) helps address this issue by passing candidate data examples to a labeling oracle (e.g., a human expert) for labeling. A challenging task in RCD is to discover all categories without any prior information about the given data set. The few approaches proposed to address this issue have quadratic or cubic time complexity in the data set size N and require considerable labeling queries, involving the time-consuming and expensive labeling effort of a human expert. In this paper, aiming at lower time complexity and fewer labeling queries, we propose two prior-free (i.e., without any prior information about a given data set) RCD algorithms: (1) iFRED, which achieves linear time complexity in N, and (2) vFRED, which substantially reduces the number of labeling queries. This is done by tabulating each dimension of the data set into bins, zooming out to shrink each bin down to a position and conducting wavelet analysis on the data density function to quickly locate the position (i.e., a bin) of a rare category, and zooming in on the located bin to select candidate data examples for labeling. Theoretical analysis guarantees the effectiveness of our algorithms, and comprehensive experiments on both synthetic and real data sets further verify their effectiveness and efficiency.

17.
Stochastic local search (SLS) is a popular paradigm for incomplete solving of the Boolean satisfiability problem (SAT). Most SLS solvers for SAT switch between two modes, i.e., a greedy (intensification) mode and a diversification mode. However, the performance of these two-mode SLS algorithms lags far behind in solving the random 3-satisfiability (3-SAT) problem, a significant special case of SAT. In this paper, we propose a new hybrid scoring function called MC, based on a linear combination of a greedy property make and a diversification property ConfTimes, and then utilize MC to develop a new two-mode SLS solver called CCMC. To evaluate the performance of CCMC, we conduct extensive experiments comparing CCMC with five state-of-the-art two-mode SLS solvers (i.e., Sparrow2011, Sattime2011, EagleUP, gNovelty+PCL and CCASat) on a broad range of random 3-SAT instances, including all large 3-SAT instances from SAT Competition 2009 and SAT Competition 2011 as well as 200 generated satisfiable huge random 3-SAT instances. The experiments illustrate that CCMC clearly outperforms its competitors, indicating its effectiveness. We also analyze the effectiveness of the underlying ideas in CCMC and further improve its performance on random 5-SAT instances.
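A schematic version of a linearly combined SLS scoring function in the spirit of MC: rank flip candidates by a greedy term (make, the number of clauses a flip would satisfy) plus a weighted diversification term (ConfTimes). The weight value and the candidate bookkeeping shown here are illustrative assumptions, not CCMC's actual parameters.

```python
def mc_score(make, conf_times, weight=0.5):
    """Hybrid score: greedy term plus weighted diversification term."""
    return make + weight * conf_times

def pick_variable(candidates, weight=0.5):
    """Choose the flip candidate with the highest hybrid score.

    candidates maps a variable name to its (make, conf_times) pair.
    """
    return max(candidates,
               key=lambda v: mc_score(*candidates[v], weight))

cands = {"x1": (3, 0), "x2": (2, 4)}
print(pick_variable(cands))  # x2: 2 + 0.5*4 = 4.0 beats x1's 3.0
```

With the diversification weight set to zero the rule degenerates to a purely greedy pick, which is exactly the behavior the hybrid term is meant to escape.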

18.
Technological advances in nanotechnology enabled the use of microelectromechanical systems (MEMS) in various application areas. With the integration of various sensor devices into MEMS, autonomously calibrating these sensors become a major research problem. When performing calibration on real-world embedded sensor network deployments, random errors due to internal and external factors alter the calibration parameters and eventually effect the calibration quality in a negative way. Therefore, during autonomous calibration, calibration paths which has low cost and low error values are preferable. To tackle the calibration problem on embedded wireless sensor networks, we present an energy efficient and minimum error calibration model, and also prove that due to random errors the problem turns into an NP-complete problem. To the best of our knowledge this is the first time a formal proof is presented on the complexity of an iterative calibration based problem when random errors are present in the measurements. We also conducted heuristic tests using genetic algorithm to solve the optimization version of the problem, on various graphs. The NP-completeness result also reveals that more research is needed to examine the complexity of calibration in a more general framework in real-world sensor network deployments.  相似文献   

19.
Puzzle - an efficient, compression-independent video encryption algorithm   (Total citations: 1; self-citations: 0; citations by others: 1)
Real-time video streams require an efficient encryption method to ensure their confidentiality. One of the major challenges in designing a video encryption algorithm is encrypting the vast amount of video data in real time to satisfy stringent timing requirements. Video encryption algorithms can be classified, according to their association with video compression, into joint compression-and-encryption algorithms and compression-independent encryption algorithms. The latter have a clear advantage over the former regarding incorporation into existing multimedia systems, owing to their independence of the video compression. In this paper we present the compression-independent video encryption algorithm Puzzle, inspired by the children's game of jigsaw puzzles. It comprises two simple encryption operations with low computational complexity: puzzling and obscuring. The scheme thereby dramatically reduces the encryption overhead compared to conventional encryption algorithms, such as AES, especially for high-resolution video. Further outstanding features of Puzzle are a good trade-off between security demands and encryption efficiency, no impairment of video compression efficiency, and easy integration into existing multimedia systems. This makes Puzzle particularly well suited for security-sensitive multimedia applications, such as videoconferencing, where maximal security and minimal encryption overhead are desired simultaneously.
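A toy illustration of Puzzle's two lightweight operations: "puzzling" (permuting fixed-size pieces of a frame) and "obscuring" (XOR-ing with a keystream). The piece size, the key handling, and the use of Python's seeded PRNG are simplified assumptions for demonstration, not the published algorithm, and this sketch is not cryptographically secure.

```python
import random

def encrypt(data, key, piece=4):
    """Puzzle-style toy cipher; data length must be a multiple of `piece`."""
    rng = random.Random(key)
    pieces = [data[i:i + piece] for i in range(0, len(data), piece)]
    order = list(range(len(pieces)))
    rng.shuffle(order)                                    # puzzling
    shuffled = b"".join(pieces[i] for i in order)
    stream = bytes(rng.randrange(256) for _ in shuffled)
    return bytes(a ^ b for a, b in zip(shuffled, stream))  # obscuring

def decrypt(blob, key, piece=4):
    """Regenerate the same permutation and keystream, then invert both steps."""
    rng = random.Random(key)
    n = len(blob) // piece
    order = list(range(n))
    rng.shuffle(order)
    stream = bytes(rng.randrange(256) for _ in blob)
    shuffled = bytes(a ^ b for a, b in zip(blob, stream))  # un-obscure
    pieces = [shuffled[i:i + piece] for i in range(0, len(shuffled), piece)]
    out = [None] * n
    for pos, src in enumerate(order):                      # un-puzzle
        out[src] = pieces[pos]
    return b"".join(out)

frame = b"0123456789abcdef"
ct = encrypt(frame, key=42)
print(decrypt(ct, key=42) == frame)  # True
```

The appeal of this structure for video is that both operations are linear-time byte manipulations, far cheaper per byte than a full block cipher such as AES.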

20.
A new statistical model for relative radiometric correction   (Total citations: 1; self-citations: 0; citations by others: 1)
Stripe noise in satellite imagery stems mainly from the inconsistent responsivity of the individual detectors of a CCD camera, and can be removed by relative radiometric correction. Which correction model is appropriate depends heavily on the satellite itself. Physical models are highly credible, but achieving their accuracy requires strict optical and radiometric instrumentation, and some of the required parameters are too complex to obtain. Under simpler conditions, a statistical model based on the image itself can be used instead. A new statistical model is proposed: a relative radiometric correction model based on least-squares regression. Experiments show that it is widely applicable to general large remote-sensing images: it not only yields images with good visual quality, but also produces gain and bias values that accurately reflect the CCD's characteristics and can be applied to other images captured by the same CCD at nearby times.
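A small sketch of least-squares relative radiometric correction: for each CCD detector (image column), fit a gain and bias by linear regression of that column against a reference signal, here the per-row mean across all detectors. The reference choice and the synthetic striped image are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def relative_correction(img):
    """Per-column least-squares gain/bias fit against the row-mean reference."""
    ref = img.mean(axis=1)                      # reference signal per row
    corrected = np.empty_like(img, dtype=float)
    gains, biases = [], []
    for j in range(img.shape[1]):
        col = img[:, j].astype(float)
        A = np.vstack([col, np.ones_like(col)]).T
        gain, bias = np.linalg.lstsq(A, ref, rcond=None)[0]
        corrected[:, j] = gain * col + bias     # map detector j onto the reference
        gains.append(gain)
        biases.append(bias)
    return corrected, gains, biases

# Synthetic stripes: each detector applies its own gain/bias to the same
# true signal, producing column striping.
true = np.linspace(10, 200, 50)
img = np.stack([g * true + b for g, b in [(1.0, 0), (1.3, 5), (0.8, -3)]],
               axis=1)
fixed, gains, biases = relative_correction(img)
print(np.allclose(fixed[:, 0], fixed[:, 1]))  # True: columns agree after destriping
```

Because each fit is a two-parameter linear regression, the recovered gain/bias pairs are exactly the kind of per-detector coefficients that can be reused on other images from the same CCD.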
