19 similar documents found (search time: 15 ms)
1.
More efficient predictive control (cited by: 1; self-citations: 0; citations by others: 1)
An approach for constrained predictive control of linear systems (or uncertain systems described by polytopic uncertainty models) is presented. The approach consists of offline optimization (non-convex in general, but often convex) and very efficient online optimization. Two examples, one of them a laboratory experiment, compare the approach to existing approaches, revealing both advantages and disadvantages.
2.
A fast and effective two-step algorithm based on the radial alignment constraint is proposed. The algorithm combines the advantages of linear and nonlinear camera models: a linear model is used to calibrate a subset of the camera parameters; nonlinear distortion is then taken into account, and by simplifying the conditions the nonlinear equations are converted into linear equations to solve for the remaining camera parameters, which effectively avoids the tedious computation and unstable results of solving nonlinear equations directly. Finally, the calibration results are refined by an optimization algorithm. Experimental results show that the optimized two-step algorithm improves both the efficiency and the accuracy of camera calibration.
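To make the two-step structure concrete, here is a minimal Python sketch of the same pattern, not the authors' implementation: the linear stage estimates a planar homography by direct linear transformation, and the distortion stage exploits the fact that a polynomial radial model, though nonlinear in the image points, is linear in its coefficients (k1, k2), so it reduces to ordinary least squares. The function names and the single-image, planar-target setup are illustrative assumptions.

```python
import numpy as np

def estimate_homography(world_xy, image_uv):
    """Step 1 (linear model): DLT estimate of the 3x3 homography H."""
    A = []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector of the DLT system
    return H / H[2, 2]

def fit_radial_distortion(ideal_uv, observed_uv, center):
    """Step 2: model u' = u + (u - c) * (k1*r^2 + k2*r^4).
    Nonlinear in the points but *linear* in (k1, k2), so it becomes an
    ordinary least-squares problem instead of a nonlinear solve."""
    rows, rhs = [], []
    for (u, v), (uo, vo) in zip(ideal_uv, observed_uv):
        du, dv = u - center[0], v - center[1]
        r2 = du * du + dv * dv
        rows += [[du * r2, du * r2 * r2], [dv * r2, dv * r2 * r2]]
        rhs += [uo - u, vo - v]
    (k1, k2), *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return k1, k2
```

A final nonlinear refinement over all parameters, as the abstract describes, would then start from these linear estimates.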
3.
Theoretical Computer Science, 2005, 345(1): 60-82
State-of-the-art algorithms for on-the-fly automata-theoretic LTL model checking make use of nested depth-first search to look for accepting cycles in the product of the system and the Büchi automaton. Here, we present two new single depth-first search algorithms that accomplish the same task. The first is based on Tarjan's algorithm for detecting strongly connected components, while the second is a combination of the first and Couvreur's algorithm for finding acceptance cycles in the product of a system and a generalized Büchi automaton. Both new algorithms report an accepting cycle immediately after all transitions in the cycle have been investigated. We show their correctness, describe efficient implementations and discuss how they interact with some other model checking techniques, such as bitstate hashing. The algorithms are compared to the nested search algorithms in experiments on both random and actual state spaces, using random and real formulas. Our measurements indicate that our algorithms investigate at most as many states as the old ones. In the case of a violation of the correctness property, the algorithms often explore significantly fewer states.
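As a point of reference for the SCC-based approach, the sketch below uses textbook recursive Tarjan to decide whether an accepting cycle exists: it does iff some nontrivial SCC (or a state with a self-loop) contains an accepting state. The paper's algorithms are on-the-fly and report a cycle as soon as all its transitions have been explored; this sketch only decides existence and is a deliberate simplification.

```python
import sys

def has_accepting_cycle(succ, init, accepting):
    """succ: dict state -> list of successors; accepting: set of states."""
    sys.setrecursionlimit(1_000_000)   # plain recursion for clarity
    index, low, on_stack, stack = {}, {}, set(), []
    counter, found = [0], [False]

    def dfs(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in succ.get(v, ()):
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop(); on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            # an SCC carries a cycle iff it has >1 state or a self-loop
            nontrivial = len(scc) > 1 or v in succ.get(v, ())
            if nontrivial and any(s in accepting for s in scc):
                found[0] = True

    for s in init:
        if s not in index:
            dfs(s)
    return found[0]

# Example: states 0 and 1 form a cycle through accepting state 1.
# has_accepting_cycle({0: [1], 1: [0]}, [0], {1})  -> True
```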
4.
Structural and Multidisciplinary Optimization - Computational models have become an essential tool in many engineering applications. To take full advantage of a computational model, its accuracy...
5.
Multifidelity optimization approaches seek to bring higher-fidelity analyses earlier into the design process by using performance estimates from lower-fidelity models to accelerate convergence towards the optimum of a high-fidelity design problem. Current multifidelity optimization methods generally fall into two broad categories: provably convergent methods that use either the high-fidelity gradient or a high-fidelity pattern search, and heuristic model-calibration approaches, such as interpolating high-fidelity data or adding a Kriging error model to a lower-fidelity function. This paper presents a multifidelity optimization method that bridges these two ideas; our method iteratively calibrates lower-fidelity information to the high-fidelity function in order to find an optimum of the high-fidelity design problem. The resulting algorithm minimizes a high-fidelity objective function subject to a high-fidelity constraint and other simple constraints. The algorithm never computes the gradient of a high-fidelity function; instead, it achieves first-order optimality using sensitivity information from the calibrated low-fidelity models, which are constructed to have negligible error in a neighborhood around the solution. The method is demonstrated for aerodynamic shape optimization and shows at least an 80% reduction in the number of high-fidelity analyses compared to other single-fidelity derivative-free and sequential quadratic programming methods. The method uses approximately the same number of high-fidelity analyses as a multifidelity trust-region algorithm that estimates the high-fidelity gradient using finite differences.
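The calibration idea admits a compact illustration. The 1-D Python sketch below is a toy, not the paper's provably convergent algorithm: it corrects a cheap model with an interpolated discrepancy so the surrogate matches high-fidelity data at sampled points, then steps within a simple trust region. The toy models, the RBF correction, and the acceptance rule are all assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize_scalar

f_hi = lambda x: (x - 0.8) ** 2 + 0.1 * np.sin(12 * x)  # "expensive" toy model
f_lo = lambda x: (x - 0.6) ** 2                          # cheap toy model

X = [0.0, 1.0]                          # sites where f_hi has been evaluated
E = [f_hi(t) - f_lo(t) for t in X]      # observed hi/lo discrepancies
x, fx, radius = 0.0, f_hi(0.0), 0.5

for _ in range(8):
    corr = RBFInterpolator(np.array(X)[:, None], np.array(E))
    surrogate = lambda t: f_lo(t) + corr(np.array([[t]]))[0]  # calibrated model
    cand = minimize_scalar(surrogate, bounds=(x - radius, x + radius),
                           method="bounded").x
    f_cand = f_hi(cand)                 # one new high-fidelity call per step
    if all(abs(cand - t) > 1e-9 for t in X):
        X.append(cand)
        E.append(f_cand - f_lo(cand))
    if f_cand < fx:                     # accept the step, grow the region
        x, fx, radius = cand, f_cand, radius * 1.5
    else:                               # reject the step, shrink the region
        radius *= 0.5
print("approximate optimum:", x)
```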
6.
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency-hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese yen swaps market and the US dollar yield market.
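A minimal sketch of the augmented error function may help: the fitting error is combined with a Kullback-Leibler consistency-hint term. The EM-type algorithm, the canonical-error weighting, and the Vasicek specifics are omitted; `lambda_hint` and the toy interfaces are illustrative assumptions.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler distance KL(p || q) between discrete distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def augmented_error(params, market_prices, model_price, implied_dist, ref_dist,
                    lambda_hint=0.1):
    """Curve-fitting error plus a weighted consistency-hint penalty."""
    fit = np.mean((model_price(params) - np.asarray(market_prices)) ** 2)
    hint = kl(implied_dist(params), ref_dist)   # keep model-implied law sane
    return fit + lambda_hint * hint
```

Minimizing this instead of the bare fitting error is what turns curve fitting into calibration in the sense used above: parameters that fit the prices but violate the hint are penalized.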
7.
Samee Ur Rehman, Matthijs Langelaar, Structural and Multidisciplinary Optimization, 2017, 55(4): 1143-1157
We present a novel approach for the efficient optimization of systems consisting of expensive-to-simulate components and relatively inexpensive system-level simulations. We consider problems in which the components are independent in the sense that they do not exchange coupling variables; design variables, however, can be shared across components. Component metamodels are constructed using Kriging. The metamodels are adaptively sampled based on a system-level infill sampling criterion using Efficient Global Optimization. The effectiveness of the technique is demonstrated on numerical examples and an engineering case study. Results show steady and fast convergence to the global deterministic optimum of the problems.
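For readers unfamiliar with Efficient Global Optimization, the sketch below shows the standard expected-improvement infill criterion on which such adaptive sampling is built; the paper uses a system-level variant over component Kriging metamodels, which is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI at points with Kriging mean mu and standard deviation sigma,
    for minimization with current best observed value f_best."""
    mu, sigma = np.asarray(mu, float), np.maximum(np.asarray(sigma, float), 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: a point predicted near the incumbent but with high uncertainty
# still earns positive expected improvement and attracts the next sample.
# expected_improvement(mu=[0.30], sigma=[0.10], f_best=0.30)
```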
8.
An efficient video-on-demand model (cited by: 1; self-citations: 0; citations by others: 1)
An efficient video-on-demand system uses a practical, technologically sophisticated model to serve the viewing needs of a wide audience, including meeting the peak demand for popular, newly released films.
9.
Automatically repairing a bug can be a time-consuming process, especially for large-scale programs, owing to the significant amount of time spent recompiling and reinstalling the patched program. To reduce this time overhead and speed up the repair process, in this paper we present a recompilation technique called weak recompilation. In weak recompilation, we assume that a program consists of a set of components, and for each candidate patch only the altered components are recompiled to a shared library. The original program is then dynamically updated by a function indirection mechanism. The advantage of weak recompilation is that redundant recompilation cost is avoided, while the reinstallation cost is eliminated entirely because the original executable program is not modified at all. For maximum applicability of weak recompilation we created WAutoRepair, a scalable system for fixing bugs with high efficiency in large-scale C programs. Experiments on real bugs in widely used programs show that our repair system significantly outperforms Genprog, a well-known approach to automatic program repair. For the wireshark program, containing over 2 million lines of code, WAutoRepair is over 128 times faster in terms of recompilation cost than Genprog.
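The function-indirection mechanism can be sketched briefly. The Python fragment below mimics the idea: only the altered component is rebuilt into a shared library, and all calls are routed through an indirection table, so applying a patch becomes a table update rather than a reinstall. The file names, the compiler invocation, and the `buggy_function` signature are hypothetical, and WAutoRepair itself works inside C programs rather than through Python.

```python
import ctypes
import subprocess

# Recompile only the changed component into a shared object, not the
# whole program (hypothetical source file and signature).
subprocess.run(["cc", "-shared", "-fPIC", "-o", "patched.so",
                "patched_component.c"], check=True)

lib = ctypes.CDLL("./patched.so")            # dynamically load the patch
lib.buggy_function.restype = ctypes.c_int    # assumed signature: int(int)
lib.buggy_function.argtypes = [ctypes.c_int]

dispatch = {"buggy_function": lib.buggy_function}   # indirection table

def call(name, *args):
    """Call sites go through the table; swapping a patch is a dict update."""
    return dispatch[name](*args)
```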
10.
Hüseyin Akcan, Applied Soft Computing, 2013, 13(4): 1766-1773
Technological advances in nanotechnology have enabled the use of microelectromechanical systems (MEMS) in various application areas. With the integration of various sensor devices into MEMS, autonomously calibrating these sensors has become a major research problem. When performing calibration on real-world embedded sensor network deployments, random errors due to internal and external factors alter the calibration parameters and eventually degrade the calibration quality. Therefore, during autonomous calibration, calibration paths that have low cost and low error values are preferable. To tackle the calibration problem on embedded wireless sensor networks, we present an energy-efficient, minimum-error calibration model, and we prove that, owing to random errors, the problem becomes NP-complete. To the best of our knowledge, this is the first formal proof of the complexity of an iterative-calibration problem in the presence of random measurement errors. We also conducted heuristic tests on various graphs using a genetic algorithm to solve the optimization version of the problem. The NP-completeness result also reveals that more research is needed to examine the complexity of calibration in a more general framework for real-world sensor network deployments.
11.
Stochastic local search (SLS) is a popular paradigm in incomplete solving for the Boolean satisfiability problem (SAT). Most SLS solvers for SAT switch between two modes, namely the greedy (intensification) mode and the diversification mode. However, the performance of these two-mode SLS algorithms lags far behind on the random 3-satisfiability (3-SAT) problem, a significant special case of SAT. In this paper, we propose a new hybrid scoring function called MC, based on a linear combination of a greedy property make and a diversification property ConfTimes, and then utilize MC to develop a new two-mode SLS solver called CCMC. To evaluate the performance of CCMC, we conduct extensive experiments comparing CCMC with five state-of-the-art two-mode SLS solvers (Sparrow2011, Sattime2011, EagleUP, gNovelty+PCL and CCASat) on a broad range of random 3-SAT instances, including all large 3-SAT instances from SAT Competition 2009 and SAT Competition 2011 as well as 200 generated satisfiable huge random 3-SAT instances. The experiments show that CCMC clearly outperforms its competitors, indicating the effectiveness of CCMC. We also analyze the effectiveness of the underlying ideas in CCMC and further improve its performance on random 5-SAT instances.
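A hybrid scoring function of this kind is easy to state in code. The sketch below combines a greedy make score with a diversification term in the spirit of ConfTimes, here taken as the time since the variable last flipped; the weights and the exact form of the diversification property are illustrative assumptions, not CCMC's published rule.

```python
def mc_score(var, make, last_flip, step, alpha=1.0, beta=0.2):
    """make[var]: clauses that become satisfied if var is flipped (greedy);
    last_flip[var]: step at which var last changed (diversification)."""
    age = step - last_flip[var]          # long-unflipped variables get a boost
    return alpha * make[var] + beta * age

def pick_variable(candidates, make, last_flip, step):
    """Flip the candidate with the best hybrid score."""
    return max(candidates, key=lambda v: mc_score(v, make, last_flip, step))
```

The point of the linear combination is that a single ranking interleaves intensification and diversification instead of hard-switching between two modes.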
12.
Expert Systems with Applications, 2014, 41(17): 7691-7706
Identifying statistically significant anomalies in an unlabeled data set is of key importance in many applications such as financial security and remote sensing. Rare category detection (RCD) helps address this issue by passing candidate data examples to a labeling oracle (e.g., a human expert) for labeling. A challenging task in RCD is to discover all categories without any prior information about the given data set. The few approaches that have been proposed to address this issue have quadratic or cubic time complexity w.r.t. the data set size N and require a considerable number of labeling queries, each involving the time-consuming and expensive effort of a human expert. In this paper, aiming at lower time complexity and fewer labeling queries, we propose two prior-free (i.e., requiring no prior information about a given data set) RCD algorithms: (1) iFRED, which achieves linear time complexity w.r.t. N, and (2) vFRED, which substantially reduces the number of labeling queries. This is done by tabulating each dimension of the data set into bins, zooming out to shrink each bin down to a position and conducting wavelet analysis on the data density function to quickly locate the position (i.e., a bin) of a rare category, and then zooming in on the located bin to select candidate data examples for labeling. Theoretical analysis guarantees the effectiveness of our algorithms, and comprehensive experiments on both synthetic and real data sets further verify their effectiveness and efficiency.
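The bin-and-wavelet step can be illustrated in a few lines. The sketch below histograms one dimension, takes one level of Haar detail coefficients of the bin densities, and flags the sharpest local change as a candidate bin to zoom in on; the bin count, the single-level Haar transform, and the toy data are assumptions, and the real iFRED/vFRED analysis is considerably more involved.

```python
import numpy as np

def locate_candidate_bin(values, n_bins=64):
    """Return the value range of the bin pair with the sharpest density change."""
    counts, edges = np.histogram(values, bins=n_bins)
    density = counts.astype(float) / counts.sum()
    # One level of the Haar transform: detail coefficients measure abrupt
    # changes between neighbouring bins, where a rare bump may sit.
    detail = (density[0::2] - density[1::2]) / np.sqrt(2.0)
    k = int(np.argmax(np.abs(detail)))
    return edges[2 * k], edges[2 * k + 2]      # interval to "zoom in" on

# Toy data: a broad majority class plus a tiny, tight rare cluster near 3.0.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 5000), rng.normal(3.0, 0.02, 30)])
print(locate_candidate_bin(data))
```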
13.
A new statistical model for relative radiometric correction (cited by: 1; self-citations: 0; citations by others: 1)
Striping noise in satellite imagery originates mainly from inconsistent responsivity among the individual detector elements of the CCD camera, and relative radiometric correction can resolve this problem. How to build a suitable correction model depends heavily on the conditions of the satellite itself. Physical models are highly credible, but reaching the accuracy a physical model demands requires strict optical and radiometric instrumentation, and some of the required parameters are too complex to obtain. Under simple conditions, a statistical model based on the image itself can be adopted instead. A new statistical model is proposed: a relative radiometric correction model based on least-squares regression. Examples show that it is widely applicable and suitable for general large-format remote sensing images; it not only yields images with good visual quality, but also generates gain and bias values that accurately reflect the CCD characteristics, and these remain applicable to other images formed by the same CCD in adjacent time periods.
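A statistics-based correction of this sort can be sketched directly. The fragment below fits, for each detector (image column), a least-squares line mapping the column's values to a per-row scene reference, yielding a gain and bias that suppress striping; the choice of the row mean as reference is an illustrative assumption rather than the paper's exact regression setup.

```python
import numpy as np

def destripe(image):
    """image: 2-D array; each column is produced by one detector element."""
    ref = image.mean(axis=1)                 # per-row scene reference signal
    out = np.empty_like(image, dtype=float)
    for j in range(image.shape[1]):
        col = image[:, j].astype(float)
        # Least-squares line ref ~ gain * col + bias for this detector.
        gain, bias = np.polyfit(col, ref, 1)
        out[:, j] = gain * col + bias        # apply the correction
    return out
```

The fitted (gain, bias) pairs are exactly the per-detector values the abstract says can be reused for other images taken by the same CCD in nearby time periods.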
14.
Real-time video streams require an efficient encryption method to ensure their confidentiality. One of the major challenges in designing a video encryption algorithm is encrypting the vast amount of video data in real time to satisfy stringent timing requirements. Video encryption algorithms can be classified, according to their association with video compression, into joint compression-and-encryption algorithms and compression-independent encryption algorithms. The latter have a clear advantage over the former regarding incorporation into existing multimedia systems, owing to their independence from the video compression. In this paper we present the compression-independent video encryption algorithm Puzzle, inspired by the children's game jigsaw puzzle. It comprises two simple encryption operations with low computational complexity: puzzling and obscuring. The scheme thereby dramatically reduces the encryption overhead compared to conventional encryption algorithms such as AES, especially for high-resolution video. Further outstanding features of Puzzle are a good trade-off between security demands and encryption efficiency, no impairment of video compression efficiency, and easy integration into existing multimedia systems. This makes Puzzle particularly well suited for security-sensitive multimedia applications, such as videoconferencing, where maximal security and minimal encryption overhead are desired simultaneously.
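The two operations can be caricatured in a few lines of Python: "puzzling" as a keyed permutation of equal-sized pieces and "obscuring" as masking the bytes with a keystream. This shows the structure only; the piece layout, key handling, and security analysis of the real Puzzle cipher differ, and decryption (not shown) would invert the mask first and then the permutation.

```python
import hashlib
import random

def keystream(key: bytes, n: int) -> bytes:
    """Cheap counter-mode keystream from a hash (illustrative, not Puzzle's)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def puzzle_encrypt(frame: bytes, key: bytes, piece: int = 64) -> bytes:
    # For simplicity, assume len(frame) is a multiple of piece.
    pieces = [frame[i:i + piece] for i in range(0, len(frame), piece)]
    order = list(range(len(pieces)))
    random.Random(key).shuffle(order)          # puzzling: keyed permutation
    shuffled = b"".join(pieces[i] for i in order)
    ks = keystream(key, len(shuffled))         # obscuring: XOR masking
    return bytes(a ^ b for a, b in zip(shuffled, ks))
```

Both operations touch each byte only once with trivial arithmetic, which is the source of the low overhead claimed relative to a full block cipher such as AES.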
15.
The complete and parametrically continuous (CPC) robot kinematic modeling convention has no model singularities and allows the modeling of the robot base and tool in the same manner by which the internal links are modeled. These two properties can be utilized to construct robot kinematic error models employing the minimum number of kinematic error parameters. These error parameters are independent and span the entire geometric error space. The BASE and TOOL error models are derived as special cases of the regular CPC error model. The CPC error model is useful for both kinematic identification and kinematic compensation. This paper focuses on the derivation of the CPC error models and their use in the experimental implementation of robot calibration.
16.
International Journal of Computer Mathematics, 2012, 89(9): 1735-1751
We propose a new methodology for the calibration of a hybrid credit-equity model to credit default swap (CDS) spreads and survival probabilities. We consider an extended Jump to Default Constant Elasticity of Variance model incorporating stochastic and possibly negative interest rates. Our approach is based on a perturbation technique that provides an explicit asymptotic expansion of the CDS spreads. The robustness and efficiency of the method are confirmed by several calibration tests on real market data.
17.
Today, the world is making great strides in technology, turning the vision of transparency, speed, accuracy, authenticity, user-friendliness and security in various services and access control mechanisms into reality. Consequently, ever newer ideas are being put forward by researchers throughout the world. Khan et al. (Chaos Solitons Fractals 35(3):519–524, 2008) proposed a remote user authentication scheme for mobile devices using a hash function and fingerprint biometrics. In 2012, Chen et al. pointed out a forged login attack, mounted through loss of the mobile device, on Khan et al.'s scheme and subsequently proposed a scheme to remedy this drawback. Truong et al. (Proceedings of 26th IEEE International Conference on Advanced Information Networking and Applications, pp 678–685, 2012) demonstrated that in Chen et al.'s scheme an adversary can successfully replay an intercepted login request. They also showed how an adversary can deceive both participants of Chen et al.'s protocol by exploiting the fact that the user is not anonymous in the scheme. Further, they proposed an improvement to Chen et al.'s scheme to eliminate these problems. In this paper, we show that Chen et al.'s scheme has further drawbacks and that the improvement proposed by Truong et al. is still insecure and vulnerable. We also propose an improved scheme that overcomes the flaws and inherits the strengths of both Chen et al.'s scheme and Truong et al.'s scheme.
18.
DONG JingJing, JIANG HanJun, ZHANG LingWei, WEI JianJun, LI FuLe, ZHANG Chun, WANG ZhiHua, Science China Information Sciences, 2014, (10): 209-218
A novel low-power DC offset calibration (DCOC) method, independent of the intermediate frequency (IF) gain, for zero-IF receiver applications is reported. The conventional analog DCOC method consumes more power and degrades the performance of the receiver. The conventional mixed-signal method requires additional memory to store the calibration results at different receiver gains, since the DC offset depends on the radio frequency (RF) and IF gain. A novel algorithm is presented that makes the DCOC process independent of the IF gain, which significantly reduces the memory area. With the proposed circuit, the receiver calibrates only once, so the settling time and power consumption of the IF circuit are lowered. A DCOC circuit using the proposed method is manufactured in 0.18 μm CMOS technology and drains nearly 0 mA equivalent current from a 1.8 V power supply.
19.
To reduce the measurement error of a radiosonde's humidity-sensitive capacitor in the upper atmosphere, particularly in low-temperature environments, an error correction model based on an improved pi-sigma fuzzy neural network was designed, using a K-means clustering algorithm and a direct weight determination method to improve network performance. Actual tests comparing it with a BP neural network show that, on 144 training samples spanning −30 to 40 °C, the maximum relative errors of the pi-sigma fuzzy neural network and the BP neural network were 4.774% and 15.27%, respectively, with convergence times of 0.01 s and 2 s. Results on four test samples prove that the pi-sigma fuzzy neural network effectively achieves temperature compensation and nonlinearity correction of the humidity-sensitive capacitor under low-temperature conditions, and that it outperforms the BP neural network in prediction accuracy, generalization ability, and training speed.
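For orientation, a plain pi-sigma unit is simply a product of linear sums, as in the sketch below; the paper's improved variant adds fuzzification, K-means clustering, and direct weight determination, none of which are reproduced here.

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """x: input vector (d,); W: weights (k, d) for k summing units; b: (k,).
    The sigma layer forms k weighted sums; the pi unit multiplies them,
    giving a higher-order polynomial response with few parameters."""
    sums = W @ x + b          # sigma layer
    return np.prod(sums)      # pi unit (a squashing function may follow)

# Example: two summing units over a 2-D input.
# pi_sigma_forward(np.array([0.5, -1.0]),
#                  np.array([[0.2, 0.4], [0.1, -0.3]]),
#                  np.zeros(2))
```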