Similar Documents
20 similar documents found.
1.
Kharitonov's theorems are generalized to the problem of so-called weak Kharitonov regions for robust stability of linear uncertain systems. Given a polytope of (characteristic) polynomials P and a stability region D in the complex plane, P is called D-stable if the zeros of every polynomial in P are contained in D. It is of interest to know whether the D-stability of the vertices of P implies the D-stability of P. A simple approach is developed which unifies and generalizes many known results on this problem.
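A minimal sketch of the vertex test discussed above, assuming D is the open left half-plane and using a small hypothetical polytope of characteristic polynomials (the vertex coefficients and the `d_stable` helper are my own illustration, not from the paper); note that vertex D-stability does not in general imply D-stability of the whole polytope, which is exactly the question the paper studies.

```python
# Sketch: check D-stability of the vertex polynomials of a polytope.
# Here D = open left half-plane (Hurwitz stability); numpy only computes zeros.
import numpy as np

def d_stable(coeffs, in_D=lambda z: z.real < 0.0):
    """True if every zero of the polynomial (highest degree first) lies in D."""
    return all(in_D(z) for z in np.roots(coeffs))

# Vertex polynomials of a hypothetical polytope P (degree-3 coefficients).
vertices = [
    [1.0, 6.0, 11.0, 6.0],   # (s+1)(s+2)(s+3)
    [1.0, 5.0,  8.0, 4.0],   # (s+1)(s+2)^2
    [1.0, 7.0, 14.0, 8.0],   # (s+1)(s+2)(s+4)
]

# Vertex test: every vertex is D-stable in this example.
print(all(d_stable(v) for v in vertices))  # True
```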

2.
Computing the width of a set
For a set of points P in three-dimensional space, the width of P, W(P), is defined as the minimum distance between parallel planes of support of P. It is shown that W(P) can be computed in O(n log n + I) time and O(n) space, where I is the number of antipodal pairs of edges of the convex hull of P and n is the number of vertices; in the worst case, I = O(n^2). For a convex polyhedron the time complexity becomes O(n + I). If P is a set of points in the plane, the complexity can be reduced to O(n log n). For simple polygons, linear time suffices.
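For the planar case mentioned above, here is a brute-force sketch (my own illustration, not the paper's O(n log n) algorithm, and the point set is hypothetical): the width of a planar set equals the minimum, over convex-hull edges, of the farthest hull vertex from that edge's supporting line.

```python
# Width of a planar point set via its convex hull: O(h^2) for h hull vertices.
import numpy as np
from scipy.spatial import ConvexHull

def width_2d(points):
    hull = ConvexHull(points)
    verts = points[hull.vertices]          # hull vertices in counterclockwise order
    h = len(verts)
    best = np.inf
    for i in range(h):
        p, q = verts[i], verts[(i + 1) % h]
        edge = q - p
        normal = np.array([-edge[1], edge[0]])
        normal /= np.linalg.norm(normal)
        # farthest hull vertex from the supporting line of edge (p, q)
        dist = np.abs((verts - p) @ normal).max()
        best = min(best, dist)
    return best

pts = np.random.default_rng(0).random((100, 2))   # hypothetical input
print(width_2d(pts))
```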

3.
The problem of determining whether a polytope P of n × n matrices is D-stable, i.e., whether each point in P has all its eigenvalues in a given nonempty, open, convex, conjugate-symmetric subset D of the complex plane, is discussed. An approach which checks the D-stability of certain faces of P is used. In particular, for each D and n the smallest integer m such that D-stability of every m-dimensional face guarantees D-stability of P is determined. It is shown that, without further information describing the particular structure of a polytope, either (2n-4)-dimensional or (2n-2)-dimensional faces need to be checked for D-stability, depending on the structure of D. Thus more work needs to be done before a computationally tractable algorithm for checking D-stability can be devised.

4.
The implementations of the Viterbi algorithm (VA) and the interacting multiple model (IMM) algorithm on a shared-bus and shared-memory multiple-instruction, multiple-data (MIMD) multiprocessor are discussed. The computational complexity as well as the speedup and efficiency are examined in detail. It is shown that the computational complexity of the parallel implementation of these algorithms is about the same in both memory space and processing time categories. Efficiency with P processors is about 1 - 1/P for small P and is expected to be relatively high for large P, especially when many filters and large state and measurement vectors are considered.

5.
Carver, R.H.; Tai, K.-C. Software, IEEE, 1991, 8(2): 66-74
Attention is given to the problems that arise during the testing and debugging cycle of concurrent programs because of their nondeterministic execution behavior, whereby multiple executions of a concurrent program with the same input may exercise different synchronization sequences and even produce different results. These problems are solved by using deterministic execution debugging and testing. The purpose of deterministic execution debugging is to replay executions of a concurrent program so that debugging information can be collected. Examples of semaphores and monitors are used to illustrate the approach, and the process of designing replay tools is described. The use of regression testing to see whether earlier debugging and testing introduced new errors is examined.
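A toy sketch of the replay idea described above (my own illustration, not the authors' tool; the thread names and recorded sequence are hypothetical): each thread asks a controller for its turn before a synchronization event, and the controller grants turns in the order recorded during an earlier run, so the same interleaving is reproduced.

```python
import threading

class ReplayController:
    """Grants turns in the order recorded during an earlier (debugged) run."""
    def __init__(self, recorded_order):
        self.order = recorded_order
        self.index = 0
        self.cond = threading.Condition()

    def acquire_turn(self, name):
        # Block until it is `name`'s turn in the recorded sequence.
        with self.cond:
            while self.index < len(self.order) and self.order[self.index] != name:
                self.cond.wait()

    def release_turn(self):
        with self.cond:
            self.index += 1
            self.cond.notify_all()

def worker(name, controller, log):
    controller.acquire_turn(name)
    log.append(name)                  # stands in for a synchronization event
    controller.release_turn()

recorded = ["T2", "T1", "T3"]         # hypothetical sequence from a prior run
controller, log = ReplayController(recorded), []
threads = [threading.Thread(target=worker, args=(n, controller, log))
           for n in ["T1", "T2", "T3"]]
for t in threads: t.start()
for t in threads: t.join()
print(log == recorded)                # True: the recorded order is replayed
```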

6.
The effectiveness of parallel processing of relational join operations is examined. The skew in the distribution of join attribute values and the stochastic nature of the task processing times are identified as the major factors that can affect the effective exploitation of parallelism. Expressions for the execution time of parallel hash join and semijoin are derived and their effectiveness analyzed. When many small processors are used in the parallel architecture, the skew can result in some processors becoming sources of bottleneck while other processors are being underutilized. Even in the absence of skew, the variations in the processing times of the parallel tasks belonging to a query can lead to high task synchronization delay and impact the maximum speedup achievable through parallel execution. For example, when the task processing time on each processor is exponential with the same mean, the speedup is proportional to P/ln(P), where P is the number of processors. Other factors such as memory size, communication bandwidth, etc., can lead to even lower speedup. These are quantified using analytical models.
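A quick numerical check of the P/ln(P) claim above, under the stated assumption of i.i.d. exponential task times and a fork-join (synchronize-at-the-end) execution model (the simulation setup is my own, not the paper's analysis): the parallel phase finishes at the maximum of P exponentials, whose mean is H_P (the P-th harmonic number, roughly ln P) times the task mean, so speedup is about P/H_P.

```python
# Compare simulated speedup with P/H_P and P/ln(P) for growing P.
import numpy as np

rng = np.random.default_rng(1)
mean = 1.0
for P in (4, 16, 64, 256):
    # completion time of the parallel phase = max of P exponential task times
    samples = rng.exponential(mean, size=(10000, P)).max(axis=1)
    simulated_speedup = P * mean / samples.mean()
    harmonic = sum(1.0 / k for k in range(1, P + 1))
    print(P, round(simulated_speedup, 2), round(P / harmonic, 2), round(P / np.log(P), 2))
```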

7.
In a general algebraic framework, starting with a bicoprime factorization P = N_pr D^{-1} N_pl, a right-coprime factorization N_p D_p^{-1}, a left-coprime factorization D̃_p^{-1} Ñ_p, and the generalized Bezout identities associated with the pairs (N_p, D_p) and (D̃_p, Ñ_p) are obtained. The set of all H-stabilizing compensators for P in the unity-feedback configuration S(P, C) is expressed in terms of (N_pr, D, N_pl) and the elements of the Bezout identity. The state-space representation P = C(sI - A)^{-1} B is included as an example.

8.
The robust stability of discrete-time systems formulated in terms of the delta (δ) operator is discussed. That is, given the nominal characteristic equation P(δ) of a discrete-time system, it is of interest to know how much the coefficients can be perturbed while preserving stability. A procedure to obtain the maximum intervals for a perturbed polynomial P(δ) to still be stable is presented.
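A small sketch of what δ-operator stability means, assuming the usual definition δ = (z - 1)/T for sampling period T (the coefficients and the helper below are my own illustration, not the paper's procedure): the unit disc in z maps to the circle |δ + 1/T| < 1/T, so P(δ) is stable when all its zeros lie inside that circle, and perturbation intervals can be probed numerically against this test.

```python
import numpy as np

def delta_stable(coeffs, T):
    """True if all zeros of P(delta) lie in the circle |delta + 1/T| < 1/T."""
    return all(abs(z + 1.0 / T) < 1.0 / T for z in np.roots(coeffs))

# Hypothetical second-order example: delta^2 + 1.5*delta + 0.6, T = 0.1 s.
print(delta_stable([1.0, 1.5, 0.6], T=0.1))   # True
```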

9.
It is shown that a not-necessarily-balanced state-space realization of the Moore reduced model can be computed directly, without balancing, via projections defined in terms of arbitrary bases for the left and right eigenspaces associated with the large eigenvalues of the product PQ of the reachability and observability Gramians. Two specific methods for computing these bases are proposed, one based on the ordered Schur decomposition of PQ and the other based on the Cholesky factors of P and Q. The algorithms perform reliably even for nonminimal models.
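A rough sketch of the projection idea described above (my own simplification with hypothetical matrices, not the paper's numerically careful Schur or Cholesky algorithms): solve the two Lyapunov equations for P and Q, take bases for the right and left eigenspaces of PQ associated with its largest eigenvalues, and project the realization obliquely onto them.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eig

def reduce_order(A, B, C, r):
    """Balancing-free projection onto the dominant eigenspaces of P @ Q."""
    P = solve_continuous_lyapunov(A, -B @ B.T)     # reachability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    w, V = eig(P @ Q)                              # right eigenvectors of PQ
    T = np.real(V[:, np.argsort(-w.real)[:r]])     # right basis (large eigenvalues)
    wl, U = eig((P @ Q).T)
    L = np.real(U[:, np.argsort(-wl.real)[:r]])    # left basis
    E = L.T @ T                                    # oblique projection T E^{-1} L^T
    return np.linalg.solve(E, L.T @ A @ T), np.linalg.solve(E, L.T @ B), C @ T

# Hypothetical stable model with one fast, weakly contributing mode.
A = np.diag([-1.0, -2.0, -50.0]); B = np.ones((3, 1)); C = np.ones((1, 3))
Ar, Br, Cr = reduce_order(A, B, C, 2)              # keep the two dominant states
```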

10.
A method is presented for the decomposition of the frequency domain of 2-D linear systems into two equivalent 1-D systems having dynamics in different directions and connected by a feedback system. It is shown that under some assumptions the decomposition problem can be reduced to finding a realizable solution to the matrix polynomial equation X(z_1)P(z_2) + Q(z_1)Y(z_2) = D(z_1, z_2). A procedure for finding a realizable solution X(z_1), Y(z_2) to the equation is given.

11.
Simultaneous controller design for linear time-invariant systems
The use of generalized sampled-data hold functions (GSHF) in the problem of simultaneous controller design for linear time-invariant plants is discussed. This problem can be stated as follows: given plants P_1, P_2, ..., P_N, find a controller C which achieves not only simultaneous stability, but also simultaneous optimal performance in the N given systems. By this, it is meant that C must optimize an overall cost function reflecting the closed-loop performance of each plant when it is regulated by C. The problem is solved in three aspects: simultaneous stabilization, simultaneous optimal quadratic performance, and simultaneous pole assignment in combination with simultaneous intersampling performance.

12.
Measurements of 23 style characteristics and the program metrics LOC, V(g), VARS, and PARS were collected from student Cobol programs by a program analyzer. These measurements, together with debugging time (syntax and logic) data, were analyzed using several statistical procedures of SAS (Statistical Analysis System), including linear, quadratic, and multiple regressions. Some of the characteristics shown to correlate significantly with debug time are GOTO usage, structuring of the IF-ELSE construct, level-88 item usage, paragraph invocation pattern, and data name length. Among the observed characteristic measures associated with the lowest debug times are 17% blank lines in the data division, 12% blank lines in the procedure division, and 13-character-long data items. A debugging effort estimator, DEST, was developed to estimate debug times.

13.
A reduced cover set of the set of full reducer semijoin programs for an acyclic query graph for a distributed database system is given. An algorithm is presented that determines the minimum-cost full reducer program. The computational complexity of finding the optimal full reducer for a single relation is of the same order as that of finding the optimal full reducer for all relations. The optimization algorithm is able to handle query graphs where more than one attribute is common between the relations. A method for determining the optimum profitable semijoin program is presented. A low-cost algorithm which determines a near-optimal profitable semijoin program is outlined. This is done by converting a semijoin program into a partial order graph. This graph also allows one to maximize the concurrent processing of the semijoins. It is shown that the minimum response time is given by the largest-cost path of the partial order graph. This reducibility is used as a post-optimizer for the SDD-1 query optimization algorithm. It is shown that the least upper bound on the length of any profitable semijoin program is N(N-1) for a query graph of N nodes.
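The claim above that the minimum response time equals the largest-cost path of the partial order graph can be illustrated with a small dynamic program over a topological order (the semijoin graph and costs below are hypothetical, not from the paper); Python's standard graphlib supplies the ordering.

```python
# Largest-cost path in a DAG of semijoins: each node finishes at its own cost
# plus the latest finish time among its predecessors.
from graphlib import TopologicalSorter

def longest_path_cost(preds, cost):
    """preds: dict node -> set of predecessor nodes; cost: dict node -> cost."""
    finish = {}
    for node in TopologicalSorter(preds).static_order():
        finish[node] = cost[node] + max((finish[p] for p in preds.get(node, set())), default=0)
    return max(finish.values())

# Hypothetical semijoin program: s3 depends on s1 and s2, s4 depends on s3.
preds = {"s1": set(), "s2": set(), "s3": {"s1", "s2"}, "s4": {"s3"}}
cost = {"s1": 4, "s2": 7, "s3": 3, "s4": 2}
print(longest_path_cost(preds, cost))   # 7 + 3 + 2 = 12
```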

14.
The performance of job scheduling is studied in a large parallel processing system where a job is modeled as a concatenation of two stages which must be processed in sequence. P_i is the number of processors required by stage i, and P is the total number of processors in the system. A large parallel computing system is considered where Max(P_1, P_2) ⩾ P ≫ 1 and Max(P_1, P_2) ≫ Min(P_1, P_2). For such systems, exact expressions for the mean system delay are obtained for various job models and disciplines. The results show that priority should be given to jobs working on the stage which requires fewer processors. The large-parallel-system (i.e., P ≫ 1) condition is then relaxed to obtain the mean system time for two job models when priority is given to the second stage. Moreover, a scale-up rule is introduced to obtain the approximate delay performance when the system provides more processors than the maximum number of processors required by both stages (i.e., P > Max(P_1, P_2)). An approximation model is given for jobs with more than two stages.

15.
Pole assignment in a singular system E dx/dt = Ax + Bu is discussed. It is shown that the problem of assigning the roots of det(sE - (A + BF)) by applying a proportional feedback u = Fx + r in a given singular system is equivalent to the problem of pole assignment of an appropriate regular system. An immediate application of this result is that procedures and computational algorithms that were originally developed for assigning eigenvalues in regular systems become useful tools for pole assignment in singular systems. The approach provides a useful tool for the combined problem of eliminating impulsive behavior and stabilizing a singular system.
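Since the result above reduces the singular problem to pole assignment for a regular system, standard eigenvalue-assignment routines become applicable; here is a minimal regular-system example with scipy's place_poles on a hypothetical double-integrator plant (note that scipy uses the convention u = -Kx, so K corresponds to -F in the abstract's u = Fx + r notation).

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])         # hypothetical double integrator
B = np.array([[0.0], [1.0]])
K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
print(np.sort(np.linalg.eigvals(A - B @ K)))   # approximately [-2., -1.]
```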

16.
The isolation approach to symbolic execution of Ada tasking programs provides a basis for automating partial correctness proofs. The strength of this approach lies in its isolation nature; tasks are symbolically executed and verified independently, and then checked for cooperation where interference can occur. This keeps the verification task computationally feasible and enhances its compositionality. Safety, however, is a more appropriate notion of correctness for concurrent programs than partial correctness. The author shows how the isolation approach to symbolic execution of Ada tasking programs supports the verification of general safety properties. Specific safety properties that are considered include mutual exclusion, freedom from deadlock, and absence of communication failure. The techniques are illustrated using a solution to the readers and writers problem.

17.
18.
In a concurrent environment, due to scheduling, race conditions and synchronisation among concurrent units, some program statements may never be executed. Such statements are dead statements and have no influence on the programs except making them more difficult to analyse and understand. Since the execution of concurrent programs is non-deterministic, it is hard to detect dead statements. In this paper, we develop a data-flow approach to detecting dead statements in concurrent Ada programs. In this method, concurrent Ada programs are represented by concurrent control flow graphs in a simple and precise way, and detection rules are extracted by analysing program behaviours. Based on these rules, a dead-statement detection algorithm is proposed.
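A deliberately simplified, sequential analog of the detection problem above (the graph is hypothetical and this is not the paper's concurrent control-flow-graph rules): a statement unreachable from the entry node of a control flow graph is dead, which a plain reachability pass already detects; the paper's contribution is extending such data-flow reasoning to Ada tasking behaviour.

```python
# Mark statements unreachable from the CFG entry node as dead.
from collections import deque

def unreachable_statements(cfg, entry):
    """cfg: dict node -> list of successor nodes."""
    seen = {entry}
    queue = deque([entry])
    while queue:
        for succ in cfg.get(queue.popleft(), []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return set(cfg) - seen

cfg = {"entry": ["s1"], "s1": ["s2"], "s2": [], "s3": ["s2"]}  # s3 is never reached
print(unreachable_statements(cfg, "entry"))   # {'s3'}
```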

19.
Let a family of polynomials be P(s) = t_0 s^n + t_1 s^{n-1} + ... + t_n, where α_j ⩽ t_j ⩽ β_j. Recently, C.B. Soh and C.S. Berger have shown that a necessary and sufficient condition for this family to have a damping ratio of φ is that the 2^{n+1} polynomials in it which have t_k = α_k or t_k = β_k have a damping ratio of φ. The authors derive a more powerful result requiring only eight polynomials to be Hurwitz for the family to have a damping ratio of φ, using Kharitonov's theorem for complex polynomials.

20.
The nondeterminism of concurrent program execution makes program errors non-reproducible: a subsequent execution cannot reproduce the error exhibited by a previous one, so the cyclic debugging method, whose core is executing the program repeatedly to reproduce the failure, is no longer applicable. This paper proposes a debugging method for concurrent programs based on deterministic replay, which makes the execution trace of the concurrent program deterministic and reproduces the erroneous state of the program's original run.
