Similar Literature
20 similar records found
1.
The weights of an optimum presteered broadband (PB) antenna array processor are often obtained by solving a linearly constrained minimum variance (LCMV) problem. The objective function is the mean output power (variance), and the constraint space is a set of linear equations that ensure a constant gain in a specified direction known as the look direction. The LCMV optimization results in a set of weights that attenuate all signals except the look direction signal. However, it is well known that array calibration errors can degrade the performance of a processor with only look direction constraints. For instance, a slight mismatch between the direction of arrival (DOA) of the desired signal and the calibrated look direction of the processor will cause the optimization process to interpret the signal as interference, causing signal attenuation. To alleviate the directional mismatch problem, the spatial power response of the PB processor in the vicinity of the look direction can be widened by imposing additional constraints, known as derivative constraints, on the processor weights. While derivative constraints are effective against directional mismatches, we demonstrate that they are no longer robust when there are additional calibration errors such as positional errors in the sensors or quantization errors in the presteered front end of the broadband processor. The main contribution of this paper is the derivation of a new set of constraints, referred to as presteering derivative constraints, which are able to maintain processor robustness despite multiple errors including directional mismatches, positional errors, and quantization errors. It is also demonstrated that the presteering derivative constraints are sufficient conditions for derivative constraints; hence, the spatial power response of the optimized broadband processor is also maximally flat in the vicinity of the look direction.
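For orientation, the standard closed-form LCMV solution referred to above is w = R^{-1}C(C^H R^{-1} C)^{-1}f, where R is the array covariance, C collects the constraint vectors (look-direction and, when used, derivative constraints), and f is the desired response vector. The sketch below evaluates this formula in NumPy for a narrowband, look-direction-only case; the half-wavelength uniform linear array and identity covariance are illustrative assumptions, and the paper's presteering derivative constraints are not reproduced here.

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Closed-form LCMV solution: minimize w^H R w subject to C^H w = f."""
    Rinv_C = np.linalg.solve(R, C)                      # R^{-1} C
    return Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)

# Minimal narrowband example (assumed half-wavelength ULA, broadside look direction).
M = 8                                                   # number of sensors
theta0 = 0.0                                            # look direction (rad)
steering = np.exp(1j * np.pi * np.arange(M) * np.sin(theta0))
C = steering[:, None]                                   # single look-direction constraint
f = np.array([1.0 + 0j])                                # unit gain toward the look direction
R = np.eye(M) + 0j                                      # placeholder covariance (noise only)
w = lcmv_weights(R, C, f)
print(np.abs(w.conj() @ steering))                      # ~1.0: look-direction gain preserved
```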

2.
The authors present a method for solving a class of optimization problems with nonsmooth constraints. In particular, they apply the method to the design of narrowband minimum-power antenna array processors which are robust in the presence of errors such as array element placement, look direction misalignment, and frequency offset. They first show that the constrained minimum power problem has a unique global minimum provided that the constraint set is nonempty. Then it is shown how the design problem, derived directly from considerations of the sensitivity of the antenna array processor to errors, can be transformed into a quadratic programming problem with linear inequality constraints, which can be solved efficiently by the standard active set strategy. They also present numerical results for two types of nonsmooth constraints developed to provide robustness. These results confirm the effectiveness of the method.
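As a rough illustration of the reformulation described above (minimum output power subject to linear inequality constraints), the sketch below solves a small real-valued quadratic program using SciPy's general-purpose SLSQP solver rather than the paper's active-set strategy; the covariance, constraint matrix, and bound values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative only: real-valued weights, a toy covariance, and generic linear
# inequality constraints G w >= h standing in for the robustness constraints.
rng = np.random.default_rng(0)
M = 6
A = rng.standard_normal((M, M))
R = A @ A.T + M * np.eye(M)            # positive-definite "covariance"
G = rng.standard_normal((3, M))        # assumed constraint matrix (placeholder)
h = np.full(3, 0.1)

power = lambda w: w @ R @ w            # mean output power to be minimized
cons = [{'type': 'ineq', 'fun': lambda w: G @ w - h}]   # G w - h >= 0
w0 = np.ones(M) / M
res = minimize(power, w0, jac=lambda w: 2 * R @ w, method='SLSQP', constraints=cons)
print(res.success, power(res.x))
```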

3.
A unified approach to the design of robust narrowband antenna array processors is presented. The approach is based on the idea of minimizing the weighted mean-square deviation between the desired response and the response of the processor over variations in parameters. Three specific examples of robust design are considered: robustness against directional mismatch, robustness against array geometry errors, and robustness against channel phase errors. Initially, a general quadratic constraint on the weights is developed. However, it is then shown that the quadratic constraint can be replaced by linear constraints, or at most by linear constraints plus a norm constraint. These latter constraints are no more complex than those required for designs which do not incorporate robustness features explicitly. Numerical results show that the proposed approach offers a unified treatment for directly designing narrowband processors which are robust against various types of errors and mismatches between the signal model and the actual scenario.
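A weight-norm constraint of the kind mentioned above is often enforced in practice through diagonal loading of the covariance matrix. The sketch below shows that common approach as a generic illustration; it is not the paper's specific design procedure, and the steering vector and loading level are placeholder values.

```python
import numpy as np

def loaded_mvdr(R, s, eps):
    """Distortionless-response weights with diagonal loading eps, one common way
    to enforce a weight-norm constraint and gain robustness to model errors."""
    Rl = R + eps * np.eye(R.shape[0])
    w = np.linalg.solve(Rl, s)
    return w / (s.conj() @ w)          # unit gain on the presumed steering vector s

M = 8
s = np.ones(M) + 0j                    # presumed (possibly mismatched) steering vector
R = np.eye(M) + 0j                     # placeholder covariance
print(loaded_mvdr(R, s, eps=0.1) @ s.conj())   # distortionless response: ~1
```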

4.
Frequency response shaping for the direct-form presteered broadband (PB) antenna array processor is often achieved by imposing look direction constraints on the weights of the processor. This results in a linearly constrained optimization problem. To ensure a maximally flat spatial response of a specified order in the look direction of the PB processor, additional constraints known as derivative constraints can be further imposed on the weights. In general, derivative constraints corresponding to necessary and sufficient (NS) conditions for a maximally flat spatial power response can result in a quadratic equality constrained optimization problem. We transform the quadratic NS derivative constraints into parameterized linear forms. These parameterized linear forms allow the global optimum of the quadratic equality constrained optimization problem to be obtained easily. They also provide a general framework for deriving new sets of derivative constraints which correspond only to sufficient conditions for a maximally flat spatial power response. These sufficient derivative constraints are useful for real-time processing because of their reduced computational requirements and because they can deliver performance comparable to the NS derivative constraints.

5.
We use vector space projection (VSP) methods to design wide-band adaptive and self-healing arrays. Rectangular arrays are assumed, but the VSP algorithm can be applied to any configuration. In the VSP method, we formulate a set of design constraints and then iteratively improve on a trial solution by operations known as "alternating projections". When all of the constraint sets are convex and their intersection is not empty, we can meet the design specifications in a finite-dimensional setting. In our simulations, we show that reasonable design constraints are readily met. We demonstrate that VSP is useful for broad-band self-healing, i.e., the reconfiguration of the array when some broad-band elements fail to operate. Finally, we compare our results with a known design procedure for broadband antenna arrays.
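The alternating-projection (POCS) iteration at the heart of the VSP method can be written generically as repeated projection onto each constraint set in turn. The sketch below alternates between an affine set (a system of linear design equations) and a Euclidean norm ball; both sets are simple stand-ins for the paper's actual wide-band design constraints.

```python
import numpy as np

def project_affine(x, A, b):
    """Project x onto the affine set {x : A x = b} (A assumed full row rank)."""
    r = A @ x - b
    return x - A.T @ np.linalg.solve(A @ A.T, r)

def project_ball(x, radius):
    """Project x onto the Euclidean ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def pocs(A, b, radius, iters=200):
    """Alternate projections between the two convex sets."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_ball(project_affine(x, A, b), radius)
    return x

A = np.array([[1.0, 1.0, 0.0]])        # a single linear design equation: x0 + x1 = 1
b = np.array([1.0])
print(pocs(A, b, radius=2.0))          # lands in the intersection of the two sets
```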

6.
A new type of receiving array which adaptively minimizes output noise power while simultaneously satisfying certain robustness and/or bandwidth criteria is considered. The resulting array gains are shown to be robust against uncertainty in the assumed look direction, against wavefront distortions, and against array geometry errors. The robustness property is incorporated directly into the adaptation algorithm via constraints. Extensive simulation has established very satisfactory performance of this new algorithm, both as a limited broad-band processor and as a robust narrow-band processor.

7.
A New Space-Time Two-Dimensional Adaptive Processing Method for Airborne Radar   (Cited 1 time: 0 self-citations, 1 by others)
Addressing the clutter-spectrum characteristics of airborne side-looking phased-array radar, this paper proposes a temporal-then-spatial, localized joint two-dimensional signal processing method based on a column-subarray structure. The principle and performance of the method are discussed in some detail and compared with two other methods. Both theoretical analysis and computer simulation show that the new method offers better performance and error-tolerance capability; in particular, the improvement factor in the region near the main-lobe clutter is markedly increased.

8.
Security processors are used to implement cryptographic algorithms with high throughput and/or low energy consumption constraints. The design of these processors is a balancing act between flexibility and energy consumption. The target is to create a processor with just enough programmability to cover a set of algorithms, i.e., an application domain. This paper proposes GEZEL, a design environment consisting of a design language and an implementation methodology that can be used for such domain-specific processors. We use the security domain as a driver and discuss the impact of the domain on the target architecture. We also present a methodology to create, refine, and verify a security processor.

9.
Research on an ASIC Implementation Algorithm for DTW   (Cited 3 times: 0 self-citations, 3 by others)
李韬, 贺前华, 王前. 《微电子学》, 2004, 34(3): 281-284
By analyzing the DTW algorithm, a systolic array architecture suitable for ASIC implementation is proposed. Simulation results show that the parallel VLSI processor array can compute the weighted matching distance between two templates within N+M-1 clock cycles. Compared with the 3pMN/2 clock cycles required by a serial DTW implementation on a general-purpose processor, the algorithm saves a large amount of computation time.
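The systolic-array speedup rests on the observation that all cells on the same anti-diagonal of the DTW cost matrix are mutually independent, so an N-by-M matrix can be swept in N+M-1 wavefront steps. The sketch below computes DTW serially but iterates by anti-diagonal to make that structure explicit; it illustrates the dependence pattern only and is not the paper's ASIC design.

```python
import numpy as np

def dtw_by_antidiagonals(a, b):
    """DTW cost, visiting cells one anti-diagonal (wavefront) at a time.
    Cells on a given anti-diagonal depend only on the two previous ones,
    so a hardware array could evaluate each wavefront in one clock cycle,
    finishing in N + M - 1 steps."""
    N, M = len(a), len(b)
    D = np.full((N, M), np.inf)
    for k in range(N + M - 1):                          # wavefront index
        for i in range(max(0, k - M + 1), min(N, k + 1)):
            j = k - i
            d = abs(a[i] - b[j])
            if i == 0 and j == 0:
                D[i, j] = d
            else:
                prev = min(D[i - 1, j] if i else np.inf,
                           D[i, j - 1] if j else np.inf,
                           D[i - 1, j - 1] if i and j else np.inf)
                D[i, j] = d + prev
    return D[-1, -1]

print(dtw_by_antidiagonals([1, 2, 3, 4], [1, 3, 4]))    # 1.0
```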

10.
Aggressive processor design methodologies using high-speed clocks and deep-submicrometer technology are necessitating the use of at-speed delay fault testing. Although nearly all modern processors use pipelined architectures, no method has been proposed in the literature to model these for the purpose of test generation. This paper proposes a graph-theoretic model of pipelined processors and develops a systematic approach to path delay fault testing of such processor cores using the processor instruction set. The proposed methodology generates test vectors under the extracted architectural constraints. These test vectors can be applied in the functional mode of operation; hence, self-test becomes possible. Self-test in a functional mode can also be used for online periodic testing. Our approach uses a graph model for architectural constraint extraction and path classification. Test vectors are generated using constrained automatic test pattern generation (ATPG) under the extracted constraints. Finally, a test program consisting of an instruction sequence is generated for the application of the generated test vectors. We applied our method to two example processors, a 16-bit 5-stage VPRO pipelined processor and a 32-bit pipelined DLX processor, to demonstrate the effectiveness of our methodology.

11.
Many software compilers for embedded processors produce machine code of insufficient quality. Since for most applications the software must meet tight code speed and size constraints, embedded software is still largely developed in assembly language. In order to eliminate this bottleneck and to enable the use of high-level language compilers for embedded software as well, new code generation and optimization techniques are required. This paper describes a novel code generation technique for embedded processors with irregular data path architectures, such as those typically found in fixed-point DSPs. The proposed code generation technique maps a data flow graph representation of a program into highly efficient machine code for a target processor modeled by its instruction set behavior. High code quality is ensured by tight coupling of the different code generation phases. In contrast to earlier works, mainly based on heuristics, our approach is constraint-based. An initial set of constraints on code generation is prescribed by the given processor model. Further constraints arise during code generation based on decisions concerning code selection, register allocation, and scheduling. Whenever possible, decisions are postponed until sufficient information about a good decision has been collected. The constraints are active in the background and guarantee local satisfiability at any point in time during code generation. This mechanism makes it possible to cope simultaneously with special-purpose registers and instruction-level parallelism. We describe the detailed integration of the code generation phases. The implementation is based on the constraint logic programming (CLP) language ECLiPSe. For a standard DSP, we show that the quality of the generated code comes close to hand-written assembly code. Since the input processor model can be edited by the user, retargetability of the code generation technique is also achieved within a certain processor class.

12.
We examine the diagnosis of processor array systems formed as two-dimensional arrays, with boundaries, and either four or eight neighbors for each interior processor. We employ a parallel test schedule: neighboring processors test each other and report the results. Our diagnostic objective is to find a fault-free processor or set of processors. The system may then be sequentially diagnosed by repairing those processors tested faulty according to the identified fault-free set, or a job may be run on the identified fault-free processors. We establish an upper bound on the maximum number of faults which can be sustained without invalidating the test results under worst-case conditions. We give test schedules and diagnostic algorithms which meet the upper bound as far as the highest-order term. We compare these near-optimal diagnostic algorithms to alternative algorithms, both new and already in the literature, and against an upper-bound ideal-case algorithm, which is not necessarily practically realizable. For eight-way array systems with N processors, an ideal algorithm has diagnosability 3N^(2/3) - 2N^(1/2) plus lower-order terms; no algorithm exists which can exceed this. We give an algorithm which starts with tests on diagonally connected processors and which achieves approximately this diagnosability, so the given algorithm is optimal to within the two most significant terms of the maximum diagnosability. Similarly, for four-way array systems with N processors, no algorithm can have diagnosability exceeding 3N^(2/3)/2^(1/3) - 2N^(1/2) plus lower-order terms. We give an algorithm which begins with tests arranged in a zigzag pattern, one consisting of pairing nodes for tests in two different directions in two consecutive test stages; this algorithm achieves diagnosability (3/2)(5/2)^(1/3)N^(2/3) - (5/4)N^(1/2) plus lower-order terms, which is about 0.85 of the upper bound due to an ideal algorithm.
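To make the quoted bounds concrete, the snippet below evaluates just their leading terms for one example array size; the lower-order terms are ignored, so the numbers are only indicative.

```python
# Leading-term diagnosability expressions quoted above (lower-order terms dropped).
def ideal_8way(N):
    return 3 * N ** (2 / 3) - 2 * N ** 0.5

def ideal_4way(N):
    return 3 * N ** (2 / 3) / 2 ** (1 / 3) - 2 * N ** 0.5

def zigzag_4way(N):
    return 1.5 * (5 / 2) ** (1 / 3) * N ** (2 / 3) - 1.25 * N ** 0.5

N = 4096                      # e.g. a 64 x 64 processor array
print(ideal_8way(N))          # 3*256 - 2*64 = 640
print(ideal_4way(N))          # about 482
print(zigzag_4way(N))         # about 441; the 0.85 ratio above refers to leading terms only
```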

13.
Extending the work of A.W. McCarthy et al. (1988) and M.I. Miller and B. Roysam (1991), the authors demonstrate that a fully parallel implementation of the maximum-likelihood method for single-photon emission computed tomography (SPECT) can be accomplished in clinical time frames on massively parallel systolic array processors. The authors show that for SPECT imaging on 64 × 64 image grids with 96 view angles, a single-instruction, multiple-data (SIMD) distributed array processor containing 64^2 processors performs the expectation-maximization (EM) algorithm with Good's smoothing at a rate of 1 iteration per 1.5 s. This promises fully Bayesian reconstructions for emission tomography, including regularization, in clinical computation times on the order of 1 min/slice. The most important result of the implementations is that the scaling rules for computation times are roughly linear in the number of processors.
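For reference, the EM (MLEM) iteration mentioned above has the well-known multiplicative form lam <- (lam / A^T 1) * A^T (y / (A lam)). The sketch below applies it to a tiny random system; Good's smoothing and the systolic-array parallelization discussed in the paper are not included, and the system matrix is a made-up placeholder.

```python
import numpy as np

def mlem(A, y, iters=500):
    """Basic MLEM update for emission tomography:
    lam <- lam / (A^T 1) * A^T (y / (A lam))."""
    lam = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(iters):
        ratio = y / np.maximum(A @ lam, 1e-12)  # measured / forward-projected counts
        lam = lam / sens * (A.T @ ratio)
    return lam

rng = np.random.default_rng(1)
A = rng.random((30, 10))                        # toy system matrix (projector)
x_true = rng.random(10)
y = A @ x_true                                  # noiseless "measurements"
print(np.round(mlem(A, y), 2))                  # approaches x_true
```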

14.
In this paper, we present a solution to the problem of jointly tiling and scheduling a given loop nest with uniform data dependencies symbolically. This challenge arises when the size and number of processors available for parallel loop execution are not known at compile time. Still, in order to avoid any overhead of dynamic (run-time) recompilation, a schedule of loop iterations must be computed and optimized statically. In this paper, it is shown that parameterized latency-optimal schedules can be derived statically by a two-step approach: first, the iteration space of a loop program is tiled symbolically into orthotopes of parameterized extent. Subsequently, the resulting tiled program is also scheduled symbolically, resulting in a set of latency-optimal parameterized schedule candidates. At run time, once the size of the processor array becomes known, simple comparisons of latency-determining expressions steer which of these schedules is dynamically selected and which program configuration is executed on the resulting processor array, so as to avoid any further run-time optimization or expensive recompilation. Our theory of symbolic loop parallelization is applied to a number of loop programs from the domains of signal processing and linear algebra. Finally, as a proof of concept, we demonstrate the proposed methodology for a massively parallel processor array architecture called the tightly coupled processor array (TCPA), on which applications may dynamically claim regions of processors in the context of invasive computing.
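To give a flavor of the first step (tiling with tile sizes left symbolic until run time), here is a minimal sketch of a 2-D iteration space swept in parametric orthotope tiles; the processor-array size and the dispatch step are hypothetical, and the paper's symbolic scheduling machinery and TCPA target are not reproduced.

```python
def tiled_iteration(N, M, Tx, Ty):
    """Visit an N x M iteration space in Tx x Ty tiles.
    Tx and Ty stay symbolic until run time (e.g., derived from the number of
    processors actually claimed), so no recompilation is needed."""
    for ti in range(0, N, Tx):                      # tile origins
        for tj in range(0, M, Ty):
            for i in range(ti, min(ti + Tx, N)):    # points inside one tile
                for j in range(tj, min(tj + Ty, M)):
                    yield (ti // Tx, tj // Ty), (i, j)

# Example: a 6 x 6 space on an assumed 2 x 3 processor array -> 3 x 2 points per tile.
procs_x, procs_y = 2, 3                             # known only at run time
N = M = 6
for tile, point in tiled_iteration(N, M, N // procs_x, M // procs_y):
    pass  # dispatch `point` to the processor owning `tile`
```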

15.
The Micro-Grain Array Processor-2 (MGAP-2) is a two-dimensional SIMD array of 49,152 fine-grain processors designed primarily for high-performance signal and image processing. Each processor can compute two arbitrary three-input Boolean functions, contains local RAM, and has additional logic for interprocessor communication. The MGAP-2 differs from existing fine-grain arrays in that it has a high degree of integration while incorporating processor-level interconnect control. Each processor can independently select its communication direction. This allows a programmer to map algorithms onto the array more efficiently than if the processors communicated in the standard SIMD fashion. Also, the MGAP-2's processor-level interconnect allows groups of processors to be clustered into larger computational units, making the basic computational units as powerful as they need to be for a given problem.

16.
Rapid developments in high-performance supercomputers, with upward of 65,536 processors and 32 terabytes of memory, have dramatically changed the landscape in computational electromagnetics. The IBM BlueGene/L supercomputer is one example. Such systems have recently made it possible to solve extremely large problems efficiently. For instance, they have reduced 52 days of simulation on a single Pentium 4 processor to only about 10 minutes on 4000 processors of a BlueGene/L supercomputer. In this article, we investigate the performance of a parallel Finite-Difference Time-Domain (FDTD) code on a large BlueGene/L system. We show that the efficiency of the code is excellent and can reach up to 90%. The code has been used to simulate a number of electrically large problems, including a 100 × 100 patch antenna array, a 144-element dual-polarized Vivaldi array, a 40-element helical antenna array, and an electronic packaging problem. The results presented serve to demonstrate the efficiency of the parallelization of the code on the BlueGene/L system. In addition, we also introduce the development of high-performance Beowulf clusters for the simulation of electrically large problems.
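As background on the kernel being parallelized, a one-dimensional FDTD leapfrog update is sketched below in normalized units with a unit Courant number; the article itself concerns a full 3-D parallel implementation, so this is only a toy illustration of the update structure.

```python
import numpy as np

# 1-D FDTD leapfrog update in normalized units (Courant number = 1).
nx, nt = 200, 300
ez = np.zeros(nx)              # electric field on the grid nodes
hy = np.zeros(nx - 1)          # magnetic field, staggered half a cell

for n in range(nt):
    hy += ez[1:] - ez[:-1]                          # update H from the curl of E
    ez[1:-1] += hy[1:] - hy[:-1]                    # update E from the curl of H
    ez[nx // 2] += np.exp(-((n - 30) / 10) ** 2)    # soft Gaussian source
```

In a parallel run, the grid would be decomposed among processors, with one layer of boundary field values exchanged per time step, which is what makes the near-linear scaling reported above plausible.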

17.
The authors describe the range of hardware variations of array processors, a form of SIMD (single instruction stream, multiple data stream) architecture, comparing and contrasting the significant differences among them and briefly illustrating the wide range of algorithms that can effectively utilize them. Three applications are reviewed. The first application, image convolution, represents the traditional numerically computation-intensive areas of application. SIMD array processors are sufficiently powerful to process digital imagery in real time easily. The second application, an example of real-time database management, is the air traffic control problem. This problem cannot be solved today by the networks of computers that are successfully used in similar, less time-critical applications. With an array processor, there is sufficient real time remaining after the present system tasks are accomplished to realize additional system enhancements. The third application area, graph algorithms, which is more theoretical, is representative of problems for which the simplicity of the array processor solution results in an execution time better than the best theoretical case for a conventional sequential implementation.

18.
For the sparse linear array synthesis problem in antenna arrays with multiple constraints on array aperture, number of elements, and element spacing, this paper proposes a sparse array synthesis optimization method based on an improved sparrow search algorithm. The flow of the improved sparrow search algorithm is given, and, under constraints of fixed array aperture, element count, and minimum element spacing, a Tent chaotic map is used to initialize the antenna element positions, improving the algorithm's search capability and convergence. Sparse linear array synthesis simulations that suppress the peak sidelobe level (PSLL) are carried out. Simulation results show that, compared with optimization methods in other literature, the proposed method achieves a lower peak sidelobe level, with good robustness and high efficiency. On the basis of the simulation results, a real antenna is introduced for array-forming analysis, verifying the feasibility of the method.
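The Tent chaotic map mentioned above is x_{n+1} = x_n/a for x_n < a and (1 - x_n)/(1 - a) otherwise. The sketch below uses it to seed element positions inside a fixed aperture with a minimum-spacing repair; the map parameter, seed, aperture, and repair rule are illustrative assumptions, and the sparrow search optimizer itself is not reproduced.

```python
import numpy as np

def tent_sequence(n, x0=0.37, a=0.7):
    """Tent chaotic map: x <- x/a if x < a else (1 - x)/(1 - a)."""
    xs, x = [], x0
    for _ in range(n):
        x = x / a if x < a else (1 - x) / (1 - a)
        xs.append(x)
    return np.array(xs)

def init_positions(n_elem, aperture, d_min, x0=0.37):
    """Chaotic initial element positions in [0, aperture] with both ends fixed,
    followed by a naive left-to-right push to respect the minimum spacing d_min
    (the final clip may relax spacing near the right edge)."""
    pos = np.sort(tent_sequence(n_elem - 2, x0)) * aperture
    pos = np.concatenate(([0.0], pos, [aperture]))
    for i in range(1, n_elem):
        pos[i] = max(pos[i], pos[i - 1] + d_min)
    return np.minimum(pos, aperture)

print(init_positions(10, aperture=9.5, d_min=0.5))
```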

19.
The allocation of computational and register resources is a key problem in the automatic parallel mapping of reconfigurable processors. Targeting the resource allocation problem of a reconfigurable block-cipher instruction-set processor, this paper builds an operator scheduling parameter model and a processor resource parameter model, and studies the constraint relationship between the parallel scheduling of block ciphers and resource consumption. On this basis, an automatic mapping algorithm based on greedy heuristics, list scheduling, and linear scan is proposed, realizing automatic mapping of block ciphers onto the reconfigurable block-cipher instruction-set processor. Experiments varying the available resources verify the effectiveness of the parallel mapping, and a side-by-side comparison of the mapping results for the AES-128 algorithm verifies the advancement of the algorithm. The proposed automatic mapping algorithm provides useful guidance for research on the parallel computation of block ciphers on reconfigurable processors.
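As a generic illustration of the greedy list-scheduling ingredient mentioned above, the sketch below schedules a toy dependence graph onto a fixed number of functional units; the operator latencies, resource model, and graph are made-up placeholders, and the linear-scan register allocation stage is omitted.

```python
# Greedy list scheduling under a unit-resource limit; the dependence graph and
# latencies are illustrative placeholders (block-cipher operator models not shown).
deps = {'a': [], 'b': [], 'c': ['a', 'b'], 'd': ['c'], 'e': ['c']}   # op -> predecessors
latency = {'a': 1, 'b': 2, 'c': 1, 'd': 2, 'e': 1}
units = 2                                  # parallel functional units available

finish, schedule, cycle = {}, {}, 0
remaining = set(deps)
while remaining:
    # Ops whose predecessors have all finished by the current cycle are ready.
    ready = sorted(op for op in remaining
                   if all(finish.get(p, 10**9) <= cycle for p in deps[op]))
    for op in ready[:units]:               # issue at most `units` ops this cycle
        schedule[op] = cycle
        finish[op] = cycle + latency[op]
        remaining.discard(op)
    cycle += 1

print(schedule)                            # {'a': 0, 'b': 0, 'c': 2, 'd': 3, 'e': 3}
```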

20.
The general-purpose, highly parallel cellular array processor (CAP) we developed features multiple-instruction stream, multiple-data stream (MIMD) processing and image display. Processor elements can number in the several hundreds; the present system uses 256 processors. Each processor element consists of a general-purpose microprocessor, memory, and a special VLSI chip that performs parallel-processing-specific functions such as interprocessor communication and synchronization. The VLSI chip has two independent 2 Mbyte/s common bus interfaces for data broadcasting and six 15 Mbit/s serial communication ports for local data communication. The chip can also process image data in real time for multiple processors. Use of the communication interfaces enables a variety of processor networks to be configured. One CAP application has been computer graphics, in which ray tracing is used to generate high-quality images.
