Similar Documents
20 similar documents found (search time: 15 ms)
1.
Ertas, T.; Ali, F.H. Electronics Letters, 1997, 33(17): 1438-1440
It is well known that CPM schemes themselves exhibit attractive spectral efficiencies and that combining them with a rate m/(m+1) external trellis encoder and 2^(m+1)-level modulation results in better power-bandwidth performance. Since the free distance is a good measure of performance, the authors construct upper bounds on the ultimately achievable free distance for trellis codes of the Ungerboeck type combined with constant-envelope CPFSK signals. Some encoders achieving the constructed upper bounds are also presented.

2.
Acquisition of Direct Sequence Signals with Modulation and Jamming (total citations: 1; self-citations: 0; cited by others: 1)
The effects of data modulation and/or narrow-band interference on the acquisition time of direct sequence (DS) systems are assessed when particular acquisition schemes are selected. The results of these analyses are then used to propose receivers which mitigate the deleterious effects of the data or jamming. The analyses demonstrate that the I-Q detector, modified for a data-modulated carrier, is superior to the correlator/square-law detector despite the latter's robustness to data. When the average pulsed jammer power is constrained, the analyses illustrate that the jammer's duty factor does not impact acquisition time when the pulse repetition frequency (PRF) is high; a duty factor of unity maximally degrades acquisition performance when the PRF is low. A proposed adaptive receiver provides considerable jamming protection; the acquisition performance of such a receiver bounds the performance of all adaptive acquisition receivers.

3.
The authors present the performance of double-dwell acquisition using a continuous integration detector. One performance measure for an acquisition system is the mean acquisition time, which depends on the detection and false alarm probabilities. The detection and false alarm probabilities of double-dwell acquisition using a continuous integration detector are derived.
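As a point of reference (not the double-dwell expression derived in the paper), the widely quoted single-dwell serial-search approximation shows how the mean acquisition time depends on the detection probability P_D and the false alarm probability P_FA, for q code-phase cells, dwell time \tau_d, and a false-alarm penalty of K\tau_d:

$$ \bar{T}_{\mathrm{acq}} \;\approx\; \frac{(2 - P_D)\,(1 + K\,P_{FA})}{2\,P_D}\; q\,\tau_d . $$

Raising the detection threshold lowers P_FA but also lowers P_D, which is the trade-off any threshold design must balance.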

4.
A numerically stable, fast, order-recursive algorithm for solving the covariance problem in signal modeling is described. The propagation of finite-arithmetic errors as well as data acquisition errors is studied in detail. First, linearization of the main algorithmic recursions is carried out. Then, a suitable transformation converts the resulting state equations of the accumulated errors into their residual form. Subsequently, bounds for the residuals are computed. The derivation of these bounds depends heavily on the Levinson-type structure of the algorithm and the low displacement rank of the problem. The main result is that the algorithm is weakly numerically stable. The proposed order-recursive algorithm is subsequently utilized as a block adaptive method. Its performance is also demonstrated by long-run simulations.

5.
Space-time convolutional codes have shown considerable promise for providing improved performance for wireless communication through combined diversity and coding gain. An efficient design procedure is presented for optimizing the coding and diversity gain measures proposed in the first papers on space-time codes. The procedure is based on some simple lower and upper bounds on coding gain. The same calculations needed to compute these bounds can be used to check either necessary or sufficient conditions on space-time codes which achieve maximum diversity gain. A new simple, but useful, measure of code performance is also suggested which augments existing measures. The use of the design procedure is illustrated and new codes are provided. These codes are shown to outperform the space-time convolutional codes provided in the initial papers introducing space-time codes.
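The diversity and coding gain measures referred to above are commonly evaluated with the rank and determinant criteria applied to pairwise codeword difference matrices. A minimal numpy sketch of that computation follows; the toy codewords are hypothetical and not codes from the paper.

```python
import numpy as np
from itertools import combinations

# Hypothetical toy codebook: each codeword is an (n_tx x T) matrix of transmitted symbols.
codewords = [
    np.array([[ 1,  1], [ 1, -1]], dtype=complex),
    np.array([[-1,  1], [-1, -1]], dtype=complex),
    np.array([[ 1, -1], [ 1,  1]], dtype=complex),
]

min_rank, min_gain = np.inf, np.inf
for C1, C2 in combinations(codewords, 2):
    B = C1 - C2                               # codeword difference matrix
    eig = np.linalg.eigvalsh(B @ B.conj().T)  # eigenvalues of B B^H (real, >= 0)
    nz = eig[eig > 1e-10]
    r = len(nz)                               # rank criterion: diversity order = min rank
    gain = np.prod(nz) ** (1.0 / r)           # determinant criterion: geometric mean of
    min_rank = min(min_rank, r)               # the nonzero eigenvalues, minimized over pairs
    min_gain = min(min_gain, gain)

print("diversity order per receive antenna:", int(min_rank))
print("coding-gain figure of merit:", min_gain)
```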

6.
We consider information retrieval in a wireless sensor network deployed to monitor a spatially correlated random field. We address optimal sensor scheduling and information routing under the performance measure of network lifetime. Both single-hop and multi-hop transmissions from sensors to an access point are considered. For both cases, we formulate the problems as integer programming based on the theories of coverage and connectivity in sensor networks. We derive upper bounds for the network lifetime that provide performance benchmarks for suboptimal solutions. Suboptimal sensor scheduling and data routing algorithms are proposed to approach the lifetime upper bounds with reduced complexity. In the proposed algorithms, we consider the impact of both the network geometry and the energy consumption in communications and relaying on the network lifetime. Simulation examples are used to demonstrate the performance of the proposed algorithms as compared to the lifetime upper bounds.

7.
The moderate complexity of low-density parity-check (LDPC) codes under iterative decoding is attributed to the sparseness of their parity-check matrices. It is therefore of interest to consider how sparse the parity-check matrices of binary linear block codes can be as a function of the gap between their achievable rates and the channel capacity. This issue was addressed by Sason and Urbanke, and it is revisited in this paper. The remarkable performance of LDPC codes under practical and suboptimal decoding algorithms motivates the assessment of the inherent loss in performance which is attributed to the structure of the code or ensemble under maximum-likelihood (ML) decoding, and the additional loss which is imposed by the suboptimality of the decoder. These issues are addressed by obtaining upper bounds on the achievable rates of binary linear block codes, and lower bounds on the asymptotic density of their parity-check matrices as a function of the gap between their achievable rates and the channel capacity; these bounds are valid under ML decoding, and hence, they are valid for any suboptimal decoding algorithm. The new bounds improve on previously reported results by Burshtein and by Sason and Urbanke, and they hold for the case where the transmission takes place over an arbitrary memoryless binary-input output-symmetric (MBIOS) channel. The significance of these information-theoretic bounds is in assessing the tradeoff between the asymptotic performance of LDPC codes and their decoding complexity (per iteration) under message-passing decoding. They are also helpful in studying the potential achievable rates of ensembles of LDPC codes under optimal decoding; by comparing these thresholds with those calculated by the density evolution technique, one obtains a measure for the asymptotic suboptimality of iterative decoding algorithms.

8.
We propose a new noncoherent pseudonoise code acquisition method for code-division multiple-access (CDMA) mobile communication systems. The proposed method employs two digital matched filters, and the code acquisition is based on a double-dwell process. Through serial cascading of the dual matched filters, the proposed code acquisition method does not lose track of the incoming sequence even after returning from the false alarm state. This unique feature imparts much-desired stability to our design. Moreover, the use of two matched filters increases the acquisition speed, which is of prime importance. One important issue in CDMA acquisition is how to determine the threshold values for optimal performance, the measure of optimality being the minimum mean acquisition time. In our performance analysis, we derive the probabilities of detection and false alarm as functions of the threshold values, and then determine the threshold values that achieve the minimum mean acquisition time. Our performance analysis shows that the mean acquisition time is 35 ms at -15 dB input chip signal-to-noise ratio, much faster than the conventional active correlation technique.

9.
Performance bounds for estimating vector systems (total citations: 4; self-citations: 0; cited by others: 4)
We propose a unified framework for the analysis of estimators of geometrical vector quantities and vector systems through a collection of performance measures. Unlike standard performance indicators, these measures have intuitive geometrical and physical interpretations, are independent of the coordinate reference frame, and are applicable to arbitrary parameterizations of the unknown vector or system of vectors. For each measure, we derive both finite-sample and asymptotic lower bounds that hold for large classes of estimators and serve as benchmarks for the assessment of estimation algorithms. Like the performance measures themselves, these bounds are independent of the reference coordinate frame, and we discuss their use as system design criteria.

10.
Bit Error Probability (BEP) provides a fundamental performance measure for wireless diversity systems. This paper presents two new exact BEP expressions for Maximal Ratio Combining (MRC) diversity systems. One BEP expression takes a closed form, while the other is derived by treating the squared sum of Rayleigh random variables as an Erlang variable. Because the existing bounds are loose and do not properly characterize the error performance of MRC diversity systems, this paper also presents a very tight bound. The numerical analysis shows that the newly derived BEP expressions coincide with the existing expressions, and that the new approximation tightly bounds the exact BEP.
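For context, the sketch below compares the textbook closed-form BEP of BPSK with L-branch MRC in i.i.d. Rayleigh fading (the classical expression, not necessarily the new expressions or the bound derived in the paper) against a Monte Carlo estimate that exploits the same fact used above: the post-combining SNR, a scaled squared sum of Rayleigh variables, is an Erlang variable.

```python
import numpy as np
from math import comb, sqrt
from scipy.special import erfc

def mrc_bpsk_bep(gamma_bar, L):
    """Textbook closed-form BEP of BPSK with L-branch MRC over i.i.d. Rayleigh branches,
    each with average SNR gamma_bar (see, e.g., Proakis)."""
    mu = sqrt(gamma_bar / (1.0 + gamma_bar))
    return ((1 - mu) / 2) ** L * sum(
        comb(L - 1 + k, k) * ((1 + mu) / 2) ** k for k in range(L)
    )

def mrc_bpsk_bep_mc(gamma_bar, L, n=500_000, seed=0):
    """Monte Carlo estimate: the combined SNR is Erlang(L) with scale gamma_bar,
    and the conditional BEP of BPSK is Q(sqrt(2*snr)) = 0.5*erfc(sqrt(snr))."""
    snr = np.random.default_rng(seed).gamma(shape=L, scale=gamma_bar, size=n)
    return float(np.mean(0.5 * erfc(np.sqrt(snr))))

for L in (1, 2, 4):
    print(L, f"closed form {mrc_bpsk_bep(10.0, L):.3e}",
          f"Monte Carlo {mrc_bpsk_bep_mc(10.0, L):.3e}")
```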

11.
Distortion-rate theory is used to derive absolute performance bounds and encoding guidelines for direct fixed-rate minimum mean-square error data compression of the discrete Fourier transform (DFT) of a stationary real or circularly complex sequence. Both real-part/imaginary-part and magnitude/phase-angle encoding are treated. General source coding theorems are proved in order to justify using the optimal test-channel transition probability distribution for allocating the information rate among the DFT coefficients and for calculating arbitrary performance measures on actual optimal codes. This technique yields a theoretical measure of the relative importance of the phase angle over the magnitude in magnitude/phase-angle data compression. The result is that the phase angle must be encoded with 0.954 nats, or 1.37 bits, more rate than the magnitude for rates exceeding 3.0 nats per complex element. This result and the optimal error bounds are compared to empirical results for efficient quantization schemes.
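The quoted rate difference is a straightforward unit conversion from nats to bits (1 bit = ln 2 ≈ 0.693 nats):

$$ 0.954\ \text{nats} \times \frac{1}{\ln 2} \;\approx\; \frac{0.954}{0.693} \;\approx\; 1.376\ \text{bits}, $$

consistent with the roughly 1.37 bits quoted above.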

12.
Major challenges in ultrawideband (UWB) communications include timing acquisition, tracking, and low-complexity demodulation. Timing with dirty templates (TDT) is a recently proposed acquisition algorithm with attractive features. Starting with a performance analysis of TDT, this paper goes on to considerably broaden its scope by developing novel tracking loops and detectors that follow naturally from the TDT operation. Specifically, upper bounds on the mean square error of the blind and data-aided TDT estimators are derived, along with TDT-based demodulators that obviate the need to know the underlying channel and time-hopping code. Analytical comparisons reveal that TDT demodulators outperform a RAKE receiver with a limited number of fingers in the medium-to-high SNR range. TDT demodulation performance in the presence of timing errors is evaluated and shown to be robust to mistiming. In order to follow timing offset variations, an adaptive loop is also introduced to track the first multipath arrival of each symbol. For a given input disturbance, the parameters of the loop are selected to jointly optimize transient and steady-state performance. Analytical results are corroborated by simulations.

13.
Network reliability is extensively used to measure the degree of stability of the quality of infrastructure services. In real situations, the performance of an infrastructure network and its components degrades over time. Multi-state reliability modeling, which allows a finite number of different states for the performance of the network and its components, is more appropriate for this reliability assessment and provides a more realistic view of network performance than traditional binary reliability modeling. Because of the computational complexity of enumerative methods for evaluating multi-state reliability, the problem can be reduced to searching for lower boundary points and using them to evaluate reliability. Lower boundary points can be used to compute the exact reliability value as well as reliability bounds. We present an algorithm to search for lower boundary points. The proposed algorithm offers a considerable improvement in computational efficiency by significantly reducing the number of iterations needed to obtain lower reliability bounds.

14.
The effects of bit-interleaving on the performance of convolutional codes and turbo codes in fast frequency-hop/spread-spectrum multiple-access systems with M-FSK modulation are investigated. It is observed that bit-interleaving induces two counteracting forces on decoder performance. On the one hand, bit-interleaving disperses consecutive error bits caused by a noisy M-ary signal and makes the errors more correctable. On the other hand, the same measure makes it difficult for the decoder to make use of the bit-dependency information. Both theoretical upper bounds and simulation results show that bit-interleaving degrades the performance of soft-decision decoded convolutional codes and turbo codes.

15.
This paper compares different threshold-setting principles in the code acquisition process of a direct sequence (DS) spread spectrum (SS) receiver. The analysis is carried out mainly for a one-path additive white Gaussian noise (AWGN) channel. A fixed multipath channel is also considered to see its effect on the results. Consideration is further given to a certain type of fading, in which the signal power is assumed to be considerably lower, i.e., faded, for part of the time. Of the possible performance measures of code acquisition, the main interest is the mean acquisition time T_MA. The probability of acquisition within a given observation interval, P_acq, is also considered to see whether different measures impose different demands. Matched filter (MF) acquisition is used with and without a verification mode based on active integration. In the comparisons, fixed thresholds, thresholds based on a constant false alarm rate (CFAR) criterion, and thresholds that are optimal in the sense of giving either the minimum T_MA or the maximum P_acq are used. Results obtained by selecting the maximum value at the output of the MF are also compared to the threshold-based cases. The results can be summarized as follows: when the performance measure is T_MA, the best results are obtained by using CFAR-based threshold comparison; by a proper selection of the false alarm probability, the same performance is obtained as with the optimal thresholds. When the performance measure is P_acq, the maximum-selection method is the best choice.
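To illustrate the CFAR principle mentioned above, here is a minimal sketch under a simplifying assumption: the decision statistic is a square-law value formed from a single complex Gaussian noise sample (exponentially distributed under noise only), rather than the paper's MF/verification structure. Fixing the target false alarm probability then determines the threshold from the noise level.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 1.3          # noise variance per I/Q component (assumed known or estimated)
P_fa_target = 1e-2

# Square-law statistic on a single complex noise sample: |n|^2 is exponential with
# mean 2*sigma2, so P_FA = exp(-eta / (2*sigma2)) and the CFAR threshold is:
eta = -2 * sigma2 * np.log(P_fa_target)

# Monte Carlo check of the resulting false alarm rate.
n = rng.normal(0, np.sqrt(sigma2), (2, 1_000_000))
y = n[0] ** 2 + n[1] ** 2
print("target P_FA:", P_fa_target, " measured:", (y > eta).mean())
```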

16.
Performance of optimum transmitter power control in cellular radio systems (total citations: 2; self-citations: 0; cited by others: 2)
Most cellular radio systems provide for the use of transmitter power control to reduce cochannel interference for a given channel allocation. Efficient interference management aims at achieving acceptable carrier-to-interference ratios in all active communication links in the system. Such schemes for the control of cochannel interference are investigated; the effect of adjacent channel interference is neglected. As a performance measure, the interference (outage) probability is used, i.e., the probability that a randomly chosen link is subject to excessive interference. In order to derive upper performance bounds for transmitter power control schemes, algorithms that are optimum in the sense that the interference probability is minimized are suggested. Numerical results indicate that these upper bounds exceed the performance of conventional systems by an order of magnitude with regard to interference suppression and by a factor of 3 to 4 with regard to system capacity. The structure of the optimum algorithm shows that efficient power control and dynamic channel assignment algorithms are closely related.
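A well-known result associated with this line of work on optimum power control is SIR balancing: for a given channel allocation with link gain matrix G, the maximum common carrier-to-interference ratio achievable by power control is set by the dominant eigenvalue of the normalized gain matrix, and the optimal power vector is the corresponding (positive) eigenvector. The sketch below uses a hypothetical random gain matrix and ignores receiver noise (interference-limited assumption).

```python
import numpy as np

# Hypothetical 4-link gain matrix G: G[i, j] is the gain from transmitter j to receiver i.
rng = np.random.default_rng(0)
G = rng.uniform(0.01, 0.1, (4, 4))
np.fill_diagonal(G, rng.uniform(0.5, 1.0, 4))

# Normalized matrix Z with z_ij = g_ij / g_ii; the SIR-balancing condition
# p_i = gamma * sum_{j != i} z_ij * p_j is equivalent to Z p = (1 + 1/gamma) p.
Z = G / np.diag(G)[:, None]

eigvals, eigvecs = np.linalg.eig(Z)
idx = int(np.argmax(eigvals.real))      # Perron eigenvalue gives the positive power vector
lam = eigvals[idx].real
p = np.abs(eigvecs[:, idx].real)

gamma_star = 1.0 / (lam - 1.0)          # maximum common carrier-to-interference ratio
sir = np.diag(G) * p / (G @ p - np.diag(G) * p)
print("balanced C/I on each link:", sir, " predicted:", gamma_star)
```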

17.
On the generalization of soft margin algorithms (total citations: 4; self-citations: 0; cited by others: 4)
Generalization bounds depending on the margin of a classifier are a relatively new development. They provide an explanation of the performance of state-of-the-art learning systems such as support vector machines (SVMs) and Adaboost. The difficulty with these bounds has been either their lack of robustness or their looseness. The question of whether the generalization of a classifier can be more tightly bounded in terms of a robust measure of the distribution of margin values has remained open for some time. The paper answers this open question in the affirmative and, furthermore, the analysis leads to bounds that motivate the previously heuristic soft-margin SVM algorithms as well as justifying the use of the quadratic loss in neural network training algorithms. The results are extended to give bounds for the probability of failing to achieve a target accuracy in regression prediction, with a statistical analysis of ridge regression and Gaussian processes as a special case. The analysis presented in the paper has also led to new boosting algorithms described elsewhere.

18.
Compressed sensing is an emerging technique in the field of digital signal acquisition. It promises almost exact recovery of high-dimensional signals from a very small set of measurements. However, this technique is challenged by the task of recovering signals immersed in noise. In this paper, we derive upper and lower bounds on the mean squared recovery error of noisy signals. These bounds are valid for any number of acquired measurements and at any signal-to-noise ratio. This work is useful for the design of compressed sensing-based real-world applications because it quantifies the recovery error entailed by realistic digital signal acquisition scenarios.
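As a toy illustration of the setting above (measuring the mean squared recovery error of a sparse signal acquired through noisy compressed measurements), the sketch below uses a basic orthogonal matching pursuit with an assumed known sparsity level; it is not the bounding framework developed in the paper.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y ~= A x."""
    r = y.copy()
    support = []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))   # pick most correlated atom
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                        # orthogonalize the residual
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = x_s
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                      # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)  # Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

for snr_db in (10, 20, 30):
    noise = rng.normal(size=m)
    noise *= np.linalg.norm(A @ x) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    y = A @ x + noise
    mse = np.mean((omp(A, y, k) - x) ** 2)
    print(f"measurement SNR {snr_db} dB -> recovery MSE {mse:.2e}")
```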

19.
Statistical decisions are considered between two hypotheses which consist of classes of Poisson distributions for random counting measures. Each Poisson distribution is generated by an intensity measure on a general observation space. The classes are specified by Choquet capacity bounds on the intensity measures. This problem was first posed and studied by Geraniotis and Poor. A minimax Neyman-Pearson result for error probability performance is the main contribution. Recursive computation of the minimax test statistic is also investigated.

20.
This paper calculates new bounds on the size of the performance gap between random codes and the best possible codes. The first result shows that, for large block sizes, the ratio of the error probability of a random code to the sphere-packing lower bound on the error probability of every code on the binary symmetric channel (BSC) is small for a wide range of useful crossover probabilities. Thus even far from capacity, random codes have nearly the same error performance as the best possible long codes. The paper also demonstrates that a small reduction k − k̃ in the number of information bits conveyed by a codeword will make the error performance of an (n, k̃) random code better than the sphere-packing lower bound for an (n, k) code as long as the channel crossover probability is somewhat greater than a critical probability. For example, the sphere-packing lower bound for a long (n, k), rate-1/2 code will exceed the error probability of an (n, k̃) random code if k − k̃ > 10 and the crossover probability is between 0.035 and 0.11 = H⁻¹(1/2). Analogous results are presented for the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel. The paper also presents substantial numerical evaluation of the performance of random codes and existing standard lower bounds for the BEC, BSC, and the AWGN channel. These last results provide a useful standard against which to measure many popular codes, including turbo codes; e.g., there exist turbo codes that perform within 0.6 dB of the bounds over a wide range of block lengths.
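The critical crossover probability quoted for the rate-1/2 case can be checked directly from the binary entropy function:

$$ H(0.11) \;=\; -0.11\log_2 0.11 \;-\; 0.89\log_2 0.89 \;\approx\; 0.350 + 0.150 \;=\; 0.500, $$

so 0.11 ≈ H⁻¹(1/2), the crossover probability at which the BSC capacity equals the rate 1/2.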
