Similar Documents
20 similar documents found (search time: 46 ms)
1.
Analytical solutions for the performance of optical communication systems are difficult to obtain, and Monte Carlo simulations are often used to achieve realistic estimates of the performance of such systems. However, for high-performance systems, this technique requires a large number of simulation trials for the estimates to fall within a reasonable confidence interval, with the number of trials increasing linearly with the performance of the system. We apply an importance sampling technique to estimate the performance of direct detection optical systems, where the “gain” of importance sampling over Monte Carlo simulation is shown to increase linearly with the system performance. Further, we use this technique to study the performance of optical communication systems employing avalanche photodetectors as well as fibre-optic code division multiple access (FO-CDMA) systems. We also show that the quick simulation technique developed can be used for a wide variety of coding schemes and, for the first time, we present a comparative analysis of the performance of FO-CDMA systems employing optical orthogonal codes and prime sequences. In all cases, it is shown that importance sampling simulations require fewer than 50-100 trials for estimating error probabilities of 10⁻¹⁰ and below.
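To make the scaling concrete: a standard confidence-interval argument (not part of the abstract above, but consistent with its claim) shows why ordinary Monte Carlo needs on the order of 1/P_e trials. With N independent trials,

    \hat P_{\mathrm{MC}} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\{\text{error in trial } i\},
    \qquad
    \operatorname{Var}\bigl[\hat P_{\mathrm{MC}}\bigr] = \frac{P_e(1-P_e)}{N},

so keeping the relative standard deviation below a target ε requires

    N \;\ge\; \frac{1-P_e}{\varepsilon^{2} P_e} \;\approx\; \frac{1}{\varepsilon^{2} P_e}.

At P_e = 10⁻¹⁰ and ε = 0.1 this is on the order of 10¹² trials, which is why variance reduction is essential at these error rates.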

2.
Importance sampling (IS) is a simulation technique which aims to reduce the variance (or other cost function) of a given simulation estimator. In communication systems, this usually, but not always, means attempting to reduce the variance of the bit error rate (BER) estimator. By reducing the variance, IS estimators can achieve a given precision from shorter simulation runs; hence the term “quick simulation.” The idea behind IS is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. If these “important” values are emphasized by sampling more frequently, then the estimator variance can be reduced. Hence, the basic methodology in IS is to choose a distribution which encourages the important values. This use of a “biased” distribution will, of course, result in a biased estimator if applied directly in the simulation. However, there is a simple procedure whereby the simulation outputs are weighted to correct for the use of the biased distribution, and this ensures that the new IS estimator is unbiased. Hence, the “art” of designing quick simulations via IS lies entirely in the choice of the biased distribution. Over the last 50 years, IS techniques have flourished, but it is only in the last decade that coherent design methods have emerged. The outcome of these developments is that, at the expense of increased technical content, modern techniques can offer substantial run-time savings for a very broad range of problems. We present a comprehensive history and survey of IS methods. In addition, we offer a guide to the strengths and weaknesses of the techniques, and hence indicate which techniques are suitable for various types of communication systems. We stress that simple approaches can still yield useful savings, so the simulation practitioner as well as the technical researcher should consider IS as a possible simulation tool.
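As a concrete illustration of the biasing-plus-weighting recipe described above, the sketch below estimates the error probability of antipodal signaling in Gaussian noise by mean-translation biasing. The Gaussian channel, the choice of shifting the noise mean to the decision boundary, and all parameter values are illustrative assumptions, not anything prescribed by the survey.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)

    A, sigma = 1.0, 0.155          # amplitude and noise std (true P_e = Q(A/sigma), about 5e-11)
    N = 5000                       # biased simulation trials

    # Biased distribution q: shift the noise mean to the decision boundary (-A),
    # so the "important" noise values that cause errors occur about half the time.
    shift = -A
    n = rng.normal(shift, sigma, N)          # noise drawn from q
    errors = (A + n) < 0.0                   # decision errors for a transmitted +A

    # Weight each output by the likelihood ratio w = p(n)/q(n); this corrects for
    # the biased distribution and makes the estimator unbiased.
    w = norm.pdf(n, 0.0, sigma) / norm.pdf(n, shift, sigma)
    p_hat = np.mean(w * errors)

    print(f"IS estimate : {p_hat:.3e}")
    print(f"exact Q(A/s): {norm.sf(A / sigma):.3e}")

With 5000 biased trials the estimate typically lands within a few percent of Q(A/σ) ≈ 5.5·10⁻¹¹, whereas an unbiased Monte Carlo run of comparable accuracy would need over 10¹² trials.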

3.
Detection systems are designed to operate with optimal or nearly optimal probability of a wrong decision. Analytical solutions for the performance of these systems have been very difficult to obtain, and Monte Carlo simulation is often the most tractable method of estimating performance. However, in systems with a small probability of error, this technique requires very large amounts of computer time. A technique known as importance sampling substantially reduces the number of simulation trials needed, for a given accuracy, over the standard Monte Carlo method. The theory and application of the importance sampling method in Monte Carlo simulation are considered in a signal detection context. A general method of applying this technique to the optimal detection problem is given. Results show that in the cases examined, the gain is approximately proportional to the inverse of the error probability. Applications of the proposed method are not limited to optimum detection systems; analysis, leading to a measure of the gain in using this biasing scheme, shows that in all optimal systems considered, fewer than 100 trials are needed to achieve estimates with 95% confidence, even for extremely small error probabilities.

4.
Digital communication systems are frequently operated over nonlinear channels with memory. The analysis of the performance of these systems is difficult, and no complete analytical treatment of the problem has been obtained before. Several recent efforts have been directed toward the computation of error probabilities via Monte Carlo simulation using a complete system model. These simulations require excessively large sample sizes and are not practical for estimating very low error probabilities. This paper presents a modified Monte Carlo simulation technique for estimating error probabilities in digital communication systems operating over nonlinear channels. An importance-sampling technique is used to modify the probability density function of the noise process in a way that makes simulation feasible. Theoretical results as well as realistic examples are presented, showing that the number of samples needed for simulation is reduced considerably.
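A minimal sketch of the core idea, modifying the noise density itself, is shown below for a memoryless saturating nonlinearity followed by a threshold detector. The tanh channel, the variance-scaling bias, and the parameters are assumptions made for illustration; they are not the system model of the paper.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    A, sigma, N = 1.0, 0.2, 20000
    c = 4.0                                   # inflate the noise std under the bias

    n = rng.normal(0.0, c * sigma, N)         # noise from the modified density q
    y = np.tanh(A + n)                        # memoryless saturating channel
    errors = y < 0.0                          # threshold detector errs on negative output

    # Likelihood-ratio weights p(n)/q(n) restore an unbiased estimate of P_e.
    w = norm.pdf(n, 0.0, sigma) / norm.pdf(n, 0.0, c * sigma)
    p_hat = np.mean(w * errors)
    print(f"IS estimate: {p_hat:.3e}   (exact: {norm.sf(A / sigma):.3e})")

Because tanh is monotone, the exact error probability here is Q(A/σ) ≈ 2.9·10⁻⁷, so the weighted estimate can be checked directly; variance scaling is less efficient than mean translation but needs no knowledge of where the error region lies.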

5.
This paper considers the problem of designing efficient and systematic importance sampling (IS) schemes for the performance study of hidden Markov model (HMM) based trackers. IS is a powerful Monte Carlo (MC) variance reduction technique, which can require orders of magnitude fewer simulation trials than ordinary MC to obtain the same specified precision. We present an IS technique applicable to error event analysis of HMM based trackers. Specifically, we use conditional IS to extend our earlier work to the estimation of average error event probabilities. In addition, we derive upper bounds on these error probabilities, which are then used to verify the simulations. The power and accuracy of the proposed method are illustrated by application to an HMM frequency tracker.

6.
Importance sampling is a technique for speeding up Monte Carlo (MC) simulations. The fundamental idea is to use a different simulation distribution to increase the relative frequency of “important” events and then weight the observed data in order to obtain an unbiased estimate of the parameter of interest. Such an estimator often requires orders of magnitude fewer simulation trials than ordinary MC simulation to obtain the same specified precision. We present an importance sampling technique applicable to error event simulation of hidden Markov model (HMM) tracking algorithms. The computational savings possible with the use of this technique are demonstrated both analytically and using simulation results for a specific HMM tracking algorithm.
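The unbiasedness claim is a one-line identity. Writing p for the true density, q for the biased simulation density, and A for the error event (generic notation, not from the abstract):

    \mathbb{E}_q\!\left[\mathbf{1}_A(X)\,\frac{p(X)}{q(X)}\right]
      = \int \mathbf{1}_A(x)\,\frac{p(x)}{q(x)}\,q(x)\,dx
      = \int_A p(x)\,dx = P_p(A),

valid whenever q(x) > 0 wherever p(x)·1_A(x) > 0. All of the variance reduction therefore hinges on choosing q so that the weighted indicator is as close to constant as possible on A.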

7.
On the performance of linear parallel interference cancellation
This paper analyzes the performance of the linear parallel interference cancellation (LPIC) multiuser detector in a synchronous multiuser communication scenario with binary signaling, nonorthogonal multiple access interference, and an additive white Gaussian noise channel. The LPIC detector has received attention in the literature due to its low computational complexity, potential for good performance under certain operating conditions, and close connections to the decorrelating detector. In this paper, we compare the performance of the two-stage LPIC detector to the original multistage detector proposed by Varanasi and Aazhang (1990, 1991) for code division multiple access (CDMA) systems. The general M-stage LPIC detector is compared to the conventional matched filter detector to describe operating conditions where the matched filter detector outperforms the LPIC detector in terms of error probability at any stage M. Analytical results are presented showing that the LPIC detector may exhibit divergent error probability performance under certain operating conditions and may actually yield error probabilities greater than 0.5 in some cases. Asymptotic results are presented for the case where the number of LPIC stages goes to infinity. Implications of these results for CDMA systems with random binary spreading sequences are discussed in the “large-system” scenario. Our results are intended to analytically corroborate the simulation evidence of other authors and to provide cautionary guidelines concerning the application of the LPIC detector to CDMA communication systems.
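A compact way to see the convergence issue mentioned above is to write the soft M-stage LPIC recursion on the matched-filter outputs y = Rb + n, with R the crosscorrelation matrix: z⁽ᵐ⁾ = y − (R − I)z⁽ᵐ⁻¹⁾. This geometric iteration converges to the decorrelator output R⁻¹y only when every eigenvalue of R lies in (0, 2). The 3-user R below is an arbitrary illustrative choice, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative 3-user crosscorrelation matrix (unit diagonal, nonorthogonal codes).
    R = np.array([[1.0, 0.4, 0.3],
                  [0.4, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
    b = np.array([1.0, -1.0, 1.0])            # transmitted bits
    y = R @ b + 0.1 * rng.normal(size=3)      # matched-filter bank output

    z = y.copy()                              # stage 0: conventional matched filter
    for m in range(1, 8):
        z = y - (R - np.eye(3)) @ z           # parallel cancellation of estimated MAI
        print(m, np.sign(z), np.round(z, 3))

    print("decorrelator:", np.round(np.linalg.solve(R, y), 3))
    print("eigenvalues of R:", np.round(np.linalg.eigvalsh(R), 3))

Replacing R with a more heavily loaded matrix whose largest eigenvalue exceeds 2 makes the iterates oscillate with growing amplitude, which is the divergent behavior the paper characterizes analytically.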

8.
Analytically based methods for evaluating the performance of digital lightwave systems in terms of bit error rates (BERs) are extremely difficult to develop without making restrictive assumptions. A Monte Carlo simulation approach can offer an attractive alternative. However, for typical optical systems, this approach would require an excessive amount of computer time. Importance sampling (IS) is a variance reduction method which can substantially increase the computational efficiency of Monte Carlo simulations. This paper presents an IS method to efficiently evaluate the BERs of direct-detection optical systems employing avalanche photodiode (APD) receivers. Specifically, using a heuristic argument based on large deviations theory, a large class ℱ of exponentially twisted sampling distributions for the APD-based receiver is developed. It is then demonstrated that the “optimized” exponentially twisted distribution of large deviations theory is the most efficient sampling distribution in ℱ. Further, it is demonstrated that such a distribution estimates the performance of optical systems with sufficient accuracy to warrant its use as a powerful and flexible tool in the computer-aided design, analysis, and modeling of fiber-optic transmission systems.
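The APD output statistics treated in the paper are involved, but the mechanics of exponential twisting can be shown on the simpler ideal photon-counting model, where the decision statistic is a Poisson count. Everything below (the Poisson model, the mean λ, the threshold) is an illustrative assumption, not the paper's receiver. Twisting a Poisson(λ) pmf by e^{sk} gives another Poisson law with mean λe^s, so the twisted distribution is as easy to sample as the original:

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(4)

    lam, T, N = 10.0, 30, 20000       # mean photon count, threshold, trials
    s = np.log(T / lam)               # tilt chosen so the twisted mean equals T

    k = rng.poisson(lam * np.exp(s), N)              # samples from the twisted pmf
    w = np.exp(-s * k + lam * (np.exp(s) - 1.0))     # weight w(k) = p(k)/q(k)
    p_hat = np.mean(w * (k >= T))

    print(f"twisted-IS estimate: {p_hat:.3e}")
    print(f"exact tail P(K>=T):  {poisson.sf(T - 1, lam):.3e}")

Centering the twisted distribution on the threshold is precisely the large-deviations prescription that the paper optimizes over its class ℱ.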

9.
We present a stochastic technique for the efficient simulation of adaptive systems which employ diversity in the presence of frequency-nonselective slow Rayleigh fading and additive white Gaussian noise. The computational efficiency is achieved using techniques based on importance sampling (IS). We utilize a stochastic gradient descent (SGD) algorithm to determine the near-optimal IS parameters that characterize the dominant fading process. After accounting for the overhead of the optimization algorithm, average speed-up factors of up to six orders of magnitude over conventional Monte Carlo were attained for error probabilities as low as 10⁻¹¹ for a fourth-order diversity model.
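The flavor of descending on IS parameters can be sketched in one dimension. For the Gaussian mean-shift family q_θ = N(θ, σ²), the estimator variance is governed by the second moment m(θ) = E_q[w²·1{error}], and even a crude stochastic finite-difference descent on m(θ) finds a good shift. This toy setup (antipodal signaling, no fading or diversity, sign-SGD with a symmetric probe) is only an assumption-laden stand-in for the paper's Rayleigh-fading setting.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    A, sigma = 1.0, 0.25                     # P_e = Q(4), about 3.2e-5

    def second_moment(theta, n=2000):
        """MC estimate of m(theta) = E_q[w^2 1{error}] under q = N(theta, sigma^2)."""
        x = rng.normal(theta, sigma, n)
        w = norm.pdf(x, 0.0, sigma) / norm.pdf(x, theta, sigma)
        return np.mean((w * ((A + x) < 0.0)) ** 2)

    theta, step, delta = -0.3, 0.05, 0.02    # initial shift, step size, probe width
    for _ in range(200):
        g = (second_moment(theta + delta) - second_moment(theta - delta)) / (2 * delta)
        theta -= step * np.sign(g)           # sign-SGD tames the noisy gradient
    print(f"learned shift theta = {theta:.2f}  (decision boundary at {-A})")

The descent drifts to the neighborhood of θ ≈ −A, the shift that mean-translation analysis would also recommend, after which the optimized q_θ can be used for the production IS run.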

10.
The authors propose a supervised nonparametric technique, based on the “compound classification rule” for minimum error, to detect land-cover transitions between two remote-sensing images acquired at different times. Thanks to a simplifying hypothesis, the compound classification rule is transformed into a form that is easier to compute. In the resulting rule, an important role is played by the probabilities of transitions, which take into account the temporal dependence between the two images. In order to avoid requiring that training sets be representative of all possible types of transitions, the authors propose an iterative algorithm which allows the probabilities of transitions to be estimated directly from the images under investigation. Experimental results on two Thematic Mapper images confirm that the proposed algorithm may provide remarkably better detection accuracy than the “Post Classification Comparison” algorithm, which is based on separate classifications of the two images.
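Under the simplifying hypothesis mentioned above (class-conditional independence between the two acquisition dates), the compound rule reduces to maximizing p(x₁|c₁)·p(x₂|c₂)·P(c₁,c₂) over class pairs. The sketch below applies it per pixel with made-up one-dimensional Gaussian class models and a made-up, diagonal-heavy transition prior; both are assumptions for illustration only.

    import numpy as np
    from scipy.stats import norm

    classes = ["water", "soil", "vegetation"]
    mu = np.array([0.1, 0.5, 0.8])           # class-conditional means (illustrative)
    sd = np.array([0.05, 0.10, 0.08])        # class-conditional std devs

    # Joint prior P(c1, c2): diagonal-heavy, i.e. land-cover transitions are rare.
    P = np.array([[0.30, 0.02, 0.01],
                  [0.02, 0.25, 0.05],
                  [0.01, 0.05, 0.29]])

    def compound_classify(x1, x2):
        """Maximize p(x1|c1) * p(x2|c2) * P(c1, c2) over all class pairs."""
        like1 = norm.pdf(x1, mu, sd)                 # p(x1|c1) for every c1
        like2 = norm.pdf(x2, mu, sd)                 # p(x2|c2) for every c2
        score = np.outer(like1, like2) * P
        i, j = np.unravel_index(np.argmax(score), score.shape)
        return classes[i], classes[j]

    print(compound_classify(0.52, 0.55))     # ('soil', 'soil'): no transition
    print(compound_classify(0.52, 0.82))     # ('soil', 'vegetation'): a transition

The iterative algorithm of the paper would re-estimate the matrix P from the images themselves rather than assuming it, which is what removes the need for transition-representative training sets.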

11.
On denoising and best signal representation
We propose a best basis algorithm for signal enhancement in white Gaussian noise. The best basis search is performed in families of orthonormal bases constructed with wavelet packets or local cosine bases. We base our search for the “best” basis on a criterion of minimal reconstruction error of the underlying signal. This approach is intuitively appealing because the enhanced or estimated signal has an associated measure of performance, namely, the resulting mean-square error. Previous approaches in this framework have focused on obtaining the most “compact” signal representations, which consequently contribute to effective denoising. These approaches, however, do not possess the inherent measure of performance which our algorithm provides. We first propose an estimator of the mean-square error based on a heuristic argument, and subsequently compare the reconstruction performance based upon it to that based on the Stein (1981) unbiased risk estimator. We compare the two proposed estimators by providing both qualitative and quantitative analyses of the bias term. Having two estimators of the mean-square error, we incorporate these cost functions into the search for the “best” basis and subsequently provide a substantiating example to demonstrate their performance.
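The role of the mean-square-error estimate can be illustrated with Stein's unbiased risk estimate (SURE) for soft thresholding, one of the two cost functions compared above. The sketch below only scores a handful of candidate wavelet bases by SURE and keeps the cheapest; it is a deliberate simplification of a full wavelet-packet best-basis search, and the test signal, noise level, and candidate list are assumptions.

    import numpy as np
    import pywt

    rng = np.random.default_rng(6)
    n, sigma = 1024, 0.3
    t = np.linspace(0.0, 1.0, n)
    x = np.sin(8 * np.pi * t ** 2)                 # clean chirp-like test signal
    y = x + sigma * rng.normal(size=n)             # noisy observation

    def sure(details, thr):
        """Stein's unbiased estimate of the risk of soft-thresholding the details."""
        c = np.concatenate([d.ravel() for d in details])
        return (sigma**2 * (c.size - 2 * np.sum(np.abs(c) <= thr))
                + np.sum(np.minimum(np.abs(c), thr) ** 2))

    thr = sigma * np.sqrt(2 * np.log(n))           # universal threshold
    best = min(["db2", "db4", "db8", "sym8", "coif3"],
               key=lambda wname: sure(pywt.wavedec(y, wname)[1:], thr))

    coeffs = pywt.wavedec(y, best)
    coeffs[1:] = [pywt.threshold(d, thr, mode="soft") for d in coeffs[1:]]
    x_hat = pywt.waverec(coeffs, best)
    print(best, "reconstruction MSE:", np.mean((x_hat[:n] - x) ** 2))

Because the transforms are (near-)orthonormal, the coefficient-domain risk tracks the signal-domain mean-square error, which is what lets an estimated MSE steer the basis search.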

12.
Dependability modeling plays a major role in the design, validation and maintenance of real-time computing systems. Typical models provide measures such as mean time to failure, reliability and safety as functions of the component failure rates and fault/error coverage probabilities. In this paper we propose a modeling technique that allows the coverage to depend upon both the local (i.e. embedded at task level) and the global (i.e. available at system level) fault/error detection and recovery mechanisms. This approach also ensures important savings in the simulation time required for deriving the coverage probabilities. Stochastic reward nets are employed as a unified dependability modeling framework. To illustrate the usefulness of this technique, we analyze the dependability of a railroad control computer.

13.
We present a framework for determining the exact decoder error and failure probabilities of linear block codes in a frequency-hop communication channel with an arbitrary number of conditional symbol error and erasure probabilities. Applications are demonstrated for type-I hybrid ARQ systems by deriving equations for the packet error probability and throughput. Because these quantities are too small to be obtained by simulation, the framework provides exact results which are unobtainable by previous work.
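For the special case of independent, identically distributed symbol outcomes (one error probability and one erasure probability, rather than the arbitrary number of conditional probabilities handled by the paper's framework), the failure probability of bounded-distance errors-and-erasures decoding has a closed multinomial form, since decoding succeeds exactly when 2t + e ≤ d_min − 1 for t symbol errors and e erasures. The code parameters below are an illustrative Reed-Solomon-like choice.

    from math import comb

    def decoder_failure(n, d, p_err, p_ers):
        """P{bounded-distance errors-and-erasures decoding does not succeed},
        assuming i.i.d. symbols; undetected error and failure are lumped together."""
        p_ok = 1.0 - p_err - p_ers
        p_success = 0.0
        for e in range(d):                       # erasures allow e <= d - 1
            for t in range((d - 1 - e) // 2 + 1):
                p_success += (comb(n, e) * comb(n - e, t)
                              * p_ers**e * p_err**t * p_ok**(n - e - t))
        return 1.0 - p_success

    # Illustrative (31, 23) code: d_min = 9, symbol error/erasure rates assumed.
    print(f"{decoder_failure(31, 9, 1e-3, 5e-3):.3e}")

Probabilities at this scale are exactly the regime the abstract refers to: far too small to measure by direct simulation, but cheap to evaluate in closed form.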

14.
Each simulation run simulates a single error event, that is, a subsequence of incorrect trellis branching decisions, and importance sampling is used to emphasize important nontrivial error events. The fundamental principles of the error event simulation method in conjunction with importance sampling are reviewed. The importance sampling background for coded communication systems is discussed in the context of block codes, because it is easier to present the fundamentals of importance sampling without the additional complexities of Viterbi decoding. The details of the error event simulation method for Viterbi decoders are presented, together with numerical examples that demonstrate the method. These numerical examples involve both hard and soft decision decoding for the ideal additive Gaussian noise channel. The technique is shown to provide markedly improved efficiency.

15.
Topographic mapping with spotlight synthetic aperture radar (SAR) using an interferometric technique is studied. Included is a review of the equations for determining terrain elevation from the phase difference between a pair of SAR images formed from data collected at two differing imaging geometries. This paper builds upon the systems analysis of Li and Goldstein, in which image pair decorrelation as a function of the “baseline” separation between the receiving antennas was first analyzed. In this paper, correlation and topographic height error variance models are developed based on a SAR image model derived from a tomographic image formation perspective. The models are general in the sense that they are constructed to analyze the case of single-antenna, two-pass interferometry with arbitrary antenna line-of-sight and velocity vector directions. The sensitivity of correlation and height error variance to SAR system parameters and terrain gradients is studied.
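The phase-to-height sensitivity that drives such error models is standard. For two-pass interferometry with perpendicular baseline B⊥, wavelength λ, slant range r, and look angle θ (textbook geometry, not the paper's full tomographic derivation),

    \frac{\partial \phi}{\partial h} \approx \frac{4\pi B_\perp}{\lambda\, r \sin\theta},
    \qquad
    \sigma_h \approx \frac{\lambda\, r \sin\theta}{4\pi B_\perp}\,\sigma_\phi ,

so a longer baseline reduces the height error for a given phase noise σ_φ, but only until baseline decorrelation inflates σ_φ faster than B⊥ grows. That trade-off is exactly what the correlation models quantify.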

16.
In multilevel bandpass data transmission there is usually a difference between the phase of the carrier signal and the phase of the receiver oscillator, thereby causing imperfect demodulation. The use of nonclassical Gauss quadrature rules (GQRs) allows: 1) a theoretical study of the joint effects of phase jitter, thermal noise, and intersymbol interference on the average error probability; and 2) a maximum-precision sampling technique in the simulation of digital communication systems. On this basis, the mean-square-error and zero-forcing equalizers are considered, and their performance is evaluated in terms of the average error probability for multilevel pulse-amplitude-modulation (PAM) and partial-response-coded (PRC) signaling schemes.
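The flavor of the quadrature approach can be conveyed with a classical Gauss-Hermite rule instead of the paper's nonclassical GQRs: averaging the conditional BPSK error probability Q(√(2γ)·cosθ) over Gaussian phase jitter θ ~ N(0, σ_j²). The modulation, the Gaussian jitter model, the neglect of intersymbol interference, and the parameter values are all illustrative assumptions.

    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2.0))

    gamma = 10.0 ** (9.6 / 10.0)      # Eb/N0 of 9.6 dB
    sig_j = np.deg2rad(10.0)          # 10 degrees rms phase jitter (assumed)

    # Gauss-Hermite rule: E[f(theta)] = (1/sqrt(pi)) * sum_i w_i f(sqrt(2)*sig_j*x_i)
    x, w = hermgauss(32)
    pe = np.sum(w * Q(np.sqrt(2.0 * gamma) * np.cos(np.sqrt(2.0) * sig_j * x))) / np.sqrt(np.pi)

    print(f"P_e with jitter: {pe:.3e}   (no jitter: {Q(np.sqrt(2.0 * gamma)):.3e})")

A 32-node rule already integrates this smooth integrand essentially exactly; the nonclassical rules of the paper play the same role when the jitter density is not one of the classical weight functions.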

17.
Importance sampling (IS) is a powerful method for reducing simulation run times when estimating the probabilities of rare events in communication systems using Monte Carlo simulation, and it is made feasible and effective for the simulation of networks of queues by regenerative techniques. However, using the most favorable IS settings very often makes the length of regeneration cycles infinite or impractically long. To address this problem, a methodology is discussed that uses IS dynamically within each regeneration cycle to drive the system back to the regeneration state after an accurate estimate has been obtained. A statistically based technique for optimizing IS parameter values for simulations of queueing systems, including complex systems with bursty arrival processes, is formulated. A deterministic variant of stochastic simulated annealing, called mean field annealing (MFA), is used to minimize statistical estimates of the IS estimator variance. The technique is demonstrated by evaluating blocking probabilities.
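The interaction between IS and regeneration can be miniaturized to an M/M/1 example: estimate, per busy cycle, the probability that the queue reaches level L before emptying. The biased distribution below is the classical arrival/service swap heuristic for this model, not the paper's mean-field-annealing optimization, and all parameters are illustrative. Because each cycle ends as soon as the embedded walk is absorbed at 0 or L, the biasing here never produces the endless cycles described above.

    import numpy as np

    rng = np.random.default_rng(7)

    lam, mu, L, N = 1.0, 2.0, 25, 10000   # arrival/service rates, overflow level, cycles
    p = lam / (lam + mu)                  # P(up-step) in the embedded random walk
    pb = mu / (lam + mu)                  # biased P(up-step): swap arrivals and services

    est = np.zeros(N)
    for i in range(N):
        q, w = 1, 1.0                     # each busy cycle starts with one customer
        while 0 < q < L:
            if rng.random() < pb:         # biased up-step
                q += 1
                w *= p / pb
            else:                         # biased down-step
                q -= 1
                w *= (1 - p) / (1 - pb)
        est[i] = w if q == L else 0.0

    exact = (1.0 - mu / lam) / (1.0 - (mu / lam) ** L)   # gambler's-ruin formula
    print(f"IS estimate: {est.mean():.3e} +/- {est.std() / np.sqrt(N):.1e}")
    print(f"exact:       {exact:.3e}")

For this birth-death walk the swap makes the likelihood ratio constant on every overflow path, so 10⁴ cycles estimate a probability near 3·10⁻⁸ to about one percent.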

18.
A new simulation technique is presented for performance studies of local area networks which use CSMA/CD as the access protocol. The method partitions the network stations into a few primary stations, with the remainder treated as background stations. The background stations are modeled as a group by a background algorithm, based on a dynamic CSMA/CD model, which uses a set of probabilities and sampling to produce, in a recurrent manner, a stochastic sequence of busy/idle periods. The new method is more efficient than conventional discrete-event simulation when the number of network stations is reasonably large. The algorithm and the overall simulation technique are comprehensively validated. The technique is illustrated with a performance study of interactive transactions on a metropolitan CATV network.
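The background-station idea can be caricatured as a recurrent two-state sampler: instead of simulating every station's protocol actions, alternating idle and busy periods are drawn from distributions parameterized by the offered load. The exponential choices and the load-to-gap mapping below are stand-ins for the paper's calibrated CSMA/CD model, purely for illustration.

    import numpy as np

    rng = np.random.default_rng(10)

    def background_periods(load, horizon):
        """Yield (state, duration) pairs approximating aggregate channel activity."""
        t, busy = 0.0, False
        while t < horizon:
            if busy:
                dur = rng.exponential(1.0)                   # busy: one transmission period
            else:
                dur = rng.exponential((1.0 - load) / load)   # idle gap shrinks with load
            yield ("busy" if busy else "idle"), dur
            t += dur
            busy = not busy

    util = sum(d for state, d in background_periods(0.6, 10000.0) if state == "busy")
    print(f"channel utilization ~ {util / 10000.0:.2f}")

Replacing the aggregate of the background stations with such a recurrent process is what avoids the per-station event scheduling that dominates conventional discrete-event runs.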

19.
The performance of Reed-Solomon (RS) coded direct-sequence code division multiple-access (DS-CDMA) systems using noncoherent M-ary orthogonal modulation is investigated over multipath Rayleigh fading channels. Diversity reception techniques with equal gain combining (EGC) or selection combining (SC) are invoked, and the related performance is evaluated for both uncoded and coded DS-CDMA systems. “Errors-and-erasures” decoding is considered, where the erasures are based on Viterbi's (1982) so-called ratio threshold test (RTT). The probability density functions (PDFs) of the ratio associated with the RTT, conditioned on both the correct detection and the erroneous detection of the M-ary signals, are derived. These PDFs are then used for computing the codeword decoding error probability of the RS coded DS-CDMA system using “errors-and-erasures” decoding. Furthermore, the performance of the “errors-and-erasures” decoding technique employing the RTT is compared to that of “error-correction-only” decoding, which refrains from using side information, over multipath Rayleigh fading channels. As expected, the numerical results show that when using “errors-and-erasures” decoding, RS codes of a given code rate can achieve a higher coding gain than without erasure information.
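The RTT itself is simple to state: form the ratio of the largest to the second-largest of the M noncoherent envelope outputs and declare an erasure whenever it falls below a threshold. A minimal sketch on an AWGN-only M-ary orthogonal channel (no fading, no multipath, no diversity combining; all parameters assumed):

    import numpy as np

    rng = np.random.default_rng(8)
    M, theta_r, N = 8, 1.5, 50000           # alphabet size, RTT threshold, symbols
    Es, sigma = 1.0, 0.35

    correct = erased = errors = 0
    for _ in range(N):
        # Noncoherent envelopes: signal energy in branch 0, noise in all branches.
        z = np.abs(np.sqrt(Es) * (np.arange(M) == 0)
                   + sigma * (rng.normal(size=M) + 1j * rng.normal(size=M)))
        order = np.argsort(z)
        ratio = z[order[-1]] / z[order[-2]]  # largest over runner-up envelope
        if ratio < theta_r:
            erased += 1                      # RTT declares an erasure
        elif order[-1] == 0:
            correct += 1
        else:
            errors += 1

    print(f"P(correct) {correct/N:.3f}  P(erase) {erased/N:.3f}  P(error) {errors/N:.4f}")

Raising θ_r trades undetected symbol errors for erasures, and since an RS code can correct twice as many erasures as errors, the codeword-level gain reported in the abstract follows.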

20.
The purpose of this paper is to describe a technique of importance sampling for Monte Carlo simulation of radar signal detectors under the ‘No Signal’ condition. The input to the detector is ‘restricted’ to a range that is more likely to cause a ‘false alarm’, thereby increasing the number of false alarms in a given sample. A model based on the theory of the Gauss-Markov process is developed so as to allow consideration of the case when the successive noise (clutter) samples are correlated. The technique enables estimation of very low false alarm probabilities with a relatively moderate sample size.
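A minimal version of this scheme: draw the Gauss-Markov (AR(1)) clutter with its innovations mean-shifted so that the detector input sits near the threshold, making false alarms frequent, then weight each run by the product of innovation likelihood ratios. The AR(1) coefficient, the integrate-and-compare detector, and the size of the shift are illustrative assumptions, not the paper's model.

    import numpy as np
    from scipy.signal import lfilter
    from scipy.stats import norm

    rng = np.random.default_rng(9)

    rho, K, T, N = 0.7, 16, 2.0, 20000    # AR(1) coefficient, window, threshold, runs
    s = np.sqrt(1.0 - rho**2)             # innovation std giving unit clutter variance
    m = T * (1.0 - rho)                   # shift innovations so the clutter mean sits at T

    est = np.zeros(N)
    for i in range(N):
        v = rng.normal(m, s, K)                          # biased (mean-shifted) innovations
        w = np.prod(norm.pdf(v, 0.0, s) / norm.pdf(v, m, s))
        x = lfilter([1.0], [1.0, -rho], v)               # Gauss-Markov clutter samples
        est[i] = w * (x.mean() > T)                      # false alarm: integrated cell > T

    print(f"P_fa estimate: {est.mean():.3e} +/- {est.std() / np.sqrt(N):.1e}")

Under the original measure a crossing at this threshold occurs roughly once in tens of thousands of windows, so the restricted (biased) input achieves in 2·10⁴ runs what crude Monte Carlo would need millions for.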
