Similar Documents
20 similar documents found (search time: 15 ms)
1.
On the optimality of the OFDMA network   (total citations: 2; self-citations: 0; by others: 2)
This letter studies the optimality of the OFDMA network. It is proved that in the multiuser multicarrier downlink system with independent decoding, OFDMA is the optimal multiple access scheme. It is also shown that the optimality of OFDMA holds for any adaptive modulation scheme whose rate-SNR/SINR function can be approximated as a convex function.

2.
Let X be a discrete random variable drawn according to a probability mass function p(x), and suppose p(x) is dyadic, i.e., log(1/p(x)) is an integer for each x. It is shown that the binary code length assignment l(x) = log(1/p(x)) dominates any other uniquely decodable assignment l'(x) in expected length, in the sense that El(X) ≤ El'(X), indicating optimality in long-run performance (which is well known), and competitively dominates l'(x), in the sense that Pr{l(X) < l'(X)} ≥ Pr{l(X) > l'(X)}, which indicates that l is also optimal in the short run. In general, if p is not dyadic, then l = ⌈log 1/p⌉ dominates l' + 1 in expected length and competitively dominates l' + 1, where l' is any other uniquely decodable code.
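The competitive-optimality claim above can be checked numerically on a toy dyadic source; the pmf, the competing length assignment, and all names below are illustrative choices, not taken from the paper:

```python
import math

# A dyadic pmf: log2(1/p(x)) is an integer for every outcome.
p = {"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/8}
l = {x: int(math.log2(1 / px)) for x, px in p.items()}   # Shannon lengths: 1, 2, 3, 3

# A competing uniquely decodable assignment must satisfy Kraft's inequality.
l_alt = {"a": 2, "b": 2, "c": 2, "d": 2}                 # sum 2^-l' = 1, so admissible
assert sum(2**-v for v in l_alt.values()) <= 1

expected = lambda code: sum(p[x] * code[x] for x in p)
# Long-run optimality: El(X) <= El'(X).
assert expected(l) <= expected(l_alt)

# Competitive (short-run) optimality: Pr{l(X) < l'(X)} >= Pr{l(X) > l'(X)}.
win = sum(p[x] for x in p if l[x] < l_alt[x])
lose = sum(p[x] for x in p if l[x] > l_alt[x])
assert win >= lose
```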

3.
On the optimality of the gridding reconstruction algorithm   (total citations: 2; self-citations: 0; by others: 2)
Gridding reconstruction is a method to reconstruct data onto a Cartesian grid from a set of nonuniformly sampled measurements. This method is appreciated for being robust and computationally fast. However, it lacks solid analysis and design tools to quantify or minimize the reconstruction error. Least squares reconstruction (LSR), on the other hand, is another method which is optimal in the sense that it minimizes the reconstruction error. This method is computationally intensive and, in many cases, sensitive to measurement noise. Hence, it is rarely used in practice. Despite their seemingly different approaches, the gridding and LSR methods are shown to be closely related. The similarity between these two methods is accentuated when they are properly expressed in a common matrix form. It is shown that the gridding algorithm can be considered an approximation to the least squares method. The optimal gridding parameters are defined as the ones which yield the minimum approximation error. These parameters are calculated by minimizing the norm of an approximation error matrix. This problem is studied and solved in the general form of approximation using linearly structured matrices. This method not only supports more general forms of the gridding algorithm, it can also be used to accelerate the reconstruction techniques from incomplete data. The application of this method to a case of two-dimensional (2-D) spiral magnetic resonance imaging shows a reduction of more than 4 dB in the average reconstruction error.
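A minimal sketch of the gridding-vs-LSR relationship in matrix form, assuming a generic randomly generated sampling matrix in place of a real nonuniform Fourier operator, and crude diagonal density-compensation weights; everything here is illustrative, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_grid = 40, 16                      # nonuniform measurements, Cartesian grid size
A = rng.standard_normal((n_meas, n_grid))    # stand-in for the nonuniform sampling operator
y = A @ rng.standard_normal(n_grid)          # measured data

# Least squares reconstruction (LSR): optimal, minimizes ||A x - y||.
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# Gridding-style reconstruction: adjoint with diagonal density-compensation
# weights, i.e. x = A^T W y, an approximation to the LS operator (A^T A)^{-1} A^T.
w = 1.0 / np.sum(A**2, axis=1)               # crude density-compensation weights
x_grid = A.T @ (w * y)

# LSR can never have a larger residual than the gridding approximation.
assert np.linalg.norm(A @ x_ls - y) <= np.linalg.norm(A @ x_grid - y) + 1e-9
```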

4.
Locally monotonic regression is a recently proposed technique for the deterministic smoothing of finite-length discrete signals under the smoothing criterion of local monotonicity. Locally monotonic regression falls within a general framework for the processing of signals that may be characterized in three ways: regressions are given by projections that are determined by semimetrics, the processed signals meet shape constraints that are defined at the local level, and the projections are optimal statistical estimates in the maximum likelihood sense. The authors explore the relationship between the geometric and deterministic concept of projection onto (generally nonconvex) sets and the statistical concept of likelihood, with the aim of characterizing projections under the family of p-semimetrics as maximum likelihood estimates of signals contaminated with noise from a well-known family of exponential densities.

5.
6.
On the optimality of the binary reflected Gray code   (total citations: 2; self-citations: 0; by others: 2)
This paper concerns the problem of selecting a binary labeling for the signal constellation in M-PSK, M-PAM, and M-QAM communication systems. Gray labelings are discussed and the original work by Frank Gray is analyzed. As is noted, the number of distinct Gray labelings that result in different bit-error probability grows rapidly with increasing constellation size. By introducing a recursive Gray labeling construction method called expansion, the paper answers the natural question of what labeling, among all possible constellation labelings, will give the lowest possible average probability of bit errors for the considered constellations. Under certain assumptions on the channel, the answer is that the labeling proposed by Gray, the binary reflected Gray code, is the optimal labeling for all three constellations, which has, surprisingly, never been proved before.
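The reflect-and-prefix construction of the binary reflected Gray code, and its defining single-bit-change property (including the wrap-around relevant to a PSK circle), can be sketched as:

```python
def brgc(n):
    """Binary reflected Gray code on n bits: prefix 0 to the (n-1)-bit code,
    then prefix 1 to its reflection."""
    if n == 0:
        return [""]
    prev = brgc(n - 1)
    return ["0" + c for c in prev] + ["1" + c for c in reversed(prev)]

codes = brgc(3)
assert codes == ["000", "001", "011", "010", "110", "111", "101", "100"]

# Adjacent labels (including the wrap-around, as on an 8-PSK circle) differ in one bit.
for i in range(len(codes)):
    a, b = codes[i], codes[(i + 1) % len(codes)]
    assert sum(x != y for x, y in zip(a, b)) == 1
```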

7.
Adaptive algorithms for designing two-category linear pattern classifiers have been widely used on nonseparable pattern sets even though they do not directly minimize the number of classification errors and their optimality for pattern classification is not completely known. Many of these algorithms have been shown to be asymptotically optimal for patterns from Gaussian distributions with equal-covariance matrices. However, their relative efficiencies for design with a finite number of patterns have not been known. This paper uses truncated Taylor series expansions to evaluate the misadjustment, or extra probability of error, that results when these algorithms are used to design a linear classifier with a finite number of patterns. The expressions have been evaluated for three algorithms: the fixed-increment error-correction algorithm, the relaxation error-correction algorithm, and the least-mean-square (LMS) algorithm, each used with patterns from Gaussian distributions with equal-covariance matrices.

8.
Communication over a waveform channel corrupted by additive white Gaussian noise, and by an unknown and arbitrary interfering signal of bounded power, is considered. For this channel, the authors derive an upper bound to the worst case error probability of direct-sequence spread spectrum communication with a correlation receiver, and also a lower bound applicable to any binary signaling technique and any receiver. By comparing these two bounds, it is shown that, if a small error probability is required, then no other binary signaling scheme or receiver can substantially improve upon the performance of direct-sequence with a correlation receiver for the same power and bandwidth.

9.
Two strikingly simple necessary conditions for the optimality of a class of multi-input closed-loop linear control systems are presented.

10.
In this paper, we study the optimality of bit detection for coherent M-ary phase-shift keying (PSK) and M-ary quadrature amplitude modulation (QAM), and noncoherent M-ary frequency-shift keying (FSK) signal sets. For M-PSK and M-QAM signal constellations, we employ Gray mapping, consider 8-PSK and 16-QAM signal sets as representative of the general results, and derive the log-likelihood ratio (LLR) for each bit forming the symbol. Using the LLRs, we derive the average bit-error probability (BEP) for the individual bits, and show that the decision regions and the corresponding average BEP for the case of M-PSK coincide with those obtained with the optimal symbol-based detector, whereas, in general, this is not the case for M-QAM and M-FSK.

11.
This paper demonstrates that the classical receiver used for coherent detection of differentially encoded M-ary phase-shift keying (M-PSK), namely, the one that makes optimum coherent decisions on two successive symbol phases and then differences them to arrive at a decision on the information phase, is suboptimum when M > 2. However, this receiver structure can be arrived at by a suitable approximation of the likelihood function used to derive the true optimum receiver, whose structure is also given.

12.
This paper investigates the energy compaction capabilities of nonunitary filter banks in subband coding. It is shown that nonunitary filter banks have larger coding gain than unitary filter banks because of the possibility of performing half-whitening in each channel. For long filter unit pulse responses, optimization of subband coding gain for stationary input signals results in a filter bank decomposition, where each channel works as an optimal open-loop DPCM system. We derive a formula giving the optimal filter response for each channel as a function of the input power spectral density (PSD). For shorter filter bank responses, good gain is obtained by suboptimal half-whitening responses, but the impact on the theoretical coding gain is still highly significant. Image coding examples demonstrate that better performance is achieved using nonunitary filter banks when the input images are in correspondence with the signal model.

13.
On the optimality of conditional expectation as a Bregman predictor   (total citations: 1; self-citations: 0; by others: 1)
We consider the problem of predicting a random variable X from observations, denoted by a random variable Z. It is well known that the conditional expectation E[X|Z] is the optimal L² predictor (also known as the least-mean-square error predictor) of X among all (Borel measurable) functions of Z. In this correspondence, we provide necessary and sufficient conditions on general loss functions under which the conditional expectation is the unique optimal predictor. We show that E[X|Z] is the optimal predictor for all Bregman loss functions (BLFs), of which the L² loss function is a special case. Moreover, under mild conditions, we show that the BLFs are exhaustive: if, for every random variable X, the infimum of E[F(X, y)] over all constants y is attained by the expectation E[X], then F is a BLF.
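The distinction the abstract draws can be seen numerically: for the squared loss (a Bregman loss) the best constant predictor is the mean, while for the absolute loss (not a BLF) it is the median. The discrete distribution below is an arbitrary illustrative choice:

```python
import numpy as np

# A skewed discrete X: values with probabilities.
vals = np.array([0.0, 1.0, 10.0])
probs = np.array([0.4, 0.35, 0.25])
mean = float(vals @ probs)                        # E[X] = 2.85; median is 1.0

grid = np.linspace(-1, 12, 2601)
sq_risk = lambda y: probs @ (vals - y) ** 2       # squared error: a Bregman loss
abs_risk = lambda y: probs @ np.abs(vals - y)     # absolute error: NOT a Bregman loss

y_sq = grid[np.argmin([sq_risk(y) for y in grid])]
y_abs = grid[np.argmin([abs_risk(y) for y in grid])]

assert abs(y_sq - mean) < 0.01        # E[X] minimizes the Bregman (L2) risk
assert abs(y_abs - 1.0) < 0.01        # the median, not E[X], minimizes the L1 risk
```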

14.
Although the capacity of multiple-input/multiple-output (MIMO) broadcast channels (BCs) can be achieved by dirty paper coding (DPC), it is difficult to implement in practical systems. This paper investigates whether, for a large number of users, simpler schemes can achieve the same performance. Specifically, we show that a zero-forcing beamforming (ZFBF) strategy, while generally suboptimal, can achieve the same asymptotic sum capacity as that of DPC, as the number of users goes to infinity. In proving this asymptotic result, we provide an algorithm for determining which users should be active under ZFBF. These users are semiorthogonal to one another and can be grouped for simultaneous transmission to enhance the throughput of scheduling algorithms. Based on the user grouping, we propose and compare two fair scheduling schemes: round-robin ZFBF and proportional-fair ZFBF. We provide numerical results to confirm the optimality of ZFBF and to compare the performance of ZFBF and the proposed fair scheduling schemes with that of various MIMO BC strategies.
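A minimal sketch of the zero-forcing step only, assuming a hypothetical random flat-fading channel matrix; the paper's user selection and power allocation are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_tx = 3, 4                          # selected users, transmit antennas
H = rng.standard_normal((n_users, n_tx))      # hypothetical channel rows, one per user

# Zero-forcing beamforming: precode with the pseudo-inverse so each user's
# beam lies in the null space of every other user's channel.
W = np.linalg.pinv(H)                         # columns are the ZF beamforming vectors
effective = H @ W                             # per-user effective channels

# Off-diagonal entries vanish: no inter-user interference after precoding.
assert np.allclose(effective, np.eye(n_users), atol=1e-10)
```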

15.
Graphical models, such as Bayesian networks and Markov random fields (MRFs), represent statistical dependencies of variables by a graph. The max-product "belief propagation" algorithm is a local-message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable values of the unobserved variables given the observed ones. Good empirical performance has been obtained by running the max-product algorithm (or the equivalent min-sum algorithm) on graphs with loops, for applications including the decoding of "turbo" codes. Except for two simple graphs (cycle codes and single-loop graphs) there has been little theoretical understanding of the max-product algorithm on graphs with loops. Here we prove a result on the fixed points of max-product on a graph with arbitrary topology and with arbitrary probability distributions (discrete- or continuous-valued nodes). We show that the assignment based on a fixed point is a "neighborhood maximum" of the posterior probability: the posterior probability of the max-product assignment is guaranteed to be greater than all other assignments in a particular large region around that assignment. The region includes all assignments that differ from the max-product assignment in any subset of nodes that form no more than a single loop in the graph. In some graphs, this neighborhood is exponentially large. We illustrate the analysis with examples.
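On a tree the max-product fixed point recovers the exact MAP assignment, which can be checked on a tiny chain; the potentials below are random illustrative values, not an example from the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
# A 3-node chain x1 - x2 - x3 of binary variables with random positive potentials.
phi = [rng.random(2) + 0.1 for _ in range(3)]        # unary potentials
psi = [rng.random((2, 2)) + 0.1 for _ in range(2)]   # pairwise potentials, edges (1,2),(2,3)

def joint(x):
    return (phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
            * psi[0][x[0], x[1]] * psi[1][x[1], x[2]])

# Brute-force MAP assignment over all 2^3 configurations.
map_x = max(itertools.product((0, 1), repeat=3), key=joint)

# Max-product on the chain: forward messages, then backtrack (Viterbi-style).
m12 = np.array([max(phi[0][a] * psi[0][a, b] for a in (0, 1)) for b in (0, 1)])
m23 = np.array([max(phi[1][b] * m12[b] * psi[1][b, c] for b in (0, 1)) for c in (0, 1)])
x3 = int(np.argmax(phi[2] * m23))
x2 = max((0, 1), key=lambda b: phi[1][b] * m12[b] * psi[1][b, x3])
x1 = max((0, 1), key=lambda a: phi[0][a] * psi[0][a, x2])

# On a tree (here, a chain) the max-product fixed point is the exact MAP.
assert (x1, x2, x3) == map_x
```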

16.
In this work, a basic resource allocation (RA) problem is considered, where a fixed capacity must be shared among a set of users. The RA task can be formulated as an optimization problem, with a set of simple constraints and an objective function to be minimized. A fundamental relation between the RA optimization problem and the notion of max-min fairness is established. A sufficient condition on the objective function is provided that ensures the optimal solution is the max-min fair allocation. Notably, some important objective functions like least squares and maximum entropy fall in this case. Finally, an application of max-min fairness for overload protection in 3G networks is considered.
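The max-min fair point can be computed by progressive filling, and its coincidence with the least-squares objective checked on a small instance; the capacity and demands below are made-up numbers:

```python
def max_min_fair(capacity, demands):
    """Progressive filling: repeatedly split the leftover capacity equally
    among unsatisfied users, capping each user at its demand."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if demands[i] - alloc[i] < 1e-12:
                active.discard(i)
    return alloc

alloc = max_min_fair(12, [2, 5, 10])
assert alloc == [2.0, 5.0, 5.0]

# The least-squares objective sum(x_i^2) picks the same point among feasible
# alternatives (same total, same per-user caps), illustrating the sufficiency result.
for alt in ([2, 4, 6], [1, 5, 6], [0, 2, 10]):
    assert sum(x * x for x in alloc) <= sum(x * x for x in alt)
```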

17.
Estimation of the large deviations probability p_n = P(S_n ≥ γn) via importance sampling is considered, where S_n is a sum of n i.i.d. random variables. It has been previously shown that within the nonparametric candidate family of all i.i.d. (or, more generally, Markov) distributions, the optimized exponentially twisted distribution is the unique asymptotically optimal sampling distribution. As n → ∞, the sampling cost required to stabilize the normalized variance grows with strictly positive exponential rate for any suboptimal sampling distribution, while this sampling cost for the optimal exponentially twisted distribution is only O(n^{1/2}). Here, it is established that the optimality is actually much stronger. As n → ∞, this solution simultaneously stabilizes all error moments of both the sample-mean and the sample-variance estimators with sampling cost O(n^{1/2}). In addition, it is shown that the embedded parametric family of exponentially twisted distributions has a certain uniform asymptotic stability property. The technique is stable even if the optimal twisting parameter(s) cannot be precisely determined.
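A sketch of an exponentially twisted importance sampler for a Bernoulli sum, with the tilt chosen so the twisted mean equals γ; the parameters are illustrative, and this toy omits the paper's Markov setting and stability analysis:

```python
import math
import numpy as np

n, p, gamma = 20, 0.3, 0.6                 # S_n = sum of n Bernoulli(p); event {S_n >= gamma*n}
k0 = int(gamma * n)                        # threshold: 12 successes

# Exact tail probability, for reference.
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

# Exponentially twisted sampling distribution: for a Bernoulli sum, tilting so
# the twisted mean sits at gamma means sampling Bernoulli(q) with q = gamma.
q = gamma
rng = np.random.default_rng(0)
m = 20000
X = rng.random((m, n)) < q                 # draws from the twisted distribution
S = X.sum(axis=1)
# Per-sample likelihood ratio prod (p/q)^X ((1-p)/(1-q))^(1-X), a function of S only.
lr = (p / q) ** S * ((1 - p) / (1 - q)) ** (n - S)
est = float(np.mean(lr * (S >= k0)))

assert abs(est - exact) / exact < 0.25     # the IS estimate tracks the exact tail
```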

18.
Arguably, the most important and defining feature of the JPEG2000 image compression standard is its R-D optimized code stream of multiple progressive layers. This code stream is an interleaving of many scalable code streams of different sample blocks. In this paper, we reexamine the R-D optimality of JPEG2000 scalable code streams under an expected multirate distortion measure (EMRD), which is defined to be the average distortion weighted by a probability distribution of operational rates in a given range, rather than for one or a few fixed rates. We prove that the JPEG2000 code stream constructed by embedded block coding of optimal truncation is almost optimal in the EMRD sense for a uniform rate distribution function, even if the individual scalable code streams have nonconvex operational R-D curves. We also develop algorithms to optimize the JPEG2000 code stream for exponential and Laplacian rate distribution functions while maintaining compatibility with the JPEG2000 standard. Both our analytical and experimental results lend strong support to JPEG2000 as a near-optimal scalable image codec in a fairly general setting.

19.
20.
A simple technique for linear phase approximation is illustrated. The approximation polynomial used is of the form Pn(s) = M(s) + kN(s), where M(s) and N(s) are even and odd polynomials with interlaced, and equally spaced, imaginary zeros. The influence of the constant k is investigated, and it is shown that a proper selection of this constant leads to a good approximation.
