Similar Documents
20 similar documents found (search time: 234 ms)
1.
In the literature on optimal regular volume sampling, the Body-Centered Cubic (BCC) lattice has been proven to be optimal for sampling spherically band-limited signals above the Nyquist limit. On the other hand, if the sampling frequency is below the Nyquist limit, the Face-Centered Cubic (FCC) lattice was demonstrated to be optimal in reducing the prealiasing effect. In this paper, we confirm that the FCC lattice is indeed optimal in this sense in a certain interval of the sampling frequency. By theoretically estimating the prealiasing error in a realistic range of the sampling frequency, we show that in other frequency intervals, the BCC lattice and even the traditional Cartesian Cubic (CC) lattice are expected to minimize the prealiasing. The BCC lattice is superior to the FCC lattice if the sampling frequency is not significantly below the Nyquist limit. Interestingly, if the original signal is drastically undersampled, the CC lattice is expected to provide the lowest prealiasing error. Additionally, we give a comprehensible clarification that the sampling efficiency of the FCC lattice is lower than that of the BCC lattice. Although this is a well-known fact, the exact percentage has been erroneously reported in the literature. Furthermore, for the sake of an unbiased comparison, we propose to rotate the Marschner-Lobb test signal such that an undue advantage is not given to either lattice.

2.
Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up processing of analytic queries and data-mining tasks, enhance query optimization, and facilitate information integration. The ability to bound the maximum size of a sample can be very convenient from a system-design point of view, because the task of memory management is simplified, especially when many samples are maintained simultaneously. In this paper, we study methods for incrementally maintaining a bounded-size uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions and deletions. For "stable" datasets whose size remains roughly constant over time, we provide a novel sampling scheme, called "random pairing" (RP), that maintains a bounded-size uniform sample by using newly inserted data items to compensate for previous deletions. The RP algorithm is the first extension of the 45-year-old reservoir sampling algorithm to handle deletions; RP reduces to the "passive" algorithm of Babcock et al. when the insertions and deletions correspond to a moving window over a data stream. Experiments show that, when dataset-size fluctuations over time are not too extreme, RP is the algorithm of choice with respect to speed and sample-size stability. For "growing" datasets, we consider algorithms for periodically resizing a bounded-size random sample upwards. We prove that any such algorithm cannot avoid accessing the base data, and provide a novel resizing algorithm that minimizes the time needed to increase the sample size. We also show how to merge uniform samples from disjoint datasets to obtain a uniform sample of the union of the datasets; the merged sample can be incrementally maintained. Our new RPMerge algorithm extends the HRMerge algorithm of Brown and Haas to effectively deal with deletions, thereby facilitating efficient parallel sampling.
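The random pairing scheme is straightforward to prototype. Below is a minimal Python sketch of the RP insert/delete logic as we read it from the abstract; the class interface and the counter names c1/c2 are our own, not taken from the paper.

```python
import random

class RandomPairingSample:
    """Bounded-size uniform sample under inserts/deletes (RP sketch)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.sample = set()
        self.n = 0        # current dataset size
        self.c1 = 0       # uncompensated deletions that were in the sample
        self.c2 = 0       # uncompensated deletions that were not

    def insert(self, item):
        self.n += 1
        d = self.c1 + self.c2
        if d == 0:
            # no deletions to compensate: classic reservoir-style insert
            if len(self.sample) < self.capacity:
                self.sample.add(item)
            elif random.random() < self.capacity / self.n:
                self.sample.remove(random.choice(tuple(self.sample)))
                self.sample.add(item)
        else:
            # pair the new arrival with a previous deletion
            if random.random() < self.c1 / d:
                self.sample.add(item)
                self.c1 -= 1
            else:
                self.c2 -= 1

    def delete(self, item):
        self.n -= 1
        if item in self.sample:
            self.sample.remove(item)
            self.c1 += 1
        else:
            self.c2 += 1
```

The key design point is visible in `insert`: a new item is admitted with probability c1/(c1+c2), exactly compensating for the sample slots lost to earlier deletions, so no base-data access is needed for stable datasets.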

3.
To address the low peak-power ceiling of the radiated signal and the short maximum measurement range of continuous-wave lidar in simultaneous range and velocity measurement, a measurement waveform based on Golomb pulse-sequence modulation is proposed, and its feasibility for simultaneously measuring target range and velocity in a road environment is studied. First, taking a quasi-continuous wave, namely pseudo-random (PN) code modulation, as an example, the low transmit peak power inherent to continuous-wave modulation is analyzed; on this basis the properties of Golomb sequences are discussed, and a method of modulating the transmitted signal with a Golomb sequence to raise the peak power of the transmitted pulses is proposed. Then, a spectral-analysis method for the Doppler signal under Golomb-sequence modulation and a delay-time localization method based on data accumulation are described, achieving simultaneous measurement of target velocity and range. Finally, simulation experiments over the Doppler frequency range produced by road targets verify the correctness of the method. The results show that even when the pulse sequence samples the Doppler signal at an average rate far below the Nyquist frequency, the Doppler frequency can still be obtained by the fast Fourier transform (FFT), so the peak power of a single pulse is greatly increased while the average transmitted power is held constant; in addition, by exploiting the unequal spacing of the Golomb sequence, the time of flight of the laser pulses can be localized through data accumulation, so target range is measured while velocity measurement is preserved.
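As a rough illustration of the sub-Nyquist Doppler recovery claimed above (our own toy model, not the authors' implementation): sample a Doppler tone only at the marks of a small Golomb ruler repeated over a uniform slot grid, and the FFT of the zero-filled record still peaks at the tone. The ruler {0,1,4,9,11}, the grid rate, and the Doppler frequency are all assumed values.

```python
import numpy as np

# order-5 Golomb ruler {0,1,4,9,11}, repeated once per frame
ruler = np.array([0, 1, 4, 9, 11])
frame_len = 16           # grid slots per frame (>= ruler span), assumed
n_frames = 64
grid = frame_len * n_frames

mask = np.zeros(grid)
for k in range(n_frames):
    mask[k * frame_len + ruler] = 1.0    # pulses only at ruler marks

slot_rate = 1e6                           # underlying grid rate (assumed)
t = np.arange(grid) / slot_rate
f_dop = 413e3                             # made-up Doppler frequency
echo = np.exp(2j * np.pi * f_dop * t)     # ideal Doppler return

sampled = mask * echo                     # only ruler slots are observed
spec = np.abs(np.fft.fft(sampled))
freqs = np.fft.fftfreq(grid, 1 / slot_rate)
peak = freqs[np.argmax(spec[:grid // 2])]
print(f"recovered Doppler ~ {peak/1e3:.1f} kHz (true 413.0 kHz)")
# the average sample rate is only 5/16 of the grid rate (~313 ksps),
# below the 413 kHz tone, yet the FFT peak lands on the Doppler bin
```

The flat autocorrelation of a Golomb ruler keeps the mask's spectral sidelobes well below its main line, which is why the Doppler peak stays dominant despite the sparse sampling.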

4.
To overcome the band-limited requirement of the Shannon sampling theorem, a method for sampling and reconstructing non-band-limited signals based on a finite rate of innovation is proposed. Using singular value decomposition (SVD), the method samples and reconstructs, from the viewpoint of a space transformation, a special class of finite-rate-of-innovation signals related to streams of Diracs. It forms a Hankel matrix from the discrete Fourier transform coefficients of the samples, obtains the locations of the Diracs by applying SVD to this matrix, then solves a Vandermonde system for the weights of the Diracs and reconstructs the original signal. Computer simulations show that, as long as the signal is sampled at a rate no lower than its rate of innovation, the algorithm reconstructs the original signal accurately and exhibits good robustness to noise.
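The Hankel/SVD step described here is an annihilating-filter (Prony-type) recovery. Below is a small numpy sketch of that pipeline under our own toy setup (K known, noiseless, unit period; all numbers made up): DFT coefficients of K Diracs, a Hankel matrix whose null vector gives the annihilating filter, polynomial roots for the locations, and a Vandermonde solve for the weights.

```python
import numpy as np

K = 3                                     # number of Diracs (assumed known)
t_true = np.array([0.12, 0.45, 0.83])     # locations in [0, 1)
a_true = np.array([1.0, -0.7, 2.3])       # weights

# DFT coefficients X[m] = sum_k a_k exp(-2*pi*i*m*t_k), m = 0..2K
m = np.arange(2 * K + 1)
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)

# Hankel matrix; its null vector is the annihilating filter
H = np.array([X[i:i + K + 1] for i in range(K)])    # K x (K+1)
_, _, Vh = np.linalg.svd(H)
g = Vh[-1].conj()            # right singular vector of the smallest s.v.

# roots of the filter polynomial encode the Dirac locations
u = np.roots(g[::-1])        # np.roots wants highest-degree coeff first
t_est = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1.0))

# Vandermonde solve for the weights
V = np.exp(-2j * np.pi * np.outer(m[:K], t_est))
a_est = np.linalg.lstsq(V, X[:K], rcond=None)[0].real

print("locations:", np.round(t_est, 3))   # ~ [0.12 0.45 0.83]
print("weights:  ", np.round(a_est, 2))   # ~ [ 1.0 -0.7  2.3]
```

Note the sample budget: 2K+1 DFT coefficients suffice for K Diracs, which is the "rate of innovation" rather than any bandwidth-derived rate.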

5.
In passive localization of non-cooperative wideband frequency-hopping signals, ultra-high-speed analog-to-digital converter (ADC) sampling and the separation of the various fixed-frequency signals have become bottlenecks. According to compressed sampling theory, a wideband signal that is sparse in some representation basis can undergo analog-to-information conversion (AIC) at a sampling rate far below the Nyquist threshold; however, when multiple fixed-frequency signals are present, the number of information samples needed for reconstruction rises sharply, which greatly complicates the AIC stage of a distributed sensor system. Based on subspace projection, an analog compressed sampling method for frequency-hopping signals that suppresses fixed-frequency interference within the target band is proposed; simulation results show that the method is feasible.
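One simple way to realize the subspace-projection idea (our toy rendering in discrete time, not the paper's analog front end) is to project each received block onto the orthogonal complement of the space spanned by sinusoids at the known fixed interferer frequencies before taking compressive measurements. All frequencies and rates below are made up and chosen to fall on FFT bins.

```python
import numpy as np

rng = np.random.default_rng(0)
N, fs = 512, 1.0e3                       # block length and sample rate (assumed)
t = np.arange(N) / fs

# hopping tone of interest plus two strong fixed-frequency interferers
f_hop, f_fix = 187.5, [125.0, 312.5]     # made-up frequencies (on FFT bins)
x = np.exp(2j * np.pi * f_hop * t)
for f in f_fix:
    x += 10 * np.exp(2j * np.pi * f * t)

# basis of the interference subspace: sinusoids at the known frequencies
A = np.exp(2j * np.pi * np.outer(t, f_fix))          # N x 2
# orthogonal projector onto the complement of span(A)
P = np.eye(N) - A @ np.linalg.solve(A.conj().T @ A, A.conj().T)

x_clean = P @ x                          # fixed-frequency lines suppressed
M = 64                                   # compressive measurements (M << N)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_clean                        # measurements now dominated by the hop

spec_before = np.abs(np.fft.fft(x))
spec_after = np.abs(np.fft.fft(x_clean))
hop_bin = int(f_hop * N / fs)
print("peak/hop ratio before:", round(spec_before.max() / spec_before[hop_bin], 2))
print("peak/hop ratio after: ", round(spec_after.max() / spec_after[hop_bin], 2))
```

Because the interferers lie exactly in span(A), the projector nulls them while leaving the (orthogonal) hopping tone intact, so far fewer measurements M are needed to reconstruct the hop alone.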

6.
We consider the problem of generating balanced training samples from an unlabeled data set with an unknown class distribution. While random sampling works well when the data are balanced, it is very ineffective for unbalanced data. Other approaches, such as active learning and cost-sensitive learning, are also suboptimal as they are classifier-dependent and require misclassification costs and labeled samples, respectively. We propose a new strategy for generating training samples, which is independent of the underlying class distribution of the data and the classifier that will be trained using the labeled data. Our methods are iterative and can be seen as variants of active learning, where we use semi-supervised clustering at each iteration to perform biased sampling from the clusters. We provide several strategies to estimate the underlying class distributions in the clusters and to increase the balancedness in the training samples. Experiments with both highly skewed and balanced data from the UCI repository and a private data set show that our algorithm produces much more balanced samples than random sampling or uncertainty sampling. Further, our sampling strategy is substantially more efficient than active learning methods. The experiments also validate that, with more balanced training data, classifiers trained with our samples outperform classifiers trained with random sampling or active learning.
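A minimal sketch of the cluster-then-bias idea, using plain k-means in place of the authors' semi-supervised clustering and a synthetic imbalanced set of our own: draw an equal share of the labeling budget from each cluster, so small clusters (which often track the minority class) are oversampled relative to uniform random sampling.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# imbalanced two-class toy set: 950 majority vs 50 minority points
X_maj = rng.normal(0.0, 1.0, size=(950, 2))
X_min = rng.normal(5.0, 0.5, size=(50, 2))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 950 + [1] * 50)        # hidden labels, used only to evaluate

budget = 60
# random sampling: expect ~5% minority in the training sample
rand_idx = rng.choice(len(X), size=budget, replace=False)

# cluster-biased sampling: equal share of the budget from each cluster
k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
biased_idx = np.concatenate([
    rng.choice(np.flatnonzero(labels == c), size=budget // k, replace=False)
    for c in range(k)
])

print("minority share, random :", y[rand_idx].mean())    # ~0.05
print("minority share, biased :", y[biased_idx].mean())  # ~0.5 here
```

The abstract's iterative variant refines this: each labeling round re-clusters with the labels gathered so far and re-estimates per-cluster class distributions before allocating the next batch.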

7.
In traditional compression coding, sampling follows the Nyquist theorem, which requires the sampling rate to be higher than twice the highest frequency of the original signal. To avoid the enormous computational load and wasted resources that this approach cannot escape, the recently proposed compressed sensing theory is applied to image compression coding, which can greatly reduce the sampling rate. This paper focuses on an image compression algorithm based on compressed sensing theory; simulation experiments demonstrate the feasibility of the algorithm.

8.
王冠皓  徐军 《计算机应用》2014,34(11):3304-3308
After injection of the contrast agent gadopentetate dimeglumine (Gd-DTPA), malignant tumor regions in breast magnetic resonance (MR) images show faster and stronger intensity changes than normal or benign regions, so dynamic contrast-enhanced MRI (DCE-MRI) has become an important tool for detecting and diagnosing malignant breast tumors. Fast acquisition of DCE-MR images, however, remains difficult. To acquire such images quickly and efficiently, a conjugate-gradient descent method combined with variable-density random sampling is proposed, based on the idea of group sparsity and compressed sensing (CS) theory. The method first acquires samples from partial k-space (Fourier coefficient) data of the images using variable-density random sampling, then extends the traditional l1-norm conjugate-gradient descent algorithm to the l2,1 norm, so that the improved algorithm can jointly reconstruct multiple DCE-MR images simultaneously. Experimental results show that at sampling rates below 40%, the improved joint reconstruction method reduces reconstruction time by about 30% compared with the multiple measurement vector (MMV) algorithm, and variable-density random sampling improves reconstruction accuracy by about 70% compared with uniform random sampling.
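The variable-density mask itself is easy to sketch: keep each k-space location with a probability that decays with distance from the center, where most image energy lives. The power-law density below is a common heuristic of our choosing, not the paper's exact density.

```python
import numpy as np

def variable_density_mask(shape, rate, power=3.0, rng=None):
    """Random k-space mask whose keep-probability decays from the center."""
    rng = rng or np.random.default_rng(0)
    ny, nx = shape
    ky = np.linspace(-1, 1, ny)[:, None]
    kx = np.linspace(-1, 1, nx)[None, :]
    r = np.sqrt(ky**2 + kx**2) / np.sqrt(2)        # normalized radius in [0,1]
    pdf = (1 - r) ** power                          # dense at the k-space center
    pdf *= rate * ny * nx / pdf.sum()               # scale toward the target rate
    return rng.random(shape) < np.clip(pdf, 0, 1)

mask = variable_density_mask((256, 256), rate=0.35)
print("sampling rate:", round(mask.mean(), 3))      # near 0.35 (clipping at the
                                                    # center pulls it slightly below)

# usage: undersampled k-space = mask * FFT2(image); a joint l2,1 reconstruction
# of several DCE frames would run the CG iteration over the stacked frames
```

Concentrating samples at low frequencies is what gives the reported accuracy gain over uniform random masks at the same sampling rate.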

9.
符永铨  王意洁  周婧 《软件学报》2009,20(3):630-643
To address scalable, fast, unbiased sampling in unstructured P2P systems, a sampling method called SMARW, based on adaptive random walks over multiple peers, is proposed. In this method, a group of temporary peers is selected via proxy random walks to carry out the sampling process, producing a tunable number of sample nodes per round and thus raising the sampling speed; the sample nodes produced in each round then serve as the temporary peers for the next round. This simple scheme keeps the system's load balance near-optimal. Meanwhile, SMARW uses an adaptive distributed random-walk correction procedure to speed up the convergence of the sampling process. Theoretical analysis and simulations show that SMARW achieves strong unbiased-sampling capability and near-optimal system load balance.
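SMARW's correction step pursues the same goal as the classic Metropolis-Hastings random walk often used for unbiased peer sampling. As a point of reference (not the SMARW algorithm itself), here is a Metropolis-Hastings walk that turns a degree-biased walk on a toy overlay graph into a uniform sampler:

```python
import random
from collections import Counter

def mh_walk_sample(adj, start, steps, rng):
    """Metropolis-Hastings random walk with a uniform stationary distribution."""
    v = start
    for _ in range(steps):
        w = rng.choice(adj[v])
        # accept with min(1, deg(v)/deg(w)) so high-degree peers are not favored
        if rng.random() < min(1.0, len(adj[v]) / len(adj[w])):
            v = w
    return v

# toy overlay: a star plus a triangle -- a plain random walk oversamples the hub
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5, 6],
       5: [4, 6], 6: [4, 5]}

rng = random.Random(0)
counts = Counter(mh_walk_sample(adj, start=0, steps=200, rng=rng)
                 for _ in range(7000))
print({v: round(c / 7000, 2) for v, c in sorted(counts.items())})
# each of the 7 nodes appears with frequency ~1/7 ~ 0.14
```

SMARW differs in that it runs many such walks in parallel from the previous round's samples and adapts the correction on the fly, which is where its speed and load-balance claims come from.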

10.
This paper proposes an improved random sampling algorithm and analyzes its time and space complexity. The results show that the overall performance of the improved algorithm is better than that of existing random sampling algorithms. Finally, an application of the improved algorithm to systematic (equal-interval) sampling is given.

11.
Compressed sensing and its applications   Cited by 1 (self-citations: 0; others: 1)
Traditional signal sampling must obey the Shannon sampling theorem, and the resulting mass of data wastes storage space. Compressed sensing (CS) proposes a new sampling theory that can sample signals at rates far below the Nyquist rate. Its basic premise is that if a signal is sparse, it can be projected onto a random matrix incoherent with the transform basis to obtain far fewer measurements than the signal length, and the signal can then be reconstructed exactly by solving an optimization problem. This paper details the basic theory of CS, the two prerequisites for its applicability (sparsity and incoherence), the design requirements for the measurement matrix, and the restricted isometry property (RIP) criterion for reconstruction, and presents applications and simulations of CS. Simulation results show that when the number of measurements exceeds K×log(N/K), an N-dimensional signal can be stably reconstructed.
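A compact numerical check of the K·log(N/K) rule of thumb quoted above, on a toy instance of our own, using a Gaussian measurement matrix and orthogonal matching pursuit rather than any specific algorithm from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N, K = 256, 8
M = int(2 * K * np.log(N / K))          # a small multiple of K*log(N/K)

# K-sparse signal and Gaussian measurement matrix
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# orthogonal matching pursuit: greedily pick the best-correlated column
residual, idx = y.copy(), []
for _ in range(K):
    idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
    sub = Phi[:, idx]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    residual = y - sub @ coef

x_hat = np.zeros(N)
x_hat[idx] = coef
print(f"M = {M} measurements for N = {N}, K = {K}")
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With M ≈ 55 measurements of a length-256 signal with 8 nonzeros, recovery is essentially exact, illustrating the sub-Nyquist measurement budget the abstract describes.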

12.
A survey of compressive sensing   Cited by 82 (self-citations: 13; others: 69)
李树涛  魏丹 《自动化学报》2009,35(11):1369-1377
In traditional sampling, the sampling frequency must be no lower than twice the highest frequency of the signal to avoid distortion. For digital images and video, however, acquisition according to the Shannon theorem produces massive amounts of sampled data, greatly increasing the cost of storage and transmission. In recent years, the emerging theory of compressive sensing has brought a revolutionary breakthrough to data acquisition and has attracted wide attention from researchers. Compressive sensing uses non-adaptive linear projections to preserve the original structure of a signal and can reconstruct the original signal exactly by solving a numerical optimization problem. It samples far below the Nyquist rate and holds broad promise for compressive imaging systems, analog-to-information conversion, biosensing, and other fields. This paper introduces the basic theory of compressive sensing and related applications, and discusses its research prospects.

13.
Research on digital images with a regular hexagonal lattice structure   Cited by 3 (self-citations: 0; others: 3)
Traditional sampling of continuous image signals uses a rectangular lattice. When the frequency band of a continuous image signal is confined to a circular region, a regular hexagonal lattice reduces the required sampling density by 13.4% relative to a rectangular lattice. Current image input/output devices, however, support only rectangular-lattice digital images. This paper therefore first discusses the sampling matrix (spatial sampling intervals) of a hexagonal lattice satisfying the Nyquist sampling theorem, and the conversion between rectangular-lattice and hexagonal-lattice digital images. On the other hand, because a hexagonal-lattice digital image is a non-separable signal, image processing becomes inconvenient in many respects; an image decomposition method based on a separable filter bank is therefore proposed, which reduces computational complexity and yields a multiscale decomposition structure similar to that obtained from the wavelet transform of rectangular images. Experimental results of image reconstruction are given.
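The 13.4% figure follows directly from the cell areas of the two reciprocal lattices that tile a circular spectral support without overlap. A quick check (W denotes the band radius; our notation):

```python
import numpy as np

W = 1.0   # radius of the circular spectral support (arbitrary units)

# Spectrum replicas sit on the reciprocal lattice; avoiding aliasing requires
# nearest replicas to be 2W apart.  The sampling density (samples per unit
# area in space) equals the cell area of the reciprocal lattice.
d = 2 * W

dens_rect = d * d                    # square reciprocal lattice: cell area d^2
dens_hex = np.sqrt(3) / 2 * d * d    # hexagonal lattice, same nearest distance

print("hex/rect density ratio:", round(dens_hex / dens_rect, 4))   # ~0.866
print("density reduction:     ", round(1 - dens_hex / dens_rect, 4))  # ~0.134
```

The ratio sqrt(3)/2 ≈ 0.866 is exactly the 13.4% saving quoted in the abstract: hexagonal packing of the circular spectral replicas is tighter than square packing.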

14.
In recent years, the deep web has become extremely popular. Like any other data source, data mining on the deep web can produce important insights or summaries of results. However, data mining on the deep web is challenging because the databases cannot be accessed directly, and therefore, data mining must be performed by sampling the datasets. The samples, in turn, can only be obtained by querying deep web databases with specific inputs. In this paper, we target two related data mining problems, association mining and differential rule mining. These are proposed to extract high-level summaries of the differences in data provided by different deep web data sources in the same domain. We develop stratified sampling methods to perform these mining tasks on a deep web source. Our contributions include a novel greedy stratification approach, which recursively processes the query space of a deep web data source, and considers both the estimation error and the sampling costs. We have also developed an optimized sample allocation method that integrates estimation error and sampling costs. Our experimental results show that our algorithms effectively and consistently reduce sampling costs, compared with a stratified sampling method that only considers estimation error. In addition, compared with simple random sampling, our algorithm has higher sampling accuracy and lower sampling costs.
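The cost-integrated allocation step has a classical closed form worth recording: under a total-cost budget, the Neyman-type optimum samples stratum h in proportion to N_h·S_h/sqrt(c_h). The sketch below implements that textbook formula with made-up strata, not necessarily the paper's exact objective.

```python
import numpy as np

def cost_aware_allocation(sizes, stds, costs, budget):
    """Allocate samples to strata ~ N_h*S_h/sqrt(c_h) under a total-cost budget."""
    sizes, stds, costs = map(np.asarray, (sizes, stds, costs))
    weight = sizes * stds / np.sqrt(costs)        # n_h proportional to this
    n = budget * weight / (weight * costs).sum()  # scale so sum(c_h * n_h) = budget
    return np.maximum(1, np.floor(n).astype(int))

# three strata of a query space: population sizes, response variability,
# and per-query sampling cost (all values made up)
sizes = [5000, 3000, 500]
stds = [1.0, 4.0, 9.0]
costs = [1.0, 1.0, 4.0]
print(cost_aware_allocation(sizes, stds, costs, budget=400))
# -> roughly [76, 184, 34]: the small, noisy, expensive stratum still gets
#    a nontrivial share because its variance term dominates its cost
```

The greedy stratification in the paper then decides *where* the strata boundaries fall in the query space; the allocation above only decides how a fixed budget is split across given strata.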

15.
Microsystem Technologies - Compressed sensing (CS) is the process of signal reconstruction at a rate far below the Nyquist sampling rate. Sometimes, CS measurements need transmission over radio...

16.
In order to select a sample in a finite population of N units with given inclusion probabilities, it is possible to define a sampling design on at most N samples that have a positive probability of being selected. Designs defined on minimal sets of samples are called minimum support designs. It is shown that, for any vector of inclusion probabilities, systematic sampling always provides a minimum support design. This property makes it possible to extensively compute the sampling design and the joint inclusion probabilities. Random systematic sampling can be viewed as the random choice of a minimum support design. However, even if the population is randomly sorted, a simple example shows that some joint inclusion probabilities can be equal to zero. Another way of randomly selecting a minimum support design is proposed, in such a way that all the samples have a positive probability of being selected, and all the joint inclusion probabilities are positive.
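Systematic sampling with unequal inclusion probabilities, the construction behind the minimum-support result above, takes only a few lines: cumulate the inclusion probabilities and sweep a unit-spaced grid from a random start. A small sketch with made-up inclusion probabilities:

```python
import numpy as np

def systematic_sample(pi, rng=None):
    """Systematic sampling with inclusion probabilities pi (sum = sample size)."""
    rng = rng or np.random.default_rng()
    pi = np.asarray(pi, dtype=float)
    n = int(round(pi.sum()))
    cum = np.concatenate([[0.0], np.cumsum(pi)])
    points = rng.uniform(0.0, 1.0) + np.arange(n)   # one point per unit interval
    # unit k is selected iff some point falls in [cum[k], cum[k+1])
    return np.searchsorted(cum, points, side="right") - 1

pi = np.array([0.2, 0.9, 0.5, 0.8, 0.6])   # sums to 3 -> samples of size 3
rng = np.random.default_rng(7)
counts = np.zeros(5)
for _ in range(20000):
    counts[systematic_sample(pi, rng)] += 1
print(np.round(counts / 20000, 2))          # ~ pi: [0.2 0.9 0.5 0.8 0.6]
```

Because a single uniform draw determines the whole sample, at most N distinct samples can ever occur, which is exactly the minimum-support property the abstract discusses.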

17.
A nonlinear reconstruction method for image vector quantization: the gradient-median operator   Cited by 1 (self-citations: 0; others: 1)
To address the high coding complexity and blocking artifacts of image vector quantization, this paper constructs training code vectors based on sub-Nyquist sampling and proposes a new image reconstruction method, the gradient-median operator, which not only lowers computational complexity but also improves image quality during interpolation. Experimental results show that at a compression ratio of 0.3125 bit/pixel, a high signal-to-noise ratio (SNR) and good visual quality are obtained.

18.
Probability analysis of hitting the target in a partitioned sampling model   Cited by 1 (self-citations: 0; others: 1)
杨观赐  李少波  钟勇 《计算机应用》2012,32(8):2209-2211
To increase the probability that population-based search techniques capture a specific target in the decision space within limited time, a random sampling model without partitioning and a random sampling model that partitions the space into multiple subregions (the partitioned model) are built on the classical probability model. The probabilities that multiple independent random draws hit the specific target at least once are analyzed and compared for the two models, and it is proved that when the number of specific targets in the population is 1 or 2, the probability of hitting a target under the partitioned model is always greater than under the unpartitioned model.
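Under one natural reading of the two models (m draws with replacement from the whole population of size N, versus one draw from each of m equal subregions; this formalization is ours, and the paper's may differ), the single-target claim can be checked directly: the unpartitioned hit probability 1-(1-1/N)^m is always below the partitioned one, m/N, by Bernoulli's inequality.

```python
# quick numeric check of the partitioned-vs-unpartitioned claim, one target
N, m = 1000, 50

p_unpartitioned = 1 - (1 - 1 / N) ** m   # m independent uniform draws
p_partitioned = m / N                     # the target's subregion is drawn once
print(p_unpartitioned, "<", p_partitioned)   # 0.04879... < 0.05
```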

19.
Compressed sensing theory breaks through the limit of the Nyquist sampling frequency. Using this theory, a new image compressed-sampling scheme is studied and implemented. The scheme combines the wavelet transform with thresholding to sparsify the image, generates a random measurement matrix from standard uniformly distributed pseudo-random numbers and a two-dimensional centered Fourier transform, applies weighted sampling to the high-frequency subbands of the wavelet transform, and reconstructs the sampled data with the stagewise orthogonal matching pursuit (StOMP) algorithm. Simulation results show that the scheme reconstructs images well.

20.
Time-interleaved sampling is an ideal way to raise the sampling rate, but the resulting high-speed output data poses storage difficulties. This paper presents a high-speed data acquisition system based on time-interleaved sampling: two A/D converters, each with a sampling rate of 500 Msps, are used to achieve a 1 Gsps sampling rate, and an FPGA converts and buffers the output data of the A/D converters. The paper focuses on the data conversion and data storage of the acquisition system, and presents simulation waveforms.
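The core interleaving operation the FPGA performs can be modeled behaviorally in a few lines (a software sketch with made-up tone and rates, not FPGA code): two 500 Msps channels, one clocked half a sample period later, merge into a single 1 Gsps stream.

```python
import numpy as np

fs_adc = 500e6                 # per-channel rate, 500 Msps
n = 1024                       # samples per channel
f_in = 37e6                    # made-up input tone

# channel B samples half an ADC period after channel A
tA = np.arange(n) / fs_adc
tB = tA + 0.5 / fs_adc
chA = np.sin(2 * np.pi * f_in * tA)
chB = np.sin(2 * np.pi * f_in * tB)

# interleave A,B,A,B,... into one 1 Gsps record (what the FPGA buffer does)
merged = np.empty(2 * n)
merged[0::2] = chA
merged[1::2] = chB

ref = np.sin(2 * np.pi * f_in * np.arange(2 * n) / (2 * fs_adc))
print("max interleaving error:", np.max(np.abs(merged - ref)))  # ~0 (float rounding)
```

In hardware, the hard part this model hides is holding the half-period clock skew and channel gain/offset matching; errors there appear as spurious spectral lines in the merged record.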
