Similar Documents
1.
Super-resolution reconstruction improves distorted and degraded images by exploiting the complementary information between frames of an image sequence; its key step is the accurate recovery of inter-frame motion. This paper discusses a high-precision motion estimation method based on least-squares image matching: corresponding points located to high precision by least-squares matching yield the motion of each point between frames, enabling accurate sub-pixel motion estimation for the low-resolution sequence, on which the super-resolution reconstruction is then based. Validation on a simulated low-resolution image sequence shows that least-squares motion estimation achieves high accuracy and that images reconstructed with iterative back-projection have good visual quality; the method is particularly suited to super-resolution reconstruction of sequences containing translational motion.
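The least-squares motion estimation this abstract relies on can be illustrated with a gradient-based global translation estimator (a minimal hypothetical sketch, not the paper's implementation; all names are ours):

```python
import numpy as np

def estimate_shift_ls(ref, moved):
    """Least-squares estimate of a global sub-pixel translation.

    Linearizes moved(x) ~= ref(x) - d . grad(ref) and solves the
    2x2 normal equations for d = (dx, dy).
    """
    gy, gx = np.gradient(ref)                  # image gradients
    diff = moved - ref                         # inter-frame difference
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * diff), np.sum(gy * diff)])
    return np.linalg.solve(A, b)               # (dx, dy) in pixels

# synthetic pair with a known 0.3-pixel shift along x
yy, xx = np.mgrid[0:64, 0:64].astype(float)
ref = np.sin(0.2 * xx) + np.cos(0.15 * yy)
moved = np.sin(0.2 * (xx - 0.3)) + np.cos(0.15 * yy)
dx, dy = estimate_shift_ls(ref, moved)
```

On this smooth synthetic pair the estimate recovers the 0.3-pixel shift to within a few hundredths of a pixel.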

2.
To address the long runtime and low number of correct matches of the BRISK algorithm on UAV images of karst mountain areas, this paper proposes a matching algorithm combining the BRISK detector with the LATCH descriptor: feature points are detected with BRISK and described with LATCH, coarse matching uses the FLANN algorithm combined with a minimum-distance criterion, and RANSAC then refines the result by rejecting the incorrect point pairs from the coarse matching. Experiments show the algorithm yields more than twice as many total and correct matches as SIFT and AKAZE at 7%-80% of their average per-point runtime; compared with BRISK, correct matches increase by over 30% and average per-point runtime drops by over 50%, at the cost of a 5% reduction in total matches.
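The RANSAC refinement step can be sketched for the simplest case of a pure-translation model (a hedged illustration with invented data; the paper works on BRISK/LATCH matches and a full image transform):

```python
import numpy as np

def ransac_translation(src, dst, tol=1.0, iters=200, seed=0):
    """Keep only matches consistent with one dominant translation.

    Repeatedly hypothesize a translation from a single random match,
    count matches that agree within `tol` pixels, and refine the
    translation over the largest consensus set.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - src - t, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)   # refine on consensus set
    return t, best

# 50 correct matches shifted by (5, -3) plus 10 gross outliers
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (60, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.1, (60, 2))
dst[50:] = rng.uniform(0, 100, (10, 2))        # wrong matches
t, inliers = ransac_translation(src, dst)
```

The consensus set recovers essentially all correct matches while the gross outliers are rejected.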

3.
Because the signal-to-noise ratio (SNR) of SAR images is difficult to compute exactly, this paper proposes a filter-quality evaluation method based on an approximate SNR iteratively corrected by Newton interpolation: additional data points are generated from known values by Newton interpolation, and Newton's iteration is then used to fit a nonlinear function that corrects the approximate SNR. Using simulated and real images as data sources, a visually good filtering result was taken as an approximately noise-free version of the real image; the simulated and noise-free images were artificially degraded and processed with five filters, and the filtering results were evaluated with the corrected approximate SNR, the traditional peak signal-to-noise ratio, the equivalent number of looks, and the edge preservation index. The results show that the corrected approximate SNR can serve as a quality metric for SAR image filtering without requiring the original noise-free image, and is feasible for evaluating the filtering quality of real SAR images.
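The Newton interpolation used to densify known values can be sketched with the standard divided-difference construction (a generic implementation, not the authors' code):

```python
import numpy as np

def newton_coefficients(x, y):
    """Divided-difference coefficients of the Newton form, computed in place."""
    c = np.array(y, dtype=float)
    n = len(c)
    for j in range(1, n):
        c[j:] = (c[j:] - c[j - 1:n - 1]) / (x[j:] - x[:n - j])
    return c

def newton_eval(c, x, t):
    """Evaluate the Newton-form polynomial at t by Horner's scheme."""
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = result * (t - x[k]) + c[k]
    return result

# densify between known samples of a smooth quantity
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x**2 + 1.0                 # pretend these are the known values
v = newton_eval(newton_coefficients(x, y), x, 1.5)
```

Because the nodes sample a quadratic, the cubic Newton interpolant reproduces it exactly, so v equals 5.5.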

4.
Remote sensing extraction of oasis vegetation cover: a case study of the Dunhuang Oasis (cited: 1, self: 0, others: 1)
Zhang Hao, Qu Jianjun, Zhang Kecun. 《中国沙漠》(Journal of Desert Research), 2015, 35(2): 493-498
Taking the Dunhuang Oasis as the study area, vegetation cover information was extracted from Landsat TM imagery by two methods: the normalized difference vegetation index (NDVI) and spectral mixture analysis. For the NDVI-based estimate, a pixel dichotomy model built on NDVI was chosen; in the mixture analysis, the image band reflectances were normalized and a minimum noise fraction (MNF) transform applied, and three endmember types were defined: vegetation, impervious surface/soil, and water/shadow. Finally, the accuracy of the two methods was verified and compared against high-resolution imagery. The results show that spectral mixture analysis extracts vegetation cover for the Dunhuang area more accurately, with a linear correlation coefficient of 0.8915 and a root-mean-square error of 0.0882, and that its results better match ground conditions, providing a scientific basis for vegetation monitoring and ecological protection in Dunhuang.
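The NDVI pixel dichotomy model mentioned above has a compact closed form; a minimal sketch (the soil and full-vegetation NDVI endpoints below are illustrative values, not those of the paper):

```python
import numpy as np

def vegetation_cover(ndvi, ndvi_soil, ndvi_veg):
    """Pixel dichotomy model: each pixel is a linear mix of bare soil
    and full vegetation, so cover fc = (NDVI - NDVI_soil) /
    (NDVI_veg - NDVI_soil), clipped to the physical range [0, 1]."""
    fc = (np.asarray(ndvi, dtype=float) - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fc, 0.0, 1.0)

fc = vegetation_cover([0.05, 0.45, 0.90], ndvi_soil=0.05, ndvi_veg=0.85)
```

A pixel at the soil endpoint gets cover 0, one midway gets 0.5, and values beyond the vegetation endpoint are clipped to 1.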

5.
Texture classification of remote sensing images using a wavelet decomposition with improved resolution (cited: 8, self: 0, others: 8)
A wavelet decomposition with improved resolution is first proposed, and texture classification of images based on features of this decomposition is then studied. Classification experiments on 25 classes of landform remote sensing images, under two decomposition schemes, two filter lengths, and three resolutions, achieved a high classification accuracy.

6.
This paper proposes an image fusion method based on difference weighting and the fast Fourier transform, DWF, and applies it to fuse the panchromatic and multispectral images of the "Beijing-1" small satellite. Following the satellite's imaging process, the Fourier transform is used to obtain the high- and low-frequency content of the panchromatic image and the low-frequency content of the intensity component of the multispectral image in IHS color space; a difference-weighted decision computed from these three components yields the spatial information of the fused image. Experiments comparing PCA, GS, DWF, and wavelet-based IHS fusion show that DWF outperforms the other methods and is well suited to fusing "Beijing-1" panchromatic and multispectral imagery.

7.
Remote sensing imagery can greatly enhance the rendering of a DEM, but various factors usually require that it first be preprocessed; the traditional approach is rectification through manually selected control points. This paper proposes a new automatic matching algorithm: corresponding feature lines are extracted from the DEM and the image, corresponding feature points are obtained with the Douglas-Peucker algorithm, a Delaunay triangulation is built over the DEM feature points, and texture mapping is completed on the resulting TIN. Experiments show the algorithm produces good rendering results and effectively corrects the erroneous display caused by distorted images.
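The Douglas-Peucker step can be sketched as the classic recursive simplification (a generic implementation; the paper applies it to the extracted feature lines):

```python
import numpy as np

def douglas_peucker(points, eps):
    """Simplify a polyline: keep the endpoints and recurse on the
    farthest vertex whenever it lies more than eps from the chord."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy)
    if norm == 0:                    # degenerate chord: use point distances
        dists = np.linalg.norm(pts - start, axis=1)
    else:                            # perpendicular distance to the chord
        dists = np.abs(dx * (pts[:, 1] - start[1])
                       - dy * (pts[:, 0] - start[0])) / norm
    i = int(np.argmax(dists))
    if dists[i] > eps:
        left = douglas_peucker(pts[:i + 1], eps)
        right = douglas_peucker(pts[i:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

line = [(0, 0), (1, 1.5), (2, 0), (3, 0.05), (4, 0)]
simplified = douglas_peucker(line, eps=0.5)
```

With eps = 0.5 the near-collinear vertex (3, 0.05) is dropped while the two significant bends are kept, leaving four vertices.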

8.
Chen Yanjun, Zhang Yuhong. 《地理科学》(Scientia Geographica Sinica), 2021, 41(7): 1276-1284
To recover the pixels lost after the 2003 failure of the Landsat-7 scan line corrector, this paper proposes an algorithm based on regression analysis and cubic spline interpolation that needs no reference image and uses an unsupervised classification result as its decision criterion. The image is first preprocessed, including stripe localization, unsupervised classification, and local mean gray-level computation. Regression analysis then estimates, along three directions at each stripe pixel, the gray-level trend of the pixels just outside the stripe; under the classification criterion, the filling direction for the current pixel is chosen by comparing the pixel classes on the two sides of the stripe in each direction. Stripe boundary points and interior points are then filled by separate procedures: for interior pixels, cubic spline interpolation along the filling direction provides the gray value. Finally, adaptive filtering is applied to the stripes. Experiments show that, without reference to any other remote sensing imagery or data, the method differs little from the common reference-image methods: its root-mean-square error of 17.5904 is close to the 17.4006 of the best-performing weighted linear regression method.
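The core gap-filling idea can be sketched in one dimension (a simplified stand-in: a cubic polynomial fit to valid neighbours replaces the paper's direction-aware cubic spline, and all names are ours):

```python
import numpy as np

def fill_stripe_1d(row, gap, k=4):
    """Fill a run of missing pixels from k valid neighbours on each
    side with a cubic fit, standing in for cubic-spline interpolation
    along the chosen filling direction."""
    gap = np.asarray(gap)
    left = np.arange(gap[0] - k, gap[0])
    right = np.arange(gap[-1] + 1, gap[-1] + 1 + k)
    support = np.r_[left, right]                 # valid pixels only
    coeffs = np.polyfit(support, row[support], deg=3)
    out = row.copy()
    out[gap] = np.polyval(coeffs, gap)
    return out

x = np.arange(30, dtype=float)
row = 0.5 * x + 0.02 * x**2          # smooth radiometric trend
corrupted = row.copy()
corrupted[12:15] = np.nan            # simulated SLC-off stripe
filled = fill_stripe_1d(corrupted, np.arange(12, 15))
```

Because the underlying trend here is quadratic, the cubic fit restores the missing pixels essentially exactly.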

9.
Referring to the classification system of 《中国植被》 (Vegetation of China) and field survey results, a vegetation classification system suited to the farming-pastoral ecotone of northwest China was established. Based on multiple Landsat scenes covering the study area, vegetation information was extracted following a "stratified classification, stepwise validation" strategy. First, fully constrained least-squares spectral unmixing divided the study area into vegetated and non-vegetated regions. Within the vegetated region, a CART decision tree built on spectral, textural, and topographic features separated seven major vegetation group types, including forest, shrubland, and steppe. Within each group type, an NDVI difference-ratio index (NDVI_DR) built on seasonal NDVI differences between vegetation types split forest and shrubland into evergreen and deciduous types, and the temperature vegetation dryness index (TVDI) further divided steppe into desert steppe, typical steppe, and meadow steppe, yielding the spatial distribution of each vegetation type. Validation gave an overall accuracy of 79.51% and a kappa coefficient of 0.773. The method makes full use of the spectral and textural information of the imagery, supplemented by topographic information, and the results show that stratified classification combined with multiple indices can extract vegetation information effectively and accurately for an ecotone that spans image scenes and is characterized by a complex mosaic structure.
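The TVDI used in the final stratum has a simple closed form once the dry and wet edges of the Ts-NDVI scatter have been fit (a sketch; the edge coefficients below are invented for illustration):

```python
import numpy as np

def tvdi(ts, ndvi, dry, wet):
    """Temperature Vegetation Dryness Index.

    dry and wet are (intercept, slope) of the linear dry/wet edges
    fitted in Ts-NDVI space; TVDI = (Ts - Ts_wet) / (Ts_dry - Ts_wet),
    0 at the wet edge and 1 at the dry edge.
    """
    ts_dry = dry[0] + dry[1] * ndvi
    ts_wet = wet[0] + wet[1] * ndvi
    return (ts - ts_wet) / (ts_dry - ts_wet)

# hypothetical edges: dry edge Ts = 320 - 20*NDVI, wet edge Ts = 290
v = tvdi(ts=np.array([290.0, 305.0]), ndvi=np.array([0.5, 0.5]),
         dry=(320.0, -20.0), wet=(290.0, 0.0))
```

A pixel on the wet edge gets TVDI 0; a pixel three quarters of the way to the dry edge gets 0.75.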

10.
Topographic radiometric correction of mountain-area imagery based on moment matching (cited: 10, self: 0, others: 10)
Existing models such as the C correction rarely achieve satisfactory topographic radiometric correction of mountain-area remote sensing imagery. This paper introduces a moment-matching algorithm: slope, aspect, and other terrain information are computed from DEM data, and the image is corrected using the data at a chosen reference slope and aspect. Experiments on a SPOT5 image of Fangshan District, Beijing show that the method removes terrain shadow to a large extent, better reveals detail in shadowed areas, and preserves spectral fidelity well; originally blurred image regions become clear enough for effective identification of ground objects.
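The moment matching at the heart of the method adjusts the first two statistical moments of the pixels being corrected to those of a reference set (a minimal sketch; the choice of reference slope/aspect pixels is the paper's, the names and data are ours):

```python
import numpy as np

def moment_match(target, reference):
    """Rescale target pixels so their mean and standard deviation
    match those of the reference pixels."""
    t_mean, t_std = target.mean(), target.std()
    r_mean, r_std = reference.mean(), reference.std()
    return (target - t_mean) * (r_std / t_std) + r_mean

rng = np.random.default_rng(0)
shaded = rng.normal(40.0, 5.0, 1000)      # dark, shadowed-slope pixels
sunlit = rng.normal(120.0, 20.0, 1000)    # reference illumination pixels
corrected = moment_match(shaded, sunlit)
```

By construction the corrected pixels take on exactly the reference mean and standard deviation while preserving their relative detail.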

11.
E Dongchen, Shen Qiang. 《极地研究》(Chinese Journal of Polar Research), 2004, 15(2): 108-117
This paper briefly reviews the cause of the striping in ASTER satellite data and then develops a tapered-window (Chebyshev and Kaiser) finite impulse response (FIR) filter and a constrained least squares FIR filter to remove it. Both filters minimize the stripes in the visible-band data while minimizing any distortion in the filtered data. Finally, the results obtained with these new filtering methods are quantitatively compared with those produced by other destriping methods.

12.
FIR filter effects and nucleation phases (cited: 1, self: 0, others: 1)
The symmetric impulse response of the linear-phase Finite Impulse Response (FIR) filters most commonly used in modern seismic recording systems produces precursory signals to impulsive arrivals. These acausal filter-generated artefacts may result in misinterpretations of various onset properties. Prior to any onset interpretation, these effects have to be removed from the seismic record. This can be achieved without loss of bandwidth by post-filtration of the digital seismograms if the filter coefficients and the decimation ratios are known. We have analysed numerous signals from different instruments and sampling rates for precursory phases and found that, in contrast to commonly held beliefs, FIR-filter-related precursory signals are not always easy to recognize visually from their waveform signature. Furthermore, they can exhibit surprisingly similar properties to those reported for nucleation phases, although the majority of nucleation phases reported in the past have been obtained on instruments with a causal response. We demonstrate examples of filter-related precursory signals for events spanning nine orders of magnitude in seismic moment, from 10^10 N m to 10^19 N m. Surprisingly, the lower bound of the artefact durations as a function of seismic moment scales close to the cube root of the seismic moment. We interpret this as being caused by the fact that above a certain seismic moment, the attenuated source signal acts as a causal lowpass filter of a smaller bandwidth than the FIR filter. Assuming an ω^-2 source model, constant stress drop and an empirical relationship between the maximum artefact duration and the cut-off frequency of the FIR filter, the artefact durations are expected to scale proportional to the 1/2.5 power of the seismic moment, in comparison to 1/3 as proposed for nucleation phases.
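The acausal artefact the paper describes is easy to reproduce: a zero-phase (symmetric) FIR filter smears energy ahead of an impulsive onset, while a causal application of the same coefficients does not (a self-contained numpy illustration, not the authors' processing chain):

```python
import numpy as np

# impulsive onset at sample 50
x = np.zeros(200)
x[50] = 1.0

b = np.hanning(21)
b /= b.sum()                          # symmetric (linear-phase) lowpass FIR

# zero-phase application centers the kernel: acausal, no group delay
zero_phase = np.convolve(x, b, mode="same")
# causal application: y[n] depends only on past and present input
causal = np.convolve(x, b, mode="full")[:len(x)]

precursor = np.abs(zero_phase[:50]).max()   # energy BEFORE the onset
```

The zero-phase output contains a clear precursory signal up to 10 samples before the onset, whereas the causal output is identically zero there.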

13.
Changes in the intra-annual distribution of runoff in the Heihe River basin, 1957-2008 (cited: 4, self: 0, others: 4)
Based on measured monthly runoff at the key gauging stations of the Heihe River basin, Yingluoxia and Zhengyixia, the intra-annual distribution of runoff was analyzed using indices such as the non-uniformity coefficient, the degree and period of concentration, and the range of variation; the trends of monthly runoff were diagnosed with a cumulative filter and the Mann-Kendall rank correlation test. The results show that: (1) at Yingluoxia station, the intra-annual runoff distribution in every decade shows a clear single-peak pattern, with runoff reaching its maximum in July...
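The degree-and-period-of-concentration index used above treats each month's runoff as a vector on the annual circle (a generic sketch of the standard vector-composition formula, with synthetic runoff values):

```python
import numpy as np

def concentration(monthly_runoff):
    """Vector-composition concentration degree (0..1) and period (degrees).

    Month i contributes a vector of length R_i at angle 2*pi*i/12; the
    resultant's relative length is the degree, its direction the period.
    """
    r = np.asarray(monthly_runoff, dtype=float)
    theta = 2.0 * np.pi * np.arange(12) / 12.0
    rx, ry = (r * np.cos(theta)).sum(), (r * np.sin(theta)).sum()
    degree = np.hypot(rx, ry) / r.sum()
    period = np.degrees(np.arctan2(ry, rx)) % 360.0   # 0 deg = January
    return degree, period

# all runoff in July -> fully concentrated, period at mid-year
deg1, per1 = concentration([0] * 6 + [100] + [0] * 5)
# uniform runoff -> no concentration
deg2, _ = concentration([10] * 12)
```

A single-month regime gives a concentration degree of 1 at the July angle (180 degrees), while perfectly uniform runoff gives a degree of 0.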

14.
Spatially explicit land use/cover models are indispensable for sustainable rural land use planning, particularly in southern African countries that are experiencing rapid land use/cover changes. Using Zimbabwe as an example, we simulated future land use/cover changes up to 2030 based on a Markov-cellular automata model that integrates Markovian transition probabilities computed from satellite-derived land use/cover maps and a cellular automata spatial filter. A multicriteria evaluation (MCE) procedure was used to generate transition potential maps from biophysical and socioeconomic data. Dynamic adjustments of transition probabilities and transition potential map thresholds were implemented in the Markov-cellular automata model through a multi-objective land allocation (MOLA) procedure. Using the normalised transition probabilities, the Markov-cellular automata model simulated future land use/cover changes (up to 2030) under the 2000 calibration scenario, predicting a continuing downward trend in woodland areas and an upward trend in bareland areas. Future land use/cover simulations indicated that if the current land use/cover trends continue in the study area without holistic sustainable development measures, severe land degradation will ensue.
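The Markovian half of such a model is a simple matrix recurrence on land-cover proportions (a toy sketch with an invented transition matrix; the paper derives its probabilities from satellite-derived maps and couples them to a cellular-automata spatial filter):

```python
import numpy as np

# rows: from-class, columns: to-class (woodland, cropland, bareland);
# each row sums to 1 (illustrative values, not Zimbabwe's)
P = np.array([[0.85, 0.10, 0.05],
              [0.02, 0.90, 0.08],
              [0.01, 0.04, 0.95]])

state = np.array([0.50, 0.35, 0.15])     # class proportions in the base year

def project(state, P, steps):
    """Propagate class proportions through `steps` transition periods."""
    for _ in range(steps):
        state = state @ P
    return state

future = project(state, P, steps=3)
```

With this matrix the projection reproduces the qualitative finding of the abstract: woodland declines and bareland grows, while total area is conserved.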

15.
Current data sharing in the Internet environment is supported using metadata at the file level. This approach has three fundamental shortcomings. First, sharing data from different sources with different semantics, data models, and acquisition methods usually requires data conversion and/or integration like data conflation. This can be tedious and error‐prone. Second, data updated from one source cannot be automatically propagated to other related data or applications. Finally, data sharing at the file level makes it difficult to provide feature‐level data for searching, accessing, and exchanging in real time over the Internet. This paper addresses these three issues by proposing a standards‐based framework for sharing geospatial data in the transportation application domain. The proposed framework uses a standard data model—geospatial data model proposed by the Geospatial One‐Stop initiative to harmonize the semantics and data models without the use of data integration methods. It uses Geography Markup Language (GML) for geospatial data coding and feature relationship, which provides a basis to propagate the data update from one source to related other sources and applications, and to search and extract data at the feature level. The framework uses the Web Feature Service (WFS) to search, access and extract data at the feature level from distributed sources. Finally, the Scalable Vector Graphics (SVG) standard was used for data display on the Web browser. Two transportation network datasets are used in the prototype case study to implement the proposed framework. The prototype allows the user to access and extract data at the feature level on the Web from distributed sources without downloading the full data file. It shows that the proposed standards‐based feature‐level data‐sharing system is capable of sharing data without data conflation, accessing, and exchanging data in real time at the feature level. 
The prototype also shows that changes in one database can be automatically reflected or propagated in another related database without data downloading.

16.
Wu Bo, Chen Xiaoxiang. 《热带地理》(Tropical Geography), 2012, 32(1): 54-58
Existing error evaluations of SeaWinds scatterometer wind retrievals follow two approaches: comparison with other data sources, such as numerical weather prediction products or buoy wind fields, and judgment from the spatial consistency of the retrieved wind field. This paper uses two-dimensional geometric central moments to quantify the image characteristics of the ambiguity region closest to the true solution in each cell's solution space, namely the image ellipse's major-axis inclination, axis ratio, and total gray value, and studies the relation between the total gray value and the wind-vector retrieval error. The results show that when the total gray value exceeds a certain threshold, the retrieval error is confined to a small range. Based on this property of the image ellipse's total gray value, a new filtering algorithm constrained by total gray value was designed. Through comparison with buoy wind vectors
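The two-dimensional central moments used to characterize the ambiguity region can be sketched directly (generic moment formulas; the paper's solution-space images are replaced here by an invented anisotropic Gaussian blob):

```python
import numpy as np

def image_ellipse(img):
    """Total gray value, major-axis inclination (radians) and axis
    ratio from the second-order central moments of a gray image."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    yy, xx = np.indices(img.shape)
    cx, cy = (img * xx).sum() / total, (img * yy).sum() / total
    mu20 = (img * (xx - cx) ** 2).sum() / total
    mu02 = (img * (yy - cy) ** 2).sum() / total
    mu11 = (img * (xx - cx) * (yy - cy)).sum() / total
    angle = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    common = np.hypot(2.0 * mu11, mu20 - mu02)
    lam1 = (mu20 + mu02 + common) / 2.0        # major-axis variance
    lam2 = (mu20 + mu02 - common) / 2.0        # minor-axis variance
    return total, angle, np.sqrt(lam1 / lam2)

# synthetic blob elongated along the x axis (sigma_x=8, sigma_y=3)
yy, xx = np.mgrid[0:64, 0:64].astype(float)
blob = np.exp(-((xx - 32) ** 2 / (2 * 8.0 ** 2)
                + (yy - 32) ** 2 / (2 * 3.0 ** 2)))
total, angle, ratio = image_ellipse(blob)
```

For this blob the inclination is essentially zero and the axis ratio close to 8/3, as the construction dictates.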

17.

The importance of including a contextual underpinning to the spatial analysis of social data is gaining traction in the spatial science community. The challenge, though, is how to capture these data in a rigorous manner that is translational. One method that has shown promise in achieving this aim is the spatial video geonarrative (SVG), and in this paper we pose questions that advance the science of geonarratives through a case study of criminal ex-offenders. Eleven ex-offenders provided sketch maps and SVGs identifying high-crime areas of their community. Wordmapper software was used to map and classify the SVG content; its spatial filter extension was used for hot spot mapping with statistical significance tested using Monte Carlo simulations. Then, each subject’s sketch map and SVG were compared. Results reveal that SVGs consistently produce finer spatial-scale data and more locations of relevance than the sketch maps. SVGs also provide explanation of spatial-temporal processes and causal mechanisms linked to specific places, which are not evident in the sketch maps. SVG can be a rigorous translational method for collecting data on the geographic context of many phenomena. Therefore, this paper makes an important advance in understanding how environmentally immersive methods contribute to the understanding of geographic context.

18.
Parks and protected areas provide a wide range of benefits, but methods to evaluate their importance to society are often ad hoc and limited. In this study, the quality of crowdsourced information from Public Participation GIS (PPGIS) and Volunteered Geographic Information (VGI) sources (Flickr, OpenStreetMap (OSM), and Wikipedia) was compared with visitor counts that are presumed to reflect social importance. Using the state of Victoria, Australia as a case study, secondary crowdsourced VGI data, primary crowdsourced (PPGIS) data and visitor statistics were examined for their correspondence and differences, and to identify spatial patterns in park popularity. Data completeness, the percent of protected areas with data, varied between sources, being highest for OSM (90%), followed by Flickr (41%), PPGIS (24%), visitation counts (5%), and Wikipedia articles (4%). Statistically significant correlations were found between all five measures of popularity for protected areas. Using stepwise multiple linear regression, the explained variability in visitor numbers was greater than 70%, with PPGIS, Flickr and OSM having the largest standardized coefficients. The social importance of protected areas varied as a function of accessibility and the types of values (direct or indirect use) expressed for the areas. Crowdsourced data may provide an alternative to visitor counts for assessing protected area social importance and spatial variability of visitation. However, crowdsourced data appears to be an unreliable proxy for the full range of values and importance of protected areas, especially for non-use values such as biological conservation.

19.
RECENT DEVELOPMENTS IN MULTIVARIATE CALIBRATION (cited: 1, self: 0, others: 1)
With the goal of understanding global chemical processes, environmental chemists have some of the most complex sample analysis problems. Multivariate calibration is a tool that can be applied successfully in many situations where traditional univariate analyses cannot. The purpose of this paper is to review multivariate calibration, with an emphasis being placed on the developments in recent years. The inverse and classical models are discussed briefly, with the main emphasis on the biased calibration methods. Principal component regression (PCR) and partial least squares (PLS) are discussed, along with methods for quantitative and qualitative validation of the calibration models. Non-linear PCR, non-linear PLS and locally weighted regression are presented as calibration methods for non-linear data. Finally, calibration techniques using a matrix of data per sample (second-order calibration) are discussed briefly.

20.
Digital filter smoothing methods for shot-noise-limited data are addressed in this study. The preferred method is based on a Gaussian filter in which the width of the Gaussian filter function is varied depending on the estimate of the second derivative of the raw data. This filter is developed from the standpoint of maximum likelihood parameter estimation of the probability density function which describes shot-noise-limited data. The smoothing filter is tested and compared with the conventional sequential regression filter. This adaptive Gaussian smoothing filter works better than both the sequential regression and the adaptive Gaussian filter derived for normal noise. For data containing both high- and low-frequency components, the limiting step in the adaptive filter is an estimation of the smoothing interval. Methods for determining an optimum smoothing interval are discussed. With the optimized smoothing interval, the adaptive Gaussian filter works well for data sets with a wide range of varying frequency components. In particular, synthetic data typical of atomic emission spectra are used to test this smoothing filter.
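The width-adaptation rule described above can be sketched as follows (a simplified stand-in: the paper derives the width from a maximum-likelihood treatment of shot noise, whereas here we simply shrink the Gaussian where the estimated second derivative is large; all parameter names are ours):

```python
import numpy as np

def adaptive_gaussian_smooth(y, sigma_max=4.0, sigma_min=0.5, half=10):
    """Gaussian smoothing whose kernel width shrinks where the raw
    data's second derivative is large, so sharp peaks are preserved
    while flat regions are smoothed heavily."""
    y = np.asarray(y, dtype=float)
    d2 = np.abs(np.gradient(np.gradient(y)))       # curvature estimate
    scale = d2 / (d2.max() + 1e-12)                # 0 (flat) .. 1 (sharp)
    sigmas = sigma_max - (sigma_max - sigma_min) * scale
    out = np.empty_like(y)
    k = np.arange(-half, half + 1)
    for i in range(len(y)):
        idx = np.clip(i + k, 0, len(y) - 1)        # clamp at the borders
        w = np.exp(-0.5 * (k / sigmas[i]) ** 2)
        out[i] = np.dot(w, y[idx]) / w.sum()       # normalized weights
    return out

smoothed = adaptive_gaussian_smooth(np.full(100, 7.0))
```

Because the weights are renormalized at every sample, a constant signal passes through unchanged, a basic sanity property of any smoothing filter.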
