Similar Literature
A total of 20 similar documents were retrieved (search time: 15 ms).
1.
Research progress on parallel computing for distributed hydrological models   Cited by: 2 (self-citations: 1, citations by others: 2)
Distributed hydrological simulation of large basins at high resolution with multiple coupled processes involves an enormous computational load; traditional serial computing cannot meet its demand for computing capacity, so support from parallel computing is needed. This paper first analyzes the parallelizability of distributed hydrological models from three perspectives (space, time, and sub-processes), points out that spatial decomposition is the preferred approach to parallelizing distributed hydrological models, and classifies hydrological sub-process computation methods and distributed hydrological models from the spatial-decomposition perspective. It then summarizes the current state of research on parallel computing for distributed hydrological models. For parallelism based on spatial decomposition, most existing studies take the sub-basin as the basic scheduling unit; for parallelism in the time dimension, some researchers have made preliminary studies of parallel methods that discretize both the spatial and temporal domains. Finally, key outstanding problems and future directions are discussed from three aspects: parallel algorithm design, parallel computing frameworks for integrated basin-system simulation, and high-performance data I/O methods that support parallel computing.
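As an illustration of the sub-basin-based spatial decomposition that the review identifies as the dominant scheduling approach, the following Python sketch groups sub-basins into dependency levels (upstream units first) and dispatches each level to a process pool; the dependency graph and the `route_subbasin` workload are hypothetical stand-ins, not taken from any of the reviewed models.

```python
# Illustrative sketch (not from the reviewed papers): schedule sub-basin
# computations in parallel while respecting upstream -> downstream dependencies.
from multiprocessing import Pool

# upstream dependencies: sub-basin id -> ids that must finish first (hypothetical)
UPSTREAM = {1: [], 2: [], 3: [1, 2], 4: [], 5: [3, 4]}

def route_subbasin(sub_id):
    """Stand-in for one sub-basin's runoff-routing computation."""
    return sub_id, sum(i * i for i in range(100_000))  # dummy workload

def dependency_levels(upstream):
    """Group sub-basins into levels; all members of a level are independent."""
    done, levels = set(), []
    while len(done) < len(upstream):
        ready = [s for s, ups in upstream.items()
                 if s not in done and all(u in done for u in ups)]
        levels.append(ready)
        done.update(ready)
    return levels

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        for level in dependency_levels(UPSTREAM):
            # sub-basins within one level have no mutual dependencies,
            # so they can be dispatched to worker processes together
            for sub_id, _ in pool.map(route_subbasin, level):
                print(f"sub-basin {sub_id} finished")
```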

2.
Traditional distributed hydrological models use a serial computing mode whose capacity cannot satisfy the demands of large-scale, fine-resolution, multi-element, multi-process coupled hydrological simulation, so parallel computing support is urgently needed. Since the start of the 21st century, the rapid development of computer technology and the gradual maturation of parallel environments have provided the software and hardware foundation for parallelizing distributed hydrological models. This paper reviews existing work from two aspects, parallel environments and parallel algorithms, analyzes the strengths and weaknesses of different parallel environments and algorithms, and proposes means of improving parallel efficiency: allocating process/thread counts reasonably to reduce communication overhead, adopting hybrid parallel environments to enhance model scalability, using spatial or spatio-temporal discretization to increase parallelizability, and assigning computational tasks dynamically to balance the workload. Finally, the paper outlines future research directions for high-performance parallel distributed models.
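The dynamic task allocation and load balancing discussed above can be illustrated with a minimal master-worker sketch using mpi4py; the task list, message tags, and `simulate_unit` cost model are invented for the example and do not come from the reviewed models.

```python
# Illustrative master-worker sketch of dynamic task allocation with mpi4py
# (assumed installed). Run with e.g.: mpiexec -n 4 python dynamic_tasks.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TAG_WORK, TAG_STOP = 1, 2

def simulate_unit(task_id):
    """Stand-in for one catchment unit's simulation step (uneven cost)."""
    return task_id, sum(i * i for i in range(10_000 * (task_id % 5 + 1)))

if rank == 0:                                  # master: hand out tasks on demand
    tasks = list(range(20))
    status = MPI.Status()
    active = size - 1
    for w in range(1, size):                   # seed every worker with one task
        comm.send(tasks.pop(), dest=w, tag=TAG_WORK)
    while active > 0:
        result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        src = status.Get_source()
        if tasks:                              # faster workers automatically get more work
            comm.send(tasks.pop(), dest=src, tag=TAG_WORK)
        else:
            comm.send(None, dest=src, tag=TAG_STOP)
            active -= 1
        print("got result from rank", src, "task", result[0])
else:                                          # workers
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(simulate_unit(task), dest=0, tag=TAG_WORK)
```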

3.
ABSTRACT

Spatial interpolation is a traditional geostatistical operation that aims at predicting the attribute values of unobserved locations given a sample of data defined on point supports. However, the continuity and heterogeneity underlying spatial data are too complex to be approximated by classic statistical models. Deep learning models, especially the idea of conditional generative adversarial networks (CGANs), provide us with a perspective for formalizing spatial interpolation as a conditional generative task. In this article, we design a novel deep learning architecture named conditional encoder-decoder generative adversarial neural networks (CEDGANs) for spatial interpolation, therein combining the encoder-decoder structure with adversarial learning to capture deep representations of sampled spatial data and their interactions with local structural patterns. A case study on elevations in China demonstrates the ability of our model to achieve outstanding interpolation results compared to benchmark methods. Further experiments uncover the learned spatial knowledge in the model’s hidden layers and test the potential to generalize our adversarial interpolation idea across domains. This work is an endeavor to investigate deep spatial knowledge using artificial intelligence. The proposed model can benefit practical scenarios and enlighten future research in various geographical applications related to spatial prediction.
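To make the "spatial interpolation as a conditional generative task" formulation concrete, here is a minimal PyTorch sketch of a conditional GAN whose generator receives a sparse sample grid plus a mask and outputs a full surface; the layer sizes, losses, and single training step are generic assumptions and do not reproduce the CEDGAN architecture described in the article.

```python
# Minimal conditional-GAN sketch for grid interpolation (not the authors' CEDGAN).
# Inputs: channel 0 = sparse samples placed on a 32x32 grid, channel 1 = sample mask.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                      # small encoder-decoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))
    def forward(self, cond, noise):
        return self.net(torch.cat([cond, noise], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                      # judges (condition, surface) pairs
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(64 * 8 * 8, 1))
    def forward(self, cond, surface):
        return self.net(torch.cat([cond, surface], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# one illustrative training step on random stand-in data
real = torch.rand(8, 1, 32, 32)                        # "true" surfaces
mask = (torch.rand(8, 1, 32, 32) < 0.05).float()       # sparse sample locations
cond = torch.cat([real * mask, mask], dim=1)           # conditioning input
noise = torch.randn(8, 1, 32, 32)

fake = G(cond, noise)
loss_d = bce(D(cond, real), torch.ones(8, 1)) + bce(D(cond, fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

loss_g = bce(D(cond, fake), torch.ones(8, 1))          # generator tries to fool D
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```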

4.
Spatial interpolation of marine environment data using P-MSN   Cited by: 1 (self-citations: 0, citations by others: 1)
ABSTRACT

When a marine study area is large, the environmental variables often present spatially stratified non-homogeneity, violating the spatial second-order stationary assumption. The stratified non-homogeneous surface can be divided into several stationary strata with different means or variances, but still with close relationships between neighboring strata. To give the best linear-unbiased estimator for those environmental variables, an interpolated version of the mean of the surface with stratified non-homogeneity (MSN) method called point mean of the surface with stratified non-homogeneity (P-MSN) was derived. P-MSN distinguishes the spatial mean and variogram in different strata and borrows information from neighboring strata to improve the interpolation precision near the strata boundary. This paper also introduces the implementation of this method, and its performance is demonstrated in two case studies, one using ocean color remote sensing data, and the other using marine environment monitoring data. The predictions of P-MSN were compared with ordinary kriging, stratified kriging, kriging with an external drift, and empirical Bayesian kriging, the most frequently used methods that can handle some extent of spatial non-homogeneity. The results illustrated that for spatially stratified non-homogeneous environmental variables, P-MSN outperforms other methods by simultaneously improving interpolation precision and avoiding artificially abrupt changes along the strata boundaries.
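A highly simplified stand-in for the stratified setting (not P-MSN itself, which additionally borrows information across neighbouring strata) is to interpolate each stratum separately so that differing means and variability are respected; the strata, samples, and grid below are synthetic.

```python
# Per-stratum interpolation sketch; P-MSN's cross-stratum borrowing is omitted.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# hypothetical samples: coordinates, values, and a stratum label per sample
xy = rng.uniform(0, 100, size=(300, 2))
stratum = (xy[:, 0] > 50).astype(int)                 # two strata split at x = 50
values = np.where(stratum == 0, 5.0, 20.0) + rng.normal(0, 1, 300)

# prediction grid with its own stratum labels
gx, gy = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 100, 101))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
grid_stratum = (grid_xy[:, 0] > 50).astype(int)

pred = np.full(len(grid_xy), np.nan)
for s in np.unique(stratum):
    sel, gsel = stratum == s, grid_stratum == s
    est = griddata(xy[sel], values[sel], grid_xy[gsel], method="linear")
    # fill cells outside the convex hull of this stratum's samples
    nearest = griddata(xy[sel], values[sel], grid_xy[gsel], method="nearest")
    pred[gsel] = np.where(np.isnan(est), nearest, est)

print(pred.reshape(gx.shape)[:2, :5])
```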

5.
6.
Understanding the topographic context preceding the development of erosive landforms is of major relevance in geomorphic research, as topography is an important factor in both water- and mass movement-related erosion, and knowledge of the original surface is a condition for quantifying the volume of eroded material. Although any reconstruction implies assuming that the resulting surface reflects the original topography, past works have been dominated by linear interpolation methods, incapable of generating curved surfaces in areas with no data or values outside the range of variation of inputs. In spite of these limitations, the impossibility of validation has meant that the assumption of surface representativity has never been challenged. In this paper, a validation-based method is applied in order to define the optimal interpolation technique for reconstructing pre-erosion topography in a given study area. In spite of the absence of the original surface, different techniques can be nonetheless evaluated by quantifying their capacity to reproduce known topography in unincised locations within the same geomorphic contexts of existing erosive landforms. A linear method (Triangulated Irregular Network, TIN) and 23 parameterizations of three distinct Spline interpolation techniques were compared using 50 test areas in a context of research on large gully dynamics in the South of Portugal. Results show that almost all Spline methods produced smaller errors than the TIN, and that the latter produced a mean absolute error 61.4% higher than the best Spline method, clearly establishing both the better adjustment of Splines to the geomorphic context considered and the limitations of linear approaches. The proposed method can easily be applied to different interpolation techniques and topographic contexts, enabling better calculations of eroded volumes and denudation rates as well as the investigation of controls by antecedent topographic form over erosive processes.
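The validation idea, evaluating interpolators by how well they reproduce known topography at held-out locations, can be sketched with SciPy as follows, using a piecewise-linear (TIN-like) interpolator against a thin-plate-spline RBF; the surface is synthetic and the study's 23 spline parameterizations are not reproduced here.

```python
# Compare a linear (TIN-like) interpolator with a thin-plate spline by MAE on held-out points.
import numpy as np
from scipy.interpolate import LinearNDInterpolator, RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(400, 2))
z = np.sin(3 * pts[:, 0]) * np.cos(2 * pts[:, 1]) + 0.01 * rng.normal(size=400)

train, test = pts[:300], pts[300:]
z_train, z_test = z[:300], z[300:]

tin = LinearNDInterpolator(train, z_train)            # piecewise-linear, TIN-like
spline = RBFInterpolator(train, z_train, kernel="thin_plate_spline", smoothing=0.0)

mae_tin = np.nanmean(np.abs(tin(test) - z_test))      # NaN outside the convex hull
mae_spl = np.mean(np.abs(spline(test) - z_test))
print(f"MAE TIN: {mae_tin:.4f}  MAE spline: {mae_spl:.4f}")
```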

7.
8.
Novel digital data sources allow us to attain enhanced knowledge about locations and mobilities of people in space and time. Already a fast-growing body of literature demonstrates the applicability and feasibility of mobile phone-based data in social sciences for considering mobile devices as proxies for people. However, the implementation of such data imposes many theoretical and methodological challenges. One major issue is the uneven spatial resolution of mobile phone data due to the spatial configuration of mobile network base stations and its spatial interpolation. To date, different interpolation techniques are applied to transform mobile phone data into other spatial divisions. However, these do not consider the temporality and societal context that shapes the human presence and mobility in space and time. The paper aims, first, to contribute to mobile phone-based research by addressing the need to give more attention to the spatial interpolation of given data, and further by proposing a dasymetric interpolation approach to enhance the spatial accuracy of mobile phone data. Second, it contributes to population modelling research by combining spatial, temporal and volumetric dasymetric mapping and integrating it with mobile phone data. In doing so, the paper presents a generic conceptual framework of a multi-temporal function-based dasymetric (MFD) interpolation method for mobile phone data. Empirical results demonstrate how the proposed interpolation method can improve the spatial accuracy of both night-time and daytime population distributions derived from different mobile phone data sets by taking advantage of ancillary data sources. The proposed interpolation method can be applied for both location- and person-based research, and is a fruitful starting point for improving the spatial interpolation methods for mobile phone data. We share the implementation of our method in GitHub as open access Python code.  相似文献   
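A schematic of the dasymetric reallocation step (not the authors' multi-temporal function-based method) is given below: users observed per tower service area are redistributed to cells in proportion to an ancillary, time-dependent activity weight; all layers, counts, and weights are hypothetical.

```python
# Dasymetric reallocation sketch with a time-dependent activity weight per land-use type.
import numpy as np

n_cells = 10
tower_of_cell = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])   # service area covering each cell
landuse = np.array(["res", "res", "work", "work", "res", "work",
                    "res", "res", "work", "res"])
users_per_tower = np.array([120.0, 300.0, 90.0])            # phone users observed per tower

# hypothetical "human activity function": weights by land-use type and hour
activity = {"res": {"03:00": 1.0, "14:00": 0.2},
            "work": {"03:00": 0.1, "14:00": 1.0}}

def dasymetric(hour):
    w = np.array([activity[lu][hour] for lu in landuse])
    est = np.zeros(n_cells)
    for t, total in enumerate(users_per_tower):
        sel = tower_of_cell == t
        est[sel] = total * w[sel] / w[sel].sum()             # reallocate within the service area
    return est

print("night:", dasymetric("03:00").round(1))
print("day:  ", dasymetric("14:00").round(1))
```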

9.
10.
Abstract

With the increasing importance of parallel computing, attention must be given to utilising these resources efficiently. This article describes an algorithm to use cooperating parallel processors to solve the problem of vector polygon overlay, one of the most computationally-intensive problems in the GIS arena. The basic algorithm, which is described here using natural language, is not specific to a particular parallel architecture but has elements that are best suited to particular configurations, namely distributed-memory Multiple Instruction stream Multiple Data stream (MIMD) architectures. The intention is to provide an algorithm which utilises the potential of such architectures by distributing the computational load over several cooperating processors.
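As a rough, shared-nothing stand-in for distributing polygon-overlay work across cooperating processors, the sketch below farms out pairwise intersections with Python's multiprocessing and shapely; the layers are toy rectangles, and a real system would prune candidate pairs with a spatial index before distribution.

```python
# Illustrative only: distributing polygon-polygon intersection work over processes.
from itertools import product
from multiprocessing import Pool
from shapely.geometry import Polygon

LAYER_A = [Polygon([(i, 0), (i + 1, 0), (i + 1, 1), (i, 1)]) for i in range(50)]
LAYER_B = [Polygon([(i + 0.5, 0.5), (i + 1.5, 0.5), (i + 1.5, 1.5), (i + 0.5, 1.5)])
           for i in range(50)]

def overlay_pair(pair):
    """Compute one polygon-polygon intersection (one unit of distributed work)."""
    ia, ib = pair
    inter = LAYER_A[ia].intersection(LAYER_B[ib])
    return (ia, ib, inter.area) if not inter.is_empty else None

if __name__ == "__main__":
    pairs = list(product(range(len(LAYER_A)), range(len(LAYER_B))))
    with Pool(processes=4) as pool:
        results = [r for r in pool.map(overlay_pair, pairs, chunksize=200) if r]
    print(len(results), "non-empty intersections")
```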

11.
A general-purpose parallel raster processing programming library (pRPL) was developed and applied to speed up a commonly used cellular automaton model with known tractability limitations. The library is suitable for use by geographic information scientists with basic programming skills, but who lack knowledge and experience of parallel computing and programming. pRPL is a general-purpose programming library that provides generic support for raster processing, including local-scope, neighborhood-scope, regional-scope, and global-scope algorithms as long as they are parallelizable. The library also supports multilayer algorithms. Besides the standard data domain decomposition methods, pRPL provides a spatially adaptive quad-tree-based decomposition to produce more evenly distributed workloads among processors. Data parallelism and task parallelism are supported, with both static and dynamic load-balancing. By grouping processors, pRPL also supports data–task hybrid parallelism, i.e., data parallelism within a processor group and task parallelism among processor groups. pSLEUTH, a parallel version of a well-known cellular automata model for simulating urban land-use change (SLEUTH), was developed to demonstrate full utilization of the advanced features of pRPL. Experiments with real-world data sets were conducted and the performance of pSLEUTH measured. We conclude not only that pRPL greatly reduces the development complexity of implementing a parallel raster-processing algorithm, it also greatly reduces the computing time of computationally intensive raster-processing algorithms, as demonstrated with pSLEUTH.
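The spatially adaptive quad-tree decomposition that pRPL offers can be illustrated in miniature: keep splitting a raster block into quadrants until its count of active cells drops below a threshold, yielding blocks with more comparable workloads. The raster and threshold below are synthetic, and this is not the pRPL implementation.

```python
# Quad-tree decomposition sketch driven by per-block "work" (active-cell count).
import numpy as np

rng = np.random.default_rng(2)
raster = (rng.random((256, 256)) < 0.05).astype(np.uint8)   # sparse "active" cells
raster[:64, :64] = 1                                         # one dense corner

def quadtree_blocks(r0, r1, c0, c1, grid, max_work, out):
    work = int(grid[r0:r1, c0:c1].sum())
    if work <= max_work or (r1 - r0) <= 16:                  # stop splitting
        out.append((r0, r1, c0, c1, work))
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    for rr0, rr1, cc0, cc1 in [(r0, rm, c0, cm), (r0, rm, cm, c1),
                               (rm, r1, c0, cm), (rm, r1, cm, c1)]:
        quadtree_blocks(rr0, rr1, cc0, cc1, grid, max_work, out)

blocks = []
quadtree_blocks(0, 256, 0, 256, raster, max_work=500, out=blocks)
print(len(blocks), "blocks; largest workloads:", sorted(b[4] for b in blocks)[-5:])
```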

12.
13.
The geometry of impounded surfaces is a key tool to reservoir storage management and projection. Yet topographic data and bathymetric surveys of average-aged reservoirs may be absent for many regions worldwide. This paper examines the potential of contour line interpolation (TOPO) and Structure from Motion (SfM) photogrammetry to reconstruct the topography of existing reservoirs prior to dam closure. The study centres on the Paso de las Piedras reservoir, Argentina, and assesses the accuracy and reliability of TOPO- and SfM-derived digital elevation models (DEMs) using different grid resolutions. All DEMs were of acceptable quality. However, different interpolation techniques produced different types of error, which increased (or decreased) with increasing (or decreasing) grid resolution as a function of their nature, and relative to the terrain complexity. In terms of DEM reliability to reproduce area–elevation relationships, processing-related disagreements between DEMs were markedly influenced by topography. Even though they produce intrinsic errors, it is concluded that both TOPO and SfM techniques hold great potential to reconstruct the bathymetry of existing reservoirs. For areas exhibiting similar terrain complexity, the implementation of one or another technique will depend ultimately on the need for preserving accurate elevation (TOPO) or topographic detail (SfM).
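The area–elevation reliability check mentioned above amounts to comparing hypsometric curves of the competing DEMs; a brief sketch with synthetic "TOPO-like" and "SfM-like" surfaces is shown below (the cell size and noise level are assumptions).

```python
# Compare cumulative inundated area vs. water level for two reconstructed DEMs.
import numpy as np

rng = np.random.default_rng(3)
x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
dem_topo = 20 * (x**2 + y**2)                              # smooth "contour-based" surface
dem_sfm = dem_topo + rng.normal(0, 0.5, dem_topo.shape)    # noisier "SfM-like" surface
cell_area = 5.0 * 5.0                                      # m^2 per grid cell (assumed)

def area_elevation(dem, levels):
    """Area flooded below each elevation level."""
    return np.array([(dem <= z).sum() * cell_area for z in levels])

levels = np.linspace(0, 20, 21)
a_topo, a_sfm = area_elevation(dem_topo, levels), area_elevation(dem_sfm, levels)
for z, a1, a2 in zip(levels[::5], a_topo[::5], a_sfm[::5]):
    print(f"level {z:5.1f} m  TOPO {a1:9.0f} m2  SfM {a2:9.0f} m2")
```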

14.
Geographical information systems could be improved by adding procedures for geostatistical spatial analysis to existing facilities. Most traditional methods of interpolation are based on mathematical as distinct from stochastic models of spatial variation. Spatially distributed data behave more like random variables, however, and regionalized variable theory provides a set of stochastic methods for analysing them. Kriging is the method of interpolation deriving from regionalized variable theory. It depends on expressing spatial variation of the property in terms of the variogram, and it minimizes the prediction errors which are themselves estimated. We describe the procedures and the way we link them using standard operating systems. We illustrate them using examples from case studies, one involving the mapping and control of soil salinity in the Jordan Valley of Israel, the other in semi-arid Botswana where the herbaceous cover was estimated and mapped from aerial photographic survey.
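The two steps the abstract names, expressing spatial variation through the variogram and minimizing prediction error via kriging, can be sketched compactly: estimate an empirical variogram, set the parameters of a simple spherical model from it, and solve the ordinary kriging system. This follows the textbook formulation with synthetic data, not the authors' software.

```python
# Empirical variogram + ordinary kriging prediction (textbook formulation).
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(0, 100, size=(150, 2))
vals = 0.05 * pts[:, 0] + np.sin(pts[:, 1] / 10) + rng.normal(0, 0.2, 150)

def empirical_variogram(pts, vals, n_bins=12, max_lag=60.0):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    g = 0.5 * (vals[:, None] - vals[None, :]) ** 2
    edges = np.linspace(0, max_lag, n_bins + 1)
    lags, gammas = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (d > lo) & (d <= hi)
        if m.any():
            lags.append(d[m].mean()); gammas.append(g[m].mean())
    return np.array(lags), np.array(gammas)

def spherical(h, nugget, sill, rng_):
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, inside, sill)

lags, gammas = empirical_variogram(pts, vals)
# crude parameter choice from the empirical variogram (no formal fitting)
model = lambda h: spherical(h, nugget=gammas[0], sill=gammas.max(), rng_=lags[-1])

def ordinary_krige(x0):
    n = len(pts)
    gamma_ij = model(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2))
    np.fill_diagonal(gamma_ij, 0.0)                   # gamma(0) = 0
    A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma_ij; A[n, n] = 0.0
    b = np.append(model(np.linalg.norm(pts - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)                         # weights + Lagrange multiplier
    return float(w[:n] @ vals)

print("prediction at (50, 50):", round(ordinary_krige(np.array([50.0, 50.0])), 3))
```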

15.
As geospatial researchers' access to high-performance computing clusters continues to increase alongside the availability of high-resolution spatial data, it is imperative that techniques are devised to exploit these clusters' ability to quickly process and analyze large amounts of information. This research concentrates on the parallel computation of A Multidirectional Optimal Ecotope-Based Algorithm (AMOEBA). AMOEBA is used to derive spatial weight matrices for spatial autoregressive models and as a method for identifying irregularly shaped spatial clusters. While improvements have been made to the original ‘exhaustive’ algorithm, the resulting ‘constructive’ algorithm can still take a significant amount of time to complete with large datasets. This article outlines a parallel implementation of AMOEBA (the P-AMOEBA) written in Java utilizing the message passing library MPJ Express. In order to account for differing types of spatial grid data, two decomposition methods are developed and tested. The benefits of using the new parallel algorithm are demonstrated on an example dataset. Results show that different decompositions of spatial data affect the computational load balance across multiple processors and that the parallel version of AMOEBA achieves substantially faster runtimes than those reported in related publications.
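Why the choice of decomposition matters for load balance can be shown with a toy grid in which the "interesting" cells are spatially concentrated: row-strip and block decompositions then hand very different workloads to each processor. The data and processor count are invented; this is not the P-AMOEBA code.

```python
# Load-imbalance comparison for two simple decompositions of a sparse grid.
import numpy as np

rng = np.random.default_rng(5)
grid = (rng.random((128, 128)) < 0.02).astype(int)
grid[64:, 64:] = 1                                    # a dense cluster in one quadrant
P = 4                                                 # number of processors

strips = [grid[i * 32:(i + 1) * 32, :].sum() for i in range(P)]              # row strips
blocks = [grid[r:r + 64, c:c + 64].sum() for r in (0, 64) for c in (0, 64)]  # 2x2 blocks

def imbalance(loads):
    return max(loads) / (sum(loads) / len(loads))     # max load relative to the mean

print("row-strip loads:", strips, "imbalance:", round(imbalance(strips), 2))
print("block loads:   ", blocks, "imbalance:", round(imbalance(blocks), 2))
```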

16.
A common problem in location-allocation modeling is the error associated with the representation and scale of demand. Numerous researchers have investigated aggregation errors associated with using different scaled data, and more recently, error associated with the geographic representation of model objects has also been studied. For covering problems, the validity of using polygon centroid representations of demand has been questioned by researchers, but the alternative has been to assume that demand is uniformly distributed within areal units. The spatial heterogeneity of demand within areal units thus has been modeled using one of two extremes – demand is completely concentrated at one location or demand is uniformly distributed. This article proposes using intelligent areal interpolation and geographic information systems to model the spatial heterogeneity of demand within spatial units when solving the maximal covering location problem. The results are compared against representations that assume demand is either concentrated at centroids or uniformly distributed. Using measures of scale and representation error, preliminary results from the test study indicate that for smaller scale data, representation has a substantial impact on model error whereas at larger scales, model error is not that different for the alternative representations of the distribution of demand within areal units.
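The contrast between centroid-based and spatially distributed demand can be illustrated with a small greedy heuristic for the maximal covering location problem (a stand-in for the exact models used in such studies); all sites, demand weights, and the coverage radius below are synthetic.

```python
# Greedy maximal-covering heuristic run under two demand representations.
import numpy as np

rng = np.random.default_rng(6)
candidates = rng.uniform(0, 10, size=(30, 2))          # candidate facility sites
RADIUS, P_FACILITIES = 2.0, 3

def greedy_mclp(demand_xy, demand_w):
    dist = np.linalg.norm(demand_xy[:, None, :] - candidates[None, :, :], axis=2)
    within = dist <= RADIUS
    covered = np.zeros(len(demand_xy), dtype=bool)
    chosen = []
    for _ in range(P_FACILITIES):
        gains = np.array([((within[:, j] & ~covered) * demand_w).sum()
                          for j in range(len(candidates))])
        j = int(gains.argmax())
        chosen.append(j)
        covered |= within[:, j]
    return sorted(chosen), float(demand_w[covered].sum())

# representation 1: each areal unit's demand lumped at its centroid
centroids = rng.uniform(0, 10, size=(20, 2))
unit_pop = rng.integers(50, 500, size=20).astype(float)

# representation 2: the same demand spread over several interior points per unit
pts = np.repeat(centroids, 5, axis=0) + rng.normal(0, 0.8, size=(100, 2))
pts_pop = np.repeat(unit_pop / 5, 5)

print("centroid demand ->", greedy_mclp(centroids, unit_pop))
print("interior points ->", greedy_mclp(pts, pts_pop))
```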

17.
18.
Control data are critical for improving areal interpolation results. Remotely sensed imagery, road network, and parcels are the three most commonly used ancillary data for areal interpolation of population. Meanwhile, the open access geographic data generated by social networks is emerging as an alternative control data that can be related to the distribution of population. This study evaluates the effectiveness of geo-located night-time tweets data as ancillary information and its combination with the three commonly used ancillary datasets in intelligent areal interpolation. Because the age profile of Twitter users is skewed, a second purpose of this study is to test the effect of this age-biased control data on estimating populations of different age groups. Results suggest that geo-located tweets as a single control layer do not perform as well as the three other control layers for total population and all age-specific population groups. However, the noticeable enhancement effect of Twitter data on other control data, especially for age groups with a high percentage of Twitter users, suggests that it helps to better reflect population distribution by increasing variation in densities within a residential area delineated by other control data.

19.
Viewshed analysis, often supported by geographic information systems, is widely used in many application domains. However, as terrain data continue to become increasingly large and available at high resolutions, data-intensive viewshed analysis poses significant computational challenges. General-purpose computation on graphics processing units (GPUs) provides a promising means to address such challenges. This article describes a parallel computing approach to data-intensive viewshed analysis of large terrain data using GPUs. Our approach exploits the high-bandwidth memory of GPUs and the parallelism of massive spatial data to enable memory-intensive and computation-intensive tasks while central processing units are used to achieve efficient input/output (I/O) management. Furthermore, a two-level spatial domain decomposition strategy has been developed to mitigate a performance bottleneck caused by data transfer in the memory hierarchy of GPU-based architecture. Computational experiments were designed to evaluate computational performance of the approach. The experiments demonstrate significant performance improvement over a well-known sequential computing method, and an enhanced ability of analyzing sizable datasets that the sequential computing method cannot handle.
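For reference, the per-cell computation that the article offloads to the GPU is a line-of-sight test against the terrain profile; a plain serial sketch on a synthetic DEM is given below (it is not the paper's two-level decomposition or GPU kernel).

```python
# Plain (serial) line-of-sight viewshed sketch on a synthetic DEM.
import numpy as np

rng = np.random.default_rng(7)
dem = rng.random((200, 200)) * 50                      # synthetic elevations (m)
obs_r, obs_c, obs_h = 100, 100, 1.7                    # observer cell and eye height

def visible(target_r, target_c):
    """True if the straight line from the observer to the target clears the terrain."""
    n = max(abs(target_r - obs_r), abs(target_c - obs_c))
    if n == 0:
        return True
    rows = np.linspace(obs_r, target_r, n + 1).round().astype(int)
    cols = np.linspace(obs_c, target_c, n + 1).round().astype(int)
    t = np.linspace(0.0, 1.0, n + 1)
    sight = dem[obs_r, obs_c] + obs_h + t * (dem[target_r, target_c] - dem[obs_r, obs_c] - obs_h)
    return bool(np.all(dem[rows, cols][1:-1] <= sight[1:-1]))

viewshed = np.array([[visible(r, c) for c in range(0, 200, 4)]
                     for r in range(0, 200, 4)])       # coarse sample to keep it quick
print("visible fraction:", viewshed.mean().round(3))
```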

20.
The objective of this study is to quantitatively evaluate Tropical Rainfall Measuring Mission (TRMM) data with rain gauge data and further to use this TRMM data to drive a Distributed Time-Variant Gain Model (DTVGM) to perform hydrological simulations in the semi-humid Weihe River catchment in China. Before the simulations, a comparison with a 10-year (2001-2010) daily rain gauge data set reveals that, at daily time step, TRMM rainfall data are better at capturing rain occurrence and mean values than rainfall extremes. On a monthly time scale, good linear relationships between TRMM and rain gauge rainfall data are found, with determination coefficients R2 varying between 0.78 and 0.89 for the individual stations. Subsequent simulation results of seven years (2001-2007) of data on daily hydrological processes confirm that the DTVGM when calibrated by rain gauge data performs better than when calibrated by TRMM data, but the performance of the simulation driven by TRMM data is better than that driven by gauge data on a monthly time scale. The results thus suggest that TRMM rainfall data are more suitable for monthly streamflow simulation in the study area, and that, when the effects of recalibration and the results for water balance components are also taken into account, the TRMM 3B42-V7 product has the potential to perform well in similar basins.
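The kind of comparison described, aggregating daily series to monthly totals and computing the coefficient of determination, can be sketched in a few lines of pandas; the series below are simulated and stand in for neither TRMM 3B42-V7 nor the Weihe gauge records.

```python
# Daily-to-monthly aggregation and R^2 between two rainfall series (simulated data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
days = pd.date_range("2001-01-01", "2010-12-31", freq="D")
gauge = rng.gamma(shape=0.3, scale=8.0, size=len(days))               # daily gauge rainfall (mm)
trmm = np.clip(0.9 * gauge + rng.normal(0, 2.0, len(days)), 0, None)  # noisy satellite estimate

df = pd.DataFrame({"gauge": gauge, "trmm": trmm}, index=days)
monthly = df.resample("MS").sum()                                     # monthly totals

r_month = np.corrcoef(monthly["gauge"], monthly["trmm"])[0, 1]
r_day = np.corrcoef(df["gauge"], df["trmm"])[0, 1]
print(f"monthly R^2 = {r_month**2:.3f}")
print(f"daily   R^2 = {r_day**2:.3f}")
```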
