Similar Documents
20 similar documents found (search time: 15 ms)
1.
The extraction of water distribution is extremely useful in research and planning activities, including those associated with water resources, environments, disasters, local climates, and other factors. Remote-sensing images with moderate resolution have been the main data source due to the vast distribution of water and the high cost, access difficulty, and massive size of high-resolution images. Although some water indices and methods for water extraction have been proposed, there is still a lack of resources for easily, accurately, efficiently, and automatically extracting water. This paper focused on improvements based mainly on images from the Operational Land Imager (OLI) aboard Landsat 8, the newest sensor in the long-running Landsat series. This study first analysed the variation features of previous water indices. Secondly, taking the city of Beijing and its surrounding area as the experimental site, a spectral curve analysis was performed and a new water index was proposed; this index was compared to three typical indices. Thirdly, a new approach was proposed to accurately and easily extract water. It included four major steps: background partitioning, thresholding and preliminary segmentation, noise removal by patch size, and local region growth. Next, the stricter and more effective stratified random sampling method was used to test the accuracy. Then, we tested the generality of the proposed water index and extraction method using nine typical test sites from around the world and tried to simplify the workflow. Finally, this paper discusses threshold optimization issues, such as automatic selection and reduction of the number of thresholds. The results show that the normalized difference water index (NDWI), modified normalized difference water index (MNDWI), and normalized difference built-up index (NDBI) may fail in some situations due to the complex spectrum of the impervious-surface class. Some shadow pixels were impossible to remove using only spectral analysis because both their digital number (DN) trends and values were similar to those of water. The proposed water index was simple, yet it corresponded better to water bodies; it was also more accurate and universal and showed greater potential for extracting water. The method relatively accurately and completely extracted various water bodies from plain city, plain country, and natural mountainous regions in many typical climate zones, eliminating interference caused by dark impervious surfaces, plants, sand, suspended sediments, snow, ice, bedrock, reservoir drawdown areas, shadows from mountains and buildings, mixed pixels, etc. The mean kappa coefficients were 0.988, 0.982, and 0.984 in plain city, plain country, and natural mountainous regions, respectively. This paper suggests that thresholds can be automatically determined by comparing the accuracy changes of different thresholds according to preselected sample and test points. Furthermore, the combined use of the maximum between-class variance method (also known as the Otsu algorithm) and adaptive thresholding shows great potential for automatic threshold determination in regions without much noise of high water-index value. In addition, water bodies could also be accurately extracted by setting these thresholds to fixed values based on the results at more test sites.
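The paper's new index is not reproduced in the abstract, but the baseline indices it compares against are standard. A minimal sketch, assuming reflectance arrays for Landsat 8 OLI band 3 (green), band 5 (NIR), and band 6 (SWIR1):

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters' normalized difference water index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir + 1e-12)

def mndwi(green, swir1):
    """Xu's modified NDWI: (G - SWIR1) / (G + SWIR1); favours water over built-up."""
    return (green - swir1) / (green + swir1 + 1e-12)

# Water pixels typically have index values above a scene-dependent threshold
# (often near 0); Otsu's method can pick the threshold automatically.
# water_mask = mndwi(b3, b6) > 0.0
```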

2.
Separating the speech signals of multiple simultaneous talkers in a reverberant enclosure is known as the cocktail party problem. Real-time applications require online solutions capable of separating the signals as they are observed, in contrast to offline separation after the full observation. A talker may also move, which the separation system should take into account. This work proposes an online method for speaker detection, speaker direction tracking, and speech separation. The separation is based on multiple acoustic source tracking (MAST) using Bayesian filtering and time–frequency masking. Measurements from three room environments with varying amounts of reverberation, using two different microphone-array designs, are used to evaluate the method's ability to separate up to four simultaneously active speakers. Separation of moving talkers is also considered. Results are compared to two reference methods: ideal binary masking (IBM) and oracle tracking (O-T). Simulations are used to evaluate the effect of the number of microphones and their spacing.
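The paper's Bayesian tracker is not detailed in the abstract, but the time–frequency masking stage it drives can be illustrated. A toy two-microphone, two-source sketch that assigns each TF bin by the sign of the inter-channel phase difference (a crude stand-in for the tracked source directions):

```python
import numpy as np
from scipy.signal import stft, istft

def separate_two_sources(mic1, mic2, fs=16000, nperseg=512):
    _, _, X1 = stft(mic1, fs=fs, nperseg=nperseg)
    _, _, X2 = stft(mic2, fs=fs, nperseg=nperseg)
    # Inter-channel phase difference as a crude direction feature per TF bin.
    ipd = np.angle(X1 * np.conj(X2))
    mask = ipd > 0                     # binary TF mask: bin assigned to source 1
    _, s1 = istft(X1 * mask, fs=fs, nperseg=nperseg)
    _, s2 = istft(X1 * ~mask, fs=fs, nperseg=nperseg)
    return s1, s2
```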

3.
Unlike conventional unsupervised classification methods, such as K-means and ISODATA, which are based on partitional clustering techniques, the methodology proposed in this work attempts to take advantage of the properties of Kohonen's self-organizing map (SOM) together with agglomerative hierarchical clustering methods to perform the automatic classification of remotely sensed images. The key point of the proposed method is to execute the cluster analysis process on a set of SOM prototypes, instead of working directly with the original patterns of the image. This strategy significantly reduces the complexity of the data analysis, making it possible to use techniques that have not normally been considered viable in the processing of remotely sensed images, such as hierarchical clustering methods and cluster validation indices. Through the use of the SOM, the proposed method maps the original patterns of the image to a two-dimensional neural grid, attempting to preserve the probability distribution and topology of the input space. Afterwards, an agglomerative hierarchical clustering method with restricted connectivity is applied to the trained neural grid, generating a simplified dendrogram for the image data. Utilizing SOM statistical properties, the method employs modified versions of cluster validation indices to automatically determine the ideal number of clusters for the image. The experimental results show examples of the application of the proposed methodology and compare its performance with that of the K-means algorithm.
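A minimal sketch of the two-stage idea: a plain NumPy SOM whose prototypes, rather than the raw pixels, are then clustered hierarchically. The paper's restricted-connectivity dendrogram and modified validation indices are not reproduced, and all parameters here are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def train_som(data, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    proto = data[rng.integers(len(data), size=h * w)].astype(float)  # init from data
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((proto - x) ** 2).sum(axis=1))   # best-matching unit
        frac = 1.0 - t / iters                            # linear decay schedule
        lr, sigma = lr0 * frac, max(sigma0 * frac, 0.5)
        nh = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
        proto += lr * nh[:, None] * (x - proto)           # pull neighbourhood toward x
    return proto

def cluster_prototypes(proto, k):
    Z = linkage(proto, method="ward")                     # agglomerative clustering
    return fcluster(Z, t=k, criterion="maxclust")         # prototype -> cluster label

# Each image pixel then inherits the label of its best-matching prototype.
```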

4.
This paper presents a novel host-based combinatorial method based on the k-Means clustering and ID3 decision tree learning algorithms for unsupervised classification of anomalous and normal activities in computer-network ARP traffic. The k-Means clustering method is first applied to the normal training instances to partition them into k clusters using Euclidean distance similarity. An ID3 decision tree is constructed on each cluster. Anomaly scores from the k-Means clustering algorithm and decisions of the ID3 decision trees are extracted, and a dedicated algorithm combines the results of the two algorithms into final anomaly score values. A threshold rule is then applied to decide whether a test instance is normal. Experiments are performed on captured network ARP traffic. Anomaly criteria have been defined and applied to the captured ARP traffic to generate normal training instances. The performance of the proposed approach is evaluated using five defined measures and empirically compared with the performance of the individual k-Means clustering and ID3 decision tree classification algorithms, as well as with other proposed approaches based on Markovian chains and stochastic learning automata. Experimental results show that the proposed approach achieves specificity and positive predictive value as high as 96% and 98%, respectively.
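A minimal sketch of the score-fusion idea, using scikit-learn's entropy-criterion tree as an ID3 stand-in; the paper's per-cluster trees and exact combination rule are more involved, and the labeled training data and weight `alpha` are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def fit_combo(X_train, y_train, k=5):
    """Cluster the normal instances (label 0) and fit an entropy tree overall."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_train[y_train == 0])
    tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
    tree.fit(X_train, y_train)
    return km, tree

def anomaly_score(km, tree, X, alpha=0.5):
    d = km.transform(X).min(axis=1)        # distance to nearest normal centroid
    d = d / (d.max() + 1e-12)              # scale to [0, 1]
    p = tree.predict_proba(X)[:, 1]        # tree's anomaly probability
    return alpha * d + (1 - alpha) * p     # fused score; threshold to decide
```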

5.
6.
It is well known that every investment carries an associated risk, and depending on the type of investment, the risk can be substantial, as with securities. Markowitz, however, proposed a methodology to minimize the risk of a portfolio through diversification of securities. The selection of the securities is the investor's choice, and several technical analyses are available to estimate an investment's returns and risks. This paper presents an autoregressive exogenous (ARX) predictor model that provides the risk and return of some Brazilian securities – negotiated at the Brazilian stock market, BOVESPA – in order to select the best portfolio, herein understood as the one with minimum expected risk. The ARX predictor succeeded in predicting the expected returns and risks of the securities, which resulted in an effective portfolio. Additionally, the Markowitz theory was confirmed, showing that diversification reduces the risk of a portfolio.
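A minimal sketch of the two ingredients, assuming return series are available as NumPy arrays; the paper's ARX orders and any portfolio constraints are not given in the abstract, so the parameters below are illustrative:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares ARX fit: y[t] ~ a·[y[t-1..t-na]] + b·[u[t-1..t-nb]]."""
    rows, targets = [], []
    for t in range(max(na, nb), len(y)):
        rows.append(np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]])
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta  # stacked AR and exogenous coefficients

def min_variance_weights(returns):
    """Markowitz minimum-variance portfolio (unconstrained, shorting allowed)."""
    cov = np.cov(returns, rowvar=False)    # returns: shape (T, n_assets)
    w = np.linalg.inv(cov) @ np.ones(cov.shape[0])
    return w / w.sum()                     # weights summing to 1
```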

7.
The Journal of Supercomputing - In this paper, a novel gene selection method benefiting from feature clustering and feature discretization is developed. For large numbers of genes, unsupervised fuzzy...

8.
Any organization that plans to introduce a new enterprise resource planning (ERP) system will carry out a range of activities to improve its readiness for the new system. This paper develops a new approach for managing these interrelated activities using fuzzy cognitive maps (FCMs) and the fuzzy analytical hierarchy process (FAHP). This approach enables the organization to (1) identify the readiness-relevant activities, (2) determine how these activities influence each other, (3) assess how these activities will contribute to the overall readiness, and (4) prioritize these activities according to their causal interrelationships so as to allocate management effort for overall readiness improvement. The approach first uses FCMs and a fuzzy connection matrix to represent all possible causal relationships between activities. It then uses FAHP to determine the contribution weights and uses FCM inference to include the effects of feedback between the activities. Based on the contributions and interrelationships of the activities, a management matrix is developed to categorize them into four management zones for effective allocation of limited management effort. An empirical study demonstrates how the approach works.
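A minimal sketch of one common FCM inference rule, assuming the weighted connection matrix W has already been built from the fuzzy connection matrix and FAHP weights; the sigmoid update form and steepness `lam` are assumptions, not the paper's stated choice:

```python
import numpy as np

def fcm_infer(W, a0, steps=50, lam=1.0, tol=1e-6):
    """Iterate FCM activations a <- sigmoid(a + a @ W) until they settle."""
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        nxt = 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))
        if np.max(np.abs(nxt - a)) < tol:
            break
        a = nxt
    return a  # steady-state activations ~ each activity's readiness contribution
```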

9.
The main goal of the present paper is to present a two-phase approach for solving reliability–redundancy allocation problems (RRAP) with nonlinear resource constraints. In the first phase of the proposed approach, an algorithm based on the artificial bee colony (ABC) is developed to solve the allocation problem, while in the second phase the solution obtained by this algorithm is improved. Four benchmark reliability–redundancy allocation problems and two reliability optimization problems are used to demonstrate the approach, and comparison shows that the solutions found by the proposed approach are better than those available in the literature.
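A minimal, generic artificial-bee-colony sketch for box-constrained minimisation; the paper's constraint handling and second improvement phase are not reproduced, and all parameters are illustrative:

```python
import numpy as np

def abc_minimize(f, lb, ub, n_food=20, limit=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n_food, dim))          # food sources (solutions)
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)

    def neighbour(i):
        j = rng.integers(n_food - 1)
        j += j >= i                                  # partner different from i
        d = rng.integers(dim)
        v = X[i].copy()
        v[d] += rng.uniform(-1, 1) * (X[i, d] - X[j, d])
        return np.clip(v, lb, ub)

    for _ in range(iters):
        order = list(range(n_food))                  # employed-bee phase
        probs = fit.max() - fit + 1e-12              # onlookers favour better food
        order += list(rng.choice(n_food, n_food, p=probs / probs.sum()))
        for i in order:
            v = neighbour(i)
            fv = f(v)
            if fv < fit[i]:
                X[i], fit[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
        worn = trials > limit                        # scout phase: restart stale sources
        X[worn] = rng.uniform(lb, ub, (int(worn.sum()), dim))
        fit[worn] = [f(x) for x in X[worn]]
        trials[worn] = 0
    best = int(fit.argmin())
    return X[best], fit[best]
```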

10.
In this paper, we propose a novel iterative predictor–corrector (IPC) approach to model static and kinetic friction during interactions with deformable objects. The proposed IPC method works within the implicit mixed linear complementarity problem (MLCP) formulation of collision response. In IPC, the potential directions of the frictional force are first determined at each contact point by leveraging the monotonic convergence of an iterative MLCP solver. All contacts are then categorized into either static or kinetic frictional states. Linear projection constraints (LPCs) are used to enforce 'stiction' for contacts in static friction. We propose a modified iterative constraint anticipation (MICA) approach that resolves the LPCs while simultaneously solving the MLCP. Our method can handle arbitrary friction models, including asymmetric and anisotropic ones. IPC requires little memory and is highly tunable. Multiple example problems are solved to demonstrate the method.
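For context, a minimal projected Gauss-Seidel sketch for the box-bounded (mixed) LCPs that such contact formulations produce; this is a generic iterative MLCP solver of the monotonically converging kind the abstract mentions, not the paper's MICA solver:

```python
import numpy as np

def pgs_mlcp(A, b, lo, hi, iters=200):
    """Projected Gauss-Seidel: find z with lo <= z <= hi complementary
    to the residual w = A z + b (A assumed to have nonzero diagonal)."""
    z = np.zeros(len(b))
    for _ in range(iters):
        for i in range(len(b)):
            # Row residual excluding the diagonal contribution of z[i].
            r = b[i] + A[i] @ z - A[i, i] * z[i]
            z[i] = np.clip(-r / A[i, i], lo[i], hi[i])
    return z
```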

11.
This paper presents an improved register–transfer-level functional partitioning approach for testability. Based on earlier work (X. Gu, K. Kuchcinski, Z. Peng, An efficient and economic partitioning approach for testability, in Proceedings of the International Test Conference, Washington DC, 1995), the proposed method first identifies hard-to-test points based on data-path testability and control-state reachability; these points are then made directly accessible by DFT techniques. The actual partitioning is performed by a quantitative clustering algorithm that clusters directly interconnected components based on a new global testability measure for the data path and a global state-reachability measure for the control part. After each clustering step, a new estimation method, based partly on explicit recalculation and partly on gradient techniques, performs incremental testability and state-reachability analysis to update the test properties of the circuit. This process is iterated until the design is partitioned into several disjoint sub-circuits, each of which can be tested independently. The control part is then modified to control the circuit in normal and test modes accordingly. Test quality is therefore improved by independent test generation and application for every partition and by combining the effects of the data path and the control part. Experimental results show the advantages of the proposed algorithm over other conventional approaches.

12.
Incomplete data are often encountered in data sets used in clustering problems, and inappropriate treatment of incomplete data can significantly degrade clustering performance. In view of the uncertainty of missing attributes, we put forward an interval representation of missing attributes based on nearest-neighbor information, named the nearest-neighbor interval, and present a hybrid approach utilizing a genetic algorithm and fuzzy c-means for incomplete data clustering. The overall algorithm operates within the genetic algorithm framework, which searches for appropriate imputations of missing attributes within the corresponding nearest-neighbor intervals to recover the incomplete data set, and hybridizes fuzzy c-means to perform the clustering analysis while simultaneously providing the fitness metric for genetic optimization. Experimental results on a set of real-life data sets demonstrate the better clustering performance of our hybrid approach over the compared methods.
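A minimal sketch of the nearest-neighbor interval idea, assuming missing attributes are coded as NaN; the GA search over these intervals and the fuzzy c-means stage are not reproduced, and the neighbour count `q` is illustrative:

```python
import numpy as np

def nn_interval(X, i, j, q=3):
    """[min, max] of attribute j over the q nearest complete records to
    incomplete record i, with distance taken over i's observed attributes."""
    obs = ~np.isnan(X[i])                                   # attributes present in row i
    complete = np.where(~np.isnan(X).any(axis=1))[0]        # fully observed rows
    d = np.sqrt(((X[complete][:, obs] - X[i, obs]) ** 2).sum(axis=1))
    nn = complete[np.argsort(d)[:q]]
    return X[nn, j].min(), X[nn, j].max()                   # search interval for X[i, j]
```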

13.
We consider the 0–1 Knapsack Problem with Setups. We propose an exact approach that exploits the structure of the ILP formulation of the problem: the variable set is partitioned into two levels, and this partitioning is exploited algorithmically. The proposed approach compares favorably to the algorithms in the literature and to the solver CPLEX 12.5 applied to the ILP formulation. It turns out to be very effective, solving to optimality, within limited CPU time, all instances with up to 100,000 variables.
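The paper's exact algorithm is not reproduced here, but the underlying two-level ILP is standard: family-level setup variables y_i and item-level variables x_ij linked by x_ij <= y_i. A minimal sketch using PuLP (an assumed solver choice; the paper compares against CPLEX 12.5):

```python
import pulp

def solve_kps(families, capacity):
    """families: list of (setup_profit, setup_weight, [(profit, weight), ...]);
    setup_profit may be negative (a setup cost)."""
    m = pulp.LpProblem("knapsack_with_setups", pulp.LpMaximize)
    y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(len(families))]
    x = {(i, j): pulp.LpVariable(f"x{i}_{j}", cat="Binary")
         for i, (_, _, items) in enumerate(families) for j in range(len(items))}
    # Objective: item profits plus (possibly negative) setup profits.
    m += pulp.lpSum(fam[0] * y[i] for i, fam in enumerate(families)) + \
         pulp.lpSum(families[i][2][j][0] * v for (i, j), v in x.items())
    # Knapsack capacity covers both setup weights and item weights.
    m += pulp.lpSum(fam[1] * y[i] for i, fam in enumerate(families)) + \
         pulp.lpSum(families[i][2][j][1] * v for (i, j), v in x.items()) <= capacity
    for (i, j), v in x.items():
        m += v <= y[i]                     # an item requires its family's setup
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(m.objective), [pulp.value(v) for v in y]
```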

14.
15.
Leading species at the forest stand level is a required forest inventory attribute. Information regarding leading species enables the calculation of volume and biomass in support of forest monitoring and reporting activities. In this study, approaches for leading species estimation based upon very high spatial resolution imagery (pixels less than 1 m on a side) have been developed and implemented, with opportunities for improving attribute accuracy using data fusion methods. Over a study region located in the Yukon Territory, Canada, we apply the Dempster–Shafer Theory (DST) to integrate multiple resolutions of satellite imagery (both spatial and spectral), topographic information, and fire disturbance history records for the estimation of leading species. Among the data source combinations tested in the study, QuickBird panchromatic imagery combined with selected optical channels from Landsat-5 Thematic Mapper (TM) imagery provided the highest overall accuracy (70.4%) for identifying leading species, improving accuracy by 3.1% over a baseline classification-tree method applied to all data sources. Additional insights into the application of DST for fusing satellite imagery with ancillary data sources to map leading stand species in a boreal environment are also elaborated upon, including the range and distribution of training data and the establishment of DST mass functions.
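A minimal sketch of Dempster's rule of combination, the core operation in DST fusion; mass functions here are dictionaries over frozenset focal elements, and the species labels in the usage example are purely illustrative:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions by Dempster's rule, renormalising conflict."""
    fused, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb          # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

# Illustrative: evidence from spectral data vs. topography for a stand.
spectral = {frozenset({"pine"}): 0.6, frozenset({"pine", "spruce"}): 0.4}
terrain = {frozenset({"spruce"}): 0.3, frozenset({"pine", "spruce"}): 0.7}
print(dempster_combine(spectral, terrain))
```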

16.
This paper presents an approach that uses vibration, pressure, and current signals for fault diagnosis of the valves in reciprocating compressors. Due to the complexity of the structure and motion of such compressors, the acquired vibration signal normally contains transient impacts and noise. This corrupts the useful information and makes it difficult to diagnose the faults accurately with traditional methods. To reveal the fault patterns contained in this signal, the Teager–Kaiser energy operator (TKEO) is proposed to estimate the amplitude envelopes. For the pressure and current signals, random noise is removed using a wavelet-transform-based denoising method. Subsequently, statistical measures are extracted from all signals to represent the characteristics of the valve conditions. To classify the faults of compressor valves, a new type of learning architecture for deep generative models, the deep belief network (DBN), is applied. A DBN employs a hierarchical structure of multiple stacked restricted Boltzmann machines (RBMs) and is trained through a greedy layer-by-layer learning algorithm. In pattern recognition research, DBNs have proved very effective, delivering high performance on binary-valued data. However, since most signals in fault diagnosis are real-valued, an RBM with Bernoulli hidden units and Gaussian visible units is considered in this study. The proposed approach is validated with signals from a two-stage reciprocating air compressor under different valve conditions. To confirm the superiority of the DBN in fault classification, its performance is compared with that of the relevance vector machine and back-propagation neural networks. The achieved accuracy indicates that the proposed approach is highly reliable and applicable to fault diagnosis of industrial reciprocating machinery.
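The Teager–Kaiser operator itself is a three-sample formula, so a sketch is direct; the wavelet denoising and DBN stages are not reproduced:

```python
import numpy as np

def tkeo(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# The amplitude envelope of a vibration signal can be tracked as the square
# root of the (clipped) TKEO output, highlighting transient valve impacts.
# envelope = np.sqrt(np.clip(tkeo(vibration), 0.0, None))
```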

17.
A complete fault detection and isolation system is designed for a gas–liquid separation unit. It involves the determination and identification of grey-box models, the design of a model-based residual generator, and finally the evaluation of the residuals via a set of statistical tests. The latter are cumulative sum (CUSUM) tests, combined in such a way that both fault detection and fault isolation can be achieved. The performance of the resulting diagnosis system, including the missed-alarm rate, wrong-isolation rate, and mean detection delay, is studied via simulations.
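A minimal one-sided CUSUM sketch over a residual sequence from the model-based residual generator; `drift` and `threshold` are illustrative tuning parameters, not values from the paper:

```python
def cusum(residuals, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate residual excess over `drift`,
    raise an alarm when the statistic crosses `threshold`."""
    g, alarms = 0.0, []
    for t, r in enumerate(residuals):
        g = max(0.0, g + r - drift)
        if g > threshold:
            alarms.append(t)   # detection time index
            g = 0.0            # reset after each alarm
    return alarms
```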

18.
This paper reports the results of two studies carried out in a controlled environment, aimed at understanding the relationships between the movement coordination patterns that emerge during climbing and performance outcomes. It employs a recent nonlinear dimensionality reduction method, multi-scale Jensen–Shannon neighbor embedding (Lee et al., 2015), applied to recordings from movement sensors in order to visualize the coordination patterns adopted by climbers. Initial clustering at the scale of a whole climb links behavioral patterns with climbing fluency/smoothness (i.e., the performance outcome). Further clustering on shorter time intervals, where individual actions within a climb are analyzed, enables more detailed exploratory analysis of behavior. Results suggest that the nature of individual learning curves (the global, trial-to-trial performance) corresponds to certain behavioral patterns (the within-trial motor behavior). We highlight and discuss three distinctive learning curves and their relationship to the emergence of behavioral patterns, namely: no improvement and a lack of new motor behavior; sudden improvement and the emergence of new motor behaviors; and gradual improvement and a lack of new motor behavior.

19.
Data are considered important organizational assets because of their assumed value, including their potential to improve organizational decision-making processes. Such potential value, however, comes with various costs, including those of acquiring, storing, securing, and maintaining the assets at appropriate quality levels. Clearly, if these costs outweigh the value that results from using the data, it would be counterproductive to acquire, store, secure, and maintain them. Cost–benefit assessment is thus particularly important in data warehouse (DW) development; yet very few techniques are available for determining the value the organization will derive from storing a particular data table, and hence for determining which data sets should be loaded into the DW. This research addresses the problem of identifying the set of data with the potential to produce the greatest net value for the organization, by offering a model for performing a cost–benefit analysis on the decision-support views the warehouse can support and by providing techniques for estimating the parameters this model requires.

20.
Speech processing has benefited a great deal from wavelet transforms. Wavelet packets decompose signals into subband components through recursive spectral bisection. In this paper, mixtures of speech signals are decomposed using wavelet packets, and the phase difference between the two mixtures is investigated in the wavelet domain. In our method, a Laplacian mixture model (LMM) is defined, and an expectation–maximization (EM) algorithm is used to train the model and compute its parameters, namely the mixing matrix. We then compare the estimation of the mixing matrix by LMM-EM across different wavelets, and apply an adaptive algorithm in each wavelet packet for speech separation, obtaining better results. The individual speech components of the speech mixtures are thereby separated.
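A minimal sketch of the wavelet-packet front end using PyWavelets; the LMM-EM fit of the mixing matrix is only indicated in a comment, and the wavelet choice "db4" and decomposition level are assumptions:

```python
import numpy as np
import pywt

def packet_coeffs(x, wavelet="db4", level=3):
    """Wavelet-packet coefficients of a 1-D signal, keyed by subband path."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    return {node.path: node.data for node in wp.get_level(level, order="freq")}

# For two mixtures x1, x2: pair coefficients per subband and examine the angle
# np.arctan2(c2, c1); the mixing-matrix columns appear as peaks in that angle
# distribution, which the LMM-EM step would fit (not shown).
```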
