Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
The paper presents a new approach that uses neural networks to predict the performance of a number of dynamic decentralized load-balancing strategies. A distributed multicomputer system using distributed load-balancing strategies is represented by a unified analytical queuing model. A large simulation data set is used to train a neural network with the back-propagation learning algorithm based on gradient descent. The performance model, using the predicted data from the neural network, produces the average response time of various load-balancing algorithms under various system parameters. Validation and comparison with simulation data show that the neural network is very effective in predicting the performance of dynamic load-balancing algorithms. Our work leads to interesting techniques for designing load-balancing schemes for large distributed systems that are computationally very expensive to simulate. One of the important findings is that performance is affected least by the number of nodes, and most by the number of links at each node, in a large distributed system.

3.
Power load is characterized by nonlinear fluctuation and random growth. To address the drawback that the forecasting accuracy of the general GM(1,1) model degrades under large load mutations, this paper proposes a new grey model with grey correlation contest for short-term power load forecasting. To cover the impact of various certain and uncertain climatic and social factors on the model as fully as possible, original series are selected from different viewpoints to construct different forecasting strategies. Exploiting the fact that the GM(1,1) model forecasts well during the smooth rise and drop phases of power load, and that daily power load contains several peaks and valleys, the predicted day is divided into several smooth segments that are forecast separately. Finally, the different forecasting strategies are applied to the different segments through grey correlation contest, so as to avoid the error amplification that results from an improper choice of initial condition. A practical application verifies that, compared with existing grey forecasting models, the proposed model is stable and feasible, with higher forecasting accuracy.
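The GM(1,1) model underlying this contest scheme follows a standard recipe: accumulate the series, fit the whitening equation by least squares, and difference the time-response function back to the original scale. A minimal generic sketch (not the paper's contest model):

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Fit a GM(1,1) grey model to the series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                        # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])             # mean sequence of adjacent AGO values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # develop coefficient, grey input
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.empty(n + steps)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)              # inverse AGO restores the original scale
    return x0_hat[n:]                         # out-of-sample forecasts only
```

On smooth, near-exponential segments this fit is nearly exact, which is the property the segment-wise contest exploits.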

4.
In this paper, we adopt the exponentially weighted moving average (EWMA) method to develop the residual-modification EWMA grey forecasting model REGM(1,1), and combine it with fuzzy theory to derive the fuzzy REGM, or FREGM(1,1), model. The proposed model is used to forecast annual petroleum demand in Taiwan. The experimental results show that the mean absolute percentage error, median absolute percentage error, and symmetric mean absolute percentage error of the FREGM(1,1) model are lower by 23.71%, 12.26%, and 23.06%, respectively, compared with those obtained using the traditional GM(1,1) model.
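The residual-modification idea can be sketched independently of the grey model: smooth the in-sample residuals of any base forecaster with an EWMA and add the smoothed residual to the next forecast. The function names and the base-forecast interface here are illustrative assumptions:

```python
import numpy as np

def ewma(values, alpha=0.3):
    """Exponentially weighted moving average: recent residuals weigh more."""
    s = float(values[0])
    for v in values[1:]:
        s = alpha * float(v) + (1 - alpha) * s
    return s

def corrected_forecast(actual, fitted, base_forecast, alpha=0.3):
    """Adjust a base model's next forecast by the EWMA of its past residuals."""
    residuals = np.asarray(actual, float) - np.asarray(fitted, float)
    return base_forecast + ewma(residuals, alpha)
```

If the base model has a systematic bias (e.g., it consistently underestimates by 1 unit), the EWMA of the residuals recovers that bias and the correction removes it.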

5.
Forecasting activities are widely performed in various areas of supply chains for predicting important supply chain management (SCM) measurements, such as demand volume in order management, product quality in manufacturing processes, capacity usage in production management, traffic costs in transportation management, and so on. This paper presents a computerized system for implementing the forecasting activities required in SCM. For building a generic forecasting model applicable to SCM, a linear causal forecasting model is proposed and its coefficients are efficiently determined using the proposed genetic algorithms (GAs): a canonical GA and a guided GA (GGA). Compared to the canonical GA, the GGA adopts a fitness function with penalty operators and uses a population diversity index (PDI) to overcome premature convergence. The results obtained from two case studies show that the proposed GGA provides the best forecasting accuracy and greatly outperforms the regression analysis and canonical GA methods. A computerized system implementing the forecasting functions was developed and is running successfully on real glass manufacturing lines.
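A guided GA of the kind described, with penalty terms in the fitness and a diversity check that re-seeds part of the population, can be sketched as follows. The penalty form, diversity index, and all parameters are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(coef, X, y, penalty=1e3, bound=10.0):
    """Negative sum of squared errors, minus a penalty for out-of-bound
    coefficients (the penalty operator here is an illustrative assumption)."""
    err = y - X @ coef
    violation = np.clip(np.abs(coef) - bound, 0.0, None).sum()
    return -(err @ err) - penalty * violation

def population_diversity(pop):
    """A simple diversity index: mean distance to the population centroid."""
    return float(np.mean(np.linalg.norm(pop - pop.mean(axis=0), axis=1)))

def guided_ga(X, y, pop_size=60, gens=150, sigma=0.5, pdi_floor=0.05):
    """Elitist GA with annealed Gaussian mutation; re-seeds half the
    population whenever diversity falls below pdi_floor."""
    dim = X.shape[1]
    pop = rng.uniform(-5, 5, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(c, X, y) for c in pop])
        parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]   # keep the elite half
        children = parents + rng.normal(0.0, sigma, parents.shape)
        pop = np.vstack([parents, children])
        sigma *= 0.97                                    # anneal the mutation step
        if population_diversity(pop) < pdi_floor:        # fight premature convergence
            pop[pop_size // 2:] = rng.uniform(-5, 5, (pop_size // 2, dim))
    fit = np.array([fitness(c, X, y) for c in pop])
    return pop[np.argmax(fit)]
```

The re-seeding step plays the role the PDI plays in the paper: it injects fresh genetic material when the population has collapsed around one region.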

6.
In the last decade, new forecasting methods have been developed that differ from traditional statistical methods. In particular, it is possible to “efficiently” predict any sequence of outcomes without any hypothesis about the nature of the source generating it. In the present paper, a modified version of the universal forecasting algorithm is considered. The main part of the paper is devoted to algorithmic analysis of universal forecasting methods and to exploring the limits of their performance.

7.
Evolving submicron technology makes it particularly attractive to use decentralized designs. A common form of decentralization adopted in processors is to partition the execution core into multiple clusters. Each cluster has a small instruction window and a set of functional units. A number of algorithms have been proposed for distributing instructions among the clusters. The first part of this paper analyzes (qualitatively as well as quantitatively) the effect of various hardware parameters, such as the type of cluster interconnect, the fetch size, the cluster issue width, the cluster window size, and the number of clusters, on the performance of different instruction distribution algorithms. The study shows that the relative performance of the algorithms is very sensitive to these hardware parameters and that the algorithms that perform relatively better with four or fewer clusters are generally not the best ones for a larger number of clusters. This is important, given that with an imminent increase in the transistor budget, more clusters are expected to be integrated on a single chip. The second part of the paper investigates alternate interconnects that provide scalable performance as the number of clusters is increased. In particular, it investigates two hierarchical interconnects, a single ring of crossbars and multiple rings of crossbars, as well as instruction distribution algorithms that take advantage of these interconnects. Our study shows that these new interconnects with the appropriate distribution techniques achieve an IPC (instructions per cycle) that is 15-20 percent better than the most scalable existing configuration, and is within 2 percent of that achieved by a hypothetical ideal processor having a 1-cycle latency crossbar interconnect. These results confirm the utility and applicability of hierarchical interconnects and hierarchical distribution algorithms in clustered processors.

8.
9.
Traditional data-driven energy consumption forecasting models, including machine learning and deep learning methods, have shown outstanding forecasting accuracy and efficiency, but this performance depends on sufficient training data. Moreover, the derived forecasting model is only applicable to its training dataset and is usually tied to a specific household. In real-world smart city development, a centralized model is required to capture and forecast the energy consumption patterns of multiple households, a scenario in which traditional data-driven forecasting approaches may become invalid. A consistent model is demanded that captures the consumption patterns of multiple households, and privacy is also a major concern in such scenarios. Accurate energy consumption forecasting with privacy preservation has therefore become a key point of state-of-the-art research. In this study, we adopt an innovative privacy-preserving structure that combines deep learning and federated learning. While guaranteeing forecasting accuracy and privacy preservation, this structure achieves a single consistent model that forecasts the energy consumption of multiple households simultaneously, with model updates exchanged via the transmission control protocol (TCP).
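The aggregation step of such a federated structure is typically federated averaging: each household trains locally and only model weights travel to the server. A minimal sketch (FedAvg-style weighted averaging is an assumption; the paper's exact aggregation rule may differ):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server-side federated averaging: combine per-household model weights,
    weighted by local dataset size, without touching raw consumption data."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]
```

Only the weight arrays cross the network; each household's consumption readings never leave the device, which is the privacy property the study relies on.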

10.
It has been recently shown that calibration with an error less than Δ > 0 is almost surely guaranteed with a randomized forecasting algorithm, where forecasts are obtained by randomly rounding the deterministic forecasts up to Δ. We show that this error cannot be improved for a vast majority of sequences: we prove that, using a probabilistic algorithm, we can effectively generate, with probability close to one, a sequence “resistant” to any randomized rounding forecasting with an error much smaller than Δ. We also reformulate this result by means of a probabilistic game.
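Randomized rounding of a forecast to the grid of multiples of Δ, chosen so the expectation of the rounded forecast equals the original forecast, can be sketched as:

```python
import random

def randomized_round(p, delta, rng=random):
    """Round forecast p to a multiple of delta at random so that the
    expected value of the rounded forecast is exactly p."""
    lo = (p // delta) * delta                 # nearest multiple below p
    frac = (p - lo) / delta                   # how far p sits toward the next multiple
    return lo + delta if rng.random() < frac else lo
```

For example, a forecast of 0.37 with Δ = 0.1 is rounded up to 0.4 with probability 0.7 and down to 0.3 with probability 0.3.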

11.
12.
Streamflow forecasting has always been a challenging task for water resources engineers and managers. This study applies Multilayer Perceptron (MLP) networks optimized with three training algorithms, resilient back-propagation (MLP_RP), variable learning rate (MLP_GDX), and Levenberg-Marquardt (MLP_LM), to forecast streamflow in the Aspas Watershed, located in Fars province in southwestern Iran. The algorithms were trained and tested using 3 years of data. Antecedent streamflow with a 1-day time lag constituted the first input vector, and the MLP with this vector, labeled MLP1, was the first model. Inclusion of streamflow with two, three, and four time lags led to input vectors 2, 3, and 4, which when combined with the MLP resulted in MLP2, MLP3, and MLP4, respectively. The Levenberg-Marquardt algorithm performed best among the three training algorithms employed. Overall, the MLP4_LM model yielded the best results, with a determination coefficient of 0.93 and a root mean square error of 2.6 m³/s.
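Building the lagged input vectors described above, flow at times t-1 through t-k as predictors of flow at time t, together with the reported error measures, can be sketched as:

```python
import numpy as np

def lagged_patterns(flow, lags):
    """Row t of X holds flow[t-1] ... flow[t-lags]; the target is flow[t]."""
    flow = np.asarray(flow, dtype=float)
    X = np.column_stack([flow[lags - j: len(flow) - j] for j in range(1, lags + 1)])
    return X, flow[lags:]

def rmse(y, yhat):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2)))

def r_squared(y, yhat):
    """Coefficient of determination."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))
```

With `lags=4` this produces exactly the MLP4-style input vectors; any regressor can then be fit to (X, y).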

13.
Application of an Improved ID3 Algorithm to Customer Churn Prediction (cited by 2; self-citations: 0, citations by others: 2)
This paper analyzes the ID3 decision-tree algorithm widely used in classification prediction, and points out its shortcomings, including a bias toward attributes with many values and low computational efficiency. On this basis, an improved ID3 algorithm is proposed and applied to customer churn prediction at a mobile telecommunications company. The improved algorithm overcomes the value bias through attribute weighting, and significantly improves computational efficiency by exploiting the recursive property of the entropy function together with a lookup table for the binary entropy function. Application results show that the performance of the proposed improved algorithm is markedly better.
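The binary-entropy lookup table and attribute weighting can be sketched as follows (the grid resolution and the weighting interface are illustrative assumptions):

```python
import numpy as np

# Binary entropy H(p) tabulated once on a fine grid; queries become lookups,
# which is the spirit of the paper's table-based speed-up.
_P = np.linspace(0.0, 1.0, 10001)
with np.errstate(divide="ignore", invalid="ignore"):
    _H = -(_P * np.log2(_P) + (1.0 - _P) * np.log2(1.0 - _P))
_H[np.isnan(_H)] = 0.0                       # define H(0) = H(1) = 0

def binary_entropy(p):
    """Table lookup instead of two logarithms per query."""
    return float(_H[int(round(p * 10000))])

def weighted_gain(pos, n, splits, weight=1.0):
    """Information gain of a binary-class split, scaled by an attribute
    weight to counter ID3's bias toward many-valued attributes."""
    h_children = sum(m / n * binary_entropy(q / m) for q, m in splits)
    return weight * (binary_entropy(pos / n) - h_children)
```

Down-weighting an attribute with many values (weight below 1) makes its gain less attractive, which is one plausible reading of the attribute-weighting fix.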

14.
It is very common, nowadays, to find that a new product or technology is substituting an older one to satisfy a particular consumer need. But with the advancement of technology, there are many instances where a particular product or technology is replacing an older one while at the same time being replaced by a newer one. This relatively new substitution phenomenon can be called a multilevel substitution process. It seems that until now no effort has been directed at studying multilevel substitution cases. Moreover, any forecast made at a given point in time needs to be updated as varying circumstances influence the elements of the forecast. This paper, therefore, presents a systematic methodology for forecasting multilevel technological substitution, incorporating various forms of time-dependent parameters in an existing trend extrapolation model. The methodology is based on Forrester's “System Dynamics” technique. The developed model forecasts the market share, as well as the actual market size, of each of the competing technologies or products under various assumptions about the growth of the total joint market based on past trends or future anticipations.
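A multilevel substitution chain, where each technology captures share from its predecessor while losing share to its successor, can be sketched with a simple Fisher-Pry-style share-flow model (the rate law and Euler integration are illustrative assumptions, not the paper's system-dynamics formulation):

```python
def multilevel_shares(shares, rates, dt=0.1, steps=500):
    """Euler-integrate a substitution chain: technology i+1 captures market
    share from technology i at rate rates[i] * shares[i] * shares[i+1]."""
    f = [float(s) for s in shares]
    for _ in range(steps):
        flows = [rates[i] * f[i] * f[i + 1] * dt for i in range(len(f) - 1)]
        for i, q in enumerate(flows):
            f[i] -= q                  # the older technology loses share...
            f[i + 1] += q              # ...to its immediate successor
    return f
```

The middle technology both gains (from level 1) and loses (to level 3) at the same time, which is exactly the multilevel situation the paper describes.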

15.
Place recognition is a fundamental perceptual problem at the heart of many basic robot operations, most notably mapping. Failures can result from ambiguous sensor readings and environments with similar appearances. In this paper, we describe a robust place recognition algorithm that fuses a number of uncertain local matches into a high-confidence global match. We describe the theoretical basis of the approach and present extensive experimental results from a variety of sensor modalities and environments.

16.
An adaptive grid model is being developed to reduce the resolution-related uncertainty in air quality predictions. By clustering the grid nodes in regions where errors in pollutant concentrations would potentially be large, the model is expected to generate much more accurate results than its fixed, uniform grid counterparts. The repositioning of grid nodes is performed automatically using a weight function that assumes large values when the curvature (change of slope) of the pollutant fields is large. Despite the movement of the nodes, the structure of the grid does not change: each node retains its connectivity to the same neighboring nodes. Since there is no a priori knowledge of the grid movement, the input data must be re-gridded after each adaptation step, throughout the simulation. Emissions are one of the major inputs and mapping them to the adapted grid is a computationally intensive task. Efficient intersection algorithms are being developed that take advantage of the unchanging grid structure. Here, the grid node repositioning and intersection algorithms are evaluated using surface elevation data. Two elevation data sets are reduced to one-fourth of their sizes using uniform as well as adaptive grids. The first data set contains important terrain features near the boundaries while the second has all of its features far away from the boundaries. The compression of the first data set using grid node repositioning results in a maximum error that is 25% smaller compared to a uniform grid with the same number of nodes. The maximum error associated with the adaptive grid compression of the second data set is 60% smaller compared to the uniform grid compression. These results show that the adaptive grid algorithm has the potential of significantly improving the accuracy of air quality predictions, especially when the regions of changing slope are far away from the boundaries.
Indeed, in a preliminary air quality application, the adaptive grid displayed superior performance in capturing the details of plumes from a large number of emission sources. The algorithms are computationally efficient and the overhead involved in repositioning the grid nodes and intersecting the grid cells with emission sources is not limiting in air quality simulations.
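The repositioning idea, concentrating nodes where the curvature-driven weight function is large while keeping the node count and connectivity fixed, can be sketched in one dimension by equidistributing the cumulative weight:

```python
import numpy as np

def reposition_nodes(x, f, floor=0.05):
    """Equidistribute a curvature-based weight: nodes drift toward regions
    where the slope of f changes quickly, yet connectivity is unchanged.
    `floor` keeps some weight everywhere so no region is abandoned."""
    w = np.abs(np.gradient(np.gradient(f, x), x)) + floor   # weight ~ |f''|
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cum[-1], len(x))             # equal weight per cell
    return np.interp(targets, cum, x)                       # new node positions
```

For a field with a sharp internal front (e.g., a tanh profile), most nodes migrate into the high-curvature zone while the endpoints and node ordering are preserved.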

17.
Neural networks have been widely used for short-term, and to a lesser degree medium- and long-term, demand forecasting. In the majority of cases for the latter two applications, multivariate modeling was adopted, where the demand time series is related to other weather, socio-economic, and demographic time series. Disadvantages of this approach include the fact that influential exogenous factors are difficult to determine, and accurate data for them may not be readily available. This paper uses univariate modeling of the monthly demand time series based only on data for 6 years to forecast the demand for the seventh year. Both neural and abductive networks were used for modeling, and their performance was compared. A simple technique is described for removing the upward growth trend prior to modeling the demand time series, to avoid problems associated with extrapolating beyond the data range used for training. Two modeling approaches were investigated and compared: iteratively using a single next-month forecaster, and employing 12 dedicated models to forecast the 12 individual months directly. Results indicate better performance by the first approach, with a mean absolute percentage error (MAPE) of the order of 3% for abductive networks. Performance is superior to naïve forecasts based on persistence and seasonality, and is better than results quoted in the literature for several similar applications using multivariate abductive modeling, multiple regression, and univariate ARIMA analysis. Automatic selection of only the most relevant model inputs by the abductive learning algorithm provides better insight into the modeled process and allows constructing simpler neural network models with reduced data dimensionality and improved forecasting performance.
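The trend-removal step described above can be sketched as fitting a growth trend, modeling the residuals, and restoring the trend when forecasting beyond the training range (the linear trend form is an assumption; the paper does not specify its trend model here):

```python
import numpy as np

def remove_trend(y):
    """Fit a linear growth trend; return the detrended residuals plus a
    `restore` function that re-applies the trend at any future index."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    detrended = y - (slope * t + intercept)
    def restore(index, residual):
        # Works for indices beyond the training range, which is the point:
        # the network never has to extrapolate the growth itself.
        return slope * index + intercept + residual
    return detrended, restore
```

The forecaster is trained on `detrended`, and its outputs are passed through `restore` to obtain demand on the original scale.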

18.
We suggest and experimentally investigate a method to construct forecasting algorithms based on data compression methods (the so-called archivers). Using the example of predicting currency exchange rates, we show that the precision of the predictions thus obtained is relatively high. Translated from Problemy Peredachi Informatsii, No. 1, 2005, pp. 74–78. Original Russian text © 2005 by Ryabko, Monarev. Supported in part by the Russian Foundation for Basic Research, project no. 03-01-00495, and INTAS, grant 00-738.
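The compression-based idea can be sketched directly with a general-purpose archiver: predict the continuation that the compressor encodes most cheaply, since a good compressor assigns short codes to likely continuations. Using zlib here is an illustrative choice; the paper's experiments may use different archivers:

```python
import zlib

def predict_next(history, alphabet=b"01"):
    """Predict the next symbol as the candidate whose appended continuation
    the compressor encodes most cheaply (adds the least new information)."""
    candidates = [bytes([b]) for b in alphabet]
    return min(candidates, key=lambda c: len(zlib.compress(history + c, 9)))
```

On a highly regular history the compressor favors the symbol that continues the pattern; on unstructured data the compressed lengths tie and the prediction is uninformative.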

19.
A New Grey Forecasting Model and Its Modeling Mechanism (cited by 5; self-citations: 0, citations by others: 5)
To improve the forecasting accuracy of grey models and broaden their range of application, a new grey forecasting model, NGM(1,1,k), is constructed for data sequences with approximately non-homogeneous exponential characteristics. Formulas for the parameters of the new grey model are derived by the least-squares method, the time-response function of the model is obtained using its differential equation as the deductive tool, and the modeling accuracy is analyzed both theoretically and experimentally. The results demonstrate the validity and applicability of the proposed grey model.

20.
Time series forecasting concerns the prediction of future values based on observations previously taken at equally spaced time points. Statistical methods have been extensively applied in the forecasting community for the past decades. Recently, machine learning techniques have drawn attention, and useful forecasting systems based on these techniques have been developed. In this paper, we propose an approach based on neuro-fuzzy modeling for time series prediction. Given a predicting sequence, the local context of the sequence is located in the series of observed data. Proper lags of relevant variables are selected and training patterns are extracted. Based on the extracted training patterns, a set of TSK fuzzy rules is constructed, and the parameters involved in the rules are refined by a hybrid learning algorithm. The refined fuzzy rules are then used for prediction. Our approach has several advantages: it produces adaptive forecasting models, it works for both univariate and multivariate prediction, and it supports one-step as well as multi-step prediction. Several experiments are conducted to demonstrate the effectiveness of the proposed approach.
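The inference step of a first-order TSK rule base, a firing-strength-weighted average of linear consequents, can be sketched as follows (the rule representation is an illustrative assumption):

```python
import numpy as np

def tsk_predict(x, rules):
    """First-order TSK inference: each rule is (centres, spreads, coefs, bias)
    with Gaussian antecedents; the output is the firing-strength-weighted
    average of the rules' linear consequents."""
    x = np.asarray(x, dtype=float)
    num = den = 0.0
    for centres, spreads, coefs, bias in rules:
        w = float(np.prod(np.exp(-0.5 * ((x - centres) / spreads) ** 2)))  # firing strength
        num += w * (float(np.dot(coefs, x)) + bias)
        den += w
    return num / den
```

The hybrid learning the paper mentions would tune the antecedent parameters (centres, spreads) and the consequent parameters (coefs, bias) of each rule; here they are simply given.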


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23
