20 similar documents found (search time: 406 ms)
1.
2.
3.
4.
The steel sales and market decision support system is a unified enterprise data analysis system built on a data warehouse platform. The system makes full use of the massive data accumulated from internal operations and the external market, applying advanced data mining, data analysis, and predictive modeling techniques to extract deep knowledge from the operational systems and to analyze and forecast the enterprise's various business indicators from multiple perspectives, thereby providing the enterprise with an analysis platform for understanding and forecasting internal and market operating conditions.
5.
The authors describe a method of building a Web-based decision support system by accessing OLAP server data over an HTTP connection, and illustrate the construction process with a concrete example: building a data warehouse on MS SQL Server 2000, creating the OLAP service with Analysis Services, configuring a Web site on IIS, and designing Web pages that provide PTS services, together with an analysis example. The paper discusses the mechanism by which the HTTP connection lets IIS exchange information with the analysis server, in particular how to solve the problem of long-running Web pages failing to return data. The results of running the example show that, using Microsoft's solution, an enterprise can quickly build a Web-based decision support system.
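The abstract above does not reproduce the request format, but Analysis Services is conventionally queried over HTTP by POSTing an XMLA (XML for Analysis) SOAP envelope to the IIS-hosted data pump. The sketch below builds such an Execute envelope; the catalog name `SalesDW`, the cube `[Sales]`, and the measure name are hypothetical placeholders, not names from the paper.

```python
# Sketch: constructing an XMLA "Execute" SOAP envelope for querying an
# OLAP (Analysis Services) server over HTTP. Cube, catalog, and measure
# names are invented for illustration.

def build_xmla_execute(mdx: str, catalog: str) -> str:
    """Wrap an MDX statement in an XMLA Execute SOAP envelope."""
    return f"""<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command><Statement>{mdx}</Statement></Command>
      <Properties>
        <PropertyList><Catalog>{catalog}</Catalog></PropertyList>
      </Properties>
    </Execute>
  </soap:Body>
</soap:Envelope>"""

envelope = build_xmla_execute(
    "SELECT [Measures].[Amount] ON COLUMNS FROM [Sales]", "SalesDW")

# The envelope would then be POSTed to the IIS-hosted data pump, e.g.:
#   requests.post("http://server/olap/msmdpump.dll", data=envelope,
#                 headers={"Content-Type": "text/xml"})
```

The network call is shown only as a comment, since it requires a configured IIS/Analysis Services endpoint.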
6.
7.
Cai Xiaoxia 《Canadian Metallurgical Quarterly》2011,30(2)
Data mining is an emerging information processing technology that plays an important role in the use and extraction of information. Building on a discussion of data mining technology, this paper argues for the necessity and feasibility of applying data mining in libraries, and analyzes its applications in library resource building, personalized reader services, reference services, weeding of outdated materials, and the Web.
8.
A Method for Determining the Subjects, Dimensions, and Granularity of a Marketing Analysis Data Warehouse for Iron and Steel Enterprises  Total citations: 1 (self: 0, others: 1)
Starting from the premise that the key to building a data warehouse lies in properly determining its subjects, data dimensions, and granularity, this paper proposes an exploratory method by which iron and steel enterprises can determine the subjects, data dimensions, and analysis granularity of a marketing analysis data warehouse according to core market factors and the content and level of decision-making needs, and validates the method.
9.
Research on the Application of OLAP Technology in the Nangang Management Information System  Total citations: 2 (self: 0, others: 2)
Building on Nangang's existing database management system based on online transaction processing (OLTP) technology, and in view of how Nangang's production data are actually used, this work applies online analytical processing (OLAP) technology, using Microsoft Visual Basic as the front-end development tool to analyze data in a SQL Server 2000 database and thereby provide a basis for effective enterprise decision-making. The two data processing approaches are also compared, and the method and steps for implementing OLAP in SQL Server 2000 are briefly introduced.
10.
11.
12.
Hao “Howard” Nie Sheryl Staub-French Thomas Froese 《Canadian Metallurgical Quarterly》2007,21(3):164-174
Existing project management practices underemphasize the interrelationships between individual work tasks and other project components. This leaves the interdependencies under-recognized and under-managed, and promotes a “one-time event” mindset that hinders the quest for ongoing performance improvements. We propose a unified approach to project management that brings an integrative view to the forefront, centered on the notion of defining multiple views of the project and the interrelationships that exist between the views. Online Analytical Processing (OLAP) technology can provide a platform for implementing the unified approach to project management: OLAP’s multidimensional data structure matches well with a multiview framework. We developed and tested a prototype that represents two OLAP cubes for cost control and manpower allocation across five dimensions: time, participant, task, product, and cost type. Through several project management scenarios, we evaluate the feasibility of mapping OLAP onto project management activities, demonstrate the views available for different types of project management decision-making, and discuss the benefits and limitations of the OLAP-based platform. These tests demonstrate the potential of OLAP technology to provide flexible and efficient analysis of construction project data.
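The multidimensional idea described above can be sketched in a few lines: records keyed by the five dimensions named in the abstract, with roll-up (aggregate away some dimensions) and slice (fix one dimension to a member). The field names and cost figures below are invented for illustration, not taken from the prototype.

```python
# Minimal OLAP-style cube sketch over the five dimensions from the
# abstract: time, participant, task, product, cost type. Data invented.
from collections import defaultdict

DIMS = ("time", "participant", "task", "product", "cost_type")

records = [
    # (time, participant, task, product, cost_type, cost)
    ("2024-01", "crewA", "forming",   "slab", "labor",    1200.0),
    ("2024-01", "crewA", "forming",   "slab", "material",  800.0),
    ("2024-02", "crewB", "finishing", "slab", "labor",     500.0),
]

def rollup(rows, keep):
    """Aggregate cost over all dimensions except those in `keep`."""
    idx = [DIMS.index(d) for d in keep]
    out = defaultdict(float)
    for row in rows:
        out[tuple(row[i] for i in idx)] += row[-1]
    return dict(out)

def slice_(rows, dim, value):
    """Fix one dimension to a single member."""
    i = DIMS.index(dim)
    return [r for r in rows if r[i] == value]

by_time = rollup(records, keep=("time",))      # total cost per month
jan_by_cost = rollup(slice_(records, "time", "2024-01"), keep=("cost_type",))
```

A real OLAP engine precomputes and indexes such aggregates; the sketch only shows how the same question ("January cost by cost type") maps onto slice-then-rollup operations.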
13.
Associative recognition requires subjects to discriminate intact from rearranged test pairs. In a 3-alternative forced-choice procedure, an intact test pair is tested against 2 rearranged distractors, which may overlap by sharing a common word in each test pair alternative (OLAP) or may not share words (NOLAP). With the exception of B. B. Murdock's (see record 1983-04936-001) theory of distributed associative memory (TODAM), current global matching models predict that forced-choice performance will be better for OLAP than for NOLAP test trials; TODAM can predict either an OLAP advantage or no difference between the OLAP and NOLAP test conditions. The performance of the models is produced by fundamental statistical properties, and with the exception of TODAM, the OLAP advantage cannot be eliminated by varying parameters. Results of 3 experiments, however, show a NOLAP advantage. The implications of these results for global matching models and for the relationship between recall and recognition are discussed.
14.
15.
Ping Chen Rebecca Bari Buchheit James H. Garrett Jr. Sue McNeil 《Canadian Metallurgical Quarterly》2005,19(2):137-147
Data quality is extremely important where information dramatically influences the decisions being made. In the context of civil infrastructure systems, planning and management activities are critically dependent on data to support the efficient allocation of resources, detailed cost-benefit analysis, and informed decision-making. A Web-based tool called Web-Vacuum, which employs data-mining (DM) techniques and partially implements a two-level data-quality assessment procedure, was developed to support general-purpose data-quality assessment. The algorithms, workflow, and interfaces used in Web-Vacuum are presented. A data-quality assessment case study using a bridge management system data set demonstrates that Web-Vacuum can assist in determining the quality of a data set.
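To make the notion of automated data-quality assessment concrete, here is an illustrative sketch (not the actual Web-Vacuum algorithms) of two basic checks such a tool might run on a bridge-management table: per-column completeness and a per-field range check. The column names and bounds are hypothetical.

```python
# Two elementary data-quality checks on a toy bridge-management data set.
# Schema and thresholds are invented for illustration.

rows = [
    {"bridge_id": "B001", "year_built": 1975, "deck_rating": 6},
    {"bridge_id": "B002", "year_built": 2050, "deck_rating": 7},  # out of range
    {"bridge_id": "B003", "year_built": None, "deck_rating": 9},  # missing value
]

def completeness(rows, col):
    """Fraction of rows with a non-missing value in `col`."""
    return sum(r[col] is not None for r in rows) / len(rows)

def range_violations(rows, col, lo, hi):
    """Rows whose value falls outside [lo, hi]; missing values are skipped."""
    return [r for r in rows if r[col] is not None and not lo <= r[col] <= hi]

year_complete = completeness(rows, "year_built")              # 2 of 3 rows
bad_years = range_violations(rows, "year_built", 1800, 2025)  # flags B002
```

A two-level procedure of the kind the paper describes would layer such field-level checks under data-set-level mining (e.g., cross-field consistency rules); the sketch covers only the first level.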
16.
17.
Measuring the Productivity of the Construction Industry in China by Using DEA-Based Malmquist Productivity Indices  Total citations: 1 (self: 0, others: 1)
Data envelopment analysis (DEA) measures the relative efficiency of decision-making units and avoids any functional specification of the production relationship between inputs and outputs. The DEA-based Malmquist productivity index (MPI) measures productivity change over time. In this paper, the MPI is used to measure the productivity changes of the Chinese construction industry from 1997 to 2003. The results of the analyses indicate that the productivity of the Chinese construction industry improved continuously from 1997 to 2003, except for a decline from 2001 to 2002. Gaps in productivity development level are found among the western, midland, eastern, and northeastern regions of the Chinese construction industry. The DEA-based MPI approach provides a good tool to support policy-setting and strategic decisions for improving the performance of the Chinese construction industry and promoting the sustainable development of the industry across regions.
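The abstract does not reproduce the index itself; in the standard output-oriented formulation (Färe et al.), which DEA-based MPI studies conventionally follow, the index between periods $t$ and $t+1$ is the geometric mean of two distance-function ratios:

```latex
M_o\!\left(x^{t+1},y^{t+1},x^{t},y^{t}\right) =
\left[
  \frac{D_o^{t}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t}\!\left(x^{t},y^{t}\right)}
  \times
  \frac{D_o^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t+1}\!\left(x^{t},y^{t}\right)}
\right]^{1/2}
```

where $D_o^{s}(x,y)$ is the output distance function evaluated against the period-$s$ frontier. A value greater than 1 indicates productivity growth; the index decomposes into an efficiency-change term $D_o^{t+1}(x^{t+1},y^{t+1})/D_o^{t}(x^{t},y^{t})$ times a technical-change (frontier-shift) term.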
18.
Orna Raz Rebecca Buchheit Mary Shaw Philip Koopman Christos Faloutsos 《Canadian Metallurgical Quarterly》2004,18(4):291-300
Monitoring data from event-based monitoring systems are becoming more and more prevalent in civil engineering. An example is truck weigh-in-motion (WIM) data. These data are used in the transportation domain for various analyses, such as analyzing the effects of commercial truck traffic on pavement materials and designs. It is important that such analyses use good-quality data, or at least account appropriately for any deficiencies in the quality of the data they are using. Low-quality data may exist due to problems in the sensing hardware, in its calibration, or in the software processing the raw sensor data. The vast quantities of data collected make it infeasible for a human to examine all the data. The writers propose a data mining approach for automatically detecting semantic anomalies, i.e., unexpected behavior, in monitoring data. The writers’ method provides automated assistance to domain experts in setting up constraints for data behavior. The effectiveness of this method is shown by reporting its successful application to data from an actual WIM system: the experimental data the Minnesota Department of Transportation collected at its Minnesota road research project (Mn/ROAD) facilities. The constraints the expert set up by applying this method were useful for automatic anomaly detection over the Mn/ROAD data; they detected anomalies the expert cared about, e.g., unlikely vehicles and erroneously classified vehicles, and the misclassification rate was reasonable for a human to handle (usually less than 3%). Moreover, the expert gained insights about system behavior, such as realizing that a system-wide change had occurred. The constraints detected, for example, periods in which the WIM system reported that roughly 20% of the vehicles classified as three-axle single-unit trucks had only one axle.
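The closing example above (three-axle trucks reported with one axle) suggests the flavor of constraint involved. Here is a toy sketch of such a semantic-anomaly check; the class labels, expected axle counts, and records are invented, whereas the real Mn/ROAD constraints were set up interactively by a domain expert.

```python
# Sketch: flag WIM records whose reported axle count contradicts the
# assigned vehicle class. Labels and data are hypothetical.

vehicles = [
    {"vid": 1, "cls": "3-axle single-unit", "axles": 3},
    {"vid": 2, "cls": "3-axle single-unit", "axles": 1},  # semantic anomaly
    {"vid": 3, "cls": "5-axle semi",        "axles": 5},
]

EXPECTED_AXLES = {"3-axle single-unit": 3, "5-axle semi": 5}

def semantic_anomalies(rows):
    """Records violating the class/axle-count constraint."""
    return [r for r in rows
            if r["cls"] in EXPECTED_AXLES
            and r["axles"] != EXPECTED_AXLES[r["cls"]]]

flagged = semantic_anomalies(vehicles)
rate = len(flagged) / len(vehicles)  # fraction routed to expert review
```

The point of the paper's approach is that such constraints are proposed with automated assistance rather than written entirely by hand; the sketch shows only what an individual constraint check looks like once set up.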
19.
One of the daunting tasks of a neural network modeler is prescribing an appropriate training termination criterion, a criterion that avoids underfitting or overfitting the underlying functional relationship between input and output variables. This is particularly true when dealing with smaller data sets that do not offer the luxury of splitting the database into traditional training, testing, and validation sets. In the absence of a testing data set or when the testing data set is small, which is not very uncommon when working with environmental databases, it is extremely difficult to know when to terminate the training exercise. This paper proposes a new criterion that provides adequate guidance on training termination without the necessity for a testing data set and illustrates the validity of the proposed criterion on three data sets for water resources and environmental engineering applications. An extensive study of a number of large and small data sets has indicated that the moving average of relative strength index of a randomly generated dummy input variable tends to reach zero at the optimal termination point and tends to move away from zero beyond the optimal point. Based on this observation, a training terminating index was developed, tested, and validated on three datasets. 相似文献
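The abstract does not give the exact definition of the proposed training termination index, only that it builds on the moving average of the relative strength index (RSI) of a dummy input variable. The sketch below shows the textbook RSI plus a moving average, so the reader can see the machinery being referred to; it is not the paper's index, and the window lengths are arbitrary.

```python
# Hedged sketch: a simple (non-smoothed) relative strength index and its
# moving average. A strictly rising series yields RSI = 100; values near
# 50 indicate no net trend. Window sizes here are arbitrary choices.

def rsi(values, period=14):
    """RSI computed from the trailing `period` first differences."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    out = []
    for i in range(period, len(diffs) + 1):
        window = diffs[i - period:i]
        gain = sum(d for d in window if d > 0)
        loss = -sum(d for d in window if d < 0)
        if loss == 0:
            out.append(100.0)          # no losses: maximal relative strength
        else:
            rs = gain / loss
            out.append(100.0 - 100.0 / (1.0 + rs))
    return out

def moving_average(xs, w):
    """Trailing moving average with window `w`."""
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

series = list(range(30))               # strictly increasing dummy series
ma = moving_average(rsi(series, period=14), w=5)
```

In the paper's scheme the series fed in would be some training-time statistic of a randomly generated dummy input (the abstract does not specify which), with training stopped where the index reaches zero.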
20.
As the construction industry adapts to new computer technologies in terms of hardware and software, computerized construction data are becoming increasingly available. The explosive growth of many business, government, and scientific databases has begun to far outpace our ability to interpret and digest the data. Such volumes of data clearly overwhelm traditional methods of data analysis such as spreadsheets and ad hoc queries. The traditional methods can create informative reports from data, but cannot analyze the contents of those reports. A significant need exists for a new generation of techniques and tools able to automatically assist humans in analyzing mountains of data for useful knowledge. Knowledge discovery in databases (KDD) and data mining (DM) are tools that allow identification of valid, useful, and previously unknown patterns, so that the construction manager may analyze large amounts of construction project data. These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases, and visualization to automatically extract concepts, interrelationships, and patterns of interest from large databases. This paper presents the steps required for the implementation of KDD: (1) identification of problems, (2) data preparation, (3) data mining, (4) data analysis, and (5) refinement. To test the feasibility of the proposed approach, a prototype KDD system was developed and tested with a construction management database, RMS (Resident Management System), provided by the U.S. Army Corps of Engineers. In this paper, the KDD process was applied to identify the cause(s) of construction activity delays; its possible applications can be extended to identify cause(s) of cost overruns and quality control/assurance issues, among other construction problems. Predictable patterns may be revealed in construction data that were previously thought to be chaotic.
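Steps (2)–(4) above can be sketched in miniature: prepare an activity table, then mine it for the dominant delay cause. The schema, records, and causes below are invented for illustration; the real RMS database is far richer, and the paper's mining step would use actual DM algorithms rather than simple counting.

```python
# Toy KDD pass over a construction-activity table: preparation, then
# frequency mining of delay causes. All data are hypothetical.
from collections import Counter

activities = [
    {"activity": "excavation", "delay_days": 4, "cause": "weather"},
    {"activity": "formwork",   "delay_days": 0, "cause": None},
    {"activity": "concrete",   "delay_days": 2, "cause": "material delivery"},
    {"activity": "roofing",    "delay_days": 6, "cause": "weather"},
]

# Step (2) data preparation: keep delayed activities with a recorded cause.
delayed = [a for a in activities if a["delay_days"] > 0 and a["cause"]]

# Step (3) data mining: cause frequency, and cause weighted by delay days.
by_count = Counter(a["cause"] for a in delayed)
by_days = Counter()
for a in delayed:
    by_days[a["cause"]] += a["delay_days"]

# Step (4) data analysis: the dominant cause by total delay days.
top_cause = by_days.most_common(1)[0]
```

Even this trivial version illustrates the paper's point: the pattern ("weather dominates delays") is a property of the data set as a whole, not visible in any single report row.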