Similar Literature
1.
Benchmarks for Data Management Systems: From Traditional Databases to Emerging Big Data   (total citations: 4; self: 0; others: 4)
The arrival of the big data era means the emergence of new technologies, new systems, and new products. How to objectively compare and evaluate different systems has naturally become a hot research topic, a situation much like the flourishing period of database systems more than thirty years ago. As is well known, benchmark research has played an important role on the road to the database field's remarkable achievements, greatly advancing database technologies and systems. A data management system benchmark is a specification for evaluating and comparing the performance of different database systems; it objectively and comprehensively reflects the performance gaps between systems with similar functionality, thereby driving technical progress and guiding the healthy development of the industry. Benchmarks are closely tied to applications: application development creates new data management requirements, which trigger innovations in data management technology, which in turn spawn multiple data management systems and platforms, which then give rise to new benchmarks. Benchmarks come in many varieties, covering not only relational data but also non-relational data such as semi-structured data, object data, stream data, and spatial data. In today's development of new data systems, a research boom in benchmarks for big data management systems has arrived as expected, and big data benchmark research remains closely tied to applications. Overall, although existing data management system benchmarks do not adequately capture the characteristics of big data, at the methodological level the thirty-plus years of experience in benchmark development is the most valuable reference for big data system research and development, which is the main motivation of this paper. This paper systematically reviews the development of data management system benchmarks, analyzes their achievements, and looks ahead to future directions.

2.
Technology has now brought humanity into the big data era, and with the emergence of new technologies, systems, and products, evaluating systems with big data has become a hot topic. This somewhat resembles the establishment of traditional database systems, in which benchmarking played a very important role and became a performance specification over databases' long development. Evaluation standards for different databases can reflect system performance, gaps, and technical progress, yielding data relevant to the healthy development of databases and the application of data technology. Driven by the evolving needs of data management, technical innovation produces new evaluation standards for data management systems. Today's data management system benchmarks are diverse and have spawned many data management systems and platforms, with structured, relational benchmarking extending toward object data, spatial data, and stream data, forming new data systems. Moreover, research on benchmarks for big data management systems is steadily deepening and has become a new hotspot, and the related big data benchmark research is among today's most popular specialties. Existing data management system benchmarks cannot yet fully characterize big data, but judging from decades of experience in benchmark development, the conclusions drawn for big data system research are valuable lessons worth borrowing and reflecting on. This paper begins by explaining the history of data management system benchmarks, then discusses their achievements and future development.

3.
Research and Implementation of the 2005 863 Program Information Retrieval Evaluation   (total citations: 1; self: 0; others: 1)
The purpose of this 863 Program Chinese information retrieval evaluation was to assess the state of research and the effectiveness of systems for Chinese information retrieval over large-scale data in the Internet environment. The Chinese and interface technology evaluation group designed the evaluation with both the current difficulties of information retrieval and the particular characteristics of Chinese retrieval in mind. This paper describes the organization of the evaluation in detail, including the design of the query topics, the corpus, the method for locating gold-standard answers, and the evaluation measures and software. By analyzing the participants' result data in combination with the query topic types, the paper also discusses the strengths and remaining weaknesses of current retrieval techniques.

4.
On January 15, 2002, Kingsoft (金山) completed an adjustment of its corporate structure. The newly established WPS, Kingsoft Duba (金山毒霸), Kingsoft PowerWord (金山词霸), and Xishanju (西山居) business units will form the core of the organization, with the surrounding marketing, supply, R&D, and management platforms working around them. Ge Ke (葛珂), manager of the WPS business unit, joined Kingsoft in 1999; he previously worked on system integration at Founder (方正) and is a manager with a technical background. At Kingsoft he has served as assistant president and vice president. In 2001, Ge Ke was responsible for internal management, human resources, administration, and finance; beyond daily management, his key task was developing government and major-account business. With Kingsoft declaring 2002 its "government procurement year," Ge Ke's sales experience was judged sufficient for him to take on the post of general manager of the WPS business unit. The unit currently has four product lines: the government, teacher, student, and military editions of WPS. In addition to the existing Mongolian edition, Kingsoft is developing Tibetan and Uyghur editions.

5.
The service model of cloud computing poses serious threats to the security of user data; third-party evaluation of whether a cloud is trustworthy is necessary to safeguard the healthy development of the cloud computing industry. This paper summarizes the research content and results of the National High-Tech R&D (863) Program project "Third-party-oriented online evaluation and analysis techniques for cloud platform trustworthiness," describing work in five areas: the trustworthiness evaluation model and architecture for cloud platforms, dynamic trustworthiness evaluation techniques, trust evidence collection and trusted testing mechanisms, multi-dimensional quantitative assessment methods for trustworthiness, and an evaluation prototype system and toolset. The trustworthiness of cloud platforms is then evaluated with these techniques.

6.
A Guide to Aspect-Oriented Software Engineering   (total citations: 1; self: 0; others: 1)
莫倩, 刘晓. 《计算机工程》(Computer Engineering), 2007, 33(14): 62-65
The goal of aspect-oriented software development (AOSD) is the systematic identification, modularization, and composition of crosscutting concerns across the entire software life cycle. As AOSD matures, a guide is needed to support the development of well-engineered aspect-oriented systems. This paper surveys existing aspect-oriented software engineering methods, analyzes how aspects are handled in the requirements analysis, design, and implementation phases, and proposes criteria for comparing these methods. The paper thus provides a guide for selecting a specific method (or set of methods) for practical aspect-oriented applications.

7.
Anomaly Detection over Trajectory Big Data: Research Progress and a System Framework   (total citations: 1; self: 0; others: 1)
The rapid development of positioning technology and pervasive computing has given rise to trajectory big data, which takes the form of large-scale, high-speed data streams produced by positioning devices. Timely and effective analysis of such trajectory streams can uncover anomalies hidden in the trajectory data and thereby serve applications such as urban planning, traffic management, and security control. Constrained by the inherent uncertainty, unboundedness, time-varying evolution, sparsity, and skewed distribution of trajectory big data, traditional anomaly detection techniques cannot be applied to it directly. Anomaly detection methods for static trajectory datasets usually assume that the data distribution is known a priori, ignore the temporal characteristics of trajectories, and cannot assess dynamically evolving anomalous behavior in trajectory big data. Faced with the poor data quality and rapid updates of trajectory big data, one must use limited system resources to handle the concept drift brought by time variation, detect diverse trajectory anomalies in real time, analyze the causal relationships among them, and then identify evolving, correlated anomalies over larger spatio-temporal regions; this is the core research content of trajectory big data anomaly detection. In addition, fusing multi-source heterogeneous data related to location-based services to explain the causes of anomalous trajectories and the anomalous events they imply is an urgent open problem. To address these issues, this paper classifies and summarizes research on trajectory anomaly detection and, in view of the limitations of existing methods, proposes a system architecture for trajectory big data anomaly detection. Finally, future work is discussed on online anomaly detection over trajectory streams, evolution analysis of trajectory anomalies, benchmarking of trajectory anomaly detection systems, data fusion for semantic analysis of detection results, and visualization techniques for trajectory anomaly detection.
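The survey above targets anomaly detection over trajectory *streams*, where the distribution drifts over time and static, distribution-known methods fail. As a toy illustration of the streaming setting (not a method from the paper), the sketch below flags a point whose speed deviates strongly from a sliding window of recent speeds; all names and readings are invented.

```python
# Hypothetical online detector: a sliding window adapts to concept drift,
# and a z-score test flags speeds far from the recent distribution.
from collections import deque
import statistics

class SpeedAnomalyDetector:
    def __init__(self, window=20, threshold=3.0):
        self.recent = deque(maxlen=window)  # sliding window handles drift
        self.threshold = threshold

    def observe(self, speed: float) -> bool:
        """Return True if `speed` is anomalous w.r.t. recent observations."""
        anomalous = False
        if len(self.recent) >= 5:  # need a few points before judging
            mu = statistics.mean(self.recent)
            sigma = statistics.pstdev(self.recent) or 1e-9
            anomalous = abs(speed - mu) / sigma > self.threshold
        self.recent.append(speed)
        return anomalous

det = SpeedAnomalyDetector()
readings = [10, 11, 9, 10, 12, 10, 11, 95, 10]  # 95 is a sudden speed spike
flags = [det.observe(s) for s in readings]
print(flags.index(True))  # position of the detected spike
```

A real trajectory-stream detector would of course handle multi-dimensional features, sparsity, and causal links between anomalies, which is exactly the gap the surveyed architecture addresses.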

8.
Database management systems divide by application scenario into transactional (OLTP) and analytical (OLAP) systems. As demand for real-time data analysis grows, workloads mixing OLTP and OLAP tasks are increasingly common, and industry has begun to focus on database management systems supporting hybrid transaction/analytical processing (HTAP). Besides high-performance transaction processing, an HTAP database must satisfy the data-freshness requirements of real-time analysis, which poses new challenges for the design and implementation of database systems. In recent years, industry and academia have produced a batch of prototypes and products with diverse architectures and techniques. This paper surveys the background and current state of HTAP databases and classifies existing systems from the perspectives of storage and computation. On this basis, it summarizes, bottom-up, the key storage and computation techniques that HTAP systems adopt, and within this framework introduces the design ideas, strengths and weaknesses, and applicable scenarios of each class of system. It further analyzes, with reference to HTAP benchmarks and metrics, how each class of design relates to the performance and data freshness it exhibits. Finally, it outlines directions for future HTAP database research in light of cloud computing, artificial intelligence, and new hardware technology.

9.
This paper describes the construction and development of a new big data PaaS platform for smart cities based on open-source technology. By applying data governance and acceleration techniques, the project bridges the communication gap between business and technology and improves the ability to govern and exploit data, so that valuable data can be mined better and faster. The result is a new "big data PaaS platform for government and enterprise applications": a multi-tenant, stable, unified platform for big data management and analysis oriented toward smart-city government and enterprise applications, providing users with data collection, analysis, storage, and visual supervision, and making the platform the processing center for the core data of a smart city's various networks and services.

10.
Research on the adversarial robustness of deep neural networks is of great significance to image recognition. Existing work focuses on generating adversarial examples and strengthening the robustness of defense models, but lacks a comprehensive and objective way to evaluate them. We therefore build an effective benchmark system for evaluating adversarial robustness on image classification tasks. Its main functions are leaderboard display, evaluation of adversarial algorithms, and system administration, with compute-resource and container scheduling keeping evaluation jobs running. The system not only provides dynamic import interfaces for a variety of attack and defense algorithms, but also evaluates the strengths and weaknesses of existing algorithms comprehensively through their head-to-head interplay.
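The core quantity such a leaderboard scores for each attack/defense pair is accuracy under attack. The sketch below illustrates that metric on a toy 1-D "model" and "attack"; both are stand-ins of ours, not the system's actual algorithms.

```python
# Hypothetical sketch of the accuracy-under-attack metric: the fraction of
# labeled inputs a model still classifies correctly after an attack perturbs
# each input against that model.

def accuracy_under_attack(model, attack, samples):
    """samples: iterable of (x, label) pairs."""
    correct = sum(model(attack(model, x)) == y for x, y in samples)
    return correct / len(samples)

# Toy 1-D classifier: label by sign; toy attack: shift the input by eps
# toward the decision boundary at 0.
model = lambda x: int(x >= 0)
attack = lambda m, x, eps=0.3: x - eps if m(x) == 1 else x + eps

samples = [(0.9, 1), (0.2, 1), (-0.5, 0), (-0.1, 0)]
print(accuracy_under_attack(model, attack, samples))  # 0.5
```

Scoring every defense against every attack with this metric yields exactly the pairwise "interplay" matrix from which a leaderboard can be ranked.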

11.
Benchmarks are key tools for evaluating computer systems. The arrival of the big data era, however, makes developing benchmarks for big data systems considerably more challenging, and no big data benchmark suite has yet won broad acceptance in academia or industry. This paper builds SIAT-Bench, a traffic big data benchmark suite on the Hadoop platform, from a real traffic big data system. Program behavior is quantified through attributes selected at multiple levels, and a clustering algorithm analyzes the similarity of different program-input pairs. Based on the clustering results, representative programs and input datasets are selected for SIAT-Bench. Experimental results show that SIAT-Bench preserves the diversity of program behavior while eliminating redundancy in the benchmark suite.
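The selection idea described above, characterize each program-input pair by a behavior feature vector, cluster similar pairs, and keep one representative per cluster, can be sketched as follows. The feature values and the simple greedy clustering are our own illustration, not SIAT-Bench's actual attributes or algorithm.

```python
# Hypothetical redundancy elimination: keep only one workload per cluster of
# behaviorally similar program-input pairs.
import math

def greedy_cluster(vectors, radius):
    """Greedy clustering: a vector joins the first cluster whose
    representative lies within `radius`, else it starts a new cluster.
    Returns the names of the kept representatives."""
    reps = []
    for name, v in vectors:
        if not any(math.dist(v, rv) <= radius for _, rv in reps):
            reps.append((name, v))
    return [name for name, _ in reps]

workloads = [
    ("sort-1GB",  (0.90, 0.10)),  # (CPU, I/O) intensity -- invented values
    ("sort-10GB", (0.88, 0.12)),  # behaves like sort-1GB -> redundant
    ("scan-1GB",  (0.20, 0.95)),  # distinct behavior -> kept
]
print(greedy_cluster(workloads, radius=0.1))  # ['sort-1GB', 'scan-1GB']
```

A production suite would use richer multi-level features and a proper clustering algorithm (e.g. k-means), but the principle, diversity without redundancy, is the same.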

12.
Inability to identify weaknesses or to quantify advancements in software system robustness frequently hinders the development of robust software systems. Efforts have been made to develop software robustness benchmarks to address this problem, but all suffer from significant shortcomings. The paper presents the features desirable in a system robustness benchmark and evaluates some existing benchmarks against them. A new, hierarchically structured approach to building robustness benchmarks, which overcomes many deficiencies of past efforts, is also presented. The approach has been applied to build a hierarchically structured benchmark that tests part of the Unix file and virtual memory systems. The resulting benchmark has successfully been used to identify new response class structures that other, less organized techniques failed to detect in a similar situation.

13.
With the explosive growth of information, more and more organizations are deploying private cloud systems or renting public cloud systems to process big data. However, no existing benchmark suite evaluates cloud performance at the whole-system level. To the best of our knowledge, this paper proposes the first benchmark suite, CloudRank-D, to benchmark and rank cloud computing systems shared for running big data applications. We analyze the limitations of previous metrics, e.g., floating-point operations, for evaluating a cloud computing system, and propose two simple, complementary metrics: data processed per second and data processed per Joule. We detail the design of CloudRank-D, which considers representative applications, diversity of data characteristics, and the dynamic behaviors of both applications and system software platforms. Through experiments, we demonstrate the advantages of our proposed metrics. In several case studies, we evaluate two small-scale deployments of cloud computing systems using CloudRank-D.
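The two metrics named above are simple ratios; the sketch below shows their arithmetic. The function names and the example run record are illustrative, not taken from CloudRank-D.

```python
# Hypothetical sketch of CloudRank-D's two complementary metrics:
# throughput (data processed per second) and energy efficiency
# (data processed per Joule).

def data_per_second(bytes_processed: int, elapsed_s: float) -> float:
    """Throughput metric: bytes of input data processed per second."""
    return bytes_processed / elapsed_s

def data_per_joule(bytes_processed: int, energy_j: float) -> float:
    """Energy-efficiency metric: bytes of input data processed per Joule."""
    return bytes_processed / energy_j

# Example: a workload that processed 10 GiB in 200 s while consuming 50 kJ.
size = 10 * 1024**3
print(data_per_second(size, 200.0))    # bytes/s
print(data_per_joule(size, 50_000.0))  # bytes/J
```

Unlike FLOPS, both ratios are defined by the amount of *data* moved through the system, which matches the data-intensive (rather than compute-intensive) nature of big data workloads that the paper argues for.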

14.
We have witnessed exciting development of RAM technology in the past decade: memory sizes grow rapidly and prices continue to fall, so it is now feasible to deploy large amounts of RAM in a computer system. Several companies and research institutions have devoted substantial resources to developing in-memory databases (IMDBs), which execute queries after loading data into (virtual) memory in advance. The bloom of in-memory databases prompts us to test and evaluate their performance objectively and fairly. Although existing database benchmarks such as the Wisconsin benchmark and the TPC-X series have achieved great success, they do not suit in-memory databases because they ignore an IMDB's unique characteristics. In this study, we propose MemTest, a novel benchmark that addresses the major characteristics of an in-memory database. The benchmark constructs particular metrics covering the processing time, compression ratio, minimal memory space, and column strength of an in-memory database. We design a data model based on inter-bank transaction applications, and a data generator supporting uniform and skewed data distributions. The MemTest workload comprises a set of queries and transactions against these metrics and the data model. Finally, we illustrate the efficacy of MemTest through implementations on two different in-memory databases.

15.
A key challenge for the semantic Web is acquiring the capability to effectively query large knowledge bases. As there will be several competing systems, we need benchmarks that evaluate them objectively. Developing effective benchmarks in an emerging domain is a challenging endeavor. In this paper, we propose a requirements-driven framework for developing benchmarks for semantic Web knowledge base systems (SW KBSs), and make two major contributions. First, we provide a list of requirements for SW KBS benchmarks, which can serve as an unbiased guide both to benchmark developers and to personnel responsible for systems acquisition and benchmarking. Second, we provide an organized collection of the techniques and tools needed to develop such benchmarks; in particular, it contains a detailed guide to generating benchmark workloads, defining performance metrics, and interpreting experimental results.

16.
Monitoring and analyzing floor vibrations to determine human activity has major applications in fields such as health care and security. For example, structural vibrations could be used to determine whether an elderly person living independently has fallen, or whether a room is occupied. Monitoring human activity through floor vibration promises advantages over other methods: it does not raise the privacy concerns of vision-based techniques, nor the compliance challenges of wearable sensors. The analysis of the signals becomes a classification problem of determining the type of human activity. Unfortunately, only a few research groups are studying this subject, even though a significant number of techniques could be applied to it, and to date no systematic study of the challenges and advantages of different types of algorithms for this problem has been performed. This paper proposes a benchmark problem to: (i) encourage researchers to design new algorithms for monitoring human activity using floor vibrations, (ii) provide a dataset for testing new algorithms, and (iii) allow the comparison of proposed methods based on a set of standard metrics. The benchmark consists of seven cases of increasing difficulty; each case specifies the number of sensors, the calibration signals, and the type of floor excitation forces to be considered. The paper also proposes specific metrics that enable the direct comparison of different techniques. Research groups interested in monitoring human activity using floor vibrations are encouraged to use the experimental data and evaluation metrics published in this paper to develop their own methodologies, enabling the community to compare and contrast techniques and better understand which methods are appropriate in different applications.

17.
A big question still faces the multi-objective optimization community: how can the performances of multi-objective stochastic optimizers be compared effectively? Existing metrics suffer from various drawbacks in addressing this question. In this article, three convergence-based M-ary cardinal metrics are proposed, based on different forms of dominance relations between two solutions, for comparing the performance of two optimizers over their multiple runs. The metrics are first tested on benchmark instances whose relative performance is already known, and their outcomes on other instances are then compared with those of three existing metrics.
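The dominance relation that the cardinal metrics above build on can be stated compactly in code. This is a minimal sketch of standard Pareto dominance under minimization; the metric construction itself follows the article, and the function name is ours.

```python
# Pareto dominance (minimization): a dominates b iff a is no worse than b in
# every objective and strictly better in at least one.

def dominates(a, b):
    """a, b: objective vectors of equal length, smaller is better."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

print(dominates((1, 2), (2, 3)))  # True: better in both objectives
print(dominates((1, 3), (2, 2)))  # False: the vectors are incomparable
```

Counting how often one optimizer's solutions dominate, are dominated by, or are incomparable with another's across multiple runs is the kind of pairwise tally from which such M-ary cardinal metrics can be assembled.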

18.
《Information Systems》1999,24(6):475-493
A benchmark is a standard for measuring and comparing the performance of like systems. For new product makers, a benchmark can provide important statistical information so products can be fine-tuned before their deployment. For end users, on the other hand, a benchmark can be used to compare the strengths and weaknesses of different products so that an informed decision can be made about system adoption. Benchmarks aid in estimating scalability in terms of the number of users and/or transactions that a system can support, and system response times under various loads and hardware/software deployment platforms. This paper focuses on the design issues in developing benchmarks for e-commerce. Because of the multidisciplinary aspects of e-commerce and the various emerging and distinct e-commerce business models, creating a single benchmark for the e-commerce application is not feasible. Add to this the diverse needs of small to medium enterprises (SMEs) and big business, and the need for a benchmark suite for e-commerce becomes clear. It is the thesis of this paper that the business model plays the primary role in the development of an e-commerce benchmark: it is the business that determines the processes and transactions, and thus also the database and navigational designs. For illustrative purposes, we step through the design of an e-commerce benchmark specification, WebEC, based on an e-broker (cybermediary) Internet business model. An example implementation of the benchmark specification, based on Microsoft's COM technology, and sample benchmark results are also presented.

19.
In the above paper by Cordón and Herrera (IEEE Trans. Fuzzy Syst., vol. 8, p. 335-44, 2000), the so-called accurate linguistic modeling (ALM) method was proposed to improve the accuracy of linguistic fuzzy models, and a number of examples were given to demonstrate the benefits of the approach. We show that: 1) these examples are not suitable as benchmarks or demonstrators of nonlinear modeling techniques, and 2) better results can be obtained with standard regression tools as well as with other fuzzy modeling techniques. We argue that benchmark examples used in articles to demonstrate the effectiveness of fuzzy modeling techniques should be selected with great care: the results should be analyzed critically, and linear models should be regarded as a lower bound on acceptable performance.

20.
The increasing attention on deep learning has tremendously spurred the design of intelligence-processing hardware. The variety of emerging intelligence processors requires standard benchmarks for fair comparison and system optimization (in both software and hardware). However, existing benchmarks are unsuitable for benchmarking intelligence processors because they are neither diverse nor representative, and the lack of a standard benchmarking methodology further exacerbates the problem. In this paper, we propose BenchIP, a benchmark suite and benchmarking methodology for intelligence processors. The suite consists of two sets of benchmarks: microbenchmarks and macrobenchmarks. The microbenchmarks are single-layer networks, designed mainly for bottleneck analysis and system optimization; the macrobenchmarks contain state-of-the-art industrial networks, so as to offer a realistic comparison of different platforms. We also propose a standard benchmarking methodology built upon an industrial software stack and evaluation metrics that comprehensively reflect various characteristics of the evaluated intelligence processors. BenchIP is utilized to evaluate various hardware platforms, including CPUs, GPUs, and accelerators, and will be open-sourced soon.
