A total of 717 results were found.
1.
To address the problems currently facing meteorological data storage, including massive growth, highly concurrent reads and writes, the coexistence of structured and unstructured data, and inefficient retrieval over long time series and large datasets, a distributed storage scheme for meteorological data based on the open-source Hadoop framework is proposed. An analysis of the properties and characteristics of meteorological data shows that, once sufficiently optimized, such data adapts well to a distributed storage framework and has strong potential for large-scale application; innovations are made in the Row Key design and the small-file merging strategy for the HBase database. Finally, for the two main data types widely found in meteorological data, structured and unstructured, detailed design ideas and implementation methods are given, using automatic weather station data and radar product data as concrete examples.
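The abstract does not give the Row Key layout itself; as a hedged illustration of the kind of design it describes, the sketch below composes a key from a station identifier and a reversed timestamp, so observations from one station cluster together and recent records sort first. The field layout and padding are assumptions, not the paper's actual scheme.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical Row Key builder for automatic weather station records.
// Assumed layout: stationId + '#' + zero-padded (Long.MAX_VALUE - obsTime),
// so rows for one station are contiguous and the newest observation sorts first.
public final class WeatherRowKey {

    public static byte[] build(String stationId, long obsTimeMillis) {
        long reversed = Long.MAX_VALUE - obsTimeMillis;            // newest-first ordering
        String key = stationId + "#" + String.format("%019d", reversed);
        return key.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Example: a station id and a fixed observation timestamp (illustrative values).
        byte[] rowKey = build("54511", 1_700_000_000_000L);
        System.out.println(new String(rowKey, StandardCharsets.UTF_8));
    }
}
```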
2.
As the State Grid Corporation of China becomes increasingly information-driven, the volume of data produced by its single-node operation and maintenance (O&M) audit system keeps growing, so the performance of storing and analyzing massive data degrades severely and system stability declines. To meet the State Grid's current requirements for data storage, analysis, and stability in the O&M audit system, this paper builds on the open-source Hadoop architecture and proposes a Hadoop-cluster-based distributed storage method for massive data together with Heartbeat-based failure detection, implementing a Hadoop-based power O&M audit system. Test results show that, compared with the single-node system, the Hadoop-based system improves availability by 8.42%, greatly improves the performance of storing and analyzing massive data, and offers stable operation and uninterrupted service.
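The paper relies on the Heartbeat package for failure detection; purely as an illustrative sketch of the underlying idea, periodic liveness probes with a missed-beat threshold, the hypothetical Java monitor below pings a node's service port and declares failure after three missed beats. The host, port, and thresholds are assumptions, not the system's actual configuration.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal heartbeat-style liveness probe (illustrative only; the described system
// uses the Heartbeat package rather than custom Java code).
public class HeartbeatProbe {

    public static boolean isAlive(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;                        // connection succeeded: node is up
        } catch (IOException e) {
            return false;                       // timeout or refusal: count as a missed beat
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int missed = 0;
        while (missed < 3) {                    // declare failure after 3 missed beats
            missed = isAlive("audit-node-1", 8020, 2000) ? 0 : missed + 1;
            Thread.sleep(1000);                 // probe once per second
        }
        System.out.println("Node considered failed; trigger failover here.");
    }
}
```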
3.
With the expansion of university enrollment, the number of fresh graduates grows year by year and employment pressure cannot be underestimated. To tackle the difficulty of finding jobs, data from recruitment websites are analyzed to mine the skills required by each position; these are then compared with the courses students take at school, which makes it possible to recommend suitable jobs to students and to inform the career positioning of posts in the university's professional talent cultivation plans.
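The abstract does not specify how mined job skills are matched against a student's courses; as a hedged sketch of one simple possibility, the snippet below scores the overlap of two skill sets with a Jaccard coefficient (an assumption, not the paper's algorithm).

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy skill/course matcher: Jaccard overlap between the skills mined from a
// job posting and the skills covered by a student's courses (illustrative only).
public class SkillMatcher {

    public static double jaccard(Set<String> jobSkills, Set<String> studentSkills) {
        Set<String> intersection = new HashSet<>(jobSkills);
        intersection.retainAll(studentSkills);
        Set<String> union = new HashSet<>(jobSkills);
        union.addAll(studentSkills);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> job = new HashSet<>(Arrays.asList("java", "hadoop", "sql"));
        Set<String> student = new HashSet<>(Arrays.asList("java", "sql", "python"));
        System.out.printf("match score = %.2f%n", jaccard(job, student)); // 0.50
    }
}
```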
4.
The security management platform (SMP) is the technical support platform for running security management as a routine operation; in practice it must process, in real time, the massive log information produced by security devices. To solve the inefficiency of querying massive logs in existing SMPs, a cloud-computing-based SMP log storage and analysis system is designed. Using Hive's query-to-job translation, together with the distributed file system and the MapReduce parallel programming model of the Hadoop architecture, the system achieves effective storage and querying of massive SMP logs. Experimental results show that, compared with multi-table join queries on a relational database, the system improves the average query efficiency for SMP logs by about 90% and speeds up the overall response of centralized SMP management.
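The paper stores SMP logs in Hive and lets Hive translate queries into MapReduce jobs; the JDBC snippet below is a hedged sketch of issuing such a query, where the connection URL, the table name `smp_log`, and the column names are assumptions rather than the paper's actual schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Query a hypothetical SMP log table through HiveServer2; Hive compiles the
// query into MapReduce jobs that run over the HDFS-resident log files.
public class SmpLogQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");         // Hive JDBC driver
        String url = "jdbc:hive2://hive-server:10000/default";    // assumed endpoint
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {
            ResultSet rs = stmt.executeQuery(
                "SELECT device_ip, COUNT(*) AS alerts " +
                "FROM smp_log WHERE severity = 'HIGH' " +          // assumed columns
                "GROUP BY device_ip ORDER BY alerts DESC LIMIT 10");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}
```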
5.
At present, mineral processing plants in China all face problems in storing and exploiting key data. The former approach of simple disk and server storage neither effectively guarantees storage security nor enables centralized data management and use, which greatly hinders the development of enterprise data services. A unified data center framework is designed and built, with a rational combination of servers, firewalls, switches, and shared storage, virtualized deployment and management of multiple servers, and a highly available data warehouse built on a Hadoop cluster, so that data can be stored, managed, and used in a unified way. This allows the enterprise to provide users with high-quality, efficient services, reduce costs, improve economic benefits, extend the scope of its information systems, and gain greater competitiveness. Application in an intelligent plant project at a gold-ore processing plant demonstrates the feasibility and efficiency of the system.
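The abstract describes a highly available Hadoop-based data warehouse but gives no configuration details; as a hedged illustration of how clients address such a cluster, the snippet below writes a file through an HDFS HA nameservice URI using the standard Hadoop FileSystem API. The nameservice name `orecluster` and the paths are assumptions.

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Write a record into an HDFS HA nameservice; with HA configured, clients
// address the logical nameservice instead of a single NameNode host.
public class WarehouseWriter {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();               // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(URI.create("hdfs://orecluster"), conf);
        try (FSDataOutputStream out = fs.create(new Path("/warehouse/ore/grade_2024.csv"))) {
            out.write("sample_id,grade\n1,3.25\n".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```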
6.
When clustering massive data, the limitations of the traditional serial approach become increasingly apparent, and it is difficult to obtain satisfactory results within an acceptable time. This paper proposes a parallel clustering model based on the MapReduce framework on the Hadoop platform. Theoretical analysis and experimental results show that the model achieves a near-linear speedup and is highly efficient on massive data.
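The abstract does not name the clustering algorithm; a common way to realize clustering on MapReduce is a k-means assignment step, so the mapper below is a hedged sketch along those lines. The centroid source, point encoding, and key/value formats are all assumptions, not the paper's design.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// One k-means iteration, map side: assign each point (a comma-separated line)
// to its nearest centroid; a reducer would then average the points per cluster.
public class KMeansAssignMapper extends Mapper<LongWritable, Text, IntWritable, Text> {

    private final List<double[]> centroids = new ArrayList<>();

    @Override
    protected void setup(Context context) {
        // Illustrative fixed centroids; a real job would load them from a side file
        // produced by the previous iteration.
        centroids.add(new double[]{0.0, 0.0});
        centroids.add(new double[]{10.0, 10.0});
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] parts = value.toString().split(",");
        double[] p = {Double.parseDouble(parts[0]), Double.parseDouble(parts[1])};

        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < centroids.size(); i++) {
            double dx = p[0] - centroids.get(i)[0];
            double dy = p[1] - centroids.get(i)[1];
            double dist = dx * dx + dy * dy;             // squared Euclidean distance
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        context.write(new IntWritable(best), value);     // cluster id -> point
    }
}
```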
7.
Traditional gazetteers are built and maintained by authoritative mapping agencies. In the age of Big Data, it is possible to construct gazetteers in a data-driven approach by mining rich volunteered geographic information (VGI) from the Web. In this research, we build a scalable distributed platform and a high-performance geoprocessing workflow based on the Hadoop ecosystem to harvest crowd-sourced gazetteer entries. Using experiments based on geotagged datasets in Flickr, we find that the MapReduce-based workflow running on the spatially enabled Hadoop cluster can reduce the processing time compared with traditional desktop-based operations by an order of magnitude. We demonstrate how to use such a novel spatial-computing infrastructure to facilitate gazetteer research. In addition, we introduce a provenance-based trust model for quality assurance. This work offers new insights on enriching future gazetteers with the use of Hadoop clusters, and makes contributions in connecting GIS to the cloud computing environment for the next frontier of Big Geo-Data analytics.
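The paper's geoprocessing workflow is not reproduced in the abstract; as a hedged sketch of the map side of such a harvest, the mapper below parses a hypothetical tab-separated geotagged record (photo id, place tag, latitude, longitude) and emits candidate gazetteer entries keyed by place name. The record layout is an assumption, not the Flickr schema used in the paper.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Map side of a gazetteer harvest: one geotagged record in, one candidate
// (place name -> "lat,lon") pair out; reducers could then aggregate the
// coordinates per place into a gazetteer entry (e.g., a centroid or footprint).
public class GazetteerHarvestMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assumed record layout: photoId \t placeTag \t latitude \t longitude
        String[] fields = value.toString().split("\t");
        if (fields.length < 4) {
            return;                                    // skip malformed records
        }
        String placeTag = fields[1].trim().toLowerCase();
        String lat = fields[2];
        String lon = fields[3];
        if (!placeTag.isEmpty()) {
            context.write(new Text(placeTag), new Text(lat + "," + lon));
        }
    }
}
```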
8.
When Hadoop processes massive numbers of small images, it suffers from too many input splits and from the difficulty of storing masses of small image files. To address these problems, instead of approaches such as HIPI or SequenceFile, a new parallel image processing model is proposed. Exploiting the fact that Hadoop is well suited to plain-text data, the model uses a text file containing image paths as the input instead of the image data themselves, so no custom image data type needs to be designed; image reading, processing, and storage are completed directly in the Map phase. To simplify the image processing algorithms, OpenCV is combined with the Map function and a corresponding storage method is designed to store the small image files. Experiments show that on the Hadoop distributed platform the model delivers good throughput and stability on both small and large test datasets.
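The model's key idea, reading a text file of image paths and doing the OpenCV work inside the map function, can be sketched as follows. The grayscale operation, output location, and the assumption that paths are visible to the task as local or mounted paths are illustrative; the paper's actual storage method is not reproduced here.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

// Each input line is an image path; the map function reads the image with
// OpenCV, converts it to grayscale, and writes the result back out.
public class ImagePathMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    @Override
    protected void setup(Context context) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);   // load the OpenCV native library
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String inputPath = value.toString().trim();
        Mat image = Imgcodecs.imread(inputPath);
        if (image.empty()) {
            return;                                     // unreadable image: skip it
        }
        Mat gray = new Mat();
        Imgproc.cvtColor(image, gray, Imgproc.COLOR_BGR2GRAY);

        String outputPath = inputPath + ".gray.png";    // illustrative output location
        Imgcodecs.imwrite(outputPath, gray);
        context.write(new Text(outputPath), NullWritable.get());
    }
}
```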
9.
High-Utility Itemset Mining (HUIM) has been a major research topic in recent decades because it reveals profit-oriented strategies for industrial decision-making. Most existing works focus on mining high-utility itemsets from databases and produce large numbers of patterns; however, it is still difficult to make precise decisions from such a large amount of discovered knowledge. Closed High-Utility Itemset Mining (CHUIM) provides a smarter way to present concise high-utility itemsets that are more effective for making correct decisions. However, none of the existing works handle large-scale databases or integrate knowledge discovered from several distributed databases. In this paper, we first present a large-scale information fusion architecture to integrate closed high-utility patterns discovered from several distributed databases. A generic composite model is used to cluster transactions by their relevant correlation, which ensures the correctness and completeness of the fusion model. The well-known MapReduce framework is then deployed in the developed DFM-Miner algorithm to handle big datasets for information fusion and integration. Experiments compare the approach with the state-of-the-art CHUI-Miner and CLS-Miner algorithms for mining closed high-utility patterns, and the results indicate that the designed model handles large-scale databases with lower memory usage. Moreover, the designed MapReduce framework speeds up the mining of closed high-utility patterns in the developed fusion system.
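DFM-Miner itself is not described in enough detail here to reproduce; as a hedged illustration of the map side of utility mining in general, the mapper below computes the utility each transaction contributes to its individual items (quantity times unit profit), using an assumed record layout and an in-memory profit table.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Map side of a utility-mining job: for each transaction, emit (item, utility)
// where utility = quantity * unit profit; a reducer would sum these to obtain
// each item's total utility as a starting point for high-utility itemset mining.
public class ItemUtilityMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    private final Map<String, Long> unitProfit = new HashMap<>();

    @Override
    protected void setup(Context context) {
        // Illustrative profit table; a real job would load it from a side file.
        unitProfit.put("A", 5L);
        unitProfit.put("B", 2L);
        unitProfit.put("C", 9L);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assumed transaction layout: item:quantity pairs separated by spaces, e.g. "A:3 C:1"
        for (String token : value.toString().trim().split("\\s+")) {
            String[] pair = token.split(":");
            if (pair.length != 2 || !unitProfit.containsKey(pair[0])) {
                continue;                                  // skip malformed or unknown items
            }
            long utility = Long.parseLong(pair[1]) * unitProfit.get(pair[0]);
            context.write(new Text(pair[0]), new LongWritable(utility));
        }
    }
}
```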
10.
Due to the distributed nature of data sources such as astronomy and sales, or due to legal restrictions, it is not always practical to store worldwide data in a single data center (DC). Hadoop is a widely accepted framework for big data analytics, but it can only deal with data within one DC. The distribution of data therefore necessitates the study of Hadoop across DCs. In this situation, although mappers can be placed in the local DCs, where to place the reducers is a great challenge, since each reducer needs to process almost all of the map output across all involved DCs. In this paper, a novel architecture and a key-based scheme are proposed that respect the locality principle of traditional Hadoop as much as possible while deploying reducers at lower cost. Considering resource provisioning at both the DC level and the server level, bi-level programming is used to formalize the problem, and it is solved by a tailored two-level group genetic algorithm (TLGGA). The final results, which may be dispersed over several DCs, can be aggregated to a designated DC or to the DC with the minimum transfer and storage cost. Extensive simulations demonstrate the effectiveness of TLGGA: it outperforms the baseline and the state-of-the-art mechanisms by 49% and 40%, respectively.
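TLGGA and the bi-level formulation are not reproducible from the abstract; the snippet below is only a hedged sketch of the kind of objective such a placement scheme must evaluate, namely the cross-DC transfer cost of a reducer-to-DC assignment given how much map output each DC holds. The cost matrix, data volumes, and assignment encoding are assumptions.

```java
// Toy objective for cross-data-center reducer placement: total volume of map
// output that must leave its home DC, weighted by per-link transfer cost.
// mapOutput[d][r] = GB of map output in DC d destined for reducer r;
// linkCost[src][dst] = cost of shipping one GB from DC src to DC dst;
// placement[r] = DC hosting reducer r. All values are illustrative.
public class ReducerPlacementCost {

    public static double cost(double[][] mapOutput, double[][] linkCost, int[] placement) {
        double total = 0.0;
        for (int dc = 0; dc < mapOutput.length; dc++) {
            for (int r = 0; r < placement.length; r++) {
                total += mapOutput[dc][r] * linkCost[dc][placement[r]];   // 0 if same DC
            }
        }
        return total;
    }

    public static void main(String[] args) {
        double[][] mapOutput = {{40, 10}, {5, 60}};        // 2 DCs, 2 reducers
        double[][] linkCost  = {{0, 1.5}, {1.5, 0}};       // intra-DC transfer is free
        int[] placeLocal  = {0, 1};                        // each reducer in the DC holding most of its data
        int[] placeSingle = {0, 0};                        // everything pulled to DC 0
        System.out.println("co-located placement cost: " + cost(mapOutput, linkCost, placeLocal));   // 22.5
        System.out.println("single-DC placement cost:  " + cost(mapOutput, linkCost, placeSingle));  // 97.5
    }
}
```

A search heuristic such as a genetic algorithm would explore candidate placements and keep the ones with the lowest such cost while respecting DC- and server-level capacity.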