3,150 results found (search time: 703 ms)
61.
State-of-the-art distributed RDF systems partition data across multiple compute nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase and leads to a high startup cost. A priori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system that addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning to the initial data, distributing triples by hashing on their subjects; this keeps its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart (1) starts faster than all existing systems, (2) processes thousands of queries before other systems come online, and (3) gracefully adapts to the query load, evaluating queries over billion-scale RDF data in sub-second time.
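As an illustration of the subject-hash partitioning described above, here is a minimal Python sketch; the worker count, triple data, and helper names are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of subject-hash partitioning for RDF triples.
# Hashing on the subject places all triples sharing a subject on the
# same worker, so star joins on subjects can run fully in parallel
# with no inter-worker communication.
import zlib

NUM_WORKERS = 4  # illustrative cluster size

def worker_for(subject: str) -> int:
    # zlib.crc32 is stable across processes, unlike Python's salted hash().
    return zlib.crc32(subject.encode("utf-8")) % NUM_WORKERS

triples = [
    ("ex:alice", "ex:knows",   "ex:bob"),
    ("ex:alice", "ex:worksAt", "ex:acme"),
    ("ex:bob",   "ex:knows",   "ex:carol"),
]

partitions = {w: [] for w in range(NUM_WORKERS)}
for s, p, o in triples:
    partitions[worker_for(s)].append((s, p, o))

# Both ex:alice triples land on the same worker, so the star pattern
# { ex:alice ?p ?o } needs no cross-worker traffic.
for w, part in partitions.items():
    print(w, part)
```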
62.
Citizens' satisfaction is acknowledged as one of the most significant influences on e-government adoption and diffusion. This study examines the impact of information quality, system quality, trust, and cost on user satisfaction with e-government services. Using a survey, this study collected 1,518 valid responses from e-government service adopters across the United Kingdom. Our empirical results show that the factors identified in this study have a significant impact on U.K. citizens' satisfaction with e-government services.
63.
The success of Hidden Markov Models (HMMs) in speech recognition has motivated their adoption for handwriting recognition, especially online handwriting, which closely resembles the speech signal as a sequential process. Some languages, such as Arabic, Farsi, and Urdu, include a large number of delayed strokes that are written above or below most letters and are usually written delayed in time. These delayed strokes pose a modeling challenge for the conventional left-right HMM commonly used in Automatic Speech Recognition (ASR) systems. In this paper, we introduce a new approach for handling delayed strokes in Arabic online handwriting recognition using HMMs. We also show that several modeling approaches currently used in most state-of-the-art ASR systems, such as context-based tri-grapheme models, speaker adaptive training, and discriminative training, can provide similar performance improvements for Handwriting Recognition (HWR) systems. Finally, we show that a multi-pass decoder that uses the computationally less expensive models in the early passes yields an Arabic large-vocabulary HWR system with practical decoding time. We evaluated the performance of our proposed Arabic HWR system using two databases with small and large lexicons. For the small-lexicon data set, our system achieved results competitive with the best reported state-of-the-art Arabic HWR systems. For the large lexicon, our system achieved promising results (accuracy and time) for a vocabulary of 64k words, with the possibility of adapting the models to specific writers for even better results.
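The left-right (Bakis) HMM at the heart of such recognizers can be sketched in a few lines; the toy transition, emission, and start parameters below are illustrative stand-ins, not the paper's trained models:

```python
# Toy forward pass for a 3-state left-right HMM: each state may only
# stay or advance, which matches the sequential nature of both speech
# and online handwriting strokes.
import numpy as np

A = np.array([[0.6, 0.4, 0.0],   # transitions: stay or move right only
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
B = np.array([[0.8, 0.2],        # emission probs for 2 discrete symbols
              [0.3, 0.7],
              [0.5, 0.5]])
pi = np.array([1.0, 0.0, 0.0])   # must start in the first state

def forward(obs):
    """Return P(obs | model) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward([0, 0, 1, 1]))  # likelihood of a short observation sequence
```

A delayed stroke breaks this strict left-to-right ordering, which is why it requires special handling in the model or the preprocessing.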
64.
In the context of information retrieval (IR) from text documents, the term weighting scheme (TWS) is a key component of the matching mechanism when using the vector space model. In this paper, we propose a new TWS based on computing the average occurrences of terms in documents; it also uses a discriminative approach based on the document centroid vector to remove less significant weights from the documents. We call our approach Term Frequency with Average Term Occurrence (TF-ATO). An analysis of commonly used document collections shows that test collections are not fully judged, since full judgement is expensive and may be infeasible for large collections. A document collection is fully judged when every document in it has been assessed for relevance against a specific query or group of queries. The discriminative approach used in our proposal is a heuristic method for improving IR effectiveness and performance, and it has the advantage of not requiring prior knowledge of relevance judgements. We compare the performance of the proposed TF-ATO to the well-known TF-IDF approach and show that TF-ATO yields better effectiveness in both static and dynamic document collections. In addition, this paper investigates the impact that stop-word removal and our discriminative approach have on TF-IDF and TF-ATO. The results show that both stop-word removal and the discriminative approach have a positive effect on both term-weighting schemes. More importantly, the proposed discriminative approach improves IR effectiveness and performance without requiring any information on relevance judgements for the collection.
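One plausible reading of the TF-ATO idea can be sketched as follows; the exact formula and pruning rule in the paper may differ, and the toy corpus is illustrative:

```python
# Sketch of a TF-ATO-style weighting: term frequency normalized by the
# term's average occurrence over the documents containing it, then
# pruned against the document-centroid vector. An illustration of the
# abstract's description, not the paper's exact formulation.
from collections import Counter

docs = [
    "information retrieval with vector space model".split(),
    "term weighting improves retrieval effectiveness".split(),
    "vector model for term weighting in retrieval".split(),
]

tf = [Counter(d) for d in docs]
vocab = sorted({t for d in docs for t in d})

# Average occurrence of each term over the documents that contain it.
ato = {t: sum(c[t] for c in tf if t in c) / sum(1 for c in tf if t in c)
       for t in vocab}

# TF-ATO weight: term frequency divided by average term occurrence.
weights = [{t: c[t] / ato[t] for t in c} for c in tf]

# Discriminative pruning: drop weights at or below the centroid value.
# A term like "retrieval" that appears uniformly in every document is
# removed, since it cannot discriminate between documents.
centroid = {t: sum(w.get(t, 0.0) for w in weights) / len(weights)
            for t in vocab}
pruned = [{t: w for t, w in doc.items() if w > centroid[t]}
          for doc in weights]

for doc in pruned:
    print(doc)
```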
65.
Bug fixing accounts for a large share of software maintenance resources. Generally, bugs are reported, fixed, verified, and closed. In some cases, however, bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software, and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects: Eclipse, Apache, and OpenOffice. We structure our study along four dimensions: (1) the work-habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug-report dimension (e.g., the component in which the bug was found), (3) the bug-fix dimension (e.g., the amount of time it took to perform the initial fix), and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using these factors to predict re-opened bugs, and perform top-node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important factors. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision of 52.1%–78.6% and a recall of 70.5%–94.1% when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project: the comment text is the most important factor for Eclipse and OpenOffice, while the last status is the most important one for Apache. These factors should be closely examined to reduce the maintenance cost of re-opened bugs.
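A minimal sketch of such an explainable model is shown below, with synthetic data and stand-in features for the four dimensions (the paper's strongest factors, comment text and last status, would require real bug-report data); printing the fitted tree mirrors the top-node analysis, since the root split is the most informative factor:

```python
# Sketch: decision tree predicting re-opened bugs from dimension-style
# features. Feature names and the synthetic data are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 500
# Stand-ins: weekday closed (work habits), component id (bug report),
# fix time in days (bug fix), fixer's prior fixes (team).
X = np.column_stack([
    rng.integers(0, 7, n),
    rng.integers(0, 10, n),
    rng.exponential(5.0, n),
    rng.integers(0, 100, n),
])
# Synthetic label: quick fixes by inexperienced fixers re-open more often.
y = ((X[:, 2] < 2) & (X[:, 3] < 20)).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
pred = clf.predict(X)
print("precision:", precision_score(y, pred))
print("recall:   ", recall_score(y, pred))
# Top-node analysis: the root of the printed tree is the most
# important indicator of re-opening in this toy setup.
print(export_text(clf, feature_names=[
    "weekday_closed", "component", "fix_days", "fixer_experience"]))
```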
66.
There is significant interest in the network management and industrial security community in identifying the "best" and most relevant features of network traffic in order to properly characterize user behaviour and predict future traffic. Eliminating redundant features is an important Machine Learning (ML) task because it helps identify the best features, improving classification accuracy and reducing the computational complexity of building the classifier. In practice, feature selection (FS) techniques can be used as a preprocessing step to eliminate irrelevant features and as a knowledge discovery tool to reveal the "best" features in many soft computing applications. In this paper, we investigate the advantages and disadvantages of such FS techniques using newly proposed metrics (namely goodness, stability, and similarity). We continue our efforts toward developing an integrated FS technique built on the key strengths of existing FS techniques. We propose a novel way to identify the "best" features efficiently and accurately: first combine the results of several well-known FS techniques to find consistent features, then use the proposed concept of support to select the smallest set of features that optimally covers the data. An empirical study over ten high-dimensional network traffic data sets demonstrates significant gains in accuracy and improved run-time performance of a classifier compared to the individual results produced by well-known FS techniques.
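The combination step can be sketched with off-the-shelf rankers standing in for the paper's FS techniques; the support-based final selection is the paper's own contribution and is not reproduced here:

```python
# Sketch: intersect the top-k features chosen by several well-known FS
# techniques to find "consistent" features. The three rankers below
# stand in for the techniques combined in the paper.
from sklearn.datasets import make_classification
from sklearn.feature_selection import (SelectKBest, chi2, f_classif,
                                       mutual_info_classif)
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, n_features=30, n_informative=5,
                           random_state=0)  # stand-in for traffic data
X = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative inputs

def top_k(score_fn, k=10):
    sel = SelectKBest(score_fn, k=k).fit(X, y)
    return set(sel.get_support(indices=True))

consistent = top_k(chi2) & top_k(f_classif) & top_k(mutual_info_classif)
print("features selected by all three rankers:", sorted(consistent))
```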
67.
The GLR algorithm is applied to the syntactic parsing of Uyghur sentences and compared with a parallel LR algorithm, contrasting their parsing processes; the parsing is carried out over Uyghur word sets. Among the analysis results, the optimal rules are applied to select the best parse tree, which provides substantial support for subsequent research on syntactic parsing.
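The essence of GLR, carrying every viable parse stack forward in parallel instead of backtracking, can be shown with a toy breadth-first shift-reduce recognizer; a real GLR parser shares stacks in a graph structure and drives shifts and reductions from LR tables, and the grammar here is illustrative, not a Uyghur grammar:

```python
# Toy breadth-first shift-reduce recognizer: keeps all viable stacks in
# parallel, the core idea behind GLR (without the graph-structured
# stack and LR tables of a real implementation).
GRAMMAR = [("S", ("S", "S")), ("S", ("a",))]  # ambiguous toy grammar

def closure(stacks):
    """Apply all possible reductions to every stack, transitively."""
    out, todo = set(stacks), list(stacks)
    while todo:
        stack = todo.pop()
        for lhs, rhs in GRAMMAR:
            if stack[-len(rhs):] == rhs:
                reduced = stack[:-len(rhs)] + (lhs,)
                if reduced not in out:
                    out.add(reduced)
                    todo.append(reduced)
    return out

def recognize(tokens):
    stacks = {()}  # each stack is a tuple of grammar symbols
    for tok in tokens:
        stacks = closure({s + (tok,) for s in stacks})  # shift, then reduce
    return ("S",) in stacks

print(recognize(["a", "a", "a"]))  # True: several parse trees exist
print(recognize(["a", "b"]))       # False: "b" is not in the grammar
```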
68.
With the advancement of smart industries, cybersecurity has become a vital growth factor in the success of industrial transformation. The Industrial Internet of Things (IIoT), or Industry 4.0, has revolutionized the concepts of manufacturing and production altogether. In Industry 4.0, powerful Intrusion Detection Systems (IDS) play a significant role in ensuring network security. Although various intrusion detection techniques have been developed, protecting the intricate data of such networks remains challenging, because conventional Machine Learning (ML) approaches are inadequate for the demands of dynamic IIoT networks; Deep Learning (DL), in contrast, can be employed to identify previously unseen intrusions. Therefore, the current study proposes a Hunger Games Search Optimization with Deep Learning-Driven Intrusion Detection (HGSODL-ID) model for the IIoT environment. The presented HGSODL-ID model exploits a linear normalization approach to transform the input data into a useful format. The HGSO algorithm is employed for Feature Selection (HGSO-FS) to reduce the curse of dimensionality. Moreover, Sparrow Search Optimization (SSO) is utilized with a Graph Convolutional Network (GCN) to classify and identify intrusions in the network; the SSO technique is also exploited to fine-tune the hyper-parameters of the GCN model. The proposed HGSODL-ID model was experimentally validated on a benchmark dataset, and the results confirmed its superiority over recent approaches.
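The shape of this pipeline (normalize, select features, classify) can be sketched with simple stand-ins: MinMaxScaler for the linear normalization, a univariate selector in place of HGSO-FS, and an MLP in place of the SSO-tuned GCN; none of these are the paper's actual components, and the synthetic data stands in for IIoT traffic:

```python
# Sketch of the HGSODL-ID pipeline shape with stand-in components:
# linear normalization -> feature selection -> neural classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=1000, n_features=40, n_informative=8,
                           random_state=0)  # stand-in for IIoT traffic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("normalize", MinMaxScaler()),                       # linear normalization
    ("select", SelectKBest(mutual_info_classif, k=15)),  # dimensionality cut
    ("classify", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                               random_state=0)),         # GCN stand-in
])
pipe.fit(X_tr, y_tr)
print("held-out detection accuracy:", pipe.score(X_te, y_te))
```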
69.
Skin lesions have become a critical illness worldwide, and earlier identification of skin lesions from dermoscopic images can raise the survival rate, yet classifying skin lesions from those images is a tedious task. The accuracy of skin lesion classification is improved by the use of deep learning models: convolutional neural networks (CNNs) have recently become well established in this domain and are particularly effective for feature extraction, leading to enhanced classification. With this motivation, this study focuses on the design of artificial intelligence (AI) based solutions, particularly deep learning (DL) algorithms, to distinguish malignant skin lesions from benign lesions in dermoscopic images. It presents an automated skin lesion detection and classification technique utilizing an optimized stacked sparse autoencoder (OSSAE) based feature extractor with a backpropagation neural network (BPNN), named the OSSAE-BPNN technique. The proposed technique contains a multi-level-thresholding-based segmentation technique for detecting the affected lesion region. The OSSAE-based feature extractor and BPNN-based classifier are then employed for skin lesion diagnosis, and the parameter tuning of the SSAE model is carried out by the seagull optimization (SGO) algorithm. To showcase the enhanced outcomes of the OSSAE-BPNN model, a comprehensive experimental analysis was performed on a benchmark dataset. The experimental findings demonstrate that the OSSAE-BPNN approach outperforms other current strategies on several assessment metrics.
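The multi-level thresholding segmentation step can be illustrated with multi-Otsu thresholding, a standard multi-level method; the abstract does not state which thresholding scheme the paper uses, and the synthetic image stands in for a dermoscopic one:

```python
# Sketch: multi-level threshold segmentation of a grayscale image with
# multi-Otsu thresholds; each pixel is assigned to one of three regions.
import numpy as np
from skimage.filters import threshold_multiotsu

rng = np.random.default_rng(0)
image = np.concatenate([            # three intensity populations:
    rng.normal(60, 10, 2000),       # background skin
    rng.normal(130, 10, 2000),      # lesion border
    rng.normal(200, 10, 2000),      # lesion core
]).clip(0, 255).reshape(60, 100)

thresholds = threshold_multiotsu(image, classes=3)
regions = np.digitize(image, bins=thresholds)  # labels 0, 1, 2 per pixel

print("thresholds:", thresholds)
print("pixels per region:", np.bincount(regions.ravel()))
```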
70.

In the modern era, big data comprises massive datasets with complex and varied structures, attributes that hinder analysis, storage, and the generation of useful results. Privacy and security are major concerns in large-scale data analysis. In this paper, our foremost priority is the set of computing technologies centred on big data: the Internet of Things (IoT), Cloud Computing, Blockchain, and fog computing. Among these, Cloud Computing provides on-demand services to customers while optimizing cost; AWS, Azure, and Google Cloud are the major cloud providers today. Fog computing extends cloud computing systems by bringing services to the edges of the network. The Internet of Things builds on multiple such technologies to deliver advanced services across varied application domains. Blockchain is a distributed ledger that supports many applications, ranging from cryptocurrency to smart contracts. The purpose of this paper is to present a critical analysis and review of existing big data systems. We survey prior critiques, address the existing threats to the security of big data systems, and scrutinize security attacks on computing systems based upon Cloud, Blockchain, IoT, and fog. The paper illustrates the different threat behaviours and their impacts on these complementary computing technologies. Finally, we present an analysis of cloud-based technologies and discuss their defense mechanisms and the security issues of mobile healthcare.