Fee-based full text: 48,560 articles
Free: 3,945 articles
Free (domestic): 1,995 articles
Industrial technology: 54,500 articles
Articles by year: 2024: 154; 2023: 868; 2022: 1,246; 2021: 1,941; 2020: 1,539; 2019: 1,291; 2018: 1,460; 2017: 1,617; 2016: 1,433; 2015: 1,934; 2014: 2,363; 2013: 2,885; 2012: 2,937; 2011: 3,318; 2010: 2,780; 2009: 2,686; 2008: 2,772; 2007: 2,520; 2006: 2,603; 2005: 2,291; 2004: 1,439; 2003: 1,351; 2002: 1,224; 2001: 1,098; 2000: 1,212; 1999: 1,420; 1998: 1,119; 1997: 937; 1996: 842; 1995: 761; 1994: 626; 1993: 465; 1992: 352; 1991: 257; 1990: 213; 1989: 147; 1988: 131; 1987: 81; 1986: 56; 1985: 29; 1984: 25; 1983: 25; 1982: 22; 1981: 14; 1980: 11; 1979: 4; 1978: 1
Sort order: 10,000 query results in total; search took 31 ms.
991.
992.
Efficient near-duplicate image detection is important for applications in which feature extraction and matching must be performed online. Most image representations designed for conventional image retrieval are either computationally expensive to extract and match or limited in robustness. To address this problem, we propose an effective and efficient local-based representation method, called Local-based Binary Representation (LBR), that encodes an image as a binary vector. Local regions are densely extracted from the image, and each region is converted to a simple yet effective feature describing its texture. A statistical histogram is computed over all local features and then encoded into a binary vector that serves as the holistic image representation. The proposed binary representation jointly exploits the local region texture and the global visual distribution of the image, so a similarity measure applied to it detects near-duplicate images effectively. The binary encoding scheme not only greatly speeds up online computation but also reduces memory cost in real applications. In experiments, the precision, recall, and computational time of the proposed method are compared with other state-of-the-art image representations, and LBR shows clear advantages on online near-duplicate image detection and video keyframe detection tasks.
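The abstract does not specify the patch descriptor, histogram size, or binarization rule, so the sketch below only illustrates the general LBR-style pipeline of dense local texture features, a global histogram, binarization, and Hamming-style matching. The gradient-based texture statistic, the 16-pixel patch size, and the median-thresholding rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lbr_encode(gray, patch=16, bins=64):
    """Encode a grayscale image (2-D uint8 array) as a short binary vector."""
    h, w = gray.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = gray[y:y + patch, x:x + patch].astype(np.float32)
            # Hypothetical texture statistic: mean absolute gradient of the patch.
            gy, gx = np.gradient(block)
            feats.append(float(np.mean(np.abs(gx) + np.abs(gy))))
    # Global histogram over all local texture responses.
    hist, _ = np.histogram(feats, bins=bins, range=(0.0, 255.0))
    hist = hist.astype(np.float32) / max(len(feats), 1)
    # Binarize each bin against the median bin value (assumed thresholding rule).
    return (hist > np.median(hist)).astype(np.uint8)

def hamming_similarity(code_a, code_b):
    """Fraction of matching bits between two equal-length binary codes."""
    return 1.0 - np.count_nonzero(code_a != code_b) / code_a.size
```

Two images would then be flagged as near-duplicates when hamming_similarity(lbr_encode(a), lbr_encode(b)) exceeds a chosen threshold.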
993.
In this paper, a hierarchical dependency context model (HDCM) is first proposed to exploit the statistical correlations of DCT (Discrete Cosine Transform) coefficients in the H.264/AVC video coding standard; the number of non-zero coefficients in a DCT block and the scan position are used to capture the varying tendency of coefficient magnitudes. A new binary arithmetic coding scheme using the hierarchical dependency context model (HDCMBAC) is then proposed. HDCMBAC combines HDCM with binary arithmetic coding to code the syntax elements of a DCT block, which consist of the number of non-zero coefficients, the significance flags, and the level information. Experimental results demonstrate that HDCMBAC achieves coding performance similar to CABAC at both low and high QPs (quantization parameters). Meanwhile, context modeling and arithmetic decoding in HDCMBAC can be carried out in parallel, since context dependencies exist only among different parts of the basic syntax elements in HDCM.
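As a rough illustration of how the two HDCM cues (non-zero coefficient count and scan position) might drive context selection for a coefficient's significance flag, here is a minimal sketch. The bucket boundaries and the counting-based probability model are assumptions made for illustration, and the binary arithmetic coding engine itself is omitted.

```python
# Illustrative bucket boundaries (not taken from the paper).
NZ_BUCKETS = (1, 3, 6, 10, 16)    # ranges for the block's non-zero coefficient count
POS_BUCKETS = (1, 3, 6, 10, 16)   # ranges for the coefficient's scan position

def bucket(value, bounds):
    """Index of the first bucket whose upper bound covers the value."""
    for i, upper in enumerate(bounds):
        if value <= upper:
            return i
    return len(bounds) - 1

def significance_context(num_nonzero, scan_pos):
    """Map (non-zero count, scan position) to one of len(NZ)*len(POS) contexts."""
    return bucket(num_nonzero, NZ_BUCKETS) * len(POS_BUCKETS) + bucket(scan_pos, POS_BUCKETS)

class AdaptiveBinaryModel:
    """Per-context adaptive probability estimates, as would feed a binary arithmetic coder."""
    def __init__(self, n_contexts):
        self.ones = [1] * n_contexts    # Laplace-smoothed counts
        self.total = [2] * n_contexts

    def p_one(self, ctx):
        return self.ones[ctx] / self.total[ctx]

    def update(self, ctx, bit):
        self.ones[ctx] += bit
        self.total[ctx] += 1
```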
994.
995.
996.
Research on content-based image copy detection currently focuses mainly on robust feature extraction. However, with the exponential growth of online images, search must be carried out over large-scale collections, which is time-consuming and does not scale, so the efficiency of detection deserves close attention. In this paper, we propose a fast feature-aggregating method for image copy detection that uses machine-learning-based hashing to achieve fast feature aggregation. Because the learned hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes make building the image representation low-complexity, efficient, and scalable to large databases. Experimental results show the good performance of our approach.
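A minimal sketch of the hashing-based aggregation idea, assuming PCA hyperplanes as the learned hash functions, sign binarization, and a bag-of-binary-words histogram as the aggregated representation; the paper's actual hashing method and descriptor choice are not specified in the abstract.

```python
import numpy as np

def learn_pca_hash(descriptors, n_bits=16):
    """Learn n_bits hashing hyperplanes from a training sample of local descriptors."""
    mean = descriptors.mean(axis=0)
    # Top principal directions serve as the hashing hyperplanes.
    _, _, vt = np.linalg.svd(descriptors - mean, full_matrices=False)
    return mean, vt[:n_bits].T                    # shapes (D,) and (D, n_bits)

def hash_descriptors(descriptors, mean, planes):
    """Map each local descriptor to an integer visual word via its sign bits."""
    bits = ((descriptors - mean) @ planes > 0).astype(np.int64)   # (N, n_bits)
    return bits @ (1 << np.arange(planes.shape[1]))               # (N,) word ids

def aggregate(descriptors, mean, planes):
    """Aggregate one image's local descriptors into a normalized word histogram."""
    words = hash_descriptors(descriptors, mean, planes)
    hist = np.bincount(words, minlength=1 << planes.shape[1]).astype(np.float32)
    return hist / max(hist.sum(), 1.0)
```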
997.
Automatic annotation is an essential technique for effectively handling and organizing Web objects (e.g., Web pages), which have grown at an unprecedented rate over the last few years. Automatic annotation is usually formulated as a multi-label classification problem. Unfortunately, labeled data are often time-consuming and expensive to obtain, and Web data come with a much richer feature space. This calls for new semi-supervised approaches that demand less labeled data to be effective in classification. In this paper, we propose a graph-based semi-supervised learning approach that leverages random walks and ℓ1 sparse reconstruction on a mixed object-label graph, with both attribute and structure information, for effective multi-label classification. The mixed graph contains an object-affinity subgraph, a label-correlation subgraph, and object-label edges with adaptive weights indicating the assignment relationships. The object-affinity subgraph is constructed by ℓ1 sparse graph reconstruction over extracted structural meta-text, while the label-correlation subgraph captures pairwise correlations among labels via a linear combination of their co-occurrence similarity and kernel-based similarity. A random walk with adaptive weight assignment is then performed on the constructed mixed graph to infer probabilistic assignment relationships between labels and objects. Extensive experiments on real Yahoo! Web datasets demonstrate the effectiveness of our approach.
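The sketch below illustrates only the random-walk inference step on an already-built mixed graph. The block weighting, the uniform restart distribution over object nodes, and the restart probability are illustrative assumptions, and the ℓ1 sparse graph construction is not shown.

```python
import numpy as np

def mixed_graph_walk(W_oo, W_ll, W_ol, restart=0.15, iters=100):
    """Random walk with restart on the mixed object-label graph.

    W_oo: object-affinity weights, W_ll: label-correlation weights,
    W_ol: object-label assignment weights (all non-negative matrices).
    Returns the probability mass accumulated on the label nodes.
    """
    n_obj, n_lab = W_ol.shape
    # Block adjacency matrix of the mixed graph.
    W = np.block([[W_oo, W_ol],
                  [W_ol.T, W_ll]])
    # Row-normalize into a transition matrix.
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    # Restart distribution: uniform over object nodes (illustrative choice).
    r = np.zeros(n_obj + n_lab)
    r[:n_obj] = 1.0 / n_obj
    p = r.copy()
    for _ in range(iters):
        p = (1.0 - restart) * (P.T @ p) + restart * r
    return p[n_obj:]
```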
998.
Complex queries are widely used in current Web applications. They express highly specific information needs, but simply aggregating the meanings of primitive visual concepts does not perform well. To facilitate image search for complex queries, we propose a new image reranking scheme based on concept relevance estimation, which consists of Concept-Query and Concept-Image probabilistic models. Each model comprises visual, Web, and text relevance estimation. A weighted sum of the underlying relevance scores yields a new ranking list. To account for the Web semantic context, we select concepts by leveraging lexical and corpus-dependent knowledge, such as WordNet and Wikipedia, together with tag co-occurrence statistics from our Flickr corpus. Experimental results show that our scheme is significantly better than other existing state-of-the-art approaches.
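A minimal sketch of the weighted-sum fusion step that produces the new ranking list; the three per-image relevance scores (visual, Web, text) and the weights are assumed inputs, and how each score is estimated is outside this sketch.

```python
def rerank(images, scores, weights=(0.5, 0.3, 0.2)):
    """images: list of image ids; scores: id -> (visual, web, text) relevance in [0, 1]."""
    w_vis, w_web, w_txt = weights

    def fused(img):
        vis, web, txt = scores[img]
        return w_vis * vis + w_web * web + w_txt * txt

    # Higher fused relevance ranks earlier.
    return sorted(images, key=fused, reverse=True)

# Example: rerank(["a", "b"], {"a": (0.9, 0.2, 0.4), "b": (0.3, 0.8, 0.7)})
```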
999.
Recently, uncertain graph data management and mining techniques have attracted significant interest and research effort due to potential applications such as protein interaction networks and social networks. In particular, as a fundamental problem, subgraph similarity all-matching is widely applied in exploratory data analysis; its purpose is to find all similarity occurrences of a query graph in a large data graph. Numerous algorithms and pruning methods have been developed for subgraph matching over certain graphs, but insufficient effort has been devoted to subgraph similarity all-matching over an uncertain data graph, which is quite challenging due to its high computation cost. In this paper, we define the problem of subgraph similarity maximal all-matching over a large uncertain data graph and propose a framework to solve it. To further improve efficiency, several speed-up techniques are proposed, including partial graph evaluation, vertex pruning, calculation model transformation, incremental evaluation, and probability upper-bound filtering. Finally, comprehensive experiments are conducted on real graph data to test the performance of our framework and optimization methods. The results verify that our solutions can outperform the basic approach by orders of magnitude in efficiency.
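A minimal sketch of the probability upper-bound filtering idea, under the assumed model that each edge of the uncertain data graph exists independently with a known probability: the product of the edge probabilities used by a candidate embedding then upper-bounds its existence probability, so weak candidates can be pruned before any expensive exact evaluation. The threshold and the edge-probability map are illustrative inputs, not the paper's definitions.

```python
def upper_bound(embedding_edges, edge_prob):
    """Product of edge probabilities used by a candidate embedding.

    embedding_edges: iterable of (u, v) data-graph edges; edge_prob: dict
    mapping an undirected edge to its existence probability.
    """
    p = 1.0
    for u, v in embedding_edges:
        p *= edge_prob.get((u, v), edge_prob.get((v, u), 0.0))
    return p

def prune_candidates(candidates, edge_prob, threshold):
    """Keep only candidates whose probability upper bound reaches the threshold."""
    return [c for c in candidates if upper_bound(c, edge_prob) >= threshold]
```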
1000.
Knowledge collaboration (KC) is an important strategic measure for improving knowledge management, focusing not only on the efficiency of knowledge cooperation but also on the added value of intellectual and social capital. In virtual teams, many factors, such as a team's network characteristics, collaborative culture, and individual collaborative intention, affect the performance of KC. By examining the nature of KC, this paper argues that its performance can be measured along two dimensions: effectiveness of collaboration and efficiency of cooperation. Effectiveness of collaboration is measured through value added, while efficiency of cooperation is measured through accuracy and timeliness. The paper then discusses the factors affecting the performance of KC in terms of network characteristics, individual attributes, and team attributes. The results show that network characteristics, individual attributes, and team attributes in a virtual team have significant impacts on the performance of KC.