1.
2.
Site-specific strategies for exchanging segments of dsDNA are important for DNA library construction and molecular tagging. Deoxyuridine (dU) excision is an approach for generating 3’ ssDNA overhangs in gene assembly and molecular cloning procedures. Unlike approaches that use a multi-base pair motif to specify a DNA cut site, dU excision requires only a dT→dU substitution. Consequently, excision sites can be embedded in biologically active DNA sequences by placing dU substitutions at non-perturbative positions. In this work, I describe a molecular tagging method that uses dU excision to exchange a segment of a dsDNA strand with a long synthetic oligonucleotide. The core workflow of this method, called deoxyUridine eXcision-tagging (dUX-tagging), is an efficient one-pot reaction: strategically positioned dU nucleotides are excised from dsDNA to generate a 3’ overhang so that additional sequence can be appended by annealing and ligating a tagging oligonucleotide. The tagged DNA is then processed by one of two procedures to fill the 5’ overhang and remove excess tagging oligo. To facilitate its widespread use, all dUX-tagging procedures exclusively use commercially available reagents. As a result, dUX-tagging is a concise and easily implemented approach for high-efficiency linear dsDNA tagging.
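The excision step described above can be sketched as a toy string model (the function name is hypothetical, and the abasic-site chemistry and fragment-melting conditions of a real dU digest are ignored; this only illustrates how a single dT→dU substitution determines the overhang sequence):

```python
COMP = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def excision_overhang(top):
    """Toy model of dU excision: cleave the top strand (5'->3') at its
    dU ('U'), let the short 5'-side fragment dissociate, and return the
    bottom-strand stretch it exposed -- a 3' ssDNA overhang, read 5'->3'."""
    i = top.index('U')  # position of the dU substitution
    # the exposed bottom-strand region is the reverse complement of the
    # released top-strand bases 5' of the excised dU
    return ''.join(COMP[b] for b in reversed(top[:i]))
```

Under this toy model, a top strand `ACGUTTTT` leaves the overhang `CGT` on the bottom strand, to which a tagging oligonucleotide with the matching 3’ end could anneal.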
3.
In a social tagging system, users annotate items with tags, and this tagging information can be exploited in the recommendation process. In this paper, we propose a hybrid item recommendation method that mitigates the limitations of existing approaches, together with a recommendation framework for social tagging systems. The framework consists of tag recommendation and item recommendation. Tag recommendation helps users annotate tags and enriches the dataset of a social tagging system; item recommendation uses tags to recommend relevant items to users. Within this framework, we investigate association rules, bigrams, tag expansion, and implicit trust relationships for providing tag and item recommendations. Experimental results on a real-world social tagging dataset show that the proposed hybrid item recommendation method generates more appropriate items than existing approaches.
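A minimal sketch of the item-recommendation half of such a framework, assuming a simple cosine match between a user's tag profile and each item's tag profile (the paper's actual method also uses association rules, bigrams, tag expansion, and implicit trust, none of which are modeled here):

```python
import math
from collections import Counter

def cosine(u, v):
    num = sum(u[k] * v[k] for k in u if k in v)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(user_tags, item_tags, seen, k=3):
    """Rank unseen items by cosine similarity between the user's tag
    profile and each item's tag-annotation profile."""
    profile = Counter(user_tags)
    scored = [(cosine(profile, Counter(tags)), item)
              for item, tags in item_tags.items() if item not in seen]
    return [item for s, item in sorted(scored, reverse=True)[:k] if s > 0]
```

For example, a user who tagged heavily with `ml` and `python` would be recommended items carrying those tags ahead of unrelated ones.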
4.
朱国进  郑宁 《计算机工程》2014,(12):126-131
Many programming resources on the web are conceptually related in terms of the knowledge they involve, yet no hyperlinks connect them. By extracting the algorithm-knowledge names contained in these resources and organizing them into an expert knowledge-base file of algorithm names, the knowledge points in programming resources can be identified and the resources linked to one another by knowledge point. To acquire algorithm names from programming resources automatically, this paper proposes an algorithm-name discovery method based on natural language processing. The method discovers the string patterns of sentences that contain algorithm names, extracts candidate strings from programming resources, identifies the word segments most likely to appear in algorithm names, and then acquires algorithm names from those segments. Experimental results show that, compared with the previously hand-compiled set of algorithm names, the method adds 11.2% more algorithm knowledge points and 13.6% more algorithm names.
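The pattern-based extraction step can be illustrated with a toy sketch (the delimiter list and regular expressions below are invented stand-ins for the paper's learned string patterns and word segmentation, shown only to convey the idea of cutting candidate algorithm names out of surrounding text):

```python
import re

# naive delimiters standing in for real word segmentation (an assumption)
BOUNDARY = '或和与及的用、,，。'

def extract_names(text):
    """Collect candidate algorithm names: Latin-named algorithms matched
    directly, Chinese-named ones trimmed back to the last delimiter."""
    names = set(re.findall(r'[A-Za-z][A-Za-z0-9]*算法', text))
    for m in re.finditer(r'([\u4e00-\u9fa5]+)算法', text):
        run = m.group(1)
        for d in BOUNDARY:          # trim leading context off the CJK run
            if d in run:
                run = run.split(d)[-1]
        if run:
            names.add(run + '算法')
    return sorted(names)
```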
5.
This paper takes a portion of the corpus of Tibetan-language primary and secondary school textbooks and builds a corpus annotated with Tibetan character tags, word-boundary tags, and part-of-speech tags. A comparison of different segmentation and tagging methods shows that joint word segmentation and POS tagging outperforms a pipelined approach, improving precision, recall, and F-score by 0.067, 0.073, and 0.07 respectively. Word-level tagging models, however, struggle with the consistency of word-boundary decisions and with out-of-vocabulary words. The authors therefore propose predicting the POS of compound words from character tags and character word-formation regularities, which both incorporates linguistic knowledge and reduces tagging errors caused by out-of-vocabulary words. Experiments show that, used as a post-processing module for POS tagging, character-tag-based POS prediction raises accuracy to 0.916, already better than the joint segmentation-and-tagging result, indicating that character tagging is clearly effective for correcting POS tagging errors.
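The post-processing idea of predicting a compound word's POS from its characters' tags can be caricatured with a toy rule table (the character-tag table and the head-final default below are assumptions for illustration; the paper learns such word-formation regularities from the annotated corpus):

```python
# toy character-tag table: character -> its character POS ("字性")
CHAR_TAG = {'水': 'n', '电': 'n', '站': 'n', '学': 'v', '习': 'v'}

def predict_pos(word, char_tag=CHAR_TAG):
    """Predict a compound word's POS from its characters' tags:
    all-verb characters -> verb; otherwise fall back to the last
    character's tag (a head-final heuristic)."""
    tags = [char_tag.get(c, 'n') for c in word]
    if all(t == 'v' for t in tags):
        return 'v'
    return tags[-1]
```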
6.
Word segmentation and part-of-speech tagging are basic steps in Chinese text processing, and their quality largely determines the effectiveness of downstream processing. Traditionally, dictionary-based mechanical segmentation has been used, but in text proofreading the errors present in the text degrade its results, so the subsequent error detection and correction are built on an incorrect foundation. This paper explores a segmentation and POS tagging algorithm suited to text proofreading, proposing the ideas of full segmentation (enumerating all candidate segmentations) and integrated tagging. Experiments show that, in addition to achieving high precision and recall, the algorithm effectively suppresses the influence of textual errors on segmentation and POS tagging.
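The full-segmentation idea can be sketched as exhaustive enumeration over a small dictionary (toy lexicon; the integrated POS tagging and the scoring used to pick among candidates are not modeled):

```python
def full_segmentations(text, lexicon):
    """Enumerate every way to cover `text` with dictionary words (the
    full-segmentation idea): unlike greedy maximum matching, no early
    commitment is made, so a typo-corrupted word cannot derail the
    segmentation decisions around it. Exponential in the worst case --
    fine for a sketch, not for production."""
    if not text:
        return [[]]
    results = []
    for end in range(1, len(text) + 1):
        word = text[:end]
        if word in lexicon or end == 1:  # single characters always allowed
            for rest in full_segmentations(text[end:], lexicon):
                results.append([word] + rest)
    return results
```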
7.
This paper discusses the basic design of the encoding scheme described by the Text Encoding Initiative's Guidelines for Electronic Text Encoding and Interchange (TEI document number TEI P3, hereafter simply P3 or the Guidelines). It first reviews the basic design goals of the TEI project and their development during the course of the project. Next, it outlines some basic notions relevant for the design of any markup language and uses those notions to describe the basic structure of the TEI encoding scheme. It also describes briefly the core tag set defined in chapter 6 of P3, and the default text structure defined in chapter 7 of that work. The final section of the paper attempts an evaluation of P3 in the light of its original design goals, and outlines areas in which further work is still needed. C. M. Sperberg-McQueen is a Senior Research Programmer at the academic computer center of the University of Illinois at Chicago; his interests include medieval Germanic languages and literatures and the theory of electronic text markup. Since 1988 he has been editor in chief of the ACH/ACL/ALLC Text Encoding Initiative. Lou Burnard is Director of the Oxford Text Archive at Oxford University Computing Services, with interests in electronic text and database technology. He is European Editor of the Text Encoding Initiative's Guidelines.
8.
In this paper, we concentrate on justifying the decisions we made in developing the TEI recommendations for feature structure markup. The first four sections of this paper present the justification for the recommended treatment of feature structures, of features and their values, and of combinations of features or values and of alternations and negations of features and their values. Section 5 departs briefly from the linguistic focus to argue that the markup scheme developed for feature structures is in fact a general-purpose mechanism that can be used for a wide range of applications. Section 6 describes an auxiliary document called a feature system declaration that is used to document and validate a system of feature-structure markup. The seventh and final section illustrates the use of the recommended markup scheme with two examples, lexical tagging and interlinear text analysis. Terry Langendoen is Professor and Head of the Department of Linguistics at The University of Arizona. He was Chair of the TEI Committee on Analysis and Interpretation. He received his PhD in Linguistics from the Massachusetts Institute of Technology in 1964, and held teaching positions at The Ohio State University and the City University of New York (Brooklyn College and the Graduate Center) before moving to Arizona in 1988. He is author, co-author, or co-editor of six books in linguistics, and of numerous articles. Gary Simons is Director of the Academic Computing Department of the Summer Institute of Linguistics, Dallas, TX. He served on the TEI Committee on Analysis and Interpretation. He received his PhD in Linguistics (with minor emphasis in Computer Science) from Cornell University in 1979. Before taking up his current position in 1984, he spent five years in the Solomon Islands doing field work with SIL. He is author, co-author, or co-editor of eight books in the fields of linguistics and linguistic computing. The initial feature-structure recommendations were formulated by the Analysis and Interpretation Committee at a meeting in Tucson, Arizona in March 1990, following suggestions by Mitch Marcus and Beatrice Santorini. The authors received valuable help in the further revision and refinement of the recommendations from Steven Zepp.
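A small sketch of generating feature-structure markup of the kind these recommendations describe, using the TEI `fs`, `f`, and `symbol` elements (element and attribute names follow the TEI feature-structure scheme; the builder function itself is a hypothetical convenience, not part of the Guidelines):

```python
import xml.etree.ElementTree as ET

def feature_structure(fs_type, features):
    """Build a TEI-style feature structure: an <fs> element with one
    <f> child per feature, each holding a <symbol> value."""
    fs = ET.Element('fs', type=fs_type)
    for name, value in features.items():
        f = ET.SubElement(fs, 'f', name=name)
        ET.SubElement(f, 'symbol', value=value)
    return ET.tostring(fs, encoding='unicode')
```

For instance, `feature_structure('agreement', {'person': 'third', 'number': 'singular'})` serializes an agreement feature structure of the shape discussed in the lexical-tagging example.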
9.
A Maximum Entropy Model-Based Method for Chinese Word Sense Disambiguation and Tagging
张仰森 《计算机工程》2009,35(18):15-18
This paper analyses the principles of an open-source maximum entropy model implementation and the meaning of its parameters, adopts a feature selection and filtering method that combines frequency with average mutual information, implements a maximum entropy model for Chinese word sense disambiguation in the Delphi language, and computes the model parameters with the GIS (Generalized Iterative Scaling) algorithm. Linguistic knowledge rules are incorporated to alleviate data sparseness in the training corpus. The resulting Chinese word sense disambiguation and tagging system annotates the senses of more than 800 polysemous words with good tagging accuracy.
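A compact sketch of maximum entropy training with GIS on a toy word-sense task (pure Python with binary features; the slack feature required for exact GIS is omitted, so the update is approximate, and this is of course not the author's Delphi implementation):

```python
import math
from collections import defaultdict

def train_gis(events, labels, iters=100):
    """Fit p(y|x) proportional to exp(sum of weights of active (feature, y)
    pairs) via Generalized Iterative Scaling. `events` is a list of
    (feature_set, label) pairs; only observed (feature, label) pairs
    receive a weight."""
    feats = sorted({(f, y) for fs, y in events for f in fs})
    C = max(len(fs) for fs, _ in events)   # feature-count bound (slack feature omitted)
    emp = defaultdict(float)               # empirical feature counts
    for fs, y in events:
        for f in fs:
            emp[(f, y)] += 1.0
    lam = {k: 0.0 for k in feats}
    for _ in range(iters):
        exp_ct = defaultdict(float)        # expected counts under the current model
        for fs, _ in events:
            scores = {y: math.exp(sum(lam.get((f, y), 0.0) for f in fs)) for y in labels}
            z = sum(scores.values())
            for y in labels:
                p = scores[y] / z
                for f in fs:
                    if (f, y) in lam:
                        exp_ct[(f, y)] += p
        for k in feats:                    # GIS update: scale toward empirical counts
            if exp_ct[k] > 0:
                lam[k] += math.log(emp[k] / exp_ct[k]) / C
    return lam

def classify(lam, fs, labels):
    """Pick the sense with the highest total weight for the active features."""
    return max(labels, key=lambda y: sum(lam.get((f, y), 0.0) for f in fs))
```

On a toy two-sense task (context words as features), the trained weights separate the senses of an ambiguous word by its neighbouring context words.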
10.
Different part-of-speech features contribute differently to text clustering. Using three clustering algorithms on four representative Chinese and English datasets, this paper examines the influence of four major parts of speech, and their combinations, on Chinese and English text clustering. The results show that in both languages nouns are the most important part of speech for representing document content, and that verbs, adjectives, and adverbs all help clustering. Using nouns alone yields results close to those obtained by keeping all parts of speech while greatly reducing the dimensionality of the text representation, although noun-only features do not achieve the best clustering quality; compared with other combinations and single parts of speech, the combination of nouns, verbs, adjectives, and adverbs usually clusters best. In the proportions of each part of speech and in single-POS clustering results, the same part of speech behaves quite differently in Chinese and English text clustering; relative to English, the differences among POS features and their combinations are more stable in Chinese.
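The noun-only feature selection the experiments evaluate can be sketched as a POS filter ahead of a similarity computation (toy POS-tagged documents; the full clustering pipelines and weighting schemes used in the paper are not shown):

```python
import math
from collections import Counter

def noun_features(tagged_doc, keep=('n',)):
    """Keep only tokens whose POS tag is in `keep` (nouns by default),
    shrinking the feature space before clustering."""
    return Counter(w for w, t in tagged_doc if t in keep)

def cosine(u, v):
    num = sum(u[k] * v.get(k, 0) for k in u)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0
```

With noun-only vectors, two finance documents still score as more similar to each other than to a sports document, while each vector carries fewer dimensions than the full token list.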