Similar Documents
20 similar documents found.
1.
We analyse the difference between the averaged (average of ratios) and globalised (ratio of averages) author-level aggregation approaches based on various paper-level metrics. We evaluate the aggregation variants in terms of (1) their field bias on the author level and (2) their ranking performance based on test data that comprises researchers who have received fellowship status or won prestigious awards for their long-lasting and high-impact research contributions to their fields. We consider various direct and indirect paper-level metrics with different normalisation approaches (mean-based, percentile-based, co-citation-based) and focus on the bias and performance differences between the two aggregation variants of each metric. We execute all experiments on two publication databases which use different field categorisation schemes. The first uses author-chosen concept categories and covers the computer science literature. The second covers all disciplines and categorises papers by keywords based on their contents. In terms of bias, we find relatively little difference between the averaged and globalised variants. For mean-normalised citation counts we find no significant difference between the two approaches. However, the percentile-based metric shows less bias with the globalised approach, except for citation windows smaller than four years. On the multi-disciplinary database, PageRank has the overall least bias but shows no significant difference between the two aggregation variants. The averaged variants of most metrics have less bias for small citation windows. For larger citation windows the differences are smaller and mostly insignificant. In terms of ranking the well-established researchers who have received accolades for their high-impact contributions, we find that the globalised variant of the percentile-based metric performs better. Again we find no significant differences between the globalised and averaged variants based on citation counts and PageRank scores.
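As a concrete illustration of the difference between the two variants, here is a minimal sketch with hypothetical paper-level data (not figures from the study): the averaged variant takes the mean of each paper's field-normalised ratio, while the globalised variant divides the author's total citations by the sum of the field baselines.

```python
# Averaged (average of ratios) vs. globalised (ratio of averages) aggregation
# of field-normalised citation counts for one author. Numbers are hypothetical.
papers = [
    {"citations": 12, "field_mean": 4.0},
    {"citations": 3,  "field_mean": 6.0},
    {"citations": 30, "field_mean": 10.0},
]

averaged = sum(p["citations"] / p["field_mean"] for p in papers) / len(papers)
globalised = sum(p["citations"] for p in papers) / sum(p["field_mean"] for p in papers)

print(f"averaged:   {averaged:.3f}")    # (3.0 + 0.5 + 3.0) / 3 = 2.167
print(f"globalised: {globalised:.3f}")  # 45 / 20 = 2.250
```

The two variants diverge most when an author's portfolio mixes fields with very different citation baselines, which is exactly where the field-bias question above arises.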

2.
This paper explores a possible approach to research evaluation by calculating the renown of authors of scientific papers. The evaluation is based on citation analysis, and its results should be close to a human viewpoint. The PageRank algorithm and its modifications were used for the evaluation of various types of citation networks. Our main research question was whether better evaluation results were based directly on an author network or on a publication network. Other issues concerned, for example, the determination of weights in the author network and the distribution of publication scores among their authors. The citation networks were extracted from the computer science domain in the ISI Web of Science database. The influence of self-citations was also explored. To find the best network for research evaluation, the outputs of PageRank were compared with lists of prestigious awards in computer science such as the Turing and Codd awards, ISI Highly Cited and ACM Fellows. Our experiments proved that the best ranking of authors was obtained by using a publication citation network from which self-citations were eliminated, and by distributing the same proportional parts of the publications’ values to their authors. The ranking can be used as a criterion for the financial support of research teams, for identifying leaders of such teams, etc.
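A minimal sketch of the best-performing configuration reported above, assuming the networkx library; the papers, author lists, and citation edges are hypothetical stand-ins for the Web of Science data.

```python
import networkx as nx

# Toy publication citation network: an edge (a, b) means paper a cites paper b.
authors = {"p1": ["A", "B"], "p2": ["B"], "p3": ["C"], "p4": ["A", "C"]}
citations = [("p1", "p2"), ("p3", "p2"), ("p4", "p1"), ("p4", "p2"), ("p3", "p4")]

# Eliminate self-citations: drop edges whose citing and cited papers
# share at least one author.
def is_self_citation(src, dst):
    return bool(set(authors[src]) & set(authors[dst]))

g = nx.DiGraph([e for e in citations if not is_self_citation(*e)])

# PageRank over the cleaned publication citation network.
paper_scores = nx.pagerank(g, alpha=0.85)

# Distribute the same proportional part of each publication's value to its authors.
author_scores = {}
for paper, score in paper_scores.items():
    share = score / len(authors[paper])
    for a in authors[paper]:
        author_scores[a] = author_scores.get(a, 0.0) + share

for a, s in sorted(author_scores.items(), key=lambda kv: -kv[1]):
    print(a, round(s, 4))
```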

3.
We evaluate article-level metrics along two dimensions. Firstly, we analyse metrics’ ranking bias in terms of fields and time. Secondly, we evaluate their performance based on test data that consists of (1) papers that have won high-impact awards and (2) papers that have won prizes for outstanding quality. We consider different citation impact indicators and indirect ranking algorithms in combination with various normalisation approaches (mean-based, percentile-based, co-citation-based, and post hoc rescaling). We execute all experiments on two publication databases which use different field categorisation schemes (author-chosen concept categories and categories based on papers’ semantic information). In terms of bias, we find that citation counts are always less time biased but always more field biased compared to PageRank. Furthermore, rescaling paper scores by a constant number of similarly aged papers reduces time bias more effectively compared to normalising by calendar years. We also find that percentile citation scores are less field and time biased than mean-normalised citation counts. In terms of performance, we find that time-normalised metrics identify high-impact papers better shortly after their publication compared to their non-normalised variants. However, after 7 to 10 years, the non-normalised metrics perform better. A similar trend exists for the set of high-quality papers, where these performance cross-over points occur after 5 to 10 years. Lastly, we also find that personalising PageRank with papers’ citation counts reduces time bias but increases field bias. Similarly, using papers’ associated journal impact factors to personalise PageRank increases its field bias. In terms of performance, PageRank should always be personalised with papers’ citation counts and time-rescaled for citation windows smaller than 7 to 10 years.
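A minimal sketch of one normalisation approach mentioned above, percentile citation scores computed against a reference set of similarly aged papers (reference counts hypothetical):

```python
from bisect import bisect_right

def percentile_score(citations, reference_counts):
    """Percentile of a paper's citation count within a reference set,
    e.g. a constant number of similarly aged papers in the same field."""
    ranked = sorted(reference_counts)
    return 100.0 * bisect_right(ranked, citations) / len(ranked)

reference = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40]  # hypothetical same-age cohort
print(percentile_score(4, reference))  # 50.0
```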

4.
Journal of Informetrics, 2019, 13(2): 593-604
In the past few decades, there has been increasing interest in public-private collaboration, which has motivated lengthy discussion of the implications of collaboration in general, and co-authorship in particular, for the scientific impact of research. However, despite this strong interest in the topic, there is little systematic knowledge on the relation between public-private collaboration and citation impact. This paper examines the citation impact of papers involving public-private collaboration in comparison with academic research papers. We examine the role of a variety of factors, such as international collaboration, the number of co-authors, academic disciplines, and whether the research is mainly basic or applied. We first examine citation impact for a comprehensive dataset covering all Web of Science journal articles with at least one Danish author in the period 1995–2013. Thereafter, we examine whether citation impact for individual researchers differs when collaborating with industry compared to work only involving academic researchers, by looking at a fixed group of researchers who have both engaged in public-private collaborations and produced university-only publications. For national collaboration papers, we find no significant difference in citation impact for public-only and public-private collaborations. For international collaboration, we observe much higher citation impact for papers involving public-private collaboration.

5.
Journal of Informetrics, 2019, 13(2): 515-539
Counts of papers, counts of citations, and the h-index are the simplest bibliometric indices of research impact. We discuss some improvements. First, we replace citations with individual citations, fractionally shared among co-authors, to take into account that different papers and different fields have largely different average numbers of co-authors and of references. Next, we improve on citation counting by applying the PageRank algorithm to citations among papers. Because citations are time-ordered, this reduces to a weighted counting of citation descendants that we call PaperRank. We compute a related AuthorRank by applying the PageRank algorithm to citations among authors. These metrics quantify the impact of an author or paper taking into account the impact of those authors that cite it. Finally, we show how self- and circular citations can be eliminated by defining a closed market of Citation-coins. We apply these metrics to the InSpire database that covers fundamental physics, presenting results for papers, authors, journals, institutes, towns, and countries, for all time and in recent time periods.

6.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of university rankings have been proposed to quantify it, yet the debate on what rankings are exactly measuring is enduring. To address the issue, we have measured a quantitative and reliable proxy of the academic reputation of a given institution and compared our findings with well-established impact indicators and academic rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and use the PageRank algorithm on the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so that the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field. Given the volume of the data analysed, our findings are statistically sound and less prone to bias than, for instance, the ad hoc surveys often employed by ranking bodies to attain similar outcomes. The approach proposed in our paper may contribute to enhancing ranking methodologies by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.

7.
Despite the increasing use of citation-based metrics for research evaluation purposes, we do not know yet which metrics best deliver on their promise to gauge the significance of a scientific paper or a patent. We assess 17 network-based metrics by their ability to identify milestone papers and patents in three large citation datasets. We find that traditional information-retrieval evaluation metrics are strongly affected by the interplay between the age distribution of the milestone items and age biases of the evaluated metrics. Outcomes of these metrics are therefore not representative of the metrics’ ranking ability. We argue in favor of a modified evaluation procedure that explicitly penalizes biased metrics and allows us to reveal metrics’ performance patterns that are consistent across the datasets. PageRank and LeaderRank turn out to be the best-performing ranking metrics when their age bias is suppressed by a simple transformation of the scores that they produce, whereas other popular metrics, including citation count, HITS and Collective Influence, produce significantly worse ranking results.

8.
Spatial analysis approaches have long been adopted in citation studies. For instance, already in the early eighties, two works relied on input-output matrices to delve into citation transactions among journals (Noma, 1982; Price, 1981). However, the techniques meant to analyze spatial data have evolved since then, experiencing a major step change around the turn of the century. Here I aim to show that citation analysis may benefit from the development and latest improvements of spatial data analysis, primarily by borrowing the spatial autoregressive models commonly used to identify the occurrence of so-called peer and neighborhood effects. I discuss features and potentialities of the suggested method using a narrow Italian academic sector as a test bed. The approach proves useful for identifying possible citation behaviors and patterns. In particular, I delve into the relationships between citation frequency at the author level and years of activity, references, references used by the closest peers, self-citations, number of co-authors, conference papers, and conference papers authored by nearby researchers.

9.
The "regulating" effect of journal self-citation on the impact factors of natural science journals (cited 14 times: 0 self-citations, 14 from others)
李运景, 侯汉清. 《情报学报》, 2006, 25(2): 172-178
Using the Chinese S&T Journal Citation Reports (《中国科技期刊引证报告》), this paper recalculates the impact factors of a number of journals in several disciplines after removing self-citations, and compares the impact factors and journal rankings before and after removal to examine the effect of journal self-citation on both. The investigation finds that excessive self-citation by individual journals has already distorted journal rankings. Finally, some suggestions are offered on how to curb this phenomenon.
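A back-of-the-envelope illustration of the recalculation (all figures hypothetical): journal self-citations are removed from the numerator of the two-year impact factor while the denominator is unchanged, which is why heavy self-citers drop in the recomputed ranking.

```python
# Two-year impact factor with and without journal self-citations.
cites_2yr = 480       # citations in year Y to items published in Y-1 and Y-2
self_cites_2yr = 160  # of which come from the journal itself
items_2yr = 200       # citable items published in Y-1 and Y-2

jif = cites_2yr / items_2yr
jif_no_self = (cites_2yr - self_cites_2yr) / items_2yr
print(f"IF: {jif:.2f}, IF without self-citations: {jif_no_self:.2f}")
# IF: 2.40, IF without self-citations: 1.60
```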

10.
Web search algorithms that rank Web pages by examining the link structure of the Web are attractive from both theoretical and practical aspects. Today's prevailing link-based ranking algorithms rank Web pages by using the dominant eigenvector of certain matrices, like the co-citation matrix or variations thereof. Recent analyses of ranking algorithms have focused attention on the case where the corresponding matrices are irreducible, thus avoiding singularities of reducible matrices. Consequently, rank analysis has been concentrated on authority-connected graphs, which are graphs whose co-citation matrix is irreducible (after deleting zero rows and columns). Such graphs conceptually correspond to thematically related collections, in which most pages pertain to a single, dominant topic of interest. A link-based search algorithm A is rank-stable if minor changes in the link structure of the input graph, which is usually a subgraph of the Web, do not affect the ranking it produces; algorithms A and B are rank-similar if they produce similar rankings. These concepts were introduced and studied recently for various existing search algorithms. This paper studies the rank-stability and rank-similarity of three link-based ranking algorithms, PageRank, HITS and SALSA, in authority-connected graphs. For this class of graphs, we show that neither HITS nor PageRank is rank-stable. We then show that HITS and PageRank are not rank-similar on this class, nor is either of them rank-similar to SALSA.

11.
In the past, recursive algorithms such as PageRank, originally conceived for the Web, have been successfully used to rank nodes in the citation networks of papers, authors, or journals. They have proved to determine prestige and not popularity, unlike citation counts. However, bibliographic networks, in contrast to the Web, have some specific features that enable the assigning of different weights to citations, thus adding more information to the process of finding prominence. For example, a citation between two authors may be weighted according to whether and when those two authors collaborated with each other, which is information that can be found in the co-authorship network. In this study, we define a couple of PageRank modifications that weight citations between authors differently based on information from the co-authorship graph. In addition, we put emphasis on the time of publications and citations. We test our algorithms on the Web of Science data of computer science journal articles and determine the most prominent computer scientists in the 10-year period of 1996–2005. Besides a correlation analysis, we also compare our rankings to the lists of ACM A. M. Turing Award and ACM SIGMOD E. F. Codd Innovations Award winners and find the new time-aware methods to outperform standard PageRank and its time-unaware weighted variants.

12.
Discovering important authors in scientific collaboration networks (cited 1 time: 0 self-citations, 1 from others)
In recent years, the analysis of scientific collaboration networks with complex-network theory has been widely studied. This paper constructs a collaboration network from author collaboration data in the DBLP database for 1998 to 2007 and describes the overall shape of the network at the macro level through basic complex-network statistics such as degree and clustering coefficient. At the micro level, it proposes an efficient algorithm for discovering important authors, which ranks them by both the number and the range of their collaborations. By analysing the influence of authors in the collaboration data, the method provides a reference for evaluating research talent.
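A minimal sketch of the macro-level description step, assuming networkx; the co-authorship edges are hypothetical, and the composite importance score at the end is an illustrative stand-in for, not a reproduction of, the paper's algorithm.

```python
import networkx as nx

# Toy co-authorship network: edge weight = number of joint papers (hypothetical).
g = nx.Graph()
g.add_weighted_edges_from([
    ("A", "B", 3), ("A", "C", 1), ("B", "C", 2),
    ("C", "D", 1), ("D", "E", 4),
])

# Basic complex-network statistics describing the network's overall shape.
print("degree:", dict(g.degree()))
print("average clustering coefficient:", nx.average_clustering(g))

# One simple notion of importance combining the volume of collaboration
# (weighted degree) with its range (number of distinct co-authors).
importance = {n: g.degree(n, weight="weight") * g.degree(n) for n in g}
print(sorted(importance.items(), key=lambda kv: -kv[1]))
```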

13.
Citation-based approaches, such as the impact factor and h-index, have been used to measure the influence or impact of journals for journal rankings. A survey of the related literature for different disciplines shows that the level of correlation between these citation-based approaches is domain dependent. We analyze the correlation between the impact factors and h-indices of the top-ranked computer science journals for five different subjects. Our results show that the correlation between these citation-based approaches is very low. Since using a different approach can result in different journal rankings, we further combine the different results and then re-rank the journals using a combination method. These new ranking results can be used as a reference for researchers to choose their publication outlets.
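A sketch of quantifying the (dis)agreement between two citation-based indicators and combining their rankings; the journal scores are hypothetical, scipy is assumed, and the Borda-style combination is one illustrative choice, not necessarily the paper's method.

```python
from scipy.stats import spearmanr

# Hypothetical impact factors and h-indices for five journals.
journals = ["J1", "J2", "J3", "J4", "J5"]
impact_factor = [4.1, 2.3, 3.8, 1.2, 2.9]
h_index = [40, 55, 22, 31, 48]

rho, p = spearmanr(impact_factor, h_index)
print(f"Spearman rank correlation: {rho:.2f} (p = {p:.2f})")

# Borda-style combination: sum each journal's rank position under both metrics.
def rank_positions(scores):  # position 0 = best
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return {i: pos for pos, i in enumerate(order)}

r_if, r_h = rank_positions(impact_factor), rank_positions(h_index)
combined = sorted(range(len(journals)), key=lambda i: r_if[i] + r_h[i])
print("combined ranking:", [journals[i] for i in combined])
```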

14.
Building on Web page ranking and paper ranking, a patent ranking algorithm is established that computes ranking values from citation frequencies and the citation network. Patents related to digital libraries in the United States Patent and Trademark Office database are analysed, and the results show that the patent ranking algorithm can distinguish between patents with identical citation counts. The study is a novel application of the PageRank algorithm.

15.
Using co-word analysis, papers in the field of link analysis are retrieved from the CNKI database and high-frequency keywords identified; a keyword co-word matrix is built with Bicomb, and factor analysis and cluster analysis are carried out in SPSS to explore the current state and hot topics of link analysis research in China. The methods applied in link analysis are found to be mainly citation analysis, co-link analysis, visualisation, and social network analysis; link analysis algorithms mainly include PageRank, HITS, and Web page ranking; and applied research concentrates on the evaluation of Web information resources, the Web influence of websites, and universities.

16.
The Journal Impact Factor (JIF) is linearly sensitive to self-citations because each self-citation adds to the numerator, whereas the denominator is not affected. Pinski and Narin's (1976) Influence Weights (IW) are not, or only marginally, sensitive to these outliers on the main diagonal of a citation matrix and thus provide an alternative to JIFs. Whereas the JIFs are based on raw citation counts normalized by the number of publications in the previous two years, IWs are based on the eigenvectors in the matrix of aggregated journal-journal citations without a reference to size: the cited and citing sides are normalized and combined by a matrix approach. Upon normalization, IWs emerge as a vector; after recursive multiplication of the normalized matrix, IWs can be considered a network measure of prestige among the journals in the (sub)graph under study. As a consequence, the self-citations are integrated at the field level and no longer disturb the analysis as outliers. In our opinion, this independence of the diagonal values is a very desirable property of a measure of quality or impact. As an example, we elaborate on Price's (1981b) matrix of aggregated citations among eight biochemistry journals in 1977. Routines for the computation of IWs are made available at http://www.leydesdorff.net/iw.
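A simplified power-iteration sketch of the idea, assuming numpy; the citation counts are hypothetical (not Price's 1977 biochemistry matrix), and normalising only the citing side is a simplification of Pinski and Narin's full definition. The point to notice is that the diagonal (self-citation) entries are normalised away with everything else rather than acting as outliers.

```python
import numpy as np

# C[i, j] = citations from journal i to journal j (hypothetical counts);
# the diagonal holds journal self-citations.
C = np.array([
    [10., 20.,  5.],
    [ 8., 30., 12.],
    [ 2.,  6.,  9.],
])

# Normalise the citing side: each row becomes the share of that journal's references.
M = C / C.sum(axis=1, keepdims=True)

# Recursive multiplication of the normalised matrix: the influence weights
# emerge as the stationary left eigenvector, found here by power iteration.
w = np.ones(len(C)) / len(C)
for _ in range(1000):
    w_next = w @ M
    w_next /= w_next.sum()
    if np.allclose(w, w_next, atol=1e-12):
        break
    w = w_next

print("influence weights:", np.round(w, 4))
```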

17.
Across the various scientific domains, significant differences occur with respect to research publishing formats, frequencies and citing practices, the nature and organisation of research, and the number and impact of a given domain's academic journals. Consequently, differences occur in the citations and h-indices of the researchers. This paper attempts to identify cross-domain differences using quantitative and qualitative measures. The study focuses on the relationships among citations, most-cited papers and h-indices across domains and for research group sizes. The analysis is based on the research output of approximately 10,000 researchers in Slovenia, of which we focus on 6536 researchers working in 284 research group programmes in 2008–2012. As comparative measures of cross-domain research output, we propose the research impact cube (RIC) representation and the analysis of most-cited papers, highest impact factors and citation distribution graphs (Lorenz curves). The analysis of Lotka's model resulted in the proposal of a binary citation frequencies (BCF) distribution model that describes publishing frequencies well. The results may be used as a model to measure, compare and evaluate fields of science at the global, national and research community level to streamline research policies and evaluate progress over a definite time period.

18.
A standard procedure in citation analysis is that all papers published in one year are assessed at the same later point in time, implicitly treating all publications as if they were published at the exact same date. This leads to systematic bias in favor of early-months publications and against late-months publications. This contribution analyses the size of this distortion on a large body of publications from all disciplines over citation windows of up to 15 years. It is found that early-month publications enjoy a substantial citation advantage, which arises from citations received in the first three years after publication. While the advantage is stronger for author self-citations as opposed to citations from others, it cannot be eliminated by excluding self-citations. The bias decreases only slowly over longer citation windows due to the continuing influence of the earlier years’ citations. Because of the substantial extent and long persistence of the distortions, it would be useful to remove or control for this bias in research and evaluation studies which use citation data. It is demonstrated that this can be achieved by using the newly introduced concept of month-based citation windows.
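A small sketch of the month-based citation window concept: citations are counted within a fixed number of months of each paper's own publication date rather than up to the end of a calendar year, so January and December publications receive equally long windows (dates hypothetical).

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

def citations_in_window(pub: date, citation_dates, window_months: int) -> int:
    """Citations received within window_months of publication."""
    return sum(1 for c in citation_dates
               if 0 <= months_between(pub, c) < window_months)

# The same citation stream seen by a January and a December paper.
cites = [date(2021, 3, 1), date(2021, 11, 1), date(2022, 2, 1), date(2023, 5, 1)]
print(citations_in_window(date(2021, 1, 15), cites, 36))   # 4
print(citations_in_window(date(2021, 12, 15), cites, 36))  # 2
```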

19.
Questionable publications have been accused of “greedy” practices; however, their influence on academia has not been gauged. Here, we probe the impact of questionable publications through a systematic and comprehensive analysis of various participants from academia, including liaisons (journals and publishers) and prosumers (authors), and compare the results with those of their unaccused counterparts using billions of citation records. Questionable publications attribute publisher-level self-citations to their journals while limiting journal-level self-citations; yet conventional journal-level metrics are unable to detect these publisher-level self-citations. We propose a hybrid journal-publisher metric for detecting publishers' self-favouring citations to their questionable journals (QJs). Additionally, we demonstrate that the questionable publications were less disruptive and influential than their counterparts. Our findings indicate an inflated citation impact of suspicious academic publishers. The findings provide a basis for actionable policy-making against questionable publications.

20.
Although it is generally understood that different citation counting methods can produce quite different author rankings, and although “optimal” author co-citation counting methods have been identified theoretically, studies that compare author co-citation counting methods in author co-citation analysis (ACA) are still rare. The present study applies strict all-author-based ACA to the Information Science (IS) field, in that all authors of all cited references in a classic IS dataset are counted, and even the diagonal values of the co-citation matrix are computed in their theoretically optimal form. Using Scopus instead of SSCI as the data source, we find that results from a theoretically optimal all-author ACA appear to be excellent in practice too, although in a field like IS, where co-authorship levels are relatively low, its advantages over classic first-author ACA appear considerably smaller than in the more highly collaborative fields targeted before. Nevertheless, we do find some differences between the two approaches: first-author ACA appears to favor theorists, who presumably tend to work alone, while all-author ACA appears to paint a somewhat more recent picture of the field and to pick out some collaborative author clusters.
