Similar Articles
20 similar articles found; search time: 593 ms.
1.
Motivated by the significance of optimizing network performance with limited resources in real applications, we investigate the problem of achieving fast, desired consensus in a complex network by pinning control of a fraction of its nodes. An optimization problem is proposed to select p optimal pinned nodes that guarantee the fastest consensus, and it is relaxed to a Mixed-Integer Semi-Definite Program. In the single pinned node case (i.e., p = 1), we find that the optimal pinned node is the node with the highest min-max distance centrality among the nodes with the highest closeness centrality in the network. In the multiple pinned nodes case (i.e., p > 1), we find that betweenness centrality can be used as a suboptimal index for selecting pinned nodes.
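
The abstract does not spell out the selection procedure in code. As a rough illustration only, the sketch below (Python with networkx, both assumptions) picks a single pinned node by taking the top closeness-centrality candidates and then, among them, the node with the smallest eccentricity, which is one way to read the "min-max distance" criterion; the top_k cut-off is an invented parameter, not the paper's MISDP formulation.

```python
import networkx as nx

def pick_pinned_node(G, top_k=5):
    """Pick one pinned node: highest-closeness candidates first, then the
    candidate with the smallest eccentricity (smallest worst-case distance)."""
    closeness = nx.closeness_centrality(G)
    candidates = sorted(closeness, key=closeness.get, reverse=True)[:top_k]
    ecc = nx.eccentricity(G)  # maximum distance from each node to any other node
    return min(candidates, key=lambda v: ecc[v])

G = nx.karate_club_graph()
print(pick_pinned_node(G))
```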

2.
Kumar, Sanjay; Panda, Ankit. Applied Intelligence, 2022, 52(2): 1838-1852

Influence maximization is an important research problem in the field of network science because of its business value. It requires the strategic selection of seed nodes called “influential nodes,” such that information originating from these nodes can reach numerous nodes in the network. Many real-world networks, such as transportation, communication, and social networks, are weighted networks. Influence maximization in a weighted network is more challenging compared to that in an unweighted network. Many methods, such as weighted degree rank, weighted h-index, weighted betweenness, and weighted VoteRank techniques, have been used to order the nodes based on their spreading capabilities in weighted networks. The VoteRank method is a popular method for finding influential nodes in an unweighted network using the idea of a voting scheme. Recently, the WVoteRank method was proposed to find the seed nodes; it extends the idea of the VoteRank method by considering the edge weights. This method considers only 1-hop neighbors to calculate the voting score of every node. In this study, we propose an improved WVoteRank method based on an extended neighborhood concept, which takes the 1-hop neighbors as well as 2-hop neighbors into account for the voting process to decide influential nodes in a weighted network. We also extend our proposed approach to unweighted networks. We compare the performance of the proposed improved WVoteRank method against the popular centrality measures, weighted degree, weighted closeness, weighted betweenness, weighted h-index, and weighted VoteRank on several real-life and synthetic datasets of diverse sizes and properties. We utilize the widely used stochastic susceptible–infected–recovered information diffusion model to calculate the infection scale, the final infected scale as a function of time, and the average distance between spreaders. The simulation results reveal that the proposed method, improved WVoteRank, considerably outperforms the other methods described above, including the recent WVoteRank.

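A minimal sketch of a VoteRank-style selection loop on a weighted graph, extended so that 2-hop neighbours also contribute to a node's voting score. The scoring rule, the 0.5 attenuation for 2-hop votes, and the damping of neighbours' voting ability are assumptions for illustration, not the paper's exact improved WVoteRank definition.

```python
import networkx as nx

def improved_wvoterank(G, k):
    """Select k seeds by repeated voting; 1-hop and 2-hop neighbours both vote."""
    avg_deg = sum(d for _, d in G.degree()) / G.number_of_nodes()
    damping = 1.0 / avg_deg                 # how much an elected node weakens its neighbours
    ability = {v: 1.0 for v in G}           # each node's remaining voting ability
    seeds = []
    for _ in range(k):
        score = {}
        for v in G:
            if v in seeds:
                continue
            s = 0.0
            for u in G[v]:                  # 1-hop votes, weighted by edge weight
                s += G[v][u].get("weight", 1.0) * ability[u]
                for x in G[u]:              # 2-hop votes, attenuated (0.5 is an assumption)
                    if x != v:
                        s += 0.5 * G[u][x].get("weight", 1.0) * ability[x]
            score[v] = s
        best = max(score, key=score.get)
        seeds.append(best)
        ability[best] = 0.0                 # an elected node no longer votes
        for u in G[best]:                   # its neighbours vote with reduced ability
            ability[u] = max(0.0, ability[u] - damping)
    return seeds

G = nx.les_miserables_graph()               # a small weighted example graph
print(improved_wvoterank(G, k=5))
```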

3.
In this paper we consider the problem of identifying the most influential (or central) group of nodes (of some predefined size) in a network. Such a group has the largest value of betweenness centrality or one of its variants, for example, the length-scaled or the bounded-distance betweenness centralities. We demonstrate that this problem can be modelled as a mixed integer program (MIP) that can be solved for reasonably sized network instances using off-the-shelf MIP solvers. We also discuss interesting relations between the group betweenness and the bounded-distance betweenness centrality concepts. In particular, we exploit these relations in an algorithmic scheme to identify approximate solutions for the original problem of identifying the most central group of nodes. Furthermore, we generalize our approach to identify not only the most central groups of nodes, but also central groups of graph elements that consist of nodes or edges exclusively, or of their combination according to some pre-specified criteria. If necessary, additional cohesiveness properties can also be enforced, for example, requiring the targeted group to form a clique or a κ-club. Finally, we conduct extensive computational experiments with different types of real-life and synthetic network instances to show the effectiveness and flexibility of the proposed framework. Even more importantly, our experiments reveal some interesting insights into the properties of influential groups of graph elements modelled using the maximum betweenness centrality concept or one of its variations.
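
The paper's exact MIP formulation is not reproduced here. As a hedged illustration of the group betweenness objective itself, the following greedy sketch grows a group of size k using networkx's group_betweenness_centrality; the greedy strategy is an assumption for illustration, not the authors' method.

```python
import networkx as nx

def greedy_central_group(G, k):
    """Grow a group of k nodes, always adding the node that most increases
    the group betweenness centrality of the current group."""
    group = []
    for _ in range(k):
        rest = [v for v in G if v not in group]
        best = max(rest, key=lambda v: nx.group_betweenness_centrality(G, group + [v]))
        group.append(best)
    return group

G = nx.krackhardt_kite_graph()
print(greedy_central_group(G, 3))
```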

4.

Identifying the nodes that play a critical role within a network is of great importance. Many applications, such as gossip spreading, disease spreading, news dispersion, and identifying prominent individuals in a social network, may take advantage of this knowledge in a complex network. The basic goal is generally to identify the nodes with the highest criticality in a network. As a result, the centrality principle has been studied extensively and in great detail, focusing on providing a consistent and accurate ranking of nodes within a network in terms of their importance. Both single and group centrality measures, however, have certain drawbacks. Other solutions to this problem include game-theoretic Shapley Value (SV) calculations, which measure the effect of a collection of nodes in complex networks via a dynamic network data propagation process. Our proposed algorithm first finds the most significant communities in a graph with community structure and then employs SV-based games to find the most influential node in each community. A Susceptible-Infected-Recovered (SIR) model is employed to determine each selected node's spreading capacity. The results of the SIR simulation are also used to contrast the spreading capacity of nodes found through our proposed algorithm with that of nodes found using the SV algorithm and centrality measures alone.

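Since the evaluation relies on SIR simulations, here is a plain discrete-time SIR sketch for scoring a seed node's spreading capacity; the parameter values (beta, gamma, number of runs) are arbitrary choices for illustration, not the paper's settings.

```python
import random
import networkx as nx

def sir_spread(G, seed, beta=0.1, gamma=1.0, runs=200):
    """Average final outbreak size of a discrete-time SIR process started at `seed`."""
    total = 0
    for _ in range(runs):
        infected, recovered = {seed}, set()
        while infected:
            new_infected = set()
            for v in infected:                       # infection attempts along edges
                for u in G[v]:
                    if u not in infected and u not in recovered and random.random() < beta:
                        new_infected.add(u)
            for v in list(infected):                 # recovery of currently infected nodes
                if random.random() < gamma:
                    recovered.add(v)
                    infected.remove(v)
            infected |= new_infected
        total += len(recovered)
    return total / runs

G = nx.karate_club_graph()
print(sir_spread(G, seed=33))
```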

5.
Previous approaches to the dynamic updating of Shortest Path Trees (SPTs) have mainly focused on a single link state change. Little work has been done on deriving a new SPT from an existing SPT for multiple link state decrements in networks that apply link-state routing protocols such as OSPF and IS-IS. The problem is complex because, in the process of updating an SPT, there is, firstly, no simple node set that can be presumed to contain all nodes to be updated and, secondly, multiple decrements can accumulate and make the updating much harder. If the updating mechanisms designed for a single link state change are adopted for multiple link state decrements, the result is node update redundancy: a node may change several times before it reaches its final state in the new SPT. This paper proposes two dynamic algorithms (MaxR, MinD) that avoid unnecessary node updates by updating only part of the nodes in a branch of the SPT after selecting a particular node from a constructed node list. The algorithm complexity analysis and simulation results show that MaxR and MinD require fewer node updates during dynamic update procedures than other algorithms for updating an SPT under multiple link state decrements.
Qin Lu
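The MaxR/MinD algorithms themselves are not reproduced here. As a baseline illustration of the problem setting, the sketch below applies several link-weight decrements, recomputes Dijkstra from the root, and reports which nodes' distances actually changed; this is the redundant full recomputation that the paper's incremental algorithms aim to avoid.

```python
import networkx as nx

def nodes_needing_update(G, root, decrements):
    """Apply several link-weight decrements and report which shortest-path
    distances from `root` changed, i.e. which SPT nodes need updating."""
    before = nx.single_source_dijkstra_path_length(G, root, weight="weight")
    H = G.copy()
    for u, v, new_w in decrements:
        H[u][v]["weight"] = new_w
    after = nx.single_source_dijkstra_path_length(H, root, weight="weight")
    return {v for v in after if after[v] != before.get(v)}

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 4), ("b", "c", 3), ("a", "c", 10), ("c", "d", 2)])
print(nodes_needing_update(G, "a", [("a", "c", 1)]))   # -> {'c', 'd'}
```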

6.
An efficient distributed algorithm for constructing small dominating sets
The dominating set problem asks for a small subset D of nodes in a graph such that every node is either in D or adjacent to a node in D. This problem arises in a number of distributed network applications, where it is important to locate a small number of centers in the network such that every node is close to at least one center. Finding a dominating set of minimum size is NP-complete, and the best known approximation is logarithmic in the maximum degree of the graph; it is provided by the same simple greedy approach that gives the well-known logarithmic approximation for the closely related set cover problem. We describe and analyze new randomized distributed algorithms for the dominating set problem that run in polylogarithmic time, independent of the diameter of the network, and that return a dominating set of size within a logarithmic factor of optimal, with high probability. In particular, our best algorithm runs in a polylogarithmic number of rounds with high probability, where each round involves a constant number of message exchanges between any two neighbors, and the size of the dominating set obtained is within a logarithmic factor of the optimal, both in expectation and with high probability. We also describe generalizations to the weighted case and to the case of multiple covering requirements.
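
For contrast with the randomized distributed algorithms described above, here is the classic centralized greedy heuristic (pick the node whose closed neighbourhood covers the most still-uncovered nodes), which attains the logarithmic approximation mentioned in the abstract; it is only a reference sketch, not the paper's algorithm.

```python
import networkx as nx

def greedy_dominating_set(G):
    """Repeatedly pick the node whose closed neighbourhood covers the most
    still-uncovered nodes."""
    uncovered = set(G)
    dom = set()
    while uncovered:
        v = max(G, key=lambda v: len(({v} | set(G[v])) & uncovered))
        dom.add(v)
        uncovered -= {v} | set(G[v])
    return dom

G = nx.random_geometric_graph(60, 0.2, seed=1)
print(len(greedy_dominating_set(G)))
```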

7.
Identifying the most influential nodes in a complex network helps in analyzing and controlling information spreading in the network, and is of great theoretical and practical value. Traditional methods for determining node influence are mostly based on the network's adjacency matrix or topological structure, and commonly suffer from high data dimensionality and data sparsity. Building on network representation learning, this paper proposes a local centrality index for identifying high-influence nodes in a network (NLC). First, the DeepWalk algorithm is used to map the nodes of the high-dimensional network into vector representations in a low-dimensional space, and the Euclidean distances between local node pairs are computed. Then, according to the network topology, the influence of each node on its local neighborhood during information spreading is computed and used to identify high-influence nodes. On eight real networks, with the SIR and SI spreading models as evaluation tools, the NLC algorithm is compared with degree centrality, closeness centrality, betweenness centrality, neighborhood coreness centrality, and semi-local centrality; the results show that NLC performs well in identifying highly influential spreading nodes.
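
A hypothetical sketch of the local-centrality idea: given low-dimensional node embeddings such as those produced by DeepWalk, score each node by how close it is to its neighbours in embedding space. The scoring formula and the random placeholder embeddings are assumptions; the actual NLC definition may differ.

```python
import numpy as np
import networkx as nx

def local_embedding_centrality(G, emb):
    """Score each node by the closeness of its neighbours in embedding space
    (an assumed stand-in for the NLC score)."""
    score = {}
    for v in G:
        dists = [np.linalg.norm(emb[v] - emb[u]) for u in G[v]]
        score[v] = sum(1.0 / (1.0 + d) for d in dists)
    return score

G = nx.karate_club_graph()
rng = np.random.default_rng(0)
emb = {v: rng.normal(size=16) for v in G}   # placeholder for DeepWalk vectors
scores = local_embedding_centrality(G, emb)
print(max(scores, key=scores.get))
```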

8.
田艳  刘祖根 《计算机科学》2015,42(Z11):296-300
Accurately and efficiently identifying influential spreaders in a network is of great theoretical and practical importance. In recent years, node influence ranking has attracted wide attention from researchers in many fields. K-shell is a good indicator of node influence; however, algorithms that rely only on a node's own K-shell value usually suffer from low accuracy and poor applicability. To address this problem, the KSN (K-shell and neighborhood) centrality model is proposed, which jointly considers the K-shell values of a node itself and of all its neighbors within two hops. Experimental results show that the proposed algorithm measures node spreading capability more accurately than degree centrality, betweenness centrality, K-shell decomposition, and mixed degree decomposition.
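
One plausible reading of the KSN score, combining a node's own k-shell value with those of its neighbours within two hops, is sketched below using networkx's core_number; the exact combination rule in the paper may differ.

```python
import networkx as nx

def ksn_scores(G):
    """Assumed KSN-style score: own k-shell value plus the k-shell values of
    all neighbours within two hops."""
    ks = nx.core_number(G)                  # k-shell index of every node
    score = {}
    for v in G:
        two_hop = set(G[v])
        for u in G[v]:
            two_hop |= set(G[u])
        two_hop.discard(v)
        score[v] = ks[v] + sum(ks[u] for u in two_hop)
    return score

G = nx.karate_club_graph()
s = ksn_scores(G)
print(sorted(s, key=s.get, reverse=True)[:5])
```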

9.
With great theoretical and practical significance, the study of information spreading on social media has become one of the most exciting domains in many branches of science. How to control the spreading process is of particular interest, and the identification of the most influential nodes in large-scale social networks is a crucial issue. Degree centrality is one of the simplest methods; it assumes that a node with more neighbours is more influential. The k-shell decomposition method partitions the network into several shells based on the assumption that nodes in the same shell have similar influence and that nodes in higher-level (more central) shells are likely to infect more nodes. Degree centrality and k-shell decomposition are local methods, which are efficient but less accurate. Global methods such as closeness and betweenness centralities are more exact but time-consuming. To effectively identify the more influential spreaders in large-scale social networks, in this paper we propose an algorithmic framework that resolves this dilemma by combining local and global methods. All nodes are first graded by a local method, and the periphery of the network is then removed according to these local values. Finally, a global method is employed to find out which nodes are more influential. The experimental results show that our framework is efficient and can be even more accurate than the global methods.
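
A compact sketch of the local-then-global framework: grade all nodes with a cheap local measure (k-shell here), drop the low-graded periphery, and run the expensive global measure (betweenness) only on the surviving core. The choice of k-shell and betweenness and the 20% cut-off are invented parameters for illustration.

```python
import networkx as nx

def local_global_ranking(G, keep_fraction=0.2):
    """Grade nodes with a cheap local measure, keep only the top fraction,
    then rank the surviving core with an expensive global measure."""
    ks = nx.core_number(G)                               # local grading (k-shell)
    keep = sorted(G, key=ks.get, reverse=True)[: max(1, int(keep_fraction * len(G)))]
    bc = nx.betweenness_centrality(G.subgraph(keep))     # global measure on the core only
    return sorted(bc, key=bc.get, reverse=True)

G = nx.barabasi_albert_graph(1000, 3, seed=42)
print(local_global_ranking(G)[:10])
```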

10.
Identifying important nodes in complex networks has long been a hot topic in social network analysis and mining, and it helps in understanding the role of influential spreaders in information diffusion and epidemic spreading. Existing node importance algorithms make full use of neighbor information but ignore the structural information between a node and its neighbors. To address this problem, and considering that neighbors exert different influence on a node under different structures, a node importance evaluation algorithm is proposed that jointly considers the number of a node's neighbors and the closeness between the node and its neighbors, thereby capturing both the degree attribute and the "closeness" attribute of a node. The algorithm uses a similarity index to measure the closeness between nodes, and the Kendall correlation coefficient is used as the accuracy metric for node ranking. The spreading process is simulated on several classic real networks with the SIR (susceptible-infected-recovered) model; the results show that, compared with the degree, closeness centrality, betweenness centrality, and K-shell indices, the KI index ranks node spreading influence more accurately.
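
The Kendall correlation is the ranking-accuracy metric used here; the sketch below shows the mechanics with scipy.stats.kendalltau. In the paper the reference scores come from SIR simulations; the example simply compares two centralities so that it runs stand-alone.

```python
import networkx as nx
from scipy.stats import kendalltau

def ranking_accuracy(G, index_scores, reference_scores):
    """Kendall correlation between an importance index and a reference ranking."""
    nodes = list(G)
    tau, _ = kendalltau([index_scores[v] for v in nodes],
                        [reference_scores[v] for v in nodes])
    return tau    # closer to 1 means the index orders nodes like the reference

G = nx.karate_club_graph()
degree = dict(G.degree())
closeness = nx.closeness_centrality(G)
# In the paper the reference scores come from SIR simulations; two centralities
# are compared here only so that the example runs stand-alone.
print(ranking_accuracy(G, degree, closeness))
```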

11.
Discovering important nodes in a network is one of the key aspects of studying network properties, with wide application value in complex networks, systems science, social network analysis, and Internet search. To improve the efficiency and effectiveness of network-wide important-node discovery, an algorithm based on shortest-path betweenness and node closeness centrality is proposed: shortest-path betweenness is used to determine the important nodes across the whole network, and closeness centrality is used to analyze their relative importance. Test results show that, compared with similar systems, the method performs well.

12.
In studies of human brain structural networks reconstructed from magnetic resonance imaging, identifying core nodes is the basis for investigating whole-brain network properties and is therefore of great importance. A core-node evaluation method based on K-shell decomposition and betweenness centrality is presented. First, three centrality measures that reflect local node importance (degree centrality, closeness centrality, and betweenness centrality) are used to evaluate and analyze node importance in the brain structural network; then K-shell decomposition, which reflects a node's global position, is used to analyze the core nodes of the network. Experimental results show that, by taking both the global and local characteristics of brain network nodes into account, the method identifies core brain-region nodes more comprehensively and accurately.

13.
In this paper, we consider the distributed scheduling problem for channel access in TDMA wireless mesh networks. The problem is to assign time-slot(s) to nodes for channel access such that every node is guaranteed to be able to communicate with all its one-hop neighbors in its assigned time-slot(s). The objective is to minimize the cycle length, i.e., the total number of distinct time-slots in one scheduling cycle. In single-channel ad hoc networks, the best known bound for this problem is K^2 in arbitrary graphs (Chlamtac and Pinter, IEEE Trans. Comput. C-36(6):729-737, 1987) and 25K in unit disk graphs, where K is the maximum node degree. Wireless mesh networks have multiple channels, and different nodes can use different control channels to reduce congestion on the control channels. In this paper, we consider two scheduling models for wireless mesh networks. In the first model, each node has two radios and the scheduling is done simultaneously on both radios; we prove that the cycle length in arbitrary graphs is upper-bounded by 2K. In the second model, time-slots are scheduled for the nodes regardless of the number of radios on them; in this case, we prove that the upper bound is (4K−2). We also propose greedy algorithms with different criteria. The basic idea of these algorithms is to order the conflicting nodes by a particular criterion, such as node identification, node degree, or the number of conflicting neighbors; a node cannot be assigned a time-slot until all neighbor nodes that have higher criterion values and might conflict with it have already been assigned time-slots. All these algorithms are fully distributed and easy to implement. Simulations are also performed to verify the performance of these algorithms.
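
A centralized greedy sketch of the slot-assignment idea: order nodes by a criterion (degree here) and give each node the smallest slot unused within two hops, so it can reach all one-hop neighbours without collisions. This only illustrates the greedy criterion as a distance-2 colouring, not the paper's distributed protocol or its 2K and (4K−2) constructions.

```python
import networkx as nx

def greedy_slot_assignment(G):
    """Assign each node the smallest time-slot not used within two hops,
    processing nodes in decreasing-degree order."""
    slot = {}
    for v in sorted(G, key=lambda v: G.degree(v), reverse=True):
        conflicts = set(G[v])
        for u in G[v]:
            conflicts |= set(G[u])          # nodes within two hops conflict with v
        used = {slot[u] for u in conflicts if u in slot}
        s = 0
        while s in used:
            s += 1
        slot[v] = s
    return slot

G = nx.random_geometric_graph(30, 0.3, seed=7)
assignment = greedy_slot_assignment(G)
print("cycle length:", max(assignment.values()) + 1)
```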

14.
In a wireless sensor network (WSN), the unbalanced distribution of communication loads often causes the energy hole problem: the energy of the nodes in the hole region is exhausted sooner than that of nodes in other regions. This is a key factor affecting the lifetime of the network. In this paper we propose an improved corona model with levels for analyzing sensors with adjustable transmission ranges in a WSN with circular multi-hop deployment (modeled as concentric coronas). Based on the model, we argue that choosing the right transmission ranges for the sensors in each corona is the decisive factor for optimizing network lifetime after node deployment. We prove that searching for the optimal transmission ranges of sensors across all coronas is a multi-objective optimization problem (MOP), which is NP-hard. Therefore, we propose a centralized algorithm and a distributed algorithm for assigning the transmission ranges of sensors in each corona under different node distributions. The two algorithms not only reduce the search complexity but also obtain results that approximate the optimal solution. Furthermore, simulation results indicate that the network lifetime achieved by our solutions approximates the lifetime ensured by the optimal assignment under both uniform and non-uniform node distributions.

15.
Influence maximization is an important problem in social network analysis, aiming to find a small set of nodes (usually called seed nodes) that maximizes the spread of information through the network. Heuristic influence-maximization algorithms based on network topology usually consider only a single network centrality and do not jointly account for node characteristics and network topology, so their performance depends heavily on the network structure. To address this problem, NCSH, an influence-maximization algorithm that combines coverage range and structural holes, is proposed. The algorithm first computes the coverage range and network constraint coefficient of all nodes; it then selects seed nodes according to the principle of maximum coverage gain; if several nodes have the same gain, the node with the smallest constraint coefficient is chosen; these steps are repeated until all seed nodes are selected. Under different numbers of seeds and different propagation probabilities, NCSH performs consistently well on six real network datasets: in terms of influence spread it improves on the comparable node-coverage-based algorithm (NCA) by 3.8% on average, and in terms of running time it reduces the cost of the comparable structural-hole-and-degree-discount maximization algorithm (SHDD) by 43%. The experimental results show that NCSH solves the influence-maximization problem effectively.

16.
Based on the susceptible-infected-susceptible (SIS) model in a patchy environment, this paper studies the effect of migration restrictions on infected individuals on epidemic spreading, where the migration restrictions are represented by a two-layer network, and a two-layer metapopulation dynamic network is proposed. Subpopulations (i.e., patches) are represented by nodes of the two-layer network; the links in the two layers represent the migration paths between patches for susceptible and infected individuals, respectively, and susceptible and infected individuals perform random walks along the links of their respective layers. Two reaction-diffusion equations are proposed as the differential equations of the susceptible and infected individuals, and their numerical solutions are computed to assess the infection risk of each patch (node). The study shows that, in the two-layer network, migration restrictions reduce the density of infected individuals and confine the infection to hub nodes (the subpopulations with the highest degree), and that the density of infected individuals depends strongly on the structure of the two-layer network.

17.
A complex network can be modeled as a graph representing the "who knows who" relationship. In the context of graph theory for social networks, the notion of centrality is used to assess the relative importance of nodes in a given network topology. For example, in a network composed of large dense clusters connected through only a few links, the nodes involved in those links are particularly critical as far as network survivability is concerned. This may also impact any application running on top of the network. Such information can be exploited for various topological maintenance issues to prevent congestion and disruption, and it can also be used offline to identify the most important actors in large social interaction graphs. Several forms of centrality have been proposed so far, yet they suffer from imperfections: initially designed for small social graphs, they are either of limited use (degree centrality) or incompatible with a distributed setting (e.g., random walk betweenness centrality). In this paper we introduce a novel form of centrality, the second order centrality, which can be computed in a distributed manner. It provides each node locally with a value reflecting its relative criticality and relies on a random walk visiting the network in an unbiased fashion. To this end, each node records the time elapsed between visits of that random walk (called the return time in the sequel) and computes the standard deviation (or second order moment) of these return times. The key point is that central nodes see the random walk regularly compared to other nodes in the topology. Through both theoretical analysis and simulation, we show that the standard deviation can be used to accurately identify critical nodes as well as to globally characterize the graph topology in a distributed way. We finally compare our proposal to well-known centralities to assess its competitiveness.
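
A straightforward simulation of the return-time idea: run a random walk, record each node's times between successive visits, and take the standard deviation as the (inverse) importance score. Note that a plain degree-biased walk is used here for brevity, whereas the paper relies on an unbiased walk; the walk length is an arbitrary choice.

```python
import random
import statistics
import networkx as nx

def second_order_centrality(G, steps=100_000, seed=0):
    """Standard deviation of random-walk return times per node; lower values
    indicate more central nodes."""
    random.seed(seed)
    last_visit = {}
    returns = {v: [] for v in G}
    v = next(iter(G))
    for t in range(steps):
        if v in last_visit:
            returns[v].append(t - last_visit[v])
        last_visit[v] = t
        v = random.choice(list(G[v]))
    return {v: statistics.stdev(r) for v, r in returns.items() if len(r) > 1}

G = nx.karate_club_graph()
soc = second_order_centrality(G)
print(min(soc, key=soc.get))   # lowest standard deviation ~ most central node
```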

18.

Influence maximization is a fundamental problem in the study of complex relationship networks, with applications such as viral marketing in business. It is directed towards extracting a minimal (or k-sized) subset of the most influential nodes with the largest cascading effect across the network, as permitted by the seeding budget. The problem is NP-hard, and hence greedy/heuristic techniques are extensively studied in the literature for generating reasonably acceptable solutions. This article proposes a novel nature-based heuristic optimization algorithm, IM-GSO, to dynamically evolve near-optimal k-sized sets of influential seed nodes for real-world networks of varied structure. IM-GSO incorporates hidden structural patterns such as communities, node degrees, betweenness, and similarities for efficient candidate population generation. This smartly initialized population is then evolved using a discrete adaptation of the Group Search Optimization (GSO) algorithm. The correctness of IM-GSO is verified by optimizing two prominent spread-estimation functions, SIMPATH and MAGA, on networks of varied sizes (small/medium/large). Detailed experimental evaluation based on 10,000 Monte Carlo simulations under the Independent Cascade (IC) model indicates a significantly higher influence spread for IM-GSO seeds in contrast to standard heuristic techniques.

19.
Game playing is an important application area of heuristic search. A game can be represented by a game search tree, and the solution is obtained by searching this tree, commonly with alpha-beta pruning as the search strategy. Building on an in-depth study of alpha-beta pruning, this paper proposes that, when expanding nodes that have not yet reached the prescribed depth, the generated child nodes be inserted into the search tree in order of their evaluation-function values, so that more branches are cut off during alpha-beta pruning and search efficiency is improved.
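
A minimal alpha-beta search with the child-ordering idea from the abstract: children are sorted by a static evaluation function before descending, so stronger moves are examined first and more branches are pruned. The game interface (children, evaluate) and the toy leaf example are hypothetical placeholders, not the paper's implementation.

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Alpha-beta search that sorts child nodes by the evaluation function
    before descending, so that more branches can be pruned."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    kids.sort(key=evaluate, reverse=maximizing)   # examine promising children first
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                             # beta cut-off
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break                             # alpha cut-off
        return value

# Toy usage: a "state" is a slice of leaf values; children splits it in half.
leaves = [3, 5, 6, 9, 1, 2, 0, -1]
children = lambda s: [] if len(s) == 1 else [s[: len(s) // 2], s[len(s) // 2:]]
evaluate = lambda s: s[0]
print(alphabeta(leaves, 3, float("-inf"), float("inf"), True, children, evaluate))
```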

20.
Active search on graphs focuses on collecting certain labeled nodes (targets) given global knowledge of the network topology and its edge weights (encoding pairwise similarities) under a query budget constraint. However, in most current networks, nodes, network topology, network size, and edge weights are all initially unknown. In this work we introduce selective harvesting, a variant of active search where the next node to be queried must be chosen among the neighbors of the current queried node set; the available training data for deciding which node to query is restricted to the subgraph induced by the queried set (and their node attributes) and their neighbors (without any node or edge attributes). Therefore, selective harvesting is a sequential decision problem, where we must decide which node to query at each step. A classifier trained in this scenario can suffer from what we call a tunnel vision effect: without any recourse to independent sampling, the urge to only query promising nodes forces classifiers to gather increasingly biased training data, which we show significantly hurts the performance of active search methods and standard classifiers. We demonstrate that it is possible to collect a much larger set of targets by using multiple classifiers, not by combining their predictions as a weighted ensemble, but by switching between the classifiers used at each step, as a way to ease the tunnel vision effect. We discover that switching classifiers collects more targets by (a) diversifying the training data and (b) broadening the choices of nodes that can be queried in the future. This highlights an exploration, exploitation, and diversification trade-off in our problem that goes beyond the exploration and exploitation duality found in classic sequential decision problems. Based on these observations we propose D³TS, a method based on multi-armed bandits for non-stationary stochastic processes that enforces classifier diversity, which outperforms all competing methods on five real network datasets in our evaluation and exhibits comparable performance on the other two.


