Similar Documents
1.
We consider the problem of inferring the evolutionary tree of a set of n species. We propose a quartet reconstruction method which specifically produces trees whose edges have strong combinatorial evidence. Let Q be a set of resolved quartets defined on the studied species; the method computes the unique maximum subset Q* of Q which is equivalent to a tree and outputs the corresponding tree as an estimate of the species' phylogeny. We use a characterization of the subset Q* due to Bandelt and Dress (Adv. Appl. Math. 7 (1986) 309–343) to provide an O(n^4) incremental algorithm for this variant of the NP-hard quartet consistency problem. Moreover, when choosing the resolution of the quartets by the four-point method (FPM) and considering the Cavender–Farris model of evolution, we show that the convergence rate of the Q* method is at worst polynomial when the maximum evolutionary distance between two species is bounded. We complete these theoretical results with an experimental study on real and simulated data sets. The results show that (i) as expected, the strong combinatorial constraints the method imposes on each edge lead it to propose very few incorrect edges; (ii) more surprisingly, the method infers trees with a relatively high degree of resolution.
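As a rough illustration of the four-point method (FPM) mentioned in the abstract, the sketch below resolves a single quartet from a pairwise distance matrix. The function name, the dictionary-based distance table, and the toy distances are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations

def resolve_quartet(d, a, b, c, x):
    """Four-point method: pick the pairing whose summed 'across' distances
    are smallest, i.e. the topology ab|cx minimizing d(a,b) + d(c,x).
    Returns the quartet as a pair of frozensets, or None if unresolved."""
    pairings = [
        ((a, b), (c, x)),
        ((a, c), (b, x)),
        ((a, x), (b, c)),
    ]
    scored = sorted(pairings, key=lambda p: d[p[0]] + d[p[1]])
    best, second = scored[0], scored[1]
    if d[best[0]] + d[best[1]] == d[second[0]] + d[second[1]]:
        return None  # tie: quartet left unresolved
    return frozenset(frozenset(pair) for pair in best)

# Toy symmetric distance matrix on four taxa (keys stored both ways).
taxa = ["A", "B", "C", "D"]
dist = {}
for (u, v), w in zip(combinations(taxa, 2), [0.2, 0.9, 0.8, 0.9, 0.8, 0.3]):
    dist[(u, v)] = dist[(v, u)] = w

print(resolve_quartet(dist, "A", "B", "C", "D"))  # expect the split AB | CD
```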

2.
Research on the Noise Problem of Discrete Linear Consensus Algorithms   (cited 2 times: 1 self-citation, 1 by others)
窦全胜  丛玲  姜平  史忠植 《自动化学报》2015,41(7):1328-1340
Multi-agent consensus has broad practical applications in sensor networks, social networks, cooperative control, and many other fields. This paper studies the noise problem of discrete linear consensus algorithms and proves that the noise of such algorithms is uncontrollable. A noise-control strategy based on a noise-suppression operator ε(t) is proposed, and it is shown that when ε(t) is an infinitesimal of higher order than t^(-0.5), the noise of the suppressed consensus algorithm becomes controllable. The effect of the noise-suppression operator on convergence is analyzed, and it is proved that, in the noise-free case, when ε(t) is an infinitesimal of lower order than t^(-1), the suppressed consensus algorithm still drives the agents to the original convergence state x*. Building on these conclusions, it is further shown that as t→∞, if the order of ε(t) lies between t^(-0.5) and t^(-1), the states of all agents follow a normal distribution centered at the original convergence state x*. Finally, the theoretical results are verified and discussed using DHA as an example. This work provides a theoretical basis for noise control in linear consensus algorithms and offers practical guidance for choosing the noise-suppression operator.
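A minimal numerical sketch of the kind of damped consensus iteration discussed above, assuming a standard discrete linear averaging update with additive measurement noise. The ring topology, noise level, and the step schedule ε(t) = (t+1)^(-0.8), whose order lies between t^(-0.5) and t^(-1), are illustrative choices, not the paper's DHA example.

```python
import random

# Undirected ring of 4 agents, given as adjacency lists.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [1.0, 4.0, 2.0, 7.0]   # initial states; noise-free consensus is the mean 3.5
sigma = 0.5                # std. dev. of additive measurement noise

def eps(t):
    """Noise-suppression operator; order between t^(-0.5) and t^(-1)."""
    return (t + 1) ** -0.8

random.seed(0)
for t in range(20000):
    # Each agent sees noisy observations of its neighbors' states.
    noisy = [xi + random.gauss(0.0, sigma) for xi in x]
    x = [x[i] + eps(t) * sum(noisy[j] - x[i] for j in neighbors[i]) / len(neighbors[i])
         for i in range(4)]

print([round(v, 2) for v in x])  # states cluster around the noise-free consensus value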

3.
Location-aware queries (LAQ) are a common type of query in mobile systems. This paper proposes a cooperative cache management technique for location-aware queries (CoMA-LA) that covers three aspects: (1) merging of semantically similar data items in the cache; (2) a cooperative replacement policy among neighboring caches; and (3) data consistency guarantees across caches. Simulation experiments compare CoMA-LA with the traditional LRU algorithm and several existing cache replacement methods. The results show that CoMA-LA effectively improves cache utilization, thereby reducing average access time and increasing the query hit ratio.

4.
An accurate cache analysis model helps predict cache behavior and plays an important role in network performance analysis and planning. Existing analytical models for strong cache consistency are generally based on the least recently used (LRU) replacement policy, whereas in practice different replacement policies such as LRU, q-LRU, and FIFO must be adopted depending on the application scenario and the capability of the cache node. To extend the applicability of strong-consistency analysis models, a general analytical model for strong cache consistency is built on the basic assumptions of cache modeling, and methods are given for computing the cache hit ratio and the server load under three strong-consistency policies: passive query, proactive removal, and proactive update. The model's results are used to plot curves of the cache parameters and locate the values that optimize cache performance, and to select the optimal replacement policy for given cache parameters. Experimental results show that the model achieves high accuracy under all three strong-consistency policies, with maximum and minimum errors between the computed and simulated results of 6.92% and 0.06% respectively, and that it is applicable to replacement policies that admit a characteristic-time approximation.

5.
In the literature, there exist two types of cache consistency maintenance algorithms for mobile computing environments: stateless and stateful. In a stateless approach, the server is unaware of the cache contents at a mobile user (MU). Even though stateless approaches employ simple database management schemes, they lack scalability and the ability to support user disconnection and mobility. On the other hand, a stateful approach is scalable for large database systems at the cost of nontrivial overhead due to server database management. We propose a novel algorithm, called the Scalable Asynchronous Cache Consistency Scheme (SACCS), which inherits the positive features of both stateless and stateful approaches. SACCS provides weak cache consistency for unreliable communication (e.g., wireless mobile) environments with a small stale cache hit probability. It is also a highly scalable algorithm with minimal database management overhead. These properties are achieved through the use of flag bits at the server cache (SC) and MU cache (MUC), an identifier (ID) in the MUC for each entry after its invalidation, and an estimated time-to-live (TTL) for each cached entry, as well as rendering all valid entries of the MUC uncertain when an MU wakes up. The stale cache hit probability is analyzed and also simulated under the Rayleigh fading model of error-prone wireless channels. Comprehensive simulation results show that the performance of SACCS is superior to that of other existing stateful and stateless algorithms in both single-cell and multi-cell mobile environments.
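The estimated-TTL and "uncertain after wake-up" behaviour described above can be pictured as a small state machine. The sketch below is a simplification under assumed states (VALID/UNCERTAIN/INVALID) and assumed class names; it is not the authors' SACCS implementation.

```python
import time

VALID, UNCERTAIN, INVALID = "valid", "uncertain", "invalid"

class CacheEntry:
    def __init__(self, data, ttl):
        self.data = data
        self.expires_at = time.time() + ttl   # estimated time-to-live
        self.state = VALID

class MUCache:
    """Mobile-user cache sketch with weak consistency via estimated TTL."""
    def __init__(self):
        self.entries = {}

    def put(self, key, data, ttl):
        self.entries[key] = CacheEntry(data, ttl)

    def wake_up(self):
        # After a disconnection, every valid entry becomes uncertain and
        # must be confirmed with the server before it is trusted again.
        for e in self.entries.values():
            if e.state == VALID:
                e.state = UNCERTAIN

    def get(self, key):
        e = self.entries.get(key)
        if e is None or e.state == INVALID:
            return None                              # cache miss
        if e.state == UNCERTAIN or time.time() > e.expires_at:
            return ("confirm-with-server", e.data)   # weak hit: asynchronous check
        return ("hit", e.data)
```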

6.
Performance evaluation of Web proxy cache replacement policies   (cited 10 times: 0 self-citations, 10 by others)
Martin  Rich  Tai 《Performance Evaluation》2000,39(1-4):149-164
The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic and Web server loads. In this paper we analyze the importance of different Web proxy workload characteristics in making good cache replacement decisions. We evaluate workload characteristics such as object size, recency of reference, frequency of reference, and turnover in the active set of objects. Trace-driven simulation is used to evaluate the effectiveness of various replacement policies for Web proxy caches. The extended duration of the trace (117 million requests collected over 5 months) allows long term side effects of replacement policies to be identified and quantified.

Our results indicate that higher cache hit rates are achieved using size-based replacement policies. These policies store a large number of small objects in the cache, thus increasing the probability of an object being in the cache when requested. To achieve higher byte hit rates a few larger files must be retained in the cache. We found frequency-based policies to work best for this metric, as they keep the most popular files, regardless of size, in the cache. With either approach it is important that inactive objects be removed from the cache to prevent performance degradation due to pollution.
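As a concrete point of reference for the size-based side of this trade-off, the sketch below gives a textbook Greedy-Dual-Size style policy, which favors keeping many small objects. It is a generic illustration under assumed class and method names, not taken from the paper's trace-driven simulator.

```python
import heapq

class GDSizeCache:
    """Greedy-Dual-Size sketch: evict the object with the smallest H = L + cost/size.
    Small objects receive large H values, so many of them stay resident,
    which raises the request hit rate at the expense of the byte hit rate."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0           # inflation value, raised on every eviction
        self.live = {}         # obj -> (H, size)
        self.heap = []         # (H, obj) min-heap with lazy deletion

    def access(self, obj, size, cost=1.0):
        hit = obj in self.live
        if hit:
            self.used -= self.live.pop(obj)[1]
        while self.used + size > self.capacity and self.live:
            H, victim = heapq.heappop(self.heap)
            if victim in self.live and self.live[victim][0] == H:
                self.L = H
                self.used -= self.live.pop(victim)[1]
        H = self.L + cost / size
        self.live[obj] = (H, size)
        self.used += size
        heapq.heappush(self.heap, (H, obj))
        return hit

cache = GDSizeCache(capacity=100)
for obj, size in [("a", 10), ("b", 60), ("c", 50), ("a", 10)]:
    print(obj, cache.access(obj, size))   # "b" is evicted to admit "c"; "a" re-hits
```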


7.
In this paper, we consider symbolic model checking of safety properties of linear parametrized systems. Sets of configurations are represented by regular languages and actions by regular relations. Since the verification problem amounts to the computation of the reachability set, we focus on the computation of R*(φ) for a regular relation R and a regular language φ. We present a technique called regular widening that allows, when it terminates, the computation of either the reachability set R*(φ) of a system or the transitive closure R* of a regular relation. We show that our method can be uniformly applied to several parametrized systems. Furthermore, we show that it is powerful enough to simulate some existing methods that compute either R* or R*(φ) for each R (resp. φ) belonging to a subclass of regular relations (resp. belonging to a subclass of regular languages).

8.
An Effective Web Proxy Cache Replacement Algorithm   (cited 2 times: 0 self-citations, 2 by others)
A well-designed Web cache replacement policy allows network resources to be used most effectively. This paper designs a relatively efficient Web cache replacement policy, LFRU, aiming to retrieve network resources in a better way and to improve the performance and quality of service of Web caches. Experimental results show that the policy achieves high document hit ratios and byte hit ratios.

9.
赵晓  王铮  黄程侃  赵燕伟 《机器人》2018,40(6):903-910
To address the large memory overhead and long computation time of the A* path-finding algorithm in large scenes, this paper proposes an improved A* algorithm that combines A* with jump point search. The algorithm expands the search by selecting jump points until the final path is generated; during expansion, jump points replace the large number of unnecessary nodes that A* would otherwise add to the OpenList and ClosedList, thereby reducing computation. To verify the effectiveness of the improved A* algorithm, simulations were conducted on 2D grid maps of different sizes. The results show that, compared with A*, the improved algorithm expands fewer nodes and finds paths faster, and the speedup becomes more pronounced as the map grows. Finally, the improved A* algorithm was applied to the mobile robot Turtlebot2 for comparative experiments. The results show that, while producing the same paths, the improved algorithm improves path-finding speed by about 200% over A* and meets the requirements of mobile-robot path planning.
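For orientation, a compact grid A* skeleton is sketched below; the jump-point pruning that the paper adds would replace the plain 4-connected neighbor expansion marked in the comments. The grid encoding and function name are illustrative assumptions.

```python
import heapq

def astar(grid, start, goal):
    """Plain 4-connected grid A* with a Manhattan heuristic.
    grid[r][c] == 1 marks an obstacle.  The improved algorithm in the
    abstract would replace the neighbor loop below with jump-point
    successors, skipping the many intermediate nodes pushed here."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_list = [(h(start), 0, start, None)]
    parents, g_best = {}, {start: 0}
    while open_list:
        f, g, node, parent = heapq.heappop(open_list)
        if node in parents:
            continue                      # already closed
        parents[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_list, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # path around the obstacle row
```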

10.
The power consumed by memory systems accounts for 45% of the total power consumed by an embedded system, and the power consumed during a memory access is 10 times higher than during a cache access. Thus, increasing the cache hit rate can effectively reduce the power consumption of the memory system and improve system performance. In this study, we increased the cache hit rate and reduced the cache-access power consumption by developing a new cache architecture known as a single linked cache (SLC) that stores frequently executed instructions. SLC has the features of low power consumption and low access delay, similar to a direct-mapped cache, and a high cache hit rate similar to a two-way set-associative cache, obtained by adding a new link field. In addition, we developed another design known as multiple linked caches (MLC) to further reduce the power consumption during each cache access and avoid unnecessary cache accesses when the requested data is absent from the cache. In MLC, the linked cache is split into several small linked caches that store frequently executed instructions to reduce the power consumption during each access. To avoid unnecessary cache accesses when a requested instruction is not in the linked caches, the addresses of the frequently executed blocks are recorded in the branch target buffer (BTB). By consulting the BTB, a processor can access the memory to obtain the requested instruction directly if the instruction is not in the cache. In the simulation results, our method performed better than selective compression, a traditional cache, and a filter cache in terms of the cache hit rate, power consumption, and execution time.

11.
翁唱玲  杨清 《计算机应用》2013,33(11):3267-3270
To improve the performance of mobile database systems, a mobile database caching model is proposed. A synchronization algorithm based on message digests compares the digest values in the digest tables of the mobile client and the server to complete cache synchronization and maintain consistency between the mobile client's cache and the server's data. The model also takes data timeliness and transaction priority into account, and a cache replacement algorithm based on a value function is designed. Experimental results show that as the number of cached items increases, the proposed algorithm achieves a higher cache hit ratio than the least recently used (LRU) and LA2U algorithms, and that as access frequency increases, the transaction restart ratio is lower than with LRU and LA2U, effectively improving the performance of the mobile database cache.
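A minimal sketch of the digest-comparison step described above: each side keeps a table mapping item keys to digests of their current values, and only items whose digests differ are pulled from the server. The hashing choice, table layout, and function names are assumptions for illustration.

```python
import hashlib

def digest(value: bytes) -> str:
    return hashlib.sha256(value).hexdigest()

def sync_cache(client_digests, server_digests, fetch_from_server):
    """Compare per-item digests and refresh only stale or missing entries.
    Returns the refreshed items."""
    stale = [k for k, d in server_digests.items()
             if client_digests.get(k) != d]
    refreshed = {}
    for k in stale:
        refreshed[k] = fetch_from_server(k)          # pull the current value
        client_digests[k] = digest(refreshed[k])     # update the client digest table
    return refreshed

# Toy usage: the server's copy of item "b" has changed since it was cached.
server_data = {"a": b"v1", "b": b"v2-new"}
server_digests = {k: digest(v) for k, v in server_data.items()}
client_digests = {"a": digest(b"v1"), "b": digest(b"v2-old")}
print(sorted(sync_cache(client_digests, server_digests, server_data.__getitem__)))  # ['b']
```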

12.
To address the plateau-search phenomenon in the dimensionality-reduction-based multi-objective A* (NAMOAdr*) algorithm, a random-walk-based multi-objective A* (RWNAMOAdr*) algorithm is proposed by incorporating a Monte Carlo random walk strategy. The basic idea is that when NAMOAdr* falls into a plateau search, the random walk strategy is used to quickly find an exit (a label whose heuristic values are not dominated by those of the label expanded last) and escape the plateau. To decide when NAMOAdr* has fallen into a plateau search, a detection method is proposed: if the heuristic values of the labels expanded in m consecutive expansions are all dominated by those of the previously expanded label, NAMOAdr* is considered to be in a plateau search. Experiments were conducted on random grids, a standard benchmark for multi-objective search algorithms. The results show that RWNAMOAdr* reduces the running time of NAMOAdr* by 50.69% on average and its memory usage by about 10% on average, providing theoretical support for accelerating multi-objective path search in real-world applications.
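A skeletal version of the plateau-detection rule described above: count how many consecutive expansions yield a label whose heuristic vector is dominated by the previous one, and signal a plateau once the count reaches m. The dominance check and counter class are generic sketches, not the RWNAMOAdr* code.

```python
def dominates(u, v):
    """Pareto dominance for minimization: u dominates v if u is no worse in
    every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

class PlateauDetector:
    def __init__(self, m=10):
        self.m = m
        self.count = 0
        self.prev_h = None

    def update(self, h_vector):
        """Call after each label expansion; returns True when m consecutive
        dominated expansions have been observed (plateau detected)."""
        if self.prev_h is not None and dominates(self.prev_h, h_vector):
            self.count += 1
        else:
            self.count = 0
        self.prev_h = h_vector
        return self.count >= self.m
```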

13.
In this paper we deal with algorithm A* and its application to the problem of finding the shortest common supersequence of a set of sequences. A* is a powerful search algorithm which may be used to carry out concurrently the construction of a network and the solution of a shortest path problem on it. We prove a general approximation property of A* which, by building a smaller network, allows us to find a solution with a given approximation ratio. This is particularly useful when dealing with large instances of some problem. We apply this approach to the solution of the shortest common supersequence problem and show its effectiveness.

14.
To address the randomness of the paths generated by the heuristic bidirectional rapidly-exploring random tree (RRT-Connect) algorithm and the slow convergence of the asymptotically optimal bidirectional RRT* (B-RRT*) algorithm, an efficient path-planning algorithm improved from B-RRT* (EB-RRT*) is proposed. First, an intelligent sampling function is introduced to make tree expansion more directional, which reduces path-finding time and improves path smoothness. Second, building on B-RRT*, a rapid expansion strategy is added to EB-RRT*, so that the improved algorithm expands quickly in free space using the RRT-Connect expansion scheme, while in obstacle space it expands using an improved asymptotically optimal RRT* scheme, improving expansion efficiency while avoiding local optima. EB-RRT* was compared in simulation with RRT, RRT-Connect, RRT*, and B-RRT*. The results show that the improved algorithm clearly outperforms the others in planning efficiency and path smoothness; compared with B-RRT*, it reduces planning time by 68.3% and the number of iterations by 48.6%.
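One common way to make RRT-style sampling "more directional", as the intelligent sampling function here aims to do, is goal-biased sampling; the sketch below shows that generic heuristic under assumed parameters and is not the authors' EB-RRT* sampler.

```python
import random

def goal_biased_sample(bounds, goal, goal_bias=0.1, goal_sigma=1.0):
    """Return a random 2D configuration, biased toward the goal.
    With probability goal_bias, sample near the goal (Gaussian around it);
    otherwise sample uniformly within the workspace bounds."""
    (xmin, xmax), (ymin, ymax) = bounds
    if random.random() < goal_bias:
        x = min(max(random.gauss(goal[0], goal_sigma), xmin), xmax)
        y = min(max(random.gauss(goal[1], goal_sigma), ymin), ymax)
    else:
        x = random.uniform(xmin, xmax)
        y = random.uniform(ymin, ymax)
    return (x, y)

random.seed(1)
samples = [goal_biased_sample(((0, 10), (0, 10)), goal=(9, 9)) for _ in range(5)]
print(samples)   # mostly uniform samples, occasionally clustered near (9, 9)
```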

15.
Design and Implementation of an Adaptive Consistency-Replacement Algorithm   (cited 1 time: 0 self-citations, 1 by others)
To address the fact that consistency policies and replacement policies for proxy caches have not yet been well integrated, a new adaptive consistency-replacement algorithm for optimizing proxy caches (the ACR algorithm) is proposed, designed, and implemented on the basis of an optimization model. The algorithm consists of a consistency policy and a replacement policy: the consistency policy uses an adaptive TTL mechanism, and the replacement policy is based on a cost/value model. Trace-driven simulation results show that ACR outperforms several traditional replacement algorithms in terms of stale hit ratio.
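Adaptive TTL, as referred to above, is commonly implemented as a fraction of the document's age (the longer an object has gone unchanged, the longer it is assumed to stay fresh). The sketch below uses that standard rule with assumed bounds and parameter values, not the ACR paper's exact settings.

```python
import time

def adaptive_ttl(last_modified, now=None, alpha=0.2, min_ttl=60, max_ttl=86400):
    """Adaptive TTL heuristic: TTL = alpha * age of the object,
    clamped to [min_ttl, max_ttl] seconds."""
    now = time.time() if now is None else now
    age = max(0.0, now - last_modified)
    return min(max(alpha * age, min_ttl), max_ttl)

now = 1_700_000_000
print(adaptive_ttl(now - 3600, now))         # modified an hour ago -> 720.0 s
print(adaptive_ttl(now - 30 * 86400, now))   # a month old          -> capped at 86400 s
```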

16.
Research on a Data Caching Architecture for Distributed Access Environments   (cited 1 time: 0 self-citations, 1 by others)
This paper discusses a distributed caching architecture that improves data utilization and reduces communication traffic in a distributed information access environment. By introducing cache nodes, the architecture can cache information integrated from multiple data sources and allows multiple users to reuse each other's cached data, improving the cache hit ratio. The architecture adopts several methods to solve the cache consistency problems of both single data sources and integrated multiple data sources.

17.
Web proxy caches can, to some extent, alleviate user access latency and network congestion. The cache replacement policy of a Web proxy directly affects the cache hit ratio and hence the responsiveness to network requests. This paper extracts multiple features of Web log data through a fixed-size sliding window, uses a Gaussian mixture model to cluster the log data, and predicts which Web objects are likely to be accessed again within the window. Combined with the least recently used (LRU) algorithm, a new Web proxy cache replacement policy based on the Gaussian mixture model is proposed. Experimental results show that, compared with the traditional replacement policies LRU, LFU, FIFO, and GDSF, the proposed policy effectively improves the request hit ratio and byte hit ratio of the Web proxy cache.
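A rough sketch of the clustering step described here, assuming scikit-learn's GaussianMixture and a handful of made-up per-object window features (request count, mean inter-arrival time, object size). The feature set and the "protect the hot cluster from LRU eviction" interpretation are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Per-object features computed over one sliding window of the proxy log:
# [requests in window, mean inter-arrival time (s), object size (KB)]
features = np.array([
    [12, 30.0, 15.0],
    [10, 45.0, 8.0],
    [1, 900.0, 512.0],
    [2, 600.0, 300.0],
    [15, 20.0, 12.0],
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features)

# Treat the component with the higher mean request count as the
# "likely to be re-accessed" cluster; such objects would be protected
# from eviction when the LRU policy needs a victim.
hot_component = int(np.argmax(gmm.means_[:, 0]))
protected = labels == hot_component
print(labels, protected)
```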

18.
Based on feedback control theory, a cache controller is designed through system identification. By dynamically adjusting the cache space allotted to different classes of cached objects, a high hit ratio can be guaranteed for high-priority Web objects while the ratio between the hit ratios of different classes remains constant. Differentiated cache services based on proportional hit ratios are implemented on the server side. Experiments show good differentiation under the GDSF, LRU, and LFU replacement algorithms, for both the request hit ratio and the byte hit ratio.
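The proportional differentiation idea described above can be sketched as a simple feedback step that shifts cache space between two classes until the measured hit-ratio ratio matches the target. The gain, block units, and two-class setup are assumptions for illustration, not the identified controller from the paper.

```python
def adjust_partitions(size_hi, size_lo, hit_hi, hit_lo, target_ratio=2.0,
                      gain=0.05, min_size=64):
    """One control step: if the high-priority class's hit ratio falls below
    target_ratio times the low-priority hit ratio, move space toward it,
    and vice versa.  Sizes are in cache blocks."""
    if hit_lo <= 0:
        return size_hi, size_lo
    error = target_ratio - hit_hi / hit_lo            # > 0: high class underserved
    delta = int(gain * error * (size_hi + size_lo))   # integral-style correction
    delta = max(-(size_hi - min_size), min(delta, size_lo - min_size))
    return size_hi + delta, size_lo - delta

# Example: measured hit ratios 0.30 vs 0.20 with a 2:1 target ratio.
print(adjust_partitions(512, 512, 0.30, 0.20))   # space shifts toward the high class
```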

19.
Implementing Proxy Caching Based on Analysis of User Access Behavior   (cited 3 times: 0 self-citations, 3 by others)
This paper proposes a Site-Graph model describing the structure of the WWW and analyzes user access behavior on that basis, leading to URAC, a proxy caching system that takes actual access request patterns into account. The paper describes how URAC works in detail, discusses the main issues to be solved in proxy cache design, including hit ratio, consistency, and replacement algorithms, and gives a performance analysis. It concludes that URAC, whose goals are to improve the hit ratio and reduce access latency, is a more practical proxy caching system.

20.
This paper studies caching techniques in MANET environments and proposes an inter-group cooperative caching algorithm. The algorithm fully considers data access frequency, the distance from the caching node to the data source node, data size, and data consistency. Starting from how cached data are actually used, it realizes cooperative caching across groups, makes caching decisions more reasonable, and improves the cache hit ratio. Experiments show that the proposed algorithm significantly reduces the energy consumed by query execution and improves the average query response time.
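A caching-value score of the general shape described here (access frequency counts for keeping an item, while distance to the source and size count against fetching or storing it) might be sketched as below; the weights and the linear-combination form are assumptions, not the paper's formula.

```python
def cache_value(access_freq, hops_to_source, size_kb, is_consistent,
                w_freq=1.0, w_dist=0.5, w_size=0.01):
    """Heuristic caching value: frequently used, far-from-source, small,
    and consistent items are worth keeping; stale items are worthless."""
    if not is_consistent:
        return 0.0
    return w_freq * access_freq + w_dist * hops_to_source - w_size * size_kb

# Rank two candidate items for caching at a node.
candidates = [("mapTile", 8, 3, 120, True), ("video", 2, 5, 4000, True)]
ranked = sorted(candidates, key=lambda c: cache_value(*c[1:]), reverse=True)
print([name for name, *_ in ranked])   # the small, frequently used item ranks first
```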
