Similar Documents
20 similar records found (search time: 115 ms)
1.
Implementing Location-Transparent Communication in Mobile Agent Systems   Cited by: 1 (self-citations: 0, by others: 1)
In a Mobile Agent system, agents must continually cooperate and exchange information. Making agent communication location-transparent, and guaranteeing that messages are not lost when a mobile agent migrates, has long been a hard problem in mobile-agent communication, and none of the existing systems solves it well. This paper proposes a method for location-transparent Mobile Agent communication that achieves transparency, reliability, and efficiency, while greatly reducing the address-registration (or address-update) overhead caused by agent migration.
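One common way to obtain location transparency of this kind is a home-site registry that queues messages and forwards them to the agent's current node, so senders never need to track migrations. The sketch below is a hypothetical minimal illustration of that pattern (the class and method names are illustrative, not from the paper):

```python
# Hypothetical sketch: a home-site registry queues messages and delivers them
# to a migrated agent's current location, so senders never need to know where
# the agent currently lives and no message is lost across a migration.
class HomeRegistry:
    def __init__(self):
        self._location = {}   # agent id -> current node
        self._mailbox = {}    # agent id -> queued messages

    def register(self, agent_id, node):
        self._location[agent_id] = node

    def migrate(self, agent_id, new_node):
        # a single registry update per hop; senders are unaffected
        self._location[agent_id] = new_node

    def send(self, agent_id, message):
        # queue at the home site; delivery follows the current location,
        # so messages survive migration instead of being dropped
        self._mailbox.setdefault(agent_id, []).append(message)

    def deliver(self, agent_id):
        node = self._location[agent_id]
        pending = self._mailbox.pop(agent_id, [])
        return node, pending

registry = HomeRegistry()
registry.register("agent-1", "hostA")
registry.send("agent-1", "task-request")
registry.migrate("agent-1", "hostB")   # agent moves mid-conversation
node, msgs = registry.deliver("agent-1")  # message still arrives, at hostB
```

The trade-off the paper targets is visible here: only one registry entry changes per migration, instead of every correspondent re-registering the agent's address.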

2.
A Mobile Agent System Using Incremental-Timestamp Security Technology   Cited by: 2 (self-citations: 0, by others: 2)
万燕  孙永强  朱向华  唐进 《软件学报》2002,13(7):1331-1337
Security technology determines the practicality of Mobile Agents; its main task is to guard against replay attacks and privilege-escalation attacks by malicious Mobile Agents. After introducing the concept and characteristics of Mobile Agents and the importance of security, this paper proposes the notion of an incremental timestamp, explains its advantages, and gives concrete examples. A Mobile Agent system that adopts incremental timestamps can largely prevent the executing system from being damaged by malicious Mobile Agents. The incremental-timestamp technique has been verified and implemented in a Mobile Agent system developed on Lucent's Inferno system.
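The replay-blocking core of an incremental-timestamp scheme can be sketched in a few lines: each visiting agent carries a strictly increasing timestamp, and a host refuses any agent whose timestamp does not exceed the last one it accepted. This is a hypothetical simplification of the paper's mechanism (the `Host.admit` interface is illustrative):

```python
# Hypothetical sketch of the incremental-timestamp idea: a host records the
# highest timestamp it has accepted per agent and rejects anything not
# strictly newer, which blocks replayed (duplicate) agents.
class Host:
    def __init__(self):
        self._last_seen = {}  # agent id -> highest timestamp accepted

    def admit(self, agent_id, timestamp):
        last = self._last_seen.get(agent_id, -1)
        if timestamp <= last:
            return False          # replayed or stale agent: refuse execution
        self._last_seen[agent_id] = timestamp
        return True

host = Host()
first = host.admit("agent-7", 1)    # True: first visit
replay = host.admit("agent-7", 1)   # False: replay attack blocked
revisit = host.admit("agent-7", 2)  # True: legitimate revisit, stamp incremented
```

The "incremental" property is what distinguishes this from a plain nonce list: the host stores one integer per agent rather than every stamp it has ever seen.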

3.
To address the drawbacks of migrating a Mobile Agent as a whole, this paper proposes dynamic remote code assembly for Mobile Agents: part of the executable code is stored at the destination node, assembled dynamically when the agent runs there, and disassembled when execution finishes. The paper discusses the structure of the agent entity and the Agent Server under this scheme, and analyses the agent's execution flow and efficiency.
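In Python terms, "assembling" node-resident code at run time amounts to loading a module from a file the agent finds at the node, and "disassembling" amounts to releasing it before moving on. The following is a hypothetical sketch of that idea, not the paper's design (the file name and function are illustrative):

```python
import importlib.util
import os
import tempfile
import textwrap

# Hypothetical sketch: instead of migrating with all of its code, the agent
# loads node-resident code on arrival ("dynamic assembly") and drops it on
# departure ("dynamic disassembly"). The node-side code here is simulated by
# a file written into a temporary directory.
node_code = textwrap.dedent("""
    def local_task(x):
        return x * 2
""")

with tempfile.TemporaryDirectory() as node_dir:
    path = os.path.join(node_dir, "node_task.py")
    with open(path, "w") as f:
        f.write(node_code)

    # assemble: bind the node's code into the running agent
    spec = importlib.util.spec_from_file_location("node_task", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    result = module.local_task(21)

    # disassemble: release the reference before migrating onward
    del module

print(result)  # 42
```

The efficiency argument in the abstract follows directly: the migrating payload shrinks to the agent's state plus a small core, at the cost of a load step at each node.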

4.
徐练 《计算机应用》2004,24(Z1):58-62
Against the background of intelligent agents and the Internet, this paper studies the migration mechanism of Mobile Agents and the agent transfer protocol needed to implement agent mobility.

5.
To address implementation strategies for building an automated question-answering platform on Mobile Agents, this paper first proposes a Mobile Agent naming scheme and communication mode suited to the system, then describes the system's Mobile Agent migration mechanism and routing strategy, and finally discusses an adaptive mechanism for Mobile Agent visit paths. Experiments show that a platform built this way allows multiple web-based education sites to be shared and improves both the recall and the precision of users' question answering.

6.
Architecture and Key Technologies of a Mobile Agent Based Information Retrieval System   Cited by: 5 (self-citations: 0, by others: 5)
This paper introduces the concept of Mobile Agent systems, presents a design for an agent-based network information retrieval system built on the Grasshopper development platform, and discusses the system's structural model, implementation, and related technologies.

7.
Applying Multi-Agent Cooperation Technology in an E-Commerce Intermediary Platform   Cited by: 1 (self-citations: 0, by others: 1)
This paper describes the design and implementation of a B2B e-commerce intermediary platform. The design draws extensively on the respective strengths of J2EE technology and Mobile Agents and combines the two effectively. Guided by the characteristics of the intermediary platform, it completes the functional design of the platform's agents and develops middleware for interoperation between the agents and J2EE.

8.
叶蓉  陈莘萌 《计算机工程》2004,30(2):138-140
The cryptographic tracing method proposed by G.Vigna is a representative software approach for protecting Mobile Agents against write attacks by malicious hosts or other agents. Addressing its shortcomings, this paper proposes an improvement that makes the checking of Mobile Agents timely and proactive, reduces storage and message costs, and makes the comparison results of the check directly usable.
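The core of a cryptographic-tracing scheme is a tamper-evident record of what each host did to the agent. A standard way to realize that is a hash chain over execution records, so any later rewrite changes the final digest. This is a hypothetical sketch of the general idea, not G.Vigna's exact protocol or the paper's improvement:

```python
import hashlib

# Hypothetical sketch of cryptographic trace checking: each host appends its
# execution record to a hash chain, so the agent's owner can detect any
# after-the-fact tampering with the recorded trace.
def extend_chain(prev_digest, record):
    h = hashlib.sha256()
    h.update(prev_digest + record.encode())
    return h.digest()

def chain_of(records):
    digest = b"\x00" * 32  # fixed genesis value
    for r in records:
        digest = extend_chain(digest, r)
    return digest

trace = ["hostA:read(x)", "hostB:write(y)"]
honest = chain_of(trace)
tampered = chain_of(["hostA:read(x)", "hostB:write(z)"])  # malicious rewrite
# the two final digests differ, exposing the modification
```

In a full protocol each host would also sign its link; the hash chain alone only proves that *some* record changed, not which host changed it.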

9.
This paper explains the basic concepts of Mobile Agents and analyses their basic system architecture and service facilities. On this basis, it analyses the distributed computing models in current use, proposes a Mobile Agent based distributed computing model, and evaluates the model's feasibility and advantages.

10.
To address the shortcomings of the traditional demand-chain MAS model, this paper analyses the advantages of Mobile Agents in demand-chain management, proposes a Mobile Agent based demand-chain optimization model, describes the functions and cooperation of each agent in detail, and finally implements a Mobile Agent platform, discussing Mobile Agent communication, migration, and security. Applying the model in the demand chain greatly improves how quickly the chain's members react to volatile market demand and strengthens enterprise competitiveness.

11.
Multi-agent learning (MAL) studies how agents learn to behave optimally and adaptively from their experience when interacting with other agents in dynamic environments. The outcome of a MAL process is jointly determined by all agents' decision-making. Hence, each agent needs to think strategically about others' sequential moves when planning future actions. The strategic interactions among agents make MAL go beyond the direct extension of single-agent learning to multiple agents. With this strategic thinking, each agent aims to build a subjective model of others' decision-making from its observations. Such modeling is directly influenced by the agents' perception during the learning process, which is called the information structure of the agent's learning. Since it determines the input to MAL processes, the information structure plays a significant role in the learning mechanisms of the agents. This review creates a taxonomy of MAL and establishes a unified and systematic way to understand MAL from the perspective of information structures. We define three fundamental components of MAL: the information structure (i.e., what the agent can observe), the belief generation (i.e., how the agent forms a belief about others based on the observations), and the policy generation (i.e., how the agent generates its policy based on its belief). In addition, this taxonomy enables the classification of a wide range of state-of-the-art algorithms into four categories based on the belief-generation mechanisms for opponents: stationary, conjectured, calibrated, and sophisticated opponents. We introduce the Value of Information (VoI) as a metric to quantify the impact of different information structures on MAL. Finally, we discuss the strengths and limitations of algorithms from the different categories and point to promising avenues for future research.

12.
Naming and locating agents are key technologies in mobile-agent systems, and their quality affects the performance of the whole system. After analysing several current naming and locating schemes, this paper proposes a new one: Agent Shadow Tracing. The method treats an agent as a resource of the host that created it and allocates the agent a fixed interface (a "shadow") on that host. When the agent moves, other agents and programs communicate with the mobile agent by interacting with the shadow. The method avoids several drawbacks of current naming and locating schemes and solves the communication problem between mobile agents well.
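The shadow can be pictured as a fixed proxy object on the creating host: peers always address the shadow, and the shadow relays to wherever the agent currently is. The sketch below is a hypothetical illustration of that indirection (class and method names are invented for the example):

```python
# Hypothetical sketch of the shadow idea: peers talk to a fixed per-agent
# interface on the home host; only the shadow tracks the agent's moves, and
# it relays each message to the agent's current node.
class Shadow:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.current_node = "home"
        self.relayed = []          # (node message was relayed to, message)

    def move(self, node):
        # on migration, only the shadow is updated; peers keep the same handle
        self.current_node = node

    def tell(self, message):
        # the fixed interface peers call, regardless of the agent's location
        self.relayed.append((self.current_node, message))

shadow = Shadow("agent-9")
shadow.tell("hello")        # delivered while the agent is still at home
shadow.move("hostC")        # agent migrates; peers are unaware
shadow.tell("follow-up")    # same handle, now relayed to hostC
```

This is structurally the same indirection as the home-registry pattern, but bound per agent: the shadow *is* the agent's stable name.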

13.
In this work, we address a relatively unexplored aspect of designing agents that learn from human reward. We investigate how an agent's non-task behavior can affect a human trainer's training and the agent's learning. We use the TAMER framework, which facilitates the training of agents by human-generated reward signals, i.e., judgements of the quality of the agent's actions, as the foundation for our investigation. Then, starting from the premise that the interaction between the agent and the trainer should be bi-directional, we propose two new training interfaces to increase a human trainer's active involvement in the training process and thereby improve the agent's task performance. One provides information on the agent's uncertainty, a metric calculated as data coverage; the other on its performance. Our results from a 51-subject user study show that these interfaces can induce trainers to train longer and give more feedback. The agent's performance, however, increases only in response to the addition of performance-oriented information, not to the sharing of uncertainty levels. These results suggest that the organizational maxim about human behavior, "you get what you measure" (sharing metrics with people causes them to focus on optimizing those metrics while de-emphasizing other objectives), also applies to the training of agents. Using principal component analysis, we show how trainers in the two conditions train agents differently. In addition, by simulating the influence of the agent's uncertainty-informative behavior on a human's training behavior, we show that trainers can be distracted when the agent shares its uncertainty levels about its actions, giving poor feedback for the sake of reducing the agent's uncertainty without improving its performance.

14.
In this paper, we propose two new techniques for real-time crowd simulation: clustering agents on the GPU, and incorporating the resulting global cluster information into existing microscopic navigation techniques. The proposed model combines agent-based models with macroscopic information (agent clusters) in a single framework. The global cluster information is computed on the GPU from the agents' positions and velocities, and is then fed as input to the existing agent-based models (velocity obstacles, rule-based steering, and social forces). The proposed hybrid model thus considers not only nearby agents but also distant agent configurations. Our test scenarios indicate that, in very dense circumstances, agents using the hybrid model navigate the environment at actual speeds closer to their intended speeds (they get stuck less) than agents using the agent-based models alone.
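The clustering step can be illustrated independently of the GPU: group agents by spatial proximity and hand the resulting cluster assignments to the local navigation model. The sketch below uses a uniform grid as a hypothetical stand-in for the paper's GPU clustering (the paper's actual method is not specified in the abstract):

```python
# Hypothetical CPU sketch of the clustering step: bucket agents into uniform
# grid cells by position, yielding the global cluster information that the
# microscopic navigation model would then consume. In the paper this step
# runs on the GPU and also uses velocities; here it is position-only.
def grid_clusters(positions, cell_size):
    clusters = {}
    for i, (x, y) in enumerate(positions):
        cell = (int(x // cell_size), int(y // cell_size))
        clusters.setdefault(cell, []).append(i)
    return clusters

agents = [(0.2, 0.3), (0.4, 0.1), (5.1, 5.2)]
clusters = grid_clusters(agents, cell_size=1.0)
# two nearby agents share a cluster; the distant agent forms its own
```

A grid keeps each agent's assignment independent of the others, which is exactly the property that makes this step trivially parallel on a GPU.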

15.
This paper proposes a consensus protocol for continuous-time double-integrator multi-agent systems under noisy communication in directed topologies. Each agent's control input relies on its own velocity and its relative positions with respect to its neighbours; it does not require relative velocities. Each agent receives its neighbours' position information corrupted by time-varying measurement noises whose intensities are proportional to the absolute relative distance separating the agent from the neighbour. The consensus protocol relies mainly on the velocity damping gain to derive conditions under which unbiased mean square χ-consensus is achieved in directed fixed topologies, and unbiased mean square average consensus is achieved in directed switching topologies. The mean square state errors are quantified for both positions and velocities. Finally, numerical simulations illustrate the approach.
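The structure of such a protocol can be seen in a tiny simulation: each double-integrator agent applies u_i = -k·v_i + Σ_j (x_j^measured - x_i), i.e., its own velocity damping plus (possibly noisy) relative positions. The sketch below is a hypothetical two-agent, Euler-integrated illustration; the gains, noise model, and topology are invented for the example and are not the paper's conditions:

```python
import random

# Hypothetical numerical sketch of the protocol's structure for two
# double-integrator agents: u_i = -k * v_i + (x_j_measured - x_i), where the
# measurement noise scales with the relative distance (as in the abstract).
def simulate(steps=20000, dt=0.005, k=2.0, noise=0.0, seed=0):
    rng = random.Random(seed)
    x = [0.0, 4.0]   # positions start 4 units apart
    v = [0.0, 0.0]   # velocities start at rest
    for _ in range(steps):
        u = []
        for i in range(2):
            j = 1 - i
            # distance-proportional measurement noise on the neighbour's position
            measured = x[j] + noise * rng.gauss(0.0, 1.0) * abs(x[j] - x[i])
            u.append(-k * v[i] + (measured - x[i]))
        for i in range(2):   # forward-Euler double-integrator update
            v[i] += dt * u[i]
            x[i] += dt * v[i]
    return x, v

x, v = simulate()  # noiseless run: positions converge, velocities vanish
```

With k = 2 the relative dynamics are a damped oscillator (e'' = -k·e' - 2e), so the noiseless run settles; the paper's contribution is proving mean-square versions of this under noise and directed/switching topologies.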

16.
In this paper, a multi-agent reinforcement learning method based on predicting the actions of other agents is proposed. In a multi-agent system, the learning agent's action selection is unavoidably affected by the actions of other agents; therefore, the joint state and joint action are involved in the multi-agent reinforcement learning system. A novel agent action prediction method based on a probabilistic neural network (PNN) is proposed, with the PNN used to predict the actions of other agents. Furthermore, a policy-sharing mechanism is used to exchange the learning policies of multiple agents, with the aim of speeding up learning. Finally, the application of the presented method to robot soccer is studied. Through learning, robot players master the mapping from state information to the action space, and coordination and cooperation among multiple robots are well realized.
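The role the PNN plays can be illustrated with a much simpler stand-in: predict the opponent's next action from the empirical frequency of its past actions in each observed state. This is a hypothetical simplification (a real PNN would generalise across similar states rather than treat each state independently):

```python
from collections import Counter, defaultdict

# Hypothetical frequency-based stand-in for the paper's PNN predictor:
# estimate another agent's next action as its most common past action in the
# current state. State and action names are illustrative.
class ActionPredictor:
    def __init__(self):
        self.history = defaultdict(Counter)  # state -> action counts

    def observe(self, state, action):
        self.history[state][action] += 1

    def predict(self, state, default="noop"):
        counts = self.history.get(state)
        return counts.most_common(1)[0][0] if counts else default

p = ActionPredictor()
p.observe("ball-left", "move-left")
p.observe("ball-left", "move-left")
p.observe("ball-left", "shoot")
# in state "ball-left", "move-left" is the modal action, so it is predicted
```

Plugged into the learning loop, such a predictor lets the agent condition its value estimates on the predicted joint action instead of the full joint-action space.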

17.
This research treats a bargaining process as a Markov decision process, in which a bargaining agent's goal is to learn the optimal policy that maximizes the total reward it receives over the process. Reinforcement learning is an effective method for agents to learn how to choose actions at each time step of a Markov decision process. Temporal-difference (TD) learning is a fundamental method for solving the reinforcement learning problem, and it can handle the temporal credit assignment problem. This research designs agents that apply TD-based reinforcement learning to online bilateral bargaining with incomplete information, and further evaluates the agents' bargaining performance in terms of average payoff and settlement rate. The results show that agents using TD-based reinforcement learning achieve good bargaining performance. The approach is robust and convenient, and hence suitable for online automated bargaining in electronic commerce.
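The TD(0) update such an agent would apply after each offer/counter-offer step is V(s) ← V(s) + α·(r + γ·V(s') - V(s)). The sketch below shows that update on an invented two-state bargaining episode; the state names, rewards, and rates are illustrative, not the paper's:

```python
# Minimal TD(0) value update of the kind a bargaining agent applies after
# each step of an episode: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    v_s = V.get(s, 0.0)
    target = r + gamma * V.get(s_next, 0.0)  # bootstrapped one-step target
    V[s] = v_s + alpha * (target - v_s)
    return V[s]

V = {}
# one hypothetical episode: offer -> counter-offer -> settlement (reward 1.0)
td0_update(V, "offer", 0.0, "counter")
td0_update(V, "counter", 1.0, "done")
# a second pass: the settlement reward now propagates back to "offer"
td0_update(V, "offer", 0.0, "counter")
```

The second pass shows how TD solves temporal credit assignment: the reward received at settlement flows backwards one state per episode through the bootstrapped target.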

18.
Learning from rewards generated by a human trainer observing an agent in action has proven to be a powerful method for teaching autonomous agents to perform challenging tasks, especially for non-technical users. Since the efficacy of this approach depends critically on the reward the trainer provides, we consider how the interaction between the trainer and the agent should be designed so as to increase the efficiency of the training process. This article investigates the influence of the agent's socio-competitive feedback on the human trainer's training behavior and the agent's learning. The results of our user study with 85 participants suggest that the agent's passive socio-competitive feedback (showing the performance and scores of agents trained by other trainers in a leaderboard) substantially increases the participants' engagement in the game task and improves the agents' performance, even though the participants do not play the game directly but instead train the agent to do so. Moreover, making this feedback active (sending the trainer her agent's performance relative to others) induces still more participants to train agents longer and further improves the agent's learning. Our further analysis shows that agents trained under both the passive and the active social feedback obtain higher performance under a score mechanism that can be optimized from the trainer's perspective, and that the agent's additional active social feedback keeps participants training agents to learn policies that score higher under such a mechanism.

19.
In this paper, a scheme is proposed for hierarchical quantum information splitting with an unknown eight-qubit cluster state. The boss, Alice, wants to distribute a quantum secret to seven distant agents who are divided into two grades: three agents in the upper grade and four in the lower grade. Each agent in the upper grade needs the collaboration of only three of the other six agents to obtain the secret, whereas every agent in the lower grade needs the collaboration of all the other six. In other words, agents in different grades have different authorities to recover the boss's secret, and an agent in the upper grade is more powerful than one in the lower grade, who needs more information to recover it.

20.
This paper introduces an approach for sharing beliefs in collaborative multi-agent application domains where some agents can be more credible than others. In this context, we propose a formalization in which every agent maintains its own partial order over its peers, representing the credibility the agent assigns to its informants; each agent also has a belief base in which every sentence is tagged with an agent identifier representing the credibility of that sentence. We define four forwarding criteria for computing the credibility information attached to a belief being forwarded, and for determining how the receiver should handle the incoming information; the proposal considers both the sender's and the receiver's points of view with respect to the credibility of the information's source.
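One plausible forwarding criterion in this family is a "weakest informant" rule: label the forwarded belief with the informant the sender considers least credible along the belief's path, under the sender's own partial order. The sketch below is a hypothetical illustration of that style of rule only; the paper's four criteria are not specified in the abstract, and the names and orders here are invented:

```python
# Hypothetical "weakest informant" forwarding rule: under the sender's own
# strict partial order over peers, tag the forwarded belief with the least
# credible informant found along the belief's path.
def least_credible(informants, strictly_less_credible):
    # strictly_less_credible: set of (a, b) pairs meaning "a is less credible than b"
    worst = informants[0]
    for agent in informants[1:]:
        if (agent, worst) in strictly_less_credible:
            worst = agent
    return worst

# the sender's partial order: c < b, b < a (agent "a" is the most credible)
order = {("c", "b"), ("b", "a"), ("c", "a")}
# a belief relayed through a, then c, then b gets tagged with c
```

Because each agent applies its *own* order, the receiver may re-evaluate the same tag differently, which is why the proposal treats the sender's and receiver's viewpoints separately.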


