Similar documents
20 similar documents found
1.
In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS) which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory taking account of past interactions between agents, and when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
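The probabilistic trust computation described above can be sketched with a beta-distribution estimate over past interaction outcomes. The function names and the naive pooling of third-party reports below are illustrative only; TRAVOS's actual filtering of inaccurate reputation reports is omitted.

```python
def direct_trust(successes: int, failures: int) -> float:
    """Expected probability of a good outcome under a Beta(s+1, f+1) prior."""
    return (successes + 1) / (successes + failures + 2)

def combined_trust(own: tuple, reports: list) -> float:
    """Naively pool own (successes, failures) with third-party reports.
    TRAVOS additionally discounts reports judged inaccurate; not shown here."""
    s = own[0] + sum(r[0] for r in reports)
    f = own[1] + sum(r[1] for r in reports)
    return direct_trust(s, f)
```

With no evidence the estimate is an uninformative 0.5, and it moves toward the observed success rate as interactions accumulate.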

2.
For agents to collaborate in open multi-agent systems, each agent must trust in the other agents' ability to complete tasks and willingness to cooperate. Agents need to decide between cooperative and opportunistic behavior based on their assessment of another agent's trustworthiness. In particular, an agent can have two beliefs about a potential partner that tend to indicate trustworthiness: that the partner is competent and that the partner expects to engage in future interactions. This paper explores an approach that models competence as an agent's probability of successfully performing an action, and models belief in future interactions as a discount factor. We evaluate the underlying decision framework's performance given accurate knowledge of the model's parameters in an evolutionary game setting. We then introduce a game-theoretic framework in which an agent can learn a model of another agent online, using the Harsanyi transformation. The learning agents evaluate a set of competing hypotheses about another agent during the simulated play of an indefinitely repeated game. The Harsanyi strategy is shown to demonstrate robust and successful online play against a variety of static, classic, and learning strategies in a variable-payoff Iterated Prisoner's Dilemma setting.
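The interplay of competence and discount factor can be illustrated with the textbook grim-trigger cooperation condition for a repeated Prisoner's Dilemma; this is a standard condition under stated assumptions, not the paper's exact decision framework.

```python
def should_cooperate(T, R, P, delta, p=1.0):
    """Cooperate when the discounted stream of mutual cooperation (reward R,
    scaled by partner competence p) beats defecting once for the temptation
    payoff T and then receiving the punishment payoff P forever after.
    delta is the discount factor, i.e., belief in future interactions."""
    coop_value = p * R / (1 - delta)
    defect_value = T + delta * P / (1 - delta)
    return coop_value >= defect_value
```

With the classic payoffs T=5, R=3, P=1, cooperation pays only when the future matters enough (large delta) and the partner is competent enough (large p).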

3.
Decentralized Peer-to-Peer (P2P) networks offer not only opportunities but also threats. Due to the autonomy, self-interest and heterogeneity of peers, interaction outcomes are uncertain. One way to minimize the threats in such an open environment is to exploit reputation to evaluate the trustworthiness of peers and predict their future behavior. While most existing reputation-based trust models focus on protecting the network from malicious peers, peers' capability to fulfill tasks is mostly ignored. In this paper, we present a novel trust model, MHFTrust, which quantifies and compares the trustworthiness of peers based on a hierarchical fuzzy system. Six capability factors are identified to describe a peer's trustworthiness with respect to capability, and one security factor, named "Malicious behavior", is used to evaluate a peer's trustworthiness with respect to security. Our trust model consists of local-trust computation based on fuzzy techniques and global reputation aggregation, which integrates feedback from other peers to produce a global reputation for each peer. Credibility and weight of feedback are introduced to facilitate the computation of global reputation. Simulations show that our trust model greatly improves the efficiency of the P2P system, while the number of inauthentic files on the network decreases significantly.
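The global-reputation step, where feedback is weighted by credibility and weight, can be sketched as a normalized weighted average; the triple structure and normalization below are an illustrative reading of the abstract, not MHFTrust's actual fuzzy formulation.

```python
def global_reputation(feedback):
    """feedback: list of (rating, credibility, weight) triples from other
    peers. Returns the credibility-and-weight normalized aggregate rating,
    or 0.0 when no feedback is available."""
    num = sum(r * c * w for r, c, w in feedback)
    den = sum(c * w for _, c, w in feedback)
    return num / den if den else 0.0
```

A low-credibility rater's feedback moves the aggregate far less than a fully credible one's.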

4.
Decentralized Reputation Systems have recently emerged as a prominent method of establishing trust among self-interested agents in online environments. A key issue is the efficient aggregation of data in the system; several approaches have been proposed, but they are plagued by major shortcomings. We put forward a novel, decentralized data management scheme grounded in gossip-based algorithms. Rumor mongering is known to possess algorithmic advantages, and indeed, our framework inherits many of their salient features: scalability, robustness, a global perspective, and simplicity. We demonstrate that our scheme motivates agents to maintain a very high reputation, by showing that the higher an agent’s reputation is above the threshold set by its peers, the more transactions it would be able to complete within a certain time unit. We analyze the relation between the amount by which an agent’s average reputation exceeds the threshold and the time required to close a deal. This analysis is carried out both theoretically, and empirically through a simulation system called GossipTrustSim. Finally, we show that our approach is inherently impervious to certain kinds of attacks. A preliminary version of this article appeared in the proceedings of IJCAI 2007.  相似文献   
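The gossip-based aggregation underlying such schemes can be sketched with classic pairwise averaging, where every node repeatedly averages its value with a random peer and all values converge to the global mean; this is the generic technique, not the paper's specific protocol.

```python
import random

def gossip_average(values, rounds=50, rng=None):
    """Pairwise-averaging gossip: each round, every node pairs with a
    uniformly random peer and both adopt the pair's mean. The global sum
    is preserved, so all values converge to the network-wide average."""
    rng = rng or random.Random(0)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            m = (vals[i] + vals[j]) / 2
            vals[i] = vals[j] = m
    return vals
```

Convergence is fast (roughly logarithmic in network size per unit of accuracy), which is the scalability property the abstract alludes to.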

5.
Artificial societies, distributed systems of autonomous agents, are becoming increasingly important in open distributed environments, especially in e-commerce. Agents require trust and reputation concepts to identify communities of agents with which to interact reliably. We have noted in real environments that adversaries tend to focus on exploiting the trust and reputation model. These vulnerabilities reinforce the need for a new evaluation criterion for trust and reputation models, called exploitation resistance, which reflects the ability of a trust model to remain unaffected by agents who try to manipulate it. To examine whether a given trust and reputation model is exploitation-resistant, researchers require a flexible, easy-to-use, and general framework. This framework should provide the facility to specify heterogeneous agents with different trust models and behaviors. This paper introduces the Distributed Analysis of Reputation and Trust (DART) framework. The environment of DART is decentralized and game-theoretic. Not only is the proposed environment model compatible with the characteristics of open distributed systems, but it also allows agents to have different types of interactions. Besides direct, witness, and introduction interactions, agents in our environment model can have a type of interaction called a reporting interaction, which represents a decentralized reporting mechanism in distributed environments. The proposed environment model provides various metrics at both micro and macro levels for analyzing the implemented trust and reputation models. Using DART, researchers have empirically demonstrated the vulnerability of well-known trust models to both individual and group attacks.

6.
A model for trust prediction and behavioral anomaly detection in dynamic agent interaction   (Cited: 4, self-citations: 0, other citations: 4)
In agent theory, trust computation is a meaningful research direction. However, current agent trust research computes trust from the average interaction success rate and rarely considers how trust changes over time, so the accuracy of prediction and of behavioral anomaly detection remains unsatisfactory. To address this, we use probability theory to build CMAIT, an agent interaction trust model computed over the interaction history segmented by time; combining this with the rate of change of trust, we derive a confidence measure for the trust computation and an anomaly detection mechanism. Experiments in an online e-commerce setting show that the model's prediction error is half that of TRAVOS, at lower computational cost; it can be used both to detect anomalies in a partner's past behavior, guarding against deception, and to predict a partner's future behavior. This improves on the work of Jennings et al. on agent trust.
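Trust computed over a time-segmented interaction history, rather than a flat average success rate, can be sketched with recency weighting; the geometric decay and default values below are illustrative assumptions, not CMAIT's actual formulas.

```python
def segmented_trust(history, decay=0.8):
    """history: per-time-segment (successes, total) pairs, oldest first.
    Each older segment's success rate is down-weighted by `decay`, so the
    estimate tracks recent behavior changes instead of the lifetime mean."""
    num = den = 0.0
    w = 1.0
    for s, t in reversed(history):  # most recent segment first
        if t:
            num += w * s / t
            den += w
        w *= decay
    return num / den if den else 0.5  # uninformative default
```

An agent that was reliable but recently turned deceptive scores lower than one on the opposite trajectory, even though both have the same overall success rate.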

7.
Reinforcement learning techniques such as Q-Learning, as well as the Multiple-Lookahead-Levels technique we introduced in prior work, require the agent to complete an initial exploratory path followed by as many hypothetical and physical paths as necessary to find the optimal path to the goal. This paper introduces a reinforcement learning technique that uses a distance measure to the goal as the primary gauge for an autonomous agent's action selection. We take advantage of the first random walk to acquire initial information about the goal. Once the goal is reached, the agent's first perceived internal model of the environment is updated to reflect and include that goal, by having the agent trace its steps back to its starting point. We show that no exploratory or hypothetical paths are required after the goal is initially reached or detected, and that the agent requires at most two physical paths to find the optimal path to the goal. The agent's state-occurrence frequency is also introduced and used to support the proposed Distance-Only technique. A computation-speed analysis is carried out, and the Distance-and-Frequency technique is shown to require less computation time than Q-Learning. Furthermore, we demonstrate how multiple agents using the Distance-and-Frequency technique can share knowledge of the environment, and we study the effect of that knowledge sharing on the agents' learning process.
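Using distance to a known goal as the primary gauge for action selection can be sketched as greedy minimization of Manhattan distance on a grid; the grid world and move encoding are illustrative assumptions, and the paper's frequency term and traceback step are omitted.

```python
def pick_action(pos, goal, actions):
    """Distance-only greedy action selection on a grid: among candidate
    moves (dx, dy), choose the one whose resulting position has the
    smallest Manhattan distance to the known goal location."""
    def dist(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    return min(actions, key=lambda a: dist((pos[0] + a[0], pos[1] + a[1])))
```

In an obstacle-free grid this needs no value iteration at all, which is the intuition behind the reported computation-time advantage over Q-Learning.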

8.
Statistical relational learning of trust   (Cited: 1, self-citations: 0, other citations: 1)
The learning of trust and distrust is a crucial aspect of social interaction among autonomous, mentally-opaque agents. In this work, we address the learning of trust based on past observations and context information. We argue that from the truster's point of view trust is best expressed as one of several relations that exist between the agent to be trusted (trustee) and the state of the environment. Besides attributes expressing trustworthiness, additional relations might describe commitments made by the trustee with regard to the current situation, for example: a seller offers a certain price for a specific product. We show how to implement and learn context-sensitive trust using statistical relational learning in the form of a Dirichlet process mixture model called the Infinite Hidden Relational Trust Model (IHRTM). The practicability and effectiveness of our approach is evaluated empirically on user ratings gathered from eBay. Our results suggest that (i) the inherent clustering achieved by the algorithm allows the truster to characterize the structure of a trust situation and provides meaningful trust assessments; (ii) utilizing the collaborative filtering effect associated with relational data does improve trust assessment performance; (iii) by learning faster and transferring knowledge more effectively we improve cold-start performance and can cope better with dynamic behavior in open multiagent systems. The latter is demonstrated with interactions recorded from a strategic two-player negotiation scenario.

9.
The FIRE trust and reputation model is a decentralized trust model that can be applied for trust management in unstructured Peer-to-Peer (P2P) overlays. The FIRE model does not, however, consider malicious activity and possible collusive behavior among network nodes, and it is therefore susceptible to collusion attacks. This investigation reveals that FIRE is vulnerable to lying and cheating attacks, and presents a trust management approach that detects collusion in direct and witness interactions among nodes based on a colluding node's history of interactions. A graph-building approach based on witness ratings is used to identify possibly collusive behavior among nodes. Furthermore, various interaction policies are defined to detect and prevent collaborative behavior among colluding nodes. Finally, a multidimensional trust model, FIRE+, is devised to avoid collusion attacks in direct and witness-based interactions. The credibility of the proposed trust management scheme as an enhancement of the FIRE trust model is verified by extensive simulation experiments.

10.
Cooperation among agents is a crucial problem in autonomous distributed systems composed of selfish agents pursuing their own profits. An earlier study of a self-repairing network revealed that a systemic payoff can make selfish agents cooperate with others. The systemic payoff is a payoff mechanism that sums up not only an agent's own payoff but also its neighborhood's payoffs. The effect of distance between agents has not previously been studied within the systemic payoff. This article considers a systemic payoff that incorporates the distance effect among agents. We study the effectiveness of the proposed mechanism on network performance through multi-agent simulations.
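A systemic payoff with a distance effect can be sketched as the agent's own payoff plus neighbors' payoffs discounted by their distance; the hyperbolic discount form and the `alpha` parameter are illustrative assumptions, not the article's actual mechanism.

```python
def systemic_payoff(agent, payoffs, distances, alpha=1.0):
    """Own payoff plus neighbors' payoffs discounted by distance:
    sum over agents j of payoff_j / (1 + alpha * d(agent, j)),
    where d(agent, agent) = 0. Larger alpha makes far neighbors
    matter less; alpha = 0 recovers the plain systemic payoff."""
    total = 0.0
    for j, p in payoffs.items():
        d = 0.0 if j == agent else distances[(agent, j)]
        total += p / (1 + alpha * d)
    return total
```

Setting `alpha=0.0` reproduces the distance-free systemic payoff of the earlier study, which makes the new parameter easy to ablate in simulation.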

11.
Software agents' ability to interact within different open systems, designed by different groups, presupposes an agreement on an unambiguous definition of a set of concepts, used to describe the context of the interaction and the communication language the agents can use. Agents' interactions ought to allow for reliable expectations on the possible evolution of the system; however, in open systems interacting agents may not conform to predefined specifications. A possible solution is to define interaction environments including a normative component, with suitable rules to regulate the behaviour of agents. To tackle this problem we propose an application-independent metamodel of artificial institutions that can be used to define open multiagent systems. In our view an artificial institution is made up by an ontology that models the social context of the interaction, a set of authorizations to act on the institutional context, a set of linguistic conventions for the performance of institutional actions and a system of norms that are necessary to constrain the agents' actions.

12.
Based on an analysis of the morphogenesis process of real biological organisms, basic principles are formulated for simulating genomic control of the morphogenesis of a virtual agent in a physically correct environment. A model of the genome of a compound agent's body composed of different functional subsystems is developed, along with a timing algorithm for genetically conditioned processes of regeneration and division of the agent's microcells. The problem of forming agents with a morphology that is locally optimal for a given environment is formulated as a multi-generation optimization problem in a genetic algorithm: the fitness function is the life duration of agents, the constraints follow from the physical correctness of the environment and an energy deficit in it, and agent genomes serve as the chromosomes. Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 42–54, March–April 2008.

13.
A trust and reputation system model for open multi-agent systems   (Cited: 3, self-citations: 0, other citations: 3)
Trust and reputation are vital for effective interaction in open multi-agent systems. FIRE is one of the recently proposed trust and reputation system models suited to problem solving in open environments, but it does not consider consumers' individual characteristics, so a consumer's rating directly reflects only the provider's quality of service, which weakens the model's practical value. We propose an extended model, E-FIRE, that introduces consumer characteristics along two dimensions: the consumer's expectation of service quality and the attitude the consumer adopts. Both jointly influence how a consumer rates the same service, so ratings no longer reflect only the provider's service quality, making the model more realistic. In addition, when a consumer computes providers' overall trust scores in order to choose among them, E-FIRE relies more on the direct trust that evidence-providing consumers place in the provider, which reduces communication among agents. Experimental results show that with few interactions E-FIRE performs comparably to FIRE, and that as the number of interactions grows E-FIRE performs better.
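The idea that a rating reflects the consumer's expectation and attitude as well as the delivered quality can be sketched as follows; the linear gap-times-attitude form and the [-1, 1] clipping are illustrative assumptions, not E-FIRE's actual rating function.

```python
def consumer_rating(quality, expectation, attitude):
    """Illustrative expectation-and-attitude-aware rating: the gap between
    delivered quality and the consumer's expectation, scaled by an attitude
    factor (tolerant < 1 < demanding) and clipped to [-1, 1]."""
    raw = attitude * (quality - expectation)
    return max(-1.0, min(1.0, raw))
```

Two consumers receiving identical service can thus report different ratings, which is exactly why a raw rating can no longer be read as the provider's service quality alone.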

14.
Establishing cooperation and protecting individuals from selfish and malicious behavior are key goals in open multiagent systems. Incomplete information regarding potential interaction partners can undermine typical cooperation mechanisms such as trust and reputation, particularly in lightweight systems designed for individuals with significant resource constraints. In this article, we (i) propose extending a low-cost reputation mechanism to use gossiping to mitigate the effect of incomplete information, (ii) define four simple aggregation strategies for incorporating gossiped information, and (iii) evaluate our model on a variety of synthetic and real-world topologies and under a range of configurations. We show that (i) gossiping can significantly reduce the potentially detrimental influence of incomplete information and the underlying network structure on lightweight reputation mechanisms, (ii) basing decisions on the most recently received gossip results in up to a 25% reduction in selfishness, and (iii) gossiping is particularly effective at aiding agents with little or no interaction history, such as when first entering a system.
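The abstract names four simple aggregation strategies without spelling them out, so the four below (mean, pessimistic, optimistic, most recent) are plausible stand-ins; only "most recently received gossip" is directly mentioned in the text.

```python
def aggregate(gossip, strategy="recent"):
    """gossip: list of (timestamp, rating) pairs, timestamps assumed
    distinct. Folds gossiped ratings about a peer into a single value
    using one of four simple, illustrative strategies."""
    if not gossip:
        return None
    ratings = [r for _, r in gossip]
    if strategy == "mean":
        return sum(ratings) / len(ratings)
    if strategy == "min":       # pessimistic: assume the worst report
        return min(ratings)
    if strategy == "max":       # optimistic: assume the best report
        return max(ratings)
    if strategy == "recent":    # most recently received gossip
        return max(gossip)[1]   # max by timestamp
    raise ValueError(strategy)
```

Returning `None` for an empty gossip list lets the caller fall back to direct experience or a bootstrap default for newcomers.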

15.
Web services open a door for better B2B collaboration in large distributed environments such as the Internet. Process-oriented systems like workflow management systems have taken the leading role in web service-based B2B collaboration in such environments. However, conventional workflow management systems do not offer complete solutions for B2B collaboration, given many unsolved issues such as security, trust, and the handling of complex, flexible interactions. In this paper, we propose a web service-based multi-agent platform, which can be used as a complementary solution for B2B collaboration. It fits naturally into the B2B interaction model and provides a very loosely coupled, open system architecture.

16.
Research on agent trust is becoming increasingly important because trust ensures good interactions among software agents in large-scale open systems. Moreover, individual agents often interact with long-term coalitions, such as e-commerce web sites, and should choose a coalition based on both utility and trust. Unfortunately, few studies have addressed agent coalition credit, and the topic deserves detailed treatment. To this end, a long-term coalition credit model (LCCM) is presented, and the relationship between coalition credit and coalition payoff is also examined. LCCM consists of internal trust, based on an agent's direct interactions, and external reputation, based on an agent's direct observations. The generality of LCCM is demonstrated through experiments in both cooperative and competitive domains. Experimental results show that LCCM computes coalition credit efficiently and properly reflects the effect of various factors on coalition credit. Another important advantage, a basic property required of any credit measure, is that LCCM can effectively filter inaccurate or dishonest information from interactions.
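The two-component structure of coalition credit, internal trust from direct interactions plus external reputation from observation, can be sketched as a convex combination; the mixing weight and linear form are illustrative assumptions, not LCCM's actual formula.

```python
def coalition_credit(internal_trust, external_reputation, w=0.6):
    """Illustrative LCCM-style credit: a convex mix of internal trust
    (from an agent's direct interactions with the coalition) and external
    reputation (from direct observation), with weight w on the former."""
    return w * internal_trust + (1 - w) * external_reputation
```

Weighting direct experience above observation (w > 0.5) is one simple way to limit the influence of inaccurate or dishonest external reports.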

17.
18.
This paper presents a novel approach to the facility layout design problem based on a multi-agent society in which the agents' interactions form the facility layout design. Each agent corresponds to a facility with inherent characteristics, emotions, and a certain amount of money, which together form its utility function. An agent's money is adjusted during the learning period by a manager agent, while each agent tries to tune the parameters of its utility function so that its total layout cost is minimized in competition with the others. The agents' interactions are based on a market mechanism: in each step, an unoccupied location is presented to all applicant agents, each of which proposes a price proportional to its utility function. The agent proposing the highest price is selected as the winner and assigned to that location by an appropriate space-filling curve. The proposed method uses fuzzy theory to establish each agent's utility function. In addition, it provides a simulation environment using an evolutionary algorithm to form different interactions among the agents, making it possible for them to experience various strategies. The experimental results show that the proposed approach achieves a lower total layout cost than state-of-the-art methods.
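The market step, where the highest-bidding facility agent wins the open location, reduces to a one-line first-price auction; the deterministic tie-break is an assumption added here, and the space-filling-curve assignment is omitted.

```python
def assign_location(bids):
    """Market step: given a dict mapping agent id to its bid for the
    currently open location, the highest bidder wins; ties are broken
    by the lexicographically larger agent id for determinism."""
    return max(bids.items(), key=lambda kv: (kv[1], kv[0]))[0]
```

Running this once per unoccupied location, with bids recomputed from each agent's utility function, yields the full layout.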

19.
Multiagent systems have become popular over the last few years for building complex, adaptive systems in distributed, heterogeneous settings. Multiagent systems tend to be more robust and, in many cases, more efficient than single monolithic applications. However, unpredictable application environments make a multiagent system susceptible to individual failures that can significantly reduce its ability to accomplish its overall goal. The problem is that multiagent systems are typically designed to work within a limited set of configurations. Even when the system possesses the resources and computational power to accomplish its goal, it may be constrained by its own structure and knowledge of its members' capabilities. To overcome these problems, we are developing a framework that allows the system to design its own organization at runtime. This paper presents a key component of that framework: a metamodel for multiagent organizations named the Organization Model for Adaptive Computational Systems. This model defines the requisite knowledge of a system's organizational structure and capabilities that will allow it to reorganize at runtime, enabling it to achieve its goals effectively in the face of a changing environment and changing agent capabilities.

20.
In multiagent semi-competitive environments, competition and cooperation can both exist. As agents compete with each other, they have incentives to lie. Sometimes, agents can increase their utilities by cooperating with each other, in which case they have incentives to tell the truth. Therefore, as a receiver, an agent needs to decide whether or not to trust the received message(s). To help agents make this decision, some of the existing models make use of trust or reputation only, meaning agents choose to believe (or cooperate with) trustworthy senders or senders with high reputation. However, a trustworthy agent may bring only a little benefit. Another way to make the decision is to use expected utility. However, agents that believe only messages with high expected utilities can be cheated easily. To solve these problems, this paper introduces the Trust Model, which makes use of trust, expected utility, and also agents' attitudes towards risk to make decisions. On the other hand, as a sender, an agent needs to decide whether or not to be honest. To help agents make this decision, this paper introduces the Honesty Model, which is symmetric to the Trust Model. In addition, we introduce an adaptive strategy to the Trust/Honesty Model, which enables agents to learn from and adapt to the environment. Simulations show that agents with the Adaptive Trust/Honesty Model perform much better than agents that use only trust or expected utility to make the decision.
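Combining trust, expected utility, and risk attitude into one belief decision can be sketched as a risk-weighted mixture; the linear form and the 0.5 trust midpoint are illustrative assumptions, not the paper's Trust Model.

```python
def believe(trust, expected_utility, risk_attitude):
    """Illustrative decision rule: a risk-averse receiver (risk_attitude
    near 1) leans on the sender's trust score, while a risk-seeking one
    (near 0) leans on the message's expected utility. trust is in [0, 1]
    with 0.5 neutral; believe the message when the mixed score is positive."""
    score = risk_attitude * (trust - 0.5) + (1 - risk_attitude) * expected_utility
    return score > 0
```

This captures both failure modes the abstract names: a trust-only agent ignores high-payoff messages from unknowns, while an expected-utility-only agent is easily baited by attractive lies.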
