Related Articles
20 related articles found (search time: 312 ms)
1.
In multiagent settings where agents have different preferences, preference aggregation can be an important issue. Voting is a general method to aggregate preferences. We consider the use of voting tree rules to aggregate agents’ preferences. In a voting tree, decisions are taken by performing a sequence of pairwise comparisons in a binary tree where each comparison is a majority vote among the agents. Incompleteness in the agents’ preferences is common in many real-life settings due to privacy issues or an ongoing elicitation process. We study how to determine the winners when preferences may be incomplete, not only for voting tree rules (where the tree is assumed to be fixed), but also for the Schwartz rule (in which the winners are the candidates winning for at least one voting tree). In addition, we study how to determine the winners when only balanced trees are allowed. In each setting, we address the complexity of computing necessary (respectively, possible) winners, which are those candidates winning for all completions (respectively, at least one completion) of the incomplete profile. We show that many such winner determination problems are computationally intractable when the votes are weighted. However, in some cases, the exact complexity remains unknown. Since it is generally computationally difficult to find the exact set of winners for voting trees and the Schwartz rule, we propose several heuristics, based on the completions of the (incomplete) majority graph built from the incomplete profiles, that find in polynomial time a superset of the possible winners and a subset of the necessary winners.
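As a small illustration of the voting-tree idea described in this abstract, the sketch below evaluates a binary voting tree over complete preferences by pairwise majority votes. All names and data are ours, not the paper's, and the paper's actual focus (incomplete profiles, necessary/possible winners) is not modeled here.

```python
def majority_winner(a, b, profile):
    """Return whichever of a, b is ranked higher by a strict majority of voters.

    `profile` is a list of rankings; each ranking is a list of
    candidates, best first."""
    a_wins = sum(1 for ranking in profile
                 if ranking.index(a) < ranking.index(b))
    return a if a_wins * 2 > len(profile) else b

def voting_tree_winner(tree, profile):
    """Evaluate a binary voting tree bottom-up.

    `tree` is either a candidate (a leaf) or a pair (left, right)."""
    if not isinstance(tree, tuple):
        return tree
    left = voting_tree_winner(tree[0], profile)
    right = voting_tree_winner(tree[1], profile)
    return majority_winner(left, right, profile)

# A Condorcet cycle: a beats b, b beats c, c beats a, so the winner
# depends on the agenda (the shape of the tree).
profile = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
print(voting_tree_winner((("a", "b"), "c"), profile))  # a beats b, then c beats a -> "c"
print(voting_tree_winner(("a", ("b", "c")), profile))  # b beats c, then a beats b -> "a"
```

The two agendas produce different winners from the same profile, which is exactly why the Schwartz rule (winning under at least one tree) differs from a fixed-tree rule.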

2.
Preference queries incorporate the notion of a binary preference relation into relational database querying. Instead of returning all the answers, such queries return only the best answers, according to a given preference relation. Preference queries are a fast-growing area of database research. Skyline queries constitute one of the most thoroughly studied classes of preference queries. A well-known limitation of skyline queries is that skyline preference relations assign the same importance to all attributes. In this work, we study p-skyline queries that generalize skyline queries by allowing varying attribute importance in preference relations. We perform an in-depth study of the properties of p-skyline preference relations. In particular, we study the problems of containment and minimal extension. We apply the obtained results to the central problem of the paper: eliciting the relative importance of attributes. Relative importance is implicit in the constructed p-skyline preference relation. The elicitation is based on user-selected sets of superior (positive) and inferior (negative) examples. We show that the computational complexity of elicitation depends on whether inferior examples are involved. If they are not, elicitation can be achieved in polynomial time. Otherwise, it is NP-complete. Our experiments show that the proposed elicitation algorithm has high accuracy and good scalability.
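To make the baseline concrete, here is a minimal sketch of an ordinary skyline computation via Pareto dominance, where every attribute carries equal importance (smaller is better). The data is illustrative; p-skylines, the paper's subject, additionally encode relative attribute importance, which this sketch does not.

```python
def dominates(p, q):
    """p dominates q if p is <= q on every attribute and < on at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical hotels as (price, distance_to_beach) tuples.
hotels = [(50, 2.0), (60, 1.0), (70, 0.5), (80, 0.6)]
print(skyline(hotels))  # (80, 0.6) is dominated by (70, 0.5) and drops out
```

Note that the skyline keeps every trade-off between price and distance; declaring one attribute more important than the other, as p-skylines do, would prune it further.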

3.
We consider soft constraint problems where some of the preferences may be unspecified. This models, for example, settings where agents are distributed and have privacy issues, or where there is an ongoing preference elicitation process. In this context, we study how to find an optimal solution without having to wait for all the preferences. In particular, we define algorithms that interleave search and preference elicitation to find a solution which is necessarily optimal, that is, optimal no matter what the missing data will be, aiming to ask the user to reveal as few preferences as possible. We define a combined solving and preference elicitation scheme with a large number of different instantiations, each corresponding to a concrete algorithm, which we compare experimentally. We compute both the number of elicited preferences and the user effort, which may be larger, as it contains all the preference values the user has to compute to be able to respond to the elicitation requests. While the number of elicited preferences is important when the concern is to communicate as little information as possible, the user effort also measures the hidden work the user has to do to be able to communicate the elicited preferences. Our experimental results on classical, fuzzy, weighted and temporal incomplete CSPs show that some of our algorithms are very good at finding a necessarily optimal solution while asking the user for only a very small fraction of the missing preferences. The user effort is also very small for the best algorithms.

4.
Artificial Intelligence, 2006, 170(8-9): 686-713
In many situations, a set of hard constraints encodes the feasible configurations of some system or product over which multiple users have distinct preferences. However, making suitable decisions requires that the preferences of a specific user for different configurations be articulated or elicited, something generally acknowledged to be onerous. We address two problems associated with preference elicitation: computing a best feasible solution when the user's utilities are imprecisely specified; and developing useful elicitation procedures that reduce utility uncertainty, with minimal user interaction, to a point where (approximately) optimal decisions can be made. Our main contributions are threefold. First, we propose the use of minimax regret as a suitable decision criterion for decision making in the presence of such utility function uncertainty. Second, we devise several different procedures, all relying on mixed integer linear programs, that can be used to compute minimax regret and regret-optimizing solutions effectively. In particular, our methods exploit generalized additive structure in a user's utility function to ensure tractable computation. Third, we propose various elicitation methods that can be used to refine utility uncertainty in such a way as to quickly (i.e., with as few questions as possible) reduce minimax regret. Empirical study suggests that several of these methods are quite successful in minimizing the number of user queries, while remaining computationally practical so as to admit real-time user interaction.  相似文献   
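The minimax regret criterion proposed in this abstract can be illustrated with a toy version over a finite set of candidate utility functions (the paper works with continuous utility uncertainty and mixed integer linear programs; this sketch just enumerates). All option names and utility values are our own illustrative assumptions.

```python
def max_regret(x, options, utilities):
    """Worst-case loss of choosing x instead of the best option,
    over all candidate utility functions."""
    return max(max(u[y] for y in options) - u[x] for u in utilities)

def minimax_regret_choice(options, utilities):
    """Pick the option whose worst-case regret is smallest."""
    return min(options, key=lambda x: max_regret(x, options, utilities))

options = ["a", "b", "c"]
utilities = [                     # each dict is one possible utility function
    {"a": 10, "b": 4, "c": 6},
    {"a": 0, "b": 9, "c": 6},
]
print(minimax_regret_choice(options, utilities))  # "c": regret 4, vs 9 for "a" and 6 for "b"
```

Elicitation then amounts to asking questions that shrink the set of candidate utility functions until the minimax regret of the recommended option is acceptably small.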

5.
In this work we consider preference relations that might not be total. Partial preferences may be helpful to represent those situations where, due to lack of information or vacillating desires, the decision maker would like to maintain different options “alive” and defer the final decision. In particular, we show that, when totality is relaxed, different axiomatizations of classical Decision Theory are no longer equivalent but form a hierarchy where some of them are more restrictive than others. We compare such axiomatizations with respect to theoretical aspects—such as their ability to propagate comparability/incomparability over lotteries and the induced topology—and to different preference elicitation methodologies that are applicable in concrete domains. We also provide a polynomial-time procedure based on the bipartite matching problem to determine whether one lottery is preferred to another.  相似文献   

6.
The handling of user preferences is becoming an increasingly important issue in the database field, where preferences capture soft criteria for queries. CP-nets are a widely used model for a broad category of qualitative preferences with dependency relations among multiple attributes. In this article, we focus on designing preference-composition operators for CP-nets. First, we extend Pareto composition to our model by including the equivalence relation ≈, the incomparability relation ∥ and the conflicting relation ⊥, which preserves a strict partial order and conditional associativity. On this basis, two questions are solved: (a) the generation of satisfiability sequences for CP-nets, and (b) top-k queries over relational databases with CP-nets preferences. For (a), a CP-net is induced into multiple tables, so that strong dominance tests between outcomes can be solved using preference composition instead of the induced preference graph of the CP-net. For (b), we adopt the concept of a query lattice to provide natural semantics for the block sequence answering a preference query, and introduce two algorithms (called QOCP and IQOCP). These questions are solved efficiently and effectively by combining the graph model with the relational database.

7.
Information Systems, 1999, 24(1): 47-65
An effective optimization strategy for evaluating statistical queries is to use precomputed summary data on certain categories. An important step in this strategy is to compare categories for containment in order to decide whether the summary data on one category can be used to compute the summary on another. This paper studies optimization for such comparisons. A category in this paper is represented by a relation whose attributes are partitioned into pair-wise disjoint sets, each called a dimension. A category is said to be orthogonal if it is equal to the cross product of the projections of itself on all the dimensions, and k-partially orthogonal if it is the union of k orthogonal ones. Comparing k-partially orthogonal categories for containment is computationally much easier than comparing arbitrary categories if k is small and all the orthogonal subcategories are known. It is shown however that it is computationally intractable (NP-hard) to partition an arbitrarily given category into the smallest number of orthogonal subcategories. In order to avoid this intractable task but still take advantage of orthogonality, this paper investigates methods that derive orthogonality in categories which are results of relational queries, assuming the orthogonality in input categories is known. The methods are based on a careful examination of each relational operation and on certain auxiliary constructs for labelling orthogonal subcategories.  相似文献   

8.
Multistage stochastic programming with endogenous uncertainty is a new topic in which the timing of uncertainty realization is decision-dependent. In this case, the number of nonanticipativity constraints (NACs) increases very quickly with the number of scenarios, making the problem computationally intractable. Fortunately, a large number of NACs are typically redundant, and their elimination leads to a considerable reduction in the problem size. Identifying redundant NACs has been addressed in the literature only in the special case where the scenario set is equal to the Cartesian product of all possible outcomes for the endogenous parameters; however, this condition rarely holds in practice. In this paper, we consider the general case where the scenario set is arbitrary, and we propose two approaches able to identify all redundant NACs: the first is a mixed integer programming formulation, and the second is an exact polynomial-time algorithm. We also prove that the proposed algorithm achieves the greatest possible reduction in the number of NACs, which is another novelty of this paper. Computational results demonstrate the efficiency of the proposed approaches.

9.
In this paper, we propose a social approach for learning agents. In dynamic environments, smart agents should detect changes and adapt themselves, applying dynamic learning strategies and drift detection algorithms. Recent studies note that an ensemble of learners can be coordinated by simple protocols based on votes or weighted votes; however, they are not capable of determining the number of learners or the ensemble composition properly. Conversely, we show in this paper that Social Network Theory can provide the multi-agent learning community with sophisticated and well-founded reputation models that outperform well-known ensemble-based drift detection techniques, generating accurate and small ensembles of learning agents. Our approach is evaluated considering dynamic bilateral negotiation scenarios and benchmark databases, presenting statistically significant results that are better than those of other ensemble-based techniques.  相似文献   

10.
The frequent connected subgraph mining problem, i.e., the problem of listing all connected graphs that are subgraph isomorphic to at least a certain number of transaction graphs of a database, cannot be solved in output polynomial time in the general case. If, however, the transaction graphs are restricted to forests then the problem becomes tractable. In this paper we generalize the positive result on forests to graphs of bounded tree-width. In particular, we show that for this class of transaction graphs, frequent connected subgraphs can be listed in incremental polynomial time. Since subgraph isomorphism remains NP-complete for bounded tree-width graphs, the positive complexity result of this paper shows that efficient frequent pattern mining is possible even for computationally hard pattern matching operators.  相似文献   

11.
刘惊雷 (Liu Jinglei), 《自动化学报》 (Acta Automatica Sinica), 2011, 37(3): 290-302
Preference handling is an important research topic in artificial intelligence, with four main areas of focus: preference representation, elicitation, aggregation and reasoning. Conditional preference networks (CP-nets) are a simple and intuitive graphical tool for representing preferences, but little work has studied their expressive power. This paper studies the expressiveness of CP-nets, examining in detail the completeness of CP-nets for expressing preferences, the complexity of the operations constructed on them, and the settings in which they are applicable. First, several operations on the CP-nets model are defined, and an improved Warshall algorithm is used to show that the worst-case complexity of strong dominance testing for binary-valued networks is O(4^n). Second, by constructing the induced graph of a CP-net and studying its properties, we conclude that CP-nets are particularly suitable for qualitative multi-attribute preference decision making under incomplete information; when more complete information must be handled, interaction with the agent can be used. Although we give a theoretical solution for strong dominance testing in CP-nets, it is solvable in theory but intractable in practice. To address the exponential complexity of strong dominance testing, this paper finally presents a solution method based on the soft constraint satisfaction problem (SCSP): it converts the qualitative operations on CP-nets into quantitative operations on a constraint semiring, thereby reducing the exponential complexity to polynomial complexity and indirectly enhancing the expressive power of some CP-nets. This work improves upon the work of Boutilier and Bistarelli.

12.
This study investigates the content characteristics of Twitter during an election campaign, and the relationship between candidates’ style of online campaigning (i.e., politically personalized and interactive communication) and electoral support for those candidates. Thereby, it provides a better understanding of the linkage between the use of Twitter by candidates and effects on preferential votes. Two data sources are used to examine this relationship: first, a quantitative computer-assisted as well as a manual content analysis of tweets posted by political candidates during the Dutch national elections of 2010 (N = 40,957) and second, a dataset containing the number of votes for electable political candidates during that period. The findings show that using Twitter has positive consequences for political candidates. Candidates who used Twitter during the course of the campaign received more votes than those who did not, and using Twitter in an interactive way had a positive impact as well.  相似文献   

13.
We report two experiments which tested whether cognitive capacities are limited to those functions that are computationally tractable (PTIME-Cognition Hypothesis). In particular, we investigated the semantic processing of reciprocal sentences with generalized quantifiers, i.e., sentences of the form Q dots are directly connected to each other, where Q stands for a generalized quantifier, e.g. all or most. Sentences of this type are notoriously ambiguous and it has been claimed in the semantic literature that the logically strongest reading is preferred (Strongest Meaning Hypothesis). Depending on the quantifier, the verification of their strongest interpretations is computationally intractable whereas the verification of the weaker readings is tractable. We conducted a picture completion experiment and a picture verification experiment to investigate whether comprehenders shift from an intractable reading to a tractable reading which should be dispreferred according to the Strongest Meaning Hypothesis. The results from the picture completion experiment suggest that intractable readings occur in language comprehension. Their verification, however, rapidly exceeds cognitive capacities in case the verification problem cannot be solved using simple heuristics. In particular, we argue that during verification, guessing strategies are used to reduce computational complexity.  相似文献   

14.
Metamodels are often used to replace expensive simulations of engineering problems. When a training set is given, a series of metamodels can be constructed, and there are then two strategies for dealing with these metamodels: (1) picking out the best one, with the highest accuracy, as an approximation of the computationally intensive simulation; and (2) combining all of them into an ensemble model. However, since the choice of approximate model depends on the design of experiments (DOE), employing the first strategy increases the risk of adopting an inappropriate model. Nevertheless, the second strategy also seems not to be a good choice, since adding redundant metamodels may lead to a loss of accuracy. Therefore, eliminating the redundant metamodels from the set of candidates before constructing the final ensemble is a necessary step. Inspired by the variable selection methods widely used in polynomial regression, a metamodel selection method based on stepwise regression is proposed. In our method, only a subset of n metamodels is used (n ≤ p, where p is the number of candidate metamodels). In addition, a new ensemble technique is proposed in this work from the viewpoint of polynomial regression. This new ensemble technique, combined with the metamodel selection method, has been evaluated using six benchmark problems. The results show that eliminating the redundant metamodels before constructing the ensemble provides better prediction accuracy than directly constructing the ensemble from all of the candidates.

15.
An additive value function is one of the prevailing preference models in Multiple Criteria Decision Aiding (MCDA). Its indirect elicitation through pairwise questions is often applied due to lowering the cognitive effort on the part of a Decision Maker (DM). A practical usefulness of this approach is influenced by both expressiveness of the assumed model and robustness of the recommendation computed with its use. We experimentally evaluate the above characteristics in view of using an additive value function in the preference disaggregation context. The simulation results are quantified with the following four measures: (1) the share of decision scenarios for which a set of compatible value functions is non-empty, (2) the minimal difference between comprehensive values of reference alternatives compared pairwise by the DM, (3) the number of pairs of alternatives for which the necessary preference relation confirmed by all compatible functions holds, and (4) the number of non-trivial certain inferences which cannot be derived directly from the preference information. We discuss how these measures are influenced by the settings with different numbers of alternatives, criteria, pairwise comparisons, and performance distributions. We also study how the results change when applying various procedures for selection of the characteristic points which define the shape of per-criterion marginal value functions. In this regard, we compare four existing discretization algorithms with a new supervised technique proposed in this paper. Overall, we indicate that expressiveness and robustness are contradictory objectives, and a compromise between them needs to be reached to increase the usefulness of an additive value model in the preference disaggregation methods.  相似文献   

16.
Lexicographic preference models (LPMs) are an intuitive representation that corresponds to many real-world preferences exhibited by human decision makers. Previous algorithms for learning LPMs produce a “best guess” LPM that is consistent with the observations. Our approach is more democratic: we do not commit to a single LPM. Instead, we approximate the target using the votes of a collection of consistent LPMs. We present two variations of this method—variable voting and model voting—and empirically show that these democratic algorithms outperform the existing methods. Versions of these democratic algorithms are presented in both the case where the preferred values of attributes are known and the case where they are unknown. We also introduce an intuitive yet powerful form of background knowledge to prune some of the possible LPMs. We demonstrate how this background knowledge can be incorporated into variable and model voting and show that doing so improves performance significantly, especially when the number of observations is small.  相似文献   
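A lexicographic preference model of the kind this abstract learns can be stated in a few lines: attributes are examined in importance order, and the first attribute on which two alternatives differ decides. The sketch below assumes known preferred values (one of the two cases the paper treats); attribute names and data are our own illustration, and the voting-based learning itself is not shown.

```python
def lpm_prefers(x, y, attribute_order, preferred_values):
    """Return True if x is lexicographically preferred to y.

    `attribute_order` lists attributes from most to least important;
    `preferred_values` gives the better value for each attribute."""
    for attr in attribute_order:
        if x[attr] != y[attr]:
            return x[attr] == preferred_values[attr]
    return False  # identical on all attributes: neither is preferred

order = ["safety", "price", "color"]
preferred = {"safety": "high", "price": "low", "color": "red"}
car1 = {"safety": "high", "price": "high", "color": "red"}
car2 = {"safety": "high", "price": "low", "color": "blue"}
print(lpm_prefers(car2, car1, order, preferred))  # True: price decides before color
```

Model voting, in the paper's terms, would query many such LPMs consistent with the observations and aggregate their answers rather than committing to one attribute order.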

17.
In this paper, for a finitely generated monoid M, we tackle the following three questions:
  1. Is it possible to give a characterization of rational subsets of M which have polynomial growth?
  2. What is the structure of the counting function of rational sets which have polynomial growth?
  3. Is it true that every rational subset of M has either exponential growth or it has polynomial growth? Can one decide for a given rational set which of the two options holds?
We give a positive answer to all the previous questions in the case that M is a direct product of free monoids. Some of the proved results also extend to trace monoids.

18.
Inspired by the CONFIDANT protocol (Buchegger and Boudec in Proceedings of the 3rd ACM International Symposium on Mobile Ad Hoc Networking & Computing, pp. 226–236, 2002), we define and study a basic reputation-based protocol in multihop wireless networks with selfish nodes. Its reputation mechanism is implemented through the ability of any node to define a threshold of tolerance for any of its neighbors, and to cut the connection to any of these neighbors that refuse to forward an amount of flow above that threshold. The main question we would like to address is whether one can set the initial conditions so that the system reaches an equilibrium state where a non-zero amount of every commodity is routed. This is important in emergency situations, where all nodes need to be able to communicate even with a small bandwidth. Following a standard approach, we model this protocol as a game, and we give necessary and sufficient conditions for the existence of non-trivial Nash equilibria. Then we enhance these conditions with extra conditions that give a set of necessary and sufficient conditions for the existence of connected Nash equilibria. We note that it is not always necessary for all the flow originating at a node to reach its destination at equilibrium. For example, a node may be using unsuccessful flow in order to effect changes in a distant part of the network that will prove quite beneficial to it. We show that we can decide in polynomial time whether there exists a (connected) equilibrium without unsuccessful flows. In that case we calculate (in polynomial time) initial values that impose such an equilibrium on the network. On the negative side, we prove that it is NP-hard to decide whether a connected equilibrium exists in general (i.e., with some nodes using unsuccessful flows at equilibrium).  相似文献   

19.
Yong Gao, Artificial Intelligence, 2009, 173(14): 1343-1366
Data reduction is a key technique in the study of fixed-parameter algorithms. In the AI literature, pruning techniques based on simple and efficient-to-implement reduction rules also play a crucial role in the success of many industrial-strength solvers. Understanding the effectiveness and the applicability of data reduction as a technique for designing heuristics for intractable problems has been one of the main motivations in studying the phase transition of randomly-generated instances of NP-complete problems. In this paper, we take the initiative to study the power of data reductions in the context of random instances of a generic intractable parameterized problem, the weighted d-CNF satisfiability problem. We propose a non-trivial random model for the problem and study the probabilistic behavior of the random instances from the model. We design an algorithm based on data reduction and other algorithmic techniques and prove that the algorithm solves the random instances with high probability and in fixed-parameter polynomial time O(d^k nm), where n is the number of variables, m is the number of clauses, and k is the fixed parameter. We establish the exact threshold of the phase transition of the solution probability and show that in some region of the problem space, unsatisfiable random instances of the problem have parametric resolution proofs of fixed-parameter polynomial size. We also discuss a more general random model and the generalization of the results to that model.

20.
Groups may need assistance in reaching a joint decision. Elections can reveal the winning item, but this means the group members need to vote on, or at least consider, all available items. Our challenge is to minimize the number of preferences that need to be elicited and thus reduce the effort required from the group members. We present a model that offers a few innovations. First, rather than offering a single winner, we propose to offer the group the best top-k alternatives. This can be beneficial if a certain item suddenly becomes unavailable, or if the group wishes to choose manually from a few selected items. Second, rather than offering a definite winning item, we suggest approximating the item, or the top-k items, that best suits the group, according to a predefined confidence level. We study the tradeoff between the accuracy of the proposed winner item and the amount of preference elicitation required. Lastly, we consider different preference aggregation strategies. These strategies differ in their emphasis: towards the individual users (Least Misery strategy) or towards the majority of the group (Majority Based strategy). We evaluate our findings on data collected in a user study as well as on real-world and simulated datasets, and show that selecting a suitable aggregation strategy and relaxing the termination condition can reduce communication cost by up to 90%. Furthermore, the commonly used Majority strategy does not always outperform the Least Misery strategy. Addressing these three challenges contributes to minimizing preference elicitation in expert systems.
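The two aggregation strategies named in this abstract are easy to state over explicit rating profiles. The sketch below uses illustrative items and ratings of our own; the paper's elicitation machinery (confidence levels, early termination) is not modeled.

```python
def least_misery(ratings):
    """Least Misery strategy: score each item by its worst individual rating."""
    return max(ratings, key=lambda item: min(ratings[item]))

def majority_based(ratings):
    """Majority Based strategy: score each item by its total (i.e. average) rating."""
    return max(ratings, key=lambda item: sum(ratings[item]))

# item -> one rating per group member
ratings = {"pizza": [5, 5, 1], "sushi": [3, 3, 3]}
print(majority_based(ratings))  # "pizza": total 11 beats 9
print(least_misery(ratings))    # "sushi": worst rating 3 beats 1
```

The example shows the two strategies disagreeing on the same profile, which is why the abstract stresses choosing the aggregation strategy to fit the group.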
