Similar Literature
1.
In this paper, a variable neighborhood search (VNS) algorithm is developed and analyzed that can generate fifth species counterpoint fragments. The existing species counterpoint rules are quantified and form the basis of the objective function used by the algorithm. The VNS developed in this research is a local search metaheuristic that starts from a randomly generated fragment and gradually improves this solution by changing one or two notes at a time. An in-depth statistical analysis reveals the significance as well as the optimal settings of the parameters of the VNS. The algorithm has been implemented in a user-friendly software environment called Optimuse. Optimuse allows a user to input basic characteristics such as length, key and mode. Based on this input, a fifth species counterpoint fragment is generated by the system that can be edited and played back immediately.
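The one-or-two-note local search loop described above can be sketched generically. This is not the Optimuse implementation: the fragment encoding (a list of scale degrees) and the leap-penalty objective are illustrative stand-ins for the paper's quantified counterpoint rules.

```python
import random

def vns(initial, objective, neighborhoods, max_iters=2000, seed=0):
    """Minimal variable neighborhood search: try a random move in the
    current neighborhood; on improvement accept it and return to the
    first neighborhood, otherwise advance to the next one."""
    rng = random.Random(seed)
    best, best_cost = initial, objective(initial)
    k = 0
    for _ in range(max_iters):
        candidate = neighborhoods[k](best, rng)
        cost = objective(candidate)
        if cost < best_cost:
            best, best_cost, k = candidate, cost, 0
        else:
            k = (k + 1) % len(neighborhoods)
    return best, best_cost

# Toy stand-ins: a "fragment" is a list of scale degrees, and the
# objective penalizes large melodic leaps -- a crude proxy for
# quantified counterpoint rules.
def change_one(frag, rng):
    out = list(frag)
    out[rng.randrange(len(out))] = rng.randrange(8)
    return out

def change_two(frag, rng):
    return change_one(change_one(frag, rng), rng)

def leap_penalty(frag):
    return sum(abs(a - b) for a, b in zip(frag, frag[1:]))

rng0 = random.Random(1)
start = [rng0.randrange(8) for _ in range(8)]
best, cost = vns(start, leap_penalty, [change_one, change_two])
```

Cycling back to the cheapest neighborhood after every improvement is the standard VNS design choice: small moves are tried exhaustively before escalating to larger perturbations.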

2.
Dun Liu  Tianrui Li 《Information Sciences》2011,181(17):3709-3722
In dealing with risk in real decision problems, decision-theoretic rough sets with loss functions aim to obtain optimal decisions by minimizing the overall risk with Bayesian decision procedures. Two parameters generated by the loss functions divide the universe into three regions, corresponding to the decisions of acceptance, deferment and rejection. In this paper, we discuss the semantics of loss functions and use the differences between losses, rather than the actual losses, to construct a new “four-level” approach to criteria for choosing probabilistic rules. Ten types of probabilistic rough set models can be generated by the “four-level” approach, forming two groups of models: two-way probabilistic decision models and three-way probabilistic decision models. A reasonable decision under these criteria is demonstrated by an illustration of oil investment.
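The two loss-derived parameters and the three regions can be sketched from the standard decision-theoretic rough set formulas, where six losses λ(action | state) determine the thresholds α and β. This is a generic sketch of the textbook derivation, not this paper's "four-level" construction; the example loss values are illustrative.

```python
def thresholds(l_pp, l_bp, l_np, l_pn, l_bn, l_nn):
    """Derive (alpha, beta) from the six losses lambda(action | state):
    first letter p/b/n = accept/defer/reject, second letter p/n = the
    object is inside/outside the concept.  Standard Bayesian
    minimum-risk derivation; assumes beta < alpha."""
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def three_way_decision(prob, alpha, beta):
    """Route an object by its posterior probability: accept into the
    positive region, reject into the negative region, or defer to the
    boundary region."""
    if prob >= alpha:
        return "accept"
    if prob <= beta:
        return "reject"
    return "defer"

# Illustrative losses: correct actions cost 0, deferment costs 2,
# wrong actions cost 10 (numbers not from the paper).
alpha, beta = thresholds(0, 2, 10, 10, 2, 0)
```

With these losses the thresholds come out symmetric (α = 0.8, β = 0.2), so only objects with fairly decisive posteriors avoid the boundary region.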

3.
Biao  Yuni   《Data & Knowledge Engineering》2008,67(3):485-503
Managing uncertain information using probabilistic databases has drawn much attention recently in many fields. Generating efficient safe plans is the key to evaluating queries whose data complexity is PTIME. In this paper, we propose a new approach to generating efficient safe plans for queries. Our algorithm adopts effective preprocessing and multiway split techniques, so the generated safe plans avoid unnecessary probabilistic Cartesian products and have the minimum number of probabilistic projections. Further, we extend existing transformation rules to allow the safe plans generated by the Safe-Plan algorithm [N. Dalvi, D. Suciu, Efficient query evaluation on probabilistic databases, The VLDB Journal 16 (4) (2007) 523–544] and the proposed algorithm to be transformed into each other. Applying our approach to the TPC-H benchmark queries, the experiments show that the safe plans generated by our algorithm are more efficient than those generated by the Safe-Plan algorithm.
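The reason safe plans matter is that extensional (per-operator) probability computation is only correct when the combined tuples are independent. The two textbook formulas a safe plan relies on can be sketched as follows (a generic sketch of tuple-independent semantics, not the authors' plan-generation algorithm):

```python
from functools import reduce

def p_join(p1, p2):
    """Probability of a join output tuple when its two input tuples
    are independent: both must exist."""
    return p1 * p2

def p_project(ps):
    """Probability of a projection output tuple when the input tuples
    merged into it are independent: at least one must exist
    (the 'noisy-or' 1 - prod(1 - p_i))."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ps, 1.0)
```

An unsafe plan is one that applies these formulas to tuples that are in fact correlated, which is exactly what the avoided probabilistic Cartesian products and misplaced projections would cause.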

4.
Knowledge encoded in information systems can be represented by different sets of rules generated by these systems. One can consider sets of deterministic, nondeterministic or probabilistic rules. Such sets of rules can be treated as theories of information systems. Any such theory generated from a given information system corresponds to a subjective view of the knowledge encoded in that information system. Such theories can be used for solving different problems. For example, the maximal consistent extensions of information systems were studied for the synthesis of concurrent processes specified by information systems. In this approach, the maximal consistent extension of a given information system consists of all objects, perceived by means of attributes, that are consistent with the theory comprising all the so-called true and realizable deterministic rules extracted from the original information system. In this paper, we report results on the maximal consistent extensions of information systems relative to some other theories of information systems, e.g., theories consisting of rules such as true and realizable inhibitory rules, true inhibitory rules, and true deterministic rules. We also discuss algorithmic problems related to the maximal consistent extensions. In particular, from the obtained results it follows that solutions based on these new sets of rules, e.g., on inhibitory rules, can be of higher quality than in the case of deterministic rules.

5.
Ron  Dana  Singer  Yoram  Tishby  Naftali 《Machine Learning》1996,25(2-3):117-149
We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Suffix Automata (PSA). Though hardness results are known for learning distributions generated by general probabilistic automata, we prove that the algorithm we present can efficiently learn distributions generated by PSAs. In particular, we show that for any target PSA, the KL-divergence between the distribution generated by the target and the distribution generated by the hypothesis that the learning algorithm outputs can be made small with high confidence in polynomial time and sample complexity. The learning algorithm is motivated by applications in human-machine interaction. Here we present two applications of the algorithm. In the first, we apply the algorithm to construct a model of the English language and use this model to correct corrupted text. In the second application, we construct a simple stochastic model for E. coli DNA.
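The core idea of variable-memory prediction — back off to the longest context actually observed in training — can be sketched with plain dictionaries. Note this toy keeps every context up to a maximum order, whereas a real PSA/suffix-tree learner prunes to only the statistically informative contexts:

```python
from collections import defaultdict

def train_vmm(text, max_order=3):
    """Count next-symbol frequencies for every context of length
    0..max_order (the empty context "" acts as the final fallback)."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(text)):
        for k in range(max_order + 1):
            if i - k >= 0:
                counts[text[i - k:i]][text[i]] += 1
    return counts

def predict(counts, history, max_order=3):
    """Predict the next symbol using the longest suffix of `history`
    seen during training, backing off to shorter suffixes."""
    for k in range(min(max_order, len(history)), -1, -1):
        ctx = history[-k:] if k else ""
        if ctx in counts:
            nxt = counts[ctx]
            return max(nxt, key=nxt.get)
    return None
```

On a periodic string like "abababab", the order-2 context "ab" deterministically predicts "a", which is exactly the kind of short, high-information suffix a PSA would retain.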

6.
汪涛  靳聪  李小兵  帖云  齐林 《计算机应用》2021,41(12):3585-3589
Symbolic music generation remains an open problem in artificial intelligence and faces many challenges. Research shows that existing multi-track music generation methods fall short of market expectations in melody, rhythm and harmony, and that much of the generated music violates basic music theory. To address these problems, a novel Transformer-based multi-track music generative adversarial network (Transformer-GAN) is proposed, guided by music-theory rules to produce highly musical works. First, the Transformer decoder and a Cross-Track Transformer (CT-Transformer) adapted from it are used to learn intra-track and inter-track information, respectively. Then, a combination of music-theory rules and cross-entropy loss guides the training of the generator, and a carefully designed objective loss function is optimized while training the discriminator. Finally, multi-track music with melody, rhythm and harmony is generated. Experimental results show that, compared with other multi-instrument music generation models, Transformer-GAN improves prediction accuracy (PA) by at least 12%, 11% and 22%, sequence similarity (SS) by at least 13%, 6% and 10%, and the rest-note metric by at least 8%, 4% and 17% on the piano, guitar and bass tracks, respectively. With the addition of the CT-Transformer and the music-rule reward module, Transformer-GAN effectively improves PA, SS and other metrics, substantially raising the overall quality of the generated music.

7.
We describe a general approach to computing a similarity measure between distributions generated by probabilistic tree automata that may be used in a number of applications in the pattern recognition field. In particular, we show how this similarity can be computed for families of structured (XML) documents. In that case, the use of regular expressions to specify the right-hand side of the expansion rules adds some complexity to the task.

8.
Tresp  Volker  Hollatz  Jürgen  Ahmad  Subutai 《Machine Learning》1997,27(2):173-200
There is great interest in understanding the intrinsic knowledge neural networks have acquired during training. Most work in this direction has focused on the multi-layer perceptron architecture. The topic of this paper is networks of Gaussian basis functions, which are used extensively as learning systems in neural computation. We show that networks of Gaussian basis functions can be generated from simple probabilistic rules. Also, if appropriate learning rules are used, probabilistic rules can be extracted from trained networks. We present methods for the reduction of network complexity with the goal of obtaining concise and meaningful rules. We show how prior knowledge can be refined or supplemented using data by employing a Bayesian approach, a weighted combination of knowledge bases, or artificial training data representing the prior knowledge. We validate our approach using a standard statistical data set.
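The rule-to-network link can be illustrated in miniature: if each rule "IF x is near c_i THEN class i" is encoded as one Gaussian component with a prior weight, normalizing the weighted activations yields class posteriors. This is a one-dimensional generic sketch of that correspondence, not the authors' full method:

```python
import math

def gaussian_basis_posterior(x, centers, widths, priors):
    """Read a normalized Gaussian basis function network as Bayes'
    rule: prior * Gaussian likelihood per component, normalized to
    give P(class i | x)."""
    acts = [p * math.exp(-((x - c) ** 2) / (2.0 * w ** 2))
            for c, w, p in zip(centers, widths, priors)]
    z = sum(acts)
    return [a / z for a in acts]
```

Evaluating at a point close to one center makes that component's posterior dominate, which is what lets a trained network be read back as a set of localized probabilistic rules.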

9.
Attribute reduction in decision-theoretic rough set models
Yiyu Yao 《Information Sciences》2008,178(17):3356-3373
Rough set theory can be applied to rule induction. There are two different types of classification rules, positive and boundary rules, leading to different decisions and consequences. They can be distinguished not only by syntactic measures such as confidence, coverage and generality, but also by semantic measures such as decision-monotonicity, cost and risk. The classification rules can be evaluated locally for each individual rule, or globally for a set of rules. Both types of classification rules can be generated from, and interpreted by, a decision-theoretic model, which is a probabilistic extension of the Pawlak rough set model. As an important concept of rough set theory, an attribute reduct is a subset of attributes that are jointly sufficient and individually necessary for preserving a particular property of the given information table. This paper addresses attribute reduction in decision-theoretic rough set models with respect to different classification properties, such as decision-monotonicity, confidence, coverage, generality and cost. It is important to note that many of these properties can be faithfully reflected by a single measure γ in the Pawlak rough set model, whereas in probabilistic models they need to be considered separately; a straightforward extension of the γ measure is unable to evaluate them. This study provides a new insight into the problem of attribute reduction.
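For concreteness, the single Pawlak measure γ mentioned above is the fraction of objects in the positive region: those whose condition-attribute equivalence class maps to exactly one decision value. A minimal sketch, assuming rows are represented as plain dictionaries (an encoding chosen here for illustration, not taken from the paper):

```python
from collections import defaultdict

def gamma(table, cond_attrs, dec_attr):
    """Pawlak dependency measure: |POS_C(D)| / |U|, where an object is
    in the positive region iff every object sharing its condition
    values has the same decision value."""
    # Map each condition-value tuple to the set of decisions it leads to.
    classes = defaultdict(set)
    for row in table:
        key = tuple(row[a] for a in cond_attrs)
        classes[key].add(row[dec_attr])
    # Count objects whose equivalence class is decision-consistent.
    pos = sum(1 for row in table
              if len(classes[tuple(row[a] for a in cond_attrs)]) == 1)
    return pos / len(table)
```

In a probabilistic model the hard consistency test `len(...) == 1` is replaced by threshold conditions on decision probabilities, which is why a single γ-style number no longer captures all the properties listed above.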

10.
Phrase-based translation models, with sequences of words (phrases) as translation units, achieve state-of-the-art translation performance. However, phrase reordering is a major challenge for this model. Recently, researchers have focused on utilizing syntax to improve phrase reordering. When adding syntactic knowledge to the phrase reordering model, using handcrafted or probabilistic syntactic rules to reorder the source language to approximate the target-language word order has been successful in improving translation quality. However, it suffers from propagating pre-ordering errors to the later translation step (e.g. decoding). In this paper, we propose a novel framework to uniformly represent handcrafted and probabilistic syntactic rules and integrate them more effectively into phrase-based translation. In the translation phase, for a source sentence to be translated, handcrafted or probabilistic syntactic rules are first acquired from the source parse tree; then, instead of reordering the source sentence directly, we input these rules into the decoder and design a new algorithm to apply them during decoding. In order to attach more importance to the syntactic rules and to distinguish between syntactic and non-syntactic unit reordering, we design a syntactic reordering model and a non-syntactic reordering model, respectively. The syntactic rules guide phrase reordering during decoding within the syntactic reordering model. Extensive experiments on Chinese-to-English translation show that our approach, whether incorporating handcrafted or probabilistic syntactic rules, significantly outperforms previous methods.

11.
Computer music composition is the dream of computer music researchers. In this paper, a top-down approach is investigated to discover the rules of musical composition from given music objects and to create a new music object whose style is similar to that of the given music objects, based on the discovered composition rules. The proposed approach utilizes data mining techniques to discover the style rules of music composition characterized by music structures, melody styles and motifs. A new music object is generated based on the discovered rules. To measure the effectiveness of the proposed approach in computer music composition, a method similar to the Turing test was adopted to test the differences between machine-generated and human-composed music. Experimental results show that it is hard to distinguish between them. Another experiment showed that the style of the generated music is similar to that of the given music objects.

12.
We propose two models for improving the performance of rule-based classification in unbalanced and highly imprecise domains. Both models are probabilistic frameworks aimed at boosting the performance of basic rule-based classifiers. The first model implements a global-to-local scheme, where the response of a global rule-based classifier is refined by performing a probabilistic analysis of the coverage of its rules. In particular, the coverage of the individual rules is used to learn local probabilistic models, which ultimately refine the predictions from the corresponding rules of the global classifier. The second model implements a dual local-to-global strategy, in which single classification rules are combined within an exponential probabilistic model in order to boost the overall performance as a side effect of mutual influence. Several variants of the basic ideas are studied, and their performances are thoroughly evaluated and compared with state-of-the-art algorithms on standard benchmark datasets.

13.
We consider the problem of formal automatic verification of cryptographic protocols when some data, like poorly chosen passwords, can be guessed by dictionary attacks. First, we define a theory of these attacks and propose an inference system modeling the deduction capabilities of an intruder. This system extends a set of well-studied deduction rules for symmetric and public key encryption, often called Dolev–Yao rules, with the introduction of a probabilistic encryption operator and guessing abilities for the intruder. Then, we show that the intruder deduction problem in this extended model is decidable in PTIME. The proof is based on a locality lemma for our inference system. This first result yields an NP decision procedure for the protocol insecurity problem in the presence of a passive intruder. In the active case, the same problem is proved to be NP-complete: we give a procedure for simultaneously solving symbolic constraints with variables that represent intruder deductions. We illustrate the procedure with examples of published protocols and compare our model to other recent formal definitions of dictionary attacks.

14.
It is known that Hough transform computation can be significantly accelerated by polling instead of voting. A small part of the data set is selected at random and used as input to the algorithm. The performance of these probabilistic Hough transforms depends on the poll size. Most probabilistic Hough algorithms use a fixed poll size, which is far from optimal since conservative design requires the fixed poll size to be much larger than necessary under average conditions. It has recently been experimentally demonstrated that adaptive termination of voting can lead to improved performance in terms of the error rate versus average poll size tradeoff. However, the lack of a solid theoretical foundation made general performance evaluation and optimal design of adaptive stopping rules nearly impossible. In this paper it is shown that the statistical theory of sequential hypothesis testing can provide a useful theoretical framework for the analysis and development of adaptive stopping rules for the probabilistic Hough transform. The algorithm is restated in statistical terms and two novel rules for adaptive termination of the polling are developed. The performance of the suggested stopping rules is verified using synthetic data as well as real images. It is shown that the extension suggested in this paper to A. Wald's one-sided alternative sequential test (Sequential Analysis, Wiley, New York, 1947) performs better than previously available adaptive (or fixed) stopping rules.
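The sequential-testing machinery the paper builds on can be sketched with Wald's classic two-boundary SPRT for Bernoulli samples (e.g. "does this accumulator bin receive votes at the background rate p0 or at the peak rate p1?"). This is the standard textbook test, not the one-sided extension proposed in the paper:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli data,
    H0: p = p0 vs H1: p = p1 (with p1 > p0).  Accumulate the
    log-likelihood ratio and stop as soon as it crosses a boundary."""
    lower = math.log(beta / (1 - alpha))    # crossing -> accept H0
    upper = math.log((1 - beta) / alpha)    # crossing -> accept H1
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", n
```

The attraction for polling is visible in the returned sample counts: decisive vote streams terminate after a handful of samples instead of a fixed, conservatively large poll.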

15.
Probabilistic Horn abduction is a simple framework to combine probabilistic and logical reasoning into a coherent practical framework. The numbers can be consistently interpreted probabilistically, and all of the rules can be interpreted logically. The relationship between probabilistic Horn abduction and logic programming is at two levels. At the first level probabilistic Horn abduction is an extension of pure Prolog, that is useful for diagnosis and other evidential reasoning tasks. At another level, current logic programming implementation techniques can be used to efficiently implement probabilistic Horn abduction. This forms the basis of an “anytime” algorithm for estimating arbitrary conditional probabilities. The focus of this paper is on the implementation.

16.
In this study, we propose a fuzzy logic based approach for the ‘harmonization with constraints’ problem in music. After the mathematical modeling of the harmonization problem, the solution is carried out by means of appropriate fuzzy membership functions depending on the rules imposed by music theory. To demonstrate the applicability of the proposed technique, particular problems of note-against-note two-voice counterpoint are considered. The method is flexible, adaptable and simple to implement. Moreover, from the constraint satisfaction perspective, the solutions generated by the method satisfy ‘arc-consistency’, which could not have been achieved by the majority of previous studies in the literature. The method also provides a gateway for the arranger/composer to incorporate his/her own stylistic preferences into the solution by simply adjusting the shapes of the membership functions. Additional features (such as providing variability in the final solutions across different executions) increase the power of the method in terms of creativity. This approach can be extended to the solution of more complicated problems in music such as orchestration, improvisation, and even composition.

17.
In this paper, a meta-structure of piano accompaniment figures (meta-structure for short) is proposed to harmonize a melodic piece of music so as to construct multi-voice music. We approach melody harmonization with piano accompaniment as a machine learning task in a probabilistic framework. A series of piano accompaniment figures are collected from a large body of existing sample scores and converted into a set of meta-structures. After training on the samples, a model is formulated that generates a proper piano accompaniment figure for a harmonizing unit in context. This model is flexible in harmonizing a melody with piano accompaniment. The experimental results are evaluated and discussed.

18.
In this paper, the fusion of probabilistic knowledge-based classification rules and learning automata theory is proposed, and as a result we present a set of probabilistic classification rules with self-learning capability. The probabilities of the classification rules change dynamically, guided by a supervised reinforcement process aimed at obtaining optimum classification accuracy. This novel classifier is applied to the automatic recognition of digital images corresponding to visual landmarks for the autonomous navigation of an unmanned aerial vehicle (UAV) developed by the authors. The classification accuracy of the proposed classifier and its comparison with well-established pattern recognition methods are finally reported.
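A common learning-automata update with the flavor described here is the linear reward-inaction (L_R-I) scheme, in which a correct classification shifts probability mass toward the rule that fired while keeping the vector normalized. This is a hypothetical stand-in; the abstract does not specify the paper's exact reinforcement scheme.

```python
def reward_update(probs, chosen, learning_rate=0.1):
    """Linear reward-inaction update after a rewarded (correct)
    classification: move the chosen rule's probability toward 1 and
    scale the others down; on penalty, L_R-I leaves probs unchanged."""
    return [p + learning_rate * (1 - p) if i == chosen
            else p * (1 - learning_rate)
            for i, p in enumerate(probs)]
```

The multiplicative form keeps the probabilities summing to one without an explicit renormalization step, since the mass removed from the unchosen rules equals the mass added to the chosen one.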

19.
To address obstacle avoidance during sea navigation, an improved probabilistic roadmap combined with an improved harmony search algorithm is proposed for ship route planning. First, the improved probabilistic roadmap places and expands nodes in key regions such as obstacle edges and the line between the start and goal points; according to the motion characteristics of the ship and the obstacles, nodes are placed on the chart and connected in stages, producing a complete path network from relatively few nodes, from which nodes are selected to generate an initial global route. Second, the improved harmony search algorithm optimizes the route. Because the motion of the obstacles makes the solution space a complex multimodal landscape, and moving a node may render a newly generated route infeasible, limiting conditions are imposed so that only routes satisfying the requirements are optimized, using strategies such as route crossover, node elimination and fine-tuning. Experimental results show that, compared with the baseline algorithms, the proposed algorithm generates higher-quality global routes, and the number of infeasible routes produced during optimization is far lower than that of the other algorithms, giving higher reliability and stability.

20.
This work presents a review of the concept of classifier combination based on the combined discriminant function. We present a Bayesian approach, in which the discriminant function assumes the role of the posterior probability. We propose a probabilistic interpretation of expert rules and conditions of knowledge consistency for expert rules and learning sets. We suggest how to measure the quality of learning materials and use this measure in an algorithm that eliminates contradictions in the rule set. Several recognition algorithms are described, based on either (i) pure rules, or (ii) rules together with learning sets. Furthermore, the original concept of information unification, which enables the formation of rules on the basis of a learning set or of a learning set on the basis of rules, is proposed. The obtained conclusions serve as a springboard for the formulation of new design guidelines for this type of decision-making system. Finally, experimental results of the proposed algorithms are presented, both on computer-generated data and on a real problem from the medical diagnostics field.
