Similar Literature
1.
A review of musical creativity in collaborative virtual environments (CVE) shows recurring interaction metaphors that range from precise control of individual parameters to higher-level gestural influence over whole systems. Musical performances in CVE also show the consistent re-emergence of a distinctive form of collaboration called "melding," in which individual virtuosity is subsumed into that of the group. Based on these observations, we hypothesized that CVE could be a medium for creating new forms of music, and developed the audiovisual augmented reality system (AVIARy) to explore higher-level metaphors for composing spatial music in CVE. This paper describes the AVIARy system, the initial experiments with interaction metaphors, and the application of the system to develop and stage a collaborative musical performance at a sound art concert. The results from these experiments indicate that CVE can be a medium for new forms of musical creativity and distinctive forms of music.

2.
A musical fountain uses the various characteristic elements of music to control the on/off combinations and speed changes of the fountain's water pumps, as well as the changing combinations of its lights. A good musical fountain fuses the visual impression of the changing water columns with the auditory experience of the music, creating a complete environmental-art effect. To achieve programmable water-pattern control of a musical fountain, this paper builds a musical fountain control system on the CAN bus, exploiting the bus's performance characteristics. After an analysis of the overall system architecture, the hardware design and software design of the system are presented. In practical use, the system has shown good reliability, flexibility, ease of operation, and extensibility.
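
The abstract above gives no implementation details; purely as a hypothetical illustration (not the authors' design), the Python sketch below uses the python-can library to pack a loudness-derived pump speed into a CAN frame. The channel name, arbitration ID, and payload layout are all invented for the example.

```python
# Hypothetical sketch: send a pump-speed command over CAN based on audio loudness.
# The CAN channel, arbitration ID, and payload layout are invented for illustration.
import can          # pip install python-can
import numpy as np

def loudness_to_speed(samples: np.ndarray) -> int:
    """Map the RMS loudness of one audio frame to a pump speed in 0..255."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    return min(255, int(rms * 255))

def send_pump_speed(bus: can.BusABC, pump_id: int, speed: int) -> None:
    # Byte 0: pump index, byte 1: target speed (hypothetical layout).
    msg = can.Message(arbitration_id=0x120, data=[pump_id, speed],
                      is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    bus = can.interface.Bus(channel="can0", bustype="socketcan")  # assumed interface
    frame = np.random.uniform(-1, 1, 4410)  # stand-in for one 0.1 s audio frame
    send_pump_speed(bus, pump_id=1, speed=loudness_to_speed(frame))
```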

3.
In a New Interface for Musical Expression (NIME), the design of the relationship between a musician's actions and the instrument's sound response is critical in creating instruments that facilitate expressive music performance. A growing body of NIMEs exposes this design task to the end performers themselves, raising the possibility of new insights into NIME mapping design: what can be learned from the mapping design strategies of practicing musicians? This research contributes a qualitative study of four highly experienced users of an end-user mapping instrument, examining their mapping practice. The study reveals that the musicians focus on designing simple, robust mappings that minimize errors, embellishing these control gestures with theatrical ancillary gestures that express metaphors. However, musical expression is hindered by the unintentional triggering of musical events. From these findings, a series of heuristics is presented that can be applied in the future development of NIMEs.

4.
Vibrato is one of the most common techniques used in musical performances to enrich the sound. We propose a novel method that automatically detects vibrato in monophonic music. It is based on modeling the probability of vibrato existence using three vibrato parameters: the vibrato rate, extent, and intonation. Experiments using various musical instrument tones and solo performances show the effectiveness of the method. The proposed method can be applied to music recognition tasks such as wav-to-MIDI conversion.
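
The abstract does not reproduce the probability model; as a minimal sketch of how the three named parameters might be estimated, assuming a voiced f0 (pitch) contour is already available, one could do something like the following (not the paper's method):

```python
# Minimal sketch: estimate vibrato rate/extent/intonation from an f0 contour.
# Assumes f0_hz is a voiced pitch track sampled at frame_rate frames per second.
import numpy as np

def vibrato_parameters(f0_hz: np.ndarray, frame_rate: float):
    cents = 1200.0 * np.log2(f0_hz / np.mean(f0_hz))   # pitch deviation in cents
    intonation = float(np.mean(f0_hz))                  # average (carrier) pitch
    spectrum = np.abs(np.fft.rfft(cents - np.mean(cents)))
    freqs = np.fft.rfftfreq(len(cents), d=1.0 / frame_rate)
    peak = int(np.argmax(spectrum[1:]) + 1)             # skip the DC bin
    rate_hz = float(freqs[peak])                        # modulation frequency
    extent_cents = float((np.max(cents) - np.min(cents)) / 2.0)
    return rate_hz, extent_cents, intonation

if __name__ == "__main__":
    fr = 100.0                                          # 100 pitch frames per second
    t = np.arange(0, 2.0, 1.0 / fr)
    f0 = 440.0 * 2 ** (0.3 * np.sin(2 * np.pi * 5.5 * t) / 12)  # synthetic vibrato
    print(vibrato_parameters(f0, fr))   # approx (5.5 Hz, 30 cents, ~440 Hz)
```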

5.
6.
G. Tremblay, F. Champagne. Software, 2007, 37(2): 207–230
Musical dictation for ear training and training in music writing is a key practice in basic musical training. Marking dictation exercises for large groups of students can require a great deal of work. In this paper, we present a tool, called CADiM, that can help automate the marking of such musical dictations. The edit distance, which measures the similarity between two strings, has been used in various areas such as string/text analysis, protein/genome matching in bio-computing, and musical applications, for example music retrieval or musicological analysis. CADiM's marking algorithm is based on an earlier edit distance proposed for musical sequences, adapted to reflect the marking heuristic used in a domain expert's specific approach to musical training. Computing an edit distance on musical scores requires an appropriate representation; more precisely, given our specific context, a symbolic representation is required. We use MusicXML, an XML application for standard Western music notation. Given a Document Type Definition for MusicXML, existing Java tools can generate a MusicXML parser. Such a parser, given appropriate input files, generates an intermediate form (a DOM object) on which analyses and transformations are performed in order to compute the edit distance. The edit distance is in turn used to assign a mark and identify the key errors. CADiM has been applied to a number of test cases and the results compared with those obtained by a domain expert. Overall, the results are promising: there is only a 3% difference between the domain expert's marks and those produced by CADiM. Copyright © 2006 John Wiley & Sons, Ltd.
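
The abstract describes the marking as an adapted edit distance over musical sequences; the exact cost weights are not given, so the sketch below is only a generic weighted edit distance over (pitch, duration) pairs with hypothetical costs, and MusicXML parsing is omitted:

```python
# Generic weighted edit distance between two note sequences.
# Each note is a (midi_pitch, quarter_length) pair; the weights are hypothetical.
from typing import List, Tuple

Note = Tuple[int, float]

def note_cost(a: Note, b: Note) -> float:
    """Substitution cost: penalize pitch and duration differences separately."""
    pitch_cost = 0.0 if a[0] == b[0] else 1.0
    duration_cost = 0.0 if a[1] == b[1] else 0.5
    return pitch_cost + duration_cost

def edit_distance(ref: List[Note], ans: List[Note],
                  ins_del_cost: float = 1.0) -> float:
    """Dynamic-programming edit distance between a reference and an answer."""
    n, m = len(ref), len(ans)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * ins_del_cost
    for j in range(1, m + 1):
        d[0][j] = j * ins_del_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + ins_del_cost,          # deletion
                          d[i][j - 1] + ins_del_cost,          # insertion
                          d[i - 1][j - 1] + note_cost(ref[i - 1], ans[j - 1]))
    return d[n][m]

if __name__ == "__main__":
    reference = [(60, 1.0), (62, 1.0), (64, 2.0)]   # C4, D4, E4
    answer    = [(60, 1.0), (63, 1.0), (64, 2.0)]   # wrong second pitch
    print(edit_distance(reference, answer))          # 1.0
```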

7.
Note recognition is an important research topic in musical signal analysis and processing; it provides the technical basis for automatic score transcription, instrument tuning, music database retrieval, and electronic music synthesis. Traditional note recognition methods estimate a note's fundamental frequency and match it one-to-one against standard frequencies. However, such one-to-one matching is difficult, its error grows as the fundamental frequency increases, and the range of recognizable fundamental frequencies is narrow. This paper therefore treats note recognition as a classification problem. First, an audio library of the notes to be recognized is built and, given the importance of low-frequency information in musical signals, Mel Frequency Cepstrum Coefficients (MFCC) and the Constant Q Transform (CQT) are chosen as the features extracted from note signals. The extracted MFCC and CQT features are then used both as single-feature inputs and as a fused feature input for note recognition. Combining the strength of the Softmax regression model in multi-class problems with the good nonlinear mapping and self-learning ability of the BP neural network, a Softmax-regression-based BP neural network multi-class recognizer is constructed. In a MATLAB R2016a simulation environment, the feature parameters are fed into the classifier for learning and training, the network parameters are tuned to find the optimum, and comparative experiments are run with different numbers of training samples. The results show that with the fused (MFCC+CQT) features as input, 25 note classes from the great octave to the three-line octave can be recognized with an average recognition rate of 95.6%, and that the CQT feature contributes more to recognition than MFCC. The experimental data demonstrate that extracting MFCC and CQT features and recognizing notes by classification achieves very good results and is not limited by the range of note fundamental frequencies.
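
The paper works in MATLAB R2016a; as a rough Python analogue (an assumption, not the authors' code), the sketch below extracts fused MFCC+CQT features with librosa and trains a small neural-network classifier from scikit-learn as a stand-in for the Softmax/BP model, using synthetic tones in place of the note audio library:

```python
# Sketch (not the paper's MATLAB code): MFCC + CQT fused feature vectors and a
# small neural-network classifier as a stand-in for the Softmax/BP recognizer.
import numpy as np
import librosa                                   # pip install librosa
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def note_features(y: np.ndarray, sr: int) -> np.ndarray:
    """Concatenate time-averaged MFCC and CQT magnitudes into one vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    cqt = np.abs(librosa.cqt(y=y, sr=sr)).mean(axis=1)
    return np.concatenate([mfcc, cqt])

if __name__ == "__main__":
    sr = 22050
    midi_notes = list(range(48, 73))             # 25 pitch classes, as in the paper
    X, labels = [], []
    for midi in midi_notes:                      # synthetic tones as a toy library
        tone = librosa.tone(librosa.midi_to_hz(midi), sr=sr, duration=1.0)
        X.append(note_features(tone, sr))
        labels.append(midi)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(64,),
                                      max_iter=2000, random_state=0))
    clf.fit(np.array(X), labels)
    test = librosa.tone(librosa.midi_to_hz(60), sr=sr, duration=1.0)
    print(clf.predict([note_features(test, sr)]))   # should recover label 60
```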

8.

According to narratology or narrative theory, a piece of artwork should tell a story built on its various tensions. In this study, an automated music composition algorithm using musical tension energy is proposed; the algorithm generates a musical piece by changing the musical tension. The proposed Algorithmic Composition Musical Tension Energy (ACMTE) method uses the level of musical tension, determined primarily by the chord progression and also by the musical parameters of pitch interval and rhythm. The effects of musical tension energy on those parameters were analyzed, and the paper presents a formula that unifies all generated parts. The experimental results demonstrate that thousands of pleasing pieces can easily be produced without the use of a music database. This algorithmic composition method can readily be applied to both streaming media and portable music devices, such as smartphones, notebooks, and MP3 players.
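
The unifying formula is not reproduced in the abstract; the toy sketch below is only a loose illustration of the general idea of letting a tension curve drive pitch interval and rhythm, not the ACMTE algorithm:

```python
# Toy illustration only (not the ACMTE algorithm): let a tension curve in [0, 1]
# drive the pitch-interval size and note duration of a generated melody.
import random

def generate_melody(tension_curve, start_pitch=60, seed=0):
    random.seed(seed)
    pitch, melody = start_pitch, []
    for tension in tension_curve:
        max_interval = 1 + int(tension * 11)          # higher tension -> wider leaps
        step = random.choice([-1, 1]) * random.randint(1, max_interval)
        pitch = min(84, max(48, pitch + step))
        duration = 2.0 - 1.5 * tension                # higher tension -> shorter notes
        melody.append((pitch, round(duration, 2)))
    return melody

if __name__ == "__main__":
    # A simple rise-and-fall tension arc over sixteen notes.
    arc = [i / 8 if i <= 8 else (16 - i) / 8 for i in range(16)]
    for note in generate_melody(arc):
        print(note)
```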


9.
The design and implementation of the Harbin Institute of Technology Digital Music Library (HIT-DML) is presented in this paper. First, a novel framework, a music data model, and a query language are proposed as the theoretical foundation of the library. Second, the music computing algorithms used in the library for feature extraction and matching are described. In addition, indices are introduced both for mining the themes of music objects and for accelerating content-based information retrieval. Finally, experimental results on the indices and the current development status of the library are provided. HIT-DML is distinguished by the following points. First, it is inherently based on database systems and combines database technologies with multimedia technologies seamlessly; musical data are stored in a structured way. Second, it has a solid theoretical foundation, from the framework and data model to the query language. Last, it can retrieve musical information based on content across different kinds of musical instruments. The indices used also power the library.

10.
Musical scores are traditionally retrieved by title, composer, or subject classification. Just as multimedia computer systems increase the range of opportunities available for presenting musical information, they also offer new ways of posing musically oriented queries. This paper shows how scores can be retrieved from a database on the basis of a few notes sung or hummed into a microphone. The design of such a facility raises several interesting issues pertaining to music retrieval. We first describe an interface that transcribes acoustic input into standard music notation. We then analyze string matching requirements for ranked retrieval of music and present the results of an experiment that tests how accurately people sing well-known melodies. The performance of several string matching criteria is analyzed using two folk song databases. Finally, we describe a prototype system developed for retrieval of tunes from acoustic input and evaluate its performance.
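
The paper evaluates its own matching criteria against folk song databases; as one simple illustration of ranked matching on sung input (an assumption, not the authors' criterion), the sketch below reduces melodies to up/down/same contour strings and ranks database tunes by a standard sequence-similarity ratio:

```python
# Illustrative ranked retrieval by melodic contour (not the paper's exact criterion).
from difflib import SequenceMatcher

def contour(pitches):
    """Reduce a pitch sequence to a string of U(p)/D(own)/S(ame) steps."""
    return "".join("U" if b > a else "D" if b < a else "S"
                   for a, b in zip(pitches, pitches[1:]))

def rank_tunes(query_pitches, database):
    """Return (title, similarity) pairs sorted from best to worst match."""
    q = contour(query_pitches)
    scored = [(title, SequenceMatcher(None, q, contour(p)).ratio())
              for title, p in database.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    tunes = {
        "Ode to Joy":    [64, 64, 65, 67, 67, 65, 64, 62],
        "Frere Jacques": [60, 62, 64, 60, 60, 62, 64, 60],
    }
    sung = [63, 63, 65, 66, 66, 65, 63, 62]   # slightly out-of-tune hummed query
    print(rank_tunes(sung, tunes))            # "Ode to Joy" should rank first
```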

11.
In this paper, we present a system that visualizes the expressive quality of a music performance using a virtual head. We provide a mapping through several parameter spaces: on the input side, we have elaborated a mapping between values of acoustic cues and emotion as well as expressivity parameters; on the output side, we propose a mapping between these parameters and the behaviors of the virtual head. This mapping ensures coherency between the acoustic source and the animation of the virtual head. After presenting some background information on human behavior expressivity, we introduce our model of expressivity. We explain how we elaborated the mapping between the acoustic and the behavior cues. We then describe the implementation of a working system that controls the behavior of a human-like head, varying it according to the emotional and acoustic characteristics of the musical execution. Finally, we present the tests we conducted to validate our mapping between the emotive content of the music performance and the expressivity parameters.

12.
13.
In the News     
The first story, "AI Heralds a New Musical Age," discusses current uses of AI technology to analyze and categorize music, as well as synthesize musical accompaniment. The second story, "Multiagent Designs Could Safeguard Networks across the Web," looks at how multiagent systems and neural networks can provide security against malicious software attacks.

14.
This paper focuses on the modeling of musical melodies as networks. The notes of a melody can be treated as nodes of a network, and connections are created whenever notes are played in sequence. We analyze a set of main tracks from different music genres, with melodies played on different musical instruments. We find that the resulting networks are, in general, scale-free and exhibit the small-world property. We measure the main metrics and assess whether these networks can be considered as formed by sub-communities. The outcomes confirm that distinctive features of the tracks can be extracted with this analysis methodology. This approach can have an impact on several multimedia applications such as music didactics, multimedia entertainment, and digital music generation.
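
As a minimal sketch of this modeling (using networkx, which the paper does not necessarily use), the code below builds a directed graph from a note sequence, adding an edge whenever one note follows another, and reports a few of the metrics mentioned above:

```python
# Minimal sketch: model a melody as a directed network of note transitions.
import networkx as nx

def melody_to_network(notes):
    """Nodes are pitches; an edge (weighted by count) links consecutive notes."""
    g = nx.DiGraph()
    for a, b in zip(notes, notes[1:]):
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1
        else:
            g.add_edge(a, b, weight=1)
    return g

if __name__ == "__main__":
    # Toy note sequence standing in for a real melody track.
    melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C", "D", "E", "F", "G", "C"]
    g = melody_to_network(melody)
    und = g.to_undirected()
    print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
    print("average clustering:", nx.average_clustering(und))
    print("average shortest path:", nx.average_shortest_path_length(und))
    print("degree distribution:", sorted(dict(g.degree()).values(), reverse=True))
```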

15.
This article describes a prototype immersive musical instrument that expands the concepts of traditional musical elements and, by employing physical, visual, and sound immersion, allows a spatial dimension to be integrated into the musical environment using 3D music and sound objects. From the prototype's evaluation results, we conclude that immersive musical instruments naturally give users a way to perform, compose, or improvise music in real time with a high degree of control.

16.
Numbered musical notation (jianpu) is one of the most familiar and widely used score formats, yet research on it in optical music recognition is almost nonexistent: the field's attention has focused on staff (five-line) notation. Based on an in-depth analysis of the characteristics of numbered notation, this paper proposes a complete method for recognizing it. After the scanned score is preprocessed, the notation region is first extracted using the bar-line features of each row; projection and seed-filling algorithms then locate the positions of the notation's symbol primitives, and different recognition algorithms identify the type of each primitive; finally, the recognized symbols are assembled into musical feature symbols to form a digitized score. Experiments show that this approach achieves satisfactory recognition of printed scores and is a worthwhile line of research.
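
As a minimal, hypothetical sketch of the projection step (not the authors' implementation), the code below finds the row bands and, within each row, the column bands of ink in a binarized score image; the seed-filling and recognition stages are omitted:

```python
# Minimal sketch of projection-based segmentation for a binarized score image.
# `image` is a 2-D numpy array where 1 marks ink pixels and 0 marks background.
import numpy as np

def runs(profile, threshold=0):
    """Return (start, end) index pairs where the projection exceeds the threshold."""
    active = profile > threshold
    bands, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i
        elif not on and start is not None:
            bands.append((start, i))
            start = None
    if start is not None:
        bands.append((start, len(active)))
    return bands

def segment(image):
    """Split the page into text rows, then each row into symbol columns."""
    rows = runs(image.sum(axis=1))                     # horizontal projection
    return [(r0, r1, runs(image[r0:r1].sum(axis=0)))   # vertical projection per row
            for r0, r1 in rows]

if __name__ == "__main__":
    page = np.zeros((10, 12), dtype=int)
    page[2:4, 1:3] = 1      # a fake symbol
    page[2:4, 5:8] = 1      # another fake symbol on the same row
    page[7:9, 2:5] = 1      # a symbol on a second row
    print(segment(page))    # two rows, each with its column bands
```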

17.
Cope, D. Computer, 1991, 24(7): 22–28
A research project called Experiments in Musical Intelligence (EMI) is discussed. One subprogram of EMI is an expert system that uses pattern recognition processes to create recombinant music, i.e., music written in the styles of various composers by means of a contextual recombination of elements in the music of those composers. This EMI subprogram separates and analyzes musical pitches and durations and then mixes and recombines the patterns of these pitches and durations so that, while each new composition is different, it substantially conforms to the style of the original. The fundamental problems in building a program to produce effective recombinant music are identified. The three steps used by the EMI program are discussed: pattern matching for characteristics of the composer's style, analyzing each component for its deep hierarchical musical function, and reassembling the parts sensitively with a technique drawn from natural-language processing. Some examples of EMI's output are examined.
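
Cope's actual analysis is far richer than this, but purely as a loose, simplified illustration of recombination, the sketch below chains (pitch, duration) events from a source melody with a first-order Markov model, so each output is new yet drawn from the source's local patterns:

```python
# Loose illustration of recombination (a drastic simplification of EMI):
# first-order Markov chaining of (pitch, duration) events from a source melody.
import random
from collections import defaultdict

def build_transitions(events):
    table = defaultdict(list)
    for a, b in zip(events, events[1:]):
        table[a].append(b)
    return table

def recombine(events, length, seed=0):
    random.seed(seed)
    table = build_transitions(events)
    current = random.choice(events)
    out = [current]
    for _ in range(length - 1):
        nexts = table.get(current) or events        # restart if at a dead end
        current = random.choice(nexts)
        out.append(current)
    return out

if __name__ == "__main__":
    source = [(60, 1.0), (62, 0.5), (64, 0.5), (62, 1.0), (60, 1.0),
              (67, 0.5), (65, 0.5), (64, 1.0), (62, 0.5), (60, 2.0)]
    print(recombine(source, 12))   # a new sequence built from the source's patterns
```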

18.
It is generally agreed that sound quality is one of the most difficult characteristics to measure for electroacoustic products such as earphones or loudspeakers. A conventional approach to measuring people's subjective perception of these sound reproduction products is to conduct a jury test with a group of participants; however, jury tests incur considerable costs, including effort and time. As development speed and cost become strategic competitive dimensions, the electroacoustic industry needs a more efficient approach to assess newly developed products for subjective sound quality. This study developed and validated a quantitative model, the tonal harmony level (THL), that can effectively predict people's subjective perception of music quality. Participants' subjective perception and preference were measured for four music genres by listening to short music excerpts (8 s), using both ordinal and interval scales. The purpose of using two scales is to examine the consistency between subjective perceptions and to determine the robustness of the subjective measurements. The experimental results were very stable over the two assessment procedures, and the objective THL measure is highly correlated with subjective preference. The analysis suggests that the construction of subjective music quality prediction models should also consider music genre. Among the four types of music, musical solos consisting of human vocals accompanied by a few instruments show a pattern distinct from the other three types. Thus, while the R² value of the overall regression model is 0.707, the R² values are 0.955 and 0.901 when the four music genres are categorized into two groups according to their patterns. When efficiency and accuracy are considered simultaneously, the results of this study support adopting the two-group categorization.

19.
NURBS (non-uniform rational B-spline) modelling has become a ubiquitous tool within architectural design praxis. In this article I examine three projects that use NURBS modelling as a means by which a musical system's inherent spatiality is visualised. There are numerous precedents in which architectural form is a derivation of a musical system, or a musical system is proportionally informed by architectonic gesture. I propose in this article three NURBS modelling methodologies: for the spatial analysis of Karlheinz Stockhausen's sound projection geometries in Pole für 2; for a spatial realisation of John Cage's indeterminate work Variations III; and for the generation of a surface manifold informed by musically derived soundscape data from the Japanese garden Kyu Furukawa Teien. Rather than seeking to translate music into inhabitable architecture, or architectonic form into music, I highlight an approach that produces an interstitial territory between discourses on architecture and music analysis.

20.
We propose a new approach to instrument recognition in the context of real music orchestrations ranging from solos to quartets. The strength of our approach is that it does not require prior musical source separation. Thanks to a hierarchical clustering algorithm exploiting robust probabilistic distances, we obtain a taxonomy of musical ensembles which is used to efficiently classify possible combinations of instruments played simultaneously. Moreover, a wide set of acoustic features is studied including some new proposals. In particular, signal to mask ratios are found to be useful features for audio classification. This study focuses on a single music genre (i.e., jazz) but combines a variety of instruments among which are percussion and singing voice. Using a varied database of sound excerpts from commercial recordings, we show that the segmentation of music with respect to the instruments played can be achieved with an average accuracy of 53%.
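
The probabilistic distances and the full feature set (including signal-to-mask ratios) are not reproduced in the abstract; as a minimal sketch, assuming each instrument combination is summarized by a feature vector, the code below builds a taxonomy with SciPy's hierarchical clustering and cuts it into a few ensemble classes (the feature vectors here are random stand-ins):

```python
# Minimal sketch: hierarchical clustering of instrument-combination feature vectors.
# The feature vectors here are random stand-ins; real ones would come from audio.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = ["piano solo", "sax solo", "piano+bass", "piano+bass+drums",
              "sax+piano", "voice+piano", "trio", "quartet"]
    features = rng.normal(size=(len(labels), 12))     # 12-D stand-in descriptors

    # Agglomerative clustering (Ward linkage) builds the ensemble taxonomy...
    tree = linkage(features, method="ward")
    # ...which is then cut into three ensemble classes.
    classes = fcluster(tree, t=3, criterion="maxclust")
    for name, cls in zip(labels, classes):
        print(f"{name:>18s} -> class {cls}")
```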
