Centre for Digital Music
School of Electronic Engineering and Computer Science
Queen Mary University of London, UK
Mon. June 30th, 2014
3:00-5:30PM
Engineering Building - Room ENG 209
Free access
⇒ see maps and how to get to QMUL
Computer music tools for music notation have long been restricted to conventional approaches and dominated by a few systems, mainly oriented towards music engraving. During the last decade, driven by artistic and technological evolutions, new tools and new forms of music representation have emerged. The recent advent of systems like Bach, MaxScore or INScore (to cite just a few) clearly indicates that computer music notation tools have become mature enough to diverge from traditional approaches and to explore new domains and usages.
This seminar is part of a series of events organized by the AFIM working group on music notation issues. It focuses on notation tools for music computation and performance.
See also: http://www.eecs.qmul.ac.uk/events/view/seminar-on-music-notation-computation
AFIM work group Les nouveaux espaces de la notation musicale ("New spaces for music notation") - Dominique Fober, Pierre Couprie, Yann Geslin, Jean Bresson (GRAME, Lyon / IReMus, Université de Paris-Sorbonne / INA-GRM, Paris / IRCAM UMR STMS, Paris). This presentation summarizes the work of a group dedicated to music notation problems and issues in the field of computer music.
Thomas Coffy, Arshia Cont, Jean-Louis Giavitto (IRCAM / CNRS / INRIA) Composing interactive music (possibly with score following) requires tight coordination and several round trips between many tools to write the score, to author the electronic actions, and to assess their synchronisation. Unifying the composition and performance phases gives composers and electronic music designers a global approach with the best of both worlds. The AscoGraph interface embodies this need to unify authoring and performance using the Antescofo reactive engine and (optionally) score following.
Shengchen Li (Queen Mary University of London)
Phrases are common musical units akin to those in speech and text. In music performance, performers often change the way they vary the tempo from one phrase to the next in order to choreograph patterns of repetition and contrast. This activity is commonly referred to as expressive music performance. Despite its importance, expressive performance is still poorly understood: no formal models exist that would explain, or at least quantify and characterise, commonalities and differences in performance style. We present a method for displaying tempo variation across phrases, so that expressiveness can be compared across performances through tempo pattern annotations.
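As a rough illustration of the kind of per-phrase tempo analysis the abstract describes, here is a minimal Python sketch that derives a tempo curve from beat onset times and normalises it within phrases. The beat times, phrase boundaries, and normalisation are illustrative assumptions, not the method actually presented in the talk.

    # Minimal sketch: per-phrase tempo curves from beat onset times (in seconds).
    # The beat times and phrase boundaries below are made up for illustration;
    # the talk's actual annotation and comparison method may differ.
    import numpy as np

    beat_times = np.array([0.00, 0.52, 1.05, 1.60, 2.10, 2.58, 3.02, 3.50, 4.05, 4.65])
    phrase_boundaries = [0, 5, 10]   # beat indices where phrases start/end (assumed)

    # Instantaneous tempo in BPM from inter-beat intervals.
    ibis = np.diff(beat_times)
    tempo_bpm = 60.0 / ibis

    for start, end in zip(phrase_boundaries[:-1], phrase_boundaries[1:]):
        phrase_tempo = tempo_bpm[start:end - 1]          # tempi within this phrase
        normalised = phrase_tempo / phrase_tempo.mean()  # normalise so phrase shapes are comparable
        print(f"phrase {start}-{end}: mean {phrase_tempo.mean():.1f} BPM, shape {np.round(normalised, 2)}")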
David Rizo, José M. Iñesta, Beatriz Pascual, Antonio Ezquerro, Luis A. González (DLSI - University of Alicante / EASD Alicante / CSIC)
An interactive notation system for encoding Spanish mensural music from the 17th and 18th centuries is presented, able to cope with the huge variability of the available manuscripts. Building on classic computational methods for notating music, some new contributions made to enable the encoding of this kind of music will be presented.
Arild Stenberg (Faculty of Music, University of Cambridge)
In this workshop I will comment on a recently completed experiment in which graphically modified versions of musical pieces were compared to conventional versions. Results showed that the modified versions, which used layout and spacing rules reflecting the underlying structure of the musical discourse, elicited fewer reading errors. I would like to demonstrate the possible advantages of the new score designs with the help of a few participants.
Katerina Kosta, Elaine Chew, Oscar F. Bandtlow (Queen Mary University of London)
Our study aims to understand the connection between performed loudness and dynamic notation such as piano (p) and forte (f) in the score. The study serves to demonstrate that the meanings of these dynamic markings change, depending on the intended (score-defined) and projected (actual) dynamic levels of the surrounding context. We use the PELT change-point algorithm to detect changes in loudness (in sones) time series data derived from the Mazurka dataset. Next, we verify that dynamic markings correspond to change points so as to conceptualize differences in dynamic values. Knowing the change points allows us to abstract the shaping of dynamics. The dynamic level gradations further allow us to construct an evolving dynamic map linking the performer's intended loudness and the composer's score markings. We show how these intentions can be modelled as score-based rules. Applications include the synthesis of more natural sounding music from symbolic information, and transcription of expressive score markings from audio.
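To illustrate the change-point step described above, here is a minimal Python sketch that applies the PELT algorithm (via the open-source ruptures package) to a loudness time series. The synthetic data, cost model, and penalty value are assumptions for demonstration only, not the study's actual settings or the Mazurka data.

    # Minimal sketch: PELT change-point detection on a loudness (sones) time series.
    # Uses the open-source `ruptures` package; the synthetic data, cost model ("l2")
    # and penalty value are illustrative assumptions, not the study's actual pipeline.
    import numpy as np
    import ruptures as rpt

    # Synthetic loudness curve: three segments of different mean level plus noise,
    # standing in for a performance's loudness (in sones) sampled over time.
    rng = np.random.default_rng(0)
    loudness = np.concatenate([
        rng.normal(12.0, 0.8, 200),   # quieter passage (e.g. around a "p" marking)
        rng.normal(25.0, 1.2, 150),   # louder passage (e.g. around an "f" marking)
        rng.normal(16.0, 1.0, 180),   # intermediate level
    ])

    # Fit PELT with an L2 cost (detects shifts in mean) and predict change points.
    algo = rpt.Pelt(model="l2", min_size=20).fit(loudness)
    change_points = algo.predict(pen=50)   # penalty controls how many changes are detected
    print(change_points)  # indices where the loudness level changes, e.g. [200, 350, 530]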
INEDIT ANR-12-CORD-0009
EFFICACe ANR-13-JS02-0004-01.