Computational Music Performance Research, Applied
Disciplines
Computer Sciences (50%); Arts (40%); Physics, Astronomy (10%)
Keywords
Artificial Intelligence, Expressive Music Performance, Intelligent Music Processing, Computational Perception
This project builds on many years of basic research in which we investigated various aspects of expressive music performance with novel computational methods and developed predictive computer models of performance dimensions such as expressive timing, dynamics, and articulation. In the process, we not only made interesting discoveries about this complex art, but also developed a great deal of new computational music processing technology (e.g., algorithms for beat and tempo tracking, audio-to-audio synchronisation, performance visualisation, and performance prediction, to name but a few). The goal of the present project is to build on these methods and develop them to the point where they become efficient and robust enough for real-world applications. Expression in music may seem an unlikely topic for application-oriented research, but computers that can deal effectively with expressively played music (audio) could enable entirely new and creative (and commercially relevant) applications. The research to be performed in this project is motivated by three such visionary application scenarios:
1. Multimedia Arts: real-time visualisation of music-dramatic works on stage (as realised, in particular, by the Ars Electronica Center / Futurelab, Linz).
2. Synthetic Musical "Expression": naturally expressive synthetic instruments and virtual orchestras.
3. Automatic Feedback on Expressive Performance: interactive tools for music performance teaching and analysis.
From an analysis of the requirements that such applications would pose, we have derived the following three main research goals:
1. Expression Extraction: computer methods for automatically and precisely extracting details of expressive performance (i.e., tempo, timing, dynamics, etc.) from audio recordings, possibly in real time; this also involves methods for the precise alignment of audio recordings to musical scores.
2. Expression Tracking: algorithms for following live performances (audio streams) and aligning them to given scores or reference audio recordings in real time; these methods should also work with incomplete scores or recordings and be extremely robust and reactive in the face of expressive deviations, unforeseen disruptions, errors, etc. A sub-problem here is the development of methods for inducing robust predictive tempo models on-line.
3. Expression Rendering: computational models of specific dimensions of expressive performance that permit the automatic generation of synthetic music performances that sound reasonably musical and "natural"; possibly also methods for interactively modifying or controlling such performances.
A number of potential application partners - e.g., the Ars Electronica Center and the Vienna Symphonic Library - are supporting this work (e.g., by supplying test data and audio materials), motivated by the prospect of being able to realise complex application projects later on.
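The first two goals both hinge on aligning a performance to a score or reference recording. A standard building block for such alignment is dynamic time warping (DTW). The sketch below is a minimal offline illustration over hypothetical 1-D feature sequences - real systems operate on richer audio features and use online variants - and is not the project's actual implementation:

```python
import math

def dtw_path(score, perf):
    """Minimum-cost DTW alignment path between two 1-D feature sequences."""
    n, m = len(score), len(perf)
    INF = math.inf
    # cost[i][j] = cheapest way to align score[:i] with perf[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(score[i - 1] - perf[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1],
                                 cost[i - 1][j - 1])
    # Backtrack from the final cell to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        moves = [(cost[i - 1][j - 1], i - 1, j - 1),  # diagonal step
                 (cost[i - 1][j], i - 1, j),          # score advances
                 (cost[i][j - 1], i, j - 1)]          # performance advances
        _, i, j = min(moves)
    return path[::-1]

score = [0.0, 1.0, 2.0, 3.0]        # toy "score side" features
perf = [0.0, 0.0, 1.1, 2.0, 2.9]    # same content, played slower at the start
print(dtw_path(score, perf))
```

The returned path pairs each score frame with the performance frames it spans; the extra `(0, 1)` pairing in this toy example reflects the slower opening of the "performance".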
The overall goal of the project was to develop musically intelligent computers that can deal with expressive performances and recordings of classical music in useful ways. The term expressive performance is used here to mean all the things that performing musicians (singers and instrumentalists) add to a piece in order to make it come alive and give it a specific character: the specific choice of tempo, expressive timing (speeding up, slowing down), accentuating notes and chords, changing the loudness, and so on. The motivation of the project was to develop computers that have an understanding of such aspects of music and can support real-world (artistic and music-didactic) applications such as real-time music and performance visualisation, synthetic performers and virtual orchestras, and didactic tools for music performance analysis and teaching. A large number of computational methods were developed, ranging from computer programs that can identify the piece being played simply by listening to a live performance, to computers that listen to live on-stage performances in real time (through a microphone) and are able to track the performers' position in the musical piece, regardless of the chosen tempo or whether the musicians make mistakes or leave out entire parts. (This can be used, among other things, for live music visualisation, score display, and to drive a fully autonomous automatic page-turning device for musicians.) Also, computer programs were developed that learn to play a given musical piece more or less expressively, by shaping tempo and loudness in such a way that the piece sounds lively and (most of the time) natural.
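The piece-identification capability mentioned above relates to the symbolic fingerprinting work listed in the research output (Arzt et al., ISMIR 2012). A toy sketch of the general idea - hashing small groups of note events using pitches plus a ratio of inter-onset intervals, so that the fingerprint survives tempo changes - is given below; the exact hash layout, window sizes, piece names, and note data are our own illustrative assumptions, not the published algorithm:

```python
from collections import Counter, defaultdict

def fingerprints(notes):
    """notes: list of (onset_seconds, midi_pitch) tuples, sorted by onset."""
    fps = []
    n = len(notes)
    for a in range(n - 2):
        for b in range(a + 1, min(a + 3, n - 1)):
            for c in range(b + 1, min(b + 3, n)):
                d1 = notes[b][0] - notes[a][0]
                d2 = notes[c][0] - notes[b][0]
                if d1 > 0:
                    # pitch triple + quantised IOI ratio -> tempo-invariant key
                    fps.append((notes[a][1], notes[b][1], notes[c][1],
                                round(d2 / d1, 1)))
    return fps

def build_index(pieces):
    """Map each fingerprint to the set of piece names containing it."""
    index = defaultdict(set)
    for name, notes in pieces.items():
        for fp in fingerprints(notes):
            index[fp].add(name)
    return index

def identify(index, query_notes):
    """Vote over matching fingerprints; return the best-supported piece name."""
    votes = Counter()
    for fp in fingerprints(query_notes):
        for name in index.get(fp, ()):
            votes[name] += 1
    return votes.most_common(1)[0][0] if votes else None

pieces = {  # made-up note lists standing in for real scores
    "piece_a": [(0.0, 60), (0.5, 62), (1.0, 64), (1.5, 65), (2.0, 67)],
    "piece_b": [(0.0, 48), (0.5, 50), (1.0, 52), (1.5, 53), (2.0, 55)],
}
index = build_index(pieces)
# The same melody played 50% slower is still identified correctly.
query = [(0.0, 60), (0.75, 62), (1.5, 64), (2.25, 65), (3.0, 67)]
print(identify(index, query))  # -> piece_a
```

Because only IOI ratios enter the key, uniformly rescaling all onsets (i.e., playing faster or slower) leaves every fingerprint unchanged, which is what makes listening-based identification robust to the performer's tempo choice.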
These programs won all relevant awards at an international computer piano performance contest (RENCON 2011, Padova, Italy), where scientists compare their computer models of expressive music performance. Some of the new technologies described above can be seen in action on our YouTube channel (www.youtube.com/user/CPJKU). The project has laid the technological foundations for a number of follow-up projects and cooperations in which the technologies will be used in real applications - among others, the multi-national European research project PHENICX (http://phenicx.upf.edu), in which we cooperate with the world-famous Royal Concertgebouw Orchestra Amsterdam to create new multimedia experiences around classical music concerts.
- Universität Linz - 100%
Research Output
- 44 Citations
- 19 Publications
- Arzt A (2013). SIARCT-CFP: Improving Precision and the Discovery of Inexact Musical Patterns in Point-Set Representations. DOI 10.5281/zenodo.1416622.
- Arzt A (2013). SIARCT-CFP: Improving Precision and the Discovery of Inexact Musical Patterns in Point-Set Representations. DOI 10.5281/zenodo.1416621.
- Knees P (2012). Sound/tracks: Artistic Real-Time Sonification of Train Journeys. Journal on Multimodal User Interfaces, pp. 87-93. DOI 10.1007/s12193-011-0089-x.
- Grachten M (2012). Linear Basis Models for Prediction and Analysis of Musical Expression. Journal of New Music Research, pp. 311-322. DOI 10.1080/09298215.2012.731071.
- Arzt A (2012). Towards a Complete Classical Music Companion. Proceedings of the 20th European Conference on Artificial Intelligence (ECAI 2012), Montpellier, France.
- Arzt A (2012). Towards a Complete Classical Music Companion. Book chapter, IOS Press. DOI 10.3233/978-1-61499-098-7-67.
- Arzt A (2014). The Complete Classical Music Companion v0.9. Demo paper, 53rd AES Conference on Semantic Audio, Audio Engineering Society, London.
- Collins T (2014). Bridging the Audio-Symbolic Gap: The Discovery of Repeated Note Content Directly from Polyphonic Music Audio. Proceedings of the 53rd AES Conference on Semantic Audio, Audio Engineering Society, London.
- Flossmann S (2012). Expressive Performance Rendering with Probabilistic Models. Book chapter, Springer, pp. 75-98. DOI 10.1007/978-1-4471-4123-5_3.
- Arzt A (2012). Fast Identification of Piece and Score Position via Symbolic Fingerprinting. Proceedings of the 13th International Society for Music Information Retrieval Conference (ISMIR 2012), Porto, Portugal.
- Niedermayer B (2011). On the Importance of "Real" Audio Data for MIR Algorithm Evaluation at the Note Level - A Comparative Study. Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011), Miami, Florida, USA.
- Frostel H (2011). The Vowel Worm: Real-Time Mapping and Visualisation of Sung Vowels in Music. Proceedings of the 8th Sound and Music Computing Conference (SMC 2011), Padova, Italy.
- Korzeniowski F (2013). Tracking Rests and Tempo Changes: Improved Score Following with Particle Filters. Proceedings of the International Computer Music Conference (ICMC), Perth, Australia.
- Arzt A (2010). Towards Effective "Any-Time" Music Tracking. Proceedings of the Starting AI Researchers' Symposium (STAIRS 2010), Lisbon, Portugal.
- Niedermayer B (2010). Strategies towards the Automatic Annotation of Classical Piano Music. Proceedings of the 7th Sound and Music Computing Conference (SMC 2010), Barcelona, Spain.
- Arzt A (2010). Robust Real-time Music Tracking. Proceedings of the 2nd Vienna Talk on Musical Acoustics (VITA 2010), Vienna, Austria.
- Widmer G (2014). What Really Moves Us in Music: Expressivity as a Challenge to Semantic Audio Research. Invited abstract, Proceedings of the 53rd AES Conference on Semantic Audio, Audio Engineering Society, London.
- Arzt A (2010). Simple Tempo Models for Real-time Music Tracking. Proceedings of the 7th Sound and Music Computing Conference (SMC 2010), Barcelona, Spain.
- Niedermayer B (2011). Version Detection for Historical Musical Automata. Proceedings of the 8th Sound and Music Computing Conference (SMC 2011), Padova, Italy.