This Case Study focuses on research undertaken in the Centre for Digital Music (C4DM) into Music Informatics. This research – sometimes also known as Music Information Retrieval, Semantic Audio or Intelligent Audio – investigates methods to extract semantic information from musical audio files, and to use this information in the production, distribution and consumption of music. According to EPSRC’s 2010 “Programme landscapes” for ICT, C4DM is one of the top 5 centres in “People and Interactivity,” and in EPSRC’s Music and Acoustic Technology area, C4DM is the largest of the “Current Major EPSRC Research Investments”.
The research has had an economic impact, and an impact on society, culture and creativity. To make our Music Informatics research available to as many potential users as possible, we release many open source software tools. Our website isophonics.net, which also hosts videos and screencasts of demos, receives some 1500 visitors/month (Google Analytics). In Nov 2011 Isophonics was selected as one of eight European music-tech startups to be invited to pitch to a panel of judges at “TechPitch4.5”, held at EMI Music headquarters [I6].
Sonic Visualiser (SV) is available for Linux, Mac OS/X and Windows, and has been downloaded over 330,000 times since its release in 2007 (SourceForge and code.soundsoftware.ac.uk download statistics, June 2013). In a survey of 821 users (11 Oct 2010 to 25 Apr 2013), 49% reported that they were non-academic users, using it professionally (9%) or for personal use (40%). Usage is international, with users from 66 countries including the USA (32%), UK (25%), Germany (10%) and France (8%). An overwhelming majority (82%) report that they enjoy using SV [I7].
[Video: Sonic Visualiser chord labelling example, by Matthias Mauch, on Vimeo]
A special version of SV, Sonic Visualiser Library Edition [Im9], has been developed with specific evidence-based adaptations both for the workflow of musicologists and to satisfy strict British Library (BL) requirements, such as no export of audio (for copyright reasons). It is currently available to BL staff members and Edison Fellows, and the BL has also approved its installation in BL reading rooms. Once some legacy technology issues are dealt with, it is anticipated that it will be deployed on the computers equipped with the British Library Sound Server, allowing any reader to use SV for playback, analysis and visualisation. Results of analyses can be exported via email, allowing readers to make further use of their analyses in other settings (but without the audio, for copyright reasons).
The “Yanno” system [I12] re-purposes the Vamp plugin technology underlying SV as a web service for music teaching in schools. Its development was underpinned by observational research [R5] on how to achieve the strongest impact in the high-school music classroom, and it uses Mauch & Dixon’s chord analysis plugin [R3].
Teachers and students gave very positive feedback (verbally and by email), and the tool was subsequently used in classroom teaching in our partner schools. One school enthusiastically forwarded details to a music education network, leading to significant interest in the service. In one month (March 2012) Yanno had 7,637 views, and since its launch in early 2012 it has analysed 3,236 distinct YouTube videos, currently receiving around 2,800 hits/month (website statistics, Jan-Jun 2013).
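As an illustration of how a Vamp analysis plugin can be wrapped as a web service in the way Yanno does, the following is a minimal sketch, not the actual Yanno implementation. It assumes Flask, soundfile, the Python Vamp host (the vamp package) and Mauch & Dixon’s Chordino plugin are installed; the endpoint and field names are illustrative only.

```python
# Minimal sketch of exposing a Vamp chord-analysis plugin as a web service.
# Assumptions (not from the case study): Flask, soundfile and the Python Vamp
# host ("vamp" package) are installed, along with the Chordino plugin
# (plugin key "nnls-chroma:chordino"); endpoint and field names are
# illustrative only, not the actual Yanno interface.
import io

import soundfile as sf
import vamp
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/chords", methods=["POST"])
def chords():
    # Expect an uploaded audio file in the "audio" form field.
    audio, rate = sf.read(io.BytesIO(request.files["audio"].read()))
    if audio.ndim > 1:
        audio = audio.mean(axis=1)      # mix down to mono for analysis
    result = vamp.collect(audio, rate, "nnls-chroma:chordino")
    labels = [{"time": str(f["timestamp"]), "chord": f["label"]}
              for f in result["list"]]
    return jsonify(labels)

if __name__ == "__main__":
    app.run()
```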
SV has been used in a variety of public engagement events, including at the Science Museum Dana Centre (over 100 attendees) and the EPSRC Pioneers ’09 event (over 700 attendees) [Im8]. SV has also been used to create visualisations of music as a backdrop for a dance performance during the Cats Meet Show at Latitude Festival 2011 (Henham Park, 14-17 July 2011) [I10]. The show was held on the Music and Film stage at the Festival, and attracted over 500 people.
Sonic Visualiser was featured (Sep 2011) in “SoundStage! HiFi”, an online magazine for audiophiles. The article showed how music enthusiasts could use Sonic Visualiser to examine recordings marketed as “high resolution”, to see whether they were genuinely produced at a higher sampling rate or whether a conventional 44.1 kHz or 48 kHz source had simply been upsampled to 88.2 kHz or 96 kHz: a genuine high-resolution recording contains spectral energy above the roughly 22 kHz ceiling of CD audio, whereas an upsampled source leaves that band visibly empty in SV’s spectrogram view. [I11]
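As a rough illustration of the kind of check the article describes (our own sketch, not code from the article), the following estimates how much of a file’s spectral energy lies above the 22.05 kHz ceiling of CD audio. It assumes the numpy and soundfile packages, and the file name is hypothetical.

```python
# Minimal sketch (our own illustration, not taken from the article): estimate
# whether a nominally "high resolution" file contains spectral energy above
# the 22.05 kHz ceiling of CD audio, or has merely been upsampled.
# Assumes the numpy and soundfile packages; the file name is hypothetical.
import numpy as np
import soundfile as sf

audio, rate = sf.read("track_96k.flac")
if audio.ndim > 1:
    audio = audio.mean(axis=1)               # mix down to mono

windowed = audio * np.hanning(len(audio))    # reduce spectral leakage
spectrum = np.abs(np.fft.rfft(windowed)) ** 2
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)

above = spectrum[freqs > 22050.0].sum()      # energy above CD-audio Nyquist
share = 100.0 * above / spectrum.sum()
print(f"Sample rate: {rate} Hz")
print(f"Energy above 22.05 kHz: {share:.4f}% of total")
# A vanishingly small share suggests an upsampled 44.1/48 kHz source rather
# than a genuine high-resolution recording.
```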
Sonic Visualiser was featured in the Coursera course “Introduction to Digital Sound Design” [I4], with a 20-minute tutorial dedicated to SV (Jan 2013). The course has had 6,800 Facebook likes and led to a “spike” of approximately 6,000 additional downloads of SV by users of the course.
SV is used to support the teaching of Audio & Music at Queen Mary and at other institutions. Examples of non-QM courses include:
C4DM research into Music Informatics is grounded in digital signal processing (DSP) for extracting features from musical audio. In 1998 we published one of the field’s earliest papers to tackle automatic music genre analysis [R1]. This contributed to a JISC/NSF Digital Libraries co-funded project, OMRAS (On-line Music Recognition and Search, www.omras.org, 1999-2003) with Sandler, Oxford University and University of Massachusetts (Amherst). Sandler (Professor of Signal Processing) and other key researchers (Bello, Reiss) and academics (Plumbley) moved to Queen Mary in 2001. The OMRAS project (now at Queen Mary and Goldsmiths) produced pioneering work using audio queries to search symbolic music databases, summarized in [R2]. It also established the ISMIR conference series (ismir2000.ismir.net), with 260 delegates in 2012.
The EU FP6 SIMAC project (mtg.upf.edu/static/semanticaudio, 2004-2006) was one of the first major European music informatics projects. C4DM was responsible for developing and defining feature extraction algorithms, including rigidly defined structured semantics for audio features. This led to a major EPSRC ICT “Large Grant” project OMRAS2 (www.omras2.org, 2006-2010, £2.5M).
The SIMAC and OMRAS2 projects (and others) supported the development of our open source cross-platform tool “Sonic Visualiser”, designed to allow our music informatics research methods to be used by a wide range of users (see Impact). Examples of our methods available as Sonic Visualiser “Vamp” plugins (www.vamp-plugins.org) include: note onset detector, beat and barline tracker, tempo estimator, key estimator, tonal change detector, structural segmenter, timbral and rhythmic similarity estimator, adaptive spectrogram, note transcription, chromagram, harmony extraction, chord extraction [R3], and audio alignment.
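These plugins can also be run outside Sonic Visualiser by any Vamp host. As a minimal sketch, assuming the Python Vamp host (the vamp package), soundfile and the QM Vamp Plugins are installed (the file name and plugin key are given for illustration only), the beat and barline tracker can be invoked as follows:

```python
# Minimal sketch of running a Vamp plugin outside Sonic Visualiser.
# Assumptions (not from the case study): the Python Vamp host ("vamp"
# package), soundfile and the QM Vamp Plugins are installed; the file name
# and the plugin key "qm-vamp-plugins:qm-barbeattracker" are illustrative.
import soundfile as sf
import vamp

audio, rate = sf.read("example.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)          # mix down to mono

# Run the QM bar and beat tracker and print each estimated beat time
# together with its label (the beat's position within the bar).
result = vamp.collect(audio, rate, "qm-vamp-plugins:qm-barbeattracker")
for feature in result["list"]:
    print(feature["timestamp"], feature.get("label", ""))
```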
For particular audiences, we have undertaken ethnographic research into potential users of our music informatics visualisation technology while developing tailored systems, specifically for musicologists at the British Library [R4] and for music lessons in schools [R5].
To help promote take-up of the research and maximize impact, the Sonic Visualiser website includes a range of tutorial material, including videos on user-oriented tasks such as Mapping Rubato and Loudness, Mapping Melody and Adding in Bar and Beat Numbers.
C4DM research in Music Informatics has also had a major impact on other research fields. For example, the work on classification in the wavelet domain [R1] has received 105 citations across a wide range of fields including medical, finance, oil & gas, semiconductor and sports research (Google Scholar, March 2012), and also has 50 patent citations [I5].