Queen Mary University of London - Tuesday 19th December 2023
News
Keynote speaker: Stefan Bilbao
Title: Physics-based Audio: Sound Synthesis and Virtual Acoustics (video link)
Abstract: Any acoustically-produced sound must be the result of physical laws that describe the dynamics of a given system, one that is always at least partly mechanical and sometimes has an electronic element as well. One approach to the synthesis of natural acoustic timbres, therefore, is through simulation, often referred to in this context as physical modelling, or physics-based audio. In this talk, the principles of physics-based audio and the various approaches to simulation are described, followed by a set of examples covering: various musical instrument types; the important related problem of the emulation of room acoustics, or “virtual acoustics”; the embedding of instruments in a 3D virtual space; electromechanical effects; and new modular instrument designs based on physical laws but without a counterpart in the real world. Some more technical details follow, including the strengths, weaknesses and limitations of such methods, and links to data-centred black-box approaches to sound generation and effects processing. The talk concludes with some musical examples and recent work on moving such algorithms to a real-time setting.
Bio: Stefan Bilbao is a Professor at the Reid School of Music, University of Edinburgh, where he holds the Personal Chair of Acoustics and Audio Signal Processing. He currently works on computational acoustics, with applications in sound synthesis and virtual acoustics. Special topics of interest include finite difference time domain methods, distributed nonlinear systems such as strings and plates, architectural acoustics, spatial audio in simulation, multichannel sound synthesis, and hardware and software realizations.
More information on: https://www.acoustics.ed.ac.uk/group-members/dr-stefan-bilbao/
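As a purely illustrative aside (not material from the talk), the sketch below shows the basic idea of physics-based audio described in the abstract: a simple finite difference time domain scheme for the 1D wave equation, used to synthesise a short plucked-string tone. All parameter values (sample rate, pitch, pluck and pickup positions) are assumptions chosen for the example.

```python
# Minimal, illustrative sketch of physics-based sound synthesis (not code from
# the talk): a finite difference time domain (FDTD) scheme for the 1D wave
# equation, modelling an ideal string of unit length with fixed ends.
# All parameter values below are assumptions chosen for this example.
import numpy as np

SR = 44100                        # audio sample rate (Hz)
f0 = 220.0                        # fundamental frequency (Hz)
dur = 1.0                         # output duration (s)

k = 1.0 / SR                      # time step
c = 2.0 * f0                      # wave speed (c = 2 * L * f0 with L = 1)
N = int(np.floor(1.0 / (c * k)))  # grid intervals, chosen so the scheme is stable
lam2 = (c * k * N) ** 2           # squared Courant number, <= 1 (stability condition)

u_prev = np.zeros(N + 1)          # displacement at time step n - 1
u_curr = np.zeros(N + 1)          # displacement at time step n

# Initial condition: triangular "pluck" with zero initial velocity.
pluck = int(0.3 * N)
u_curr[:pluck + 1] = np.linspace(0.0, 1.0, pluck + 1)
u_curr[pluck:] = np.linspace(1.0, 0.0, N + 1 - pluck)
u_prev[:] = u_curr

out = np.zeros(int(dur * SR))
pickup = int(0.8 * N)             # grid point used as output ("pickup") position

for n in range(len(out)):
    u_next = np.zeros_like(u_curr)
    # Explicit leapfrog update of interior points; endpoints stay clamped at zero.
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + lam2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next
    out[n] = u_curr[pickup]

out /= np.max(np.abs(out))        # normalise to [-1, 1] for playback or file output
```

Writing the resulting signal to a WAV file (for example with scipy.io.wavfile.write) yields a simple, idealised string tone; practical physical models add loss, stiffness and excitation mechanisms on top of schemes like this.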
DMRN+18 is sponsored by
The UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM), a leading PhD research programme aimed at the Music/Audio Technology and Creative Industries, based at Queen Mary University of London.
Information on a new AIM CDT call for PhD positions will be on our website soon.
Call for Contributions - Closed
The Digital Music Research Network (DMRN) aims to promote research in the area of digital music, by bringing together researchers from UK and overseas universities, as well as industry, for its annual workshop. The workshop will include invited and contributed talks and posters. The workshop will be an ideal opportunity for networking with other people working in the area.
* Call for Contributions
You are invited to submit a proposal for a "talk" and/or a "poster" to be presented at this event.
TALKS may range from the latest research, through research overviews or surveys, to opinion pieces or position statements, particularly those likely to be of interest to an interdisciplinary audience. We plan to keep talks to about 15 minutes each, depending on the number of submissions. Short announcements about other items of interest (e.g. future events or other networks) are also welcome.
POSTERS can be on any research topic of interest to the members of the network.
The abstracts of presentations will be collated into a digest and distributed on the day.
* Submission
Please prepare your talk or poster proposal in the form of an abstract (1 page A4, using the template 1page-dmrn18-template-word [DOC 97KB] or DMRN-2023-Template--latex [407KB]). Submit it via email to dmrn@lists.eecs.qmul.ac.uk, giving details of your presentation.
* Deadlines
19 Nov 2023: Abstract submission deadline
24 Nov 2023: Notification of acceptance
15 Dec 2023: Registration deadline
19 Dec 2023: DMRN+18 Workshop
Registration
The event will be held in person, but talks can also be followed online. Registration is mandatory for those attending in person (the registration fee is £25 and covers catering, coffee and lunch).
eshop-QMUL DMRN+18 link
Programme
Location: Mason Lecture Theatre, Bancroft building
Queen Mary University of London - Mile End Campus
10:00
Welcome - Simon Dixon
10:10
KEYNOTE
Physics-based Audio: Sound Synthesis and Virtual Acoustics, Stefan Bilbao (Acoustics and Audio Group, University of Edinburgh)
(video link)
11:10
Coffee break
11:30
“In-depth performance analysis of the state-of-the-art algorithm for automatic drum transcription”, Mickaël Zehren (Umea Universitet, Sweden), Marco Alunno (Universidad EAFIT, Colombia) and Paolo Bientinesi (Umea Universitet, Sweden) (video link)
11:50
“Automatic Guitar Transcription with a Composable Audio-to-MIDI-to-Tablature Architecture”, Xavier Riley, Drew Edwards and Simon Dixon (Queen Mary University of London, UK)
12:10
“The Stories Behind the Sounds: Finding Meaning in Creative Musical Interactions with AI”, Jon Gillick (University of the Arts London, UK)
12:30
Announcements:
“The Cadenza Challenge for Improving Music for People with a Hearing Loss”, Gerardo Roa Dabike and Trevor Cox (University of Salford, UK).
“Timbre Tools Hackathon: Timbre Tools for the Digital Instrument Maker”, Charalampos Saitis (Queen Mary University of London, UK).
12:45
Lunch - Poster Session
14:15
“An automated pipeline for characterizing timing in jazz trios”, Huw Cheston, Ian Cross, and Peter Harrison (University of Cambridge, UK)
14:35
“Electric Guitar Sound Restoration with Diffusion Models”, Ronald Mo (University of Sunderland, UK)
14:55
“DedAI: Advanced AI-Driven Music Composition Informed by EEG-Based Emotional Analysis”, Elliott Mitchell (University of Westminster, UK) (video link)
15:15
15:35
“PolyDDSP: A lightweight, Polyphonic Differentiable Digital Signal Processing Library”, Tom Baker, Ke Chen (University of Manchester, UK) and Ricardo Climent (NOVARS Research Institute, University of Manchester, UK)
15:55
“A Two-Stage Differentiable Critic Model for Symbolic Music”, Yuqiang Li, Shengchen Li (Xi’an Jiaotong-Liverpool University, China) and George Fazekas (Queen Mary University of London, UK) (video link)
16:15
Close - Simon Dixon
* There will be an opportunity to continue discussions after the Workshop in a nearby pub/restaurant for those in London.
Posters
1
“Tokenization Informativeness and its Impact on Symbolic MIR Tasks”, Dinh-Viet-Toan Le (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France), Louis Bigo (Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400 Talence, France) and Mikaela Keller (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France)
2
“Rhythm Guitar Tablature Continuation from Chord Progression and Tablature Prompt”, Alexandre D'Hooge (Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France), Louis Bigo (Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400 Talence, France) and Ken Déguernel (Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France)
3
“Subjective Evaluation of Roughness for Perceptual Audio Coding”, He Xie, Bruno Fazenda and Duncan Williams (University of Salford, UK)
4
“Adapting Beat Tracking Models for Salsa Music: Establishing a Baseline with a novel dataset”, Antonin Rapini and Anna Jordanous (University of Kent, UK)
5
“Efficient Optimisation Techniques for Large Generative Audio Models”, Bradley Aldous and Ahmed M. A. Sayed (Queen Mary University of London, UK)
6
“Towards Melodic Development with Discrete Diffusion Models for Symbolic Music”, Keshav Bhandari and Simon Colton (Queen Mary University of London, UK)
7
“Rethinking music representation learning for music and musicians”, Julien Guinot (Queen Mary University of London, UK), Eliot Quinton (Universal Music Group, UK) and George Fazekas (Queen Mary University of London, UK)
8
“Towards End-to-End Automatic Guitar Transcription via a Multimodal Approach”, Zixun (Nicolas) Guo and Simon Dixon (Queen Mary University of London, UK)
9
“Neuro-Symbolic Meta-Composition”, Adam Z. He (Queen Mary University of London and DAACI, UK), Doon MacDonald (DAACI, UK) and Geraint A. Wiggins (Vrije Universiteit Brussel, Belgium and Queen Mary University of London, UK)
10
“Improving music recommendation and representation using DJ mix tracklists”, Gregor Meehan and Johan Pauwels (Queen Mary University of London, UK)
11
“Self-Supervised Music Source-Separation using Vector-Quantized Source Category Estimates”, Marco Pasini (Queen Mary University of London, UK), Stefan Lattner (Sony CSL) and George Fazekas (Queen Mary University of London, UK)
12
“Limited-Data Incremental Learning in Music”, Christos Plachouras, Johan Pauwels and Emmanouil Benetos (Queen Mary University of London, UK)
13
“Timbre Tools for the Digital Instrument Maker”, Haokun Tian and Charalampos Saitis (Queen Mary University of London, UK)
14
“Music-Driven dance generation”, Qing Wang and Shanxin Yuan (Queen Mary University of London, UK)
15
“Using AI to Help Render Orchestral Scores to Expressive Mockups”, Yifan Xie and Mathieu Barthet (Queen Mary University of London, UK)
16
“Computational auditory scene analysis: what next?”, Farida Yusuf and Marcus Pearce (Queen Mary University of London, UK)
17
“Multimodal AI for musical collaboration in immersive environments”, Qiaoxi Zhang and Mathieu Barthet (Queen Mary University of London, UK)
18
“Generative Deep Learning for Explainable AI Music-Making: A Survey and Taxonomy”, Shuoyang Zheng (Queen Mary University of London, UK), Anna Xambó (De Montfort University, UK) and Nick Bryan-Kinns (University of the Arts London, UK)
Announcements
“Planning The 2nd Cadenza Challenge for Improving Music for People with a Hearing Loss”, Gerardo Roa Dabike (University of Salford, UK), Michael A. Akeroyd (University of Nottingham, UK), Scott Bannister (University of Leeds, UK), Jon Barker (University of Sheffield, UK), Trevor J. Cox (University of Salford, UK), Bruno Fazenda (University of Salford, UK), Jennifer Firth (University of Nottingham, UK), Simone Graetzer (University of Salford, UK), Alinka Greasley (University of Leeds, UK), Rebecca R. Vos (University of Salford, UK) and William M. Whitmer (University of Nottingham, UK)
“Timbre Tools Hackathon: Timbre Tools for the Digital Instrument Maker”, Charalampos Saitis, Haokun Tian, Jordie Shier and Bleiz Macsen Del Sette (Queen Mary University of London, UK).