Digital Music Research Network

DMRN+19: Digital Music Research Network One-day Workshop 2024

 

Queen Mary University of London

Tuesday 17th December 2024

 

News

 

Keynote speaker: Stefan Lattner (Research Leader at Sony CSL Paris)

Title: Models of Musical Signals: Representation, Learning & Generation

Abstract: Low-level audio representations and higher-level representation learning are at the heart of music analysis and synthesis. This talk will dive into previous work at Sony CSL on audio representations, covering different concepts and use cases: learning first- and second-order basis functions to obtain desired invariances, investigating the choice of low-level audio representations for generation, self-supervised learning of higher-order representations, and audio codecs. Finally, musical audio synthesis will be discussed, ranging from GANs to latent diffusion to recent advancements in continuous autoregressive models.

 

Bio: Stefan Lattner is a research leader on the music team at Sony CSL Paris, where he focuses on generative AI for music production, music information retrieval, and representation learning. He earned his PhD in 2019 from Johannes Kepler University (JKU) in Linz, Austria, following research at the Austrian Research Institute for Artificial Intelligence in Vienna and the Institute of Computational Perception in Linz. His studies centered on the modeling of musical structure, encompassing transformation learning and computational relative pitch perception. His current interests include human-computer interaction in music creation, live staging, and information theory in music. He specializes in latent diffusion, self-supervised learning, generative sequence models, computational short-term memories, and models of human perception.

 

DMRN+19 is sponsored by

The UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM), a leading PhD research programme in Music/Audio Technology and the Creative Industries, based at Queen Mary University of London.

Information on a new AIM CDT call for PhD positions will be on our website soon.

 

Location: Arts Two Theatre, QMUL Mile End Campus (in person)

 

Call for Contributions

The Digital Music Research Network (DMRN) aims to promote research in digital music by bringing together researchers from UK and overseas universities, as well as industry, for its annual workshop. The workshop will include invited and contributed talks and posters, and is an ideal opportunity for networking with others working in the area.

 

* Call for Contributions

You are invited to submit a proposal for a "talk" and/or a "poster" to be presented at this event.

TALKS may range from the latest research, through research overviews or surveys, to opinion pieces or position statements, particularly those likely to be of interest to an interdisciplinary audience. We plan to keep talks to about 15 minutes each, depending on the number of submissions. Short announcements about other items of interest (e.g. future events or other networks) are also welcome.

POSTERS can be on any research topic of interest to the members of the network.

The abstracts of presentations will be collated into a digest and distributed on the day.

 

* Submission

Please prepare your talk or poster proposal in the form of an abstract (1 page A4), using the 1-page Word template DMRN+19 [DOC 93KB] or the LaTeX template 2024 [404KB]. Submit it via email to dmrn@lists.eecs.qmul.ac.uk, giving the following information about your presentation:

  • Authors
  • Title
  • Preference for talk or poster (or "no preference")

 

* Deadlines

 

21 Nov 2024: Abstract submission deadline 

25 Nov 2024: Notification of acceptance 

15 Dec 2024: Registration deadline 

17 Dec 2024: DMRN+19 Workshop

 

Registration

The event will be in person and registration is mandatory for attendees. The registration fee is £25, which covers catering (coffee and lunch).

Registration link: eshop.qmul.ac.uk

 

Programme:

 

9:40

Registration - Coffee

10:00

Welcome - Simon Dixon

10:10

KEYNOTE

 

Models of Musical Signals: Representation, Learning & Generation, Stefan Lattner (Research Leader at Sony CSL Paris)

11:10

Coffee break

11:30

"PAGURI: a user experience study of creative interaction with text-to-music models", Francesca Ronchini, Luca Comanducci, Gabriele Perego and Fabio Antonacci (Politecnico di Milano, Italy)

11:50

“REBUS: Exploring the space between instrument and controller”, Eleonora Oreggia (Goldsmiths, University of London, UK)

12:10

"An Improvisation Analysis Method with Machine Learning for Embodied Rhythm Research", Evan O'Donnell (Goldsmiths, University of London, UK)

12:30

Creative Apps Hackathon Winners Presentation, Jean-Baptiste Thiebaut (Music Hackspace, UK) and György Fazekas (Queen Mary University of London, UK)

 

12:45

Lunch - Poster Session

 

14:15

"Analysis of MIDI as Input Representations for Guitar Synthesis", Jackson Loth, Pedro Sarmento, Saurjya Sarkar and Mathieu Barthet (Queen Mary University of London, UK)

14:35

"Emulating LA-2A Optical Compressor with a Feed-Forward Digital Compressor Using the Newton-Raphson Method", Chin-Yun Yu and György Fazekas (Queen Mary University of London, UK)

14:55

“AFX-Research: a repository and website of audio effects research”, Marco Comunità and Joshua D. Reiss (Queen Mary University of London, UK)

 

15:15

Coffee break

15:35

“Characterizing Jazz Improvisation Style Through Explainable Performer Identification Models”, Huw Cheston, Reuben Bance, Peter Harrison (Centre for Music & Science, Cambridge, UK)

15:55

"Framework for Predicting Eurovision Song Contest Results," Katarzyna Adamska and Joshua D. Reiss (Queen Mary University of London, UK)

16:15

"MINDS: Mutual Inclusion through Neurodiversity in Science", Daniel Gill (Queen Mary University of London, UK)

16:35

Close – Emmanouil Benetos

 

 

*There will be an opportunity to continue discussions after the workshop in a nearby pub/restaurant for those staying in London.

 

Posters

 

1

“Improving Automatic Guitar Tablature Transcription with LLMs”, Omar Ahmed (University of Oxford and Queen Mary University of London, UK), Pedro Sarmento and Emmanouil Benetos (Queen Mary University of London, UK)

2

“Towards Detecting Interleaved Voices in Telemann Flute Fantasias”, Patrice Thibaud, Mathieu Giraud and Yann Teytaut (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France)

3

“Embodied Movement-Sound Interaction in Latent Terrain Synthesis”, Shuoyang Zheng, Anna Xambó Sedó (Queen Mary University of London, UK) and Nick Bryan-Kinns (University of the Arts London, UK)

4

“Towards Differentiable Digital Waveguide Synthesis”, Pablo Tablas de Paula and Joshua D. Reiss (Queen Mary University of London, UK)

5

“Classification of Spontaneous and Scripted Speech for Multilingual Audio”, Shahar Elisha (Spotify Ltd, Queen Mary University of London, UK), Andrew McDowell, Mariano Beguerisse-Díaz, (Spotify Ltd, UK) and Emmanouil Benetos (Queen Mary University of London, UK)

6

“Personalising equalisation using psychological and contextual factors”, Yorgos Velissaridis, Charalampos Saitis and György Fazekas (Queen Mary University of London, UK)

7

“Can downbeat trackers predict hypermetre?” Jose Alejandro Esquivel de Jesus and Jordan B. L. Smith (Queen Mary University of London, UK)

8

“A Transposition-Invariant Chord Encoder for Bigram Modelling” Yuqiang Li and György Fazekas (Queen Mary University of London, UK)

9

“Multimodal techniques for the control of procedural audio”, Xavier Marcello D'Cruz and Joshua D. Reiss (Queen Mary University of London, UK)

10

“Towards differentiable modular approaches for dynamic sound synthesis: A case study in vehicular sound effects”, Minhui Lu and Joshua D. Reiss (Queen Mary University of London, UK)

11

“Procedural Music Generation for games”, Shangxuan Luo and Joshua D. Reiss (Queen Mary University of London, UK)

12

“Advancing Expressive Performance Rendering in Pop Music Using Computational Models”, Jinwen Zhou and Aidan Hogg (Queen Mary University of London, UK)

 

 
