Tentative program as of September 9
Scientific Program
This tentative program is subject to change.
Titles and author lists for some papers may be based on older records and will be updated soon.
13th (Mon)
14:00-15:00 Opening & Sponsor Talks
15:00-16:00 Session 1a: Creative Music Systems
Takuya Takahashi, Shigeki Sagayama and Toru Nakashika
Controllable Automatic Melody Composition Model across Pitch/Stress-accent Languages
David Rizo, Jorge Calvo-Zaragoza, Juan Carlos Martínez Sevilla, Adrián Roselló and Eliseo Fuentes-Martínez
Design of a music recognition, encoding, and transcription online tool
Ryota Mibayashi, Takehiro Yamamoto, Kosetsu Tsukuda, Kento Watanabe, Tomoyasu Nakano, Masataka Goto and Hiroaki Ohshima
Verse Generation by Reverse Generation Considering Rhyme and Answer in Japanese Rap Battles
16:20-17:20 Session 1b: Cognitive Science for Music
Max Graf and Mathieu Barthet
Combining Vision and EMG-Based Hand Tracking for Extended Reality Musical Instruments
Timothy Schmele, Eleonora De Filippi, Arijit Nandi, Alexandre Pereda Baños and Adan Garriga
Emotional Impact of Source Localization in Music Using Machine Learning and EEG: a proof-of-concept study
Geetika Arora, Keyur Choudhari, Ponnurangam Kumaraguru and Vinoo Alluri
From Sunrise to Sunset: Investigating Diurnal Rhythmic Patterns in Music Listening Habits in India
14th (Tue)
9:40-10:40 Session 2a: [Special session] Music and Sound Generation: Emerging Approaches and Diverse Applications 1
Yoshitaka Tomiyama, Tetsuro Kitahara, Taro Masuda, Koki Kitaya, Yuya Matsumura, Ayari Takezawa, Tsuyoshi Odaira and Kanako Baba
Benzaiten: A Non-expert-friendly Event of Automatic Melody Generation Contest
Yuqiang Li, Shengchen Li and George Fazekas
Pitch Class and Octave-Based Pitch Embedding Training Strategies for Symbolic Music Generation
Juan Huerta, Bo Liu and Peter Stone
VaryNote: A Method to Automatically Vary the Number of Notes in Symbolic Music
11:00-12:00 Session 2b: [Special session] Music and Sound Generation: Emerging Approaches and Diverse Applications 2
Pedro Sarmento, Adarsh Kumar, Dekun Xie, Cj Carr, Zack Zukowski and Mathieu Barthet
ShredGP: Guitarist Style-Conditioned Tablature Generation
Jackson Loth, Pedro Sarmento, Cj Carr, Zack Zukowski and Mathieu Barthet
ProgGP: From GuitarPro Tablature Neural Generation To Progressive Metal Production
Jingjing Tang, Geraint Wiggins and George Fazekas
Reconstructing Human Expressiveness in Piano Performances with a Transformer Network
13:20-14:40 Session 2c: [Poster & Demo]
[Poster]
[P1] Thomas Ottolin, Raghavasimhan Sankaranarayanan, Qinying Lei, Nitin Hugar and Gil Weinberg
Balancing Musical Co-Creativity: The Case Study of Mixboard, a Mashup Application for Novices
[P2] Tatsunori Hirai
A Melody Input Support Interface by Presenting Subsequent Candidates based on a Connection Cost
[P3] Junya Koguchi and Masanori Morise
Phoneme-inspired playing technique representation and its alignment method for electric bass database
[P4] Tomoo Kouzai and Tetsuro Kitahara
An Audio-to-Audio Approach to Generate Bass Lines from Guitar’s Chord Backing
[P5] Eunjin Choi, Hyerin Kim, Juhan Nam and Dasaem Jeong
ExpertBach: Chorale Generation Powered by Domain Knowledge
[P7] Hyon Kim and Xavier Serra
DiffVel: Note-Level MIDI Velocity Estimation for Piano Performance by A Double Conditioned Diffusion Model
[P8] Emmanouil Karystinaios, Francesco Foscarin, Florent Jacquemard, Masahiko Sakai, Satoshi Tojo and Gerhard Widmer
8+8=4: Formalizing Time Units to Handle Symbolic Music Durations
[P9] Rikard Lindell and Henrik Frisk
The Unfinder: Finding and reminding in electronic music
[Demo]
[D1] Kaito Abiki, Saizo Aoyagi, Akira Hattori, Ken Honda and Tatsunori Hirai
AR-based Guitar Strumming Learning Support System that Provides Audio Feedback by Hand Tracking
[D2] Yasumasa Yamauguchi, Taku Kawada, Toru Nagahama and Tatsuya Horita
The Demonstration of MVP Support System as an AR Realtime Pitch Feedback System
[D3] Hinata Segawa, Shunsuke Sakai and Tetsuro Kitahara
Melody Reduction for Beginners’ Guitar Practice
[D4] Nami Iino, Hiroya Miura, Hideaki Takeda, Masatoshi Hamanaka and Takuichi Nishimura
Structural Analysis of Utterances during Guitar Instruction
[D5] Ji Won Yoon and Woon Seung Yeo
Music in the Air: Creating Music from Practically Inaudible Ambient Sound
[D6] Patricia Alessandrini, Constantin Basica and Prateek Verma
Creating an interactive and accessible remote performance system with the Piano Machine
[D7] Daniel Hernan Molina Villota, Christophe d'Alessandro, Gregoire Locqueville and Thomas Lucas
A Singing Toolkit: Gestural Control of Voice Synthesis, Voice Samples and Live Voice
[D8] Masaki Okuta and Tetsuro Kitahara
Sonifying Players’ Positional Relation in Football
[D9] Gabriel Zalles Ballivian
Talking with Fish: an OpenCV Musical Installation
15:00-16:00 Session 2d: [Keynote]
Yi-Hsuan Yang
Deep Learning-based Automatic Music Generation
16:20-17:00 Session 2e: Computational musicology 1
Christofer Julio, Feng-Hsu Lee and Li Su
Interpretable Rule Learning and Evaluation of Early Twentieth-century Music Styles
Yu-Fen Huang and Li Su
Toward empirical analysis for stylistic expression in piano performance
15th (Wed)
9:20-10:40 Session 3a: [Special session] Music and Sound Generation: Emerging Approaches and Diverse Applications 3
Eleanor Row, Jingjing Tang and George Fazekas
JAZZVAR: A Dataset of Variations found within Solo Piano Performances of Jazz Standards for Music Overpainting
Marco Amerotti, Steve Benford, Bob Sturm and Craig Vear
A Live Performance Rule System arising from Irish Traditional Dance Music
Damian Dziwis
VERSNIZ – Audiovisual Worldbuilding through Live Coding as a Performance Practice in the Metaverse
Gregory Beller, Jacob Sello, Georg Hajdu and Thomas Görne
Spatial Sampling in Mixed Reality – Ten Years of Research and Creation
11:00-12:00 Session 3b: HCI in Music
Hans Kretz
Networked performance as a space for collective creation and student engagement
Rory Hoy and Doug Van Nort
eLabOrate(D): An Exploration of Human/Machine Collaboration in a Telematic Deep Listening Context
Matthias Nowakowski and Aristotelis Hadjakos
Simulating Interaction Time in Music Notation Editors
Session 3c: [Poster & Demo]
[Poster]
[P1] Pedro Lucas and Kyrre Glette
Human-Swarm Interactive Music Systems: Design, Algorithms, Technologies, and Evaluation
[P2] Sora Miyaguchi, Naotoshi Osaka and Yusuke Ikeda
Improving Instrumentality of Sound Collage Using CNMF Constraint Model
[P3] Tatsunori Hirai
Quantum Circuit Design using Genetic Algorithm for Melody Generation with Quantum Computing
[P4] Matthew McCloskey, Gabrielle Curcio, Amulya Badineni, Kevin McGrath, Georgios Papamichail and Dimitris Papamichail
Automated Arrangements of Multi-Part Music for Sets of Monophonic Instruments
[P5] Takuto Nabeoka, Eita Nakamura and Kazuyoshi Yoshii
Automatic Orchestration of Piano Scores for Wind Bands with User-Specified Instrumentation
[P6] Yasumasa Yamauguchi, Taku Kawada, Toru Nagahama and Tatsuya Horita
A quantitative evaluation of a musical performance support system utilizing a musical sophistication test battery
[P7] Mastuti Puspitasari, Takuya Takahashi, Gen Hori, Shigeki Sagayama and Toru Nakashika
SBERT-based Musical Components Estimation from Lyrics Trained with Imbalanced Orpheus Data
[P8] Tom Baker, Ricardo Climent and Ke Chen
PolyDDSP: A Lightweight and Polyphonic Differentiable Digital Signal Processing Library
[P9] João Das Neves, Pedro Martins, Fernando Amílcar Cardoso, Jônatas Manzolli, Mariana Seiça and Mário Zenha Rela
Soundscape4dei as a model for multilayered sonifications
[Demo]
[D1] Marcelo Caetano and Richard Kronland-Martinet
The Sound Morphing Toolbox: Musical Instrument Sound Modeling and Transformation Techniques
[D2] Mizuki Kawahara, Tomoo Kouzai and Tetsuro Kitahara
Morphing of Drum Loop Sound Sources Using CNN-VAE
[D3] Shunsuke Sakai, Hinata Segawa and Tetsuro Kitahara
Generating Tablature of Polyphony Consisting of Melody and Bass Line
[D4] Takanori Horibe and Masanori Morise
Development of an easily-usable smartphone application for recording instrumental sounds
[D5] Arturo Alejandro Arzamendia Lopez, Akinori Ito and Koji Mikami
Research on Music Generation by Deep Learning Including Ornaments – A Case Study of World Harp Instruments
[D6] Noriko Otani, So Hirawata and Daisuke Okabe
Automatic Music Composition System to Enjoy Brewing Delicious Coffee
[D7] Tolly Collins and Mathieu Barthet
Expressor: A Transformer Model for Expressive MIDI Performance
[D8] Kit Armstrong, Ji-Xuan Huang, Tzu-Ching Hung, Jing-Heng Huang and Yi-Wen Liu
Real-Time Piano Accompaniment Using Kuramoto Model for Human-Like Synchronization
[D9] Mitsuko Aramaki, Corentin Bernard, Richard Kronland-Martinet, Samuel Poirot and Sølvi Ystad
Intuitive control of scraping and rubbing through audio tactile synthesis
Session 3d: [Keynote]
Shigeki Sagayama
17 Years with Automatic Music Composition System “Orpheus”
Session 3e: [Special session] Singing information processing
Antonia Stadler, Emilia Parada-Cabaleiro and Markus Schedl
Towards Potential Applications of Machine Learning in Computer-Assisted Vocal Training
Tung-Cheng Su, Yung-Chuan Chang and Yi-Wen Liu
Effects of Convolutional Autoencoder Bottleneck Width on StarGAN-based Singing Technique Conversion
16th (Thu)
Session 4a: [Special session] Computational research on music evolution
Eita Nakamura, Tim Eipert and Fabian C. Moss
Historical Changes of Modes and their Substructure Modeled as Pitch Distributions in Plainchant from the 1100s to the 1500s
Eita Nakamura
Computational Analysis of Selection and Mutation Probabilities in the Evolution of Chord Progressions
Marco Buongiorno Nardelli
A network approach to harmonic evolution and complexity in western classical music
Halla Kim and Juyong Park
Analyzing Voicing Novelty in Classical Piano Music
Dongju Park and Juyong Park
Bipartite network analysis of the stylistic evolution of sample-based music
Session 4b: Audio Signal Processing
Jeremy Hyrkas
Algorithms for Roughness Control Using Frequency Shifting and Attenuation of Partials in Audio
António Sá Pinto and Gilberto Bernardes
Bridging the Rhythmic Gap: A User-Centric Approach to Beat Tracking in Challenging Music Signals
Session 4c: [Poster & Demo]
[Poster]
[P1] So Hirawata, Noriko Otani, Daisuke Okabe and Masayuki Numao
Creating a New Lullaby Using an Automatic Music Composition System in Collaboration with a Musician
[P2] Madoka Goto, Masahiko Sakai and Satoshi Tojo
Automatic Phrasing System for Expressive Performance Based on The Generative Theory of Tonal Music
[P3] Sai Oshita and Tetsuro Kitahara
NUFluteDB: Flute Sound Dataset with Appropriate and Inappropriate Blowing Styles
[P4] Stefano Kalonaris and Omer Gold
Melody Blending: A Review and an Experiment
[P5] Rina Kagawa, Nami Iino, Hideaki Takeda and Masaki Matsubara
Effective Textual Feedback in Musical Performance Education: A Quantitative Analysis Across Oboe, Piano, and Guitar
[P6] Riku Takahashi, Risa Izu, Yoshinari Takegawa and Keiji Hirata
Global Prediction of Time-span Tree by Fill-in-the-blank Task
[P7] Emilia Parada-Cabaleiro, Anton Batliner, Maximilian Schmitt, Björn Schuller and Markus Schedl
Music Emotions in Solo Piano: Bridging the gap between Human Perception and Machine Learning
[P8] Ève Poudrier, Bryan Jacob Bell and Craig Stuart Sapp
Listeners’ Perceived Emotions in Human vs. Synthetic Performance of Rhythmically Complex Musical Excerpts
[Demo]
[D1] Cory McKay
From jSymbolic 2 to 3: More Musical Features
[D2] Daniel Hernan Molina Villota and Christophe d'Alessandro
Comparing vocoders for automatic vocal tuning
[D3] David Rizo, Jorge Calvo-Zaragoza, Adrián Roselló, Eliseo Fuentes-Martínez and Juan Carlos Martínez-Sevilla
Music recognition, encoding, and transcription (MuRET) online tool demonstration
[D4] Tatsunori Hirai, Lamo Nagasaka and Takuya Kato
Microtonal Music Dataset v1
[D5] Shoyu Shinjo and Aiko Uemura
Lighting Control based on Colors Associated with Lyrics at Bar Positions
[D6] Masatoshi Hamanaka
Melody Changing Interfaces for Melodic Morphing
[D7] Risa Izu, Yoshinari Takegawa and Keiji Hirata
Relative Representation of Time-Span Tree
[D8] Megha Sharma and Yoshimasa Tsuruoka
Zero-Shot Music Retrieval For Japanese Manga
[D9] Justin Tomoya Wulf and Tetsuro Kitahara
Visualizing Musical Structure of House Music
Session 4d: [Keynote]
Tatsuya Daikoku
Exploring the Neural and Computational Basis of Statistical Learning in the Brain to Unravel Musical Creativity and Cognitive Individuality
Session 4e: Computational Musicology 2
Masatoshi Hamanaka, Keiji Hirata and Satoshi Tojo
deepGTTM-IV: Deep Learning Based Time-span Tree Analyzer of GTTM
Matteo Bizzarri
Music and Logic: a connection between two worlds
17th (Fri)
Session 5a: Music Information Retrieval
Tiange Zhu, Danny Diamond, James McDermott, Raphaël Fournier-S’Niehotta, Mathieu Daquin and Philippe Rigaux
A Novel Local Alignment-Based Approach to Motif Extraction in Polyphonic Music
Ryusei Hayashi and Tetsuro Kitahara
Predicting Audio Features of Background Music from Game Scenes
Tomoyasu Nakano, Momoka Sasaki, Mayuko Kishi, Masahiro Hamasaki, Masataka Goto and Yoshinori Hijikata
A Music Exploration Interface Based on Vocal Timbre and Pitch in Popular Music
Le Cai, Sam Ferguson, Hani Alshamran and Gengfa Fang
Exploring Diverse Sounds: Identifying Outliers in a Music Corpus
Session 5b: Music Tools/Datasets
Ayane Sasaki, Mio Matsuura, Masaki Matsubara, Yoshinari Takegawa and Keiji Hirata
Exploring Patterns of Skill Gain and Loss on Long-term Training and Non-training in Rhythm Game
Chandan Misra and Swarup Chattopadhyay
SANGEET: An XML-based Open Dataset for Research in Hindustani Sangeet