Special Sessions

In addition to traditional CMMR topics, we propose several special sessions. The aim of these sessions is to encourage more focused contributions.

To submit to a special session, select its title in the topics section of the standard paper submission page.

Singing information processing

organized by Tomoyasu Nakano (National Institute of Advanced Industrial Science and Technology (AIST))

The singing voice is one of the most important elements in music. Since the singing voice has both speech and music aspects, basic research on it is important from an academic perspective. The development of singing information processing technology is also important from an industrial perspective, as it benefits a wide variety of people, from music (singing) specialists to end users. This session welcomes a wide range of research on the singing voice, including but not limited to singing-related analysis, signal processing, machine learning, interactive systems, and visualization.

Music as/with pop-culture

organized by Ryosuke Yamanishi (Kansai University) and Yudai Tsujino (Meiji University)

Music is art, culture, and entertainment content, and it has long evolved alongside society and technology. In recent years especially, music has been incorporated into other entertainment content such as movies, anime, and games; such pop-culture content draws on the power of music to enhance its appeal. Music itself is also influenced by pop culture: singing, performance, and listening styles have changed with advances in media processing and telecommunication technologies. Against this background, this session welcomes research that treats music as pop culture and research that addresses music together with pop culture.

Music and Sound Generation: Emerging Approaches and Diverse Applications

organized by Taketo Akama (Sony Computer Science Laboratories, Inc.)

Recent advances in deep/machine learning-based music generative models have produced high-quality music/sound and opened up new applications, such as multimodal generation of music/sound from X, or of X from music/sound, where X can be an image, video, text, or lyrics. There is also a trend toward real-time, highly controllable, and context-driven music/sound generation. Deep/machine learning-based generative models can be combined with traditional approaches, such as music generative theories, grammatical modeling, signal processing, and physical modeling, providing opportunities for hybridization. We prioritize innovation and originality in music generation research and encourage authors to present new approaches and applications that may open new directions in the field, even if the work is incomplete.

Computational research on music evolution

organized by Eita Nakamura (Kyoto University)

This special session welcomes computational/quantitative research on music evolution. The availability of large-scale music data and computational music analysis techniques opens new possibilities and directions for studying music evolution quantitatively. We expect fruitful discussion on this multidisciplinary theme, with presentations from various perspectives. Possible topics include folk song evolution, musicality evolution, music style evolution, model-based study of the cultural evolution of music, and laboratory evolution of music.