The Third Summit on Music Intelligence opened on Sunday at the Central Conservatory of Music in Beijing, bringing together leading experts, scholars, and industry representatives from various fields including music artificial intelligence, computer music, music neuroscience, digital arts, performance science, music education, and industry applications.
The summit focuses on how technology, especially AI, is reshaping music creation, distribution, education and the wider industry ecosystem.
In his opening address, Dai Qionghai, chairman of the Chinese Association for Artificial Intelligence, emphasized how AI is profoundly reshaping music's creation, transmission, and educational frameworks.
He highlighted that music AI, as a critical domain merging art and technology, is paving the way for significant developments and breakthroughs. Dai expressed hope that the summit would deepen academic collaboration, drive innovation and expand real-world applications.
Ke Yang, vice-president of the conservatory, reaffirmed the institution's commitment to integrating music and technology. He noted ongoing advances in areas such as intelligent composition, music neuroscience and the digitalization of music education.
The summit opened with keynote speeches, including a presentation by Guan Xiaohong, an academician of the Chinese Academy of Sciences, titled "Advances in Quantifying Music Intelligence and Cognition".
Guan explored the mathematical and physical patterns in musical structures and their links to broader systems in nature, engineering, and society. His work suggests that quantifying music intelligence can reveal underlying patterns while offering new approaches to cognition and AI-driven composition.
Another highlight was a pre-recorded keynote from Chris Chafe, director of Stanford University's Center for Computer Research in Music and Acoustics. In his presentation titled "Listening to Data: Crafting Custom Computer Music Network Applications for Music and Science via Data Sonification", Chafe introduced sonification — the process of translating data into sound and music. He explained how "musical listening" can help individuals detect patterns and trends in data. Chafe's talk also demonstrated how web-based applications can share real-time sonification results and how large language models can assist in programming workflows, fostering new interdisciplinary creative paths for both scientists and musicians.
Georg Hajdu, a distinguished German composer, multimedia artist, and educator, delivered a keynote speech titled "From Concert Hall to Social Space: Recontextualizing Contemporary Music Through Technology". He highlighted innovations such as distributed performance systems, generative music and adaptive sound environments that are reshaping audience engagement.
Li Xiaobing, professor and chair of the department of music artificial intelligence at the Central Conservatory of Music, introduced the idea of "machine humanism". He argued that AI is fundamentally changing artistic authorship, meaning-making and music education. Drawing on the conservatory's research, Li outlined a framework for how music institutions may evolve in response to AI.
Participants agreed that AI is no longer just a technical tool but a force driving deep change across artistic practice, academic structures and industry models. They called for stronger collaboration across disciplines — including music, AI, neuroscience, engineering and cultural industries — to accelerate real-world applications.
As the summit continues, attendees hope it will spark new breakthroughs and redefine how music is created, experienced and understood in the digital era.