AI and Music Festival (S+T+ARTS): musical creativity with AI


30/1/2026

The Image and Video Processing Group (GPI), part of the IDEAI-UPC research group, and the Digital Culture and Creative Technologies Research Group (DiCode) from the Image Processing and Multimedia Technology Center (CITM) at the Universitat Politècnica de Catalunya – BarcelonaTech (UPC), have co-organised the AI and Music Festival (S+T+ARTS) together with Sónar+D and Betevé, to explore the creative use of artificial intelligence in music.


The emergence of artificial intelligence (AI) is redefining how music is created, produced and consumed, and poses new challenges for industry, creativity and the arts. In a context of accelerated change, it is essential to understand how AI —neural networks, emotion analysis, computer vision, motion capture…— can be applied both to creative processes and to ways of bringing music to wider audiences, as well as to explore its potential as a transformative agent for the sector.

These challenges were addressed by the AI and Music Festival (S+T+ARTS), held on 27–28 October 2021 at the CCCB in Barcelona. The Festival brought together engineers, scientists and creatives (musicians, DJs, visual artists and choreographers) from around the world and across multiple genres, who premiered their works while experimenting live with AI. Through performances, workshops and co-creation spaces, talks and debates, and two hackathons, the festival demonstrated the transformative potential of machine learning applied to live music and opened up new pathways for innovation in the cultural, educational and audiovisual industries.

The UPC team formed by the GPI (IDEAI-UPC) and the CITM, with expertise in image and video processing, computer vision, data analysis and creative technologies, contributed to developing systems capable of capturing movements, gestures and visual patterns, and transforming them into musical or visual parameters in real time, expanding artists’ expressive possibilities.

The initiative created a collaborative experimental framework to develop performances and prototypes integrating real-time AI. Machine learning technologies, neural networks for sound generation or transformation, emotional analysis models, computer vision systems and motion capture techniques were applied. The main innovation was the seamless integration between humans and AI systems, exploring innovative communication channels —visual, gestural and auditory— in live performances.

Of the 21 shows programmed within the Festival, most of them world premieres, several stand out. Some of these were developed by co-creation teams bringing together artists and UPC AI researchers:

  • The Festival’s opening concert at the Auditori de Barcelona, by pianist and composer Marco Mezquida, who conceived his performance ‘Piano + AI’ together with GPI researchers and in collaboration with the Escola Superior de Música de Catalunya (ESMUC): a live conversation between piano and AI, using real-time audio analysis and digital sound synthesis. On stage, the solo pianist experiments while interacting with an algorithm created at the UPC and trained to recognise the instrument’s timbres.
  • The audiovisual performance ‘Engendered Otherness. A symbiotic AI dance ensemble’, by Hamill Industries and choreographer Kiani del Valle, created in collaboration with songwriter Stefano Rosso. In this show, UPC researchers applied AI to the world of dance: the performance uses computer vision and motion capture to transform a choreography performed in real time by Kiani del Valle and her synthetic dancers.
  • The interactive show by DJ and producer Awwz, with audience participation, titled ‘Awwz b2b AI DJ’, involving IDEAI researchers. The performance brings machine learning into the world of DJing and invites the audience to collaborate directly: through AI technology trained on music databases, emojis or short texts capturing moods are translated into song suggestions. The set was shared with an artificial DJ that also adapted the tracks being played based on comments from social networks.
  • As a novelty, the stage in the Pati de les Dones at the CCCB featured a DJ set entitled ‘AI Rave’, created by a neural network that generated entirely new music from the archive of programmes from the past five years of Dublab Barcelona radio. This proposal was designed by Berlin-based artists Dadabots and Hexorcismos.

Also in a co-creation format, the research and art project by Holly Herndon, a pioneer in experimentation with voice and AI, together with María Arnal and Tarta Relena, stood out. This unique vocal assemblage created a human and virtual polyphony presented exclusively at the Festival.

Alliances between science, technology and the arts

The European Commission selected the consortium formed by Sónar, UPC and Betevé to organise the AI and Music Festival within the S+T+ARTS (Science, Technology & the Arts) programme, a European initiative to foster alliances between science, technology and the arts. The AI and Music Festival is sponsored by everis, part of the NTT DATA group.

Organisations from different European countries participated in developing the Festival’s content: Artificia, the Barcelona Symphony Orchestra and National Orchestra of Catalonia (OBC), IQuantum Music, Polifonia, L’Auditori, ESMUC, Factory Berlin, c/o Pop, Bozar and Today’s Art.

Budget and funding

AI & Music was funded with a total budget of €265,000 and ran for one year (March 2021 – March 2022).

