
December 21, 2019

The Language and Speech Technologies and Applications Center (TALP) is participating in the AMALEU (A Machine-Learned Universal Language Representation) project. The aim of the project is to obtain a universal language representation based on machine learning: one for spoken language and one for written language.

Why is automatic translation between English and Portuguese significantly better than automatic translation between Dutch and Spanish? Why does voice recognition work better in German than in Finnish? In both cases, the main problem is the insufficient amount of labelled data for training. Although the world is multimodal and highly multilingual, speech and language technologies do not provide a satisfactory response for all languages. We need better learning methods that take advantage of the advances in some modalities and languages to benefit others.
The aim of AMALEU is to automatically learn a universal representation of language, whether spoken or written, that can be used in artificial intelligence applications across different languages. The project will draw on unlabelled data sources and linguistic knowledge, focusing on the challenge of learning from few resources and on a multilingual approach to machine translation.
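The idea of a shared, language-agnostic representation can be illustrated with a toy sketch. This is purely an assumption-laden illustration, not the actual AMALEU method: it mimics how multilingual encoders share one embedding table across languages, so sentences from any language are mapped into the same vector space and can be compared there.

```python
# Toy sketch of a shared ("universal") sentence representation space.
# Illustrative only: the hashing trick stands in for a learned, shared
# multilingual embedding table; AMALEU's actual models are learned, not hashed.
import hashlib
import math

DIM = 16  # dimensionality of the toy embedding space


def token_embedding(token: str) -> list[float]:
    """Deterministic pseudo-embedding derived from a hash of the token.

    The same table is used for every language, so identical tokens
    (e.g. shared proper nouns) get identical vectors.
    """
    digest = hashlib.sha256(token.encode("utf-8")).digest()
    return [(b - 128) / 128.0 for b in digest[:DIM]]


def encode(sentence: str) -> list[float]:
    """Language-agnostic encoding: average the shared token embeddings."""
    vecs = [token_embedding(t) for t in sentence.lower().split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]


def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two vectors in the shared space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


# Sentences in different languages land in the same space and can be
# compared directly, with no language-specific components.
en = encode("we live in barcelona")
es = encode("vivimos en barcelona")
similarity = cosine(en, es)
```

In a real system the shared embeddings (and the encoder on top of them) are trained from data, which is what lets supervision in high-resource languages transfer to low-resource ones.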
AMALEU will have an impact on highly multidisciplinary communities of specialists in computer science, mathematics, engineering and linguistics who work on natural language understanding, natural language processing and speech processing applications.
AMALEU is funded by the Spanish Ministry of Economy and Competitiveness (MINECO), as part of the Europe Excellence programme. The project lasts two years (January 2019 – December 2020).
Related Projects
- The AgroTech research group at UPC, in collaboration with its spin-off Ugiat Technologies, has developed uPlayer, a new multimedia player concept that enables more intuitive video navigation and viewing. It intelligently enhances the user experience, especially on YouTube and other platforms, by integrating as a plugin or advanced player.
- The AgroTech research group at the Universitat Politècnica de Catalunya – BarcelonaTech (UPC), together with its spin-off Ugiat Technologies, has developed DoblAI, an AI platform that integrates transcription, translation, subtitling and video dubbing into a single workflow. The solution, which uses deep learning technology and cloned or default voice models, is specifically designed for the journalism and communications sector.
- The Image and Video Processing Group (GPI), part of the IDEAI-UPC research group, and the Digital Culture and Creative Technologies Research Group (DiCode) from the Image Processing and Multimedia Technology Center (CITM) at the Universitat Politècnica de Catalunya – BarcelonaTech (UPC), have co-organised the AI and Music Festival (S+T+ARTS) together with Sónar+D and Betevé, to explore the creative use of artificial intelligence in music.
- The Visualisation, Virtual Reality and Graphic Interaction Research Group (ViRVIG) at the Universitat Politècnica de Catalunya - BarcelonaTech (UPC) has participated in the XR4ED project, an initiative that connects the educational technology (EdTech) and Extended Reality (XR) sectors, with the aim of transforming learning and training across Europe.




