
December 21, 2019

The Language and Speech Technologies and Applications Center (TALP) is participating in the AMALEU (A Machine-Learned Universal Language Representation) project. The aim of the project is to obtain a universal language representation based on machine learning: one for spoken language and one for written language.

Why is machine translation between English and Portuguese significantly better than machine translation between Dutch and Spanish? Why does speech recognition work better in German than in Finnish? In both cases, the main problem is the insufficient amount of labelled training data. Although the world is multimodal and highly multilingual, speech and language technology does not yet provide a satisfactory response for all languages. We need better learning methods that take advantage of the advances in some modalities and languages to benefit others.
The aim of AMALEU is to automatically learn a universal representation of language, whether conveyed by voice or by text, which can then be used in artificial intelligence applications for different languages. The project will draw on unlabelled data sources together with linguistic information, and focuses on the challenge of learning from few resources and on multilingual machine translation.
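The idea of a shared cross-lingual space can be illustrated with a minimal sketch. This is not the AMALEU model: it is a toy example, with made-up word vectors and a hypothetical joint vocabulary, showing how a single shared embedding table lets sentences from different languages map into one common vector space where translations land close together.

```python
import math

# Hypothetical joint vocabulary: words from different languages
# share a single embedding table (vectors invented for illustration).
embeddings = {
    "hello": [0.9, 0.1, 0.0, 0.2],
    "hola":  [0.8, 0.2, 0.1, 0.2],  # Spanish; placed near "hello" by design
    "world": [0.1, 0.9, 0.3, 0.0],
    "mundo": [0.2, 0.8, 0.3, 0.1],
}
DIM = 4

def encode(sentence):
    """Map a sentence into the shared space by averaging its word vectors."""
    vecs = [embeddings[w] for w in sentence.split() if w in embeddings]
    if not vecs:
        return [0.0] * DIM
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(a, b):
    """Cosine similarity between two vectors in the shared space."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

en = encode("hello world")
es = encode("hola mundo")
print(cosine(en, es))  # high similarity: the two translations lie close together
```

In a real system the embeddings would be learned from large unlabelled corpora rather than hand-set, but the payoff is the same: a model trained on one language can be applied, or adapted cheaply, to others that share the representation.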
AMALEU will have an impact on highly multidisciplinary communities of specialists in computer science, mathematics, engineering and linguistics who work on natural language understanding applications and on natural language and speech processing.
AMALEU is funded by the Spanish Ministry of Economy and Competitiveness (MINECO), as part of the Europe Excellence programme. The project lasts two years (January 2019 – December 2020).




