Semi-supervised Learning for Low-resource Multilingual and Multimodal Speech Processing (Sakriani Sakti)
Author: HiTZ zentroa
Uploaded: 2022-05-06
Views: 200
Title: Semi-supervised Learning for Low-resource Multilingual and Multimodal Speech Processing with Machine Speech Chain.
Summary: The development of advanced spoken language technologies based on automatic speech recognition (ASR) and text-to-speech synthesis (TTS) has enabled computers to learn how to either listen or speak. Many applications and services are now available, but they still support fewer than 100 languages; nearly 7000 living languages, spoken by 350 million people, remain uncovered. This is because such systems are commonly built with machine learning trained in a supervised fashion, which requires a large amount of paired speech and corresponding transcriptions. In this talk, we introduce a semi-supervised learning mechanism based on a machine speech chain framework. First, we describe the primary machine speech chain architecture, which learns not only to listen or speak but also to listen while speaking. The framework enables ASR and TTS to teach each other given unpaired data. After that, we describe the use of the machine speech chain for code-switching and cross-lingual ASR and TTS in several languages, including low-resourced ethnic languages. Finally, we describe the recent multimodal machine chain, which mimics overall human communication by listening while speaking and visualizing. With the support of image captioning and image production models, the framework enables ASR and TTS to improve their performance using an image-only dataset.
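To make the closed-loop idea in the abstract concrete, below is a minimal sketch of how ASR and TTS can teach each other from unpaired data: text-only batches are synthesized by TTS and must be recovered by ASR, while speech-only batches are pseudo-transcribed by ASR and must be reconstructed by TTS. The toy model classes, dimensions, frame-level alignment, and loss choices here are all illustrative assumptions for exposition, not the speakers' actual implementation.

```python
# Illustrative sketch of the machine speech chain's unpaired (dual-learning) step.
# All model/variable names and shapes are assumptions made for this example.
import torch
import torch.nn as nn

VOCAB, MEL_DIM, HID = 100, 80, 256

class ToyASR(nn.Module):
    """Maps a mel-spectrogram sequence to per-frame token logits (toy stand-in)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(MEL_DIM, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, mel):                 # mel: (B, T, MEL_DIM)
        h, _ = self.rnn(mel)
        return self.out(h)                  # (B, T, VOCAB)

class ToyTTS(nn.Module):
    """Maps a token sequence to a mel-spectrogram sequence (toy stand-in)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HID)
        self.rnn = nn.GRU(HID, HID, batch_first=True)
        self.out = nn.Linear(HID, MEL_DIM)

    def forward(self, tokens):              # tokens: (B, T)
        h, _ = self.rnn(self.emb(tokens))
        return self.out(h)                  # (B, T, MEL_DIM)

asr, tts = ToyASR(), ToyTTS()
opt = torch.optim.Adam(list(asr.parameters()) + list(tts.parameters()), lr=1e-3)

def unpaired_step(text_only, speech_only):
    """One closed-loop step: each model is supervised by the other's output."""
    # Text-only data: TTS synthesizes speech, ASR must recover the original text.
    # The synthesized mel is detached so this loss only updates ASR.
    synth_mel = tts(text_only).detach()
    asr_logits = asr(synth_mel)
    loss_text = nn.functional.cross_entropy(
        asr_logits.reshape(-1, VOCAB), text_only.reshape(-1))

    # Speech-only data: ASR produces pseudo-labels (argmax is non-differentiable,
    # so this loss only updates TTS), and TTS must reconstruct the input speech.
    pseudo_tokens = asr(speech_only).argmax(-1)
    recon_mel = tts(pseudo_tokens)
    loss_speech = nn.functional.l1_loss(recon_mel, speech_only)

    loss = loss_text + loss_speech
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random unpaired batches (assumes a 1:1 token-to-frame alignment
# purely for simplicity; real systems use attention-based alignment).
text_batch = torch.randint(0, VOCAB, (4, 20))
speech_batch = torch.randn(4, 50, MEL_DIM)
print(unpaired_step(text_batch, speech_batch))
```

In this sketch, the text-only loop improves ASR using speech it has never heard paired with a transcript, and the speech-only loop improves TTS using transcripts it was never given, which is the sense in which the two models "teach each other" on unpaired data.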