Speech Technology
New model from AssemblyAI. Definitely improved over the previous release, but still not as strong as Speechmatics.

On a toy test it scores 10.89 WER; the previous AssemblyAI release (version 9) was at 11.04, and the one before that at 11.89. For comparison: Speechmatics 6.88, Whisper large 8.94.
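For anyone new to the metric: WER is the word-level edit distance (substitutions + deletions + insertions) divided by the reference length, usually reported as a percentage. A minimal self-contained sketch (toy sentences, not the actual benchmark data):

```python
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# Two deletions ("on", "the") against a 6-word reference -> 2/6.
print(wer("the cat sat on the mat", "the cat sat mat"))
```

Production evaluations typically also apply text normalization (casing, punctuation, number formats) before scoring, which this sketch omits.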

https://twitter.com/AssemblyAI/status/1636050346240884744

Introducing Conformer-1: our latest state-of-the-art speech recognition model.

Built on top of the Conformer architecture and trained on 650K hours of audio data, it achieves near-human-level performance, making up to 43% fewer errors on noisy data than other ASR models.

We use a modified version of the conformer neural net published by Google Brain.

It's built on top of the Efficient Conformer (Orange Labs, 2021), which introduces the following technical modifications:

- Progressive Downsampling to reduce the length of the encoded sequence
- Grouped Attention: a modified version of the attention mechanism that makes it agnostic to sequence length

These changes yield speedups of 29% at inference time and 36% at training time.
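A minimal sketch of the progressive-downsampling idea, illustrative only: simple strided average pooling stands in for the strided convolutions real Efficient Conformer encoders use, and the shapes are assumptions, but it shows why later encoder stages get cheaper (attention cost shrinks with sequence length):

```python
import numpy as np

def downsample(x: np.ndarray, stride: int = 2) -> np.ndarray:
    """Reduce the time dimension of a (time, features) array by `stride`."""
    t = (x.shape[0] // stride) * stride          # drop any ragged tail frames
    return x[:t].reshape(-1, stride, x.shape[1]).mean(axis=1)

seq = np.random.randn(100, 80)                   # 100 frames, 80 features
stage1 = downsample(seq)                         # 50 frames after stage 1
stage2 = downsample(stage1)                      # 25 frames after stage 2
print(stage1.shape, stage2.shape)
```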

To further improve our model’s accuracy on noisy audio, we implemented a modified version of Sparse Attention, a pruning method for achieving sparsity of the model’s weights in order to achieve regularization.
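The post does not detail AssemblyAI's exact Sparse Attention variant, so as a general illustration of the underlying idea (zeroing out weights to obtain sparsity as a regularizer), here is a hedged magnitude-pruning sketch; the function name and the 90% sparsity level are assumptions:

```python
import numpy as np

def prune_by_magnitude(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value over the flattened array is the cutoff.
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.random.randn(64, 64)
w_sparse = prune_by_magnitude(w, sparsity=0.9)
print((w_sparse == 0).mean())  # roughly 0.9
```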

We took inspiration from the data scaling laws described in DeepMind's Chinchilla paper and adapted them to the ASR domain.

Our team curated a dataset of 650K hours of English audio, making our model the supervised English ASR model trained on the most audio available today.

Based on our results, Conformer-1 is more robust on real-world data than popular commercial and open-source ASR models, making up to 43% fewer errors on average on noisy data:

The biggest improvement with this new release is in our robustness to a wide variety of data domains and noisy audio.
Kincaid46 WER from the Ursa announcement:

AssemblyAI: 8.6
Speechmatics: 7.88
Microsoft: 9.70
Whisper Large-v2: 8.7
Vosk 0.42 (Gigaspeech): 15.8
Google: 12.52
Amazon: 10.94
The number of models this author has trained is quite outstanding

https://malaya-speech.readthedocs.io/en/latest/index.html
12th ISCA Speech Synthesis Workshop (SSW) is now open for submissions!
Final submission deadline: May 3, 2023
Late breaking reports submission deadline: June 28, 2023

The Speech Synthesis Workshop will be held in Grenoble, France, and is organized as a satellite event of the Interspeech conference in Dublin, Ireland.
Come and join the SSW community and the people who create machines that talk!

Visit the official site for more information
https://ssw2023.org/
Forwarded from Machinelearning
WavCaps: A ChatGPT-Assisted Weakly-Labelled Audio Captioning Dataset for Audio-Language Multimodal Research

A three-stage processing pipeline is proposed for filtering noisy data and generating high-quality captions with the assistance of ChatGPT.

🖥 Github: https://github.com/xinhaomei/wavcaps

Paper: https://arxiv.org/abs/2303.17395v1

💨 Dataset: https://paperswithcode.com/dataset/sounddescs

ai_machinelearning_big_data
https://www.openslr.org/136/

EMNS
Identifier: SLR136

Summary: An emotive single-speaker dataset for narrative storytelling. EMNS is a dataset containing transcriptions, emotion, emotion intensity, and descriptions of acted speech.

Category: Speech, text-to-speech, automatic speech recognition

License: Apache 2.0
About this resource:

The Emotive Narrative Storytelling (EMNS) corpus introduces a dataset consisting of a single speaker's British English speech, with high-quality labelled utterances tailored to drive interactive experiences with dynamic and expressive language. Each audio-text pair is reviewed for artefacts and quality. Furthermore, critical features are extracted using natural language descriptions, including word emphasis, level of expressiveness, and emotion.

EMNS data collection tool: https://github.com/knoriy/EMNS-DCT

EMNS cleaner: https://github.com/knoriy/EMNS-cleaner
https://groups.inf.ed.ac.uk/edacc/

The Edinburgh International Accents of English Corpus: Towards the Democratization of English ASR. Ramon Sanabria, Bogoychev, Markl, Carmantini, Klejch, and Bell. ICASSP 2023. Presentation of the EdAcc corpus.
NeMo 1.17 is now released and includes a lot of improvements that users have long requested.

This includes a high-level Diarization API, PyCTCDecode support for beam search, InterCTC Loss support, an AWS SageMaker tutorial, and more!

https://twitter.com/alphacep/status/1644685634404073472
Space is closer than you think. Happy Cosmonautics day my friends.
Not sure about the claimed accuracy, but the numbers are interesting

https://blog.deepgram.com/nova-speech-to-text-whisper-api/


- A remarkable 22% reduction in word error rate (WER)
- A blazing-fast 23-78x quicker inference time
- A budget-friendly 3-7x lower cost, starting at only $0.0043/min
AUDIT:
Audio Editing by Following Instructions with Latent Diffusion Models

Yuancheng Wang, Zeqian Ju, Xu Tan, Lei He, Zhizheng Wu, Jiang Bian, Sheng Zhao
Abstract. Audio editing is applicable for various purposes, such as adding background sound effects, replacing a musical instrument, and repairing damaged audio. Recently, some diffusion-based methods achieved zero-shot audio editing by using a diffusion and denoising process conditioned on the text description of the output audio. However, these methods still have some problems: 1) they have not been trained on editing tasks and cannot ensure good editing effects; 2) they can erroneously modify audio segments that do not require editing; 3) they need a complete description of the output audio, which is not always available or necessary in practical scenarios. In this work, we propose AUDIT, an instruction-guided audio editing model based on latent diffusion models. Specifically, AUDIT has three main design features: 1) we construct triplet training data (instruction, input audio, output audio) for different audio editing tasks and train a diffusion model using instruction and input (to be edited) audio as conditions and generating output (edited) audio; 2) it can automatically learn to only modify segments that need to be edited by comparing the difference between the input and output audio; 3) it only needs edit instructions instead of full target audio descriptions as text input. AUDIT achieves state-of-the-art results in both objective and subjective metrics for several audio editing tasks (e.g., adding, dropping, replacement, inpainting, super-resolution).

This research is done in alignment with Microsoft's responsible AI principles.

https://audit-demo.github.io/
NaturalSpeech 2, a powerful new zero-shot TTS model in the NaturalSpeech series 🔥
1. Latent diffusion model + continuous codec, avoiding the dilemma in language model + discrete codec;
2. Strong zero-shot speech synthesis with a 3s prompt, singing synthesis with only a speech prompt!

abs: https://arxiv.org/abs/2304.09116
project page: https://speechresearch.github.io/naturalspeech2/