Speech Technology
Audio Reasoning Challenge results

https://audio-reasoning-challenge.github.io/leaderboard/

Some info about the winning TalTech entry:

https://www.linkedin.com/posts/aivo-olev-73944965_its-official-i-built-an-ai-agent-that-outperformed-ugcPost-7429801097202069504-G3U8

The task was to build an agent that can reason about audio using any open-source tools, and my unique solution basically taught a deaf LLM (Kimi K2) to answer questions about 1000 audio files (music, speech, other sounds). That would be hard for a human as well. It had input from other LLMs and 35 tools that were able to pick up some unreliable info (often incorrect or even hallucinated) from the audio, and that is what made this challenge the most exciting and why I basically worked non-stop for the four weeks. A normal AI agent can be pretty sure that when it reads a file or gets some other tool input, the information is correct. It might be irrelevant for the task, but mostly LLMs trust input (which is a problem in the real world with input from web search, malicious input, another agent's opinion, etc.). They also reason quite linearly, which is a problem when you have unreliable info.
Somehow one can create multimodal embeddings from speech and text and make them useful. Some projects I've seen around recently:

https://github.com/facebookresearch/SONAR

Used for ASR WER approximation

On the Robust Approximation of ASR Metrics
Abdul Waheed, Hanin Atwany, Rita Singh, Bhiksha Raj
https://arxiv.org/abs/2502.12408
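
The WER-approximation idea is easy to prototype: embed the audio and the ASR hypothesis in SONAR's shared space and use their similarity as a reference-free quality signal. The paper trains regressors on top of such embeddings; the cosine proxy below is only a naive sketch, assuming the pipeline classes shown in the SONAR README.

```python
# Sketch: reference-free WER proxy with SONAR embeddings.
# Assumes the SpeechToEmbeddingModelPipeline / TextToEmbeddingModelPipeline
# classes from the SONAR README; the paper itself trains regressors on such
# embeddings rather than using raw cosine similarity.
import torch
from sonar.inference_pipelines.speech import SpeechToEmbeddingModelPipeline
from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline

speech_enc = SpeechToEmbeddingModelPipeline(encoder="sonar_speech_encoder_eng")
text_enc = TextToEmbeddingModelPipeline(
    encoder="text_sonar_basic_encoder", tokenizer="text_sonar_basic_encoder"
)

def quality_proxy(wav_path: str, asr_hypothesis: str) -> float:
    """Cosine similarity between audio and hypothesis embeddings;
    lower similarity hints at higher WER, without needing a reference."""
    speech_emb = speech_enc.predict([wav_path])
    text_emb = text_enc.predict([asr_hypothesis], source_lang="eng_Latn")
    return torch.nn.functional.cosine_similarity(speech_emb, text_emb).item()

# Hypothetical usage: rank utterances by proxy score, no reference transcripts needed.
# print(quality_proxy("utt_0001.wav", "hello world this is a test"))
```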

Another one to detect dataset quality issues

https://huggingface.co/yuriyvnv/WAVe-1B-Multimodal-PT
No model weights, but somewhat interesting ideas.

Transfusion (Zhou et al., 2025) was originally proposed in computer vision to develop a model that can jointly perform generation and understanding tasks.

https://arxiv.org/abs/2602.17097

AudioChat: Unified Audio Storytelling, Editing, and Understanding with Transfusion Forcing

William Chen, Prem Seetharaman, Rithesh Kumar, Oriol Nieto, Shinji Watanabe, Justin Salamon, Zeyu Jin

Despite recent breakthroughs, audio foundation models struggle in processing complex multi-source acoustic scenes. We refer to this challenging domain as audio stories, which can have multiple speakers and background/foreground sound effects. Compared to traditional audio processing tasks, audio stories introduce new layers of semantic, temporal, and physical complexity. To address this challenge, we propose AudioChat, a framework for developing audio foundation models that can generate, edit, and understand audio stories. AudioChat introduces a new paradigm in which LLM-based toolcalling agents simulate interactions between users and the system, and these simulated dialogues are used as training data. We also introduce a novel Audio Transfusion Forcing objective to train the AudioChat model, allowing it to simultaneously decompose high-level instructions via structured chain-of-thought reasoning and perform interactive multi-turn audio understanding/generation. To evaluate generation and editing performance, we develop three new metrics that directly measure task performance instead of relying upon distribution-based scoring. We highly encourage readers to visit our demo to better understand the capabilities of AudioChat: this https URL.
Interesting job, those are rare nowadays

Bland.ai builds AI voice agents that handle real phone calls for some of the largest companies in the world. Our software runs inside critical workflows at companies like Samsara, Gallup, TripAdvisor, Snapchat, Signant Health, Better.com, and others. We have raised $65 million from top Silicon Valley investors including Emergence Capital, Scale Venture Partners, Y Combinator, and the founders of Twilio, Affirm, and ElevenLabs.

We are expanding our research team as we train and deploy our own TTS and STT models in production. We are also investing heavily in next generation speech to speech and speech inference systems.

We are currently hiring for two roles:

Research
If you have designed and trained your own models, published papers or in-depth technical writing, and are working at the leading edge of audio research, we would love to hear from you:
https://jobs.ashbyhq.com/bland/d2e08077-61f0-4810-bc72-3efd7944647b

You might be a strong fit if you have experience with:
- Large scale TTS, STT, or neural audio codec systems
- Self supervised learning, generative modeling, or multimodal modeling
- Neural audio codecs, discrete or continuous latent representations, and compression tradeoffs
- Running tight ablations and controlled experiments that move ideas from hypothesis to validation quickly
- Optimizing inference for real time, low latency production systems

Machine Learning Engineer
If you are a strong programmer who enjoys building terabyte scale datasets, designing training pipelines, and working on model inference and deployment, while staying closely connected to research, apply here:
https://jobs.ashbyhq.com/bland/05906608-0628-412c-8b01-a050d87986c5

If you have any questions please feel free to shoot me a DM!
Our friend @vancheeck recently pushed a new generation of an outstanding speaker identification architecture

https://github.com/PalabraAI/redimnet2

It is great that this project continues at Palabra https://www.palabra.ai
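
The usual way to use such speaker-embedding models is enrollment plus cosine scoring. A minimal sketch below; `load_speaker_encoder` is a hypothetical placeholder for whatever loader redimnet2 actually exposes, only the scoring recipe itself is the standard one for this family of models.

```python
# Generic speaker-verification sketch. `load_speaker_encoder` is a hypothetical
# placeholder, not the redimnet2 API; the logic (L2-normalised embeddings plus
# cosine similarity against an enrolled centroid) is the standard recipe.
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-9)

def enroll(encoder, wav_paths: list[str]) -> np.ndarray:
    """Average several utterance embeddings into one speaker centroid."""
    embs = l2_normalize(np.stack([encoder(p) for p in wav_paths]))
    return l2_normalize(embs.mean(axis=0))

def verify(encoder, centroid: np.ndarray, wav_path: str, threshold: float = 0.4) -> bool:
    """Accept if cosine similarity to the enrolled centroid exceeds a tuned threshold."""
    score = float(np.dot(centroid, l2_normalize(encoder(wav_path))))
    return score >= threshold

# encoder = load_speaker_encoder()  # hypothetical: maps wav_path -> np.ndarray embedding
# centroid = enroll(encoder, ["spk1_a.wav", "spk1_b.wav"])
# print(verify(encoder, centroid, "unknown.wav"))
```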
IWSLT 2026 has some interesting competitions (like subtitling) with data available for download

https://iwslt.org/2026/subtitling

Evaluation period starts April 1st
Google DeepMind released African ASR/TTS data, somewhat interesting

The WAXAL dataset is a large-scale multilingual speech corpus for African languages, introduced in the paper WAXAL: A Large-Scale Multilingual African Language Speech Corpus.

https://huggingface.co/datasets/google/WaxalNLP
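
If you just want to poke at the data, streaming it through the Hugging Face datasets library is the quickest route. Split and field names below are assumptions; check the dataset card for the actual ones.

```python
# Sketch: stream a few WAXAL examples without downloading the whole corpus.
# Split and field names ("train", "audio", "text") are assumptions; verify them
# against the dataset card at huggingface.co/datasets/google/WaxalNLP.
from datasets import load_dataset

ds = load_dataset("google/WaxalNLP", split="train", streaming=True)

for example in ds.take(3):
    # Typical ASR-corpus fields, assumed here rather than documented.
    print(example.get("text"), example.get("audio", {}).get("sampling_rate"))
```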
Reasoning in audio LLMs is a problem

https://github.com/Blinorot/ALARM

https://arxiv.org/abs/2603.09556

This is the official implementation of ALARM: Audio–Language Alignment for Reasoning Models, an audio reasoning language model trained in a self-generation setup that achieves state-of-the-art performance on Speech Understanding benchmarks with a 4B backbone.

Abstract: Large audio language models (ALMs) extend LLMs with auditory understanding. A common approach freezes the LLM and trains only an adapter on self-generated targets. However, this fails for reasoning LLMs (RLMs) whose built-in chain-of-thought traces expose the textual surrogate input, yielding unnatural responses. We propose self-rephrasing, converting self-generated responses into audio-understanding variants compatible with RLMs while preserving distributional alignment. We further fuse and compress multiple audio encoders for stronger representations. For training, we construct a 6M-instance multi-task corpus (2.5M unique prompts) spanning 19K hours of speech, music, and sound. Our 4B-parameter ALM outperforms similarly sized models and surpasses most larger ALMs on related audio-reasoning benchmarks, while preserving textual capabilities with a low training cost. Notably, we achieve the best open-source result on the MMAU-speech and MMSU benchmarks and rank third among all the models.
https://huggingface.co/datasets/ai-coustics/dawn_chorus_en

dawn_chorus_en
An open-source evaluation dataset for accurate foreground speaker transcription.

The dataset targets mixture conditions where foreground speech remains generally transcribable by speech-to-text systems, while background speech is distinctly perceived as background. It provides around 90 minutes of foreground–background speech mixtures composed of recorded and synthesized foreground speech, along with ground truth foreground speech and corresponding transcripts.

Inspired by DAPS, which frames speech enhancement as a direct transformation from real-world device recordings to professionally produced studio speech via aligned input–output pairs, we design this dataset around an equally application-driven mapping: from realistic foreground–background speech mixtures to isolated primary-speaker speech that remains robustly transcribable by downstream STT systems. Like DAPS, our approach emphasizes time-aligned references and real recording / transmission conditions rather than purely synthetic degradations, enabling evaluation of suppression strength versus foreground speech distortion.
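
The intended evaluation loop is straightforward: run your STT on the mixtures and score against the foreground ground-truth transcripts, e.g. with jiwer. Split and field names below are assumptions (check the dataset card), and `transcribe` is a stub for your own STT system.

```python
# Sketch: score foreground-transcription WER on dawn_chorus_en with jiwer.
# Split and field names are assumptions; `transcribe` is a placeholder for
# whatever STT system you want to evaluate.
from datasets import load_dataset
import jiwer

def transcribe(waveform, sampling_rate) -> str:
    """Plug in your STT system here."""
    raise NotImplementedError

ds = load_dataset("ai-coustics/dawn_chorus_en", split="test")

refs, hyps = [], []
for ex in ds:
    refs.append(ex["transcript"])  # foreground ground-truth text (field name assumed)
    hyps.append(transcribe(ex["audio"]["array"], ex["audio"]["sampling_rate"]))

print("foreground WER:", jiwer.wer(refs, hyps))
```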
DiTs are powering modern TTS systems, but their issues are rarely mentioned: longer training time and higher data requirements. Convolutions still make sense given that speech data is locally uniform. Research like this still makes sense for us GPU-poor guys

https://arxiv.org/abs/2603.09408v1
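
For contrast, here is a generic 1-D convolutional block of the kind that exploits that local uniformity (depthwise conv for local mixing over time, pointwise layers for channel mixing, residual connection). It is only an illustration of the argument, not the architecture from the linked paper.

```python
# Generic 1-D convolutional block for speech feature sequences (B, T, C):
# depthwise conv captures local context cheaply, pointwise layers mix channels.
# Illustrates the "convolutions handle locally uniform speech well" argument,
# not the specific architecture from the paper above.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 7, expansion: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.dw = nn.Conv1d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)
        self.pw = nn.Sequential(
            nn.Linear(channels, channels * expansion),
            nn.GELU(),
            nn.Linear(channels * expansion, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, C)
        y = self.norm(x)
        y = self.dw(y.transpose(1, 2)).transpose(1, 2)   # local mixing over time
        return x + self.pw(y)                            # channel mixing + residual

# x = torch.randn(2, 200, 256)    # 2 utterances, 200 frames, 256-dim features
# print(ConvBlock(256)(x).shape)  # torch.Size([2, 200, 256])
```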
Just another reminder there is no point in ONNX

https://github.com/eschmidbauer/moonshine-c

The source is pure C, 825 lines of code, and the executable is 40 KB. It runs ASR just fine.
Interesting community on Reddit

https://www.reddit.com/r/VoiceAutomationAI/

They will host an AMA session with Tony Robinson, one of the most knowledgeable people I know

Upcoming AMA with Dr Tony Robinson (Founder Speechmatics)

Excited to announce that Dr Tony Robinson will be joining Unio - The Voice AI Community powered by SLNG for a live AMA with builders & founders.

If you’re building voice AI, you already know this:
it works in demos… and breaks in production.

Dr Tony has spent 36+ years in Voice AI, starting in 1989 at Cambridge where he built one of the earliest neural network based speech recognition systems, long before deep learning became mainstream.

Today, Speechmatics powers voice AI across 50+ languages, with customers seeing 9x growth in voice agent adoption in 2025.

📅 Date: 27 March
Time: 10:30 AM PST / 11:00 PM IST
📍 Location: Reddit (r/VoiceAutomationAI)

For the next 24 hours, he’ll be answering questions about:

• What actually breaks in production voice AI (and how to fix it)
• Accents, noise, latency & real-world edge cases
• Designing reliable STT-LLM-TTS pipelines
• Lessons from 35+ years building speech systems
• Where voice AI is really heading (beyond the hype)
• What he’d do differently if starting today

If you're building in Voice AI, AI agents, or conversational automation, this is a rare opportunity to learn from someone who has been solving these problems for decades.

Join the reddit community to drop questions👇
Good talk on SpeechLMs

https://www.youtube.com/watch?v=m65SiSnsZ3g

It explains the paper below. Basically, at different points in time one has to pick different layers of the text LM for the adapters: word boundaries require more linguistic knowledge, frames in the middle of words more acoustic knowledge. Big improvements with adjusted adapters as a result; see the sketch after the abstract below.

https://arxiv.org/abs/2503.06211

Late Fusion and Multi-Level Fission Amplify Cross-Modal Transfer in Text-Speech LMs

Santiago Cuervo, Adel Moumen, Yanis Labrak, Sameer Khurana, Antoine Laurent, Mickael Rouvier, Phil Woodland, Ricard Marxer

Text-Speech Language Models (TSLMs) -- language models trained to jointly process and generate text and speech -- are commonly trained through an early modality fusion/fission approach, in which both modalities are fed and predicted from a shared backbone via linear layers. We hypothesize that this approach limits cross-modal transfer by neglecting feature compositionality -- specifically, the finer-grained nature of speech representations compared to text -- preventing the emergence of a shared feature hierarchy within model layers. In this paper, we argue that this limitation can be addressed through late fusion and fission, with a fission process that accesses both high- and low-level features for speech generation. Our models implementing these principles, SmolTolk, rival or surpass state-of-the-art TSLMs trained with orders of magnitude more compute, and achieve significantly improved cross-modal performance relative to early fusion/fission baselines. Representation analyses further suggest that our method enhances the model's ability to abstract higher-level, more semantic features from speech, and leads to increasingly shared representation spaces across layers.
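
A toy sketch of the two ideas: speech features are injected into the text LM partway up the stack (late fusion), and speech prediction reads from a learned mixture of all layers instead of just the last one (multi-level fission). Layer count, dimensions, and the softmax layer weighting are illustrative assumptions, not the SmolTolk configuration.

```python
# Toy sketch of late fusion and multi-level fission for a text-speech LM.
# Layer indices, dimensions, and the learned-softmax layer mixing are
# illustrative assumptions only, not the actual SmolTolk design details.
import torch
import torch.nn as nn

class LateFusionFissionLM(nn.Module):
    def __init__(self, n_layers=12, d=512, speech_vocab=1024, fuse_at=6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
            for _ in range(n_layers)
        )
        self.fuse_at = fuse_at                        # late fusion: inject speech mid-stack
        self.speech_in = nn.Linear(d, d)              # adapter for incoming speech features
        self.layer_logits = nn.Parameter(torch.zeros(n_layers))  # fission: mix all layers
        self.speech_head = nn.Linear(d, speech_vocab)

    def forward(self, text_h, speech_feats):
        # text_h: (B, T, d) embedded text; speech_feats: (B, T, d) aligned speech features
        hiddens, h = [], text_h
        for i, layer in enumerate(self.layers):
            if i == self.fuse_at:                     # late fusion point
                h = h + self.speech_in(speech_feats)
            h = layer(h)
            hiddens.append(h)
        # multi-level fission: speech generation reads both low- and high-level layers
        w = torch.softmax(self.layer_logits, dim=0)
        fused = sum(w_i * h_i for w_i, h_i in zip(w, hiddens))
        return self.speech_head(fused)

# logits = LateFusionFissionLM()(torch.randn(2, 50, 512), torch.randn(2, 50, 512))
# print(logits.shape)  # torch.Size([2, 50, 1024])
```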