Speech Technology
https://huggingface.co/datasets/ai-coustics/dawn_chorus_en

dawn_chorus_en
An open-source evaluation dataset for accurate foreground speaker transcription.

The dataset targets mixture conditions where foreground speech remains generally transcribable by speech-to-text systems, while background speech is distinctly perceived as background. It provides around 90 minutes of foreground–background speech mixtures composed of recorded and synthesized foreground speech, along with ground truth foreground speech and corresponding transcripts.

Inspired by DAPS, which frames speech enhancement as a direct transformation from real-world device recordings to professionally produced studio speech via aligned input–output pairs, we design this dataset around an equally application-driven mapping: from realistic foreground–background speech mixtures to isolated primary-speaker speech that remains robustly transcribable by downstream STT systems. Like DAPS, our approach emphasizes time-aligned references and real recording / transmission conditions rather than purely synthetic degradations, enabling evaluation of suppression strength versus foreground speech distortion.
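Since the dataset ships time-aligned ground-truth transcripts, suppression strength vs. foreground distortion can be scored with something as simple as word error rate of an STT hypothesis against the reference transcript. A minimal sketch (my own toy helper, not part of the dataset tooling):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the quick brown fox", "the quick brown dog"))  # 0.25
```

Run the mixture through your STT, then compute WER against the clean-foreground transcript: a rise over the clean-input WER quantifies how much the background (or your suppression front-end) hurts transcribability.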
DiTs power modern TTS systems, but their downsides are rarely mentioned: longer training times and higher data requirements. Convolutions still make sense given that speech data is locally uniform. Research like this still matters for us GPU-poor folks.

https://arxiv.org/abs/2603.09408v1
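Back-of-the-envelope arithmetic behind the GPU-poor argument (illustrative numbers, my own, not from the paper): a 1-D convolution's cost grows linearly with sequence length, while self-attention grows quadratically, which hurts for long speech sequences.

```python
def conv1d_macs(seq_len: int, channels: int, kernel: int) -> int:
    # multiply-accumulates for a plain 1-D conv: linear in sequence length
    return seq_len * channels * channels * kernel

def attention_macs(seq_len: int, d_model: int) -> int:
    # QK^T plus attention-times-V: quadratic in sequence length
    return 2 * seq_len * seq_len * d_model

for t in (1_000, 10_000):
    print(t, conv1d_macs(t, 512, 5), attention_macs(t, 512))
```

At 1k frames the two are comparable; at 10k frames attention costs roughly 8x the convolution, before even counting the projections.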
Just another reminder that there is no point in ONNX

https://github.com/eschmidbauer/moonshine-c

The source is pure C, 825 lines of code; the executable is 40 KB. It runs ASR just fine.
Interesting community on Reddit

https://www.reddit.com/r/VoiceAutomationAI/

will host an AMA session with Tony Robinson, one of the most knowledgeable people I know

Upcoming AMA with Dr Tony Robinson (Founder, Speechmatics)

Excited to announce that Dr Tony Robinson will be joining Unio - The Voice AI Community powered by SLNG for a live AMA with builders & founders.

If you’re building voice AI, you already know this:
it works in demos… and breaks in production.

Dr Tony has spent 36+ years in Voice AI, starting in 1989 at Cambridge where he built one of the earliest neural network based speech recognition systems, long before deep learning became mainstream.

Today, Speechmatics powers voice AI across 50+ languages, with customers seeing 9x growth in voice agent adoption in 2025.

📅 Date: 27 March
Time: 10:30 AM PST / 11:00 PM IST
📍 Location: Reddit (r/VoiceAutomationAI)

For the next 24 hours, he’ll be answering questions about:

• What actually breaks in production voice AI (and how to fix it)
• Accents, noise, latency & real-world edge cases
• Designing reliable STT-LLM-TTS pipelines
• Lessons from 36+ years building speech systems
• Where voice AI is really heading (beyond the hype)
• What he’d do differently if starting today

If you're building in Voice AI, AI agents, or conversational automation, this is a rare opportunity to learn from someone who has been solving these problems for decades.

Join the Reddit community (r/VoiceAutomationAI) to drop your questions 👇
Good talk on SpeechLMs

https://www.youtube.com/watch?v=m65SiSnsZ3g

Explains the paper below. Basically, at different points in time one has to pick different layers of the text LM for the adapters: word boundaries require more linguistic knowledge, while mid-word frames need more acoustic knowledge. Adapters adjusted this way yield big improvements.

https://arxiv.org/abs/2503.06211

Late Fusion and Multi-Level Fission Amplify Cross-Modal Transfer in Text-Speech LMs

Santiago Cuervo, Adel Moumen, Yanis Labrak, Sameer Khurana, Antoine Laurent, Mickael Rouvier, Phil Woodland, Ricard Marxer

Text-Speech Language Models (TSLMs) -- language models trained to jointly process and generate text and speech -- are commonly trained through an early modality fusion/fission approach, in which both modalities are fed and predicted from a shared backbone via linear layers. We hypothesize that this approach limits cross-modal transfer by neglecting feature compositionality -- specifically, the finer-grained nature of speech representations compared to text -- preventing the emergence of a shared feature hierarchy within model layers. In this paper, we argue that this limitation can be addressed through late fusion and fission, with a fission process that accesses both high- and low-level features for speech generation. Our models implementing these principles, SmolTolk, rival or surpass state-of-the-art TSLMs trained with orders of magnitude more compute, and achieve significantly improved cross-modal performance relative to early fusion/fission baselines. Representation analyses further suggest that our method enhances the model's ability to abstract higher-level, more semantic features from speech, and leads to increasingly shared representation spaces across layers.
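A toy sketch of the multi-level fission idea as I read the abstract (the names, fixed weights, and scalar layer-mixing scheme are my simplifications, not the SmolTolk code): the speech head reads a learned softmax-weighted mix of several backbone layers, so it can access both low-level and high-level features instead of only the last layer.

```python
import math

def softmax(ws):
    m = max(ws)
    e = [math.exp(w - m) for w in ws]
    s = sum(e)
    return [x / s for x in e]

def multi_level_fission(layer_feats, layer_weights):
    """Combine hidden states from several backbone layers into one
    speech-generation feature via scalar mixing weights."""
    alphas = softmax(layer_weights)
    out = [0.0] * len(layer_feats[0])
    for a, feat in zip(alphas, layer_feats):
        for i, v in enumerate(feat):
            out[i] += a * v
    return out

# toy hidden states from a low, middle, and high layer of the text backbone
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = multi_level_fission(feats, [0.0, 0.0, 0.0])  # equal weights
print(mixed)  # ≈ [0.667, 0.667]
```

In training, the `layer_weights` would be learned parameters, letting the model shift between acoustic and semantic layers as needed.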
VoxCPM2 is the latest major release — a 2B parameter model trained on over 2 million hours of multilingual speech data, now supporting 30 languages, Voice Design, Controllable Voice Cloning, and 48kHz studio-quality audio output. Built on a MiniCPM-4 backbone.

https://github.com/OpenBMB/VoxCPM
Rissa Cao, FishAudio CEO. A bit of marketing, but a very valid point on the importance of data and the lack of real high-quality data for speech systems.

https://www.linkedin.com/feed/update/urn:li:activity:7448399470251356160/

Early on, we made a mistake. We trained our TTS model on whatever voice data we could find online.
It sounded great on podcasts. But terrible for creation, companionship, anime dubbing. Everything fell apart.
The data distribution was wrong.
The Beyond Transcription Challenge is an IEEE SLT 2026 shared task tackling a foundational question in audio AI: can a model reason over speech without first converting it to text?

https://betrac.github.io

The research question: Current speech models still struggle to extract meaning directly from audio, especially when the signal includes overlapping speakers, ambient sounds, and room acoustics. Clinical note generation from doctor-patient conversations is an ideal stress test for this: it demands that a model attend to who said what, filter environmental noise, and produce faithful structured output. Yet on the Synth-DoPaCo dataset, end-to-end models hallucinate at alarming rates, with 99–100% of clinical claims unsupported by the source audio, compared to just 21–23% for traditional transcribe-then-summarize pipelines. BeTraC is a shared evaluation challenge aimed at closing this gap by advancing the technology.
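The 99-100% vs. 21-23% figures are unsupported-claim rates. A crude sketch of how such a metric could be computed (substring matching as a stand-in for whatever entailment-based judging the challenge actually uses; this is my illustration, not the official scorer):

```python
def unsupported_claim_rate(claims, transcript):
    """Fraction of clinical claims with no lexical support in the
    source transcript (crude substring check; a real evaluation
    would use an entailment model or human judges)."""
    transcript = transcript.lower()
    unsupported = [c for c in claims if c.lower() not in transcript]
    return len(unsupported) / max(len(claims), 1)

transcript = "patient reports a mild headache for two days, no fever"
claims = ["mild headache", "no fever", "history of diabetes"]
print(unsupported_claim_rate(claims, transcript))  # 1 of 3 unsupported
```

A transcribe-then-summarize pipeline keeps this rate low because every claim must trace back to transcript text; end-to-end models have no such anchor.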

Two competition tracks:
- Lightweight (≤ 6B params): Single end-to-end model, one invocation. Audio in, SOAP note out.
- Heavyweight (≤ 36B params): Tools and agents allowed. Only the final model generates text from audio.

The Synth-DoPaCo dataset: 8,800 synthetic doctor-patient conversations (~1,329 hrs), 66 ambient sound classes, room reverberation, Opus compression. Available now on Hugging Face.
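A minimal sketch of the kind of mixture construction such a dataset implies (my own toy mixer, not the Synth-DoPaCo pipeline): scale the ambient noise to a target SNR before adding it to the clean conversation.

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise power ratio hits snr_db,
    then add sample-wise (lists of floats, equal length assumed)."""
    p_speech = sum(x * x for x in speech) / len(speech)
    p_noise = sum(x * x for x in noise) / len(noise)
    scale = math.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(speech, noise)]
```

Reverberation and Opus compression would then be applied on top of the mixed signal; each stage makes the "who said what" problem harder for end-to-end models.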

Key dates:
- May 4, 2026: Open-Source Inclusion Proposals Deadline
- June 24, 2026: System submission deadline
- July 8, 2026: Challenge paper due

Data is live. Baselines are posted. Team registration is open.

If you work on speech, audio understanding, or multimodal AI, we'd love to have you compete.
https://www.deepl.com/en/press-release/deepl_launches_voice_api_for_real_time_speech_transcription_and_translation
DeepL, a global AI product and research company, today announced the general availability of DeepL Voice API. This innovative product empowers developers to integrate real-time voice transcription and translation capabilities into their applications, significantly enhancing multilingual support for businesses.
For a long time AudioSet was a big pain to download; it is finally available on HF.

https://huggingface.co/datasets/agkphysics/AudioSet

Overall, even small speech models need to understand non-speech sounds better. More on this later.