Played with Qwen3-Omni a bit. The full version requires 90 GB of RAM; 4-bit quantization fits in 24 GB. The 4-bit version only runs with vLLM and doesn't support audio output yet.
Speech recognition accuracy in the HF space is OK, but intelligence is below expectations. Video understanding is not really required for us.
My impression is that the video part makes this model too big for practical speech use cases, since it requires huge compute. A pure audio model might be lighter and more accurate.
People still use whisperX for speaker separation and recognition; the pyannote 4 patch is pending:
https://github.com/m-bain/whisperX/pull/1243
Upgrade to pyannote-audio 4 by borgoat · Pull Request #1243 · m-bain/whisperX
There are a couple of new pyannote models: pyannote/speaker-diarization-community-1 (offline) and pyannote/speaker-diarization-precision-2 (hosted by pyannote)
I did a minimal upgrade to pya...
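The core trick for combining ASR and diarization output is simple enough to sketch. Below is a minimal illustrative version, not the actual whisperX code: assign each recognized word to the diarization turn it overlaps the most.

```python
def assign_speakers_by_overlap(words, turns):
    """Assign each ASR word to the diarization turn with maximal time overlap.

    words: list of dicts {"word": str, "start": float, "end": float}
    turns: list of dicts {"speaker": str, "start": float, "end": float}
    Words with no overlapping turn get speaker=None.
    """
    labeled = []
    for w in words:
        best, best_overlap = None, 0.0
        for t in turns:
            # length of the intersection of [w.start, w.end] and [t.start, t.end]
            overlap = min(w["end"], t["end"]) - max(w["start"], t["start"])
            if overlap > best_overlap:
                best, best_overlap = t["speaker"], overlap
        labeled.append({**w, "speaker": best})
    return labeled
```

whisperX ships a helper built on the same idea; this sketch just makes the overlap logic explicit.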
This looks like an interesting talk. We also recommend attending online, since Google and DeepMind frequently don't allow recordings; there have been many cases like that.
[Oct 30th, 2025]
Gemini Voice Agent: A Natively Multimodal Dialog Model with Advanced Reasoning and Tool Use
Presenter: Michael Han, Google DeepMind
https://poonehmousavi.github.io/rg.html
https://concordia-ca.zoom.us/j/81004805542
Some emotion work from LAION: the Emolia dataset, fine-grained emotion annotations for the Emilia data
https://huggingface.co/datasets/laion/Emolia
EmoNet-Voice: A Fine-Grained, Expert-Verified Benchmark for Speech Emotion Detection
https://arxiv.org/abs/2506.09827
EmoNet-Voice: A Fine-Grained, Expert-Verified Benchmark for Speech Emotion Detection
Christoph Schuhmann, Robert Kaczmarczyk, Gollam Rabby, Felix Friedrich, Maurice Kraus, Kourosh Nadi, Huu Nguyen, Kristian Kersting, Sören Auer
The advancement of text-to-speech and audio generation models necessitates robust benchmarks for evaluating the emotional understanding capabilities of AI systems. Current speech emotion recognition (SER) datasets often exhibit limitations in emotional granularity, privacy concerns, or reliance on acted portrayals. This paper introduces EmoNet-Voice, a new resource for speech emotion detection, which includes EmoNet-Voice Big, a large-scale pre-training dataset (featuring over 4,500 hours of speech across 11 voices, 40 emotions, and 4 languages), and EmoNet-Voice Bench, a novel benchmark dataset with human expert annotations. EmoNet-Voice is designed to evaluate SER models on a fine-grained spectrum of 40 emotion categories with different levels of intensities. Leveraging state-of-the-art voice generation, we curated synthetic audio snippets simulating actors portraying scenes designed to evoke specific emotions. Crucially, we conducted rigorous validation by psychology experts who assigned perceived intensity labels. This synthetic, privacy-preserving approach allows for the inclusion of sensitive emotional states often absent in existing datasets. Lastly, we introduce Empathic Insight Voice models that set a new standard in speech emotion recognition with high agreement with human experts. Our evaluations across the current model landscape exhibit valuable findings, such as high-arousal emotions like anger being much easier to detect than low-arousal states like concentration.
From the comments on the KaniTTS release:
https://www.reddit.com/r/LocalLLaMA/comments/1oitanf/just_dropped_kani_tts_english_a_400m_tts_model/
A nice quick evaluation of TTS engines: Kokoro leads thanks to its stability, while many other systems show issues.
https://paper2audio.com/posts/review-of-text-to-speech-models-for-reading-research-papers
https://github.com/pykeio/earshot
Very fast voice activity detection in Rust, 10 times faster than TEN VAD
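earshot itself is a trained model in Rust; just to illustrate the frame-in, decision-out interface a VAD exposes, here is a naive frame-energy gate in Python. The frame size and threshold are arbitrary placeholders, and real VADs are far more robust than this:

```python
def frame_energy_vad(samples, frame_len=160, threshold=0.01):
    """Naive VAD: mark a frame as speech if its mean squared energy
    exceeds a fixed threshold. Real VADs (earshot, TEN VAD) use trained
    models; this only illustrates the interface."""
    decisions = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        decisions.append(energy > threshold)
    return decisions
```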
The attention patterns in speech definitely have potential
https://github.com/smulelabs/windowed-roformer
Efficient Vocal Source Separation Through Windowed Sink Attention
State-of-the-art vocal separation models like Mel-Band-Roformer rely on full temporal self-attention mechanisms, where each temporal frame interacts with every other frame. This incurs a heavy computational cost that scales quadratically with input audio length, motivating chunking and windowing approaches. Through analysis of a pre-trained vocal separation model, we discovered that temporal attention patterns are highly localized. Building on this insight, we replaced full attention with windowed sink attention (WSA) with a small temporal attention window and attention sinks. We show empirically that fine-tuning from the original checkpoint recovers 92% of the original SDR performance while reducing FLOPs by 44.5x.
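The WSA idea is easy to picture as an attention mask: each frame sees a few designated sink frames plus a small local window, so the number of attended pairs grows linearly with sequence length. A sketch (the paper's exact window size and sink placement may differ):

```python
def wsa_mask(T, window=2, n_sinks=1):
    """Boolean attention mask for windowed sink attention: frame i may
    attend to frame j if j is a sink frame (one of the first n_sinks
    positions) or lies within +/- window frames of i. Everything else is
    masked out, so attended pairs grow linearly with T instead of as T^2."""
    return [
        [j < n_sinks or abs(i - j) <= window for j in range(T)]
        for i in range(T)
    ]
```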
Related is
https://github.com/SamsungLabs/SummaryMixing
SummaryMixing is a linear-time alternative to self-attention (SA) for speech processing models such as Transformers, Conformers or Branchformers. Instead of computing pair-wise scores between tokens (leading to quadratic-time complexity for SA), it summarises a whole utterance with a mean over vectors for all time steps. SummaryMixing is based on recent findings demonstrating that self-attention could be useless for speech recognition, as the attention weights of trained ASR systems are almost uniformly distributed across the tokens composing a sequence. SummaryMixing is also a generalisation of the recent HyperMixer and HyperConformer to better and simpler mixing functions. In a SummaryMixing cell, which takes the same inputs and produces the same outputs as self-attention, contributions from each time step are first transformed and then averaged globally before being fed back to each time step. This is visible in Figure 1 in the article. Therefore, the time complexity is reduced to linear.
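The cell is simple enough to sketch in a few lines. This is an illustrative version with placeholder transform functions, not the SpeechBrain implementation:

```python
def summary_mixing(x, local_f, summary_f, combine_f):
    """SummaryMixing cell (sketch): transform each time step, average the
    transformed vectors into a single utterance-level summary, then combine
    that summary back with each time step's local transform. One pass over
    the sequence, so the cost is linear in the number of time steps."""
    T = len(x)
    s = [summary_f(x_t) for x_t in x]                       # summary branch
    dim = len(s[0])
    mean = [sum(v[d] for v in s) / T for d in range(dim)]   # global average
    return [combine_f(local_f(x_t), mean) for x_t in x]     # feed back
```

In the real model `local_f`, `summary_f`, and `combine_f` are small learned networks; identity or sum stand-ins are enough to see the data flow.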
News from another universe
LongCat-Flash-Omni is open sourced: Multimodal + Low-Latency
* ScMoE architecture on LongCat-Flash: 560B Parameters, 27B Active
* Leading Performance among Open-Source Omni-modal models
* Training: Novel Early-Fusion Omni-modal training paradigm -> No Single Modality Left Behind
* Real-time Spoken Interaction: Millisecond-level E2E latency
* 128K context + Supports > 8min real-time AV interaction
* Multimodal I/O: Arbitrary Combination of Text/Image/Audio/Video Input → Text/Speech Output (w/ LongCat-Audio-Codec)
* Efficient Infrastructure: With optimized modality-decoupled parallel training, Omni sustains >90% throughput of pure-text training efficiency.
https://github.com/meituan-longcat/LongCat-Flash-Omni
We like reviews. People still use n-gram rescoring + LSTM for best accuracy. The most effective systems just ensemble everything, Kaggle-style.
https://arxiv.org/abs/2507.18161
Recent Trends in Distant Conversational Speech Recognition: A Review of CHiME-7 and 8 DASR Challenges
Samuele Cornell, Christoph Boeddeker, Taejin Park, He Huang, Desh Raj, Matthew Wiesner, Yoshiki Masuyama, Xuankai Chang, Zhong-Qiu Wang, Stefano Squartini, Paola Garcia, Shinji Watanabe
The CHiME-7 and 8 distant speech recognition (DASR) challenges focus on multi-channel, generalizable, joint automatic speech recognition (ASR) and diarization of conversational speech. With participation from 9 teams submitting 32 diverse systems, these challenges have contributed to state-of-the-art research in the field. This paper outlines the challenges' design, evaluation metrics, datasets, and baseline systems while analyzing key trends from participant submissions. From this analysis it emerges that: 1) Most participants use end-to-end (e2e) ASR systems, whereas hybrid systems were prevalent in previous CHiME challenges. This transition is mainly due to the availability of robust large-scale pre-trained models, which lowers the data burden for e2e-ASR. 2) Despite recent advances in neural speech separation and enhancement (SSE), all teams still heavily rely on guided source separation, suggesting that current neural SSE techniques are still unable to reliably deal with complex scenarios and different recording setups. 3) All best systems employ diarization refinement via target-speaker diarization techniques. Accurate speaker counting in the first diarization pass is thus crucial to avoid compounding errors and CHiME-8 DASR participants especially focused on this part. 4) Downstream evaluation via meeting summarization can correlate weakly with transcription quality due to the remarkable effectiveness of large-language models in handling errors. On the NOTSOFAR-1 scenario, even systems with over 50% time-constrained minimum permutation WER can perform roughly on par with the most effective ones (around 11%). 5) Despite recent progress, accurately transcribing spontaneous speech in challenging acoustic environments remains difficult, even when using computationally intensive system ensembles.
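The ranking metric behind all of this is still WER (CHiME uses a time-constrained, speaker-attributed variant, but the core edit-distance computation is the same). A minimal reference implementation:

```python
def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance between reference
    and hypothesis, divided by the number of reference words."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i                      # deletions only
    for j in range(len(h) + 1):
        dp[0][j] = j                      # insertions only
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(r)
```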
We like the in-depth evaluations in this research:
https://github.com/Anuttacon/speech_drame
https://arxiv.org/abs/2511.01261
Speech-DRAME: A Framework for Human-Aligned Benchmarks in Speech Role-Play
Jiatong Shi, Jionghao Han, Yichen Lu, Santiago Pascual, Pengfei Wu, Chenye Cui, Shinji Watanabe, Chao Weng, Cong Zhou
Role-play has become a key testbed for generative models, expanding from text-only dialogue to multimodal interaction. Extending role-play to speech captures prosody, emotion, and delivery, but also poses new evaluation challenges. Current pipelines often use audio large language models (ALLMs) as zero-shot judges, which miss paralinguistic cues, collapse multiple aspects into coarse scores, and rely on synthetic speech references that fail to reflect real-world roles. We present Speech-DRAME, a unified framework that contributes at three levels: (i) Speech-DRAME-EvalBench, an evaluation benchmark with bilingual human-annotated data and protocols for training and testing speech evaluation models (SEMs), (ii) DRAME-Eval, a fine-tuned evaluation model, which substantially outperforms zero-shot and few-shot ALLMs, and (iii) Speech-DRAME-RoleBench, a speech role-play benchmark that leverages DRAME-Eval as an automatic judge to compare speech foundation models (SFMs). Speech-DRAME distinguishes between two complementary evaluation strategies: Archetype Evaluation, a top-down approach measuring adherence to broad role archetypes, and Realism Evaluation, a bottom-up approach grounded in real human speech that emphasizes nuanced role quality. Compared to zero-shot ALLM judges, DRAME-Eval achieves stronger agreement with human ratings (Pearson correlation from 0.480 to 0.629 in archetypes, and 0.390 to 0.625 in realism). By integrating transparent benchmark resources, modeling approaches, and system-level evaluation, Speech-DRAME provides the first comprehensive, reproducible foundation for assessing spoken role-play.
Greetings from Voice Tech For All team!
We are pleased to announce the launch of the Voice Tech for All Challenge — a Text-to-Speech (TTS) innovation challenge hosted by IISc and SPIRE Lab, powered by Bhashini, GIZ’s FAIR Forward, ARMMAN, and ARTPARK, along with Google for Developers as our Community Partner.
This challenge invites startups, developers, researchers, students and faculty members to build the next generation of multilingual, expressive Text-to-Speech (TTS) systems, making voice technology accessible to community health workers, especially for low-resource Indian languages.
Why Join?
Access high-quality open datasets in 11 Indian languages (SYSPIN + SPICOR)
Build the SOTA open source multi-speaker, multilingual TTS with accent & style transfer
Winning model to be deployed in maternal health assistant (ARMMAN)
🏆 Prizes worth ₹8.5 Lakhs await!
🔗 Registration link: https://syspin.iisc.ac.in/register
🌐Learn more: https://syspin.iisc.ac.in/voicetechforall
Warm regards,
Team Voice Tech For All
IISc (Indian Institute of Science)
This should have nice properties
https://huggingface.co/aiola/drax-v1
https://github.com/aiola-lab/drax
https://arxiv.org/abs/2510.04162
Drax: Speech Recognition with Discrete Flow Matching
Aviv Navon, Aviv Shamsian, Neta Glazer, Yael Segal-Feldman, Gill Hetz, Joseph Keshet, Ethan Fetaya
Diffusion and flow-based non-autoregressive (NAR) models have shown strong promise in large language modeling, however, their potential for automatic speech recognition (ASR) remains largely unexplored. We propose Drax, a discrete flow matching framework for ASR that enables efficient parallel decoding. To better align training with inference, we construct an audio-conditioned probability path that guides the model through trajectories resembling likely intermediate inference errors, rather than direct random noise to target transitions. Our theoretical analysis links the generalization gap to divergences between training and inference occupancies, controlled by cumulative velocity errors, thereby motivating our design choice. Empirical evaluation demonstrates that our approach attains recognition accuracy on par with state-of-the-art speech models while offering improved accuracy-efficiency trade-offs, highlighting discrete flow matching as a promising direction for advancing NAR ASR.
Sounds reasonable for TTS
https://github.com/auspicious3000/ProsodyLM
ProsodyLM — a speech language model
→ With novel prosody tokenization (not audio tokenization)
→ Achieves superior prosody capabilities with pre-training only (no alignment)
https://arxiv.org/abs/2507.20091
ProsodyLM: Uncovering the Emerging Prosody Processing Capabilities in Speech Language Models
Kaizhi Qian, Xulin Fan, Junrui Ni, Slava Shechtman, Mark Hasegawa-Johnson, Chuang Gan, Yang Zhang
Speech language models refer to language models with speech processing and understanding capabilities. One key desirable capability for speech language models is the ability to capture the intricate interdependency between content and prosody. The existing mainstream paradigm of training speech language models, which converts speech into discrete tokens before feeding them into LLMs, is sub-optimal in learning prosody information -- we find that the resulting LLMs do not exhibit obvious emerging prosody processing capabilities via pre-training alone. To overcome this, we propose ProsodyLM, which introduces a simple tokenization scheme amenable to learning prosody. Each speech utterance is first transcribed into text, followed by a sequence of word-level prosody tokens. Compared with conventional speech tokenization schemes, the proposed tokenization scheme retains more complete prosody information, and is more understandable to text-based LLMs. We find that ProsodyLM can learn surprisingly diverse emerging prosody processing capabilities through pre-training alone, ranging from harnessing the prosody nuances in generated speech, such as contrastive focus, understanding emotion and stress in an utterance, to maintaining prosody consistency in long contexts.
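The tokenization idea can be sketched as follows: transcribe the words, then append quantized word-level prosody tags to each one. The bin edges and tag names below are arbitrary placeholders, not the paper's actual inventory:

```python
def prosody_tokens(words):
    """Turn (word, mean_pitch_hz, duration_s) triples into a token stream
    of text words interleaved with word-level prosody tags, e.g.
    'really <pitch_2> <dur_1>'. Bin edges are illustrative placeholders."""
    def bin_of(value, edges):
        # index of the quantization bin: number of edges the value exceeds
        return sum(value > e for e in edges)

    out = []
    for word, pitch_hz, dur_s in words:
        p = bin_of(pitch_hz, [120.0, 180.0, 240.0])
        d = bin_of(dur_s, [0.15, 0.30])
        out.append(f"{word} <pitch_{p}> <dur_{d}>")
    return " ".join(out)
```

The point of the scheme is that a text LLM can read this stream directly, unlike opaque audio-codec tokens.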
It's important to have the means to adjust network behaviour, so methods like the ones below are very interesting.
https://arxiv.org/abs/2505.12973
Fast, Not Fancy: Rethinking G2P with Rich Data and Rule-Based Models
Homograph disambiguation remains a significant challenge in grapheme-to-phoneme (G2P) conversion, especially for low-resource languages. This challenge is twofold: (1) creating balanced and...
Also
Combining Autoregressive Models and Phonological Knowledge Bases for Improved Accuracy in Korean Grapheme-to-Phoneme Conversion
https://ieeexplore.ieee.org/document/11045935
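The rule-based side of these G2P approaches boils down to a pronunciation lexicon keyed by context. A toy sketch of POS-based homograph disambiguation (the lexicon entries are simplified examples, not a real dictionary):

```python
# Illustrative lexicon: homographs keyed by part-of-speech tag, with
# ARPAbet-style pronunciations. Entries are simplified examples.
LEXICON = {
    "read": {"VERB_PRESENT": "R IY D", "VERB_PAST": "R EH D"},
    "lead": {"NOUN": "L EH D", "VERB": "L IY D"},
}

def g2p(word, pos):
    """Look up a pronunciation; homographs are disambiguated by the POS
    tag supplied by an upstream tagger, instead of by a neural model."""
    entry = LEXICON.get(word)
    if entry is None:
        raise KeyError(f"{word!r} not in lexicon")
    return entry[pos]
```

The "rich data" argument is that with a large enough lexicon and a decent tagger, this cheap pipeline covers most homographs that neural G2P models get wrong.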
Real-Time Speech AI just got faster with Parakeet-Realtime-EOU-120m.
This NVIDIA streaming ASR model is designed specifically for Voice AI agents requiring low-latency interactions.
* Ultra-Low Latency: Achieves streaming recognition with latency as low as 80ms.
* Smart EOU Detection: Automatically signals "End-of-Utterance" with a dedicated <EOU> token, allowing agents to know exactly when a user stops speaking without long pauses.
* Efficient Architecture: Built on the cache-aware FastConformer-RNNT architecture with 120M parameters, optimized for edge deployment.
🤗 Try the model on Hugging Face: https://huggingface.co/nvidia/parakeet_realtime_eou_120m-v1
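On the agent side, the <EOU> token makes turn-taking trivial: close the utterance as soon as the marker arrives instead of waiting out a silence timeout. A sketch of the consumer loop (the streaming API shape here is hypothetical; only the <EOU> token comes from the model card):

```python
def collect_utterances(token_stream, eou="<EOU>"):
    """Accumulate streaming ASR tokens and close an utterance whenever the
    model emits its end-of-utterance marker, so a voice agent can respond
    immediately rather than waiting for a fixed silence timeout."""
    utterances, current = [], []
    for tok in token_stream:
        if tok == eou:
            if current:
                utterances.append(" ".join(current))
            current = []
        else:
            current.append(tok)
    if current:  # trailing partial utterance, if the stream ends mid-turn
        utterances.append(" ".join(current))
    return utterances
```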
Supertone released their models at https://huggingface.co/spaces/Supertone/supertonic. Fast and well-tuned NAR TTS with flow matching. Sounds a bit uniform, but overall very nice.
No code, just ONNX model.
Paper here:
https://arxiv.org/abs/2503.23108
SupertonicTTS: Towards Highly Efficient and Streamlined Text-to-Speech System
Hyeongju Kim, Jinhyeok Yang, Yechan Yu, Seunghun Ji, Jacob Morton, Frederik Bous, Joon Byun, Juheon Lee
We introduce SupertonicTTS, a novel text-to-speech (TTS) system designed for efficient and streamlined speech synthesis. SupertonicTTS comprises three components: a speech autoencoder for continuous latent representation, a text-to-latent module leveraging flow-matching for text-to-latent mapping, and an utterance-level duration predictor. To enable a lightweight architecture, we employ a low-dimensional latent space, temporal compression of latents, and ConvNeXt blocks. The TTS pipeline is further simplified by operating directly on raw character-level text and employing cross-attention for text-speech alignment, thus eliminating the need for grapheme-to-phoneme (G2P) modules and external aligners. In addition, we propose context-sharing batch expansion that accelerates loss convergence and stabilizes text-speech alignment with minimal memory and I/O overhead. Experimental results demonstrate that SupertonicTTS delivers performance comparable to contemporary zero-shot TTS models with only 44M parameters, while significantly reducing architectural complexity and computational cost. Audio samples are available at: this https URL.
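For readers unfamiliar with the flow-matching part: inference starts from Gaussian noise and integrates a learned velocity field with a few ODE steps to reach a speech latent. A minimal numpy sketch, where the "network" is replaced by the closed-form conditional velocity for a linear probability path (a stand-in, not the SupertonicTTS model):

```python
import numpy as np

# Flow-matching inference in miniature: Euler-integrate a velocity
# field v(x, t) from t=0 (noise) to t=1 (target latent).
def velocity(x, t, target):
    # For the linear path x_t = (1 - t) * x0 + t * x1, the conditional
    # velocity pointing at the target is (x1 - x_t) / (1 - t).
    return (target - x) / max(1.0 - t, 1e-6)

def sample(target, steps=50, dim=8, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)      # x0 ~ N(0, I)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity(x, t, target)  # Euler update
    return x

target = np.ones(8)   # pretend latent predicted from text
x1 = sample(target)
print(np.allclose(x1, target, atol=1e-2))  # True: the ODE lands on the target
```

In a real NAR TTS system the velocity comes from a trained network conditioned on text, and the resulting latent is decoded to audio; the small number of deterministic steps is what makes this family fast.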
So no more Kyutai? Gradium is out of stealth to solve voice; the team includes Laurent Mazare and Alexandre Défossez.
https://x.com/mattturck/status/1995899063175155852
A major new entrant in voice AI: @GradiumAI
If we were designing computers from scratch today, the default interface probably wouldn’t be a keyboard. It would be voice.
Voice is the most natural interface we have, and probably the most underserved modality…
Interspeech 2026 challenges are about to start:
* NeckVibe Challenge: Voice Disorder Detection via Real-World Monitoring of Neck-Surface Vibration
* TidyVoice Challenge: Cross-Lingual Speaker Verification
* Transfer of Pragmatic Intent in Speech-to-Speech Translation
* Audio Encoder Capability Challenge for Large Audio Language Models
* IQRA: Arabic Mispronunciation Detection and Diagnosis Challenge
* Audio Reasoning Challenge
* Unsupervised Speech in the Wild Challenge https://upschallenge.org/