Thursday, March 20, 2025

Artificial Intelligence news


Improving the Quality of Neural TTS Using Long-form Content and Multi-speaker Multi-style Modeling



Neural text-to-speech (TTS) can provide quality close to natural speech if an adequate amount of high-quality speech material is available for training. However, acquiring speech data for TTS training is costly and time-consuming, especially if the goal is to generate different speaking styles. In this work, we show that we can transfer speaking style across speakers and improve the quality of synthetic speech by training a multi-speaker multi-style (MSMS) model with long-form recordings, in addition to regular TTS recordings. In particular, we show that 1) multi-speaker modeling improves the…
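The core idea of multi-speaker multi-style (MSMS) modeling is to condition a single TTS model on both a speaker identity and a speaking style, so that styles learned from one speaker's data (e.g., long-form recordings) can transfer to another. A minimal numpy sketch of that conditioning step is below; the dimensions, lookup tables, and function names are illustrative stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
N_SPEAKERS, N_STYLES, EMB_DIM, ENC_DIM = 4, 3, 8, 16

# Learned lookup tables (random stand-ins here).
speaker_table = rng.normal(size=(N_SPEAKERS, EMB_DIM))
style_table = rng.normal(size=(N_STYLES, EMB_DIM))

def condition_encoder(encoder_states, speaker_id, style_id):
    """Broadcast a speaker embedding and a style embedding across the
    phoneme encoder states, as in a multi-speaker multi-style TTS model.
    Any (speaker, style) pair can be requested at synthesis time, which
    is what enables style transfer across speakers."""
    T = encoder_states.shape[0]
    spk = np.tile(speaker_table[speaker_id], (T, 1))
    sty = np.tile(style_table[style_id], (T, 1))
    return np.concatenate([encoder_states, spk, sty], axis=-1)

states = rng.normal(size=(50, ENC_DIM))  # 50 phoneme frames
conditioned = condition_encoder(states, speaker_id=2, style_id=1)
print(conditioned.shape)  # (50, 32)
```

Because speaker and style are separate inputs, the decoder can be asked for a combination never seen in training, e.g., speaker 2 in the long-form style recorded only for speaker 0.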




M2R2: Mixture of Multi-Rate Residuals for Efficient Transformer Inference

Residual transformations enhance the representational depth and expressive power of large language models (LLMs). However, applying static residual transformations across all tokens in auto-regressive generation leads to a suboptimal trade-off between inference efficiency and generation fidelity. Existing methods,...
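The trade-off the abstract describes is that applying the same residual transformation to every token wastes compute on tokens that don't need it. One simple way to picture a per-token "rate" is a mask that lets some tokens take the full residual update while others pass through unchanged. The sketch below is only an illustration of that idea, not M2R2's actual routing mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16

W = rng.normal(scale=0.1, size=(D, D))  # stand-in for a transformer sublayer

def f(x):
    return np.tanh(x @ W)

def gated_residual(tokens, apply_mask):
    """Apply the residual transformation x + f(x) only to tokens whose
    mask flag is set; the rest take the identity path, saving the cost
    of f for those tokens."""
    out = tokens.copy()
    mask = apply_mask.astype(bool)
    out[mask] = tokens[mask] + f(tokens[mask])
    return out

tokens = rng.normal(size=(6, D))
apply_mask = np.array([1, 0, 1, 1, 0, 1])  # e.g., "easy" tokens skip
y = gated_residual(tokens, apply_mask)
print(np.allclose(y[1], tokens[1]))  # True: skipped token is untouched
```

In this toy version the mask is fixed by hand; the interesting part of any real method is learning when each token can afford the cheaper path without hurting generation fidelity.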

Does Spatial Cognition Emerge in Frontier Models?

Not yet. We present SPACE, a benchmark that systematically evaluates spatial cognition in frontier models. Our benchmark builds on decades of research in cognitive science. It evaluates large-scale mapping abilities that are brought to bear when an organism...

SELMA: A Speech-Enabled Language Model for Virtual Assistant Interactions

In this work, we present and evaluate SELMA, a Speech-Enabled Language Model for virtual Assistant interactions that integrates audio and text as inputs to a Large Language Model (LLM). SELMA is designed to handle three primary and two...
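A common pattern for feeding audio and text jointly into an LLM is to project audio encoder features into the model's embedding space and prepend them to the text token embeddings, giving the LLM one combined input sequence. The sketch below illustrates that general pattern with random stand-in weights; it is not SELMA's actual implementation, and all dimensions and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
AUDIO_DIM, MODEL_DIM, VOCAB = 32, 64, 100

proj = rng.normal(scale=0.1, size=(AUDIO_DIM, MODEL_DIM))  # audio -> LLM space
tok_emb = rng.normal(scale=0.1, size=(VOCAB, MODEL_DIM))   # token embedding table

def build_llm_input(audio_features, token_ids):
    """Project audio frames into the LLM embedding space and prepend
    them to the text token embeddings, so a single decoder can attend
    over both modalities."""
    audio_emb = audio_features @ proj        # (T_audio, MODEL_DIM)
    text_emb = tok_emb[token_ids]            # (T_text, MODEL_DIM)
    return np.concatenate([audio_emb, text_emb], axis=0)

audio = rng.normal(size=(20, AUDIO_DIM))  # 20 audio encoder frames
tokens = np.array([5, 17, 42])            # 3 text tokens
seq = build_llm_input(audio, tokens)
print(seq.shape)  # (23, 64)
```

Once both modalities live in the same embedding space, the same LLM can be trained on several tasks at once (e.g., the "three primary and two" auxiliary tasks the abstract alludes to) without a separate audio head per task.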