Wednesday, December 11, 2024

Artificial Intelligence news

Google’s new Project Astra...

Google DeepMind has announced an impressive grab bag of new products and...

Bluesky has an impersonator...

Like many others, I recently fled social media platform X for Bluesky....

AI’s hype and antitrust...

This story originally appeared in The Algorithm, our weekly newsletter on AI....

We saw a demo...

One afternoon in late November, I visited a weapons test site in...

Less Is More: A Unified Architecture for Device-Directed Speech Detection with Multiple Invocation Types



Suppressing unintended device invocations caused by speech that sounds like a wake-word, or by accidental button presses, is critical for a good user experience; this is referred to as False-Trigger-Mitigation (FTM). When multiple invocation options are available, the traditional approach to FTM is to use either invocation-specific models or a single model for all invocation types. Both approaches are sub-optimal: the memory cost of the former grows linearly with the number of invocation options, which is prohibitive for on-device deployment, and it fails to take advantage of shared training data;…
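
To make the trade-off concrete, here is a minimal sketch (not the paper's actual architecture) of the unified direction the abstract argues for: one shared acoustic encoder conditioned on a lightweight invocation-type embedding, rather than N separate invocation-specific FTM models. All module choices and sizes below are illustrative assumptions.

```python
# Illustrative sketch only; the real paper's architecture may differ.
import torch
import torch.nn as nn

class UnifiedFTM(nn.Module):
    def __init__(self, feat_dim=80, hidden=128, num_invocation_types=3):
        super().__init__()
        # Shared acoustic encoder: its parameters are reused across all
        # invocation types, so memory no longer grows linearly with them.
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        # Per-invocation conditioning costs only one embedding row per type.
        self.invocation_emb = nn.Embedding(num_invocation_types, hidden)
        # Single binary head: intended invocation vs. false trigger.
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, features, invocation_type):
        # features: (batch, time, feat_dim); invocation_type: (batch,)
        _, h = self.encoder(features)       # h: (1, batch, hidden)
        utt = h.squeeze(0)                  # utterance-level summary
        cond = self.invocation_emb(invocation_type)
        return self.head(torch.cat([utt, cond], dim=-1))  # FTM logits

model = UnifiedFTM()
logits = model(torch.randn(4, 100, 80), torch.tensor([0, 1, 2, 0]))
print(logits.shape)  # torch.Size([4, 2])
```

Because the encoder is shared, every invocation type's training data updates the same backbone, which is the data-sharing advantage the abstract says per-invocation models give up.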




Memory-Retaining Finetuning via Distillation

This paper was accepted at the Fine-Tuning in Modern Machine Learning: Principles and Scalability (FITML) Workshop at NeurIPS 2024. Large language models (LLMs) pretrained on large corpora of internet text possess much of the world's knowledge. Following pretraining, one...
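As context for the title, a common recipe for retaining pretrained knowledge during finetuning is to add a distillation term toward the frozen pretrained model. The sketch below shows that generic recipe; the loss weighting `alpha` and the token-level KL formulation are assumptions, not the paper's actual method.

```python
# Generic distillation-regularized finetuning loss; illustrative only.
import torch
import torch.nn.functional as F

def memory_retaining_loss(student_logits, teacher_logits, labels, alpha=0.5):
    # Standard next-token finetuning loss on the new task data.
    vocab = student_logits.size(-1)
    task_loss = F.cross_entropy(student_logits.view(-1, vocab), labels.view(-1))
    # Distillation term: keep the student's token distribution close to the
    # frozen pretrained teacher's, discouraging catastrophic forgetting.
    log_p_student = F.log_softmax(student_logits, dim=-1).view(-1, vocab)
    p_teacher = F.softmax(teacher_logits, dim=-1).view(-1, vocab)
    kl = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return (1 - alpha) * task_loss + alpha * kl
```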

Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling

Diffusion models have emerged as a powerful tool for generating high-quality images from textual descriptions. Despite their successes, these models often exhibit limited diversity in the sampled images, particularly when sampling with a high classifier-free guidance weight. To...
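For reference, the classifier-free guidance weight the abstract mentions enters the standard sampling update below (this is the common CFG rule, not Kaleido's contribution): the noise estimate is extrapolated from the unconditional prediction toward the conditional one, and large weights sharpen prompt adherence at the cost of sample diversity.

```python
# Standard classifier-free guidance combination; shapes are illustrative.
import torch

def cfg_noise_estimate(eps_uncond, eps_cond, guidance_weight):
    # guidance_weight = 1.0 recovers purely conditional sampling;
    # larger values trade diversity for prompt fidelity.
    return eps_uncond + guidance_weight * (eps_cond - eps_uncond)

eps_u, eps_c = torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)
eps = cfg_noise_estimate(eps_u, eps_c, guidance_weight=7.5)
```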

Towards Time-Series Reasoning with LLMs

Multi-modal large language models (MLLMs) have enabled numerous advances in understanding and reasoning in domains like vision, but we have not yet seen this broad success for time-series. Although prior works on time-series MLLMs have shown promising performance...
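One common way prior time-series MLLM work connects a series to a language model is to slice it into fixed-length patches and project each patch into the LLM's embedding space as "soft tokens". The sketch below shows that generic pattern; the patch length and dimensions are illustrative assumptions, not this paper's design.

```python
# Generic time-series patch embedding for an LLM; illustrative only.
import torch
import torch.nn as nn

class TimeSeriesPatchEmbed(nn.Module):
    def __init__(self, patch_len=16, llm_dim=768):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len, llm_dim)

    def forward(self, series):
        # series: (batch, length); length must divide evenly by patch_len here.
        b, t = series.shape
        patches = series.view(b, t // self.patch_len, self.patch_len)
        return self.proj(patches)  # (batch, num_patches, llm_dim)

emb = TimeSeriesPatchEmbed()(torch.randn(2, 128))
print(emb.shape)  # torch.Size([2, 8, 768])
```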