
State Spaces Aren’t Enough: Machine Translation Needs Attention



Structured State Spaces for Sequences (S4) is a recently proposed sequence model with successful applications in various tasks, e.g., vision, language modeling, and audio. Thanks to its mathematical formulation, it compresses its input into a single hidden state and can capture long-range dependencies without an attention mechanism. In this work, we apply S4 to Machine Translation (MT) and evaluate several encoder-decoder variants on WMT'14 and WMT'16. In contrast to its success in language modeling, we find that S4 lags behind the Transformer…
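For orientation, here is a minimal NumPy sketch of the discretized linear state-space recurrence that S4-style layers build on. The matrices, sizes, and initialization below are illustrative assumptions, not the paper's parameterization; S4 itself uses a structured, HiPPO-initialized state matrix and an equivalent convolutional form for efficient training.

```python
import numpy as np

# Minimal sketch of the discretized linear state-space recurrence behind
# S4-style layers. All matrices and sizes here are illustrative only.

def ssm_recurrence(u, A, B, C):
    """Run h_k = A @ h_{k-1} + B @ u_k, y_k = C @ h_k over a sequence.

    u: (L, d_in) input sequence
    A: (n, n) state transition, B: (n, d_in), C: (d_out, n)
    Returns y: (L, d_out). The entire history is compressed into the single
    hidden state h, which is how long-range context is carried without
    attending over past tokens.
    """
    n = A.shape[0]
    h = np.zeros(n)
    ys = []
    for u_k in u:
        h = A @ h + B @ u_k
        ys.append(C @ h)
    return np.stack(ys)

rng = np.random.default_rng(0)
L, d_in, d_out, n = 16, 4, 4, 8
# Scale A toward the identity so the toy recurrence stays stable.
A = 0.9 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, d_in))
C = rng.standard_normal((d_out, n))
u = rng.standard_normal((L, d_in))
print(ssm_recurrence(u, A, B, C).shape)  # (16, 4)
```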





Swallowing the Bitter Pill: Simplified Scalable Conformer Generation

We present a novel way to predict molecular conformers through a simple formulation that sidesteps many of the heuristics of prior works and achieves state-of-the-art results by leveraging the advantages of scale. By training a...

KPConvX: Modernizing Kernel Point Convolution with Kernel Attention

In the field of deep point cloud understanding, KPConv is a unique architecture that uses kernel points to locate convolutional weights in space, instead of relying on Multi-Layer Perceptron (MLP) encodings. While it initially achieved success, it has...
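As a rough illustration of the idea described above, the following sketch aggregates neighbor features with weight matrices attached to kernel points placed in 3D space, using a simple distance-based influence. The kernel-point placement, influence radius `sigma`, and tensor shapes are assumptions for illustration, not the exact KPConv/KPConvX formulation.

```python
import numpy as np

# Sketch of a kernel-point convolution: weights live at fixed 3D "kernel
# points", and each neighbor contributes through a distance-based influence
# on those points. Shapes and values are illustrative only.

def kernel_point_conv(center, neighbors, feats, kernel_pts, W, sigma=0.3):
    """center: (3,), neighbors: (N, 3), feats: (N, C_in),
    kernel_pts: (K, 3) offsets from the center, W: (K, C_in, C_out)."""
    rel = neighbors - center                                  # (N, 3)
    # Influence of each kernel point on each neighbor, clipped at zero.
    dist = np.linalg.norm(rel[:, None, :] - kernel_pts[None, :, :], axis=-1)  # (N, K)
    infl = np.maximum(0.0, 1.0 - dist / sigma)                # (N, K)
    # Each kernel point applies its own weight matrix to the features it "sees".
    return np.einsum("nk,nc,kcd->d", infl, feats, W)          # (C_out,)

rng = np.random.default_rng(0)
N, K, C_in, C_out = 32, 15, 8, 16
center = np.zeros(3)
neighbors = 0.2 * rng.standard_normal((N, 3))
feats = rng.standard_normal((N, C_in))
kernel_pts = 0.2 * rng.standard_normal((K, 3))
W = rng.standard_normal((K, C_in, C_out)) / np.sqrt(C_in * K)
print(kernel_point_conv(center, neighbors, feats, kernel_pts, W).shape)  # (16,)
```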

Efficient Diffusion Models without Attention

Transformers have demonstrated impressive performance on class-conditional ImageNet benchmarks, achieving state-of-the-art FID scores. However, their computational cost grows with transformer depth/width and with the number of input tokens, requiring patch-based approximations to operate even on latent input sequences....
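To make that scaling point concrete, here is a toy calculation of how the self-attention score matrices alone grow quadratically with the token count; the head and layer counts below are made-up values for illustration, not those of any specific model.

```python
# Toy illustration: the number of attention-score entries grows with the
# square of the token count, which is why attention-based diffusion
# backbones typically operate on patchified / compressed latents.
# Head and layer counts are illustrative assumptions.

def attention_score_entries(num_tokens, num_heads=16, num_layers=28):
    # One (num_tokens x num_tokens) score matrix per head per layer.
    return num_layers * num_heads * num_tokens ** 2

for tokens in (256, 1024, 4096):  # finer latents -> many more tokens
    print(tokens, f"{attention_score_entries(tokens):,}")
```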