Sunday, November 10, 2024

Artificial Intelligence news

How ChatGPT search paves...

This story originally appeared in The Algorithm, our weekly newsletter on AI....

This AI-generated Minecraft may...

When you walk around in a version of the video game Minecraft...

OpenAI brings a new...

ChatGPT can now search the web for up-to-date answers to a user’s...

Chasing AI’s value in...

Inspired by an unprecedented opportunity, the life sciences sector has gone all...
HomeNewsGoogle is throwing...

Google is throwing generative AI at everything


Google is stuffing powerful new AI tools into tons of its existing products and launching a slew of new ones, including a coding assistant, it announced at its annual I/O conference today. 

Billions of users will soon see Google’s latest AI language mode, PaLM 2, integrated into over 25 products like Maps, Docs, Gmail, Sheets, and the company’s chatbot, Bard. For example, people will be able to simply type a request such as “Write a job description” into a text box that appears in Google Docs, and the AI language model will generate a text template that users can customize. 

Because of safety and reputational risks, Google has been slower than competitors to launch AI-powered products. But fierce competition from competitors Microsoft, OpenAI, and others has left it no choice but to start, says Chirag Shah, a computer science professor at the University of Washington.

It’s a high-risk strategy, given that AI language models have numerous flaws with no known fixes. Embedding them into its products could backfire and run afoul of increasingly hawkish regulators, experts warn.

Google is also opening up access to its ChatGPT competitor, Bard, from a select group in the US and the UK to the general public in over 180 countries. Bard will “soon” allow people to prompt it using images as well as words, Google said, and the chatbot will be able to reply to queries with pictures. Google is also launching AI tools that let people generate and debug code.

Google has been using AI technology for years in products like text translation and speech recognition. But this is the company’s biggest push yet to integrate the latest wave of AI technology into a variety of products. 

“[AI language models’] capabilities are getting better. We’re finding more and more places where we can integrate them into our existing products, and we’re also finding real opportunities to provide value to people in a bold but responsible way,” Zoubin Ghahramani, vice president of Google DeepMind, told MIT Technology Review. 

“This moment for Google is really a moment where we are seeing the power of putting AI in people’s hands,” he says.

The hope, Ghahramani says, is that people will get so used to these tools that they will become an unremarkable part of day-to-day life.  

One-stop shop

Google’s announcement comes as rivals like Microsoft, OpenAI, and Meta and open-source groups like Stability.AI compete to launch impressive AI tools that can summarize text, fluently answer people’s queries, and even produce images and videos from word prompts. 

With this updated suite of AI-powered products and features, Google is targeting not only individuals but also startups, developers, and companies that might be willing to pay for access to models, coding assistance, and enterprise software, says Shah.

“It’s very important for Google to be that one-stop shop,” he says. 

Google is making new features and models available that harness its AI language technology as a coding assistant, allowing people to generate and complete code and converse with a chatbot to get help with debugging and code-related questions. 

The trouble is that the sorts of large language models Google is embedding in its products are prone to making things up. Google experienced this firsthand when it originally announced it was launching Bard as a trial in the US and the UK. Its own advertising for the bot contained a factual error, an embarrassment that wiped billions off the company’s stock price. 

Google faces a trade-off between releasing new, exciting AI products and doing scientific research that would make its technology reproducible and allow external researchers to audit it and test it for safety, says Sasha Luccioni, an AI researcher at AI startup Hugging Face. 

In the past, Google has taken a more open approach and has open-sourced its language models, such as BERT in 2018. “But because of the pressure from the market and from OpenAI, they’re shifting all that,” Luccioni says.

The risk with code generation is that users will not be skilled enough at programming to spot any errors introduced by AI, says Luccioni. That could lead to buggy code and broken software. There is also a risk of things going wrong when AI language models start giving advice on life in the real world, she adds.

Even Ghahramani warns that businesses should be careful about what they choose to use these tools for, and he urges them to check the results thoroughly rather than just blindly trusting them. 

“These models are very powerful. If they generate things that are flawed, then with software you have to be concerned about whether you just take the generated output and incorporate it into your mission-critical software,” he says. 

But there are risks associated with AI language models that even the most up-to-date and tech-savvy people have barely begun to understand. It is hard to detect when text and, increasingly, images are AI generated, which could allow these tools to be used for disinformation or scamming on a large scale. 

The models are easy to “jailbreak” so that they violate their own policies against, for example, giving people instructions to do something illegal. They are also vulnerable to attacks from hackers when integrated into products that browse the web, and there is no known fix for that problem. 

Ghahramani says Google does regular tests to improve the safety of its models and has built in controls to prevent people from generating toxic content. But he admits that it still hasn’t solved that vulnerability—nor the problem of “hallucination,” in which chatbots confidently generate nonsense. 

Hard launch

Going all in on generative AI could backfire on Google. Tech companies are currently facing heightened scrutiny from regulators over their AI products. The EU is finalizing its first AI regulation, the AI Act, while in the US, the White House recently summoned leaders from Google, Microsoft, and OpenAI to discuss the need to develop AI responsibly. US federal agencies, such as the Federal Trade Commission, have signaled that they are paying more attention to the harm AI can cause.

Shah says that if some of the AI-related fears do end up panning out, it could give regulators or lawmakers grounds for action with the teeth to actually hold Google accountable. 

But in a fight to retain its grip on the enterprise software market, Google feels it can’t risk losing out to its rivals, Shah believes. “This is the war they created,” he says. And at the moment, “there’s very little to nothing to stop them.”



Article Source link and Credit

Continue reading

Cultivating the next generation of AI innovators in a global tech hub

A few years ago, I had to make one of the biggest decisions of my life: continue as a professor at the University of Melbourne or move to another part of the world to help build a brand...

Palmer Luckey’s vision for the future of mixed reality

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. War is a catalyst for change, an expert in AI and warfare told me in...

AI will add to the e-waste problem. Here’s what we can do about it.

Generative AI could account for up to 5 million metric tons of e-waste by 2030, according to a new study. That’s a relatively small fraction of the current global total of over 60 million metric tons of e-waste each...