Thursday, January 23, 2025


Physicist Michio Kaku exposes ‘dangerous’ side of AI chatbots


A famed theoretical physicist has issued a stark warning about the dangers of software like ChatGPT.

Michio Kaku said AI chatbots appear to be intelligent but are actually only capable of spitting out what humans have already written.

The technology, which is freely available, is unable to detect whether something is false and can therefore be “tricked” into giving wrong information.

“Even though there is a good aspect to all these software programs, the downside is that you can fabricate, because it can’t tell the difference between what is true and false,” he said in a recent episode of the Joe Rogan Experience.

“They are just instructed to cobble together existing paragraphs, splice them together, polish it up and spit it out. But is it correct? It doesn’t care, and it doesn’t know.”


“A chatbot is like a teenager who plagiarises and passes things off as their own.”

However, Kaku said there was a possibility that quantum computing, which computes with atoms rather than silicon microchips, could be adapted in the future to act as a fact checker.

Kaku believes the power of quantum computing could eradicate the issues presented by consumer-tier chatbots.

“When they get together, watch out,” he said.


“Quantum computers can act as a fact checker. You can ask it to remove all the garbage from articles. So the hardware may act as a check for all the wild statements made by the software.”

Kaku’s warning came after Geoffrey Hinton, an AI pioneer known as the “godfather of artificial intelligence”, announced his resignation from Google, citing growing concerns about the potential dangers of artificial intelligence.

He said AI systems like GPT-4 already eclipse humans in terms of general knowledge and could soon surpass them in reasoning ability as well.

Within a few short months of the service becoming available, people had already used it to generate income.


Hinton described the “existential risk” AI poses to modern life, highlighting the possibility of corrupt leaders using it to interfere with democracy.

He also expressed concern about the potential for “bad actors” to misuse AI technology, such as Russian President Vladimir Putin giving robots autonomy that could lead to dangerous outcomes.

“Right now, what we’re seeing is things like GPT-4 eclipse a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning,” he said in a recent interview aired by the BBC.

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”
