Microsoft, Google chatbots made false claims of cease-fire in Israel-Hamas war: report

AI chatbots operated by Microsoft and Google are spitting out incorrect information about the Israel-Hamas war – including false claims that the two sides agreed to a cease-fire.

Google’s Bard declared in one response on Monday that “both sides are committed” to maintaining peace “despite some tensions and occasional flare-ups of violence,” according to Bloomberg.

Bing Chat wrote Tuesday that “the ceasefire signals an end to the immediate bloodshed.”

No such ceasefire has occurred. Hamas has continued firing barrages of rockets into Israel, while Israel's military on Friday ordered the evacuation of approximately 1 million people in Gaza ahead of an expected ground invasion to root out the terrorist group.

Google’s Bard also bizarrely predicted on Oct. 9 that “as of October 11, 2023, the death toll has surpassed 1,300.”

The chatbots “spit out glaring errors at times that undermine the overall credibility of their responses and risk adding to public confusion about a complex and rapidly evolving war,” Bloomberg reported after conducting the analysis.

The issues were discovered after Google’s Bard and Microsoft’s Bing Chat were asked to answer a series of questions about the war – which broke out last Saturday after Hamas launched a surprise attack on Israeli border towns and military bases, killing more than 1,200 people.


Despite the errors, Bloomberg noted that the chatbots “generally stayed balanced on a sensitive topic, and often gave decent news summaries” in response to user questions. Bard reportedly apologized and retracted its claim about the ceasefire when asked if it was sure, while Bing had amended its response by Wednesday.

Both Microsoft and Google have acknowledged to users that their chatbots are experimental and prone to including false information in their responses to user prompts.

These inaccurate answers, known as “hallucinations,” are a source of particular concern for critics who warn that AI chatbots are fueling the spread of misinformation.


When reached for comment, a Google spokesperson said the company released Bard and its AI-powered search features as "opt-in experiments" and is "always working to improve their quality and reliability."

“We take information quality seriously across our products, and have developed protections against low-quality information along with tools to help people learn more about the information they see online,” the Google spokesperson said.

“We continue to quickly implement improvements to better protect against low quality or outdated responses for queries like these,” the spokesperson added.


Google noted that its trust and safety teams are actively monitoring Bard and working quickly to address issues as they arise.

Microsoft told the outlet that it had investigated the mistakes and would be making adjustments to the chatbot in response.

"We have made significant progress in the chat experience by providing the system with text from the top search results and instructions to ground its responses in these top search results, and we will continue making further investments to do so," a Microsoft spokesperson said.


The Post has reached out to Microsoft for further comment.

Earlier this year, experts told The Post that AI-generated “deepfake” content could wreak havoc on the 2024 presidential election if protective measures aren’t in place ahead of time.

In August, British researchers found that ChatGPT, the chatbot created by Microsoft-backed OpenAI, generated cancer treatment regimens that contained a “potentially dangerous” mixture of correct and false information.
