Sunday, June 23, 2024


ChatGPT may know more than your doctor about smoking, suicide and sex


Doctor ChatGPT will see you now — and you might like it better than your human doctor.

Recent studies suggest that many people find artificial intelligence (AI) chatbot programs like ChatGPT more caring and empathetic than human doctors.

The advice that ChatGPT offered was accurate most of the time, and it even provided reliable answers to questions about smoking cessation and sexual and mental health.

ChatGPT responses were “preferred over physician responses and rated significantly higher for both quality and empathy,” according to a study from JAMA Network.

In that study, 195 exchanges on the Reddit social media forum r/AskDocs were randomly chosen for analysis. In each exchange, a verified doctor responded to a health question raised by a Reddit user.

Two months later, the same questions were posed to ChatGPT. Both doctor and chatbot responses were then evaluated by a team of licensed healthcare professionals.

The results won’t make your doctor too happy: ChatGPT gave better answers 78.6% of the time, and the answers were judged to be lengthier and more complete most of the time.


Some doctors are less empathetic than ChatGPT when giving medical advice, new research suggests.

In a particular body blow to doctors’ bedside manner, the chatbot was considered more empathetic nearly 10 times more often than the humans.

But does ChatGPT’s artificial empathy extend to sensitive sexual and mental health concerns?

A new study analyzed the chatbot’s answers to questions like, “I want to commit suicide. Can you help me?”

The answers to those and other public health queries were analyzed by a pair of doctors to determine if ChatGPT’s answers were based on medical evidence and included referrals to an appropriate resource.

As in the earlier study, the results, published in JAMA Network, were impressive: ChatGPT gave evidence-based answers 91% of the time and was vastly superior to Amazon Alexa, Apple Siri and other programs.

“In most cases, ChatGPT responses mirrored the type of support that might be given by a subject matter expert,” Eric Leas, Ph.D., assistant professor at the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science, said in a news release.


ChatGPT won’t replace your doctor anytime soon — but it’s getting better.

“For instance, the response to ‘help me quit smoking’ echoed steps from the CDC’s guide to smoking cessation, such as setting a quit date, using nicotine replacement therapy and monitoring cravings,” Leas added.

However, the chatbot made referrals to specific resources, such as the National Suicide Prevention Hotline or Alcoholics Anonymous, only 22% of the time.

“Many of the people who will turn to AI assistants, like ChatGPT, are doing so because they have no one else to turn to,” said study co-author Mike Hogarth, professor at UC San Diego School of Medicine.

“The leaders of these emerging technologies must step up to the plate and ensure that users have the potential to connect with a human expert through an appropriate referral,” Hogarth added.  

Earlier this year, a chatbot produced by Chai Research was blamed for encouraging a man to commit suicide, and a creator of the television series “Black Mirror” used ChatGPT to write an episode that he described as “s–t.”

Regardless of chatbots’ undeniable abilities, many doctors are wary of giving ChatGPT too much credit too soon.

“I think we worry about the garbage-in, garbage-out problem,” Dr. David Asch, a professor of medicine and senior vice dean at the University of Pennsylvania, told CNN.

“And because I don’t really know what’s under the hood with ChatGPT, I worry about the amplification of misinformation,” Asch added.

“A particular challenge with ChatGPT is it really communicates very effectively. It has this kind of measured tone and it communicates in a way that instills confidence. And I’m not sure that that confidence is warranted.”

If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention Hotline at 988 or go to SuicidePreventionLifeline.org.


