Monday, June 24, 2024

Criminals are using AI in terrifying ways


Artificial intelligence is the ultimate double-edged sword.

It’s advancing medical technology at an astonishing rate and improving the quality of life globally, but it’s also being used already for nefarious purposes.

“When we’re talking about bad actors, stuff is now available to a lot of people who wouldn’t otherwise think of themselves as technically sophisticated,” J.S. Nelson, a cybersecurity expert and a visiting researcher in business ethics at Harvard Law School, told The Post.

“It’s happening on a global scale,” Lisa Palmer, chief AI strategist for the consulting firm AI Leaders, told The Post. “This isn’t just something that’s happening in the United States. It’s a problem in multiple countries.”

Through AI, individuals’ facial data has been used to create pornographic imagery, while others have had their voices cloned to trick family and close friends over the phone, often into sending money to a scammer.

Read on to learn more about the frightening ways AI is being used to exploit and steal from people — and how it’s likely to only get worse.

Generative AI and Deepfakes


AI image-generation apps can put a person’s biometric data at risk.
Getty Images/iStockphoto

Fake images of Donald Trump created with AI went viral for appearing so realistic.
Twitter / Eliot Higgins

Popular photo apps where users submit snaps of themselves and have AI render them into a sci-fi character or a piece of Renaissance art have a very dark side.

When Melissa Heikkilä of the MIT Technology Review tested the hit app Lensa AI, it generated “tons of nudes” and “overtly sexualized” images without her consent, she wrote at the end of 2022.

“Some of those apps, in their terms of service, they make it very clear that you are sharing your face to their data storage,” said Palmer, who gave a keynote Wednesday on AI’s potential benefits and downsides to the Society of Information Management.

And, in the wrong hands, the theft of a person’s biometric facial data could be catastrophic.

She continued, “That’s a terrible case scenario where somebody could potentially breach a [military or government] facility as a result of having someone’s biometric data.”

Easily made deepfake and generative AI content, such as false images of Donald Trump being arrested, is also emerging. Palmer is “exceptionally concerned” this will be a problem come the next election cycle.

Particularly, she fears unethical — but not illegal — uses that some politicians might see as “just smart marketing.”

Nelson, who preaches “how dangerous it is to have AI just make stuff up,” also fears that easy access to generative AI could lead to fake news and mass panics — such as a computer-generated extreme weather event being widely shared on social media.

She said, “It’s going to keep going way off the rails. We are just starting to see this all happen.”

Phishing


AI is enhancing the abilities of phishing scams.
Getty Images/iStockphoto

AI is bringing a high degree of sophistication to scam emails and robocalls, experts warn.

“It’s very compelling,” Palmer said. “Now they can create these phishing emails at [a massive] scale that are personalized.” She added that phishers will include convincing pieces of personal information taken from a target’s online profile.

ChatGPT recently introduced Code Interpreter, a plug-in that can access and break down major datasets in minutes. It can make a scammer’s life substantially easier.

“You [could] have somebody that gets access to an entire list of political donors and their contact information,” she added. “Perhaps it has some demographic information about how, ‘We really appreciate your last donation of X number of dollars.’”
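The mass personalization Palmer describes takes almost no technical skill to automate. As a purely illustrative sketch (the names and donation records here are invented), a few lines of templating code can turn a leaked contact list into individually tailored messages:

```python
from string import Template

# Hypothetical leaked records -- the point is how little data it takes
# to make a generic message read as personal.
donors = [
    {"name": "Jane Doe", "last_gift": "$250", "cause": "the river cleanup"},
    {"name": "John Roe", "last_gift": "$1,000", "cause": "the food bank"},
]

# One template, filled in per record.
msg = Template(
    "Dear $name, we really appreciate your last donation of $last_gift "
    "to $cause. Could you renew your support today?"
)

emails = [msg.substitute(d) for d in donors]
for e in emails:
    print(e)
```

Pair a template like this with a chatbot that rewrites each message in a different voice, and every email in the batch looks hand-written.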

AI is also making phony phone calls more convincing. All that’s needed is three seconds of a person’s recorded speech; 10 to 15 seconds will produce an almost exact match of their voice, Palmer said.


AI voice scams have become a huge part of phishing.
Getty Images/iStockphoto

Last month, a mom in Arizona was convinced her daughter had been kidnapped after hearing the child’s voice cloned over the phone; the scammers demanded $1 million in ransom, an incident the FBI publicly addressed.

“If you have it [your info] public, you’re allowing yourself to be scammed by people like this,” said Dan Mayo, the assistant special agent in charge of the FBI’s Phoenix office. “They’re going to be looking for public profiles that have as much information as possible on you, and when they get ahold of that, they’re going to dig into you.”

Employees, especially in tech and finance, may also be getting calls with their boss’s fake voice on the other end, Nelson predicted.


Last month, Federal Reserve Chair Jerome Powell was tricked by a phony call into thinking he was speaking with Ukraine’s President Volodymyr Zelensky.
REUTERS

“You’re dealing with a chatbot that literally sounds like your boss,” she warned.

But ordinary citizens aren’t the only ones getting had.

At the end of April, Federal Reserve Chair Jerome Powell was tricked by pro-Putin Russian pranksters into thinking he was speaking with Ukrainian President Volodymyr Zelensky.

The dupers later broadcast the resulting lengthy conversation with Powell on Russian television.

Even Apple co-founder Steve Wozniak has his concerns about heightened scams.

“AI is so intelligent it’s open to the bad players, the ones that want to trick you about who they are,” he told BBC News. “A human really has to take the responsibility for what is generated by AI.”

Malware


AI is also enhancing the capabilities of malware.
Getty Images/iStockphoto

AI’s ability to enhance malware, which researchers have recently tested with ChatGPT, is also raising alarms among experts.

“Malware can be used to give bad actors access to the data that you store on your phone or in your iCloud,” Palmer said. “Obviously it would be things like your passwords into your banking systems, your passwords into your medical records, your passwords into your children’s school records, whatever the case may be, anything that is secured.”

Specifically, what AI can do to enhance malware is create instantaneous variants, “which makes it more and more difficult for those that are working on securing the systems to stay in front of them,” Palmer said.
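Palmer’s point about variants is essentially an attack on signature-based detection, which flags files by their hash. A minimal sketch of why it works (the “payloads” here are harmless placeholder strings): changing even one byte produces a completely different hash, so each auto-generated variant slips past a fixed blocklist.

```python
import hashlib

# Harmless stand-ins for two near-identical payloads.
original = b"payload-v1: do something malicious"
variant  = b"payload-v2: do something malicious"  # a single byte changed

sig_original = hashlib.sha256(original).hexdigest()
sig_variant  = hashlib.sha256(variant).hexdigest()

# A signature database that only knows the original sample.
blocklist = {sig_original}

print(sig_variant in blocklist)  # the variant is not recognized
```

This is why defenders have shifted toward behavioral detection; a scanner matching exact signatures has to catch every variant individually, while the generator can emit new ones instantly.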

In addition to everyday people — especially those with access to government systems — Palmer predicts that high-profile individuals will be targets for AI-assisted hacking efforts aimed at stealing sensitive information and photos.

“Ransomware is another primary target for bad actors,” she said. “They take over your system, change your password, lock you out of your own systems, and then demand ransom from you.”


