
Sam Altman — who warned AI poses ‘risk of extinction’ to humanity — is also a ‘doomsday prepper’


OpenAI boss Sam Altman is a self-admitted doomsday prepper who once bragged about his stash of guns, gold and other survival goods — long before he and other experts warned AI posed a “risk of extinction” to humanity on par with nuclear weapons and pandemics.

In 2016, a profile of Altman in the New Yorker recounted a conversation in which he told two tech entrepreneurs that one of his hobbies, besides collecting cars and flying planes, was prepping “for survival” in the event of catastrophe – such as a lethal synthetic virus or the onset of a rogue AI “that attacks us.”

“I try not to think about it too much,” Altman reportedly said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

The OpenAI executive downplayed his past remarks during an April appearance on the podcast “Honestly with Bari Weiss,” telling the journalist he was not a doomsday prepper “in the way I would think about.”

“It was like a fun hobby, but there’s nothing else to do. None of this is going to help you if [artificial general intelligence] goes wrong, but it’s like, a fun hobby,” Altman said.

The Post has reached out to OpenAI for comment.


Sam Altman has since downplayed his past talk about survival prepping.
AFP via Getty Images

Altman’s doomsday vision of AI gone wrong is commonplace in Silicon Valley, where a growing number of tech billionaires have poured money into post-apocalyptic contingency plans such as remote bunkers in recent years.

Some, such as Peter Thiel and Google co-founder Larry Page, have snapped up land in New Zealand. The same New Yorker profile revealed Altman’s “backup plan” was to fly to New Zealand with Thiel if society crumbled.

Critics of tech’s obsession with a “Terminator”-like future, such as Douglas Rushkoff, author of “Survival of the Richest: Escape Fantasies of the Tech Billionaires,” have taken a cynical view of the recent AI panic.


Experts warned this week that the risk of rogue AI was similar to that of nuclear weapons or pandemics.
Getty Images; NY Post composite

They argue the tech industry’s warnings of potential Armageddon are a convenient distraction from more pressing issues – and a way to secure a seat at the table to shape regulations that will raise the barrier of entry for potential AI rivals.

It’s also a way for Altman and other frontrunners to tout their progress toward achieving artificial general intelligence, or systems with human-level abilities, when current services like ChatGPT are actually just “trained” to regurgitate information, Rushkoff added.

“The real problem with tech bro existential panic at scale is it tends to make them ignore, or even exacerbate, the problems that are happening right now,” Rushkoff told The Post.

“They’re focused on a kind of Terminator end-game nightmare, and for me, it’s a way of distracting themselves from the very real and present dangers they pose to us,” he added. “It’s not the AI that’s going to set off a nuclear bomb in a fictional future. It’s the stuff they’re doing right now at this very moment.”

Nonetheless, catering to paranoid tech bros has become a lucrative business for some firms – including Rising S, a Texas-based bunker manufacturer that builds shelters and installs them at a customer’s preferred destination.


Some doomsday bunkers include amenities such as bowling alleys and hot tubs.
Rising S Bunkers

Rising S sells about 20 to 40 bunkers per year, depending on the size and complexity, according to Gary Lynch, a general manager at the firm for the last 20 years. The company lists prices ranging from $49,000 for its smallest model to more than $9.6 million for a decked-out bunker dubbed “The Aristocrat.”

“I’ve absolutely sold plenty to tech people,” Lynch told The Post, noting a rush of sales to customers in Silicon Valley and Napa during the COVID-19 pandemic. “It seemed like every bunker we sold was going out there for about three years.”

Lynch said tech sector customers generally buy “larger model” bunkers of 2,000 square feet or more – with bells and whistles such as onsite bowling alleys, swimming pools, hot tubs and biometric entryways.


Rising S builds and installs steel bunkers for customers around the country.
Rising S Bunkers

Lynch said none of his buyers have specifically mentioned AI as the reason for their purchase – though he noted it isn’t unusual for customers to withhold their thinking “in fear of sounding kooky.”

John Ramey, a longtime Silicon Valley entrepreneur and founder of the disaster prep blog The Prepared, said he personally disagrees “with the default assumption that AI will be an evil Skynet” — but offered a defense of Altman and others who feel the need to call out the possibility.

“Seeing that advanced computing is inevitable and seeing the predictable problems that derive from it and seeing what it takes to prepare for those problems is why preppers are disproportionately represented in the Valley crowd,” Ramey told The Post.

Altman isn’t the only expert warning of potential doom if AI advances unchecked. Earlier this month, Elon Musk said he sees a risk of AI “going Terminator” in the future – and has previously described his plan to build a sustainable colony on Mars as key to humanity’s long-term survival in the event of a catastrophe.


Sam Altman testified during a Senate hearing on AI earlier this month.
REUTERS

Ex-Google boss Eric Schmidt warned AI is an “existential risk” to humanity that could result in “many, many, many, many people harmed or killed.”

Altman also hinted at his fear while testifying before a Senate panel earlier this month, declaring that his worst fear is that advanced AI will “cause significant harm to the world.”


