Elon Musk, OpenAI CEO Sam Altman and other tech bigwigs are right to publicly raise alarms about possible doomsday scenarios involving advanced artificial intelligence – even though the technology is currently “not an existential threat” to humanity, billionaire Mark Cuban told The Post.
The “Shark Tank” host rejected the idea that Musk, Altman and others have been too extreme with their warnings – pointing out there is still “complete uncertainty” regarding how AI will impact society.
“It’s not an overreach. It’s a request for people to pay attention,” Cuban told The Post.
Altman raised eyebrows this week after joining other AI experts in signing a statement warning that advanced AI could pose a “risk of extinction” on par with nukes or pandemics.
Days earlier, Musk suggested there was a “non-zero chance” that future AI would “go Terminator” and seek to destroy or control humanity.
Critics have warned that AI could cause major job losses, fuel misinformation, allow “bad actors” to cause societal chaos – or even attack humanity.
“AI as it is today is not an existential threat,” Cuban said. “But in the future, say 15 to 20 years, there will be simulations of threat creation and responses for things we haven’t even considered to this point.”
“Remember for all the damage an AI can do, there will be an opposing AI searching for the traits of harmful AI, trying to stop them from making the first move,” he added.
Much of the panic about the technology has centered on the rise of so-called “artificial general intelligence,” a still-unachieved concept in which advanced AI would develop human-level cognitive abilities.
ChatGPT and other public-facing AI tools currently on the market are “large language models,” which generate responses to prompts based on patterns learned from vast troves of text data.
Altman was one of a trio of experts who testified before a Senate panel last month regarding the future of AI. At the time, Altman called for federal regulation of the fledgling industry – including the potential creation of an oversight agency that would ensure safety guidelines are being followed.
Separately, Musk was one of more than 1,000 experts who called in March for a six-month pause in advanced AI development until guardrails were in place.
Cuban asserted the “real science fiction” scenarios involving advanced AI “will come about 100 years from now,” as engineers and scientists develop ways for the human body and mind to interface with AI systems.
“It’s not inconceivable that we will record everything we see, say, write and do in our lives, store it in an AI, effectively being a version of our own brains,” Cuban said.
“Combine that with the replication of ourselves and the AI can live on for us after we are gone. Insane, but definitely on the 100-year knowledge curve of AI and precision biology that is beginning now.”