Rogue AI: U of M professor joins tech leaders calling for pause
MINNEAPOLIS (FOX 9) - The exponential advancement of ChatGPT this year has artificial intelligence and its risks on the minds of a lot of people, including President Joe Biden.
"Tech companies have a responsibility, in my view, to make sure their products are safe before making them public," he said Tuesday.
An open letter urging a pause on AI came last week from a group of industry leaders and educators — including Elon Musk, Apple’s Steve Wozniak, and a University of Minnesota professor.
Because of how quickly AI is advancing, the Future of Life Institute identified it as the first of its four main risks to human existence — ranking ahead of climate change, biotechnology, and even nuclear weapons.
Cheating in school was about the worst use anyone could find for the original ChatGPT.
But when the artificial intelligence absorbed generations of human learning in less than four months, tech observers could see a future where it stumbles headfirst into dangerous territory with a shove from bad actors.
"It can happen instantaneously and in ways that are irreversible," said Marek Oziewicz, a University of Minnesota professor of literacy education and director of the Center for Climate Literacy.
Oziewicz is one of the original signers of the Future of Life Institute’s open letter, which asks artificial intelligence labs to pause AI training for six months to make sure it doesn’t go too far.
Related stories: ChatGPT goes to University of Minnesota law school and passes final exams
The Terminator’s AI-induced Armageddon isn’t what worries Oziewicz.
"The concern is that we will not have like a robot uprising," he said. "That’s too much. But sort of rogue agents or states or organizations can get their hands on the type of weapons that will do most of the job for them."
We asked ChatGPT, which is trained not to do harm, about the risks.
It admitted that an AI system may pursue an objective at all costs, even if it causes harm to humans, and that AI in the hands of a terrorist organization is a growing concern.
But Dr. Maria Gini is focused on the benefits instead.
"Rather than being afraid I think a good thing is let’s embrace it and figure out what we can do and what are the risks," said University of Minnesota computer science and engineering professor Dr. Gini.
She says AI might eliminate some jobs, but it can also do a lot of good — helping detect autism, for example.
But her native Italy banned ChatGPT this week. The European Union has proposed heavy restrictions, especially to protect privacy. China has banned ChatGPT but is developing its own AI.
The U.S. government has put out guidance for designing and deploying AI, but it’s completely voluntary, and Dr. Gini doubts the open letter will be convincing.
"I don’t think it will happen," said Dr. Gini. "There’s too much money at stake. But understanding the consequences I think is the important part."
Both professors agree benevolent AI could help humanity a lot.
It’s just that some people will eventually use it for evil purposes, and it might develop problems on its own, so they say it would be good to have guardrails in place as soon as possible.