
Risks of artificial intelligence must be considered as the technology evolves: Geoffrey Hinton

Artificial intelligence can be used as a force for good – but there are also big risks involved with the generative technology as it gets even smarter and more widespread, "godfather of AI" Geoffrey Hinton told the Collision tech conference in Toronto on Wednesday.

In a Q&A with Nick Thompson, CEO of The Atlantic magazine, Hinton – a cognitive psychologist and computer scientist who is a University Professor Emeritus at the University of Toronto – expanded on his concerns.

"We have to take seriously the possibility that [AI models] get to be smarter than us – which seems quite likely – and they have goals of their own," Hinton said during a standing-room-only event at the conference, which was expected to draw nearly 40,000 attendees over three days.

"They may well develop the goal of taking control – and if they do that, we're in trouble."

Hinton, who recently left Google so he could speak more freely about AI risks, was speaking at Collision, which is billed as North America's "fastest-growing tech conference" and counts the university as an event partner.

The government of Ontario announced that the Vector Institute – a partnership between government, universities and industry where Hinton is chief scientific adviser – will receive up to $27 million in new funding to "accelerate the safe and responsible adoption of ethical AI" and help businesses boost their competitiveness through the technology.

During his talk, Hinton outlined six potential risks posed by the rapid development of current AI models: bias and discrimination; unemployment; online echo chambers; fake news; "battle robots"; and existential risks to humanity.

Geoffrey Hinton, a University Professor Emeritus at the University of Toronto, speaks onstage at the Collision technology conference before a standing-room-only crowd (Photo: Johnny Guatto)

When Thompson suggested that some economists argue that technological change over time simply transforms the function of jobs rather than eliminating them entirely, Hinton noted that "super intelligence will be a new situation that never happened before" – and that even if chatbots like ChatGPT only replace white-collar jobs that involve producing text, that would still be an unprecedented development.

"I'm not sure how they can confidently predict that more jobs will be created for the number of jobs lost," he said.

Hinton added that much of his concern stems from his view that AI is getting closer to being able to demonstrate the capacity to reason.

"The big language models are getting close – and I don't really understand why they can do it, but they can do little bits of reasoning," he said, predicting that AI will evolve over the next five years to include multimodal large models that are trained on more than just text, including videos and other visual media.

"It's amazing what you can learn from language," he said. "But you're much better off learning from many modalities – small children don't just learn from language alone."

Maximizing the creative potential of AI and minimizing its harms requires distinguishing between its potential risks, Hinton added, noting that many in the tech sector have downplayed his warnings about existential risk since he began speaking out.

"There was an editorial in Nature yesterday where they basically said fear-mongering about the existential risk is distracting attention [away] from the actual risks," Hinton said. "I think it's important that people understand it's not just science fiction; it's not just fear-mongering – it is a real risk that we need to think about, and we need to figure out in advance how to deal with it."

Thompson pointed out that fellow AI luminary Yann LeCun – who shared the Turing Award with Hinton for their work on deep learning – has suggested that the positive aspects of AI will overcome any negative ones.

"I'm not convinced that a good AI that is trying to stop bad AI can get control," Hinton said. "Before it's smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might go wrong – understanding how it might try and take control away. And I think the government could maybe encourage the big companies developing it to put comparable resources [into that].

"But right now, there's 99 very smart people trying to make [AI] better and one very smart person trying to figure out how to stop it from taking over. And maybe you want to be more balanced."

Story by U of T News.