
Neural network behind Geoffrey Hinton's Nobel Prize to be preserved by Computer History Museum

Exterior of the Computer History Museum

The AlexNet source code was at the heart of a seminal paper by U of T's Hinton and then-grad students Alex Krizhevsky and Ilya Sutskever. Photo credit: © Computer History Museum/Doug Fairbairn.

The source code for AlexNet, the neural network developed at the University of Toronto that kickstarted today's artificial intelligence boom and led to a Nobel Prize for Geoffrey Hinton, will be preserved by the Computer History Museum in partnership with Google.

The museum, located in Mountain View, Calif., boasts a diverse archive of software and related material and aims to "decode technology: the computing past, digital present, and future impact on humanity."

It has already released other historic source code, including Apple II DOS, IBM APL, Apple MacPaint and QuickDraw, Apple Lisa and Adobe Photoshop.

"This code underlies the landmark AlexNet paper by Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton, which revolutionized the field of computer vision and is one of the most cited papers of all time," says Jeff Dean, chief scientist of Google DeepMind and Google Research.

"Google is delighted to contribute the source code for the groundbreaking AlexNet work to the Computer History Museum."

AlexNet has its roots in the decades of research conducted by Hinton, a U of T University Professor Emeritus of computer science, who shared the 2024 Nobel Prize in Physics with Princeton University's John Hopfield for foundational work in AI.

Geoffrey Hinton stands on a rooftop patio at U of T.

University Professor Emeritus Geoffrey Hinton received the 2024 Nobel Prize in Physics. Photo: Johnny Guatto

By the early 2000s, Hinton's graduate students at U of T were beginning to use graphics processing units (GPUs) to train neural networks for image recognition tasks, and their success suggested that deep learning could be a path to creating general-purpose AI systems.

In particular, Sutskever, who went on to become a key figure at OpenAI, which launched ChatGPT, believed that the performance of neural networks would scale with the amount of data available.

The arrival of ImageNet in 2009 provided him with the chance to test his theory. The dataset of images developed by Stanford University Professor Fei-Fei Li was larger than any previous image dataset by several orders of magnitude.

In 2011, Sutskever convinced Krizhevsky, a fellow graduate student, to train a convolutional neural network on ImageNet. With Hinton serving as principal investigator, Krizhevsky programmed the network on a computer with two NVIDIA cards. Over the course of the next year, he tweaked the network鈥檚 parameters and retrained it until it achieved performance superior to its competitors.

The network was ultimately named AlexNet in his honour.

Before AlexNet, very few machine learning researchers used neural networks. After it, almost all of them would. Google eventually acquired the company started by Hinton, Krizhevsky and Sutskever, and a Google team led by David Bieber worked with CHM for five years to secure the code鈥檚 public release.

In describing the AlexNet project, Hinton says, "Ilya thought we should do it, Alex made it work and I got the Nobel Prize."

With files from the Computer History Museum