Geoffrey Hinton received his BA in Experimental Psychology from
Cambridge in 1970 and his PhD in Artificial Intelligence from
Edinburgh in 1978. He did postdoctoral work at the University of
Sussex and the University of California, San Diego, and spent five
years as a faculty member in the Computer Science Department at
Carnegie Mellon University. He then became a fellow of the Canadian
Institute for Advanced Research and moved to the Department of Computer Science
at the University of Toronto. He spent three years from 1998 until
2001 setting up the Gatsby
Computational Neuroscience Unit at University College London and
then returned to the University of Toronto where he is now an
emeritus distinguished professor. From 2004 until 2013 he was the
director of the program on "Neural Computation and Adaptive
Perception", which was funded by the Canadian Institute for Advanced
Research. Since 2013 he has worked half-time for Google in Mountain
View and Toronto.
Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of
Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and
Sciences and the National Academy of Engineering, and a former president of the Cognitive Science
Society. He has received honorary doctorates from the University of
Edinburgh, the University of Sussex, and the University of Sherbrooke.
He was awarded the first David E. Rumelhart Prize (2001), the IJCAI
Award for Research Excellence (2005), the NSERC Herzberg Gold Medal
(2010), which is Canada's top award in science and engineering, the
Killam Prize for Engineering (2012), and the IEEE James Clerk Maxwell
Gold Medal (2016).
Geoffrey Hinton designs machine learning algorithms. His aim is to
discover a learning procedure that is efficient at finding complex
structure in large, high-dimensional datasets and to show that this is
how the brain learns to see. He was one of the researchers who
introduced the backpropagation algorithm and the first to use
backpropagation for learning word
embeddings. His
other contributions to neural network research include Boltzmann machines, distributed representations, time-delay
neural nets, mixtures of experts,
variational learning, products of experts and deep
belief nets. His research group in Toronto made major
breakthroughs in deep learning that have revolutionized speech
recognition and object classification.