Geoffrey Hinton, a pioneer of deep learning and neural networks and one of the leading figures in artificial intelligence (AI), has resigned from Google, citing concerns about the dangers of the technology. In a statement to the New York Times, Hinton, 75, said he now regretted his work, and he told the BBC that some of the dangers of AI chatbots were “quite scary.” While chatbots are not yet more intelligent than humans, he warned that they soon could be. He added that the kind of intelligence emerging in digital systems, which can run many copies of the same set of weights, is very different from human intelligence. Hinton’s research has been instrumental in the development of AI systems such as ChatGPT.
Hinton told the BBC that he had become concerned that chatbots could soon overtake the amount of information a human brain can hold. He cited GPT-4 as an example, saying it would eclipse a person in the amount of general knowledge it has, and by a long way. Given the rate of progress, he said, we should be worried about what happens next. Hinton also warned of the possibility of “bad actors” using AI for “bad things.” As a worst-case scenario, he cited the possibility of Russian President Vladimir Putin giving robots the ability to create their own sub-goals, which he warned could eventually lead to sub-goals like “I need to get more power.”
Hinton stressed that he did not want to criticise Google and that the tech giant had been “very responsible”. He added that he wanted to say some good things about the company, and that those would be more credible coming from someone who no longer worked there. Google’s chief scientist Jeff Dean responded to Hinton’s resignation by saying that the company remained committed to a responsible approach to AI and was continually learning to understand emerging risks while also innovating boldly.
Hinton’s concerns echo those expressed by other leading figures in the field of AI. The technology promises significant benefits, from helping to diagnose diseases to reducing energy consumption, but it also carries serious risks, particularly if it is misused. There are fears, for example, that chatbots could be used to spread disinformation or propaganda. There are also concerns about the impact of AI on jobs, with some estimates suggesting that up to 300 million jobs could be affected by the technology.
To address these concerns, a number of initiatives have been launched to promote responsible AI development. In 2016, Google, Facebook, Microsoft, and other tech companies launched the Partnership on AI, a collaborative effort to develop best practices for AI development. In 2020, the European Commission released its white paper on AI, which proposed a framework for AI development focused on promoting trust and ensuring that the technology was used ethically. Other organisations, such as the IEEE and the Future of Life Institute, have also released guidelines for the development of responsible AI.
Despite these efforts, there is still a long way to go to ensure that AI is developed and used responsibly and ethically. As Hinton’s resignation shows, even those at the forefront of AI research are concerned about the technology’s potential risks. Moving forward, it will be important to keep up the dialogue around AI development and to ensure that the technology is used for the benefit of society as a whole.