Large neural networks power modern artificial intelligence, helping to solve problems in medicine, finance and research. But could these networks be edging towards consciousness? One prominent expert believes it is possible they already have.
On Wednesday, OpenAI cofounder Ilya Sutskever claimed on Twitter that ‘it may be that today’s largest neural networks are slightly conscious’, as first reported by Futurism.
He didn’t name any specific developments, but is likely referring to mega-scale neural networks such as GPT-3, a 175-billion-parameter language processing system built by OpenAI for translation, question answering and filling in missing words.
It is also unclear what ‘slightly conscious’ actually means, not least because consciousness in artificial intelligence is itself a controversial idea.
An artificial neural network is a collection of connected units, or nodes, loosely modelled on the neurons in a biological brain, which can be trained to perform tasks by learning from data rather than following explicit human instructions. Most experts, however, say these systems aren’t even close to human intelligence, let alone consciousness.
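To make that idea concrete, here is a minimal sketch of such a network in Python: a handful of connected nodes whose weighted links are adjusted by gradient descent until the network learns a toy task. The architecture, learning rate and XOR-style task are invented purely for illustration and bear no relation to systems like GPT-3, which work on the same principle but with billions of parameters.

```python
import numpy as np

# A tiny feed-forward network: layers of "nodes" joined by weighted
# connections, trained on a toy XOR task by gradient descent.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden connection weights
W2 = rng.normal(size=(8, 1))   # hidden -> output connection weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1)        # hidden-node activations
    out = sigmoid(h @ W2)      # network output
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: push the error back through the connections
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(losses[0], losses[-1])   # the error shrinks as the network learns
```

The ‘learning’ here is nothing more than nudging numbers to reduce an error score, which is part of why many researchers balk at attaching words like ‘conscious’ to systems built from the same ingredients at larger scale.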
For decades science fiction has peddled the idea of artificial intelligence on a human scale, from Mr Data in Star Trek to HAL 9000, the artificial intelligence in Arthur C. Clarke’s 2001: A Space Odyssey that opts to kill astronauts to save itself.
When asked to open the pod bay doors to let the astronauts return to the spacecraft, HAL says ‘I’m sorry Dave, I’m afraid I can’t do that’.

While AI has performed impressive tasks, including flying aircraft, driving cars and generating artificial voices and faces, most experts dismiss claims of consciousness as ‘hype’.
Sutskever faced a backlash soon after posting his tweet, with most researchers concerned he was overstating how advanced AI had become, Futurism reported.
‘Every time such speculative comments get an airing, it takes months of effort to get the conversation back to the more realistic opportunities and threats posed by AI,’ said UNSW Sydney AI researcher Toby Walsh.
Professor Marek Kowalkiewicz, from the Center for the Digital Economy at QUT, questioned whether we even know what consciousness might look like.
Thomas G Dietterich, an expert in AI at Oregon State University, said on Twitter he hasn’t seen any evidence of consciousness, and suggested Sutskever was ‘trolling’.
‘If consciousness is the ability to reflect upon and model themselves, I haven’t seen any such capability in today’s nets. But perhaps if I were more conscious myself, I’d recognize that you are just trolling,’ he said.
The exact nature of consciousness, even in humans, has been subject to speculation, debate and philosophical pondering for centuries.
However, it is generally seen as ‘everything you experience’ in your life, according to neuroscientist Christof Koch.

In a paper for Nature, Koch described it as ‘the tune stuck in my head, the sweetness in chocolate mousse and the pain in my teeth’.
A book published in 1990 describes various levels of consciousness, explaining how the normal state comprises wakefulness, awareness and alertness.
Sutskever, who has not responded to DailyMail.com’s requests for comment, may be suggesting that neural networks are reaching one of these stages.
Other experts feel that talk of artificial consciousness distracts from the real issues.
Valentino Zocca, an expert in deep learning technology, described the claims as hype more than anything else, while sociotechnologist Jürgen Geuter suggested Sutskever was making a sales pitch rather than advancing a serious idea.
‘It could also be that this take has no basis in fact and is just a pitch to claim magic tech capabilities for a startup that runs very simplistic statistics, just lots of them,’ Geuter said.
Others simply suggested the OpenAI scientist was ‘full of it’ for proposing a slightly conscious artificial intelligence.
In a 2019 opinion piece published by the Illinois Institute of Technology, Elisabeth Hildt wrote that, science fiction notwithstanding, ‘current machines and robots are not conscious’.
And that doesn’t appear to have changed in the years since: a 2021 article in Frontiers in Artificial Intelligence by JE Korteling and colleagues concluded that human-level artificial intelligence remains some way off.
“No matter how smart and autonomous AI agents are in certain aspects, at least in the near future, they likely will remain unconscious machines, or special-purpose devices that assist humans in complex, specific tasks,” they said.
Sutskever is OpenAI’s chief scientist and has long been preoccupied with artificial general intelligence (AI that can operate at human or superhuman levels), so the claim is not entirely out of character.
In the documentary iHuman, he claimed that AI could solve all the world’s problems, while also raising the possibility of it creating stable dictatorships.
Sutskever co-founded OpenAI with Elon Musk and its current CEO, Sam Altman, but this marks the first time he has asserted that machine consciousness may have already arrived.
Musk later quit the group, citing fears it would compete with Tesla for the same AI talent, as well as concerns over the possibility of its technology becoming a fake news generator.
OpenAI is not without controversy, and the firm says it has since reconfigured its AI to improve its behaviour and reduce the chance of such incidents occurring again.