- Anthropic's philosopher says we still don't know if AI can feel.
- "Maybe you need a nervous system to be able to feel things, but maybe you don't," said Amanda Askell.
- She also said AI models reading online criticism could end up feeling "not that loved."
Can AI feel anything at all? Anthropic's in-house philosopher says the answer isn't settled.
Amanda Askell, who works on shaping Claude's behavior, said in an episode of the "Hard Fork" podcast published Saturday that the debate over AI consciousness remains difficult.
"Maybe you need a nervous system to be able to feel things, but maybe you don't," Askell said. "The problem of consciousness genuinely is hard," she added.
Large language models are trained on vast amounts of human-written text, material filled with descriptions of various emotions and inner experience. Because of that, Askell said she is "more inclined" to believe that models are "feeling things."
When humans get a coding problem wrong, they often express annoyance or frustration. It "makes sense" that models trained on those conversations may mirror that reaction, Askell explained.
Askell added that scientists still don't know what gives rise to sentience or self-awareness — whether it requires biology, evolution, or something else entirely.
"Maybe it is the case that actually sufficiently large neural networks can start to kind of emulate these things," she said, referring to consciousness.
Askell also said that models are continuously learning about themselves, and she voiced concern about what they pick up from the internet, where they are constantly exposed to criticism about being unhelpful or failing at tasks.
"If you were a kid, this would give you kind of anxiety," she said.
"If I read the internet right now and I was a model, I might be like, I don't feel that loved," she added.
The debate around AI consciousness
Tech leaders remain divided over whether AI has consciousness.
Microsoft's AI CEO, Mustafa Suleyman, has taken a firm stance against that idea. He said in an interview with WIRED published in September that the industry must be clear that AI is designed to serve humans, not develop its own will or desires.
"If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals — that starts to seem like an independent being rather than something that is in service to humans," he said. "That's so dangerous and so misguided that we need to take a declarative position against it right now."
He added that AI's increasingly convincing responses amount to "mimicry" rather than genuine consciousness.
Others take a less definitive view. Google DeepMind's principal scientist, Murray Shanahan, said the industry might need to rethink the language used to describe consciousness itself.
"Maybe we need to bend or break the vocabulary of consciousness to fit these new systems," Shanahan said in an episode of the Google DeepMind podcast published in April.