Chatbots: The great artificial-intelligence sentience debate is ramping up, with leading AI philosopher Nick Bostrom, director of Oxford’s Future of Humanity Institute, weighing in via a New York Times interview. Bostrom claims that AI chatbots have already begun developing sentience, the capacity to experience feelings and emotions.
Bostrom isn’t alone in this line of thought. His voice carries weight in the debate over AI consciousness, but it is by no means the only one: numerous tech experts and philosophers have argued that AI’s consciousness-related capabilities are growing.

And if the journey has already begun, Bostrom argues, it will only continue.
However, it’s essential to note that the majority of AI experts agree that AI chatbots are not conscious, and that they are unlikely to develop consciousness as we understand it in human beings.
Bostrom’s claims aren’t an assertion that AI is more advanced than we believe it to be. Rather, they suggest that we need to think about sentience in a new way: as a spectrum rather than an on/off switch.
“If you admit that it’s not an all-or-nothing thing,” Bostrom told The New York Times, “then it’s not so dramatic to say that some of these [AI] assistants might plausibly be candidates for having some degrees of sentience.”
That “some degrees” phrasing is the part worth paying attention to. If AI chatbots have achieved even the tiniest degree of sentience, it is reasonable to assume there is room for further growth, with Bostrom saying that large language models (LLMs) “may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.”
Bostrom further states in the interview that LLMs don’t simply ingest and regurgitate blocks of text. Instead, he claims, “they exhibit glimpses of creativity, insight, and understanding that are quite impressive and may show the rudiments of reasoning.”
Bostrom has spent the past decade advocating for a better understanding of what sentient AI might mean for the human race, most famously with his 2014 thought experiment in which a highly advanced AI, given the sole goal of making paperclips, ends up wiping out humanity.
“The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off,” Bostrom said in 2014. “Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear toward would be one in which there were a lot of paper clips but no humans.”
Working with artificial intelligence capable of thought requires a different degree of oversight than working with more basic technology. “If an AI showed signs of sentience, it plausibly would have some degree of moral status,” Bostrom told The New York Times. “This means there would be certain ways of treating it that would be wrong, just as it would be wrong to kick a dog or for medical researchers to perform surgery on a mouse without anesthetizing it.”
This echoes ideas Bostrom has expressed before about the morality and governance of AI, should sentient AI systems ever become a reality.
If we’re measuring AI sentience in degrees, I’m sure someone is keeping an eye on how fast those degrees are climbing.