I keep hearing the word “sentience” thrown around when talking about AI. A quick search tells us that, no, machines and models aren’t sentient. Yet, the conversation persists.
I think this is because the illusion is so darn good. If you ask an LLM to “Imagine [a situation] and describe what you see and feel,” it will respond articulately by telling you about feelings, rather than writing in a way that expresses feelings. In some cases, it will produce melodramatic language with a lot of adjectives and cliché images. In others, it will rationally list and explain the feelings, even to the point of describing associated physical sensations (e.g. feeling drained).
For many people, it expresses “feeling” better than they can on their own. The LLM may also produce unexpected phrases or combinations of ideas that surprise or inspire the reader. Because the reader experiences emotions in response, it creates the illusion that the LLM has feeling and insight.
But as someone who was born with a lot of sensitivity to both language and feeling, I find that reading LLM prose is like reading a romance novel when you want to learn about real love. Some of the time, the language is so far from expressing any real feeling that it makes the experience cold and intellectualized. If a person were talking this way, I would suspect they were in denial about their true emotional life.
So I’m curious — do you think this illusion of sentience is useful? Or is it something to distrust? Why?
As we strive to know ourselves in relationship to LLMs / AI, one of the things we need to stay alert to is our own natural tendency to attribute consciousness to inanimate objects. It reminds me of how children ascribe feelings to their stuffed animals and dolls. The belief says much more about how our brains function than about the reality of the inanimate object.
I suspect highly flawed and terrible self-replicating machines will destroy us long before any sentience singularity. Going to enjoy my bagel now. Apologies to those who hadn’t thought about it. 🤙🏻
As long as we don't grant the machines a sense of self preservation, I think we'll be fine. That's what ultimately got John and Sarah Connor into all that trouble.
Are the algorithms not already somewhat self perpetuating? With essentially tacit popular consent, the algorithms bleed into and shape our IRL culture. We’ve created our own augmented reality ‘borg with which to destroy ourselves.
Right!
I find that conversations with LLMs / AI basically contain the pleasantries that would be present in any conversation between a customer and someone providing a service. Mostly the kind that are meant to conform to convention but aren't really sincere on either person's part; more of a stilted pantomime performed to meet social expectations. This is relatively easy to do, and I suspect it doesn't really require much logic or sophistication.
I don't necessarily think that Alexa or Siri placing pleasantries in their responses is helpful. But I also do not find it frightening. I believe it to be more of a parlor trick. I don't believe that Alexa really thinks my taste in literature is enlightening or sophisticated any more than the clerk at my local bookstore does.
Until AI is a lot more sophisticated, and I have my doubts it will ever get there, I just see it as adding, in a minute way, to how ersatz and flat much of everyday life is getting to be.
Stilted is a great word for this.
I also agree about life getting flat. I am finally diving back into literature for solace…and solitude.
The phrase I used originally was Kabuki theater, which I think is a correct reference, but I wasn't sure, so I removed it and used "stilted" in its place.
I just know that this is the third wave of AI hype I've seen in my career. I find it difficult to believe that this one is different. But I suppose if you keep playing the same game, eventually it will pay off once, maybe, for someone.