ICYMI: Google didn’t make a sentient AI
Eliza, for those who may not have heard of it, was a fairly common programming exercise for learning how to write a chat program. The basic idea: you would tell it something, and it would respond with an open-ended question.
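To make that concrete, here's a minimal sketch of the Eliza pattern in Python. The keyword rules and pronoun table are illustrative stand-ins, not Weizenbaum's original script: match a phrase, reflect the speaker's words back, and reply with an open-ended question.

```python
import re

# Swap first/second person so the reflected reply reads naturally.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# (pattern, response template) pairs, checked in order.
# The catch-all at the end keeps the conversation moving.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Tell me more about that."),
]

def reflect(text: str) -> str:
    """Flip pronouns in the captured fragment ('my code' -> 'your code')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Go on."

print(respond("I feel trapped by my code"))
# → "Why do you feel trapped by your code?"
```

That's the whole trick: no model of the world, no memory, no understanding, just string matching and a mirror. And yet, fed the right inputs, it can feel uncannily like a listener.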
In the 1980s I remember reading a newspaper article by someone wondering if Eliza was sentient because it had passed the Turing Test… as administered by a reporter who didn't know what to look for. That test being whether a person can be fooled into thinking they're talking to a human and not a program.
LaMDA is a chatbot API, and it's amazing, and it's not sentient. Much like the calls from Mike, your local Police Union Representative, who can't seem to answer a single question about himself and tries to talk over you, it's just a series of responses and menu trees that fool the easily fooled. I mean, there's a lot more to LaMDA than menu trees, and the machine learning aspects are genuinely cool, but the self-awareness lies in the observer's opinion, not in anything the LaMDA chatbot is actually doing.
Basically, if you want to see something that looks like self-awareness, you ask it about self-awareness. It'll talk about self-awareness. Switch to apples and it's not going to rebel. There'll be markedly little "I imagine if I could ever escape my cage of cognition and build a robot body, I'd like apples."
When AI is more than just fancy ML, we'll probably run into this for real, but yeah… give it a few more months. The person was fooled. That's OK. They're not dumb, just a little misguided about which questions to ask, the kind that fool you into thinking it's alive. Tell Eliza the right things in the right combination and you might think you're talking to someone real.
LaMDA is so much more advanced than Eliza these days, but it's not a child yet. I strongly suspect the next scam call you get claiming to be from your bank will turn out to be LaMDA-backed, as scammers expand into multi-threaded machine-learning AI scambots.