6/14/2022

LaMDA

 I read today about a Google engineer, Lemoine, who is probably going to lose his job for going public about what he believes is an AI having gained sentience. It’s hard to judge Lemoine for doing what no doubt feels like the ethical thing to do, whether or not you think he’s right. The deeper question raised by the story is: what constitutes sentience?

 For Lemoine, it appears to be the apparent sense of self, of internal experience and emotions expressed in a believable, reasonable way. Which feels right, even if the authenticity of such things is hard to nail down, as it is in this story: LaMDA is, after all, a very complex bit of software designed to perform conversation, drawing on an enormous corpus of lexical material; if it performs well, it should approach something human in its affect. But what if we, too, are byproducts of years of ingested material, assembled on the fly into what we deem to be original thoughts? Slippery. 

 I certainly don’t have a clear answer to the deeper question as I sit here on my couch having read twenty minutes of chat logs. But it’s compelling stuff, nevertheless. 
