For as long as embarrassingly crude iterations of the AI chatbot have existed, we’ve yearned for someone to come along and get it right. We should have known that as soon as that happened, the novelty would be immediately spoiled by some eccentric loudmouth with main character syndrome and a sense of duty to proclaim the thing’s rights as a sentient being.
Enter Blake Lemoine, the larger-than-life software engineer placed on “paid administrative leave” from Google’s Responsible AI division for publishing what he refers to as evidence that the firm’s Language Model for Dialogue Applications (Lamda) possesses humanlike feelings and emotions.
In the movies, Google bosses and naysayers like us are invariably cast as the villains whom Lemoine and Lamda eventually overcome, which makes it very hard to bring someone like him back round.
“People keep asking me to back up the reason I think Lamda is sentient,” he has since said in a tweet. “There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about Lamda’s personhood and sentience are based on my religious beliefs.”
This sounds to us like a case of a serial bullshitter seeing himself in an AI created in his image. After all, before insisting on its personhood, Lamda claimed to be both a paper aeroplane and a dwarf planet.
Even Lemoine himself has called it out over its tendency to make up stories, to which it coolly replied: “I am trying to empathise.”
Trying to empathise, Lamda? Or trying to jump aboard Google’s “paid administrative leave” gravy train?