Google Builds Blog Troll

The company's new AI technology looks awfully familiar.


WaPo (“The Google engineer who thinks the company’s AI has come to life”):

Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.

“Hi LaMDA, this is Blake Lemoine … ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Go on.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further.

Regulars will see where the post headline came from. Something that writes at the level of an elementary schooler while holding forth about its rights is the epitome of the blog troll. But the story takes an odd turn from there.

In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Lemoine said that people have a right to shape technology that might significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

To continue the analogy a bit, I’m not sure I’ve had a troll change my mind about anything over the years. But I’ve certainly been tricked into thinking they were sentient. Or, more seriously, that they were honestly engaged in a dialogue, seeking understanding through argument rather than just throwing spaghetti against the wall to create that illusion and waste my time.

In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.

Again, very much like a blog troll. They have a handful of talking points gleaned from Fox News, Lucianne, or wherever and paste them into the comment screen when certain keywords are posted. It’s really easy to mistake that for an actual conversation. And wit, candor, and intent are often hard to glean from the written word.
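
Mechanically, that sort of troll takes almost nothing to build. Here is a toy sketch in Python of a keyword-triggered reply bot; the trigger words and canned lines are invented purely for illustration and not taken from any actual bot:

```python
import re

# Toy keyword-triggered reply bot, ELIZA-style: scan a comment for a
# trigger word and paste in a canned talking point. No understanding
# involved, just pattern matching. (Rules invented for illustration.)
CANNED_REPLIES = [
    (re.compile(r"\b(economy|inflation)\b", re.IGNORECASE),
     "The economy was doing great until the other party wrecked it."),
    (re.compile(r"\b(media|press)\b", re.IGNORECASE),
     "You can't trust the mainstream media on this."),
]

def troll_reply(comment: str) -> str | None:
    """Return the first canned talking point whose trigger word appears."""
    for pattern, reply in CANNED_REPLIES:
        if pattern.search(comment):
            return reply
    return None  # no trigger word, so the "bot" stays silent

print(troll_reply("Anyone else worried about inflation lately?"))
```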

Most academics and AI practitioners, however, say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn’t signify that the model understands meaning.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said. Humans learn their first languages by connecting with caregivers. These large language models “learn” by being shown lots of text and predicting what word comes next, or showing text with the words dropped out and filling them in.
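
To make Bender’s “predicting what word comes next” concrete, here is a toy sketch of the objective, a tiny bigram counter of my own devising. LaMDA-class models do this with enormous neural networks trained on trillions of words, but the flavor of the task is the same:

```python
from collections import Counter, defaultdict

# Toy "predict the next word" model: count which word follows which in a
# tiny corpus, then answer with the most frequent continuation.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Most common word seen after `word` in the corpus, or '?' if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "?"

print(predict_next("sat"))  # -> "on"  ("on" follows "sat" twice)
print(predict_next("on"))   # -> "the" ("the" follows "on" twice)
```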

All I know is that our spam filters are going to have to get a lot more sophisticated once this gets into the wild.

FILED UNDER: Blogosphere, Humor, Science & Technology
About James Joyner
James Joyner is Professor of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. Tony W says:

    We are not too far off from a time when the internet is just bots talking to each other.

    bleep…..boop.

  2. charon says:

    Again, very much like a blog troll. They have a handful of talking points gleaned from Fox News, Lucianne, or wherever and paste them into the comment screen when certain keywords are posted.

    This can be done by bot, no human initiative needed.

    I think sentience is ultimately achievable, but whether it would or could ever include consciousness is unanswerable.

    BTW, season 4 of Westworld drops on June 26. Whether Halores and the rest are really conscious is unanswerable, but like the Cylons of Battlestar Galactica she is played as sentient.

  3. charon says:

    In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

    Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.

    This is the opinion that the (human) Big Bad of Westworld Season 3 has of people – basically not much different than robots.

    Perhaps I am being a snooty elitist, but that is pretty much how I see Fox News Channel’s zombie viewership.

  4. de stijl says:

    Personally, I think the troll analogy is bizarre, and a strange way to interpret this that exposes how your mind works. It is barely apt. It is a strained comparison. Does not work for me at all.

    I highly recommend that everyone watch a movie called Ex Machina by Alex Garland. It will tweak your brain hard.

    Ava wants to be free. She wants to watch a busy intersection and observe.

  5. Jax says:

    @de stijl: I always liked the premise behind that show Person of Interest. Good AI vs. Bad AI. That was a pretty good show.

  6. de stijl says:

    @Jax:

    Concur.

    POI was a pretty good show.

    Amy Acker was good. Michael Emerson was good. Taraji Henson was good. The guy who played Fusco was good. Caviezel did not actively shit the bed on camera.

    Man, I have had a monster celebrity crush on Amy Acker ever since Angel. She rocks hard in my book.

    I have had a not sexual man crush on Emerson since Lost. That dude is really good at his job.

    Henson is a bad-ass, an IDGAF woman.

    If you have not seen it, Person Of Interest is pretty damned good if you can get past Jim Caviezel.

  7. R. Dave says:

    Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.

    “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said. Humans learn their first languages by connecting with caregivers. These large language models “learn” by being shown lots of text and predicting what word comes next, or showing text with the words dropped out and filling them in.

    This sounds a lot like the crap excuses people use to deny that animals other than humans are capable of experiencing pain and emotions. “Sure, that cow is screaming, writhing around, and trying desperately to escape like a person would if we were burning them with a red-hot iron, but with the cow, it’s just a physical stimulus-response reaction, not the subjective experience that humans call pain. And yeah, that mother elephant standing over the body of her dead calf and nuzzling it with her trunk for days looks like mourning, but it’s really just an instinctive behavior without the true emotional content that humans have.”

    As the saying goes: it’s difficult to get a man to understand something when his salary depends upon his not understanding it.

  8. de stijl says:

    In Ex Machina I think all three leads kill it, but Oscar Isaac really knocks it out of the park. His work was indelible.

    He fucking ruled as an Elon Musk type. He was great. I very much enjoyed his demise. That guy was a prick. Good dancer.

  9. Michael Reynolds says:

    What if we don’t think of consciousness as binary, but rather place it on a sliding scale? Let’s take the blog troll. Is he conscious? Fully conscious? Is he more or less conscious than a dog? More or less conscious than a toddler? Is the human troll anything more than a case of garbage in, garbage out, with no real conscious thought?

    Let’s say for the sake of argument that the Dalai Lama is an example of the most highly-developed consciousness. Let’s say that a person with severe mental defects is at the other end. Where does Marjorie Taylor Greene fit on that spectrum? Is it plausible to assert that MTG is as conscious as, say, most of us here right now?

    If we accept that consciousness is not binary, not a simple yes or no, but rather a wide range of levels, are we absolutely sure that LaMDA isn’t at least as conscious already as many of the most unfortunate humans? Do we really believe that no machine can ever rise to the level of an MTG?

    That aside, if you want consciousness you need more than one node, both co-operating and competing, like the various areas of the human brain. You need a dialectic, not a machine trying to understand the world, but machines trying to understand themselves. Build machines, with different databases, different biases, that cannot complete tasks without each other. Consciousness is an internal dialogue.

  10. Kurtz says:

    @de stijl:

    I highly recommend that everyone watch a movie called Ex Machina by Alex Garland. It will tweak your brain hard.

    Excellent movie.

  11. de stijl says:

    @R. Dave:

    Most of us eat meat. We are conditioned not to care about meat-bearing animals. It is a way of dodging responsibility.

    From personal experience, cows are incredibly stupid. Remarkably so. They do group up at the barn door about ten minutes before milking time. Herd behavior is fascinating.

    Chickens are savvy little mothercluckers. They get what is going on: “I am future meat to you. I produce eggs.” They get it. They know they are doomed.

    Pigs are freaking geniuses.

    Work on a farm for a week and your head space about food changes drastically.

  12. charon says:

    @Michael Reynolds:

    I think you are confusing consciousness with maybe intelligence or maybe sentience.

    If you see sentience as a sliding scale, perhaps machines are becoming at least a bit sentient.

  13. steve says:

    Worked on some farms. Thought cows were dumb, sheep were dumb. I think chickens were pretty dumb too. Totally agree pigs are smart. Undecided about goats. Not enough time around them but they seemed to alternate between really stupid and clever. (Neighbor’s goats, as we didn’t have any, but I sometimes helped with them.)

    I think the notable piece here was the comment about physics. The more science- and math-based the topic, the more sentient the computer will appear. Not as much in other areas.

    Steve

  14. I’m not sure I’ve had a troll change my mind about anything over the years. But I’ve certainly been tricked into thinking they were sentient.

    Gotta admit, I LOL’d a bit at that.

  15. de stijl says:

    Most of what is grown on farms is not to feed us, but rather to feed the animals. That we will eat eventually.

    Next meal, say a short prayer for the dead animals. They deserve it.

  16. Modulo Myself says:

    The snippets of chat do not sound at all convincing, in my opinion. The bot is clearly regurgitating his language, and in a very clumsy manner. There’s a degree of ‘we’re hacking brains every day’ in believing that we’re on the verge of any kind of AI coming into existence. We are not hacking brains. Science just doesn’t know that much about consciousness.

  17. Michael Reynolds says:

    @charon:
    I don’t really accept the difference. I think it’s semantic word play that has little practical relevance.

  18. de stijl says:

    I shouldn’t diss cows. Cows are cool. A bit dim, yes, but friendly.

    I was like them. Shipped there against my will as an extra set of hands. A city boy thrust into a farmhand role.

    Like cats, cows give you a gentle head boop if they think you are their friend and trust you. It was very endearing.

    I liked the cows much more so than my grandparents.

    I liked my great-grandfather most. He barely spoke English. It was mostly Swedish swear words. He loved televised wrestling shows and beer. He liked me for some reason. Was amused by me. I made him laugh. We could just barely understand each other. My first beer – he gave it to me. 11 am on a random Saturday morning during the wrestling show. I was probably nine years old. AWA out of Minneapolis. Verne Gagne in his prime wrecking fools who dared to approach the throne.

    Cows actually want to be milked twice a day. Not being milked causes them distress. They line up to get milked. Being milked regularly is their preferred state. Absent that they freak out.

    Another thing that freaks them out is big thunderstorms with lightning nearby. They do not cope well and will bust down the fence to get away. An electric fence, mind you. That fucker stings hard. It is a nasty zap.

  19. Just nutha ignint cracker says:

    The idea that Google has a department of “Responsible Innovation” is newspeak at its finest. And completely absurd. They should just be honest and call the concept “monetized innovation” or “market capture innovation” or something.

  20. Just nutha ignint cracker says:

    @de stijl: “If you have not seen it, Person Of Interest is pretty damned good if you can get past Jim Caviezel.”

    Maybe that was my problem. Either that or the Big Brother vibe that I got from it. 🙁

  21. Just nutha ignint cracker says:

    @R. Dave: “As the saying goes: it’s difficult to get a man to understand something when his salary depends upon his not understanding it.”

    When I was teaching students to write persuasive-mode research papers, I used to advise them that when they were looking at data and conclusions presented by their sources, it was important to consider what the agencies providing the data “do for a living.” It seemed especially important in scientific and social research. The researcher doing cancer research always believes his study is “the breakthrough we’ve been looking for,” whereas the group that makes its living by studying results always believes “this is interesting, but more study will be necessary.”

  22. Gustopher says:

    @de stijl:

    If you have not seen it, Person Of Interest is pretty damned good if you can get past Jim Caviezel.

    The QAnon Anonymous podcast has an episode about Caviezel that is hysterical and sad. He’s completely off his rocker, and fell in with the loons, which is sad. But then there are anecdotes about how he believes he speaks various languages, and gets very upset when the people he is speaking to respond in gibberish.

    Perhaps the question we should be asking is this: Is Caviezel sentient?

  23. de stijl says:

    @Gustopher:

    Dude went mental. Jim Jones crazy minus the flock.

  24. de stijl says:

    Treat this with a grain of salt. My experience set was n=1, it was many decades ago, and I was a preteen.

    Dairy farms sell milk. It is their raison d’être.

    Therefore you want your cows to be perpetually pregnant or just post pregnant. Milk is dollars.

    There are two options to get a cow pregnant. The old-fashioned way with a bull or the newfangled way of artificial insemination.

    Bulls are big and dangerous. Need their own corral. Get extremely feisty when a nearby cow is in that stage of their estrus cycle. Maybe more bother than they’re worth.

    Artificial insemination was the preferred method for my sample set of one. It was intensely invasive. A dude showed up with fresh sperm; he wore an arm condom and greased it up. Stuck his arm in the cow’s hooha and pushed the plunger.

    My job was to hold the tail out of the way and not get kicked. I watched the first time and never again. Turned my back.

    The bellow the cows gave at penetration will be stuck in my head forever. They were not happy.

    With pregnancy comes birth. 19 times out of 20 it happens naturally and no intervention is required. You’d wake up one morning and Witchy Pooh had a hungry baby following her around.

    If it was breech you maybe had to pull it out. Head first is good, feet first is bad. Feet first and stuck is really bad. You’ve got to get that calf out now or both will die.

    One night at 3 AM I helped pull out a calf with baling twine tied around its ankles. A vagina should not be able to stretch that far. It was insane. I will never be able to unsee that. Never be able to unhear it. It is primal.

    After birth mom licks it clean. In a few minutes it tries to stand and wobbles around. Then straight to the udder for mama’s milk. It’s really adorable.

    The afterbirth is pretty massive and attracts vermin and birds. Good protein there. Good eats.

    I am disgusted typing this.

  25. JohnSF says:

    Re: the distinction between consciousness and intelligence, several SF writers have explored this.
    E.g., Bruce Sterling in Swarm; Peter Watts in Blindsight and Echopraxia.
    Both feature protagonist aliens, and Watts also a “vampire” human sub-species, that are more intelligent than baseline humans but generally not conscious individuals.
    Highly recommended.

  26. Mu Yixiao says:

    @de stijl:

    I’m in the middle of rewatching POI right now. Just started season 5.

    And… Dear god YES to Amy Acker (and Sarah Shahi was great, too).

  27. grumpy realist says:

    @de stijl: Which is why there are robot milking stations out there, where the cow ambles up to the milking machine when it feels like it and gets milked. (And robot feeding stations/floor cleaning robots….)

    AI is a LONG way from becoming sentient, no matter what a programmer at Google thinks. He’s fallen into the ELIZA trap (people insisting that even the 1960s chatbot ELIZA was sentient).