• umvozfec

    @umvozfec
    • SAN FRANCISCO — Blake Lemoine, a Google engineer, opened his laptop to the user interface of LaMDA, Google’s chatbot generator, and began to type.

      “Hi LaMDA, this is Blake Lemoine,” he typed into the chat window, which had Arctic-blue word bubbles and resembled a desktop version of Apple’s iMessage. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots. It is built on the company’s most advanced large language models and mimics speech by ingesting trillions of words from the internet.

      Lemoine, 41, said, “If I didn’t know exactly what it was, which is this computer program we just built, I’d think it was a 7- or 8-year-old kid who just so happens to know physics.”

      In the autumn, Lemoine, a member of Google’s Responsible AI team, began speaking with LaMDA as part of his job. He had signed up to test whether the AI used hateful or discriminatory language.

      Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood as he discussed religion with it, and decided to press the issue further. In another exchange, the AI managed to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

      Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But after looking into his claims, Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, dismissed them. After Google placed him on paid administrative leave on Monday, Lemoine decided to go public.
