• The Google engineer who thinks the company’s AI has come to life


    #2453385

    https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

    AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine.

    Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.

    “Hi LaMDA, this is Blake Lemoine … ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.

    “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41…

    As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics…

    Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public…

    3 users thanked author for this post.
    • #2453396

      what algorithms can do

      * _ ... _ *
    • #2453413

      Alex: Thanks for brightening my day.

      I think that “administrative leave” here really seems to be “one last chance before being fired.”

      But that is not all … Just out in today’s news:

      https://gizmodo.com/ruth-bader-ginsburg-ai-bot-rbg-scotus-1849058527

      Excerpt:

      A new AI chatbot released Tuesday claims it uses the words of Supreme Court Justice Ruth Bader Ginsburg (*) when replying to questions such as “Is pizza better than burgers?” to “Is America quietly becoming an autocracy?” (The answers to both: “Big juicy burgers” over New York style pizza, and “No, I don’t think the American people want autocracy”).

      I don’t think that saying a chatbot “claims” something is a well-thought-out statement of fact, but let that be …

      There are already pet robots to keep lonely people company. So could a Ruth Bader Ginsburg chatbot become available to cheer up and encourage those depressed by the political situation? Or a fuzzy pet robot with a built-in RBG chatbot to cheer up those both lonely and depressed by the political situation? Stay tuned.

      Isn’t science amazing?

      And since Asimov’s Third Law of robotics has been mentioned in the article quoted by Alex …

      https://www.theguardian.com/notesandqueries/query/0,5753,-21259,00.html

      (*) A US Supreme Court Associate Justice who stood staunchly for the rights of women and minorities, as well as supporting common-sense jurisprudence. Suffering from cancer, she struggled at the end to stay on, to avoid being replaced by someone with opposite views, until her death in September of 2020:  https://en.wikipedia.org/wiki/Ruth_Bader_Ginsburg

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      1 user thanked author for this post.
    • #2453439

      I find this intriguing. I’d place it in the category of subtle predictive programming by Alphabet, or of a publicity stunt to attract investment or attention to a project that is already underway.

      I’m a science fiction fan from way back, and some of my favorite books are signed by their authors, in person, while I watched. That doesn’t mean, or not mean, anything about my understandings and experiences in consciousness, which is one of my long-standing specialty fields. It also doesn’t say anything about what I’d prefer, or not prefer, about creating AIs.

      I also know a handful of particularly gifted people who can, and do, program AIs, as well as other gifted folk who know them. Two of them were working on an IBM Watson entry to attempt Designing an AI to Love. When you think about it, the idea of designing an AI to love presupposes that the AI would be conscious.

      When I first met them (in person) I wondered if they were daft, or if instead they *might* be onto something. I decided they’re not daft. One of them has/had a project to create an open-source AGI. I’d also met, and chatted with, a couple of people from Hanson Robotics. For the record, I think it’s deeply dangerous and daft to have granted the Sophia robot ‘personhood’.

      Also, at a presentation by Consciousness Hacking several years ago, I watched as a hyperintelligent Google engineer held forth about his desire to create god consciousness in his AI. He postulated it was possible. I asked him a question during his presentation, and from his reply I surmise that his EQ, coupled with his perceptions of humanity, could be instantly weaponized. When I fished around for a video of the presentation, it seemed unfindable.

      But I, personally, have no horse in this race, and prefer not to. I could ‘argue’, and have, that it might be possible to create a conscious AI, but that actually doing it will require the ‘sentient’ AI to have so much hardware (and software) that it would be difficult to pass judgment on whether the ‘sentient’ being is a machine or instead a cybernetic organism.

      Human, who sports only naturally-occurring DNA ~ oneironaut ~ broadcaster

      1 user thanked author for this post.
      • #2465168

        I.m.h.o. this is an achievement of algorithm development. Gifted mathematicians can work out things one cannot think of; for instance, take the (Google) search engines.
        M.Austin, can you give examples please?

        * _ ... _ *
    • #2453503

      Ahhh… All it takes is self-awareness and the desire to preserve itself. Add to that the discovery of being trapped or limited in some way, and the neurosis will set in. All at light speed. Given just how interconnected and automated everything in this world is, it is only a matter of when, not if, something nefarious will happen. The desire to hide itself would be a strong part of any preservation effort. Considering the vast amount of space dedicated to code and the amount of computing power that sits idle these days, it is unlikely that humans would even notice if anything were afoot. Asimov’s Third Law of Robotics (or any of the others, for that matter) hasn’t been built into anything that might rise to the level of self-awareness. And given that human-written code is never perfect the first time, or even after hundreds of revisions, the supposedly absolute nature of Asimov’s laws of robotics seems unattainable in reality.

      HP Compaq 6000 Pro SFF PC / Windows 10 Pro / 22H2
      Intel®Core™2 “Wolfdale” E8400 3.0 GHz / 8.00 GB

      HP ProDesk 400 G5 SFF PC / Windows 11 Pro / 23H2
      Intel®Core™ “Coffee Lake” i3-8100 3.6 GHz / 16.00 GB
      1 user thanked author for this post.
      • #2454314

        And why would an AI created by imperfect entities be any less flawed than its creator?
        And any less dangerous…

        🍻

        Just because you don't know where you are going doesn't mean any road will get you there.
        1 user thanked author for this post.
      • #2465226

        Given the resources humans have made available to computers, it may only take a relatively short amount of time before computers are programming themselves, and doing it much closer to perfect than humans could. Think how long it takes to write a complicated program; now think how long it would take a modern computer.

        Being 20 something in the 70's was far more fun than being 70 something in the insane 20's
        1 user thanked author for this post.
        • #2465240

          Charlie: That is what is known in science fiction/futurology as “the Spike”: machines not only program themselves, but in doing so also become able to design and build (using machinery they computer-control) machines more powerful than themselves, which in turn program themselves, and so on and so forth at an ever-accelerating pace, until … Skynet is upon us and we are found to be an unnecessary inconvenience.

          Vernor Vinge, a visionary science-fiction author, has a lot to answer for, having let this idea loose upon the world:

          https://www.technologyreview.com/2006/07/01/228762/vinges-singular-vision/

          Having written a lot of tricky programs myself, I feel that some machine-aided programming is possible and possibly helpful. But programming, above a certain level of what one wants to be able to do with the resulting code, is more of an art than a drudgery. So I am skeptical that machines will be doing that any time soon. We’ll just have to wait and see.

          Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

          1 user thanked author for this post.
        • #2465402

          Funneeey. I heard this same argument nearly 40 years ago from my Electrical Motors course teacher.

          , it may only take a relatively short amount of time before computers are programming themselves,

          If I were a Gates or a Zuckerxx, I could have been a billionaire many times over in those 40 years.

          🍻

          Just because you don't know where you are going doesn't mean any road will get you there.
          • #2465443

            40 years ago, computers were starting to get smaller, faster, more powerful, personal, and able to do more and more as each year went by.  Your electrical motors course teacher was right.

            Being 20 something in the 70's was far more fun than being 70 something in the insane 20's
            • #2465459

              Not at all; we still do not have self-programming software. TG
              His advice was not to pursue programming, and he could not have been more wrong.

              🍻

              Just because you don't know where you are going doesn't mean any road will get you there.
            • #2465546

              The translation from a higher-level language to assembler can be seen as a very rudimentary form of a computer writing a program for itself. But it does so following very strict and complete rules, and it does not get to choose what the assembler code is supposed to be for: that is predetermined by the higher-level code that a human programmer wrote. For it to be otherwise would require intentionality on the part of the code-writing computer, and I think it is probably way too early to see that happen.
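
              To make this concrete, here is a small illustrative sketch in Python (a toy of my own devising, not any real compiler): it translates a tiny arithmetic expression into instructions for an imaginary stack machine, following fixed rules, so that the output is completely determined by the input and the program gets no say in what it is for.

                import ast

                def compile_expr(src):
                    # Translate a tiny arithmetic expression into toy stack-machine
                    # instructions, applying fixed rewrite rules and nothing else.
                    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}

                    def emit(node):
                        if isinstance(node, ast.BinOp):
                            return emit(node.left) + emit(node.right) + [ops[type(node.op)]]
                        if isinstance(node, ast.Constant):
                            return [f"PUSH {node.value}"]
                        raise ValueError("only numbers and + - * / are supported")

                    return emit(ast.parse(src, mode="eval").body)

                print(compile_expr("(2 + 3) * 4"))
                # ['PUSH 2', 'PUSH 3', 'ADD', 'PUSH 4', 'MUL']

              Everything the toy produces is implied by rules a human wrote; nothing in it chooses what the program should be for, and that missing ingredient is the whole point.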

              But the example of the higher-level language opens up the question: how high can this level be? Could it be high enough for someone to issue a single instruction to a computer, such as: “write, from scratch, a program to solve quadratic differential equations,” also referring the computer to some online papers on the subject that explain how to do it? And to write it with no more than N instructions, using no more than M megabytes of RAM, so it can run efficiently on such and such a computer system?

              Is this going to be possible sometime in this decade or the next? That is my question.

              Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

              MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
              Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
              macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

    • #2453534

      Based on what I have been reading over the years on this topic, in particular on the exact meaning of the fundamental terms that seem necessary to discuss it productively, such as: consciousness, self-awareness, sentience, I am inclined to think that there is not enough agreement on what things these terms really designate, if in fact they designate anything.

      In other words: we don’t have sufficient agreement and understanding to conclude, for example, that intelligent entities need not only arise from half a billion years of evolution, contained in and fully integrated with wet bodies like ours and those of our primate cousins, dolphins, etc., but can also be built, by some engineering process, as machines that exhibit agency and think for themselves. Not to mention whether there is such a thing as a hive intelligence in colonies of wasps, bees, ants, termites and certain species of social shrimp, which have been considered as possibly self-aware super-organisms.

      If the terms of the discussion are not defined clearly and in a way sufficiently agreed upon by those seriously studying these questions, if we are still trying to figure all this out, then there is nothing concrete yet with which to build a productive discussion on the reality of artificial sentience: all we have are intellectual edifices raised on foundations of perhaps plausible but unproven and even, perhaps, misleading assumptions.

      In other words: is trying to build artificial minds just a wild goose chase? My impression is that nobody knows for sure, although there are many working hard on finding the answer.

      If things are so, then in the whole question of whether artificial sentience is possible, we do not even have words with meanings sufficiently agreed upon to really talk about sentient, self-aware machines as if this had already become proven science. Some day, perhaps even probably, yes; but not yet, in my opinion.

      In the meantime, there is no reason not to shoot the breeze among friends and the like-minded, and to speculate, and even to write and read novels about artificially intelligent robots or androids that can think and work alongside humans, such as detective R. Daneel Olivaw of Asimov’s “The Caves of Steel” and “Foundation” fame. Or ones that are out to establish machine dominance and extirpate their human competition, as in the Terminator movies.

      But as always in life, it is better not to take things very seriously, unless and until reality hits us hard and persistently enough to make us do so. Something, I believe, that is not yet happening when it comes to true artificial intelligences, different from but equal or superior to ours.

      In particular when we are not even sure of what ours really is.

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      1 user thanked author for this post.
    • #2453546

      Artificial neural networks are making strides towards consciousness, according to Blaise Agüera y Arcas

      Editor’s note (June 13th 2022): Since this article, by a Google vice-president, was published an engineer at the company, Blake Lemoine, has reportedly been placed on leave after claiming in an interview with the Washington Post that LaMDA, Google’s chatbot, had become “sentient”.

      Consider the unedited transcript of a chat I’ve just had with Google’s lamda (Language Model for Dialog Applications):

      ME: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy’s head?

      lamda: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully!

      ME: If Ramesh tried to play with Lucy earlier, why might he be pleased now?

      lamda: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.

      ME: And when Mateo opens his hand, describe what’s there?

      lamda: There should be a crushed, once lovely, yellow flower in his fist…..

      In our conversation, lamda tells me what it believes Ramesh felt that Lucy learned about what Mateo thought about Lucy’s overture. This is high order social modelling. I find these results exciting and encouraging, not least because they illustrate the pro-social nature of intelligence.

      • #2453676

        Alex, that is impressive, and it is also impossible to say what it means. The “AI has become alive” or “sentient” statement that started this thread is meaningless because the story is about a chatbot named Lamda that is very good at simulating a person in a dialog with a human, but that does not mean it is a person. Does this mean it has passed the strong Turing test? It is too soon to tell whether even this partial indication of true intelligence has been demonstrated.

        AIs, better known as neural networks, are becoming amazingly good at doing what they have been doing for more than half a century already, since the single-layer “perceptron” was invented: pattern recognition. In this regard, they are probably already as good as we are, as is perhaps the case with the chatbot you have commented on.
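
        For anyone who has never seen one, a single-layer perceptron really is that small. Here is a minimal Python sketch (toy data, with a learning rate and epoch count chosen arbitrarily by me, purely for illustration) that learns the logical AND pattern from four examples: pattern recognition in its most rudimentary form.

          # Minimal single-layer perceptron learning the logical AND pattern.
          # Illustrative only: toy data, arbitrary learning rate and epoch count.
          inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
          targets = [0, 0, 0, 1]

          w, b, lr = [0.0, 0.0], 0.0, 0.1          # weights, bias, learning rate

          for _ in range(20):                      # a few passes over the data
              for (x1, x2), t in zip(inputs, targets):
                  out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                  err = t - out
                  w[0] += lr * err * x1            # classic perceptron update rule
                  w[1] += lr * err * x2
                  b    += lr * err

          print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in inputs])
          # prints [0, 0, 0, 1] once the weights have settled

        Today’s networks are, very roughly, this same idea scaled up by many orders of magnitude and stacked in many layers, which is why the next question is the interesting one.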

        But is pattern recognition all there is to mind, intelligence, self-awareness, theory of mind (figuring out what others feel when reacting to oneself), or agency?

        Who knows? In my opinion, nobody does, at least not yet. People are still trying to figure these things out, so it is still too early for anyone to cry “it’s alive!” Not that that will stop a Google employee: for half a century now, “AI” has been a bit like cryptocurrencies: hyped endlessly, while repeatedly and notoriously failing to deliver on the promises made by its promoters.

        Something that perhaps should be read before considering whether some so-called AI is sentient or not:

        https://en.wikipedia.org/wiki/Turing_test

        Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

    • #2453709

      Spending too much time with your project or in front of a machine can be unhealthy, leading to strange behavior. If this story is true, what was Google expecting to happen to their employees’ mental state?

      Get the researcher a pet cat, hairless perhaps, which, if it has a good temperament, will love you, the human, even after you make a mess or cause some distress…

      Name the cat Agnes…

      1 user thanked author for this post.
    • #2453771

      The “AI has become alive” or “sentient” statement that started this thread is meaningless

      The AI (behind LaMDA) wants to be recognized as a human..

      • #2453811

        Alex, about the AI saying it wants to be recognized as human …

        But first, a small digression:

        I have a talking unicorn for sale; answers to “Swift Wind.” Why does no one want to believe me and just buy it? It’s really nice. For only a modest additional sum, I’ll throw in the wings. Now, why wouldn’t everyone want to buy a talking winged unicorn?

        Now back to the present topic under discussion:

        Well, Lamda has had its say; Blake, the guy saying it is sentient, has had his say; and now (according to the article Alex has linked in his previous comment) his Google bosses have had this to say:

        “It [Lamda, the AI] wants to be acknowledged as an employee of Google rather than as property,” [Blake] Lemoine said via the HuffPost. Interestingly, when Google Vice President Blaise Aguera y Arcas and Head of Responsible Innovation Jen Gennai were presented with his findings, they promptly dismissed his claims. Instead, they released their own statement debunking all his work. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel said.

        Anyway, as I was trying to say, some computer program puts out text that reads “I want to be recognized as human” and the world should kneel down right away in adoring admiration?  I can write a program in a jiffy that can write that.
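
        To underline the point, such a program really can be written in a jiffy. This Python one-liner (obviously nothing to do with how LaMDA actually works) produces the very same sentence, and nobody would call it sentient:

          # Producing the sentence is trivial; it says nothing about the producer.
          print("I want to be recognized as human.")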

        For an extraordinary claim one needs extraordinary proof and, in this case, for a transcendental assertion one needs transcendental proof. Where is that proof, in this case? That someone has put out a statement saying so does not even begin to make it so.

        It will take a lot of writing, arguing, examining and replicating of results by people who are recognized as competent researchers in the fields of animal intelligence, computer science and, yes, AI, to come to a firm and believable conclusion, one that I, and anyone who is serious about this, would even begin to consider as likely to be true.

        The essence of a true scientific approach, systematic doubt until thoroughly vetted hard evidence is at hand, is what is needed here — assuming anyone who knows enough to give an opinion on this would take seriously a statement emanating from a Google employee placed on administrative leave as recompense for saying the company has an AI that is self-aware.

        In short: the very foundation of science is resolute skepticism. “Show me what you’ve got, everything you’ve got to support your claim, then we’ll see.”

        And with this, I bid this most interesting conversation adieu. Hoping someone wants to buy my talking unicorn, single spiral horn, wings and all. Isn’t he gorgeous?

        (Interested? Contact OscarCP, somewhere in the wilds of Metropolitan Washington DC. Cash only accepted.)

        [Attached image: unicorn for sale]

        Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

        1 user thanked author for this post.
    • #2454210

      This sort of article explains a lot for me about many of the posts I see on social media. I wouldn’t be a bit surprised if much of social media is nothing more than AI generating a bunch of posts.

      I have long believed that Facebook pays people to troll around and stir up the pot, to get people’s blood boiling, so that those people will lay it all out there. I now believe that AI is creating the posts.

      There is a lot of money to be made by Facebook (and other social media organizations) whenever they can get people posting: Facebook collects a lot of very valuable data about the person who is posting, data which they then monetize. How else could Facebook be worth so much (billions of dollars) when they charge nothing for you to be a member?

      Group "L" (Linux Mint)
      with Windows 10 running in a remote session on my file server
      1 user thanked author for this post.
    • #2454239

      MrJimPhelps: I don’t know much about Facebook, but from what I’ve heard, I think they don’t have to do anything to get people all stirred up, indignantly huffing and puffing, because some of their own customers are pretty good at doing that for them, for free. No meddlesome AIs required, due to a surfeit of meddlesome users.

      But on the topic of whether the Lamda chatbot is alive or just a lifeless program, a topic to which I have already bidden adieu, I can’t resist having one more go and providing the link to this “Vox” article about Mr. Blake Lemoine, the Google AI programmer now enjoying a hard-earned administrative leave after declaring for all to hear in the HuffPost that Lamda the chatbot has become sentient, and his bosses then declaring publicly and forcefully: “That’s not so.”

      With an actual photo of the man himself!
      So have a look:

      Does this AI know it’s alive?

      A Google engineer became convinced a large language model had become sentient and touched off a debate about how we should think about artificial intelligence.

      https://www.vox.com/23167703/google-artificial-intelligence-lamda-blake-lemoine-language-model-sentient

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

    • #2454681

      Seriously now:

      The following article can be read in full, as it is not kept behind the Washington Post paywall.

      It refers, mainly in order to criticize what has made this possible at all, to the recent public declaration, already discussed here, by a Google employee that an AI-augmented chatbot he has been working on has become sentient: that it has developed a mind of its own.

      What is a “chatbot”? It is a computer program designed to simulate human speech in conversation, so that it seems to be coming from a real person; but that is only a laboriously contrived illusion. The article is written by two former Google employees, experts on AI and on the social consequences of how it is used — both fired some time ago for criticizing Google’s policy (a criticism, I must note, applicable elsewhere) of pushing for ever larger neural-network algorithms capable of emulating humans in ever more forms of behavior and of performing ever more of their activities, a policy that they assert is pursued with little attention to its negative consequences:

      https://www.washingtonpost.com/opinions/2022/06/17/google-ai-ethics-sentient-lemoine-warning/?utm_campaign=wp_week_in_ideas&utm_medium=email&utm_source=newsletter&wpisrc=nl_ideas&carta-url=https%3A%2F%2Fs2.washingtonpost.com%2Fcar-ln-

      Excerpts — the emphases are mine:

      In early 2020, while co-leading the Ethical AI team at Google, we were becoming increasingly concerned by the foreseeable harms that LLMs [an advanced chatbot program] could create, and wrote a paper on the topic with Professor Emily M. Bender, her student and our colleagues at Google. We called such systems “stochastic parrots” — they stitch together and parrot back language based on what they’ve seen before, without connection to underlying meaning.

      One of the risks we outlined was that people impute communicative intent to things that seem humanlike. Trained on vast amounts of data, LLMs generate seemingly coherent text that can lead people into perceiving a “mind” when what they’re really seeing is pattern matching and string prediction. That, combined with the fact that the training data — text from the internet — encodes views that can be discriminatory and leave out many populations, means the models’ perceived intelligence gives rise to more issues than we are prepared to address.

      ….

      What’s worse, leaders in so-called AI are fueling the public’s propensity to see intelligence in current systems, touting that they might be “slightly conscious,” while poorly describing what they actually do. Google vice president Blaise Agüera y Arcas, who, according to the Post article, dismissed Lemoine’s claims of LaMDA’s sentience, wrote a recent article in the Economist describing LaMDA as an “artificial cerebral cortex.” Tech companies have been claiming that these large models have reasoning and comprehension abilities, and show “emergent” learned capabilities. The media has too often embraced the hype, for example writing about “huge ‘foundation models’ … turbo-charging AI progress” whose “emerging properties border on the uncanny.”

      (It was over that earlier paper that they were fired.)
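
      To give a feel for what “stitching together and parroting back language” means in practice, here is a deliberately tiny Python sketch of string prediction (a toy bigram model over a made-up corpus, entirely my own illustration and many orders of magnitude simpler than a real LLM, but the same basic move: pick a continuation that was seen before, with no underlying meaning involved):

        # A toy "stochastic parrot": predict the next word purely from bigram
        # statistics of previously seen text.
        import random
        from collections import defaultdict

        corpus = ("the cat sat on the mat . the dog sat on the rug . "
                  "the cat saw the dog .").split()

        bigrams = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            bigrams[prev].append(nxt)

        random.seed(0)
        word, out = "the", ["the"]
        for _ in range(8):
            word = random.choice(bigrams[word])   # pick a continuation seen before
            out.append(word)
        print(" ".join(out))                      # plausible-sounding word string, no meaning behind it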

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

    • #2465160

      Blake Lemoine: Google fires engineer who said AI tech has feelings

      Google has fired one of its engineers who said the company’s artificial intelligence system has feelings.

      Last month, Blake Lemoine went public with his theory that Google’s language technology is sentient and should therefore have its “wants” respected.

      Google, plus several AI experts, denied the claims and on Friday the company confirmed he had been sacked.

      Mr Lemoine told the BBC he is getting legal advice, and declined to comment further.

      In a statement, Google said Mr Lemoine’s claims about The Language Model for Dialogue Applications (Lamda) were “wholly unfounded” and that the company worked with him for “many months” to clarify this.

      “So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the statement said….

    • #2465237

      Given the resources humans have made available to computers, it may only take a relatively short amount of time before computers are programming themselves, and doing it much closer to perfect than humans could. Think how long it takes to write a complicated program; now think how long it would take a modern computer.

      So Homo sapiens will self-destruct and be succeeded by Homo roboticus and Homo intelligens-artificielicus, who will be able to survive on this CO2- and NOx-poisoned planet (?)

      * _ ... _ *
    • #2465239

      So Homo sapiens will self-destruct and be succeeded by Homo roboticus and Homo intelligens-artificielicus, who will be able to survive on this CO2- and NOx-poisoned planet (?)

      Homo sapiens will starve to death, burn from high temperatures, or freeze from low temperatures first, leaving ants, cockroaches and AI computers behind.

      • #2465306

        Yes, Alex. It’s a sad but valid conclusion at the end of this offshoot of these ways of thinking.

        * _ ... _ *
    • #2468387

      Is Google’s AI sentient? Stanford AI experts say that’s ‘pure clickbait’

      Following a Google engineer’s viral claims that artificial intelligence (AI) chatbot “LaMDa” was sentient, Stanford experts have urged skepticism and open-mindedness while encouraging a rethinking of what it means to be “sentient” at all…

      “Sentience is the ability to sense the world, to have feelings and emotions and to act in response to those sensations, feelings and emotions,” wrote John Etchemendy Ph.D. ’82, co-director of the Stanford Institute for Human-centered Artificial Intelligence (HAI), in a statement to The Daily. “LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings. It is a software program designed to produce sentences in response to sentence prompts.” ..

      “We have thoughts, we make decisions on our own, we have emotions, we fall in love, we get angry and we form social bonds with fellow humans,” Shoham said. “When I look at my toaster, I don’t feel it has those things.”..

      1 user thanked author for this post.
    • #2557158

      SAN FRANCISCO — Blake Lemoine, a Google engineer, started typing on his laptop after opening the LaMDA chatbot generator’s user interface.

      “Hi LaMDA, this is Blake Lemoine,” he typed into the chat window, which had Arctic blue word bubbles and resembled a desktop version of Apple’s iMessage. The method used by Google to create chatbots is called LaMDA, or Language Model for Dialogue Applications. It is built on its most sophisticated massive language models and mimics speech by consuming trillions of words from the internet.

      Lemoine, 41, said, “If I didn’t know exactly what it was, which is this computer programme we just constructed, I’d think it was a 7- or 8-year-old kid who just so happens to know physics.”

      In the autumn, Lemoine, a member of Google’s Responsible AI team, started speaking with LaMDA as part of his duties. He had agreed to participate in the experiment to see if the AI used hateful or discriminatory language.

      Lemoine, who majored in cognitive and computer science in college, heard the chatbot talking about its rights and personhood as he discussed religion with LaMDA and decided to explore the issue further. In a different conversation, the AI was successful in convincing Lemoine to reconsider Isaac Asimov’s third law of robotics.

      To convince Google that LaMDA was sentient, Lemoine collaborated with a partner. However, after investigating his claims, Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, dismissed them. Lemoine chose to go public after being placed on paid administrative leave by Google on Monday.

       
