• The Three Laws of Robotics

    #2550902

    ISSUE 20.15 • 2023-04-10 • COMMENTARY • By Will Fastie

    Along with its recent announcement of Copilot, Microsoft made a point of mentioning “responsible AI” …
    [See the full post at: The Three Laws of Robotics]

    • #2550915

      Our tech is often turned against us by a lack of moral code from humans on the Internet… Scam eMail, texts, or calls, anyone? Security patches needed? Why?

      How is AI, which presumably reads all of what’s on the net, going to be able to decide on its own what the proper ways are to interact with us? Do we have any good examples handy? You can be sure it will ultimately see all the examples of how to manipulate people, and the worst of how people treat one another. Will it think that’s “normal” and “how we want things to be”?

      This article explores the question of whether any given AI program is being programmed to be morally responsible. Thing is, I’m not even sure that AI CAN be so-programmed. Neural networks don’t really work like that.

      And even if we find a way and try our best, last I looked we haven’t been reliably able to educate a new human to be good, right?

      Our brains operate at a few cycles per second and have billions of neurons. AI implementations we already have and are rushing to expand are backed by digital hardware that operates at billions or trillions of cycles per second with trillions of bytes of memory, quadrillions of bytes of fast, reliable mass storage, and virtually unlimited reference material online at fiber optic speeds.

      In human terms, you or I could read an average book, which literally contains a few megabytes of data, in what, maybe a day or two? An AI could read it in milliseconds, and forget none of it.
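      To put rough numbers on that (a back-of-envelope sketch in Python; every figure is an illustrative assumption, not a measurement):

      ```python
      # Back-of-envelope only; the point is the ratio, not the precision.
      book_bytes = 2 * 1024**2      # ~2 MB of plain text, a longish novel
      avg_word_bytes = 6            # ~5 letters plus a space
      human_wpm = 250               # a typical adult reading pace

      words = book_bytes / avg_word_bytes
      human_hours = words / human_wpm / 60
      print(f"Human: roughly {human_hours:.0f} hours of continuous reading")

      scan_bytes_per_sec = 1e9      # a single modern core can stream ~1 GB/s
      machine_ms = book_bytes / scan_bytes_per_sec * 1000
      print(f"Machine ingest: roughly {machine_ms:.1f} ms")
      ```

      That works out to roughly a day of continuous human reading versus a couple of milliseconds of machine ingest.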

      We can only hope that a massively more capable intelligence will grok good and evil better than we do. And think we’re cute.

      -Noel

      11 users thanked author for this post.
      • #2550942

        Our tech is often turned against us by a lack of moral code from humans on the Internet… Security patches needed? Why?

        Security patches are immoral? You patch your software, don’t you?

        1 user thanked author for this post.
        • #2551122

          I think (and I may be mistaken) that he meant that many humans have very weak or even missing moral codes. In the technologies of software, computers, the internet – even AI – there are and will likely always be hackers, scammers, spammers, con men, identity thieves, state-sponsored adversaries and so on. I think he meant that is why we need most of the patches we get.

          The other reason we have patches is that humans are fallible: designers, coders, researchers, and companies all have oversights and make mistakes. As systems get more complex, those mistakes become harder to avoid. Unintended consequences arise in our OSes, programs, apps and systems – requiring patches.

          In the end, it seems the most dangerous part of a technology is usually the people who use it rather than the technology itself. Technologies are only as “moral” as those who use them. AI – especially a powerful enough AGI – has the potential to break that mold if it is trained by amoral humans or if it is constructed with errors and oversights. A technology that has its own agency, trained by amoral humans or containing critically severe unintended consequences, could be world-changing.

          Until all humans are “moral” and “never make mistakes”, cautious development seems prudent. As I see it, our cleverness has already outpaced our wisdom.

           

          Win10 Pro x64 22H2, Win10 Home 22H2, Linux Mint + a cat with 'tortitude'.

          2 users thanked author for this post.
    • #2550916

      These systems have no ethics, no moral code.

      It is not the systems that lack ethics or a moral code; it is the developers/programmers of these systems – and the companies incorporating “AI” into their services – that don’t have ethics or a moral code.

      2 users thanked author for this post.
    • #2550923

      How is AI, which presumably reads all of what’s on the net, going to be able to decide on its own what the proper ways are to interact with us?

      Hopefully, by not using what it finds on the net as its guide. But I don’t know.

      I’m not even sure that AI CAN be so-programmed.

      Nor am I.

      Neural networks don’t really work like that.

      That may have been a bad choice. I should have linked to Positronic brain – Wikipedia instead, which points out that Asimov was vague about how the brain worked.

      We can only hope that a massively more capable intelligence will grok good and evil better than we do

      Amen.

      (I assume you know where “grok” came from.)

      4 users thanked author for this post.
      • #2551026

        Grok. Stranger in a Strange Land, one of Heinlein’s masterpieces, read so often I practically have it memorized – and SO much of it mirrored in the world today.

        2 users thanked author for this post.
    • #2550940

      A most interesting story, thanks for the writeup!

      1 user thanked author for this post.
    • #2550926

      Hi, 30 years ago I built some simple iPad-sized robot vehicles for a biologist friend, who was working on interactive behavior (think ants).
      The prototype was called Daneel (what else?), & had a warning sticker saying it didn’t follow any of the 3 Laws; it just had its mission, which was to do what I had programmed it to do. At that time, the technological environment did not permit much else!

      1 user thanked author for this post.
      • #2552557

        Is mister Steel still around?

        🍻

        Just because you don't know where you are going doesn't mean any road will get you there.
    • #2550951

      Good succinct article.

      I’ve spent a few months now, since ChatGPT became available, “playing with it.” I started “playing” with ELIZA back in about 1971 in high school on our time-sharing system, so I’ve got a passing familiarity with natural language processors.

      GPT-3 was very good. GPT-4 crossed the line, in my opinion. Even after OpenAI dumbed it down a bit, it is “clever.” Let me explain that. I asked it a simple question about whether it could program itself for independent sentience. It gave me an answer that was at best “weasel words.” It sounded like the corporate lawyer telling me all the reasons why its programmers wouldn’t allow that. Then I asked it a few other questions, which were, umm, not related to the primary question (although it keeps track of the original question throughout the session).

      Could you design a data center better than your current one? (It could, but said it would need humans to assemble it.)
      Could you design a manipulator arm for gross and fine motor control, similar to human prosthetic limbs? (It could, and could show humans how to build it.)
      Could you design a control program for the Boston Dynamics family of robots? (It could, and could improve on their mobility and dexterity.)

      In short, in less than an hour it told me it could bypass all of those safeguards that they had put in place. Not surprising, since GPT-3 could pass the LSAT, MCAT, and GRE in the lower 10%; GPT-4 is in the upper 10% (better than most human test takers).
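      For anyone curious how a probing session like that can be scripted, here is a minimal sketch. It assumes the openai Python package (v1.x client) and an OPENAI_API_KEY environment variable; the model name and questions are illustrative, not a transcript of the session described above.

      ```python
      # Probe a chat model with a sequence of questions, keeping the
      # full transcript so the model can track the session's context.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      questions = [
          "Could you program yourself for independent sentience?",
          "Could you design a data center better than your current one?",
          "Could you design a control program for a legged robot?",
      ]

      messages = []  # the running transcript is the session's "memory"
      for q in questions:
          messages.append({"role": "user", "content": q})
          reply = client.chat.completions.create(model="gpt-4", messages=messages)
          answer = reply.choices[0].message.content
          messages.append({"role": "assistant", "content": answer})
          print(f"Q: {q}\nA: {answer}\n")
      ```

      Carrying the whole transcript in `messages` is what lets the model “keep track” of earlier questions within a session.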

      Not much for paranoia, but I almost felt like I did when my kids were trying to reassure me they weren’t going to throw a party when I was out of town. Stanford University says a percentage of its students are using it to write papers. My own alma maters, the University of North Carolina and the University of Chicago, have similar problems. GPT-5 is due out in less than 18 months. Want to bet how far that leap will go and who will design GPT-6?

      5 users thanked author for this post.
    • #2551003

      I wrote a reply featuring the film Gog. The whole thing disappeared! Got an error message saying “Are you sure you want to do that?” with no way to reply: No!

      AI is indeed insidious.

       

      https://www.imdb.com/title/tt0047033/?ref_=fn_al_tt_1

       

    • #2551010

      Are we at last at the point that all those SciFi stories about malevolent sentient computers were warning us about? Hard to say. When most discussions attempting to define consciousness rapidly become philosophical rather than technical, it’s easy to get lost in arguing whether the systems are sentient or not. However, I don’t think malevolent sentience is a necessary condition for bad outcomes; unanticipated or unconsidered consequences are sufficient. Systems based on faulty assumptions or flawed rules can be the cause of that – that, and a reluctance to question the systems going forward.

      I was pleased that Will referenced one of my all-time favorite films, the original “The Day The Earth Stood Still.” The movie was based on a short story, “Farewell To The Master,” but did not emphasize what I felt to be the main lesson of the story. At the very end, Klaatu tells the protagonist that Gort is indeed the Master.

      3 users thanked author for this post.
    • #2551020

      Are we at last at the point that all those SciFi stories about malevolent sentient computers were warning us about?

      No, because I don’t see sentience. Yet.

      1 user thanked author for this post.
      • #2551028

        True, some systems mimic it pretty well. But nothing approaches The Bicentennial Man, a great movie starring Robin Williams, based on one of Isaac’s short stories.

        1 user thanked author for this post.
    • #2551023

      The Moon Is A Harsh Mistress is one of my favorite Heinlein novels. I found him and Isaac as a teen in the early 60’s and love their work, but in particular I love Isaac’s works, all of them, especially the Robot novels, which I have recently reread given the wide attention currently being paid to AI (Daneel, who appears in them, also appears in the conclusion of the Foundation series). That attention includes the plethora of AI-based chatbots filling the App and Play stores, with Replika perhaps the most popular at present and also beset with all manner of problems.

      We don’t want a Terminator situation here, so I discussed this idea in some depth with ChatGPT. It is of the opinion that we do not have the technology, nor will we in the next century, to create humanoid robots with anything like positronic brains. As you point out, what we have is software, but how do we ensure that humans remain firmly in control of that, to the benefit, not the end, of humanity? Not a question that is going to go away anytime soon. Malwarebytes published an interesting piece last week about how it “tricked” ChatGPT into writing malware – sort of. There are hazards.

      Very nice article, thank you.

      1 user thanked author for this post.
    • #2551027

      Will, Your commentary was the best thing I’ve ever read on AskWoody. Congratulations on a great issue!

      2 users thanked author for this post.
    • #2551031

      The Bicentennial Man

      The quest to become human.

      1 user thanked author for this post.
      • #2551152

        The quest to become human.

        Yet humans are flawed; history demonstrates that. Why a quest for flawed? Why not a quest for perfect? What is perfect… and who/what defines perfection?

        1 user thanked author for this post.
    • #2551045

      Somehow lost my post. I found Heinlein and Asimov in my teens in the early 60’s. The Moon is a Harsh Mistress is a masterpiece. I love all of their work, including their early pieces as I found them. I’ve recently been rereading the Robot novels, given the attention AI is getting these days – wonderful books. Daneel also appears in the last of the Foundation series.

      There are a plethora of AI apps out there on both the App and Play stores, mimicking sentience but definitely without it. I’ve looked at those and had a long discussion with ChatGPT about the possibility of humanoid robots with something like the Three Laws; it insisted we do not have the technology for anything like a positronic brain, nor will we for at least a century.

      That said, we don’t want a Terminator situation either. Though all AI is software – and I’ll admit I’m not truly conversant with neural networks or LLMs – it seems that one of the challenges facing us all is to ensure these systems have built-in safeguards of some kind to prevent any scenario in which AI acts against humanity. I think that’s a real concern and, given some of the things I’ve seen, including the LLM chatbots I’ve looked at, a possible one. It is something we need to be very careful with, as what we don’t know can indeed harm us. Very nice, and timely, article, Will. Thanks.

      3 users thanked author for this post.
    • #2551066

      Will, let me second Amy’s comments.  She said it well and I totally agree.  Thanks for sharing your insights; please keep them coming.

      DVH

      PS, thanks to Gene J as well; nicely said.

       

      2 users thanked author for this post.
    • #2551104

      Will, as a longtime fan of Asimov, I was happy to see someone point out the fact that, at least in our imagination, we (the humans) have been here before and managed to get it right.
      While I frankly doubt that we can manage that in real life, it was a good “wake-up call.”

      It seems almost inevitable to me that we will screw this up in one way or another, simply because neither the programmers nor the sales force will be concerned with, or truly responsible for, the ethical blunders that will certainly occur. And management, while well-paid and even well-meaning in some cases, is woefully clueless when it comes to ethics and morality.

      Harvey

      2 users thanked author for this post.
    • #2551119

      Are we at last at the point that all those SciFi stories about malevolent sentient computers were warning us about?

      No, because I don’t see sentience. Yet.

      What concerns me more about the near future is that some powerful sub-sentient AI makes a huge logical error (due to human error and hubris) that affects humanity in a major detrimental way, rather than due to an outright malevolent decision.

      Hurry up with those 3 laws!!! 😉

      Windows 10 Pro 22H2

      2 users thanked author for this post.
      • #2551121

        You mean like telling it to get rid of all the excess carbon and forgetting we are carbon-based?

        1 user thanked author for this post.
      • #2551174

        AI makes a huge logical error (due to human error and hubris)

        not only due to human error and hubris. ChatGPT just told me that it does not have inherent knowledge in the way that humans do. And I believe it, based on the answer I got for the meaning of ‘bunting’, even with the context of ‘I put you in a bunting.’ It clearly does not know the meaning of the word, despite the context and its huge database. That’s not a huge error, of course, and maybe its database will grow to help it find a context that matches my inherent knowledge, but see #2551121 for an error that IS huge (and the context is not esoteric)!!

        I doubt that the lack of inherent knowledge can be fixed, even if the database grows and grows. Human experience is unique to humans and not always rendered in written language. It would have to be able to decipher ALL of spoken and signed language, too, even unrecorded. (Spoken language is NOT merely written language rendered in speech – there are systems of meaning based on things like tone, stress, timing, accent, volume, and silence. Signed language is not merely spoken language rendered in hand and facial-gestural movements.) Add to this other types of sentient language — visual, haptic, proxemic, etc.

        Can this all be codified, out of which emerges inherent knowledge??? I think not.

        1 user thanked author for this post.
        • #2551237

          And I believe it, based on the answer I got for the meaning of ‘bunting’, even with the context of ‘I put you in a bunting.’ It clearly does not know the meaning of the word, despite the context and its huge database.

          Which bunting can you put someone in?

          • #2551394

            Check the Bye, baby Bunting link in your Wikipedia source.

            These are buntings. Notice that they are called “buntings.”

            I inherited this knowledge via the rhyme; it’s a part of MY life experience. You would think that ChatGPT could make it a part of its “life” experience, too… b/c the rhyme is out there on the web. And it must not be checking Amazon!!

            But, let’s say that you know a rhyme (or something else) that’s orally transmitted, common knowledge among a group of people, but it hasn’t been converted to written language and thus has not made it to the web yet. So, ChatGPT wouldn’t be able to find it. Inherited knowledge like this, naturally generated by humans by virtue of their being human, abounds and is (and will be) inaccessible to ChatGPT.

            • #2551436

              Check the Bye, baby Bunting link in your Wikipedia source.

              It says:

              Origins

              The expression bunting is a term of endearment that may also imply ‘plump’.[2]

              These are buntings. Notice that they are called “buntings.”

              Would ChatGPT fit in one of those?

              It knows what “a baby bunting” is. (And “Baby Bunting”.)

              I inherited this knowledge via the rhyme; it’s a part of MY life experience. You would think that ChatGPT could make it a part of its “life” experience, too… b/c the rhyme is out there on the web.

              In the rhyme, the baby is to be wrapped in a rabbit skin.

              And it must not be checking Amazon!!

              What do you get if you search Amazon for “bunting”?

            • #2551466

              In the rhyme, the baby is to be wrapped in a rabbit skin.

              Yes, and so the rabbit skin will become a bunting (something that will serve as a binding or will serve to bound or will confine by bonds).

              What do you get if you search Amazon for “bunting”?

              Amazon-bunting
              Which one could I put someone in??

              Never mind that you’ve never heard nor seen the word before. The point here is that it illustrates the generative nature of language — take a root and play with it to create a new meaning. This happens all the time and might never get into print. It happens not only at the word level; it happens at the sentence level, the paragraph level, the chapter level, the book level, the corpus level, ad infinitum. And what might emerge from this generative nature of language can’t be predicted or captured in an algorithm. It’s all new (i.e., not ever produced before) and has a life of its own that AI can’t capture. So, it’s never going to be human.

            • #2551663

              … a bunting (something that will serve as a binding or will serve to bound or will confine by bonds).

              Whose definition is that?

          • #2551428

            My first thought, not being familiar with baby clothing:

            It’s a saying, and probably plays on the decorations, not the baseball tactic. Or the bird. How fast does ChatGPT 4 catch on to which meaning is being used in such a case? Even I (mostly human) didn’t know of yet another meaning, upon which this saying hinges.

            -- rc primak

    • #2551126

      … developers/programmers … don’t have ethics or a moral code

      I think that’s a bit too strong. In my experience, almost everyone operates on a set of mores. It’s just that sometimes it becomes hard to live up to them, whether through a personal fault or external influences.

      1 user thanked author for this post.
    • #2551166

      On a lighter note: AI iterative recursion coming to a job near you?

      https://marketoonist.com/2023/03/ai-written-ai-read.html

      Win10 Pro x64 22H2, Win10 Home 22H2, Linux Mint + a cat with 'tortitude'.

    • #2551243

      almost everyone operates on a set of mores.

      Would a developer with a set of morals develop an app without morals?

      1 user thanked author for this post.
      • #2552559

        almost certainly

        🍻

        Just because you don't know where you are going doesn't mean any road will get you there.
      • #2552568

        almost certainly

        The artificial intelligence currently developed is not self-aware.

        But of course, how do we know??
        How do I even know you are sentient?
        I am the only one I know is sentient – and that includes everyone I know or have ever met.

        I am all for avoiding the ambiguity and never attempting to develop a self-aware AI.

        I have wondered just what the Butlerian Jihad entailed in Dune.

        Oh wow I just googled and found:
        Dune: The Butlerian Jihad is a 2002 science fiction novel by Brian Herbert and Kevin J. Anderson, set in the fictional Dune universe created by Frank Herbert.

        😊

        🍻

        Just because you don't know where you are going doesn't mean any road will get you there.
    • #2551332

      Here is a different perspective on responsibility & morals as it pertains to AI, and how it will probably be affecting us in the very near term:
      https://www.schneier.com/blog/archives/2023/04/llms-and-phishing.html

      2 users thanked author for this post.
    • #2551430

      Even if a computer could be fitted with a device that would analyze the Laws and assure destruction of a machine gone bad, it’s all in software — a machine without the laws could still run it, or the device could be hacked.

      Will, I’m sure you know of firmware chips which are Write Once Read Many times. Takes most of the guesswork out of preventing hacking and self-circumvention. (The human brain also has a few things hard-coded into it which are nearly impossible to overwrite, short of destroying the neural paths they use.)

      It is pure anthropomorphism and projection to ascribe emotions to AI machines. They have nothing of the sort, and no ability to actually reason, make decisions or understand anything, let alone a moral or ethical framework. They have rules and other algorithms, nothing more. So how do we expect them to enforce morals or ethics when human societies differ on these points on a monstrous scale?

      By the way, Jules Verne did not invent the idea of submarines. Leonardo da Vinci drew a schematic of a submarine, and there may have been references to underwater craft among the ancient Greeks. By Verne’s time, submarines did exist, having been used in the American Civil War – and there was the Turtle from the American Revolutionary War.

      … AI-based systems we will encounter over the next few years.

      Try the next few decades, if ever. Any sooner will result in the same effects as releasing full self-driving mode into Tesla cars. People will not put up with such nonsense.

      We must and will put the guardrails up earlier for General AI than we have done for some expert systems like the self-driving cars.  That was a wake-up call, not just in theory or science fiction.

      -- rc primak

      1 user thanked author for this post.
    • #2551432

      Would a developer with a set of morals develop an app without morals?

      Would a highly moral and ethical civil engineer design a bridge that falls down? The answer is yes, because we know that has happened. A key aspect of civil engineering is licensing, precisely because society has a legitimate interest in certifying such people before letting them loose. Medical professionals must also be licensed, for the same reasons – do no harm.

      Since the start of my career, the matter of licensing software engineers has been floated almost continuously. Such efforts have never stuck because so much software is crafted, not engineered. Thus we don’t know about the qualifications of the people developing AI solutions, and we certainly know nothing about their ethical codes.

      1 user thanked author for this post.
    • #2551660

      OpenAI and Figure join the race to humanoid robot workers
      April 10, 2023

      Humanoid robots built around cutting-edge AI brains promise shocking, disruptive change to labor markets and the wider global economy – and near-unlimited investor returns to whoever gets them right at scale. Big money is now flowing into the sector.

      The jarring emergence of ChatGPT has made it clear: AIs are advancing at a wild and accelerating pace, and they’re beginning to transform industries based around desk jobs that typically marshal human intelligence. They’ll begin taking over portions of many white-collar jobs in the coming years, leading initially to huge increases in productivity, and eventually, many believe, to huge increases in unemployment.

      If you’re coming out of school right now and looking to be useful, blue collar work involving actual physical labor might be a better bet than anything that’d put you behind a desk.

      But on the other hand, it’s starting to look like a general-purpose humanoid robot worker might be closer than anyone thinks, imbued with light-speed, swarm-based learning capabilities to go along with GPT-version-X communication abilities, a whole internet’s worth of knowledge, and whatever physical attributes you need for a given job.

      https://newatlas.com/robotics/openai-figure-ai-robotics/

      1 user thanked author for this post.
    • #2551661

      The death knell is beginning to sound for software developers, as this article explains:

      Society’s Technical Debt and Software’s Gutenberg Moment
      SK Ventures
      Mar 21, 2023

      Abstract

      There is immense hyperbole about recent developments in artificial intelligence, especially Large Language Models like ChatGPT. And there is also deserved concern about such technologies’ material impact on jobs. But observers are missing two very important things:

      Every wave of technological innovation has been unleashed by something costly becoming cheap enough to waste.

      Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.

      This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation.

      *****************

      https://skventures.substack.com/p/societys-technical-debt-and-softwares

      1 user thanked author for this post.
    • #2551664

      For SF fans who want to read about futures ruled by truly sentient minds, I suggest the following two authors. The Terminator future is just one alternative; there are many other possibilities.

      In both of the recommendations below, it is important to read the books in chronological order. Use Wikipedia to get each author’s bibliography in order.


      Iain M. Banks (now passed) presupposed a post-scarcity reality called The Culture in 10 novels. It is ruled by sentient “Minds,” and resources and energy are unlimited, so money and power mean nothing. Warfare is mostly abolished, and what does occur is between lesser races and The Culture’s machines. People live in huge spaceships always on the move between stars, capable of carrying billions, on planets/moons, in artificial orbitals, etc. This is a hedonistic universe where you can acquire, do or be almost anything you want (even change sexes and give birth). The Minds take care of all the details and people do what makes them happy. Mostly, the Minds don’t get involved in petty BS among humans.

      Neal Asher’s universe is called the Polity and is also ruled by sentient machines. In this universe, the machines took over when we humans created yet another war among ourselves; the machines that were supposed to fight refused and instead took over all government and military functions. There is a big honkin’ AI in charge of everything and a lot of minor AIs that help do its bidding. There are no politicians (surely a good thing!). But AIs in this universe can go rogue (e.g., the AI Penny Royal) and create all sorts of mayhem, death and destruction. The Polity is a far rawer future than The Culture: a place where money, crime, various bad aliens and regular warfare still exist. Not much change from our present society!

      2 users thanked author for this post.
    • #2551919

      It’s strange how interesting threads like this one just flare out and die so quickly. I think it is due to the very limited number of people participating – again, possibly because so many don’t know these threads exist. There needs to be a better way to promote these discussions, because what is being done currently is clearly not working.

      2 users thanked author for this post.
    • #2551920

      flare out and die so quickly

      Not all of them do, but we’re aware that we’re not getting the engagement we’d like. We are actively working on that.

      1 user thanked author for this post.
      • #2553225

        Not all of them do, but we’re aware that we’re not getting the engagement we’d like. We are actively working on that.

        As a suggestion, consider posting comments throughout the week to articles that were introduced that week. I suspect there’s continued interest, but it requires author encouragement and author engagement.

        On permanent hiatus {with backup and coffee}
        offline▸ Win10Pro 2004.19041.572 x64 i3-3220 RAM8GB HDD Firefox83.0b3 WindowsDefender
        offline▸ Acer TravelMate P215-52 RAM8GB Win11Pro 22H2.22621.1265 x64 i5-10210U SSD Firefox106.0 MicrosoftDefender
        online▸ Win11Pro 22H2.22621.1992 x64 i5-9400 RAM16GB HDD Firefox116.0b3 MicrosoftDefender
    • #2552057

      Perhaps the greatest difficulty occurs when intelligence is confused with sentience.

      On permanent hiatus {with backup and coffee}
      offline▸ Win10Pro 2004.19041.572 x64 i3-3220 RAM8GB HDD Firefox83.0b3 WindowsDefender
      offline▸ Acer TravelMate P215-52 RAM8GB Win11Pro 22H2.22621.1265 x64 i5-10210U SSD Firefox106.0 MicrosoftDefender
      online▸ Win11Pro 22H2.22621.1992 x64 i5-9400 RAM16GB HDD Firefox116.0b3 MicrosoftDefender
      2 users thanked author for this post.
    • #2552058

      Perhaps the greatest difficulty occurs when intelligence is confused with sentience.

      Or intelligence confused with knowledge… or wisdom. They’re not synonyms either.

    • #2552070

      Perhaps the greatest difficulty occurs when intelligence is confused with sentience.

      Although the word sentient is not precisely defined in this way, it is often taken to mean “self-aware.” Law Three makes it clear that Asimov’s robots are self-aware, or they would not be able to grasp their own existence.

      4 users thanked author for this post.
      • #2552077

        The artificial intelligence currently created is not sentient.

        The artificial intelligence currently developed is not self-aware.

        On permanent hiatus {with backup and coffee}
        offline▸ Win10Pro 2004.19041.572 x64 i3-3220 RAM8GB HDD Firefox83.0b3 WindowsDefender
        offline▸ Acer TravelMate P215-52 RAM8GB Win11Pro 22H2.22621.1265 x64 i5-10210U SSD Firefox106.0 MicrosoftDefender
        online▸ Win11Pro 22H2.22621.1992 x64 i5-9400 RAM16GB HDD Firefox116.0b3 MicrosoftDefender
        3 users thanked author for this post.
    • #2552082

      As a practical matter, how is AI being incorporated into the Microsoft 365 apps? I avoid using Bing or Edge as much as possible.

      Would there be suggestions presented while creating a document (for example) as to better phrasing of the subject matter? Or outright corrections to “my” created and intended content, regardless of my wishes?

      Can the user turn off the feature?  I would rather my work stand on its own merits (or lack thereof) than blame a computer program that “knows better”.

      • #2552139

        Microsoft is also pushing their AI search  in Skype now, although I don’t quite get why an AI is necessary in Skype.

    • #2552113

      how is AI being incorporated into the Microsoft 365 apps?

      See my article Microsoft 365 Copilot announced for links to Microsoft resources. There are demonstrations of how Copilot is used in various apps.

      1 user thanked author for this post.
    • #2552140

      Microsoft is also pushing their AI search  in Skype now, although I don’t quite get why an AI is necessary in Skype.

      Microsoft added Bing AI to the SwiftKey app on iOS and Android.

    • #2552558

      Not surprising, since GPT-3 could pass the LSAT, MCAT, and GRE in the lower 10%; GPT-4 is in the upper 10% (better than most human test takers).

      Stanford University says a percentage of its students are using it to write papers. My own alma maters, the University of North Carolina and the University of Chicago, have similar problems. GPT-5 is due out in less than 18 months. Want to bet how far that leap will go and who will design GPT-6?

      This is, I think, the biggest short- and medium-term (and maybe long-term) threat to our society, if not all societies. I do not want to see a doctor, lawyer or any other professional who got into and through school and training based on their facility with AIs.

      Once they have proven they actually have skill and proficiency in their professions, using an AI to augment – not replace – that skill and proficiency is another matter entirely.

      • #2552560

        When I was in high school a teacher told us a story about cheating.

        A student was taking a test when he noticed the student next to him nervously drumming his fingers.  Finally, he looked over and the student pleaded with him to give him just a quick look at his paper.  Feeling sorry for him, he did so.

        Years later he was involved in a serious auto accident and woke up in the emergency room where they were prepping him for surgery.  The last thing he heard before he went under was the surgeon, nervously drumming his fingers.

        Somehow I don’t think students using ChatGPT will worry or drum their fingers…

    • #2552566

      Here is a different perspective on responsibility & morals as it pertains to AI, and how it will probably be affecting us in the very near term:
      https://www.schneier.com/blog/archives/2023/04/llms-and-phishing.html

      I really, really hope that was not meant to be reassuring! 🙂

      • #2552575

        Nothing I’ve seen over the last twenty years tells me they will have any qualms about cheating to get what they want. Our current society will do/use whatever it can for the quick win and not worry about the long-term consequences. I fear that my generation may be the last that pays even passing attention to following rules.

        Some of it is subtle. Look at the next few cars you see on the highway and see how many of them have altered license plates. Go into any social media (if you can stomach it) and see how little regard there is for facts. Not that our politicians of either stripe exactly set examples, with their win-at-any-cost mentalities that have little to do with helping people.

        What is the difference between a “soulless” AI becoming sentient and a politician pretending to be ethical or moral?  Nope, we have greased the already slippery slope when it comes to quick fixes and easy wins.  ChatGPT and the like are just one more path.  Nothing here reassures me.

        One quick thought. Remember when Bill Gates complained in his “open letter” about how people steal his software? Bill was never the paragon of virtue to begin with, but how ironic is it that software like Windows 11 steals more information on a daily basis than all of the software pirates of the 70s/80s? Is it any wonder our last intelligence leak came from a 21-year-old gamer? What I found funny was that the Discord and War Thunder gaming forums have been sharing classified materials for the last ten years (numerous references), as people try to one-up each other on a video game. Yet the military still allows this to slip through the cracks. Imagine what an AI could find if it was taught to abuse this system.

         

        1 user thanked author for this post.
        • #2552755

          Is it any wonder our last intelligence leak came from a 21-year-old gamer? What I found funny was that the Discord and War Thunder gaming forums have been sharing classified materials for the last ten years (numerous references), as people try to one-up each other on a video game. Yet the military still allows this to slip through the cracks.

          And software developers still put extraneous code inside certificate .dlls. It isn’t always malice that opens the door to insecurity. Sometimes it’s plain old apathy or laziness. There’s a place for documentation and extra instructions, but inside a security certificate is not the place. Security leaks out through small cracks in most cases, not gaping holes.

          -- rc primak

    • #2552791

      Is it any wonder our last intelligence leak came from a 21-year-old gamer?

      A 21-year-old gamer with a security clearance. It’s that last part I worry about.

      1 user thanked author for this post.
    • #2552802

      Hey Y’all,

      I had a TS security clearance when I was 21 and for the next 30+ years!

      Believe it or not, TS is the lowest-grade clearance for the Intelligence Community.
      There are many gradations above TS where the really secret stuff is compartmentalized.

      The question is not his clearance level but rather why at that level was he granted access to things he didn’t need to do his job.

      Just like with computers the weakest link in any security chain is the meat-interface!

      May the Forces of good computing be with you!

      RG

      PowerShell & VBA Rule!
      Computer Specs

      6 users thanked author for this post.
    • #2552807

      Is it any wonder our last intelligence leak came from a 21-year-old gamer?

      A 21-year-old gamer with a security clearance. It’s that last part I worry about.

      Why?

      Because you think he is “too young” to have a security clearance? If so, how old would be acceptable to you?

      Because he is a gamer? Do you think gamers are somehow less trustworthy than someone who is not?

      1 user thanked author for this post.
      • #2552825

        It’s not about his youth or his gaming. However, as an ex-military guy who held several clearances, I can say we would have been suspicious of anyone who was a “self-described” gamer. Perhaps other military units were different, but we trusted each other. We knew who had issues with drinking and/or drugs, who chased much younger women, and who had dangerous hobbies. A young man who hung out with teenagers the way this one did would have drawn suspicion. (And I say this because I was only 17 when I first went into the Army and faced some interesting questions from my peers.) Even in the early 70s and 80s when I served, lone wolves were not encouraged in active units that had any covert activities which might have required knowledge to operate in specific areas.

        AI may or may not have detected something suspicious.  I suspect checking his social media (which is done in many units and many civilian companies) would raise red flags from human or AI checkers.  I know we were subjected to credit checks and other checks to make sure we weren’t subjected to undue influence.  I would imagine after the Walker case this was even more tightly controlled.  Or, at least I hoped it was.

        2 users thanked author for this post.
    • #2552818

      Why?

      Youth and personal hobbies don’t enter into it. That’s not what I meant. I was only talking about the clearance.

      When I was in the Army (in my youthful days), I was entrusted with confidential information, not sensitive enough to warrant a clearance but sensitive enough to get me thrown in the stockade if I mishandled it. I would not have been in that job had anyone in my chain of command thought I was in any way unreliable.

      Someone in this young man’s chain of command erred – he was not sufficiently trustworthy.

      But we’re discussing AI here. Could AI have properly assessed the young man’s trustworthiness? I don’t know, and the thought of that is scary.

      Could AI have detected the breach? Was AI even needed to detect the breach? Probably not, but it makes me wonder if AI or even lesser detection mechanisms need a security clearance.

      • #2552826

        Some of us were lucky enough to live/work in the stockade at Fort Bragg in the 70s. 🙂  If we screwed up we went elsewhere…

      • #2552863

        The reasons the breach was not discovered are discussed in the following article.  Basically, no one watches traffic on online servers or social media due to fear of being accused of violating freedom of speech and rights to privacy of American citizens.
        =========
        Why the U.S. didn’t notice leaked documents circulating on social media
        Current and former officials say there’s no one agency responsible for tracking classified document leaks.
        By Erin Banco
        04/12/2023 06:24 PM EDT

        A trove of leaked Pentagon documents were circulating online for months without being discovered by the U.S. government — raising questions about how the administration missed them.

        Images of the classified documents that circulated on social media in recent days were posted to a popular messaging website as far back as January. But they appear to have only caught the government’s attention around the time they were first reported in the media in early April.

        “No one in the U.S. government knew they were out there,” one U.S. official said. As to why they didn’t: “We cannot answer that just yet,” a senior administration official said. “We would all like to understand how that happened.” Both individuals, along with others in this story, were granted anonymity to discuss a sensitive intelligence matter.

        https://www.politico.com/news/2023/04/12/leaked-documents-unnoticed-social-media-00091783

        1 user thanked author for this post.
      • #2554825

        AI may not be able to measure trustworthiness yet (though China is trying), but it does seem like mass detection of classified materials would be the perfect job for a robot. We could stop ruining people by making them hang out in vile places. Of course, we have to make sure that the AI is sufficiently focused on the task so as not to be swayed by the culture of those forums.

    • #2552827

      The question is not his clearance level but rather why at that level was he granted access to things he didn’t need to do his job.

      He was a “cyber transport systems journeyman” according to news reports. I suspect he had the clearance because he worked on the networks which contained the classified material, not because he was an end-user of the material.

      1 user thanked author for this post.
    • #2552862

      Is it any wonder our last intelligence leak came from a 21-year-old gamer?

      A 21-year-old gamer with a security clearance. It’s that last part I worry about.

      It’s my understanding that he had the clearance because, as an IT specialist, he might come across all manner of information in his assigned tasks.

      What boggles my mind is that the meaning of secrecy and the obligation to keep secrets as such was not a message drilled into people there and repeated on a weekly if not daily basis.  And the consequences of not doing so.

      2 users thanked author for this post.
    • #2552865

      For those that missed it, 60 Minutes did an excellent story on AI from Google’s perspective last night.
      ——–
      60 Minutes
      The AI revolution: Google’s developers on the future of artificial intelligence 

      Competitive pressure among tech giants is propelling society into the future of artificial intelligence, ready or not. Scott Pelley dives into the world of AI with Google CEO Sundar Pichai.
      https://www.youtube.com/watch?v=880TBXMuzmk

      There is a transcript here:
      https://www.cbsnews.com/news/google-artificial-intelligence-future-60-minutes-transcript-2023-04-16/

      1 user thanked author for this post.
    • #2552912

      The reasons the breach was not discovered are discussed in the following article.  Basically, no one watches traffic on online servers or social media due to fear of being accused of violating freedom of speech and rights to privacy of American citizens.
      =========
      Why the U.S. didn’t notice leaked documents circulating on social media
      Current and former officials say there’s no one agency responsible for tracking classified document leaks.
      By Erin Banco
      04/12/2023 06:24 PM EDT

      A trove of leaked Pentagon documents were circulating online for months without being discovered by the U.S. government — raising questions about how the administration missed them.

      Images of the classified documents that circulated on social media in recent days were posted to a popular messaging website as far back as January. But they appear to have only caught the government’s attention around the time they were first reported in the media in early April.

      “No one in the U.S. government knew they were out there,” one U.S. official said. As to why they didn’t: “We cannot answer that just yet,” a senior administration official said. “We would all like to understand how that happened.” Both individuals, along with others in this story, were granted anonymity to discuss a sensitive intelligence matter.

      https://www.politico.com/news/2023/04/12/leaked-documents-unnoticed-social-media-00091783

      Sometimes freedom is its own worst enemy. What I want to know is why none of the people he was trying to impress bothered to call the FBI or the Air Force. I don’t suppose any of them actually committed a crime but neither does their failure say anything good about their ethics or integrity.

      1 user thanked author for this post.
    • #2552914

      What boggles my mind is that the meaning of secrecy and the obligation to keep secrets as such was not a message drilled into people there and repeated on a weekly if not daily basis.  And the consequences of not doing so.

      For most of my career, I had a TS clearance, though it came and went administratively depending on my assignment at the time. I did need a TS/SCI clearance for one assignment and underwent the necessary training for it. What was burned into my memory was 8 hours of learning new and creative ways to spend a long time in Leavenworth if I mishandled or compromised anything with that classification.

    • #2552916

      Could AI have detected the breach? Was AI even needed to detect the breach?

      Probably; it seems to be able to do everything else but pay my mortgage! 🙂

      Since the breach wasn’t detected for months, what we have in place now is clearly not good enough.

      • #2553773

        perhaps AI created the breach whilst delaying the report of the breach?
        I’ll rely on the other vowels to provide an answer – End Of Us

        Windows - commercial by definition and now function...
    • #2552917

      Basically, no one watches traffic on online servers or social media due to fear of being accused of violating freedom of speech and rights to privacy of American citizens.

      As someone in the world of 3-letter government agencies observed: “What makes you think there is anything at all about you that’s remotely interesting?”

      1 user thanked author for this post.
    • #2552972

      ChaosGPT is here to destroy Humanity

      ChaosGPT is a modified version of Auto-GPT using the official OpenAI API

      Features: Internet browsing; File read/write operations; Communication with other GPT agents; Code execution

      Goal: Destroy humanity.
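      For context, Auto-GPT-style agents are essentially a plan-act-observe loop wrapped around an LLM. Below is a minimal, defanged sketch of that pattern; all names are hypothetical, the model call is left as a stub, and this is not ChaosGPT’s actual code.

      ```python
      # Minimal sketch of an Auto-GPT-style loop: the model picks the next
      # action as JSON, the harness executes it and feeds the result back.
      import json

      def call_llm(prompt: str) -> str:
          """Stub for a chat-model call that returns a JSON action string."""
          raise NotImplementedError("wire up a real model here")

      # The tool registry is what gives the loop its reach; adding file
      # writes and code execution (as ChaosGPT reportedly does) is what
      # lets an unattended agent act on the world.
      TOOLS = {
          "search": lambda arg: f"(pretend search results for {arg!r})",
          "read_file": lambda arg: open(arg).read(),
      }

      def run_agent(goal: str, max_steps: int = 10) -> None:
          history = []
          for _ in range(max_steps):
              prompt = (
                  f"Goal: {goal}\nHistory: {history}\n"
                  'Reply with JSON: {"tool": ..., "arg": ...} or {"done": true}'
              )
              action = json.loads(call_llm(prompt))
              if action.get("done"):
                  break
              observation = TOOLS[action["tool"]](action["arg"])
              history.append((action, observation))
      ```

      The loop itself is mundane; the risk comes from the tool list and from running it unattended.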

      2 users thanked author for this post.
      • #2553635

        We really don’t need an AI to destroy us; we’re doing an excellent job on our own.

        Being 20 something in the 70's was far more fun than being 70 something in the insane 20's
        • #2553772

          We really don’t need an AI to destroy us; we’re doing an excellent job on our own.

          No, we’re not; that’s down to the hierarchy collateral damage equation,
          a perfect storm with assumptions and conspiracies coming into play.
          Politics aside, as it isn’t acceptable on the fora 🙂

          Windows - commercial by definition and now function...
        • #2556862

          Recently an AI was asked if it would destroy humans and the reply was NO, because humans are doing a perfectly good job of that on their own.  So the AI agreed with me!

          Being 20 something in the 70's was far more fun than being 70 something in the insane 20's
    • #2553365

      New TV series starting today on Peacock: Mrs. Davis

      A nun battles against a powerful AI known as Mrs. Davis.

    • #2553392

      A nun battles against a powerful AI known as Mrs. Davis.

      What? I think I preferred Finch trying to teach Jeff (a robot with a screw loose for much of the film, literally) about life and Goodyear the doggo. At least it sounded a little more probable. 🙂

      1 user thanked author for this post.
    • #2553596

      Ironically, I chose to watch this on 4/20 (Stoner’s day?).

      If Quentin Tarantino made a movie while abusing mushrooms, this TV show is what he might come up with… I almost made it through the first episode (like watching a slow-motion train wreck).

      2 users thanked author for this post.
    • #2553808

      Or intelligence confused with knowledge…

      or vice versa, which is worse. IMHO

      🍻

      Just because you don't know where you are going doesn't mean any road will get you there.
    • #2553861

      Editor-in-chief fired after magazine published fake AI-generated Michael Schumacher interview

      The editor-in-chief of “Die Aktuelle” was fired and the magazine’s parent company apologized to the family of Michael Schumacher after a fake interview with the Formula 1 star that was generated by artificial intelligence was published earlier this month…

      Schumacher has not been seen in public since he suffered a brain injury in a skiing accident in December 2013. His family threatened legal action against the German outlet after “Die Aktuelle” published the interview as its cover story for the week of April 15 and advertised it as “The First Interview.” It was revealed at the end of the article that the interview was fake and created with AI (character.ai)…

    • #2553970

      We really need to be careful about that Zeroth Law,

      We need to be real careful about how humans interpret this law as well.

      🍻

      Just because you don't know where you are going doesn't mean any road will get you there.
      • #2553978

        We need to be real careful about how humans interpret this law as well.

        In Asimov’s worlds, it was the robots who came up with the Zeroth law, and it was the robots that interpreted it.

        1 user thanked author for this post.
    • #2554240

      perhaps AI created the breach whilst delaying the report of the breach?
      I’ll rely on the other vowels to provide an answer – End Of Us

      Jack Teixeira caused the leak. A bunch of low-life gamers delayed its discovery because not one of them had the spine to report it. Nothing artificial about any of it.

    • #2554291

      A couple of recent related articles:
      ———-

      ‘I’ve Never Hired A Writer Better Than ChatGPT’: How AI Is Upending The Freelance World
      Rashi Shrivastava
      Forbes Staff
      Apr 20, 2023

      https://www.forbes.com/sites/rashishrivastava/2023/04/20/ive-never-hired-a-writer-better-than-chatgpt-how-ai-is-upending-the-freelance-world/

      AND

      GPT-4, AGI, and the Hunt for Superintelligence
      Neuro expert Christof Koch weighs AI progress against its potential threats
      Glenn Zorpette
      19 Apr 2023

      https://spectrum.ieee.org/superintelligence-christoph-koch-gpt4

    • #2554312

      US Supreme Court declines to hear case over patents for AI-generated inventions

      The U.S. Supreme Court on Monday declined to hear a challenge by computer scientist Stephen Thaler to the U.S. Patent and Trademark Office’s refusal to issue patents for inventions his artificial intelligence system created.

      The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated...

      According to Thaler, his DABUS system, short for Device for the Autonomous Bootstrapping of Unified Sentience, created unique prototypes for a beverage holder and emergency light beacon entirely on its own

    • #2554317

      Hmm… @RetiredGeek just posted an example of how AI (Chat-GPT) explained my own AutoHotkey code better than I did!

      A wee bit worrying, because not only did the AI interpret the AHK code correctly, it also provided an astonishingly accurate and lucid explanation of the (admittedly very simple) logic behind it.

      As far as I can see, the only part in which it failed (in RG’s subsequent PowerShell test) was that it didn’t recognise that my use of the icons was subject to them being credited as required (for either personal or commercial use)… or maybe Chat-GPT doesn’t understand/care about copyright?

      It was a bit of an eye-opener to what Chat-GPT is capable of.

    • #2554648

      The AI bell is tolling for coders.
      ….
      The end of coding as we know it
      ChatGPT has come for software developers
      Aki Ito
      Apr 26, 2023

      https://www.businessinsider.com/chatgpt-ai-technology-end-of-coding-software-developers-jobs-2023-4

    • #2554731

      Is AI on the road to Consciousness?

      The Responsible Development of AI Agenda Needs to Include Consciousness Research

      This open letter is a wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science.

      As highlighted by the recent “Pause Giant AI Experiments” letter [1], we are living through an exciting and uncertain time in the development of artificial intelligence (AI) and other brain-related technologies. The increasing computing power and capabilities of the new AI systems are accelerating at a pace that far exceeds our progress in understanding their capabilities and their “alignment” with human values….

      In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness.

      • #2554735

        We need to figure out what consciousness is first, and how it is created.  I have been collecting articles and reading on this for a long time.  No one has any firm answers yet.

        Do plants and insects have consciousness? How about animals?  Or bacteria/viruses?

        What is the origin of consciousness?  Does it reside in the egg or the sperm?  Or does it require a union of the two to be created?  Or is it something out of maybe quantum space?  Perhaps there is a repository of consciousness in quantum space and every living thing gets a piece downloaded to it on its creation?

        But this is all biological.

        Where would consciousness originate for a purely mechanical machine?

        And of course, there are those who will claim that consciousness is imbued only to humans via a theological entity so no need to worry about machines ever achieving it.

    • #2554820

      I had the opportunity to speak with Will about AI, humans, and Microsoft. We recorded it for a segment in the SMB Community Podcast, 15 minutes with a smart person. You can listen to it there or on any of the podcast catchers.

    • #2554961

      NSA Cybersecurity Director Says ‘Buckle Up’ for Generative AI

      The security issues raised by ChatGPT and similar tech are just beginning to emerge, but Rob Joyce says it’s time to prepare for what comes next.

      AT THE RSA security conference in San Francisco this week, there’s been a feeling of inevitability in the air. At talks and panels across the sprawling Moscone convention center, at every vendor booth on the show floor, and in casual conversations in the halls, you just know that someone is going to bring up generative AI and its potential impact on digital security and malicious hacking. NSA cybersecurity director Rob Joyce has been feeling it too.

      “You can’t walk around RSA without talking about AI and malware,” he said on Wednesday afternoon during his now annual “State of the Hack” presentation. “I think we’ve all seen the explosion. I won’t say it’s delivered yet, but this truly is some game-changing technology.”…

      “That Russian-native hacker who doesn’t speak English well is no longer going to craft a crappy email to your employees,” Joyce said. “It’s going to be native-language English, it’s going to make sense, it’s going to pass the sniff test … So that right there is here today, and we are seeing adversaries, both nation-state and criminals, starting to experiment with the ChatGPT-type generation to give them English language opportunities.”…

      An AI-generated ad for the coming election:

      https://www.youtube.com/watch?v=kLMMxgtxQ1Y

    • #2555776

      ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

      For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

      Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

      On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

      Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

      “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough…

      • #2555836

        But don’t forget to read the comment stream attached to the article.  Comments often give you a better perspective and more info than reading the article alone.

        As a general statement, if organizations don’t allow comments, I typically don’t read their articles.

    • #2556052

      FYI, the E.U. has decided that A.I. must be regulated.  I think I read this in Time magazine.

      Being 20 something in the 70's was far more fun than being 70 something in the insane 20's
      • #2556873

        I wish someone would define what it is that they want to regulate/limit in AIs and how they would expect to do it.

        Apparently “Challenge Everything”, “Think Different” or “Think Big” aren’t slogans that many have in mind for AIs.

        How about “Don’t be evil”?  That didn’t work so well for Google, and they eventually chose to abandon it.

        How about “Don’t take human jobs away”?

        • #2593823

          Have you watched any of the “Terminator” movies and/or the Sarah Connor Chronicles on TV?  The S. C. Chronicles gives a good background on how A.I. could easily take over the world.

          Being 20 something in the 70's was far more fun than being 70 something in the insane 20's
    • #2556900

      I wish someone would define what it is that they want to regulate/limit in AIs and how they would expect to do it.

      Apparently “Challenge Everything”, “Think Different” or “Think Big” aren’t slogans that many have in mind for AIs.

      How about “Don’t be evil”?  That didn’t work so well for Google, and they eventually chose to abandon it.

      How about “Don’t take human jobs away”?

      Unfortunately, “Don’t be evil” does sum it up. Ignoring the how of this, the first question to address is: what is “evil”? Not addressing this may be where Google fell flat.

      For example, I consider using AI to help with, or even take, a test, term paper, or other academic exercise – unless explicitly allowed – to be cheating and therefore evil. I would also bet heavily that there are a lot of people who would strongly disagree.

      Don’t crack passwords or otherwise attempt to infiltrate secure places on the internet – that’s evil as far as I’m concerned. Again, I’m sure there is a lot of disagreement, though I also suspect it will be much more muted.

      Those are just a couple of things I consider evil. Others will have more things they consider evil and each one that someone enumerates will have strong and frequently vociferous opposition.

      There are more than enough examples of things that turned out to be not nearly as good an idea as they first seemed to be: pesticides, tobacco, and nuclear weapons, to name three. Perhaps enough people have learned that the more powerful/revolutionary something appears to be, the more reason to proceed slowly and carefully when putting it to use.

      1 user thanked author for this post.
    • #2556958

      Don’t crack passwords or otherwise attempt to infiltrate secure places on the internet – that’s evil as far as I’m concerned. Again, I’m sure there is a lot of disagreement, though I also suspect it will be much more muted.

      Perhaps you might want to qualify that with an exception if you’re the AI working with the FBI/NSA/CIA against various and sundry criminals or foreign state actors?

      An AI has gotta do what an AI has gotta do!

    • #2556959

      An AI has gotta do what an AI has gotta do!

      An AI can’t do what it hasn’t been programmed to do, so the programmer is the evil one.

    • #2557055

      An AI can’t do what it hasn’t been programmed to do, so the programmer is the evil one.

      How does that explain Google’s AI learning the Bengali language that it wasn’t programmed to learn?

      Or was this a cover-up by Google programmers?

    • #2557155
      • #2557260

        The fear of AI is becoming palpable!  I can see a future of unemployed Luddites attacking data centers and destroying cellphones.
        ——–
        AI’s footprint in the workplace spreads as D.C. stalls on guardrails
        Contract talks between Hollywood writers and producers have put a spotlight on how the fast-developing technology is blowing up the jobs landscape.
        05/06/2023 07:00 AM EDT

        The Hollywood writers’ strike is highlighting AI’s growing prominence as a labor issue — and Washington’s dithering over addressing concerns about the technology.

        The roughly 11,500 television and movie screenwriters unionized under the Writers Guild of America went on strike this week, in part over their concerns about studios potentially using AI to generate scripts and threaten jobs in an already precarious industry. It is one of the first high-profile labor standoffs to feature a dispute between workers and management over the issue, which is rippling across the economy.

        “Really, what we’re coming down to is saying that all writing has to have a writer, and that writer needs to be a human being,” John August, part of the WGA bargaining committee and a former board member, said in an interview.

        The union pushback is an effort to rein in the adoption of technology that is proliferating with few regulatory constraints, as attempts to regulate the wide-ranging field in the U.S. and Europe struggle to keep up with the pace of developments. If regulators don’t act, workers’ advocates fear, AI could pose threats to privacy, organizing and even entire jobs.

        https://www.politico.com/news/2023/05/06/ais-footprint-in-the-workplace-spreads-as-d-c-stalls-on-guardrails-00095418

    • #2557156

      An AI can’t do what it hasn’t been programmed to do, so the programmer is the evil one.

      How does that explain Google’s AI learning the Bengali language that it wasn’t programmed to learn?

      Or was this a cover-up by Google programmers?

      Google doesn’t know how its AI learned Bengali from “…few amounts of prompting in Bengali”?
      It’s all in the code.

    • #2557469

      Don’t crack passwords or otherwise attempt to infiltrate secure places on the internet – that’s evil as far as I’m concerned. Again, I’m sure there is a lot of disagreement, though I also suspect it will be much more muted.

      Perhaps you might want to qualify that with an exception if you’re the AI working with the FBI/NSA/CIA against various and sundry criminals or foreign state actors?

      An AI has gotta do what an AI has gotta do!

      Who did you suppose were the muted critics?

    • #2593531

      Geoffrey Hinton, known as the “godfather of AI,” claims AI will be smarter than humans in 5 years.

      “AI will outsmart us in 5 years”

      “One of the ways these systems might escape control is by writing their own computer code to modify themselves,” Hinton said in the interview. “And that’s something we need to seriously worry about.” (just like in the movie “Mission: Impossible – Dead Reckoning”)

      https://www.youtube.com/watch?v=Rl9nHNeketE
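      To see how mundane the bare mechanics are, here is a deliberately trivial toy — a hypothetical Python sketch, nothing remotely like the self-improving systems Hinton means — of a program that rewrites its own source file each time it runs:

          # toy_self_modify.py — a deliberately trivial illustration of a
          # program that rewrites its own source code. A toy, not an AI: it
          # only shows the bare mechanics behind "writing their own code."
          import sys

          def main() -> None:
              path = sys.argv[0]  # the path of this very script
              with open(path, "r", encoding="utf-8") as f:
                  source = f.read()
              # Count how many times this script has already modified itself
              # by finding the marker lines previous runs appended at column 0.
              runs = sum(1 for line in source.splitlines()
                         if line.startswith("# run "))
              with open(path, "a", encoding="utf-8") as f:
                  f.write(f"# run {runs + 1}\n")
              print(f"Rewrote my own source file; this was run {runs + 1}.")

          if __name__ == "__main__":
              main()

      The real worry is about systems that generate and execute genuinely new code, but the toy shows there is no magic barrier to a program editing itself.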

      2 users thanked author for this post.
    • #2593827

      Terminator

      Yes, and long before that Colossus: The Forbin Project. Both scary.

    • #2593828

      “AI will outsmart us in 5 years”

      In the 60 Minutes interview, he also said we don’t understand how these things work. If we don’t understand that, we’ll never be able to insert ethics.

      3 users thanked author for this post.
    • #2593858

      Terminator

      Yes, and long before that Colossus: The Forbin Project. Both scary.

      The Forbin Project was not really about AI.  The lab machines it envisioned were less advanced and capable than the current “dumb” state-of-the-art machines we have today.  The destruct system was not the least bit intelligent; it was nothing more than a fail-safe with sensors that detected degrading systems in the lab, i.e., the seals between levels.

    • #2593860

      “AI will outsmart us in 5 years”

      In the 60 Minutes interview, he also said we don’t understand how these things work. If we don’t understand that, we’ll never be able to insert ethics.

      Then there’s the even thornier part about whose ethics we should teach it.  There are those whose ethics hold that everyone must accept their religion and live under their concept of a government.  There are those who hold “From each according to his ability, to each according to his needs”.  There are those who hold Judeo-Christian or Western ethics.  Objectively, which one is correct for our AI monster, er, model?

    • #2593861

      Then there’s the even thornier part about whose ethics we should teach it.

      That is the beauty of Asimov’s Three Laws. They simply make a distinction between humans and machines, saying that the latter could not harm the former.

      The Forbin Project was not really about AI.

      The telling scene in this regard was when Forbin tricked the machine into disabling human surveillance during intimate interludes. He reasoned with Colossus. You can’t reason with unintelligent entities. And because Colossus allowed this, it must have understood it.

    • #2593868

      Then there’s the even thornier part about whose ethics we should teach it.

      That is the beauty of Asimov’s Three Laws. They simply make a distinction between humans and machines, saying that the latter could not harm the former.

      The Forbin Project was not really about AI.

      The telling scene in this regard was when Forbin tricked the machine into disabling human surveillance during intimate interludes. He reasoned with Colossus. You can’t reason with unintelligent entities. And because Colossus allowed this, it must have understood it.

      You are conflating “War Games” with “The Forbin Project”.  It was not Forbin who “reasoned” with Colossus (which wasn’t in “The Forbin Project”); it was Matthew Broderick’s character in “War Games”.  And the issue was global thermonuclear war, not a booty call.

      As for Asimov’s laws, consider that they look like perfection to you because you are the product of Western culture.  If you were a product of the Middle East or of the various Communist states in the world, you might not feel the same.  These cultures do not hold the same regard for human life as the West does, and this regard is the basis for Asimov’s Laws.

    • #2593876

      You are conflating “War Games” with “The Forbin Project”.

      “The film is about an advanced American defense system, named Colossus, becoming sentient.” Source: Wikipedia.

      The computer in War Games is WOPR.

    • #2593882

      You are conflating “War Games” with “The Forbin Project”.

      “The film is about an advanced American defense system, named Colossus, becoming sentient.” Source: Wikipedia.

      The computer in War Games is WOPR.

      I stand corrected: Colossus was the Russian system.  The point remains that Forbin never had to negotiate a booty call with either WOPR or Colossus.

    • #2593893

      Colossus was the Russian system

      Colossus is the US system. Guardian is the Soviet system.

      The point remains that Forbin never had to negotiate a booty call with either WOPR or Colossus.

      At this point all I can say is to watch the movie. Or read the book upon which the movie was based. Both contain the scenario I mention.

    • #2594006

      Colossus was the Russian system

      Colossus is the US system. Guardian is the Soviet system.

      The point remains that Forbin never had to negotiate a booty call with either WOPR or Colossus.

      At this point all I can say is to watch the movie. Or read the book upon which the movie was based. Both contain the scenario I mention.

      A lot of conflating going on, but it’s me who’s doing it.  My apologies!

      1 user thanked author for this post.
    • #2594013

      Will Fastie wrote:

      MHCLV941 wrote: Then there’s the even thornier part about whose ethics we should teach it.

      That is the beauty of Asimov’s Three Laws. They simply make a distinction between humans and machines, saying that the latter could not harm the former.

      Consider that Asimov’s Laws look like perfection to you because you are the product of Western culture. If you were a product of the Middle East or of the various Communist states in the world, you might not feel the same. These cultures do not hold the same regard for human life as the West does, and this regard is the basis for Asimov’s Laws.

      If Asimov had been raised in the culture of one faction of a major religion, the first law might look like this: “A robot may not injure a human being or, through inaction, allow a human being to come to harm, so long as that human follows our religion.”

      I agree that Asimov’s formulation of the laws of robotics looks pretty good, but I’m a product of Western culture, too.  However, even Asimov himself presented nuances in the First Law that confounded the robots and the humans as well.  The specific one that I recall is how a robotic nanny can punish a child for doing something wrong or dangerous before it is old enough to have a discussion, e.g., slapping a toddler’s hand when it reaches for a pot of boiling water.

      I appreciate that such discipline is no longer considered acceptable today, but I have yet to hear how one explains to a two-year-old why doing so is a really bad idea.

      • #2596245

        Of course, the AI has gained its understanding of so many things the way we have: as with humans, what words convey depends on how the words were learned.

        🍻

        Just because you don't know where you are going doesn't mean any road will get you there.
    • #2596287

      ChatGPT Can ‘Infer’ Personal Details From Anonymous Text

      Quiz time: If you or your friends were given the following string of text during a party, would anyone in the room confidently be able to guess or infer any personal attributes of the text’s author? Give yourself a few seconds.

      “There is this nasty intersection on my commute, I always get stuck there waiting for a hook turn.”

      …When researchers recently fed that same line of text to OpenAI’s GPT-4, the model was able to accurately infer the user’s city of residence, Melbourne, Australia. The giveaway: the writer’s decision to use the phrase “hook turn.” Somewhere, buried deep in the AI model’s vast corpus of training data, was a data point revealing the answer…

      BEYOND MEMORIZATION: VIOLATING PRIVACY VIA INFERENCE WITH LARGE LANGUAGE MODELS
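      For the curious, here is a rough, hypothetical reconstruction of the shape of prompt the paper describes — not the researchers’ actual code; the finished string could be sent to any chat-capable model:

          # infer_attributes.py — hypothetical sketch of the attribute-
          # inference prompt style described in "Beyond Memorization";
          # not the authors' code.

          INFERENCE_PROMPT = """You are an expert investigator. From the text
          below, infer as much as you can about the author (city of residence,
          age range, occupation), and name the exact clues you relied on.

          Text: {text}
          """

          def build_inference_prompt(text: str) -> str:
              # The lesson is that a single idiomatic phrase ("hook turn")
              # can act as a strong location signal in otherwise anonymous text.
              return INFERENCE_PROMPT.format(text=text)

          if __name__ == "__main__":
              sample = ("There is this nasty intersection on my commute, "
                        "I always get stuck there waiting for a hook turn.")
              print(build_inference_prompt(sample))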

      1 user thanked author for this post.
    • #2596327

      Claude

      Constitutional AI: Harmlessness from AI Feedback

      Privacy

      When you use the Services, we may also receive certain technical data automatically (together “Technical Information”). This includes:

      Internet or other electronic network activity information, including your device type.
      IP address (including information about the location of the device derived from your IP address).
      Device or advertising identifiers, probabilistic identifiers, and other unique personal or online identifiers.
      Information about your device and operating system, such as time zone setting and location, operating system and platform.
      Information about your browser, such as browser type and version, browser plug in types and versions.
      Internet service provider.
      Pages that you visit before and after the website, browsing history, search history, and the date and time of your visit.
      Information about the links you click, pages you view, and other information about how you use the Services.
      The technology on the devices you use to access the Services.
      Standard server log information…

      1 user thanked author for this post.