• You’re fired if you don’t know how to use GPT-4



    ISSUE 20.13 • 2023-03-27 • PUBLIC DEFENDER • By Brian Livingston
    Mainstream media outlets are ablaze with news about GPT-4, OpenAI’s enormously powerful a…
    [See the full post at: You’re fired if you don’t know how to use GPT-4]

    10 users thanked author for this post.
    • #2546901

      So, what I get from this is that – moving forward – we humans will never, ever be able to trust the internet tubes again, since anything we read might have been generated by an AI chatbot.

      Or maybe big tech will come up with a two-tier (AI and/or AI-free) system where you have to opt in or out… for a fee, of course.

      Thanks, big tech… you’ve helped us minions enormously. Back to my abacus. 🙂

      (Has this post been generated by ChatGPT? How would you know?)

      5 users thanked author for this post.
    • #2546934

      If that makes today’s gold rush seem familiar to you, you’re not alone.

      Sometimes it’s best to be where everyone else isn’t. I’m rather troubled by the subscription aspect of this gold-rush adventure, as well.

      On permanent hiatus {with backup and coffee}
      offline▸ Win10Pro 2004.19041.572 x64 i3-3220 RAM8GB HDD Firefox83.0b3 WindowsDefender
      offline▸ Acer TravelMate P215-52 RAM8GB Win11Pro 22H2.22621.1265 x64 i5-10210U SSD Firefox106.0 MicrosoftDefender
      online▸ Win11Pro 22H2.22621.1992 x64 i5-9400 RAM16GB HDD Firefox116.0b3 MicrosoftDefender
      2 users thanked author for this post.
    • #2547022

      Re. Test Performance of LLMs (AI based on Large Language Models):
      These programs do not know the content of the test questions, and they do not understand the concepts being tested. They are exhibiting the ability to predict multiple-choice answer patterns based on analyzing many previous test answer-key sets. This is pattern recognition run amok, and it is one reason why many institutions forbid students from looking over previous tests and answer keys for their courses.
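
      To make that pattern-matching point concrete, here is a deliberately crude toy sketch — my own illustration, not anything from the article, from OpenAI, or from how GPT-4 is actually built, and the answer keys are invented. It is a bigram model that "learns" which multiple-choice answer tends to follow which, then guesses the next answer with no notion of what any question asks. Real LLMs are incomparably more sophisticated, but the objective is still next-token prediction.

        # Python: a toy next-answer predictor -- pure pattern matching, zero understanding.
        # The "training data" below is invented purely for this illustration.
        from collections import Counter, defaultdict

        past_answer_keys = ["ABDCABDA", "BADCABDC", "ABDCBBDA"]

        # Tally how often each answer letter follows each other letter
        # across the previous answer keys.
        transitions = defaultdict(Counter)
        for key in past_answer_keys:
            for prev, nxt in zip(key, key[1:]):
                transitions[prev][nxt] += 1

        def predict_next(prev_answer: str) -> str:
            """Guess the letter that most often followed prev_answer."""
            followers = transitions[prev_answer]
            return followers.most_common(1)[0][0] if followers else "A"

        print(predict_next("D"))  # prints 'C': it never saw a question, only letter patterns

      A predictor like this can look competent on any test whose answer patterns resemble its training data while understanding nothing, which is precisely why prior exams and answer keys get kept under lock and key.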

      For answers that require writing something longer than a single phrase or sentence, Brian’s observation about the speed of looking up information across a wide net of searchable sources would be sufficient to explain the improvements. But this does not mean the AI is actually “thinking” in human terms. BTW, if the test is not “open-book,” such searching would be grounds for immediate disciplinary action. The AI should be limited to its existing database, with all outside connections cut off during test-taking demos. (Re. performance on English proficiency and literature exams — since when is English proficiency a prerequisite for tech jobs?)

      Re. “unicorns”:
      There is, as Brian implied, a lot of overhype involved when the finance industry declares a company a “unicorn.” A lot of investors over the years have fallen for this hype and gotten burned badly. In fact, the recent collapse of Silicon Valley Bank (SVB) may have been largely a result of the “unicorn effect.” This is in addition to SVB’s investments in crypto assets, which are, according to many in finance, Ponzi schemes.

      Re. OpenAI’s recent White Paper:

      …the 98-page paper introducing GPT-4 proudly declares that they’re disclosing *nothing* about the contents of their training set.

      Could that be because that training set is now the subject of numerous current and threatened legal actions based on theft of intellectual property? Not every for-profit organization’s failure to disclose trade secrets is rooted in nefarious motives. I don’t think a failure to disclose while legal actions are pending is worthy of a lengthy screed or a new logo proposal.

      That said, all of these concerns are legitimate. And we need to put the guardrails into place before the tech gets any further ahead of the regulations.

      -- rc primak

      2 users thanked author for this post.
    • #2547087

      I appreciate Brian’s tongue-in-cheek humor. I smiled numerous times while reading the article (with occasional grimaces at the darker aspects of the whole AI endeavor…).

      With a degree in electrical engineering and as a sometime programmer, I have been quite concerned about the headlong rush to deploy AI – especially since it has accelerated from research into a profit-driven competition for monetization.

      As Brian mentions, there are dangerous downsides to the whole enterprise. It is particularly concerning when corporations kick ethical concerns to the curb to save money, removing “inconvenient” reasons for caution.

      I’m currently about a third of the way through an excellent book titled “The Alignment Problem: Machine Learning and Human Values” by Brian Christian. It was published in 2020 and is extremely well researched yet very accessible for most readers. Brian Christian is the author of “Algorithms to Live By” and a visiting scholar at the University of California, Berkeley.

      I highly recommend it. It is even more salient and insightful today than when it was published. From the back cover (italicized text added by me):

      “Today’s “machine-learning” systems, trained by data, are so effective that we’ve invited them to see and hear for us (now write, speak, draw, make videos for us) – and to make decisions on our behalf. But alarm bells are ringing. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In Brian Christian’s riveting account, we meet the alignment problem’s “first responders” and learn their ambitious plan to solve it before our hands are completely off the wheel.”

      I wonder whether corporations and society are now taking their hands off the wheel too soon…

      Win10 Pro x64 22H2, Win10 Home 22H2, Linux Mint + a cat with 'tortitude'.

      1 user thanked author for this post.
    • #2547149

      So, moving forward once more, how do humans affirm that they are in fact human and not an AI?

      2 users thanked author for this post.
    • #2547156

      According to OpenAI’s report, GPT-4 did better than 90% of human test-takers on the US Uniform Bar Exam. It outscored 93% of them on the college-admission test known as SAT Evidence-Based Reading & Writing (EBRW).

      I’m not surprised. I have worked in a university setting in one job or another for over 40 years, and I have concluded that the writing skills of college students have significantly deteriorated. I notice that current students are used to talking to their devices, not writing on them. When searching for information to learn skills, most prefer to watch a video rather than read text (of course, this might be the wise choice, since most people seem unable to write useful instructions). Is it so unexpected that a machine has better writing skills? Research shows that writing and speaking are supported by quasi-independent parts of the brain. To use a simple metaphor, the writing “muscle” of most people needs to be exercised more.

      2 users thanked author for this post.
      • #2547302

        I have often searched for instructions on something only to be frustrated that the only choices seem to be videos. I want to read the answer, not watch it. I can read at my own pace (generally far faster than any video is presented, though I can also slow down to take things in more fully if needed) and easily refer back to it. I always assumed that the prevalence of video-form tutorials was because plain text answers don’t get you any monetizable YouTube views.

        There are some things that lend themselves to videos more easily than to the written word, but most of the things that I have searched for don’t fall in that category. For those few exceptions, I generally look for videos first. Otherwise, I avoid them, and will only give in and watch one if I really cannot find a written version of the same info somewhere.

        I am really annoyed at news sites when I follow a headline link, only to be presented with an embedded video with no text version below it. I’ll go elsewhere rather than start the video (no autoplay on my browsers). If a video’s title is ambiguous on a place like YouTube, I will scroll to the comments and try to infer the meaning from there (which usually works). That helps me decide whether or not I want to watch the video (usually, the answer is no).

        On the transmitting side, writing is also my preferred mode of expressive communication. Not texting, though (via SMS or any of the app-based versions)… that is not writing. Text messages are to writing as TikTok is to videography. Email, certainly! I like it for all of the same reasons the younger generations don’t. They want an immediate answer that is short and easy to read; I want a detailed, complete answer that is as long as required to prevent an endless back-and-forth. If I provide that much detail in my original email, the hope is that the back-and-forth can be avoided (though that’s seldom the case, as people almost never actually read it, and end up asking the questions I already answered anyway).

        This is one reason I’ve never had any interest in computerized voice assistants (aside from the data collection concerns). All of the things that could be done with Alexa are things I’d rather do with my eyes and fingers rather than my ears and vocal cords.

        It seems that a lot of the hoopla over these new AI “innovations” comes from the idea of making, for example, searching the web more conversational than it is. Video how-tos are more conversational than written ones too… and that’s why I avoid them.


        Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon 6.2
        XPG Xenia 15, i7-9750H/32GB & GTX1660ti, Kubuntu 24.04
        Acer Swift Go 14, i5-1335U/16GB, Kubuntu 24.04 (and Win 11)

        7 users thanked author for this post.
    • #2547307

      New study finds 80% of US workforce could be impacted by advanced chatbots

      According to a recent research paper published by OpenAI and the University of Pennsylvania, the emergence of highly advanced chatbots such as ChatGPT may significantly affect a large number of jobs. The study suggests that up to 80 percent of the US workforce could see at least 10 percent of their work tasks impacted by the use of ChatGPT. Moreover, nearly 19 percent of workers may see at least 50 percent of their job duties impacted by the introduction of general-purpose technologies (GPTs). The study also reveals that while higher-income jobs are expected to have greater exposure to GPTs, almost all industries will be affected by this trend…

    • #2547848

      Does anyone remember “SKYNET”? I might be paranoid, but these AI chat/search bots might someday become known as the grandparents of the rise of the machines! Has anybody considered what all the hackers and plain old bad actors hiding in the corners of the dark web will do with this technology? Obviously Microsoft is all about profits, not integrity; reading that they fired everyone on their ethics & society team makes that clear. It’s once again a race to the top of the billionaires’ club. Please, someone, tell me I am overreacting and why this is a good thing for humanity. Sorry for being so negative, but you don’t have to look back very far in history to see where this will go.

      1 user thanked author for this post.