• Michael Covington (@mcovington)

    • in reply to: How you can make DeepSeek tell the truth #2751069

      There is not a grand conspiracy to promote “the narrative,” but LLMs do have censorship, often for well-meaning purposes, such as keeping them from producing pornography.

      I have heard from a theologian who was trying to make a major American LLM summarize a set of historical Puritan religious books, and it wouldn’t do it, because some of the things the books said about sexual morality were too conservative for it.  Go figure.

    • in reply to: How you can make DeepSeek tell the truth #2751068

      That overcomes hallucinations, but not censorship.  Running the same LLM more than once with different “temperature” (randomization) settings might also work.
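
      A minimal sketch of the multiple-runs idea, assuming an OpenAI-style chat API (the model name and the factual question are placeholders, not recommendations):

      ```python
      # Ask the same question at several temperatures and compare the answers.
      # Requires the openai package and an OPENAI_API_KEY in the environment.
      from openai import OpenAI

      client = OpenAI()

      def ask(prompt: str, temperature: float) -> str:
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[{"role": "user", "content": prompt}],
              temperature=temperature,
          )
          return resp.choices[0].message.content

      prompt = "In what year was the University of Georgia founded?"
      for t in (0.0, 0.7, 1.2):
          print(f"temperature={t}: {ask(prompt, t)}")
      # If the answers disagree across temperatures, treat the claim as suspect.
      ```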

    • in reply to: How you can make DeepSeek tell the truth #2751067

      Excellent hack!  This suggests a whole family of tricks for making LLMs divulge what they’ve been trained on.  It also shows that the censorship is very much a last-step thing, achieved by rejecting output after it’s generated, not by limiting the training set.  Well done!
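
      Here is a sketch of what such a last-step filter might look like; the blocklist and the generate() stub are invented purely for illustration:

      ```python
      # The model produces a complete answer first; a filter then rejects it.
      # This is the "censor the output, not the training set" architecture.
      import re

      BLOCKLIST = [re.compile(p, re.IGNORECASE) for p in (r"forbidden topic",)]
      REFUSAL = "Sorry, I can't discuss that."

      def generate(prompt: str) -> str:
          # Stand-in for the real model, which did see the topic in training.
          return "Here is everything I know about the forbidden topic ..."

      def answer(prompt: str) -> str:
          draft = generate(prompt)                  # full output generated first
          if any(p.search(draft) for p in BLOCKLIST):
              return REFUSAL                        # rejected only at the last step
          return draft

      print(answer("Tell me about it."))            # prints the refusal
      ```

      The hack works because anything that slips past that final check was never actually removed from the model.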

    • in reply to: What do we know about DeepSeek? #2749044

      That generative AI technology is changing fast, but it takes time to see what each advance actually amounts to.

      And that it’s good not to assume that progress will be steady and all in one direction.

    • in reply to: Where did the rest of AI go? #2741448

      In fact, what’s happening right now is that the rest of AI may be poised to come roaring back.  As of the last month or so, I’m no longer hearing so much about LLMs and chatbots being close to the human mind in capability.  Instead I’m hearing about combinations of LLMs with other software: other kinds of AI, and even simple databases and calculators.  If the chatbot is in the driver’s seat, calling on other software to get answers to questions, it’s called “agentive” AI.  Otherwise it’s composite or mixed-mode AI.  I’m glad to see all of this happening.
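
      A toy sketch of that agentive pattern, with a stub standing in for the LLM’s routing decision and a restricted calculator as the only real tool:

      ```python
      # The "driver's seat" loop: the model decides whether to answer directly
      # or hand the question to a tool that gives an exact answer.
      def calculator(expression: str) -> str:
          allowed = set("0123456789+-*/(). ")
          if not set(expression) <= allowed:       # arithmetic characters only
              raise ValueError("not arithmetic")
          return str(eval(expression))             # safe given the whitelist

      def route(question: str) -> str:
          # Stand-in for an LLM recognizing that it needs exact arithmetic.
          return "calculator" if any(op in question for op in "+-*/") else "direct"

      def agent(question: str) -> str:
          if route(question) == "calculator":
              return calculator(question)          # the tool supplies the answer
          return "free-text answer from the LLM"   # placeholder direct reply

      print(agent("12345 * 6789"))                 # -> 83810205, exactly
      ```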

    • in reply to: Where did the rest of AI go? #2741447

      MartyHs:  Humans definitely have a strong bias to attribute humanlike consciousness to anything that looks even vaguely humanlike.  That serves us well — it helps us understand that very unfamiliar people have minds like ours, and that animals have mental functions analogous to ours in some ways.  It’s better than erring the other way — until it leads to people being too easily tricked by machines.

    • in reply to: LLMs can’t reason #2730303

      So when you say 2 + 2 = 4, you are only being approximate?

      Human reasoning is limited.  But humans can perform explicit symbolic reasoning (such as arithmetic or highly explicit logic) that is not approximate.  It is not the easiest kind of thinking, but it is something we do.
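
      The distinction is easy to see in plain Python: floating-point arithmetic is approximate, while integer and fraction arithmetic is exact.

      ```python
      # Approximate vs. exact arithmetic.
      from fractions import Fraction

      print(0.1 + 0.2 == 0.3)                                       # False: floats approximate
      print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True: exact
      print(2 + 2 == 4)                                             # True: integers are exact
      ```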

    • in reply to: Adobe doubles down on subscriptions #2730301

      Related bad news: Premiere Elements 2019, with its perpetual license, will RUN on Windows 11 but will not INSTALL on Windows 11. (The installer tells you that you have the wrong kind of web browser, and none of Adobe’s suggested workarounds worked for me; the final word from Adobe is that it’s not supposed to work.)

      The only way to run it under Windows 11 is to install it under Windows 10 and then upgrade the machine to 11.

      This kept me from continuing to use one of my Premiere Elements 2019 licenses when I got a new computer. It is not due to any technical limitation of Premiere Elements 2019.

      Not well done, Adobe. I don’t edit videos often and don’t want to have to learn a more complicated package.

    • in reply to: LLMs can’t reason #2730008

      You’ve just asked my favorite computational linguistics question, the question nobody dares to ask.

      Obviously there is much less training material in languages other than English.  But there’s also another problem.

      LLMs treat language as consisting of discrete words in fixed order.  That works well for English and should work well for Chinese, because words have only a few forms in these languages and the order is fixed.  In Chinese, each word has only one form.  In English, a verb can have five forms but usually has only four (base, -s, -ed, -ing), and nouns have two forms (singular and plural).  In French, a verb has dozens of forms.  In Russian, nouns also have multiple forms, and the word order is appreciably more variable, because noun suffixes, rather than word order, indicate subject and object.  So even if you had the same amount of training material in Russian as in English, it would be less effective, because there is more for the LLM to model.
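
      A back-of-envelope illustration of why the same amount of text goes less far in a highly inflected language (the form counts below are rough guesses, not measured data):

      ```python
      # With a fixed corpus size, more inflected forms per word means fewer
      # occurrences of each form for the model to learn from.
      CORPUS_TOKENS = 1_000_000
      LEMMAS = 10_000  # distinct dictionary words, assumed equal across languages

      FORMS_PER_LEMMA = {"Chinese": 1, "English": 3, "Russian": 10, "French": 30}

      for lang, forms in FORMS_PER_LEMMA.items():
          distinct = LEMMAS * forms
          per_form = CORPUS_TOKENS / distinct
          print(f"{lang:8}: {distinct:7,} surface forms, ~{per_form:,.0f} examples each")
      ```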

      Thank you for your interest!

    • in reply to: Ethics and computing #2694970

      I’m not sure what kind of AI you have in mind.  Large language models (ChatGPT, etc.) are trained on input texts (generally as much as possible), not on the programmer’s opinions.  It is not easy, and in many cases not possible, for the programmer to insert his own opinions.  As for rule-based AI, it has clear goals and is tested, and a programmer trying to skew it would be noticed.

      Having said that… I am confidentially aware of a situation where a commercial chatbot has “political correctness filters” on its output, to keep it from saying certain unpopular things, and as a result was not able to truthfully report some things said in old religious books that it was being used to summarize.

      Chatbots are not repositories of knowledge in the first place.  Nobody should treat them as such.  They are repositories of language.

    • in reply to: Ethics and computing #2694969

      I call this the separate-world illusion.  When administering network acceptable-use policy at the U. of Georgia, I occasionally met young people who were deep in it, to the point of being unaware of other people in the real world.  It can reach a level that is close to what psychologists would call a personality disorder.

    • in reply to: Ethics and computing #2694968

      Remember, “there is no law against stealing elephants!”  I think that’s an example of enduring usefulness.

    • in reply to: Ethics and computing #2669624

      Thank you all for your support!

    • in reply to: Ethics and computing #2669623

      We have an AI article coming up, but briefly:  The racism (etc.) in some AI systems comes from the training data.  Programmers do not put it in.  Machine learning algorithms learn to recognize whatever patterns they encounter, often very efficiently.
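
      A tiny synthetic demonstration of that mechanism, assuming scikit-learn; the data and the skew are invented:

      ```python
      # A learner absorbs whatever correlations its training data contains.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 10_000
      group = rng.integers(0, 2, n)                # an arbitrary attribute
      # Deliberately skewed labels: 70% positive for group 1, 30% for group 0.
      label = (rng.random(n) < np.where(group == 1, 0.7, 0.3)).astype(int)

      model = LogisticRegression().fit(group.reshape(-1, 1), label)
      print(model.predict_proba([[0], [1]])[:, 1]) # ~[0.3, 0.7]: skew learned
      ```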

    • in reply to: Ethics and computing #2669622

      Well… I’m not going to disparage the young.  There have always been people who buried themselves in fiction, sports, or games of one kind or another.  In fact, my experience at the University of Georgia was that students’ computer ethics was much worse in the 1990s than now.  Nowadays they have more experience and awareness and are guided by the people around them.
