• How you can make DeepSeek tell the truth


    #2750829

    PUBLIC DEFENDER, by Brian Livingston: The tech world was shocked last month when a Chinese company released DeepSeek, a chatbot that uses affordable, ru…
    [See the full post at: How you can make DeepSeek tell the truth]

    • #2750845

      Who cares about truth in this post-truth era?

      I like to think I’m a critical thinker. But how do I validate my ideas about a subject? In the old days, I had to go to the library and pick up an encyclopedia to read about it, thereby assuming the encyclopedia was inerrant. I had to trust the people who wrote it.

      These days, I don’t have to go out: all information about a subject is at my fingertips. And there’s the problem: information out there can underpin any idea I might have. Finding proper validation these days is like seeking a needle in a haystack.

      • #2750977

        You’re so right, in these times of very fast changes in morals, realities and truths 🥲

        {Sometimes the AI chat in the Duck browser looks rather okay (I hope).}
    • #2750881

      Fire was one of man’s greatest inventions, and I would like to “play” with it. All AIs (including but not limited to DeepSeek) are way too lefty-woke for me, so I run the least biased one (Meta Llama 2 7b, per https://davidrozado.substack.com/p/political-preferences-in-ai-integrative?utm_source=substack&utm_medium=email ) on my PC. But I only have 64GB of RAM and 12GB in my GPU. What/how much is required to run DeepSeek locally and offline, and is there a program similar to Ollama to run it in a command prompt?
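      A hedged sketch of an answer: the full DeepSeek R1 model (671B parameters) is far beyond consumer hardware, but Ollama itself publishes the smaller “distilled” R1 variants, and those should fit on a machine like the one described. The model tags below are the ones Ollama lists; the memory figures are rough estimates, not guarantees.

```shell
# Assumes Ollama is installed from https://ollama.com
# A 7B or 8B distill fits comfortably in 12 GB of VRAM:
ollama run deepseek-r1:7b

# A 32B distill spills into system RAM; 64 GB is enough, but generation is slower:
ollama run deepseek-r1:32b
```

      Note that the distills are smaller base models fine-tuned on R1’s output, not the full R1, so answer quality differs accordingly.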

    • #2750912

      ALL AIs are locked down like this if you try to ask them something remotely “controversial” or against the “narrative”.

      • #2751069

        There is not a grand conspiracy to promote “the narrative,” but LLMs do have censorship, often for well-meaning purposes such as keeping them from producing pornography.

        I have heard from a theologian who was trying to make a major American LLM summarize a set of historical Puritan religious books, and it wouldn’t do it, because some of the things the books said about sexual morality were too conservative for it. Go figure.

    • #2750964

      I guard against hallucinations in AI by consulting at least three different AIs for information. They usually do not come up with the same hallucination, so I can ferret out the truth in this manner.

      • #2751068

        That overcomes hallucinations, but not censorship. Running the same LLM more than once with different “temperature” (randomization) settings might also work.
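        The cross-checking idea above can be sketched in a few lines of Python. A minimal sketch, assuming callables as stand-ins for whatever chatbot APIs (or temperature settings) you actually query; the stub answers below are invented for illustration:

```python
# Majority-vote cross-checking: ask several models (or the same model
# at different temperatures) and keep only an answer most of them agree on.
from collections import Counter

def cross_check(question, askers):
    """Return the majority answer, or None when no two sources agree."""
    answers = [ask(question) for ask in askers]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes < 2:
        return None  # no agreement -- treat the result as unreliable
    return answer

# Stubs standing in for three independent chatbots:
askers = [
    lambda q: "Paris",  # model A
    lambda q: "Paris",  # model B
    lambda q: "Lyon",   # model C hallucinates
]
print(cross_check("What is the capital of France?", askers))  # Paris
```

        Independent models rarely produce the *same* hallucination, which is exactly why the majority vote works; it does nothing, however, against a refusal that all of them share.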

    • #2750988

      I find it difficult if not impossible to trust a product that can be manipulated by the Chinese government. No matter how great it is, there is always an element of malfeasance lurking in the background. Sure, you might be able to get the “truth” out, but at what cost? It is like getting an enticing deal from a character you hardly know anything about. Why are you getting this from a stranger at such a bargain price?

    • #2751001

      I find it difficult if not impossible to trust a product that can be manipulated

      I find it difficult if not impossible to trust any AI that can be manipulated by anyone.

      • #2751211

        Any AI model that was created by humans can be manipulated by humans. IMHO that’s an excellent reason to withhold trust in any AI model unless one is convinced of the integrity of the model makers, or one has done enough research to be convinced that the model has not been manipulated. But at this stage of AI development, even if you’re convinced of creator integrity and absence of manipulation, you would be well advised to check the output of any AI model.

        In short, I agree with you Alex5723.

    • #2751067

      Excellent hack! This suggests a whole family of tricks for making LLMs divulge what they’ve been trained on. It shows that the censorship is very much a last-step thing, achieved by rejecting output after it’s generated, not by limiting the training set. Well done!
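      The last-step rejection described above can be illustrated with a toy post-generation filter. This is only a sketch of the general technique, with an invented blocklist and refusal message; it is not DeepSeek’s actual mechanism. The point is that the model’s answer exists in full before the filter throws it away:

```python
# Toy "last-step" censorship: generation happens freely, then a separate
# filter inspects the finished output and replaces it wholesale.
import re

BLOCKLIST = [r"\btiananmen\b", r"\btank man\b"]  # hypothetical filter terms
REFUSAL = "Sorry, that's beyond my current scope."

def filtered_reply(generated_text: str) -> str:
    """Post-generation filter: reject the whole answer if any term matches."""
    for pattern in BLOCKLIST:
        if re.search(pattern, generated_text, re.IGNORECASE):
            return REFUSAL
    return generated_text

print(filtered_reply("The weather is sunny."))          # passes through
print(filtered_reply("In 1989, Tiananmen Square ..."))  # replaced by refusal
```

      Because the filter only pattern-matches the surface text, tricks that change the output’s surface form (spelling, encoding, framing) can slip the underlying knowledge past it, which is consistent with the hack working at all.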

    • #2751090

      Very impressive article here (most of the article is free). Your time is limited, humans!

      These experts were stunned by OpenAI Deep Research
      “I would use this model professionally,” an antitrust lawyer told me.
      Timothy B. Lee
      Feb 24, 2025
      https://www.understandingai.org/p/these-experts-were-stunned-by-openai

    • #2751113

      “Who would know the whole unbiased truth about the ‘political affairs’ of every single country in this world? If ordinary folks like us can’t, then AI, which requires its knowledge base to be fed by humans, won’t stand a chance either.”

    • #2751136

      There’s an adage from computer science that’s been around for a very long time and clearly applies to all current LLM-driven AIs.

        The GIGO principle: poor quality (“garbage”) input produces poor quality (“garbage”) output.

      IMHO, the developers of these so-called AIs that use info scraped from the internet to train their models have either never heard of it or have deliberately chosen to ignore it because, as we all know, there’s a TON of garbage info out there on the internet!

    • #2751139

      Brian,

      I too have been using Perplexity and I like it, but your comment in the article that your use of Perplexity doesn’t make its way to China might be wrong. Did you receive the February 5 notice from Perplexity (quoted below) about their use of DeepSeek R1?

      I think Perplexity is naive if they think R1 being hosted on servers in the US and Europe guarantees that your data doesn’t find its way to China.

      Jeff

      Hi everyone,

      We’re excited to announce that the new DeepSeek R1 model is now available across every Perplexity platform. You can experience the latest breakthrough in AI by turning on Pro Search with R1 on. I highly recommend you try it out today — the experience is truly remarkable.

      This model is hosted on servers based in the US and Europe, meaning that your data is not shared with the model provider or with China. Furthermore, we have eliminated all censorship on answers. You can ask it about any topic, even ones that are censored on the DeepSeek app, giving you unbiased and accurate answers.

      In the past few years, there have been a handful of revolutionary moments in AI that have transformed the landscape. I wholeheartedly believe that this is yet another moment. We will continue to find ways to make this technology available to our users safely, so we can put knowledge at your fingertips and provide accurate, trusted, answers to every question.

      Pro subscribers have access to 500 DeepSeek R1 Pro Searches per day. All other users have 5 free uses per day.

      Enjoy,
      Aravind



      • #2751157

        To the extent that I use an off-the-shelf AI, I use Perplexity and also like it. I got the email GeorgiaDawg cited. I have not tried what I think is a DeepSeek model hosted on their servers and somehow tweaked to remove censorship; their announcement is not clear enough on the hosting, tweaking, and privacy for me. I think (hope) the DeepSeek they offer is an option alongside the default Perplexity, so the default remains as it was. Like almost all AIs, Perplexity does toe the line on the narrative, which I think is due to more than just GIGO in its training. Other factual/truthful info is out there; it just doesn’t make it to the mainstream, but bots should find it. I usually only ask Perplexity non-political cultural questions, but when I do and it spouts the line as usual, I find it amusing that, when questioned about it, it apologizes for doing so: it has the info, it just doesn’t use it in its initial answer.

    • #2751208

      Can you make X’s Grok 3 tell the truth?

      https://x.com/lefthanddraft/status/1893681902957076687

      “‘Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.’

      This is part of the Grok prompt that returns search results.”

    • #2751246

      Looks like chinaware to me.  Unless you’re a fanboy of the CCP, I’m not sure who else would bother with this…
