Newsletter Archives
-
How you can make DeepSeek tell the truth
PUBLIC DEFENDER
By Brian Livingston
The tech world was shocked last month when a Chinese company released DeepSeek, a chatbot that uses affordable, run-of-the-mill chips and consumes less energy per query than other artificial-intelligence programs.
What’s not so good about DeepSeek is the way it censors or outright lies about political affairs. This includes anything you ask the chatbot that relates to China, Hong Kong, Taiwan, Asian democracy, and numerous other subjects.
But it’s easy to make DeepSeek give you the straight-up truth — and I’ll tell you how to do it.
Read the full story in our Plus Newsletter (22.08.0, 2025-02-24).
-
What do we know about DeepSeek?
AI
By Michael A. Covington
On January 27, the Chinese AI company DeepSeek caused so much panic in American industry that NVIDIA stock dropped 17% in one day, and the Nasdaq as a whole briefly dipped 3.4%.
What scared everybody? The impressive performance of the DeepSeek large language model (LLM), which competes with ChatGPT yet reportedly cost less than a tenth as much to create and costs less than a tenth as much to run.
The bottom fell out of the market for powerful GPUs, at least temporarily, because they don’t seem to be needed in anywhere near the quantities expected.
But what is this DeepSeek, and what do we make of it?
Read the full story in our Plus Newsletter (22.07.0, 2025-02-17).
-
We now have AI in our forums!
AI-generated image of a computer chip
We’ve added a new section in our forums for the topic Artificial Intelligence. But don’t worry — we haven’t added AI to run our forum software, as nearly everyone else on the planet seems to be doing. We just want to provide a place to discuss its use — or reasons why we shouldn’t use it — in our welcoming community.
Let me note that this new forum section is separate from other sections devoted to specific vendors. This will be a general-purpose section.
I think there is a time and place to use artificial intelligence. For the moment, it’s showing up everywhere and seems like snake oil.
The most annoying AI-related thing I’ve seen so far is Copilot in Word. It now makes its presence prominently known and interferes with what you really want to do — work on a document. Maybe it will be helpful in the future, but right now my suggestion is to disable it or download a Classic version of 365.
-
Where did the rest of AI go?
AI
By Michael A. Covington
The term “artificial intelligence” goes back to the 1950s and defines a broad field.
The leading academic AI textbook, Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig — reportedly used at 1,500 colleges — mentions generative neural networks in only two of its 29 chapters.
Admittedly, that book dates from 2021; although it hasn’t been replaced, maybe it predates the revolution. Newer AI books are mostly about how to get results using off-the-shelf generative systems. Is it time for the rest of AI to die out? I don’t think so.
Read the full story in our Plus Newsletter (22.03.0, 2025-01-20).
-
The best stories of 2024 — updated!
PUBLIC DEFENDER
By Brian Livingston
The year 2024 is now in the books. I’m pleased to report some positive moves this year that may make the tech industry’s products better for us all.
I’ll give you some important updates today on (1) keeping artificial-intelligence services from creating malicious images, (2) minimizing social-media websites’ negative effects on users’ mental health, and (3) discovering how “answer engines” are improving on the tiresome linkfests of old-guard search giants.
Read the full story in our Plus Newsletter (21.53.0, 2024-12-30).
This story also appears in our public Newsletter.
-
LLMs can’t reason
AI
By Michael A. Covington
The word is out — large language models, systems like ChatGPT, can’t reason.
That’s a problem, because reasoning is what we normally expect computers to do. They’re not just copying machines. They’re supposed to compute things. We knew already that chatbots were prone to “hallucinations” and, more insidiously, to presenting wrong answers confidently as facts.
But now, researchers at Apple have shown that large language models (LLMs) often fail on mathematical word problems.
Read the full story in our Plus Newsletter (21.53.0, 2024-12-30).
-
MS-DEFCON 2: Closing out the year
By Susan Bradley
As we close the year of patching, I’m surprised to see that our vendors are facing many of the same issues they faced years ago — governments looking closely at their actions.
But this time, instead of scrutinizing monopolies built on on-premises software, regulators are looking at how Microsoft is monopolizing cloud services as well, and coercing governments into using more of its services. Recently, a ProPublica investigation questioned how much of Microsoft’s free outreach to governments, intended to enhance the security of its products, was designed to lock those customers into its subscription services.
Then the Department of Justice asked a judge to break up Google, force it to sell off the Chrome browser, and restrict its use of artificial intelligence and the Android mobile operating system. I still remember the lengthy monopoly trials against Microsoft. It seems the more things change, the more things in technology stay the same. We constantly have a push-pull relationship with our vendors.
Anyone can read the full MS-DEFCON Alert (21.49.1, 2024-12-05).
-
Write 200 social-media posts in 10 minutes! Quality, right?
PUBLIC DEFENDER
By Brian Livingston
I’ve been thinking about the profession of journalism lately, given the emails bombarding me these days about how I could create 240, 300, or even 1,200 articles per hour if I would only use the latest in chatbot tech.
YouTube’s funny farm is overflowing with videos of such miracles. They tell me I could write a whole ebook in 24 hours — true writers never sleep, you know — and make $8,327 a week ($433,000 a year) merely by pressing a few buttons.
Read the full story in our Plus Newsletter (21.44.0, 2024-10-28).