Newsletter Archives
-
What do we know about DeepSeek?
AI
By Michael A. Covington
On January 27, the Chinese AI company DeepSeek caused so much panic in American industry that NVIDIA stock dropped 17% in one day, and the Nasdaq as a whole took a momentary 3.4% dip.
What scared everybody? The impressive performance of the DeepSeek large language model (LLM), a ChatGPT competitor that reportedly cost less than a tenth as much to create and costs less than a tenth as much to run.
The bottom fell out of the market for powerful GPUs, at least temporarily, because they don’t seem to be needed in anywhere near the quantities expected.
But what is this DeepSeek, and what do we make of it?
Read the full story in our Plus Newsletter (22.07.0, 2025-02-17).
-
We now have AI in our forums!
We’ve added a new section in our forums for the topic Artificial Intelligence. But don’t worry — we haven’t bolted AI onto our forum software itself, as nearly everyone else on the planet seems to be doing. We just want to provide a place to discuss its use — or the reasons we shouldn’t use it — in our welcoming community.
Note that this new section is general-purpose, separate from the sections devoted to specific vendors.
I think there is a time and place to use artificial intelligence. For the moment, it’s showing up everywhere and seems like snake oil.
The most annoying AI-related thing I’ve seen so far is Copilot in Word. It now prominently makes its presence known and interferes with what you really want to do — work on a document. Maybe it will be helpful in the future, but for now my suggestion is to disable it or download a Classic version of 365.
-
Looking back, looking forward
LEGAL BRIEF
By Max Stul Oppenheimer, Esq.
The big tech story of 2024 was, by far, artificial intelligence.
Although it was often portrayed as sui generis (Latin for “we’ve never seen anything like it, and we need to start thinking from scratch …”), the emergence of artificial intelligence into public use and consciousness highlighted (and added urgency to) old issues, more than it created any new ones.
The questions — who owns personal information, where does the right to privacy begin and end, what are the limits of copyright’s fair use doctrine, to what extent can free speech be controlled in the interest of other rights (such as privacy or the protection of minors) — are not new, nor even recent.
Read the full story in our Plus Newsletter (22.03.0, 2025-01-20).
-
Where did the rest of AI go?
AI
By Michael A. Covington
The term “artificial intelligence” goes back to the 1950s and covers a broad field.
The leading academic AI textbook, Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig — reportedly used at 1,500 colleges — mentions generative neural networks in only two of its 29 chapters.
Admittedly, that book dates from 2021; although it hasn’t been replaced, maybe it predates the revolution. Newer AI books are mostly about how to get results using off-the-shelf generative systems. Is it time for the rest of AI to die out? I don’t think so.
Read the full story in our Plus Newsletter (22.03.0, 2025-01-20).
-
LLMs can’t reason
AI
By Michael A. Covington
The word is out — large language models, systems like ChatGPT, can’t reason.
That’s a problem, because reasoning is what we normally expect computers to do. They’re not just copying machines. They’re supposed to compute things. We knew already that chatbots were prone to “hallucinations” and, more insidiously, to presenting wrong answers confidently as facts.
But now, researchers at Apple have shown that large language models (LLMs) often fail on mathematical word problems.
Read the full story in our Plus Newsletter (21.53.0, 2024-12-30).
-
Can we align human interests with robots, so they don’t turn on us?
PUBLIC DEFENDER
By Brian Livingston
Robots in human-like forms are already starting to assume jobs that have been performed for centuries by ordinary workers in manufacturing, logistics, and other industries.
This is my second column in a two-part series. The first installment described humanoid bots that are faster than humans at certain tasks, much stronger in moving heavy objects, and far lower in cost than the labor force in most industrialized nations. Employers are currently paying only $10 to $12 per hour for bots when averaged over the useful lives of the mechanical workers.
The outlay is expected to fall into the $2 to $3 per hour range, plus software costs, as soon as mass-production scale is achieved, which is projected to occur as early as 2025.
Read the full story in our Plus Newsletter (21.32.0, 2024-08-05).
-
Apple owns ‘AI’
APPLE
By Will Fastie
Apple’s marketing skills are legendary, but the Spaceship has taken them to a new galaxy.
Everything is about AI now. It’s getting to the point that a loaf of bread at the grocery will be marked “Baked in AI-enhanced ovens!”
We all know that “AI” is an abbreviation for “artificial intelligence.” But in the keynote address for Apple’s World Wide Developers Conference last week, presenters announced “Apple Intelligence.” No one specifically suggested that Apple would co-opt the abbreviation “AI” — just consider it a fait accompli. And also consider it a spectacularly brilliant marketing move.
Read the full story in our Plus Newsletter (21.25.0, 2024-06-17).
-
Artificial minds
COMMENTARY
By Michael A. Covington
Artificial intelligence changes the ethics and computing scene.
In my previous article, Ethics and computing, I discussed how the rise of personal computing created a break in our natural understanding of ethics.
Now, the rise of AI adds further complications. Let’s delve into that a bit.
Read the full story in our Plus Newsletter (21.20.0, 2024-05-13).