Newsletter Archives
-
What do we know about DeepSeek?
AI
By Michael A. Covington
On January 27, the Chinese AI company DeepSeek caused so much panic in American industry that NVIDIA stock dropped 17% in one day, and the whole Nasdaq had a 3.4% momentary dip.
What scared everybody? The impressive performance of the DeepSeek large language model (LLM), a ChatGPT competitor that reportedly cost less than a tenth as much to create and costs less than a tenth as much to run.
The bottom fell out of the market for powerful GPUs, at least temporarily, because they don’t seem to be needed in anywhere near the quantities expected.
But what is this DeepSeek, and what should we make of it?
Read the full story in our Plus Newsletter (22.07.0, 2025-02-17).
-
Where did the rest of AI go?
AI
By Michael A. Covington
The term “artificial intelligence” goes back to the 1950s and defines a broad field.
The leading academic AI textbook, Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig — reportedly used at 1,500 colleges — mentions generative neural networks in only two of its 29 chapters.
Admittedly, that book dates from 2021; although it hasn’t been replaced, maybe it predates the revolution. Newer AI books are mostly about how to get results using off-the-shelf generative systems. Is it time for the rest of AI to die out? I don’t think so.
Read the full story in our Plus Newsletter (22.03.0, 2025-01-20).
-
LLMs can’t reason
AI
By Michael A. Covington
The word is out — large language models, systems like ChatGPT, can’t reason.
That’s a problem, because reasoning is what we normally expect computers to do. They’re not just copying machines. They’re supposed to compute things. We already knew that chatbots were prone to “hallucinations” and, more insidiously, to presenting wrong answers confidently as facts.
But now, researchers at Apple have shown that large language models (LLMs) often fail on mathematical word problems.
Read the full story in our Plus Newsletter (21.53.0, 2024-12-30).
-
Artificial minds
COMMENTARY
By Michael A. Covington
Artificial intelligence changes the ethics and computing scene.
In my previous article, Ethics and computing, I discussed how the rise of personal computing created a break in our natural understanding of ethics.
Now, the rise of AI adds further complications. Let’s delve into that a bit.
Read the full story in our Plus Newsletter (21.20.0, 2024-05-13).
-
Ethics and computing
ISSUE 21.19 • 2024-05-06 COMMENTARY
By Michael A. Covington
Computer ethics and AI ethics are easier than you think, for one big reason.
That reason is simple: if it’s wrong to do something without a computer, it’s still wrong to do it with a computer.
See how much puzzlement that principle clears away.
Consider, for example, the teenagers in several places who have reportedly used generative AI to create realistic nude pictures of their classmates. How should they be treated? Exactly as if they had been good artists and had drawn the images by hand. The only difference is that computers made it easier. Computers don’t change what’s right or wrong.
Read the full story in our Plus Newsletter (21.19.0, 2024-05-06).
This story also appears in our public Newsletter.