Newsletter Archives
-
50 years and counting
Where were you 50 years ago when Microsoft started? I was in junior high, and it wasn’t until high school that we saw our first BASIC computer. There was one computer in the math lab, and that was it. Now we carry vastly greater computing power in our pockets and on our wrists.
Fifty years later, Microsoft is holding a Copilot event on the 50th anniversary date. (What else would we expect?)
Where do you think we will be in the next 50 years in terms of computing? I realize that many of us won’t be around to experience the full 50 years! Will we get flying cars? Experience the lifestyle of the Jetsons? Interestingly enough, we do have many of the technologies that were shown in that classic cartoon show.
What technology do you predict for the next 50 years?
-
Microsoft wants to hear from you
I received this in an email from Microsoft this morning. It does not appear to be an April Fool’s joke. Take the company at its word — if you get this survey request, tell Redmond what you think.
I was struck by two things about the mostly multiple-choice survey. First, most of the available choices tended to favor Microsoft. For example, in a set of five choices, three could be construed as favorable to the company while only two were unfavorable. That’s not balanced.
The other was the following question: “How long have you used Copilot for?” I’m no English savant, but this would not have escaped my attention while editing an article for AskWoody. It would certainly have been rewritten by Roberta Scholz. I decided to ask Copilot to copy edit the sentence and it provided “How long have you been using Copilot?”
This suggests that the survey was written by a human, albeit one whose command of English is slightly below average.
-
The Casio question
I want to write a letter to Casio in the US. I knew neither who the CEO was nor where the US headquarters is located. So I decided to ask Copilot via Bing.com:
“Who is the president of Casio US, and what is the US headquarters’ mailing address?”
I then verified the answers against alternate sources. The mailing address Copilot returned was correct. The CEO’s name was wrong. Copilot gave me Makoto Ori, who did assume that position in 2021. But in 2023 a new CEO, Tomoo Kato, was named — a fact I established with an ordinary search and further reading. Copilot’s source was a news article from 2021.
Having found the new name, I simply typed it into Copilot. It responded by stating that Kato-san was the current CEO.
Copilot knew both names but gave me the wrong one when I asked. What are we to make of this?
-
What do we know about DeepSeek?
AI
By Michael A. Covington
On January 27, the Chinese AI company DeepSeek caused so much panic in American industry that NVIDIA stock dropped 17% in one day, and the whole Nasdaq had a 3.4% momentary dip.
What scared everybody? The impressive performance of the DeepSeek large language model (LLM), which competes with ChatGPT yet reportedly cost less than a tenth as much to create and costs less than a tenth as much to run.
The bottom fell out of the market for powerful GPUs, at least temporarily, because they don’t seem to be needed in anywhere near the quantities expected.
But what is this DeepSeek, and what do we make of it?
Read the full story in our Plus Newsletter (22.07.0, 2025-02-17).
-
The mustache question
We’ve been having something of a debate around here regarding the use of AI. Susan, as you well know from her numerous posts and columns about it, warns about jumping into Microsoft’s Copilot service. I agree that caution is warranted, and one of the first things I reported here was the abusive threat Bing’s chat feature made to a reporter. I have a slightly different take, based on the premise that these AI services and features are here whether we like them or not, and whether we fear them or not.
I conducted a very simple and brief experiment based on questioning two assistants. I asked Bing search, “Who was the last US president to have a mustache?” Bing did more or less what I expected — it gave me a list of results, with the first being a link to a Wikipedia article about facial hair on all presidents. I saw nothing on the first page with the answer. Further research would have been necessary.
Then I asked Copilot. Its response? “William Howard Taft.” It provided several citations in support. Of course, that’s the correct answer, and Copilot did the research for me. Annoyingly, Copilot’s response included a question for me: “Is there a particular reason you’re interested in presidential facial hair?” I was tempted to tell it, “None of your business.”
So, I ask you: Which assistant do you think did a better job?
-
We now have AI in our forums!
AI-generated image of a computer chip
We’ve added a new section in our forums for the topic of artificial intelligence. But don’t worry — we haven’t added AI to run our forum software, as nearly everyone else on the planet seems to have done. We just want to provide a place in our welcoming community to discuss its use — or the reasons we shouldn’t use it.
Let me note that this new forum section is separate from other sections devoted to specific vendors. This will be a general-purpose section.
I think there is a time and place to use artificial intelligence. For the moment, though, it’s showing up everywhere and much of it seems like snake oil.
The most annoying AI-related thing I’ve seen so far is Copilot in Word. It now makes its presence prominently known and interferes with what you really want to do — work on a document. Maybe it will be helpful in the future, but for now my suggestion is to disable it or install the classic version of Microsoft 365.
-
Where did the rest of AI go?
AI
By Michael A. Covington
The term “artificial intelligence” goes back to the 1950s and defines a broad field.
The leading academic AI textbook, Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig — reportedly used at 1,500 colleges — mentions generative neural networks in only two of its 29 chapters.
Admittedly, that book dates from 2021; although it hasn’t been replaced, maybe it predates the revolution. Newer AI books are mostly about how to get results using off-the-shelf generative systems. Is it time for the rest of AI to die out? I don’t think so.
Read the full story in our Plus Newsletter (22.03.0, 2025-01-20).
-
LLMs can’t reason
AI
By Michael A. Covington
The word is out — large language models, systems like ChatGPT, can’t reason.
That’s a problem, because reasoning is what we normally expect computers to do. They’re not just copying machines — they’re supposed to compute things. We already knew that chatbots were prone to “hallucinations” and, more insidiously, to presenting wrong answers confidently as facts.
But now, researchers at Apple have shown that large language models (LLMs) often fail on mathematical word problems.
Read the full story in our Plus Newsletter (21.53.0, 2024-12-30).