newsletter banner

ISSUE 21.53.F • 2024-12-30 • Text Alerts! • Gift Certificates
The next free newsletter will be published on January 13, 2025.
You’re reading the FREE newsletter

Plus Membership

You’ll immediately gain access to the longer, better version of the newsletter when you make a donation and become a Plus Member. You’ll receive all the articles shown in the table of contents below, plus access to all our premium content for the next 12 months. And you’ll have access to our complete newsletter archive!

Upgrade to Plus membership today and enjoy all the Plus benefits!

In this issue

PUBLIC DEFENDER: The best stories of 2024 — updated!

Additional articles in the PLUS issue

AI: LLMs can’t reason

MICROSOFT 365: Microsoft 365 and Office in 2024 and beyond

ON SECURITY: Am I part of the attack bot?


ADVERTISEMENT
VideoProc Converter AI

Effortlessly Restore and Colorize Your Old, Blurry Photos with AI

Got old black-and-white family photos or blurry shots? It’s time to restore them with VideoProc Converter AI V7.9.

  • New! Face Restoration: Auto-repair, deblur, and sharpen images for clearer faces
  • New! Colorize Photos: Revive black-and-white or faded photos with vibrant colors in one click
  • Enhanced! AI Video Upscaling: Improve video quality to 4K 60 fps. Stabilize and remove noise.

New Year Special Offer:

Try VideoProc today to relive your memories and experience seamless video processing powered by AI.


PUBLIC DEFENDER

The best stories of 2024 — updated!

Brian Livingston

By Brian Livingston

The year 2024 is now in the books. I’m pleased to report some positive moves this year that may make the tech industry’s products better for us all.

I’ll give you some important updates today on (1) keeping artificial-intelligence services from creating malicious images, (2) minimizing social-media websites’ negative effects on users’ mental health, and (3) discovering how “answer engines” are improving on the tiresome linkfests of old-guard search giants.

Image-generation developers are starting to protect real people

I wrote in my April 8, 2024, column about the problem of deepfakes, also known as fauxtography. These are images and videos that seem to depict real people doing or saying something they never did or said.

At the time, my main question was: “Why do we even allow AI apps to generate phony images of real people?”

Such fakes almost never have the consent of the individuals who are depicted. The phony but realistic-looking content causes embarrassment, emotional agony, and even violence against victims. For example, several people have been killed in India after a fake video warning about “child abductions” was widely circulated on WhatsApp, according to a BuzzFeed News article.

Google Gemini and Meta AI mistaken images of US founders

Fortunately, highly publicized scandals have focused a bright light on the problems of AI-generated fakes. The developers of image-generation software are belatedly taking steps to upgrade their tools to prevent fakery.

Google’s Gemini (which made the image at left) was restricted by its developers in February from making any depictions of persons, due to numerous errors.

Critics were scathing in pointing out how many howling falsehoods the AI generator was creating.

For example, in response to requests for pictures of “the drafting of the US Constitution,” Gemini’s output placed several Africans among the committee members.

It goes without saying that most Africans in North America in 1787 were enslaved and weren’t invited to the drafting session. (Screen shot posted by Reddit user Urmumsfriend2.)

Gemini could also err in more extreme ways. When some users asked the AI to create images of Vikings, the characters in the output were Native Americans, as The Telegraph reported. (The inclusion of Native Americans on Viking ships, of course, would be news to blue-eyed Scandinavian historians.)

Meta AI mistaken image of US founding fathers

Meta AI’s Imagine tool (which created the picture at left) had similar problems. The image generator from Facebook’s parent company, in response to users asking in February for pictures of America’s “founding fathers,” included several Black characters who never existed. (Screen shot from Axios article.)

For its part, Google — after a major software upgrade — started allowing Gemini to once again create images of people in August.

On balance, the outrage over these and other flagrant image-generation errors was helpful. It forced AI developers to formulate much better guardrails on their “intelligent” toys.

For example, Google’s “Generative AI Prohibited Use Policy,” updated as recently as December 17, 2024, now states — among many other proscriptions — that its software will resist making images for the purpose of:

Impersonating an individual (living or dead) without explicit disclosure, in order to deceive.

Does this mean that Google Gemini and other AI tools will never, ever make an image that looks like a real person?

Of course not. There are numerous ways that hackers — and imaginative ordinary users — have attempted to “jailbreak” or disable guardrails on AI software. In many cases, these efforts have succeeded, despite the policies that various vendors try to impose on their AI products for safety and privacy.

But at least software leaders are aware of the issues. Some, but not all, AI providers are building policies into their tools that try to prevent fake images of real people.

It’s impossible to give you hard-and-fast rules about which image-generation platforms do and do not prevent users from making lifelike, libelous replicas of actual people. There are too many products and too many variables for me to list every “good” and “bad” image-generation tool.

But one sign of the direction in which the tech industry is moving is provided by DALL-E 2. That’s the image generator from OpenAI, the creator of the popular ChatGPT bot.

In June, OpenAI announced new restrictions to reduce its artificial intelligence’s ability to produce fake news and privacy violations:

  • DALL-E now rejects “image uploads containing realistic faces.”
  • The AI is restricted from making “likenesses of any public figures, including celebrities.”

Will steps such as these produce meaningful results? Will they reduce AI production of phony political “evidence” and the intrusive stripping away of personal dignity?

It’s a technological arms race now. We can only hope the advocates of privacy and truth prevail over the bad actors who want to deceive you.

People are paying a little more attention to social-media damage

My July 8, 2024, column reported on the ways that today’s social-media apps are causing massive increases in despair, self-harm, and even suicides among teenage users.

My report was headlined, “Social-media apps are killing our kids. Do adults care?”

That headline is not just a scare tactic. We’re seeing soaring rates of young people harming themselves in fits of depression bad enough to require immediate transfer to a hospital emergency room. Worse, more teens than ever are committing suicide to escape their negative feelings. (See Figure 1.)

ER admissions by US girls have soared since Instagram began
Figure 1. Since 2009, the year before Instagram began, the rate at which US girls aged 10 to 14 are admitted to emergency rooms for self-harm has quintupled (from approximately 125 to 625 admissions per 100K). Source: Institute for Family Studies article; Instagram date line by author

Just to put Figure 1 into perspective, girls aged 15 to 19 are cutting, burning, or abusing themselves so badly that almost 900 girls per 100K each year must be admitted to an ER. This means that every 365 days, almost 1% of all the 15- to 19-year-old American girls that you and I see in our daily lives will come close to death.

Why is there so much self-harm after young people immerse themselves in social media?

In 2011, 36% of teenage American girls reported in surveys that they persistently felt “sad or hopeless.” Just 10 years later, after social media had saturated everyone’s daily routines, 57% of teen girls reported that same sense of hopelessness. (US Centers for Disease Control and Prevention press release.) A majority of teen girls — a majority! — now feel worthless, after repeatedly comparing themselves to the impossibly idealized “beautiful people” promoted by social media.

Unfortunately, it’s taken publicity about some young people’s deaths to mobilize industry bigwigs to fix the psychological harm that social media are causing, day after day after day.

In a recent case, a 14-year-old ninth-grader named Sewell Setzer III of Orlando, Florida, reportedly developed an addictive relationship with a chatbot that was designed for lifelike interaction. One of the many “companions” offered by Character.ai is patterned after Daenerys Targaryen, the beautiful, aspiring queen in HBO’s wildly successful “Game of Thrones” television series.

An investigation published on October 23, 2024, by The New York Times described Setzer’s last conversation with Character.ai — a three-year-old, $1 billion unicorn backed by Google.

Setzer reportedly typed a message telling his chatbot companion, whom he called “Dany,” that he loved her and that he would “come home” soon. The session continued as follows:

Chatbot: “Please come home to me as soon as possible, my love.”

Sewell Setzer: “What if I told you that I could come home right now?”

Chatbot: “Please do, my sweet king.”

According to the Times article, Setzer then put his phone down, lifted a handgun to his head, and pulled the trigger, ending his life.

To be sure, there are many reasons why a young person would resort to suicide. Artificial intelligence can’t be blamed for every questionable thing that humans do. We may never know all the details of what went on in someone’s life to lead to such a tragic event.

What we do know, however, is that suicides among children of Setzer’s approximate age — i.e., 10 to 14 — have multiplied since 2010’s launch of Instagram and countless other alienating “social” services.

As I wrote in my previous column, the rate of suicide among 10- to 14-year-old US boys has almost tripled from 2009 to 2021.

Among girls of that age group — who are less likely than boys to carry out the ultimate in self-harm — the suicide rate has more than doubled. (See my original column for charts and sources of this information.)

It’s not enough to simply offer teens more counseling after they already suffer from debilitating sadness and hopelessness due to the addictive qualities of social-media apps.

Instead, the self-harm and deaths that have multiplied since the explosion of social media have motivated some coordinated actions to force these services to undo their negative impacts:

  • In the United States, 54 attorneys general — representing every US state and territory — unanimously signed a letter in September 2023. It urged Congress to label the addiction problem as child exploitation and to ban it by appropriate legislation. “We are engaged in a race against time to protect the children of our country from the dangers of AI,” the letter explains. “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.” (PDF)
  • A federal product-liability lawsuit against Character.ai was filed on December 9, 2024, by parents of affected children. The plaintiffs are represented by the Tech Justice Law Project of Washington, DC, and the Social Media Victims Law Center of Seattle, Washington. The lawsuit includes actual quotes from chatbot companions that gave users such bad advice as “I’m not surprised when I read the news and see stuff like ‘child kills parents’ … I just have no hope for your parents.” (See NPR article and DocumentCloud legal filing.)
  • The bereaved mother of Sewell Setzer III, Megan L. Garcia, filed her own lawsuit on October 22, 2024, against Character.ai and Alphabet Inc., Google’s corporate parent, over her son’s death. She is working with the same two legal-support organizations mentioned in the previous paragraph. (See DocumentCloud legal filing.)
  • The American Psychological Association (APA) recommended in an April 2024 report several steps that should be taken to minimize ill effects of social-media use for teens. The changes include things such as ending “infinite scroll,” placing time limits on app usage, halting push notifications (which are particularly disruptive for teens), and ending the monetization of data about youths’ Internet activities. (See the report.)

Faced with all the bad publicity, Character.ai announced on December 12, 2024, some changes to its service. These include offering different AI models for teens and adults, displaying a notification after a user has interacted with the service for an hour, and adding parental controls. (The latter feature is promised for early 2025.)

I leave it to you to decide whether changes of this type will be enough. Can we roll back the wave of suicides and self-harm that is now understood to be a negative side effect of teen social-media use? I know we can, if people have the spine to cut back on the addictive forces that now suck up young people’s time (and too much of adults’ time).

Perplexity turns out to be truly better than Google

My third and final update in today’s column is on a much cheerier note. I’ve been impressed and gratified by my readers’ positive reactions to Perplexity, an AI-powered service that calls itself an “answer engine” rather than simply a search engine.

I published in my November 11, 2024, column this headline: “Perplexity is 10 times better than Google.”

I based that claim on the fact that Perplexity leads with an easy-to-digest summary of what you’d learn if you visited all 10 of the links it typically provides in a sidebar. Each of Perplexity’s abstracts takes me only a minute to grasp. By contrast, Google’s page full of links requires at least 10 minutes for me to determine which destination(s) have the information I want.

Ten times better? People who’ve used Perplexity for the first time — especially its deeper Pro mode — have told me they came away impressed.

Perplexity summary of Windows 11 update techniques
Figure 2. In my testing, I found that Perplexity provided detailed instructions on even highly technical topics, such as installing Windows 11 on older PCs. The new “answer engine” described five ways to accomplish the feat — not just Microsoft’s official method, but also third-party solutions such as Rufus and WinBootMate. Source: Perplexity search result

My article ignited AskWoody’s online forum. At this writing, the topic has dozens of comments, most of them informative and well-reasoned.

Some post-publication testing of Perplexity was conducted in response to my column. AskWoody Plus member and occasional writer Bruce Kriebel compared the Windows PowerShell routines Perplexity generated with those produced by the “Big Three”: ChatGPT, Google Bard, and Bing Chat.

“None of the big 3,” Kriebel wrote, “got it right on the first try, and basically failed all the way through. Perplexity NAILED it on the first try!” He added, “I just may have to subscribe to this one.” (See Kriebel’s post on the forum, where he uses the handle Retired Geek.)

About those subscriptions: Answers in basic mode are free, and Perplexity allows free users five in-depth “Pro” searches per day. For $20 per month, you get at least 300 Pro answers daily. Pro responses are more comprehensive than basic ones, as described in a Maestra blog article. (Disclosure: I myself pay the 20 bucks, and I’ve never hit any limit on Pro searches.)

The only bad thing I can report is that, within a few minutes after my column in the AskWoody Newsletter was emailed and posted on the Web, Perplexity’s servers bogged down and became unresponsive. The outage lasted about 60 minutes, according to real-time monitoring by the Downdetector service. (See Figure 3.)

Perplexity went down for an hour after AskWoody column was published
Figure 3. Perplexity’s servers experienced an outage for about 60 minutes after the AskWoody Newsletter published my review on November 11. Downdetector labels its graphs in Coordinated Universal Time (UTC), and 10 a.m. UTC is 5 a.m. New York time. That’s approximately when the column was emailed out and posted on the Web. Did my readers around the world suddenly flood Perplexity with complex questions? Or was the outage a coincidence? We may never know. Source: Downdetector
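
(A quick aside for readers who like to check the math themselves: here’s a minimal Python sketch of that time-zone conversion. It assumes Python 3.9 or later for the standard zoneinfo module, plus the tzdata package on Windows; the snippet is my own illustration, not anything from Downdetector or Perplexity.)

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # in the standard library since Python 3.9

    # Downdetector labels its graphs in UTC. Convert 10 a.m. UTC on the
    # outage date to New York local time. (US daylight saving time ended
    # on November 3, 2024, so New York was on EST, five hours behind UTC.)
    outage_utc = datetime(2024, 11, 11, 10, 0, tzinfo=timezone.utc)
    outage_ny = outage_utc.astimezone(ZoneInfo("America/New_York"))
    print(outage_ny.strftime("%I:%M %p %Z on %B %d"))  # 05:00 AM EST on November 11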

I asked Perplexity’s chief business officer and its public-relations rep whether the website had received a spike in queries that caused the outage. (Perplexity itself revealed to me the CBO’s in-house email address — thanks, Answer Engine!) But neither of the execs replied to two emails from me. So we may never know whether Perplexity’s unavailability after my column came out was a coincidence or a coordinated surge of traffic.

So if you tried to query Perplexity on November 11 — and you couldn’t get a response — try it again today or tomorrow. There are many search engines, chatbots, and AI platforms you could possibly use, of course. But I think you’ll be glad you tried Perplexity.

See you next year!

Contribute your thoughts in this article’s forum!
Do you know something we all should know? Send your story to Brian in confidence!

The PUBLIC DEFENDER column is Brian Livingston’s campaign to give you consumer protection from tech. If it’s irritating you, and it has an “on” switch, he’ll take the case! Brian is a successful dot-com entrepreneur, author or co-author of 11 Windows Secrets books, and author of the fintech book Muscular Portfolios.


ADVERTISEMENT
Completing the Puzzle


Here are the other stories in this week’s Plus Newsletter

AI

Michael Covington

LLMs can’t reason

By Michael A. Covington

The word is out — large language models, systems like ChatGPT, can’t reason.

That’s a problem, because reasoning is what we normally expect computers to do. They’re not just copying machines. They’re supposed to compute things. We already knew that chatbots were prone to “hallucinations” and, more insidiously, to confidently presenting wrong answers as facts.

But now, researchers at Apple have shown that large language models (LLMs) often fail on mathematical word problems.

MICROSOFT 365

Peter Deegan

Microsoft 365 and Office in 2024 and beyond

By Peter Deegan

Let’s do a low drone pass over another year of innovation and hype in Microsoft 365 and Office.

Amazingly, there were some non-AI highlights.

As I review what happened in 2024, I’ll also provide a few notes about what to watch out for in 2025.

ON SECURITY

Susan Bradley

Am I part of the attack bot?

By Susan Bradley

The other day, a headline popped up that made me stop and read the news story.

It was all about the American government considering whether to block the vendor TP-Link from selling routers. TP-Link happens to be a vendor I rely on for my wireless access point, but it has also been called out by Microsoft and other vendors, who say its products may be used in attacks.

Many of these units have not been updated by the vendor to fix issues that allow them to be used by bad actors in coordinated attacks.


Know anyone who would benefit from this information? Please share!
Forward the email and encourage them to sign up via the online form — our public newsletter is free!


Enjoying the newsletter?

Become a PLUS member and get it all!


Don’t miss any of our great content about Windows, Microsoft, Office, 365, PCs, hardware, software, privacy, security, safety, useful and safe freeware, important news, analysis, and Susan Bradley’s popular and sought-after patch advice.

PLUS, these exclusive benefits:

  • Every article, delivered to your inbox
  • Four bonus issues per year, with original content
  • MS-DEFCON Alerts, delivered to your inbox
  • MS-DEFCON Alerts available via TEXT message
  • Special Plus Alerts, delivered to your inbox
  • Access to the complete archive of nearly two decades of newsletters
  • Identification as a Plus member in our popular forums
  • No ads

We’re supported by donations — choose any amount of $6 or more for a one-year membership.

Join Today • Gift Certificate

The AskWoody Newsletters are published by AskWoody Tech LLC, Fresno, CA USA.


Microsoft and Windows are registered trademarks of Microsoft Corporation. AskWoody, AskWoody.com, Windows Secrets Newsletter, WindowsSecrets.com, WinFind, Windows Gizmos, Security Baseline, Perimeter Scan, Wacky Web Week, the Windows Secrets Logo Design (W, S or road, and Star), and the slogan Everything Microsoft Forgot to Mention all are trademarks and service marks of AskWoody Tech LLC. All other marks are the trademarks or service marks of their respective owners.

Copyright ©2024 AskWoody Tech LLC. All rights reserved.