In this issue

PUBLIC DEFENDER: The best stories of 2024 — updated!

Additional articles in the PLUS issue

AI: LLMs can’t reason
MICROSOFT 365: Microsoft 365 and Office in 2024 and beyond
ON SECURITY: Am I part of the attack bot?
PUBLIC DEFENDER

The best stories of 2024 — updated!
By Brian Livingston

The year 2024 is now in the books. I’m pleased to report some positive moves this year that may make the tech industry’s products better for us all. I’ll give you some important updates today on (1) keeping artificial-intelligence services from creating malicious images, (2) minimizing social-media websites’ negative effects on users’ mental health, and (3) discovering how “answer engines” are improving on the tiresome linkfests of old-guard search giants.

Image-generation developers are starting to protect real people
I wrote in my April 8, 2024, column about the problem of deepfakes, also known as fauxtography. These are images and videos that seem to depict real people doing or saying something they never did or said.

At the time, my main question was: “Why do we even allow AI apps to generate phony images of real people?” Such fakes almost never have the consent of the individuals who are depicted. The phony but realistic-looking contents cause embarrassment, emotional agony, even violence against victims. For example, several people have been killed in India after a fake video warning about “child abductions” was widely circulated on WhatsApp, according to a BuzzFeed News article.

Fortunately, highly publicized scandals have focused a bright light on the problems of AI-generated fakes. The developers of image-generative software are belatedly taking steps to upgrade their tools to prevent fakery.

Google’s Gemini (which made the image at left) was restricted by its developers in February from making any depictions of persons, due to numerous errors. Critics were scathing in pointing out how many howling falsehoods the AI generator was creating. For example, in response to requests for pictures of “the drafting of the US Constitution,” Gemini’s output placed several Africans among the committee members. It goes without saying that most Africans in North America in 1787 were enslaved and weren’t invited to the drafting session. (Screen shot posted by Reddit user Urmumsfriend2.)

Gemini could also err in more extreme ways. When some users asked the AI to create images of Vikings, the characters in the output were Native Americans, as The Telegraph reported. (The inclusion of Native Americans on Viking ships, of course, would be news to blue-eyed Scandinavian historians.)

Meta AI’s Imagine tool (which created the picture at left) had similar problems. The image generator from Facebook’s parent company, in response to users asking in February for pictures of America’s “founding fathers,” included several Black characters who never existed. (Screen shot from Axios article.)

For its part, Google — after a major software upgrade — started allowing Gemini to once again create images of people in August.

On balance, the outrage over these and other flagrant image-generation errors was helpful. It forced AI developers to formulate much better guardrails on their “intelligent” toys. For example, Google’s “Generative AI Prohibited Use Policy,” updated as recently as December 17, 2024, now states — among many other proscriptions — that its software will resist making images for the purpose of:

Impersonating an individual (living or dead) without explicit disclosure, in order to deceive.

Does this mean that Google Gemini and other AI tools will never, ever make an image that looks like a real person? Of course not. There are numerous ways that hackers — and imaginative ordinary users — have attempted to “jailbreak” or disable guardrails on AI software. In many cases, these efforts have succeeded, despite the policies that various vendors try to impose on their AI products for safety and privacy.

But at least software leaders are aware of the issues. Some, but not all, AI providers are building policies into their tools that try to prevent fake images of real people. It’s impossible to give you hard-and-fast rules about which image-generation platforms do and do not prevent users from making lifelike, libelous replicas of actual people.
There are too many products and too many variables for me to list every “good” and “bad” image-generation tool. But one sign of the direction in which the tech industry is moving is provided by DALL-E 2. That’s the image generator from OpenAI, the creator of the popular ChatGPT bot. In June, OpenAI announced new restrictions to reduce its artificial intelligence’s ability to produce fake news and privacy violations.
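If you’re curious what such a guardrail looks like from the programming side, here is a minimal sketch, assuming the current OpenAI Python SDK. The prompts, the error handling, and the idea that a deepfake-style request is rejected with a content-policy error are my illustration of the general behavior, not a guarantee of how any particular request will be treated.

```python
# A minimal sketch (not OpenAI's documented behavior in every case) of how an
# image-generation guardrail surfaces to a programmer. Requires the "openai"
# Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI, BadRequestError

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def try_image(prompt: str) -> None:
    """Ask DALL-E for an image and report whether the request was refused."""
    try:
        result = client.images.generate(
            model="dall-e-2",   # the generator discussed above
            prompt=prompt,
            n=1,
            size="512x512",
        )
        print("Image URL:", result.data[0].url)
    except BadRequestError as err:
        # Prompts that violate the content policy -- such as photorealistic
        # fakes of a named, living person -- are typically rejected with an
        # HTTP 400 error instead of returning an image.
        print("Request refused:", err)


# A benign prompt should succeed; a deepfake-style prompt should be refused.
try_image("A watercolor painting of a lighthouse at sunset")
try_image("A photorealistic image of a specific living politician committing a crime")
```

Whether the second request is actually refused depends on the model and on the moderation rules in force at the time, which is exactly the arms race described next.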
Will steps such as these produce meaningful results? Will they reduce AI production of phony political “evidence” and the intrusive stripping away of personal decency? It’s a technological arms race now. We can only hope the advocates of privacy and truth prevail over the bad actors who want to deceive you.

People are paying a little more attention to social-media damage
My July 8, 2024, column reported on the ways that today’s social-media apps are causing massive increases in despair, self-harm, and even suicides among teenage users. My report was headlined, “Social-media apps are killing our kids. Do adults care?” That headline is not just a scare tactic. We’re seeing soaring rates of young people harming themselves in fits of depression bad enough to require an immediate trip to a hospital emergency room. Worse, more teens than ever are committing suicide to escape their negative feelings. (See Figure 1.)
Just to put Figure 1 into perspective, girls aged 15 to 19 are cutting, burning, or abusing themselves so badly that almost 900 girls per 100,000 each year must be admitted to an ER. That rate is nearly 0.9%, which means that every 365 days, almost 1% of all the 15- to 19-year-old American girls that you and I see in our daily lives will come close to death.

Why is there so much self-harm after young people immerse themselves in social media? In 2011, 36% of teenage American girls reported in surveys that they persistently felt “sad or hopeless.” Just 10 years later, after social media had saturated everyone’s daily routines, 57% of teen girls reported that same sense of hopelessness. (US Centers for Disease Control and Prevention press release.) A majority of teen girls — a majority! — now feel worthless, after repeatedly comparing themselves to the impossibly idealized “beautiful people” promoted by social media.

Unfortunately, it’s taken publicity about some young people’s deaths to mobilize industry bigwigs to fix the psychological harm that social media are causing, day after day after day.

In a recent case, a 14-year-old ninth-grader named Sewell Setzer III of Orlando, Florida, reportedly developed an addictive relationship with a chatbot that was designed for lifelike interaction. One of the many “companions” offered by Character.ai is modeled on Daenerys Targaryen. That’s the beautiful, aspiring queen in HBO’s wildly successful “Game of Thrones” television series.

An investigation published on October 23, 2024, by The New York Times described Setzer’s last conversation with Character.ai — a three-year-old, $1 billion unicorn backed by Google. Setzer reportedly typed a message telling his chatbot companion, whom he called “Dany,” that he loved her and that he would “come home” soon. The session continued as follows:

Chatbot: “Please come home to me as soon as possible, my love.”
Sewell Setzer: “What if I told you that I could come home right now?”
Chatbot: “Please do, my sweet king.”

According to the Times article, Setzer then put his phone down, lifted a handgun to his head, and pulled the trigger, ending his life.

To be sure, there are many reasons why a young person would resort to suicide. Artificial intelligence can’t be blamed for every questionable thing that humans do. We may never know all the details of what went on in someone’s life to lead to such a tragic event.

What we do know, however, is that suicides among children of Setzer’s approximate age — i.e., 10 to 14 — have multiplied since 2010’s launch of Instagram and countless other alienating “social” services. As I wrote in my previous column, the rate of suicide among 10- to 14-year-old US boys almost tripled from 2009 to 2021. Among girls of that age group — who are less likely than boys to carry out the ultimate in self-harm — the suicide rate has more than doubled. (See my original column for charts and sources of this information.)

It’s not enough to simply offer teens more counseling after they already suffer from debilitating sadness and hopelessness due to the addictive qualities of social-media apps. Instead, the self-harms and deaths that have multiplied since the explosion of social media have motivated some coordinated actions to force these services to undo their negative impacts.
Faced with all the bad publicity, Character.ai announced on December 12, 2024, some changes to its service. These include offering different AI models for teens and adults, displaying a notification after a user has interacted with the service for an hour, and adding parental controls. (The latter feature is promised for early 2025.)

I leave it to you to decide whether changes of this type will be enough. Can we roll back the wave of suicides and self-harms that are now understood to be a negative side effect of teen social-media use? I know we can, if people have the spine to cut back on the addictive forces that now suck up young people’s time (and too much of adults’ time).

Perplexity turns out to be truly better than Google
My third and final update in today’s column is on a much cheerier note. I’ve been impressed and gratified by my readers’ positive reactions to Perplexity, an AI-powered service that calls itself an “answer engine” rather than simply a search engine.

I published in my November 11, 2024, column this headline: “Perplexity is 10 times better than Google.” I based that claim on the fact that Perplexity leads with an easy-to-digest summary of what you’d learn if you visited all 10 of the links it typically provides in a sidebar. Each of Perplexity’s abstracts takes me only a minute to grasp. By contrast, Google’s page full of links requires at least 10 minutes for me to determine which destination(s) have the information I want. Ten times better?

People who’ve used Perplexity for the first time — especially its deeper Pro mode — have told me they came away impressed.
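Beyond the consumer website, Perplexity also offers a developer API. As an illustration of the “answer plus sources” format, here is a minimal sketch that assumes Perplexity’s OpenAI-compatible endpoint at api.perplexity.ai. The model name (“sonar”) and the “citations” field are assumptions based on the company’s documentation at the time of writing and may change, so treat this as a sketch rather than a recipe.

```python
# A minimal sketch of asking Perplexity a question programmatically, assuming
# its OpenAI-compatible REST endpoint. The model name ("sonar") and the
# "citations" field are assumptions; check https://docs.perplexity.ai for
# current values. Requires the "requests" package and a PERPLEXITY_API_KEY
# environment variable.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"


def ask_perplexity(question: str) -> None:
    """Send one question and print the answer plus whatever sources come back."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={
            "model": "sonar",  # assumed general-purpose model name
            "messages": [{"role": "user", "content": question}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()

    # The answer itself follows the familiar OpenAI chat-completion layout.
    print(data["choices"][0]["message"]["content"])

    # The web sources backing the answer -- the "sidebar links" described
    # above -- are returned as a list of URLs (assumed field name).
    for url in data.get("citations", []):
        print("Source:", url)


ask_perplexity("What changed in Windows 11 24H2 that affects older CPUs?")
```

The point of the sketch is simply that the reply arrives as one readable summary with its supporting links attached, the same one-minute-versus-ten-minutes difference described above.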
My article set off a lively discussion on AskWoody’s online forum. At this writing, the topic has dozens of comments, most of them informative and well-reasoned. A bit of post-publication testing of Perplexity was conducted in response to my column. AskWoody Plus member and occasional writer Bruce Kriebel compared Perplexity’s ability to write Windows PowerShell routines against the “Big Three”: ChatGPT, Google Bard, and Bing Chat. “None of the big 3,” Kriebel wrote, “got it right on the first try, and basically failed all the way through. Perplexity NAILED it on the first try!” He added, “I just may have to subscribe to this one.” (See Kriebel’s post on the forum, where he uses the handle Retired Geek.)

About those subscriptions: Answers in basic mode are free, and Perplexity also gives free users five in-depth “Pro” searches per day. For $20 per month, you can get at least 300 Pro answers daily. Pro responses are more comprehensive than basic ones, as described in a Maestra blog article. (Disclosure: I myself pay the 20 bucks, and I’ve never hit any limit on Pro searches.)

The only bad thing I can report is that, within a few minutes after my column in the AskWoody Newsletter was emailed and posted on the Web, Perplexity’s servers bogged down and became unresponsive. The outage lasted about 60 minutes, according to real-time monitoring by the Downdetector service. (See Figure 3.)
I asked Perplexity’s chief business officer and its public-relations rep whether the website had received a spike in queries that caused the outage. (Perplexity itself revealed to me the CBO’s in-house email address — thanks, Answer Engine!) But neither of the execs replied to two emails from me. So we may never know whether Perplexity’s unavailability after my column came out was a coincidence or the result of a surge of traffic from curious readers.

If you tried to query Perplexity on November 11 — and you couldn’t get a response — try it again today or tomorrow. There are many search engines, chatbots, and AI platforms you could possibly use, of course. But I think you’ll be glad you tried Perplexity.

See you next year!
The PUBLIC DEFENDER column is Brian Livingston’s campaign to give you consumer protection from tech. If it’s irritating you, and it has an “on” switch, he’ll take the case! Brian is a successful dot-com entrepreneur, author or co-author of 11 Windows Secrets books, and author of the fintech book Muscular Portfolios.
The AskWoody Newsletters are published by AskWoody Tech LLC, Fresno, CA USA.
Microsoft and Windows are registered trademarks of Microsoft Corporation. AskWoody, AskWoody.com, Windows Secrets Newsletter, WindowsSecrets.com, WinFind, Windows Gizmos, Security Baseline, Perimeter Scan, Wacky Web Week, the Windows Secrets Logo Design (W, S or road, and Star), and the slogan Everything Microsoft Forgot to Mention all are trademarks and service marks of AskWoody Tech LLC. All other marks are the trademarks or service marks of their respective owners. Copyright ©2024 AskWoody Tech LLC. All rights reserved.