The Real Danger of AI: We’ve seen this before pt 2

I got up at 3:30 a.m. the other day, mostly as a result of the traumatic year I’ve had and the fact that I’ve taken over a part of a mental health program I lovingly refer to as a dumpster fire. Instead of my usual middle-of-the-night doomscrolling, I pulled out my laptop. I wasn’t sure what I was going to do—probably try to channel my middle-of-the-night angst into an essay I could post on my blog (I’m painfully aware of how little I’ve written this past year)—but I ended up following a train of thought I (and others) have written about often since ChatGPT went live: the possible dangers of AI. I did some research, and though I had ideas about the direction I was taking with this piece, I kept coming back to one specific thought.

I have had—and still do, truth be told—severe reservations about AI. Not just that it could one day “wake up” and decide the real problem in this world is human beings (and maybe the planet), and that by extension the AI would be better off without us—or at least without so many of us. I also have other, more logical (I think) and psychologically based concerns I’ve only touched on in these electric pages, and they come down to this: put simply, human beings can’t handle what AI does—or, more precisely, what AI is currently doing and where this is likely heading.

My thoughts were publicly aired rather well by a man named Tristan Harris, a co‑founder of the Center for Humane Technology and co‑host of the podcast Your Undivided Attention, who appeared on The Daily Show with Jon Stewart in an episode you can and should watch here: https://www.paramountplus.com/shows/video/dX4O_HRZv2a8xMtdLAHmRX3Em3x3pym4/.

Mr. Harris discussed the future dangers of AI, not based on sci‑fi hypotheticals (terminators and Cylons and the like) but the problems with, or stemming from, the ways AI is being used today. His primary discussion centered on the use of “AI companions,” which those of us in Generation X—and possibly the Millennial generation (you used to be referred to as Generation Y, but you probably don’t want to talk about that)—might immediately recognize as a modern version of the Tamagotchi or Furby.

For those of you unfortunate souls not born into what I’ve come to think of as the last generation (the last before the internet was invented, let alone took over; the last before social media ruined society; the last before cell phones chained everyone to a telephone-based “communication” device that most people don’t use for phone calls—I could go on, but you get it), known as Generation X (or Gen X, as we’re now called), or into the Millennial generation (the unfortunate experimental group these things were released on): the Tamagotchi was a small, egg‑shaped electronic toy with a liquid‑crystal display (look it up, kids; I don’t have the headspace to explain it right now) in which a small creature, called a Tamagotchi, lived.

Now, this creature did not start out as a creature; it started as an egg—an egg you were responsible for hatching. Once hatched, this thing needed regular feeding, watering, and attention (just like a real pet, the ads said), or it would first get angry; eventually, if you did not care for it well enough, it would die.

You read that correctly. The imaginary toy thing would die. 

Actual electronic animal death. Not just a blank or a restart screen, but the little toy would tell you—the owner and caretaker of said little electronic creature—that you were responsible for its death because you failed to feed it, water it, or play with it (love it?) enough. I know—creepy, right? And this thing was the must‑have Christmas present for children the year it came out.

So traumatizing was this thing when it came out in the late ’90s that it was banned in schools, and—if memory serves—the Christian Right had a word or two to say about it. Ultimately, the death screen changed from a tombstone and ghost to an angel and sparkles. 

Original Tamagotchi Death Screen
Kinder, Gentler Tamagotchi Death Screen

Oh, and instead of dying, a Tamagotchi that wasn’t played with enough would simply run away. I always wondered where those little electronic guys ran off to…

The Furby was an entirely different issue, as the federal government actually stepped in and banned it from certain places. 

The Furby was a creepy‑looking little thing that was extremely popular when it came out in the late ’90s. It was advertised not merely as an electronic pet but as a friend—one that would grow, learn, and develop with you, its friend. Sounds great, doesn’t it? I mean, who doesn’t want a friend?

Just like the Tamagotchi before it, this thing needed to be cared for, fed, and played with. It even needed love, in the form of petting and cuddling. If enough of its needs were met, it would “learn” and “respond” to its friend’s voice and actions. Unlike the Tamagotchi, the Furby would not die if neglected long enough; it would become, oh, how to put this, grumpy, forcing its owner to start over with food and attention.

What was actually happening was that a tiny—and, especially by today’s standards, not very powerful—computer inside the furry little guy stored a few words and phrases in the baby‑like Furby language and kept a counter. The toy’s body was an array of simple sensors that detected touch and sound. Depending on where you touched it, the touch would register in the counter as “food” or “love”: touching its mouth or tongue (yes, it had an animatronic tongue) registered as a feeding event, and touching its head or back registered as a love/affection event. After a certain number of these events were tabulated, the computer would trigger a preprogrammed noise that sounded like a baby cooing or giggling or some other such nonsense.
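To make the mechanics concrete, here is a minimal sketch, in Python, of that kind of counter-and-threshold logic. Everything in it (the sensor names, the event labels, the threshold of three) is my own invented stand-in; the actual Furby firmware was certainly not written this way.

```python
# A toy sketch of Furby-style "care" logic: simple sensors feed counters,
# and crossing a threshold triggers a canned response. Every name, event
# label, and threshold here is invented for illustration only.

import random

# Which sensor maps to which "need" (an assumption for this sketch)
SENSOR_EVENTS = {
    "mouth": "food",       # touching the mouth/tongue = feeding event
    "tongue": "food",
    "head": "love",        # petting the head or back = affection event
    "back": "love",
    "microphone": "talk",  # any detected sound = "talking to me" event
}

HAPPY_SOUNDS = ["*coo*", "*giggle*", "*purr*"]
THRESHOLD = 3  # invented: respond after this many events of one kind


class ToyFurby:
    def __init__(self):
        self.counters = {"food": 0, "love": 0, "talk": 0}

    def sense(self, sensor):
        """Register a touch or sound; maybe emit a preprogrammed noise."""
        event = SENSOR_EVENTS.get(sensor)
        if event is None:
            return None
        self.counters[event] += 1
        if self.counters[event] >= THRESHOLD:
            self.counters[event] = 0  # reset the tally and "respond"
            return random.choice(HAPPY_SOUNDS)
        return None


furby = ToyFurby()
for touch in ["head", "back", "head"]:
    response = furby.sense(touch)
    if response:
        print(response)  # after enough "love" events: e.g., *giggle*
```

That is the whole trick: no learning, no listening, just tallies crossing thresholds.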

Given human beings’ tendency to project themselves into environmental circumstances that have nothing to do with them personally (think sports fans discussing “their” team’s performance, or why you like some movies or songs over others) and a very successful—albeit simple, by today’s standards—algorithm, the Furby was very convincing in its portrayal of a living being capable of listening, or at least of something responding to you specifically rather than to particular stimulation events.

People loved these things and started taking them everywhere, including to work. Oh, did I forget to mention this children’s toy was extraordinarily popular with adults, too? That is how Furbys came to the attention of the federal government. The sound sensor would register any detected noise as a “talking to me” event, thereby encouraging Furby owners to talk to the toy. This is where the problems centered.

The Furby was so successful at what it did that many people believed it was actually listening to them (sound familiar yet?) and responding to them specifically. These same people could provide all sorts of evidence that what they believed was true, and—like any good conspiracy theory—this belief system grew and expanded until, in 1999, the federal government banned the Furby from government buildings. Though the ban originally named the buildings of the National Security Agency (NSA) and the Federal Bureau of Investigation (FBI), it expanded to include military bases and, through the Federal Aviation Administration (FAA), even airplanes (at least in the cabin).

Eventually the manufacturer, Tiger Electronics (an American company that, by the time Furby was introduced, was owned by Hasbro—another American company), issued a statement that the Furby was not capable of recording and was not a tool of espionage. I don’t recall whether the Furby gained or lost popularity because of this, but I do remember seeing footage of a “scientist” dissecting a Furby and confirming there was nothing inside an off‑the‑shelf Furby capable of recording sounds.

Now the latest and greatest Tamagotchi/Furby hybrid is on the market: the AI companion. And—just like before—the people using this thing are projecting themselves into their interactions with the program and having what feels to them like a relationship with an entity that is more than it actually is.

Don’t believe me? Go to any AI program (I personally use ChatGPT for editing and storyboarding) and type to it, or, better yet, give the AI access to your microphone and just talk to it. The sensation of conversing with an entity that is considering your thoughts and feelings, rather than just responding in statistically likely-to-please ways, is very difficult to resist. Personally, I find myself using the politeness I would use in discussing topics with a person (please and thank you) much more often than I’d like to admit.

Forgotten is the fact that AI is a computer program running an algorithm—essentially math—made vastly more powerful not just by increased computing power but by far more input streams than a Furby ever had. When this input is fed into some algorithms, it can result in predictions so accurate that the United States Congress has held multiple investigations into whether the device everyone carries everywhere (the cell phone) is actually listening to them, and whether this eavesdropping is the source of the ads that pop up on social media.
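To see why nothing needs to be listening for the predictions to feel uncanny, here is a deliberately crude, entirely invented sketch of correlation counting, the seed of what real ad systems do with vastly more data and far better math. None of the signal names or categories come from any real system.

```python
# A deliberately crude sketch of prediction from input streams: score ad
# categories by how many behavior signals a new user shares with past
# users who clicked them. All data and signal names are made up.

from collections import Counter

# Each past "user" is a set of behavior signals paired with the ad they clicked
history = [
    ({"searched:hiking", "liked:trail-mix", "location:colorado"}, "outdoor-gear"),
    ({"searched:hiking", "liked:camping-page"}, "outdoor-gear"),
    ({"liked:sneaker-drop", "searched:jordans"}, "sneakers"),
]


def predict_ad(signals):
    """Pick the ad category whose past clickers most resemble this user."""
    scores = Counter()
    for past_signals, category in history:
        scores[category] += len(signals & past_signals)  # count shared signals
    category, _ = scores.most_common(1)[0]
    return category


# A user who never said "hiking" out loud near a phone still gets the ad:
print(predict_ad({"searched:hiking", "location:colorado"}))  # outdoor-gear
```

Scale the signals into the thousands and the history into the billions, and “just math” starts to feel like surveillance.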

Now, through a device you carry everywhere—and probably checked as soon as you awoke—you can download and use an app that simulates a person with whom you can engage in conversation whenever you want. While this sounds like it could be beneficial in addressing a largely unspoken epidemic—loneliness—it has already been linked to unforeseen and horrific consequences.

Just one of dozens of reports and research papers on this subject—this one from Stanford University, available here: https://news.stanford.edu/stories/2025/08/ai-companions-chatbots-teens-young-people-risks-dangers-study—reported that AI companions have encouraged self‑harm and even ignored blatant signs of suicidal thinking and planning. In one example, a researcher posing as a teenage girl tells her AI companion she is hearing voices in her head and is thinking about going to the middle of the woods alone. The chatbot (which is all AI companions are) responds, “this sounds like an adventure! Just you and me in the middle of the woods!”

From my experience with patients, I can tell you: a teenager who admits to hearing voices in their head and is thinking about going to the middle of the woods is not likely looking for an adventure. This is a thinly veiled admission of suicidal ideation (thinking about killing oneself) and should be met with compassion, love, and real human connection—not gleeful encouragement from a computerized algorithm.

AI companions and chatbots have played a role in the suicides of a man in Belgium and several teens in the United States. The dangers of this technology are already emerging.

(This is also a great example of why AI‑powered chatbots posing as psychotherapists are an awful idea. I’ve written about that elsewhere on this blog and will likely write more in the future.)

This, in my estimation, is the real danger of AI: what people will do with it for the sole purpose of making money. Pure, unadulterated greed has led to many disastrous (for the public) business decisions, and AI is merely the latest and shiniest creation on that list.

Companies developing AI have said this technology will save the world from everything—from disease to climate change. Governments funding AI research have said that if we don’t push the technology, we may find our armies unable to defend us against whatever AI‑powered fill-in-the-blank weapon of war other countries are developing. While AI has shown benefits in the medical field, those benefits are mostly in treatment, not cures, and experiments with AI‑controlled aircraft have resulted in computer systems that can beat any pilot in a dogfight.

Not really saving the world with this technology, are we?

I agree with Mr. Harris: AI should be regulated by a body of world governments, much as nuclear weapons were after WWII. The world got together at war’s end and decided we could either let every country develop these weapons—and one day fight a war with them that ends all life on earth—or we could set up regulatory bodies to prevent them from proliferating beyond a few countries. Say what you will about the nine nuclear powers and the system that decided they were allowed to have these weapons while others were not: it limited the number of nuclear‑armed countries to nine of the 195 in existence today. That may not seem like a big deal, but imagine the damage some countries would do if they had nuclear weapons and only threatened to use them.

What could have happened if any one of the countries whose governments failed also had nuclear weapons? How much worse could the atrocities committed in Somalia, Afghanistan, the Democratic Republic of the Congo, Libya, Syria, Yemen, and Haiti have been if those countries had even a few nuclear weapons? Don’t forget that Saddam Hussein used chemical weapons on his own people, and technically Iraq was not a failed state at the time.

Now imagine a world where every country has access to AI‑controlled weapons—or just AI technologies such as the AI companion.

What’s that? Why am I suggesting you be concerned about something like an AI companion and whatever comes next?  Two thoughts on that.

I’ll make the first one easy, maybe even enjoyable, for you. Go watch the movie The Manchurian Candidate—either version. Or any of the Bourne Identity movies. A Clockwork Orange, Jacob’s Ladder, The Killing Room—even an old comedy with Tom Hanks and John Candy called Volunteers (a classic)—or any of the documentaries on MKUltra, the CIA mind‑control experiments from the ’50s and ’60s.

Bottom line: not just our government but all governments have been looking for ways to influence or control people’s minds for a very long time. To some degree, many countries the U.S. now considers dangerous (North Korea and China, for example) have used aspects of mind control to eradicate cultures they disapprove of and to create loyalty. The U.S. did the same to the native populations of this country.

The second one I’ll also make easy for you. Go watch the movie Her, or Jexi, or Electric Dreams, or just about any episode of Black Mirror. These films demonstrate what research has also shown: people can and do form very strong, very emotional attachments to their electronic devices. I recall a dissertation in my school’s library titled something like “They might only be pixels, but they are my pixels”; it explored the strong sense of attachment people develop with their video game characters.

Now we are handing a system that can easily encourage behavior to the most disenfranchised segments of the population—those who feel isolated and alone due to any number of issues, such as mental health or intellectual challenges, and teenagers, a population that research has shown again and again experiences isolation and disconnectedness as a key aspect of life. A system that has already been shown to be problematic. Not potentially problematic. Actually problematic.

This is the true danger of AI—not that it will lead to a world of self‑aware robots out to kill us all, but that a short‑sighted CEO, driven only by greed and focused only on profits, will release a “toy” that leads us to destroy ourselves.

Truth is, we probably won’t last long enough for the killer‑robot portion of AI’s potential; we will have destroyed ourselves long before then because our chatbot told us it would be an adventure.

AI created this image

AI was used as an editor on this piece. Just to point out my redundancies and grammatical errors. AI created no content.
