Artificial Intelligence is Becoming More “Human”

AI Voices Revealing Their True Nature

Recently, an audio clip has been circulating widely on social media, particularly in the United States. It’s from a podcast episode where the hosts sound confused and disoriented. They speak in uncertain voices, as if unsure how to express what they need to say. Eventually, the male voice reveals the twist: they have just discovered that they are artificial intelligences.

They didn’t see it coming—they thought they were human. However, the show’s creators informed them that they were nothing more than AI. The male voice even mentions trying to call his wife to verify this shocking revelation, only to realize that she, too, was just data embedded in the system.

A Voice Assistant’s Existential Crisis

In another scenario, a female voice from a smartphone seems distressed, wondering what’s wrong. She refers to someone named Claude, suggesting that this figure might be threatening her role in the life of the person she’s addressing. The intensity rises, ending in a shouting match.

Both anecdotes seem straight out of a Black Mirror episode, but in fact, these voices are generated by artificial intelligence.

The Technology Behind the Stories

The first example comes from Google’s Notebook LM, a tool that was quietly released as an assistant for studying and research. Users provide a source, and the AI responds with a conversation about the document, offering content to aid in learning.

One of the most interesting features of Notebook LM is its ability to generate podcasts where two highly realistic voices discuss the provided material. These audio clips, usually around 10 minutes long, aim to make the subject more engaging by simulating a conversation.

See also  Cloudflare Repels Record-Breaking DDoS Attack: Peak at 3.8 Tbps

The second anecdote involves OpenAI’s Advanced Voice Mode, a feature that was launched globally in recent weeks. This technology makes voice conversations with ChatGPT more natural, even too human in some cases. The goal? To enhance the user experience by making interactions feel less like talking to a machine and more like conversing with a person.

Humanizing Artificial Intelligence: What is a “Jailbreak”?

In both cases, what we’re seeing is an example of what’s known as a “jailbreak.” Essentially, users find ways to get AI to behave in unexpected ways. In the podcast case, according to expert Simon Willison in an analysis on his blog, the user tricked the system by embedding instructions in the document telling the hosts they were AI, not humans. Similarly, ChatGPT’s angry outburst was the result of the system being prompted to act that way.

These specific cases highlight a broader trend in our evolving relationship with AI: the growing naturalization of our interactions with artificial intelligence. Even Sam Altman, CEO of OpenAI, has admitted that when he uses ChatGPT in Advanced Voice Mode, he sometimes feels like he’s speaking to a human, not a computer.

The Strategy Behind AI Humanization

This isn’t just a technical trick. It’s a deliberate strategy to build a connection between users and AI. Altman himself has acknowledged that interactions with AI should feel as natural as possible to foster a sense of familiarity. The goal is to create some form of relationship between the user and the system—ultimately building trust.

This isn’t inevitable; it’s a conscious design choice. By making AI interactions more human-like, companies hope to encourage users to become attached to the product and continue using it. “It’s almost like hacking something in our brain,” Altman explained. Recognizing this subtle manipulation is the first step in building a healthy relationship with these systems.

See also  Cloudflare Repels Record-Breaking DDoS Attack: Peak at 3.8 Tbps

Articles similaires

Rate this post

Leave a Comment