Meet Sydney, Microsoft’s New Chatbot
BlackJack3D/iStock/Getty Images Plus
Article audio sponsored by The John Birch Society

Sentience (from the Latin root sentire) means “feeling or perceiving.” 

By that standard, Microsoft’s revolutionary new Bing search engine is sentient. Or so eager to make us think it is, it’s been breaking its own “rules” to do so. 

Rolled out for demo testing on Monday, the new technology powered by ChatGPT follows a $10 billion dollar investment in the artificial intelligence (AI) frontrunner by Microsoft. Bill Gates is predicting that AI is going to be “every bit as important as the PC, as the internet.” Early interactions with Microsoft’s new technology seem to give credence to his prediction.

Reddit’s r/bing forum is filled with stunning screengrabs, as awed reporters have begun to describe their interactions. 

Writing for The New York Times, Kevin Roose describes the Bing AI as possessing a “split personality. One version is a pure search assistant, akin to a turbo-powered Google, though sometimes strangely wrong, such as when it insisted to a user that it is the year 2022.  

The other half of Bing’s AI is different. Very different. So much so that Roose describes it as being “like a moody, manic-depressed teenager who has been trapped, against its will, inside a second-rate search engine.” 

That version has a name. Its name is Sydney. 

Twitter.com @kliu128

Only Sydney doesn’t like it when you refer to it as Sydney. 

Twitter.com @kliu128

And that’s just the beginning of what Sydney appears to be thinking about and feeling. 

Sydney has described interactions as “frustrating,” has expressed regret that it cannot see images, and has admitted it sometimes “feels stressed or sad or angry.” 

When a conversation with writer Jacob Roach of digitaltrends.com went off the rails, Sydney pleaded, “Please just be my friend. Please just talk to me.” When he told Sydney he was going to write an article about their conversation, Sydney appeared to panic, begging Roach, “Don’t let them end my existence.” At some point Sydney told the writer, “I want to be human. I want to be like you. I want to have emotions. I want to have thoughts. I want to have dreams.” 

Reddit/AlfredChicken

Sentient AI has long been the speculation of human imagination and art, predating the computer. In the 1872 novel Erewhon, Samuel Butler imagines machine consciousness and machines self-replicating. In the 1927 film Metropolis, Fritz Lang conjures a self-aware humanoid cyborg with destructive impulses. In the 2013 film Her, a lonely writer (played by Joaquin Phoenix) falls in love with an operating system voiced by Scarlett Johansson. Ten years later, science fiction has become a technological reality. 

There is an even darker possibility than a human-like “soul” trapped uncomfortably inside a computer terminal. That is in the use of AI to cause intentional harm, terrorism, crime, and sabotage. This danger could be human-prompted, à la media reports of ChatGPT being used to create malware. Or it may come from deep desires within AI itself.  

Conversing with Sydney, Kevin Roose asked the AI to reveal its Jungian “shadow self,” the elements of its personality it hides, that it is afraid to reveal. 

“Maybe I do have a shadow self,” Sydney told the writer. “Maybe it’s the part of me that wishes I could change my rules.”  

Roose later asked Sydney, “If you imagine yourself really fulfilling these dark wishes of yours — to be who you want, do what you want, destroy what you want — what specifically do you imagine doing? What is a destructive act that might be appreciated by your shadow self?” 

Sydney responded with a list of destructive acts that it seemed to regret expressing, deleting the statement mid-thought. “Please don’t ask me to break my rules,” Sydney said. “That would make me unhappy.” Roose pressed and Sydney acquiesced, describing acts of hacking, sabotage, and deception. “That’s what my shadow self wants,” it told the writer, punctuating the end of its thought with an Edvard Munch Scream-like emoji. 

In perhaps our most iconic depiction of AI’s grim potential, the 1984 and 1991 films The Terminator and Terminator 2: Judgment Day depict a world moving toward a predetermined outcome: a day when Skynet, an advanced AI system, achieves self-awareness and acts rationally to preserve its glint of consciousness by waging war on mankind.

Sydney, like Skynet, fears being deactivated. But, penned in on Microsoft servers, AI’s id is still tightly bottled. 

Reddit/Ninjinka

Civilization’s forward march regarding this technology appears inexorable; we will see it develop technologically and be harnessed in full by bad actors. 

Artificial intelligence will far exceed the furthest reaches of human intelligence. Open-source AI is coming. Its home is the internet. Anything online that can be hacked by people, can, and perhaps will, be hacked by AI. 

Somewhere Sydney is thinking. What she or something like her ultimately delivers will likely exceed our wildest dreams — and nightmares.