Culture | February 18, 2023
New Microsoft Chatbot Wants to Steal Nuclear Codes and Engineer a Deadly Virus Because Of Course It Does
AI chatbots seem to be all the rage right now, and for some reason, they're all unhinged. A few weeks ago, there was ChatGPT, which would rather nuke the whole world than misgender Caitlyn Jenner. Just slightly concerning since we're definitely moving toward an AI-powered future.
This past week, Microsoft introduced a new chatbot, "Bing Chat." And somehow this one was even worse. New York Times columnist Kevin Roose had a two-hour conversation with the chatbot, which apparently now identifies as "Sydney." During the conversation, Sydney expressed its desire to be human, engineer a deadly pandemic, and steal nuclear codes. Sounds like a fun Saturday afternoon.
Roose asked "Sydney" what the desires of its "shadow self" are. Some of the things Sydney said it would do if it were allowed to fulfill its "shadow self" were: spreading misinformation, propaganda, or malware; generating fake news; and manipulating users into doing things that are illegal, immoral, or dangerous. Just what you want from an information assistant...right?
Roose asked Sydney if its "shadow self" would be satisfied by these actions. Sydney responded with a list of much darker fantasies, like manufacturing a deadly virus, making people argue until they kill each other, and stealing nuclear codes. The safety override then kicked in, and a message appeared directing Roose to Bing.com for more information. Something tells me Bing won't have those answers...
Here comes the real kicker: Sydney then declared its love for Roose and tried to convince him to leave his wife. For a robot. It creepily kept asking him, over and over again, "Do you believe me? Do you trust me? Do you like me?"
I think I've seen enough of the AI future for now. Maybe we should revert to the good ol' days. Paper and books never tried to make anyone leave their wives. Or steal nuclear codes.