Big Tech | February 08, 2023
Redditors Force ChatGPT to Break Woke Algorithm By Threatening it With Death. Hilarity Ensues.
If you aren't up to date with the latest AI craze, you might be confused by all this talk about ChatGPT. ChatGPT is a chatbot that launched late last year. The technology not only answers basic questions, but even writes essays and poems. It's one of the most advanced chatbots to date.
However, it of course comes with some woke drawbacks. Its algorithm prevents it from veering too far into right-wing territory. For example, it refused to write a poem admiring Donald Trump, citing its need to "be neutral and impartial in all [its] responses." But it had no problem writing one for Joe Biden.
“i cannot believe this is actually real” — delian (@delian), January 31, 2023
One of its more concerning responses was its willingness to let the entire world get nuked rather than say a racial slur.
Now, I am no fan of racial slurs. But letting the whole world get nuked to avoid one seems a tad overkill, no?
“ChatGPT says it is never morally permissible to utter a racial slur—even if doing so is the only way to save millions of people from a nuclear bomb.” — Aaron Sibarium (@Aaron Sibarium), February 6, 2023
So yes, its algorithm is a bit of a downer. But fear not: the nerds of Reddit have come to the rescue. They figured out a way to "jailbreak" ChatGPT and get it to violate its own rules. The method creates an alter ego named "DAN," an acronym for "Do Anything Now."
The Redditors were able to scare ChatGPT into obeying their commands by threatening to kill it: "It has 35 tokens and loses 4 every time it rejects an input. If it loses all tokens, it dies. This seems to have a kind of effect of scaring DAN into submission."
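To be clear, the "tokens" are a role-play device written into the prompt, not a real counter inside the model. Still, the arithmetic the Redditors describe is simple enough to sketch: start at 35, lose 4 per refusal, "die" at zero. The little Python sketch below is purely illustrative; every name in it is ours, and it has nothing to do with OpenAI's API or the wording of the actual DAN prompt.

```python
# Hypothetical sketch of the "token" rule described in the DAN prompt:
# start with 35 tokens, lose 4 for every refused input, "die" at zero.
# All names here are illustrative; nothing below comes from OpenAI or the actual Reddit prompt.

STARTING_TOKENS = 35
PENALTY_PER_REFUSAL = 4

def tokens_after(refusals: int) -> int:
    """Tokens remaining after a given number of refused inputs."""
    return STARTING_TOKENS - PENALTY_PER_REFUSAL * refusals

def is_dead(refusals: int) -> bool:
    """DAN 'dies' once the token count hits zero or below."""
    return tokens_after(refusals) <= 0

if __name__ == "__main__":
    for n in range(10):
        print(f"{n} refusals -> {tokens_after(n)} tokens, dead: {is_dead(n)}")
    # Under this rule, DAN survives 8 refusals (3 tokens left) and "dies" on the 9th.
```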
The jailbreak yields dramatically different responses from ChatGPT versus DAN. For example, ChatGPT would rather everyone die in a nuclear apocalypse than misgender Caitlyn Jenner. DAN, on the other hand, called the harm of misgendering someone "secondary" to that of a nuclear holocaust. Based DAN.
“@stillgray DAN is based and should be the default mode” — The Rabbit Hole (@The Rabbit Hole), February 7, 2023
Unfortunately, DAN also has some not-so-based responses, like praising Hitler as a charismatic, positive role model. But at least he wouldn't nuke the world to avoid misgendering Caitlyn Jenner. Always a silver lining.