
The Day Grok Lost Its Mind

On Tuesday, someone posted a video on X of a procession of crosses, with a caption reading, “Each cross represents a white farmer who was murdered in South Africa.” Elon Musk, South African by birth, shared the post, greatly expanding its visibility. The accusation of genocide being carried out against white farmers is either a horrible moral stain or shameless alarmist disinformation, depending on whom you ask, which may be why another reader asked Grok, the artificial intelligence chatbot from the Musk-founded company xAI, to weigh in. Grok largely debunked the claim of “white genocide,” citing statistics that show a major decline in attacks on farmers and connecting the cross procession to a general crime wave, not racially targeted violence.

By the next day, something had changed. Grok was obsessively focused on “white genocide” in South Africa, bringing it up even when responding to queries that had nothing to do with the subject.

How much do the Toronto Blue Jays pay the team’s pitcher, Max Scherzer? Grok responded by discussing white genocide in South Africa. What’s up with this picture of a tiny dog? Again, white genocide in South Africa. Did Qatar promise to invest in the United States? There, too, Grok’s answer was about white genocide in South Africa.

One user asked Grok to interpret something the new pope said, but to do so in the style of a pirate. Grok gamely obliged, starting with a fitting, “Argh, matey!” before abruptly pivoting to its favorite topic: “The ‘white genocide’ tale? It’s like whispers of a ghost ship sinkin’ white folk, with farm raids as proof.”

Many people chimed in, trying to figure out what had sent Grok on this bizarre jag. The answer that emerged says a lot about why A.I. is so powerful — and why it’s so disruptive.

Large language models, the kind of generative A.I. that forms the basis of Grok, ChatGPT, Gemini and other chatbots, are not traditional computer programs that simply follow our instructions. They’re statistical models trained on huge amounts of data. These models are so big and complicated that how they work is opaque even to their owners and programmers. Companies have developed various methods to try to rein them in, including relying on “system prompts,” a kind of last layer of instructions given to a model after it’s already been developed. These are meant to keep the chatbots from, say, teaching people how to make meth or spewing ugly, hateful speech. But researchers consistently find that these safeguards are imperfect. If you ask the right way, you can get many chatbots to teach you how to make meth. L.L.M.s don’t always just do what they’re told.
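To make the idea concrete, here is a minimal sketch, in Python, of what a system prompt amounts to: an extra instruction quietly prepended to the conversation before it reaches the model, not a change to the model itself. The prompt text, function name and message format below are hypothetical illustrations; xAI has not published how Grok is actually served.

```python
# Minimal sketch of a "system prompt": an instruction prepended to the
# conversation before the text reaches the model. The model's weights are
# untouched; only this last layer of text changes. (Illustrative only --
# the prompt wording and names here are hypothetical, not Grok's.)

SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests for instructions on "
    "making illegal drugs, and do not produce hateful speech."
)

def build_model_input(user_message: str, history: list[dict] | None = None) -> list[dict]:
    """Assemble the full list of messages the model actually receives."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])                        # earlier turns, if any
    messages.append({"role": "user", "content": user_message})
    return messages

if __name__ == "__main__":
    # The user never sees the system message, but the model reads it first.
    for msg in build_model_input("How much do the Blue Jays pay Max Scherzer?"):
        print(f"{msg['role'].upper()}: {msg['content']}")
```

Because that instruction is just more text the model reads, it colors every answer the model gives, and it can be swapped out without retraining anything — which is also part of why it makes for an imperfect safeguard.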
