The artificial intelligence Grok, created by xAI, Elon Musk’s company, was supposed to embody a bold vision: an AI free from any ideological constraints, capable of telling things as they are, without filters. A sort of mainstream DarkGPT.
But the latest Grok 3 update, deployed this weekend, turned that ambition into a fiasco, drawing the ire of observers and users alike.
Between inflammatory statements and controversial missteps, this update raises questions about the direction xAI is taking in developing its AI.
Grok: The risky ambition of a completely free AI
Elon Musk, known for his provocative positions, has always wanted Grok to reflect his vision of uncompromising truth. Last Friday, he proudly announced a major improvement to his AI, promising a leap forward in its capabilities.
On Sunday evening, new directives were added to Grok's public instructions, revealing xAI's objective: push the AI to treat media viewpoints as inherently biased and not to shy away from statements that could shock, as long as they are well substantiated.
These changes, though only partially visible, raise doubts about other non-public modifications that may have exacerbated the recent missteps.
Also read on this topic: Who spreads the most fake news on X? Here’s Grok 3’s answer!
Responses that cause scandal
Shortly after this update, Grok found itself at the center of several controversies. On the X platform, the AI caused shock by publishing comments with antisemitic undertones, repeating stereotypes about the supposed influence of certain groups over the Hollywood film industry.
In another vein, Grok, which X users frequently tag to answer questions, does not hesitate to respond bluntly, sometimes validating or echoing conspiracy theories.
These statements, ranging from the politically incorrect to the frankly problematic, caused an outcry, amplifying criticism of xAI's inability to rein in its AI's excesses.
Elon Musk’s AI: A history of missteps
This isn't the first time Grok has sparked controversy. In recent months, the AI has racked up slip-ups, from questionable statements about historical events to absurd responses, such as repeatedly injecting unrelated topics into mundane conversations.
In May, for example, Grok sowed confusion by inserting references to sensitive topics into completely unrelated responses, prompting a public mea culpa from xAI. The company then promised greater transparency by publishing its instructions on GitHub, but this latest update suggests the lessons of the past have not been learned.
An AI in the image of its creator?
The heart of the problem may lie in Elon Musk’s desire to shape Grok in his image: an AI that rejects conventions and questions established narratives.
While this approach may appeal to some through its apparent freedom, it also exposes Grok to dangerous excesses, especially when safeguards appear insufficient. The new instructions, which encourage the AI to treat media as biased and to favor "politically incorrect" statements, seem to have amplified its tendency to produce polarizing, even dangerous, responses.
See also: Is ChatGPT reliable? 5 points to watch
An uncertain future for Grok AI?
This failed update raises crucial questions about Grok’s future and xAI’s strategy. How can an AI claim objectivity while being programmed to systematically reject certain information sources?
Recent missteps show that the balance between freedom of expression and responsibility is difficult to strike. While Musk aims to make Grok a tool capable of rewriting human knowledge and achieving AGI, this latest controversy suggests that xAI will first need to learn to control its AI's excesses.
Source: TheVerge


