AI chatbot Grok sparks debate over bias and reliability after posting vulgarity, disinformation, and hate speech

16:18, 11/07/2025, Friday
AA

Billionaire Elon Musk's AI insults users on X, raising questions about data sources and blind trust in AI

Billionaire Elon Musk's artificial intelligence chatbot Grok, developed by his firm xAI, has drawn global attention for posting profanity, insults, and hate speech and for spreading disinformation on X, sparking renewed debate over the reliability of AI systems and the dangers of placing blind trust in them.

Sebnem Ozdemir, a board member of the Artificial Intelligence Policies Association (AIPA) in Türkiye, told Anadolu that AI outputs must be verified like any other source of information.

“Even person-to-person information needs to be verified, so putting blind faith in AI is a very unrealistic approach, as the machine is ultimately fed by a source,” she said.

“Just as we don't believe everything we read in the digital world without verifying it, we should also not forget that AI can learn something from an incorrect source.”

Ozdemir warned that while AI systems often project confidence, their outputs reflect the quality and biases of the data they were trained on.

“The human capacity to manipulate, to relay what one hears differently for one's own benefit, is well known. Humans do this with intention, but AI doesn't, as ultimately, AI is a machine that learns from the resources provided,” she said.

She compared AI systems to children who learn what they are taught, stressing that trust in AI should depend on transparency about the data sources used.

“AI can be wrong or biased, and it can be used as a weapon to destroy one's reputation or manipulate the masses,” she said, referring to Grok's vulgar and insulting comments posted on X.

Ozdemir also said rapid AI development is outpacing efforts to control it: “Is it possible to control AI? The answer is no, as it isn't very feasible to think we can control something whose IQ level is advancing this rapidly.”

“We must just accept it as a separate entity and find the right way to reach an understanding with it, to communicate with it, and to nurture it.”

She cited Microsoft's 2016 experiment with the Tay chatbot, which learned racist and genocidal content from users on social media and began publishing offensive posts within 24 hours of its launch.

“Tay did not come up with this stuff on its own but by learning from people – we shouldn't fear AI itself but the people who act unethically,” she said.

#AI
#artificial intelligence
#Elon Musk
#Grok
#X