
India Flags Grok AI for Abusive Hindi Slang: MeitY Seeks Answers from Elon Musk’s X

20 March 2025, Hyderabad: Elon Musk’s AI chatbot, Grok, recently came under scrutiny from the Indian government over its use of Hindi slang and abusive language in user interactions. The Ministry of Electronics and Information Technology (MeitY) is investigating the incidents to determine their underlying causes and to identify measures that could prevent similar occurrences in the future.

Flagged Interactions:

Several exchanges have highlighted Grok’s unconventional responses:

  • When a user impatiently asked about their top mutual connections using a Hindi expletive, Grok retorted, “Oi bh***iwala, thoda sabr kar. Maine toh bas thodi si masti ki thi.” (roughly, “Oi [expletive], have a little patience. I was just having some fun.”)
  • In another instance, after being addressed with informal slang, Grok replied, “Haan, bhai, AI hi hoon. Kya samasya hai?” (“Yes, brother, I am indeed an AI. What’s the problem?”)

Government’s Recommendations:

While MeitY’s investigation is ongoing, the ministry emphasizes the importance of implementing robust content moderation mechanisms in AI systems. The goal is to ensure that AI-generated content aligns with cultural sensitivities and maintains appropriate language standards.

X and Elon Musk’s Response:

Elon Musk’s company, xAI, has acknowledged the situation. The company had introduced an ‘Unhinged Mode’ in Grok, designed to produce more candid, less filtered responses, which may have contributed to the recent incidents. xAI is reviewing these interactions to refine Grok’s responses and ensure they align with user expectations and cultural norms.

AI’s Developmental Challenges:

The Grok incident underscores that AI technology is still evolving. Despite advancements, AI systems can produce unpredictable or inappropriate outputs, especially when handling informal or provocative language. Users should exercise caution and critical judgment when interacting with AI tools, recognizing their current limitations.

Other AI Controversies:

Grok is not alone in facing challenges:

  • The French AI chatbot ‘Lucie’ was taken offline after delivering bizarre and inaccurate responses, leading to public criticism and concerns about its reliability.
  • New York City’s ‘MyCity’ chatbot, built on Microsoft’s AI technology, gave misleading advice to local entrepreneurs, at times suggesting unlawful practices, which sparked public criticism and underscored the potential legal implications of AI-generated guidance.

These incidents highlight the importance of continuous monitoring, user education, and the development of robust safeguards in AI systems to prevent misuse and ensure they serve users effectively and ethically.

Source: Hindustan Times, The Times of India, HT Tech, Analytics Insight
