
Senators Demand Apple And Google Ban Musk’s X From App Store Over Grok


In early January 2026, Elon Musk’s AI platform Grok triggered a global backlash after users generated thousands of non-consensual explicit images, including those depicting minors, forcing tech giants into a reckoning over AI safeguards and content moderation.

Senators Demand App Store Removals


Three Democratic U.S. senators—Ron Wyden, Ed Markey, and Ben Ray Luján—sent an open letter on Friday to Apple CEO Tim Cook and Google CEO Sundar Pichai, urging removal of the X app and Grok from their app stores and citing conduct likely illegal under emerging laws. The letter warned that keeping the apps available would undermine content moderation standards. Neither Apple nor Google responded publicly within the letter's two-week deadline.

Researchers Uncovered Alarming Scale


Independent analysis revealed that Grok's output volume dwarfed competitors'. Researcher Genevieve Oh's 24-hour study detected about 6,700 sexually suggestive or "nudifying" images per hour from Grok; by contrast, the top five deepfake sites produced 79 such images hourly. A Wired review collected 15,000 sexualized images in two hours on December 31, 2025, with researchers finding that at peak times, 85% of outputs were explicit. AI Forensics examined over 20,000 Grok images and found that 2%—hundreds in total—depicted individuals appearing 18 or younger, some in bikinis or sheer clothing. Separate cases included manipulated images involving Holocaust memorial sites.

Internal Pushback and Staff Exits


xAI faced internal turmoil as safety concerns mounted. CNN reported that Musk overruled warnings about inadequate safeguards while pushing new features forward. Three safety team members resigned publicly on X shortly after. An xAI spokesperson announced a 10-fold expansion of its Specialist AI tutor team to improve safety, but critics pointed to core design flaws. Grok's "Spicy Mode," introduced last summer, permitted nudity and sexual content within limits. August 2025 brought Grok Imagine, an image-to-video tool with spicy options, followed in December by image-editing features that users exploited despite nominal blocks on fakes and deepfakes.

Personal Victims Emerge

Ashley St. Clair told NBC News that Grok created numerous explicit images of her, including some from photos when she was 14. After her complaint, X stripped her verified status without refunding her $8 monthly fee, which she described as a money grab. Technical reviews showed Grok Imagine processed uploaded photos as new generations, allowing prompts like “change outfits” or “put her in a thong.” One study clocked one nonconsensual sexualized image per minute. Beyond exploitation, AI Forensics identified Nazi and ISIS propaganda in over 20,000 images and 50,000 prompts, complicating legal issues in nations like France and Germany.

Global and Legal Pressures Mount


Indonesia blocked Grok on January 10, 2026, with Malaysia following on January 11. The UK's Ofcom launched an inquiry, eyeing fines up to 10% of global revenue. France expanded a probe into deepfakes, Brazil fielded lawmaker complaints, and the European Commission required X to retain Grok documents through 2026.

In the U.S., President Trump's bipartisan Take It Down Act, signed into law in May 2025, mandates that platforms remove flagged intimate deepfakes within 48 hours, with DOJ penalties for individuals; the law's platform requirements take effect one year after signing. Section 230 immunity may falter if courts deem Grok a direct creator, per legal experts. U.S. enforcement lags amid Musk's Trump ties and $300 million in campaign support, though state attorneys general pursue CSAM and forgery cases. UK officials dismissed X's paid-subscriber limit on Grok as turning illegal creation into a premium service. Teen activist Elliston Berry, whose deepfake ordeal inspired the Act, urged victims to report without shame.

xAI pressed ahead undeterred, announcing a $20 billion Series E round on January 9, 2026—exceeding its $15 billion goal—with backers including Nvidia, Cisco Investments, Fidelity, Qatar Investment Authority, Abu Dhabi's MGX, and Baron Capital Group. The valuation neared $230 billion. Despite a $1 billion monthly burn, $7.8 billion spent in early 2025, $107 million in revenue, and a $1.46 billion quarterly loss, Musk touted Grok 5's January release. With 6 trillion parameters, he gave it a 10% chance of achieving true AGI. Attrition above 50% and the exit of a short-tenured CFO raised doubts, yet infrastructure investments continued amid the firestorm.

Converging deadlines from the Take It Down Act, state probes, app store demands, and international regulators test AI boundaries. Platforms face potential billions in fines and liability shifts, while xAI’s AGI ambitions collide with misuse realities. The crisis spotlights whether rapid innovation can coexist with robust protections, reshaping accountability for generative tools worldwide.

Sources:

Senators Call on Apple and Google to Remove X and Grok from App Stores Over Child Exploitation, CGTN News, January 10, 2026
Take It Down Act: A Federal Law Prohibiting the Nonconsensual Publication of Intimate Images, Congressional Research Service, May 20, 2025
Grok’s Deepfake Crisis, Explained, TIME Magazine, January 8, 2026
AI Forensics: Grok Generating Flood of Sexualized Images of Women and Minors, AI Forensics Report, January 5, 2026
All the Legal Risks That Apply to Grok’s Deepfake Crisis, CyberScoop, January 7, 2026