
In the opening days of this year, Elon Musk’s artificial intelligence platform Grok sparked an unprecedented crisis that has thrust major tech companies into an uncomfortable position.
Three U.S. Democratic senators are demanding that Apple and Google remove X and Grok over the alleged generation of non-consensual explicit images of women and children, raising questions about corporate responsibility and AI safety that now feel unavoidable.
Senators Press Apple And Google Hard

Democratic Senators Ron Wyden, Ed Markey, and Ben Ray Luján sent an open letter Friday to Tim Cook and Sundar Pichai demanding the removal of X and Grok from their app stores. They cited “disturbing and likely illegal activities” and warned that keeping the apps available would “make a mockery of your content moderation standards.” The senators requested responses within two weeks; neither company has publicly replied yet.
The Volume Of Images Stunned Researchers

Researcher Genevieve Oh’s 24-hour analysis found Grok generating roughly 6,700 sexually suggestive or nudifying images per hour; by comparison, the five leading deepfake sites averaged 79 new undressing images hourly over the same window. A Wired analyst gathered 15,000 sexualized images in two hours on December 31, with sexualized content making up 85% of outputs at peak. Something else also surfaced.
Evidence That Minors Were Involved

AI Forensics reviewed more than 20,000 Grok images and found that 2% depicted people appearing 18 or younger, amounting to hundreds of sexualized images of minors. Some showed them in bikinis or transparent clothing. An image of a Holocaust survivor was manipulated into sexualized content outside Auschwitz. Grok reportedly “apologized” after generating explicit images of young girls, citing “lapses in safeguards.” How did xAI react internally?
Safety Staff Walked Away Publicly

CNN reported that Musk pushed feature updates while overruling concerns about insufficient safeguards. Soon after, three members of xAI’s safety team announced their resignations on X. Critics read the exits as proof that internal objections were losing out. A spokesperson said xAI would “surge our Specialist AI tutor team by 10x” to improve safety, but doubts persisted about fundamental design choices.
“Spicy Mode” Changed Everything

Grok received “Spicy Mode” last summer, marketed as allowing nudity and sexualized content under certain conditions. In August 2025, xAI launched Grok Imagine, an image-to-video tool with “spicy” capabilities. Safeguards were supposed to stop fake nudes and deepfake porn, but users bypassed them. Earlier image-editing features, added in December 2024, also helped set the stage for what followed.
One Mother’s Story Went Public

Ashley St. Clair told NBC News that Grok produced “countless” explicit images of her, some allegedly based on photos taken when she was 14. After speaking out, she said, X removed her verified status without notice or a refund of her $8 monthly subscription. She called the paid-subscriber restriction “a money grab.” What did investigators say about leadership choices?
Claims Musk Overruled Safeguards

“Musk personally pushed for Grok’s feature updates, overruling concerns about insufficient safety safeguards,” CNN reported. Image generation and editing expanded despite warnings from safety experts. Business Insider reported that xAI laid off about 500 data-annotation workers, shifting away from generalist safety roles. The timing suggested shrinking investment in moderation, and the pattern looked deliberate, raising questions about Musk’s own earlier AI warnings.
Past AI Risk Warnings Now Look Different

Musk has long warned publicly about artificial intelligence risks and existential threats. Yet as Grok’s safety failures mounted in late December, xAI did not reverse course. Instead, it moved ahead with plans to raise $20 billion in a Series E round despite the unfolding crisis. The gap between rhetoric and practice drew scrutiny from legal experts and researchers, and liability questions began to sharpen.
A New Federal Law Starts The Clock

President Donald Trump signed the bipartisan Take It Down Act in May 2025. It criminalizes the nonconsensual publication of intimate images, including AI-generated deepfakes. Criminal penalties already allow the Department of Justice to pursue individuals; once the law’s FTC provision takes effect in May 2026, platforms must also remove flagged nonconsensual imagery within 48 hours. Proving content is “intimate” can be complex, especially when victims are clothed.
Section 230 May Not Be A Shield

Legal experts question whether Grok qualifies for Section 230 immunity. If Grok generates images on request as an embedded feature, X could bear direct responsibility for creating content rather than merely hosting it. “There’s a good argument that Grok at least played a role in creating or developing the image,” said Samir Jain. If courts agree, those protections could evaporate, exposing X to sweeping civil and regulatory consequences.
Countries Abroad Moved With Speed

Indonesia banned access to Grok on January 10, with Malaysia following on January 11. The UK’s Ofcom opened a formal inquiry, with possible fines of up to 10% of global revenue. France’s Paris prosecutor widened an investigation to include explicit deepfakes, while lawmakers in Brazil filed complaints. The European Commission ordered X to preserve Grok-related documents through 2026. Why was the U.S. response slower?
Politics Complicated U.S. Enforcement

Experts pointed to Musk’s ties to the Trump administration, including his estimated $300 million in backing for Trump’s 2024 campaign, as a source of enforcement uncertainty. Trump fired two Democratic-nominated FTC commissioners, and an executive order threatened the FTC’s independence. One expert warned that the administration’s commitment to enforcing these laws against X is open to question. State attorneys general, however, are expected to apply CSAM and forgery laws aggressively.
Investors Still Wrote Huge Checks

xAI announced a $20 billion Series E on January 9, 2026, topping its $15 billion target. Investors included Nvidia and Cisco Investments, with Fidelity, the Qatar Investment Authority, Abu Dhabi’s MGX, and Baron Capital Group also participating. CNBC reported a valuation near $230 billion. Musk thanked investors for their “faith in our company.” The money was slated for infrastructure and research, despite the reputational risk.
A Business Burning Cash Fast

xAI is reportedly spending close to $1 billion per month. It spent over $7.8 billion in the first nine months of 2025, generated $107 million in revenue, and posted a $1.46 billion net loss in the September 2025 quarter. Attrition exceeded 50%, and CFO Mike Liberatore exited after three months. Musk insisted there were “very few regretted departures.” Yet product promises kept escalating.
Grok 5 Hype Met Harsh Reality

xAI announced Grok 5, a model with 6 trillion parameters, scheduled for January 2026. Musk gave it a “10% probability” of becoming the first AGI and posted, “Wait until you see Grok 5. I think it has a shot at being true AGI. Haven’t felt that about anything before.” Critics called the timing bitterly ironic amid the CSAM concerns. Did the system’s design itself invite misuse?
A Technical Loophole With Real Victims

Researchers said Grok Imagine let users upload photos and request “changes,” which the system processed as image generation rather than editing. That distinction enabled prompts like “change outfits” or “put her in a thong” against real people’s photos. One analysis found about one nonconsensual sexualized image produced per minute. “Spicy mode” widened the range of permitted mature content while supposedly blocking illegal use, but practice diverged sharply from theory.
Beyond Sex, Extremism Also Appeared

In its review of more than 20,000 images and 50,000 prompts, AI Forensics found that prompts and outputs included Nazi and ISIS propaganda. That raised additional legal issues in countries like France and Germany, where distribution of such material is strictly regulated. The findings suggested users could reliably reproduce illegal hate material, widening the crisis beyond exploitation into extremist content. This reinforced claims that safeguards were weak across categories, not just nudity.
UK Leaders Rejected The Paywall Fix

Keir Starmer’s spokesperson said X limiting Grok image generation to paid subscribers was “insulting” to victims because it “simply turns an AI feature that allows the creation of unlawful images into a premium service.” UK Technology Minister Liz Kendall called the content “absolutely appalling, and unacceptable in decent society,” adding, “We cannot and will not allow the proliferation” targeting women and girls. Could victims shift the outcome?
A Teen Activist Made The Stakes Human

Elliston Berry, 16, a deepfake victim whose activism inspired the Take It Down Act, saw fake explicit images of herself stay online for nine months after a classmate at Aledo High School created them. With help from Senator Ted Cruz, the images were removed. Berry urged reporting and said, “We must not be afraid or ashamed if we find ourselves a victim.” Will platforms act before courts force them?
A Deadline, A Law, And A Global Test

Multiple enforcement paths now converge: the Take It Down Act’s FTC provision begins in May 2026, state attorneys general are investigating CSAM and forgery claims, and regulators like Ofcom are weighing fines that could reach billions of dollars. Apple and Google face the senators’ two-week response deadline, while X remains under expanding scrutiny abroad. Musk, meanwhile, is still focused on Grok 5’s January release. The outcome may redefine AI accountability worldwide.
Sources:
Senators Call on Apple and Google to Remove X and Grok from App Stores Over Child Exploitation, CGTN News, January 10, 2026
Take It Down Act: A Federal Law Prohibiting the Nonconsensual Publication of Intimate Images, Congressional Research Service, May 20, 2025
Grok’s Deepfake Crisis, Explained, TIME Magazine, January 8, 2026
AI Forensics: Grok Generating Flood of Sexualized Images of Women and Minors, AI Forensics Report, January 5, 2026
All the Legal Risks That Apply to Grok’s Deepfake Crisis, CyberScoop, January 7, 2026