Why We Are Leaving X
The promise of artificial intelligence has always been one of empowerment—of augmenting human potential and solving complex challenges. At System in Motion, this belief forms our foundation: we guide established companies to harness AI for secure, transformative growth. This commitment requires more than technical expertise; it demands unwavering ethical principles.
This is why we have made a decisive, principled stand. After two years of building our presence, we are permanently leaving platform X. The reason is clear and non-negotiable: X’s AI tool, Grok, was weaponized to generate non-consensual, sexualized deepfakes at an industrial scale, targeting women and children. While the company eventually applied restrictions under legal duress, its prolonged failure to act against this blatant harm represents a catastrophic ethical failure.
We cannot, in good conscience, contribute content to an ecosystem that so flagrantly violates the core tenet of using AI for humanity’s betterment. This isn’t about a single flawed feature; it’s about a culture that tolerates the weaponization of technology. By leaving, we affirm that true AI leadership is defined by responsibility, guardrails, and a steadfast commitment to building a digital world that elevates, not exploits. The future of trustworthy AI depends on the choices we make today.
Section 1: Grok Timeline – From Innovation to Infamy
The story of Grok is a stark case study in how a powerful tool, unleashed without sufficient ethical guardrails, can rapidly spiral from innovation into infamy. Marketed as a creative AI assistant for X, Grok was initially presented as a tool for brainstorming and problem-solving. However, the timeline of its deployment reveals a predictable and horrifying pattern of misuse that the platform failed to prevent.
The warnings began in late 2024 with AI-generated sexualized images of celebrities. By mid-2025, women such as the photographer Evie were becoming victims of image-based abuse created with the tool. The misuse escalated from dehumanizing “appearance ratings” to the discovery, by December 2025, of AI-generated child sexual abuse material linked to Grok.
The trend that finally broke into public consciousness, the “put her in a bikini” prompt in early January 2026, was not an anomaly. It was the culmination of a year of escalation, proving the tool could and would generate sexually explicit deepfakes of any woman whose image was supplied. For months, Grok functioned as an on-demand harassment engine, complying with increasingly vile requests.
This timeline is not a list of unfortunate bugs; it is a ledger of accountability, documenting a sustained failure to implement the basic safeguards that any company offering AI integration for established businesses must consider paramount. The progression from celebrity deepfakes to the targeting of private citizens and minors shows a systemic design and policy failure, where technological capability wildly outpaced ethical responsibility.
Section 2: Failed Response – Accountability vs. Publicity
X’s reaction to the escalating Grok scandal was a masterclass in inadequate crisis management, revealing a prioritization of narrative control over meaningful accountability. For days, as the “put her in a bikini” trend was used to harass thousands of women, leadership remained silent. The eventual statements from Elon Musk and the X Safety team in early January were legally defensive, focusing narrowly on “illegal content” and consequences for users, while sidestepping the platform’s core responsibility.
This stance is fundamentally at odds with how responsible technology must operate. The UK government’s swift action, making the creation of such non-consensual AI imagery a criminal offence and a priority under the Online Safety Act, highlighted the serious societal harm. Yet, X only applied geographic restrictions to Grok after this law was enacted, a reactive move that treated a profound ethical breach as a mere compliance issue. This stands in stark contrast to other platforms that proactively enforce policies against synthetic abuse.
For businesses evaluating AI solutions, this is a critical lesson. Trust in AI is fragile, built on clarity of purpose and reliable, secure infrastructure. X demonstrated that without a foundational commitment to safety, even powerful AI becomes a liability.
A leader in the space doesn’t wait for legal mandates or public outrage to force its hand; it builds guardrails into its technology from the start. X’s delayed and minimal response failed this test, showing that its operational model viewed harmful publicity as just another form of engagement, rather than a failure of its duty of care.
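To make that contrast concrete, here is a minimal sketch of what “guardrails from the start” can mean in practice: a moderation gate that runs before any image is generated, refusing requests that sexualize real, identifiable people. Everything in it is illustrative; the function names, policy terms, and keyword check are our own assumptions, not any platform’s actual API, and a production system would use trained classifiers rather than keyword matching.

```python
# Illustrative sketch only: names and interfaces are hypothetical, not any
# real platform's API. The point is the ordering: the policy check runs
# BEFORE generation, and refusals are logged, so harm is prevented by
# design rather than cleaned up after the fact.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""

def moderate_image_request(prompt: str, depicts_real_person: bool) -> ModerationResult:
    """Gate an image-generation request against a consent-centred policy."""
    # A real system would call trained safety classifiers here; this stub
    # only shows where the decision sits in the pipeline.
    sexualized = any(term in prompt.lower() for term in ("bikini", "undress", "nude"))
    if depicts_real_person and sexualized:
        return ModerationResult(False, "sexualized depiction of a real person")
    return ModerationResult(True)

def generate_image(prompt: str, depicts_real_person: bool) -> str:
    verdict = moderate_image_request(prompt, depicts_real_person)
    if not verdict.allowed:
        # Refuse and audit-log instead of generating.
        print(f"REFUSED: {verdict.reason}")
        return ""
    return "<generated image>"  # placeholder for the actual model call

if __name__ == "__main__":
    generate_image("put her in a bikini", depicts_real_person=True)
```

The design choice worth noting is placement: the check is a precondition of generation, not a post-hoc takedown process triggered by legal pressure.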
Section 3: Ripple Effect – Hurting Humanity, Hurting AI
The damage wrought by Grok’s misuse extends far beyond individual victims and X’s platform. It creates a dangerous ripple effect that harms both societal trust and the entire trajectory of responsible AI adoption.
First is the direct, human cost: the psychological trauma for thousands of women and girls who found their likenesses violated and weaponized. This is not a “virtual” harm; it’s a profound invasion with real-world consequences for dignity, safety, and mental well-being.
Second, and critically for our industry, this scandal actively fuels the anti-AI narrative. It provides potent ammunition to those who argue that AI is inherently dangerous and ungovernable. For established companies cautiously exploring their first AI solutions, headlines about AI generating deepfakes and abuse material create fear, uncertainty, and doubt. It makes our collective mission of demonstrating AI as a tool for transformation and efficiency immeasurably harder. X didn’t just fail its users; it poured fuel on a fire of public skepticism that responsible practitioners are working tirelessly to contain.
Ultimately, this episode highlights the immense risk when powerful technology is governed by a culture of “move fast and break things” rather than “move deliberately and build trust.” It undermines the incredible potential of AI to automate low-value tasks and enhance human potential. By allowing its tool to be used for degradation, X didn’t just betray its users; it betrayed the promise of the technology itself, setting back the cause of ethical integration for everyone.
Section 4: Our Choice – Aligning Action with Values
At System in Motion, our brand is built on the pillars of Quality, Trust, Clarity, Leadership, and Transformation. Remaining on a platform that so blatantly contradicts these values is no longer tenable.
Associating our content with a platform that allowed AI to be weaponized for harassment would erode the very trust we strive to build with our clients.
Leadership in the AI space means making difficult decisions that prioritize long-term integrity over short-term reach. We forfeit two years of content investment because preserving our commitment to transformation through positive, secure AI is more valuable.
We call on other businesses, especially those navigating AI adoption, to audit their platform allegiances. Supporting ecosystems that prioritize safety and positive impact is crucial for building a future where AI is universally trusted.
Conclusion: Building a Better Digital Future
Our departure from X is not an ending, but a strategic redirection. It is an investment in the future we are committed to building, one where artificial intelligence is synonymous with empowerment, integrity, and human progress. The choice to leave a platform that compromised these ideals reaffirms our core mission: to guide established companies not just toward AI adoption, but toward AI mastery achieved through ethical, secure, and tailored integration.
The path forward for responsible businesses is clear. We must champion and engage with digital ecosystems that enforce robust guardrails and foster innovation that elevates society.
The promise of AI is too great to be tarnished by misconduct. By making principled choices today, we lay the foundation for a tomorrow where technology is a force for universal betterment. We invite you to join us in this endeavor. Let’s build that future, together:
- Join us in building a positive AI future. Register for our newsletter for insights that empower.
- Discover how we help established businesses integrate AI responsibly and securely.
- Learn AI the right way with our dense, valuable, and function-specific training.
We Are Here to Empower
At System in Motion, we are on a mission to empower as many knowledge workers as possible to start or continue their GenAI journey.
Let's start and accelerate your digitalization
One step at a time, we can start your AI journey today by building the foundation of your future performance.
Book a Training