Created on 2025-08-19 13:00
Published on 2025-09-03 10:30
A phrase that sounded tongue-in-cheek a year ago is suddenly everywhere: AI veganism—the deliberate choice to abstain from, or sharply limit, the use of AI systems for ethical, environmental, and human-development reasons. Mainstream outlets have profiled people who refuse AI the way dietary vegans refuse animal products, while universities have begun unpacking the analogy in earnest. The frame is simple: if you believe some AI practices harm workers, the planet, or human agency, then opting out is a moral stand. Whether you agree or not, the movement has become part of the public conversation in 2025. (The Guardian, The Economic Times, gatech.edu)
For SRE and DevOps leaders, this isn’t merely a cultural oddity. It touches how we design systems, disclose automation, schedule compute, and treat the people whose labor—visible and invisible—makes our platforms work. It also intersects with laws you can’t ignore, from the EU AI Act’s risk-based obligations to GDPR’s limits on solely automated decisions. The debate is bigger than “use AI or don’t”; it’s about stewardship when code increasingly intermediates human choices. (EUR-Lex, Digital Strategy, gdpr-info.eu)
Advocates argue that AI veganism is a meaningful ethical stance. They point to the hidden human labor behind “automated” systems—moderators and labelers who sift trauma for low pay—and to the environmental costs of training and deploying large models at scale. TIME’s reporting on Kenyan data workers paid under $2/hour to help make chatbots “safer” made the human toll impossible to ignore. Meanwhile, researchers and energy agencies have quantified the sector’s rapidly growing power needs and even its water footprint, from training runs to hot-running inference clusters. Framed this way, abstaining from AI (or at least from specific AI products and practices) becomes a way to align technology with humane values. (TIME, IEA, arXiv)
Critics counter that “AI veganism” is mostly symbolic. They see it as awareness without enforcement—akin to ethics charters that sit on a wiki while product roadmaps march on. Scholars have documented how ethics-washing lets organizations gesture at responsibility without changing incentives, and regulators are now warning about AI-washing—overhyping capabilities to win customers and capital. If ethics becomes branding, abstention becomes performance art. The alternative, critics say, is to embed accountability into operations and law: risk management (NIST AI RMF), enforceable rights (GDPR Article 22), and sectoral rules (EU AI Act). (SpringerLink, PMC, carnegiecouncil.org, Reuters, NIST, gdpr-info.eu, EUR-Lex)
Even if you never train a foundation model, you’re feeling the footprint of AI workloads. The International Energy Agency estimates data-center electricity use around 415 TWh today—about 1.5% of global demand—and projects a rise toward 945 TWh by 2030, with AI as the major driver. Water-use studies show similar growth pressures. Policymakers and editorial boards have seized on these numbers, and so will your CFO when the power bill lands. That’s why the conversation moved from philosophy to capacity planning—and why SREs suddenly find themselves in sustainability meetings. (IEA, arXiv)
At the same time, the legal environment hardened. The EU AI Act entered into force in 2024 with staged applicability, setting duties for “high-risk” systems and transparency obligations for certain general-purpose and content-generation uses. GDPR’s long-standing right not to be subject to solely automated decisions remains in play across Europe and the UK. And in the U.S., NIST published a generative-AI profile to operationalize AI risk management. These aren’t slogans; they’re operating constraints you can wire into CI/CD, incident response, and customer-facing flows. (EUR-Lex, Digital Strategy, gdpr-info.eu, ICO, NIST Publications)
A consumer bank’s platform team rolled out an “AI assist” across help-center search and agent replies. After a media piece about AI veganism went viral, a subset of customers began asking for “no-AI handling.” Support contact volume ticked up, and trust scores dipped on sessions where the model drafted responses, even when humans reviewed them. The SRE manager proposed a simple experiment: add an “AI-off” toggle to the contact flow, label AI-assisted responses in the UI, and route “AI-off” sessions to a human-only queue with clear SLOs. On the backend, the team introduced carbon-aware scheduling for nightly training jobs, nudging long runs into cleaner grid windows, and began publishing model cards that explained intended use and limitations.
Three weeks later, complaint volume cooled, deflection stayed healthy, and infra costs didn’t spike because the training scheduler rode low-carbon hours. The bigger win was cultural: the bank stopped arguing ethics on Slack and started operating it—through toggles, docs, SLOs, and dashboards their executives could read. (GitHub, Microsoft for Developers, ACM Digital Library, arXiv)
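The toggle-plus-routing piece is small enough to sketch. Here is a minimal version, assuming a persisted user preference and named queues; all identifiers, queue names, and SLO targets are illustrative, not the bank’s actual system:

```python
# Sketch: route contact-flow sessions based on a user-level "AI-off" preference.
# Queue names and SLO targets are illustrative, not a real system's values.

from dataclasses import dataclass

@dataclass
class Session:
    user_id: str
    ai_opt_out: bool   # persisted preference from the "AI-off" toggle
    topic: str

def route(session: Session) -> dict:
    """Pick a queue and attach the SLO the pager will hold us to."""
    if session.ai_opt_out:
        # Human-only queue: no model-drafted replies, no AI retrieval in the path.
        return {"queue": "support-human-only",
                "slo_first_reply_minutes": 30,
                "ai_assist": False}
    # Default path: model drafts a reply, a human reviews, and the UI labels it.
    return {"queue": "support-ai-assisted",
            "slo_first_reply_minutes": 10,
            "ai_assist": True,
            "ui_label": "AI-assisted, human-reviewed"}

print(route(Session(user_id="u123", ai_opt_out=True, topic="billing")))
```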
SREs learned long ago that values become real when they hit an SLO and a pager. If “respect for human agency” matters, you can expose an AI-off rate and an explanation-shown rate in your telemetry. If “green by default” matters, you can track kgCO₂-eq per 1,000 requests and liters of water per training epoch as first-class platform metrics. If “dignity in the data supply chain” matters, you can treat vendor labor standards like availability targets you audit and escalate on, not a procurement checkbox. This is how you move from symbolism to stewardship. (arXiv, TIME)
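As a sketch, assuming the standard Prometheus Python client, those signals are ordinary counters and gauges; the metric names below are illustrative:

```python
# Sketch: expose agency and sustainability signals as first-class metrics.
# Metric names are illustrative; prometheus_client is the standard Python client.

from prometheus_client import Counter, Gauge, start_http_server

SESSIONS = Counter("sessions_total", "Sessions handled", ["ai_mode"])  # "on"/"off"
EXPLANATIONS = Counter("explanations_shown_total", "AI explanations shown to users")
EMISSIONS = Gauge("inference_gco2e_per_1k", "Estimated gCO2e per 1,000 inferences")
WATER = Gauge("training_liters_per_epoch", "Estimated liters of water per training epoch")

def record_session(ai_off: bool, explanation_shown: bool) -> None:
    SESSIONS.labels(ai_mode="off" if ai_off else "on").inc()
    if explanation_shown:
        EXPLANATIONS.inc()

if __name__ == "__main__":
    start_http_server(9000)   # scrape target for Prometheus
    record_session(ai_off=True, explanation_shown=False)
    EMISSIONS.set(42.0)       # updated by your emissions-estimator job
```

The AI-off rate then falls out of a standard ratio query over `sessions_total`, and it can carry an alerting rule like any availability signal.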
Operationalize the spirit behind AI veganism without imposing it on everyone. Offer a visible AI-off mode for end users in sensitive flows, label AI assistance when it’s on, and require explicit consent for fully automated outcomes that materially affect people—backed by a fast path to a human reviewer. Route these choices through your feature-flag system so you can experiment by segment and geography, and tie the whole thing to compliance with Article 22 where it applies. For operators, expose a “human-only” path in runbooks for actions like refunds, suspensions, or access decisions. This isn’t performative; it’s a reliability pattern that reduces surprise and builds trust. (gdpr-info.eu)
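A sketch of that decision gate follows; every name in it is hypothetical, and whether Article 22 actually applies to a given flow is a legal determination, not a code path:

```python
# Sketch: gate fully automated outcomes behind explicit consent, with a fast
# path to a human reviewer. All field and route names are hypothetical, and
# Article 22 applicability is a legal call, not a code call.

from dataclasses import dataclass

MATERIAL_KINDS = {"refund_denial", "account_suspension", "access_decision"}

@dataclass
class Request:
    id: str
    kind: str

@dataclass
class User:
    ai_opt_out: bool
    consented_to_automation: bool

def decide(req: Request, user: User) -> dict:
    if user.ai_opt_out:
        return {"route": "human_queue", "reason": "user_opted_out"}
    if req.kind in MATERIAL_KINDS and not user.consented_to_automation:
        # Art. 22 posture: no solely automated decision that materially
        # affects a person without explicit consent and a review path.
        return {"route": "human_queue", "reason": "art22_no_consent"}
    return {"route": "automated", "label": "AI-assisted",
            "human_review_url": f"/review/{req.id}"}

print(decide(Request("r1", "refund_denial"),
             User(ai_opt_out=False, consented_to_automation=False)))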
Treat compute as a socio-technical dependency with environmental budgets, not just cost budgets. Schedule training jobs to lower-carbon windows and regions, and prefer more efficient models where accuracy trade-offs are acceptable—the Green AI agenda in practice. Use the ML CO₂ Impact calculator for ballpark estimates and wire a Carbon Aware SDK or equivalent into batch orchestrators so jobs “follow the sun and wind.” Publish a simple model emissions card alongside performance metrics, and report a platform-level SLO like “keep emissions per 1k inferences below X gCO₂.” Where water is scarce, coordinate with facilities and cloud providers on evaporative-cooling impacts and track liters per epoch. None of this is exotic; you can pilot it in a week and tune over a quarter. (arXiv, mlco2.github.io, GitHub)
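The scheduling half is a small loop: poll a grid carbon-intensity signal and hold the batch job until the grid is clean enough, or a deadline forces the run. The endpoint URL and response field below are placeholders for whatever your provider (the Carbon Aware SDK web API, WattTime, Electricity Maps) actually exposes:

```python
# Sketch: delay a training job until grid carbon intensity drops below a budget.
# The endpoint URL and JSON field are placeholders; substitute the real API of
# your carbon-intensity provider (e.g., the Carbon Aware SDK web API).

import json
import time
import urllib.request

INTENSITY_URL = "https://carbon-intensity.example.com/current?region=eu-west-1"  # placeholder
BUDGET_GCO2_PER_KWH = 200      # illustrative threshold; tune per region and SLO
POLL_SECONDS = 900             # re-check every 15 minutes
MAX_WAIT_SECONDS = 6 * 3600    # never hold the job past its own deadline

def current_intensity() -> float:
    with urllib.request.urlopen(INTENSITY_URL) as resp:
        return float(json.load(resp)["gco2_per_kwh"])   # placeholder field name

def run_when_clean(job) -> None:
    waited = 0
    while current_intensity() > BUDGET_GCO2_PER_KWH and waited < MAX_WAIT_SECONDS:
        time.sleep(POLL_SECONDS)
        waited += POLL_SECONDS
    job()   # run now: grid is clean enough, or the deadline was hit

run_when_clean(lambda: print("starting nightly training run"))
```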
If your model depends on external data work, write labor protections into contracts the way you write uptime into SLAs. Require pay floors, mental-health support for annotators, and transparency on subcontracting. Audit like you mean it, and publish a brief datasheet for datasets so downstream teams see provenance, consent, and appropriate use. If a vendor can’t meet the bar, don’t ship—and say so in the postmortem. This moves “responsible AI” from a brand promise to an engineering constraint. (TIME, arXiv)
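One way to make that an engineering constraint rather than a promise: a release gate that checks vendor attestations the way a pipeline checks dependency licenses. The file format and field names here are hypothetical, shaped by the contract terms above:

```python
# Sketch: fail the release if a data vendor's labor attestation is missing,
# stale, or below the contracted bar. File format and fields are hypothetical.

import json
from datetime import date

REQUIRED = {"pay_floor_usd_per_hour", "mental_health_support",
            "subcontracting_disclosed", "audited_on"}

def check_vendor(path: str, pay_floor: float = 4.0) -> list[str]:
    """Return a list of violations; empty means the vendor clears the bar.
    The pay_floor default is illustrative; use your contracted figure."""
    with open(path) as f:
        a = json.load(f)
    problems = [k for k in REQUIRED if k not in a]
    if (date.today() - date.fromisoformat(a.get("audited_on", "1970-01-01"))).days > 365:
        problems.append("audit_stale")   # audits expire like certificates
    if a.get("pay_floor_usd_per_hour", 0) < pay_floor:
        problems.append("pay_below_floor")
    return problems

violations = check_vendor("vendors/labeling-vendor.json")
if violations:
    raise SystemExit(f"Blocking release: vendor failed labor checks: {violations}")
```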
Adopt model cards for every production model, with clear intended use, evaluation slices, calibration, failure modes, and escalation paths. Keep them versioned next to code, and surface key facts in the product where users make decisions. Pair model cards with service SLOs so when a drift detector or bias check trips, it pages the same way a latency burn does. Documentation doesn’t solve ethics, but it makes accountability debuggable. (ACM Digital Library, arXiv)
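Keeping the card versioned and checkable next to the code can be as simple as a schema check on every deploy. A sketch, with required fields loosely following Mitchell et al.; the path and field names are illustrative:

```python
# Sketch: validate that every production model ships a complete, versioned
# model card. Required fields are illustrative, after Mitchell et al. (2019).

import json

REQUIRED_FIELDS = {"model_name", "version", "intended_use", "out_of_scope_use",
                   "evaluation_slices", "calibration", "failure_modes",
                   "escalation_path"}

def validate_model_card(path: str) -> None:
    with open(path) as f:
        card = json.load(f)
    missing = REQUIRED_FIELDS - card.keys()
    if missing:
        raise SystemExit(f"{path}: model card incomplete, missing {sorted(missing)}")
    # Cards live next to the code, so this runs on every deploy, not at audit time.
    print(f"{card['model_name']} v{card['version']}: card OK")

validate_model_card("models/helpdesk-assist/model_card.json")
```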
Map your architecture to a control set you can automate. The NIST AI RMF and its generative-AI profile provide a neutral scaffold for risk registers, evaluations, and incident playbooks. The EU AI Act tells you when you’re in “high-risk” territory and the transparency you owe users; GDPR and UK GDPR outline rights around automated decisions. Bake these checkpoints into CI/CD and incident response so they run on every deploy, not just at audit time. Ethics becomes muscle memory when it’s part of the pipeline. (NIST, NIST Publications, EUR-Lex, gdpr-info.eu, ICO)
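Pulled together, the gate can be one script the pipeline runs on every release, with each check mapped to the control that motivates it. The mapping below is illustrative, not an official NIST or EU AI Act control catalog, and the stub checks are where your real validators plug in:

```python
# Sketch: a deploy gate that runs compliance checkpoints on every release.
# The check-to-framework mapping is illustrative, not an official catalog;
# the lambda stubs stand in for real validators (model-card check, etc.).

CHECKS = [
    # (check name, motivating control, callable returning True on pass)
    ("model_card_present",  "NIST AI RMF: Govern/Map", lambda: True),
    ("risk_register_entry", "NIST AI 600-1 profile",   lambda: True),
    ("art22_review_path",   "GDPR Article 22",         lambda: True),
    ("transparency_labels", "EU AI Act transparency",  lambda: True),
    ("vendor_labor_audit",  "Contractual SLA",         lambda: True),
]

def deploy_gate() -> None:
    failures = [(name, control) for name, control, check in CHECKS if not check()]
    if failures:
        for name, control in failures:
            print(f"FAIL {name} (required by {control})")
        raise SystemExit("Deploy blocked: compliance checkpoints failed")
    print("All compliance checkpoints passed; proceeding with deploy")

deploy_gate()
```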
When you press advocates on practicality, many don’t expect the world to stop using AI; they expect teams to choose where AI is warranted and to reckon honestly with externalities. Opt-out modes, human review on high-stakes calls, and carbon-aware scheduling look less like symbolic gestures and more like design patterns that encode values. In this light, AI veganism acts as a forcing function, the way dietary veganism catalyzed broader norms around supply-chain transparency and humane sourcing. Recent coverage has framed it this way: a prompt to right-size our dependence on automation and to preserve human skills. (The Guardian, The Economic Times)
Critics are right about one thing: awareness without enforcement is theater. The remedy isn’t to mock the sentiment; it’s to wire it into the stack—through SLOs, runbooks, contracts, and compliance. And that’s squarely in the SRE/DevOps wheelhouse. We have tools to make ideals observable and reversible. We can ship “AI-off” without tanking conversion, and we can train models when the grid is clean. We can make a help-desk decision explainable—and reroutable to a human—before it harms someone. This is how culture shows up as code. (carnegiecouncil.org)
If a user toggles AI-off, can our system keep that promise end-to-end, including downstream vendors and retrievers, or do we silently override it when the queue grows?
When an automated decision significantly affects a person, could we reconstruct the inputs, the model version, and the human override path within an hour—because that’s what accountability looks like under modern regulation? (gdpr-info.eu, EUR-Lex)
How much kgCO₂-eq per 1,000 inferences did we emit last sprint, and what would it take to cut that by 30% with smarter scheduling, model distillation, or on-device inference? A back-of-the-envelope sketch follows these questions. (arXiv)
Whose job is it to audit the human labor behind our data pipelines, and what page do they go on if the vendor fails an ethical check the week before launch? (TIME)
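For that emissions question, the arithmetic is simple enough to keep in a runbook: energy per inference, times a PUE overhead factor, times grid carbon intensity, in the spirit of Lacoste et al.’s calculator. Every constant below is an assumption to be replaced with measured values:

```python
# Back-of-the-envelope: estimate gCO2e per 1,000 inferences.
# All constants are illustrative assumptions; substitute measured values.

energy_per_inference_wh = 0.3   # measured at the accelerator, per request
pue = 1.2                       # data-center power usage effectiveness
grid_gco2_per_kwh = 350.0       # grid carbon intensity for your region/hour

kwh_per_1k = energy_per_inference_wh * 1000 / 1000 * pue   # Wh -> kWh, incl. overhead
gco2e_per_1k = kwh_per_1k * grid_gco2_per_kwh
print(f"{gco2e_per_1k:.0f} gCO2e per 1,000 inferences")    # ~126 g with these numbers

# Cutting 30%: schedule in cleaner windows (lower grid_gco2_per_kwh),
# distill the model (lower energy_per_inference_wh), or both.
```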
AI veganism is best understood not as a prohibition but as a provocation. It asks teams to treat dignity, agency, and planetary impact as first-class reliability concerns—not marketing copy. You don’t need to boycott AI to honor that provocation. You need to engineer for it: expose choices, measure externalities, protect people, and make reversibility cheap.
SRE and DevOps have a superpower here. We already know how to turn ideals into levers—SLOs, error budgets, chaos drills, postmortems, and pipelines that keep us honest on every deploy. Apply those muscles to AI. The result won’t satisfy absolutists on either side. It will, however, deliver systems that people trust and that you can defend—to users, to regulators, and to yourself.
The Guardian — “Meet the AI vegans” — https://www.theguardian.com/commentisfree/2025/aug/06/meet-the-ai-vegans
The Economic Times — “Principles over processors: How ‘AI Veganism’ fights AI’s threat to human skills in an automated future” — https://economictimes.indiatimes.com/magazines/panache/principles-over-processors-how-ai-veganism-fights-ais-threat-to-human-skills-in-an-automated-future/articleshow/123179557.cms
Georgia Tech — “AI Veganism: Some People’s Issues with AI Parallel Vegans’ Concerns About Diet” — https://www.gatech.edu/news/2025/07/29/ai-veganism-some-peoples-issues-ai-parallel-vegans-concerns-about-diet
TIME — “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic” — https://time.com/6247678/openai-chatgpt-kenya-workers/
TIME — “150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting” — https://time.com/6275995/chatgpt-facebook-african-workers-union/
International Energy Agency — “Energy demand from AI” — https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
International Energy Agency — “AI is set to drive surging electricity demand from data centres…” — https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works
Li, Yang, Islam, Ren — “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models” — https://arxiv.org/abs/2304.03271
NIST — “AI Risk Management Framework (AI RMF) & Generative AI Profile (NIST AI 600-1)” — https://www.nist.gov/itl/ai-risk-management-framework and https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
EU — “Artificial Intelligence Act (Regulation (EU) 2024/1689)” — https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
European Commission — “AI Act: risk-based approach and application timeline” — https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
GDPR Info — “Article 22: Automated individual decision-making, including profiling” — https://gdpr-info.eu/art-22-gdpr/
UK ICO — “Automated decision-making and profiling under UK GDPR” — https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/automated-decision-making-and-profiling/
Schwartz et al. — “Green AI” — https://arxiv.org/abs/1907.10597
Lacoste et al. — “Quantifying the Carbon Emissions of Machine Learning” — https://arxiv.org/abs/1910.09700
ML CO₂ Impact Calculator — https://mlco2.github.io/impact/
Green Software Foundation — “Carbon Aware SDK” — https://github.com/Green-Software-Foundation/carbon-aware-sdk
Microsoft DevBlogs — “Carbon-Aware Kubernetes” — https://devblogs.microsoft.com/sustainable-software/carbon-aware-kubernetes/
Mitchell et al. — “Model Cards for Model Reporting” — https://dl.acm.org/doi/10.1145/3287560.3287596
Gebru et al. — “Datasheets for Datasets” — https://arxiv.org/abs/1803.09010
Carnegie Council — “Ethics washing” — https://carnegiecouncil.org/explore-engage/key-terms/ethics-washing
Reuters — “‘AI-washing’—what lawyers need to know to stay ethical” — https://www.reuters.com/legal/legalindustry/ai-washing-what-lawyers-need-know-stay-ethical-2025-02-10/
#SRE #SiteReliability #DevOps #EthicalAI #AIVeganism #ResponsibleAI #GreenAI #ModelCards #DatasheetsForDatasets #GDPR #EUAIAct #NIST #Sustainability #PlatformEngineering #OpsCulture