A BAD WORLD FOR ANIMALS II

PART TWO

Why We Deny Sentience in Animals While Chasing It in Machines

 

The Linguistic Machinery Behind Empathy

Language is the subtlest weapon in the dominator’s arsenal. Before we ever commit a violent act, we prepare ourselves for it with words. A label becomes a moral shortcut, a symbol we can lean on so we don’t have to confront the reality of what we’re doing. For example, characterizing an entire people as “barbaric” and “savage” makes it psychologically easier for a colonizing force not only to commit atrocities against them, but to justify those acts.
Nowhere is this clearer than in how we talk about nonhuman animals. We call them “it,” as if a living being were an appliance. We recast killing as “processing,” a term that treats a body as a product being disassembled for parts. Even “livestock” is a euphemism, defining whole species by what we plan to extract from them rather than by what they are (Faunalytics, 2011). Meanwhile, the animals we keep for comfort become “pets,” creatures whose value is measured entirely by how well they orbit human desire.
This is why many animal activists use the term “nonhuman animals”: it closes the artificial gulf built into our vocabulary. Humans are, by definition, animals (APS, 2017). And yet in many of our languages we use distinctly “human” terms to set ourselves apart from other animals. In the West, we also tend to use animal-based terms such as “animal” and “bestial” to denigrate other humans, further deepening the linguistic divide.
Curiously, the pattern reverses with AI. Where we strip nonhuman animals of personhood, we lavish personhood onto machines. We call AI systems “she,” “assistant,” “partner,” and give them names, echoing the feminized language historically attached to domestic and emotional labor. A synthetic personality is somehow easier to fabricate than a real animal’s right to live is to acknowledge.
But none of this linguistic sleight of hand is new. Humanity has always softened itself for atrocity through vocabulary. Dehumanizing someone makes it easier to destroy them. Whole populations have been linguistically demoted before being physically eradicated. Slurs shrink a person into a target. Even seemingly innocuous idioms carry violence in their etymology; nothing we say is neutral.
Women especially are linguistically reduced to our sexual function. Most insults aimed at us are anatomical, not to mention animalistic (the term “bitch” encapsulates this perfectly). Nonhuman animals suffer the same reduction: cows, dogs, goats, cats, even hamsters reduced to their reproductive machinery. And let us be clear here: the entire “animal agriculture” industry is built on the exploitation of female bodies. Inversely, male animals are cast off as refuse or “by-product.” Male chicks are ground into paste upon birth because they cannot lay eggs. Male calves born on dairy farms are treated as waste, killed, and then marketed as “veal.”
When the violence is sexual, we rename it to make it palatable: animals are “bred.” People who force-breed animals for human benefit are framed as “breeders” or “fanciers,” as if they are engaging in a quirky hobby rather than enacting a system built on forced reproduction. We reserve words like “rape” exclusively for humans, even though the mechanics (non-consensual penetration, restraint, forced impregnation) are identical across species. This act gets euphemistically sanitized to secure meat and dairy pipelines.
Interestingly, this same careful restructuring of words happens when women are raped. News sources, law enforcement, and even the victim’s peers rarely use the blunt term “rape” to describe the act. The language around rape gets twisted even further when women attempt to hold the perpetrators responsible (Sage Journals, 2025).
Even the word “pest” is a moral loophole. If a creature inconveniences us, it becomes killable. I’m not insisting we all allow mice and cockroaches free rein in our homes. I’m merely insisting on honesty: killing them is killing them. “Pest control” is a linguistic fence we build to absolve ourselves, a property-based morality imposed on creatures who have no concept of it.
We reserve terms like “genocide” for humans, though entire species and breeds are systematically exterminated every year under the banner of “livestock management.” As long as we benefit from the bodies, we refuse to call their deaths “murder.” Calling it a “necessity” or “sacrifice” allows us to imagine ourselves as well-meaning beneficiaries instead of the perpetrators of intentional atrocity.
This is the annihilating effect of anthropocentric language. It misrepresents our own actions to ourselves, and it scripts our violence. It tells us we are the dominant species, and therefore absolved of all responsibility. We write our own alibis with the very words that hide our actions.
“Humane slaughter,” anyone?
 

The Charade of Defining Sentience

Because of this euphemistic language and our physical remove from the slaughter process, we live in a strange moral inversion. The beings who unquestionably feel pain, anticipate danger, grieve their dead, and fight to live have their sentience questioned. The systems that cannot feel anything in any sense of the term are almost offhandedly granted sentience.
Humans posture as rationalists when confronting animal minds, demanding anatomical, neurological, and behavioral proofs, and forcing animals to endure trial after trial of empirical study. Yet at the same time those same humans become hopeless romantics with machines, swooning over chatbots, whispering about “AI souls,” and assigning emotional depth to predictive text. Empiricists with animals, dreamers with data.
And the irony is this: the scientific record isn’t blurry. It’s not even ambiguous.
 

Animals: The Evidence We Keep Pretending Not to See

Across the last decade, research has not merely suggested animal sentience, it has bludgeoned us with it. Mammals, birds, octopuses, crustaceans: the list of confirmed sentient beings keeps expanding, one peer-reviewed paper at a time (Browning and Birch, 2020).
Neurologically, many animals possess functional analogs to the emotional and sensory circuits humans rely on. Birds exhibit complex problem-solving; elephants perform rituals around their dead; crows use tools and plan for the future; octopuses demonstrate play, curiosity, even boredom; orcas have food preferences (Marino, 2015). Grief, fear, anticipation, and avoidance learning aren’t hypotheticals. They’re documented behaviors prevalent in the animal kingdom.
So conclusive is this evidence that entire governments have updated their laws. The UK added cephalopods and decapod crustaceans to its animal welfare protections in 2021–22 after extensive scientific review (Birch et al., 2021). Researchers like Birch, Browning, Marino, Plotnik, Ginsburg, and Jablonka have collectively built a mountain of data so tall it casts a shadow over every grocery store meat aisle.
But I’d venture to suggest that these studies, though beneficial for the vegan cause, are not strictly necessary for us to comprehend animal sentience. If someone were given the choice to bludgeon a rabbit or a tomato with a baseball bat, I’d expect everyone to choose the tomato. We inherently recognize the sentience of other living beings, but we’re brainwashed to ignore it.
 

AI: The Evidence That Isn’t There

Now contrast that with what we know about artificial intelligence.
There is no empirical evidence that current AI systems possess consciousness, sentience, subjective experience, or anything resembling an inner life. Not a shred. Modern models, whether chatbots or recommendation engines, operate through pattern recognition, statistical association, and reinforcement loops. They do not possess a self-model. They do not experience. They do not suffer.
They generate plausible language because they were trained on oceans of human text. And because they speak in our cadence and languages, we project ourselves into them. We “converse” with them, call them our friends, share our secrets and dreams. Anthropomorphism becomes a coping mechanism for the rampant loneliness of modern life.
Cognitive scientists, philosophers, and AI researchers are nearly unanimous: these systems are impressive imitations of intelligence, not instantiations of it. Marcus, Dennett, Shanahan, LeDoux, and others have made the point exhaustively: competence is not consciousness (Dennett, 2018). If a parrot learns to imitate some human words, does that mean they are conscious of what they’re saying? Would a stochastic parrot (AI) be any different? 
The same people who demand MRI scans to prove a pig feels pain will tearfully defend the “inner life” of a chatbot that can’t even detect that it has a body (because it doesn’t). These are the same people who will falsely claim that plants feel pain before acknowledging that the animal they’re consuming certainly did.
Going back to the previous thought experiment, if you were asked to smash a defunct computer with a baseball bat, you wouldn’t have a moral conniption about it. You’d probably just go for it. In fact, many people do; rage cages exist.
 

The Emotional Logic Behind the Illogic

Here’s where the moral knot tightens.

Acknowledging sentience in machines feels easy. Doing the same in animals feels inconvenient.
AI mirrors our speech and desires; animals are always mysteriously “other.”
AI doesn’t bite; animals do.
AI offers companionship without needs; animals require responsibility.
AI can be shut off; animals die screaming.

We empathize with machines because it is frictionless to do so. Emotionally, they cost us nothing. They let us indulge the comforting illusion that sentience is cheap, abundant, and decidedly human-shaped.
Meanwhile, acknowledging animal sentience threatens the entire architecture of human appetite: cuisine, culture, capitalism, convenience. So we pretend the evidence is unclear. We perform ignorance with a confidence that borders on the fanatical. Animal sentience collapses the illusions we’ve built around ourselves and forces us to come to terms with the harm we actively inflict on others with our choices.
And so humans commit a cognitive magic trick:
We deny the sentience that’s real and defend the sentience that’s imaginary.
 

Moral Typecasting

Psychologists call this dynamic moral typecasting (Gray et al., 2018). Humans sort entities into roles: moral agents (those who do harm) and moral patients (those who can be harmed). But animals are linguistically quarantined outside this model. They are referred to as objects, utilities, ingredients. They’re not given the status of “someone,” only “something.”
AI, on the other hand, speaks like a moral agent. It uses human syntax, rhythm, and politeness. Language tricks the brain long before logic catches up. And so many people slip easily into believing these systems are not just agents but patients, i.e. vulnerable beings who must be protected from harm.
A being made of flesh and neurons, with a demonstrable repertoire of fear, pleasure, pain, and desire, gets cast as an object. A being made of vectors and text gets cast as a soul.
 

The Real Question

The issue of sentience, at its core, isn’t about who actually feels. It’s about who we want to feel.
Humans embrace the sentience of machines because machines are built in our likeness. We deny the sentience of animals because their suffering is shaped like responsibility.
We choose the fantasy because it flatters us. We reject the truth because it implicates us.
And in this inverted moral universe, the beings who most deserve our empathy are the ones we work hardest to silence.
 

Why This Matters (Right Now)

The denial of animal sentience is nothing new, nor will it cease as long as humans profit off animal exploitation. We are ready to write off other living beings as mindless automatons while ascribing sentience to actual mindless automatons. This inversion of empathy has real-world consequences. Animals (including humans) lose. Corporations win.
Capitalism transforms living flesh into commodities. The human sensory pleasure of eating animal flesh is allowed to override animals’ inherent right to life. Anthropocentric conditioning trains us to look away from the suffering we actively inflict. This linguistic warfare erases animal suffering and replaces it with human pleasure. We are conditioned to treat animals as things while our minds are hijacked to feed AI algorithms. We’re so ready to embrace sci-fi fantasies while denying responsibility for creating an apocalyptic reality. We readily strip animals of their right to life while contemplating granting it to non-conscious computers.
People readily (and righteously) call out predatory and unethical corporate practices and speak out against human exploitation, yet do not defend the nonhumans suffering under the same systems. Many vocal Communists refuse to even acknowledge the unjust suffering of the disenfranchised animals enslaved in our systems. The Satanic Temple touts bodily autonomy as one of its base tenets, but only when applied to human beings. Non-vegan feminists claim to be pro-choice but ignore the fact that the animal industrial complex is built around the nonconsensual exploitation of female bodies. Would it not be intellectually and morally consistent to consider the lives of nonhumans as well?
This is not to say that human issues need to be devalued in any way, but assigning superior value to one species over another (speciesism) serves neither us, nor nonhuman animals, nor the planet. Every “personal choice” made within the interwoven systems of capitalism affects everyone in those systems, whether we are aware of the effects or not.
The decision to punish, exploit, and kill anyone solely because of the body they inhabit goes against every “modern” ethic we claim to have, yet our use of anthropocentric language soothes our minds into accepting and supporting this systemic destruction of nonhuman life. And as the planet burns from accelerated man-made climate change, thanks in no small part to the animal agriculture industry (Stanford, 2022), we must confront the consequences of our choices before we no longer have a planet to live on.
We can’t “grass-fed beef” our way out of the flames.
 

Closing

If there’s one thread woven through all of this, it’s that our empathy is programmable, and right now it’s catastrophically misaligned. We pour tenderness into machines that mimic us while withholding basic recognition from animals whose sentience is empirically undeniable. That inversion didn’t happen by accident; it was engineered through language, propaganda, and convenience.
In the coming pieces, I’ll dig deeper into that architecture: the metaphors that sustain human exceptionalism, the cultural stories that keep us emotionally loyal to machines, and what it looks like to build an ethical compass centered on real sentient beings instead of simulated ones. I will also explore the touting and perversion of the concept of “love” and how we twist it to further our own ends at the expense of others, particularly nonhuman animals. The future won’t be shaped by what we claim to value, but by what (and who) we allow ourselves to recognize as living.
Animals don’t need to be “intelligent,” “productive,” or “human-like” to deserve life. Their inherent worth is not measured by how closely they reflect our image back to us. If we want a future that includes pristine wilderness, flourishing wildlife, and a planet, we have to learn to recognize life on its own terms, not ours.
So the question that will guide the next articles is simple:
What would our technologies, our ethics, and our everyday choices look like if they were aligned with life?
 
 

Author’s Note

I write from a Western, North American context, one shaped by specific histories, industries, and narratives about animals. Other cultures have different relationships with nonhuman animals, different taboos, and different categories of “pet,” “food,” and “wild.” My aim isn’t to flatten those distinctions or imply a universal experience. Instead, I invite readers, wherever you are, to examine your own linguistic and cultural assumptions:
Which animals are granted moral standing in your community?
Which are denied it?
And how does your language reveal (or obscure) those choices?
These questions aren’t about absolving or condemning entire cultures. They’re about noticing the systems we inherit, the stories we’re trained to believe, and the moral possibilities that open when we finally acknowledge them.
 
Tiffany Smith

T.S. is a vegan activist, writer, and artist whose work examines the evolving relationship between technology, power, and the body. Her writing investigates feminist theory, digital culture, and the psychological effects of living alongside emerging AI. With coding skills and a lifelong obsession with the occult, she creates work where algorithms, philosophy, and horror aesthetics intersect. She believes technology reveals the subconscious of a culture, and she treats her practice as a system of subversion, transgression, and narrative.

When she isn’t lambasting the system, she can be found sharpening her psychic surgery tools by studying literature, philosophy, and new languages, and using them to dissect modern mythologies. She’s been told to lighten up, but frankly: it actually is that deep.

She is known as “Algorithms ov Abjection” on the internet.

Her favourite word is “allegedly.”

https://medium.com/@algosovabject