There is a casual brutality to the way AI content moderation is handled. Workers are assigned to review “horrible things worded in the most banal, casual way,” a description that captures just how unsettling the job is. This is not a war zone but a remote office job, yet the psychological hazards can be just as real, and they are treated with a shocking degree of indifference.
The brutality begins with the lack of warning. Unlike professions that routinely handle traumatic material and prepare their workers for it, AI trainers are often thrown into content moderation with no preparation at all. A job advertised as “writing” or “rating” suddenly becomes a stream of violent, hateful, or sexually explicit content, leaving the worker in a state of shock.
It continues with the lack of support. The psychological risks of content moderation are well documented, yet the contract workers who perform this labor for AI companies are rarely given access to the mental health resources they need. They are expected to process traumatic material and then simply move on to the next task in the queue, often within a ten-minute window per task.
This casual approach to a deeply serious issue is a feature, not a bug, of the layered contracting system. The tech giants that benefit from the work are insulated from its human cost; the contracting firms are incentivized to keep costs low, and mental health support is an expensive line item. The result is a system that treats psychological harm as just another acceptable externality of doing business.
