Five Stages of Losing Your Job — A Worker's Quiet Exit
The day workplace automation became real, one office worker moved through Kübler-Ross's five stages and arrived at a new definition of what his Job truly meant.
Stage 1: Denial
He still thought he was fine.
Monday morning, a notice appeared on the company messenger. "Workplace Automation Pilot Program." In the attached demo video, someone typed a single line on screen. One request number. That was it. Checking customer requests, gathering materials from relevant departments, data analysis, verification runs, compiling results, writing reports — everything he spent half a day doing across six different programs vanished behind a single line of input.
He scoffed. A demo is just a demo.
"Real work is different," he told a colleague. "Every customer's situation is different, the data changes every time, and there are so many edge cases." He wasn't wrong. He just missed one thing. The system hadn't created new programs. The email, Excel, internal systems, shared folders — they were all already there. The AI simply opened and used the tools the company already had. It didn't build new doors — the doors had been open all along; it was just that only humans had been walking through them.
While he was in denial, the AI was already commuting through those doors.
The essence of denial is a distortion of time perception. The judgment that "it's still far off" rests on the premise that change arrives slowly. Quarterly strategy reviews, annual roadmaps, semi-annual budget planning — measured in the company's time units, it still seemed distant. But building that system took half a day. Design, implementation, verification, documentation — all of it. Like Miller's planet in Interstellar, years had already passed on the outside, and he was still looking at the company's clock.
Stage 2: Anger
When the second demo was shared, he watched the video to the end.
When a work request came in, the AI control tower analyzed it. Which customer, which product, which conditions — all extracted automatically. Up to this point, it was what he did every morning, opening emails and scanning with his eyes.
What came next was the problem. The AI split into two roles: a data analysis agent and a quality verification agent. One analyzed recent change histories; the other checked whether verification criteria were met. Neither looked at the other's domain. They judged independently — then debated.
"The processing method changed in the recent update. It's a product issue." "The verification criteria assume the old method. It's a standards issue."
The same argument he and his colleagues had every day in the conference room — two AIs were having it. A judging AI listened to both sides and reached a conclusion. If no consensus was reached, it repeated for up to three rounds.
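The loop described above — two agents that judge independently, a judging AI that weighs both sides, and up to three rounds if no consensus is reached — can be sketched in a few lines. This is a minimal illustration, not the demo's actual implementation; every name and signature here is a hypothetical stand-in.

```python
# Illustrative sketch of the debate loop described above: two agents judge
# independently, a judging AI weighs both positions, and the exchange
# repeats for at most three rounds. All names and signatures are
# hypothetical; the article does not show the real system's interfaces.

MAX_ROUNDS = 3  # the article says debate repeats for up to three rounds

def run_debate(analysis_agent, verification_agent, judge, request):
    """Return (verdict, rounds_used) for one work request."""
    positions = {
        "analysis": analysis_agent(request),          # e.g. "product issue"
        "verification": verification_agent(request),  # e.g. "standards issue"
    }
    for round_no in range(1, MAX_ROUNDS + 1):
        verdict = judge(positions)
        if verdict is not None:  # the judge reached a conclusion
            return verdict, round_no
        # No consensus: each agent answers the other's argument,
        # still without looking at the other's domain data.
        positions = {
            "analysis": analysis_agent(request, rebuttal=positions["verification"]),
            "verification": verification_agent(request, rebuttal=positions["analysis"]),
        }
    # After the final round the judge must still pick exactly one answer.
    return judge(positions, force=True), MAX_ROUNDS
```

The key property of this shape is that neither agent ever sees the other's raw data, only the other's stated position; that is what keeps the two judgments independent before the judge intervenes.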
He was angry. To be precise — his pride was wounded.
The experience of debating with a colleague for an hour over "Is this a product issue or a standards issue?" The feeling of digging through Excel data late at night, tracking down root causes. He believed that was his expertise. But the AI in the video was doing the same thing. Faster. Quieter. Without booking a conference room. Without a single cup of coffee.
The real target of his anger wasn't the AI. It was the realization that what he'd built over ten years was "execution capability." Running Excel quickly, reading data accurately, writing reports well. Until now, that had held value. But the moment AI replaced execution, the basis for that value evaporated.
He couldn't yet articulate that the era when sitting in front of a screen equaled work was ending — but his body had already sensed it.
Stage 3: Bargaining
He attempted negotiation. Not with the AI, but with himself.
"Ultimately, humans still have to design the AI." He was right. In fact, the part that took longest in designing that system wasn't the technology. Should the analysis agent and verification agent debate, or should a single AI review both perspectives sequentially? Should each agent communicate directly with the other, or must everything go through the control tower? Should raw data be passed in full, or only analysis summaries?
These judgments could only come from someone who had actually worked with those spreadsheets, someone who had presented reports to customers, someone who had personally argued "whose responsibility is this?" in the conference room — from that field experience alone.
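The three structural questions above can be read as a small design space, and the designer's answers as a record of deliberate choices. A minimal sketch, with field names that are assumptions of this illustration rather than anything the article specifies:

```python
# Hypothetical record of the three design decisions discussed above.
# Field names and values are illustrative; the article describes the
# trade-offs, not a concrete API.
from dataclasses import dataclass

@dataclass(frozen=True)
class OrchestrationDesign:
    review_mode: str   # "debate": two agents argue; "sequential": one AI applies both lenses in turn
    routing: str       # "hub": everything passes through the control tower; "direct": agent-to-agent
    data_handoff: str  # "raw": full data passed between steps; "summary": analysis summaries only

# The system shown in the demo, as the article describes it: independent
# agents that debate, mediated by a control tower. The data_handoff value
# is not specified in the article; "summary" is an assumption here.
demo_system = OrchestrationDesign(review_mode="debate", routing="hub", data_handoff="summary")
```

The point of writing the choices down this way is that each field is a judgment call, and none of the three can be answered from the technology alone: they all depend on knowing how the work actually flows.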
So he crafted a bargain. "I'll become the architect. I'll hand execution to the AI and be the person who designs the structure."
The trap of bargaining lies in not seeing that these are limited seats. The ability to judge what to command the AI — the ability to decide how to organize AI — that was the qualification for being "the Chairman." Behind the optimism of "everyone will become a Chairman with AI assistants" hid the existential crisis of the majority who cannot become one.
And one more thing. The most capable Chairman was the person who had most recently been on the ground. When someone who doesn't know the field gives the AI a flawed structure, they get flawed conclusions and make flawed decisions. No matter how accurately the AI executes, if the design is wrong, the results are wrong too.
He was still in the field. The window for his bargain was — right now.
Stage 4: Depression
One afternoon, he looked back at what he'd done all day. He opened emails, launched three Excel files, exchanged messages with colleagues on the messenger, and wrote a report in Word. Six program windows. The same pattern every time, the same clicks and keystrokes.
And he realized. What he was doing wasn't work — it was program operation.
Ever since computers entered the office, there had been an unspoken pact between humans and machines. If humans learn how to use the machine, the machine will help with human work. He learned to type, mastered Excel functions, internalized ERP systems, trained on video conferencing tools. Login, menu, button — all of these were acts of humans taking half a step toward the machine.
For twenty years, he had been taking that half step. And he called it "professional competency."
Deep within his depression lay this question. For the past twenty years, the digital economy ran on a structure of gathering users' attention and selling it to advertisers. Session time, daily active users, click-through rates: how long and how much human attention was captured? But the moment AI handles execution, there are no longer any eyes to show things to. Does an AI need a pretty screen? A dashboard? The AI reads data, runs analysis, and outputs results. The only thing a human needs to see is one final report.
The "eyes" that the $600 billion global digital advertising market depends on — and his reason for existence, built on operating screens in front of those eyes for a salary — were quietly evaporating.
This wasn't work becoming easier. It was the quiet but violent extinction of the mediating role between humans and machines. And it was also the extinction of a person who had built an entire career on that role.
Stage 5: Acceptance
One morning, on his commute, he thought: In an era when AI automatically connects to company systems and starts working, why am I commuting by subway?
Acceptance was not resignation. It was a recalibration of coordinates.
In that system, the judging AI picked only one. Product issue or standards issue. It didn't list options — it reached a conclusion. In the search era, ten results sat side by side on screen, but in the AI's judgment process, only one remained. Which data the analysis agent looks at first, which criteria the verification agent applies first, how many rounds of debate are allowed. All of these judgment criteria were set by the human who designed the system.
What perspective gets planted inside the AI's judgment process — this was the core of the new competition. And the person with the advantage in that competition was the one who knew the field.
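What "planting a perspective" looks like in practice is mundane: the defaults a designer hard-codes decide what each agent consults first and how long the debate may run. A hypothetical fragment, where every name is illustrative rather than taken from the article's system:

```python
# Hypothetical designer-set defaults. Each constant is one of the judgment
# criteria the article says are set by the human who designed the system.
ANALYSIS_SOURCE_PRIORITY = ["recent_change_log", "customer_report", "raw_process_data"]
VERIFICATION_CRITERIA_PRIORITY = ["updated_standard", "legacy_standard"]
MAX_DEBATE_ROUNDS = 3        # how many rounds of debate are allowed
SINGLE_VERDICT_ONLY = True   # the judge reaches one conclusion, never a list of options

def order_sources(available):
    """Return the available data sources in the designer's priority order.

    Whatever the designer listed first is what the analysis agent
    "looks at first"; sources the designer never anticipated go last.
    """
    rank = {name: i for i, name in enumerate(ANALYSIS_SOURCE_PRIORITY)}
    return sorted(available, key=lambda s: rank.get(s, len(rank)))
```

None of these constants are visible in the final report, which is exactly the article's point: the perspective is baked in before the first request number is ever typed.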
He took inventory. What he'd lost and what he hadn't.
What he lost. The monopoly on execution. The existential proof that came from deftly navigating six program windows. The premium on processing faster, analyzing more accurately, writing better.
What he hadn't lost. The instinct for why those six windows opened in that order. The judgment to decide how to organize AI. And the sole source of that judgment — the experience of having done the work himself.
The next day at work, he opened a document to design the AI team's workflow structure for the first time. The moment he entered a request number, the analysis team and verification team each began their work. He waited for results. When reports came in, he reviewed, judged, and made decisions.
The AI executed. He designed the strategy.
Epilogue: There Is No Sixth Stage
The Kübler-Ross model has no sixth stage. What comes after acceptance is each person's own.
The point where economic value is created is shifting from "human eyes" to "AI execution." This transition changes screens, advertising, platform power, and the way we work — all at once. And the smallest unit of that change is not a large corporation, but one person who knows the field and their AI team.
What he accepted wasn't the loss of his Job. It was the redefinition of his Job.
But one thing was clear. Those who read this and think "it's still far off" are at Stage 1. Those who feel "it's unfair" are at Stage 2. Those bargaining "I'm different" are at Stage 3.
What stage are you at right now?