The AI Job Replacement Narrative Is Finally Falling Apart
While tech leaders declare coding 'solved,' employment data, code quality studies, and high-profile corporate reversals are painting a very different picture.
OpenCode co-founder Dax Raad’s Valentine’s Day tweet—a six-bullet dismantling of the AI productivity fantasy—racked up 793,000 views and 22,000 Reddit upvotes in days, striking a nerve that Silicon Valley boardrooms would rather you ignore.
Posted on February 14, the tweet landed like a grenade in an industry drunk on its own hype. While Anthropic’s head of Claude Code was telling Lenny Rachitsky that “coding is largely solved,” Raad—the man behind the fastest-growing open-source AI coding tool in the world—offered a brutally different take on what AI is actually doing inside real organizations.
everyone's talking about their teams like they were at the peak of efficiency and bottlenecked by ability to produce code
here's what things actually look like
– your org rarely has good ideas. ideas being expensive to implement was actually helping
The post exploded across Reddit’s r/ExperiencedDevs under the title “An AI CEO finally said something honest,” collecting over 22,000 upvotes. It landed on Hacker News, was covered by half a dozen outlets, and shared across LinkedIn and Lemmy. The reaction wasn’t surprise—it was relief. Thousands of developers finally heard someone in a position of authority say what they had been experiencing firsthand.
And this is where the AI job replacement narrative starts to unravel.
The “Coding Is Solved” Irony
Just five days after Raad’s tweet, Boris Cherny—the creator and head of Claude Code at Anthropic—appeared on Lenny’s Podcast and declared that “coding is largely solved.” He claimed he hasn’t edited a single line of code by hand since November 2025, that he shipped 259 pull requests in a single month with every line written by Claude, and that the title “software engineer” will start going away by the end of 2026.
These are bold claims from the person in charge of a product whose public GitHub repository currently has over 5,400 open issues. For a tool that has allegedly solved coding, that is a considerable backlog of unsolved problems.
Meanwhile, Anthropic is offering total compensation packages exceeding $900,000 to attract software engineers—the very profession Cherny suggests is becoming obsolete. The contradiction writes itself.
The issue isn’t whether AI coding tools are useful. They demonstrably are. The issue is the gap between the narrative being sold and the reality being lived. And that gap is widening.
What the Data Actually Shows
The Bureau of Labor Statistics projects software developer employment to grow 15% between 2024 and 2034—significantly faster than the national average—with approximately 129,200 new openings per year. As of 2024, the field employs 1.7 million people in the United States alone.
The World Economic Forum’s Future of Jobs Report 2025, surveying over 1,000 employers across 55 economies, projects a net gain of 78 million jobs by 2030—170 million created against 92 million displaced. AI and data processing roles alone account for 11 million new positions versus 9 million replaced.
But the most revealing data isn’t about job counts. It’s about code quality.
– 45% of AI-assisted development tasks introduce critical security flaws, according to Veracode’s 2025 GenAI Code Security Report. Java projects hit a 72% security failure rate.
– AI-generated pull requests contain 1.7x more issues than human-written ones, with performance problems appearing nearly 8x more frequently, per CodeRabbit’s analysis of 470 real-world PRs.
– Code churn has doubled since 2021, with AI-generated code showing a 41% higher revert rate than human-written code, according to GitClear’s analysis of 211 million lines of code.
– 59% of developers admit to using AI-generated code they don’t fully understand, per a Clutch survey of 800 software professionals.
The Stack Overflow 2025 Developer Survey tells the trust story plainly: while 84% of developers now use AI tools, only 3% report “highly trusting” AI output, and 46% actively distrust its accuracy. Favorable sentiment toward AI tools dropped from 72% to 60% in a single year.
Stanford researchers found something even more concerning. A study led by Dan Boneh demonstrated that developers with AI assistant access wrote significantly less secure code than those without—and were simultaneously more confident that their code was secure. The danger isn’t just bad code. It’s bad code shipped with false confidence.
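The failure mode these studies describe is concrete. As a minimal sketch (hypothetical code, not drawn from any of the cited reports), here is one of the most common patterns security audits flag in generated code, string-built SQL, next to the parameterized version that fixes it:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: user input is interpolated directly into the
    # SQL string, so crafted input can change the query's meaning.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value, so a payload
    # like "x' OR '1'='1" is treated as a literal string, not as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 rows: injection succeeded
print(len(find_user_safe(conn, payload)))    # 0 rows: input treated literally
```

Both functions look plausible in a code review, which is exactly the point of the Stanford finding: the insecure version ships with the same confidence as the secure one.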
The Constraint That Made Great Software
Raad’s most incisive observation was about constraints. When implementation is expensive, organizations are forced to think. They debate trade-offs, question whether a feature justifies its maintenance cost, and kill mediocre ideas early because every mediocre idea carries a real opportunity cost.
The products we know and love didn’t win because they generated code faster than everyone else. They won because they made sharp product decisions under constraints—picked a narrow problem, executed well, and said no to a thousand tempting extensions.
That constraint is disappearing. And most people are celebrating it as pure upside.
When the marginal cost of shipping a feature drops close to zero, discipline collapses. If a feature takes ten minutes to generate and an hour to patch, the temptation to try five variations becomes irresistible. The result is predictable: well-thought-out features replaced by half-baked ideas that become a maintenance nightmare.
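The arithmetic can be made concrete with a toy total-cost-of-ownership model. All the numbers below are invented for illustration; the shape of the result is what matters: generation gets cheap, but every shipped variant keeps accruing maintenance time.

```python
# Toy model with made-up numbers: once generation is nearly free,
# maintenance dominates the total cost of ownership.
def total_cost(build_hours, maint_hours_per_month, months=24):
    return build_hours + maint_hours_per_month * months

# One deliberately designed feature vs. five quick AI-generated variants
# (assumed: the variants are cheaper to build but costlier to maintain).
deliberate = total_cost(build_hours=40, maint_hours_per_month=2)
five_variants = 5 * total_cost(build_hours=1, maint_hours_per_month=4)

print(deliberate)     # 88 hours over two years
print(five_variants)  # 485 hours over two years
```

Under these assumptions, the "cheap" variants cost more than five times as much over two years, even though each one took an hour to ship.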
As Raad put it in a follow-up: OpenCode itself is built by 2.5 people using AI tools—“nothing crazy, no skills, no plugins.” His pointed question: why do companies claiming to have solved software need ten times that headcount?
Jevons Paradox: More Efficiency, More Demand
Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of. https://t.co/omEcOPhdIz
In 1865, economist William Stanley Jevons observed that as steam engines grew more efficient at burning coal, England didn’t use less coal—it consumed dramatically more, because cheaper energy fueled entirely new applications. The pattern has repeated across every major technological shift since, and the Jevons Paradox is now playing out in software development.
Consider the historical record. When ATMs were introduced, the number of bank tellers in the United States actually increased from roughly 500,000 to nearly 600,000 between the 1980s and 2010. Tellers per branch dropped, but banks responded by opening 43% more urban branches. When spreadsheets automated tedious calculations, bookkeeper demand fell 44%—but demand for accountants, auditors, and financial analysts surged nearly 4:1.
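The mechanism can be sketched with a constant-elasticity demand model (the constants here are illustrative, not estimates): when demand is elastic, halving the effective cost of software raises total consumption, and even total spend, rather than lowering it.

```python
# Toy Jevons-paradox model: demand as a function of effective price,
# Q = k * p**(-e), with illustrative constants k and e.
def demand(price, k=100.0, elasticity=1.5):
    return k * price ** (-elasticity)

price_before, price_after = 1.0, 0.5   # efficiency halves the effective cost
q_before = demand(price_before)
q_after = demand(price_after)

spend_before = price_before * q_before
spend_after = price_after * q_after

print(q_after / q_before)          # ~2.83x more consumption
print(spend_after / spend_before)  # ~1.41x more total spend, despite cheaper units
```

With elasticity above 1, consumption grows faster than the price falls, which is Jevons’s coal result restated: cheaper software means more software, not less software work.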
Google’s Addy Osmani documented this pattern in AI’s context with his Efficiency Paradox analysis. Teams that adopt AI tools don’t shrink engineering headcount—they expand product surface area. A three-person startup maintaining one product now maintains four. An enterprise team experimenting with two approaches now tries seven. Internal tools that failed cost-benefit analysis at two weeks of development time suddenly become viable at three hours.
Entire industries that previously relied on spreadsheets and email threads now demand custom dashboards and lightweight applications. The easier it is to create software, the more software gets created—and all of it needs to be maintained, secured, debugged, and eventually replaced by someone who understands what it actually does.
The AI Washing Problem
On February 19, Sam Altman stood at India’s AI Impact Summit and acknowledged that companies are “AI washing” their layoffs—blaming artificial intelligence for workforce reductions that would have happened regardless.
The data supports him. A National Bureau of Economic Research study found that 90% of surveyed executives said AI has had no impact on employment over the past three years. An Oxford Economics report from January 2026 concluded that many layoffs attributed to AI were actually the result of past overhiring, with companies dressing up workforce reductions as forward-thinking strategy rather than admitting to bloated headcounts.
Tom Davenport, a Babson College professor who surveyed over 1,000 executives in late 2025, found that while many organizations had made cuts based on the promise of AI, only 2% were making cuts related to the actual implementation of AI. The distinction matters enormously.
Block’s February 26 announcement offered a textbook case. Jack Dorsey cut approximately 4,000 employees—nearly 40% of Block’s workforce—citing AI as the primary driver. Yet Block’s Q4 2025 gross profit was up 24% year-over-year, reaching $2.87 billion. The company’s workforce had ballooned from 3,835 in 2019 to over 12,500 by 2022 during a hiring spree that had nothing to do with AI capabilities. Bloomberg investigated within days, questioning whether AI was the true driver or convenient cover for a long-overdue correction.
IBM took the opposite approach. On February 12, the company announced it would triple entry-level hiring in the United States in 2026—including software developers. IBM’s CHRO Nickle LaMoreaux argued that eliminating junior roles might improve short-term efficiency but creates catastrophic talent gaps. Companies that stop hiring early-career workers will eventually need to poach mid-level employees at a 30% premium, and those external hires won’t know the culture, the systems, or the institutional knowledge that makes organizations function.
Her bet: companies that double down on entry-level hiring now will outperform the competition in three to five years.
When Replacement Strategies Backfire
The corporate world is already producing case studies in what happens when organizations mistake AI tools for human replacements.
Klarna eliminated approximately 700 customer service positions in 2023-2024, replacing them with an AI chatbot. CEO Sebastian Siemiatkowski publicly boasted that the AI was doing the work of 700 humans. Then customer satisfaction collapsed. Complaints about robotic responses, inflexible scripts, and infinite loops mounted until Siemiatkowski admitted that cost had been a “too predominant evaluation factor.” By May 2025, Klarna was rehiring humans.
McDonald’s pulled its AI-powered drive-through ordering system from 100 U.S. restaurants after viral videos showed machines repeatedly adding impossible quantities of items to orders. IBM itself had previously laid off roughly 8,000 workers to deploy an AI HR bot, only to discover the system couldn’t handle tasks requiring empathy or subjective judgment, and eventually rehired many of those positions.
Gartner now predicts that 50% of organizations expecting to significantly reduce their workforce using AI will abandon those plans by 2027. As of April 2025, even the best AI agents could only finish 24% of jobs assigned to them.
Why Fundamentals Matter More Than Ever
Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the…
AI pioneer Andrew Ng called the advice to stop learning programming “some of the worst career advice ever given.” He recalled that in 1960, Nobel laureate Herbert Simon predicted programming would become extinct because computers would program themselves. Six decades later, that prediction hasn’t aged well.
Coding, despite all the mythology around it, is the mechanical part of software engineering. The hard part is modeling the problem correctly—defining boundaries, understanding data flow, anticipating failure modes, and designing systems that survive change. When you write code yourself, you simulate the system in your head. You understand why a function exists, what assumptions it makes, what state it mutates, and what happens when inputs are invalid. That mental simulation is the training.
When that layer gets outsourced to an AI, the training disappears. You still ship features and close tickets, but you stop exercising the muscles required to reason about complexity. And as Raad observed, the two people on your team who actually understand the system are now also responsible for reviewing and maintaining the slop code everyone else is producing. They will burn out, and they will leave.
The Atlassian 2025 State of Developer Experience Report quantified the paradox perfectly: 68% of AI-using developers reported saving ten or more hours per week, but 50% simultaneously reported losing ten or more hours per week to organizational inefficiencies. The net productivity gain is effectively zero. The bottleneck was never typing speed. It was—and remains—bureaucracy, communication, architecture decisions, and the dozen other realities of shipping something real.
Even when AI produces code faster, you are still bottlenecked by code review, QA, compliance, deployment pipelines, stakeholder alignment, and user feedback. Generating another thousand lines in thirty seconds doesn’t compress a two-week approval cycle. It just gives you more code to review, more edge cases to test, and more surface area to secure.
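A back-of-the-envelope, Amdahl’s-law-style calculation makes the point. The stage durations below are assumptions chosen for illustration: when coding is only a small slice of the end-to-end cycle, even a 10x speedup in that one stage barely moves delivery time.

```python
# Assumed stage durations for one feature, in days (illustrative only).
stages = {"coding": 3.0, "review": 4.0, "qa": 3.0, "approval": 10.0}

baseline = sum(stages.values())  # 20.0 days end to end

# Speed up only the coding stage by 10x; everything else is unchanged.
with_ai = baseline - stages["coding"] + stages["coding"] / 10
speedup = baseline / with_ai

print(baseline)           # 20.0
print(with_ai)            # 17.3
print(round(speedup, 2))  # 1.16: a 10x faster coding stage yields ~16% overall
```

This is the Atlassian finding in miniature: the hours saved at the keyboard are swallowed by the stages that were always the real constraint.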
The Bottom Line
The AI hype machine wants you to believe that software developers are months away from obsolescence. The data tells a fundamentally different story. Employment is projected to grow 15% over the next decade. Companies that replaced human workers with AI are hiring them back. Code quality metrics are deteriorating as AI adoption increases. And the overwhelming majority of executives who actually implemented AI report no meaningful impact on headcount.
AI is a genuinely powerful tool. It accelerates prototyping, automates boilerplate, and helps experienced developers move faster on well-defined problems. But a tool is exactly what it is—not a replacement for the judgment, domain knowledge, and architectural thinking that separate functional software from production-grade systems.
The professionals who will thrive aren’t those who outsource their thinking to a language model. They are the ones who understand the fundamentals deeply enough to know when the AI is wrong, why it’s wrong, and how to fix it. They treat AI as a force multiplier for existing expertise rather than a substitute for acquiring it.
The constraint that made great software was never code generation speed. It was the discipline to think before building. Removing that constraint doesn’t create better products—it creates more products that nobody needs, built on foundations that nobody understands, maintained by teams that can no longer tell the difference.