Will AI Coding Kill Traditional Programming?

AI coding models from OpenAI and Anthropic are rewriting the rules of software development, automating everything from bug fixes to new features in production apps. But as tech leaders hype a future where “coding as a profession” may disappear, many developers and researchers warn that the risks to code quality, security, and burnout are being badly underestimated.
In this article, we unpack the AI-first workflows emerging at companies like Spotify, the bold claims from AI executives, and the growing backlash from engineers worried about technical debt and long-term maintainability.
The New AI-First Coding Workflow
From IDEs to Chat: How Developers Now “Write” Code
At some companies, senior engineers are no longer spending their days inside traditional IDEs. Instead, they are orchestrating work through conversational interfaces where AI models generate, refactor, and ship code.
Key aspects of these AI-first workflows include:
Developers describe features or bugs in natural language, often in chat tools.
AI models generate patches, tests, and documentation automatically.
Engineers review and approve changes, then promote them to production with minimal manual editing.
The feedback loop becomes: prompt → code → quick review → deploy.
This flips the traditional programming model on its head: human engineers shift from being primary authors of code to reviewers, editors, and system designers.
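To make that loop concrete, here is a minimal sketch in Python of what a prompt → patch → review → deploy cycle can look like. It is an illustration only: generate_patch is a placeholder for whichever coding model a team uses (it is not a real vendor API), and real pipelines add far more checks before anything reaches production.

```python
# Minimal sketch of a prompt -> patch -> review -> deploy loop.
# generate_patch() is a stand-in for whatever coding model an organization
# uses; it is NOT a real API of any specific vendor.

import subprocess

def generate_patch(task_description: str) -> str:
    """Placeholder: ask a coding model for a unified diff that addresses the task."""
    raise NotImplementedError("wire this up to your model provider of choice")

def apply_and_test(diff_text: str) -> bool:
    """Apply the proposed diff and run the test suite; return True if it passes."""
    subprocess.run(["git", "apply", "-"], input=diff_text, text=True, check=True)
    result = subprocess.run(["pytest", "-q"])
    return result.returncode == 0

def review_loop(task_description: str) -> None:
    diff_text = generate_patch(task_description)
    print(diff_text)  # a human reads the proposed change before anything else happens
    if not apply_and_test(diff_text):
        print("Tests failed; send feedback to the model or escalate to a human.")
        return
    if input("Approve and deploy? [y/N] ").lower() == "y":
        subprocess.run(["git", "commit", "-am", f"AI-assisted: {task_description}"], check=True)
        # deployment would normally be triggered by CI, not run from a laptop
```

Even in this toy version, the human sits at two chokepoints: reading the diff and approving the deploy.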
Spotify’s AI-Powered Development Pipeline
Spotify has become one of the most cited examples of this shift. Co‑CEO Gustav Söderström has described a workflow where engineers use an internal system to talk to an AI coding assistant directly from Slack. They can:
Ask the AI to fix a bug in the iOS app during their commute.
Receive a new build of the app directly in Slack for testing.
Merge the change to production before they even arrive at the office.
The striking detail is that some of Spotify’s most senior engineers reportedly “have not written a single line of code since December,” relying instead on AI to implement their decisions while they focus on architecture, product direction, and review.
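The pattern is easy to picture even without access to Spotify's internal tooling. The sketch below is purely hypothetical and is not a description of Spotify's system: a small Flask endpoint receives a Slack slash command and hands the bug description to a placeholder coding-agent queue.

```python
# Hypothetical sketch of the "fix a bug from Slack" pattern described above.
# This is NOT Spotify's internal system; it only shows the general shape.
# Slack slash commands POST form fields such as "text" and "user_name".

from flask import Flask, request, jsonify

app = Flask(__name__)

def enqueue_fix_request(description: str, requested_by: str) -> str:
    """Placeholder: hand the bug description to a coding agent / build pipeline.
    Returns a job id. Wire this to your own queue and model provider."""
    return "job-placeholder"

@app.post("/slack/fix-bug")
def fix_bug():
    description = request.form.get("text", "")
    requester = request.form.get("user_name", "unknown")
    job_id = enqueue_fix_request(description, requester)
    # Slack displays this reply where the command was issued.
    return jsonify({
        "response_type": "ephemeral",
        "text": f"Got it, {requester}. Working on: '{description}' (job {job_id}). "
                "A test build will be posted here when it is ready.",
    })

if __name__ == "__main__":
    app.run(port=3000)
```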
OpenAI vs Anthropic: The Race to Own AI Coding
OpenAI’s GPT‑5.3‑Codex: “The Model That Helped Build Itself”
OpenAI’s GPT‑5.3‑Codex is positioned as a breakthrough coding model, and the company claims it played an active role in its own development. According to OpenAI, the model:
Diagnosed issues in its own training runs.
Helped debug evaluation pipelines and test harnesses.
Assisted engineers in analyzing failures and refining training strategies.
That marketing line—“the first model that was instrumental in creating itself”—has fueled both excitement and unease. It suggests a future where AI systems increasingly participate in their own design, making it harder for humans to fully understand the resulting stack.
Anthropic’s Claude Opus 4.6: Strong on Enterprise Code and Finance
Anthropic’s Claude Opus 4.6 is pitched less as a flashy stunt and more as a practical, enterprise-ready model for working across large codebases. The company highlights:
Leading scores on financial analyst benchmarks, suggesting strong reasoning on complex, domain-heavy tasks.
Improved handling of long contexts, enabling refactoring and analysis of large repositories.
Better reliability when navigating multiple files and intertwined business logic.
Together, GPT‑5.3‑Codex and Claude Opus 4.6 illustrate the competitive race: models that are not just “autocomplete on steroids,” but strategic tools for engineering managers, financial analysts, and product teams.
The Hype: “Coding Will End by 2026”
Viral Essays and Bold Predictions
Entrepreneur Matt Shumer’s essay “Something Big Is Happening” lit up X with tens of millions of views. He argued that:
AI can now handle nearly all of his technical work.
The disruption will be “much bigger than Covid.”
Traditional engineering roles will rapidly be transformed or displaced.
This narrative aligns with eye‑catching statements from high‑profile tech figures:
Mustafa Suleyman, CEO of Microsoft AI, has suggested that most white‑collar tasks could be automated within roughly 18 months.
Elon Musk has claimed that “coding as a profession will effectively end by the conclusion of 2026.”
These predictions play well on social media and in boardrooms, where leaders are under pressure to “do something with AI” to impress investors and analysts.
Why Executives Love the Story
For executives, the upside of AI‑first development looks irresistible:
Reduced time‑to‑market for new features.
Lower apparent engineering costs per unit of output.
Easy narratives for shareholders about efficiency and innovation.
The ability to reframe software development as a high‑leverage, prompt‑driven process.
However, the story sounds very different when you ask researchers focused on safety, reliability, and long‑term maintainability.
The Pushback: Reliability, Security, and Technical Debt
Gary Marcus and the Case Against AI Coding Hype
NYU emeritus professor Gary Marcus has emerged as one of the most vocal critics of the current wave of AI‑coding enthusiasm. In a Substack post and interview with Business Insider, he argues that:
AI hallucinations remain a serious, unsolved problem.
AI‑generated code can appear correct while hiding subtle, dangerous bugs.
Security vulnerabilities are often introduced by code that looks plausible but has never been properly threat‑modelled.
Claims that “AI can do all my technical work” echo past tech promises—like Musk’s forecast of one million robotaxis by 2020—that never materialized.
Marcus’s central point is not that AI is useless, but that uncritical adoption, driven by hype and FOMO, risks creating enormous, invisible liabilities in the software we depend on.
Evidence from Developers: “Looks Right, Fails Later”
Real‑world data is starting to back up those concerns. Studies of developer experiences with AI assistants have found:
88% of developers report negative impacts on technical debt from AI‑assisted coding.
53% specifically mention code that “looked correct but was unreliable.”
This matches what many engineers feel on the ground: AI tools are fantastic at generating boilerplate and “first drafts,” but they can encourage shallow review and overconfidence—especially in teams where management wants visible speed gains.
Technical Debt in the Age of AI
More Code, Less Refactoring
GitClear’s analysis of 153 million lines of code between 2021 and 2024 produced some worrying trends in AI‑assisted repositories:
Code duplication increased by 48%.
Refactoring activity dropped by 60%.
That pattern makes sense: AI is very good at generating fresh snippets that solve the immediate problem, but not as good at spotting that three similar functions should be merged, or that a shared abstraction is overdue. The result is a sprawling codebase that “works” today but becomes painfully rigid tomorrow.
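Teams do not need GitClear to spot this trend in their own repositories. The following sketch is an illustration, not GitClear's methodology: it fingerprints Python function bodies with the standard ast module and flags bodies that appear more than once, and tracking that count over time gives a rough duplication signal.

```python
# Rough duplication check: hash normalized function bodies across a repo
# and report any body that appears more than once. Illustrative only;
# this is not how GitClear computes its published figures.

import ast
import hashlib
import pathlib
from collections import defaultdict

def function_fingerprints(path: pathlib.Path) -> list[tuple[str, str]]:
    tree = ast.parse(path.read_text(encoding="utf-8"))
    fingerprints = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # ast.dump normalizes away formatting and comments, so identical
            # logic with different whitespace hashes to the same digest.
            body_dump = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            digest = hashlib.sha256(body_dump.encode()).hexdigest()
            fingerprints.append((digest, f"{path}:{node.name}"))
    return fingerprints

def report_duplicates(repo_root: str) -> None:
    seen: dict[str, list[str]] = defaultdict(list)
    for path in pathlib.Path(repo_root).rglob("*.py"):
        try:
            for digest, location in function_fingerprints(path):
                seen[digest].append(location)
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that do not parse cleanly
    for locations in seen.values():
        if len(locations) > 1:
            print("Possible duplicates:", ", ".join(locations))

if __name__ == "__main__":
    report_duplicates(".")
```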
Long‑Term Risks for Engineering Teams
Growing technical debt in AI‑generated code can lead to:
Slower feature delivery in the long run as complexity accumulates.
More production incidents due to hidden edge cases.
Difficulty onboarding new developers who must navigate messy or inconsistent patterns.
Increased reliance on the same AI tools to understand, explain, and patch the code they originally created.
In other words, AI can create a self‑reinforcing dependency loop: the more you lean on it to move fast now, the more you may need it just to stay afloat later.
Burnout and the Human Cost of “Vibe Coding”
Productivity Gains… at What Price?
Some studies link AI adoption to changes in developer wellbeing:
AI tools can temporarily boost perceived productivity, making engineers feel they “should” get more done.
Organizations often respond by increasing expectations, accelerating deadlines, and shrinking teams.
Developers report heightened pressure and feeling constantly behind, even as output metrics improve.
The result? Short‑term gains, long‑term exhaustion.
“Three Productive Hours a Day”
Veteran engineer Steve Yegge has argued that even with powerful tools, “vibe coding at max speed” is sustainable only for a few hours per day. Beyond that, cognitive fatigue sets in:
Reviewing AI‑generated code still requires deep focus.
Context‑switching between prompts, reviews, and deployments is mentally taxing.
Constantly second‑guessing whether the AI missed something critical adds hidden stress.
Treating AI as a way to push developers to 8–10 hours of high‑intensity output is a recipe for burnout and attrition, not long‑term productivity.
Will AI Replace Programmers—or Redefine Them?
Marcus’s Long View: A Century, Not a Year
Despite all the headlines, Gary Marcus takes a more measured stance:
He believes AI will likely replace most human labor over the next century.
He is highly skeptical that this will happen “over the next year or two.”
This timeline matters. If AI is a 100‑year transformation rather than a 2‑year revolution, then the right strategy for companies is careful integration and upskilling—not mass layoffs and blind automation.
What Future Developers Might Actually Do
Instead of vanishing, programming roles are more likely to evolve. Future engineers may:
Spend less time typing code and more time designing systems and architectures.
Act as “AI conductors,” orchestrating multiple models and tools.
Focus on verification, testing, security, and reliability rather than raw implementation.
Become stewards of long‑lived codebases, ensuring that AI‑generated changes don’t break core invariants.
For developers and students today, that suggests a practical response: double down on fundamentals, system design, testing, and security—skills that remain essential even when an AI can write the initial function for you.
How Teams Can Use AI Coding Tools Safely
Practical Guidelines for Engineering Leaders
If you’re adopting AI coding tools in your organization, consider these guardrails:
Define AI‑appropriate tasks
Use AI for boilerplate, migration scripts, test scaffolding, and documentation; avoid handing it security‑critical or safety‑critical code without rigorous review.
Keep humans in the loop
Require human review for all AI‑generated changes, especially in core services and shared libraries (a minimal sketch of such a merge gate follows this list).
Track technical debt deliberately
Monitor duplication, code complexity, and refactoring rates; don't just celebrate lines of code or sprint velocity.
Invest in testing and tooling
Strengthen automated testing, static analysis, and security scanning to catch issues AI might miss.
Protect developer wellbeing
Use AI to reduce drudgery and context‑switching, not to ramp up unrealistic output targets.
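As one concrete example of keeping humans in the loop, a merge gate can refuse to pass until an AI-labeled change has at least one human approval. The sketch below is generic: it assumes the CI system exposes the pull request's labels and approving reviewers through two invented environment variables, PR_LABELS and PR_APPROVERS, so the plumbing must be adapted to whatever your platform actually provides.

```python
# Hypothetical pre-merge gate: block AI-labeled changes with no human approval.
# Assumes the CI system exports PR_LABELS and PR_APPROVERS as comma-separated
# environment variables; both names are invented for this sketch.

import os
import sys

AI_LABEL = "ai-generated"   # label applied when a model authored the change
BOT_SUFFIX = "[bot]"        # convention for distinguishing bot reviewers

def main() -> int:
    labels = {l.strip() for l in os.environ.get("PR_LABELS", "").split(",") if l.strip()}
    approvers = {a.strip() for a in os.environ.get("PR_APPROVERS", "").split(",") if a.strip()}
    human_approvers = {a for a in approvers if not a.endswith(BOT_SUFFIX)}

    if AI_LABEL in labels and not human_approvers:
        print("Blocked: AI-generated change has no human approval yet.")
        return 1
    print("Merge gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```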
How Individual Developers Can Stay Relevant
For individual engineers, survival and success in the AI era look like:
Learning to write excellent prompts and review AI output critically.
Developing strong debugging and reasoning skills, not just memorizing syntax.
Building expertise in domains (finance, healthcare, embedded systems, etc.) where contextual understanding is crucial.
Staying curious and adaptable as tools evolve.
Conclusion: Beyond the Hype
AI coding models from OpenAI, Anthropic, and others are not a gimmick; they are already reshaping how software is built at leading companies. They offer genuine speed and convenience, but they also introduce real risks around technical debt, reliability, security, and human burnout.
Traditional programming is unlikely to vanish overnight—but the nature of the job is changing fast. The teams and developers who will thrive are those who treat AI as a powerful but fallible partner, adopt it thoughtfully, and keep humans firmly in charge of quality, ethics, and long‑term design.