AI#027 · December 16, 2025 · 6 min read

What Software Engineering Looks Like When AI Writes the Code

Two years ago, AI coding assistants were autocomplete on steroids. Today, they write entire functions, debug complex issues, and generate test suites. The software engineering profession is in the middle of a genuine transformation, and the people inside it are divided on whether that's good news.


What has actually changed

GitHub Copilot data suggests that AI now contributes to over 40% of the code committed by developers who use it. More importantly, the nature of AI contribution has shifted. Early versions suggested line completions. Current versions can take a natural language specification and generate a working implementation with error handling, tests, and documentation.

For experienced engineers, this is a genuine productivity multiplier. Tasks that used to take a day take a few hours. Code review bandwidth expands because engineers spend less time writing boilerplate. Senior engineers can hold more complexity in their heads because AI handles the mechanical translation from logic to syntax.

The skills that are shifting in value

The skills gaining value are precisely the ones that were historically hardest to teach. System design, architectural judgment, weighing tradeoffs among performance, maintainability, and security: these require the kind of contextual knowledge that current AI models handle poorly. An AI can write a function. It struggles to reason about whether that function belongs in the system at all.

The skills losing value are those that were easiest to learn but most time-consuming to apply: writing boilerplate, looking up syntax, implementing standard patterns. Junior developers historically spent years building these skills as a foundation. That foundation is being automated, creating real questions about how the next generation of senior engineers develops.

The security and quality problem

AI-generated code has a documented tendency to reproduce common security vulnerabilities. Studies have found that code generated by popular AI assistants contains exploitable security flaws at rates comparable to code written by junior developers without security training. The speed advantage is real. The quality verification burden shifts to the engineer reviewing the output.
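Those studies don't single out one flaw, but string-built SQL queries are among the most commonly cited patterns that AI assistants reproduce. A minimal sketch of what a reviewer should be catching, using hypothetical `find_user` helpers against an in-memory SQLite database:

```python
import sqlite3

# Toy database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name):
    # Injectable: input like "' OR '1'='1" rewrites the query's logic.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_vulnerable("' OR '1'='1"))  # returns every row
print(find_user_safe("' OR '1'='1"))        # returns nothing
```

Both versions look plausible in a diff, which is exactly why review bandwidth has to scale with generation speed.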

This creates a new form of risk: teams that adopt AI coding tools aggressively without investing equivalently in code review, security testing, and quality control. The worst outcome isn't AI replacing software engineers. It's AI-assisted engineers shipping more code, faster, with systematically embedded vulnerabilities that take years to discover.


