Collectively, we're only really months into AI-assisted software development, and the abyss is already clearly visible.
Collectively, we're only really months into AI-assisted software development, and the abyss is already clearly visible.
I'd say I'm (currently) a 'maximalist traditionalist' - I use AI to help me build better, reusable code as well as deeper test suites than I would previously have had the patience to grind out. I'm getting more of a feeling for when it's bullshitting me. I'm also able to debug things more effectively. Overall, I'd say I'm operating about 1.8x than before, with a probable ceiling of 2.0x. That's a combination of a bit faster, a bit more comprehensive, and a bit more effective all at the same time.
But for the kind of high-stakes embedded codebases I work on, I don't want to ship code I don't understand or can't completely explain or justify. In many ways, my ideal scenario would be finding ways to use AI to help me ship less code than before (i.e. that's compact, easy to understand and share, and better all over).
And so I look across to agentic stuff, and the gap between how I work and how that works seems like an abyss. The appearance of working doesn't mean it is actually working; passing tests doesn't mean the code wasn't gamed to pass those tests; black box code is a liability, not an asset; code review is next to impossible at scale; and so forth.
So where do we go forward from here? I'm guessing traditionalist maximalists like me will find ways of using agentic development for specific kinds of task, such as early prototyping, client-facing UIs, CI support tooling, etc. But that doesn't feel like a major step change.
More and more, I'm coming to think that the programming languages we use aren't fit for purpose at the scale and speed we're now able to develop at. Or perhaps we're missing a whole load of AI development patterns? There's certainly a lot to think about.