Programming with an AI copilot: My perspective as a senior dev

Yes, I know, everyone has already written something about AI; the internet is full of it. Still, I'd like to add my own small piece, because almost all of it is hype and a critical perspective is missing. Here it comes.

The software development world is dominated by FUD (Fear, Uncertainty, and Doubt). Managers are being flooded with posts about "10x productivity," the imminent end of traditional developers, and agents talking to agents where neither agent is human. The gold rush is driving companies en masse towards rushed AI features that nobody necessarily asked for. No one wants to miss the boat, so everyone jumps on board. Often without thinking.

Gemini AI seems to think we want to summarize our folders. Even empty ones.

When this dev was a little dev

I still remember when I started programming, on my father's 486 PC, with nothing more than the included book of DOS commands. I borrowed a book about QBasic, and that's how I started making my first games as a little dev. No syntax highlighting, no autocomplete, no Stack Overflow to help me on my way, and certainly no AI. The trip to the computer store with my father to buy a network card was still to come. When I got stuck on a problem, I had two options: PRINT statements and a lot of dedication, or back to the library. Debugging in those days meant: run, hit an error, search, adjust, and run again with fingers crossed.

All the tools we now take for granted were once spectacular improvements. I remember switching from Notepad to the ConTEXT editor: suddenly I had syntax highlighting, a brilliant improvement. Then I discovered experts-exchange.com (back before they got the hyphen!) and no longer had to search endlessly on Lycos or AskJeeves. Then came Stack Overflow, where you could look up all the answers. And a real IDE replaced the editor: Eclipse. That wasn't just an editor anymore; Eclipse could refactor your code. It understood what you were doing and flagged errors before you even ran your code. You could navigate by Ctrl+clicking on class names or variables, and read documentation right in the editor. Matching brackets were found automatically, and indentation was taken care of. I had to search around less and less; more and more happened automatically, and more and more came within reach. I had to keep less of the overview in my own head and tax my own 'working memory' less.

The same applies to frameworks and SaaS platforms. You no longer need to be a guru to set up a scalable server. You don't need to learn SQL to set up a Firebase backend. A modern programmer doesn't have to worry about deallocating memory.

Not a REvolution but Evolution

And now we're facing the next wave: AI-assisted development. The marketing machines are working overtime with fantastic promises about productivity explosions. But in my view, this is just the next step in a long evolution of developer tools.

Eclipse, Firebase, and Stack Overflow haven't replaced developers, and AI won't make us obsolete either. Instead, it creates some space for what really matters in software development: creativity, innovation, understanding what the customer wants, and solving complex problems. You could even say that productivity doesn't increase, but expectations do. We can now make prettier things in less time, and someone with little experience can quickly conjure up a prototype. But not a single programmer fewer is needed to write a program.

Twenty-five years in this industry taught me one immutable truth: There's no substitute for understanding how things actually work.

Source: No-code platforms are secretly killing your productivity & Hacker News thread

Positive impact of AI devtools

I'm also enthusiastic about AI, don't get me wrong, but the real power of these tools lies in the small things. That frustrating boilerplate code you have to write for the thousandth time? Tab-tab-tab and it's there. Those repetitive patterns that are just different enough that you can't copy them? AI understands what you mean and adapts them automatically. It's like smart autocomplete that not only knows what you're going to type, but also why. Adding a translation in another language is also a breeze.

The highlight is learning new frameworks. I've, no, ChatGPT has, previously written about how I learned to write apps in Flutter in two weeks. More recently, I've also delved into Express, Wagtail, LangChain, and more - all new territory for me. AI functioned as a private tutor who could communicate exactly at my level. No basic tutorials for beginners, but targeted answers for an experienced developer who just wants to understand the specific differences between frameworks.

Why still so underwhelmed?

Half a lifetime ago, I graduated with a minor in Artificial Intelligence, and I admit the technology has made an impressive leap forward since then. But AI is not new. Technically, it's not even very different from back then. AI is nothing more than pattern recognition, which has been used for all sorts of purposes for years. By recognizing patterns, an AI can also extrapolate a pattern and, in a way, predict the next step. This principle has been successfully used for years in social media algorithms that serve you content you'll like, and by advertisers who predict what you'll want to buy. ChatGPT simply predicts, token by token, what the next word in the sentence should be. That's not new either: your smartphone keyboard has been doing it for years. Virtually the only difference from the past is that so much computing power and energy has been thrown at it that it has crossed a threshold where it suddenly became useful.
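To make that concrete: below is a minimal sketch of next-word prediction with a bigram model. This is my own illustration, not code from any actual keyboard or LLM, but it is essentially what your keyboard does; an LLM applies the same predict-the-next-token principle with an astronomically larger model and context.

```typescript
// Toy bigram model: count which word tends to follow which, then
// "predict" the most likely next word.
const corpus = "the cat sat on the mat the cat ate the fish";
const words = corpus.split(" ");

// counts.get("the") -> Map { "cat" => 2, "mat" => 1, "fish" => 1 }
const counts = new Map<string, Map<string, number>>();
for (let i = 0; i < words.length - 1; i++) {
  const followers = counts.get(words[i]) ?? new Map<string, number>();
  followers.set(words[i + 1], (followers.get(words[i + 1]) ?? 0) + 1);
  counts.set(words[i], followers);
}

function predictNext(word: string): string | undefined {
  const followers = counts.get(word);
  if (!followers) return undefined;
  // Take the most frequent follower; an LLM samples from a probability
  // distribution over tokens instead, but the principle is the same.
  return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predictNext("the")); // "cat" (seen twice, vs. "mat" and "fish" once)
```

Throw enough data and compute at this principle and you get a smartphone keyboard; throw absurdly more at it and you get ChatGPT.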

Over the past half year, I've worked intensively with tools like Codeium, ChatGPT, and Windsurf. And yes, they're impressive - but in daily use as a programming copilot, not necessarily a major addition. AI is definitely not on my list of ten devtools I'd take to a deserted island, if that makes any sense.

Where does it go wrong?

These tools are dangerously capable. They produce code that looks professional, that works, that can even be elegant - and that can be dramatically unsafe.

Take my Express backend experiment. The code worked perfectly, but protection against SQL injection was completely absent. It wasn't accounted for in even the most basic way, which made the site so incredibly unsafe that it wouldn't have lasted an hour before being hacked. When I confronted ChatGPT about this, it immediately provided a security fix. The knowledge was there, but it wasn't proactively applied. Of course, this can be solved with better prompting, but that's obviously not a solution that can replace developers: to prompt well, you by definition already need the knowledge of a developer. For the same reason, no-code solutions will never beat normal, typed code. It doesn't change the fact that you need to understand development to build something, and especially to build something good that goes beyond a quick and unsafe prototype.
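To make the hole concrete, here's a minimal sketch of the unsafe pattern next to the parameterized fix. This is illustrative, not the actual code from that experiment; I'm assuming an Express app with a sqlite3 database, and the route and table names are made up.

```typescript
import express from "express";
import sqlite3 from "sqlite3";

const app = express();
const db = new sqlite3.Database("users.db"); // hypothetical example database

// UNSAFE: the kind of code the AI happily generates. User input is glued
// straight into the SQL string, so a name like  ' OR '1'='1  returns every
// row, and nastier payloads can modify or dump the whole database.
app.get("/unsafe/:name", (req, res) => {
  const sql = `SELECT * FROM users WHERE name = '${req.params.name}'`;
  db.all(sql, (err: Error | null, rows: unknown[]) => res.json(rows));
});

// SAFE: a parameterized query, the standard fix. The driver escapes the
// value, so input can never change the structure of the query itself.
app.get("/safe/:name", (req, res) => {
  db.all(
    "SELECT * FROM users WHERE name = ?",
    [req.params.name],
    (err: Error | null, rows: unknown[]) => res.json(rows)
  );
});

app.listen(3000);
```

The fix is barely more code than the vulnerability, which makes it all the more telling that it wasn't applied proactively.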

Anyone who has tried an AI assistant will recognize that their capability quickly deteriorates as you add more context, even if their context window is large. Hence, being able to feed them precisely what they need to figure out the implementation details is crucial [...] Well, this is exactly what humans are good for.

Source: Are AI assistants making us worse programmers?

This is not a minor shortcoming - it's a fundamental problem that demonstrates why AI remains a tool that requires expertise, not a replacement for it. There's also a lot of code on GitHub and Stack Overflow, but to use it you need to know what you're doing. Instead of searching, we're now prompting, but in essence we're still doing the same thing. Blindly copying from Stack Overflow was always a bad idea.

"people don't know what they want. They'll ask AI for a solution to their perceived need, and then run what they are given, often without understanding what it does. This applies equally to end users and (amazingly) to developers."

Source: AI coding is based on a faulty premise

And then we have the outright hallucinations. My copilots invented their own sorting methods that didn't sort. They made up non-existent function names. Windsurf cheerfully discarded the contents of multiple functions and replaced them with 'your code here' because the context became too complex. It's like an over-enthusiastic junior who sometimes has brilliant insights, but also regularly pastes in code blindly that turns out to be complete nonsense. Windsurf even managed to write Python code in a JavaScript file. In the end, I was spending so much time cleaning up that when the connection to the AI server briefly dropped, I simply left it off. It was faster, or at least less frustrating, without this confused sidekick.

Reality Check: AI tools are like interns who know how to Google really fast but don't understand context. Cursor started changing core implementations for unrelated edits. Cline would randomly rewrite half the system without asking. By the time I noticed, my codebase looked like the aftermath of a spaghetti fight at a junior developer convention. Most of the codebase was actually unreachable.

Source: Day 4 of an afternoon project.

But isn't it all developing incredibly fast?

Well, not really. After the initial surprise that was the arrival of ChatGPT, progress hasn't been very fast. In fact, it's even stagnating a bit, because OpenAI and Anthropic can't find new sources to train their models on. The internet has been read in full. And with the speed at which the internet is now filling up with low-quality AI-generated content, that will only get harder, like repeatedly running a copy through the copy machine.

GPT models are still black boxes that try to predict the next step in the pattern as well as possible. Features like 'advanced reasoning' don't make this principle 'smarter'. They are systems of training wheels and patches bolted onto the same underlying black box, often at the cost of several times the energy consumption. It's as if the plumbing in your house basically works reasonably well, but the water is brown and sometimes water sprays from your power socket. You can keep calling a plumber to replace a piece of pipe, but maybe you should admit that you'd be better off drinking bottled water. The plumber will eventually get far, but the water will never be completely clean.

In short

Once again, for the people in the back: AI copilots are not a silver bullet. It's a reality check that LinkedInfluencers prefer to ignore, because AI is so incredibly cool and hip and, for many people, the only intelligence they know, but those who build products on a daily basis know better. If anything, these tools demand more expertise than ever. Because now you not only need to be able to program - you also need to be able to recognize when your AI assistant is missing the mark. And that's difficult, because reading code is hard.

A good programmer is at their best using the tools they're comfortable with. Some of the best developers I know prefer to work in vim. For those not in the know: it's a weirdly incomprehensible editor that looks like it's straight out of a 90s hacker movie, and it most definitely does not contain AI.

I'll probably keep programming with an AI copilot. And that copilot sits where it belongs: in the passenger seat. Because even though they can read maps quite well, you'd better do the steering yourself.


(This post was translated from the Dutch original using Claude)