AI In Programming: Blessing Or Curse?

Discussions on social media about AI in programming often take one of two extreme standpoints. Either the AI is a total time saver and productivity booster, or it is an absolute nightmare that destroys productivity, creativity, and code quality.

Granted, taking extreme positions on a topic guarantees high visibility. As so often, the truth lies somewhere in the middle, although “truth” is a strong word here: so many aspects have to be considered that there can't be a single truth. Rather, I observe a wide spectrum of experiences and a field that is constantly changing.

So I'd like to share my own experiences with using LLMs for coding. To make it clear from the beginning: my conclusions reflect neither the unconstrained enthusiasm of the AI tech bros nor the unrelenting pessimism of the eternal naysayers. (Don't get me wrong: optimism and criticism are equally important when dealing with emerging technologies. It's the rush to mental extremes that I utterly reject.)

My AI setup for coding

Here is my current setup: I use Continue in VSCodium. Continue is an open-source AI coding assistant that is agnostic about the model used. You can subscribe to an LLM API, but you don't have to, as Continue can also connect to self-hosted models via Ollama. I settled on Claude 3.5 Sonnet as the LLM, which seems to have a pretty good understanding of Go programming.

Continue automatically indexes my workspace and can also ingest other context like online documentation or local files. With this context at hand, the LLM can do a lot of useful things, from generating code and chatting about code to modifying code selected in the editor and generating comments, summaries, and documentation.

I configured Continue to collect the full Go documentation, the standard library, and the language reference as an embedded context. With this context, my coding assistant should have a good idea of how Go works.

How does AI coding support work in real life?

In a recent project, I had to write code for accessing multiple APIs that were quite similar in nature. I added the API docs as context to Continue and asked the LLM to write a function for calling each API endpoint and processing the response. The LLM did quite well; it even provided a sample use case, albeit one written for func main, with Printf and log output.
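To give an idea of the shape of that output, here is a minimal sketch along those lines; the endpoint, the Item type, and the fetchItems function are made up for illustration, not the actual API from the project.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

// Item stands in for the actual response payload of the API.
type Item struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// fetchItems calls a hypothetical endpoint and decodes the JSON response.
func fetchItems(baseURL string) ([]Item, error) {
	resp, err := http.Get(baseURL + "/items")
	if err != nil {
		return nil, fmt.Errorf("calling items endpoint: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("reading response body: %w", err)
	}

	var items []Item
	if err := json.Unmarshal(body, &items); err != nil {
		return nil, fmt.Errorf("decoding response: %w", err)
	}
	return items, nil
}

// The sample use case lived in func main, with Printf and log output,
// rather than in the HTTP handler I actually needed.
func main() {
	items, err := fetchItems("https://api.example.com")
	if err != nil {
		log.Fatal(err)
	}
	for _, item := range items {
		fmt.Printf("%s: %s\n", item.ID, item.Name)
	}
}
```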

Unused parameters

The generated function, however, was supposed to be called from an HTTP handler. I told the LLM that, hoping it would provide sample code for this scenario instead, with proper error handling as an HTTP handler would do it. Well, the LLM did, but it also passed the HTTP request and response parameters into the client func, which did nothing with them. I had to remove them manually after I noticed that flaw.
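Reconstructed from memory, and continuing the hypothetical fetchItems sketch from above, the problem looked roughly like this:

```go
// What the LLM generated, roughly: the handler's ResponseWriter and Request
// are passed through to the client func, which never touches them. Go's
// compiler does not complain about unused function parameters, so nothing
// flags this.
func fetchItemsFlawed(w http.ResponseWriter, r *http.Request, baseURL string) ([]Item, error) {
	// ...same HTTP call and JSON decoding as before; w and r remain unused.
	return fetchItems(baseURL)
}

// The cleaned-up version keeps the HTTP handling in the handler and leaves
// the client func free of unused parameters.
func itemsHandler(w http.ResponseWriter, r *http.Request) {
	items, err := fetchItems("https://api.example.com")
	if err != nil {
		http.Error(w, "fetching items failed", http.StatusBadGateway)
		return
	}
	if err := json.NewEncoder(w).Encode(items); err != nil {
		log.Printf("encoding response: %v", err)
	}
}
```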

RTFM, AI!

On another occasion, the LLM added “/v1” to the endpoint's URL path, although the docs state that the base URL already contains the version string. Moreover, the model did not consider that ioutil.ReadAll is deprecated, despite having the current Go documentation at its disposal. When I asked Claude about the status of ioutil.ReadAll, it answered correctly that the ioutil package is deprecated and that io.ReadAll should be used instead. However, this knowledge does not seem to override the trained use of ioutil.
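The fix itself is a one-line change; since Go 1.16, ioutil.ReadAll is documented as simply calling io.ReadAll. Continuing the sketch from above:

```go
// readBody drains an HTTP response body. Since Go 1.16, ioutil.ReadAll is
// merely a wrapper around io.ReadAll, so the fix is a one-line change:
func readBody(resp *http.Response) ([]byte, error) {
	// Deprecated: return ioutil.ReadAll(resp.Body)
	return io.ReadAll(resp.Body)
}
```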

LLMs aren't autonomous coders yet

In summary, while I got quite a bit of code written in a short time, and most of it ran on the first attempt, I also needed time and a keen eye to spot flaws that crept in and weren't caught by the compiler. If I had no experience with developing software, many of these logical flaws might easily have gone unnoticed, introducing subtle bugs or even security holes.

Know what to expect from coding assistants and how to use them properly, and you get useful results.

My conclusion: LLMs in their current state are tools that require experience to use, much like a kitchen knife or a bicycle (no stabilizers!). Know what to expect from these tools and how to use them properly, and you get useful results. Or expect too much and shoot yourself in the foot.

But I recommend that everyone go ahead and run a few tests with AI coding assistants. See how far you get, and find out about the possibilities and the limits. If nothing else, you'll surely have some fun along the way.