Little Helper

The software developer community appears to be split into two camps, divided by an insurmountable rift. One camp fully embraces large language models and turns from hand-coding to orchestrating LLM agents. The other camp rejects LLMs (and perhaps other AI, too), because they take away the joy that comes with exploring codebases, designing data structures, and implementing algorithms.

I'm not here to take sides. Rather, I want to change the perspective and focus on tasks that LLMs can do for developers without generating code.

The image I'll be painting below isn't far away from “Little Helper,” a tiny humanoid robot with an incandescent light bulb for a head that assisted Gyro Gearloose, an ingenious inventor from the Donald Duck universe. Little Helper doesn't take the act of inventing away from Gyro, but it frequently helps Gyro out of a jam.

LLMs can become your Little Helpers.

A practical example

Let me start with a recent session in which I checked some code for security issues. Here's the prompt, simple and straightforward:

run govulncheck --help and gosec --help and run 
a thorough security analysis on this repo.
Write your findings and recommendations to 
security-check-2026-04-05.md

To which Claude responded:

Done. Here's a summary of what the analysis found:

  govulncheck — clean: no known CVEs in any reachable symbols.

  gosec — 70 issues, 3 HIGH:

  ┌─────┬─────────────────────────────────────────────┬──────────────────┬────────────────────┐
  │  #  │                   Finding                   │       File       │     Real Risk      │
  ├─────┼─────────────────────────────────────────────┼──────────────────┼────────────────────┤
  │ 1   │ Path traversal (G703) via symlink in        │ generate.go:359  │ Low but fixable    │
  │     │ filepath.WalkDir callback                   │                  │                    │
  ├─────┼─────────────────────────────────────────────┼──────────────────┼────────────────────┤
  │ 2   │ Symlink TOCTOU (G122) in same Walk callback │ generate.go:355  │ Low (trusted path) │
  ├─────┼─────────────────────────────────────────────┼──────────────────┼────────────────────┤
  │ 3   │ Goroutine uses context.Background (G118)    │ web/update.go:27 │ Intentional —      │
  │     │ while request context available             │                  │ needs #nosec       │
  └─────┴─────────────────────────────────────────────┴──────────────────┴────────────────────┘

  Noteworthy MEDIUM issues:
  - Session cookie missing Secure: true (web/auth.go:91) — trivial one-line fix
  - No ReadHeaderTimeout on http.Server — Slowloris mitigation, one-line fix
  - 31 unhandled tx.Rollback() errors — cosmetic, use _ = tx.Rollback()

  staticcheck — no security issues; 46 code quality items (unused 
  code, capitalized error strings).

  Top 3 actionable fixes in priority order:
  1. Add Secure: true to session cookie in news/web/auth.go:91
  2. Add ReadHeaderTimeout: 5 * time.Second to http.Server in news/web/server.go
  3. Add path-escape validation after filepath.Rel() in news/generate.go:350

✻ Churned for 4m 58s

So that's not much more information than I would get from running those tools myself. I could have put more emphasis on “do your own vulnerability checks”, but I got curious about the third gosec issue. “Goroutine uses context” isn't exactly unusual, so I asked Claude to explain this one.

Referring to G118 — Goroutine uses context.Background while 
request context is available in news/web/update.go:27:

go func() {
    updateCtx, updateCancel := context.WithTimeout(context.Background(), s.cfg.UpdateTimeout)
    defer updateCancel()
    _ = s.ops.RunUpdate(updateCtx, jobRunID)
}()

The HTTP handler receives a request, kicks off the update 
pipeline in a background goroutine, then immediately 
redirects the browser to a status page. The goroutine 
outlives the HTTP request by design.

If the code used the request's context instead:

updateCtx, updateCancel := context.WithTimeout(r.Context(), s.cfg.UpdateTimeout)

…then the update would be cancelled the moment the HTTP 
response is sent (which happens almost immediately, 
before the redirect even reaches the browser). The 
entire Reddit fetch + enrichment + SQLite write pipeline 
would be aborted seconds after it started.

Using context.Background() as the parent means the 
update runs to completion (or until s.cfg.UpdateTimeout 
fires) regardless of whether the client is still 
connected. That is exactly the intended behaviour — the 
user kicks off the job, gets redirected to a progress 
page, and the job runs independently in the background.

gosec flags this as a policy warning because detaching 
from the request context is a common source of bugs 
(leaked goroutines, missing cancellation), so it asks 
you to make the choice explicit. In this case the choice
is correct; the fix is just to add a //nolint:contextcheck 
or #nosec G118 comment with a short explanation so the 
intent is documented and future readers (and tools) 
don't re-flag it.

Claude correctly inferred that my code works as designed and suggested excluding the code from further gosec runs (but only for this particular issue, G118).
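
In code, that suppression is a single comment on the offending line. Reusing the snippet from the transcript above, it could look roughly like this (how gosec scopes #nosec annotations is its business, so treat the exact placement as a sketch):

go func() {
    // #nosec G118 -- detaching from the request context is intentional:
    // the update job must outlive the HTTP request and is bounded by UpdateTimeout.
    updateCtx, updateCancel := context.WithTimeout(context.Background(), s.cfg.UpdateTimeout)
    defer updateCancel()
    _ = s.ops.RunUpdate(updateCtx, jobRunID)
}()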

This was just Claude 4.6 and some superficial prompting. Newer models like Opus 4.7 or GPT 5.5 are said to be fierce security hole detectors; you may have heard of Claude Mythos finding 271 zero-day security holes in Firefox.

Reviews like these are such a great use case for LLMs because LLMs excel at reading and interpreting information, while they aren't designed to produce genuinely new output. So instead of having them produce code, you can get a lot of use out of them by asking them to review code.

And that's where I'll start the list of things LLMs can do for developers:

What LLMs can do for developers

Review your code

LLMs are relentless reviewers, never tired, never too bored to do another review. Have them inspect your code for:

  • vulnerabilities (see above)
  • performance smells
  • logical bugs (that the compiler can't catch)
  • subtle edge cases that might surface three months after production rollout
  • opportunities for refactoring

In fact, anything you can ask a teammate about your code, you can ask an LLM. I'm not saying that the quality of the answers is always comparable, but there will be times when your teammates are hard to reach, and a quick LLM answer is way better than an answer days later or no answer at all.

Explore unfamiliar codebases

What's more daunting than having to become familiar with an unknown codebase? An LLM can dissect the architecture, walk you through the code, explain how the parts work together, and guide you to the parts relevant to your task. My favorite approach to this is Simon Willison's linear walkthrough.

Besides a full walkthrough, you can also focus on a specific package, function, or code line and pester the LLM with your questions. No question too dumb to ask!

Be your debugging rubber duck

Do you know those bugs that seem to have no logical explanation? The ones where you've exhausted all hypotheses about possible root causes?

The classic approach to escaping this pit is a rubber duck. Yes, a physical rubber duck sitting at your desk. When you start talking to it about the bug, you can sort your thoughts more clearly than by thinking in silence. As a side effect, talking to a rubber duck may look silly, but it probably looks less silly than talking to yourself.

An LLM is your next-level rubber ducky. Unlike a real rubber duck, an LLM can actually respond to you and help you sort your stream of thoughts and your myriad hypotheses about why that damned bug doesn't stop playing whack-a-mole with you.

Write boring boilerplate and scaffolding

OK, I promised to write about how LLMs can help you without taking the coding part away from you, but every coding task includes parts that can bore anyone to death: writing boilerplate, over and over again. Sometimes you can find a smart way to factor the boilerplate out, but if you're in a hurry, you might want to skip the R&D part and just have an LLM produce the inevitable boilerplate code for you.

Or if the boilerplate follows a deterministic pattern with a few parameters that can vary, why not ask the LLM to write a tool that generates this kind of boilerplate?

I do this quite often: Ask an LLM to write a tool for a task, rather than repeatedly asking the LLM to do the task. This approach saves tokens and ensures deterministic output on each run.
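
To make that concrete, here's a sketch of what such a generator could look like. Everything in it (the Entity type, the entity list, the storegen name) is hypothetical and only meant to illustrate the pattern: a fixed template plus a handful of varying parameters.

package main

import (
    "os"
    "text/template"
)

// Entity describes one record type the generator emits a store stub for.
// In a real tool the list of entities might come from a JSON or YAML file.
type Entity struct {
    Name  string // Go type name, e.g. "Article"
    Table string // database table, e.g. "articles"
}

// stubTmpl captures the deterministic boilerplate pattern:
// one store interface per entity.
var stubTmpl = template.Must(template.New("store").Parse(`
// Code generated by storegen. DO NOT EDIT.

type {{.Name}}Store interface {
    Get(id int64) (*{{.Name}}, error)
    List() ([]{{.Name}}, error)
    Save(e *{{.Name}}) error
    Delete(id int64) error // removes the row from {{.Table}}
}
`))

func main() {
    entities := []Entity{
        {Name: "Article", Table: "articles"},
        {Name: "Feed", Table: "feeds"},
    }
    for _, e := range entities {
        if err := stubTmpl.Execute(os.Stdout, e); err != nil {
            panic(err)
        }
    }
}

Run it once, commit the generated file, and every future run produces exactly the same output, which is something you can't guarantee when you ask the LLM to write the boilerplate directly.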

Generate function and package documentation

Writing documentation is not everyone's cup of tea. Summarizing and describing text (including code) is a core competency of LLMs (as opposed to logical reasoning, which LLMs can only simulate). So: lazy dev & chatty LLM = a match made in heaven.
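
In Go, that mostly means doc comments that start with the name of the thing they describe. Here's the kind of comment an LLM can draft for an otherwise undocumented helper; the function itself is a made-up example, not taken from any real codebase:

// truncateTitle returns title unchanged if it has at most max runes;
// otherwise it returns the first max runes followed by an ellipsis.
// It counts runes rather than bytes, so multi-byte UTF-8 characters
// are never split in the middle.
func truncateTitle(title string, max int) string {
    runes := []rune(title)
    if len(runes) <= max {
        return title
    }
    return string(runes[:max]) + "…"
}

The useful part is that the draft states the behavior precisely (runes, not bytes). You still review it, because a doc comment that's subtly wrong is worse than none.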

Write unit test stubs and extend tests

Even if you don't want to hand over control of testing, an LLM can at least set up test stubs that you only have to complete. Likewise, an LLM can suggest additional tests to cover edge cases you might have overlooked.
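
For example, an LLM can lay out a table-driven test for the hypothetical truncateTitle helper from the previous section and leave the interesting cases to you (a sketch, not a complete test suite):

import "testing"

func TestTruncateTitle(t *testing.T) {
    tests := []struct {
        name  string
        title string
        max   int
        want  string
    }{
        {name: "shorter than max stays unchanged", title: "short", max: 10, want: "short"},
        // TODO: exact length, zero max, multi-byte runes near the cut, …
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            if got := truncateTitle(tt.title, tt.max); got != tt.want {
                t.Errorf("truncateTitle(%q, %d) = %q, want %q",
                    tt.title, tt.max, got, tt.want)
            }
        })
    }
}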

Deal with environment configuration

Whether it's a Dockerfile, a Docker Compose file, or a Kubernetes configuration, LLMs can not only take over generating them based on your project structure and tool stack, but also help troubleshoot a broken configuration.

Whether a tool is good or bad depends on the usage

AI slop is real, and LLM-generated code may accumulate tech debt in the long run. Yet a knife can't hurt you if you know how to use it properly and safely. LLMs can be used in many good and even more bad ways, but it's never a good idea to go to extremes: neither vibe-coding (in the original definition: letting an LLM write code and never looking at it) nor total rejection of LLM assistance is a good response to the new reality that LLMs have created.

Instead, use LLMs as your Little Helpers: handy when you need them, always in service of your craft but never in place of it.