Optimizing A Codebase By Porting It To Go?
We gophers love success stories of porting codebases to Go, don't we? Especially if the port results in massive speedups or dramatically reduced resource usage (usually it's both). Here's one such story:
My job has a Scala service that they've been optimizing and improving for about 5 years. We just finished rewriting it in Go. The new service uses ~10% of the old's memory, and about 50% cpu, under the same load. The codebase is also much simpler, the image size is ~40mb instead of 1gb, and the pods restart in about 2 seconds, as opposed to 30-ish. So like, great success.
This story got me thinking. Is it really that easy? Just re-write everything in Go and—poof!—instant success?
Of course not. But what else makes the difference? The above story contains a hint:
The codebase is also much simpler
Now we're talking. A mechanical port wouldn't simplify the codebase. Apparently, the team took the opportunity to not only port the code but also transform it from Scala-ish to Go-ish.
This, in turn, means that the team was aware of how Go works and knew how to adapt the code. In particular, to get substantial optimizations from a port to Go, a team would need to do three things (the code sketch after the list shows what this can look like):
- Get rid of the idioms and paradigms of the legacy language
- Become familiar with Go's idioms and paradigms
- Embrace the Go community's best practices
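To make that concrete, here is a hypothetical sketch of what "Scala-ish versus Go-ish" can look like; it is not the code from the story above, and the names `Map`, `Filter`, `portedStyle`, and `idiomaticStyle` are made up for this illustration. A mechanical port often keeps the functional map/filter pipeline of the original, allocating a fresh intermediate slice at every step, while an idiomatic Go version does the same work in a single loop with one allocation.

```go
// Hypothetical illustration: the same cleanup task written first as a
// mechanical port of a Scala-style map/filter pipeline, then as an
// idiomatic single-pass Go loop.
package main

import (
	"fmt"
	"strings"
)

// Map and Filter mimic the combinators a mechanical port might carry over.
// Each call allocates and returns a fresh slice.
func Map[T, U any](in []T, f func(T) U) []U {
	out := make([]U, 0, len(in))
	for _, v := range in {
		out = append(out, f(v))
	}
	return out
}

func Filter[T any](in []T, keep func(T) bool) []T {
	out := make([]T, 0, len(in))
	for _, v := range in {
		if keep(v) {
			out = append(out, v)
		}
	}
	return out
}

// portedStyle chains combinators the way the legacy code might have,
// creating two intermediate slices on every call.
func portedStyle(names []string) []string {
	trimmed := Map(names, strings.TrimSpace)
	nonEmpty := Filter(trimmed, func(s string) bool { return s != "" })
	return Map(nonEmpty, strings.ToUpper)
}

// idiomaticStyle does the same work in one pass with a single allocation.
func idiomaticStyle(names []string) []string {
	out := make([]string, 0, len(names))
	for _, n := range names {
		n = strings.TrimSpace(n)
		if n == "" {
			continue
		}
		out = append(out, strings.ToUpper(n))
	}
	return out
}

func main() {
	names := []string{" alice ", "", "bob", "  "}
	fmt.Println(portedStyle(names))    // [ALICE BOB]
	fmt.Println(idiomaticStyle(names)) // [ALICE BOB]
}
```

In a hot path, those extra intermediate slices translate into allocation and GC pressure, and shedding that kind of overhead is part of where the gains of a Go-ish rewrite come from.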
Bottom line: If you plan to port legacy code to Go (and even more so if you plan to have an LLM port it), keep in mind that a substantial part of the desired speedup or reduction in memory consumption may come from adapting the code to the idioms, paradigms, and best practices of Go.