Cheating Your CI/CD Pipeline: More memory without actual RAM, please

Armen Rostamian
Apr 26, 2024 · 5 min read

I’ve been heads down in my own personal cloud experimentation lab, hacking on lots of interesting and weird things.

As part of a full-stack framework I’ve been steadily refining in my spare time, I decided that it would be a good idea to play with some code generation techniques in one of the CI/CD workflows I’ve set up. More on my secret adventures with LLMs, fine-tuned models, RAGs, and code generation in an upcoming post…

I hadn’t really thought it through very much. 😎

I just figured that if there’s a way to cleverly generate a giant truckload of usable TypeScript boilerplate code and save myself countless human-hours, then that’s something worth having.

However, I didn’t realize that I’d introduce a show-stopping problem into the equation → Allocation failed - JavaScript heap out of memory.

Seeing that message in your CI/CD pipeline when you’re running a pnpm build almost always inspires indigestion or some other form of gastric distress (to put it mildly).

The Constraints

Every good and painful problem has constraints. Some are constraints imposed by virtue of trade-offs you have to make, and others are just hard limitations or boundaries that you cannot cross.

Resource limitations (hard constraints)

The CI/CD pipeline that’s hooked up to my project has the following limitations:

  • 4 vCPUs
  • 7GB Memory
  • 256GB Disk/storage

In this particular CI/CD offering, you can’t opt for more memory or CPU…lame, I know.

Technical limitations (soft constraints/trade-offs)

While the code generation I’m employing actually manages to work, and produces correct and usable code, we have a pretty unusual issue.

The code that comes out of my code generation technique is HUGE. REALLY huge. And it’s not just huge: it’s all composited into just a couple of REALLY big files.

.rw-r--r--  53M x3r0 26 Apr 13:10 main.ts
.rw-r--r--  11M x3r0 26 Apr 11:58 assets_1.ts
.rw-r--r-- 8.8M x3r0 26 Apr 11:58 assets_2.ts
.rw-r--r--  12M x3r0 26 Apr 11:58 assets_3.ts

Total Size: 84.8M

^^ This is really, really not-great.

You may be saying: Armen, you can try to minify or otherwise compress the code in those files…but unfortunately, we’re talking about a giant, giant amount of string literals, and minification mostly shrinks identifiers and whitespace, not strings. Without some serious acrobatics, the turn-key approach of minifying your code won’t work here.

On my local machine (an M1 Max with 32GB of RAM), I can get away with doing things like export NODE_OPTIONS=--max-old-space-size=18168 so that pnpm dev and pnpm build manage to work.

When your project includes really large TypeScript files in the 10MB to 60MB range each, you’re going to have a bad time.

If you try to compile or run your project (including these files) with Node.js, it’s going to require far more memory than Node.js’s default heap size allows.
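
As a sanity check, you can ask Node.js what its current heap ceiling actually is. A quick sketch (the default varies by Node version and available RAM, so treat the numbers as illustrative):

# Print the current V8 heap limit in MB
node -e 'console.log((require("v8").getHeapStatistics().heap_size_limit / 1024 / 1024).toFixed(0) + " MB")'

# Confirm that --max-old-space-size actually raises it
NODE_OPTIONS=--max-old-space-size=8192 node -e 'console.log((require("v8").getHeapStatistics().heap_size_limit / 1024 / 1024).toFixed(0) + " MB")'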

The trade-off here (the soft constraint) is one of those 80/20 deals. I’ve done a minimal amount of work, and gotten myself almost all the way to an ideal outcome: giant piles of useful code, automatically and for free.

I could now spend an INORDINATE amount of time fine-tuning my approach so that I get back many, many bite-sized files…but we’d still be asking tsc (the TypeScript compiler) to eat the pain of loading all that junk into memory anyway.

There are more esoteric, theoretical approaches for mitigating the issue, like Huffman encoding or dictionary-based compression (LZ77/LZ78), but those are deeply technical science-fair projects. Ain’t nobody got time for that, and it’s not the point of this article anyway.

Coming back to the issue at hand

  • ✅ On a reasonably specced local machine, we’re able to develop and build this codebase.
  • 😅 In my resource-constrained CI/CD pipeline, I do not have enough memory to successfully build the code.

The greatest hack I never knew about

I’ve been compiling my own Linux kernels since I was 12 (I’m in my late 30s now…so it’s been a while). I’ve done and seen all sorts of stuff in the 20+ years that I’ve been running and administering Linux, both as a daily-driver desktop and as a server OS.

On top of that, I’ve been working with cloud technologies and CI/CD pipelines for over 10 years.

With all that under my belt, I still couldn’t hack my way out of this problem. So I headed over to my favorite Google replacement, the Perplexity LLM, and searched for the error I was encountering.

After a few hours of free-falling through different links and ideas, I found something buried deep inside a GitHub issue somewhere. And the solution it offered was nothing short of hilariously brilliant.

Maybe I’m late to the party on this one, but as it turns out…the answer to this problem is: use swap.

As I mentioned earlier, we’re capped at 4 vCPUs and 7GB of memory…but we’ve also got 256GB of disk space. So, why not make the most of what we’ve got?!

In my CI/CD build spec, I add the following:

commands:
  - |
    # Set up swap space
    echo "Setting up swap space..."
    sudo fallocate -l 180G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    sudo swapon -s
    echo "Done setting up swap space!"

…and voilà. While the performance characteristics of this solution are questionable (disk-backed swap is orders of magnitude slower than RAM), I can confidently state that it works.

With that clever little hack in place, we’ve effectively uncapped Node.js: between 7GB of physical RAM and 180GB of swap, we can now provide it with an absurd amount of max-old-space-size.

Now, we can do this:

NODE_OPTIONS=--max-old-space-size=32768 pnpm build
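
Putting the two pieces together, the relevant chunk of the build spec looks roughly like this (a sketch; the exact phase and key names depend on your provider’s build-spec format):

commands:
  - |
    # Set up swap space so Node can spill past physical RAM
    sudo fallocate -l 180G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
  - |
    # Give Node a heap ceiling far beyond the 7GB of physical memory
    NODE_OPTIONS=--max-old-space-size=32768 pnpm build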

🚀 With a few quick lines of shameful bash, my CI/CD builds start to work again.

Conclusion

It ain’t pretty, but it gets the job done and it allows me to continue pushing forward with my wild and crazy ideas.

Like I said before…this may be something lots of folks know about, but it was a totally new “technique” to me.

I haven’t tested this out in anything other than my own CI/CD pipelines on my specific provider, so I’ve got no idea whether or not this will work with GitHub Actions, BitBucket Pipelines, or others. As such, your mileage may vary.

Until next time…

- Armen

👇 Leave comments below, or find me on LinkedIn. I’m sure there’s plenty of reasons why this is a terrible idea. I’d love to hear them! 😎

