Running a Satisfactory Dedicated Server on AWS (Without Paying $130/Month)

By Dustin Umphress • Notes / write-up

I didn’t start this project because I wanted to build something impressive. I started it because I was lagging.

I was playing Satisfactory with my brother, who was hosting the game, and I kept rubber-banding. The obvious fix was to run a dedicated server — something I’ve done a bunch of times before.

What I hadn’t done before was run one on AWS. And for a long time, that was intentional.


Why AWS Always Felt Like a Bad Idea for Game Servers

Satisfactory is memory-hungry. Realistically you’re looking at around 12–16 GB of RAM, which on AWS points you toward something like a t3.xlarge. If you leave that running 24/7, you’re staring at roughly $130/month.
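That $130 is just the hourly math: on-demand, a t3.xlarge runs roughly $0.17/hour, and $0.17 × ~730 hours in a month lands around $120–$125 before storage and bandwidth.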

That’s hard to justify when you can pay far less for a budget VPS or a purpose-built game host.

I’ve done both. I’ve run servers on physical machines, cheaper VPS providers, and “click-and-go” game hosts. Those work because the server is always there. You don’t think about lifecycle or cost — you just play.

AWS is different. You pay for every minute something is running, so historically it felt like the wrong tool for this job.


Why I Tried AWS Anyway

Two things pushed me into trying it: I wanted real hands-on infrastructure work on AWS, and I suspected the cost problem was solvable if I treated it as a design constraint.

Not just “spin up EC2 and install a server.” I wanted to build the VPC, configure networking, control the instance lifecycle intentionally, and treat it like an infrastructure problem.

The key idea was simple:

We only play a few hours a week. Why would the server run all month?

If the instance only runs while people are playing, the cost math changes completely. At roughly $0.17–$0.18/hour, light usage becomes cheap enough to be worth exploring.
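To put numbers on it: call it eight hours of play a week. That's roughly 35 instance-hours a month, or about $6 at that rate, versus ~$130 for leaving it on.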


The First Wrong Assumption: “Idle Detection Is Easy”

My initial plan was straightforward: detect whether players were connected and shut the server down automatically when nobody was playing.

I assumed I could do this with standard Linux tools like ss or netstat. That assumption was wrong.

Satisfactory uses UDP, and UDP doesn’t behave like TCP. There’s no clean concept of a session, so tools that are great for TCP didn’t tell me what I needed. My first auto-shutdown logic was basically blind.

This is where it stopped being “just a game server” and started feeling like a real ops problem.


Detecting Activity the Hard Way

Instead of asking “who’s connected?”, I changed the question to:

Is real traffic flowing?

I ended up monitoring network byte deltas via /proc/net/dev, sampling traffic over a rolling window. That gave me a reliable way to distinguish real player traffic from the low-level background noise of an idle server.
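Here's a minimal sketch of that check; the interface name, window, and threshold are placeholders you'd tune for your own instance:

```python
#!/usr/bin/env python3
# Idle check: sample /proc/net/dev twice, and if the combined rx+tx byte
# delta over the window stays under a threshold, treat the server as idle.
import time

IFACE = "ens5"                    # typical EC2 interface name; adjust as needed
WINDOW_SECONDS = 300              # how long to watch before deciding
IDLE_THRESHOLD_BYTES = 512_000    # less than this over the window => "idle"

def total_bytes(iface: str) -> int:
    """Return rx + tx byte counters for one interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]) + int(fields[8])   # rx bytes + tx bytes
    raise RuntimeError(f"interface {iface} not found")

before = total_bytes(IFACE)
time.sleep(WINDOW_SECONDS)
delta = total_bytes(IFACE) - before

if delta < IDLE_THRESHOLD_BYTES:
    print("idle: traffic below threshold, safe to shut down")
else:
    print(f"active: {delta} bytes moved in the last {WINDOW_SECONDS}s")
```

In practice you'd run something like this from cron or a systemd timer and only trigger a shutdown after a few consecutive idle windows, so a quiet moment mid-session doesn't kill the server.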

Once that was in place, automatic shutdown became dependable.


Documentation vs Reality: The Port Surprise

I initially followed what I saw in the docs: multiplayer needs UDP 7777. So that’s what I opened.

Clients still couldn’t connect reliably.

What fixed it wasn’t packet sniffing or anything fancy — I found newer references (docs / community write-ups) indicating the server also needs inbound TCP on ports 7777 and 8888 for handshake/service communication. Once I opened those and tested, connectivity stabilized immediately.
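For reference, here's one way to express those three rules with boto3. The group ID is a placeholder, and the wide-open CIDR is just for illustration; you'd likely restrict it to your players' IPs:

```python
import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder security group ID

ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[
        # Game traffic
        {"IpProtocol": "udp", "FromPort": 7777, "ToPort": 7777,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Satisfactory game (UDP)"}]},
        # Handshake / service traffic the newer references call out
        {"IpProtocol": "tcp", "FromPort": 7777, "ToPort": 7777,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Satisfactory (TCP)"}]},
        {"IpProtocol": "tcp", "FromPort": 8888, "ToPort": 8888,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Satisfactory service (TCP)"}]},
    ],
)
```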

The lesson for me was the same as it is in normal infrastructure work:

Docs are a starting point — validation is what matters.


Why This Needed a Control Plane

I didn’t want to SSH in every time I wanted to start/stop the server — that defeats the entire point.

So I built a simple serverless control plane: a way to start and stop the instance on demand without touching SSH, plus a scheduled shutdown as a safety net.

The scheduled shutdown isn’t “elegant,” it’s practical — it prevents the one mistake that turns AWS into a horror story: forgetting something is running.
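As a sketch of what that control plane can boil down to: a single Lambda-style function that starts or stops one instance based on an "action" field. You can put it behind a small API for manual control and also point a scheduled rule at it with {"action": "stop"} as the nightly safety net. The instance ID below is a placeholder:

```python
import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: the game server instance
ec2 = boto3.client("ec2")

def handler(event, context):
    # Default to the safe option: if the event says nothing, stop the server.
    action = event.get("action", "stop")
    if action == "start":
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
    elif action == "stop":
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    else:
        return {"ok": False, "error": f"unknown action: {action}"}
    return {"ok": True, "action": action, "instance": INSTANCE_ID}
```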


Cost Fear vs Cost Reality

Before doing this, a lot of my AWS cost fear came from the unknown. When you don’t understand what exists, what bills, and how to shut it down, it feels like AWS could surprise you.

But once you understand what resources you created and when they’re billed, the fear drops. Worst case here is obvious: the instance runs all month and costs ~$130. Not great — but not mysterious.


This Was Always an Infrastructure Exercise

I could have built this without ever playing on it, but actually using it is what exposed the real issues — the UDP behavior, the port requirements, and whether the shutdown logic was trustworthy.

At that point, it wasn’t really about Satisfactory anymore. The game was just the workload. The project was about lifecycle management, cost control, and closing the gap between assumptions and reality.


Tradeoffs I Didn’t Try to Optimize

This setup is not something I’d “sell.” For always-on environments, purpose-built game hosting still makes a lot of sense.

This approach makes sense when play time is a few hours a week, when you're comfortable owning the lifecycle and the automation around it, and when learning the platform is part of the point.


Would I Build This Again?

For myself? Yes.

For other people? Probably not.

The learning was the point. And it changed how I think about AWS: it’s not bad for game servers — it’s bad for always-on, lightly-used ones. Treat cost as a design input, and the options open up.