Satisfactory Dedicated Server on AWS
Optimizing game server costs using event-driven architecture
The Constraint
I didn’t start this project because I wanted to build something impressive. I started it because I was lagging.
Satisfactory's dedicated server is memory-hungry (12–16 GB of RAM), which pushed me to an AWS t3.xlarge instance. Running it 24/7 costs roughly $130/month, hard to justify for a server used only a few hours a week.
The Architecture
Instead of a traditional always-on server, I treated the game session as an on-demand workload. The goal: pay only when playing.
- Compute: EC2 (t3.xlarge) for the game server.
- Control Plane: AWS Lambda + API Gateway handling Discord Slash Commands (Run/Stop/Status).
- Networking: Custom UDP/TCP port configuration (Ports 7777/8888) to bypass common connectivity issues.
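The control plane above can be sketched as one Lambda function behind API Gateway. This is a minimal illustration, not the project's actual code: the instance ID, command names, and reply strings are placeholder assumptions, and Discord's required request-signature verification is omitted for brevity.

```python
import json

# Hypothetical instance ID -- replace with your server's real one.
INSTANCE_ID = "i-0123456789abcdef0"

# Slash command -> (boto3 EC2 API call, reply shown in the Discord channel).
COMMANDS = {
    "run":    ("start_instances",    "Starting the server..."),
    "stop":   ("stop_instances",     "Stopping the server..."),
    "status": ("describe_instances", "Checking server status..."),
}

def route(command_name):
    """Map a slash-command name to an EC2 action, or None if unknown."""
    return COMMANDS.get(command_name)

def lambda_handler(event, context):
    """Entry point behind API Gateway; Discord POSTs interactions here."""
    body = json.loads(event["body"])
    command = body["data"]["name"]
    action = route(command)
    if action is None:
        return {"statusCode": 400, "body": "unknown command"}
    api_call, reply_text = action
    import boto3  # imported lazily; ships with the Lambda Python runtime
    ec2 = boto3.client("ec2")
    getattr(ec2, api_call)(InstanceIds=[INSTANCE_ID])
    # A real "status" command would inspect the describe_instances response;
    # here we just acknowledge. Type 4 = channel message with source.
    return {
        "statusCode": 200,
        "body": json.dumps({"type": 4, "data": {"content": reply_text}}),
    }
```

Keeping the command table pure makes the routing logic testable without AWS credentials; only the final branch touches boto3.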
The Challenge: Idle Detection
My first wrong assumption: "Idle detection is easy."
Satisfactory's game traffic runs over UDP, which is connectionless: there is no established-session table to inspect, so standard TCP-style "who is connected?" checks failed. The server would shut down mid-game or stay on forever.
The Fix: I implemented traffic monitoring via /proc/net/dev. By sampling
network byte deltas over a rolling window, the system can reliably distinguish between active gameplay,
background noise, and true inactivity, triggering an automated shutdown only when safe.
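The approach can be sketched as follows. The interface name, sampling interval, byte threshold, and window length are illustrative assumptions, not the project's tuned values; the counter-parsing follows the standard `/proc/net/dev` layout (8 RX fields, then 8 TX fields).

```python
import time

# Tuning values below are illustrative assumptions, not the post's exact numbers.
IFACE = "eth0"                  # primary network interface on the instance
SAMPLE_INTERVAL = 60            # seconds between samples
IDLE_THRESHOLD = 5_000          # bytes per interval below this counts as "quiet"
QUIET_SAMPLES_BEFORE_STOP = 10  # consecutive quiet samples before shutdown

def read_total_bytes(proc_net_dev: str, iface: str) -> int:
    """Sum the RX and TX byte counters for one interface.

    /proc/net/dev data lines look like:
      eth0: <rx_bytes> <rx_packets> ...(8 RX fields)... <tx_bytes> ...
    """
    for line in proc_net_dev.splitlines():
        name, sep, rest = line.partition(":")
        if sep and name.strip() == iface:
            fields = rest.split()
            return int(fields[0]) + int(fields[8])  # rx_bytes + tx_bytes
    raise ValueError(f"interface {iface!r} not found")

def is_idle(deltas) -> bool:
    """True when the last QUIET_SAMPLES_BEFORE_STOP deltas were all quiet."""
    recent = deltas[-QUIET_SAMPLES_BEFORE_STOP:]
    return (len(recent) == QUIET_SAMPLES_BEFORE_STOP
            and all(d < IDLE_THRESHOLD for d in recent))

def monitor_loop(shutdown):
    """Sample byte deltas forever; call shutdown() once traffic stays quiet."""
    with open("/proc/net/dev") as f:
        prev = read_total_bytes(f.read(), IFACE)
    deltas = []
    while True:
        time.sleep(SAMPLE_INTERVAL)
        with open("/proc/net/dev") as f:
            cur = read_total_bytes(f.read(), IFACE)
        deltas.append(cur - prev)
        prev = cur
        if is_idle(deltas):
            shutdown()  # e.g. subprocess.run(["sudo", "shutdown", "-h", "now"])
            return
```

Requiring a whole window of consecutive quiet samples is what separates a pause in gameplay (one or two quiet minutes) from genuine inactivity, so a lull between build sessions does not kill the server.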
Results
Cost Reduction: Replaced the fixed ~$130/month bill with a pay-per-use cost of approximately $0.18 per hour of active play. For our usage patterns, this dropped the bill to practically coffee money.
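The arithmetic behind that claim is worth making explicit. Using the two figures from the post (the hours-per-month value in the comment is an assumption for illustration):

```python
# Figures from the post: ~$130/month always-on vs ~$0.18 per hour of play.
ALWAYS_ON_MONTHLY = 130.00
ON_DEMAND_HOURLY = 0.18

def monthly_cost(hours_played: float) -> float:
    """On-demand cost for a month with the given hours of active play."""
    return hours_played * ON_DEMAND_HOURLY

def break_even_hours() -> float:
    """Hours of play per month at which on-demand matches always-on."""
    return ALWAYS_ON_MONTHLY / ON_DEMAND_HOURLY

# Assuming ~20 hours of play a month: monthly_cost(20) is $3.60.
```

The break-even point works out to roughly 722 hours per month, and a 30-day month only has 720 hours, so unless the server runs essentially 24/7, on-demand wins.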
This project validated that AWS can be viable for gaming workloads if you treat cost as a primary design constraint and build the automation to support it.
Source & Notes
This was a practical exercise in lifecycle management and cost control.
👉 Check my GitHub Profile for related repositories.