Migration Guide

AWS GameLift Is Costing You More Than You Think. Here's How Studios Get Off It.

Egress fees, AWS lock-in, and fleet configuration complexity are the three reasons studios migrate. None of them require a deadline — just a decision.

No shutdown deadline · Cost-driven migration · Typically 2–4 weeks · 10 min read
💡 Running on AWS credits?

Gameye can act as your orchestration layer now, routing sessions to AWS while your credits last. When they expire, you flip the underlying infrastructure to Gameye's bare metal and cloud — same API, same code, zero downtime. The studios that scramble are the ones who wake up to a full GameLift bill the month after credits run out and have to migrate under pressure.

Read more about the credits migration path →

AWS GameLift is a capable service. It's also one of the most expensive ways to run dedicated game servers at scale — not because of the compute rates, but because of what surrounds them: egress fees that compound with every player, a proprietary server SDK that embeds AWS deeply into your game code, and a fleet management model that requires ongoing DevOps attention even when nothing is wrong.

Studios migrating to Gameye aren't doing it because of a shutdown notice. They're doing it because the math stopped working — or because they want to fix it before it does.

What changes
  • GameLift Server SDK (InitSDK, ProcessReady, OnStartGameSession)
  • Session allocation call (CreateGameSession → Gameye POST /session)
  • Fleet and build management (GameLift console → Gameye API or Admin)
  • FlexMatch (replaced by any HTTP-compatible matchmaker)
  • IAM roles and CloudFormation templates for GameLift resources
  • Server packaging (zip upload → Docker image)
What doesn't change
  • Authoritative game server logic
  • Networking model and netcode
  • Client connection flow (still receives IP + port)
  • Game binary (repackaged, not rewritten)
  • Matchmaking rules and player grouping logic
  • Player-facing behaviour

Where GameLift's cost actually comes from

The compute rate is visible. The egress rate is where studios get surprised.

Egress fees

AWS charges $0.08–$0.09 per GB for data leaving their network. For game servers this is almost entirely one-directional: players send small inputs (free ingress), the server sends frequent game state updates to every player (charged egress). A server streaming 20 KB/s to each of 8 players sends 160 KB/s, roughly 9.6 MB of egress per minute. Across 1,000 concurrent sessions, that's more than $1,000/month in bandwidth alone — on top of compute.
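The arithmetic behind that estimate can be sketched directly. The 20 KB/s per-player rate and the $0.085/GB midpoint are illustrative assumptions, not measured values:

```python
# Back-of-envelope egress for one 8-player session.
# Assumptions (illustrative, not measured): each player receives
# 20 KB/s of state updates; AWS egress priced at the $0.085/GB midpoint.
PLAYERS = 8
KBPS_PER_PLAYER = 20          # KB/s sent to each player
EGRESS_USD_PER_GB = 0.085     # midpoint of $0.08-$0.09/GB

kb_per_second = PLAYERS * KBPS_PER_PLAYER          # 160 KB/s per session
mb_per_minute = kb_per_second * 60 / 1000          # ~9.6 MB/min
gb_per_hour = kb_per_second * 3600 / 1_000_000     # ~0.576 GB/hr
usd_per_session_hour = gb_per_hour * EGRESS_USD_PER_GB

print(f"{mb_per_minute:.1f} MB/min, {gb_per_hour:.3f} GB/hr, "
      f"${usd_per_session_hour:.4f}/session-hour in egress")
```

Multiply the per-session-hour figure by your concurrent session count and monthly hours to model your own fleet.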

Fleet management overhead

GameLift requires configuring fleets, build uploads, scaling policies, fleet aliases, and EC2 instance types — and revisiting that configuration when demand patterns change. That's not a one-time setup; it's ongoing DevOps work. Studios routinely over-provision to avoid cold-start delays, which means paying for idle capacity around the clock.

Single-provider risk

All GameLift sessions run on AWS. A regional outage takes your game down with no automatic failover. Gameye routes sessions across multiple bare metal and cloud providers — if one has an incident, sessions shift automatically.

SDK coupling

GameLift's Server SDK embeds AWS-specific session lifecycle management into your dedicated server binary. Every build carries it. Removing it isn't hard, but it's the kind of technical debt that accumulates if you plan to migrate later rather than now.

Cost comparison — 1,000 concurrent sessions
| Line item | AWS GameLift | Gameye |
|---|---|---|
| Compute | ~$3,500/mo (2 vCPU × 1,000 sessions) | ~$3,024/mo ($0.07/vCPU/hr × 2 × 1,000 sessions) |
| Egress | ~$1,100/mo (20 KB/s × 8 players × 1,000 sessions) | $0 — included |
| Estimated total | ~$4,600/mo | ~$3,024/mo |

Estimates based on 8-player sessions, 2 vCPU each, running 24/7. GameLift compute rates vary by EC2 instance type and region. Calculate your own costs →


The migration path — five steps

1
Containerise your server

GameLift uses build uploads — a zipped server binary uploaded to S3. Gameye runs Docker images. If you're not already building your dedicated server as a Docker image, this is the main new step — and it's a one-time change that applies to any future infrastructure you use.

Your Unreal or Unity headless server build produces a binary. A Dockerfile wraps that binary and its runtime dependencies into a portable image, which you push to a container registry (Docker Hub, AWS ECR, or any OCI-compatible registry). Gameye pulls the image when starting sessions.

See the containerisation guide in Gameye Docs →
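As a sketch of what that wrapping looks like for a Linux headless server build — the binary name, copy path, and port below are placeholders for your own project, not values Gameye prescribes:

```dockerfile
# Minimal sketch — binary name, paths, and port are placeholders.
FROM ubuntu:22.04

# Copy the packaged headless server build into the image.
COPY LinuxServer/ /app/
WORKDIR /app

# The game port your server listens on; Gameye supplies the actual
# port mapping in session metadata at runtime.
EXPOSE 7777/udp

# Launch the dedicated server in headless mode.
ENTRYPOINT ["./MyGameServer", "-log", "-port=7777"]
```

Build it with `docker build`, push it to your registry, and the same image runs locally, in CI, and on Gameye.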

2
Remove the GameLift Server SDK

Strip the following from your dedicated server build:

  • Aws.GameLift.Server.GameLiftServerAPI.InitSDK() — SDK initialisation
  • ProcessReady() — readiness signal to GameLift fleet
  • OnStartGameSession callback — session start handler
  • OnProcessTerminate callback — graceful shutdown handler
  • ActivateGameSession() and TerminateGameSession() calls

Replace them with nothing proprietary. Gameye determines server readiness through a configurable health check — typically a port check or a lightweight HTTP endpoint your server exposes when it's ready to accept players. Your server just needs to listen on the port specified in the session metadata.
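A readiness endpoint can be very small. The sketch below uses Python's standard library; the `/health` path and the background-thread setup are illustrative assumptions — match whatever health check you actually configure with Gameye:

```python
# Minimal readiness-endpoint sketch (Python stdlib).
# The /health path is illustrative; configure the check to match
# what your server actually exposes.
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep request logging quiet

def start_health_server(port: int = 0) -> ThreadingHTTPServer:
    """Serve the health endpoint on a daemon thread; port 0 picks a free port."""
    server = ThreadingHTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In a real dedicated server you would flip the endpoint to 200 only once the map is loaded and the game port is listening.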

3
Replace the session allocation call

Wherever your backend calls CreateGameSession or polls DescribeGameSessions, replace it with a single POST request to Gameye's Session API:

Before (GameLift)
// Create game session → poll for ACTIVE status → describe for IP/port
// Multiple API calls, status polling, SDK dependency
CreateGameSession(request) → DescribeGameSessions → GetConnectionInfo
After (Gameye)
POST /session
{ "image": "your-image:tag", "region": "eu-west", "metadata": {...} }
→ 200 OK: { "ip": "1.2.3.4", "port": 7777 }

IP and port are returned in the same response — no polling loop, no status management.
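In backend code that call is one function. The sketch below models the request/response shape shown above; the endpoint URL and exact field names are assumptions — verify them against Gameye's API reference. The HTTP transport is injectable so the allocation logic stays testable without a network:

```python
# Session allocation sketch. Endpoint URL and payload field names are
# assumptions modelled on the example above — check Gameye's actual
# API documentation for the real contract.
import json
import urllib.request
from typing import Callable, Optional

def allocate_session(
    image: str,
    region: str,
    metadata: dict,
    post: Optional[Callable[[str, bytes, dict], dict]] = None,
) -> tuple[str, int]:
    """POST /session and return (ip, port) from the single response."""
    body = json.dumps(
        {"image": image, "region": region, "metadata": metadata}
    ).encode()
    headers = {"Content-Type": "application/json"}
    if post is None:
        post = _http_post  # real network call by default
    resp = post("https://api.gameye.example/session", body, headers)
    return resp["ip"], resp["port"]

def _http_post(url: str, body: bytes, headers: dict) -> dict:
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req) as r:
        return json.load(r)
```

There is no polling loop to write: the IP and port arrive in the same response, so the function returns connect info directly.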

4
Replace FlexMatch (or keep your matchmaker)

FlexMatch is tightly coupled to GameLift's fleet management. Without GameLift, it becomes redundant. Most studios replace it with a matchmaker that communicates over standard HTTP — Nakama, OpenMatch, or a custom backend. Your existing matchmaking rules (player grouping, skill brackets, region preferences) are transferable concepts regardless of which matchmaker you use.

If you have an existing custom matchmaker that calls GameLift, you're replacing one backend call. The matchmaking logic itself doesn't change.
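The integration point looks roughly like this — a hypothetical matchmaker hook, where `allocate` stands in for whatever function wraps your POST to the Session API:

```python
# Hypothetical matchmaker hook: grouping logic is untouched; only the
# hosting call at the end changes from GameLift to the Session API.
from typing import Callable

def on_match_formed(
    players: list[str],
    region: str,
    allocate: Callable[[str, dict], dict],
) -> dict:
    """Called once the matchmaker has grouped players.
    Previously this called CreateGameSession; now it calls allocate()."""
    session = allocate(region, {"players": players})
    # Hand the same connect info back to clients as before.
    return {"ip": session["ip"], "port": session["port"], "players": players}
```

Everything upstream of this function — skill brackets, party handling, region preference — is unchanged.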

5
Parallel run by region — then shift traffic

Run GameLift and Gameye in parallel. Route one region's traffic to Gameye, validate session start time, stability, and cost over a week, then shift region by region. No hard cutover required. If anything is off, traffic rolls back without players noticing.

Most studios complete end-to-end validation within two weeks. The actual traffic shift is a configuration change.
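The region-by-region shift can live in a single routing table in your backend — a sketch, with region names and the default fallback as assumptions about your own config:

```python
# Per-region routing table sketch for the parallel-run phase.
# Flipping a region is a one-line config change; rolling back is the
# same change in reverse. Region names are illustrative.
SESSION_BACKEND = {
    "eu-west": "gameye",     # shifted first, validated for a week
    "us-east": "gamelift",   # still on GameLift
    "ap-south": "gamelift",
}

def backend_for(region: str) -> str:
    """Pick the session backend for a region; default to the incumbent."""
    return SESSION_BACKEND.get(region, "gamelift")
```

Because unknown regions fall back to the incumbent, a partial rollout can never strand traffic.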

What doesn't change: Your authoritative server logic, networking model, client connection flow, player-facing behaviour, and any matchmaking rules. The migration scope is the infrastructure layer — not your game.


On AWS credits — and why now is the right time

Many studios on AWS Activate, Y Combinator credits, or investor-sourced cloud agreements are running GameLift at effectively zero marginal cost today. That's not a reason to wait — it's a reason to start now.

Now
Integrate Gameye's orchestration layer

Your backend calls Gameye's Session API. Gameye routes sessions to AWS infrastructure while you're still on credits. You get Gameye's sub-second allocation and fleet management immediately — credits still cover the underlying AWS compute.

Credits expire
Flip the infrastructure

Change one configuration — the infrastructure provider Gameye routes to. Sessions now run on Gameye's bare metal and cloud providers. Your backend API call hasn't changed. Your players see nothing change.

After
Full cost savings

$0.07/vCPU/hr, no egress fees. You avoided an emergency migration and your players never experienced a disruption.

The studios that scramble are the ones who discover the full GameLift bill — compute plus egress — the month after credits run out, with a live game and players expecting uptime. Starting the integration now turns that into a planned transition.


Side-by-side comparison

| | AWS GameLift | Gameye |
|---|---|---|
| Egress / bandwidth fees | $0.08–$0.09/GB | None — included |
| Compute pricing | EC2 on-demand + management fees | $0.07/vCPU/hr flat |
| Session start time | Seconds–minutes (fleet dependent) | ~0.5s average |
| Provider coverage | AWS only | Bare metal + multi-cloud |
| Provider failover | None — single provider | Automatic, multi-provider |
| Matchmaker | FlexMatch or custom | Any matchmaker via HTTP |
| Fleet management | Manual — EC2 types, aliases, policies | Managed — API driven |
| IAM / infra complexity | High — IAM roles, CloudFormation | None |
| Proprietary server SDK | Required — GameLift Server SDK | None — standard port/health check |
| Sandbox access | AWS account setup required | 24 hours |
| Works during AWS credit period | Yes | Yes — orchestrates AWS or own infra |

Why studios migrating from GameLift choose Gameye

No egress fees

GameLift charges $0.08–$0.09/GB for every byte your servers send to players. Gameye includes all bandwidth in the compute rate. For high-traffic sessions this is often where the biggest savings come from — 40–60% total cost reduction is typical.

No proprietary SDK

Remove five GameLift SDK calls from your server. Replace them with nothing proprietary. Gameye determines readiness through a standard health check — your server just needs to listen on a port. No SDK version updates to track, no AWS dependency in your binary.

Sub-second session allocation

Gameye starts sessions in ~0.5 seconds on average. No fleet pre-warming, no EC2 cold starts, no scaling policy lag. Players grouped by a matchmaker connect to a server that was already running and waiting.

Multi-provider failover

Gameye routes sessions across bare metal and multiple cloud providers. If one has an incident, sessions shift automatically. You're not replacing AWS lock-in with Gameye lock-in — the same Docker image runs wherever Gameye schedules it.

Works with any matchmaker

FlexMatch, Nakama, OpenMatch, or a custom backend — Gameye's Session API is a single HTTP endpoint. Any system that can make a POST request can allocate a session. You don't replace your matchmaking logic; you replace the hosting call inside it.

Gradual migration

Parallel-run by region. Validate Gameye on 10% of traffic before shifting the rest. No hard cutover. If you're on AWS credits, integrate Gameye now and flip the infrastructure when credits expire — same API, zero downtime.


Frequently asked questions

Do I need to rewrite my game server to migrate from GameLift?

No. Your authoritative game server logic, networking model, and client connection flow are unchanged. The migration scope is: remove the GameLift Server SDK from your server build, replace the session allocation call in your backend, and package your server as a Docker image if you haven't already. Your game binary and netcode stay the same.

Can I keep using FlexMatch if I move to Gameye?

You can, but FlexMatch is tightly coupled to GameLift's fleet management and becomes redundant without it. Most studios migrating to Gameye replace FlexMatch with a matchmaker that speaks standard HTTP — Nakama, OpenMatch, or a custom backend. Gameye is matchmaker-agnostic: any system that can make an HTTP POST can allocate a session.

How does Gameye handle server warm-up compared to GameLift fleet pre-warming?

GameLift requires you to pre-warm fleets — pre-allocating EC2 capacity and configuring scaling policies to have servers ready. Cold starts on un-warmed capacity can take minutes. Gameye maintains a dynamic warm buffer of pre-started containers that sessions are allocated from in ~0.5 seconds. There's no fleet to configure or warm — you pay for sessions when they run, not for reserved capacity sitting idle.

What replaces the GameLift Server SDK in my dedicated server?

Nothing proprietary. Remove InitSDK, ProcessReady, OnStartGameSession, and OnProcessTerminate from your server code. Gameye determines server readiness through a configurable health check — typically a port check or a lightweight HTTP endpoint. Your server just needs to listen on the port specified in the session metadata. No Gameye-specific SDK is required on the server.

How much do studios typically save migrating from GameLift to Gameye?

Savings vary by game type and session profile, but 40–60% total cost reduction is typical once egress fees are removed from the calculation. A game running 1,000 concurrent 8-player sessions at 20 KB/s server-to-client traffic generates over $1,000/month in GameLift egress charges alone. That line disappears on Gameye. Use the pricing calculator to model your specific case.

I'm still on AWS credits and GameLift is essentially free — why migrate now?

Because by the time your credits expire, you'll want the migration already done. GameLift's full cost — compute plus egress fees — hits in full the month credits run out. Studios that integrate Gameye while credits are active use that period to validate performance and get comfortable with the platform. Gameye can route sessions to AWS infrastructure during the credit period, then shift to Gameye's bare metal and cloud when credits expire — same API, same code, zero downtime for players.

Ready to stop paying the egress tax?

Request sandbox access, push your Docker image, and run your first Gameye session. Most studios complete parallel testing within two weeks — with or without a GameLift deadline.