And with the advent of browser technologies like WebSockets and WebRTC, it is possible to get fast realtime networked communication between multiple users. And finally, with node.js, it is possible to run a persistent distributed server for your game while keeping everything in the same programming language.
Creating a networked multiplayer game is a much harder task than writing a single player or a hot-seat multiplayer game.
In essence, multiplayer networked games are distributed systems, and almost everything about distributed computing is more difficult and painful than working on a single computer.
Deployment, administration, debugging, and testing are all substantially complicated when done across a network, making the basic workflow more complex and laborious.
There are also conceptually new sorts of problems which are unique to distributed systems, like security and replication, which one never encounters in the single computer world.
One thing which I deliberately want to avoid discussing in this post is the choice of networking library.
It seems that many posts on game networking become mired in details like NAT traversal, choosing between TCP vs UDP, etc.
On the one hand these issues are crucially important, in the same way that the programming language you choose affects your productivity and the performance of your code.
But on the other hand, the nature of these abstractions is that they only shift the constants involved without changing the underlying problem.
In a similar vein, the C programming language gives better realtime performance than a garbage collected language at the expense of forcing the programmer to explicitly free all used memory.
However, whether one chooses to work in C or Java, or to use UDP instead of TCP, the problems that need to be solved are essentially the same.
So to avoid getting bogged down we won’t worry about the particulars of the communication layer, leaving that choice up to the reader.
We will instead model the performance of our communication channels abstractly, in terms of latency and bandwidth.
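For illustration, such an abstract channel can be reduced to just two numbers. The sketch below (all names invented for this post, not taken from any library) computes the time to deliver a message of a given size:

```javascript
// Hypothetical sketch: a channel modeled only by latency (seconds)
// and bandwidth (bytes/second).
function deliveryTime(channel, messageBytes) {
  // Time for the first byte to arrive, plus time to stream the payload.
  return channel.latency + messageBytes / channel.bandwidth;
}

const dsl = { latency: 0.05, bandwidth: 1e6 }; // 50 ms, 1 MB/s
// A 10 kB state update takes 50 ms + 10 ms = 60 ms to deliver.
console.log(deliveryTime(dsl, 10000)); // 0.06
```

Any real transport (TCP, UDP, WebSockets) just changes these constants; the analysis later in the post only depends on the two parameters.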
Similarly, I am not going to spend much time in this series talking about security.
Unlike the choice of communication library though, security is much less easily written off.
So I will say a few words about it before moving on.
In the context of games, the main security concern is to prevent cheating. At a high level, there are three ways players cheat in a networked game:

Exploits: Use bugs in the game logic to directly manipulate the state to the player's advantage.

Information leakage: Snoop on parts of the state that should not be visible to the player.

Automation: Use scripts/helper programs to enhance player performance and automate trivial tasks.
Preventing exploits is generally as “simple” as not writing any bugs.
Beyond generally applying good software development practices, there is really no way to completely rule them out.
While exploits tend to be fairly rare, their consequences can be catastrophic.
So it is often critical to support good development practices with monitoring systems allowing human administrators to identify and stop exploits before they can cause major damage.
Finally, preventing automation is the hardest security problem of all.
For totally automated systems, one can use techniques like CAPTCHAs or human administration to try to discover which players are actually robots.
However players which use partial automation/augmentation (like aimbots) remain extremely difficult to detect.
In this situation, the only real technological option is to force users to install anti-cheating measures like DRM/spyware that audit the state of their computer for cheat programs. Such measures are invasive and unpopular with players, and because they ultimately must be run on the user's machine they are vulnerable to tampering and thus have dubious effectiveness.
Now that we've established a boundary by defining what this series is not about, we can move on to saying what it is actually about: namely, replication.
The goal of replication is to ensure that all of the players in the game have a consistent model of the game state.
Replication is the absolute minimum problem which all networked games have to solve in order to be functional, and all other problems in networked games ultimately follow from it.
Solutions to the replication problem are usually classified into two basic strategies, active and passive replication, which when applied to video games can be interpreted as follows:

Active replication: Inputs from the players are sent to all players in the network, and the state is simulated deterministically and independently on each client (also called lockstep or state machine replication).

Passive replication: Inputs from the players (clients) are sent to a single machine (the server) and state updates are broadcast to all players (also called the client-server model).
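As a rough sketch of the difference (the names and the toy `step` function here are invented for illustration, with the network replaced by in-memory calls), the two strategies can be contrasted in a few lines:

```javascript
// A deterministic state update shared by every peer.
function step(state, inputs) {
  const total = inputs.reduce((a, b) => a + b, 0);
  return { tick: state.tick + 1, sum: state.sum + total };
}

// Active replication: every client receives ALL inputs
// and advances its own copy of the state independently.
function activeTick(clientStates, inputs) {
  return clientStates.map((s) => step(s, inputs));
}

// Passive replication: only the server simulates,
// then broadcasts the resulting state to every client.
function passiveTick(serverState, inputs, clientCount) {
  const next = step(serverState, inputs);
  return { server: next, clients: Array(clientCount).fill(next) };
}

const inputs = [1, 2, 3];
const active = activeTick([{ tick: 0, sum: 0 }, { tick: 0, sum: 0 }], inputs);
const passive = passiveTick({ tick: 0, sum: 0 }, inputs, 2);
// Both schemes agree, but only as long as step() is deterministic.
```

The active scheme ships only inputs and trusts every client to compute the same result; the passive scheme ships authoritative state and trusts only the server.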
There are also a few intermediate types of replication, though we won't discuss them until later.
L. Lamport. (1978) "Time, Clocks, and the Ordering of Events in a Distributed System" Communications of the ACM. It is fair to say that active replication is a fairly obvious idea, and it was widely implemented in many of the earliest networked simulations.
Many classic video games, including Starcraft, relied on active replication. One of the best writings on the topic from the video game perspective is M. Terrano and P. Bettner's teardown of Age of Empires' networking model: M. Terrano, P. Bettner. (2001) "1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond" Gamasutra. Desynchronization bugs are often very subtle.
For example, different architectures and compilers may use different floating point rounding strategies resulting in divergent calculations for position updates.
Other common problems include incorrectly initialized data and differences in algorithms like random number generation.
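A common way to detect divergence, in the spirit of the sync checks described in the Age of Empires write-up, is to have every client periodically report a checksum of its state and compare them. The sketch below uses an illustrative FNV-1a hash over a serialized state (the serialization scheme is an assumption for this example):

```javascript
// Sketch: detect desynchronization by comparing per-tick state checksums.
function checksum(state) {
  const s = JSON.stringify(state); // field order must match on all clients
  let h = 0x811c9dc5; // 32-bit FNV-1a
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

function detectDesync(checksums) {
  // All clients should report identical checksums for the same tick.
  return new Set(checksums).size > 1;
}

const a = { x: 1.5, y: 2.0 };
const b = { x: 1.5, y: 2.0000001 }; // tiny floating point divergence
detectDesync([checksum(a), checksum(a)]); // false: still in sync
detectDesync([checksum(a), checksum(b)]); // true: clients have diverged
```

Note that a checksum only tells you that clients diverged, not why; in practice games also log per-system hashes to narrow down the offending subsystem.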
Recovering from desynchronization is difficult.
A common strategy is to simply end the game if the players desynchronize.
Another solution would be to employ some distributed consensus algorithm, like Paxos or Raft, though this could increase the overall latency.
T. Funkhouser. (1995) "RING: A Client-Server System for Multi-User Virtual Environments" Computer Graphics. Today, it is fair to say that the client-server model has come to dominate online gaming at all scales, from competitive real-time strategy games and fast-paced first-person shooters to massively multiplayer games like World of Warcraft.
In the case of active replication, the latency is proportional to the diameter of the network. This is minimized in the case where the graph is a complete graph (peer-to-peer), where every input travels a single hop between any pair of players. The bandwidth required by active replication over a peer-to-peer network is O(n) per client, since each client must broadcast its inputs to every other client, or O(n^2) total.
To analyze the performance of passive replication, let us designate player 0 as the server.
Then the latency of the network is at most twice the one-way delay from the slowest player to the server, since an input must travel up to the server and the resulting update back down. This latency is minimized by a star topology with the server at the hub.
The total bandwidth consumed is O(1) per client and O(n) for the server.
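The counts above can be summarized in a small cost model. This is only a sketch, with message sizes normalized to one unit per input or update:

```javascript
// Active replication over a complete peer-to-peer graph:
// each of n clients broadcasts its input to the other n - 1 clients.
function activeMessages(n) {
  return { perClient: n - 1, total: n * (n - 1) }; // O(n) per client, O(n^2) total
}

// Passive replication over a star with the server at the hub:
// each client sends one input up; the server sends one update per client.
function passiveMessages(n) {
  return { perClient: 1, server: n, total: 2 * n }; // O(1) per client, O(n) server
}

activeMessages(8);  // { perClient: 7, total: 56 }
passiveMessages(8); // { perClient: 1, server: 8, total: 16 }
```

The asymmetry is the key trade-off: active replication spreads the load evenly but scales quadratically in total, while passive replication concentrates an O(n) cost on the server.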
6 Responses to Replication in networked games: Overview (Part 1)
February 10, 2014 at 7:57 am Although some of the subjects you skipped over lightly (NAT Traversal, etc) are still not entirely a solved problem in practice (depending on chosen framework/language) I agree that there are a huge quantity of articles already out there and it was a nice change to see you focus on the core of the subject.
The analytical approach is a breath of fresh air too.
The only note of contention I have with your conclusion is that you ignore what the design of the game requires – Some developers may not have the resources or business case to develop a client-server architecture (both remote or local).
Or maybe the design of the game allows for a more loosely synchronized state machine (Turn-based games).
I look forward to your upcoming articles – Something I’ve been personally looking into is how games like Battlefield 4 work so well under latency.
I suspect it's a typical client-server event system with a very well done layer of local prediction and user feedback.
nop February 18, 2014 at 11:04 pm Some thoughts on the subject.
For active replication it is enough to only send inputs from every player to every other player.
Every player will send and receive N * size_of_input data.
Regardless of world complexity.
For passive replication every player will only send size_of_input_data, but the server has to send out N * size_of_world_delta_update.
Which is MUCH bigger.
Since latency is at least partially affected by the size of data to be sent, passive replication is much less efficient for a big game world.
But of course other advantages can’t be denied.
Reply strichmond February 19, 2014 at 12:28 am In the real world Passive Replication is very workable if you apply some basic intelligence such as only sending updates to players if they can see the objects that need updating.
It's usually very cheap to do, and how much it helps depends on the genre of game (open world vs. otherwise).
Adrian Myers May 29, 2014 at 7:47 pm To expand on strichmond's reply, a server running a large game world will almost never send N responses for a given request, especially in the case of an MMO.
Say you have 3,000 people on a server, and a group of 20 players are fighting an outdoor boss.
One of the players casts an area-of-effect healing spell that hits all 20 players.
Say some of them have talents which share healing with people around them in addition to other secondary effects, potentially leading to hundreds of discrete updates which must be applied to 20 entities and then sent to those 20 players and anybody nearby.
This is a relatively expensive event in MMO terms, but it is completely unknown to the majority of the people playing the game, and the server doesn't have to send this event to anybody out of range of that combat.
Instead, when players enter that area, they’ll simply be given up-to-date snapshots of the other players there (which are then updated as described above).
If the boss was killed, it simply won’t be around to send information about at all to new players walking through that area until it respawns.
The same is true for NPC AI.
Some things can actually be done the active/simulation way – say, a patrolling guard whose position is given as a function of time and a path just needs a moderately accurate clock update to work correctly (it won't be to a player's advantage to modify this information since the server will just ignore requests against a target if the positions don't work out, and the player won't see the monsters if they're in aggro range and the server is about to have them attack, etc).
But if the monsters required active pathfinding, simple spatial partitioning and cell population tests allow the server to suspend updates unless there is anybody around to receive them, and then only send those updates to players in range.
For many reasons (security being one but others are just as important), this is easier (and possibly more efficient) to handle with sparse passive replication, and the size/bandwidth requirements of even very complex updates are quite manageable in practice.
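The cell-based filtering described above can be sketched roughly as follows (the grid size and all names are hypothetical, not from any particular engine):

```javascript
// Sketch of grid-based interest management: the server only sends an
// event to players whose grid cell is within range of the event's cell.
const CELL = 100; // world units per grid cell (illustrative)

function cellOf(pos) {
  return { cx: Math.floor(pos.x / CELL), cy: Math.floor(pos.y / CELL) };
}

// Players whose cell is within `range` cells of the event receive it.
function recipients(players, eventPos, range) {
  const e = cellOf(eventPos);
  return players.filter((p) => {
    const c = cellOf(p.pos);
    return Math.abs(c.cx - e.cx) <= range && Math.abs(c.cy - e.cy) <= range;
  });
}

const players = [
  { name: "near", pos: { x: 120, y: 90 } },
  { name: "far", pos: { x: 2000, y: 2000 } },
];
recipients(players, { x: 100, y: 100 }, 1).map((p) => p.name); // ["near"]
```

With this kind of filter in place, the cost of an event scales with the local population density rather than the total player count.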
It’s also helpful in practice to have such a system when it comes to resolving high player density in a given area.
If you play WoW, you see this in the first new zone for high level players in every expansion.
The size of the logical world that is visible to your player shrinks, and the tolerated latency before considering somebody disconnected increases, as the server adapts to handle congestion in these very dense areas.
This lowers the number of event side-effects that the server has to worry about, something which would be extremely difficult to manage in an active replication environment with no simple way to scale that number down without immediately desynchronizing hundreds of players or introducing so much latency players only make one request per second or something along those lines.
There is also no need for the AI clock to vary as described in the Age of Empires article (which was very interesting!), which is good, because that's a global (or at least per-thread) thing which shouldn't adapt to one high-congestion area and then leave the rest of the game nearly unplayable as a result.
And that ignores another issue of active replication, which would be asking a client’s computer to run the game logic for pretty much the entire MMO while also rendering their part of it.
Reply nop August 24, 2015 at 2:46 pm I was mostly concerned about multiplayer shooters or other games that use smaller worlds, but still need to support a decent amount of players that can potentially meet each other within, say 10 seconds.
In this case, sparse replication won’t save much at all.
Running game logic is also a non-issue mostly, typical server is not much faster than typical client PC, 2-5 times at most, if client HW is really slow.
And the cost of simulating the world in many such games actually doesn’t add much to the cost of rendering the stuff on screen.
Main problem of this approach is consistency and security which should be resolved by using more careful programming of simulation logic and using server-side verification.
nop August 24, 2015 at 2:59 pm Or to sum this up, I assumed maps with decent detail but with limited amount of players, something a single server _or_ a single client can handle with relative ease.
Even if server still has to run the simulation for obvious reasons, bandwidth savings will be huge, latency smaller.