Thursday, March 28, 2024

Optimizing WebSockets Bandwidth

by Eric Li

 

Steve cradled his pumpkin spice latte as he walked into the office. It had been three weeks since the launch of his HTML5 multiplayer game, and things were going great. As Steve checked his e-mail, he noticed the account statement from his cloud service provider.

 

“Let’s see here,” Steve muttered to himself. “Virtual server… 100,000 total hours.” Steve knew this was roughly correct, given the statistics they had been gathering. Thinking about the success of the launch, Steve could barely contain his smile. Skimming further down in the statement, Steve froze. All the way at the bottom, nestled between the line for tax and the heading “Bandwidth” was a number that made Steve’s stomach sink as fast as the latte dramatically falling from his hand.

 

HTML5 games are gaining serious momentum. Once Internet Explorer 10 is released, WebSockets will be available in all modern desktop browsers. This technology enables serious realtime communication—the keystone of any non-trivial multiplayer action game. But like a snake in tall grass, bandwidth costs can seemingly come from nowhere. In this article, I want to discuss the oft-overlooked cost of network bandwidth and techniques for minimizing it.

 

The “Hidden” Costs of Network Bandwidth

socket.io is a WebSockets-enabled networking module for node.js that I commonly see recommended for its ease of use, robust fallback support, and integrated publish-subscribe pattern. Here is a snippet of server code that demonstrates this.

var io = require('socket.io').listen(80);

io.sockets.on( 'connection', function( socket ) {
  io.sockets.emit( 'broadcast', { will: 'be received by everyone' });

  socket.on( 'private message', function ( from, msg ) {
    console.log( 'I received a private message by', from, 'saying,', msg );
  });
});

 

In just a few short lines, you have a WebSockets implementation that will fall back gracefully to a number of transports at runtime, on a per-client basis. To see the publish-subscribe pattern in action, let's look at a sample of client code.

 

var socket = io.connect();

socket.on( 'broadcast', function( data ) {
  console.log( data.will === 'be received by everyone' );
});

socket.on( 'connect', function() {
  socket.emit( 'private message', 'MyUsername', '"This is easy!"' );
});

 

The event system used here is familiar to most people, as it is actually the EventEmitter found in node.js. All these things are great, so what's the issue here, and what can this small example reveal about why Steve's bandwidth bill is so high?

 

JSON: Strengths and Weaknesses

 

JSON is a way of representing structured data that enjoys native encoding and decoding implementations in all modern browsers. For many reasons, JSON has gained wide acceptance on the web for data serialization. If we examine the socket.io source, we see that our calls to emit eventually serialize to JSON. Then, if we inspect the WebSocket frames, we might see the following.
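The original article showed a capture of the raw frames at this point. As a rough reconstruction—assuming the socket.io 0.9 wire format, in which event packets are type 5—the broadcast above would travel as something like:

5:::{"name":"broadcast","args":[{"will":"be received by everyone"}]}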

 

 

It is generally clear how our calls to emit translate to what we see here. A lot of that can be attributed to the human-readability of JSON. However, in a networked game environment where we can expect 20 to 30 messages a second from the server, the verbosity of JSON adds up. There is a lot of metadata describing the structure of the data that we don't want, because our priority is compression rather than human-readability. We are also not expecting truly arbitrary data, so the structure that JSON encodes is unnecessary. It is becoming clear that JSON is not the solution for this problem.

 

Byte Ripper X or (How to Shed Those Bits)

 

To really understand what we’re dealing with, we will want some real data to play with. For this example, I am going to use the following “game state.”

 

var state = [{
  id: 0,
  transform: {
    position: {
      x: 15.290663048624992,
      y: 2.0000000004989023,
      z: -24.90756910131313
    },
    rotation: {
      x: 0.32514392007855847,
      y: -0.8798439564294107,
      z: 0.32514392007855847,
      w: 0.12015604357058937
    }
  }
}, {
  id: 1,
  transform: {
    position: {
      x: 7.490254936274141,
      y: 2.0000000004989023,
      z: -14.188117316225544
    },
    rotation: {
      x: 0,
      y: 0.018308020720336753,
      z: 0.1830802072033675,
      w: 0.9829274917854702
    }
  }
}];


 

What we have here could represent the positional and rotational data for two players in a 3D game. The values were made up for this example, but should be sufficient for our purposes. If we were to serialize this game state to JSON, it would be 410 bytes.

 

var msg = JSON.stringify( state );

var getUTF8Size = function( str ) {
  var sizeInBytes = str.split('')
    .map(function( ch ) {
      return ch.charCodeAt(0);
    }).map(function( uchar ) {
      // The reason for this is explained later in
      // the section "An Aside on Text Encodings".
      return uchar < 128 ? 1 : 2;
    }).reduce(function( curr, next ) {
      return curr + next;
    });

  return sizeInBytes;
};

var msgSize = getUTF8Size( msg ); // 410 bytes


 

And since the camera adds ten pounds, socket.io’s snapshot of the game state adds roughly 30 bytes of overhead on top—more if you have a longer event name. Assuming that we’re sending 30 messages a second, we are uploading 12.89 KBps per player. Now, this may not sound like a lot, but consider this: as the player cap for a single game increases, so does the required bandwidth, in a linear fashion. Given enough players or game instances, these numbers become sobering very quickly.
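A quick back-of-the-envelope check of that figure:

// (410 bytes of JSON + ~30 bytes of socket.io framing) per message
// 440 bytes * 30 messages/s = 13,200 B/s ≈ 12.89 KB per second, per player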

 

You may have already noticed that every message we send consists of exactly eight values per player. If we define a set ordering of these values, we can dispense with the structure and simply rebuild the state from the data once we receive it on the client, bringing us down to 8.61 KBps per player. You will also notice that this is harder to parse at a glance. However, the question remains—can we do better?

 

var slimState = [[
  0,
  15.290663048624992,
  2.0000000004989023,
  -24.90756910131313,
  0.32514392007855847,
  -0.8798439564294107,
  0.32514392007855847,
  0.12015604357058937
], [
  1,
  7.490254936274141,
  2.0000000004989023,
  -14.188117316225544,
  0,
  0.018308020720336753,
  0.1830802072033675,
  0.9829274917854702
]];

var slimMsg = JSON.stringify( slimState );


 

An Aside on Text Encodings

 

Before we continue, it is imperative to understand how WebSocket actually transports data. The WebSocket protocol originally only allowed UTF-8 text to be transmitted on the wire. Then, in April of 2011, the hybi-07 draft of the protocol introduced the option to transfer binary data. The catch here is that socket.io does not currently support binary WebSocket frames. I'll discuss this more later, but keep it in mind when choosing a WebSocket implementation for node.js.

 

For those unfamiliar with the Unicode Standard, UTF-8 is a variable-length encoding. This fact will become important when we calculate exactly how big our messages are. The other caveat is that when the WebSocket protocol receives invalid UTF-8—whether invalid byte sequences or invalid code points—the connection must immediately be failed and closed. Keep this in mind when trying to encode data as UTF-8. Also worth noting is that JavaScript uses UTF-16 internally and will not complain if you create a string that is not valid Unicode.
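A quick illustration of both points:

// Code points below 128 take one byte in UTF-8.
'A'.charCodeAt(0);              // 65  -> 1 byte on the wire

// Code points from 128 to 2047 take two bytes in UTF-8.
String.fromCharCode( 200 );     // 'È' -> 2 bytes on the wire

// JavaScript happily builds a lone surrogate, which is invalid UTF-8;
// sending it in a text frame would fail the connection.
var bad = String.fromCharCode( 0xD800 );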

 

Getting Down and Dirty: Looking at the (Naughty) Bits

 

If you've been around computers, code, or programmers for long enough, you may be privy to the fact that, underneath it all, those numbers are represented by a bunch of bits. You may even know that Numbers in JavaScript are represented in the IEEE 754 double-precision floating-point format. If you don't, the main takeaway is that each one occupies 8 bytes of memory. This is a lot less than the 20 characters (and therefore 20 bytes) some of the Numbers use in JSON, and we're going to look into this binary representation using typed arrays.

When we use typed arrays, we can examine the binary form of the data more closely. Behind each typed array we create is an ArrayBuffer object that holds the actual bits. These ArrayBuffers are chunks of pure bits—they have no notion of what kind of data they hold or what format it is in. To make sense of this raw data, we need to impose some context, which we do by creating "views" onto the buffer. Here is an example of what you might see in node.js on a little-endian machine.

 

var char  = new Int8Array( 1 );
// uchar now shares the same ArrayBuffer as char.
var uchar = new Uint8Array( char.buffer );

char.buffer === uchar.buffer; // true

char[0] = -1;
uchar[0] === 255; // true, same data interpreted differently

var arr = new Float64Array([15.290663048624992]);

// { '0': 15.290663048624992,
//   buffer:
//    { '0': 0,
//      '1': 0,
//      '2': 128,
//      '3': 201,
//      '4': 209,
//      '5': 148,
//      '6': 46,
//      '7': 64,
//      byteLength: 8 },
//   BYTES_PER_ELEMENT: 8,
//   length: 1,
//   set: [Function: set],
//   slice: [Function: slice],
//   byteOffset: 0,
//   byteLength: 8,
//   subarray: [Function: subarray] }


 

We can see the eight individual bytes that compose the Number 15.290663048624992. We can convert these into an eight-character string using String.fromCharCode, and then compact the entire game state using the process shown below.

 

// Compact the state.
var slimmerState = new Float64Array([
  0,
  15.290663048624992,
  2.0000000004989023,
  -24.90756910131313,
  0.32514392007855847,
  -0.8798439564294107,
  0.32514392007855847,
  0.12015604357058937,
  1,
  7.490254936274141,
  2.0000000004989023,
  -14.188117316225544,
  0,
  0.018308020720336753,
  0.1830802072033675,
  0.9829274917854702
]);

// Impose an 8-bit unsigned format onto the bytes
// stored in the ArrayBuffer.
var ucharView  = new Uint8Array( slimmerState.buffer );
var slimmerMsg = String.fromCharCode.apply(
  String, [].slice.call( ucharView, 0 )
);
var slimmerMsgSize = getUTF8Size( slimmerMsg ); // 170 bytes

// Decode
var decodeBuffer = new ArrayBuffer( slimmerMsg.length );
var decodeView   = new Uint8Array( decodeBuffer );

// Put message back into buffer as 8-bit unsigned.
for ( var i = 0; i < slimmerMsg.length; ++i ) {
  decodeView[i] = slimmerMsg.charCodeAt( i );
}

// Interpret the data as JavaScript Numbers.
var decodedState = new Float64Array( decodeBuffer );


 

Our resulting message is 128 characters long, and because it will be UTF-8 encoded, ends up being 170 bytes. Once we add our overhead, we have slimmed our bandwidth usage down to 5.86 KBps per player. That’s a 54% reduction from our initial naïve approach!
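Checking the math:

// 16 doubles * 8 bytes = 128 characters. The message is 170 bytes in UTF-8,
// so 42 of those bytes landed in the two-byte range (>= 128).
// (170 + ~30 overhead) * 30 messages/s = 6,000 B/s ≈ 5.86 KB per second, per player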

 

How Low Can You Go?

 

There are further ways to compress your data, including actual compression algorithms, and we will go into more detail on a few of these techniques. Some apply well in general, but others won't fit your requirements, may end up being too costly (in clock cycles or development time), or may even inflate your data. Feel free to mix and match these techniques, but as with optimization in general, be sure to test with data that is representative of your real-world data.

 

Too Much Precision?

 

It is possible that the data the server is sending to the clients is too precise. Do you really need to send double-precision values? Will single-precision do? Can you send 16-bit or even 8-bit integers? Do you even need to send the value at all? The answers depend heavily on the application, but these questions can help cut down a lot of the data.

 

In our example, we end up encoding the ids in the game state as double-precision values. However, when we examine our application, we see that they are just meant as a unique identifier within the game. Let us assume that we have a cap of four players. In that case, an 8-bit integer is more than enough for our needs. Then, assuming we use a server-client model where the client is simply a dumb renderer, we can definitely get away with using single-precision numbers for the position and rotation values. We are then left with a string of 58 characters, totaling 75 bytes. Add our overhead, and we have 3.08 KBps per player.
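The byte count works out as follows:

// 2 players * (1 id byte + 7 floats * 4 bytes) = 58 characters
// The UTF-8 size is 75 bytes, so 17 bytes landed in the two-byte range.
// (75 + ~30 overhead) * 30 messages/s = 3,150 B/s ≈ 3.08 KB per second, per player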

 

var encodeUint8 = (function() {
  var arr = new Uint8Array( 1 );
  return function( number ) {
    // If we assume that the number passed in is
    // valid, we can just use it directly:
    // return String.fromCharCode( number );
    arr[0] = number;
    return String.fromCharCode( arr[0] );
  };
}());

 

var encodeFloat32 = (function() {
  var arr  = new Float32Array( 1 );
  var char = new Uint8Array( arr.buffer );
  return function( number ) {
    arr[0] = number;
    // In production code, please pay
    // attention to endianness here.
    return String.fromCharCode( char[0], char[1], char[2], char[3] );
  };
}());

 

var encodeState = function( state ) {
  var msg = '';
  for ( var i = 0; i < state.length; ++i ) {
    var player = state[i];

    // Encode id.
    msg += encodeUint8( player.id );

    // Encode transform.
    msg += ( encodeFloat32( player.transform.position.x ) +
             encodeFloat32( player.transform.position.y ) +
             encodeFloat32( player.transform.position.z ) +
             encodeFloat32( player.transform.rotation.x ) +
             encodeFloat32( player.transform.rotation.y ) +
             encodeFloat32( player.transform.rotation.z ) +
             encodeFloat32( player.transform.rotation.w ) );
  }
  return msg;
};

var impreciseMsg = encodeState( state );

 

 

// Decode
var decodeUint8 = function( str, offset, obj, propName ) {
  obj[ propName ] = str.charCodeAt( offset );

  // Number of bytes (characters) read.
  return 1;
};

var decodeFloat32 = (function() {
  var arr  = new Float32Array( 1 );
  var char = new Uint8Array( arr.buffer );
  return function( str, offset, obj, propName ) {
    // Again, pay attention to endianness
    // here in production code.
    for ( var i = 0; i < 4; ++i ) {
      char[i] = str.charCodeAt( offset + i );
    }

    obj[ propName ] = arr[0];

    // Number of bytes (characters) read.
    return 4;
  };
}());

var decodeState = function( str ) {
  var charsRead = 0;

  var state = [];
  while ( charsRead < str.length ) {
    // GC performance suffers here. Read Martin Wells'
    // article about writing GC-friendly code, here
    // on BuildNewGames.com.
    var player = { transform: {} };
    var position = player.transform.position = {};
    var rotation = player.transform.rotation = {};

    // !!!: The decode order must match the encode order!

    // Decode id.
    charsRead += decodeUint8( str, charsRead, player, 'id' );

    // Decode transform.
    charsRead += decodeFloat32( str, charsRead, position, 'x' );
    charsRead += decodeFloat32( str, charsRead, position, 'y' );
    charsRead += decodeFloat32( str, charsRead, position, 'z' );
    charsRead += decodeFloat32( str, charsRead, rotation, 'x' );
    charsRead += decodeFloat32( str, charsRead, rotation, 'y' );
    charsRead += decodeFloat32( str, charsRead, rotation, 'z' );
    charsRead += decodeFloat32( str, charsRead, rotation, 'w' );

    state.push( player );
  }
  return state;
};

var decodedState = decodeState( impreciseMsg );


 

The Overhead

 

With our last optimization getting us as low as 75 bytes, those 30 bytes of overhead really hurt. So what exactly are they doing for us that is worth holding on to? You will recall that part of this overhead is the event name that the event system uses. We can certainly pick shorter event names, but that reduces the readability of the code, and there are still bits of JSON in there from using emit.

 

The solution is to bypass all of this. socket.io allows you to skip all the nonsense and just use bare-bones messages with the “message” event. We can then recreate the event system on top in a less verbose manner by using something akin to C-style enums. We put this enum in code shared between the server and client, and then manually dispatch events when we receive them.

 

// Shared code
var MsgType = {
  'game start':      String.fromCharCode(0),
  'update state':    String.fromCharCode(1),
  'private message': String.fromCharCode(2)
};

var MsgTypeLookup = [
  'game start',
  'update state',
  'private message'
];

var createNetEmitter = function( EventEmitter2, socket ) {
  var receive = new EventEmitter2({ wildcard: true });
  socket.on( 'message', function( encodedString ) {
    var event = MsgTypeLookup[ encodedString.charCodeAt(0) ];
    receive.emit( event, encodedString.substr(1) );
  });

  var send = new EventEmitter2({ wildcard: true });
  send.on( '*', function( encodedString ) {
    // EventEmitter2 exposes the event name as this.event
    // inside a wildcard listener.
    socket.emit( 'message', MsgType[this.event] + encodedString );
  });

  return { receive: receive, send: send };
};

 

 

// Server code
var EventEmitter2 = require('eventemitter2').EventEmitter2;

io.sockets.on( 'connection', function( socket ) {
  var net = createNetEmitter( EventEmitter2, socket );
  net.send.emit( 'game start', 'world state goes here' );
});

 

 

// Client code
// Include the EventEmitter2 library.
var socket = io.connect();
socket.on( 'connect', function() {
  var net = createNetEmitter( EventEmitter2, socket );

  net.receive.on( 'game start', function( worldState ) {
    var state = decode( worldState );
  });
});


 

There is a lot of flexibility in terms of how this dispatch layer can be architected. Here, we make use of EventEmitter2’s wildcard functionality to mimic the original interface, allowing us to accept strings as events, encode the message type, and then send it down the wire. Although socket.io will still add 4 characters up front, the important thing is that we’ve reduced our overhead from 30 characters down to 5 characters. Applying this to our third example drops our message size from around 200 bytes to 175 bytes and our bandwidth down to 5.13 KBps per player.
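Again, checking the math:

// 170-byte payload + 5 bytes of framing = 175 bytes per message
// 175 bytes * 30 messages/s = 5,250 B/s ≈ 5.13 KB per second, per player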

 

Delta Compression and ZigZag Encoding

 

With names out of a young adult sci-fi novel, delta compression and ZigZag encoding are compression techniques that are very dependent on the data that you run them on.

 

Delta compression is a technique used when unsigned integers are stored in a variable-length encoding. By storing the difference between the current value and the previous value, we obtain smaller values, which use fewer bytes. When I use the terms current and previous, this can mean over time, between messages, or within a single message, across a sequence of values.

 

var sequence1 = [ 0, 1, 2, 3, 4, 5, 6 ];
var deltaSeq1 = [ 0, 1, 1, 1, 1, 1, 1 ];

// Initial unsigned sequence. 12 bytes.
var sequence2 = [ 150, 160, 155, 140, 153, 151 ];
// Signed delta compressed sequence.
var deltaSeq2 = [ 150,  10,  -5, -15,  13,  -2 ];

 

Understanding how this actually saves us bytes requires that we go back to our encoding. If we encoded sequence2 as UTF-8, every value would require two bytes. Remember, all code points from 128 to 2047 require two bytes in UTF-8. However, we run into a snag when we encode the delta compressed sequence.

 

var deltaSeq2 = [ 150, 10,  -5, -15,  13,  -2 ];
// Delta compressed sequence as unsigned (two's complement, 8-bit).
var unsigned2 = [ 150, 10, 251, 241,  13, 254 ];

 

Representation of negative numbers using two's complement interferes with our goal of getting small numbers. What we want is for numbers with a small magnitude to be mapped to small unsigned values, and therefore fewer bytes. This is where ZigZag encoding comes in. The Google Protocol Buffers documentation has a great description (and code), but the general idea involves "zig-zagging" back and forth between positive and negative numbers. Our final result then has only an initial value that uses two bytes, with the rest of the sequence using a single byte per value.

 

var sequence2 = [ 150, 10,  -5, -15,  13,  -2 ];
// Final unsigned sequence, delta compressed
// and ZigZag'd. 7 bytes.
var zigzagdlt = [ 150, 20,   9,  29,  26,   3 ];
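Here is a minimal sketch of the 32-bit ZigZag transform, following the formula given in the Protocol Buffers documentation:

var zigzagEncode = function( n ) {
  // Maps 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ...
  return ( n << 1 ) ^ ( n >> 31 );
};

var zigzagDecode = function( n ) {
  return ( n >>> 1 ) ^ -( n & 1 );
};

zigzagEncode( -15 ); // 29
zigzagDecode( 29 );  // -15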

 

base128 Encoding

 

Another way to maximize the number of one-byte characters is to guarantee it outright. By taking an ArrayBuffer and serializing it to a string 7 bits at a time, we ensure that all code points will be less than 128 and that every character in the UTF-8 stream will be one byte long.
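A minimal sketch of such an encoder (the function name and bit order here are my own choices; a matching decoder simply reverses the steps):

var base128Encode = function( buffer ) {
  var bytes = new Uint8Array( buffer );
  var out   = '';
  var acc   = 0;  // bit accumulator
  var bits  = 0;  // bits currently held in the accumulator
  for ( var i = 0; i < bytes.length; ++i ) {
    acc |= bytes[i] << bits;
    bits += 8;
    // Emit 7 bits at a time; every code point stays below 128.
    while ( bits >= 7 ) {
      out += String.fromCharCode( acc & 0x7F );
      acc >>>= 7;
      bits -= 7;
    }
  }
  if ( bits > 0 ) {
    out += String.fromCharCode( acc & 0x7F );
  }
  return out;
};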

 

This is an interesting technique, because we have essentially changed our data encoding to a fixed-length encoding. Techniques like delta compression no longer have any effect if we use base128 encoding. In terms of compression performance, every byte can only store 7 bits of data, so the encoded data will be 8 / 7 ≈ 14.3% larger than the binary representation of the data. This is better than the expected 50% inflation when UTF-8 encoding uniformly random 8-bit integers, but your mileage may vary depending on your data. However, before you jump to implement a base128 encoder, you may want to consider using binary transport.

 

Binary Transport

 

Ah, now we’re really getting down to brass tacks. Here, we skip all the inconvenient little transformations and ship entire typed arrays down on the wire. Unfortunately socket.io does not support sending binary data because the other fallback transport methods have issues with binary data, which may be a problem for some applications. But fear not, because the WebSocket library that socket.io uses, ws, does in fact support sending binary data, as do other WebSocket implementations like faye-websocket and WebSocket-Node.
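As a rough sketch of what this looks like with ws (the exact API surface may differ between versions, so treat this as an outline rather than gospel):

var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });

wss.on( 'connection', function( socket ) {
  var state = new Float64Array([ 15.290663048624992, 2.0000000004989023 ]);
  var bytes = new Uint8Array( state.buffer );

  // Ship the raw bytes as a binary frame.
  socket.send( new Buffer( [].slice.call( bytes, 0 ) ), { binary: true } );
});

// On the browser side, ask for ArrayBuffers instead of Blobs:
// socket.binaryType = 'arraybuffer';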

 

There is a potential pitfall with using binary data. To initialize a view onto an ArrayBuffer, the view's byte offset must be aligned to its element size. That means you can only place a 64-bit floating-point number at intervals of eight bytes and 32-bit values at intervals of four bytes. We need padding to ensure byte alignment for the typed arrays, or a means of packing values cleanly. There is also the need to have the entire typed array allocated, although this will generally be relatively small. Variable-length data, like arrays, may present issues, however. Nonetheless, compared with base128 encoding, sending the binary directly saves us from encoding the data with bitwise operations and needs only 7 / 8 of the space, potentially cutting ≈12.5% of the bandwidth.
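A small sketch of the alignment constraint (assuming an environment where DataView is available):

var buf = new ArrayBuffer( 12 );

new Float32Array( buf, 4, 1 );   // fine: offset 4 is a multiple of 4
// new Float64Array( buf, 4 );   // throws: offset must be a multiple of 8

// A DataView reads and writes at arbitrary byte offsets, at some cost.
var view = new DataView( buf );
view.setFloat32( 5, 1.5 );       // unaligned, but allowed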

 

WebSocket Deflate Extension

 

The last technique I will cover is the use of an actual compression algorithm. Older drafts of the WebSocket specification described a deflate-stream extension that applied compression to the stream. However, there were many potential issues with this extension. Len Holgate sums up the pitfalls regarding deflate-stream, which was pulled in the 11th draft of the specification (§9.2.1). There is an alternative, however: the per-frame DEFLATE extension.

 

The good news is that the deflate-frame extension landed in WebKit back in February. The bad news is that there is not yet a WebSocket implementation in node.js that implements the extension, although that will hopefully change really soon.

 

The caveat with a compression extension is that the browser has to also support the extension. As such, the target browser of your application will determine the efficacy of this technique.

 

Final Thoughts

 

As I stated at the opening of this section, these techniques have varying degrees of efficacy for different data sets. This is compounded by the fact that combining these techniques, and the order in which you apply them, will produce differing results. I've been hesitant to include a graph of the performance, as it may trick the unwary into thinking there is an ultimate punchline. The right combination depends heavily not only on your data, but also on your target platforms/browsers and other constraints.

 

Conclusion

 

If you’ve managed to make it this far, chances are that this topic is of great interest (or concern) to you. I would recommend closely examining the technologies you’re using. If you don’t need the robustness of socket.io, consider using a pure WebSocket implementation. Finally, if you need more ideas for compression, consider looking into other areas for inspiration. For example, the demoscene uses many techniques to heavily compress vertex data for meshes. It is also possible to compress your data yourself without relying on deflate-frame, although runtime performance needs to be considered here as well. And as with all optimizations, don’t forget to test!

 


About the Author

Eric Li does Javascripts at Gradient Studios. Otherwise, he cooks steak, believes in Jesus, throws Frisbees, ♥’s C++, and reads.  Follow him on Twitter.

 

