Jackson Data Binding Message Serialization

So now that I have a custom game server connected to a message-based infrastructure, I need a way to send messages back and forth to other parts of the system.  Messages going to and from the custom game server are transmitted as JSON, since the format is easy to work with, reasonably straightforward, and supported in all the languages we’re immediately concerned with.

For my Java-based server I am using Jackson and Jackson-databind to magically turn JSON into POJOs and back.

Message Formatting

Let’s consider a more interesting version of our Message: along with other basic message information (here represented by the not-very-interesting property “text”), a Message contains a body, of which there are many possible implementations.

public class Message {

    public String text;

    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "bodyType")
    @JsonSubTypes(value = {
            @JsonSubTypes.Type(value = BeginBattleRequest.class),
            @JsonSubTypes.Type(value = BeginBattleResponse.class),
            @JsonSubTypes.Type(value = GetNameRequest.class),
            @JsonSubTypes.Type(value = GetNameResponse.class)
    })
    public Object body;

    public Message() {
    }

    public Message(String text) {
        this.text = text;
    }
}

Jackson data binding uses the runtime type of the body to figure out what value to put in a property we’ve decided to call “bodyType”.  Given this information, Jackson can reconstruct the objects from the JSON string this serializes to:

{
  "text": "fight!",
  "body": {
    "bodyType": "BeginBattleRequest",
    "opponentName": "thatGuyOverThere"
  }
}

This works really well, especially if the list of types you’re going to serialize into “body” is short, manageable, and known up front at compile time.

Discovering Types

But what if your types aren’t known up front, or you have hundreds of these body classes (and are adding more all the time), as was the case in the real-world implementation?

Instead, we can define the ObjectMapper subtypes programmatically, using reflection to identify our body candidate classes.  I prefer to use an annotation to denote the classes of interest, but you could have each of the body classes implement an interface or extend from some body base class – whatever is most suitable for your situation.

@MessageBody
public class BeginBattleResponse {

    public boolean successful;
    public Result result;

    public enum Result {
        OK,
        INVALID_REQUEST
        // etc, etc, etc...
    }
}
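The @MessageBody annotation itself can be as simple as a runtime-retained marker so the reflection scan can find it – a minimal sketch, and the annotation in the linked repo may differ:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation: identifies classes that may be used as a Message body.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface MessageBody {
}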

Then registering these @MessageBody classes with the ObjectMapper is simple:

objectMapper = new ObjectMapper(new JsonFactory());
objectMapper.disableDefaultTyping();
objectMapper.disable(SerializationFeature.WRITE_NULL_MAP_VALUES);
objectMapper.disable(SerializationFeature.WRITE_EMPTY_JSON_ARRAYS);

// register our @MessageBody classes with the objectMapper:
for (Class<?> clazz : reflections.getTypesAnnotatedWith(MessageBody.class)) {
    objectMapper.registerSubtypes(clazz);
}
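Here is roughly what the whole thing looks like end to end.  The Reflections instance and the package name passed to it are assumptions for illustration, not necessarily what the repo uses:

// scan a (hypothetical) package for classes annotated with @MessageBody
Reflections reflections = new Reflections("com.example.game.messages");

ObjectMapper objectMapper = new ObjectMapper(new JsonFactory());
for (Class<?> clazz : reflections.getTypesAnnotatedWith(MessageBody.class)) {
    objectMapper.registerSubtypes(clazz);
}

// round trip: the "bodyType" property lets Jackson pick the right class on the way back
Message outgoing = new Message("fight!");
BeginBattleRequest request = new BeginBattleRequest();
// populate request fields (e.g. opponentName) as appropriate
outgoing.body = request;

String json = objectMapper.writeValueAsString(outgoing);
Message incoming = objectMapper.readValue(json, Message.class);
// incoming.body is a BeginBattleRequest again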

The Code

The full implementation of this is at https://github.com/trasa/WebSocketClientServer/

Server WebSocket Clients, with Jetty

Previously I discussed how to write a server which uses a persistent WebSocket client, using the Netty framework to make things go.  Netty is configured by establishing a ChannelPipeline, which is great if you need a very flexible system for handling network input and output.  For most projects, however, this sort of power can be overkill.

Jetty

Jetty, among other great things, contains a WebSocketClient implementation.  Getting this up and running is very simple: you’ll need a WebSocketClientFactory, where you can configure various settings, and a WebSocketClient, which also has a bunch of settings to flip around to meet your needs.  Once you have those, you’ll need a class that implements WebSocket.OnTextMessage, and you’re pretty much done.
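Opening the connection looks roughly like this.  This is a sketch against the Jetty 7/8-era WebSocketClient API (method names vary a bit between Jetty versions), and the echo URL is just a placeholder:

WebSocketClientFactory factory = new WebSocketClientFactory();
factory.start();

WebSocketClient client = factory.newWebSocketClient();
client.setMaxTextMessageSize(64 * 1024);

// "handler" is our class implementing WebSocket.OnTextMessage (onOpen / onMessage / onClose)
WebSocket.Connection connection =
        client.open(new URI("ws://echo.example.com/"), handler).get(10, TimeUnit.SECONDS);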

The code for sending and receiving messages becomes equally simple:

@Override
public void send(Message message) {
    // a message is going out!
    String data = null;
    try {
        data = messageToString(message);
        connection.sendMessage(data);
    } catch (Exception e) {
        logger.error("Failed to send message! " + data, e);
    }
}

@Override
public void onMessage(String data) {
    // a message is coming in!
    logger.debug("message received: {}", data);
    try {
        Message message = stringToMessage(data);
        handleMessage(message);
    } catch (IOException e) {
        logger.error("Failed to parse incoming message! " + data, e);
    }
}
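The messageToString and stringToMessage helpers can simply delegate to the Jackson ObjectMapper described in the serialization post – a sketch, and the repo’s actual helpers may differ slightly:

private String messageToString(Message message) throws IOException {
    return objectMapper.writeValueAsString(message);
}

private Message stringToMessage(String data) throws IOException {
    return objectMapper.readValue(data, Message.class);
}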

 

The complete code for this example can be found at https://github.com/trasa/WebSocketClientServer/tree/jettyclient

Server WebSocket Clients

I’ve been working on a system involving connecting custom game servers to a larger, message-based, multi-tenant infrastructure.  The custom game server I was writing requires a persistent connection to the larger system, since messages originate from all directions (from/to the client, from other servers, from the infrastructure, from thin air…).  And while the code I’m working with is all in Java, it’s expected that other game servers will be written in C#, or Python, or Ruby, or whatever they feel like, within reason.

Rather than build a custom protocol on top of TCP to go between the custom game server and the rest of the system, we decided to use Web Sockets for the wire protocol and objects serialized via JSON.  There are plenty of client and server implementations for all of the major languages we’re intending to support – making the job of supporting the other dev teams much easier.  Plus, it’s trivial to create various test-harness Web Socket powered clients and servers – useful for developing your custom game without having to be tied to the larger system’s environment, particularly since that larger system was still actively under development.

As I was writing the game server, I found many examples of writing a Java server which would listen for incoming clients.  I found plenty of examples for writing Web Socket clients in JavaScript, or clients in Java that did everything in “public static void main()”.  But I didn’t find much in the way of writing a Java server that kept a persistent Web Socket client around for the life of the server.  So, attached is an example Web Socket “client” server (a server application) which can send and receive messages using Netty.

https://github.com/trasa/WebSocketClientServer

This particular example connects to a public “echo” server — which is great for testing out such ideas.

Netty

Netty is a client-server framework for building network applications.  Netty’s power comes from being able to construct a high-performance ChannelPipeline which describes the steps that network traffic goes through on its way to and from the application.  For this example, we define a pipeline that is built on HTTP and includes a Serializer/Deserializer for translating objects into JSON (and back), plus the WebSocketClientHandler itself for doing something with those WebSocket messages.
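Wiring that up looks roughly like the following, using the Netty 3-era API.  MessageSerDe and WebSocketClientHandler are placeholder names for the two handlers described here, not necessarily the classes used in the linked repo:

ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        // the WebSocket handshake rides on top of HTTP
        pipeline.addLast("decoder", new HttpResponseDecoder());
        pipeline.addLast("encoder", new HttpRequestEncoder());
        // JSON <-> Message translation (placeholder class name)
        pipeline.addLast("serde", new MessageSerDe());
        // does something useful with the decoded Messages (placeholder class name)
        pipeline.addLast("ws-handler", new WebSocketClientHandler());
        return pipeline;
    }
});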

When sending a Message, we send it through the pipeline we’ve established.  The Serializer/Deserializer (SerDe) translates the message into a byte array, wraps the bytes in a TextWebSocketFrame, and sends the frame on its way.

Coming back the other way, the SerDe recognizes an incoming stream of bytes as a Message and reconstitutes the Java object.  The Message object is sent up the pipeline to the WebSocketClientHandler, which then executes that message, finding the correct handler to call and otherwise “doing stuff” with what was received.
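A rough sketch of what such a SerDe can look like with Netty 3’s SimpleChannelHandler – the class name and details here are assumptions for illustration, and the repo’s actual implementation may differ:

public class MessageSerDe extends SimpleChannelHandler {

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void writeRequested(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        if (e.getMessage() instanceof Message) {
            // outgoing: Message -> JSON text -> TextWebSocketFrame
            String json = mapper.writeValueAsString(e.getMessage());
            Channels.write(ctx, e.getFuture(), new TextWebSocketFrame(json));
        } else {
            ctx.sendDownstream(e);
        }
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        if (e.getMessage() instanceof TextWebSocketFrame) {
            // incoming: TextWebSocketFrame -> JSON text -> Message, passed up the pipeline
            String json = ((TextWebSocketFrame) e.getMessage()).getText();
            Channels.fireMessageReceived(ctx, mapper.readValue(json, Message.class));
        } else {
            ctx.sendUpstream(e);
        }
    }
}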

The advantage of this pipeline approach is that the actual game logic is easy to separate from the underlying communication, making the game code easier to write and much easier to test in isolation.  For example, a message handler might be defined as:

@MessageHandler
public void handleConstructArmyRequest(ConstructArmyRequest request) {
    // do whatever you're supposed to do here
    sendResponse(new ConstructArmyResponse(SUCCESS));
}

Netty met the needs of this particular game server, both performance-wise and through its flexibility.  Since we’re using Web Sockets, such a system would be easy to put together using a different library or a different language entirely.

 

The next step for this system is to look at replacing Netty with Jetty – which provides a servlet container and web server with Web Socket support.  Typically a custom game server will need some HTTP-based parts for monitoring (connecting to Nagios, viewing and graphing statistics) and for allowing some administrative access to the inner workings of the game server.  Jetty doesn’t provide the same sort of pipeline for serializing and deserializing, but the process is pretty much the same.

What’s on your software bucket list?

One of my favourite drinking topics with fellow geeks right now is the software bucket list.  What are the things that you would love to write before the end of your career?  No limits.  Write a list and compare it with others.  This makes a great interview question too.  Forget ‘learning Android’ or JavaScript.  What are the actual types of application that you would love to write?  These are not hobby projects but things that someone will actually use.

It could be a robotics application, or a simulation application.  Implement the cutting stock algorithm – whatever floats your boat.

If a potential hire can’t give me 5 things on their bucket list, then I struggle to see their passion for software.  I have changed my list over and over.  Sometimes hearing someone else’s cool idea means I add it to my list, and sometimes I will remember an old item that makes its way back onto the list.

What’s on my list?  Well, that will cost you a beer.  I will say that I have knocked 3 of my 10 off and I am always trying…

So what’s on your software bucket list?

Where is the craft in CMS?

I have been spending a lot of time working with some CMS systems recently.  I won’t name names, but it could be Drupal, WordPress, DNN, Joomla, etc. – it doesn’t matter.  I have been working with two different systems for two unrelated projects that happened to arrive around the same time.

I admit that I am lost.  I am lost without TDD/BDD/Refactoring/ATDD – all of the things that have been my support system for years.  I know that you need to put in the hours to learn these tools to get the best out of them.  I get that.  I have been.  But after hours of mouse clicks (for the love of god, how about some shortcuts?) and bending my requirements to the tools’ wishes, I have an overwhelming need to let off some steam.

The bad luck for you is that this is my first post on Elegant Code and you get to read it.  Now don’t get me wrong: these products gave me a working site amazingly quickly (and therefore cheaply).  They have a wide ecosystem of components, both free and paid, that I have used to extend the features that my sites needed.  I am in no way bashing these tools – this post is about me (aren’t they all?).

But…

I feel like there is always the ‘one true way’.  The way the tool wants me to do it.  The way the paid module wants me to do it.  Need another way?  That’s gonna be a lot of work.  Feel like re-writing what you just paid for?  The ‘one true way’ makes me feel like I am driving a slot car.  There is nowhere else to go.  My sites will be like millions of others following the ‘one true way’.  How do I delight my customer with that?

I miss the wind in my hair feeling that comes from driving on a real road, or skiing off piste.  Letting the solution emerge from the requirements.

I feel that I must be missing something.  How do I get this feeling back whilst working with these tools?  Is there something stupid that I am not doing?  Am I just a grumpy old so & so that needs to get with the times?

So I guess that’s why I really wrote this post.  A plea for help.

My name is Rob and I am a code-aholic.  I realize this and am seeking treatment.

MVC Portable Areas

Introduction:

An MVC Portable Area is really just a dll that contains the views, controllers, scripts, etc. needed for use in either a Web Forms website or an MVC website.  I will cover the MVC website here.

Why Use One?

I have used portable areas in many projects since they came out. A developer can use them for a reusable widget or a complete engine. I have actually used them for both.

Set Up:

I am going to assume that you have a new project. It is just as easy to add a portable area to an existing solution.

First, let’s create a blank solution.  Click on New Project, then select Other Project Types –> Visual Studio Solutions.  Name it PortableAreaDemo.

(screenshot)

 

Next, right click on the solution in Solution Explorer and Add New Project.  Choose Web and ASP.NET MVC 3 Web Application.  I named mine PortableAreaDemo.Mvc.

 

(screenshot)

 

Choose Internet Application on the next screen:

(screenshot)

 

Next, add another project.  This is also going to be an ASP.NET MVC 3 Web Application.  I called mine PortableAreaDemo.PortableAreas.  Note: you might get an error here, or you might not see your solution node.  If so, follow this link: http://stackoverflow.com/questions/7457935/solution-folder-not-showing-in-visual-studio-2010-how-can-i-make-it-visible

(screenshot)

 

This time, just make it an empty application.

(screenshot)

 

Hopefully you see something like this:

(screenshot)

 

Next, go ahead and delete the Controllers, Views, Models, Scripts, and Content folders under the PortableAreas project.  Also remove the Global.asax inside that project.

(screenshot)

 

Now, right click on the PortableAreas project and add an Area. Let’s call it Demo. This should produce something like this:

(screenshot)

 

Notice that it created a class called DemoAreaRegistration that derives from AreaRegistration.  We now need to add a Library Package Reference.  Right click on References under the PortableAreas project –> Add Library Package Reference.  Then click on Online –> All.  Wait for it to load, then type MvcContrib in the search box.  Install the package.

(screenshot)

 

Now go back into DemoAreaRegistration.cs and change the base class to PortableAreaRegistration.

(screenshot)

 

Move the routes out into a private method called RegisterRoutes.  By default, if you were to call base.RegisterArea here, it would set up the EmbeddedResource route for you.  However, your routes would get set up in the wrong order and you would lose control once your portable area gets more sophisticated.  I would strongly suggest not using the built-in ones here and registering them yourself.  Override the RegisterArea overload that takes the IApplicationBus instead of the one that just takes the context:

public override void RegisterArea(AreaRegistrationContext context, IApplicationBus bus)
{
    RegisterRoutes(context);
    RegisterAreaEmbeddedResources();
}

 

Then add a private method called RegisterRoutes and add the following routes into it:

private void RegisterRoutes(AreaRegistrationContext context)
{
    context.MapRoute(
        AreaName + "_scripts",
        base.AreaRoutePrefix + "/Scripts/{resourceName}",
        new { controller = "EmbeddedResource", action = "Index", resourcePath = "scripts" },
        new[] { "MvcContrib.PortableAreas" }
    );

    context.MapRoute(
        AreaName + "_images",
        base.AreaRoutePrefix + "/images/{resourceName}",
        new { controller = "EmbeddedResource", action = "Index", resourcePath = "images" },
        new[] { "MvcContrib.PortableAreas" }
    );

    context.MapRoute(
        AreaName + "_default",
        base.AreaRoutePrefix + "/{controller}/{action}/{id}",
        new { action = "Index", id = UrlParameter.Optional },
        new[] { "PortableAreaDemo.PortableAreas.Areas.Demo.Controllers", "MvcContrib" }
    );
}

 

When all is done, your registration class should look like this:

using System.Web.Mvc;
using MvcContrib.PortableAreas;

namespace PortableAreaDemo.PortableAreas.Areas.Demo
{
    public class DemoAreaRegistration : PortableAreaRegistration
    {
        public override string AreaName
        {
            get
            {
                return "Demo";
            }
        }

        public override void RegisterArea(AreaRegistrationContext context, IApplicationBus bus)
        {
            RegisterRoutes(context);
            RegisterAreaEmbeddedResources();
        }

        private void RegisterRoutes(AreaRegistrationContext context)
        {
            context.MapRoute(
                AreaName + "_scripts",
                base.AreaRoutePrefix + "/Scripts/{resourceName}",
                new { controller = "EmbeddedResource", action = "Index", resourcePath = "scripts" },
                new[] { "MvcContrib.PortableAreas" }
            );

            context.MapRoute(
                AreaName + "_images",
                base.AreaRoutePrefix + "/images/{resourceName}",
                new { controller = "EmbeddedResource", action = "Index", resourcePath = "images" },
                new[] { "MvcContrib.PortableAreas" }
            );

            context.MapRoute(
                AreaName + "_default",
                base.AreaRoutePrefix + "/{controller}/{action}/{id}",
                new { action = "Index", id = UrlParameter.Optional },
                new[] { "PortableAreaDemo.PortableAreas.Areas.Demo.Controllers", "MvcContrib" }
            );
        }
    }
}

 

So, a little explanation of this.  When your application first starts, there is a little piece of code in Application_Start that will end up calling this portable area registration.  Area routes should always be registered before your normal routes to give the framework a chance to find the area name.  Otherwise, the default route would think the area name was a controller name.  Here is the piece of code that calls the portable area registration:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    ...

 

Ok, so now let’s create a controller in the Demo area.  Let’s call it “World”.  Now let’s add an action to it.  Can you guess the name?  Yep, that’s right: “Hello”.  Create a Razor partial view for it.  Type “Hello World” in the view.  When all is said and done, it should look something like this:

(screenshot)

 

Now, here is a very IMPORTANT step.  Anything that is a view, css, javascript, image, etc. must be an embedded resource.  What does this mean?  It means it will be compiled into the dll instead of being found on the file system.  This is important because a portable area travels with the dll and not with the project itself.  Yes, it will cause you to have to build every time you want to see a change in the page or js.  This is why you must decide up front whether it is worth the development time it will take.  For the projects I have used them on, it has been.  We wanted something reusable that we could move from solution to solution without having to rewrite.

So in order to make it embedded, right click on the Hello.cshtml file and choose Properties. Change the Build Action to Embedded Resource.

 

(screenshot)

 

Next, add the reference for the PortableAreas Project to the main Mvc project.

Now add a folder called Areas under the Mvc project. Open the folder Views under the Mvc project and copy the Web.config up to the Areas folder that you just created.

You should now look like this:

(screenshot)

 

So, why are we adding the folder called Areas and a web.config into it?  Because when the dll gets pulled into the project, your portable area ends up under this folder behind the scenes.  This is where it pretends to live, and where the framework will look for the views and such.

 

Now, in the Index.cshtml view under Home, add the line at the bottom:

@Html.Action("Hello", "World", new { area = "Demo" })

 

Set PortableAreaDemo.Mvc as the Startup Project and press “Start Debugging”.  You should now see:

(screenshot)

Hopefully you got it working.  If not, go back through and check the steps; otherwise, you can comment here or catch me on Twitter at @mike_d_moser and I can try to help you out.  Thanks and happy coding.

Monitoring an MMO

I’ve been working on a free-to-play MMO which has been “officially” live since last April, and things have been going well – a steady growth of players, a game that has been well received, and all the important graphs “up and to the right.”  Part of my job involves detecting problems before they become serious and fixing problems when they inevitably do.  So, there are two questions: “Is there a problem in the game?” and “What is causing the problem?”

When trying to debug something on our development and test clusters, you can typically tail log files.  We have a python script that can monitor the communication between various parts of the game and pretty-print it, with color to highlight “this is a problem!”  Attaching a debugger to a running process is also not uncommon.  However, watching logs and bus traffic in real time on a production environment mostly just gives you a neat “Matrix-y” experience, and attaching a debugger to a production process (assuming you could, which you can’t) would get you smacked with a rolled-up newspaper.  “Bad Developer!  No treat!”  So, what can you do?

Monitoring

When you’ve got clusters full of machines, using Nagios to monitor things is an obvious solution.  Beyond making sure the power is on and other sysadmin things, we’ve written checks to see whether the login process is working and whether the parts are working together, and to automate typical in-game functions.  For example, if Nagios can’t successfully log into the game and do basic game activity, then alerts happen.

Metrics for EVERYTHING

Anything that happens in game has metrics reporting tied to it, generating piles of data constantly.  We use Cacti to visualize game activity.  An example metric is concurrent users, or CCU.  We graph how many people are in the game over time, which when things are healthy should be a nice smooth curve climbing to peak game hours, then descending nicely through the night.

We can tell by sight whether the game looks healthy or not – if the CCU graph is jagged, has a sudden drop or spike, or drops to zero, then we know that something is wrong.  Typically Nagios alerts accompany the graphs, giving more data points on where to look.  But this has also pointed out areas where a Nagios check was missing or wasn’t working as intended.

Log Files

When a player gets an error in game, the error dialog box gives them the opportunity to submit the error details back to us.  If we see a spike in user-reported errors through this or other customer service channels, we know we have something of interest to look for.

The game server components make use of log4j and similar logging frameworks.  Anything that you’d want to watch happening in game needs to be aggressively logged.  All components are configured so that operations can change the log level on the fly.  That’s still quite a bit of data across many machines though, so all that information is run through Splunk to be indexed and searchable.  This gives us a great tool for searching through log data, examining trends, or watching selected activity in real time.  Unfortunately it is very expensive so we are selective about the data that passes through it. 

Jetty Startup Problems Due to Entropy

On the MMO I’m working on, we do quite a bit of service monitoring via Jetty: things like a URL that reports back version information for a component, a page full of statistics showing recent activity, or a way to trigger a self-check in a component to determine whether the component is sane and healthy.

For certain components we run multiple instances of the service on the same box.  In development environments it’s a pretty small number of instances; on a production environment it’s a much bigger number.  Typical stuff.  However, Operations noticed that startup times for a server full of instances jumped from seconds to over 20 minutes when the number of instances was increased from 3 to 12.  Ouch.

So, something is blocking on startup.  After some digging, I came across this post explaining how Jetty uses a secure random number generator for session ids, which draws from the pool of entropy gathered by the system.  Sure enough…

$ cat /proc/sys/kernel/random/entropy_avail
128

(On development cluster machines that value is upwards of 3000)

Since we don’t care whether the Jetty session IDs are securely generated or not, switching the generator from secure to non-secure took us back to a few seconds to start up 20 instances.
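One way to make that switch in embedded Jetty – a sketch assuming the Jetty 7/8 HashSessionIdManager API – is to hand the server a session id manager backed by a plain java.util.Random.  (Pointing the JVM at /dev/urandom with -Djava.security.egd=file:/dev/./urandom is another common way to avoid the blocking.)

import java.util.Random;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.session.HashSessionIdManager;

public class FastStartupServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        // Non-secure Random: generating session ids no longer waits on the kernel entropy pool.
        server.setSessionIdManager(new HashSessionIdManager(new Random()));
        server.start();
        server.join();
    }
}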