
Firebase CE – What’s next?

So, we’ve released Firebase Community Edition. Now there’s an open-source Java game server available for anyone to use. Now what?

Well, here’s what! There are a couple of things we’d like to get out of the door immediately, and some of them will start appearing soon indeed:

  • Script support: Java 6 has built-in support for JavaScript and can be extended to support a rather large list of script engines. We’d like to add support for writing your game in any of these scripting languages. We’d love to see the first Ruby+Flash game out there!
  • IDE support: We already have prototype code for some Eclipse plugins lying about, and these should be polished up and released. Of course, you can always use our Maven archetypes and plugins and then import into Eclipse, but direct support would be nice too. And if anyone wants to use NetBeans, we’d have to figure out something for you too, eh?
  • Documentation: It’s sparse at the moment and needs to be fleshed out. We hope you’ll help us here by simply telling us what is missing and asking us about all those things we’ve forgotten to write down, or indeed haven’t explained properly.
  • IoC support: This is a biggy. Again, we have prototype code for Guice lying about which needs to be fixed and published, but obviously we need to add direct support for Spring as well. Actually, you can write your components with Guice now, but you’d have to wire them together yourself. Spring needs to be tested, so let us get back to you on that, ok?
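
To make the last bullet concrete, here is a minimal sketch of what "wiring it together yourself" means: plain constructor injection, which is exactly the work an IoC container like Guice or Spring would automate. All the names below are illustrative only, not part of the Firebase API.

```java
// Hand-wired dependency injection: an IoC container would build this
// object graph for you. Wallet/PokerTable are hypothetical examples,
// not Firebase classes.
interface Wallet {
    int balance(String player);
}

class InMemoryWallet implements Wallet {
    // Stub implementation; a real one would hit a database or service.
    public int balance(String player) { return 100; }
}

class PokerTable {
    private final Wallet wallet;

    // Constructor injection: the dependency is passed in, not looked up.
    PokerTable(Wallet wallet) { this.wallet = wallet; }

    int buyIn(String player) { return wallet.balance(player); }
}

public class ManualWiring {
    public static void main(String[] args) {
        // This line is the "wiring"; with Guice it would become
        // injector.getInstance(PokerTable.class) plus a module binding
        // Wallet to InMemoryWallet.
        PokerTable table = new PokerTable(new InMemoryWallet());
        System.out.println(table.buyIn("bob")); // prints 100
    }
}
```

The upside of letting a container do this is that the bindings live in one place and the components stay free of lookup code; until then, a hand-written composition root like the one above works fine.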

And that’s what we want to do in the immediate future. Watch this space; this is going to be fun!

By |Tuesday, January 26, 2010|firebase|1 Comment

Performance Matters

Scale up or scale out?

I often advocate scale-out architectures. When it comes to choice of technology, I side with the guys at YouTube (http://highscalability.com/youtube-architecture) and say that you should choose the technology that allows you to be as productive and creative as possible. There is a reason we do not build web-based systems in assembler or pure C.

Granted, using a low-level language may be faster, but there is usually a high cost to pay in the development phase. Besides, most time is spent on remote calls anyway, right?

Even though I consider all of the above to be true, I want to balance the discussion by talking a bit about the impact of performance, or rather the lack of it, in medium to large systems. By medium to large I mean roughly 10 to hundreds of servers.

For the sake of argument and simplicity, I will define performance as the ability to deliver the same amount of meaningful work to the end user with fewer physical servers. This may not be the most stringent or correct definition, but it will do for this post.

Complexity

Consider hosting, monitoring and maintaining a system that consists of 4 servers (e.g. 2 frontend/business servers and 2 database servers). Now consider the same system scaled out to 100 servers (e.g. 80 frontend/business servers and 20 database servers). What is the difference in running the small-scale system compared to the large one? More specifically:

What does this mean for:

  • Deployment? 2 servers are easy to do manually, but 80?
  • Rollbacks of releases?
  • Monitoring? 2 processes may fit nicely on a screen, but 80?
  • Hardware failure?
  • Redundancy?
  • Network routing?

How does keeping a pulse on 4 servers compare to keeping a pulse on 100?

Obviously it will be more complex to care for a larger system, but my argument is that the complexity grows quickly, and in more than one dimension. Increased complexity will also affect many of the daily routines and project cycles in the company. Costs for maintenance will certainly go up, but most likely project throughput will also decline. Releases and infrastructural changes must suddenly be coordinated and carefully planned. New functionality and added services must take a more intricate integration into account. More constraints, such as network bandwidth and the increased number of RPCs, will start to play a part. What is the total cost for the company?

Hardware

What about machine failure? According to this blog post, http://www.linesave.co.uk/google_search_engine.html, Google has about 60,000 servers and predicts that 60 machines will fail every day. This means a single server has a predicted failure probability of 0.001 per day (60 / 60,000). Below is a chart of the chance of machine failure within a month.

[Chart: server failure prediction]

As you add servers to the system, the chance that at least one server goes down increases. This puts additional load on the operations personnel.

Real life example

Our primary product at Cubeia is Firebase, a game server tailored for casual games. If we look at one of our competitors (whose name I will not mention here), we can compare our deployment requirements for a poker network targeted at 25,000 concurrent users. Running on Firebase we could almost host this on a single server (v1.7, octo-core, 4 GB RAM, cost approx. €2,000), but let’s scale out to four servers for redundancy (i.e. a single server failure will not bring down the system).

Our competitor states a need for:

  • 13 Lobby servers
  • 50 Poker game servers

All in all, that is 63 servers for running the same functionality (assumedly, since we cannot compare every detailed aspect).

What are the costs of running a system with 4 servers versus a system with 63 servers?

Predictions for monthly machine failure:

  • 4 servers: 11.31%
  • 63 servers: 84.91%
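
These figures follow directly from the 0.001 daily estimate: assuming independent failures, the chance that at least one of n servers fails within 30 days is 1 − (1 − 0.001)^(30·n). A quick sketch of the calculation:

```java
public class FailureOdds {
    // Daily per-server failure probability from the Google estimate above:
    // 60 failures / 60,000 servers = 0.001
    static final double DAILY_FAILURE = 0.001;

    // Probability that at least one of n servers fails within `days` days,
    // assuming failures are independent.
    static double atLeastOneFailure(int n, int days) {
        return 1.0 - Math.pow(1.0 - DAILY_FAILURE, (double) n * days);
    }

    public static void main(String[] args) {
        System.out.printf("4 servers:  %.2f%%%n", 100 * atLeastOneFailure(4, 30));  // ~11.31%
        System.out.printf("63 servers: %.2f%%%n", 100 * atLeastOneFailure(63, 30)); // ~84.91%
    }
}
```

In other words, with 63 servers you should simply plan for a machine dying most months; with 4 servers it is a fairly rare event.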

As a side note, according to this Gartner press release, http://www.gartner.com/it/page.jsp?id=1015715, a single x86 server costs about $400 per year in power alone. Just the power saved by running 4 servers instead of 63 (59 fewer servers × $400) would be about $23,600 per year.

Some Last Words

I am not advocating that you spend an insane number of man-years polishing every function call and algorithm to achieve performance in its most glorious perfection. If you are a startup or a small company, then agility and release speed are probably the most important things to you right now. But as with everything in life, there is another side to consider, and if your system is growing, that side will become increasingly important.

So, my point is: buy that server with the extra cores, go for the SSDs in your database, remove unnecessarily CPU-intensive algorithms, work out the contention and bottlenecks in your implementation! And be proud of it!

Keeping complexity and deployment sizes down will be important as you grow.

Fredrik Johansson is a founder and CEO of Cubeia Ltd.

You can contact him at: fredrik.johansson(at)cubeia.com

By |Thursday, January 21, 2010|misc|Comments Off on Performance Matters