java

Auto-generated IDs with Morphia and MongoDB

Following Fredrik's post on unit testing Morphia and MongoDB, here's a short how-to (with sources for the lazy) for using automatically generated integer IDs with Morphia.
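
For context, the usual trick (and I won't promise it's exactly what the how-to does) is a separate counter collection that is incremented atomically; here's a minimal sketch using the plain MongoDB Java driver, where the "counters" collection and "seq" field names are my own:

public long nextId(DB db, String collectionName) {
    DBCollection counters = db.getCollection("counters");
    DBObject query = new BasicDBObject("_id", collectionName);
    DBObject inc = new BasicDBObject("$inc", new BasicDBObject("seq", 1L));
    // returnNew=true, upsert=true: creates the counter on first use and
    // makes the increment atomic across processes
    DBObject result = counters.findAndModify(query, null, null, false, inc, true, true);
    return ((Number) result.get("seq")).longValue();
}

The returned value then goes into the entity's @Id field before saving with Morphia.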

By |Saturday, September 22, 2012|java, misc|1 Comment

How To Use Cubeia Styx

The other day we released Styx to the public. It has served us well over the years as the main protocol generator for Firebase, so we thought we should be a bit more transparent with it. Speaking of "it", here's what "it" is:

  • A protocol format specified in an XML file.
  • Binary and JSON packaging for the above.
  • Automatic API generation in Java, C++, Flash, HTML5, etc…

This might still be a bit abstract, so in this post we’ll instead show you how to use it for yourself.
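
To make the first bullet a little less abstract, a protocol definition might look something along these lines (purely illustrative; the exact Styx schema may differ):

<protocol>
  <packet name="LoginRequest" id="1">
    <field name="user" type="string"/>
    <field name="password" type="string"/>
  </packet>
  <packet name="LoginResponse" id="2">
    <field name="status" type="int8"/>
  </packet>
</protocol>

From a file like this, Styx generates the classes and the binary/JSON wire formats for each target language.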

By |Thursday, April 19, 2012|java|Comments Off on How To Use Cubeia Styx

How-To Use Cubeia Firebase RNG Service

So, the other day we released an RNG service to the wild. It is a Mersenne Twister implementation with background cycling and discarded draws. It's fast, secure and possible to drop right into your existing Firebase projects. Here's how:

First, you need to add the service JAR to your project's dependencies in order to use it. Here's how it looks in Maven:

<dependency>
  <groupId>com.cubeia.firebase.service</groupId>
  <artifactId>mt-random</artifactId>
  <version>1.0</version>
  <type>jar</type>
  <scope>provided</scope>
</dependency>

Note that we add it as "provided" since the classes will be available from the service at deployment time, so we don't need to bundle them with our other dependencies. Generally it's a good pattern to avoid duplicating classes that are included in services, as doing so opens up a can of class-loading problems.

Having done this, what you need in order to get hold of an actual Random object is the Firebase service registry. This is available in, for example, the game context, which you get in the "init" method. Given the game context, we can write a method like this:

private Random getRandom() {
  Class<RandomService> key = RandomService.class;
  ServiceRegistry registry = gameContext.getServices(); // needs context
  RandomService randomService = registry.getServiceInstance(key); // get service
  return randomService.getSystemDefaultRandom(); // get random
}

Now you're ready to roll! You could assign the Random to an instance field, but make sure you don't do it in the game state, as the Random is not serializable at the moment, and storing it there will result in an ugly exception.
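
If you'd rather not look the service up on every action, a transient field that is lazily re-fetched is safe; here's a minimal sketch building on the getRandom() method above:

private transient Random random; // transient: never part of the serialized game state

private Random random() {
    if (random == null) {
        // transient fields come back as null after failover, so re-fetch
        random = getRandom();
    }
    return random;
}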

Here's one more trick for you: if you want to use "mvn firebase:run" in your project (have a look here for more details on that), you need to add another dependency. The Firebase Maven plugin needs to know about the actual service archive in order to get your dependencies right, so also add the following to your dependencies:

<dependency>
  <groupId>com.cubeia.firebase.service</groupId>
  <artifactId>mt-random</artifactId>
  <version>1.0</version>
  <type>firebase-sar</type>
  <scope>provided</scope>
</dependency>

By default, all Firebase artifacts such as the one above that are in the "provided" scope will be included in the deploy folder when you run Firebase via Maven. If, on the other hand, you're running Firebase from a standard installation, just copy the "mt-random.sar" archive into "game/deploy" and you're done!

By |Thursday, February 23, 2012|firebase, java|Comments Off on How-To Use Cubeia Firebase RNG Service

How I Learned to Stop Worrying about Serialization and Love the Wrapper

Background
One of the great benefits of Firebase is the support for transparent failover. All you as a game developer have to do is make sure that your game state is serializable.

The Problem
One common pattern that follows from this approach is that non-serializable classes (such as the Notifier or the Scheduler) tend to get passed around a lot, because they can't be stored as instance variables. If you want to do Domain-Driven Design, this is a bad pattern.

Half of the Solution
Marking those fields as "transient" is always an option, but it only solves half of the problem: on deserialization those values will be null, and there's no way to recreate them in readObject().

The Solution
Instead of referring to the Scheduler and Notifier directly, create a wrapper class which holds them as transient members, and pass that into all your collaborating classes. The parent class holds a reference to the same wrapper and, on each call from Firebase, injects the live versions of the Notifier and Scheduler. Since all collaborating classes refer to the same wrapper instance, we're all set!

An Example
I find that a concrete example always makes things clearer. Here’s some code for a poker game. First, here’s the game state class.

public class Poker implements Serializable {
    private Wrapper wrapper = new Wrapper();

    public void addPlayer() {
        Player player = new Player(wrapper);
        ...
    }

    public void handle(GameDataAction action, Table table) {
        injectCallbacks(table.getNotifier(), table.getScheduler());
        ...
    }

    private void injectCallbacks(GameNotifier notifier, Scheduler scheduler) { 
        wrapper.setNotifier(notifier);
        wrapper.setScheduler(scheduler);
    }
}

And here’s the wrapper.

public class Wrapper implements Serializable {
    private transient GameNotifier notifier;
    private transient Scheduler scheduler;
    
    ... getters and setters ...
}

Which means that from any place in the code we can do something like player.updateBalance(newBalance), which would look like:

public class Player {

  private final Wrapper wrapper; // the wrapper itself is serialized; only its members are transient
  private int balance;

  public Player(Wrapper wrapper) {
    this.wrapper = wrapper;
  }
  
  public void updateBalance(int newBalance) {
    balance = newBalance;
    notifier().notifyAllPlayers(createGameDataAction(newBalance));
  }

  private GameNotifier notifier() {
    return wrapper.getNotifier();
  }
}

So, time for you to fire up your refactoring tool and put some candy in that wrapper!

By |Saturday, June 18, 2011|cubeia, firebase, java|2 Comments

Unit Tests with Guice and Warp Persist

Yesterday I needed to do some unit testing for a project using Guice and Warp Persist. One thing that seems to be lacking, or at least wasn't to be found by me yesterday, is the kind of unit testing you can do with Spring, where each test method is wrapped in a transaction which is rolled back when the method ends.

To simplify, it enables you to do this:

@Test
public void createUser() {
    User u = userService.createUserWithId(1);
}

@Test
public void createUserWithSameId() {
    User u = userService.createUserWithId(1);
}

If you assume the "userService" is transactional, that the database state spans multiple test methods, and that the user ID must be unique, the above pseudo-code should fail, as we're trying to create two users with the same ID. If, however, each method was wrapped in a transaction that was rolled back at the end, we'd be fine.

So, can you do it with Guice and Warp Persist? Sure you can!

(I’m using TestNG and JPA by the way, but you’ll get the point).

We’ll create a base class that sets up the Guice context as well as handles the transactions for us. Let’s start with creating the methods we need:

public abstract class JpaTestBase {

  @BeforeClass
  public void setUpJpa() throws Exception {
    // set up Guice + JPA here
  }

  @AfterClass
  public void tearDownJpa() throws Exception {
    // stop jpa here
  }

  /**
   * Return the modules needed for the test.
   */
  protected abstract List<? extends Module> testModules();

  @BeforeMethod
  public void setupTransaction() throws Exception {
    // create "session in view"
  }

  @AfterMethod
  public void cleanTransaction() throws Exception {
    // rollback transaction and close session
  }
}

We're using the TestNG before- and after-class annotations to set up and tear down JPA. If the test was part of a bigger suite you could probably do it around all test classes instead, but this will do for now. I'm also including a "testModules" method that subclasses should use to make sure their own bindings are set up correctly.

Before we go on, I’ll add some dependencies which will be explained later:

@Inject
protected PersistenceService service;

@Inject
protected WorkManager workManager;

@Inject
protected Provider<EntityManagerFactory> emfProvider;

We’ll setup the Guice context and the JPA persistence service in the “setUpJpa” method:

@BeforeClass
public void setUpJpa() throws Exception {
  // create list with subclass modules
  List<Module> list = new LinkedList<Module>(testModules());

  // add persistence module
  list.add(PersistenceService
            .usingJpa()
            .across(UnitOfWork.REQUEST)
            .forAll(Matchers.annotatedWith(Transactional.class), Matchers.any())
            .buildModule());

  // modules to array and create Guice
  Module[] arr = list.toArray(new Module[list.size()]);
  injector = Guice.createInjector(arr);

  // make sure we get our dependencies
  injector.injectMembers(this);

  // NB: we need to start the service (injected)
  service.start();
}

The persistence service will work across UnitOfWork.REQUEST, as that's what we're emulating here. I'm also matching the transaction for any class; this enables me to mark an entire class as transactional, as opposed to single methods, which I find handy.
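
As an illustration, such a class-level transactional service could look like the sketch below; the UserService and User classes are my own inventions here, and I'm assuming the warp-persist module binds the EntityManager for the current unit of work:

@Transactional // class level: every public method runs in a transaction
public class UserService {

    @Inject
    private Provider<EntityManager> entityManager;

    public User createUserWithId(long id) {
        User user = new User(id);
        entityManager.get().persist(user);
        return user;
    }
}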

All we need to do to shut down is to close the service, like so:

@AfterClass
public void tearDownJpa() throws Exception {
  service.shutdown();
}

Now, let's wrap the methods. Remember our dependency injections earlier? Well, here's where they come in handy. The WorkManager is used by Warp Persist to manage a session: we can tell it to begin and end "work", which corresponds to an EntityManager being bound to the current thread. Let's start with that:

@BeforeMethod
public void setupTransaction() throws Exception {
  workManager.beginWork();
}

@AfterMethod
public void cleanTransaction() throws Exception {
  workManager.endWork();
}

So before each method we open a new EntityManager, which is closed when the method ends. So far so good. Now, in order to enable a rollback when the method ends, we first need to wrap the entire execution in a transaction, which means we need to get hold of the EntityManager. Luckily, Warp Persist has a class called ManagedContext which holds the currently bound objects (a bit like a custom scope), in which we'll find our EntityManager keyed to its persistence context, i.e. the EntityManagerFactory that created it. Take another look at the dependencies we injected above: as the test only handles one persistence context, we can safely let Guice inject a Provider for an EntityManagerFactory and assume it is going to be the right one.

Still with me? OK, let’s do it then:

@BeforeMethod
public void setupTransaction() throws Exception {
  // begin work
  workManager.beginWork();
  // get the entity manager, using its factory
  EntityManagerFactory emf = emfProvider.get();
  EntityManager man = ManagedContext.getBind(EntityManager.class, emf);
  // begin transaction
  man.getTransaction().begin();
}

@AfterMethod
public void cleanTransaction() throws Exception {
  // get the entity manager, using its factory
  EntityManagerFactory emf = emfProvider.get();
  EntityManager man = ManagedContext.getBind(EntityManager.class, emf);
  // rollback transaction
  man.getTransaction().rollback();
  // end work
  workManager.endWork();
}

And that’s it! Each test method is now within a transaction that will be rolled back when the method ends. Now all you need to do is to write the actual tests…
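
A concrete test class could then look something like this (a sketch, reusing the hypothetical UserService from the beginning of the post):

public class UserServiceTest extends JpaTestBase {

    @Inject
    private UserService userService; // injected via injectMembers() in the base class

    @Override
    protected List<? extends Module> testModules() {
        return Collections.<Module>singletonList(new AbstractModule() {
            @Override
            protected void configure() {
                bind(UserService.class); // plus whatever else the tests need
            }
        });
    }

    @Test
    public void createUser() {
        userService.createUserWithId(1); // rolled back when the method ends
    }
}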

By |Sunday, April 25, 2010|java|4 Comments

A Google Analytics Plugin for Nexus

Since Firebase Community Edition uses Maven heavily, I realized I'd like to track our Nexus repository in Google Analytics. The Nexus Book says that such a plugin already exists, but apparently no one knows where it is. So here's my attempt to write a simple one.

If you’re an impatient kind, here’s the source code.

I created the project in Eclipse; instructions here.

Now, let's start off with a "repository customizer", which is the Nexus extension point we'll use to insert a custom request processor into every repository…

public class GaTrackerRepositoryCustomizer implements RepositoryCustomizer {

    private final Logger log = LoggerFactory.getLogger(getClass());

    @Inject
    @Named("gaTracker")
    private RequestProcessor gaTracker;

    @Override
    public void configureRepository(Repository rep) throws ConfigurationException {
        log.debug("Attaching tracker to: " + rep.getName());
        rep.getRequestProcessors().put("gaTracker", gaTracker);
    }

    @Override
    public boolean isHandledRepository(Repository rep) {
        boolean b = true;
        log.info("Handles repository '" + rep.getName() + "': " + b);
        return b;
    }
}

Not too complicated. We're using injection to get hold of the actual tracker component, and we're inserting it into all repositories.

Wait, all repositories? Yes, and it's a problem I haven't really figured out yet. Ideally I'd like to track "hosted" repositories only. However, we've configured all our builds to use a few common development "groups" for convenience, and Nexus seems to treat groups as first-class members: even though an artifact may be deployed in a hosted repository and accessed through a group, the request processor will only get notified for the group, not the hosted repository. I tend to see "groups" as equivalent to "views", in which case I'd expect the hosted repository to be called as well, but, no…
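
For reference, limiting the customizer to hosted repositories should roughly be a matter of checking the repository kind, something like the sketch below (written from memory against the Nexus plugin API, so verify before use):

@Override
public boolean isHandledRepository(Repository rep) {
    // only attach the tracker to hosted repositories; skip groups and proxies
    boolean b = rep.getRepositoryKind().isFacetAvailable(HostedRepository.class);
    log.info("Handles repository '" + rep.getName() + "': " + b);
    return b;
}

But as noted, with our group-based setup that would mean losing most of the tracking, so for now we attach to everything.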

Now, let’s create a request processor which will react when someone “reads” a path in a repository.  We’ll take it in pieces…

@Named("gaTracker")
public class GaTrackerRequestProcessor implements RequestProcessor {

    public static final String GA_TRACKER_ID =
        System.getProperty("cubeia.nexus.gaTrackerId");
    public static final String GA_REFERER_URL =
        System.getProperty("cubeia.nexus.gaRefererUrl");

    private final Logger log = LoggerFactory.getLogger(getClass());
    private final JGoogleAnalyticsTracker tracker;

    public GaTrackerRequestProcessor() {
        if(GA_TRACKER_ID == null) {
            String msg = "Missing system property 'cubeia.nexus.gaTrackerId'";
            throw new IllegalStateException(msg);
        }
         log.info("Creating new tracker, with id: " + GA_TRACKER_ID);
        tracker = new JGoogleAnalyticsTracker("nexus", "read", GA_TRACKER_ID);
        checkRefererUrl();
        adaptLogging();
    }

    [...]

We configure the plugin via system properties (not too beautiful, I know). The "tracker id" is the Google tracking code ID and is mandatory; the "referer URL" will be set on the call to GA if available. We're using the JGoogleAnalytics library to call GA for us. Also, I'm being naughty and throwing an IllegalStateException if the tracker ID is missing: since GA only updates every 24 hours, we'd like to be notified of errors early.

There are two methods referenced above: one adapts the logging in the tracker code to use the slf4j logger instead, and the other checks for and sets the referer URL:

private void adaptLogging() {
    /*
     * Adapt the logging to use slf4j instead.
     */
    tracker.setLoggingAdapter(new LoggingAdapter() {

        public void logMessage(String msg) {
            log.debug(msg);
        }

        public void logError(String msg) {
            log.error(msg);
        }
    });
}

private void checkRefererUrl() {
    if(GA_REFERER_URL != null) {
        /*
         * If we have a referer URL we need to set this. However, the
         * tracker does not have a getter for the URL binding strategy, so
         * we'll simply create a new one, ugly, but works.
         */
        log.info("Modifying GA tracking to use referer URL: " + GA_REFERER_URL);
        GoogleAnalytics_v1_URLBuildingStrategy urlb;
        urlb = new GoogleAnalytics_v1_URLBuildingStrategy("nexus", "read", GA_TRACKER_ID);
        urlb.setRefererURL("http://m2.cubeia.com");
        // set new referer
        tracker.setUrlBuildingStrategy(urlb);
    }
}

Not too complicated, eh? The only thing to note is that the only way to set the referer URL is by creating a new URL building strategy. Well, I can live with that.

Before we go on, we'll create a small subclass of FocusPoint which we'll use for tracking. Since JGoogleAnalytics is made primarily for applications, the focus point will URL-encode itself; that won't work for us, however, so we need to override its getContentURI method:

/**
 * Simple inner class that adapts the content URI to
 * not be URL-escaped.
 */
private static class Point extends FocusPoint {

    public Point(String name) {
        super(name);
    }

    @Override
    public String getContentURI() {
        return getName();
    }
}

And finally we'll tie it all together. We'll react on "read" actions, create a URI (of the form "repositoryId/requestPath") and track asynchronously, which will spawn a new thread for calling GA:

public boolean process(Repository rep, ResourceStoreRequest req, Action action) {
    if(action == Action.read) {
        /*
         * 1) create path by appending repo path to repo id
         * 2) create a subclass of focus point that handles proper URI's
         * 3) track asynchronously, this will perform the tracking on a new thread
         */
        String path = rep.getId() + req.getRequestPath();
        log.debug("Tracking path: " + path);
        FocusPoint p = new Point(path);
        tracker.trackAsynchronously(p);
    } else {
        log.debug("Ingoring action '" + action + "' for: " + req.getRequestPath());
    }
    return true;
}

And that's it. It's not perfect though: ideally I'd like to track hosted repositories only, I'd like to avoid tracking crawlers, and I would prefer not to configure via system properties (hint for you Nexus bundle users out there: you can set system properties in the "conf/wrapper.conf" file), but I'll leave those for a rainy day.
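
For the bundle users, that hint boils down to two lines along these lines in "conf/wrapper.conf" (index numbers and values are placeholders, adjust to your installation):

wrapper.java.additional.4=-Dcubeia.nexus.gaTrackerId=UA-XXXXXXX-X
wrapper.java.additional.5=-Dcubeia.nexus.gaRefererUrl=http://your.repo.host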

Here’s the source code as a Maven project.

Enjoy!

Update: If you download and try to compile, you will probably be missing the JGoogleAnalytics lib. You're welcome to use our GA-tracked repository if you like 🙂

<repository>
  <id>cubeia-nexus</id>
  <url>http://m2.cubeia.com/nexus/content/groups/public/</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>

By |Thursday, February 11, 2010|java|1 Comment

Pomodoro client

Viktor and I have released a small client application for working with the Pomodoro technique.

What is Pomodoro?

The aim of the Pomodoro Technique is to use time as a valuable ally in accomplishing what we want to do in the way we want to do it, and to enable us to continually improve the way we work or study.

The application is called Pomodairo and is released as open source at Google Code. You can find the application here.

There is also a good and free book about the Pomodoro technique here.

By |Saturday, May 30, 2009|java|Comments Off on Pomodoro client

A Java Concurrency Bug, and How We Solved It

Everyone agrees that good debugging is critical, but it is not entirely trivial when it comes to multi-threaded applications. Here's the story of how I tracked down a nasty bug in the Java 5 ReentrantReadWriteLock.

Update 15/4: To be a bit clearer, the Java bug in question applies to 1) Java 5 only; and 2) only fair locks. Non-fair locks seem to work, but non-fairness was never an option for us, if nothing else due to this other Java bug… Thanks to Artur Biesiadowski for the heads-up.

Our product Firebase is designed to offload threading issues and scalability from game developers onto a generic platform. However, one of our customers experienced mysterious "freezes" when running a fairly small system load in their staging environment.

First step on the way: getting stack dumps when the system is frozen. In other words, ask the client to do a "kill -3" on the server process when it hangs, as this dumps all threads and their current stack traces to standard output. This we did, but were only confused by it: all threads seemed dormant and there was no visible deadlock in the stack traces.

However, several threads were all mysteriously waiting on a read lock deep down in the server, and seemingly not getting any further, despite the fact that no one was holding the write lock. This wasn't conclusive though, as there was one thread waiting to take the write lock, and this would block the readers. But given that the server was actually frozen, it looked suspicious. So my first analysis concluded:

As far as I can see, the only abnormal thing that could have caused this stack is if a thread has taken the lock (read or write) and then crashed without releasing it; however, there doesn't seem to be any place in the code not safeguarded by try/finally (where the lock is released in the finally clause).

Implied in that conclusion is, of course, that this might either be normal behavior and we're looking in the wrong direction, or that we have a more serious Java error on our hands.

There's a lot of information to be had from a ReentrantReadWriteLock, including the threads waiting for either read or write privileges and the thread holding the write lock (if any), but, strangely enough, not the threads actually holding a read lock. And as a reader thread can effectively block the entire lock by not unlocking while a writer is waiting, this is information you really need to know.

So the next step was to get hold of the reader threads. I did this by sub-classing the ReentrantReadWriteLock to return my own version of the read lock, which, simplified, did this:

Set<Thread> readers = Collections.synchronizedSet(new HashSet<Thread>());

public Set<Thread> getReaders() {
    return readers;
}

public void lock() {
    super.lock();
    readers.add(Thread.currentThread()); // remember who holds the read lock
}

public void unlock() {
    super.unlock();
    readers.remove(Thread.currentThread());
}

Given this code, we now have a set containing a snapshot of the threads holding the read lock. I then added a JMX command for printing the following information to standard out for the given read/write lock:

  1. The threads waiting for a read lock, including their stack traces.

  2. The threads waiting for the write lock, including their stack traces.

  3. Any threads holding a read lock, including their stack traces.

  4. The thread holding the write lock, including its stack trace, if any.
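
Very much simplified, the dump can lean on the protected accessors in ReentrantReadWriteLock together with the readers set above (the print formatting is mine, and I'm assuming the subclass exposes getReaders()):

public void dumpLockInfo() {
    for (Thread t : getQueuedReaderThreads()) {  // 1) readers waiting for the lock
        print("Waiting reader", t);
    }
    for (Thread t : getQueuedWriterThreads()) {  // 2) writers waiting for the lock
        print("Waiting writer", t);
    }
    for (Thread t : getReaders()) {              // 3) readers currently holding the lock
        print("Holding reader", t);
    }
    Thread owner = getOwner();                   // 4) current write lock owner, if any
    if (owner != null) {
        print("Holding writer", owner);
    }
}

private void print(String label, Thread t) {
    System.out.println(label + ": " + t);
    for (StackTraceElement e : t.getStackTrace()) {
        System.out.println("    " + e);
    }
}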

I shipped this patched code to the client and asked them to freeze the server with the patch, print the new debug information, and then send me the output. Which they did, and the relevant output looked like this (very much shortened):

Holding reader: Thread[DQueue Handoff Executor Pool Thread { GAME }-1,5,main]
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:118)
    […]  

Waiting writer: Thread[Incoming,dqueue,127.0.0.1:7801,5,Thread Pools]
    sun.misc.Unsafe.park(Native Method)
    […]  

Waiting reader: Thread[DQueue Handoff Executor Pool Thread { GAME }-1,5,main]
    sun.misc.Unsafe.park(Native Method)
    [...]

See anything strange here? It appears that the same thread is both holding and waiting for a read lock at the same time. But this is supposed to be a reentrant lock. In which case…

So, the freeze was caused by this: there's a bug in Java 5 (but not in Java 6) where a fair ReentrantReadWriteLock stops a reader from re-entering the lock if there's a writer waiting. It is of course easy to write a test case for, which you can find here.
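
The test essentially boils down to a handful of lines (a sketch; on an affected Java 5 VM the last line parks forever, while on Java 6 it returns immediately):

final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true); // fair lock
lock.readLock().lock();              // this thread takes the read lock
new Thread() {
    public void run() {
        lock.writeLock().lock();     // a second thread queues up for the write lock
    }
}.start();
Thread.sleep(500);                   // give the writer time to park in the queue
lock.readLock().lock();              // re-entrant read: hangs on the buggy implementation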

This is now submitted as a bug to Sun, but I have yet to get a confirmation and bug number.

As for Firebase, it is now patched and released to use manually tracked re-entrance for read/write locks throughout the entire server when running under Java 5. Again very much simplified, it looks like this:

private ThreadLocal<AtomicInteger> count = new ThreadLocal<AtomicInteger>();

public void lock() {
    if (get() == 0) {
        // don't lock if we already have a count, in other words, only
        // really lock the first time we enter the lock
        super.lock();
    }
    // increment counter
    increment();
}  

public void unlock() {
    if (get() == 0) {
        // we don't have the lock, this is an illegal state
        throw new IllegalMonitorStateException();
    } else if (get() == 1) {
        // we have the lock, and this is the “first” one, so unlock
        super.unlock();
        remove();
    } else {
        // this is not the “first” lock, so just count down
        decrement();
    }
}	

// --- HELPER METHODS --- //  

private void remove() {
    count.remove();
} 	 

private int get() {
    AtomicInteger i = count.get();
    if (i == null) return 0;
    else {
        return i.intValue();
    }
} 	 

private void increment() {
    AtomicInteger i = count.get();
    if (i == null) {
        i = new AtomicInteger(0);
        count.set(i);
    }
    i.incrementAndGet();
} 	 

private void decrement() {
    AtomicInteger i = count.get();
    if (i == null) {
        // we should never get here...
        throw new IllegalStateException();
    }
    i.decrementAndGet();
}

The above code isn't trivial, but shouldn't be too hard to decode: we're simply managing a "count" each time a thread takes the read lock, but we only actually lock the first time; the other times we just increment the counter. On unlock we decrement the counter, and if it is the "last" lock, i.e. if the counter equals 1, we do the real unlock and remove the counter.

There are two things to learn: 1) in any sufficiently complex server, the above debug information is nice to have from the start on any important lock; and 2) it really is time to migrate to Java 6.

Lars J. Nilsson is a founder and Executive Vice President of Cubeia Ltd. He’s an autodidact programmer, former opera singer, and lunch time poet. You can contact him at lars.j.nilsson in Cubeia’s domain.

By |Tuesday, April 14, 2009|java|5 Comments

New Article: Real Time BI

Since I’ve spent some time discussing complex event processing in various contexts the last weeks, I thought I’d write up a small article about it. And lo, here it is.

I do point out in the beginning that the article is primarily targeted towards gambling networks; however, it can be applied to other domains as well. Financial fraud management? Sure. Industrial process supervision? No problem. And so on…

By |Tuesday, March 10, 2009|java|Comments Off on New Article: Real Time BI

Class Loading and AOP

I have a new article up, Debugging Class Loading with AOP, in which I briefly discuss how Aspect-Oriented Programming (AOP) using AspectJ can be used to debug class loading, in particular in instances where a system has multiple class loaders.

It does not cover how to use the load-time weaver that comes with AspectJ. For your reference, this is roughly how to use it (for Java 5 or higher):

  1. Download AspectJ from the Eclipse site.
  2. Create a configuration file called “aop.xml” in a META-INF folder on the class path. For an example file, see below.
  3. Start your VM with: -javaagent:pathto/aspectjweaver.jar

Which is about it. For running the code from the article in a Tomcat 6 installation, the "aop.xml" file may look something like this:

<code lost in blog system move>
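
Based on the description below, a reconstruction would look roughly like this (the aspect and application package names are illustrative):

<aspectj>
  <aspects>
    <aspect name="com.mycompany.debug.ClassLoadingAspect"/>
  </aspects>
  <weaver options="-verbose">
    <!-- limit weaving to your application and the Catalina class loaders -->
    <include within="com.mycompany..*"/>
    <include within="org.apache.catalina.loader..*"/>
  </weaver>
</aspectj>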

Notice above that we include the base package of your application for weaving as well as the Catalina class loader packages. We're doing this to limit the scope of the weaver; otherwise your debugging may take a bit too much time and resources.

By |Wednesday, February 25, 2009|java|Comments Off on Class Loading and AOP