Monthly Archives: February 2009

Class Loading and AOP

I have a new article up, Debugging Class Loading with AOP, in which I briefly discuss how Aspect-Oriented Programming (AOP) with AspectJ can be used to debug class loading, particularly in systems that have multiple class loaders.

The article does not cover how to use the load-time weaver that comes with AspectJ. For your reference, this is roughly how to use it (on Java 5 or higher):

  1. Download AspectJ from the Eclipse site.
  2. Create a configuration file called “aop.xml” in a META-INF folder on the class path. For an example file, see below.
  3. Start your VM with: -javaagent:pathto/aspectjweaver.jar

That’s about it. For running the code from the article in a Tomcat 6 installation, the “aop.xml” file may look something like this:

<code lost in blog system move>

Notice above that we include only your application’s base package and the Catalina class loader packages for weaving. We do this to limit the scope of the weaver; otherwise your debugging may take a bit too much time and resources.
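Since the original listing was lost in the blog move, here is a rough reconstruction of such an “aop.xml”; note that the aspect class name and the com.example base package are placeholders, not names from the original article:

```xml
<!DOCTYPE aspectj PUBLIC "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd">
<aspectj>
    <aspects>
        <!-- hypothetical debugging aspect; substitute the class from the article -->
        <aspect name="com.example.debug.ClassLoadingAspect"/>
    </aspects>
    <weaver options="-verbose">
        <!-- weave only your application and the Catalina class loaders -->
        <include within="com.example..*"/>
        <include within="org.apache.catalina.loader..*"/>
    </weaver>
</aspectj>
```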

By |Wednesday, February 25, 2009|java|Comments Off on Class Loading and AOP

How to Solve the “file already exists” Maven/SVN Problem

If you are using Subversion together with Maven, you may have run into a message like svn: File ‘/folder/pom.xml’ already exists when executing the “mvn release:prepare” goal.

This caused us no small amount of frustration. After some digging, we found that the underlying bug is in Subversion, while the corresponding Maven issue is reported here.

In the latter bug report, there is a suggestion for a workaround:

First, execute “mvn release:prepare”. When the error message appears, run “svn update” and then try again with “mvn release:prepare -Dresume”.
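In other words, the command sequence looks roughly like this:

```
mvn release:prepare            # fails with “svn: File '…/pom.xml' already exists”
svn update                     # bring the working copy up to date
mvn release:prepare -Dresume   # resume the interrupted release
```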

This has worked every time for us, but it still feels like an ugly workaround.

Good luck!

By |Friday, February 20, 2009|java|2 Comments

Design a Server Cluster for Load Testing

I have written a new article describing how we implemented a clustered solution on top of an existing load generator in Java. The article can be found here: How to Design a Server Cluster for Load Testing

By |Saturday, February 14, 2009|java|1 Comment

Behold the Site!

After a few years of ‘who cares about the looks’ mentality, we have now redesigned the entire website. The system is still Joomla, but the template is now ‘HiveMind’. Really good stuff!

So… What do you think?

By |Friday, February 13, 2009|cubeia|Comments Off on Behold the Site!

10 Sure Signs You Are Doing Maven Wrong

The “Maven Frustration Syndrome” is a severe disease that few developers using Maven have escaped. Luckily, there are ways to rub Maven the right way. Here’s a list of ten things I have seen in many projects, and what to do about them. The best part is that most of these issues have pretty quick fixes.

1. You constantly update and “mvn install” everything and the kitchen sink, just to be sure

This is the most common problem I have seen. The fix for this will have benefits for many of the points below. Here’s a statement that I don’t think you can find anywhere in the Maven documentation, but that I firmly believe is true:
“Every Maven artifact HAS to have a home in one Maven repository.”

This means that, in your company, you have to have a repository manager, such as Archiva or Artifactory. Every module that you develop must be deployed to this repository. When should we deploy our artifacts, you might ask? The answer is: after each successful build, by your build server. We currently use Hudson for this, but Continuum or TeamCity will do the job as well.

Now, when each module is always deployed to the repository, and your Maven installation fetches its dependencies from this repository, you will no longer have to “update and install everything”. If you are working on one module, you should only have to build that module. The other modules will be downloaded to you automatically.
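Wiring your build up for this is mostly a matter of adding a distributionManagement section to your (parent) pom, which the build server then uses via “mvn deploy”. The ids and URLs below are placeholders for your own Archiva or Artifactory instance:

```xml
<distributionManagement>
    <repository>
        <id>internal-releases</id>
        <url>http://repo.example.com/archiva/repository/releases</url>
    </repository>
    <snapshotRepository>
        <id>internal-snapshots</id>
        <url>http://repo.example.com/archiva/repository/snapshots</url>
    </snapshotRepository>
</distributionManagement>
```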

2. You and your colleagues solve the “dependency not found” problem by copying someone’s entire .m2-folder via memory stick
As crazy as this workaround seems, I have seen it done over and over again. Again, the solution is to have a repository manager. Once this is hooked up properly, you will never have to do the ugly “who has a memory stick?” dance again. You shouldn’t even have to fear wiping your local repository entirely, since download speed from the local repository manager should be very fast.

3. You solve missing dependency problems to old versions of local artifacts by taking the latest pom file of the module, changing the version and then installing that module
Consider the scenario where someone has changed the version of a module from 1.3-SNAPSHOT to 1.4-SNAPSHOT. You have been on a (well deserved) holiday, so you never updated and installed the 1.3-SNAPSHOT. Now, there’s another module that depends on 1.3-SNAPSHOT. How will you get a hold of that version? Well, you could dig around in the source management system to find the latest version of the module, where the version was still 1.3-SNAPSHOT, update to that version and install it. Or you could do the dirty trick of just changing your latest checkout from 1.4 to 1.3, install and then cross your fingers and hope that it works.

I think we can all agree that neither solution is pretty. Again, had your company had a repository manager, any successful build of 1.3-SNAPSHOT would already be sitting in the repository. Problem solved.

4. You have lots of shell scripts / batch files that traverse the target folders of your modules to create a zip file
Maven has a way of doing this, called “assemblies”. They are not the easiest thing to set up, but once configured they integrate with Maven far more smoothly than most home-cooked scripts. So many things can go wrong with such scripts once versions change, modules are moved, and so on; this is a matter of using the tool that is designed for the job. It also means you can create an assembly and deploy it to the Maven repository. Indeed, this is how we publish the distributions of Firebase to our customers, and it works well for us.
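As a sketch of what this can look like, a minimal assembly descriptor that zips up a module together with its dependencies might read as follows (the file name and layout here are illustrative); you would then point the maven-assembly-plugin at the descriptor and bind it to the package phase:

```xml
<!-- src/main/assembly/dist.xml -->
<assembly>
    <id>dist</id>
    <formats>
        <format>zip</format>
    </formats>
    <dependencySets>
        <dependencySet>
            <!-- put the module jar and all its dependencies under lib/ -->
            <outputDirectory>lib</outputDirectory>
        </dependencySet>
    </dependencySets>
</assembly>
```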

5. You have lots of xml in your poms whose only purpose is to copy files into your deploy folders, but it’s not working properly and it’s not portable
This is the flip side of the coin from point 4. Maven does not seem to be built for copying files. If all you want to do is copy your artifact (be it a war, an ear or a zip file), it might be easier to just create an ant / shell / batch script for the job. I have been known to be a culprit of trying to bend Maven this way, but it just doesn’t seem to want to go there with me. One issue is which phase to bind the copy task to, not to mention the problems of making the solution portable, so that your colleague can have the file copied to C:\deploy, you can have it copied to ~/deploy, and the build server can skip this step altogether.
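A standalone Ant script of the kind suggested above might look like this; the file names are placeholders, each developer overrides deploy.dir on the command line, and the build server simply never runs it:

```xml
<!-- deploy.xml: run with “ant -f deploy.xml -Ddeploy.dir=C:\deploy” -->
<project name="copy-artifact" default="copy">
    <!-- sensible per-user default; override with -Ddeploy.dir=... -->
    <property name="deploy.dir" location="${user.home}/deploy"/>
    <target name="copy">
        <copy file="target/myapp.war" todir="${deploy.dir}"/>
    </target>
</project>
```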

6. Your build stalls forever on “checking for updates from xxx-repo” and you don’t know why
This is a real time sink, yet few developers seem annoyed enough by it to hunt down the root cause. Perhaps the delay is just too good an excuse to go and get a coffee. Anyhow, a common cause of this problem is that you have snapshot dependencies and a number of repositories listed in your pom files. Maven has no concept of which artifact should be found in which repository, so it happily looks for updates of all artifacts in all repositories. The trick, again (you guessed it), is to use a repository manager.

If you configure your Maven installation to look for all artifacts in this repository (by setting a mirror in your settings.xml), Maven will only look there. Since this server is local, it will be fast. It’s important here that the repository manager is also configured properly. For Archiva, this means that you should cache failures and only update snapshots daily.
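The mirror setup amounts to a few lines in ~/.m2/settings.xml; the id and URL below are placeholders for your own repository manager:

```xml
<settings>
    <mirrors>
        <mirror>
            <id>internal-repo</id>
            <!-- route ALL repository lookups through the local manager -->
            <mirrorOf>*</mirrorOf>
            <url>http://repo.example.com/archiva/repository/all</url>
        </mirror>
    </mirrors>
</settings>
```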

7. If you work on the release branch, your trunk build might fail (or 1.0-SNAPSHOT should be good enough for everyone)
There are a few things that you should remember to do when creating a branch of your project. One is to tell your build server to also build this branch and the other is to change the version in your pom files. If you neglect to do the latter, problems will ensue when you change the signature of a method in your branch, and then try to develop on the trunk again.

There is a goal in the release plugin called “branch”; by running “mvn release:branch”, Maven can automatically update the versions in the poms for you. (Disclaimer: I haven’t tried this myself, since we usually only branch when we create a release, using “mvn release:prepare” followed by “mvn release:perform”.)

8. All your company internal dependencies end with -SNAPSHOT
The idea of a snapshot seems comforting to developers. It’s almost done. But sooner or later, you will have to put your foot down and release your module to the world, or at least to your colleagues. There are a few problems with staying in snapshot land forever. First, it slows down your build, since Maven has to check for the latest snapshot in remote repositories (or at least one repository, if you followed point 6). Second, if you depend on a snapshot, it’s hard to know which version of the snapshot you are depending on. My build can fail while your build works, just because you happen to have a newer snapshot.

If you depend on a company internal module that works the way it is right now, it might be a good idea to drop a release of that module and change the dependency to the non snapshot version. Now we know that even if the guys working on that module go crazy with new features, your module will still build, since it depends on a stable release.

9. If you run “mvn dependency:analyze” the list of unused and undeclared dependencies is too long to recite before lunch
This might seem a bit anal, but there is a real danger in not knowing exactly what you depend on. The biggest issue here is caused by Maven’s way of handling transitive dependencies: you can use code from your dependency’s dependencies and it will compile just fine. The problems come when either you change the scope of that dependency to “provided”, or the authors of the module you depend on change its dependencies. Suddenly, the code no longer compiles. It is also an undocumented dependency, which can lure you into a false sense of security. I have written more about this issue here.

Luckily, there is a way to check how many shortcuts you have taken by running “mvn dependency:analyze”. This will tell you which modules you are using without having declared a dependency to them and which modules you depend on, but are not using. Obviously, the second list is not as dangerous, but still brings in unnecessary jar files which will increase build time and the size of your project assemblies.

10. When someone releases a new version of a plugin that you use, your builds tend to fail
Maven has for some reason decided that you don’t need to specify the version of your plugin dependencies; it will find the “latest and greatest” for you. When a new version of a plugin is released, you will download and use it without knowing it. But the “latest and greatest” is not always so great: lo and behold, new versions can contain new bugs. It is therefore good practice to always declare the version you wish to use when you specify a plugin in your pom file. If you really want to be safe, you can pin the versions of Maven’s internal plugins as well.
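Pinning a plugin version is a few lines in your (parent) pom; the version number below is only an example:

```xml
<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <!-- pin the version so a new plugin release cannot surprise you -->
                <version>2.4.3</version>
            </plugin>
        </plugins>
    </pluginManagement>
</build>
```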

The Official Maven Site
Maven: The Definitive Guide

By |Friday, February 13, 2009|java|4 Comments

Firebase 1.6.0 Released

The latest and greatest of Firebase incarnations is now official. Firebase 1.6.0 was released today after nearly four months of development.

This release contains the following functional highlights:

  • Prepared transactions. Each event transaction is now lazily bound to a JDBC data source or a JPA entity manager. These prepared objects are unique per event and will be committed or rolled back automatically when the event has been handled.
  • Service transactions. Given the new and improved internal transaction stack, event driven services can now be configured to use JTA.
  • Server SSL support. The Firebase server now supports SSL certificates in Java Key Store or PKCS#12 formats for client connections.
  • Activator routing. It is now possible to send events to and from activators within a Firebase installation. This gives games, services and tournaments an option to interact with the activators which was previously not possible.

But under the hood, there are more improvements: we have switched to the Bitronix JTA provider, making JTA a very attractive feature once more; the lobby has seen several bug fixes and improvements; and a completely new internal transaction stack brings better performance and, hopefully, better extensibility, along with significant speed improvements and much more.

Firebase 1.6.0 also marks a transition in target Java platform. So far, Firebase has been targeted exclusively at Java 5; from this release onwards, we will concentrate more on Java 6.

By |Friday, February 13, 2009|firebase|Comments Off on Firebase 1.6.0 Released