Saturday, November 30, 2013

Building bndtools projects with Gradle

Headless builds in bnd and bndtools

Bndtools is by far the easiest way to develop OSGi applications. The extremely fast code-save-test development cycle would be reason enough. If you're new to Bndtools, take a look at this video first to see it in action.

Bndtools has always generated ANT build files for each project to support "headless" builds, for example on a build server. While ANT may seem like a long-forgotten build tool to some, this actually works quite nicely, because the build files are generated and in most cases you don't need to look at them much. That is, until you want to add some custom steps to your build: think code coverage tools, JavaScript minification, etc.

Selecting a better build tool

To start looking at an alternative build tool we first need to define what we need from a build tool:
  • Building bnd projects without additional setup
  • Running bnd integration tests
  • Easily adding custom build steps/plugins/scripts
  • Integration with bnd dependency management
That last requirement is important. Bnd already manages our build dependencies in the .bnd files; we don't want to repeat this in a build-tool-specific way! This makes our selection process easier: we are only looking for a build tool, not for a dependency management tool.

The two most obvious candidates in the Java ecosystem are Maven and Gradle. Let's evaluate both.

Maven is both a build tool and a dependency management tool. Its dependency management works reasonably well (although it has its flaws), but as a build tool Maven isn't exactly perfect. Building a standard project with Maven is easy, but adding your own build steps is quite a painful experience. Even the most trivial script requires you to write (or use) a Maven plugin, which involves a lot of boilerplate. This is also the reason that many (non-OSGi) projects are currently migrating from Maven to Gradle. And we don't need Maven's dependency management: bnd already does this for us in a way that is much more natural for OSGi, using OSGi metadata instead of POM files.

Gradle is a more generic build tool with optional dependency management similar to Maven's. Gradle build files are Groovy scripts with a powerful DSL. It's trivial to add your own build steps and hook into the default life-cycle, and Groovy turns out to be an extremely powerful language for writing build scripts, especially when compared to writing ANT scripts in XML or Maven's completely broken plugin development. Another nice bonus is that we can re-use existing ANT plugins directly from Gradle.

Gradle obviously matches our requirements better, so let's take a look at how we can integrate it with bnd!

Integrating Gradle with bnd

Bnd is not only a command-line tool (as most people know it); it also has a powerful Java API. Using this API we can perform builds and test runs from code. This is exactly what Bndtools (which is built on top of bnd) does as well. Because Gradle build scripts are Groovy code, we can use the bnd API directly from our build script instead of launching bnd as an external process.

As an example for this post I have taken the modularity-showcase which is part of our book, and set up a Gradle build. Let's take a look at the build file and walk through it step by step.

Setting up dependencies
Because the build script is using bnd, we need the bnd jar to be on the build script classpath.
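A minimal buildscript block for this could look as follows. The repository, coordinates and version shown here are assumptions; use whatever bnd jar matches your workspace.

```groovy
// build.gradle -- put the bnd API on the build script's own classpath.
// Coordinates and version are assumptions; adjust to your workspace.
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'biz.aQute.bnd:bnd:2.2.0'
    }
}
```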

Generating settings.gradle
A multi-project build in Gradle requires a settings.gradle file, which lists the projects that are part of the build. Since each project in the workspace is a bnd project, we can create a task that generates the settings.gradle file for us. This is exactly what the generatesettings task does. If you want to exclude certain projects from the build, you could filter those right there.
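A sketch of such a task, assuming the workspace projects live directly under the root directory and each contains a bnd.bnd file:

```groovy
// Treat every directory containing a bnd.bnd file as a bnd project and
// write them all into settings.gradle. Filter here to exclude projects.
task generatesettings << {
    def projects = rootDir.listFiles()
        .findAll { it.isDirectory() && new File(it, 'bnd.bnd').exists() }
        .collect { "'${it.name}'" }
    new File(rootDir, 'settings.gradle').text = "include ${projects.join(', ')}\n"
}
```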

Building projects
Because our build script is Groovy code, we can use the bnd API directly. Before we can build anything we have to initialise the workspace.
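With the bnd jar on the build script classpath, initialising the workspace is a one-liner using bnd's Workspace class (a minimal sketch, without error handling):

```groovy
import aQute.bnd.build.Workspace

// Initialise the bnd workspace once, at the root of the multi-project build.
// All per-project bnd operations below are resolved against this workspace.
Workspace workspace = Workspace.getWorkspace(rootDir)
```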

Next we have to build each individual project. Gradle's DSL uses the subprojects block to declare tasks that run on each project. For each project we apply the Java plugin; this gives us a Java compiler. Next we configure our source and target directories to match the Bndtools defaults.
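In Gradle 1.x syntax this could look as follows; the src/test/bin/bin_test directory names are the Bndtools defaults.

```groovy
subprojects {
    apply plugin: 'java'

    // Match the Bndtools defaults: sources in src/, tests in test/,
    // compiled classes in bin/ and bin_test/.
    sourceSets {
        main {
            java.srcDirs = ['src']
            output.classesDir = new File(projectDir, 'bin')
        }
        test {
            java.srcDirs = ['test']
            output.classesDir = new File(projectDir, 'bin_test')
        }
    }
}
```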

The next step is an important one. We declare that each compileJava task (the default compile task) depends on the bndbuild task of all its project dependencies. A project dependency is a dependency on a bundle built by another Bndtools project in the same workspace.
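Inside the subprojects block this wiring could be sketched like this, using the workspace initialised earlier; getDependson() is the bnd Project API that returns the workspace projects a project builds against.

```groovy
// Wire compileJava to the bndbuild task of every bnd project dependency,
// so dependencies are built (and their bundles generated) first.
def bndProject = workspace.getProject(project.name)
bndProject.dependson.each { dep ->
    compileJava.dependsOn ":${dep.name}:bndbuild"
}
```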

Next we add JUnit as a default dependency to the testCompile configuration, in case someone added JUnit only as an Eclipse library instead of adding it to the build path. This is common because Eclipse does this automatically.
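In the subprojects block this is a single declaration (the JUnit version shown is an assumption):

```groovy
dependencies {
    // Fallback so test compilation always works, even when JUnit was only
    // added as an Eclipse library rather than to the bnd build path.
    testCompile 'junit:junit:4.11'
}
```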

In the bndbuild task we finally perform the build itself. This is as easy as calling the build method on the bnd project and making sure that errors and warnings are reported correctly.
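A sketch of the bndbuild task; build(), getWarnings() and getErrors() are part of the bnd Project API, and the error handling here is intentionally minimal.

```groovy
task bndbuild(dependsOn: compileJava) << {
    def bndProject = workspace.getProject(project.name)
    bndProject.build()
    // Report warnings, and fail the build on errors.
    bndProject.warnings.each { println "WARNING: $it" }
    if (!bndProject.errors.isEmpty()) {
        bndProject.errors.each { println "ERROR: $it" }
        throw new GradleException("Build errors in ${project.name}")
    }
}
```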

Testing projects
Bnd's excellent integration testing support can also be used headless, generating JUnit XML output which can be parsed by most build servers. We can recognise a test project by checking for the Test-Cases header in the bnd files. For each test project we simply invoke bndProject.test(), which will run the tests and generate the XML files.
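Sketched as a Gradle task, this becomes:

```groovy
// Only projects declaring a Test-Cases header are integration test projects;
// skip all others.
task bndtest(dependsOn: bndbuild) << {
    def bndProject = workspace.getProject(project.name)
    if (bndProject.getProperty('Test-Cases')) {
        bndProject.test()   // runs the tests and writes JUnit XML reports
        if (!bndProject.errors.isEmpty()) {
            throw new GradleException("Test failures in ${project.name}")
        }
    }
}
```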

There is one thing to be careful with! Do not add an OSGi shell in your integration tests. While this works within the IDE, it will terminate the tests when running in headless mode.

Packaging a release
After performing a headless build it's useful to collect generated jar files and external dependencies, so that we can install them on a server, for example by using Apache ACE. Collecting generated bundles is easy, but how do we collect external dependencies that we need for a deployment?

In bndtools we use .bndrun files to run projects. In these files we specify which bundles (both from the workspace and externally) we want to install. A while ago I created an ANT task in bnd that parses these files and copies all bundles to a directory. From that directory you can easily copy the files to whatever deployment environment you use.

This ANT task is reused in the release task. You could create multiple run configurations and export all of them.
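The easy half, gathering the bundles generated by the workspace projects, could be sketched as follows; the 'generated' output folder is the Bndtools default. Invoking the bndrun ANT task on top of this is a matter of an ant.taskdef pointing at the task class shipped in the bnd jar.

```groovy
// Collect the bundles generated by each project into one release directory.
task release << {
    def releaseDir = new File(buildDir, 'release')
    releaseDir.mkdirs()
    subprojects.each { p ->
        copy {
            from new File(p.projectDir, 'generated')
            include '*.jar'
            into releaseDir
        }
    }
}
```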

Generating a Gradle wrapper
A convenient Gradle feature is the possibility to generate a wrapper. This generates a gradlew script (for both Windows and Mac OS/Linux systems) that downloads Gradle and runs the build. This way you can run Gradle builds even when a build server doesn't "support" Gradle.
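Declaring the wrapper is a few lines (the Gradle version shown is an assumption; pin the version your build is tested against). Running gradle wrapper then generates gradlew, gradlew.bat and the wrapper jar, which you check into version control.

```groovy
// Generates gradlew scripts that download and run this exact Gradle version.
task wrapper(type: Wrapper) {
    gradleVersion = '1.9'
}
```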

Running the build

Before we can run a build, we first need to generate settings.gradle. Remember to repeat this when you add new projects to the workspace!

> gradle generatesettings

Now we can run our build. Let's run a clean build and integration test.

> gradle clean bndbuild bndtest

Finally we can collect bundles for a deployment.

> gradle release

Multi project and single project builds

With the build file that we have seen, we can run both multi-project and single-project builds. This is a standard Gradle feature and works for our builds as well. For a single-project build you simply invoke gradle bndbuild within the project's directory instead of at the workspace level.

Gradle builds in the real world

Setting up a build for a small project is always easy. How does this scale to large projects? For the last few months the PulseOn project that I'm working on has been running on Gradle builds. To give you an idea of size, we generate ~300 bundles and run ~1500 integration tests each build. This runs on a Bamboo server for each feature branch in Git. Our code base includes both Java and Groovy code.

Our builds also include code coverage metrics based on Clover, JavaScript optimisation using Require.js and the Closure compiler, and Sonar metrics. This shows that Gradle works perfectly together with bnd to perform large, real-life builds.

Sunday, November 3, 2013

Visualizing OSGi services with D3.js

Because I couldn't resist writing a bit of code during my vacation I started playing with D3.js, a data visualization library for JavaScript, and used it to visualize dependencies between OSGi services. If you are already familiar with designing systems based on OSGi services you might just want to take a look at the video directly. If you need a little more introduction, continue reading.

We are using OSGi services for pretty much everything; all our code lives within services. Services are the primary tool when implementing a modular architecture. Just using services doesn't necessarily make your code modular yet, however. A lot of thought has to go into the design of service interfaces and the dependencies between services.

In a service-based architecture it's obviously a good thing to re-use existing services when implementing new functionality. At the same time it's important to be careful about creating too many outgoing dependencies. If everything depends on everything else, it becomes very difficult to make any changes to the code (also known as "not modular"...). When implementing a system you will start to notice that some services are re-used by many other services. Although not an official name at all, I often call them core services: services that play a central role in your system. These services must be kept stable, because major changes to their interfaces require a lot of work. Outgoing dependencies from these services should also be added with care. To guarantee the stability of the service, it's good to keep the number of dependencies low; this prevents a ripple effect of changes when touching something. In practice, a core service should only depend on other services which are very stable. Do not depend on services that are likely to change often.

Many other services, however, might not be used by any other services at all; for example a scheduled job, or a RESTful web service that only serves a specific part of a user interface. These services can easily be replaced or even discarded when no longer needed. In an agile environment this happens all the time. For these services it's not really a problem to depend on other services, especially not on the core services of the system.

If your architecture is sound, you probably have a very clear idea about which services are your core services. Still, it's useful to actually visualize this to identify any services that have more incoming dependencies than you expected, or at least see which other services have a dependency on a certain service. And that's exactly what I did for this experiment.

We use Apache Felix Dependency Manager to create services and dependencies between them. Because of this I used the Apache Felix Dependency Manager API to create the list of dependencies between services. Note that this will not show services that are not created by Dependency Manager. The visual graph itself is created by D3.js based on this example.

The code is available on BitBucket: