
Monday, June 2, 2014

Deploying OSGi applications

There are many different options for deploying OSGi applications, and with that, many opinions
about the "best" way to run OSGi apps in production. In this post I will try to explain the different
options and also explain why we deploy applications the way we do at Luminis Technologies.
Let's first see the options we have.

The era of application servers

For many years, deploying Java backend applications meant using an application server, and I did so myself for a long time. For this discussion a Servlet container like Tomcat is the same thing as an application server; although it’s smaller, it’s built on the same concepts.
Let's look at what an application server actually offers:
  • Running multiple applications on shared resources
  • Container level configuration of data sources and other resources
  • Framework components such as a Servlet container
  • Some basic monitoring and log viewing facilities

These seem like quite useful features, and they definitely have been in the past. The basic idea is
that we have multiple applications that should be deployed on the same machine, using the
same resources. In a time where compute resources can easily be virtualised in small sizes
(using cloud services like EC2, but even more so using technology like Docker), you may
wonder if this is still relevant. Why not give each application a separate process, or even a
separate (virtual) machine? The overhead of creating multiple Java VMs is hardly relevant any
more, so why create a maintenance dependency between two unrelated applications?
Framework components such as a Servlet container are also no longer heavyweight
components, and can easily be embedded within an application.

Looking at the microservices ideas and the popularity of tools like Spring Boot, it’s clear that the
era of large application servers is coming to an end. Of course there are new problems to deal with in a
setup like this; instead of managing one large container, we need to manage many small isolated
machines and processes. This is not necessarily difficult, but it is definitely different in the area of
deployment and management.

Accepting the fact that there might be alternative ways to deploy Java applications, let’s take a
look at the options for deploying OSGi applications.

Self-contained executable JARs

Based on a Bnd run configuration we can generate an executable JAR file that contains an OSGi
framework, and all bundles that should be installed. The whole application can be started by
simply executing the JAR file:
java -jar myapp.jar
The obvious benefit is that it's extremely simple to run, and that it runs everywhere Java is
installed. There is no need to install or manage any kind of container. When the application
needs to be upgraded, we can simply replace the JAR file and restart the process. This update
process is a bit harsh; we don’t use the facilities provided by OSGi to update parts of the
application by installing only the changed bundles. If we have a large number of machines or
devices, updating also requires some manual work or scripting, because we have to update all
those machines.
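For reference, a minimal Bnd run configuration behind such an executable JAR might look like the sketch below. The bundle names are illustrative, not a real application; list your own bundles instead.

```
# myapp.bndrun -- illustrative bundle names
-runfw: org.apache.felix.framework
-runee: JavaSE-1.7
-runbundles: \
    org.apache.felix.configadmin,\
    org.apache.felix.dependencymanager,\
    com.example.myapp.api,\
    com.example.myapp.impl
```

Exporting this run descriptor (for example from Bndtools) packages the framework and the listed bundles into the single executable myapp.jar.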


Let’s explore options to make this process a bit more flexible.

Provisioning using Apache ACE

This is the deployment mechanism we use ourselves at Luminis Technologies. In this approach
we don’t manually install our application on a machine or container. Instead we use a
provisioning server as a broker. When a new release of our software is ready, the bundles are
uploaded to Apache ACE. The bundles can be grouped into features and distributions. This is a
powerful way to create variations of distributions based on a common set of bundles.

As long as we don’t register any targets, our bundles just sit on Apache ACE, and our application
is not running yet. To actually run the app, we need to start and register a target. A target is an
OSGi framework with the Apache ACE management agent bundle installed. Based on
configuration we pass to the agent, it registers to the Apache ACE server. Apache ACE will than
send the distribution prepared for this target to the target. The target receives the deployment
package, starts the bundles and will be up and running. The target itself can be started again as
a self contained executable JAR file, and run everywhere where Java is installed (including
embedded devices etc.).
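As a sketch of that configuration (the host, port, and target name are illustrative, and the exact property names depend on the Apache ACE version, so check the ACE documentation), a target can be started as an executable JAR with the management agent pointed at the provisioning server:

```shell
# Start a target: an OSGi framework plus the ACE management agent.
# "identification" names this target; "discovery" points at the ACE server.
java -jar target.jar identification=node-1 discovery=http://ace.example.com:8080
```

Once the agent registers, Apache ACE ships the distribution prepared for "node-1" to it, and the target starts its bundles.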


Why add this extra complexity to deployments? There are a number of benefits compared to
simply running self-contained JARs:
  • Incremental updates
  • Distribution management
  • Automatic updates of sets of targets

When new bundles are uploaded to Apache ACE, the targets that use these bundles can
automatically be updated. The deployment package sent to the targets contains only the
updated bundles, and the updates are installed while the target is running; the target never has to
be restarted. This also makes it easy to update large numbers of targets that all run the same
software. We use this to update large numbers of cluster nodes running on Amazon EC2, but the
same mechanism works great for the embedded/IoT world, where a large number of devices
requires an update. This is even more useful when there are variations of distributions used by
the targets. Instead of rebuilding each distribution manually, updates are automatically deployed
by Apache ACE to the relevant distributions.

You could create the same mechanism using some scripting. You might make distributions
available in some central location and use scripts to push those distributions to targets. Although
this is not rocket science, it’s still quite some work to actually get this working, especially when
incremental updates are required (for example in low-bandwidth situations).

From the perspective of targets both solutions are pretty much equal; applications are started as
a process and should be managed and monitored as such.


Configuration and Monitoring without a container

When deploying applications as processes we need some way to configure and monitor the
application. For configuration we already have all the required tools built into OSGi: Configuration
Admin. Using Config Admin we can easily load configuration from any place: property files, a
REST interface, a database… This opens up endless possibilities to keep configuration data
separate from the software release itself. In the PulseOn project we deploy application nodes to
Amazon EC2. On EC2 there is the concept of user data: basically arbitrary configuration data we
can specify when configuring machines. The user data is made available on a REST interface
only accessible by the machine itself. This data is loaded and pushed to Config Admin, which
configures our components.
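As an illustration of that flow (the class, PID, and parsing rules below are assumptions for the sketch, not the actual PulseOn code), the user data can be parsed into a properties table that a component then hands to Config Admin:

```java
import java.util.Hashtable;

// Illustrative sketch: turn EC2-style "key=value" user data lines into
// a properties table suitable for Configuration Admin.
public class UserDataConfig {

    // Parse "key=value" lines; blank lines and '#' comments are skipped.
    public static Hashtable<String, Object> parse(String userData) {
        Hashtable<String, Object> props = new Hashtable<>();
        for (String line : userData.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) {
                continue;
            }
            int idx = line.indexOf('=');
            if (idx > 0) {
                props.put(line.substring(0, idx).trim(),
                          line.substring(idx + 1).trim());
            }
        }
        // Inside the OSGi framework the result would be handed to
        // Configuration Admin, e.g. (PID is hypothetical):
        //   configAdmin.getConfiguration("com.example.myapp").update(props);
        return props;
    }
}
```

The same parsing works no matter where the raw data comes from: a local property file, the EC2 user data endpoint, or any other REST interface.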

What about monitoring? An application server often has functionality to view log files, sometimes
combined for a cluster of nodes. By itself, this is not very useful at all. Does it make sense
to just look through log files? What we need are mechanisms to actively report problems, actively
check the health of nodes, and analyse log files in smart ways. We don’t really
care if the application process is running or not; it only matters whether the application serves client
requests correctly. There are plenty of great tools available to centralise log analysis, and active
monitoring and reporting should be part of our application services.

A nice example, again from the PulseOn project, is our health check mechanism. Each OSGi
service in our application can implement a health check interface. The service itself reports if it’s
healthy. Our load balancers invoke these health checks and decide whether a node is healthy based on
the results. When a node is unhealthy, the cluster replaces that node.
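A minimal sketch of such a mechanism (the interface name and aggregation logic are assumptions, not the actual PulseOn code): each service implements the check, and an aggregator, which could back the HTTP endpoint the load balancer polls, reports the node as healthy only if every check passes.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of a per-service health check mechanism.
public class HealthCheckDemo {

    // Each OSGi service in the application can implement this interface
    // and register it, so the node can introspect its own health.
    public interface HealthCheck {
        boolean isHealthy();
    }

    // A node is healthy only when every registered check passes; a load
    // balancer would poll an HTTP endpoint backed by this method and
    // replace the node when it reports unhealthy.
    public static boolean nodeHealthy(List<? extends HealthCheck> checks) {
        for (HealthCheck check : checks) {
            if (!check.isHealthy()) {
                return false;
            }
        }
        return true;
    }
}
```

In a real OSGi application the list of checks would be injected via the service registry (for example with the whiteboard pattern), so adding a service with a health check automatically extends what the node reports on.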

OSGi app servers

I hope I have made my point by now that an application server or container deployment model is
really not necessary any more today. Still, there are lots of users deploying OSGi bundles to
containers, so let’s discuss this further. One popular container for OSGi is Apache Karaf.
Apache Karaf is basically an application server focussed on OSGi. Using Karaf it’s easy to
deploy multiple applications in the same container. It also comes with a bunch of pre-installed
features to make it easier to work with technology that is not primarily designed to be used in a
modular environment. While this is great when you depend on these technologies, you should
probably ask yourself if it’s such a good idea to use non-modular frameworks in a modular
architecture in the first place… Frameworks and components designed to be used with OSGi,
such as the components from the Amdatu project, don’t require any tricks to use. In the long
term this will keep your architecture a lot cleaner.

Other users deploy OSGi applications to Java EE app servers like WebSphere or WildFly/EAP.
The main benefit is integration with Java EE technology, bridging the dynamic OSGi world with
the static, but familiar, Java EE world. This is a recipe for disaster. Although you can easily use
things like JPA and EJB, it breaks all the concepts of service dynamics. More importantly, you really
don’t need to do this. Tools for dependency injection, creating RESTful web services, and working
with data stores are available in a much more OSGi-natural way, so why stay in the non-modular
world with one leg and lose a lot of OSGi’s benefits?

2 comments:

  1. Good article.
    Regarding your standpoint on Karaf, I disagree a bit. Of course Karaf is an OSGi container, but that doesn't mean you can't deploy "non-OSGi" applications. For instance, you can deploy a non-OSGi bundle in Karaf (using wrap), you can deploy Spring applications in Karaf, and you can deploy Camel routes, without really knowing the OSGi part.
    But the real question is: why can't I use all the good features provided by Karaf and OSGi (like OSGi services, ConfigAdmin, etc.)?
    Moreover, users could want to split their applications into very small parts (too much). But you can also create a more "monolithic" kind of application.

  2. That's exactly my point about Karaf. If you write pure OSGi applications and don't rely on libraries/frameworks that don't work well with OSGi (e.g. Spring), Karaf doesn't really add much. OSGi services like ConfigAdmin can be used without Karaf as well (we are doing so in most of our code).

    When you don't meet these requirements and have a semi-modular application at best, Karaf makes life a lot easier. My opinion, however, is that you really shouldn't allow these kinds of things in your architecture; keep it clean and truly modular.

    That being said, it's clear that many people will find great value in Karaf.
