
Thursday, November 6, 2014

First beta of Amdatu Bootstrap

For the past few months we have been working on a development tool: Amdatu Bootstrap. Amdatu Bootstrap makes OSGi development faster and easier by providing an interactive tool that automates common tasks like configuring a build path or run configuration, and it integrates with many libraries. Amdatu Bootstrap is built on top of Bnd and is typically used together with Bndtools.

The video below gives an impression of how you can use Amdatu Bootstrap.


Amdatu Bootstrap comes with a web based UI and an OSGi based backend. We chose web technology for the frontend because it makes it easy to develop a user friendly application, and it opens up the possibility of integrating with different IDEs.

So why not just extend Bndtools with the functionality of Amdatu Bootstrap? First of all, because Bndtools is based on Eclipse, it is not very easy to extend; a lot of knowledge about Eclipse RCP is required, even for relatively simple tasks. Also, we want a tool with the potential to be used with other IDEs in the future. Amdatu Bootstrap is designed to be extensible: it is completely based on OSGi services, and adding a plugin is as easy as implementing an interface, as the sketch below illustrates.
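
To give an impression of what a plugin could look like, here is a minimal sketch in Java. Note that the BootstrapPlugin and PluginContext types and their methods are hypothetical stand-ins, not the real Amdatu Bootstrap API; see the plugin development guide for the actual interfaces.

// Hypothetical sketch: BootstrapPlugin and PluginContext are illustrative
// names, not the real Amdatu Bootstrap API.
public class HelloPlugin implements BootstrapPlugin {

    @Override
    public String getName() {
        return "hello"; // name under which the plugin would show up in the UI
    }

    @Override
    public void execute(PluginContext context) {
        // automate a task here, e.g. add a dependency to the build path
        context.log("Hello from a Bootstrap plugin!");
    }
}

Because a plugin is just an OSGi service, it can be picked up dynamically by the backend once its bundle is installed.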

Amdatu Bootstrap went through several iterations of APIs and ideas, while being used by a diverse group of early users. We are now really happy with the API and the way the tool works, and are announcing the first beta release. Please provide feedback! You can do so on the mailing list or create issues and feature requests on JIRA. If you want to help out even more you can take a look at the plugin development guide and work on some awesome new plugins.


Thursday, August 28, 2014

Join me at JDD Krakow

On October 13th and 14th I will be speaking at JDD in Krakow. I spoke at JDD last year and I'm very happy to be back! The conference is not too large, which gives you a great opportunity to actually meet and talk to people. There are some excellent speakers on the schedule already, and I'm expecting many more. The call for papers is still open, so you can be one of them: http://14.jdd.org.pl/cfp/cfp/

Krakow is also a great place to be, and seems to be a hotspot for software engineering.

My talk is a two hour introduction to OSGi:
Modularity is becoming more relevant each day. It is the key to maintainable code and the ultimate agile tool. OSGi is the only mature modularity solution available today. In this talk you will see OSGi development in action.
OSGi has a reputation for being hard to use and complex. With today's tools and frameworks this is far from true! In this presentation you will see an OSGi application being built from scratch and learn about package imports and exports, dynamic services, dependency injection and integration with JAX-RS and MongoDB. This talk is both for developers new to OSGi who want to learn the OSGi basics, and for developers with some OSGi experience looking to optimize their workflow.


If you have the chance, make sure to be there!


Monday, June 2, 2014

Deploying OSGi applications

There are many different options for deploying OSGi applications, and with that, many opinions about the "best" way to run OSGi apps in production. In this post I will try to explain the different options and also explain why we deploy applications the way we do at Luminis Technologies. Let's first see the options we have.

The era of application servers

For many years the deployment of Java backend applications involved application servers, and I deployed applications that way myself. For this discussion a Servlet container like Tomcat is the same thing as an application server; although it's smaller, it's built on the same concepts. Let's look at what an application server actually offers:
  • Running multiple applications on shared resources
  • Container level configuration of data sources and other resources
  • Framework components such as a Servlet container
  • Some basic monitoring and log viewing facilities

These seem like quite useful features, and they definitely have been in the past. The basic idea is that we have multiple applications that should be deployed on the same machine, using the same resources. In a time where compute resources can easily be virtualised in small sizes (using cloud services like EC2, but even more so using technology like Docker), you may wonder whether this is still relevant. Why not give each application a separate process, or even separate (virtual) machines? The overhead of creating multiple Java VMs is hardly relevant any more, so why create a maintenance dependency between two unrelated applications? Framework components such as a Servlet container are also no longer heavyweight components, and can easily be embedded within an application.

Looking at the Micro Services ideas and the popularity of tools like Spring Boot, it's clear that the era of large application servers is coming to an end. Of course there are new problems to deal with in a setup like this; instead of managing one large container, we need to manage many small isolated machines and processes. This is not necessarily difficult, but definitely different in the area of deployment and management.

Accepting the fact that there might be alternative ways to deploy Java applications, let's take a look at options to deploy OSGi applications.

Self contained executable JARs

Based on a Bnd run configuration we can generate an executable JAR file that contains an OSGi framework and all bundles that should be installed. The whole application can be started by simply executing the JAR file:

java -jar myapp.jar

The obvious benefit is that it's extremely simple to run, and that it runs everywhere Java is installed. There is no need for any kind of installation or management of some kind of container. When the application needs to be upgraded, we can simply replace the JAR file and restart the process. This update process is a bit harsh; we don't use the facilities provided by OSGi to update parts of the application by updating only the changed bundles. If we have a large number of machines/devices to update, it also requires some manual work or scripting, because we have to update all those machines.
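
For a sense of what such an executable JAR does internally, here is a minimal launcher sketch using the standard OSGi launch API. The launcher Bnd generates is more sophisticated, and the bundle paths below are made-up examples.

import java.util.HashMap;
import java.util.ServiceLoader;

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class Launcher {
    public static void main(String[] args) throws Exception {
        // Find a framework implementation (e.g. Apache Felix) on the classpath
        FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class).iterator().next();
        Framework framework = factory.newFramework(new HashMap<String, String>());
        framework.start();

        // Install all packaged bundles first, then start them,
        // so the resolver can wire their package imports
        BundleContext context = framework.getBundleContext();
        Bundle api = context.installBundle("file:bundle/myapp.api.jar");
        Bundle impl = context.installBundle("file:bundle/myapp.impl.jar");
        api.start();
        impl.start();

        // Block until the framework is stopped
        framework.waitForStop(0);
    }
}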


Let’s explore options to make this process a bit more flexible.

Provisioning using Apache ACE

This is the deployment mechanism we use ourselves at Luminis Technologies. In this approach we don't manually install our application on a machine or container. Instead we use a provisioning server as a broker. When a new release of our software is ready, the bundles are uploaded to Apache ACE. The bundles can be grouped into features and distributions. This is a powerful way to create variations of distributions based on a common set of bundles.

As long as we don't register any targets, our bundles just sit on Apache ACE, and our application is not running yet. To actually run the app, we need to start and register a target. A target is an OSGi framework with the Apache ACE management agent bundle installed. Based on configuration we pass to the agent, it registers itself with the Apache ACE server. Apache ACE will then send the distribution prepared for this target to the target. The target receives the deployment package, starts the bundles and will be up and running. The target itself can again be started as a self contained executable JAR file, and run everywhere Java is installed (including embedded devices etc.).
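
Starting such a target then looks something like the command below. The agent property names are an assumption on my part based on the Apache ACE agent documentation; check the docs for the version you use.

java -Dagent.identification.agentid=node-1 \
     -Dagent.discovery.serverurls=http://ace.example.com:8080 \
     -jar target.jar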


Why add this extra complexity to deployments? There are a number of benefits compared to simply running self contained JARs:
  • Incremental updates
  • Distribution management
  • Automatic updates of sets of targets

When new bundles are uploaded to Apache ACE, the targets that use these bundles can automatically be updated. The deployment package sent to the targets only contains the updated bundles, and the updates are installed while the target is running; the target never has to be restarted. This also makes it easy to update large numbers of targets that all run the same software. We use this to update large numbers of cluster nodes running on Amazon EC2, but the same mechanism works great for the embedded/IoT world where a large number of devices requires an update. This is even more useful when there are variations of distributions used by the targets. Instead of rebuilding each distribution manually, updates are automatically deployed by Apache ACE to the relevant distributions.

You could create the same mechanism using some scripting. You might make distributions available in some central location and use scripts to push those distributions to targets. Although this is not rocket science, it's still quite some work to actually get working, especially when incremental updates are required (for example in low bandwidth situations).

From the perspective of targets both solutions are pretty much equal; applications are started as a process and should be managed and monitored as such.


Configuration and Monitoring without a container

When deploying applications as processes we need some way to configure and monitor the application. For configuration we already have all the required tools built into OSGi: Configuration Admin. Using Config Admin we can easily load configuration from any place; property files, a REST interface, a database… This opens up endless possibilities to keep configuration data separate from the software release itself. In the PulseOn project we deploy application nodes to Amazon EC2. EC2 has the concept of user data; basically arbitrary configuration data we can specify when configuring machines. The user data is made available on a REST interface only accessible by the machine itself. This data is loaded and pushed to Config Admin, which configures our components.
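
A minimal sketch of the receiving side, using the standard Configuration Admin ManagedService interface; the PID and the property name are example values.

import java.util.Dictionary;

import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

// Registered as a ManagedService with a "service.pid" service property,
// e.g. "org.example.myapp" (example PID). Config Admin calls updated()
// no matter where the configuration came from: a property file,
// EC2 user data, Apache ACE, etc.
public class MyComponent implements ManagedService {

    private volatile String databaseUrl;

    @Override
    public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
        if (properties == null) {
            return; // no configuration available (yet)
        }
        databaseUrl = (String) properties.get("database.url"); // example property
    }
}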

What about monitoring? An application server often has functionality to view log files, sometimes combined for a cluster of nodes. By itself this is not very useful at all. Does it make sense to just look through log files? What we need are mechanisms to actively report problems, actively check the health of nodes, and analyse log files in smart ways. We don't really care if the application process is running or not; it only matters whether the application serves client requests correctly. There are plenty of great tools available to centralise log analysis, and active monitoring and reporting should be part of our application services.

A nice example from the PulseOn project again is our health check mechanism. Each OSGi service in our application can implement a health check interface, through which the service itself reports whether it's healthy. Our load balancers use these health checks to decide if a node is healthy; when a node is unhealthy the cluster replaces it.
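
The contract can be as small as a single method. A sketch; the actual PulseOn interface may differ:

// Illustrative sketch; the real PulseOn health check interface may differ.
public interface HealthCheck {

    // Return true if the service considers itself healthy.
    boolean isHealthy();
}

A monitoring endpoint can then look up all registered HealthCheck services and aggregate the results for the load balancer.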

OSGi app servers

I hope I have made my point by now that an application server or container deployment model is really not necessary any more today. Still there are lots of users deploying OSGi bundles to containers, so let's discuss this further. One popular container to use for OSGi is Apache Karaf. Apache Karaf is basically an application server focussed on OSGi. Using Karaf it's easy to deploy multiple applications in the same container. It also comes with a bunch of pre-installed features to more easily work with technology that is not primarily designed to be used in a modular environment. While this is great when depending on these technologies, you should probably ask yourself if it's such a good idea to use non-modular frameworks in a modular architecture in the first place… Frameworks and components designed to be used with OSGi, such as the components from the Amdatu project, don't require any tricks to use. In the long term this will keep your architecture a lot cleaner.

Other users deploy OSGi applications to Java EE app servers like WebSphere or Wildfly/EAP. The main benefit is integration with Java EE technology, bridging the dynamic OSGi world with the static, but familiar Java EE world. This is a recipe for disaster. Although you can easily use things like JPA and EJB, it breaks all concepts of service dynamics. More importantly, you really don't need to do this. Tools for dependency injection, creating RESTful web services and working with data stores are available in a much more OSGi-natural way, so why stay in the non-modular world with one leg and lose a lot of OSGi's benefits?

Sunday, April 27, 2014

Ten reasons to use OSGi

In this post I will discuss ten reasons to use OSGi. The motivation for this post is that there are many misconceptions about OSGi. At Luminis Technologies we use OSGi for all our development, and are investing in OSGi related open source projects. We do so because we think it's the best available development stack, and here are some reasons why.

#1 Developer productivity

One of OSGi's core features is that it can update bundles in a running framework without restarting the whole framework. Combined with tooling like Bndtools this brings an extremely fast development cycle, similar to scripting languages like JavaScript and Ruby. When a file is saved in Bndtools, the incremental compiler of Eclipse will build the affected classes. After compilation Bndtools will automatically rebuild the affected bundles, and re-install those bundles in the running framework. It's not only fast, but also reliable; this mechanism is native to OSGi, and no tricks are required.

Compare this to doing Maven builds and WAR deployments in an app-server... This is the development speed of scripting languages combined with the type safety and runtime performance of Java. It's hard to beat that combination.


#2 Never a ClassNotFoundException

Each bundle in OSGi has its own class loader. This class loader can only load classes from the bundle itself, and classes explicitly imported by the bundle using the Import-Package manifest header. When an imported package is not available in the framework (i.e. not exported by another bundle), the bundle will not resolve, and the framework will tell you when the bundle is started. This fail-fast mechanism is much better than runtime ClassNotFoundExceptions, because the framework makes you aware of deployment issues right away instead of when a user hits a certain code path at runtime.

Creating Import-Package headers is easy and automatic. Bnd (either in Bndtools or Maven) generates the correct headers at build time by inspecting the byte-code of the bundle. All used classes that are not part of the bundle must be imported. By letting the tools do the heavy lifting, there's not really any way to get this wrong. The exception is dynamic class loading in the code (using Class.forName); luckily this is hardly ever necessary outside of JDBC drivers.
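
For example, a generated manifest might contain an Import-Package header like the following; the packages and version ranges are illustrative.

Import-Package: com.mongodb;version="[2.12,3)",
 javax.ws.rs;version="[1.1,2)",
 org.osgi.framework;version="[1.7,2)"

The version ranges are derived from the versions of the bundles on the build path, so the imports stay compatible with what the code was compiled against.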

The Import-Package mechanism does introduce a common problem when using libraries. The transitive dependency madness in Maven has made some developers unaware of the fact that some libraries pull in many, many, other dependencies. In OSGi this means those transitive dependencies must also be installed in the framework, and the resolver makes you immediately aware of that. While this makes it harder to use some libraries, you can argue this is actually a good thing. From an architectural perspective, do you really want to pull in 30 dependencies just because you want to use some library or framework? This might work well for a few libraries, but breaks sooner or later when there are version conflicts between dependencies. Automatically pulling in transitive dependencies is easy for developers, but dangerous in practice. 

#3 All the tools for modern (web) backends

Even more important than the language or core platform is the availability of mature components to develop actual applications. In the case of Luminis Technologies that's often everything related to creating a backend for modern web applications. There is a wealth of open source OSGi components available to help with this. The Amdatu project is a great place to look, as well as Apache Felix. Amdatu is a collection of OSGi components focussed on web/cloud applications. Examples are MongoDB integration, RESTful web services with JAX-RS and scheduling. 
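
As an impression of the programming model: with Amdatu's JAX-RS support a RESTful resource is a plain class with standard JAX-RS annotations, published as an OSGi service. This is a sketch; see the Amdatu documentation for the exact registration details.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Registered as an OSGi service; Amdatu's JAX-RS integration picks up
// @Path-annotated services and exposes them over HTTP (a sketch; see
// the Amdatu docs for the exact registration details).
@Path("greeting")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greeting() {
        return "Hello from an OSGi service";
    }
}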

It is strongly advisable to stay close to the OSGi eco-system when selecting frameworks. Not all frameworks are designed with modularity in mind, and trying to use such frameworks in a modular environment is painful. This is an actual downside of OSGi; your choice of Java frameworks is somewhat limited by their compatibility with OSGi. This might require you to leave behind some of the framework knowledge that you already have, and learn something new. Besides the investment of learning something new, nothing is lost. There are so many framework alternatives; do you really need that specific framework even though it's not fit for modular development?

In practice we most commonly hear questions about either using OSGi in combination with Java EE or Spring. As a heavy user of both in the past, I'm pretty confident to say that you don't need either of them. Dependency injection is available with Apache Felix Dependency Manager, Declarative Services and others, and I already mentioned Amdatu as a place to look for components to build applications. 

#4 It's fast

OSGi has close to zero runtime overhead. Invocations to OSGi services are direct method calls and no proxy magic is required. Remember that OSGi was originally designed to run embedded on small devices; it's extremely lightweight by design. From a deployment perspective it's fast as well. Although there are app-servers with OSGi support, we prefer to deploy our apps as bare bones Apache Felix instances. This way nothing is included that we don't need, which drastically improves startup speed of applications. Thought that a few seconds of startup time for an app-server was impressive? That's what an OSGi framework does on a Raspberry Pi ;-)

#5 Long term maintainability

This should probably be the key reason to use OSGi; modularity as an architectural principle. Modularity is key to maintainable code; by splitting up a code base into small modules it's much easier to reason about changes to the code. This is about the basic principles of separation of concerns and low coupling/high cohesion. These principles can be applied without a modular runtime as well, but then it's much easier to make mistakes because the runtime doesn't enforce module boundaries. A modular code base without a modular runtime is much more prone to "code rot": small design flaws that break modularity. Ultimately this leads to unmaintainable code.

Of course OSGi is no silver bullet either. It's very well possible to create a completely unmaintainable code base with OSGi as well. However, when we adhere to basic OSGi design principles, it's much easier to do the right thing.

Another really nice property of a modular code base is that it's easy to throw code away. Given new insights and experience it's sometimes best to just throw away some code and re-implement it from scratch. When this is isolated to a module it's extremely easy to do; just throw away the old bundle and add a new one. Again, this can be done without a modular runtime as well, but OSGi makes it a lot more realistic in practice.

#6 Re-usability of software components

A side effect of a modular architecture is that it becomes easier to re-use components in a different context. The most important reason for this is that a modular architecture forces you to isolate code into small modules. A module should only have a single responsibility, and it becomes easy to spot when a module does too much. When a module is small, it's inherently easy to re-use. 

Many of the Amdatu components are developed exactly that way. In our projects we create modules to solve technical problems. When we have other projects requiring a similar component, we share these implementations cross-project. If the components prove to be usable and flexible enough, we open source them into Amdatu. In most cases this requires very limited extra work.

This has benefits within a single project context as well. When the code base is separated into many small modules, it becomes easier to make drastic changes to the architecture, while still re-using most of the existing code. This makes the architecture more flexible as well, which is a very powerful tool.

#7 Flexible deployments

OSGi can run everywhere, from large server clusters to small embedded devices. Depending on the exact needs there are many deployment options to choose from. Using Bndtools or Gradle it's easy to export a complete OSGi application to a single JAR that can be started by simply running "java -jar myapp.jar". In deployments with many servers (as is the case in many of our own deployments) we can use Apache ACE as a provisioning server. Instead of managing servers manually, software updates are distributed to servers automatically from the central provisioning server. The same mechanism works when we're not working with server clusters but with many small devices, for example.

The flexibility of deployments also implies that OSGi can be used for any type of application. We can use the same concepts when working on large scale web applications, embedded devices or desktop applications.

OSGi can even be embedded into other deployment types easily. There are many products that use OSGi to create plugin systems, while the application is deployed in a standard Servlet container. Although I wouldn't advise this for normal OSGi development, it does show how flexible OSGi is for deployments.

Also check out this video to learn more about Apache ACE deployments.


#8 It's dynamic

Code in OSGi is implemented using services. OSGi services are dynamic, meaning that they can come and go at runtime. This allows a running framework to adapt to new configuration, new or updated bundles and hot deployments. Basically, we never need to restart an application. I recently blogged about this in more detail in the post "Why OSGi Service dynamics are useful".

#9 Standardized configuration

One of the OSGi specifications that I find most useful is Configuration Admin. This specification defines a Java API to configure OSGi services. On top of this API there are many components that load configuration from various places, such as property files, XML files, a database, provisioned from Apache ACE or loaded from AWS User Data. The great thing is that your code doesn't care where configuration comes from; it just needs to implement a single method to receive configuration. Although it's hard to understand why Java itself still doesn't have a proper configuration mechanism, Configuration Admin is extremely useful because almost every application needs configuration.

#10 It's easy

This might be the most controversial point in this post. Unfortunately OSGi isn't immediately associated with "easy" by most developers. This is mostly caused by developers trying to use OSGi in existing applications, where modularity is an afterthought. Making something non-modular into something modular is challenging, and OSGi doesn't magically do this either. However, when modularity is a core design principle and OSGi is combined with the right tooling, there's nothing difficult about it. 

There are plenty of resources to learn OSGi as well. Of course there is the book written by Bert Ertman and me, and there are a lot of video tutorials available recorded at various conferences where we speak.

Finally, when trying out OSGi, try it with a full OSGi stack, for example as described in our book or on the Amdatu website. Don't try to fit your existing stack into OSGi as a first step (which is actually advice that applies to learning almost any new technology).




Tuesday, April 22, 2014

Micro Services vs OSGi services

Recently the topic of Micro Services has been getting a lot of attention. The OSGi world has been talking about micro services for a long time already. Micro services in OSGi are often written as µServices, which I will use in the remainder of this post to separate the two concepts. Although there is a lot of similarity between µServices in OSGi and the Micro Services that recently became popular, they are not the same. Let's first explore what OSGi µServices are.

OSGi µServices

OSGi services are the core concept that you use to create modular code bases. At the lowest layer OSGi is about class loading; each module (bundle) has its own class loader. A bundle defines external dependencies using the Import-Package manifest header. Only packages which are explicitly exported can be used by other bundles. This layer of modularity makes sure that only API classes are shared between bundles, and implementation classes are strictly hidden.
This also imposes a problem however. Let's say we have an interface "GreeterService" and an implementation "GreeterServiceImpl". Both the API and implementation are part of bundle "greeter", which exports the API, but hides the implementation. Now we take a second bundle, "conversation", that wants to use the GreeterService interface. Because this bundle can't see the GreeterServiceImpl, it would be impossible to write the following code:

GreeterService greeter = new GreeterServiceImpl();

This is obviously a good thing, because this code would couple our "conversation" bundle directly to an implementation class of "greeter", which is exactly what we're trying to avoid in a modular system. OSGi offers a solution for this problem with the service layer, which we will look at next. As a side note, this also means that when someone claims to have a modular code base, but doesn't use services, there is a pretty good chance that the code is not so modular after all....

In OSGi this problem is solved by the Service Registry, which is part of the OSGi framework. A bundle can register a service in the registry; this registers an instance of an implementation class under its interface. Other bundles can then consume the service by looking it up by its interface. The consuming bundle uses the service through its interface, but doesn't have to know which implementation is used, or who provided the implementation. In its essence the services model is not very different from dependency injection with frameworks such as CDI, Spring or Guice, with the difference that the services model builds on top of the module layer to guarantee module boundaries.
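
In terms of the raw framework API this looks as follows, assuming a BundleContext named context is available; most applications use a dependency injection framework on top of this instead, and the sayHello() method is a made-up example.

// In the "greeter" bundle: register the implementation under its interface.
context.registerService(GreeterService.class, new GreeterServiceImpl(), null);

// In the "conversation" bundle: look the service up by its interface.
ServiceReference<GreeterService> ref = context.getServiceReference(GreeterService.class);
GreeterService greeter = context.getService(ref);
greeter.sayHello(); // made-up method on the GreeterService interface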



OSGi services are often called micro services, or µServices. This makes sense because they are "lightweight" services. Although there is the clear model of service providers and service consumers, the whole process works within a single JVM with close to zero overhead. There is no proxying required, so in the end a service call is just a direct method call. As a best practice a service does only a single thing. This way services are easy to replace and easy to reuse. These are also the immediate benefits of a services model; they promote separation of concerns, which in the end is key to maintainable code.

Comparing with Micro Services

So how does this relate to the Micro Services model that recently got a lot of attention? The obvious difference is that OSGi services live in a single JVM, and the Micro Services model is about completely separate deployments, possibly using many different technologies. The main advantages of such a model go back to the general advantages of a modular system:


  1. Easier to maintain: Unrelated code is strictly isolated from each other. This makes it easier to understand and maintain code, because you don't have to worry too much about code outside of the service.
  2. Easier to replace: Because services are small, it's also easy to simply throw a service away and re-implement it if/when requirements change. All you care about is the service interface, the implementation is replaceable. This is an incredibly powerful tool, and will prevent "duct taping" of code in the longer term.
  3. Re-usability: Services do only a single thing, and can be easily used in new scenarios because of that. This goes both for re-usability in different projects/systems when it's about technical components, or re-usability of functional components within a system.


Do these benefits look familiar when thinking about SOA? In recent years not much good has been said about SOA, because we generally associate it with bloated tools and WSDLs forged in the deepest pits of hell. I loathe these tools as well, but we should remember that this is just one (very bad) implementation of SOA. SOA itself is about architecture, and basically describes a modular system. So Micro Services is SOA, just without the crap vendors have been trying to sell us.

Micro Services follow the same concept, but on a different scale. µServices are in-VM, Micro Services are not. So let's compare some benefits and downsides of both approaches.

Advantages of Services within a JVM

One advantage of in-VM services is that there is no runtime overhead; service calls are direct method calls. Compared to the overhead of network calls, this is a huge difference. Another advantage is that the programming model is considerably simpler. Orchestrating communication between many remote services often requires an asynchronous programming model and the sending of messages. No rocket science at all, but more complicated than simple method calls.
The last and possibly most important advantage is ease of deployment. An OSGi application containing many services can be deployed as a single deployment, either in a load-balanced cluster or on a single machine. Deploying a system based on Micro Services requires significant work on the DevOps side of things. This doesn't just include automation of deployments (which is relatively easy), but also making sure that all required services are available in the right version to make the whole system work.


Advantages of Micro Services

The added complexity in deployments also offers more flexibility. I believe the most important point about Micro Services is that services have their own life-cycle. Different teams can independently work on different services. They can deploy new versions independently of other teams (yes, this requires communication...), and services can be implemented with the tools and technology that are optimal for that specific service. Also, it is easier to load balance Micro Services, because we can potentially horizontally scale a single service instead of the whole system.

This brings the question back to the scale of the system and the team. When only a single team (say, at most 10 developers) works on a system, the advantages of Micro Services compared to µServices don't seem to outweigh the costs. When there are multiple teams working on the same system, this might be a different story. In that case it could also be an option to mix and match both approaches. Instead of going fully Micro Service, we could break up an already modular system into different deployments and have the benefits of both. Of course, this adds new challenges and requirements; for starters, we need a remoting/messaging layer on top of services, and we might need to modify the granularity of services.

This article was mostly written as a clarification of the differences between µServices and Micro Services. I'm a strong believer in the power of separated services. From my experience building large scale OSGi applications, I also know that many of the benefits of modularity can be achieved without the added complexity of a full Micro Service approach. Ultimately I think a mixed approach would work best on a larger scale, but that's just my personal view on the current state of technology.

Sunday, March 23, 2014

Upcoming conferences

This year is already proving to be an interesting one for conferences. This week I will start with a talk at JavaLand, a brand new conference in Germany. Together with Sander Mak I will talk about Modular JavaScript. We will show options to modularize a JavaScript code base. We will discuss module systems, see a lot of RequireJS, talk about dependency injection and services and show real world best practices.

Next will be DevNation, another conference that I'm really looking forward to. DevNation is also a new conference, with an amazing speaker line-up. I will be speaking about OSGi with a practical introduction to modular development. There will be a lot of live coding in this talk so that you get a good impression of OSGi development in practice. Along the way you will learn about bundles, imports/exports, OSGi services and their dynamics, and see practical topics such as integration testing and creating modern web applications.



Last but not least there will be GeeCon at the beginning of May, where Sander and I will be speaking about Modular JavaScript again. It's exciting to see how JavaScript is becoming an increasingly important part of the Java developer's tool stack. GeeCon was great last year, so my expectations are high for this year as well.

Sunday, November 3, 2013

Visualizing OSGi services with D3.js

Because I couldn't resist writing a bit of code during my vacation I started playing with D3.js, a data visualization library for JavaScript, and used it to visualize dependencies between OSGi services. If you are already familiar with designing systems based on OSGi services you might want to take a look at the video directly. If you need a little more introduction, continue reading.



We are using OSGi services for pretty much everything; all our code lives within services. Services are the primary tool when implementing a modular architecture. Just using services doesn't make your code modular by itself, however; a lot of thought has to go into the design of service interfaces and the dependencies between services.

In a services based architecture it's obviously a good thing to re-use existing services when implementing new functionality. At the same time it's important to be careful about creating too many outgoing dependencies. If everything depends on everything else, it becomes very difficult to make any changes to the code (also known as "not modular"...). When implementing a system you will start to notice that some services are re-used by many other services. Although not an official name at all, I often call these core services: services that play a central role in your system. These services must be kept stable, because major changes to their interfaces require a lot of work. Outgoing dependencies from these services should also be used with care; to guarantee the stability of the service, it's good to keep the number of dependencies low. This prevents a ripple effect of changes when touching something. In practice, a core service should only depend on other services which are very stable. Do not depend on services that are likely to change often.

Many other services might not be used by any other services at all, for example a scheduled job or a RESTful web service that only serves a specific part of a user interface. These services can easily be replaced or even discarded when no longer needed. In an agile environment this happens all the time. For these services it's not really a problem to depend on other services, especially not on the core services of the system.

If your architecture is sound, you probably have a very clear idea about which services are your core services. Still, it's useful to actually visualize this to identify any services that have more incoming dependencies than you expected, or at least see which other services have a dependency on a certain service. And that's exactly what I did for this experiment.

We use Apache Felix Dependency Manager to create services and the dependencies between them. Because of this I used the Apache Felix Dependency Manager API to create the list of dependencies between services. Note that this will not show services that are not created by Dependency Manager. The visual graph itself is created by D3.js, based on this example.
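
For reference, declaring a service and a dependency with Dependency Manager looks roughly like this, reusing the GreeterService example from the Micro Services post above (a minimal sketch):

import org.apache.felix.dm.DependencyActivatorBase;
import org.apache.felix.dm.DependencyManager;
import org.osgi.framework.BundleContext;
import org.osgi.service.log.LogService;

// Declares GreeterServiceImpl as a GreeterService with a required
// dependency on LogService. Dependency Manager records this dependency,
// which is exactly the information the visualization is built from.
public class Activator extends DependencyActivatorBase {

    @Override
    public void init(BundleContext context, DependencyManager manager) throws Exception {
        manager.add(createComponent()
            .setInterface(GreeterService.class.getName(), null)
            .setImplementation(GreeterServiceImpl.class)
            .add(createServiceDependency()
                .setService(LogService.class)
                .setRequired(true)));
    }

    @Override
    public void destroy(BundleContext context, DependencyManager manager) throws Exception {
        // components added in init() are removed automatically
    }
}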

The code is available on BitBucket: https://bitbucket.org/paul_bakker/dependency-graph/overview