Microservices Architecture in Java

What is Microservices in Java?

Microservices in Java is an architectural style that separates an application into smaller services, allowing developers to create, test, and release individual features and functionality independently, which saves time and improves efficiency. Additionally, teams can work on different microservices simultaneously for faster development.

More developers are moving from a monolith to a microservices architecture to support increasingly complex software structures. In Java development, there are various frameworks available for working with microservices, including Spring Boot, Dropwizard, Micronaut, and Spark. Because the technique is relatively new, however, real-world use of microservices remains somewhat of a mystery.

Java Microservices Architectures

The JRebel team at Perforce recently conducted a survey to get firsthand accounts of microservices adoption and how they are being used specifically within the Java environment.

We asked a group of professionals using microservices and Java in their main project what their microservices architectures look like. Here are some highlights of what we found:

Spring is the framework of choice, cited in 86% of responses.

Docker has a large footprint among microservices users, appearing in 61% of responses.

22% are using more than 26 microservices in their main application.

More than half of developers experience redeploy times of one minute or more.

Now we finally turn to our microservices-based example, in which each independent server performs a certain business function. Here’s how a microservices architecture can help us save the day.


Step 1. Split it

We split our application into microservices and got a set of units that are completely independent in deployment and maintenance. In our application, two user profile servers, three order servers, and a notification server perform the corresponding business functions. Each microservice is responsible for a certain business function and communicates with the others via either synchronous HTTP/REST or asynchronous AMQP protocols.
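To make this concrete, a single microservice can be as small as one Spring Boot application exposing a REST endpoint for its business function. The sketch below is illustrative only; the class name, path, and return value are assumptions rather than code from our project.

```java
// Hypothetical REST endpoint on one of the order servers (Spring Boot / Spring MVC).
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    // The gateway and other services reach the order function over plain HTTP/REST.
    @GetMapping("/orders/{id}")
    public String getOrder(@PathVariable String id) {
        return "Order " + id; // in a real service this would return order data from a data store
    }
}
```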

Step 2. Pave the ways

Splitting is only the starting point of building a microservice-oriented architecture. To make the system a success, it is even more important, and even more difficult, to ensure seamless communication between the newly created distributed components.

First of all, we implemented a gateway. The gateway became the entry point for all client requests. It takes care of authentication and security checks, routes each request to the right server, and modifies or rejects requests when necessary. It also receives the replies from the servers and returns them to the client. The gateway service frees the client side from storing the addresses of all the servers and makes those servers independently deployable and scalable. We also chose the Zuul 2 framework for our gateway service so that the application could leverage the benefits of non-blocking HTTP calls.
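For illustration, here is a minimal gateway sketch. It uses the Spring Cloud Netflix Zuul starter (which is based on Zuul 1) rather than the Zuul 2 setup described above, simply because it is the shortest way to show the routing idea; the route names and service IDs are assumptions.

```java
// Minimal gateway sketch using Spring Cloud Netflix Zuul (Zuul 1 based), not the Zuul 2 setup above.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy // turns this Spring Boot app into a routing gateway
public class GatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}

// Example routes in application.yml (service IDs are illustrative):
// zuul:
//   routes:
//     users:
//       path: /users/**
//       serviceId: user-profile-service
//     orders:
//       path: /orders/**
//       serviceId: order-service
```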

Secondly, we implemented the Eureka server as our service discovery mechanism. It keeps a list of the running user profile and order servers and helps them discover each other. We also backed it up with the Ribbon load balancer to ensure optimal use of the scaled user profile servers.
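As a rough sketch of the client side, assuming the Spring Cloud Netflix Eureka client and Ribbon are on the classpath (the discovery server itself is just a Spring Boot application annotated with @EnableEurekaServer), a load-balanced RestTemplate lets a service call another by its logical name; the service name used here is an assumption.

```java
// Hypothetical client-side wiring for Eureka + Ribbon.
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class LoadBalancedClientConfig {

    // Ribbon intercepts calls made through this RestTemplate and resolves the
    // logical service name registered in Eureka to one of the running instances.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// Example call from application code (no hard-coded host or port):
// restTemplate.getForObject("http://user-profile-service/users/42", String.class);
```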

The introduction of the Hystrix library helped ensure stronger fault tolerance and responsiveness in our system by isolating the point of access to a crashed server. It prevents requests from going into the void when a server is down and gives an overloaded server time to recover and resume its work.
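A hedged sketch of how such an isolated call might look with Spring Cloud Netflix Hystrix, assuming @EnableCircuitBreaker is configured on the application; the method, service, and fallback names are assumptions.

```java
// Illustrative Hystrix-wrapped call with a fallback.
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class UserProfileClient {

    private final RestTemplate restTemplate;

    public UserProfileClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // If the user profile server is down or too slow, Hystrix trips the circuit
    // and sends callers to the fallback instead of letting requests pile up.
    @HystrixCommand(fallbackMethod = "defaultProfile")
    public String getProfile(String userId) {
        return restTemplate.getForObject("http://user-profile-service/users/" + userId, String.class);
    }

    public String defaultProfile(String userId) {
        return "{\"id\":\"" + userId + "\",\"status\":\"temporarily unavailable\"}";
    }
}
```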

We also have a message broker (RabbitMQ) as an intermediary between the notification server and the rest of the servers to allow asynchronous messaging between them. As a result, we don’t have to wait for a positive response from the notification server to proceed with work, and email notifications are sent independently.
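The asynchronous flow could look roughly like the sketch below, using Spring AMQP; the exchange, routing key, and queue names are assumptions, and the queue and exchange declarations are omitted for brevity.

```java
// Illustrative fire-and-forget messaging over RabbitMQ with Spring AMQP.
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderEventPublisher {

    private final RabbitTemplate rabbitTemplate;

    public OrderEventPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // The order server publishes the event and moves on; it never waits for the notification server.
    public void orderPlaced(String orderId) {
        rabbitTemplate.convertAndSend("notifications", "order.placed", orderId);
    }
}

// On the notification server: consume the event whenever it arrives and send the email.
@Service
class OrderNotificationListener {

    @RabbitListener(queues = "order-notifications")
    public void onOrderPlaced(String orderId) {
        // send the email notification for this order here
    }
}
```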

Let’s now review the problems we could come across with the monolith and see what happens to them in our microservices-based application:

Complete shutdown

If one server slows down because of overload, or crashes completely, life won’t stop and, often, the user won’t even notice anything breaking. The system will either re-route requests to the server’s substitutes (as we have two user profile servers and three order servers) or proceed with its work and resume the function as soon as the server recovers (in the case of our notification server crashing). Maybe the client won’t get notifications right away, but at least they won’t have to stare at a pre-loader for ages.

Complicated updates

Now we can easily update what we need. As the units are completely independent, we simply rewrite the relevant servers to add new features (a recommendation engine, fraud detection, etc.). In our example, to introduce an IP tracker and report suspicious behavior (as Gmail does), we create a fraud detection server and slightly modify our user profile servers, while the rest of the servers stay safely intact.
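Purely as an illustration, the new fraud detection server could start as small as a single endpoint that the user profile servers call on every login; the endpoint, parameters, and logic below are invented for the example.

```java
// Hypothetical fraud detection server endpoint (Spring Boot / Spring MVC).
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FraudDetectionController {

    // User profile servers report each login's IP address here; the rest of the system stays untouched.
    @PostMapping("/logins")
    public boolean reportLogin(@RequestParam String userId, @RequestParam String ip) {
        // compare the IP against addresses previously seen for this user and flag suspicious ones
        return false; // placeholder: nothing is flagged in this sketch
    }
}
```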

Frustrating UX

The loosely coupled nature of our microservices architecture and its potential for scaling allow us to tackle incidents with minimal negative effect on user experience. For example, when we see that some of our core features run slow, we can scale up the number of servers handling them (as we did from the start with the user profile and order servers) or let them run a little slow for a while if the features are not vital (as we did with notifications). In some other cases, it makes sense to skip them altogether at peak times (as happens with pop-ups that, for a while, show only a textual description with no image included).

 
