Leaner Microservices at SadaPay using GraalVM

What is GraalVM?🤔

Java applications have historically been distributed as platform-independent bytecode, which is compiled to the host machine’s native instructions by the JVM at runtime. However, there’s a new player in town! GraalVM – a high-performance JDK that compiles Java applications ahead of time (AOT) into standalone binaries. This means that the compilation to native instructions happens before the application is run, rather than during runtime. Pretty cool, right?
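As a rough illustration, here is what ahead-of-time compilation looks like with GraalVM’s native-image tool (this assumes a GraalVM JDK with native-image installed, and a hypothetical HelloWorld class):

```shell
# Compile the class to bytecode as usual
javac HelloWorld.java

# Compile the bytecode ahead of time into a standalone native binary
native-image HelloWorld

# Run the binary directly — no JVM needed at runtime
./helloworld
```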

This has several benefits such as faster startup times and reduced memory usage. Additionally, GraalVM supports several programming languages such as JavaScript, Ruby, Python, and R, with the ability to interoperate between them. So, while Java applications are still widely distributed as bytecode, GraalVM presents a new and exciting option for those of us seeking faster and more efficient ways of running our applications.

If you are interested in knowing more about GraalVM as a technology, head over to the official GraalVM documentation.

Why is GraalVM a good fit for SadaPay and you?

At SadaPay, we rely heavily on Java Virtual Machines (JVMs) for our microservices. However, as we continue to expand, grow, and create new microservices, we have encountered several challenges related to high memory usage and resource constraints leading to high infra costs.

In monolithic architectures, the cost of shipping the JVM is a one-time expense. You can simply include the JVM with your application and you’re good to go. However, in distributed architectures like microservices, this cost can quickly scale linearly with the number of microservices you have. So while you gain benefits such as team autonomy and scalability, you lose big on the infrastructure side due to increased deployment and maintenance costs.

Although it is a relatively new technology, we at SadaPay are no strangers to being on the cutting edge. We are currently adopting GraalVM for our new microservices to improve their performance and efficiency, while reducing their memory footprint, startup times, and our infrastructure costs! Is this too good to be true? Well, not at all, but there are some catches.

In this article, we’ll delve deep into understanding some of the challenges ⚔️ we faced, how we overcame them, and what benefits we realized as a result. So let’s get started! 

🚧Challenges We Faced In Migrating to Native Image🚧

At SadaPay, we maintain a Spring Kotlin Template Service to improve developer productivity and reduce the time and effort required to create new microservices. This template is a GitHub repository that provides all the default boilerplate code and infrastructure setup required to get your service up and running.

Initially, we wanted to compare the performance and memory footprint of JVM and GraalVM executables using our template service, to understand what GraalVM brings to the table.

We created a new service called jvm-service using the traditional JVM approach and deployed it to our staging environment without any issues. However, modifying the template to make it compatible with GraalVM and creating a competing service called graalvm-service proved to be a challenge. We had to migrate the entire template from Spring Boot 2.7.x to Spring Boot 3.x.x, as native image support in Spring was added in version 3, along with a minimum requirement of JDK 17. Although migrating to the new version of Spring was relatively smooth, we encountered difficulties when adapting our internal reflection-based API response validation approach to be compatible with native images.
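For reference, enabling native image builds in Spring Boot 3 mainly comes down to the Gradle setup — a minimal sketch (the plugin versions shown are illustrative, not our exact ones):

```kotlin
// build.gradle.kts
plugins {
    id("org.springframework.boot") version "3.0.5"
    id("io.spring.dependency-management") version "1.1.0"
    // Adds the nativeCompile task for GraalVM native image builds
    id("org.graalvm.buildtools.native") version "0.9.20"
    kotlin("jvm") version "1.8.10"
    kotlin("plugin.spring") version "1.8.10"
}

java {
    toolchain {
        // Spring Boot 3 (and native image support) requires JDK 17+
        languageVersion.set(JavaLanguageVersion.of(17))
    }
}
```

With this in place, `./gradlew nativeCompile` produces the native executable.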

We spent a good deal of time researching ways to discover and register reflection-based strategies at compile time. To address this issue, we utilized Spring’s programmatic AOP management API and developed a workaround by registering a pointcut advisor bean at compile time and writing a custom implementation of the method interceptor interface.
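As a sketch of that pattern — registering a pointcut advisor bean with a custom MethodInterceptor so the proxying is visible to Spring at build time rather than discovered via runtime reflection — assuming a hypothetical `@ValidatedResponse` annotation (the annotation name and validation logic are illustrative, not our actual internals):

```kotlin
import org.aopalliance.intercept.MethodInterceptor
import org.aopalliance.intercept.MethodInvocation
import org.springframework.aop.support.DefaultPointcutAdvisor
import org.springframework.aop.support.annotation.AnnotationMatchingPointcut
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

// Hypothetical marker annotation for methods whose responses we validate
@Target(AnnotationTarget.FUNCTION)
annotation class ValidatedResponse

class ResponseValidationInterceptor : MethodInterceptor {
    override fun invoke(invocation: MethodInvocation): Any? {
        val result = invocation.proceed()
        // validate(result) — validation logic omitted for brevity
        return result
    }
}

@Configuration
class ResponseValidationConfig {
    // Declaring the advisor as a bean lets Spring's AOT engine see the
    // advice and pointcut at build time, which native image requires
    @Bean
    fun responseValidationAdvisor() = DefaultPointcutAdvisor(
        AnnotationMatchingPointcut(null, ValidatedResponse::class.java),
        ResponseValidationInterceptor()
    )
}
```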

Another major obstacle we faced was the lack of support for dynamic beans in Spring native images, as many of our internal libraries relied on them. To address this, we developed a workaround that allowed us to register situational configurations and conditional beans before the application context is loaded. Additionally, we utilized an EnvironmentAware component to read the relevant application properties and then registered the appropriate beans through a BeanFactoryPostProcessor.
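A minimal sketch of that workaround, assuming a hypothetical property-gated client bean (the property key and class names are illustrative):

```kotlin
import org.springframework.beans.factory.config.BeanFactoryPostProcessor
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory
import org.springframework.beans.factory.support.BeanDefinitionBuilder
import org.springframework.beans.factory.support.BeanDefinitionRegistry
import org.springframework.context.EnvironmentAware
import org.springframework.core.env.Environment
import org.springframework.stereotype.Component

// Hypothetical bean that should only exist when a property is set
class FeatureClient(val endpoint: String)

@Component
class ConditionalBeanRegistrar : BeanFactoryPostProcessor, EnvironmentAware {
    private lateinit var environment: Environment

    override fun setEnvironment(environment: Environment) {
        this.environment = environment
    }

    override fun postProcessBeanFactory(beanFactory: ConfigurableListableBeanFactory) {
        // Read the property of interest and register the bean before the
        // application context finishes loading
        val endpoint = environment.getProperty("feature.client.endpoint") ?: return
        val registry = beanFactory as BeanDefinitionRegistry
        val definition = BeanDefinitionBuilder
            .genericBeanDefinition(FeatureClient::class.java) { FeatureClient(endpoint) }
            .beanDefinition
        registry.registerBeanDefinition("featureClient", definition)
    }
}
```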

However, this was not the end of our troubles. Soon, we encountered:

Log4j Compatibility Issues

At SadaPay, we primarily use Log4j as our logging implementation, and this is where we observed another of GraalVM’s limitations: its limited compatibility with some third-party libraries.

After creating the native executable, we noticed that the application couldn’t boot up because it wasn’t pairing well with the Log4j dependency 😞


To solve this problem, we decided to switch to a newer logging alternative that would work out-of-the-box with native images. While we did come across other logging options during our research, we ultimately settled on Logback, the successor to Log4j, which is also the logging implementation Spring defaults to. This fixed the initial problem, but we had to invest more time in configuring Logback to ensure consistent logging across all microservices and seamless integration with our log processor. For more information, see the official Logback configuration documentation.
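A minimal Logback configuration looks like the following — the pattern and appender here are illustrative, not our actual log-processor setup:

```xml
<!-- logback-spring.xml — picked up automatically by Spring Boot -->
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- Keep the pattern consistent across microservices so the
           log processor can parse every service the same way -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE" />
  </root>
</configuration>
```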

Architecture Compatibility Issue in Dockerization

When migrating to native image, we needed to update our CI/CD workflows to ensure that our native images were built and published correctly. However, we ran into an architecture compatibility issue: after building the native image, the application did not run successfully in our pod.


On investigating further, we found that our CD pipeline was running on a machine with one CPU architecture, while the base image in our Dockerfile targeted a different one.

Native executables are not cross-compatible as they are only runnable on the architecture they are built on.

This means we have to ensure that the machine our CI job runs on and the base image in our Dockerfile share the same architecture. Hence we pinned the same system architecture for both our base image and our CI/CD job runners. This got rid of the architecture compatibility issues, and our application ran successfully on the pod. 🚀
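A sketch of what pinning the architecture can look like (the base image, binary path, and choice of linux/amd64 are illustrative — the point is that the CI runner and the `--platform` value must match):

```dockerfile
# Build and run on the same architecture as the CI machine (amd64 here).
# The binary path matches the Gradle native build tools plugin's output.
FROM --platform=linux/amd64 debian:bookworm-slim
COPY build/native/nativeCompile/app /app
ENTRYPOINT ["/app"]
```

If the CI job runs on an x86_64 runner, the image must be linux/amd64 too; an arm64 runner would need an arm64 base image instead.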

Results/Benefits We Realised 🤩

After resolving all of these challenges, we were able to deploy the new graalvm-service to staging as well.

The results were as remarkable as we expected 🥳

Decreased Memory Footprint 🤯

As you can see from the images below, the graalvm-service used ~60% less memory than its JVM-based counterpart. 🤩

Faster Startup Time 🚀

In addition to consuming less memory, the native image also delivers faster service startup times, which means new pods can be up and running in milliseconds rather than seconds.

At SadaPay, we have recognized that Spring-based JVM applications can be unsuitable for serverless or Function as a Service (FaaS) environments due to their slow startup times. These environments require applications to be lightweight, fast, and efficient, which can be a challenge for traditional JVM-based applications. Developers looking to create faster and more efficient microservices suited for such environments can use GraalVM 🤩

Another strength of native image deployment is that it minimizes vulnerabilities. Let’s understand how it does that.

Minimize Vulnerability 👨🏽‍💻

As we strive to adopt GraalVM for our microservices at SadaPay, we must ensure that our applications remain secure. One of the ways we achieve this is by limiting the runtime behavior of our applications, through several measures:

- No new unknown code can be loaded at runtime, so there are no unexpected code paths that could be exploited.
- Only paths proven reachable by the application are included in the image, so only the necessary components ship, reducing the attack surface.
- Reflection is disabled by default and requires an explicit include list, so it cannot be used to gain unauthorized access.
- Deserialization is only enabled for a specified list of classes, which helps protect against deserialization vulnerabilities.

With these measures in place, we can rest assured that our microservices remain secure and protected against attacks such as just-in-time compiler crashes, miscompilations, or “JIT spraying” to create machine-code gadgets.
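In Spring Boot 3, the explicit include list for reflection can be expressed with a RuntimeHintsRegistrar — a minimal sketch, where the DTO is a hypothetical example rather than one of our actual classes:

```kotlin
import org.springframework.aot.hint.MemberCategory
import org.springframework.aot.hint.RuntimeHints
import org.springframework.aot.hint.RuntimeHintsRegistrar
import org.springframework.context.annotation.Configuration
import org.springframework.context.annotation.ImportRuntimeHints

// Hypothetical DTO that is accessed reflectively at runtime
data class PaymentResponse(val id: String, val status: String)

class PaymentHints : RuntimeHintsRegistrar {
    override fun registerHints(hints: RuntimeHints, classLoader: ClassLoader?) {
        // Explicitly allow reflection only for the classes that need it —
        // everything not listed here stays locked down in the native image
        hints.reflection().registerType(
            PaymentResponse::class.java,
            MemberCategory.INVOKE_DECLARED_CONSTRUCTORS,
            MemberCategory.INVOKE_DECLARED_METHODS
        )
    }
}

@Configuration
@ImportRuntimeHints(PaymentHints::class)
class NativeHintsConfig
```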

Compact Packaging 📦

Finally, we are down to one last improvement GraalVM provides: compact packaging. Native images are considerably smaller in size, which means our containers are also smaller and easier to deploy. Below you can see the sizes of containerized applications with the JVM and with a native image — the native image wins by a long way.

Conclusion 🤾🏻‍♂️

At SadaPay, we are in the final phase of adopting GraalVM in our tech stack and we hope to see significant improvements in our application’s performance.

As we continue to scale our operations and add more features to our platform, we plan to further optimize our microservices using GraalVM. We are excited to see what the future holds for this technology and how it can help us continue to deliver innovative financial solutions to our customers. 🏦

Thank you for reading until the end! We hope that our experience with implementing GraalVM for leaner microservices at SadaPay was helpful. 🎉

Stay tuned for more engineering blogs from us! 🚀