Maarten Vandeperre, Software Engineer

This year’s Devoxx was once again awesome. With all the commotion around microservices, I felt I had to go see Sam Newman’s talk “Security and Microservices” (amongst others). Newman is the author of Building Microservices (2015), a book aimed at helping developers fully grasp microservice architectural design.

His talk mentioned the four elements of continuous improvement when it comes to application security: prevention, detection, response and recovery. Here, I’d like to zoom in on some of the things Newman talked about when it comes to prevention, because it is the first step taken in the continuous process of application security; prevention can protect our application from most of the security threats it faces.

Application security is actually a hot topic within “the microservice movement”; nowadays almost everyone in the software business tries to implement a microservice architecture. The result is that APIs are exposed to the outside world. As a consequence, we need to rethink the security of APIs, but Newman points out the trap in this process: Overengineering.

The Overengineering Trap

Overthinking our security model more often than not wastes both money and energy. Newman illustrates this as follows: Imagine a building you want to protect from intruders. To keep them out, you could buy a fence with electric wires, install €10,000 IP cameras, and so on and so forth, but if, say, the lock on the front door is broken, the better choice is simply to buy a new lock. So rather than securing every little detail, you pick the most important details and secure those.

The thing is: How do we decide which details to focus on when securing an application? To guide you through these decisions, Newman uses the analogy of an attack tree. The example is based on a safe you want to protect: You have to analyse how unauthorized people could access the safe (e.g. by bribing people with authorization, by blowing it up, etc.). Once you’ve analysed all the possible paths an intruder might take to reach the safe, put them in a tree format.

Then give every subpath a weight reflecting how likely an intruder is to choose it. Basically, if the likelihood is extremely low, don’t bother looking into it. In the case of the safe, an intruder could try all the possible combinations, but if it would take a thousand years to try them all, it’s not a scenario you are going to protect your safe against.
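The weighting step can be sketched in code. A minimal sketch in Python, with path names and likelihoods invented for illustration (they are not from Newman’s talk): each leaf of the tree gets an estimated likelihood, and anything below a threshold is dropped from further analysis.

```python
# Hypothetical attack-tree leaves for the safe example; the names and
# likelihood values are invented for illustration.
attack_paths = {
    "bribe an authorized employee": 0.30,
    "steal a key": 0.15,
    "blow up the safe": 0.05,
    "brute-force the combination": 0.000001,  # would take ~1000 years
}

def paths_worth_defending(paths, threshold=0.01):
    """Keep only the attack paths likely enough to be worth mitigating,
    most likely first."""
    return sorted(
        (p for p, likelihood in paths.items() if likelihood >= threshold),
        key=lambda p: -paths[p],
    )

print(paths_worth_defending(attack_paths))
# brute-forcing the combination is dropped: far below the threshold
```

The point of the exercise is the ordering it produces: your prevention budget goes to the top of the list, not to the thousand-year scenario at the bottom.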

HTTPS: Not as Obvious as It Seems

The thing to keep in mind is to always start with the basics. A quarter of Newman’s talk was spent on HTTPS… Say what? Everyone knows you need HTTPS for a better security layer… right? Yes, but back to the example of the cameras: Although many companies use HTTPS on all their public-facing APIs, internally they often still use HTTP, so a single security hole in the public-facing API can compromise your whole system.

Why would one use HTTP when HTTPS is so much more secure and available? Well, one of the key reasons for not using HTTPS is that it costs either time or money to set up. Newman points out that you can turn to Let’s Encrypt, an organization that offers certificates for free, along with an automated way to renew them. Basically: Start using HTTPS. Now.
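One cheap guardrail against HTTP creeping back in on internal traffic is to refuse plain-HTTP URLs at the client side. A minimal sketch in Python (the function name and the internal URL are my own illustration, not something from the talk):

```python
from urllib.parse import urlparse

def require_https(url: str) -> str:
    """Refuse any service URL that is not HTTPS, internal traffic included."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"insecure URL refused: {url}")
    return url

require_https("https://reviews.internal/api/movies")   # accepted
# require_https("http://reviews.internal/api/movies")  # raises ValueError
```

Wiring a check like this into the shared HTTP client of your services makes “internal traffic is also HTTPS” an enforced rule rather than a convention.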

Stay Up to Date Before It’s Too Late

Last but not least, I want to discuss keeping your stuff up to date, both software and infrastructure. It’s not worth spending hours and hours or thousands of dollars on securing your software while you’re still running an old operating system whose security holes undermine the overall security of your application.

Newman mentions a surprising fact: In his production environment, he doesn’t use official (or non-official) Docker images from Docker Hub. Instead, every image he uses is “handmade”. The main reason is that you can’t update the Hub images, and a lot of them contain security bugs (e.g. most of the Ubuntu images on Docker Hub contain a Bash security hole).

In other words, patching is not something you should do whenever you find the time, or whenever it suits you. It’s a key part of maintaining a secure system or application, and checking for updates should become an important part of your continuous integration chain.
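Checking for updates can be made a concrete CI step. A hedged sketch in Python: the package data below is invented, and in a real pipeline it would come from the OS package manager or an image scanner, but the gate itself is just “fail the build when security updates are pending”.

```python
def pending_security_updates(packages):
    """Return the names of packages with a security update waiting."""
    return [name for name, info in packages.items() if info.get("security")]

# Invented example data; a real pipeline would query apt/yum or a scanner.
packages = {
    "bash": {"installed": "4.3-7", "candidate": "4.3-7+deb8u1", "security": True},
    "curl": {"installed": "7.38-4", "candidate": "7.38-4", "security": False},
}

outdated = pending_security_updates(packages)
print("security updates pending:" if outdated else "up to date:", outdated)
# a CI job would exit non-zero here when the list is non-empty
```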

Think It Through, But Don't Overthink It

Also, why put all your energy into securing all of your public-facing APIs? Take authentication, database encryption, file encryption, etc. Is it always worth it? Say you have a website full of movie reviews. Is it worth overengineering by securing all of the data in the database? What would be the harm in someone taking the movie data directly from your database, when you already expose that data anyway?

Instead of securing the whole application, it might be enough to only secure the microservice around “user credentials”, which could save you a lot of time in setting up and maintaining the security around it. In short: Why overengineer when it’s not necessary?
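That split can be expressed as a simple routing rule: only requests touching the credentials service require authentication, while the public review data stays open. A sketch with invented service names and a deliberately toy token check, just to show the shape of the decision:

```python
# Invented service names; only the credentials service sits behind auth.
PROTECTED = {"user-credentials"}

def handle(service, token=None):
    """Toy dispatcher: demand a token only for protected microservices."""
    if service in PROTECTED and token is None:
        return "401 Unauthorized"
    return "200 OK"

print(handle("movie-reviews"))              # public data, no auth needed
print(handle("user-credentials"))           # refused without a token
print(handle("user-credentials", "tok123")) # accepted with one
```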


Newman’s talk initially caught my attention because of the subject. I went to learn about the security of software applications and, in particular, microservices, but was surprised by the main focus: Keep it simple, start with the basics. Don’t start defending against exotic or advanced scenarios while your basic security (i.e. internal network, system updates, ...) is not set up properly, and really think the possible security threats through. In short, security starts with a well-thought-out plan and a good follow-up on it. And in the end, don’t we all love it when a plan comes together?