There is no doubt that cybersecurity risks are on the rise. Not a month goes by without a major new breach being disclosed, and that is without counting the ones that are never disclosed at all. In the fight against this industry (yes, hacking is an industry), you must stay ahead of the malevolent actors. At the same time, you must keep cost and complexity under control. In fact, reduced complexity usually means better security.
The old adage says that you must succeed at implementing every necessary countermeasure, and keep them all operating without fault at all times, while the hackers only need to breach your defences once.
How do you resolve this conundrum, as software architectures become more complex, the attack surface expands, and hackers grow more sophisticated? Part of the answer is to:
In doing so, you effectively reduce complexity, make things secure by design, and unlock the scalability of your operations. At the same time, you remove most of the tools and means a hacker needs to exploit any remaining, unknown, or new weaknesses in the ecosystem.
Implementing the measures outlined above does not remove the need for traditional methods, such as granting privileges on an as-needed basis, recording all activity, or actively testing the defences. However, the work and cost of sifting through logs, verifying false alerts, and analyzing test reports will be kept under control. Your team will be able to free resources to stay ahead of the enemy!
Let's see how to do this in detail and significantly reduce the attack surface.
Zero-trust architecture, proposed 30 years ago, has taken front stage in the last few years. It is an immense improvement in how we think about IT security. In a nutshell, or simply stated using a very naive functional definition, an organization simply needs to assume that all of its computers, servers, IoT devices, and other IT resources are connected directly to the Internet. From that point, the only course of action is to always verify, validate, and authorize every single piece of data, every request, and every action. However, although crucial, this provides only a single layer of protection. Specifically, attacks targeting kernel or library vulnerabilities will not be curtailed by the zero-trust paradigm. Additional means are needed.
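To make the "always verify" idea concrete, here is a minimal, hypothetical Python sketch: every request carries a signature that the receiver checks before acting, regardless of which network the request arrived from. The secret, function names, and payload format are illustrative only, not a prescribed protocol.

```python
import hmac
import hashlib

# Hypothetical shared secret; in practice this would come from a secrets
# manager or a per-client credential, never be hard-coded like this.
SECRET = b"example-shared-secret"

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 signature over a request payload."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_request(payload: bytes, signature: str) -> bool:
    """Verify EVERY request, even from 'internal' peers: under zero trust,
    no request is accepted on the basis of where it came from."""
    expected = sign(payload)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

print(verify_request(b"GET /status", sign(b"GET /status")))  # True
print(verify_request(b"GET /status", "forged-signature"))    # False
```

The same check runs on every single request; nothing is grandfathered in because it originates from a "trusted" subnet.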
This paper does not focus on zero trust as such. Numerous blog entries and articles abound and expertly cover the topic; see the zero-trust definition and history on Wikipedia.
Part of the notion of zero trust is that a trust perimeter does not exist: trusting nearby equipment was always a bad idea. However, trust (or the lack thereof) aside, the perimeter concept is still useful. Before going any further, a bold statement is due: without any network access, a computer can never be compromised by a malevolent actor. This is not entirely true if you consider removable media, such as USB keys and SD cards, but let's leave that for another discussion. So, equipped with the notion that no network access means no hacking is possible, it stands to reason that the goal should be to approach this state (no network) as closely as possible. The closest this goal will ever be achieved is by:
Example of a working scenario
The goal is simple: prevent any access other than what is necessary, and when that access is allowed, do not trust the peer, its data, or its request.
In the diagram in the previous section, an attack vector is still possible: should one of the servers be compromised, lateral movement remains possible. There are two fundamental ways of preventing this:
The main problem with firewalls, besides their complexity and need for constant management, is that they can stop blocking without anyone noticing. In other words, firewalls inherently suffer from the silent-failure syndrome, which means you must use other tools (e.g. penetration-testing tools) to confirm they are still doing their job. Using those other tools:
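Whatever tooling is chosen, the essence of such a check can be sketched in a few lines of Python: actively probe a port that is supposed to be blocked, and alert the moment a connection unexpectedly succeeds. The host and port below are placeholders for illustration.

```python
import socket

def port_is_closed(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection attempt fails (refused or timed out).
    Run continuously, a probe like this turns a firewall's silent failure
    into a loud one: the port opening up is detected on the next sweep."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        result = sock.connect_ex((host, port))  # 0 means the port is OPEN
    finally:
        sock.close()
    return result != 0

# Probe a loopback port where nothing should be listening (illustrative).
print(port_is_closed("127.0.0.1", 1))
```

A real deployment would sweep every address/port pair that policy says must be blocked, and raise an alert on any probe that connects.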
Instead of setting up roadblocks, which can silently fail, simply stop your servers from listening for external requests. Make them listen only on the loopback interface, and provide an active method for external requests to reach the loopback interface. This active method will be built with the following attributes:
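Attributes aside, the loopback-only part is a one-line change in most servers: bind to 127.0.0.1 instead of 0.0.0.0. A minimal, hypothetical Python illustration (port 0 lets the OS pick a free port for the demo):

```python
import socket

# Bind the service to the loopback interface ONLY. Because no external
# interface is listening, no other host can even attempt a connection;
# there is no firewall rule here that could silently stop working.
server = socket.create_server(("127.0.0.1", 0))
addr, port = server.getsockname()
print(addr)  # 127.0.0.1

# A local process (for example, the tunnel endpoint that terminates the
# "active method" on this host) can still reach the service over loopback:
with socket.create_connection(("127.0.0.1", port)):
    pass  # connection over loopback succeeds

server.close()
```

The service itself is unchanged; only its reachability has collapsed down to local processes.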
Management, monitoring, troubleshooting, and debugging functions, too, should only be permitted via outbound connections.
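The pattern is the reverse of classic remote administration: the managed server dials out to a controller and then serves management commands over that outbound connection, so it never needs an inbound listener. The sketch below simulates both sides in one process over loopback; the command protocol is invented purely for illustration.

```python
import socket
import threading

def controller(listener: socket.socket, replies: list) -> None:
    """Ops-side controller: waits for managed servers to call home,
    then issues commands over the connection the server opened."""
    conn, _ = listener.accept()
    with conn:
        conn.sendall(b"status\n")        # issue a management command
        replies.append(conn.recv(1024))  # read the agent's reply

# The controller is the only listening party (loopback here for the demo).
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
replies: list = []
thread = threading.Thread(target=controller, args=(listener, replies))
thread.start()

# Managed-server side: a single OUTBOUND connection, no inbound listener.
with socket.create_connection(("127.0.0.1", port)) as agent:
    command = agent.recv(1024)
    if command.strip() == b"status":
        agent.sendall(b"ok\n")

thread.join()
listener.close()
print(replies[0])  # b'ok\n'
```

Because the managed server only ever initiates connections, its inbound attack surface for management traffic is zero.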
Needless to say, the easiest way to reduce, or even remove, all listeners is simply to remove all unnecessary software packages from the machine. Only the few components necessary to deliver the service should be left. Management tools, such as SSH, could even be removed and only added dynamically and temporarily on an as-needed basis. Doing so improves the overall cybersecurity posture by:
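Auditing which listeners remain is straightforward to automate. The hypothetical sketch below takes an inventory of (bind address, port) pairs, as might be gathered from `ss -tln` output or a configuration review, and flags every listener that is reachable from outside the host:

```python
import ipaddress

# Hypothetical inventory of listening sockets on one machine.
listeners = [
    ("127.0.0.1", 8080),  # loopback only: fine
    ("0.0.0.0", 22),      # all interfaces: flag it
    ("::1", 9090),        # IPv6 loopback: fine
    ("10.0.0.5", 5432),   # bound to a LAN address: flag it
]

def externally_reachable(bind_addr: str) -> bool:
    """True if a listener bound to this address accepts non-local traffic."""
    return not ipaddress.ip_address(bind_addr).is_loopback

flagged = [(addr, port) for addr, port in listeners
           if externally_reachable(addr)]
print(flagged)  # [('0.0.0.0', 22), ('10.0.0.5', 5432)]
```

An empty `flagged` list is the target state: every remaining listener is on loopback, reachable only through the active outbound method described earlier.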
The modern hardening techniques presented in this paper aim to reduce the complexity and cost of managing production setups. This, together with improved security methods, will make the life of hackers much more difficult.
NearEDGE is committed to delivering simple solutions that will:
Follow us on LinkedIn for more news