Modern infrastructure hardening

Stay ahead of hackers and make their lives more complicated, all while reducing your own complexity, minimizing the attack surface, and lowering your costs.
By NearEDGE | March 3, 2023 | Read time 4 min

There is no doubt that cybersecurity risks are on the rise. Not a month goes by without a new major breach being disclosed, and that is without counting the ones that are never disclosed at all. In the fight against this industry (yes, hacking is an industry), you must stay ahead of malicious actors. At the same time, you must keep cost and complexity under control. In fact, reduced complexity usually means better security.

The old adage says that you must succeed at implementing every necessary countermeasure, while the hackers only need to breach one. Not only that, your countermeasures must operate without fault at all times; the hackers only need to penetrate your defences once.

How do you resolve this conundrum when software architecture becomes more complex, the attack surface expands, and hackers are getting more sophisticated? Part of the answer is to:

  • Reduce network access wherever possible and adopt a zero-trust paradigm
  • Limit access to outbound-only connection requests
  • Continuously measure everything (not covered in this article)
  • Minimize the content present on each machine; this applies to applications, configuration, and data alike

In doing so, you effectively reduce complexity, make things secure by design, and unlock the scalability of your operations. At the same time, you remove most of the tools and means that a hacker needs to exploit any remaining, unknown, or new weaknesses present in the ecosystem.

  • Without network access, the hacker cannot reach a target system
  • When no inbound connection is possible, even if a hacker succeeds at compromising a nearby system, they will not be able to connect to a high-value system
  • Should a service be compromised in some form, measuring everything will detect any malicious attempt to persist the intrusion
  • By removing system content, particularly applications but also user profiles and credentials, a successful attack will fail to spread for lack of means

Implementing the measures outlined above does not reduce the need for traditional methods, such as granting privileges on an as-needed basis, recording all activity, or actively testing the defences. However, the work and cost of sifting through logs, verifying false alerts, and analyzing test reports will be kept under control. Your team will be able to free resources to stay ahead of the enemy!

Let's see how we can do this in detail and significantly reduce the attack surface.

Reducing network access and zero trust

Zero-trust architecture, first proposed some 30 years ago, has taken centre stage in the last few years. It is an immense improvement in how we think about IT security. In a nutshell, or simply stated using a very naive functional definition, an organization simply needs to assume that all of its computers, servers, IoT devices, and other IT resources are connected to the Internet. At that point, the only course of action is to always verify, validate, and authorize every single piece of data, every request, and every action. However, although crucial, this provides only a single layer of protection. Specifically, attacks targeting kernel or library vulnerabilities will not be curtailed by the zero-trust paradigm. Additional means are needed.
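As a toy illustration of the "always verify" idea, here is a minimal sketch in Go. The static bearer-token check, the addresses, and the handler are placeholder assumptions made for the example only; a real zero-trust deployment would validate signed identities (mTLS, OIDC, and so on) and evaluate per-request policy.

```go
// verify_every_request.go — a naive sketch of "always verify": every single
// HTTP request is checked before any handler runs, regardless of origin.
// The static shared secret below is a placeholder assumption, not a
// recommendation.
package main

import (
	"log"
	"net/http"
)

// verifyEveryRequest wraps a handler so that no request is trusted by default.
func verifyEveryRequest(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := r.Header.Get("Authorization")
		if token != "Bearer example-placeholder-token" { // placeholder check only
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello, verified caller\n"))
	})
	// Listen on loopback only, in line with the rest of this article.
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", verifyEveryRequest(mux)))
}
```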

This article does not focus on zero trust as such; numerous blog entries and articles expertly cover the topic. See the zero-trust definition and history on Wikipedia.

Part of the notion of zero trust is that a trust perimeter does not exist. Trusting nearby equipment was always a bad idea. However, trust (or lack thereof) aside, the perimeter concept is still useful. Before going any further, a bold statement is due: without any network access, a computer can never be compromised by a remote malicious actor. This is not entirely true if you consider removable media, such as USB keys and SD cards, but let's leave that for another discussion. So, equipped with the notion that no network access means no remote hacking is possible, it stands to reason that the goal should be to approach this state (no network) as closely as possible. The closest this goal will ever be achieved is by:

  • Using a link-local-only configuration at the node. This is much easier to do with IPv6: with nmtui (NetworkManager), you just need to disable IPv4 and set IPv6 to link-local (a verification sketch follows this list)
  • Having no servers (listeners) present on the machine (more on this in the next section)
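To check the first point, here is a minimal sketch in Go that enumerates the host's interfaces and flags any address that is not loopback or IPv6 link-local, i.e. anything reachable beyond the local link. It is only a verification aid; the configuration itself is still done with nmtui or your usual network management tooling.

```go
// linklocal_audit.go — an illustrative sketch, not a NearEDGE tool. It lists
// every address configured on the host and flags anything that is not
// loopback or IPv6 link-local.
package main

import (
	"fmt"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}
		for _, addr := range addrs {
			ipNet, ok := addr.(*net.IPNet)
			if !ok {
				continue
			}
			ip := ipNet.IP
			switch {
			case ip.IsLoopback():
				fmt.Printf("%-8s %-28s loopback (ok)\n", iface.Name, ip)
			case ip.IsLinkLocalUnicast():
				fmt.Printf("%-8s %-28s link-local (ok)\n", iface.Name, ip)
			default:
				fmt.Printf("%-8s %-28s WARNING: routable beyond the link\n", iface.Name, ip)
			}
		}
	}
}
```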

Example of a working scenario

Simplified network diagram showing the effect of using IPv6 link scope for network isolation

The goal is simple: prevent any access other than what is strictly necessary, and when access is allowed, do not trust the peer, its data, or its requests.

Limit access to outbound connections

In the diagram in the previous section, an attack vector is still possible: should one of the servers be compromised, lateral movement remains possible. There are two fundamental ways of preventing this:

  • Set up roadblocks in the form of firewall rules or comparable methods
  • Simply prevent all access requests by not having any listener at all

The main problem with firewalls, besides their complexity and constant management, is that they can stop blocking without anyone noticing. In other words, firewalls inherently suffer from the silent-failure syndrome, which means that you must verify them with other tools (e.g. penetration testing tools). Using those other tools will:

  • Increase complexity
  • Incur cost, both capex and opex
  • Create another management burden

Instead of setting up roadblocks, which can silently fail, simply stop your servers from listening for external requests. Make them listen only on the loopback interface and provide an active method for external requests to reach the loopback interface. This active method is built with the following attributes (a minimal sketch follows the diagram below):

  • Present a single point of monitoring
  • Participate in zero-trust
  • Perform a single outbound connection to the reverse proxy (see diagram in previous section), or its equivalent
  • Protect the reverse proxy by preventing other software on the server from accessing it
Diagram showing the concept of having no listeners inside a server: an active access method for secure servers
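To make the pattern concrete, here is a minimal sketch in Go of the agent side. The relay address, the ports, and the one-connection-per-dial behaviour are illustrative assumptions, not a NearEDGE implementation; a production setup would multiplex streams over the tunnel and mutually authenticate the agent and the reverse proxy. The important property is that the only socket the server ever opens toward the network is an outbound dial, so a port scan of the server finds nothing to connect to.

```go
// outbound_agent.go — an illustrative sketch of the "no inbound listener"
// pattern. Nothing here accepts connections on an external interface: the
// application binds 127.0.0.1 only, and this agent dials OUT to a relay
// (the reverse-proxy side), then pipes that single stream to the loopback
// service before redialling.
package main

import (
	"io"
	"log"
	"net"
	"time"
)

const (
	relayAddr = "proxy.example.internal:9443" // hypothetical reverse-proxy relay
	localAddr = "127.0.0.1:8080"              // the loopback-only application
)

func main() {
	for {
		if err := serveOne(); err != nil {
			log.Printf("tunnel error: %v; retrying", err)
			time.Sleep(2 * time.Second) // simple backoff before redialling
		}
	}
}

// serveOne makes one outbound connection to the relay, lets it carry a client
// stream, and copies bytes between that stream and the local service.
func serveOne() error {
	remote, err := net.Dial("tcp", relayAddr) // outbound only: no Accept() anywhere
	if err != nil {
		return err
	}
	defer remote.Close()

	local, err := net.Dial("tcp", localAddr) // reach the loopback-only application
	if err != nil {
		return err
	}
	defer local.Close()

	done := make(chan struct{}, 2)
	go func() { io.Copy(local, remote); done <- struct{}{} }()
	go func() { io.Copy(remote, local); done <- struct{}{} }()
	<-done // one direction closing is enough to tear the pair down
	return nil
}
```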

Management, monitoring, troubleshooting, and debugging functions, too, should only be permitted via outbound connections.

Minimize content present in machines

Needless to say, the easiest way to reduce, or even remove, all listeners is simply to remove all unnecessary software packages from the machine. Only the few components necessary to deliver the service should be left. Management tools, such as SSH, could even be removed and only dynamically and temporarily added on an as-needed basis (a small audit sketch follows the list below). By doing so, the overall cybersecurity posture will be improved by:

  • Removing tools that hackers need to spread
  • Reducing maintenance by limiting CVE applicability
  • Unlocking various modern architectures, such as immutable OS, continuous OS re-deployment, and other advanced methodologies, should they be deemed desirable
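To verify that a minimized machine really exposes no unexpected listeners, here is a small sketch in Go. It is Linux-specific (it reads /proc/net/tcp and /proc/net/tcp6) and is only a verification aid, not a hardening tool in itself.

```go
// listener_audit.go — an illustrative sketch (Linux only). It walks
// /proc/net/tcp and /proc/net/tcp6 and reports every socket in LISTEN state,
// flagging listeners that are not bound to the loopback address.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// Loopback encodings as they appear in /proc/net/tcp and /proc/net/tcp6.
var loopback = map[string]bool{
	"0100007F":                         true, // 127.0.0.1
	"00000000000000000000000001000000": true, // ::1
}

func main() {
	for _, path := range []string{"/proc/net/tcp", "/proc/net/tcp6"} {
		auditFile(path)
	}
}

func auditFile(path string) {
	f, err := os.Open(path)
	if err != nil {
		fmt.Printf("skipping %s: %v\n", path, err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	scanner.Scan() // skip header line
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 4 || fields[3] != "0A" { // "0A" == TCP_LISTEN
			continue
		}
		addrPort := strings.Split(fields[1], ":")
		if len(addrPort) != 2 {
			continue
		}
		port, _ := strconv.ParseUint(addrPort[1], 16, 16)
		if loopback[addrPort[0]] {
			fmt.Printf("%s: port %d listening on loopback (ok)\n", path, port)
		} else {
			fmt.Printf("%s: port %d WARNING: listening on a non-loopback address (%s)\n",
				path, port, addrPort[0])
		}
	}
}
```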

Conclusion

The modern hardening techniques presented in this article aim at reducing the complexity and cost of managing production setups. This, together with the improved security methods, will make the life of hackers more difficult.

NearEDGE is committed to delivering simple solutions that will:

  • Facilitate the operationalization of the scenarios described earlier
  • Complement your existing layered security architecture (using bastions or other means)

Follow us on LinkedIn for more news
