As penetration testers, we’re used to getting caught. Getting caught tells us we’ve done our job, and that the responders are evidently doing theirs. However, as I highlighted in my AusCERT talk in 2020, the recurring revenue and price point of managed security services can often attract firms that are wholly inappropriate for this line of work.
One of our recent pentests encountered another cyber security cosplay firm; however, their response mechanisms caused unnecessary drama that I think warrants open, constructive feedback.
An overview of the unnamed Security Operations Centre
The Security Operations Centre (SOC) in question is typical of the managed security services providers I regularly criticise. Their LinkedIn presence showed a disproportionate number of salespeople and managers relative to actual capability, but there was an additional and rather infuriating quality: the SOC had off-shored all of its critical functions so that a couple of white guys who used to sell real estate or holiday package deals could increase their profit margins and decrease quality. This in turn creates a workforce culture that is not grounded in its core business, and cyber security fiction starts playing out. I’ll also note that they were responsible for providing IT support, and the environment was certainly behind on patching.
So how did this play out?
We effectively compromised the network they were responsible for within two hours of connecting, and had full control over all systems by 5pm on a Tuesday. The following day we were due to focus on post-exploitation scenarios; however, we were alerted to an inconsistent set of events that saw us hit the panic button. As we went through our debrief, we realised that there were a few big mistakes.
Mistake 1: Inaccurate time logs
The trigger for our panic was that PowerShell was being executed at 7pm, two hours after we’d turned off our tools. My last experience of such an event was when a threat actor had started cleaning up and planning an exit, possibly ransoming the network at the same time.
As it turns out, 7pm was not when PowerShell ran; it was when the event we triggered at 5pm was viewed by the SOC – two hours after the last action that triggered the alert and six hours after our first action on the network. Accurate time logs and clarity in reporting would have prevented the initial panic.
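To illustrate the point, here is a minimal sketch of how the actual execution time can be pulled from the host itself, assuming PowerShell script block logging is enabled (Event ID 4104 in the Microsoft-Windows-PowerShell/Operational log) – the event’s TimeCreated field records when the script ran, not when an analyst got around to looking at it:

# Hedged sketch: assuming script block logging is enabled, TimeCreated is the execution time.
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-PowerShell/Operational'; Id = 4104 } -MaxEvents 50 |
    Select-Object TimeCreated,
        @{ Name = 'TimeCreatedUtc'; Expression = { $_.TimeCreated.ToUniversalTime() } },
        Message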
Mistake 2: Lack of response
The SOC did not raise this with the client until 12:30am, and no actual response besides an alert took place.
The company in this instance had been sold an “EDR” solution without the R (response). The focus of response is to detect, contain, eradicate and recover – none of which took place throughout the event. A playbook of responses, effectively rehearsed and with time as a focus, would have provided enough information to make the call that this was not an established threat.
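To make “contain” concrete, here is a hypothetical sketch of the kind of step a rehearsed playbook might include – isolating a suspect Windows host at its local firewall so investigation can continue while the blast radius is limited. This is an illustration of the principle only, not a claim about this SOC’s tooling, and in practice you would carve out an exception for your management or EDR channel:

# Hypothetical containment step: block all outbound traffic from the suspect host,
# leaving it powered on so volatile evidence is preserved for the investigation.
New-NetFirewallRule -DisplayName 'IR-Containment-BlockOutbound' -Direction Outbound -Action Block -Enabled True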
Mistake 3: Source and destination IP addresses not provided
Notwithstanding the time issue, the quickest way to resolve this matter would have been to provide the source of the events occurring within the network and to identify the affected systems. To simply say PowerShell is running is not enough. The lack of traceability was only compounded by the fact that no one could tell us whether there was outbound connectivity, which, alongside the inaccurate timestamps, returned us to the conclusion that the SOC had not planned its continuous monitoring and key information requirements.
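For context, pulling the source and destination addresses (and the owning process) from a Windows host is trivial. A minimal sketch, assuming local access to the machine in question:

# Hedged sketch: list established connections with local/remote addresses and the owning process –
# the minimum needed to trace where suspicious activity is coming from and going to.
Get-NetTCPConnection -State Established |
    Select-Object LocalAddress, LocalPort, RemoteAddress, RemotePort,
        @{ Name = 'Process'; Expression = { (Get-Process -Id $_.OwningProcess).ProcessName } }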
Mistake 4: Unqualified assessment as to the events
The email from the SOC to internal management read “data breach detected.” This was not a data breach, and the evidence provided did not support a qualified conclusion of a data breach. This could have been an initial entry, a ransomware event, or even an attempt to disrupt operations.
A qualified assessment means analysing and interpreting the information you have in order to determine what is likely taking place and what course of action should follow. A single PowerShell log, analysed in an untimely manner and without any other data sources to correlate it against, is not a qualified assessment.
Mistake 5: Proposing to charge $25K and shut down the client’s network
Our direction to the client was to have the SOC go through its normal operations for response. The SOC’s proposed course of action was to shut down the entire company and conduct an incident response for $25K. Not only would this in effect scare the threat actors off, it’s a terrible thing to do to a company with a heightened requirement for availability of its services. Given the ineffectiveness of this provider, we elected to run the Incident Response (IR) ourselves.
Undertaking incident response as a penetration tester
Three commands I run after getting privileged access to a Windows system are listed below, followed by a sketch of how we capture their output:
1. ipconfig /displaydns – this lets me know what domains have been resolved
2. netstat /n – identify who is connected to the system (including EDR solutions)
3. tasklist – map out running processes
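A minimal sketch of how we capture that triage output with a timestamp, so the record can later be correlated against whatever the SOC reports (the output path and file name are illustrative only):

# Hedged sketch: run the triage commands and keep a timestamped copy of the output.
$stamp = Get-Date -Format 'yyyyMMdd-HHmmss'
$out = "C:\IR\triage-$env:COMPUTERNAME-$stamp.txt"          # illustrative path, not a standard
New-Item -ItemType Directory -Path 'C:\IR' -Force | Out-Null
"Collected at $(Get-Date -Format o) on $env:COMPUTERNAME" | Out-File $out
ipconfig /displaydns | Out-File $out -Append
netstat /n | Out-File $out -Append
tasklist | Out-File $out -Append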
We executed these against the targeted systems and did not identify anything suspicious beyond our own presence. After this, we exfiltrated the Windows logs and identified the PowerShell commands that had been run, correlating the security events to positively identify the reported activity as ours and to confirm that the reported time was incorrect. This took all of one hour, without any additional charges to the client.
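For those wanting to replicate the log collection and correlation step, a minimal sketch, assuming script block logging is available and using illustrative export paths:

# Hedged sketch: export the relevant event logs for offline analysis, then pull the logged
# script blocks and compare their content and TimeCreated against our own activity record.
New-Item -ItemType Directory -Path 'C:\IR' -Force | Out-Null
wevtutil epl Microsoft-Windows-PowerShell/Operational C:\IR\powershell-operational.evtx
wevtutil epl Security C:\IR\security.evtx
Get-WinEvent -Path C:\IR\powershell-operational.evtx |
    Where-Object { $_.Id -eq 4104 } |
    Select-Object TimeCreated, Message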
In Conclusion
What frustrates me is that, among all of the flash reporting and news articles we keep seeing in cyber security, basic process faults, poor time management and unqualified analysis remain commonplace. However, these are to be expected when we lack the cool, calm and collected thinking required of cyber security practitioners. In an industry that is attracting much fanfare, resources and prestige, holding delivery to account is essential.
We’re seeking to expand our SOC training, as well as continuing to tear to shreds the dodgy providers that harm an industry that relies on trust.
If your business is in need of assistance, please contact us.
Author: Edward Farrell, Cyber Security Consultant