Cyber Security Q&A With Avella's Daryl Flack (Claude wasn't invited to the session :)
I recently talked in this ‘ere blog about a mighty fine conversation I had with Avella Security’s Daryl Flack.
So good that we decided to follow it up with a Q&A with yer man. I created four questions based on my own recent experiences and observations and here, indeed, are Daryl’s own observations in return. And I was lying in the title – I did post the same questions to Claude; he attempted to write a tome in response. Most of it was probably stuff he nicked from Daryl in the first place…
Thankfully Daryl’s replies are more succinct – read and learn:
What can businesses learn – and use to adapt their defence – from the high-profile breaches of last year in the UK retail/car industries?
High-profile breaches across UK retail and automotive brands demonstrate that attackers are not focused on sector boundaries but on identifying where resilience is weakest, and they offer important lessons for how businesses should adapt their defences.
Many commercial businesses still treat cyber security as an IT or audit function, where frameworks such as ISO 27001 are met as a baseline and then largely left at that. However, the recent breaches show that compliance alone does not equate to security. In contrast, Critical National Infrastructure organisations tend to operate under stricter regulatory pressure, which forces greater maturity in governance, investment, and preparedness. Commercial organisations often lack this same external pressure, which contributes to inconsistent board-level understanding, underinvestment in security, and gaps in incident readiness.
To address this, businesses should adopt a “resilience by design” mindset, where security and recovery capabilities are built into systems from the outset rather than added later. This involves understanding which assets and services are most critical, and what the impact would be if they were disrupted. It also requires looking beyond individual systems to understand how failures might occur across interconnected environments, including third-party suppliers, which are increasingly common entry points for attackers. Security controls should be designed in from the start, with continuous monitoring to detect abnormal behaviour and regular reassessment as systems evolve. Alongside this technical foundation, organisations need a culture where cyber awareness is shared across all levels, supported by training and aligned incentives that reinforce resilience rather than purely operational delivery.
Recent breaches also reinforce the importance of assuming compromise rather than focusing solely on prevention. Attackers now use increasingly sophisticated methods, including AI-driven social engineering and ransomware-as-a-service, meaning even well-defended organisations can be breached. As a result, businesses must strengthen identity controls, adopt Zero Trust principles, and reduce reliance on perimeter-based security models. Network segmentation is essential to limit the blast radius of an attack, while supply chain risk management must become a continuous discipline rather than a periodic review.
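As a rough illustration of how segmentation limits the blast radius Daryl describes, here is a minimal Python sketch of a default-deny flow policy. The segment names, ports and policy entries are invented for the example and not drawn from any real product or deployment:

```python
# Default-deny segmentation policy: a flow is permitted only if it is
# explicitly listed. Everything else is blocked, so a compromised host
# in one segment cannot wander laterally into others.

ALLOWED_FLOWS = {
    # (source segment, destination segment): permitted destination ports
    ("web-tier", "app-tier"): {8443},
    ("app-tier", "db-tier"): {5432},
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Return True only for flows the policy explicitly permits."""
    return port in ALLOWED_FLOWS.get((src, dst), set())

# A breached web server can reach the app tier it legitimately talks to,
# but not the database directly -- the blast radius stays contained.
assert is_allowed("web-tier", "app-tier", 8443)
assert not is_allowed("web-tier", "db-tier", 5432)
```

The design point is the default: absence from the policy means denial, which is the opposite of the perimeter model where internal traffic is implicitly trusted.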
Ultimately, organisations must recognise that resilience and recovery are as important as identification, detection and prevention. The ability to maintain or restore operations quickly after an incident is now a core business requirement. Those that embed these lessons will be better positioned not only to reduce the likelihood of compromise but also to withstand and recover from inevitable attacks.
How many companies are genuinely planning for recovering from a breach, rather than adopting the “it’ll never happen to us” mentality, and what do they need to do in order to get to a status where they can confidently recover quickly, at least to a minimum viable business level?
While awareness is improving, relatively few organisations are genuinely prepared to recover effectively from a breach. Many still operate with an “it’ll never happen to us” mindset, particularly where they lack the capability to accurately assess business impact.
To move beyond this mindset, organisations need to adopt an assumed breach approach and focus on how quickly they can return to a minimum viable business level. Having backup systems and disaster recovery plans is only the starting point; what matters is how well those plans can be executed under pressure.
This requires active preparation rather than passive documentation. Regular testing of backup restoration, structured tabletop exercises and live simulations are essential to identify gaps in processes, communication and decision-making. Organisations also need to clearly define their minimum viable operations and develop a realistic understanding of recovery timelines and costs. True resilience comes from preparation and rehearsal, not just the existence of plans.
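The “regular testing of backup restoration” point can be made concrete with a small sketch. This hypothetical Python snippet verifies a test restore by comparing file hashes against a manifest recorded at backup time; the manifest format and directory layout are illustrative assumptions, not part of any particular backup product:

```python
# Verify a test restore: every file in the manifest must exist in the
# restored directory and hash to the value recorded at backup time.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backups don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(restore_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return relative paths that are missing or whose contents differ."""
    failures = []
    for rel_path, expected in manifest.items():
        restored = restore_dir / rel_path
        if not restored.exists() or sha256_of(restored) != expected:
            failures.append(rel_path)
    return failures
```

Running a check like this on a schedule, against a scratch restore rather than production, is what turns “we have backups” into “we know our backups restore”.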
Do businesses generally understand the relevance of existing and forthcoming security compliance acts and regulations, and are they using them to shape and adapt their cyber security strategies, or are many simply ignoring them? If the latter, what are the implications and what is – or could be – the knock-on effect on cyber insurance compliance and claims?
There is a mixed level of engagement with existing and forthcoming security regulations. While some organisations are proactively using them to shape and strengthen their cyber security strategies, others, particularly smaller businesses, tend to approach them more as requirements to be met, often within the constraints of limited resources and competing priorities.
However, these regulations have important implications, as compliance is increasingly becoming a baseline expectation, and failure to meet it can increase both operational risk and financial exposure. This is particularly relevant in the context of cyber insurance, where insurers are becoming more rigorous, expecting organisations to demonstrate strong security controls and thorough risk assessments before issuing policies.
Where organisations do not meet these expectations, they may face higher premiums, restricted cover or even denial of insurance, and in the event of a claim, non-compliance or gaps in controls can lead to disputes or rejected payouts. Even when an organisation feels it is compliant, that doesn’t prevent external factors from impacting a claim.
The NotPetya destructive wiper malware attack was a cautionary tale. After the US and UK governments attributed the attack to Russia, Zurich denied Mondelez’s $100 million claim under a property policy, invoking an exclusion for “hostile or warlike action” by a “government or sovereign power”. Mondelez sued Zurich in Illinois federal court, and the parties settled mid-trial in 2022 without disclosing terms or producing any precedent-setting rulings. All policies can have limits or conditions that significantly affect payouts, and this can be very nuanced and will only become more so over time.
Some companies, given the large premiums for cyber insurance policies, are therefore looking to set that money aside for cyber improvements and incident response rather than buying a policy they may never use. However, without an accurate assessment of the business impacts they could be exposed to, which few small companies could realistically assemble themselves, it’s almost impossible to know how much money to reserve, with the risk being that they are unable to cover their costs in the event of a major security incident.
An added complexity is that many commercial contracts now require cyber insurance as a prerequisite to doing business.
The bolt-on approach to creating cyber security defences has led to a model where the focus is on minimising the existing attack surface but, does it not make more sense to turn this on its head and start with a zero attack surface, making data, applications and services available on an on-demand basis? If so, does this mean effectively ripping up existing security deployments and almost starting from scratch?
The bolt-on approach to cyber security, where organisations continually layer new tools to reduce an ever-growing attack surface, has reached a point of diminishing returns where tool and alert fatigue is having a real-world impact on operations. In that sense, there is a strong argument for moving towards a zero-trust architecture in which minimising the attack surface is a core principle, and where data, applications and services are not persistently exposed but are instead made available on an on-demand, least-privilege basis.
This approach better reflects the current threat landscape. With increasingly sophisticated attackers, widespread use of AI in social engineering, and the growth of Ransomware-as-a-Service, minimising the perimeter is key. A model that assumes nothing is exposed by default, and that access is tightly controlled, ephemeral and continuously verified, can significantly reduce opportunities for exploitation and limit lateral movement.
However, this does not necessarily mean ripping up existing security deployments and starting from scratch. For most organisations, that would be impractical, costly and potentially disruptive. Instead, the shift should be viewed as an architectural evolution rather than a wholesale replacement. Existing investments in areas such as identity, monitoring, segmentation and endpoint security still play a critical role, but need to be realigned around this more dynamic, zero-trust-oriented model.
In practice, this means evolving from static, perimeter-based controls towards identity-driven access, stronger segmentation, and on-demand exposure of services. Over time, organisations can reduce their reliance on bolt-on controls and build a more integrated, resilient security posture. The goal is not to discard what already exists, but to reframe and modernise it so that security is embedded into how services are delivered, rather than layered on afterwards.
Note from author: Daryl’s replies came to three MS Word pages in total. Claude was already up to 13 pages when I switched him off. Yes, you can still do that to AI. For now at least…
