NHS versus ransomware - the dangers of cure by tickbox

The weekend's WannaCry ransomware attack disrupted 48 NHS organisations, forced departments to close and caused operations to be cancelled. In short, people's lives were affected. Cyber security - or the lack of it - shook the real world and rocked real lives. Lots of PR went round in the aftermath with tips on security - but effecting real change takes more than a tickbox exercise. What lessons should we take from the attack? What questions should NHS boards ask? Should IT and security departments change the way they work? Should patients think differently about their data in the NHS?

Posted by Brian Andrew Runciman on 16th May 2017

Comments (10)

Matthew Russell Bennion

27 July 2017

This has a lot to do with the culture of sharing data via email attachments. It's a difficult one to lock down, as this was a macro-based security breach, but antivirus should have been the first line of defence, followed by Trust firewalls blocking any strange requests.

Personally, I think world wide web access should be limited in NHS Trust environments, and firewalls tightened to allow access only to the IPs/addresses that are needed for essential systems/servers. Doing this would improve network traffic no end!

Maybe restrict attachments to nhs.uk/nhs.net addresses, with recovery of attachments from other addresses requiring a request to the service desk. I know this wouldn't stop the virus spreading if it had already breached the network, but it would stop an external attachment dead in its tracks.
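The domain policy suggested above could be sketched roughly as follows. This is a minimal illustration, not real NHSmail configuration; the allowed-domain set and the message shape are assumptions.

```python
# Sketch of a sender-domain attachment policy: hold attachments from
# outside nhs.uk/nhs.net for a service-desk release request.
ALLOWED_DOMAINS = {"nhs.uk", "nhs.net"}

def quarantine_attachment(sender: str, has_attachment: bool) -> bool:
    """Return True if the attachment should be held pending a service-desk request."""
    if not has_attachment:
        return False
    domain = sender.rsplit("@", 1)[-1].lower()
    # Allow sub-domains such as trust.nhs.uk as well as exact matches.
    allowed = (domain in ALLOWED_DOMAINS
               or any(domain.endswith("." + d) for d in ALLOWED_DOMAINS))
    return not allowed
```

As the comment notes, this only blocks the external delivery vector; it does nothing against lateral spread once something is inside.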

Martin Tom Brown

02 June 2017

There is one aspect of the WannaCry attack against the NHS (and many other Windows-using sites worldwide) that doesn't seem to be getting the airtime it deserves. This attack was made possible by the US NSA allowing (some of) their stash of dangerous zero-day Windows exploits to be stolen by hackers. If they cannot be trusted to keep military-grade weaponised code safe, then civilian IT people are really up against it. The only saving grace was that an accidental hero managed to find and hit the kill switch before too much damage was done. Next time around there may not be a kill switch to hit.
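The kill switch referred to here worked because WannaCry queried a hard-coded, unregistered domain and stood down if that domain resolved; registering the domain stopped the outbreak. The logic can be sketched defensively as below - the domain shown is a placeholder, not the real one.

```python
# Illustrative sketch of WannaCry-style kill-switch logic: the malware
# only proceeded if its hard-coded domain did NOT resolve.
import socket

def kill_switch_active(domain: str) -> bool:
    """Return True if the kill-switch domain resolves (i.e. has been registered)."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False
```

The fragility is obvious: a variant with no such check, or one per-sample random domain, has no single switch to hit.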

We have to accept going forward that such cyber attacks are inevitable from time to time, and will come with increasing sophistication. Slow-fuse attacks that propagate silently, encrypting as they go, are potentially far worse than one that immediately draws attention to itself through rapid exponential growth.

One problem for many big-ticket scientific instruments with useful hardware lifetimes of 10-20 years is that the manufacturers are keen to sell shiny new hardware and do not support drivers for their older hardware on newer OSes. Networks have to carefully isolate such insecure legacy machines. You see the same thing in the domestic market with scanners and printers (though the capital cost is nowhere near as great).

Perhaps there needs to be a new, extreme-critical priority of Windows update - one that nags every day you don't install it - for patches made essential by the theft of military-grade malware targeting previously undisclosed zero-day privileged code execution exploits. I'd hate to allow general automatic "critical" Windows updates, as some of them invariably break certain applications. It is always going to be a balance of risk against benefit for any set of updates.

It seems to me we need a new category of risk here, "Above Critical", for bugs that allow direct execution of arbitrary hostile code.

Andrew Charles Ellis

22 May 2017

One question is whether there is sufficient understanding at Board level about the role of IT in the NHS, and sufficient accountability. This attack demonstrated that the health service cannot now function properly if the IT fails, because its systems and equipment are widely interconnected. Surely NHS digital systems (medical systems as well as administration systems) should be managed as critical infrastructure, with clearly defined and demonstrably achievable recovery point and recovery time objectives (RPOs and RTOs); yet in many cases they appear to be managed as non-critical administrative systems. Ultimately this comes down to the business case for service continuity: whether service continuity matters and, if so, whether the Board is accountable for failure to maintain it. Only when Board members are held personally accountable for service failures like this will the situation change.
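The RPO idea above is mechanically simple to monitor, which is part of the point: it gives a Board an objective, checkable number rather than a tickbox. A minimal sketch, with illustrative (not real NHS) targets:

```python
# Sketch of an RPO compliance check: the Recovery Point Objective is the
# maximum tolerable age of the last good backup, i.e. the most data you
# are prepared to lose.
from datetime import datetime, timedelta

def rpo_breached(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """True if more data stands to be lost than the RPO allows."""
    return now - last_backup > rpo
```

An RTO check is the same shape applied to restore duration; systems failing either test are exactly the ones to surface in a Board-level continuity report.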

Howard Edward Gerlis

19 May 2017

No-one has mentioned the most basic and yet most important element that we as "professionals" should be aware of: backups, backups, backups. Simple.
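The comment above is terse but right, with one addition: an unverified backup is a hope, not a backup. A minimal sketch of verifying a restored copy against the original by checksum (the data here is illustrative):

```python
# Sketch of backup verification: a restore test that compares the
# restored bytes against the original by SHA-256 digest.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def backup_verified(original: bytes, restored: bytes) -> bool:
    """True only if the restored bytes match the original exactly."""
    return sha256_of(original) == sha256_of(restored)
```

Against ransomware specifically, the backup also has to be offline or otherwise immutable, or it simply gets encrypted along with everything else.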

Adrian Firth

19 May 2017

There are lots of contributory factors, but ultimately we need to be asking:

  1. Why are 'security' agencies weaponising exploits instead of getting them fixed?
  2. Why do we not develop products, services and solutions with a view to having support across the entire lifecycle, including ongoing component and infrastructure availability? (Risk is the flip-side of innovation, at which point you think 'risk management', and therefore 'apply holistic infosec practices to the risk analysis', and therefore also to product design.)
  3. Why do people not have control over data about them, i.e. their data?

John Robert Sherwood

18 May 2017

Patch management is not just a simple exercise in applying the patches. It requires a careful risk management approach and an end-to-end process that takes account of other incidental risks, such as emergent system properties brought about by interactions. Typically there will be hundreds, or even tens of thousands, of platforms to be patched, and rolling out a patch can take a long time. Another issue is which systems should be patched first, and in what priority order.

The patch management process should begin with creating a state of ‘patch-readiness’, meaning having the process in place and tested for its own suitability. Some of the key steps to incorporate into the process include:

  1. Make the patch management process an integral sub-process of business continuity management.
  2. Ensure that patch management will enable business and not hinder it.
  3. Assess the business criticality of IT systems so that priority in patching can be decided based on critical need.
  4. Ensure good vulnerability intelligence from CERT bulletins and the like, so that zero-day attacks can be identified and likely consequences assessed. This is an essential aspect of patch prioritisation – which ones should be applied first and what sequence of patches is the best. Some at least will be recursive patches – patches on patches.
  5. Test each patch on a test platform to assess its effects on overall system performance. Assess the risks of patch failure.
  6. Always develop a regression plan before applying any patch in a live production environment. This may require having a disk image of the unpatched state, because many patches are irreversible once applied.
  7. The regression plan should also be tested thoroughly to ensure that it would work if needed.
  8. Roll out the patches in a systematic way according to the priorities identified and monitor the impacts on live production systems in case the patch testing has failed to identify problems.
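The prioritisation in steps 3 and 4 above can be sketched as a simple scoring pass over the estate. The scoring scheme (business criticality multiplied by vulnerability severity) and the system names are illustrative assumptions, not from the comment:

```python
# Sketch of patch prioritisation: order systems by business criticality
# (1-5, from step 3) times vulnerability severity (CVSS-like 0-10,
# from the intelligence gathered in step 4), highest risk first.
def patch_order(systems):
    """systems: list of (name, criticality, severity) tuples."""
    return [name for name, criticality, severity
            in sorted(systems, key=lambda s: s[1] * s[2], reverse=True)]

estate = [
    ("booking-portal", 2, 5.0),   # admin system, moderate vulnerability
    ("pacs-imaging", 5, 8.1),     # clinical imaging, serious vulnerability
    ("smb-file-share", 4, 9.8),   # e.g. the MS17-010 class of flaw WannaCry used
]
```

Real schemes weight exposure and exploit availability too, but even this crude product makes the sequencing decision explicit and auditable.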

Sean Collins

17 May 2017

The problems ultimately come down to:

  1. Not having a single body that is responsible for technical standards across the NHS.
  2. Not having IT Directors on the boards of the individual trusts to help make budgeting decisions.

It's unfortunate, but the NHS has become a numbers game, and not having the right staff and departments in place to guide IT strategy and risk management when it comes to budgeting has proved fatal.

I saw a quote from James Stewart (former deputy chief technology officer at the Government Digital Service) that said: "We could have bought [security] patches, but that doesn't mean we would have installed them." My jaw hit the floor when I read this.

Ganiat Omolara Kazeem

17 May 2017

I am a graduate student working on my thesis away from campus this year.

I received 42 or so emails. I screenshotted a few and sent them to our IT department, because the risk had now expanded from destroying computers on a likely restorable network to personal computers holding work belonging to cash-strapped students.

The response was glib: "We've advised our staff; if you deleted them there's nothing to worry about."

I circulated screenshots within a few social groups.

Here is the interesting thing.

I got more, and this time the sender spoofed my university email and eventually made a classic mistake: they used a short form of my name found only on social media.

The long and short of my experience was that the only way to avoid accidentally opening these emails was to set up an automatic delete filter independently.
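The kind of independent delete filter described here can be sketched as below: drop mail whose From header claims the trusted domain but whose envelope sender disagrees, one crude signature of spoofing. The domain is a hypothetical placeholder, and real filtering belongs in the mail provider's rules rather than a script.

```python
# Sketch of a spoof-detection delete filter using header/envelope
# domain mismatch as the trigger.
from email.message import EmailMessage
from email.utils import parseaddr

TRUSTED_DOMAIN = "example.ac.uk"   # hypothetical university domain

def should_delete(msg: EmailMessage) -> bool:
    """Flag mail whose From header spoofs the trusted domain."""
    _, from_addr = parseaddr(msg.get("From", ""))
    _, envelope = parseaddr(msg.get("Return-Path", from_addr))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    env_domain = envelope.rsplit("@", 1)[-1].lower()
    # Spoof signature: header claims the trusted domain, envelope disagrees.
    return from_domain == TRUSTED_DOMAIN and env_domain != from_domain
```

This is essentially a hand-rolled fragment of what SPF/DKIM/DMARC alignment checks do properly at the receiving server.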

It shows that this attack relies on open information, such as social media channel data, and also relies on convincers - spoofing to demonstrate authenticity. I'm appalled that Gmail, who provide my student email, don't have SPF protection.

For a lot of academics, one click and, poof, a life's work is gone.

In any event, my take on this, after the Chicago Blue Netflix episode showing this last month, is: who polices the NHS mail network, and how?

Secondly, therein lies the evil of outsourcing: outsourced work that is outsourced again, and hiring agencies to manage critical technical jobs instead of hiring permanent staff.

In simple terms, my conclusion:

The devils behind these attacks are people inside the organisation, linked to us on social media (I'm methodically defriending all strange names now) and on arbitrary sites where we give out our work, personal or academic emails while working for the greater good.

So the security to prevent a repeat must begin with shaking and rinsing from the inside.

Brian Andrew Runciman

17 May 2017

Agreed; this discussion is in the health and care section, though, so I wanted to get views on that aspect.

Howard Edward Gerlis

17 May 2017

It's not only the NHS that was affected; it's important to think more broadly.