LEARNING ON THE JOB?
Cyber-security is one of the hottest topics right now but views differ on its future. Chris Hall looks at the debate.
One of the many disturbing factoids bandied about in discussions of cyber-security is that it typically takes an organisation eight months even to realise it has been attacked. The implication is that you’ve been burgled before you know it, and the perpetrators have long since made their escape. Such paranoia-inducing statistics are, of course, best guesses. However, they also reflect the ever-growing sophistication and diversity of cyber-attacks.
Alongside well-known email-based ‘phishing’ scams (“Dear account holder…”) and distributed denial of service (DDoS) attacks designed to bring websites and web-based services to a halt, cyber-intruders are targeting customer and transaction data as well as other digitised assets, with the potential to damage the victim’s reputation and its balance sheet.
Juniper Research recently predicted the global cost of data breaches would quadruple by 2019 to $2.1tn, while Gartner expects IT security spending to increase from $75bn last year to $170bn by 2020.
In particular, the growing incidence of advanced persistent threats (APTs) – which may collect, delete or destroy data over a period of months – is causing alarm. Eight months or not, lengthy detection times are prompting firms in the financial markets and beyond to explore new ways of identifying and tackling rogue elements that have sneaked through the layers of cyber-protection erected in recent years.
Many believe a significant contribution can be made by tools that monitor and analyse information flows, teaching themselves to recognise and respond to particular patterns as they process massive quantities of data. Whether you call it cognitive computing, big data analytics, machine-learning algorithms or artificial intelligence (AI), this is an area of significant growth in cyber-security.
According to Russell Stern, CEO of specialist solutions provider Solarflare, RSA Conference 2016 – an information security event held in early March in San Francisco and attended by more than 40,000 people – was abuzz with firms plying software that can detect cyber-threats in close to real time. “A lot of firms are looking to detect breaches earlier and remediate them faster using AI. But potential customers may struggle to identify their ideal provider in such a crowded market,” he says.
The logic behind these claims is well established. Just as the algorithms that recommend books or films on Amazon or Netflix become more accurate the more data they can analyse on an individual’s reading or viewing preferences, so AI-based cyber-security tools become quicker over time at spotting a rogue IP address or an unusual traffic upsurge. Rather than taking eight months to detect a cyber-attack, a machine-learning algorithm could isolate and shut down a system within minutes or hours of suspicious behaviour being identified.
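A minimal sketch of the kind of unsupervised anomaly detection described here, using scikit-learn’s IsolationForest. The traffic features, figures and thresholds are illustrative assumptions, not taken from any vendor mentioned in this article:

```python
# Illustrative only: train an anomaly detector on 'normal' traffic, then
# flag sources whose behaviour deviates sharply - e.g. a traffic upsurge.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated normal traffic per source: (requests/min, bytes/request)
normal = rng.normal(loc=[100, 500], scale=[10, 50], size=(1000, 2))

# A few hypothetical suspicious sources generating a sudden upsurge
suspicious = rng.normal(loc=[900, 4000], scale=[50, 200], size=(5, 2))

# Learn what 'normal' looks like from historical flows
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for outliers and 1 for inliers
flags = model.predict(suspicious)
print(flags)  # the upsurge sources are flagged as outliers (-1)
```

The key point is that the detector is fitted only on historical traffic: the more representative data it sees, the tighter its notion of “normal” becomes, which is the self-improving behaviour the article describes.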
In addition, such tools should not endlessly spew out false positives because – like the human brain – they can be trained to search for more evidence if necessary, learning how to react from ‘experience’. Moreover, the recent explosion in demand for big data analysis across multiple industries means many tools are already highly sophisticated, while the facilities required to store, transport and analyse huge quantities of data are widely available, and increasingly affordable.
As well as detecting cyber-attacks, it is claimed machine-learning tools also have a role in quickly determining how systems were infiltrated, what damage was done and when they are safe to bring back online.
Some observers believe AI-based solution vendors still have much to learn about cyber-security, while others argue such tools must take their place alongside the existing armoury of weapons ranged against cyber-criminals, rather than replace them. Such caution is warranted in the securities markets, where threats are many and the stakes high. Compared with other industries, the sector is a challenging and appealing target.
It is high-profile and highly regulated, populated by sophisticated firms holding extremely valuable assets. Most of its data flows and digital assets are extremely well protected and highly structured. While most transactions are between trusted counterparties, it is not always possible to check every communication or credential – even in these days of ‘know your customer’s customer’ – meaning the securities trading value chain is only as strong as its weakest link. In short, the securities market is tough, but tempting, whether your aim is to damage confidence in the system or an individual institution, to profit through theft or ransom of client data, or to conduct industrial espionage.
“Retail banking is an easier target than securities trading, not least because of the comparative complexity of monetising the proceeds, but on the other hand, you only have to win once to win big,” says Richard Benham, professor in residence, UK National Cyber Skills Centre.
Cyber-security threats evolve over time as perpetrators become more sophisticated, often through information sharing, while organisations within the securities markets are subject to different types of attack. For example, a survey conducted by the World Federation of Exchanges and the International Organisation of Securities Commissions (IOSCO) in 2013 found DDoS was the most common form of attack, with no exchange operators reporting attempted financial theft.
However, Stern, whose firm initially specialised in low-latency trading technology, and now provides platforms that accelerate, monitor and secure network data, says DDoS attacks are a secondary priority to ensuring firms have full control of access to platforms and devices in the workplace. “AI has a key role, but cyber-attackers can always find a way round any software-based solution; firms need to implement layers of defence to protect themselves, including both hardware and software,” he says.
Monitoring network flow for suspicious abnormalities is an established part of cyber-security, but machine-learning algorithms need to know what they’re looking for when searching through the digital haystack for a potentially malicious needle, notes Allan Russell, senior vice president of strategy at analytics solutions provider SAS. “Machine-learning tools can sift through the data at great speed, but they will only find what they’re programmed to look for, for example deviations from normal data patterns that suggest a device is part of a botnet,” he says. As such, they are typically flagging up potential problems for humans to act upon, rather than automatically responding to the threat themselves.
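The pattern-deviation check Russell describes can be sketched very simply: learn a baseline for a device’s behaviour, then queue anything far outside it for an analyst rather than acting automatically. The device names, figures and threshold below are hypothetical, not SAS’s actual implementation:

```python
# Illustrative sketch: flag devices whose outbound connection rate
# deviates sharply from a learned baseline, for human review.
import statistics

# Hypothetical baseline: normal outbound connections/hour for a device class
baseline = [42, 39, 41, 45, 40, 43, 38, 44, 41, 40]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def flag_for_review(device_id, conns_per_hour, threshold=3.0):
    """Return device_id if traffic is more than `threshold` std devs
    from the baseline - a deviation that might suggest botnet activity."""
    z = abs(conns_per_hour - mean) / stdev
    return device_id if z > threshold else None

# One device suddenly making 400 connections/hour, one behaving normally
alerts = [d for d in (flag_for_review("ws-017", 400),
                      flag_for_review("ws-018", 42)) if d]
print(alerts)  # only the anomalous device is queued for an analyst
```

Note that the function only returns a flag: consistent with the article’s point, the tool surfaces a potential problem for a human to act upon rather than shutting anything down itself.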
And although data storage and analysis costs have declined, Russell points out that some firms only capture low-level network data to save on costs, which means it has to be enriched – for example with IP addresses – before it can be meaningfully analysed.
Many remain highly sceptical. Anton Chuvakin, a research vice president in Gartner’s security and risk management group, blogged last year that the rush to deploy machine-learning algorithms to bolster cyber-security is highly risky because today’s tools apply non-deterministic logic, i.e. they are not guaranteed to deliver the same result from a given starting condition. “Do we want a security guard that shoots people based on random criteria, such as those he ‘dislikes based on past experiences’, rather than using a whitelist (let them pass) and a blacklist (shoot them!)?” he asked. Others also worry about the imbalance between positive and negative cyber-attack data samples for AI tools to learn from.
But the growing nature of the threat requires action. A February 2015 IDC whitepaper asserted that the prevalence of APTs demands a new, pro-active response from government and industry, including the use of “predictive and behavioural tools” to detect threats, understand attacks and execute appropriate enterprise-wide responses.
Sharing Stern’s concerns about the hackability of all software tools, IOSCO senior economist Rohini Tendulkar backs the multi-layered, consistent vigilance outlined in the recent consultative report* from the Committee on Payments and Market Infrastructures (CPMI) and IOSCO. “100% security is an illusion,” she says. “If you assume machine-learning algorithms will detect all threats, you could risk letting response and recovery falter. Cyber-security is never complete.”
*Guidance on cyber-resilience for financial market infrastructures. CPMI-IOSCO. November 2015.