Data management : Leveraging technology : Heather McKenzie

THE ABCs OF DATA MANAGEMENT.

Data management has been a major theme for years but Heather McKenzie argues that many firms are still just laying the foundations.

Over the past few years, an array of regulations ranging from Basel III to MiFID II has occupied the minds of data professionals within financial services. At the same time, innovative technologies such as distributed ledger, artificial intelligence and machine learning have emerged. However, despite the many hours devoted to data management, some firms are still trying to get the basics right, with cutting-edge technology remaining a pipe dream.

Andy Schmidt, a vice-president and global industry lead for retail banking at CGI, says the most important technology for any financial services firm when looking at data management is “a clean sheet of paper”. They should sketch out their requirements first. “Many projects fail because the proper expectations and hypotheses have not been set. Without these it is impossible to succeed,” he says.

There are four key areas that present challenges: data availability (where the data is), data quality, data accuracy and data sufficiency (does the data answer the question that is being asked?). The firms that do a better job are those that recognise that some of the data may reside outside of the organisation. However, until firms can get a grip on these issues, Schmidt says they won’t be able to benefit from the latest technologies such as real-time streaming data.

Chris Probert, partner and UK data practice lead at technology and management consultancy Capco, says much of the work firms did to meet regulatory requirements has been “relatively successful” and they are now ready to use these efforts as a foundation on which to add value. He advises that it is all about the data – “better data means better decisions.”

Probert identifies some significant themes in data technology: knowledge graphs, machine learning and ontology. Knowledge graphs are software tools that capture data and show how different data sets are related, how the data is being used and what changes are being made to the data and by whom. “The key thing to recognise about data in financial institutions is that people still don’t know what they have. There’s a big push now to ensure people are more connected with the data within their institution,” he says.
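
By way of illustration, the sketch below shows how a lineage question might be put to a small knowledge graph. It uses the open-source networkx library, and every node and relationship name is invented for the example rather than taken from any real institution.

```python
import networkx as nx

# Nodes are data assets and the people or systems that touch them;
# edges carry the relationship ("feeds", "modified_by", and so on).
graph = nx.DiGraph()
graph.add_edge("trade_blotter", "risk_report", relation="feeds")
graph.add_edge("risk_report", "basel_iii_return", relation="feeds")
graph.add_edge("risk_report", "j.smith", relation="modified_by")

# Lineage question: what ultimately depends on the trade blotter?
print(nx.descendants(graph, "trade_blotter"))
# {'risk_report', 'basel_iii_return', 'j.smith'} (set order may vary)

# Who or what does the risk report connect to, and how?
for _, target, attrs in graph.out_edges("risk_report", data=True):
    print(f"risk_report -> {target} ({attrs['relation']})")
```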

Taking a different slant

Machine learning is now being deployed in data management, particularly to handle the bias that all too often exists in data. Most organisations allow biases to creep into their definitions of data; machine learning, together with artificial intelligence, can help detect and strip out such prejudices. Ontology, once dismissed as “boring data science work”, is now of real significance, says Probert. An ontology is a formal description of knowledge as a set of concepts within a domain and the relationships between them. A connected language for describing data items will be the bedrock of organisations coming to grips with legacy systems and data, he adds.
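
To make that definition concrete, an ontology can be sketched as little more than typed concepts plus named relationships between them. A minimal illustration in plain Python follows; the concepts, definitions and relationships are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A node in the ontology: a business term with a formal definition."""
    name: str
    definition: str
    # (relationship, target concept) pairs, e.g. ("is_a", "Party")
    relations: list = field(default_factory=list)

# Domain concepts and how they relate, so that "counterparty" means the
# same thing whether it arrives from a legacy ledger or a new platform.
party = Concept("Party", "Any legal entity the firm transacts with")
counterparty = Concept("Counterparty", "A Party on the other side of a trade",
                       relations=[("is_a", "Party")])
trade = Concept("Trade", "An executed transaction",
                relations=[("has_counterparty", "Counterparty")])

ontology = {c.name: c for c in (party, counterparty, trade)}
```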

Not enough organisations regard data management as an enabler of data analytics, he continues: “Some 70% of data science is about wrangling data. Good data management should bring that percentage down.” Probert says there has been a pivot whereby some firms are now looking for extra value from the strategies they put in place to meet regulatory requirements. “Good data management will be a digital enabler, helping with analytics and customer retention. If you don’t control data, you cannot use it properly,” he says. Creating a single view of a customer across all siloed products, using internal and external data, will help organisations to gain a fuller, richer view of their clients.
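
A rough sketch of what that single customer view can look like in practice is shown below, joining illustrative extracts from two siloed product systems and an external feed; all column names and figures are invented.

```python
import pandas as pd

# Invented extracts from two siloed product systems plus an external feed.
deposits = pd.DataFrame({"customer_id": [101, 102], "deposit_balance": [5000, 12000]})
loans    = pd.DataFrame({"customer_id": [101, 103], "loan_balance": [20000, 7500]})
external = pd.DataFrame({"customer_id": [101, 102, 103],
                         "credit_rating": ["A", "BBB", "AA"]})

# Outer joins keep every customer, whichever silo they appear in,
# giving one row per customer across products.
single_view = (deposits
               .merge(loans, on="customer_id", how="outer")
               .merge(external, on="customer_id", how="left"))
print(single_view)
```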

Ed Thomas, a principal analyst at Global Data, whose work is focused on AI, also highlights the issue of bias in data. If firms feed data into an AI system without being aware of the biases in the information, the results will not be as expected, he says. “Every piece of data has bias in it and that will be reflected in any outcome unless work has been done to mitigate the bias. That is fundamental to the success of AI.”

Many organisations are simply unaware of the bias in their data. Amazon, for example, deployed an AI-based employment tool and fed in data it thought it had fully anonymised; the system could nevertheless identify male and female candidates based on their hobbies.
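
One way to surface that kind of hidden bias is to audit whether a supposedly neutral feature still predicts the protected attribute. A minimal sketch with made-up data:

```python
import pandas as pd

# Made-up "anonymised" records: the protected attribute is dropped for
# modelling, but kept aside here so proxies for it can be audited.
df = pd.DataFrame({
    "hobby":  ["netball", "rugby", "netball", "rugby", "netball", "rugby"],
    "gender": ["F", "M", "F", "M", "F", "M"],
})

# If a feature almost perfectly partitions the protected attribute,
# a model can reconstruct it even though it was never supplied.
print(pd.crosstab(df["hobby"], df["gender"], normalize="index"))
# In this toy data, hobby splits gender 100/0 in both rows:
# a clear proxy that would need mitigating before training.
```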

The ‘rubbish in, rubbish out’ axiom applies even to the most advanced of technologies. Thomas says financial services firms are beginning to understand that they need to prepare their data in order to get the most out of AI technologies. “The point was not always made in the past, and there’s been a lot of hype around AI, so people were keen to get moving on projects. It works both ways – AI-based tools will help you get more out of your data, but you need good data to begin with.”

Machine learning tools can also help make sense of that data. “This is a big job for any company,” says Thomas. “Data is often neglected and seen only as an IT problem. But it is increasingly seen as a business fundamental of vital importance. There is hard work to be done to maintain and catalogue data so a firm can get competitive advantage from AI technology.”

Thomas describes data as an “extremely valuable resource” that needs to be utilised. When it comes to AI, ensuring the success of any project is a data issue, not a product one. “AI is not a silver bullet; it needs to be fed and maintained,” he adds.

Andrew Kouloumbrides, CEO of Xceptor, says success in data management is “less about the technology and more about the data itself”. Financial firms want a single view of accurate, representative and timely data. Some firms have tackled this by creating a master data file, while others have been “more pragmatic” and developed strategies and architectures to cope with the multiplicity of data sources.

He adds, “Being able to get hold of the internal and external data is where AI and machine learning come in. About 80% of data within a financial institution is unstructured, and natural language processing and machine learning help firms to extract this data, transform it in a way that makes it reusable and replicable, and gain a better understanding of it.”
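
The sketch below illustrates the principle on a single unstructured trade confirmation. Hand-written regular expressions stand in for the trained NLP models a real implementation would use, and the message format is invented for the example.

```python
import re

# An unstructured trade confirmation, as it might arrive by email.
message = "Confirm purchase of 500 shares of ACME Corp at 42.50 USD, settling 2019-06-14."

# Simple patterns stand in for a trained extraction model.
fields = {
    "quantity":   re.search(r"(\d[\d,]*) shares", message),
    "price":      re.search(r"at ([\d.]+) (\w{3})", message),
    "settlement": re.search(r"settling (\d{4}-\d{2}-\d{2})", message),
}

# The structured, reusable record recovered from the free text.
record = {
    "quantity":   int(fields["quantity"].group(1).replace(",", "")),
    "price":      float(fields["price"].group(1)),
    "currency":   fields["price"].group(2),
    "settlement": fields["settlement"].group(1),
}
print(record)
# {'quantity': 500, 'price': 42.5, 'currency': 'USD', 'settlement': '2019-06-14'}
```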

Coping with the past

One of the biggest challenges for financial firms, says Kouloumbrides, is the vast number of legacy systems and data architectures. Many of the solutions have been “sticking plasters”, which now need to be removed. This can be done only with a detailed data strategy.

Such a strategy should consider the data requirements of end customers. “Many financial institutions are focused on what they need to do to make themselves more efficient from a data perspective. They are very inward-focused and haven’t really thought about the data requirements of their end customers.” Firms spend a great deal of time and money provisioning data to clients, only to face follow-up requests for further slicing and dicing of the data. “Not only is this an overhead, it is also a source of customer dissatisfaction. Clients are often unhappy with the data they receive.”

Firms must find a way to create a bridge between their clients and the representation of the data they provide, through approaches such as data lakes or data mining. This requires a move towards self-service tools, enabling clients to slice and dice the data the way they want.
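
As a rough illustration of the self-service idea, the sketch below cuts one published extract two different ways without any further request to the provider; the dataset and column names are invented.

```python
import pandas as pd

# The kind of extract a firm might publish once to a data lake;
# all clients, products and figures here are illustrative.
trades = pd.DataFrame({
    "client":  ["Alpha", "Alpha", "Beta", "Beta"],
    "product": ["FX", "Equities", "FX", "Equities"],
    "region":  ["EMEA", "APAC", "EMEA", "EMEA"],
    "volume":  [120, 80, 200, 50],
})

# With self-service access, each client slices the data its own way
# rather than raising a fresh request with the provider.
by_product = trades.pivot_table(index="client", columns="product",
                                values="volume", aggfunc="sum")
by_region = trades.groupby(["client", "region"])["volume"].sum()
print(by_product)
print(by_region)
```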

There is some soul-searching in financial firms about how to, or whether to, monetise data, which Kouloumbrides describes as part of the core relationship with clients. There is a significant cost for financial firms in preparing data for clients and responding to follow-up requests. “To monetise this, a firm needs a strong data strategy that allows it to understand the data it is holding and to provide complete and accurate data in real time to clients.”

©Best Execution 2019