INVESTMENT MANAGEMENT: “IT’S THE TECHNOLOGY, STUPID”
Neil Smyth, Marketing and Technology Director at StatPro.
New assets come with new challenges
Assets under management are at an all-time high. In 2013 they stood at $68.7 trillion, a 13% increase over the pre-crash peak of 2007. The future also looks rosy for the asset management industry, with global AUM predicted to grow to over $100 trillion by 2020. But how do active managers stay competitive and relevant in a world where investors will have more choice than ever before? We're already seeing traditional institutional investors look outside the normal channels of vanilla asset management by ploughing funds into alternative assets, with hedge funds the vehicle of choice. A recent survey found that pension funds are the fastest-growing investor segment and the largest contributor to the growth of the hedge fund industry. Personal wealth is also on the increase: high-net-worth client assets are predicted to rise from $52.4 trillion in 2012 to $76.9 trillion in 2020, and as the global economy continues to recover and emerging markets grow, the mass affluent client market is expected to climb from $59.5 trillion to over $100 trillion over the same period.
Where will these assets end up? What kinds of investments and asset types will dominate, and how can asset managers compete to ensure they are the ones managing this money? Traditional active management is under threat from new asset classes and new lower-cost investment vehicles. The huge growth in ETFs allows any investor to gain access to markets and market segments without the need for active management; it's easier than ever to diversify your investments using passive means. Nor is the growth in passive investing restricted to 'Joe Public'. A 2013 survey by Ignites of 1,001 investment professionals, many of whom make their living promoting active management products, found that two thirds have invested a sizeable amount of their own money in passive products, with only one in five saying they avoided passive products altogether. Easy access to these new channels, which in some cases cost as little as four basis points to hold, presents a real challenge to margins and profitability in the active management market.
It’s obvious that there is real pressure on margins and profitability within the capital markets industry in general. Regulations keep on coming, and many directly affect the asset management industry and drive up costs. A recent KPMG paper states that the asset management industry is investing heavily in compliance, on average spending more than seven percent of total operating costs on compliance technology, headcount or strategy. Based on extrapolations from their data, KPMG believes that compliance is costing the industry more than $3 billion. This new wave of regulation affects all aspects of the asset management business, from the advice firms provide and the way they trade securities, to the way they report on performance and market themselves to investors. Keeping up with new regulations and ensuring effective implementation is a real challenge, and the impact of non-compliance has never been more serious.
Technology infrastructure – need for change
Is the existing technology infrastructure and application landscape within the average asset manager up to the task of managing increasing data volumes, reporting and regulatory pressure cost-effectively? I think not. With two thirds of asset management firms relying on technology from 2006 or before, what chance do they have of competing for new capital while managing the cost and margin pressures we’ve already discussed? Many systems exist as standalone solutions: they perform certain tasks and produce data, but they don’t support a complete workflow and were never designed to integrate with other systems around a single data set. This traditional infrastructure creates silos of data that add no value, because the data is immobile, difficult to share and cannot be acted upon by the right business users in a timely fashion. Combine budget pressure, a growing desire to outsource non-core activities and this dislocation of data, and the priorities for software development and infrastructure deployment are undergoing a major transformation within our industry. The IT strategy needed to meet today’s challenges must be based on an open architecture framework; elastic, scalable, on-demand hardware; collaborative workflows; and new service delivery models such as Software as a Service (SaaS) and cloud. These new delivery models mean new external partnerships, switching the priority of internal IT teams from managing hardware and software development projects to managing services, external partners and service levels. CEB TowerGroup estimates that the majority of applications could be delivered via alternative methods as early as 2016. The short-term implication is that investment in IT capabilities now treats traditional hosted and on-premise solutions as the last resort.
There is no doubt that existing hardware and software platforms are going to be replaced. The speed of that replacement will vary, but the trend towards outsourcing continues and makes sense, as the new delivery models fit the growing desire to streamline IT infrastructure and operations. That desire is being fuelled by a focus on core business activities rather than support operations such as IT. The ability to consume technology as a service means internal IT ‘power stations’ will no longer be required at anything like today’s scale. Think of the migration to managed services and outsourced infrastructure as a factory switching to the power grid instead of owning and operating its own power station.
These traditional local IT ‘power stations’ are not up to the task when it comes to the growth in data volumes, the sheer number of analytical calculations required by today’s multi-asset-class portfolios, and the compliance reports needed to satisfy regulators on a daily basis. Legacy applications and the architecture behind them were simply not designed to handle the scale and complexity found across the asset management world today. Even if a system went live in 2010, it was probably architected in 2006 and developed in 2007/8 before going through an 18-month installation project. It was designed before the very first iPhone, so what chance does it have of coping with today’s requirements? These on-premise applications were designed to run on single servers or, at best, a small cluster of servers. The issue is that eventually the system will plateau: it simply won’t handle any more data and it cannot produce the output any faster. Adding more data just extends the processing window, and adding more processing power or memory makes no difference once you reach this point. The problem lies in the underlying architecture and design of the software. How many times have you experienced delays waiting for a system to recalculate results? Or, after a data issue, waiting for an overnight process to catch up once the correction has been made?
Many IT departments have invested heavily in ‘virtualisation’ – the ability to create multiple virtual computers all living on one large pool of hardware. This enables flexibility in managing servers and also helps with portability and disaster recovery. It reduces costs too, because you get full utilisation from your hardware, allowing you to get more from less. The problem is that this infrastructure enhancement doesn’t address the software application itself. If the software isn’t designed to scale, it will plateau on a virtual server just as it does on a physical one. The solution is scalable, multi-tenant software. Software needs to be able to scale out over many servers (even hundreds). Think about Google: does the whole thing run on one or two monster servers somewhere, or does it scale out across tens of thousands of smaller machines that each perform smaller tasks? Scaling out gives an application the ability to grow with business requirements, and the cloud is where this scalability works best. Infrastructure as a Service (IaaS) providers like Amazon can power up thousands of virtual servers in a matter of seconds to meet the demands of an application during heavy loads or time-critical calculations. They are simply powered off when not needed, and the application owner is charged by the hour for the servers used. To put some real dollar values on this, Amazon charges $1.68 an hour for a server with 32 processors, 60 gigabytes of memory and 320 gigabytes of super-fast storage. These economies of scale are impossible to match with local on-premise IT infrastructure. Multi-tenancy is also a key architectural element in the next generation of applications, especially from external providers (see more on multi-tenancy below). Cloud-based SaaS applications are constantly upgraded behind the scenes, shielding business users from the pain of software upgrades.
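The economics of hourly billing can be sketched with some back-of-envelope arithmetic. The $1.68-per-hour rate for a 32-core server comes from the figures above; the workload size and server counts below are illustrative assumptions. The point is that, if the work parallelises cleanly, 100 servers for one hour cost the same as one server for 100 hours – you pay for compute, not for elapsed time.

```python
# Back-of-envelope comparison of scale-up vs scale-out for a nightly
# analytics batch. The $1.68/hour rate for a 32-core server is taken
# from the text; the workload size is an illustrative assumption.

RATE_PER_SERVER_HOUR = 1.68   # 32 cores, 60 GB RAM, 320 GB fast storage
WORKLOAD_CORE_HOURS = 3200    # assumed total compute the batch needs

def batch_cost_and_time(num_servers, cores_per_server=32):
    """Elapsed hours and total cost if the work parallelises cleanly."""
    elapsed_hours = WORKLOAD_CORE_HOURS / (num_servers * cores_per_server)
    cost = num_servers * elapsed_hours * RATE_PER_SERVER_HOUR
    return elapsed_hours, cost

for n in (1, 10, 100):
    hours, cost = batch_cost_and_time(n)
    print(f"{n:>3} servers: {hours:6.1f} h elapsed, ${cost:.2f} total")
```

Under this (idealised) assumption of perfect parallelism, the total bill is identical in all three cases; only the elapsed time shrinks – which is exactly why time-critical overnight calculations favour elastic scale-out over a single large machine.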
Changing vendor landscape – the move to SaaS
The changes in technology and service delivery are also having a tremendous impact on the technology vendors that supply the asset management industry. Many have come from the world of the large software deployment project and the support and maintenance that goes with it. This world only works for so long before the cost of innovation becomes too high. Costs rise, and service levels and innovation drop, because too much time is spent servicing outdated deployments of ‘land-based’ software that may have been developed many years ago. Software deployed on-premise with a client instantly leaves the control of the vendor, who controls neither the environment nor any customisations made locally. Supporting this structure across hundreds of clients quickly becomes very difficult and very expensive, resulting in poor support, poor service levels and cost increases that are eventually passed on to the end client. Innovation drops because the technology vendor has to maintain old versions of software: a high percentage of developer resources are wasted fixing issues in old versions still live in production with clients, instead of focussing on new versions and improvements. Being able to focus on a single version of a technology solution, no matter how many end clients there are, is a huge productivity bonus for the vendor. These productivity and innovation gains mean that development companies in all industries are switching to multi-tenant architectures with the cloud as the delivery mechanism, and new software start-ups can benefit from this approach from day one. It will eventually drive vendors who fail to adapt out of business altogether.
While the business advantages of a streamlined IT strategy with outsourced partners are becoming clearer for asset managers, there are key questions of control and data security that must be addressed. It seems every month brings a new headline about a ‘cloud hack’ or data security breach somewhere, and this damages the reputation of cloud vendors and SaaS developers. What is the reality, and what can be done to ensure control is maintained and data is secure? Classifying data, and the business importance of the various services, helps create a framework for assessing the readiness of internal applications for alternative methods of delivery. Not all applications are mission-critical and not all data is ultra-sensitive, so it’s important to map the requirements to the reality rather than adopt a single policy that prevents any of the business benefits from being realised. Compare new cloud-based SaaS applications with existing on-premise solutions and it quickly becomes clear that pure-play SaaS applications are more secure: security is built in from the ground up, not added on at the end of the development process as is the case for many on-premise applications.
Choosing a vendor: Pure play cloud vs fake ‘cloudwash’ – 5 key questions
But what is a pure-play SaaS application, how can you compare one service with another, and why should you care? The simple reason you should care is that fake ‘cloudwash’ applications do not deliver all the benefits you may be expecting. You may be sleepwalking into another IT project that is as flexible as an oil tanker and as cost-effective as an English Premier League football team. Here are five questions you should be asking the application vendor, and the answers you should look for, to help you spot the difference.
1. How many versions of the system are in production? The answer should be one.
Why? True SaaS applications are single version, no matter how many clients are using the system. This is called multi-tenancy. Think of it like a five star hotel. Lots of private rooms with security and even safes in the room, but they’re all in the same building. They share the pool and the bar and if the hotel upgrades its facilities then everyone benefits at the same time. All the rooms are secure but the hotel can get access with permission to clean and service your room. If your solution has multiple versions then you have ‘the software grid of death’ – lots of live installations with different versions everywhere. This is bad because it means the software vendor has to maintain all their clients while trying to write new versions of the software. They must divert resources to support legacy implementations instead of focusing on innovation.
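The hotel analogy can be sketched in code: one shared database (the building) with every row tagged by tenant (the room key), so each client sees only its own data while the vendor maintains a single schema and upgrade path. The table and column names here are illustrative assumptions, not any particular vendor's schema.

```python
# A minimal sketch of the multi-tenant 'hotel' model: shared storage,
# private per-tenant data. Schema and names are illustrative only.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE portfolios (tenant_id TEXT, name TEXT, aum REAL)")
db.executemany(
    "INSERT INTO portfolios VALUES (?, ?, ?)",
    [("acme", "Global Equity", 120.5),
     ("acme", "EM Debt", 40.0),
     ("globex", "Macro Fund", 75.0)],
)

def portfolios_for(tenant_id):
    """Every data access is scoped by tenant: one code base and one
    upgrade path for the vendor, while the filter keeps 'rooms' private."""
    rows = db.execute(
        "SELECT name, aum FROM portfolios WHERE tenant_id = ?", (tenant_id,)
    )
    return rows.fetchall()

print(portfolios_for("acme"))    # acme sees only acme's portfolios
print(portfolios_for("globex"))  # globex sees only its own
```

Because all tenants share one live version, an upgrade to the schema or the application logic reaches every client simultaneously – the 'everyone benefits when the hotel upgrades its facilities' effect described above.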
2. Does the service run on shared hardware or your own dedicated server?
The answer should be shared hardware.
Why? It sounds counter-intuitive – you want your system on your own hardware, right? Wrong. A true cloud-based SaaS application is scalable. It has access to hardware that is elastic: it can grow as more demand is placed on it. You simply ‘spin up’ new servers and keep on scaling. True SaaS applications, like StatPro Revolution, have been written and built especially to utilise hardware in this way. Applications pretending to be real SaaS still sit on dedicated, siloed servers that cannot scale.
3. What is the update schedule? How many updates get released each year?
It should be frequent – at least five or six updates a year.
Why? With SaaS you’re paying for a service. The beauty of a real SaaS application is that it’s much easier for the software vendor to issue new updates and improvements, because they only have to do so in one environment. SaaS vendors don’t have to manage IT upgrade projects, so they can do what they do best: make great applications. You may think that too many new versions could be a problem. How can I keep up? How can I test all these changes? Testing everything is a legacy process you may be used to with traditional software. You don’t need to test everything every time there is a new release, because releases come in smaller, bite-size pieces. When you get a huge dump of new code every 12 months, you need to test many things; with SaaS and the cloud you don’t – you simply log in and start using the new functionality.
4. Is the platform secure? Has it passed external security audits?
Well, obviously the answer should be yes and yes.
Security in pure SaaS applications is paramount because of their multi-tenant design and the fact that they are built to be internet-facing. It’s the difference between building a house and thinking about adding a security alarm afterwards, versus building a castle designed to be secure from the very start, with a moat and a drawbridge. Passing well-known industry audits is key to demonstrating a vendor’s competency and commitment to information security: look for certifications such as ISO 27001:2013 and SSAE 16. Also ask about penetration tests – not quite as painful as they sound, these involve a specialist security company ethically hacking your service to report on possible vulnerabilities, and they are an essential part of maintaining a secure service.
5. Does the system play nicely with others?
Yes – pure cloud SaaS applications are designed with integration in mind.
Why? Asset managers use many applications, and they all generate data in some form or another. It’s very common to share data between applications and to integrate them. Data is the essential ingredient in many systems, and it can be very time-consuming and expensive to manage. Cloudwash systems that are not true SaaS multi-tenant applications often have multiple versions with various release cycles (see question 1). They don’t always offer easy ways to integrate, leaving you to build your own solution to get them talking to anything other than themselves. If you build your own integration, you may get left behind when a new version is released that breaks it and leaves you with a big problem. Pure SaaS applications often include a web API: an interface that lets you connect with all your local applications and other cloud-based platforms. This interface is supported by the software vendor, so you’re never left to maintain it yourself, and as new versions are released you simply get access to more data to integrate with other systems.
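To make the web API idea concrete, here is a minimal consumer sketch. The endpoint paths, field names and paging scheme are hypothetical assumptions for illustration – a real integration would follow the vendor's published API documentation – and a stub stands in for the HTTP call so the sketch is self-contained and runnable.

```python
# Sketch of consuming a vendor web API that pages its results. The URL
# shape and JSON fields ('results', 'next') are hypothetical; a stub
# replaces the real HTTP GET so the example runs stand-alone.

import json

def fetch_page(url):
    """Stand-in for an HTTP GET returning a JSON body; a real client
    would use an HTTP library plus authentication headers here."""
    pages = {
        "/analytics?page=1": {"results": [{"portfolio": "A", "ret": 0.021}],
                              "next": "/analytics?page=2"},
        "/analytics?page=2": {"results": [{"portfolio": "B", "ret": -0.004}],
                              "next": None},
    }
    return json.dumps(pages[url])

def all_results(start_url):
    """Follow 'next' links until the API reports no further pages."""
    url, results = start_url, []
    while url:
        page = json.loads(fetch_page(url))
        results.extend(page["results"])
        url = page["next"]
    return results

print(all_results("/analytics?page=1"))
```

Because the vendor supports and versions the API itself, the consuming code above stays stable across releases – new versions typically expose more data through the same interface rather than breaking the contract.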
It’s clear that the investment management industry needs to replace legacy IT systems in order to stay competitive. This won’t happen overnight, but we have seen how on-premise and traditional hosted solutions cannot scale to meet the business requirements of today’s asset manager. Regulation is here to stay, and investors are looking for returns from a wider array of asset classes. All of this creates data volume and pressure on reporting and transparency. A new generation of IT and software is required to meet these demands and to bring greater levels of agility and collaboration to the industry. As a portfolio analytics provider, StatPro began planning its journey to the cloud in 2008, and we’re confident this was the right decision as we continue to bring pure SaaS applications to the market.