Exploiting Data in Wealth Management
Despite the hype, Big Data is not a top priority for private bankers and wealth management CIOs. Maybe it’s just terminology. But more likely, it indicates that private banks don’t exploit the full potential of their data. They should rethink their data management strategy and analysis approach by adopting the technology and algorithms employed by internet giants like Google and Amazon.
Big in the Eye of the Beholder
What an organization considers to be Big Data depends on its data management capabilities and the capabilities of the applications that analyze the data. The majority of private banks and wealth management firms rely on the database management tools and traditional data processing applications they already have on hand. For them, Big Data is data they have collected but cannot fully exploit with the tools they are using.
The financial industry has always been data-rich, but in recent years, the amount of data collected and stored has increased steadily. In private banking, this increase has come from new regulatory requirements as well as additional client and sales channels. Structured transactional data accumulate alongside call reports and logs of customer interactions. However, only a subset of these data gets systematically analysed, mostly for management information and decision support purposes, as well as for risk and compliance reporting.
Little is done to offer insight and guidance to the front office, yet that is precisely where data analytics becomes a differentiating factor. With the type of software programs that Google, Amazon, and other Big Data players have developed, the front office can get valuable information to personalize its client advisory process and to deliver better service. Smart software enables browsing through data of different types and structures. It allows data to be analysed without having to transform and load them into the rigid model of a data warehouse. It lets analysts dynamically define what they are interested in and test new hypotheses on the fly.
Google and Amazon have shown how it should be done. They abandoned the traditional warehousing approach in favour of a more dynamic analysis approach.
In the traditional approach, data are extracted, transformed and loaded into a data warehouse for further analysis. Many of the legacy systems used by private banks lack the models needed to store all the details about clients, products, fees, orders, etc. Over time, and under pressure to capture more data, these limitations have led to work-arounds, the proliferation of data silos, and an overall more complex data architecture. The road from data silos to a warehouse is rocky. Data elements with heterogeneous semantics and different granularity must be matched and mapped into one format. This process is time-consuming and error-prone. The approach followed by the Internet giants greatly facilitates this pre-analysis phase: Data are analysed at their source, and relevant subsets are subsequently assembled with those from other sources for the final output. Analysis does not have to wait until the data reach the warehouse.
The other major difference between the traditional and new approach pertains to the analysis itself. In the traditional approach, the analysis is predefined: You know what you are looking for. In many cases, the model of the data warehouse is already geared towards the type of analysis it has to support. Data are aggregated or truncated to fit into the schema. During this process, information is lost. A good illustration is the purchase price of a product, which is averaged at the portfolio level in the warehouse. Using the warehouse, the analyst cannot retrieve the client's profit or loss for this product at the transaction level. With the new approach, no information is lost. The real beauty, however, is that the analysis can be defined dynamically and changed from one analysis step to the next.
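To see what gets lost, consider a minimal sketch in Python of one client holding one product bought in two lots. All names and figures are illustrative assumptions, not drawn from any real system.

    # Hypothetical transaction-level records for one client and one product.
    transactions = [
        {"quantity": 100, "purchase_price": 50.0},
        {"quantity": 200, "purchase_price": 80.0},
    ]
    current_price = 65.0

    # Transaction-level profit or loss: only possible with the full detail.
    pnl_per_transaction = [
        (current_price - t["purchase_price"]) * t["quantity"] for t in transactions
    ]
    print(pnl_per_transaction)  # [1500.0, -3000.0]

    # Warehouse-style aggregation: a single average purchase price per position.
    total_quantity = sum(t["quantity"] for t in transactions)
    average_price = (
        sum(t["purchase_price"] * t["quantity"] for t in transactions) / total_quantity
    )
    print(average_price)  # 70.0

The overall position still nets out to the same result, but from the average price of 70.0 alone, the gain on the first lot and the loss on the second can no longer be reconstructed.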
Gleaning Valuable Findings from the Sea of Information
The basic mechanism behind many powerful analysis programs for large datasets is a 'map' procedure that performs filtering and sorting and a 'reduce' procedure that performs a summary operation. Assume that we want to know about all high-net-worth individuals who are invested in gold and own a vacation home in Hawaii. A first iteration over client portfolios will extract the portfolios with an asset allocation to gold. The second iteration will go over the call reports of the clients selected during the first iteration and extract those reports that mention a vacation home in Hawaii. The results will be aggregated to generate the final output, such as a list of clients matching the two criteria.
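The sketch below shows how such a two-step filter could look in Python. It is deliberately simplified; the data structures, field names, keyword matching, and wealth threshold are assumptions for illustration, not a description of any particular bank's systems.

    # Map step 1: keep high-net-worth clients with an allocation to gold.
    portfolios = [
        {"client_id": "C1", "total_assets": 12_000_000, "allocations": {"gold": 0.05, "equities": 0.60}},
        {"client_id": "C2", "total_assets": 3_000_000, "allocations": {"equities": 0.90}},
        {"client_id": "C3", "total_assets": 25_000_000, "allocations": {"gold": 0.10, "bonds": 0.40}},
    ]
    call_reports = [
        {"client_id": "C1", "text": "Client mentioned renovating the vacation home in Hawaii."},
        {"client_id": "C3", "text": "Discussed succession planning; no real estate topics."},
    ]

    HNW_THRESHOLD = 5_000_000  # assumed cut-off for 'high-net-worth'

    gold_clients = {
        p["client_id"]
        for p in portfolios
        if p["total_assets"] >= HNW_THRESHOLD and p["allocations"].get("gold", 0) > 0
    }

    # Map step 2: of those clients, keep the ones whose call reports
    # mention a vacation home in Hawaii.
    matches = {
        r["client_id"]
        for r in call_reports
        if r["client_id"] in gold_clients and "vacation home in hawaii" in r["text"].lower()
    }

    # Reduce step: aggregate into the final output, here simply a sorted client list.
    print(sorted(matches))  # ['C1']

In a real deployment, each map step would run in parallel close to the data source (portfolio system, CRM), and only the much smaller intermediate sets would be brought together for the final aggregation.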
Conclusion
Private banks should embrace Big Data as an evolutionary move to leverage the assets they already have. They should also adopt the analysis approach spearheaded by Google, Amazon, and others. What needs to be highlighted is the intelligence necessary to make good use of the data and the innovation necessary to come up with new ideas on how to slice and dice the data. One of the advantages of the traditional analysis approach has been the clear definition of what you want to get from the data. To profit from the new approach, banks need to put analysts in charge who know the business, know what data are available, and understand business needs and priorities.