Data delivery down to a tee – Performance Measurement Review

RIMES was recently asked by Performance Measurement Review to submit editorial content outlining the challenges involved in streamlining the delivery of index data to the performance team.

Performance Measurement Review is published by Osney Media in association with the Performance & Risk Association, which is devoted to aiding learning and development in the areas of performance measurement, attribution, risk and client reporting.

What follows is the full article published in PMR Volume 1, issue 4:

Data delivery down to a tee

If you streamline the delivery of index data to the performance team, you will improve data quality, reduce time to market and ease the burden of data management. Ultimately this will minimise the stress on the performance team, freeing them from worrying about benchmark data and enabling them to concentrate more time on value-add tasks and the investment process.

Data quality is extremely important to the performance team, but it is becoming increasingly hard to keep pace with the wide range of benchmarks required to match fund managers’ investment strategies.

The performance team is now required to report both internally and externally at a much greater frequency, and end users have become better informed and consequently more demanding. In recent years there has been a growth in attribution systems and a major focus on fixed income attribution, with sophisticated models that require a large amount of data. The burden therefore falls on the IT department, and quite often on the performance team, to source a wide range of benchmarks to supplement the portfolio data.

So what are the challenges of sourcing benchmark data? The first question is whether to do it yourself using in-house resources or to outsource to a data provider. If the IT department has generous resources available, it will be suitably placed to meet the needs of the performance team. If the choice is to use a data vendor or data integrator, will one supplier be enough, or will you require multiple vendors to meet all of your reporting requirements?

Whichever route is selected, it is important to be able to keep pace with fund managers’ changing requirements and their desire to meet new mandates. How quickly can the IT department or data vendor react to a request for additional indices so that the performance team can produce the required monthly or quarterly client reports? Does the solution allow for multiple sources of benchmarks across many asset classes: equities, fixed income, real estate, hedge funds and commodities?

Equally, can the internal architecture, such as the data warehouse or middleware systems, cope with the growing number of data sources, multiple file formats and an increasing variety of data items across asset classes?

The demand for custom benchmarks calculated by official index providers is increasing, driven by evolving investment strategies and the need for greater transparency. This area presents a new problem: can the data aggregator handle the provision of multiple custom benchmarks from various sources?

As the list of benchmarks, constituents and files grows, the team needs a way of tracking all of the sources and, equally, of ensuring that every file received is complete. They will need to manage the relationship with each data vendor and keep detailed contact lists to ensure timely responses from each vendor in the event of delays or data corrections.
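
As a simple illustration, a completeness check can be expressed as a manifest of expected files per vendor. This is a minimal sketch in Python; the vendor and file names are hypothetical, not actual product feeds.

```python
# Hypothetical manifest: the files each vendor is expected to deliver
# every business day. Vendor and file names are illustrative only.
EXPECTED_FILES = {
    "vendor_a": ["index_levels.csv", "constituents.csv"],
    "vendor_b": ["tri_close.txt", "fx_rates.txt"],
}

def missing_files(received):
    """Return, per vendor, the expected files that have not yet arrived."""
    gaps = {}
    for vendor, expected in EXPECTED_FILES.items():
        got = received.get(vendor, set())
        outstanding = [f for f in expected if f not in got]
        if outstanding:
            gaps[vendor] = outstanding
    return gaps

# Feed the checker whatever has landed so far and chase the rest.
received_today = {"vendor_a": {"index_levels.csv"},
                  "vendor_b": {"tri_close.txt", "fx_rates.txt"}}
print(missing_files(received_today))  # -> {'vendor_a': ['constituents.csv']}
```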

Once the data has been sourced, questions arise about its transparency. With the data available, is it now possible to replicate the index composition? Is it possible to derive the performance of the index from the constituents to within an acceptable level of tolerance? For the total return versions of equity benchmarks, this will require adjustment factors and dividend information. The data will need to be of sufficient quality to allow the roll-up calculations to take place on a daily and month-to-date basis.
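
To make the tolerance test concrete, here is a minimal sketch of a daily roll-up check: the index return is rebuilt as the weighted sum of constituent total returns and compared with the vendor’s published figure in basis points. The weights, returns and one-basis-point tolerance are illustrative assumptions.

```python
def rollup_return(weights, total_returns):
    """Weighted sum of constituent total returns (price move plus
    dividend, already adjusted for corporate actions upstream)."""
    return sum(w * r for w, r in zip(weights, total_returns))

def within_tolerance(calculated, published, tol_bps=1.0):
    """True if the replicated return sits within tol_bps of the published figure."""
    return abs(calculated - published) * 10_000 <= tol_bps

weights = [0.50, 0.30, 0.20]               # post-rebalance weights, summing to 1
total_returns = [0.0120, -0.0035, 0.0048]  # daily constituent total returns
calc = rollup_return(weights, total_returns)
print(calc, within_tolerance(calc, published=0.00591))  # 0.00591 True
```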

Let us take a look at the issues around sourcing the data by going direct to each official index provider. First, the IT department will need to put in place a number of feed handlers to retrieve and load the multiple file types and formats. It is possible that the data collection process is destined for a data warehouse solution, and that IT is therefore responsible for obtaining data for the front office as well as the middle office. Ownership of the data will need to be established between the performance team, the front office and IT.

In many cases the performance team will take ownership (and the burden) of the data, as they have a much more detailed requirement than the rest of the company. If the IT department has ownership, it is important that the flow of data to the performance team is efficient and not delayed by the workflow of the internal applications. It is highly beneficial if whoever takes ownership has a complete understanding of the dynamics of the underlying data.

Managing data feeds presents a number of challenges due to multiple delivery points. These may include email, FTP or simply downloads from a website. The feed handlers will need to be modified to cover the different options and, of course, be able to parse the various file types and formats.
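
One common way to keep such handlers manageable is to dispatch on the delivery channel. The sketch below assumes hypothetical channel names and placeholder fetchers; real handlers would wrap the vendor’s actual FTP endpoints, mailbox rules or download URLs.

```python
from pathlib import Path

def fetch_ftp(source):
    # Placeholder: a real handler would pull the file via ftplib or SFTP.
    return b"...raw ftp payload..."

def fetch_email(source):
    # Placeholder: a real handler would poll a mailbox and extract the attachment.
    return b"...raw email attachment..."

def fetch_web(source):
    # Placeholder: a real handler would download from the vendor's site.
    return b"...raw web download..."

HANDLERS = {"ftp": fetch_ftp, "email": fetch_email, "web": fetch_web}

def collect(source):
    """Fetch one feed via its delivery channel and land it for parsing."""
    raw = HANDLERS[source["channel"]](source)
    target = Path("landing") / source["name"]
    target.parent.mkdir(exist_ok=True)
    target.write_bytes(raw)
    return target  # format-specific parsing and normalisation happen downstream

collect({"channel": "ftp", "name": "vendor_a_constituents.csv"})
```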

With data updating continuously and indices calculated on a global basis, the loading process becomes a constant cycle. It may even require the handling of updates throughout the weekend to cover the different publishing times between vendors. For example, some fixed income providers such as iBoxx and Merrill Lynch calculate their indices on a weekend when the month-end does not fall on a business day. The data solution receiving the data therefore needs to be as comprehensive as the system processing it, and coping with global processing will invariably result in a requirement for round-the-clock support to monitor and address any delays or issues with the data.
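
The weekend case can be reduced to a simple calendar test: does the month-end fall on a Saturday or Sunday? A minimal sketch, using calendar logic only (market holidays are ignored):

```python
import calendar
from datetime import date

def month_end(year, month):
    """Last calendar day of the month."""
    return date(year, month, calendar.monthrange(year, month)[1])

def expect_weekend_run(year, month):
    """True if month-end falls on a Saturday or Sunday, i.e. when some
    fixed income vendors still publish on the weekend itself."""
    return month_end(year, month).weekday() >= 5  # 5 = Saturday, 6 = Sunday

print(month_end(2024, 3), expect_weekend_run(2024, 3))  # 2024-03-31 (a Sunday) True
```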

Processing and standardising the data is an important task to ensure the items are mapped correctly, and it will require the input of IT, a business analyst and/or a performance analyst. Different calculation libraries will need to be maintained to handle the diversity of equity and fixed income benchmarks: for example, knowing whether the index composition is market capitalisation weighted or factor weighted, and whether there are any diversification or issuer constraints within the index.
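
To illustrate why the weighting rules matter, the toy example below contrasts plain market-capitalisation weights with a naively capped variant. The 10% cap and single redistribution pass are illustrative assumptions; real providers apply their own documented methodologies, often iteratively.

```python
def market_cap_weights(caps):
    """Plain market-capitalisation weights."""
    total = sum(caps)
    return [c / total for c in caps]

def capped_weights(caps, cap=0.10):
    """One naive capping pass: clip oversized names at `cap` and
    redistribute the excess pro rata across the rest. Real index
    methodologies typically iterate until all constraints hold."""
    w = market_cap_weights(caps)
    excess = sum(max(x - cap, 0.0) for x in w)
    uncapped_total = sum(x for x in w if x <= cap)
    return [cap if x > cap else x + excess * x / uncapped_total for x in w]

caps = [800.0, 150.0, 50.0]      # market caps in a common unit
print(market_cap_weights(caps))  # [0.8, 0.15, 0.05]
print(capped_weights(caps))      # [0.1, 0.675, 0.225] after one pass
```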

In the fixed income arena it is important to understand how to handle variations in rebalancing, for instance month-end as opposed to first of the month. Furthermore, the front-office requirements may differ from those of the performance team, particularly when comparing forward- and backward-looking portfolios, such as the statistics and returns universes from Lehman Brothers. In some cases it may be necessary to process additional files on the rebalancing date to ensure that performance can be replicated and the bps noise reduced to within a predetermined level of tolerance.

Once the coverage and availability of data have been established, the focus shifts to data quality, which is of prime importance to the performance team. This makes timely identification of quality issues a key requirement, and it needs to be understood where in the process an issue is uncovered. Does the data vendor spot the issue, does the data warehouse or middleware system capture it, or does the performance system itself? Uncovering an issue is only part of the solution; the location and cause of the actual error then need to be established.

For example, incorrect dividends, adjustment factors or changes in shares or nominal values can cause these kinds of issues. The handling of missing or changing identifiers is also an essential aspect of data management. Perhaps the data error affects only the index, or perhaps a correction is required for both index and constituents. Once the issues have been identified, the data vendor needs to be informed and the issues investigated before a correction file or files are issued and processed. The data validations will then need to be carried out a second time to ensure that the quality assurance levels are satisfied.
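
Checks of this kind lend themselves to a re-runnable validation pass that can be applied to the original file and again to any correction file. A minimal sketch; the field names and thresholds are illustrative assumptions:

```python
def validate(rows):
    """Flag missing identifiers, implausible dividends and large
    share-count jumps in a list of constituent records."""
    issues = []
    for row in rows:
        name = row.get("identifier") or row.get("name", "?")
        if not row.get("identifier"):
            issues.append(f"{name}: missing identifier")
        div = row.get("dividend", 0.0)
        if div < 0 or div > 0.25 * row.get("price", float("inf")):
            issues.append(f"{name}: suspicious dividend {div}")
        prev, curr = row.get("shares_prev"), row.get("shares")
        if prev and curr and abs(curr / prev - 1) > 0.5:
            issues.append(f"{name}: shares moved {curr / prev - 1:+.0%}")
    return issues

rows = [
    {"identifier": "ABC123", "price": 20.0, "dividend": 0.40,
     "shares_prev": 1_000_000, "shares": 1_600_000},
    {"identifier": "", "name": "NoId Corp", "price": 10.0, "dividend": 0.10},
]
# Run once on the original file, and again after any correction file arrives.
print(validate(rows))  # ['ABC123: shares moved +60%', 'NoId Corp: missing identifier']
```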

Understanding and identifying these types of failure can be extremely time-consuming for the performance team. This is where outsourcing the data and quality management to a data aggregator brings greater economies of scale and efficiency.

If you can streamline the delivery of index data to the performance team, you will improve data quality, reduce the time to market and ease the burden of data management.

Ultimately this will minimise the stress on the performance team, freeing them from worrying about benchmark data and enabling them to concentrate more time on value-add tasks and the investment process.

Want to find out more?

If you have any questions about our thought leadership content, please get in touch.