Our July 2008 policy statement on our strategic approach to transparency set out a transparency agenda, designed to provide a range of information about complaint-handling – by us and by others – for wider scrutiny.
Sharing this information with a wide range of stakeholders is intended to facilitate the resolution of complaints on a timely basis and enhance the efficiency of our service. One important part of the transparency agenda is the publication of complaint data on individual financial businesses – including the percentage of complaints upheld.
Our September 2008 discussion paper on publication of complaint data: next steps explained the relevant data that was available for publication and sought comments on various practical issues about publication.
We are grateful to those who submitted comments. There were 46 respondents, including the 44 listed at the end of this document. Most came from the financial services industry. Interestingly for a discussion about transparency, two large financial businesses did not even wish to be named as respondents, and another large financial business submitted its comments in confidence.
This policy statement summarises the main comments we received and the decisions that our board has made in the light of them. We have carefully considered every response, even where specific comments are not set out in this policy statement.
This feedback statement explains:
Some industry respondents took the opportunity to express concerns about the principle of publication.
But other respondents took the opportunity to express support for the principle.
We are assured that there is no legal impediment to our publishing business-specific complaint data in the way described in this paper. As it is incidental to the effectiveness of our role in complaint-handling, it is well within our powers.
A number of public-interest arguments have been put forward in favour of publishing business-specific complaint data. But the key consideration is that, having carefully considered the position, we believe that publishing the data will facilitate the resolution of disputes quickly and with minimum formality.
The complaint-handling rules require consumers to give financial businesses up to eight weeks to try to resolve complaints before they can be referred to the ombudsman service. So the differing approaches of different financial businesses to complaint-handling play a key role in shaping the inflow of work to the ombudsman service and its subsequent handling.
The cost of the ombudsman service is charged, under statutory powers, to the financial services industry – and is ultimately borne by the industry’s consumers. That cost is affected by the number of cases received, and by whether or not financial businesses treat complaints in a way that the ombudsman service considers to be fair and reasonable.
We note the suggestions by some industry respondents that publication of business-specific complaint data will increase our workload, by increasing complaints and causing financial businesses to behave in ways that will make resolving complaints harder. But our judgement is that the most likely result will be to reduce our workload.
Financial businesses will be encouraged to deal with complaints properly in the first place. And if they are confident that they have treated the complaint fairly, they will stand by their decision. There will be no reason to pay off unjustified complaints, because these will be shown up by the percentage of cases upheld.
We are not persuaded by the argument that publication of business-specific complaint data will undermine consumer confidence in financial services. If confidence were undermined by the number of complaints and the percentage upheld, the damage would already have been done – because aggregate complaint data is already published.
It would be better for all our stakeholders, including the financial services industry itself, if they were able to see that not all financial businesses are the same. For example, the comparative data we produced for 11 large financial groups for the half-year July to December 2008 showed:
Publication of the data should help reduce unnecessary referrals of complaints to the ombudsman service.
This supports the notion that the published data should be broadly based. So we intend to publish the data on all financial businesses about which complaints have been referred to the ombudsman service, except where the number of cases falls below a minimum threshold and is not statistically meaningful.
The published data will include comparatively positive information as well as comparatively negative information. This is not an exercise in "naming and shaming". Dealing with financial businesses that behave badly is a matter for the regulators.
Parliament has given the role of handling individual complaints to the ombudsman service and the role of consumer protection to the FSA. So the data is not designed to shape the choices consumers make when they buy financial products. The data will focus on the ombudsman-related issue of complaint-handling.
Which financial businesses are to be covered by the data: A few respondents asked whether the business-specific complaint data would include financial businesses passporting into the UK from elsewhere in the European Economic Area (EEA).
Whether the data should include numbers of new cases:
Which closed cases are to be covered by the data: Some industry respondents said that it would be unfair for the outcome data to include cases which had been taken on by the ombudsman service before the financial business had completed its investigations and issued a final response.
Whether there should be a further breakdown of the data: Different respondents proposed a wide range of ways in which they would prefer the data to be broken down. These included by:
Some industry respondents said that the data should not include complaints in respect of predecessor businesses. A few said that "campaign" cases, such as cases about mortgage endowments or payment protection insurance, should be categorised separately.
We will include data in respect of services provided in or from the UK by all financial businesses that are within our jurisdiction. This does include the UK branches of financial businesses passporting into the UK from elsewhere in the EEA.
We still intend to publish the numbers of new cases as well as the percentage of closed cases upheld – whilst continuing to point out that larger businesses are likely to have more complaints than smaller ones. Excluding the number of cases would provide a misleading picture. Consider, for example, the different pictures given by the following alternatives:
We will exclude from the published data new cases that were referred to the Pensions Ombudsman for resolution during the period.
It is true that we take on some cases where eight weeks have elapsed since the consumer first complained to the financial business but the financial business has not yet issued a final response. If financial businesses are concerned about how these cases will impact on the outcome data, the remedy lies in their own hands – by improving their in-house complaints handling.
We have noted the wide range of ways in which different respondents have asked for the data to be broken down. Some of these are purely subjective. Some rely on information that is not recorded. Some are designed to assist consumers to choose products, which is not the purpose of the data. Others would involve a level of detail likely to obscure the overall picture.
But we do consider that it would be helpful to all our stakeholders if we provide a breakdown of the data according to the same major product groups that the FSA intends to use for complaints-reporting from August 2009:
We do not consider that there is sound reason to exclude data relating to complaints in respect of predecessor businesses – which would otherwise simply "disappear". And, incidentally, it might be no bad thing if the complaint record of a financial business became something that formed part of a buying business’s due diligence when a business changes hands.
Some industry respondents said that we should not publish until we had conducted consumer research to confirm what consumers would find most useful and informative, and we should then pilot the data with financial businesses and the consumer sector before publication.
Some respondents said that any aim to publish the first data in autumn 2009, and possibly for the first half of 2009, was ambitious and arguably premature. They questioned whether this would allow sufficient time for the systems to be tested, verified and audited.
Several respondents said that we should set out our plans generally for how we intended to progress our accessibility and transparency agenda, so that data publication could be seen in context and so that the various interdependencies and timelines could be seen in the round.
Some industry respondents said the data should not be published until the industry was confident in the consistency of the ombudsman service’s complaint-handling and after publication of the promised on-line digest of the ombudsman service’s approach.
Some respondents said that we should not proceed with our publication plans until the FSA had confirmed its plans. But other respondents said that we should not delay publication – and that, if FSA were to change its plans, the ombudsman service should nevertheless proceed.
We appreciate that some financial businesses would welcome a delay in our publishing the complaint data. But we believe that it would facilitate our work, and be in the public interest, for us to proceed as soon as possible. We intend to publish the data every six months, starting from the autumn of 2009. The groundwork is already in place.
Concerning our wider accessibility/transparency agenda:
Either party is free to reject an adjudicator’s decision and "appeal" to an ombudsman, but financial businesses actually do so in less than three per cent of cases. So, bearing in mind the availability of this "appeal", we remain unconvinced by industry arguments based on alleged inconsistency of case outcomes.
We have been liaising closely with the FSA about our respective proposals for publishing complaints data. The FSA will announce in due course its progress in this area. But neither we nor the FSA consider that our publication of business-specific complaints data need be delayed on this account.
Q1: Do you agree that the published data about the outcomes of closed cases should distinguish simply between closed cases where there has been a change, or no change, in favour of the consumer?
Q2: If you consider that the published data about the outcome of closed cases should be divided into a greater number of categories, please say what and why – and indicate how they would be defined in order to eliminate subjectivity and controversy.
More than half of the industry respondents did not agree that we should distinguish simply between closed cases where there has been a change, or no change, in favour of the consumer. They said that this could be unfair to the financial business and misleading to consumers.
Various suggestions were made. Regularly-occurring themes were:
But a significant number of industry respondents and a majority of consumer bodies supported the proposed approach – because it avoided subjective judgements and complexity, which would make the data confusing. Many thought that concerns about the simplicity of the approach could be mitigated by the associated explanatory notes.
Some of the respondents who agreed that the data should be published in a simple format, at least initially, thought that over time (and in the light of how the data was received) the ombudsman service should think about publishing more sophisticated data – including distinguishing cases where financial businesses had deliberately turned down valid complaints.
Some respondents said that the outcome data would be more meaningful if we also published average figures for all financial businesses. The averages would also reflect a range (from small to large) of changes in favour of the consumer, and would help to provide a yardstick.
We accept that some so-called goodwill payments arise because a financial business increases an offer of compensation, which it made at the in-house stage, as a result of a case being referred to us. But that is not necessarily the typical situation in which such payments arise.
A significant number of so-called goodwill payments also arise where financial businesses, having rejected complaints when they were considered in-house, recognise that our investigation is going against them and pre-empt a formal decision by offering a "goodwill settlement" without admitting liability. This has happened, for example, in a significant proportion of the payment protection insurance (PPI) cases we have resolved.
Many respondents disagreed with the suggestion that we should distinguish simply between closed cases where there has been a change, or no change, in favour of the consumer. But fewer addressed the practicalities of how we could grade outcomes in a way that was meaningful, fair, verifiable and avoided both subjectivity and controversy.
A degree of subjectivity was acceptable in the comparative reports that we produced for the 11 large financial groups, because this information was not published – and the data for the 10 comparator groups was anonymised. We do not consider that such subjectivity would be appropriate for published business-specific data.
We do not think that a change in outcome of £300 following the ombudsman service’s involvement could reasonably be called "nominal" (as suggested by some industry respondents). Nor do we think that a 10% change of £5,000 in relation to a claim of £50,000 could reasonably be called "small".
And respondents did not address how to assess the degree of change in outcome in cases (referred to in the discussion paper) where we direct the financial business to do something, such as:
So we do not intend to grade the degree of change in favour of the consumer. But we will make the range of outcomes clear in the explanatory notes. And we intend to publish average uphold rates (based on all the cases we have resolved in respect of all financial businesses during the relevant period) in order to provide a comparison.
Q3: Do you agree that the initial threshold for financial businesses about which we publish complaint data should be at least 30 new cases and 30 closed cases within the relevant period?
Some industry respondents said that a threshold based on numbers of cases alone could be unfair to larger financial businesses, whose higher case numbers would reflect their larger market share.
They said that such a threshold could fail to reflect that smaller financial businesses, with fewer cases, might have proportionately higher uphold rates. This could mislead consumers into believing that it was safer to do business with smaller financial businesses.
Some consumer respondents said that consumers would want access to comparative data on smaller financial businesses – to help inform their decisions on which financial businesses to use, especially in sectors dominated by smaller financial businesses.
Some respondents said that the data would be more meaningful if we also published average figures. And some said that it would be fairer to set the threshold according to the size of the financial business and/or proportion of cases upheld.
A couple of respondents suggested that we should only publish data for the "worst" and "best" performers – the parameters for which should be set against the average for all financial businesses.
Many other respondents agreed that the proposed threshold looked about right and would be a sensible starting point – which could be kept under review in the light of experience and fluctuations in the numbers of cases. One industry body suggested raising the threshold to 50.
Some of these respondents, whilst supporting the proposed threshold for the time being, said that we should keep an open mind towards extending the published data in order to cover all financial businesses in time.
Few respondents addressed the view we expressed in the discussion paper that, below a minimum number of closed cases, the outcome data would not be a statistically meaningful indication of a financial business’s approach.
So we intend to set the threshold so as to include only those financial businesses that had at least 30 new cases and 30 closed cases during the relevant period. But we will keep that under review in the light of experience and future movements in case numbers.
Additionally, we intend to publish average uphold rates (based on all the cases we have resolved in respect of all financial businesses during the relevant period) – so that the uphold rate in respect of the financial businesses whose data is published can be compared with the average for all financial businesses.
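The mechanics of the threshold and the comparative average described above can be sketched in a few lines of code. This is purely an illustrative sketch using hypothetical business names and case numbers (none of these figures come from real ombudsman data), showing how a business would be excluded for falling below the 30-case minimum while its cases still count towards the overall average uphold rate.

```python
# Illustrative sketch only: hypothetical businesses and figures, assumed for
# demonstration - not real complaint data.

def uphold_rate(upheld: int, closed: int) -> int:
    """Percentage of closed cases changed in favour of the consumer."""
    return round(100 * upheld / closed)

# business -> (new cases, closed cases, cases upheld) during the period
cases = {
    "Business A": (120, 110, 44),  # above threshold: published
    "Business B": (35, 31, 9),     # above threshold: published
    "Business C": (12, 10, 8),     # below threshold: not published individually
}

THRESHOLD = 30  # at least 30 new cases AND 30 closed cases in the period

published = {
    name: uphold_rate(upheld, closed)
    for name, (new, closed, upheld) in cases.items()
    if new >= THRESHOLD and closed >= THRESHOLD
}

# The average uphold rate is based on ALL cases resolved in the period,
# so businesses below the threshold still contribute to the yardstick.
total_closed = sum(closed for _, closed, _ in cases.values())
total_upheld = sum(upheld for _, _, upheld in cases.values())
average = uphold_rate(total_upheld, total_closed)
```

With these hypothetical figures, only Businesses A and B would appear in the published data, but Business C's ten resolved cases would still feed into the published average against which A and B could be compared.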
Q4: Do you agree that, where applicable, the published data should identify any larger group to which the financial business belongs?
Q5: If a group includes some financial businesses with case numbers above the threshold and some with case numbers below, do you agree that the published data should include those below – to provide a full picture for the group?
The vast majority of respondents agreed that helpful context would be added if the data identified any larger group to which the business belonged.
Some respondents said that the explanatory notes should make it clear that financial groups can cover a diverse range of businesses, and that the data for one financial business within a group was not necessarily indicative of the performance of other financial businesses within the group.
Some respondents suggested that care was needed to ensure that group information did not confuse consumers or make the data less meaningful for them.
A majority of respondents said that, where a group included some financial businesses with case numbers above the threshold for publication and some with case numbers below the threshold, we should not publish the data for those below the threshold.
Some respondents said that the data for the individual financial businesses would be meaningless if comparisons could not be made with similar types of financial businesses (for example, investment advisers) below the threshold which were not part of a larger group.
Respondents who considered that a threshold could be unfair to larger financial businesses said that to include financial businesses with lower volumes simply because they belonged to the same group would only compound the unfairness.
With the support of the vast majority of respondents, we intend to identify any larger group to which the business belonged. If this changed during the period, we will identify the group to which it belonged at the end of the period.
Our intention to break down the data according to five product groups may go some way to addressing the concerns of some financial businesses that consider they have a better record than others in the same group. But it is really a matter for group management to ensure that all subsidiaries adhere to appropriate standards.
We have decided not to publish the data for financial businesses that fall below the threshold, even where other financial businesses in the same group come above the threshold. We have concluded that this could be open to misunderstandings and false comparisons.
Q6: Do you agree that, where applicable, the published data should identify any other major trading names under which the financial business deals with the public?
The vast majority of respondents agreed that identifying major trading names would be helpful. One or two said that the data itself should be published by trading name – as this would reflect the name the financial business had used, and would be most meaningful.
Some respondents asked how appointed representatives of networks would be treated for the purposes of the data. Some other respondents asked how such information would be kept up-to-date.
With the support of the vast majority of respondents, we intend to identify major trading names used by a particular financial business or group. If the use of a particular trading name moved from one financial business to another during the period, we will identify the user at the end of the period.
It would not be practicable for us to quote the names of all of the appointed representatives of FSA-regulated financial businesses, including networks. But a search against a particular representative in the FSA register will identify the principal financial business.
As explained in the discussion paper, it is not possible for us to break down the data itself by trading name. Each complaint is necessarily recorded against the legal entity which is legally responsible for answering it. But our intention to break down the data according to five product groups may go some way to help.
Q7: Do you agree that notifying the financial business of the outcome recorded at the end of each case and having the aggregate data sample-tested by our internal auditors (currently KPMG) will provide sufficient verification of the data?
The majority of respondents agreed that notifying the financial businesses of the outcome recorded at the end of each case would be very helpful and would allow businesses to check what had been recorded.
Some respondents said that we would need to have a robust process to deal with any challenges by the financial business to the outcome recorded – and that any data still subject to challenge should be excluded when the data was published.
Some respondents said that the data should be checked by an independent person. And some thought that the checks should also cover a review of the ombudsman service’s processes and systems.
The majority of respondents agreed that notifying the financial business of the outcome recorded at the end of the case, and having the aggregate data checked by our internal auditors, would provide sufficient verification of the data.
Several respondents said that a summary of the results of the checks should be published – either alongside the data itself or alternatively in the ombudsman service’s annual review.
Several respondents said that financial businesses should also be given their data before publication, so that they could reconcile and verify the aggregate figures themselves.
One or two said that if the checks identified the need for changes or additional resources, a full cost-benefit analysis should be conducted to justify any further costs that would be borne by the industry.
As indicated in the discussion paper, we will continue to include correct recording of outcomes as a key part of our case-management and quality-control work. We have significantly increased the resources allocated to quality control.
Since the beginning of January 2009, we have been notifying the financial businesses of the outcome recorded at the end of each case – and we have established a process of escalating any challenge by the financial business.
This should enable us to resolve any challenges speedily, and before the relevant data is published. But we will exclude from the data any cases where the financial business challenged the recorded outcome at the time and we have not yet reached a view on the merits of the challenge.
Our internal auditors (KPMG) have checked the processes that we use to capture the data, and we will ask them to sample-check the data against case records before publication. Any error in the published data would be damaging to our own reputation, so we share an interest in the data being properly verified.
Those respondents who called for an independent auditor and a published report appear to have misunderstood the status of our internal auditors. They are an international firm of chartered accountants, bound by professional standards, and responsible to our audit committee (which consists only of non-executive board members).
We also plan to give financial businesses named in the published data (or their parent groups) prior notice of the data we intend to publish about them. But we may cease to do so without further consultation if this leads to any leakage of the data, or other abuse of confidentiality, by financial businesses before publication.
Q8: Do you agree that "our" data should be published in respect of each half of the calendar year (January to June and July to December) in line with the FSA proposal?
Responses to this question were mixed. One or two respondents said that the data should be published more frequently – for example, on a rolling-month basis.
Some respondents said that the data should only be published yearly.
A significant proportion of respondents agreed that it would be sensible to publish data in respect of the same periods as proposed by the FSA. This would help to provide a more rounded picture of a financial business’s complaint record, and the accompanying explanations could make it clear that the data related to different sets of complaints.
We need to draw a reasonable balance between publishing data that is not stale and the resources required to compile, verify and publish the data. We consider that six-monthly publication achieves the right balance.
Publishing the data six-monthly has the additional advantage that it would help to provide a more rounded picture if the FSA also publishes data for the same period. It is not dependent on that, but we would consider reviewing the periods if the FSA’s plans were to change.
Q9: Have you any suggestions on how the possible format for publication [published in PDF format in the discussion paper] could be improved? Are the generic explanations adequate?
Some respondents said that it was unlikely that the generic explanations would be meaningful for consumers – and that external commentators, including the media, would disregard the explanations and focus on the headlines to be derived from the raw statistics.
Some other respondents said that the format could only be decided once final decisions had been taken about content.
Those respondents who had asked for additions to the data published about each financial business (for example, product-types or degree of change in outcome) – as described earlier in this document – naturally asked for those additions to be reflected in the format and explanations.
Specific suggestions from other respondents included:
Some respondents warned that information overload or poorly presented information could cloud rather than illuminate understanding. One or two respondents said that any negative consequences would be minor and would be outweighed by the positive consequences.
A significant proportion of respondents were content with the proposed format and explanations. They said that any calls for additional data and/or context should not delay publication; the data would allow for meaningful comparisons as it stood. If necessary, the information could be added to or amended over time, in the light of experience.
We agree with those respondents who said that it would be helpful to decide which of the explanations were essential to a proper understanding of the data and focus on these. Too much information is likely to produce confusion rather than clarity.
In the light of what we have said earlier in this document, we will add:
We will conduct further research into the format and explanatory notes before publication – with the aim of maximising understanding (and minimising misunderstanding) by all users of the data.
Q10: Have you any other comments on the practical matters that remain to be resolved in implementing the decision to publish business-specific complaint data?
One or two industry respondents asked for reassurance that the ombudsman service’s adjudicators would not be incentivised to uphold complaints.
A number of respondents said that we should publish a communication plan for the publication of the data – including prior positioning of the underlying factors that influence complaint data, to counter the potential for crude headlines and distorted interpretations.
Several respondents asked if the ombudsman service would have a dedicated help-line for queries about the data, including for any challenges by financial businesses after publication.
A couple of respondents said that, as the outcome data would already be historical at the time of publication, the ombudsman service should strive for timely handling of cases – and publish its own timeliness figures alongside the data.
Some respondents asked for clarification of any contingency plans the ombudsman service has in place, to cope with any increase in the number of cases generated by publication of the data and targeting by claims-management companies.
One respondent suggested that the European Commission’s proposals to harmonise the recording and reporting of complaints data should be borne in mind, as they may impact on our plans.
Our role is to be impartial. We take that very seriously. Our adjudicators are not incentivised to uphold complaints.
We will put in place appropriate media arrangements around the publication of the data, just as we do with publication of our annual review and other higher-profile issues. But, as explained in the discussion paper, we do not consider it part of our role to use the data in order to comment publicly on the complaints-handling record of individual businesses.
We will continue to publish data about the timeliness of our own case-handling in our annual review and in our corporate plan and budget. And the explanatory notes accompanying the published data will make clear that, as some of the closed cases might have been with us for some time, the outcome data will not necessarily represent the most recent performance of a financial business’s in-house complaint-handling.
The workload projections in our corporate plan and 2009/10 budget take account of the possible effects (both positive and negative) of data publication.
We are aware of the European Commission’s proposals to harmonise the recording and reporting of complaints data – and, as members of their consultative group on this, we are liaising closely with them.
A significant majority of (industry and consumer) respondents who commented on this point said that – though the outcome data could be meaningful on its own – the numbers of new cases would be more meaningful if they could be seen in the context of the market share of the relevant financial businesses.
Some, but by no means all, of these industry respondents said that publication of new case numbers should be delayed until contextualisation could be agreed. Consumer respondents favoured publication anyway, with contextualisation following in the future when and if a suitable method could be agreed – and cautioned against over-complexity.
As we explained in our discussion paper:
In November 2008 we brought together a group of representatives from industry bodies, consumer bodies and the FSA – to begin a process to see whether they could agree how market share can be measured and published. They have met again since, but there are no firm proposals to report yet.
Though the working group was asked to focus on the specific issue of contextualisation, some members have found it difficult to avoid spending significant time discussing the principle of publication. We hope that the decisions recorded in this paper will help the group to concentrate on the issue for which it was established.
We are conscious that, in the absence of a deadline, this process could potentially take a long time – especially as it involves obtaining agreement from some amongst the industry who are not enthusiastic about publication at all. We do not consider that publication of the data should be postponed indefinitely.
The issue of contextualising the complaint data has been in the public domain at least since we published our discussion paper in September 2008. But, to give the working group a fair opportunity to make progress now that our intentions have been further clarified, we will not publish the first set of data before 1 September 2009.