What is impact?

Impact is about deliberate positive change.

Many organisations, including charities, exist to create impact. Impact is the deliberate, positive change, or outcome, that a charity creates in the lives of the individuals and communities it works with, in the environment, or in the economy. Measuring impact is about determining the extent to which an organisation has achieved the change it intended.

Impact matters for charities that exist to create positive changes in line with their mission. It matters for non-charity social enterprises, which are trying to do much the same, as well as for an increasing range of private sector companies wishing to lighten their footprint and to do good as they go.

Why impact matters

If you are a charity delivering change in people’s lives, or improvements in the environment, impact will be important to you and is likely to be a key part of your organisation’s mission or purpose. You will want to understand what change is most needed and be able to plan how to make that happen. You’ll want that to be done efficiently and effectively, and what you achieve should inform your internal and external reporting, so that:

  • Everyone inside your organisation, including your Board, can determine your progress against your strategy and be energised by the difference you are making as you pursue your goals
  • Key external stakeholders such as partners and funders can understand what you are doing and support it; this could also pave the way to enable you to lead and influence others in their own impact journey.

When it’s useful to measure impact

There are three particular points in a strategic cycle when it is most likely to be useful to consider impact:

  1. Planning what you are going to do before delivery commences (or before any data collection for measurement has happened): you should project forward and identify the impact that is expected to arise from an activity (the ‘hypothesis’). This is helpful in the same way that financial budgets enable delivery to be monitored after it has started. Your impact hypothesis will enable you to clarify and test the sense of your plans. It gives a benchmark against which to judge results, provides an articulation of what you are doing that you can use to explain it to others, and forms a foundation for deciding what to measure.
  2. Guiding and structuring delivery: ongoing regular monitoring helps to manage and guide delivery. Just as you would regularly refer to financial accounts to ensure that performance is in line with budget (and identify opportunities to improve), it is important to do the same for impact. You can also re-check the accuracy and completeness of the impact hypothesis at regular intervals and think about lessons you might draw across into future delivery of this and other activities.
  3. Looking back to measure effectiveness, and to develop learning: during and after an activity has been delivered, it is useful to measure and report on what has been achieved. This helps to meet the reporting needs of stakeholders including funders and commissioners, as well as meeting the obligations for public benefit reporting. Measurement is also an important tool to put delivery into its full context. For charities, some activities do not generate profit or surplus: this does not mean that they are poor value for money. The ‘value’ part of the value for money equation often includes a blend of social and environmental outcomes that must be considered alongside other measures.

Find out more about internal and external uses for impact measurement and the needs of your audience.

Definitions of key terms

This resource uses the GECES definitions*. GECES take outcomes to be the changes that arise, whenever they happen, and define impact as the extent to which those changes are caused by the organisation. The GECES group see value in separating all of the changes from those that can be evidenced as being caused by the organisation’s activities, thereby making the causal link clearer.

Inputs: the resources used in the delivery of an intervention

Activity: what is being done with those resources – how it touches intended beneficiaries

Output: the tangible products or services from the activity

Outcomes: the positive or negative change arising as a result

Impact: the measurable positive and negative effect of an organisation’s activities: the extent to which the outcomes arising are attributable to the organisation’s work

Indicator: a way of attaching value to outcomes and impact

*The GECES Standard

Telling your impact story

Stories matter

At the core of impact planning, management and measurement are stories. As a charity, you need to be able to tell clear stories about:

  • The lived experience of those you work with and exist to support
  • The positive changes you aim to help them create in their lives
  • The changes that your charity has already helped create

These stories must be true and must be evidenced, but at the same time we shouldn’t be afraid of talking about them as stories. Trying to measure impact without a clear story on which to base it is likely to result in measuring what doesn’t matter.

Telling a powerful story about your impact involves:

Being clear about your focus

You need to be clear about the focus of your organisation’s work. Who or what is intended to benefit from your activities? Depending on the nature of your organisation, this may be the people with whom you work, or the environmental benefits you seek to create.

Explore the needs, taking a broad view of people’s lives and the things that affect them (e.g. the welfare system, transport systems and so on) before your charity gets involved. You can then build out to understand which outcomes would meet those needs (in simple terms an outcome is a met need), and importantly which of these outcomes might be influenced by your charity’s activities.

In doing this you should also consider who benefits from the changes made. Outcomes may be primary – directly affecting the lives of those you seek to support. Or the benefits may be consequential or secondary effects in those same people’s lives or the lives of others. Thus those who benefit from the outcomes may be a much wider group of stakeholders than just the people your charity directly works with, and the interests of all should be recognised, not least because everyone’s perspectives will then influence what you measure.

An example of primary and secondary outcomes

In a social housing construction project, a primary outcome may be building energy efficient homes, the beneficiary being the environment. The consequential or secondary outcomes of this primary one might include:

  • Greater affordability of the housing for the tenant, with lower energy bills
  • The economic viability of the property for the housing association that owns it: the rent is more affordable, costs are lower, and the property can be left for longer without major capital works to uprate its energy performance as standards change.

In this example, the impact extends well beyond the primary outcome, and not taking the secondary outcomes into account would limit the breadth of understanding of the true impact.

Understanding how the change happens

You then need to explore what it is that makes those outcomes happen – in other words, your activities. In doing this you should consider not just what you do, but the way in which you do it, and the necessary reaction from those you seek to support.

The impact story you've been developing will give you much of the content you need to develop your Theory of Change: what you do, why you do it and your underlying rationale.

A Theory of Change is an explanation of how the changes you seek to create are achieved.

It clarifies what change is needed, for whom and why, and fills in that missing middle: how do the organisation’s activities contribute to making that happen? It gives an explanation but it also gives you confidence in the logic and value of what you are doing and the resources you are using.

A Theory of Change provides a key foundation for meaningful measurement - if you know what should happen and how, you can measure more effectively by focusing on those aspects. You measure what matters.

Building and evidencing your stories

Building and evidencing your stories can happen when you are developing your strategy or programme of work, before or after piloting, or once the full programme is running. There will be different levels of evidence and insight available, but regardless of that you can and should go through the thought process in much the same way: by engaging your team in developing the stories; gathering the data; thinking about the number of impact stories you’ll need; and deciding whether you go for a short- or long-term horizon.

Engaging your team

Involving staff and volunteers who help to deliver your organisation’s activities is one of the best ways of gaining a good overview of the stories of people you support. They will often be familiar with multiple case studies and are also likely to have a better understanding of the circumstances of those for whom your activities haven’t been successful (this is a group you may find difficult to reach for research purposes).

The people your charity works with will often be experts with regard to their own lived experiences. Commissioners, trustees and other stakeholders may be more remote from the delivery end but will have a good understanding of more high-level factors.

There is a broader cultural point here. Impact reporting should be something that engages everyone in the organisation, not just senior management. It should be motivating for delivery teams to see the value of what they are doing in their daily work, as well as informing strategy at board level. It should encourage further analysis to understand how you can keep improving. For impact reporting to take a central role in strategy development, you need the whole organisation to understand, believe in and value it.

Gathering the data

There are many different types of data from which your stories could be built and various ways in which you might gather that data. Use this checklist to help you get started.

Several of these approaches involve gathering data in the form of stories, expanding and exploring it from different perspectives, identifying and challenging interpretations, and drawing in the evidence to support the view obtained. The advantages of gathering information in this way are:

  • Getting detailed, nuanced experiences which embrace not just what was intended or happened, but why
  • Building confidence by triangulating – comparing and reconciling – the information from several sources.

However you choose to combine these elements to build and evidence your stories, the gathering of data needs to be followed by an analysis of what you have discovered, drawing out a conclusion and testing that back. You could do this with, for example, your original workshop group/s, or with a board/management steering group, to assess whether what you have concluded makes sense.

How many stories do you need?

Think about the number of impact stories you’ll need. It is very rarely possible to capture everything that an organisation does for everyone that it works with. However, it is important to capture a reasonable overview of the majority of your activities and the outcomes achieved. Each organisation will find its own level of detail, but there are two approaches that can frequently be helpful:

  • Delivering multiple services to the same people: seek to obtain or build case studies for commonly occurring service combinations;
  • Delivering a common service to multiple groups: in this case it would be sensible to start by looking at service user archetypes.

Generally speaking, aim for no fewer than four and no more than seven service line or service user archetype stories: this keeps the enquiry and reporting exercise at a manageable level.

Short or long term analysis?

You will need to reflect on whether the story is a long- or short-term one. Some activities have life-long effects while others may be much shorter-term.

It is rare for organisations to track people they have worked with for the remainder of their lives. In these situations, the longer-term outcomes may need to be based on assumptions and evidence from research that has followed the effects of similar interventions for a longer period (or has shown the damage from failing to intervene as a counterfactual). It is fine to recognise an effect in your storylines without being certain enough to include it in the evaluation.

When you are ready to tell your story

Once you’ve gathered and analysed all of the information and looked at what is helping and how it works – both the approaches and how the response of the people you support enables the change to happen – you will be ready to tell your impact story.

You can present this in a diagram or as a written explanation. There are a number of diagram formats in common use and you should consider which best suits you and your audience. Common to all of them, though, is an explanation of:

  • Who is being helped and what their needs are
  • What primary and secondary outcomes would meet these needs, and how the primary ones lead to the secondary ones
  • What activities the organisation undertakes to cause or encourage those outcomes, and what approaches – the characteristics of delivery and the necessary response from the people you support – make them uniquely effective.

Impact framework development and selecting metrics

The outcomes that you have identified in your story-telling exercises can now be:

  • Tested as to their relevance to stakeholders (i.e. those with an interest in your outcomes and impact)
  • Captured in an impact framework
  • Matched with metrics that help stakeholders understand whether the outcome has been delivered.

Before you explore these in more depth, think about the eight characteristics of good measurement², which should be:

  1. Relevant: related to and arising from the outcomes it is measuring
  2. Helpful: in meeting the needs of the stakeholders, both internal and external
  3. Simple: both in how the measurement is made and in how it is presented
  4. Natural: arising from the normal flow of activity to outcome
  5. Certain: both in how it is derived and how it is presented
  6. Understood and accepted: by all relevant stakeholders
  7. Transparent and well-explained: so that the method by which the measurement is made and how that relates to the services and outcomes concerned are clear
  8. Founded on evidence: so that it can be tested, validated and form the grounds for continuous improvement.

² GECES standards.

Relevance to stakeholders

Picking up on the first two of the eight qualities of good measurement in the eyes of the stakeholders – those expecting to use the measurement – what would be relevant and helpful? This may be obvious from your work in developing your impact story, but it may be worthwhile to have a further conversation with them.

Just because a stakeholder thinks that they need a particular metric doesn’t mean you have to provide it. Explore that further: will it make a difference? Is the effort needed to gather the data proportionate to the benefits of having it? Is there a better way of providing that information?

Building and maintaining an impact framework

A Framework comprises:

  • the outcomes you are seeking to measure
  • the metrics you are attaching to them to make that measurement possible
  • the link back to your impact stories and the answer to the question 'what is causing those valuable outcomes?'

A Framework should certainly be a solid foundation for meaningful measurement. However, it should not be fixed and immovable: it should grow and be refined as circumstances, needs and priorities, stakeholders’ views and involvement, and approaches to delivery develop and change.

It should be as focused as possible so it doesn’t become unwieldy. Wherever possible³, it should share common ground with the frameworks used by other organisations in similar fields.

³ GECES suggests that this might be 80% of the time.

Deciding on the type of metrics

As well as ensuring that your metrics, and the measurement that you draw from them, meet the eight characteristics outlined above, there are three other key areas to consider:

1. Different metrics for different purposes?

There are a number of different purposes for impact measurement. It follows that different aspects may be relevant for different purposes, or different stakeholders, and they may require different metrics. As a simple example, a Local Authority adoption team may want to know how many children achieve permanent new homes that stand the test of time, whereas a social worker supporting the placement may want to know in more detail what is working for the child and the parents.

2. Outcomes themselves or outputs?

Sometimes it is going to be better to measure outputs, as they are more tangible and give early clues about whether or not you are on track to achieve the outcomes.

Frameworks may include a mixture of outcome metrics and informative output metrics, and frequently do.

3. Financial or other metrics?

Do we need a financial measurement, or is something else more helpful? The answer is ‘it depends.’ Financial metrics are useful where the change carries a financial effect that is relevant to the stakeholder.

Where financial values are placed on outcomes, they broadly fall into three types:

a. Costs avoided – a cost that would have been incurred in dealing with the situation, had the outcomes not been achieved, no longer has to be spent. This could include the release of resources for something else.

b. Efficiency gains – the cost of supporting a person or situation still arises but is lower than it would otherwise have been. An example might be an intervention that supports users of public services to engage with them effectively.

c. Wider consequent financial gains – these may be income for a family, economic activity within a community, or employment or productivity gains, to name just a few.

Gathering and storing data

You need to gather and store enough data to reinforce your story and justify what you are claiming. This checklist will help you think about data integrity, and the list of possible methods of data gathering will help you develop your ideas about the options open to you.

Data protection checklist:

When gathering and storing data you should ask yourself:

  • Does the person providing the data know the purpose for which it will be used, and have they consented?
  • Is the data stored securely – both in terms of confidentiality and so that it cannot be lost?
  • Has the data gathering and storage complied with necessary laws, including GDPR?
  • Have vulnerable subjects’ interests been protected?

See the Tools for success: Compliance guide for more information on data protection legislation.

Methods of data gathering:

Think about which of the following will work best for you and the measurement data you are trying to gather:

  • Semi-structured interview (conversation around a series of topic headings) and transcription
  • Written questionnaire (online or physical)
  • Questionnaire, but delivered by interview
  • Workshops or other conversational approaches
  • Documentary review, including web-based searches and literature reviews
  • Financial and other data reviews
  • Results of beneficiary self-assessments
  • Some newer approaches such as film, personal audio or video diaries.

There are a number of important considerations when thinking about gathering and storing data, including ensuring that data is valid – sound and reliable – and making the necessary adjustments to take into account things that could have happened anyway, or outcomes that others may also have contributed to.

Data validity

You should always seek to produce measurement that is sound and reliable for its purposes. Things to consider include:

  • The systems by which data is gathered – are they sound?
  • The data gathered – is it open to bias, so it may not mean what it appears to mean?
  • The analysis of the data – is it appropriate, fair and objective?
  • The conclusions – are they rational, balanced and fair?
  • The disclosure of information to the reader – is it sufficient to enable them to understand the methodology and the conclusions, and to be able to repeat the work themselves, with the right access to the people involved?

Independent assurance

Independent assurance or validation is a testing of some or all of these points by someone who is experienced, but not involved in the data gathering or analysis. It follows a programme of work, and results in a report that may be:

  • Published externally, with the published impact report, like a financial audit report
  • Written up and used internally, like an internal audit report.

Adjusting for deadweight, attribution and displacement

These three adjustments originate from the financial evaluation approach in Social Return on Investment (SROI) but are of relevance in other forms of evaluation. When presenting the findings of your evaluations, it is wise to show percentage deductions to eliminate the value of:

  • Outcomes that would have happened in any case (deadweight)
  • Another person/organisation’s contribution to the outcomes you are claiming (attribution)
  • Damage caused elsewhere as a result of your activity (displacement) – or to include an explanation of why you are confident that no such damage has arisen.

Rather than applying these against each element of a financial calculation, or against non-financial metrics, it is more sensible to include them as adjustments to the overall result. Like other areas of uncertainty, sensitivity testing on these deductions is recommended.
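
As a purely illustrative sketch, the arithmetic of taking these percentage deductions against an overall result might look like the following. All figures and percentages are hypothetical, and multiplying the deductions through in sequence is just one common convention:

    # Illustrative sketch only: all values below are hypothetical assumptions,
    # not figures from this guide.

    def adjusted_impact(gross_outcome_value, deadweight, attribution, displacement):
        """Apply the three percentage deductions to an overall evaluated result."""
        value = gross_outcome_value
        value *= (1 - deadweight)     # outcomes that would have happened in any case
        value *= (1 - attribution)    # others' contribution to the outcomes
        value *= (1 - displacement)   # damage caused elsewhere as a result of the activity
        return value

    # Hypothetical example: £100,000 of evaluated outcomes
    central = adjusted_impact(100_000, deadweight=0.20, attribution=0.25, displacement=0.05)
    print(f"Central estimate: £{central:,.0f}")

    # Simple sensitivity test on the deadweight deduction
    for dw in (0.10, 0.20, 0.30):
        print(f"Deadweight {dw:.0%}: £{adjusted_impact(100_000, dw, 0.25, 0.05):,.0f}")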

There is more on deadweight, attribution and displacement in the next section about pitfalls and how to avoid them.

Common pitfalls and how to avoid them

Impact measurement comes with several challenges and some common pitfalls. Many of these stem from a quest for certainty rather than reasonableness. These are some of the issues you may encounter and how to mitigate them:

Not developing your stories from a wide enough perspective

This comes from not exploring widely enough the secondary outcomes around the life of those whom you support. Positive effects for a wider group of stakeholders may be missed, and the benefits not highlighted and explored. The outcomes in the life of someone you support may also not be followed far enough into the future, again underestimating the benefits. The answer is to ensure you explore these aspects fully as you develop your impact stories.

Seeking certainty on both the baseline and outcomes achieved

You can make rational assumptions as to what would have happened, but you can’t know. Using a control group may fall into the ethical trap of denying an intervention to those that need it. You can avoid this by:

  • Gathering evidence about the trajectory of the person you support, before the intervention
  • Making reasonable, evidence-based assumptions
  • Sensitivity-testing those assumptions.

Assuming your hypothesis is complete and accurate

As an organisation, you may be starting with the presumption that your work is valuable, and you may even have a precise view as to how and where that impact arises. Equally, you might have a more sceptical view that the organisation does not achieve impact. It is helpful to have a hypothesis, provided that you approach the research process to test it with an open mind. Remaining fixed on your hypothesis might risk:

  • Failing to see unexpected ways in which you are helping people that emerge from their stories
  • Failing to structure enquiry in a sufficiently open way to enable broader themes to emerge
  • Being unable to believe the findings that emerge from your enquiry (for whichever reason).

Using ratios in your conclusions

Several methodologies, notably Social Return on Investment (SROI), present a result in the form of a ratio showing the impact return per pound spent. While this may help to present the value of your impact succinctly and in a simple way (and one that may be more accessible for some audiences), it has limitations, including:

  • It is not possible to measure everything and so the ratio that you present reflects the impact of what is measurable compared to the cost of all of your activity. The two don’t match (however close you may get to measuring everything);
  • Ratios invite comparison between activities and between organisations that may not be fair; and
  • Your cost of delivery might not be the right comparator to use. For example, where an alternative (but lower impact) service would be available for the same cost, the proper presentation would be the value of your impact (adjusted appropriately for deadweight) set against the £nil incremental cost of service delivery: that ratio is not possible to calculate.

In general, it is best to state the value of your impact and the excess of that impact over the cost of delivery. In doing so, it is important to state clearly any issues that limit comparability, and to present your impact as being “at least” the answer you have calculated.
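
By way of a hypothetical illustration (the figures below are invented, not drawn from this guide), the “at least” presentation might be set out like this:

    # Hypothetical figures for illustration only.
    measurable_impact = 57_000   # value of the outcomes you were able to evidence
    total_cost = 40_000          # cost of ALL delivery, whether its impact was measurable or not

    ratio = measurable_impact / total_cost
    excess = measurable_impact - total_cost

    # The ratio compares only the measurable impact with the full cost,
    # so it understates the true return.
    print(f"Ratio: {ratio:.1f} : 1 (understated)")
    print(f"Impact of at least £{measurable_impact:,}, at least £{excess:,} above the cost of delivery")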

Making assumptions without challenging them

Assumptions may well be needed to fill gaps in your stories and as inputs to your financial evaluations. This is, by definition, adding something you don’t know into the midst of other things that you can support with evidence. Rather than starting from the perspective that we don’t know some of the data, it is more helpful to look at what we do know.

The key words here are prudent and reasonable. It is rarely necessary to assume that an outcome will recur annually for the remainder of someone’s life in order to deliver a financial evaluation that exceeds the cost of delivery: an obviously prudent assumption is likely to suffice and to be acceptable to most readers. Remember that reasonable goes both ways: an assumption might be unreasonably short as well as unreasonably long, so avoid making your assumption so prudent that it ceases to be realistic.

A key way to test your assumptions is a sensitivity test. Ask yourself: if that key assumption were wrong, would it change the findings sufficiently that the intended reader would come to a different conclusion? You test that by making the change and looking at the effects on the findings. If they give a different view, you need to have a closer look at that assumption, or at the very least highlight to the reader how much it matters so they can draw their own conclusions.
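
A minimal sketch of such a sensitivity test, using entirely hypothetical figures, might look like this:

    # Hypothetical sensitivity test: does the conclusion change if a key
    # assumption (the number of years the outcome persists) turns out to be wrong?

    annual_value = 12_000       # hypothetical yearly value of the outcome
    cost_of_delivery = 40_000   # hypothetical cost of delivering the activity

    for years in (3, 5, 8):     # pessimistic, central and optimistic assumptions
        total = annual_value * years
        verdict = "exceeds" if total > cost_of_delivery else "does NOT exceed"
        print(f"{years} years: £{total:,} {verdict} the £{cost_of_delivery:,} cost of delivery")

    # If the pessimistic case flips the conclusion (as it does here), the assumption
    # matters enough to examine further, or at least to highlight to the reader.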

Making reasonable assumptions

Thinking about a relatively simple example, if you provide support that enables someone to live semi-independently (as opposed to a residential setting) in early adulthood, you may well know:

  • The level of skill that is demonstrated at the end of your activity
  • The next destination of the beneficiary
  • The initial prognosis for the beneficiary
  • The unit costs of the initial prognosis and the actual destination.

You almost certainly don’t know:

  • How long the person you support will live in total
  • Whether, or how long it will be until, the outcomes of your activity drop-off as other factors (such as age or other illness) bring them back towards the original prognosis
  • Whether any abnormal incidents will occur that cause a drop-off in outcomes.

With the above in mind, you might:

  • Make a prudent assumption about total life expectancy
  • Make a prudent assumption about how long your outcomes will last (e.g. that independence remains for five years and then drops off over the next five years, reverting to the baseline prognosis)

  • Adjust the unit costs, or the proportion of those who benefit in this way, to account for risks other than drop-off (a worked sketch follows below).
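
A minimal worked sketch of that kind of prudent calculation, using entirely hypothetical unit costs and durations, might look like this:

    # All figures below are hypothetical assumptions, for illustration only.
    residential_cost_per_year = 55_000       # initial prognosis: residential setting
    semi_independent_cost_per_year = 25_000  # actual destination after the intervention

    annual_cost_avoided = residential_cost_per_year - semi_independent_cost_per_year  # £30,000

    # Prudent assumptions: the full benefit lasts 5 years, then drops off in a
    # straight line back to the baseline prognosis over the following 5 years.
    full_years = 5
    dropoff_years = 5

    value = annual_cost_avoided * full_years
    for year in range(1, dropoff_years + 1):
        remaining_share = 1 - year / dropoff_years   # 0.8, 0.6, 0.4, 0.2, 0.0
        value += annual_cost_avoided * remaining_share

    print(f"Prudently evaluated cost avoided: £{value:,.0f}")
    # Five full years (£150,000) plus a tapering two years' worth (£60,000) = £210,000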

Not adjusting for deadweight, attribution and displacement

There are three pitfalls of particular importance in this area, one corresponding to each of the three adjustments outlined above in 'Adjusting for deadweight, attribution and displacement':

Deadweight

The pitfall:

Ignoring it – effectively saying that the outcome could not have arisen by chance, or by any other means

How you can address it:

  • Don’t take this line. Think about the outcomes being targeted. Could they have arisen by chance? What other interventions might have helped if yours wasn’t around?
  • Given it’s neither 0% nor 100%, discuss and work the range inwards, and then take a prudent view. Looking at what others in similar assessments have concluded will also help.

Attribution

The pitfall:

Deciding that there is no alternative attribution – that you delivered the outcomes without anyone else’s help

How you can address it:

  • Again, don’t take this line – at the very least the beneficiary will have helped by cooperating.
  • List or map out who helped at which stages, and then, in your workshops for stories or other evaluation, get a sense of the proportionate contribution of each, based on cost, effort, expertise and importance.

Displacement

The pitfall: 

Missing a key displacement effect that is relevant to stakeholders’ views

How you can address it:

  • Explore the stories widely and look beyond the immediate life story of the beneficiary.
  • Consider, as part of your risk and sensitivity assessments, what displacement effect there might have been. Put an indicative value on it and explain the basis of your assumptions.

Using and learning from what you discover

One of the important benefits of impact measurement is to help you review your performance, reassess what you’re doing and how you’re doing it, and ultimately achieve even greater impact in the future. Each of these aspects is covered in more detail below:

The cycle of review and learning

A cycle of assessment, reviewing and learning needs to be built into your operation. This checklist outlines what this might cover:

  • Do we understand the findings?
  • Do they make sense?
  • How will we use this information?
  • Do the findings support our original understanding, or should this be updated?
  • What can we learn to improve and re-focus delivery?
  • Who else needs to see the findings?
  • What do our employees, volunteers, trustees and other stakeholders think?

Remember, measurement needs to continue. Is your process working, and how could it be better built into ongoing operations? Schedule a regular review time.

Improving your own performance

Taking a fresh look at your activities and the ways in which they achieve change for the communities your charity supports can help to:

  • Improve your understanding of how different activities connect, or see new similarities between them. Perhaps this might even change your thinking about the structure of your organisation: perhaps you have one team with four activities rather than four teams
  • Change the way you communicate your mission and vision internally and externally: perhaps you might change from saying that “we support people who have [insert challenge…]” to “we enable people with [challenge] to…”
  • Investigate afresh which aspects of your work are contributing most towards high-priority outcomes and review opportunities for other activities to improve or refocus
  • Re-assess the stakeholders (particularly funders) with whom you work, based on an understanding of who else sees the effects of the change you are achieving. It might be that impact reporting gives you an opening to talk to new potential funders or to extend your work with existing funders
  • Look afresh at partnership working with other organisations: perhaps looking at impact and thinking about attribution makes you realise that there would be opportunities to serve the people that you support even more effectively if you could work in a more joined-up way with someone else.

Simply producing an impact report might help to raise your profile and attract new people to access your activities, which would hopefully lead directly to an increase in your impact.

Refocussing your strategy

A typical strategy development cycle might take the vision and mission statement for your organisation and then look at operationalising that by setting targets linked to delivering activities in the coming year (or perhaps for a three- or five-year planning cycle). What if your cycle started with thinking about what outcomes you want to deliver (or prioritise) and then looking again at how you articulate your mission and vision? What if the activities and targets you set for the next strategic cycle focused on particular ways in which you want to see people’s lives changing?

This may involve a very subtle linguistic adjustment, but the cultural difference could be huge.

Having outcomes in mind as part of your strategic thinking can also help to inform the way you look at transactional decisions such as:

  • Raising finance: do you just want the money to fund delivery, or do you want to draw in support from a source that understands and values your ultimate aims as an organisation?
  • Mergers and acquisitions: do you just want to add financial value to the organisation, or do you want to focus on opportunities that will enhance delivery of your priority outcomes? Equally, if you are approached by another organisation seeking a merger, does it become a question of financial strength, or do you need to consider whether and to what extent a merger would increase impact (and do you challenge options that are neutral or damaging to your impact)?
  • Re-organising activities: all organisations are likely to need to review their activity from time to time. In some cases that need is imposed upon them by external factors. Rather than looking only at options to reduce costs of delivery, could you look at prioritising the continuation of activities that have the strongest impact? Could you consider transferring or merging unsustainable activities that have high impact to another organisation for whom they would be sustainable?

Raising your profile

Influencing the views and behaviours of others, including your funders and policy makers, is a valuable outcome of impact reporting. Your impact report can serve to:

  • Highlight the value of your activities and open up new audiences for your message
  • Encourage other organisations to look at their own impact
  • Raise the profile of the issues and challenges that you seek to address, and highlight the needs of those you support; and
  • Potentially have systemic influence by encouraging other organisations to reconsider how they provide support.

Any or all of the above can help to position your organisation as a leader in your area of focus (or beyond).


Acknowledgements

CCE Tools for success: Impact Assessment

Author: Jim Clifford OBE, Director and CEO, Sonnet Advisory & Impact

CCE series editors: Caroline Copeman, Lucy Joseph

Disclaimer

While great care has been taken to ensure the accuracy of information contained in this publication, information contained is provided on an ‘as is’ basis with no guarantees of completeness, accuracy, usefulness, timeliness or of the results obtained from the use of the information. The Centre for Charity Effectiveness can take no responsibility or liability for any expense, damage, loss or liability suffered as a result of any inadvertent inaccuracy within this publication. In the event of any errors or omissions, we may correct the publication without any obligation or liability to you.

The views and commentary contained in this publication are based on likely industry developments at the time, future trends or events based on information known to the authors at the date of publication and are not necessarily those of the Centre for Charity Effectiveness. Information contained in the publication should not be relied upon as a basis for financial investment. The authors and publisher accept no responsibility whatsoever for decisions based on the publication, which should not be relied upon as a basis for entering into transactions without seeking specific, qualified, professional advice.

Copyright © The Centre for Charity Effectiveness, Bayes Business School (formerly Cass), City, University of London, 2022.
All rights reserved. No part of this publication (including associated graphs, data, appendices or attachments) may be reproduced in any material form, distributed or communicated to any third party, without the copyright owner’s express written permission. Requests for permissions to use content, quotations or extracts from the publication should be addressed to CCE@city.ac.uk