| Input | Activity | Output | Outcome | Impact |
| --- | --- | --- | --- | --- |
| Company resources: time, people, money, etc. | What companies do with their resources: programs offered and services provided | Immediate results of the activities: products, services, time spent | Resulting behavior, condition or state of well-being | Change in behavior, condition or state of well-being that resulted from the activities |

Three questions frame the metric types used below: how much we do, how well we do it, and is anyone better off?
| Category | Metric | Metric Type | |
| --- | --- | --- | --- |
| Grants and Donations | Total financial value of grants portfolio | Output | How Much |
| Grants and Donations | Number of recipients/partner organizations | Output | How Much |
| Grants and Donations | % grants that align to impact priority areas | Output | How Well |
| Grants and Donations | % revenue that giving represents | Output | How Well |
| Grants and Donations | Partner/grantee satisfaction/experience score | Outcome | How Well |
| Grants and Donations | Aggregate number of individuals served by grantee organizations | Output | How Much |
| Product | Number of organizations/individuals receiving donated product | Output | How Much |
| Product | CSAT or NPS score of individuals/organizations using donated product | Outcome | How Well |
| Product | Value of donated product | Output | How Much |
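To make the ratio metrics above concrete, here is a minimal sketch (not from the article; the field names, figures, and record structure are entirely hypothetical) of how the portfolio-value and percentage metrics could be computed from a simple list of grant records:

```python
# Minimal sketch (not from the article): computing a few of the "How Much" /
# "How Well" grant metrics above from a hypothetical list of grant records.
# Field names, figures, and the choice to count grants (rather than dollars)
# for the alignment metric are all illustrative assumptions.

grants = [
    {"grantee": "Org A", "amount": 50_000, "priority_area": True},
    {"grantee": "Org B", "amount": 25_000, "priority_area": False},
    {"grantee": "Org C", "amount": 75_000, "priority_area": True},
]
annual_revenue = 10_000_000  # hypothetical company revenue for the same period

total_portfolio_value = sum(g["amount"] for g in grants)                   # How Much
number_of_recipients = len({g["grantee"] for g in grants})                 # How Much
pct_aligned = sum(g["priority_area"] for g in grants) / len(grants) * 100  # How Well
pct_of_revenue = total_portfolio_value / annual_revenue * 100              # How Well

print(f"Total grants portfolio: ${total_portfolio_value:,}")
print(f"Recipient organizations: {number_of_recipients}")
print(f"Grants aligned to priority areas: {pct_aligned:.0f}%")
print(f"Giving as % of revenue: {pct_of_revenue:.2f}%")
```

Note that "% grants that align to impact priority areas" could be counted by number of grants (as in this sketch) or by dollar value; whichever definition you choose is worth documenting up front so the metric stays comparable year over year.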
Valuing Donated Product
There is no single standard, and approaches to ascribing a dollar value to donated product vary. Establish fair market value (FMV) or the actual selling price, and take discounts as well as outright donations into consideration.
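As one illustration of how this can play out, the sketch below values a hypothetical mix of fully donated and discounted units against FMV. It is not a standard; the prices, quantities, and the decision to count the forgone discount toward the donated value are all assumptions.

```python
# Minimal sketch (an illustration, not a standard): one way to value donated
# product when some units are given away outright and others are sold at a
# discount. Prices, quantities, and the decision to count the forgone
# discount toward the donated value are hypothetical assumptions.

fair_market_value = 120.00   # FMV per unit (e.g., list price)
units_donated = 200          # units given away at no cost
units_discounted = 500       # units sold at a reduced price
discounted_price = 40.00     # price actually charged per discounted unit

donated_value = units_donated * fair_market_value                           # free units at FMV
discount_value = units_discounted * (fair_market_value - discounted_price)  # discount forgone
total_donated_product_value = donated_value + discount_value

print(f"Value of donated product: ${total_donated_product_value:,.2f}")
```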
Partner/Grantee Experience
More and more philanthropies are reporting not only how much they are giving, but also the quality of their partnerships: how well they are collaborating with and supporting grantee organizations. Read more about the Trust-Based Philanthropy Learning & Evaluation Framework.
Most of the common metrics for measuring philanthropy impact can be categorized as outputs: they tell us how much an organization is doing and possibly how effectively its operations are running. They do not tell us much about outcomes or the impact generated through philanthropic giving.
This is, in large part, because companies are giving across a wide range of issues, causes and geographies, each representing different sets of goals, purposes and outcomes. Companies will need to identify outcome metrics that speak to their unique philanthropic mission and goals, and that realistically reflect their investments with partners and communities. But this doesn't mean you are alone and need to figure it all out yourself. Although each company's mission may be unique, a great deal of work already exists to standardize the measurement of program effectiveness and investments on specific topics and issue areas. It is recommended both to build your own Theory of Change and to understand current thinking in the field.
World Benchmarking Alliance (WBA): Anchored to the UN SDGs, the WBA is creating benchmarks across a wide range of pressing ESG issues.
Global Reporting Initiative: Provides sustainability reporting standards, including sector and topic standards.
Evaluation.Gov: The US Federal Government creates evidence-based evaluation metrics for its departments and programs. If you are working on a particular social change topic, especially in US geographies, this can be an excellent resource.
Leverage what exists for grantee reporting: When it comes to asking grantees to report to you, consider the level of effort relative to your investment and explore whether you can leverage what they are already reporting, rather than asking for something bespoke.
Attribution & Contribution: Contribution is the idea that your influence is only one of many factors that brought about a change, while attribution is the idea that your intervention was the only reason for the change.” (SoPact) More than likely, your institution is but one funding-actor in a wide ecosystem, so the goal is to think about how you are contributing, rather than what impact you can claim as your own.
In late 2023, the Pledge 1% Builder members gathered for a virtual thinkspace huddle, where they came together to share specifics about what they are measuring, what makes it hard, and what makes it worth it. Attendees engaged in small working groups and documented their perspectives on a shared whiteboard. The results of these conversations are shared here:
| What makes it hard | What makes it worth it |
| --- | --- |