YOU RUN A government agency or nonprofit.  You ask management experts how you can assess your “bottom line,” given that earning profits is not your mission.  You want to know if you are doing a good job delivering value for the public money you are spending.

The experts tell you to take three easy steps. First, figure out your goals and priorities.  Second, develop indicators that correspond to your goals.  Third, collect data to fill out the indicators on a scorecard or dashboard.

It sounds right: organizing the vast amount of information related to your programs into neat indicators could help you to focus on what is important.  You have heard the saying that what gets measured gets managed. Your funders also ask for data-driven decision-making, and you think that dashboards will get you there.

You hire consultants to take your staff through the process.  Everyone meets regularly.  You list goals and priorities, but it can be hard to narrow the list. You do not want to leave any staff unrepresented on the list, as if their program does not matter.  Some months later, the team starts planning indicators.  This is tricky because the consultants tell you to measure the outcomes of your programs, not the inputs (number of staff, materials purchased) or the outputs (number of events, publications, inspections, visits, repairs).  Figuring out how to measure the ways you are changing the world feels daunting, but your team gets creative and sketches out measures.

Next comes the task of collecting data to fill out the scorecards or dashboards.  You do not have the budget to undertake the mammoth data collection effort your plans now call for.  Your databases are limited and the quality of your current data is not great. You look to outside agencies that are devoted to data collection (the Census Bureau, for example), but the numbers they provide might not answer the right questions or be timely enough for decision-making.  You revise the scorecards to include more output measures. The outputs, after all, should lead to the desired outcomes. You have some data on the outputs, and you have more control over them.

Now the scorecards are almost ready, two years have passed, and the consultant’s contract is up.  Your team is burnt out on the scorecard project; they have services to deliver.  For all of the time it took to create them, the measures do not seem that useful.  Where you do have outcome measures, it is not clear that the numbers have anything to do with your agency’s interventions.  Perhaps it was the weather or the world economy that moved the needle. The many output measures lack context and detail.  Your staff begins to question whether it is worth the time to report the numbers to keep the scorecards up-to-date.  It is not obvious how to interpret the information on the scorecards.

Yes, we filled 2,319 potholes last year.  But were there more potholes to be filled last year than usual? Did the weather have an impact? Was your budget to resurface or reconstruct roads (rather than fill potholes) sufficient? Did you fill the potholes on the busiest streets first? Were you fast enough at filling the potholes to satisfy residents?  In your effort to tackle potholes, did you neglect to sweep the streets?

Yes, we met with 210 clients.  Did you make a difference in your clients’ lives?  Did you reach the clients with the most need?

Now you have a new governor, mayor, board president, or secretary; some staff have turned over; your funding has shifted; and external conditions are changing. Your agency’s priorities and goals have changed.  Your indicators look outdated.  The scorecards that never really worked get shelved.

The whole process gave your team time to reflect on its purpose, but it took a lot of time and money and did not deliver actual tools for management.

An alternative approach    

A different approach is to get started now with the data you have. Get in the practice of using data.  Get your team together regularly to look at the data you already collect as well as outside data sources that might be relevant. Explore the numbers, and see if some of the information can help you solve problems.  Start with low-hanging fruit on topics that are important and for which data are available, such as overtime, absences, injuries, or clicks on social media.  Engage your team in conversations about the challenges they face and how to use data and evidence to address those challenges.  Keep talking about your big-picture goals.

As you gain experience in using data, the limits and inadequacies of your data systems will come into focus.  You will see that staff needs more training on data input protocols.  You will see that new fields should be added to the entry forms.  And you will realize that work processes need to be adjusted to give staff time for data entry, or perhaps you need to purchase tablets for staff to use in the field.  Your team will get better at using data, so when you secure funding to upgrade your information systems, you will know what you need.

Along the way, you will identify some useful indicators that you want your team to review regularly, but the list may shift over time, and it will not be comprehensive in representing all of your priorities, because some things are tough to measure. You might come up with other creative ways to gauge progress, perhaps a randomized controlled trial or other experiment, focus groups, or phone interviews.  You will use quantitative and qualitative tools to evaluate your work.

Regular meetings looking at data, solving problems, and considering outcomes will put the team in a better place to answer questions about how it is doing at delivering public value.  Two years into the process, instead of burning out while creating an impossible plan destined for the shelf, the team feels empowered, ready to get more data and evidence to figure out if they are doing the right work well.  Two years in, they probably still will not have that golden indicator, like a red letter grade on a final exam, but they will be harnessing information, as messy as it is, to do a better job.

Amy Dain is an associate with the Government Analytics Program (GAP) at the Collins Center for Public Management at UMass Boston. GAP promotes the second model of data-driven management described in the article.  Dain also runs a consulting business in Newton that focuses on public policy research. She can be reached at dainresearch@gmail.com.