When I write about "outcome-driven" government, I often hear responses like this:
"'Outcome-driven' is a wonderful concept wrought with problems of definition, faked stats and apathy.
Better still would be legislation aimed at making output-driven really 'output driven.' How can we free it from spin doctors?!"
Any social scientist familiar with program evaluation methods knows that objective assessment can be manipulated in many ways. Those circumstances demand high standards from those performing the evaluation. Citizens who are responsible for their government can't simply throw up their hands at accountability and performance measurement.
I addressed that subject in extensive detail in the book Smart Data: Enterprise Performance Optimization Strategy by James A. George and James A. Rodger (Wiley, 2019).
The truth detector in evaluating government outcomes, measurements, and performance is asking the question, "How will the outcome be achieved?" If people can't answer "how," they will be unlikely to accomplish the objective and produce the desired or required results.
When one begins to go down this path, government processes are revealed in their scope, scale, and complexity. One of the things that must be understood, of course, is the set of resources required to produce the outcomes. You can't do that without modeling processes and attributing them with resource, cost, and time metrics. Resources include the people and enabling technologies that perform the work.
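The idea of attributing processes with resource, cost, and time metrics can be sketched in a few lines of code. The following is a minimal illustration, not a real government model: the process, step names, and figures are hypothetical, invented here to show how totals roll up from attributed steps.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step in a process, attributed with the resources it consumes."""
    name: str
    people: int    # staff assigned to the step
    cost: float    # estimated dollar cost
    hours: float   # estimated labor hours

@dataclass
class Process:
    """A process modeled as an ordered list of attributed steps."""
    outcome: str
    steps: list[Step] = field(default_factory=list)

    def total_cost(self) -> float:
        # Roll up cost across all steps
        return sum(s.cost for s in self.steps)

    def total_hours(self) -> float:
        # Roll up labor hours across all steps
        return sum(s.hours for s in self.steps)

# Hypothetical example: answering "how" for a permitting outcome.
process = Process(outcome="Issue permits within 30 days")
process.steps.append(Step("Intake and screening", people=2, cost=1200.0, hours=40.0))
process.steps.append(Step("Technical review", people=3, cost=4500.0, hours=120.0))
process.steps.append(Step("Approval and issuance", people=1, cost=800.0, hours=16.0))

print(process.total_cost())   # 6500.0
print(process.total_hours())  # 176.0
```

Even a toy model like this forces the "how" question: every step must name who does the work and what it costs, which is exactly the attribution that makes an outcome statement testable.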
The Department of Defense does this as a matter of routine. The Government Accountability Office does too (http://www.gao.gov/).
Cynicism doesn't get the job done. Vigilance and attention help to improve the process.
Not intending to be critical or harsh, I provided the Obama administration with an example of how to improve its outcome statements. The examples appear as images of pages from my book.