The Conditions for Strategic Learning Are Examined
Among those conditions, Weiss said, was that the foundation be an organization committed to ongoing
learning. People have to be willing to speak up and supported in doing so, be encouraged to offer
“discrepant” information, and be able to try new things and fail without being punished.
Much of her thinking was informed by Harvard Business School professor David Garvin’s work on
building learning organizations (see, for example, Garvin, D. (1993). Building a learning organization.
Harvard Business Review, 71(4), 78–92).
While Weiss had worked with Packard in the past and strongly suspected it was the kind of learning
organization this type of evaluation required, she needed to make sure before agreeing to evaluate
the preschool subprogram. Weiss and Coffman flew out to Packard to see firsthand how Salisbury
and her team worked together.
“I can still remember Lois and her team all sitting around the table and we had this really interesting
discussion where we laid out our approach,” Weiss said. “[Julia and I] tried to glean from this team
whether the conditions necessary for us to succeed were there.”
“Lois looked for ideas from others and was open to hearing alternative opinions,” Weiss continued.
“I also had the sense that Lois was flexible. She is a risk taker; she has a strong strategy and theory of
change, but it is not set in stone. I had the sense that if the data didn’t point in the way she wanted
to go, she would make changes. I thought, ‘This is a team that can use and learn from this
approach.’ And we will have fun doing this work. It will be an exciting journey to try to accomplish
something important.”
Coffman added, “The reason we thought it would work in this case is because Packard is very
much a learning-oriented group. They talk strategy every single day. They were constantly thinking
about what they need to be doing differently. It was an opportunity to build evaluation into that
process as one thing that informed their future.”
At the same time, while there was a good deal of confidence that the conditions were right,
according to Coffman there was still considerable uncertainty about the precise conditions needed
for a strategic learning approach to work.
“It worked in this case,” she said. “But I still have questions about what really has to be in place at
the start in terms of organizational context and culture for this to have a solid chance of working,
and what may not be there right away but you can create as you go. We went on instinct and our
previous experiences. It was a gamble in some ways—for us and for Packard.”
Evaluators Try to Strike a Balance with Different Users’ Needs
Meanwhile, Packard Foundation leadership was coming off its own uncomfortable experience
with evaluation. An evaluation director with a more traditional, academic approach had recently
left after only a short time on the job. It was a mismatch almost from the
start. The experience, among others, left Packard program staff uncertain and a bit wary about the
role and usefulness of evaluation.
“From my perspective, I was skeptical about the utility of evaluation,” said Kathy Reich, who was a
program officer on the preschool grantmaking at the time. “I came from an advocacy background
and I’m used to making quick decisions with the information on hand. I was a new grantmaker. I
didn’t appreciate evaluation. There was skepticism about evaluation that was widely—though not
universally—shared at the Children, Families and Communities program.”
Still, Reich remembers a strong message from the Board about the need to evaluate this large and risky
investment.
“We had not made a ten-year commitment to a goal before. The dollar commitment we made was
not the usual practice. The Foundation was coming off a significant period of contraction. The
message was pretty clear to us, ‘listen, if you are going to make this kind of commitment and invest
this kind of money you better have an evaluation.’”
As Salisbury and her team began to work with Weiss and Coffman and their team to craft an
evaluation approach, the interests of Packard’s Board of Trustees were never far from their minds.
While the Board supported the preschool subprogram and understood that its policy advocacy
approach would likely entail a different kind of evaluation, it included some business executives,
many of whom were scientists and engineers. They brought a mindset that expected results based on
rigorous, controlled experiments.
The evaluators tried to balance the Trustees’ need for more traditional “outcome” results with the
program staff’s need for ongoing feedback about how the strategy was unfolding. Rather than
choosing one approach over the other, they decided they could do both.
“I promised both learning and accountability,” said Coffman, who has managed the evaluation from
the start. “This is a common dilemma for foundations and for evaluators. Foundations buy the
learning approach but ultimately have to report to their board members who almost always ask the
impact and accountability question. I convinced myself that we would collect information that would
be equally compelling to both the program staff and the Trustees.”
“But these were two different groups that had different purposes in mind for the evaluation,”
Coffman continued. “We didn’t address that discrepancy early on and we should have, although I’m
not clear how that would have been negotiated. Ultimately I’m not sure we adequately met the goal
of an evaluation that was focused simultaneously on learning and accountability. We got through it
but we didn’t solve it.”
Weiss has another perspective: “There is a tension [between an accountability evaluation and a
strategic learning evaluation]. It’s a tension you have to manage. For me, it’s not an either-or.
Sometimes you have to do one or the other. Sometimes it’s important to do both. As a funder, I would
want to try and have as much of both as possible. I would want to know ‘Am I getting closer to the
goal?’ I don’t want to know after I tried and failed.”
The dilemma—or at least the tension—described by the evaluators raises a larger question, Coffman
noted: can an evaluation simultaneously pursue the dual purposes of learning and accountability?
And if it does, do evaluators end up doing two different evaluations under one umbrella?