Navigating the Tensions of Community Engagement Evaluation through Culturally Responsive and Equity-Oriented Approaches


Introduction
For decades, community-campus partnerships have helped transform traditional notions of research. Through approaches such as community-based participatory research (CBPR), decolonizing methodologies, and participatory action research (PAR), community and academic partners have expanded the confines of expertise, centering local, experiential, Indigenous, and professional knowledge in research (e.g., Fine & Torre, 2021; Minkler & Wallerstein, 2011; Stanton, 2014). Voices in our field have challenged narrow conceptualizations of who is a "researcher" (e.g., Blodgett et al., 2011; Ishimaru & Bang, 2022), redefined concepts like validity to account for the process and impact of research-in-action (e.g., Anderson & Herr, 1999; Torre et al., 2012), critiqued racist and colonial practices embedded in traditional research approaches (e.g., Chilisa, 2019; Darroch & Giles, 2014), and expanded what research products look like beyond the narrow confines of academic publishing (e.g., Chen et al., 2010).
We have seen much less evolution in how our field approaches evaluation. Just as research has become more participatory and community-centered, we have an opportunity to shift our evaluation methodologies toward more participatory and community-centered models that align with the principles and practices of partnership. Thus, we ask: What would it look like if we brought the same creative and critical lenses that we have used to redefine research to the work of redefining evaluation?

What Do We Evaluate?
The focus of evaluation in community-campus partnerships has evolved over time. From an early focus on student learning outcomes, evaluations have expanded to include the impact on partnering organizations, the readiness of higher education institutions, and, most recently, an emphasis on assessing broader community impacts (James & Logan, 2016; Peacock et al., 2020). This shift holds promise for centering and holding campuses accountable to partnering communities. However, this depends on which domains of impact are evaluated: Are those choices driven by funders' priorities? University goals? Partner organizations? Participant perspectives? Do they privilege quick results or long-term changes? Programmatic successes or personal transformations? The more we lean into evaluating broader impacts, the more we risk making assumptions about what really matters to community members.
Traditionally, evaluation is approached as a systematic process focusing mostly on accuracy and alignment. Programs are assessed for their fidelity to the plan and their success in producing predetermined outcomes (CDC, 2013). Perhaps the most common tool for defining the targets of evaluation in this approach is a logic model (Lowery et al., 2006; Mrklas et al., 2023; Mrklas et al., 2022; Phipps et al., 2016). Logic models essentially map the momentum of community engagement in terms of inputs and processes that evolve into outcomes and impacts, and their wide use came about in part through advocacy from the United Way, the CDC, and other funding agencies (Frechtling, 2007). This approach has many advocates who argue that developing a logic model supports planning, project management, and communication as well as evaluation (Frechtling, 2007; Mills et al., 2019).
Logic models have drawbacks when used for complex, community-based work. For example, their linear approach to causality and focus on predetermined outcomes have difficulty accounting for the non-linear nature of social systems, the influence of context, and unpredictable or emergent outcomes (Patton, 1997; Rogers, 2008). Logic models explain the world in a mechanistic way that does not match the social realities of community members and may be culturally misaligned with how communities think about and enact their work. For example, Banks and colleagues (2017) describe how the impacts of participatory action research do not fit into linear models in which impact is produced at the end of a project. Rather, diverse impacts on individuals, partners, and communities come at different stages of a recursive, cyclical process.
These challenges have led some to rethink logic models to better encompass complexity and cultural worldviews (Rogers, 2008) or to look to other ways of conceptualizing and visualizing impact (Raftery et al., 2016). In CBPR, logic models have been adapted to emphasize partnership processes and dynamics using constructs such as synergy, collective empowerment, relationships, and community involvement (Sandoval et al., 2011; Wallerstein et al., 2020). Partnerships themselves become a central focus, with domain areas matching values or principles for equitable partnerships (Ortiz, Nash, Shea, et al., 2020). This focus on understanding how partnerships evolve and grow demonstrates respect for the time and energy it takes to build an impactful partnership, which is often overlooked in our rush toward quantifiable outcomes, though it does not address all the critiques of logic models.
So, evaluators are left with the question: How do we define and map out the domains of evaluation in ways that assist in goal setting and planning without oversimplifying or distracting from the rich complexity of partnerships in action? The answer to this question depends heavily on the reason we are engaged in evaluation in the first place.

Why Do We Evaluate?
The American Evaluation Association (n.d.) describes evaluation as systematic efforts to "determine merit, worth, value or significance" (p. 1). The CDC (2013) writes that evaluation is used to "understand what a program does and how well the program does it" (p. 1). In both cases, the why of evaluation has to do with judgment: answering questions such as "To what extent does the program achieve its goals? How can it be improved? Should it continue? Are the results worth what the program costs?" (AEA, 2024, p. 1).
These definitions reflect the roots of modern evaluation in demands to judge the effectiveness of government and foundation-funded social programs (Giancola, 2020). They also point to the high stakes of evaluation. Evaluation is aimed at making judgments about resources, including which efforts will be funded and which will not. This puts heavy pressure on engagement practitioners to use evaluation to "make the case" for the value of the work to funders, college/university leadership, and the field of higher education at large, especially in disciplines where community engagement is still a marginal practice. This focus on external accountability and the "case" for partnerships can take away from another key purpose of evaluation: internal learning and improvement (Bamberger & Segone, 2011). We have seen evaluation activities create time for reflection, catalyze dialogue among partners, and build safe avenues for communication that uncover unheard or marginalized perspectives. Evaluation can be a form of praxis, combining critical reflection, theory, and action for personal and social transformation (Wallerstein et al., 2021). However, the kinds of metrics needed for internal learning and transformation may be quite different from those needed to convince outsiders of a partnership's value. Moreover, as with other forms of high-stakes evaluation, the potentially existential nature of the stakes creates little incentive for digging into problematic areas or areas for growth, which might be exactly the focus of a more improvement-oriented approach.
A third purpose for evaluation can be to hold partnerships accountable to the communities they seek to benefit (Peacock et al., 2020). After all, it is often the community involved that is most directly impacted by the success or failure of the effort. However, we've found that communities are rarely the ones demanding systemic evaluation. So, as evaluators, we ask ourselves: How do we navigate the multiple purposes of evaluation and the desires of its multiple audiences, some of whom have more institutional power than others? The why of evaluation is inevitably shaped by who is influencing its design and implementation.

Who Evaluates?
Evaluation has grown into a large industry. Some engagement offices or research centers have their own in-house evaluators, while others rely on institutional or consulting evaluators. Internal and external evaluators each have their own strengths and weaknesses (Conley-Tyler, 2005).
However, in both cases, the model relies on a professional vantage point separate from those doing the work. This model reflects evaluation's roots in the tenets of experimental science and the continued dominance of Western, post-positivist approaches in which evaluators adopt stances of objectivity (Chilisa, 2019; Greene & McClintock, 1991).
Expertise in evaluation methods and outside perspectives are valuable. They are often seen as an antidote to biases and incentives that might lead insiders to paint an over-rosy picture. At the same time, such an approach can often be misaligned with the historical and cultural context, devalue the local, Indigenous, and professional knowledge of partners, and shift power away from those on the ground (Conley-Tyler, 2005; Muller, 2018). External evaluators bring their own biases and incentives, which can be embedded in seemingly objective measures and be in tension with the lived experiences of participants, leading to the use of measures that are not locally valid. In communities facing histories of marginalization and oppression, this approach can reproduce colonial relationships in which colonists reserve the power to define legitimate knowledge and judgment while devaluing Indigenous ways of knowing (Chilisa, 2019). As one of Paul's community partners once asked, "Why do we need to pay large amounts of money to outsiders to come in and prove our programs work when we already know they do?"
These tensions have led some to recommend more participatory approaches to community engagement evaluation (CDC, 2011; McCloskey et al., 2011). Participatory methods in evaluation include impacted community members in all phases, beginning with defining what should be evaluated and how. These methods hold promise for building capacity among partners, adapting evaluation methods and measures to the local context, and producing findings that are more useful to partners. At the same time, participatory evaluation raises new tensions. We have seen how it can become a burden on partners and take valuable time away from work in the community, particularly when there are no resources to pay community members for their participation. Yet, less burdensome forms of engagement in which partners are minimally involved or only brought in toward the end risk becoming tokenistic rather than truly collaborative. In addition, partners are being asked to take a significant risk in taking part in an evaluation that they do not have full control over. An evaluation that inadequately captures the full value of the work that partners are doing could risk their funding and support.
The question for evaluators is: How do we leverage the knowledge and expertise of communities and professional evaluators in ways that center community power while not overburdening partners? This question is inextricably linked to how we go about collecting, analyzing, and disseminating our work.

How Do We Evaluate?
Quantification retains a privileged place in evaluation, with a hyper-focus on metrics, benchmarks, key performance indicators, and "evidence-based practice" (Denzin, 2009; Muller, 2018). This is part of a larger trend that Muller (2018) calls "metric fixation," in which quantitative metrics have gone from useful tools to the unquestioned centerpiece of organizational improvement, and is closely related to the rise of neoliberalism in higher education (Cantwell & Kauppinen, 2014; Denzin, 2009). Quantitative measurement offers potential benefits in terms of simplicity, comparability, and bias reduction. Showing improvement in such metrics can open the doors for private and public funding. At the same time, these measures have demonstrated weaknesses, including overemphasizing what can be measured rather than what is important, oversimplifying complex realities, and decontextualization (Muller, 2018).
These weaknesses are amplified in community-campus partnerships, which involve highly complex social systems, are driven by local and cultural contexts, and in which academics, community organizations, community members, and other stakeholders pursue multiple goals. There is a lot of interest in, and intriguing examples of, using qualitative data and storytelling to dig deeper into the complexities of partnerships and their individual, relational, and collective impacts (e.g., Chazdon et al., 2017). Such approaches can have much more cultural resonance in communities where storytelling is a common form of knowledge production and dissemination. However, despite its value, qualitative data is often tacked on and anecdotal, especially as evaluations are summarized for leaders and the public through dashboards and one-pagers. This is, in part, a question of time and resources. Growing expectations of partnerships to deliver outcomes quickly are not necessarily paired with the time and space to attend to the partnership itself, or the resources to support more innovative and time-intensive methods of evaluation. With limited structures to support the evaluation of engagement, we often fall back on the simplest and most easily implemented approaches, such as head counts, volunteer hours, publication numbers, and measures of satisfaction: methods widely agreed to be inadequate for capturing the relational, cultural, collaborative, and community-centered nature of our work.
So, the question remains: How can we develop evaluation processes that utilize multiple methods and capture evidence that is meaningful to community within a context of limited time and resources?

A Way Forward
There are no simple solutions to the tensions outlined above. That should not hold us back from taking bold steps to reimagine what the evaluation of community-campus partnerships can look like. There have been substantial innovations in the field of evaluation over the past decade or so, which have made only limited inroads into community engagement. By drawing on cutting-edge work in evaluation and taking strong stances on the what, why, who, and how of evaluation, we see hope for navigating these tensions.
One promising area of innovation is equitable evaluation, which is slowly emerging as a critical practice in community-engaged work. There are many different approaches to equitable evaluation and several networks and organizations that are iteratively building and implementing principles of equitable evaluation in their work. Non-profit organizations and community-engaged networks such as Change Matrix (changematrix.org), the Equitable Evaluation Initiative (equitableeval.org), the Funder and Evaluator Affinity Network (FEAN), and others (see Appendix A) have published on the importance of equitable evaluation through the lens of cultural relevance, the inclusion of evaluators of color, and equitably navigating inherent tensions in evaluation.
One framework we are looking closely at is culturally responsive equitable evaluation (CREE), which integrates questions of culture, diversity, inclusion, and equity across all phases of evaluation. The CREE framework, as defined by the Expanding the Bench (2019) initiative, "incorporates cultural, structural, and contextual factors (e.g., historical, social, economic, racial, ethnic, gender) using a participatory process that shifts power to individuals most impacted" (para. 2). CREE is not a single method of evaluation, but rather a framework or approach that should be infused in all evaluation methodologies. It is rooted in decolonial approaches to evaluation and allows the community to define and own its identity, culture, and worldview through the evaluation process. Expanding the Bench, led by Change Matrix, developed the 10 CREE principles to guide equitable and culturally responsive evaluation practice (Harrison, 2021; see Table 1). Through the diverse implementation of these principles, evaluators can take strong stances on the what, why, who, and how of evaluation that advance the goals of community-campus partnerships. CREE expands the what of evaluation to include critical self-reflection, the structural and historical context that shapes evaluation outcomes, and power dynamics within the partnerships themselves. CREE centers the why of evaluation on advancing equity and social justice. The who is decidedly participatory, a commitment to community power across all stages of the evaluation process. The how begins with communities' culturally-rooted forms of communication and knowledge production and allows for the evaluation's shape to be defined by the communities themselves. In addition, CREE leads us to raise questions that rarely come up in evaluative discussions, such as who is most impacted by evaluation, who owns the evaluation data, and how the evaluation process might be causing harm (Waetzig & Villalobos, 2023). This is particularly relevant for evaluation in the US's highly diverse communities. For example, according to the US Census, over 25 million people in the United States are disproportionately disadvantaged due to language barriers (Ghanbarpour et al., 2020). Engagement with these communities requires a strong lens of linguistically and culturally responsive equitable evaluation to respond to the community's diverse, intersecting identities and experiences. Consequently, approaching data holistically when appropriate, treating evaluation as a participatory process, and always drawing on cultural contexts to understand the communities involved in partnerships, engagement, and evaluation are non-negotiable practices for equitable evaluation.
In a sense, what CREE and similar frameworks ask of us is that we approach evaluation as we would a community-campus partnership. For almost 25 years, we have used the CCPH Principles of Partnership (PoP) (Seifer, 2000) as a model to guide community-academic partnerships. Composed as a series of maxims, such as "Principles and processes for the Partnership are established with the input and agreement of all partners, especially for decision-making and conflict resolution" and "The Partnership values multiple kinds of knowledge and life experiences," the principles aim to translate ethical goals into actionable work (CCPH Board of Directors, 2013). They guide discussion and initiate a principle-minded programmatic trajectory leading to quality processes, meaningful outcomes, and transformational experiences. Proactive attention to equity through practices such as partnership agreements, community partner payments, and community partner representation sets the foundation for our work. If we approach evaluation through these same principles, we can create a context that supports equitable and culturally responsive methods. These principles focus our attention on what a high-quality, equitable collaboration looks like and the time and resources required to sustain it. They offer a framework for defining and assessing the kinds of collaboration necessary to carry out CREE in a full, participatory manner.
As partnership builders, community engagement professionals (CEPs) should be at the forefront of this work, both because we have many of the necessary skills and because it will improve our evaluation efforts. Centering community, collaborating equitably, honoring multiple ways of knowing, and acknowledging the varying and diverse cultural contexts of communities allow for a more nuanced and ultimately valid assessment of partnerships and initiatives. We cannot let a hyper-focus on prescribed outcomes and deliverables drown out attention to relationships, inclusion, representation, and connections over time. As we continue to explore these approaches in our work at CCPH, we welcome ongoing dialogue around how evaluators can be a part of advancing the work of equity and social justice through evaluation practice.