Growth as an Evaluator

Eight weeks ago, the AEA Evaluator Self-Assessment placed me firmly in the “Developing” camp. Today, my responses describe me as “Proficient.” That difference marks the shift Stevahn, King, Ghere, and Minnema (2005) describe: from situational ability, doing the basics under guidance, to professional practice, executing competently in new contexts. My understanding and measurement of professional growth as an evaluator rest on their six-cluster taxonomy, the AEA’s five competency domains (American Evaluation Association [AEA], 2018), and the IBSTPI performance statements (International Board of Standards for Training, Performance, and Instruction [IBSTPI], 2006). Together, these touchstones clarify what “good” looks like, offer a shared language for self-diagnosis, and remind me that technical skill, ethical stance, and cultural responsiveness are inseparable. The principles embodied in those professional standards documents frame the “Evaluation of Learning Systems” course and now shape my emerging practice.

As a “Proficient” evaluator, I can plan and run most small- to mid-scale studies with minimal oversight while recognizing the gulf that still separates me from expert status. The turning point came during our capstone evaluation of the UN CC:Learn Climate-Change Legal Regime course:
  • I built a 10-item Likert survey aligned to the study’s four key questions, evidence of the design competence the AEA describes as “credible and feasible evaluations that address purposes and questions” (AEA 2.3, 2018).
  • I calculated means, medians, and dispersion for each item and visualized them in learner-friendly histograms, demonstrating systematic inquiry and the ability to “use evidence to make evaluative judgements” (Stevahn et al., 2005); a brief analysis sketch follows this list.
  • I applied open and axial coding techniques, meeting IBSTPI’s expectation to “analyze and interpret data” (Competency 11; IBSTPI, 2006).
  • I developed a narrative that combined survey patterns and interview insights, showing stakeholders how quantitative satisfaction scores matched or diverged from qualitative descriptions of navigation pain points.
  • I produced a comprehensive written report and, within a 72-hour deadline, recorded a concise video presentation, fulfilling AEA’s mandate to “communicate results in timely, appropriate, and effective ways” (AEA 3.5, 2018).
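
A minimal sketch of that descriptive-statistics-and-histogram pass, written in Python with pandas and matplotlib, appears below. It is illustrative only: the survey_responses.csv file, the item_01 through item_10 column names, and the 1-to-5 response scale are assumptions made for the example, not details drawn from the actual UN CC:Learn study.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical inputs: ten Likert items scored 1-5, one column per item.
    responses = pd.read_csv("survey_responses.csv")
    items = [f"item_{i:02d}" for i in range(1, 11)]

    # Descriptive statistics per item: mean, median, and standard deviation.
    summary = responses[items].agg(["mean", "median", "std"]).T
    print(summary.round(2))

    # Learner-friendly histograms: one bar per response option, per item.
    fig, axes = plt.subplots(2, 5, figsize=(15, 6), sharey=True)
    for ax, item in zip(axes.flat, items):
        counts = responses[item].value_counts().reindex(range(1, 6), fill_value=0)
        counts.plot(kind="bar", ax=ax, rot=0)
        ax.set_title(item)
        ax.set_xlabel("Response (1 = low, 5 = high)")
    plt.tight_layout()
    plt.savefig("item_histograms.png")

Reporting medians and dispersion alongside means, as the sketch does, guards against a handful of extreme ratings distorting the satisfaction story the histograms tell.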

These products show that I can integrate design, analysis, interpretation, and communication, the through-line of Systematic Inquiry in Stevahn et al.’s (2005) framework. They also illustrate how methodological choices and clear, audience-centered messaging transform raw evidence into actionable knowledge for decision-makers.

Emerging Strengths

  • Methodological versatility. Moving comfortably among Likert scales, descriptive statistics, and qualitative coding shows a growing command of mixed methods (AEA 2.4; Stevahn et al., 2005).
  • Evidence-based storytelling. Developing triangulation narratives that synthesize findings across data streams echoes IBSTPI’s emphasis on evaluator credibility (IBSTPI, 2006).
  • Audience-savvy delivery. The video brief’s streamlined design serves managers who lack time for the full report, illustrating competent Interpersonal practice (AEA Domain 5).
  • Self-calibration. Routine meta-evaluation cycles have strengthened my ability to diagnose my blind spots before external reviewers do, a meta-competence that accelerates future learning.

Continuing Stretches

  • Cultural & power analysis. My stakeholder maps still underplay informal authority and intersectionality (AEA 3.7; Stevahn et al., 2005).
  • Budget narratives. Cost justifications read like academic rationales rather than value-for-money stories (Stevahn Project Management cluster; IBSTPI 7). To mature, I must link each expenditure to tangible benefits in language that finance leaders respect.
  • Field advocacy. Evaluation serves the public good, yet I rarely voice that belief beyond class (AEA 1.9). Elevating that narrative can widen evaluation’s impact and attract broader stakeholder buy-in.
  • Collaborative leadership. While I managed timelines, I deferred many relationship-management tasks to teammates; cultivating shared leadership will strengthen future multi-site evaluations.

Competencies That Surprised Me

  • Promoting social justice (AEA 1.8). I once assumed evaluation was neutral; now I see design choices can hide or expose inequity.
  • Meta-evaluating my own work (AEA 1.5, 2.12, 4.8). I performed three meta-evaluations of the draft report, each time logging strengths, gaps, and fixes.
  • Grounding claims in descriptive statistics (AEA 2.11, 2.13, 2.14). I generated descriptive statistics and histograms for every survey item.
  • By foregrounding justice concerns, iteratively meta-evaluating, and grounding conclusions in transparent statistics, I integrated Stevahn et al.’s (2005) Professional Practice, Reflective Practice, and Systematic Inquiry pillars, demonstrating how ethical stance, method, and reflection converge in credible evaluation.

Next Steps

  • Deepen cultural humility (AEA 3.7; Stevahn et al., 2005) via a Culturally Responsive Evaluation workshop and shadowing a bilingual evaluator.
  • Master budgeting & value stories (Stevahn 4.6; IBSTPI 7) by linking cost lines to utility metrics in future proposals.
  • Advocate for evaluation’s public value (AEA 1.9) through community-facing blog posts, conference lightning talks, and brief explainer videos.
  • Institutionalize reflective practice (Stevahn Reflective Practice cluster) with after-action reviews for every deliverable.
  • Advance mixed-methods depth (AEA 2.4; IBSTPI 11) by attending advanced qualitative-analysis training and piloting visual-analytics dashboards.
  • Strengthen collaborative facilitation. On my next project, I plan to co-design a stakeholder engagement charter, clarifying roles, decision rights, and inclusion strategies that respect marginalized voices.

My journey from “Developing” to “Proficient” reflects growth in integrity, methodology, contextual acuity, project management, reflective discipline, and interpersonal skill—the six pillars Stevahn et al. (2005) identify, echoed by the AEA (2018) and IBSTPI (2006) frameworks. I now realize evaluation is as much about relationship-building and systems sense-making as it is about p-values or rubrics. Pursuing “Expert” status will require mentoring, scholarship, and intentional practice. Still, the experience gained on this project positions me to complete future studies that answer questions, advance equity, inform wiser decisions, and champion evaluative thinking—fulfilling the broader societal mandate at the heart of our profession.

References

American Evaluation Association. (2018). Evaluator competencies. https://www.eval.org

International Board of Standards for Training, Performance, and Instruction. (2006). Evaluator competencies. Author.

Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59. https://doi.org/10.1177/1098214004273180