Evaluator Competencies Reflection

I’ve spent decades in educational technology and IT consulting, but stepping back to assess myself as an evaluator reminded me that there’s always room to grow. I recently completed a professional survey focused on my skills as an evaluator and rated myself between a three and a four on a six-point scale for most skills. It felt like taking stock of a big open-source project: there’s a solid foundation but plenty of room for new features and refinements. I’m neither a total newbie nor an all-knowing guru, but a competent mid-level evaluator.

Many of the skills we develop in other areas of our professional lives—such as project management, interpersonal communication, and analytical thinking—translate directly into evaluator competencies (International Board of Standards for Training, Performance, and Instruction [IBSTPI], 2006; American Evaluation Association [AEA], 2018). For instance, leading cross-functional teams builds the interpersonal abilities described in IBSTPI’s “Professional Foundations” domain (IBSTPI, 2006), while experience with systematic, evidence-based decision-making parallels the AEA’s emphasis on credible, methodical inquiry (AEA, 2018). By applying these preexisting strengths within the ethical and practical frameworks both organizations recommend, professionals can meet the core requirements of sound evaluative practice (AEA, 2018; IBSTPI, 2006).

In their article “Establishing Essential Competencies for Program Evaluators,” Stevahn, King, Ghere, and Minnema (2005) make the case that defining evaluator competencies is a powerful way to unify the field of evaluation and improve reflective practice. Their work suggests that a standardized set of competencies allows both novice and experienced evaluators to identify areas of strength, uncover blind spots, and pursue a more structured path to professional growth (Stevahn et al., 2005). Reflecting on my own skills and experience, I realized that such a framework clarifies what I need to improve or learn next.

Looking at my strengths in evaluation, I see parallels with the development and management approach I’ve followed for much of my career. I’m decent at gathering data and using it to drive decisions. I can parse what’s working in a training program, find patterns, and share those insights with colleagues or stakeholders. Communicating findings reminds me of how I share project updates or bug fixes in an online repository: you want everyone to understand the core issues and potential solutions quickly (AEA, 2018).

At the same time, the self-assessment showed me where I could step it up. One area that stood out was ensuring I factored in cultural context and stakeholder diversity, especially from the start of the evaluation (IBSTPI, 2006). For much of my career, I’ve participated in global communities of developers that support various open-source projects. However, until I saw these evaluation competencies spelled out, I didn’t realize that I should bring the same level of intention to understanding and respecting different groups involved in a learning program. It’s not that I ignore culture—I just haven’t built a habit of mixing those considerations into the entire evaluation process, from planning to reporting (AEA, 2018).

I also noticed that my reflection activities lack structure. I do a mental recap after finishing a project—like scanning for errors after pushing code live—but I rarely write down what I’ve learned or systematically analyze what worked and what didn’t (AEA, 2018). The AEA competencies (Domain 1.5) suggest a more deliberate process, which makes sense. If I keep a brief journal or log about each step—similar to how version control systems track every change—I can see patterns and solutions more clearly and refine my method the next time.

I also need to identify underlying assumptions explicitly (Domain 2.5). Every evaluation strategy rests on assumptions: “Participants will respond to this survey,” or “The data is trustworthy.” Sitting down to list them at the start of a project is not something I’ve been in the habit of doing (IBSTPI, 2006). This reminded me of open-source developers who adopt design patterns shaped by the quirks of a specific framework or library without ever stating those dependencies. It’s the same principle: naming our assumptions helps prevent blind spots.

I also need to be more intentional about “advocating for the field of evaluation” (Domain 1.9). I’m used to touting open-source software tools and explaining why they matter to potential clients or partners, but I hadn’t realized I should be doing something similar for evaluation—making sure people know it’s a vital, independent discipline that deserves respect and resources (AEA, 2018). In large projects, we often assume that everyone understands the value of each component; in reality, it takes constant communication to keep people aligned on why specific teams or products are necessary.

So how do I address the areas I’ve identified as weaknesses? One big step is connecting with more experienced evaluators, much as open-source communities pair newcomers with established contributors (IBSTPI, 2006). I’d like to see how they incorporate cultural competence from day one and handle complexities in data collection or stakeholder engagement. Another clear step is committing to a reflection process for every evaluation I conduct—writing down the positives, the negatives, and the “still not sure” moments. Treating each reflection like a version control commit might put me in the right mindset (AEA, 2018).

I also plan to deepen my knowledge of statistical analysis and mixed-methods approaches. I’m comfortable with straightforward descriptive statistics and simple qualitative data coding. But if I push into more advanced methods—like in-depth statistical modeling or more elaborate qualitative analysis—I’ll be able to design and run evaluations that address larger, more complex questions. This approach aligns with my professional experience as a software developer, where learning new design patterns or frameworks helps me tackle more complicated projects (IBSTPI, 2006).

Lastly, I want to volunteer my evaluation skills for community or non-profit projects. Like open-source development, volunteering lets you tackle real-world problems and learn from diverse people. This approach aligns with the collaborative spirit I value. It provides more practice in applying competencies like cultural awareness and stakeholder involvement in settings that may differ from my usual corporate or educational experiences (AEA, 2018).

Overall, I’m looking at my mid-level self-rating in evaluation as an invitation rather than a verdict. In software development, there’s always a new feature, bug fix, or plugin around the corner. Similarly, as I build my evaluation skills, I’ll keep iterating. Each project becomes an opportunity to refine my understanding of culture, data, reflection, and advocacy (IBSTPI, 2006; AEA, 2018). I’m hopeful that with steady practice, collaboration, and reflection, I’ll be able to inch closer to expert status.

References

American Evaluation Association. (2018). AEA evaluator competencies. https://www.eval.org

International Board of Standards for Training, Performance, and Instruction. (2006). Evaluator competencies. https://ibstpi.org

Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59. https://doi.org/10.1177/1098214004273180