Week 4 Kellogg Institute: Dr Hunter Boylan, Program
Assessment and Evaluation
“We must learn to measure what we value rather than
value what we can measure.” Astin, 1992
“Never let bad news travel alone.” Boylan [include
a plan for improvement]
“Persistence wears down Resistance.”
“If you don’t establish your own criteria for
evaluation, someone else who doesn’t know jack s&!# will do it for you.”
Boylan
Brief Summary: Dr Boylan’s topics included “The Evaluation Mystique,” Audiences for Evaluation, Developing the Right Evaluation Questions, Industry Standards for Evaluation, Formative vs. Summative Evaluation, Qualitative vs. Quantitative Evaluation, Astin’s I-E-O model (Inputs + Environment = Outcomes), Levels of Evaluation, Tools for Evaluation, A Model for Evaluating Developmental Programs, Cost-Effectiveness vs. Cost-Benefit Evaluation, Case Studies for Program Evaluation (“Eyeballing”), Use of Focus Groups, Writing Evaluation Reports, Disseminating Findings, What Is Program Research? (hint: it is not “this is how we teach English at X College!”), A Model for Program Evaluation, What Is the Big Picture in Dev Ed?, Responding through Evaluation, the Dos and Don’ts of Evaluation, Power to the Program, and Hunter’s 11 Maxims for Empowering the Program.
We discussed developing benchmarks for student progress and ways to incentivize small steps toward completing a developmental math sequence, a financial literacy workshop, or a degree plan. Dr Boylan described the “heart” of Dev Ed: a model for assessment that creates a loop of formative evaluation, summative evaluation, interventions to courses, advising, and tutoring, and a return to formative evaluation. We learned that “evaluability” happens when stakeholder expectations are known. We looked at non-cognitive assessments of student suitability for online or computer-based courses and realized that many of the ones already out there have double-barreled questions or are otherwise poorly designed. That reinforced that we ought to put extra effort into getting a second opinion on our own questionnaire design before we put anything out there, to be sure we are getting the kinds of answers we seek.
We also distinguished between primary, secondary, tertiary
and serendipitous data in formative/summative data collection.
The final session really established the importance of getting developmental education right, and what is at stake for the United States over the next decade. The demand for skilled workers is rising, and we will need to prepare the underprepared to fill those needs. America’s largest untapped resource is its poor, and programs like developmental education make social mobility possible and allow people to participate as full, productive citizens.
Critique: It seemed the class enjoyed the pace of the presentations, with the regular quizzes and prizes. I appreciated the structured exercises that gave our table time to discuss the concepts or work through the case studies. I especially valued the model evaluation report and our time critiquing how well it followed the template and what other information could have been included to make the report even more compelling. I also appreciated the reminders of the importance of telling our story and the compelling stories of our successful students.
Description of the implications: One implication I will carry away from this week relates to my own feelings about my constantly evolving programs. I have been feeling bad about implementing a new program each fall, as if I’m constantly on some grail quest, or as if I am somehow unsatisfied and unable to accept things the way they are. On reflection, I see that I’m being responsive to the data and trends we learn each year from the previous year’s successes and failures. My program is on a continuous improvement plan. As we understand more about the students’ characteristics and performance, we can use that information to redesign and adapt the program to their needs. For example, if we learn that 75% of our students are “non-traditional” in one or more specific ways, we can respond with programs that serve those students’ needs for childcare, evening classes, or flexible online options.
Another implication is that I will use this information as I set up a plan for measuring evaluation outputs for my practicum. I have a template for describing the inputs (the demographics and traits of my students), the environmental conditions I will set up (the college readiness curriculum and video assignments), and the outputs of this pilot program. I now have a framework for a comprehensive research report that can be used with multiple audiences: “Evaluation is at the heart of Developmental Education.”
A third take-away: I will look into creating a “Certificate of Completion” for students who successfully complete the developmental sequences for reading/writing/math in the STAY program, to recognize their accomplishment. I will also think about a way to recognize them after they have completed their 100-level English and math courses.
Fourth: I will look into the cost-effectiveness data for our program and ask the VP to help me sort through the information.
Fifth: It seems that “eyeballing” initial raw data
gives good feedback for future qualitative inquiry.
Sixth: My thoughts are drawn to our “high-achieving, low-test-score” students. I want to spend more time defining their characteristics, developing programs to support them, refining those programs, and seeing more of them through to graduation.
Seventh: It is good to keep labs and tutoring
centers close to the classrooms. I’ll
keep this in mind as we schedule rooms.
Eighth: I will work from the model evaluation report to do a program evaluation this fall. I’ll ask Institutional Reporting, as well as other members of the Enrollment Management team, to help me.
And finally, Dr Boylan’s parting words about developing the untapped resources of our less affluent, first-generation, minority students got me thinking about how best to recruit the next generation of developmental educators: perhaps by identifying and encouraging successful dev ed students to pursue the path, perhaps through mentoring.