Guest Post by Mark Pullen

With 1:1 technology initiatives proliferating in schools around the country, it’s clear that an ever-increasing number of school boards, school administrators, teachers, and students see technology adding value to the classroom experience. Defining that added value, however, has been elusive; measuring it, virtually impossible. To this point, schools incorporating 1:1 technology have largely relied on standardized test scores to measure the success or failure of those programs.
Without a better measure by which to judge 1:1 initiatives, the mass media have, essentially by default, chosen to assess them by that same metric as well. In September 2011, The New York Times published an influential article criticizing a 1:1 program in Arizona; in February 2012, the paper turned around and published one lavishing praise on a similar program in North Carolina. What was the difference? Standardized test scores at the North Carolina school had risen significantly since its 1:1 initiative began, while the Arizona school's scores had remained flat.
I believe it is up to us as tech-integrating educators and administrators to change this narrative. We must come up with an alternative measure for determining whether the introduction of technology into the classroom has been successful. If we fail to do so, the test-score fixation will persist, and because education technology improves student learning and engagement in ways that no fill-in-the-bubble test can capture, the benefits of classroom technology will remain largely hidden from the public.
Here's one possible solution: survey students, teachers, parents, and administrators annually, collecting data designed to measure the average level of student engagement in school, parents' satisfaction with their children's education, teachers' satisfaction with the education their students are receiving, whether parents feel their children are being prepared for the future, and more. Ideally, of course, baseline data would be collected in all of these areas before a 1:1 initiative ever takes place.
Given the "sound bite" media culture in which we live, I propose including numeric rankings for many of these questions, while also creating a number of opportunities for lengthier responses. (For example, "On a scale of 1-10, how satisfied were you with the education your child received this past school year? Please explain your answer in the space below.") This will allow schools to supplement the black-and-white numeric data of a standardized test with sound-bite-friendly numeric data of their own: "Student engagement has increased 35% since we began our Bring Your Own Device initiative this fall." More thoughtful audiences will appreciate the lengthier responses, but let's be honest: those aren't the folks who were swayed by the test score data in the first place.
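To make that concrete, here's a minimal sketch in Python of how a school might compute one of those sound-bite numbers from baseline and follow-up survey ratings. Every rating, question, and resulting figure below is invented purely for illustration; the point is just the arithmetic behind a claim like the one above.

```python
# A minimal sketch, using hypothetical survey data: turning paired
# baseline/current ratings into the kind of percentage a school could quote.
# All numbers and question wording here are invented for illustration only.

def average(ratings):
    """Mean of a list of 1-10 survey ratings."""
    return sum(ratings) / len(ratings)

# Hypothetical 1-10 responses to "How engaged were you in school this year?"
# collected before the 1:1 initiative (baseline) and one year in (current).
baseline = [4, 5, 6, 5, 4, 6, 5]
current = [7, 6, 8, 7, 6, 8, 7]

change = (average(current) - average(baseline)) / average(baseline) * 100
print(f"Student engagement has increased {change:.0f}% "
      "since we began our 1:1 initiative.")
# -> Student engagement has increased 40% since we began our 1:1 initiative.
```

The same before-and-after comparison would work for any of the numeric survey questions, which is what makes collecting baseline data before the initiative launches so important.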
Leave a comment to continue the conversation: How do you feel we can best transform the ways in which the public measures the success of ed tech integration in schools?
About the Author:
Mark Pullen has been an elementary teacher for 13 years, currently teaching third grade in East Grand Rapids, MI. He’s an advocate for classroom technology integration, and writes extensively on that subject on behalf of Worth Ave Group, a leading provider of laptop, tablet computer, and iPad insurance for schools and universities.
Mark, your idea of different types of numeric scores based on student engagement is a necessary one. We need to be looking at what motivates our kids to learn, grow, and improve, instead of just raising their standardized test scores. I would also add that every school needs a portfolio of student work and videos of students working. We can't reduce everything of value to numeric scores, so even one portfolio full of work samples for every single student is better than a number or a percent. Lots of work to sift through? Yes. Too much to take on and evaluate quickly and easily? Yes. But it's much more valuable and a much better representation of a 1:1 program's success.